TRANSDISCIPLINARY LIFECYCLE ANALYSIS OF SYSTEMS

Advances in Transdisciplinary Engineering

Advances in Transdisciplinary Engineering (ATDE) is a peer-reviewed book series covering developments in the key application areas of product quality, production efficiency and overall customer satisfaction. ATDE focuses on theoretical, experimental and case-history-based research and its application in engineering practice. The series includes proceedings and edited volumes of interest to researchers in academia as well as professional engineers working in industry.

Editor-in-Chief
Josip Stjepandić, PROSTEP AG, Darmstadt, Germany

Co-Editor-in-Chief
Richard Curran, TU Delft, The Netherlands

Advisory Board
Jianzhong Cha, Beijing Jiaotong University, China
Shuo-Yan Chou, Taiwan Tech, Taiwan, China
Cees Bil, RMIT University, Australia
Milton Borsato, Federal University of Technology, Paraná-Curitiba, Brazil
Parisa Ghodous, University of Lyon, France
Kazuo Hiekata, University of Tokyo, Japan
John Mo, RMIT University, Australia
Essam Shehab, Cranfield University, UK
Mike Sobolewski, TTU, Texas, USA
Amy Trappey, NTUT, Taiwan, China
Wim J.C. Verhagen, TU Delft, The Netherlands
Wensheng Xu, Beijing Jiaotong University, China

Volume 2

Recently published in this series
Vol. 1. J. Cha, S.-Y. Chou, J. Stjepandić, R. Curran and W. Xu (Eds.), Moving Integrated Product Development to Service Clouds in the Global Economy – Proceedings of the 21st ISPE Inc. International Conference on Concurrent Engineering, September 8–11, 2014

ISSN 2352-751X (print)
ISSN 2352-7528 (online)

Transdisciplinary Lifecycle Analysis of Systems
Proceedings of the 22nd ISPE Inc. International Conference on Concurrent Engineering, July 20–23, 2015

Edited by
Richard Curran, TU Delft, The Netherlands
Nel Wognum, ISPE, Inc.
Milton Borsato, Federal University of Technology
Josip Stjepandić, PROSTEP AG, Germany
and
Wim J.C. Verhagen, TU Delft, The Netherlands

Amsterdam • Berlin • Tokyo • Washington, DC

© 2015 The authors and IOS Press. This book is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.

ISBN 978-1-61499-543-2 (print)
ISBN 978-1-61499-544-9 (online)
Library of Congress Control Number: 2015945249

Publisher
IOS Press BV, Nieuwe Hemweg 6B, 1013 BG Amsterdam, Netherlands
fax: +31 20 687 0019, e-mail: order@iospress.nl

Distributor in the USA and Canada
IOS Press, Inc., 4502 Rachael Manor Drive, Fairfax, VA 22032, USA
fax: +1 703 323 3668, e-mail: iosbooks@iospress.com

LEGAL NOTICE: The publisher is not responsible for the use which might be made of the following information.

PRINTED IN THE NETHERLANDS

Preface

This book of proceedings contains the papers peer-reviewed and accepted for the 22nd ISPE Inc. International Conference on Concurrent Engineering, held at TU Delft, The Netherlands, July 20–23, 2015. This is the second volume of the newly introduced series "Advances in Transdisciplinary Engineering", which publishes the proceedings of the CE conference series. The CE conference series is organized annually by the International Society of Productivity Enhancement (ISPE, Inc.) and constitutes an important forum for international scientific exchange on concurrent engineering and collaborative enterprises.
These international conferences attract a significant number of researchers, industry experts and students, as well as government representatives, who are interested in recent advances in concurrent engineering research and its applications.

Developed in the 1980s, the CE approach is based on the concept that the different phases of a product lifecycle should be conducted concurrently and initiated as early as possible within the Product Creation Process (PCP), including their implications within the extended enterprise and its networks. The main goal of CE is to increase the efficiency of the PCP, to reduce errors in the later phases, and to incorporate considerations for the full lifecycle and through-life operations. In the past decades, CE has become the foundational methodology in many industries (e.g., automotive, aerospace, machinery, shipbuilding, consumer goods, process industry, environmental engineering) and has also been adopted in the development of new services and service support.

The initial basic CE concepts have matured and have become the foundations of many new ideas, methodologies, initiatives, approaches and tools. Generally, current CE work concentrates on enterprise collaboration and its many different elements, from integrating people and processes to very specific complete multi-, inter- and transdisciplinary solutions. Current research on CE is again driven by many factors, such as increased customer demands, globalization, (international) collaboration and environmental strategies. The successful application of CE in the past also opens perspectives for future applications, such as overcoming natural catastrophes and sustainable mobility concepts with electric vehicles.

The CE2015 Organizing Committee identified 31 thematic areas within CE and launched a Call for Papers accordingly, with submissions received from all continents of the world. The conference is entitled "Transdisciplinary Lifecycle Analysis of Systems", a title that reflects the variety of processes and methods which influence modern product creation. Finally, the submissions as well as the invited talks were collated into 18 streams led by outstanding researchers and practitioners. The proceedings contain 63 peer-reviewed papers by authors from 21 countries and 2 invited keynote papers. These papers range from the theoretical and conceptual to the strongly pragmatic, addressing industrial best practice. The involvement of more than 13 companies from many industries in the presented papers gives additional importance to this conference.

This book on 'Transdisciplinary Lifecycle Analysis of Systems' is directed at three constituencies: researchers, design practitioners, and educators. Researchers will benefit from the latest research results and knowledge of product creation processes and related methodologies. Engineering professionals and practitioners will learn from the current state of the art in concurrent engineering practice, new approaches, methods, tools and their applications. Educators in the CE community will find the latest advances and methodologies for dissemination in engineering curricula, while the community also encourages young educators to bring new ideas into the field.
Part 1 of the proceedings comprises the keynotes, while Part 2, entitled Systems Engineering, contains an extensive overview of new research and development in Systems Engineering in research and practice. Part 3 outlines the importance of Customization and Variability Management within CE; it contains several methods for developing and producing customer-specific products and for managing the variability of the components and modules from which a product is composed. In Part 4, Production-Oriented Design and Maintenance and Repair, a variety of (cloud) approaches in manufacturing and service are highlighted. Part 5 addresses Design Methods and Knowledge-Based Engineering, with many approaches to support the design process and to build, save and use knowledge in the complex environment of CE. Part 6 focuses on Multi-Disciplinary Product Management, with an emphasis on information management. Part 7 contains contributions on Sustainable Product Development, a subject that is gaining growing attention. Part 8 illustrates a number of key topics in Service-Oriented Design, a topic that is also very important in the context of CE. Part 9 deals with Product Lifecycle Management, emphasizing the importance of managing product data, information and knowledge throughout the whole life of a product. Finally, Part 10 contains contributions on Trends in CE, with ideas for further research on methods and tools involving practice.

We acknowledge the high-quality contributions of all authors to this book and the work of the members of the International Program Committee, who assisted with the blind triple peer review of the original papers submitted and presented at the conference. Readers are sincerely invited to consider all of the contributions made by this year's participants through the presentation of CE2015 papers collated into this book of proceedings. We hope that they will be further inspired in their work for disseminating their ideas for new approaches for sustainable product development in a multidisciplinary environment within the ISPE, Inc. community.

Richard Curran, General Chair, TU Delft, The Netherlands
Nel Wognum, Co-General Chair, ISPE, Inc.
Milton Borsato, Program Chair, Federal University of Technology, Paraná-Curitiba, Brazil
Josip Stjepandić, Co-Program Chair, PROSTEP AG, Germany
Wim J.C. Verhagen, Secretary General, TU Delft, The Netherlands

Conference Organization

Program Committee
General Chairs: Richard Curran, TU Delft; Nel Wognum, ISPE Inc.
Program Chairs: Milton Borsato, Federal University of Technology, Paraná-Curitiba, Brazil; Josip Stjepandić, PROSTEP AG
Local Chair: Wim J.C. Verhagen, TU Delft

ISPE Steering Committee
Richard Curran, TU Delft, The Netherlands (ISPE Inc. President)
Mike Sobolewski, TTU, Texas, USA (ISPE Inc. Vice President)
Georg Rock, Trier University of Applied Sciences, Germany
Essam Shehab, Cranfield University, UK
Jianzhong Cha, Beijing Jiaotong University, China
Shuo-Yan Chou, Taiwan Tech, Taiwan
Josip Stjepandić, PROSTEP AG, Germany
Amy Trappey, NTUT, Taiwan, China
Shuichi Fukuda, Stanford University, USA
Cees Bil, RMIT University, Australia
Chun-Hsien Chen, Nanyang Technological University, Singapore
Eric Simmon, NIST, USA
Fredrik Elgh, Jönköping University, Sweden
John Mo, RMIT University, Australia
Jerzy Pokojski, SIMR, Poland
Kazuo Hiekata, University of Tokyo, Japan
Milton Borsato, Federal University of Technology, Paraná-Curitiba, Brazil
Parisa Ghodous, University of Lyon, France
Ricardo Gonçalves, UNINOVA, Portugal
Geilson Loureiro, INPE, Brazil
Ahmed Al-Ashaab, Cranfield University, UK
Nel Wognum, Wageningen University, Netherlands
Rajkumar Roy, Cranfield University, UK

International Program Committee
Carlos Agostinho, UNINOVA, Portugal
Ahmed Al-Ashaab, Cranfield University, UK
Ronald Beckett, University of Western Sydney, Australia
Alain Biahmou, EDAG GmbH & Co. KGaA, Germany
Cees Bil, RMIT, Australia
Volker Böß, University of Hannover, Germany
Milton Borsato, Federal University of Technology – Paraná, Brazil
Osíris Canciglieri Junior, Pontifical Catholic University of Paraná, Brazil
Jianzhong Cha, Beijing Jiaotong University, China
Chun-Hsien Chen, NTU, Singapore
Ming-Chuan Chiu, National Tsing Hua University, Taiwan
Shuo-Yan Chou, Taiwan Tech, Taiwan
Adina Georgeta Cretan, "Nicolae Titulescu" University of Bucharest, Romania
Richard Curran, TU Delft, The Netherlands
Evelina Dineva, German Aerospace Center (DLR), Germany
Jože Duhovnik, University of Ljubljana, Slovenia
Fredrik Elgh, Jönköping University, Sweden
Daniela Faas, Harvard University, USA
Catarina Ferreira Da Silva, LIRIS, University of Lyon 1, France
Nicolas Figay, Airbus SAS, France
Alain-Jérôme Fougères, Université de Technologie de Belfort-Montbéliard, France
Shuichi Fukuda, Stanford University, USA
Giuliani Garbi, College Anhanguera of São José, Brazil
Parisa Ghodous, University of Lyon, France
Gloria-Lucia Giraldo, Universidad Nacional de Colombia, Colombia
Ricardo Goncalves, UNINOVA, Portugal
Kazuo Hiekata, University of Tokyo, Japan
John C. Hsu, California State University, USA
Masato Inoue, Meiji University, Japan
Teruaki Ito, University of Tokushima, Japan
Roger Jiao, Georgia Tech, USA
Joel Johansson, Jönköping University, Sweden
Leonid Kamalow, Ulyanovsk State Technical University, Russia
Milan Kljajin, University of Osijek, Croatia
Nan Li, Beijing Technology and Business University, China
Geilson Loureiro, INPE, Brazil
Zoran Lulić, University of Zagreb, Croatia
Nils Macke, ZF AG, Germany
Ivan Mahalec, University of Zagreb, Croatia
Nozomu Mishima, AIST, Japan
Maria Lucia Miyake Okumura, Pontifical Catholic University of Parana, Brazil
John Mo, RMIT University, Australia
Bryan R. Moser, Massachusetts Institute of Technology, USA
Egon Ostrosi, Université de Technologie de Belfort-Montbéliard, France
João Adalberto Pereira, COPEL Companhia Paranaense de Energia, Brazil
Margherita Peruzzini, Università Politecnica delle Marche, Italy
Jerzy Pokojski, SIMR, Poland
Jose Rios, Madrid Polytechnic University, Spain
Georg Rock, Trier University of Applied Sciences, Germany
Henrique Rozenfeld, University of São Paulo, Brazil
Rajkumar Roy, Cranfield University, UK
Joao Sarraipa, UNINOVA, Portugal
Essam Shehab, Cranfield University, UK
Gang Shen, Huazhong University of Science and Technology, China
Jianjun Shi, Georgia Tech, USA
Pekka Siltanen, VTT, Finland
Eric Simmon, NIST, USA
Wojciech Skarka, Silesian University of Technology, Poland
Michael Sobolewski, TTU, USA
Josip Stjepandić, PROSTEP AG, Germany
Jingyu Sun, The University of Tokyo, Japan
Goran Šagi, University of Zagreb, Croatia
Blaženko Šegmanović, ThyssenKrupp AG, Germany
Jože Tavčar, University of Ljubljana, Slovenia
Amy Trappey, National Tsing Hua University, Taiwan
Charles W. Trappey, National Chiao Tung University, Taiwan
German Urrego-Giraldo, Universidad de Antioquia, Colombia
Wim Verhagen, TU Delft, The Netherlands
Nel Wognum, Wageningen University, The Netherlands
Wensheng Xu, Beijing Jiaotong University, China
Xun Xu, University of Auckland, New Zealand
Xiaojia Zhao, TU Delft, The Netherlands
Yongmin Zhong, RMIT, Australia
Zhaowei Zhong, NTU, Singapore
Xiaomin Zhu, Beijing Jiaotong University, China

Organizers
International Society for Productivity Enhancement Inc.
TU Delft

Past Concurrent Engineering conferences
2014: Beijing, China
2013: Melbourne, Australia
2012: Trier, Germany
2011: Boston, USA
2010: Cracow, Poland
2009: Taipei, Taiwan
2008: Belfast, UK
2007: São José dos Campos, Brazil
2006: Antibes-Juan les Pins, France
2005: Dallas, USA
2004: Beijing, China
2003: Madeira, Portugal
2002: Cranfield, UK
2001: Anaheim, USA
2000: Lyon, France
1999: Bath, UK
1998: Tokyo, Japan
1997: Rochester, USA
1996: Toronto, Canada
1995: McLean, USA
1994: Pittsburgh, USA

Sponsors
International Society for Productivity Enhancement Inc.
TU Delft
IOS Press
PROSTEP AG

Contents

Preface
  Richard Curran, Nel Wognum, Milton Borsato, Josip Stjepandić and Wim J.C. Verhagen
Conference Organization

Part 1. Keynotes
Developments and Challenges in Design for Sustainability of Electronics
  A.R. Balkenende and C.A. Bakker
What Is the Next Big Innovation Management Theme?
  Rob de Graaf and Iason Onassis

Part 2. Systems Engineering
Heuristic Systems Engineering of a Web Based Service System
  John P.T. Mo and Sholto Maud
Stakeholder Management as an Approach to Integrated Management System (IMSSTK)
  Andreia F.S. Genaro and Geilson Loureiro
Quality Problems in Complex Systems Even Considering the Application of Quality Initiatives During Product Development
  Cosimo R. Bertelli and Geilson Loureiro
Enhancing Robustness of Design Process in Individual Type of Production
  Mitja Varl, Jože Tavčar and Jože Duhovnik
Using Ontology-Based Patent Informatics to Describe the Intellectual Property Portfolio of an E-Commerce Order Fulfillment Process
  Abby P.T. Hsu, Charles V. Trappey and Amy J.C. Trappey
Kinematic Model of Project Scheduling with Resource Constrained Under Uncertainties
  Giuliani Paulineli Garbi, Geilson Loureiro, Luís Gonzaga Trabasso and Milton de Freitas Chagas
Cloud-Based Project Supervision to Support Virtual Team for Academic Collaboration
  Teruaki Ito, Mohd Shahir Kasim, Raja Izamshah, Norazlin Nasir and Yong Siang Teoh
The Improved Global Supply Chain Material Management Process Framework for One-Stop Logistic Services
  Abby P.T. Hsu, Ai-Che Chang, Amy J.C. Trappey, Charles V. Trappey and W.T. Lee
Using the "Model-Based Systems Engineering" Technique for Multidisciplinary System Development
  Carolin Eckl, Markus Brandstätter and Josip Stjepandić
Aircraft Bi-Level Life Cycle Cost Estimation
  Xiaojia Zhao, Wim J.C. Verhagen and Richard Curran
Design for Assistive Technology: A Preliminary Study
  Maria Lucia Miyake Okumura and Osiris Canciglieri Junior
Managing Stakeholder Voices for the Development of a Novel Device for the Elbow Forearm Rehabilitation
  Aline Marian Callegaro, Raffaela Leane Zenni Tanure, Amanda Sória Buss, Carla Schwengber ten Caten and Márcia Elisa Soares Echeveste
Mechanisms of Dependence in Engineering Projects as Sociotechnical Systems
  Bryan Moser, William Grossmann and Phillip Starke
A Novel Hybrid Multiple Attribute Decision Making Procedure for Aspired Agile Application
  Shuo-Yan Chou, Gwo-Hshiung Tzeng and Chien-Chou Yu

Part 3. Customization & Variability Management
Implementation and Management of Design Systems for Highly Customized Products – State of Practice and Future Research
  Tim Hjertberg, Roland Stolt, Morteza Poorkiany, Joel Johansson and Fredrik Elgh
Glencoe – A Visualization Prototyping Framework
  Anna Schmitt, Sebastian Wiersch and Stefan Weis
Consumer-Oriented Emotional Design Using a Correlation Handling Strategy
  Danni Chang, Yuexiang Huang, Chun-Hsien Chen and Li Pheng Khoo
Model-Based Variant Management with v.control
  Christopher Junk, Robert Rößger, Georg Rock, Karsten Theis, Christoph Weidenbach and Patrick Wischnewski
View Specific Visualization of Proofs for the Analysis of Variant Development Structures
  Lisa Grumbach
Measuring and Evaluating Source Code Logs Using Static Code Analyzer
  Gang Shen, Fan Luo and Gang Hong
Mass Properties Management in Aircraft Development Process: Problems and Opportunities
  Vera De Paula and Henrique Rozenfeld

Part 4. Production-Oriented Design & Maintenance and Repair
Product Development Model Oriented for R&D Projects of the Brazilian Electricity Sector – MOR&D: A Case Study
  João Adalberto Pereira, Osíris Canciglieri Júnior and André Eugênio Lazzaretti
Sustainment Management in the Royal Australian Navy
  Robert Henry and Cees Bil
Application of Lean Methods into Aircraft Maintenance Processes
  Borut Pogačnik, Jože Tavčar and Jože Duhovnik
A Supporting Model for the Dynamic Formation of Supplier Networks
  Kellyn Crhis Teixeira and Milton Borsato
Data Flow to Manufacturing Simultaneous with Design Phase
  Dilşad Ilter, Gülden Şenaltun and Can Cangelir
An Architecture for Remote Guidance Service
  Pekka Siltanen, Seppo Valli, Markus Ylikerälä and Petri Honkamaa
Impact of Non-Functional Requirements on the Products Lines Lifecycle
  German Urrego-Giraldo, Gloria Giraldo and Myriam Delgado
Manufacturing Resource Servitization Based on SOOA
  Wensheng Xu, Lingjun Kong and Jianzhong Cha
An Approach to Assess Uncertainties in Cloud Manufacturing
  Yaser Yadekar, Essam Shehab and Jorn Mehnen
Part 5. Design Methods & Knowledge-Based Engineering
Howtomation© Suite: A Novel Tool for Flexible Design Automation
  Joel Johansson
Generic Functional Decomposition of an Integrated Jet Engine Mechanical Sub System Using a Configurable Component Approach
  Visakha Raja and Ola Isaksson
A Study on Marine Logistics System for Emergency Disaster Control
  Heng Wang and Kenji Tanaka
A Guideline for Adapted System Dynamics Modeling of Rework Cycles in Engineering Design Processes
  Elisabeth Schmidt, Daniel Kasperek and Maik Maurer
Design Optimization of Electric Propulsion of Flying Exploratory Autonomous Robot
  Mateusz Wąsik and Wojciech Skarka
Towards Cloud Big Data Services for Intelligent Transport Systems
  Gavin Kemp, Genoveva Vargas-Solar, Catarina Ferreira Da Silva, Parisa Ghodous, Christine Collet and Pedropablo Lopez Amaya
Cooling and Capability Analysis Methodology: Towards Development of a Cost Model for Turbine Blades Film Cooling Holes
  Javier Continente, Essam Shehab, Konstantinos Salonitis, Sree Tammineni and Phani Chinchapatnam
A Methodology for Mechatronic Products Design Applied to the Development of an Instrument for Soil Compaction Measurement
  Mauricio Merino Peres, Iana G. Castelo Branco and Andréa Cristina dos Santos
Process Knowledge Model for Facilitating Industrial Components' Manufacturing
  Jingyu Sun, Kazuo Hiekata, Hiroyuki Yamato, Pierre Maret and Fabrice Muhlenbach

Part 6. Multidisciplinary Product Management
Evaluation of Support System Architecture for Air Warfare Destroyers
  John P.T. Mo and Douglas Thompson
Towards a Proposed Process to Manage Assumptions During the In-Service Phase of the Product Lifecycle
  John Iley and Cees Bil
Four Practical Lessons Learned from Multidisciplinary Projects
  Evelina Dineva, Thomas Zill, Uwe Knodt and Björn Nagel

Part 7. Sustainable Product Development
A Feasibility Study of Remote Inverse Manufacturing
  Nozomu Mishima, Ooki Jun, Yuta Kadowaki, Kenta Torihara, Kiyoshi Hirose and Mitsutaka Matsumoto
Proposal for Intelligent Model Product Definition to Meeting the RoHS Directive
  José Altair Ribeiro dos Santos and Milton Borsato
Towards a Green and Sustainable Software
  Hayri Acar, Gülfem I. Alptekin, Jean-Patrick Gelas and Parisa Ghodous
Sustainable Product Development: Ecodesign Tools Applied to Designers
  Pâmela T. Fernandes and Osíris Canciglieri Junior
Sustainable Consumption and Ecodesign: A Review
  Vitor De Souza and Milton Borsato
Reducing the Energy Consumption of Electric Vehicles
  Wojciech Skarka

Part 8. Service-Oriented Design
Technical-Business Design Methodology for PSS
  Margherita Peruzzini, Eugenia Marilungo and Michele Germani
A Service-Oriented Architecture for Ambient-Assisted Living
  Margherita Peruzzini and Michele Germani
Studies of Air Transport Management Issues for the Airport and Region
  Z.W. Zhong, Y.Y. Tee and Y.J. Lin
Service-Oriented Life Cycles for Developing Transdisciplinary Engineering Systems
  Michael Sobolewski and Raymond Kolonay
Part 9. Product Lifecycle Management
A Gingival Mucosa Geometric Modelling to Support Dental Prosthesis Design
  Rodrigo Meira de Andrade, Anderson Luis Szejka and Osiris Canciglieri Junior
Engineering Collaboration in Mechatronic Product Development
  Sergej Bondar, Henry Bouwhuis and Josip Stjepandić
Leveraging 3D CAD Data in Product Life Cycle: Exchange – Visualization – Collaboration
  Alain Pfouga and Josip Stjepandić
The Research of Music and Emotion Interaction with a Case Study of Intelligent Music Selection System
  Li-Wei Ko, Kai-Hsiang Chuang and Ming-Chuan Chiu
The Design Process Structural & Logical Representation in the Concurrent Engineering Infocommunication Environment
  Denis Tsygankov, Alexander Pokhilko, Andrei Sidorichev, Sergey Ryabov and Oleg Kozintsev
Search Engine Optimization Process: A Concurrent Intelligent Computing Approach
  Sylvain Sagot, Alain-Jérôme Fougères and Egon Ostrosi
Advances in Parameterized CAD Feature Translation
  Sergej Bondar, Abdul Shammaa, Josip Stjepandić and Ken Tashiro

Part 10. Trends in CE
CE Challenges – Work to Do
  Josip Stjepandić, Wim Verhagen and Nel Wognum
Customer Engagement in Product Development: Bring UX Upstream
  Shuichi Fukuda
Improving the Ability of Future Engineers by Using Advanced Interactive 3D Techniques in Education
  Wojciech Skarka, Marek Wyleżoł, Marcin Januszka, Sebastian Rzydzik and Miroslaw Targosz
Product Avatar as Digital Counterpart of a Physical Individual Product: Literature Review and Implications in an Aircraft
  José Ríos, Juan Carlos Hernández, Manuel Oliva and Fernando Mas

Subject Index
Author Index

Part 1. Keynotes

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-3

Developments and Challenges in Design for Sustainability of Electronics

A.R. BALKENENDE (Philips Research, High Tech Campus 4, 5656 AE Eindhoven, Netherlands; corresponding author, e-mail: ruud.balkenende@philips.com) and C.A. BAKKER (Delft University of Technology, Faculty of Industrial Design Engineering, Delft, Netherlands)

Abstract. Until recently, the sustainability of electronic products mainly focused on improving energy efficiency. Resource efficiency has now become of growing importance. Because electronic products use relatively small amounts of many valuable and scarce materials, often intimately mixed, their design deserves specific attention. From a materials perspective, measures are needed to improve recyclability. In addition to the use of recyclable materials, the ability to break connections between materials that are not compatible in recycling processes is crucial. Environmentally and economically more interesting than the recovery of materials is the reuse of components or products. To enable multiple product lifecycles, product design should also explicitly address maintenance, upgradeability, modularity and disassembly. Design guidelines are presented, and challenges with respect to impact assessment and business model development are discussed.

Keywords. Sustainability, product design, design tools, electronics, resource efficiency, recycling, re-use, circular economy, electronic waste, end-of-life treatment

Introduction

Sustainability describes our potential to maintain the well-being of humans and our environment over the long term.
As we create, design and manufacture globally increasing volumes of electronic products, the sustainability of scarce and critical resources for new electronic products, as well as the treatment of electronic waste, becomes critical. In the past decades, the notion of sustainability for electronics predominantly focused on energy efficiency. This is reflected in the Ecodesign Directive [1]. Examples are provided by the large reduction in the standby power consumption of electronic devices and by the replacement of incandescent lamps by compact fluorescent lamps and LED lamps.

In the past decade we have seen increased concerns about materials, focusing on both physical scarcity and economic criticality [2]. Demand and competition for finite and critical resources will continue to increase, and pressure on resources is causing greater environmental degradation and fragility. In the field of Design for Sustainability, this primarily leads to a focus on the improved recyclability of products. The high complexity of electronic products, with their intimate mixing of many materials, limits the amount of valuable materials that are actually recovered. Further, for electronic products, material value is usually only a small fraction of the actual product value. The economic perspective of recycling electronics is thus limited. Higher value can be retrieved if modules or the product as a whole are used again. Therefore, the idea of transitioning to a circular economy, in which product life is extended to multiple lifecycles, is currently being explored. This paper gives an overview of the product requirements, business models and environmental assessment methods needed to enable this transition to a circular economy. Examples from lamps recently developed by Philips Lighting will be used.

1. Circular economy

Since the industrial revolution, our economies have developed a 'take-make-consume-dispose' pattern of growth. Valuable materials are easily lost upon disposal of a product at the end of its lifecycle. The transition to a more circular economy requires changes throughout value chains: from product design to new business and market models, and from new ways of turning waste into a resource to new models of consumer behavior [3]. The transition to a circular economy will result in a more efficient use of resources. This is particularly relevant in the electronics area, where products contain a large variety of valuable and critical materials. Enabling effective recycling, i.e. recovery of the materials, is therefore a prerequisite. However, as such this is insufficient: 80%-90% of the value and energy is lost during recycling, where highly functional electronics are simply turned into a kind of ore from which only part of the materials are eventually recovered. In addition to optimizing for recycling (materials recovery), electronic products should therefore also be designed for reuse, repair and refurbishment (implying recovery/harvesting at the level of the product) as well as parts harvesting (recovery at the component level). This is represented in the circular structure of Figure 1 by the three loops Service, Remake and Recovery.

Figure 1. Product life cycles in the technological product sphere.

In the following we will focus on recent developments in increasing the resource efficiency of electronic products.
The focus is on product design. This leads to specific challenges in a number of areas. Primarily, it requires design methodologies and tools. Such tools must be based on proper insights into dealing with products at the end of a lifecycle. This in turn requires insight into the relation between product and business model: services lead to other requirements than sales. To enable assessment of the environmental impact, reliable and transparent assessment methods are required. Finally, in the case of electronics, specific challenges arise from miniaturization and the increasing integration of functionalities, as well as from the embedding of electronics in other materials. In the next sections these aspects are addressed in more detail.

2. Product requirements for multiple lifecycles

Insight into the way in which a product can be designed for multiple lifecycles provides an essential starting point. Knowledge of the way in which a product is dealt with during and at the end of a lifecycle must thus be acquired and related to the design properties. If the product will be re-used, suitability for appropriate maintenance and cleaning is a prerequisite. Ultimately, the product might be disposed of when it is at the end of its functional life as an entity; in that case optimal recycling, i.e. maximum recovery of the constituent materials, is the target. In between are options like refurbishment, remanufacturing and parts harvesting. A recent analysis by Philips and TU Delft, based on product use and service requirements, distinguishes a number of aspects that need specific attention when designing for multiple lifecycles [4]:

- Maintenance enables the prolonged use of products and consists of all aspects related to delivering performance for as long as possible in the use phase, when the product is with the customer. Lifetime prognostics, which allows the remaining future performance of a product to be predicted, is a useful addition.
- Upgradeability and adaptability describe products that will last long (functionality), are used long (desirability) and take into account a change in expectations from a product. Time becomes an explicit factor in design.
- Disassembly is part of every circle. It is the first step in most actions performed on the product in order to either extend its lifetime or give a new life to its components or materials. In general, disassembly needs to be non-destructive if the product or component will be reused, implying that reassembly also has to be taken into account.
- Modularity implies the ability to reuse components or to refurbish or remanufacture a product, and consists of all actions performed when a product is returned from the customer.
- Recycling enables the reuse of materials and consists of the recovery of pure materials at end-of-life, both to secure real resource efficiency and, as the last option, to recover any remaining value that a product or component has. This means that, in contrast to the previous aspects, recyclability is a mandatory requirement for every product. In a circular economy, however, recycling must be postponed as long as possible.

Figure 2 depicts these focal areas and their main intention.
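One way to make the framework above operational in early design reviews is to record it as a simple checklist per product concept. The sketch below is a minimal illustration only, not part of the Philips/TU Delft analysis [4]; the 0-4 scoring scale, the class and function names, and the example scores are all hypothetical.

```python
# Minimal sketch of a circular-design checklist based on the five focal
# areas above. The 0-4 scoring scale and example scores are hypothetical.
from dataclasses import dataclass

FOCAL_AREAS = {
    "maintenance":    "deliver performance as long as possible in the use phase",
    "upgradeability": "absorb changing expectations; make time an explicit design factor",
    "disassembly":    "non-destructive access to components and materials",
    "modularity":     "enable reuse, refurbishment and remanufacturing",
    "recycling":      "recover pure materials at end-of-life (mandatory baseline)",
}

@dataclass
class CircularDesignAssessment:
    product: str
    scores: dict  # focal area -> 0 (not addressed) .. 4 (fully addressed)

    def weakest_areas(self, threshold=2):
        """Return focal areas scoring below threshold, i.e. design priorities."""
        return sorted(a for a in FOCAL_AREAS if self.scores.get(a, 0) < threshold)

# Example usage with made-up scores for an LED lamp concept:
lamp = CircularDesignAssessment(
    product="LED lamp concept",
    scores={"maintenance": 1, "upgradeability": 0, "disassembly": 3,
            "modularity": 2, "recycling": 4},
)
print(lamp.weakest_areas())  # -> ['maintenance', 'upgradeability']
```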
Challenges that need to be addressed in design are the complex relationship between product design and business models, the technological versus economical lifetime of a product (family) and its components, as well as insight into the behavior of users during product life and at the end of a product life cycle.

Figure 2. Main topics in circular product design.

3. Design guidelines

Product design usually aims at producing a product with a particular performance at minimum cost, the latter implying particular material choices and – in the case of most electronic products – suitability for mass production. Increasingly, end-of-life treatment is taken into account, but usually limited to a compliance level. Ideally, the topics outlined in the previous section are taken into account. Recyclability is essential for all electronics products, irrespective of their use and associated business model. To obtain insight into the effect of pre-processing (i.e. shredding) conditions and separation procedures, a large batch of 'standard' LED lamps has been processed. By studying the resulting fragments, the recycling yield could be directly linked to various design aspects. Some results of the test recycling runs are shown in Figure 3. Similar experiments have been done on LCD displays, also involving (partly) manual disassembly [5].

The basic requirement for improved recyclability is to establish well-defined material streams. It turns out that, even if recyclable materials are used, the way in which the different materials are connected is crucial. An example is the screwed connection between a LED PCB and the heat sink. Because of this connection, the aluminum heat sink cannot be separated effectively from the electronics, thus limiting the recyclability of both the aluminum and the electronics. Based on such recycling insights, guidelines were derived that strongly focus on the ability to break connections under actual disintegration or dismantling conditions. Electronic components are considered separately, as effective recycling routes exist for recovering many elements from complex electronic parts. The resulting guidelines for recyclability are summarized in Figure 4.

Figure 3. LED lamps and material fractions from large scale shredding of LED lamps [5].

Figure 4. Design guidelines for recycling [5].

To enable the re-use of components and products instead of recovery of materials, this approach should be extended to enable resource efficiency at all stages of the product life cycle, also taking into account maintenance, upgradeability, modularity and disassembly. This leads to additional aspects that deserve specific attention in design, as shown in Figure 5 [4].

Figure 5. Design guidelines for design for multiple lifecycles.

4. Environmental impact assessment

The ability to assess specific properties is crucial for product specification as well as impact evaluation. Life cycle analysis provides a useful starting point in determining the environmental impact of an existing product. The level of detail required and the uncertainty in many database values make this method less useful for the initial design stages.
The development of transparent (semi-)quantitative methods to enable feedback on choices early in the design process is therefore a major challenge.

The commonly used methodology is Life Cycle Assessment (LCA). Dealing with the end-of-life stages of products (i.e. reuse, remanufacture, recycling) is one of the significant challenges facing LCA, because the assessment needs to take into account the lifespan of the products and the technological changes over time. There is currently no generally accepted approach in LCA for dealing with reuse, remanufacture and recycling; the international LCA standards (ISO 14040/44) only give general guidelines. However, the details of the different treatments at the end of a lifecycle may have a decisive influence on the results. Proper assessment needs accurate insight into the way in which a product is dealt with at the end of a lifecycle. This implies that knowledge of the end-of-life treatments of a product is not only essential during the design stage, but is also a critical starting point in assessing the environmental impact.

Most assessment methods for recyclability determine the fraction of a product that is recycled, usually based on the weight of the materials involved. Such an approach does not take into account the fixation of materials that are not compatible in the final recovery processes. It also neglects the actual environmental impact of the different materials, implying that recovery of bulk materials is rewarded above recovery of critical materials. This is illustrated in Figure 6 for the materials present in an LCD television.

Figure 6. Product composition and environmentally weighted composition of an LCD television [6].

The concept of avoided losses (in terms of materials, environmental impact and value) meets the objections mentioned above and deserves further development. It also rewards aspects like dematerialization, services, identification at end-of-life and lifetime prognostics, whereas these often appear unfavorable in current methods. A complication is that methodologies based on avoided losses require detailed knowledge of end-of-life treatments and their associated limited yields.

As an illustration we consider a 'standard' LED spot (MR16), which comprises a relatively large heat spreader made of die-cast aluminum to which both the PCB containing the driver electronics and the PCB with the LEDs are screwed. Upon shredding, the PCBs to a large extent remain attached to the heat spreader. By introducing fracture lines in the aluminum heat spreader along the screw holes, fracturing of the aluminum is controlled. This leads to release of the screws and detachment of the PCBs, which can then be separated into a suitable stream for further recovery, as shown in Figure 7. The actual recyclability that is subsequently calculated depends on the definition used; Table 1 shows values assuming optimal separation of the fragments resulting from shredding.

Figure 7. MR16 spot light, fragmentation resulting from shredding, and fragmentation resulting from shredding with fracture lines.

Table 1. Recyclability rating according to various definitions, assuming optimal separation of fragments.

  Recyclability (%)   standard   +fracture lines   Remarks
  WEEE (wt)           82         92                Weight basis, determined after separation, i.e. neglecting recovery yield
  Strict (wt)         41         67                Weight basis, determined after actual recovery
  QWERTY (env)        63         80                Environmental-impact basis, determined after actual recovery
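The effect of these definitions can be made concrete with a small calculation. The sketch below is illustrative only: the material masses, separation flags, recovery yields and environmental weights are hypothetical stand-ins, not the measured MR16 data behind Table 1, and the QWERTY-style weighting [6] is simplified to a single factor per material.

```python
# Illustrative sketch: recyclability of a product under three definitions,
# loosely following Table 1. All numbers below are hypothetical.

# Per material: mass (g), whether shredding separates it into a correct
# stream, the recovery yield of that stream, and an environmental weight.
MATERIALS = {
    "aluminum":    {"mass": 55.0, "separated": True,  "yield": 0.90, "env_weight": 1.0},
    "electronics": {"mass": 15.0, "separated": False, "yield": 0.60, "env_weight": 6.0},
    "glass":       {"mass": 20.0, "separated": True,  "yield": 0.95, "env_weight": 0.3},
    "plastics":    {"mass": 10.0, "separated": False, "yield": 0.50, "env_weight": 0.8},
}

def weee_style(mats):
    """Weight basis, after separation only: recovery yield is neglected."""
    total = sum(m["mass"] for m in mats.values())
    recovered = sum(m["mass"] for m in mats.values() if m["separated"])
    return 100.0 * recovered / total

def strict(mats):
    """Weight basis after actual recovery: separation times recovery yield."""
    total = sum(m["mass"] for m in mats.values())
    recovered = sum(m["mass"] * m["yield"] for m in mats.values() if m["separated"])
    return 100.0 * recovered / total

def env_weighted(mats):
    """QWERTY-style: weigh each material by its environmental impact [6]."""
    total = sum(m["mass"] * m["env_weight"] for m in mats.values())
    recovered = sum(m["mass"] * m["yield"] * m["env_weight"]
                    for m in mats.values() if m["separated"])
    return 100.0 * recovered / total

for name, fn in [("WEEE (wt)", weee_style), ("Strict (wt)", strict),
                 ("QWERTY (env)", env_weighted)]:
    print(f"{name:12s} {fn(MATERIALS):5.1f} %")
```

With these made-up numbers the WEEE-style figure (75.0%) exceeds the strict figure (68.5%), and the environmentally weighted figure drops to about 34.7% because the high-impact electronics fraction is never separated into a correct stream, mirroring the qualitative pattern of Table 1.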
Notably, it has been found that the current WEEE recyclability targets do not always provide the right design incentives. The focus on overall weight neglects the importance of recovering valuable and critical materials; the detrimental effect of unbreakable connections between incompatible materials is ignored, and actual recovery yields are not taken into account. Regulations in which these aspects are addressed, although likely more complicated, are needed to drive towards increased resource efficiency.

5. Service-based business models for electronic products

Service-based business models already exist in B2B (e.g. Rolls-Royce jet engines) and B2C markets (e.g. mobile phones). In many cases this is accompanied by transfer of ownership. However, if extension of product life to multiple lifecycles is of interest, access to a product becomes more important in business models than its ownership. Understanding the intrinsic remaining product value and tracking its change over time is fundamental to setting up such product-service systems in an economically sound way. In various product categories, manufacturers are exploring opportunities for setting up and further developing such business models.

As an example we discuss the Light-as-a-Service (LaaS) concept. Shifting from sales to 'light as a service' requires changes in every part of the value chain. It starts with product design. Products for sale are optimized to have the highest value for the lowest price at the moment of sale; products for LaaS require optimization for serviceability and total lifetime. Technological advancements and changes in consumer demand should be foreseen through roadmaps that incorporate expected technological developments as well as consumer behavior. A shift will occur from reliance on ownership to optimal servicing of space and quality of light. Handling of a product during its lifetime requires an integrated service organization that manages the servicing needs of the product, whilst taking care of the reverse logistics to get the product or part back to the right place in the company (production, parts storage, etc.). Predictable whole-life performance of building assets, including performance systems and maintaining a high standard of efficiency, will be crucial. The marketing will be different, and the products need to be financed upfront. Setting up LaaS as a business requires the right products, business logistics that fit the model, partners that serve parts of the ecosystem, marketing concepts, and an organization that can and will set the right targets and propositions.

A concrete example is provided by the 10-year performance lighting contract between Philips and the Washington Metropolitan Area Transit Authority (WMATA) [7]. Over 13,000 lighting fixtures are being upgraded to a custom-designed LED lighting solution at no upfront cost to WMATA, providing lighting-as-a-service in 25 WMATA parking garages. Philips will monitor and maintain the system during the life of the contract, and will also reclaim and recycle any parts of its system that must be replaced. The luminaires used (Figure 8) feature the latest Philips LUXEON LED technology, as well as a modular design that can be configured to the lighting needs of each garage. An adaptive motion response system and innovative wireless controls allow the system to dim when no one is present and seamlessly increase light levels when a space is occupied – creating a safe environment while achieving even higher energy savings.

Figure 8. Modular luminaires (right: G3; left: EcoForm) used in the WMATA Light-as-a-Service contract.
An adaptive motion response system and innovative wireless controls allow the system to dim when no one is present and seamlessly increase light levels when a space is occupied – creating a safe environment while achieving even higher energy savings. Figure 8. Modular luminairs (right: G3; left: EcoForm) used in WMATA Light-as-a-Service contract. 12 A.R. Balkenende and C.A. Bakker / Developments and Challenges in Design for Sustainability Providing services also opens new ways to product trust and attachment. Prolongation of product life span by stimulating an emotional bond between user and product is often considered as an interesting way to improve on sustainability by affecting behavior. However, such an approach links to personal interests and is therefore difficult to achieve on a large scale merely though product design. For service-based circular products trust and attachment might be achieved in different, more predictable ways. Key here is the recognition that product reliability and regular direct interaction with customers on a service basis may lead to different form of trust and attachment: not only to the product, but also to the manufacturer or service provider. 6. Specific challenges for electronics From sustainability perspective especially technologically advanced products (e.g. electronics, ICT, automotive, medical equipment) pose special challenges and deserve dedicated attention. In part this is due to their intrinsic complexity. Recent developments like the embedding of electronics in all kind of other items largely complicate end-of-life treatment: electronics are diluted with other materials to the extent that high yield recovery becomes almost impossible. As an example we refer to an analysis of disposal and recycling of electronics embedded in textiles [8]. On the other hand, increased functionality might also be used to determine the optimal treatment at a particular stage of product use. Connectivity opens opportunities for identification and life-time prognostics. This enables improved handling at the end of a lifecycle. Addressing customer behavior and setting up product service systems is also especially interesting in the context of advanced systems. For introducing services into the complex market of relatively small mediumvalued electronic products, lessons can be learned from experience with large and valuable electronic products. An example of a service based business model for this type of equipment is in the professional copier/printer business. The (professional) customer buys a service from the producer, which comprises the delivery of the device, service, disposal at end of life, change of toner cartridges, and sometimes even supply of paper. The producer invoices per page printed. Producers are forced to understand the need of their customers in a very precise way. This has led to robust and modular appliances on one hand and high reactivity on customer requests on the other. The difficulty in the transition is linking producer, service company, logistics, sorting, harvesting and final treatment. This difficulty becomes significantly more pronounced for lower valued electronics. 7. Conclusions Design for sustainability increasingly is driven by challenges in resource efficiency. Electronic products in particular contain a diversity of valuable and critical materials, often intimately mixed and in small quantities. 
In order to retrieve materials, preferably at the level of components or products, it is essential that the likely treatments at the end of a product lifecycle are considered already at the stage of product design. To improve the recyclability of products, not only should recyclable materials be used, but the ability to break connections between materials should also be explicitly taken into account. To enable multiple product lifecycles, product design should also explicitly address maintenance, upgradeability, modularity and disassembly. Proper assessment of the environmental impact needs accurate insight into the way in which a product is dealt with at the end of a lifecycle. Methodologies to account for multiple product lifecycles are still at an initial stage, and the development of methods based on avoided losses instead of recovered fractions deserves further attention.

Acknowledgement

The authors wish to acknowledge the contributions of and discussions with Maarten van den Berg (TU Delft) on design guidelines for circular product use and Maurice Aerts (Philips Lighting) on light as a service. The insights discussed here are partly based on results obtained in the GreenElec project, which received funding from the ENIAC Joint Undertaking under grant agreement no. 296127.

References

[1] Ecodesign of Energy Related Products Directive 2009/125/EC.
[2] Committee on Critical Mineral Impacts of the U.S. Economy, Committee on Earth Resources, National Research Council, Minerals, Critical Minerals, and the U.S. Economy, National Academies Press, 2008.
[3] Ellen MacArthur Foundation, Towards the Circular Economy: Economic and business rationale for an accelerated transition, 2012.
[4] M.R. van den Berg and C.A. Bakker, A product design framework for a Circular Economy, in: Proceedings of the PLATE Conference, Nottingham Trent University, 17-19 June 2015.
[5] A.R. Balkenende, V. Occhionorelli, W. van Meensel, J. Felix, S. Sjölin, M. Aerts, J. Huisman, J. Becker, A. van Schaik, M. Reuter, GreenElec: Product Design Linked to Recycling, in: Proceedings of Towards a Resource Efficient Economy, Going Green – Care Innovation 2014, Vienna, Austria, 17-20 November 2014.
[6] J. Huisman, The QWERTY/EE concept: quantifying recyclability and eco-efficiency for end-of-life treatment of consumer electronic products, Delft University of Technology, Delft, 2003.
[7] S. Casanova, Philips North America, 2013, Washington Metro Goes Green & Saves Green with Philips Performance Lighting Contract, Delivering on Sustainability Goals with 15 Million kWh Saved Annually. Accessed 01.06.2015. [Online]. Available: http://www.newscenter.philips.com/us_en/standard/news/press/2013/20131112-philips-wmata.wpd
[8] A.R. Köhler, L.M. Hilty, and C.A. Bakker, Prospective Impacts of Electronic Textiles on Recycling and Disposal, Journal of Industrial Ecology 15 (2011), 496-511.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-14

What Is the Next Big Innovation Management Theme?

Rob de GRAAF (corresponding author, e-mail: rob.de.graaf@philips.com) and Iason ONASSIS
Philips Innovation Services, Industry Consulting, The Netherlands

Abstract. The next big innovation management theme: what could it possibly be?
Are we stuck with standardizing the end-to-end innovation process, using innovation ecosystems and lean forever, or is there more to come? There is more to come: the next big thing may be an evolutionary innovation process absorbing all kinds of best practices; it may be white spot analysis, helping companies to look beyond their current horizons; or it may be users taking over innovation, pulling their ideas through the corporate ranks to get them to market.

Keywords. End-to-End Innovation Process, Innovation Ecosystems, Lean Innovation, Evolutionary Innovation Process, Best Practices, White Spot Analysis, User-Driven Innovation, Meaningful Innovation.

Introduction

When Concurrent Engineering and later Collaborative Engineering were coined in the 1990s [1], innovation management practices got a boost. Several institutes were set up to research the topic, such as the Concurrent Engineering Research Center at West Virginia University in Morgantown, USA. Tools, practices and methodologies were developed to improve the process of innovation in terms of cost (input), lead time (throughput) and output (quality). Since then, many new innovation management theories and practices have arisen. In 2010, Open Innovation, Design Thinking and Blue Ocean Strategy were well-established innovation management theories. Innovation-related management theories succeed each other more rapidly every time. Today's themes include ecosystems, big data and lean start-ups. What's next?

Philips Innovation Services, Industry Consulting, has successfully been identifying trends in innovation management theory in order to apply them inside and outside Philips. Based on that experience, it is always good to reflect on what has happened in innovation management in recent years and, more importantly, on what we need to do to be prepared for the future. As innovation is more and more becoming a core process, thrives in ecosystems, and needs to become even leaner and speedier, the challenges of today are things we will work with for quite a while longer. Still, there is more work to do on what we think may be the next big things in innovation, such as continuous best-practice implementation, blind spot detection, and user-driven innovation.

1. Today: Innovation as a standardized end-to-end process

Innovation is no longer seen as a separate process hidden somewhere in the company; many companies now see it as a key process in the end-to-end value chain. The value a new product or service creates is strongly driven by innovation, followed by a smooth transition to the sales & marketing process highlighting the benefits, and enabling operations to provide the required variety at low cost. It is all about how to become an 'Innovation Chain Master'.

Innovation is now far more global and based on multi-party collaboration than five years ago. The key is being able to build the innovation chain (scouting, selecting and involving the innovation partners), linking the entities and sustaining the chain, and creating a set of control points to ensure the ability to win and to reward value appropriately. And we have indeed seen that going beyond one-on-one Open Innovation successes to multi-party successes is a challenge.

2. Today: Innovation in ecosystems

That brings us to how value is created today.
Understanding your business model is no longer enough; you have to understand the whole ecosystem in which your innovation takes place and how its value finds its way towards the end user [2]. Today there are many different types of value companies can create: not just money for a product or service, but also value in aspects such as brand preference and information. Data is created by all kinds of parties today; making sense of that data and thus creating new value is something we now call data analytics. Depicting the value network, i.e. the actors and their interactions, is nowadays key to understanding the value of your innovation to the ecosystem. It turns out everyone has a part of that picture in their heads, but bringing those pictures together and then looking to boost the value in the network is key. Understanding which connections are missing, blocking or fortifying drives a business to come up with more meaningful innovations that can thrive in the ecosystem they are intended for. Thriving may also mean that others build on your platforms to add even more value. That makes innovation a lot more scalable, as we have experienced with the tens of thousands of people who have downloaded the SDK for hue to build home lighting applications themselves.

3. Today: Innovation the lean way

Lean thinking can be applied in innovation as well, especially in the later, more expensive phases of development and launch. It seems that many things, large and small, can still be improved. This reduces waste and speeds up development. Additionally, it frees up resources to innovate more. The lean start-up methodology has become rather popular over the last few years, not just for the start-up bootcamps of this world, but also in the corporate world, where early and cyclic testing of ideas surfaces the customer's needs so much better. Many companies nowadays are trying to implement lean thinking for innovation themselves, including the use of lean start-up methods, where the emphasis is on how to learn more quickly what works and discard what doesn't.

4. Next: Evolutionary innovation processes

There are plenty of theories and practices, and many new ones will be developed over the next few years. The key question is always: how do you incorporate them in your organization? Having a standardized end-to-end innovation process is a good start, as you only have to find out how to do it once and can then replicate it in the rest of the organization if it works out. What is key there is to understand the gaps between your capabilities and your ambition, strategy and the industry's requirements. Once you have determined what you need to work on, pilot the key elements you need from existing innovation approaches. Learn, adapt and roll out. The innovation process will continuously evolve. The keen reader will have noticed that this actually follows some of the lean start-up lessons above, like finding out what you need to learn up front (the diagnostic), trialing fast and cheaply, and piloting where needed.

5. Next: Finding the blind spots

Any organization will have them: blind spots. Things that are happening just beyond the horizon and that you don't see until they come charging at you. Whole needs spaces are missed at times, and this creates a lot of room for new players. Therefore you need blind spot detection. The advances in ICT make information so readily available that structurally addressing blind spots is becoming possible.
Companies now have decision rooms for strategy and business tracking. These are also applicable in innovation, looking at markets, needs, competitors and upcoming technologies. If a new development could cannibalize your business, you had better act on it and be the first, because chances are high that someone else will move and preempt you. Also, literally taking people beyond their horizon in this ever-changing world is key. Show innovation leaders new things and they will understand that their current roadmap is flawed. Challenge them to find the blind spots and change course accordingly. That will further increase the chances of success.

6. Next: User innovation

Where currently the majority of innovation spend still comes from companies, users themselves are starting to become the innovators. Our research has shown that only 54% of people consider innovation to be meaningful [3]. And today users can innovate themselves: not necessarily improving existing offerings like lead users can help do, but really building new innovations from scratch themselves. There are so many examples of user driven innovation in the public domain already that one may expect the user to take over the innovation process entirely. The user will not be asked by the company what they think of its next best idea; no, the user will tell the company what to work on. They will use the company as a resource to get their innovation to market. It is no longer a pull from the user; the user is actually pushing the innovation project. And there are tons of these users around. Key is now to understand which capabilities your company can bring to the table to make it happen. Fewer and fewer good ideas will get stuck in the funnel when companies can enable users to innovate. Partly disclosing intellectual property for more meaningful innovation is key there.

7. Conclusion

The field of innovation science has developed tremendously over the past decades, giving us all kinds of new methodologies and theories that should help companies get better at innovation. Obviously there is still quite some way to go before every brilliant idea converts into a meaningful innovation. Recently introduced methodologies and visions are currently being implemented in companies and do seem to improve their innovation performance: we see a lot of companies embracing innovation as an End to End process, starting to use ecosystems to go beyond business models, and making their innovation processes more lean. We expect that these items will still be on the agenda of many companies wanting to innovate better in the coming years. What lies beyond that are newly developing methodologies to continuously implement proven best practices in innovation, going from a standardized innovation process to an evolutionary one. Furthermore, companies will proactively look beyond their horizons to sensitize themselves to blind spots: things that they have missed because of too much corporate focus. Finally, in the longer run, the users themselves may become our innovation managers, simply leveraging corporate capabilities to get their meaningful innovations to market.

References

[1] S.S.A. Willaert, R. de Graaf and S. Minderhoud, Collaborative engineering: A case study of Concurrent Engineering in a wider context, Journal of Engineering and Technology Management, Volume 15, Number 1, March 1998, pp. 87-109(23).
[2] E.
den Ouden, Innovation Design, Creating Value for People, Organizations and Society, Springer, London, 2011.
[3] Philips, 2013, Philips Meaningful Innovation Index, Accessed: 29-05-2015. [Online]. Available: http://www.newscenter.philips.com/pwc_nc/main/standard/resources/corporate/press/2013/SurveyWEF/2013-01-23-Philips-Meaningful-Innovation-Index-Report.pdf

Part 2
Systems Engineering

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-21

Heuristic Systems Engineering of a Web Based Service System

John P.T. MO a,1 and Sholto MAUD b
a RMIT University, Australia
b University of Queensland, Australia

Abstract. Complex engineering systems development comprises many technological elements that have to be integrated together to function as one system. Traditionally, a project based approach will create a new product. However, after many years of product engineering, the systems engineer can now be faced with engineering of technologies that need to integrate with legacy systems which may continue their deployment over a long period of time. Legacy integration poses problems for traditional systems engineering methods, such that the success of a complex engineering product cannot be measured simply in terms of the successful commissioning of the system, but requires a measure of product performance within the larger scale system and pre-existing development system over the course of the product's deployment. This paper uses the development of a hydrological smart phone system to illustrate the concept of a "Heuristic" approach to Systems Engineering. The authors propose that the traditional systems engineering method needs flexibility to perform a Heuristic translation to provide error correction shortcuts in the legacy engineering methodology.

Keywords. Systems engineering methodology, Web services, Integration with legacy systems, Mobile system monitoring, Hydrological data acquisition

1. Introduction

Australia's water supply system has many stakeholders with different requirements for environmental data: private individuals such as farmers may want to know whether they can pump water from a river; government agencies may want data on the oxygen content of a river for preventing fish kills or for maintenance and billing purposes; the media and the general public may have a requirement for accessing stream and flood data. Each of these different use cases has different requirements in terms of the quality and quantity of data, the degree of accessibility to the data, and the timeliness of the data. Until the advent of System on a Chip (SoC) technology, it was difficult to supply environmental data for the many different use cases given above. In the past, field observations were commonly recorded either on paper or in a spreadsheet on a laptop, and the data had to be entered into systems within the office network. SoC technology underlies modern smart phones such as those that use the iOS(TM), Windows(TM), Blackberry(TM) and Android(TM) operating systems. It has enabled an enhanced capability of accessing internet services on mobile devices in the field.

1 Corresponding Author, E-mail: john.mo@rmit.edu.au.

These services
can provide data through wireless and mobile phone networks to mobile phones and tablet-like PCs equipped with SoC and web browser technology, and the last five years have seen rapid development of this technology. However, application of these new devices with existing infrastructure can be challenging because they typically need to integrate with legacy systems. In the case of the systems of interest for this article, the legacy systems are hydrometric. Hydrometric information (rainfall, river level, water quality, groundwater) is commonly collected by large entities like environmental service corporations that provide hydrometric data collection and maintenance services on behalf of a government agency, for the purposes of flood warning for example. Whilst the project life cycle for the development of such systems was relatively short (potentially within one year), the system program life cycle for the overall system product is several times longer (typically 4-8 years). In some cases the infrastructure life span has been over 20-25 years, with databases incorporating observations accumulated over 100 years of hydrographic endeavor. Hence the success of a product which integrates with the legacy of the hydrometric profession cannot be measured simply in terms of the successful commissioning of a system, but requires a measure of product performance within the larger system and large time scale over the course of the product's deployment.

This paper uses the development of a smart phone web system for accessing hydrological data as an example to illustrate the main concept of a Heuristic approach to Systems Engineering for problems encountered during system design, implementation and the development of on-going support. The problems include the geographical constraints, the uptake of mobile web applications within the hydrological industry, the number of users accessing the application to acquire information, and the reliability of the system during extreme hydrological events such as floods or droughts, to name a few.

2. Literature Review

The goal of Systems Engineering (SE) is to generate development and implementation methods that are designed to manage this type of interdisciplinary integration, and then to progressively narrow the vision onto technical details [1] [2] [3]. However, a curious feature of the systems world view is the capacity for encapsulated, or "subsystem", world views to evolve out of the consideration of specific systems, and for these subsystem world views to exist without knowledge of other system world views [4] [5].

2.1. Systems Macroscope

This research aims at viewing world environmental systems through data acquired from sensors and accessible through the internet. In this context, Van Zyl et al. [6] contended that the internet could evolve into a "macro sensing instrument", capable of accessing sensory data from around the world and presenting it on the touch-screen of an individual's mobile device, putting data from a variety of sensors at the fingertips of individuals [7]. This "macro sensing instrument", which has been called a "macroscope", is itself used to form a bigger picture of the world than is available through the senses a human is born with. On the contrary, Nixon [8] argued that the hardware and software required for the macroscope have been invented, and largely implemented.
Nixon included satellites, the internet, search engines, high-speed computers and various sensors in the definition of the macroscope. Supporting this argument is the fact that large-scale environmental data acquisition networks have been referred to elsewhere as macroscopes [9] [10]. Delin described the Sensor Web as a "macro-instrument", which further supports the contention that the ensemble of sensors and data acquisition systems constitutes a macroscope [7]. Odum [11] had the opinion that a macroscope was a vision of sensor networks feeding empirical data into a filtering and decision making system. These different views of the 'macroscope' are examples of how different approaches to systems can generate different views regarding what appears to be similar phenomena.

2.2. The system engineering models

A model in systems engineering describes the abstract behavioral procedures involved in implementing a system solution. The model can include a complex interacting network of software, hardware, human and ecological resources. There are many modelling methods commonly used and referred to in SE literature. The V-Model is one such modelling method, and has been promoted as one of the most reliable and effective methods [12, 13, 14, 15, 16, 17]. However, there are some deficiencies in the V-Model, and further enhancements of this model have been made using what has been called the W-Model. The W-Model was introduced as an enhancement of the V-Model to include the debugging process and early testing feedback [18] [19]. Spillner [20] found that in software systems development, 30 to 40% of software activities are related to testing, and as a result it is critical to launch test activities at the beginning of the project rather than after coding is complete. The W-Model incorporates this iterative testing requirement, allowing the developer and project manager some generic flexibility, whereby not all changes in sub-system requirements need to be captured during the testing phase. With this flexibility, after a fault is corrected, it is possible to re-execute the testing and leave some sub-system requirements un-captured. This means that the developers can focus on producing a system that works without a heavy documentation overhead.

The actual implementation of the system model relies on software engineering methods such as UML [21]. The software engineering technologies may themselves include legacy tools and procedures that may or may not have been used during the life cycle of the legacy system at various times. There may, for example, be a culture of agility [22] [23], or concurrent testing within a development team; however, the team may not even refer to these terms during a project.

2.3. The heuristic translation layer

The evolution of the W-Model illustrates that SE needs the flexibility to accommodate informal or ad-hoc methods which may be present within a legacy environment that may not have resources, skills or requirements in formal SE methods. While the W-Model is useful for tightly coupled system modelling such as aircraft, it is inadequate to handle the diversity of legacy systems in an infrastructure system such as a hydrometric information system. The authors propose a heuristic zone that can facilitate the translation between formal SE methods like the V or W-Models, and those legacy engineering systems which may not have any formal methods.
A heuristic zone uses 'rules of thumb' to manage undocumented, pre-existing system requirements in a legacy development environment. In Figure 1, the heuristic zone operates as a kind of valve, letting through useful elements of formal SE methods where they may benefit the legacy methods, without imposing the requirement for a full-blown formal SE methodology throughout the project.

Figure 1. The heuristic zone for translating between formal systems engineering methods (V/W Model, MBSE, etc.) and informal and legacy methods.

3. Concept of Operations

Until the last few years it was common for hydrological data stored on a government server to be very difficult to access. In some cases it might only be accessed on request, by manually submitting a form to the government entity. Even then the data might be supplied in a variety of formats, making it difficult to interpret.

3.1. Presentation of data on Mobile-enabled devices: a "mobile macroscope"

In the context of the world view presented above, the innovation in this research is the development of a "mobile macroscope". This concept is depicted in Figure 2.

Figure 2. HMWA conceived as a part of a mobile macroscope (adapted from Odum [11]).

The mobile application was designed specifically to interface with the webservices provided through the Hydstra Web Module [24]. In terms of the basic elements of the system's macroscope, the HMWA would be located at phase 3, where acquired data needs to be presented to the user for determination of flows of water, energy or whatever parameter is being monitored.

3.2. Use Cases

In order for environmental data to be presented to different users and stakeholders through a mobile device, it needs to be of sufficient quality and reliability. In this context, each different user generates a different system context and use case in the quality requirements (Figure 3).

Figure 3. HMWA Use Case Diagram.

There are three distinct users currently envisaged for HMWA: a governing agency, the general public (who might be a private user), or a field hydrographer. There are other potential users such as State Emergency Services; however, they were not included as part of the stakeholders for the project. The use cases here are somewhat conceptual, and not necessarily part of the requirements. Some of them are stretch use cases which might be incorporated into the system if there are adequate resources. Other use cases are: locating a site, inspecting a hydrological parameter such as river level, entering field observations used in the validation and verification of the stream level, and getting operational information for decision making purposes such as flood warning information on latest river levels. These use cases are developed similarly.

3.3. System server

The architecture of the HMWA system requires a host server running the Microsoft Internet Information Services (IIS) product [25]. Dynamic folders are created from the batch files, scheduled for a nightly run. This also includes the addition or removal of any parameters, and new navigation options on the index page; for example, an administrator might wish to include groundwater or domestic meters on the index page.
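The next paragraph describes the client-server exchange built on top of this server architecture. As a concrete illustration, the following is a minimal sketch of what such a request might look like; it is indicative only, and the endpoint, function and field names used here are assumptions made for this sketch, not the actual Hydstra web service interface.

// Hypothetical sketch of the kind of request webmobile.js issues when the
// user selects a site. All names (getData, /webservice, the request fields)
// are illustrative assumptions, not the real Hydstra interface.
function getData(siteId, parameter, onSuccess) {
  // Build the JSON string telling the web service which site and
  // parameter values should be returned.
  var request = JSON.stringify({
    site: siteId,         // e.g. a gauging station identifier
    parameter: parameter, // e.g. "river_level" or "rainfall"
    latest: true          // request only the most recent values
  });
  $.ajax({
    url: "/webservice",            // server-side endpoint backed by the dll
    type: "POST",
    data: request,
    contentType: "application/json",
    dataType: "json",
    success: onSuccess,            // render the returned values in the UI
    error: function () { alert("Unable to retrieve site data"); }
  });
}

// Example usage: fetch the latest river level for a selected site and
// hand the result to a rendering callback.
getData("203014", "river_level", function (data) {
  console.log(data);
});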
During the project, to allow communication between the client and the server, an AJAX call to the server was developed, and the Hydstra dll was enhanced to enable the latest values to be returned from the server to the client, following the data flow diagram in Figure 4 [26]. When the user selects a site, webmobile.js triggers a getData function that sends a JSON string to the webservice, indicating what parameters and data should be returned.

Figure 4. Data flow diagram for HMWA.

3.4. Implementation, Integration and Testing

Testing and implementation of the system was a simple process, involving a sequence of secure login steps to gain access to the client's IT infrastructure, and then navigating to the appropriate server where the Hydstra Web Portal is located. The distinction between UI development and integration development was not always strong, with significant overlap between the development duties. However, integration and pushing of technology were performed by the senior web developer and integration expert. Integration was thus both a matter of pushing the technology to the server, and of bringing the UI within the folder structure and within the capabilities of the batch engine responsible for building the index.html page on a scheduled basis. Once the system had been integrated and deployed to the client server, priority bugs were identified and fixed. This type of bug fixing is described in the W-Model, whereby successive iterations are needed to address issues identified after the integration of the system. One of the difficulties identified during this process was the number of steps required to fix a bug. Figure 5 shows that HMWA UI development was undertaken on a local desktop system. Emulation of mobile device performance was conducted through Google Chrome; however, this did not fully replicate the features of the mobile device browser.

Figure 5. Integration, development and testing strategy.

3.5. User Acceptance Testing

Once the system had been tested, system verification and validation was undertaken. The system verification and validation comprised sign-off user acceptance. This involved functional testing from both internet browsers and mobile phones for a number of different browser and phone models and versions. During these testing phases, a number of different users from different levels within the client organisation accessed the HMWA. In each of these cases the users generated new requirements after accessing the interface. The most common new requirements were a favourites functionality, plots of the last 7 days of data, and a text-based version which could be accessed from older devices. Although the HMWA met the initial requirements specifications, the generation of new requirements within the system verification and validation phase meant that the HMWA had to undergo a second round of user acceptance. Fortunately, most of these new requirements had already been conceived within the stretch goals for the project, which hastened their development. After another couple of iterations of UAT, the system was accepted.

3.6. Operations and Maintenance

Once HMWA passed UAT, the final phase under consideration was the Operation and Maintenance of the system. It was felt that HMWA could be maintained and supplied under the Hydstra Web Portal licence with no additional licence fee imposed on the client.
At the same time, it was noted that operation and maintenance of HMWA required a skill set in the bespoke web services provided by Hydstra, together with knowledge of JavaScript, CSS, jQM, jQuery, HTML and HTML5 [27] [28]. It is not expected that these skills would be possessed by the user, and the standard maintenance licence stipulates that any developments made by the users themselves are not supported. This means that the vendor requires a skill set in the above technologies to continue supporting the HMWA. These technologies are not native to the Hydstra platform (although the Perl web services are), so there is an open question about how to keep the skill set current for maintenance and future developments.

4. Discussion

This discussion will address two issues. The first is the theoretical and methodological issues faced throughout the project, and the second is the actual technical issues arising from the project development.

4.1. The project methodology

One of the main considerations of this project, other than the development and implementation of the HMWA, was to examine the applicability of the different methodologies in systems engineering. In particular, one aim was to establish whether systems engineering methods can be used in the context of projects with legacy development systems. One of the lessons learnt was that the role of the systems engineer needed to be flexible and adaptive to the tools and circumstances at hand. Many of the tools for tracking V-Model phases were not present within the company, and the tools required for the type of analysis and model-based engineering used in SysML were also not available internally. The absence of these tools did not appear to impact on the development timeline and management of the project. Moreover, it appears that these tools may not be a requirement of development at a certain level of complexity and team size, and a decision needs to be made by the systems engineer at the beginning of the project as to whether systems engineering software applications and analysis methods are a requirement for the completion of a project.

4.2. System engineering principles applied

The macroscope has many elements of SE. However, there are two significant gaps in the literature, and in practice, which might provide the most relevant criticism of the thesis presented here. The first is that Ecological Systems Engineering(s) (ESE) does not have a behavioral model for system implementation and requirements development. This is a significant omission. Whilst ESE does appear to have a parametric model and a requirement for Model-based Systems Engineering (MBSE), it has not developed a behavioral model for system implementation and requirements development, such as the V-Model and W-Model considered here. There is nothing present in the literature which amounts to a similar methodology in the systems ecology field. In terms of system world view, the gap between Systems Engineering proper and an Ecological approach to Systems Engineering might be identified in the different emphasis on the scale of the engineering enterprise, and the subsequent effect on the scope of a development project. Another significant gap in the theory underlying this thesis is that of practical relevance. The concept of a generic system 'macroscope' encompassing world data, enabling humans to form a picture of the whole from the parts, may appeal to notions of a grand unified theoretical synthesis.
However, in practical terms several macroscope use cases already exist, such as telemetry systems for flood warning, and whilst such a telemetry system has a useful purpose and has elements of the vision for a world macroscope, it does not need to fulfill any of the macroscopic goals in order to operate as a discrete system performing a specific purpose.

This paper has used the development of a hydrological smart phone system as an example to illustrate the concept of a "Heuristic" approach to Systems Engineering. The authors propose that the traditional systems engineering method needs flexibility to perform a Heuristic shortcut to correct potential issues in the legacy integration.

5. Conclusion

In summary, the project was completed on time and to budget, and met the functional requirements requested by the client. The success of this project might be evaluated by the uptake and maintenance of HMWA within the hydrological industry. It has already been implemented as a core component of the Hydstra Web Portal product offering, and has been deployed on the Australian State Government of New South Wales website, along with the Department of Natural Resources in Queensland, Australia. The main category of problems encountered during the progress of the project was methodological, in particular the application of the conceptual tools of SE to a legacy system of software engineering. To address these methodological issues, this paper proposed the concept of a "Heuristic" approach to Systems Engineering, used as a type of translation layer between formal methods of SE proper and the informal engineering methods of legacy systems. Although the legacy system presented significant obstacles, the application of a heuristic SE methodology to project management enabled the integration between computer engineering and hydrographics to be achieved, through consideration of world views and the use of agile systems techniques. The HMWA project delivered a system addressing the requirement for accessing site data from mobile devices. Since mobile devices can access the data supplied by HMWA on call, while in range of mobile internet services, this presents a major step forward in remote field access to hydrological data acquired from any site within a government agency or corporate server database [29].

References

[1] P. Checkland, Systems Thinking, Systems Practice: Includes a 30-Year Retrospective, John Wiley & Sons, New York, 1999.
[2] R. Pressman, Software Engineering: A Practitioner's Approach, 7th ed., McGraw-Hill, New York, 2009.
[3] A.A. Puntambekar, Software Engineering and Quality Assurance, Technical Publications, Pune, 2010.
[4] S. Maud, D. Cevolatti, Realising the Enlightenment: H.T. Odum's Energy Systems Language qua G.W.v Leibniz's Characteristica Universalis, Ecological Modelling, Vol. 178, (2004) 1-2, pp. 279–292.
[5] R.C. Beckett, Functional system maps as boundary objects in complex system development, Int. J. Agile Systems and Management, Vol. 8, 2015, No. 1, pp. 53–69.
[6] T.L. van Zyl, I. Simonis, G. McFerren, The Sensor Web: Systems of Sensor Systems, International Journal of Digital Earth, Vol. 2 (2009) pp. 16–30.
[7] K.A. Delin, The Sensor Web: A Macro-Instrument for Coordinated Sensing, Sensors, Vol. 2, (2012) pp. 270–285.
[8] S.W.
Nixon, Eutrophication and the macroscope, Hydrobiologia, Vol. 629 (2009) pp. 5–19.
[9] D.E. Culler, Toward the sensor network macroscope, in Proceedings of the 6th ACM international symposium on Mobile ad hoc networking and computing - MobiHoc '05, 25-28 May, 2005, Urbana-Champaign, pp. 1-1.
[10] M. Baqer, A. Kamal, S-Sensors: Integrating physical world inputs with social networks using wireless sensor networks, in International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 7-10 December, 2009, Melbourne, pp. 213-218.
[11] H.T. Odum, Environment, Power and Society, Illustrated edition, John Wiley, New York, 1971.
[12] B. Blanchard, W. Fabrycky, Systems Engineering and Analysis, Pearson, Upper Saddle River, 2006.
[13] M. Kuhrmann, D.M. Fernández, T. Ternité, Realizing Software Process Lines: Insights and Experiences, International Conference on Software and Systems Process, ICSSP'14, 26-28 May, 2014, Nanjing.
[14] E.Y. Nakagawa, M. Gonçalves, M. Guessi, L.B.R. Oliveira, F. Oquendo, The state of the art and future perspectives in systems of systems software architectures, in Proceedings of the First International Workshop on Software Engineering for Systems-of-Systems SESoS '13, July 2, 2013, Montpellier, pp. 13-20.
[15] A. Kossiakoff, W.N. Sweet, Systems Engineering Principles and Practice, Wiley, New York, 2002.
[16] NASA, Systems Engineering Handbook, NASA/SP-2007-6105 Rev 1, December, 2007.
[17] A. Biahmou, Systems Engineering, in: J. Stjepandić et al. (eds.): Concurrent Engineering in the 21st Century: Foundations, Developments and Challenges, Springer International Publishing Switzerland, 2015, pp. 221–254.
[18] L-H. Li, L. Qiong, L. Jing, The W-Model for Testing Software Product Lines, International Symposium on Computer Science and Computational Technology, ISCSCT '08, 20-22 December, 2008, Shanghai, Vol. 1, pp. 690–693.
[19] A. Spillner, W-Modell, in: A. Spillner et al. (eds.) Praxiswissen Softwaretest – Testmanagement. Aus- und Weiterbildung zum Certified Tester - Advanced Level nach ISTQB-Standard, 2nd ed., Dpunkt, Heidelberg, 2008.
[20] A. Spillner, The W-MODEL – Strengthening the Bond Between Development and Test, Software Testing Analysis & Review East, STAReast 2002, Orlando, 15-17 May, 2002.
[21] T. Weilkiens, Systems Engineering with SysML/UML: Modeling, Analysis, Design, Morgan Kaufmann, Burlington, 2008.
[22] A. McLay, Re-reengineering the dream: agility as competitive adaptability, Int. J. Agile Systems and Management, Vol. 7, No. 2, 2014, pp. 101–115.
[23] A. Singh, K. Singh, N. Sharma, Agile in global software engineering: an exploratory experience, Int. J. of Agile Systems and Management, Vol. 8, 2015, No. 1, pp. 23–38.
[24] A. Tamayo, P. Viciano, C. Granell, J. Huerta, Sensor Observation Service Client for Android Mobile Phones, International Journal of Digital Earth, 2015, (in press).
[25] Microsoft, Overview: The Official Microsoft IIS Site, http://www.iis.net/overview, Accessed: 5 June 2015.
[26] E. Sanchez-Nielsen, S. Martin-Ruiz, J. Rodriguez-Pedrianes, Mobile and dynamic web services, in: C. Pautasso, C. Bussler (eds.) Emerging Web Services Technology, Birkhäuser, Basel, pp. 117–133, 2007.
[27] A. Zibula, T.A. Majchrzak, Cross-Platform Development Using HTML5, jQuery Mobile, and PhoneGap: Realizing a Smart Meter Application, in: K.-H. Krempels, A. Stocker (eds.) Web Information Systems and Technologies, Springer-Verlag, Berlin Heidelberg, 2014, pp. 16-33.
[28] M. Pilgrim, HTML5: Up and Running, O'Reilly, Sebastopol, 2010.
[29] A. Mark, Browser and publisher for multimedia object storage, retrieval and transfer, U.S. Patent No.
6,269,403, 31 July 2001.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-31

Stakeholder Management as an Approach to Integrated Management System (IMS-STK)

Andreia F.S. GENARO 1 and Geilson LOUREIRO 2
National Institute for Space Research, Brazil

Abstract. This paper aims to present a new approach to Integrated Management System (IMS), as a management system able to manage all stakeholders identified by an organization. A worldwide trend to integrate the requirements of different standardized management systems is observed, but organizations have faced an increasing number of standardized systems, motivating many researchers to focus on new integration methodologies. Taking as reference the management systems used by most organizations (e.g. quality; environmental; safety and occupational health; and social responsibility), this paper intends to present the existing commonalities among the requirements of these standards and, after that, the existing commonalities among the stakeholders identified for each integrable requirement. Stakeholder management can also be interpreted as an effective way to map out requirements for processes and products, as well as a way to map out management requirements for an organization, which enables the implementation of requirements in addition to those already defined in standardized systems such as ISO standards. Therefore, it can be affirmed that the traditional IMS approach allows a generalization of the IMS concept towards stakeholder management, by analyzing the commonalities among the most used standardized systems and their stakeholders. The Integrated Stakeholder Management proposed in this paper is open-ended, so that organizations do not become dependent only on standardized systems. This new approach helps to incorporate requirements provided by an analysis of stakeholder demands. In this context, it is concluded that the new concept of IMS proposed herein is an alternative solution for organizations that aim to achieve better levels of stakeholder satisfaction, focusing on meeting their requirements and also on exceeding their expectations in an integrated manner within their management processes, without depending only on standardized systems.

Keywords. Integrated Management Systems, Requirements Management, Stakeholder Management, Deming Cycle

Introduction

This paper aims to present a new approach to integrated management systems (IMS) as a management system able to help organizations to manage their stakeholders in an integrated manner.

1 Genaro, A.F.S., National Institute for Space Research, INPE, CEP: 12227-010, São José dos Campos, SP, Brazil, E-mail: andreia.sorice@inpe.br
2 Loureiro, G., National Institute for Space Research, INPE, CEP: 12227-010, São José dos Campos, SP, Brazil, E-mail: geilson@lit.inpe.br

Domingues [1], Poltronieri [2] and Cerqueira [3] agree that an IMS is a system that contains at least the integrated requirements coming from quality and environmental management systems.
This understanding is supported by Moraes [4], who claims that the management systems most used by organizations for the development of their IMS are the Quality, Environmental, and Safety and Occupational Health management systems. According to Cerqueira [3], Bernardo et al. [5], and Karapetrovic and Jonker [6], in recent years a worldwide trend to integrate requirements from different standardized management systems can be observed. Compliance with the requirements of standards such as ISO 9001 and ISO 14001 in an integrated manner is helping organizations to structure their management systems, reducing costs compared with managing all these standardized management systems individually. However, Lopez-Fresno [7] and Gianni and Gotzamani [8] emphasize that a fully integrated management system should cover all requirements described in standards, and also that the management processes should be extended to all business stakeholders. Moreover, Asif et al. [9] point out that a key question is how to create business processes able to accommodate the needs of all stakeholders in an integrated manner. It is noteworthy that the isolated management systems already provide means for organizations to meet the requirements of their stakeholders. Bernardo et al. [5], Karapetrovic and Jonker [6] and Jørgensen [10] agree that the implementation of an IMS should converge towards the satisfaction of all the organization's stakeholders.

According to Asif et al. [11], organizations have faced numerous management systems, making it necessary to integrate all of them. This is a subject being studied by a large number of researchers, focusing on practical aspects such as integration methodologies. Karapetrovic and Jonker [6] emphasize that standardization working groups around the world are spending effort to elaborate national IMS standards. However, Asif et al. [11] claim that the integration of management systems can vary from organization to organization, because each organization has its own specific characteristics, such as being placed in a particular market niche, with different characteristics and stakeholder requirements. According to Sartori and Weise [12], it is important to notice that changes occur rapidly and are influenced by globalization, increasing competition, and technological, environmental and social constraints. Asif et al. [9], Rocha and Goldschmidt [13], Trentim [14] and Bourne [15] emphasize that in this scenario it is important to take into account the identification of new stakeholders.

Using as reference the Quality, Environmental, Safety and Occupational Health, and Social Responsibility Management Systems, this paper also aims to show that there are not only commonalities among the requirements of those standards, but also commonalities among their stakeholders. Finally, it can be concluded that the generalization of the IMS concept is possible using the stakeholder management approach.

1. Methodology

According to Richardson [16], the research methodology applicable in academic papers must be appropriate to the type of study that needs to be performed, but it is the nature of the problem that determines the choice of method. Vergara [17] says that scientific research can be classified according to purpose and method.
Regarding purpose, research can be exploratory, descriptive, explanatory, methodological, applied or interventionist. Regarding method, research can be laboratory-based, documental, bibliographical, experimental, participant or case study. Regarding purpose, this study is characterized as exploratory, because the quality, environmental, occupational health and safety, and social responsibility management systems, and the relationships among their requirements and stakeholders, were analyzed in depth. Regarding method, this paper is characterized as bibliographical, because a systematic study was made based on the consultation of books, journals, conference proceedings, academic databases and repositories of dissertations and theses.

First of all, a study was performed of the standards that traditionally compose a typical IMS, supported by stakeholder theory. To perform the analysis and verify the necessity of a generic IMS, not dependent only on rules and regulations, a study of stakeholder management theory was performed, from the definition of the term "stakeholder" proposed by Freeman in the 1980s to the more contemporary stakeholder theory. As a result of the literature review undertaken in this paper, it was possible to find arguments to propose a new concept of IMS, supported by stakeholder management, presented in Section 5.

2. Common requirements among quality, environmental, safety and occupational health and social responsibility standards

Cerqueira [3], Moraes [4] and Ribeiro Neto [18] reported that when ISO 9001:2000 was being updated, there was a concern that this standard had to be integrable with ISO 14001:1996; thereby some of their requirements were aligned and renumbered as common requirements. Moraes [4] and Ramos [19] cited that the same procedure was followed when OHSAS 18001 was developed by the British team and, in Brazil, when elaborating NBR 16001:2004 for social responsibility. Table 1 shows a summary spreadsheet containing some common requirements among the standards ISO 9001:2008, ISO 14001:2004, OHSAS 18001:2007 and NBR 16001:2012. The full spreadsheet is presented in Genaro [20]. As an example, Table 1 presents the commonalities among requirements related to human resources, infrastructure and product realization.

According to Table 1, it is possible to verify the existence of synergies among the management requirements imposed by those standards. In fact, since every decision that implies change inside the organization is a top-level decision (internal stakeholder), it can be supposed that the implementation of any management system is always motivated by the needs of the organization's stakeholders (e.g., compliance with environmental legislation; contamination of soil and groundwater; compliance with labor law; implementation of a quality management system; and so on). This understanding is corroborated by Asif et al. [9], Rocha and Goldschmidt [13], Trentim [14], Bourne [15] and Svendsen [21], who state that organizations need effective means of communication with their stakeholders, and a way to identify, analyze and engage stakeholders in order to make them partners.

Table 1. Requirement matrix among the standards ISO 9001:2008, ISO 14001:2004, OHSAS 18001:2007 and NBR 16001:2012.
ISO 9001:2008 requirements | ISO 14001:2004 requirements | OHSAS 18001:2007 requirements | NBR 16001:2012 requirements
6.1 Provision of resources | 4.4.1 Resources, roles, responsibility and authority | 4.4.1 Resources, roles, responsibility, accountability and authority | 3.3.7 Resources, roles, responsibility, accountability and authority
6.2 Human resources | 4.4.2 Competence, training and awareness | 4.4.2 Competence, training and awareness | 3.4.1 Competence, training and awareness
6.2.1 General | 4.4.2 Competence, training and awareness | 4.4.2 Competence, training and awareness | 3.4.1 Competence, training and awareness
6.2.2 Competence, training and awareness | 4.4.2 Competence, training and awareness | 4.4.2 Competence, training and awareness | 3.4.1 Competence, training and awareness
6.3 Infrastructure; 6.4 Work environment | 4.4 Implementation and operation | 4.4 Implementation and operation | 3.4 Implementation and operation
7 Product realization | 4.4 Implementation and operation | 4.4 Implementation and operation | 3.4 Implementation and operation
7.1 Planning of product realization | 4.4.6 Operational control | 4.4.6 Operational control | 3.4 Implementation and operation

The proposal of the IMS approach to stakeholder management (IMS-STK) is to generalize the concept, whereby common requirements for different stakeholders can be translated into a unique (integrated) requirement that meets all of them simultaneously, without relying only on the establishment of a particular standard. This approach is advantageous: according to Morikawa and Morisson [22], the development of a new standard is a process that can take about 5 years until publication. In addition, changes are occurring very fast and new stakeholders can be identified; a standard can be published today and very soon require adjustments and corrections to fit the new scenario.

3. Commonalities among stakeholders and IMS traditional standards

Genaro [20] has critically analyzed all requirements of ISO 9001:2008, ISO 14001:2004, OHSAS 18001:2007 and NBR 16001:2012 and has listed all possible stakeholders interested in each requirement of those standards. This study identified a list of the most common stakeholders, such as top management, employees (including service providers and outsourced employees), customers, suppliers, press, government, shareholders, unions, regulatory agencies, non-governmental organizations, family, community, UNESCO, control agencies, lawyers, carriers, and distribution centers. Table 2 shows a summary spreadsheet containing stakeholders & requirements of the ISO 9001:2008, ISO 14001:2004, OHSAS 18001:2007 and NBR 16001:2012 standards. Table 3 provides a list of stakeholders having an interest in the organization's statement of policy for each management system (ISO 9001:2008, ISO 14001:2004, OHSAS 18001:2007 and NBR 16001:2012) and a compilation of common stakeholders for all systems using an integrated view.
Table 2. ISO 9001:2008, ISO 14001:2004, OHSAS 18001:2007 and NBR 16001:2012 requirements & stakeholders.

ISO 9001:2008 requirement identification and identified stakeholders:
6.1 Provision of resources: Top Management, employees, clients, suppliers, competitors, government, shareholders
6.2 Human resources: Top Management, employees, clients, suppliers, press, competitors, shareholders, community, unions
6.2.1 General: Top Management, employees, clients, suppliers, press, competitors, shareholders, community, unions
6.2.2 Competence, training and awareness: Top Management, employees, clients, suppliers, press, competitors, shareholders, community, unions
6.3 Infrastructure: Top Management, employees, clients, government, competitors, shareholders
6.4 Work environment; 7 Product realization; 7.1 Planning of product realization (see the Implementation and operation rows below)

ISO 14001:2004 requirement identification and identified stakeholders:
4.4.1 Resources, roles, responsibility and authority: Top Management, employees, clients, government, shareholders, competitors, regulatory agencies, unions
4.4.2 Competence, training and awareness: Top Management, clients, government, shareholders, employees, competitors, regulatory agencies, community, press, unions
4.4.2 Competence, training and awareness: Top Management, clients, government, shareholders, employees, competitors, regulatory agencies, community, press, unions
4.4.2 Competence, training and awareness: Top Management, clients, government, shareholders, employees, competitors, regulatory agencies, community, press, unions

OHSAS 18001:2007 requirement identification:
4.4.1 Resources, roles, responsibility, accountability and authority
4.4.2 Competence, training and awareness
4.4.2 Competence, training and awareness
4.4.2 Competence, training and awareness

4.4 Implementation and operation: Top Management, employees, clients, suppliers, regulatory agencies, community, government, press, competitors, shareholders, unions
4.4 Implementation and operation: Top Management, employees, clients, suppliers, press, competitors, government, shareholders, regulatory agencies, unions
4.4 Implementation and operation: Top Management, employees, clients, suppliers, regulatory agencies, community, government, press, competitors, shareholders, unions
4.4 Implementation and operation
OHSAS 18001:2007 identified stakeholders:
Top Management, employees, government, family, regulatory agencies, unions
Top Management, clients, government, shareholders, employees, competitors, regulatory agencies, community, press, family
Top Management, clients, government, shareholders, employees, competitors, regulatory agencies, community, press, family
Top Management, clients, government, shareholders, employees, competitors, regulatory agencies, community, press, family
Top Management, employees, regulatory agencies, family, government, unions

NBR 16001:2012 requirement identification and identified stakeholders:
3.3.7 Resources, roles, responsibility, accountability and authority: Top Management, employees, government, community, shareholders, regulatory agencies, unions, press, competitors
3.4.1 Competence, training and awareness: Top Management, clients, government, shareholders, employees, competitors, regulatory agencies, community, press
3.4.1 Competence, training and awareness: Top Management, clients, government, shareholders, employees, competitors, regulatory agencies, community, press
3.4.1 Competence, training and awareness: Top Management, clients, government, shareholders, employees, competitors, regulatory agencies, community, press
3.4 Implementation and operation: Top Management, employees, government, shareholders, unions
Top Management, employees, clients, suppliers, competitors, government, shareholders
Top Management, employees, clients, suppliers, competitors, government, shareholders
4.4.6 Operational control: Top Management, employees, regulatory agencies, family, government, shareholders, unions
4.4.6 Operational control: Top Management, employees, regulatory agencies, family, government, shareholders, unions
3.4 Implementation and operation: Top Management, employees, clients, suppliers, community, government, press, competitors, United Nations, shareholders, unions, regulatory agencies
3.4 Implementation and operation: Top Management, employees, clients, suppliers, community, government, press, competitors, United Nations, shareholders, unions, regulatory agencies

Table 3. Stakeholders interested in the Policy Management Statement.
QMS Policy stakeholders: Top Management, employees, clients, suppliers, press, competition, government, shareholders, regulatory agencies, unions.
EMS Policy stakeholders: Top Management, employees, clients, suppliers, press, competition, government, shareholders, regulatory agencies, unions, non-governmental organizations, community.
S&OH Policy stakeholders: Top Management, employees, clients, suppliers, press, competition, government, shareholders, regulatory agencies, unions, non-governmental organizations, community, families.
SR Policy stakeholders: Top Management, employees, clients, suppliers, press, competition, government, shareholders, regulatory agencies, unions, non-governmental organizations, community, families, United Nations (UN).
Stakeholders interested in the Integrated Policy: Top Management, employees, clients, suppliers, press, competition, government, shareholders, regulatory agencies, unions, non-governmental organizations, families, United Nations (UN).

Analyzing Table 3, it is clear that it is much easier to manage the group of stakeholders identified in this example using the integrated view than to manage the same group of stakeholders interested in the organizational policies singly.
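To make the integration mechanics of Table 3 concrete, the sketch below shows one way the integrated stakeholder list could be derived programmatically from the per-standard policy lists. It is a minimal illustration and not part of the original study; the set contents are abbreviated and the names are chosen only for this sketch.

// Minimal sketch (abbreviated, illustrative data): deriving an
// integrated stakeholder list from per-standard policy stakeholder sets.
var policyStakeholders = {
  QMS: ["top management", "employees", "clients", "suppliers", "unions"],
  EMS: ["top management", "employees", "clients", "community", "unions"],
  SOH: ["top management", "employees", "families", "unions"],
  SR:  ["top management", "employees", "families", "United Nations", "unions"]
};

// Union: every stakeholder that any of the management systems must address,
// managed once in the integrated view instead of four times.
var integrated = new Set();
Object.keys(policyStakeholders).forEach(function (std) {
  policyStakeholders[std].forEach(function (s) { integrated.add(s); });
});

// Intersection: the common core shared by all four systems, i.e. the
// stakeholders whose requirements can be translated into a single
// integrated requirement that satisfies every standard simultaneously.
var common = policyStakeholders.QMS.filter(function (s) {
  return ["EMS", "SOH", "SR"].every(function (std) {
    return policyStakeholders[std].indexOf(s) !== -1;
  });
});

console.log(Array.from(integrated)); // one list to manage, not four
console.log(common);                 // e.g. top management, employees, unions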
4. Constructing the argument

Asif et al. [9] declare that an IMS helps the organization to manage stakeholder requirements in a coordinated way in order to "build" the organizational processes, but it is important to check that the stakeholder requirements do not conflict with one another. Targeta et al. [23] agree that many organizations only apply the requirements described in standards, focusing on meeting stakeholders' expectations, mainly the clients'. Targeta et al. [23] alert that each client has different requirements with different levels of intensity, and conclude that the requirements described in the Quality, Environmental, Safety and Occupational Health and Social Responsibility Management Systems are not inducers of excellence. On the other side, many organizations worldwide have already recognized the value of constructing good relationships with stakeholders. According to Rocha and Goldschmidt [13], Bourne [15], Svendsen [21] and Porter [24], this behavior is a simple way to achieve competitive advantage. Porter [24] and Freeman [25] agree that stakeholder management obliges organizations to map their collection of stakeholders and keep a close relationship with them. After that, the organization starts to manage stakeholders, focusing on extracting their real needs and expectations.

Asif et al. [11] declare that modern organizational practices require that all stakeholders be considered during the planning, design and implementation of business processes, and a great difficulty that organizations face nowadays is recognizing that there are multiple stakeholders and that each of them has different expectations. Bourne [15] and Svendsen [21] declare that the success of the organization in satisfying all stakeholders lies in its ability to translate the needs and expectations identified, turning them into requirements for its processes. Freeman [25] reinforces the importance of having effective means to measure the satisfaction of stakeholders, because these practices are essential for the organization's survival. Bryson (2004), cited in Freeman [25], emphasizes the importance of performing continuous stakeholder analysis, because needs and expectations can change over time; adjustments may be necessary as stakeholder expectations change. Rocha and Goldschmidt [13] defend the importance of the organization always seeking to anticipate and exceed stakeholder needs.

The proposal for a new concept of integrated management system takes into account the organizational capacity to manage using an integrated approach. The fact that organizations are implementing the requirements described in standards does not ensure that they are really concerned about complying with stakeholder needs and expectations, because those requirements are the minimum that organizations must meet, and organizations will hardly endeavor to go beyond what is explicitly described in the standards. An aspect to be reinforced is that, during audit processes, the majority of organizations focus on the achievement of the minimum requirements described in standards, not trying to deepen their view of their stakeholders when building their management systems. The strategy used by the majority of organizations is to assure their accreditation, meeting the minimum requirements, and sell products and services with a quality seal.
5. Definition of an integrated management system as integrated stakeholder requirements management

Based on the ideas presented above, the following definition is proposed for the Integrated Management System approach to Stakeholder Management (IMS-STK): the Integrated Management System (IMS-STK) of an organization is a system composed of processes in a structured and strategic way, focused on the management of its stakeholders (internal and external) in order to translate their needs and expectations into requirements, transforming these requirements into inputs of its processes, and providing products and services aimed at the satisfaction of its stakeholders, possibly even exceeding it.

Organizations must be able to analyze their position in the market and the worldwide socioeconomic scenario, and to interact with their stakeholders and potential stakeholders. According to Juran [26], organizations must map their processes. In this paper, the idea is to map processes in an integrated way, so that organizational planning, monitoring, verification and corrective action will be integrated too. In a globalized world, where those who can see farther are able to remain in the market and attract more customers, Rocha and Goldschmidt [13] explain that strategic planning focused on stakeholder management can be an indispensable tool within organizations. Moreover, Porter [24] had already affirmed in 1985 that strategic mapping permits visualizing all competitors in the organizational scenario. Based on this new definition, it can be stated that it is not necessary to be accredited in standardized management systems to achieve excellence in terms of management. Organizations can translate the requirements of their stakeholders into inputs of their organizational processes, where the outputs of these processes have to be products and services that meet stakeholder expectations.

6. Conclusions

This paper explains that, using the stakeholder management approach, the organization can build an integrated management system based on stakeholder requirements that complement those described in standardized management systems. Targeta et al. [23] showed that accreditation processes are not enough to assure stakeholder satisfaction, while Gianni and Gotzamani [8] affirm that there are organizations abandoning their integrated management systems because a critical analysis was not performed to determine whether the standards chosen were aligned with the organization's business, or whether the standards were chosen only to follow a market trend. The new concept proposed in this paper helps organizations avoid this kind of mistake, motivating them to plan and to define the IMS scope beforehand, taking into account the way they are inserted into the market. From this point forward, organizations can identify their relevant stakeholders and consequently implement the integrated management of the requirements identified, not limited only to the integration of requirements established in standardized systems. In general, the new definition proposed in this paper enables a comprehensive view of the management processes inside organizations, making the management system more adherent to any new imposition that may arise in the future. Additionally, organizations can achieve excellence faster than their competition, ensuring their competitive advantage.
References

[1] J.P.T. Domingues, Sistemas de gestão integrados: desenvolvimento de um modelo para avaliação do nível de maturidade, PhD Thesis, Engenharia Industrial e de Sistemas, Universidade do Minho, Braga, Portugal, 2013, 288 pp.
[2] C.F. Poltronieri, Avaliação do grau de maturidade de sistemas de gestão integrados, MSc Dissertation, Engenharia de Produção, Escola de Engenharia de São Carlos (USP), São Carlos, Brazil, 2014, 118 pp.
[3] J.P. Cerqueira, Sistemas de gestão integrados – conceitos e aplicação, Qualitymark, Rio de Janeiro, 2006.
[4] G. Moraes, Elementos do sistema de gestão de SMSQRS – sistema de gestão integrada, Volume 2, Gerenciamento Verde Editora e Livraria Virtual, Rio de Janeiro, 2010.
[5] M. Bernardo, M. Casadesus, S. Karapetrovic and I. Heras, Management systems: integration degree. Empirical study, in: Proceedings of QMOD – Quality Management and Organizational Development, 11, Lund University & Linköping University, 2008.
[6] S. Karapetrovic and J. Jonker, Integration of Standardized Management Systems: Searching for a Recipe and Ingredients, Total Quality Management Magazine, Vol. 14, No. 4, 2003, pp. 451–459.
[7] P. Lopez-Fresno, Implementation of an integrated management system in an airline: a case study, The Total Quality Management Journal, Vol. 22, No. 6, 2010, pp. 629–647.
[8] M. Gianni and K. Gotzamani, Management systems integration: lessons from an abandonment case, Journal of Cleaner Production, 2015, No. 1, pp. 265–276.
[9] M. Asif, C. Searcy, A. Zutshi and O.A.M. Fisscher, An integrated management systems approach to corporate social responsibility, Journal of Cleaner Production, Vol. 56, 2013, pp. 7–17.
[10] T. Jørgensen, A. Remmen and M. Mellado, Integrated management systems – three different levels of integration, Journal of Cleaner Production, Vol. 14, No. 8, 2006, pp. 713–722.
[11] M. Asif, E. Joost de Bruijn, O. Fisscher and C. Searcy, Meta-management of integration of management systems, The TQM Journal, Vol. 22, No. 6, 2010, pp. 570–582.
[12] T. Sartori and A.D. Weise, Models of quality management applied to organizations seeking to innovation management, Independent Journal of Management & Production (IJM&P), Vol. 4, No. 1, 2013, pp. 55–70.
[13] T. Rocha and A. Goldschmidt, Gestão dos stakeholders – como gerenciar o relacionamento e a comunicação entre a empresa e seus públicos de interesse, Editora Saraiva, Rio de Janeiro, 2010.
[14] M.H. Trentim, Managing stakeholders as clients: sponsorship, partnership, leadership, and citizenship, Project Management Institute, Pennsylvania, 2013.
[15] L. Bourne, Stakeholder relationship management – a maturity model for organisational implementation, Gower Publishing, Farnham, 2009.
[16] R.J. Richardson, Pesquisa social: métodos e técnicas, Atlas, São Paulo, 1999.
[17] S.C. Vergara, Projetos e relatórios de pesquisa em administração, Editora Atlas, São Paulo, 1998.
[18] J.B.M. Ribeiro Neto, J.C. Tavares and S.C. Hoffmann, Sistemas de gestão integrados: qualidade, meio ambiente, responsabilidade social, segurança e saúde no trabalho, Senai SP, São Paulo, 2012.
[19] A.F.B. Ramos, Medição da maturidade em gestão de projetos de sistemas de gestão integrada: um estudo de caso na área de petróleo e energia, MSc Dissertation, Sistemas de Gestão, Universidade Federal Fluminense, Niterói, Brazil, 2009, 117 pp.
[20] A.F.S. Genaro, Proposta de um modelo de avaliação da capacidade e maturidade de sistemas de gestão integrada (STKM3) utilizando a abordagem da gestão de stakeholders, PhD Thesis, Engenharia e Tecnologia Espaciais / Engenharia e Gerenciamento de Sistemas Espaciais, Instituto Nacional de Pesquisas Espaciais, São José dos Campos, Brazil, 2014, 336 pp.
[21] A. Svendsen, The stakeholder strategy: profiting from collaborative business relationships, Berrett-Koehler Publishers, San Francisco, 1998.
[22] M. Morikawa and J. Morrison, Who develops ISO standards? A survey of participation in ISO's international standards development process, Pacific Institute for Studies in Development, Environment and Security, October 2004.
[23] S.B.J. Targeta, J.R. Nascimento, H.R.M. da Hora and H.G. Costa, Sistema integrado de gestão da qualidade: uma análise dos clientes versus requisitos das normas, in: Encontro Nacional de Engenharia de Produção – Desenvolvimento Sustentável e Responsabilidade Social, 32, Bento Gonçalves, RS, Brazil, 2012.
[24] M.E. Porter, Competitive advantage: creating and sustaining superior performance, with a new introduction, Free Press, New York, 1985.
[25] R.E. Freeman, J.S. Harrison, A.C. Wicks, B. Parmar and S. de Colle, Stakeholder theory – the state of the art, Cambridge University Press, Cambridge, 2010.
[26] J.M. Juran and J.A. De Feo, Juran's quality handbook – the complete guide to performance excellence, McGraw-Hill, New York, 2010.

Quality Problems in Complex Systems even Considering the Application of Quality Initiatives during Product Development

Cosimo R. BERTELLI 1 and Geilson LOUREIRO
INPE, São José dos Campos – São Paulo, Brazil
1 Corresponding Author, E-mail: cosimo17@terra.com.br

Abstract. This paper presents the importance of creating a new, lean process for identifying potential failures during the development of complex products. A lack of knowledge has been identified, both in the literature and in companies, about selecting the most appropriate quality tools to solve and/or prevent the potential problems that may appear during the prototype development and launch phases of complex products. Literature about quality tools is easily found, yet there is much questioning about which quality tools should be selected and how, where and when they should be applied. Based on this, this article aims to provide an understanding of the quality tools used during program development and to direct their application (Design for Six Sigma, Design FMEA, QFD, TRIZ, Robust Engineering, DFM and DFA). It is noticeable that even when quality tools are applied during all phases of complex products, failures still exist and, therefore, still cause many problems to companies, some of which can be lethal (for example at aerospace, automotive, metallurgical, medical and other companies) (CANCIGLIERI, OKIMURA, 2015) [1]. Besides all this, this paper provides evidence that something in addition to the application of quality tools should be done to guarantee the design robustness of complex products.
Two case studies provide evidence that performing only quality tool analyses such as Design FMEA, DFSS, DFM, DFA and others is not enough to achieve the objectives of quality, as well as of competitiveness, that large companies are looking for. A new, lean process is necessary to evaluate and identify failures in a robust and definitive manner. The new process is based on the concept of Lean Systems Engineering as well as on Lean Engineering Principles, and it proposes the creation of a process dual to the systems engineering process, mitigating the risks of failing to do what was planned in the product and its life cycle processes.

Keywords. Lean System Engineering, Lean Engineering Principles, mitigating the risks of failing.

Introduction

This work is designed to emphasize the importance of applying quality tools in the product development process (PDP) of complex products and to highlight the need for something additional to this process for quality improvement, since, even with the effective implementation of a quality plan, failures continue to be weak points of the product. The identification and analysis of failures should be done throughout the product development process, from the beginning of project conception until the design release phase, in order to make the product robust against failures while matching customer satisfaction in innovative products (D. CHANG, C.H. CHEN, 2014) [2]. It will be shown, through case study analysis, that quality problems in systems and/or complex products remain relevant even in a favorable scenario for analyzing and identifying potential failures through quality tools at the right time of program development, that is, from the beginning of the conceptual phase of the project. One of the major problems currently found in the PDP is defining the correct application of quality tools (which ones and when) for the solution and/or prevention of potential problems that may occur in the prototype phases and even after the launch of the product on the market (BERTELLI, 2006) [3]. Information about the quality tools is easily found in the literature, but there is a big gap in the guidance on their correct application, as well as on whether, after their application, the quality results match the objectives of the organization's stakeholders. According to Griffin (2010) [4], faults still exist even when all procedures were followed by the company. This paper shows, through case study analysis, that failures in systems and/or complex products remain relevant even with a favorable scenario: in both case studies that will be described, quality tools were applied from the beginning of the design phase. The purpose of this work is, at the same time, to illustrate the importance of applying quality tools and to emphasize the need to map (R.C. BECKETT, 2015) [5] a new process that supports this application, in order to make the development of systems and/or complex products robust against failures, since there are opportunities for improvement in the identification and analysis of potential failures during the development of complex products.

1. Problem Search

Currently there is a strong focus on developing and implementing the DFMEA (Design Failure Mode and Effects Analysis) [6] in various situations. ISO [7] and QS [8] standards show this direction.
However, it is important to verify and analyze whether the application of DFMEA is appropriate for the specific situation being analyzed. The identification and analysis of failures are often performed late in the product development phase, since this analysis is made, in most cases, when the architecture is already defined. There are also contrary situations, that is, the analysis and fault identification follow the appropriate process at the correct point of the development phase, but quality problems still affect product performance. It is recognized that failure is a common occurrence in projects and that a project's success is often a result of the reaction to faults (LOUTHAN, 2010) [9]. The effective application of DFMEA is recommended for the vast majority of cases, while in other scenarios it is necessary to combine it with other quality initiative tools; there are also situations where the application of DFMEA does not add much value to the development of that particular project (BERTELLI, 2006) [3]. It therefore becomes important to guide the implementation of quality tools for different project situations and to verify that their application has reached the organization's goals.

2. Definitions of Quality Tools

To understand the integrative character of the quality tools, the comprehension of each initiative is necessary. The tools identified as the most suitable and best known for application during the PDP, focusing especially on quality, are briefly presented below; they are not limited to those defined herein. More detailed information about these and other quality tool initiatives can be found in several references, including those mentioned in this work. The definitions of the quality tool initiatives given by their respective authors aim to be collaborative (BORSATO, PERUZZINI, 2015) [10] with industry and with the improvement of the world.

2.1. DFSS (Design For Six Sigma) [11]

The Design for Six Sigma (DFSS) method was spread and applied by Motorola from 1970 and by General Electric in the 1990s. Like the concept of quality itself, the DFSS method has several associated definitions; the following definition is the result of experience obtained in specific case studies. DFSS is defined as follows: "Design for Six Sigma is a method used in creating products and processes that reach a level of quality desired by the customer through the identification and optimization of critical parameters of the project." DFSS should be assimilated as a new way of working in the product development process. The goal of DFSS is to make it possible for the technical engineering area to reduce and optimize its costs and to design a product at Six Sigma levels, since all current business in the world is governed by the following equation:

PROFIT = REVENUE – COSTS (1)

As "PROFIT" is a matter of survival in the highly competitive market in which companies operate, and the variable "REVENUE" is no longer controlled by companies but by the volatile market, which imposes it in a subtle yet decisive way, the only variable over which companies can still exercise control to increase their profits is the "COST".
From the perspective of the cost variable, DFSS can be defined as a tool that seeks the ideal balance point in the project between consumer satisfaction and the development and production costs of the product, also called the "Consumer vs. Producer Risk". Drawing a comparison between Six Sigma and DFSS, specifically for product development in industry, there are five distinct stages in a project:
a. research,
b. development,
c. tests and prototypes,
d. production,
e. sales and after-sales.
Considering these steps, Six Sigma efforts are applied only in the last two steps (production, and sales and after-sales), in which both the product and the process already exist. Therefore, it can be said that defects are easy to see, but the costs to correct them are very high. DFSS, on the other hand, is applied in the first two steps, that is, research and development. As opposed to Six Sigma, defects are hard to see during DFSS application, but correction costs are low. Because defects are hard to find with DFSS, it is very important to apply other engineering methods to support the analysis, such as CAE simulation and prediction methods that anticipate project performance results and provide estimates of design behavior during its life cycle when no actual data yet exist. Carrying out this correlation is the great challenge of DFSS. DFSS is to be understood as a way to:
• identify critical functions affecting the quality level required by the customer;
• create specific engineering measurements for these functions;
• understand system-level functionality;
• identify the nominal requirements and their variation for project parameters;
• predict and optimize rather than correct;
• analyze "intelligently" the best quality tool to be used to optimize a project, resulting in a more reliable product for the end consumer.
Through DFSS application, projects are designed taking into consideration clear customer needs, translated into technical language through system requirement descriptions. DFSS also generates a more robust and effective development process, characterized by insensitivity to variation in the manufacturing process, in use by customers, or in any other possible change in the environment.

2.2. QFD (Quality Function Deployment) [12]

QFD was developed in Japan in 1966 by Dr. Yoji Akao, Professor of Industrial Engineering in Tokyo. In 1972, QFD was used in the Mitsubishi shipyards in Kobe (super tankers are products whose requirements are clearly defined by their users). In 1974, the Committee of the Foundation for QFD was created within the JSQC (Japanese Society for Quality Control). From 1977 to 1984, the use of QFD in the automotive industry began (Toyota). In 1983, QFD was introduced in the US by Dr. Akao through an article in Quality Progress magazine. In 1985, QFD was applied for the first time in the US, by Ford and General Electric. QFD is a tool that translates customer requirements into business requirements along the product development cycle, from the initial research activities until the distribution phase. Some principles of QFD follow:
• Customer focus is the key. Satisfied customers keep the business growing, and it is essential to have a deep understanding of their needs.
• Developing the product on a proactive basis is more effective than on a reactive basis.
• It is a methodology for teamwork (concurrent engineering), enabling the participation of more people with a greater degree of involvement and focus.
QFD is developed in four phases (or matrices):
• Phase (Matrix) 1: Product Planning. At this stage, customer requirements and company requirements are determined. Objectives: 1. identify customer requirements (Voice of the Customer); 2. determine overall product performance requirements; 3. determine goals for product requirements; 4. determine items for further study.
• Phase (Matrix) 2: Design Deployment. At this stage, company requirements and part characteristics are determined. Objectives: 1. select the best design concept; 2. identify critical parts; 3. identify critical characteristics of the parts; 4. determine goals for the characteristics of the parts.
• Phase (Matrix) 3: Process Planning. This stage relates part characteristics to process operations. Objectives: 1. determine the best combination of process and design; 2. identify the critical process parameters; 3. set goals for the process parameters; 4. select items for subsequent development.
• Phase (Matrix) 4: Production Planning. This phase relates process controls to requirement controls. Objective: translate the provisions of the previous phases into operating activities, so that everyone involved in the QFD process understands what needs to be controlled to meet the key points of the Voice of the Customer.
The intention of performing the deployment in matrices is to focus actions and tasks on what is really important. Benefits of QFD:
• reduction of development cycle time (30 to 50%),
• fewer problems in the implementation of the product,
• reduced costs from the beginning of the production phase,
• reduction of field problems,
• an in-house knowledge base and documentation,
• integration between functions.
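To make the Phase 1 (Product Planning) matrix concrete, the short sketch below shows how customer requirements, weighted by importance, can be related to technical requirements and ranked. The 9/3/1 relationship scale is the conventional house-of-quality convention rather than something stated in this paper, and all requirement names and numbers are invented for illustration.

```python
# Illustrative Phase 1 (Product Planning) computation: rank technical
# requirements by summing (customer importance x relationship strength).
# All data below are hypothetical.

customer_importance = {"easy to operate": 5, "long service life": 4, "quiet": 2}

# Relationship strengths on the conventional 9/3/1 (strong/medium/weak) scale.
relationships = {
    ("easy to operate", "control ergonomics score"): 9,
    ("easy to operate", "mean time between failures"): 1,
    ("long service life", "mean time between failures"): 9,
    ("quiet", "sound pressure level at 1 m"): 9,
}

technical_priority = {}
for (cust_req, tech_req), strength in relationships.items():
    weight = customer_importance[cust_req] * strength
    technical_priority[tech_req] = technical_priority.get(tech_req, 0) + weight

# The highest score marks the technical requirement most worth targeting first.
for tech_req, score in sorted(technical_priority.items(), key=lambda kv: -kv[1]):
    print(f"{tech_req}: {score}")
```

The same weighted-sum logic carries through Matrices 2 to 4, each time deploying the priorities of the previous phase one level further toward production.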
2.3. FMEA (Failure Mode and Effects Analysis) [6]

FMEA originated at NASA in 1960 and was included in Military Standard 1629 (an American military standard). Soon after, Ford adapted the theory and started using FMEA in the development of its automobile vehicles. FMEA is based on preventing potential field failures. Part of the study is the analysis of the RPN (Risk Priority Number) index, which is the result of multiplying three other indexes:
• severity,
• occurrence,
• detection.
These indexes vary numerically from 1 to 10, and the numerical value of the RPN index is purely subjective. The RPN index indicates that a preventive action needs to be taken so that a potential problem does not occur. The threshold value of this index varies among companies; there is no policy or procedure that determines its lowest value. Companies generally stipulate an RPN threshold equal to 125. This number is derived by multiplying the severity, occurrence and detection indexes: if those three indexes each equal 5 (the middle of the full range, 10 being the maximum), their product is 125. It is commonly adopted that when the RPN is equal to or higher than 125, a preventive action should be taken to prevent the potential problem from occurring; the goal is to reduce the index to a comfortable range below 125. The FMEA analysis applies to product design as well as to the manufacturing process; for this reason, the design-level analysis is called Design FMEA (DFMEA) and the process-level analysis is called Process FMEA (PFMEA).
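As a minimal illustration of the RPN logic just described, the sketch below multiplies the three 1-10 indexes and applies the commonly stipulated threshold of 125. The failure modes listed are invented placeholders (loosely echoing the case studies in Section 5), and the index values are arbitrary.

```python
# Minimal sketch of the RPN logic described above; data are illustrative.

ACTION_THRESHOLD = 125  # companies often stipulate RPN >= 125 as the trigger

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number: the product of the three 1-10 indexes."""
    for index in (severity, occurrence, detection):
        if not 1 <= index <= 10:
            raise ValueError("each index must be between 1 and 10")
    return severity * occurrence * detection

failure_modes = [
    ("connector detachment", 8, 4, 5),   # hypothetical severity/occurrence/detection
    ("lens water ingress", 6, 3, 4),
]
for name, s, o, d in failure_modes:
    value = rpn(s, o, d)
    flag = "preventive action required" if value >= ACTION_THRESHOLD else "monitor"
    print(f"{name}: RPN={value} -> {flag}")
```

Note how the subjectivity mentioned above enters through the index values themselves: two analysts scoring the same failure mode differently can place it on opposite sides of the threshold.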
2.4. Robust Engineering (Taguchi Methods) [13]

This method was developed by Genichi Taguchi from 1950 onwards. It is used for designing products and processes so that they suffer minimal impact from external factors such as manufacturing conditions, environment and use by the consumer. This is achieved using the principles of energy transformation to optimize the performance of the designated product, instead of attempting to control the undesirable symptoms or problems presented by it. The application of Robust Design makes it possible to:
• develop products and processes that behave consistently (reliability) under a wide range of conditions of use throughout their life cycle (durability);
• maximize robustness, improving the required function of the product by developing and increasing its insensitivity to factors that tend to degrade performance;
• develop or modify parameters of products and processes to achieve the desired performance at the lowest cost.
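For readers who want the quantitative core of this approach, the sketch below computes two standard Taguchi signal-to-noise (S/N) ratios, which Robust Engineering seeks to maximize. The formulas are the textbook ones (they are not given in this paper), and the measurement values are invented.

```python
# Standard Taguchi S/N ratios; a larger S/N indicates a more robust response.
import math

def sn_smaller_is_better(y):
    """S/N = -10 * log10(mean of y^2), for responses to be minimized."""
    return -10 * math.log10(sum(v * v for v in y) / len(y))

def sn_nominal_is_best(y):
    """S/N = 10 * log10(mean^2 / variance), for responses with a target value."""
    n = len(y)
    mean = sum(y) / n
    var = sum((v - mean) ** 2 for v in y) / (n - 1)
    return 10 * math.log10(mean * mean / var)

# Hypothetical response measured under deliberately varied noise conditions.
noise_runs = [2.1, 2.4, 1.9, 2.2]
print(f"smaller-is-better S/N: {sn_smaller_is_better(noise_runs):.2f} dB")
print(f"nominal-is-best  S/N: {sn_nominal_is_best(noise_runs):.2f} dB")
```

In a parameter-design experiment, each candidate setting of the control factors is scored this way across the noise runs, and the setting with the highest S/N is the most insensitive to the degrading factors listed above.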
TRIZ is defined next; it is relevant to note that, even though TRIZ is not a quality initiative tool, it has a very noticeable application with regard to technological innovation.

2.5. TRIZ (Theory of Inventive Problem Solving) [14]

TRIZ is a Russian theory developed by Genrich Altshuller for the development and creation of new ideas that can contribute to project improvement. TRIZ is based on:
• being a method for achieving innovation in a systematic manner;
• being a method to consciously help the growth of technological systems;
• a set of tools to eliminate engineering conflicts without making trade-offs;
• a way to dramatically increase knowledge and creativity;
• a way to share the experience of brilliant inventors of any time;
• the observation that the great majority of the basic problems faced by engineers today have been solved in the past, typically in a completely different industry and in a totally disconnected scenario, often using a different technology;
• the fact that, using it, there is no longer any need to wait for a "new inspiration" or to proceed by the known methods of "trial and error".
Basically, the TRIZ study provides:
• an effective way to explore an extensive knowledge base;
• coverage of numerous physical, chemical and geometrical effects, in tune with the experience of different industries and with elements of science and technology;
• an increase in the engineer's capacity for the rapid development of innovative solutions to their toughest technical problems.

2.6. DFM/A (Design for Manufacture / Assembly) [15]

Design oriented to manufacturing aims to integrate the planning of the manufacturing process with product development. The use of these techniques requires the systematic involvement of the product development and manufacturing process teams, which promotes efficient feedback of requirements to accommodate industrial needs during the conceptual phase of product development. This integration helps to reduce development time by eliminating the rework cycles usually introduced to facilitate the assembly line process, and the delays that may arise when current production resources are not considered during product development. These are the quality tools that are the focus of the next section, which addresses the oriented application of quality tools in product development through a comparative table.

3. Methodology

As initially described, a matrix "i, j" is shown below with a combination of alternatives that facilitates determining which quality tool should be applied for a given project configuration. All studies should be initiated during the conceptual design phase. The analysis of possible failures evaluated from this stage contributes to design improvement, reducing rework as well as structural costs for the organization. Examples of these costs are the hours spent developing a non-robust design and the additional budget invested in a design that will be changed because of wrong decisions made during the conceptual design phase. Just as it is important that all studies begin in the conceptual design phase, it is important that they be finalized before the final release of the project, that is, the final design format. Any changes arising from the failure analyses performed through DFMEA, DFA, DFSS and other studies should be implemented by the design release phase. The proper conduct of this procedure prevents project delays, increases the possibility of investment gains and increases the know-how for future projects.

3.1. Determination of the Oriented Application of Quality Tools in Product Development Through a Comparison Table

The proposal is based on the thesis "Determination of the Oriented Application of Quality Tools" (BERTELLI, 2006) [3]. It has been applied for more than eight years in a large engineering company, specifically in the product development area. As a result, it can be confirmed that continuously high rework reduction rates, as well as reduced program development times, are being achieved; fewer engineering changes and increased customer satisfaction with the final product are also observed. All the learning gained by experts over years of executing these activities helped to elaborate the following matrix (BERTELLI, 2006) [3], (F. ELGH, 2015) [16]. The first column identifies the key points that should be used as the basis for defining the best quality tools to be applied (DFSS, QFD, DFMEA, RE, TRIZ, DFM and DFA); an X marks each tool recommended for the situation in question.

Table 1. Matrix "i, j" for determining the Quality Tool. Columns: DFSS, QFD, DFMEA, RE, TRIZ, DFM, DFA. Rows (project situations):
• understand customer requirements;
• innovate technologically;
• use of a new technology that is locally known but applied at other companies by a supplier;
• use of a new technology applied locally on another platform with good performance (warranty data);
• use of a new technology applied locally on another platform with medium or low performance (warranty data);
• use of a new technology that is locally known;
• check manufacturability of a component;
• check assembly of a component/system;
• check interference and/or mounting conditions;
• check tubes, hoses, harnesses, routings, …;
• analyze functions of components/systems;
• analyze the function of failed components and/or systems;
• analyze safety items;
• improve system performance;
• solve problems with unknown root cause;
• solve problems with known root cause but without evidence of the effectiveness of the solution through confirmatory tests;
• analyze design changes;
• analyze application of resin;
• solve problems with known root cause and with the efficacy of the solution proven through confirmatory tests;
• analyze assembly tampering, sealing and manufacturability of metal parts (BIW);
• analyze functions of metallic hardware (BIW), for example breaks, resistance welding, cracks, sealing, …;
• analyze performance/results of new metal parts (BIW);
• analyze functions of plastic parts;
• analyze assembly/manufacturability of plastic parts;
• analyze re-use of parts/systems/components in different environments;
• analyze software configuration;
• analyze electronic circuits;
• analyze fabrics and/or foams.
Legend: BIW = Body in White (metal part without finishing).
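A matrix such as Table 1 can also be embedded directly in an engineering workflow. The sketch below shows one possible encoding as a lookup structure; the situation-to-tool marks shown are a small hypothetical excerpt for illustration and do not reproduce the full published matrix.

```python
# Sketch of using the "i, j" matrix as a lookup: project situations map to
# recommended primary quality tools. The marks here are an illustrative
# excerpt, not the complete table.

TOOLS = ("DFSS", "QFD", "DFMEA", "RE", "TRIZ", "DFM", "DFA")

MATRIX = {
    "understand customer requirements": {"QFD"},
    "innovate technologically": {"TRIZ"},
    "check manufacturability of component": {"DFM"},
    "check assembly of component/system": {"DFA"},
    "analyze safety items": {"DFMEA"},
    "improve system performance": {"DFSS", "RE"},
}

def recommend(situations):
    """Union of the primary tools marked for every situation that applies."""
    recommended = set()
    for situation in situations:
        recommended |= MATRIX.get(situation, set())
    return sorted(recommended, key=TOOLS.index)

print(recommend(["understand customer requirements", "improve system performance"]))
```

Encoding the matrix this way also makes the specialist's extensions for items not yet in the table (discussed in the next section) a simple matter of adding entries.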
4. The Relevance of the Comparative Table of Application

Since the beginning of the century, consumers have become more demanding: they know their rights better and analyze the alternatives more accurately, making decisions based on data about which product is more reliable. The elaboration and execution of a quality plan during product development adds value during the design, prototyping and post-launch phases. Such an analysis may be made, for example, when selecting an airline for travel, purchasing a TV set with better reception, image and sound, acquiring a more practical electrical appliance, or even selecting a more robust car, one highly immune to maintenance and breakdowns during and after the warranty period. Considering that the level of industrial competition is steadily increasing, a framework that makes the difference between the similar products offered by various competitors is extremely necessary. The table could be extended with an unlimited number of items. For items not included in the table, it is believed that an analysis by the specialist of the component or system, combined with an understanding of the table and of the quality tool initiatives, will assist in defining the best quality tool to be applied. The table analysis should be done considering three different product parameters: 1. focus on project/design; 2. focus on manufacture; 3. focus on assembly and disassembly (services). In most cases, the tools noted in the table can be considered primary applications; secondary tools may also be added to the analysis. Note that there are cases where the application of DFMEA can be omitted on account of the application of another tool; providing this evidence is one of the objectives of this article. It must be emphasized that a multi-functional team has to take part in all studies to ensure good quality of discussion and decision making.

5. Case Study

Many illustrative examples could be shown in this article as case studies. For better understanding, two case studies were selected taking into account the metrics of warranty claims and warranty cost. The term warranty cost should be understood as all the money invested to correct a given issue in a product that has already been industrialized, that is, in production.

5.1. Case Study #1

This case study concerns a headlamp, a component used in a given vehicle "X" (a part used in the automotive industry). During the development of a new vehicle, failure analysis and identification were performed by DFMEA. All analyses were evaluated and the DFMEA was completed. In addition to the DFMEA, other quality tools, such as DFA and DFM, were applied according to the definitions in Table 1.

5.2. Case Study #2

As a second case study, the DFMEA study of a fan, a part used in the automotive industry, is presented.
In this analysis, the environmental conditions of the fan inside the engine compartment were assessed, and improvement actions were taken during the product development phase of the specific vehicle. The work team members developed the critical failure analysis and, after several meetings, the failure analysis was concluded. The quality tools defined to be applied in this case were in accordance with Table 1, presented in Section 3.

5.3. Field data of the case studies

In case study #1, after the start of automobile production (A. KATZENBACH, 2015) [17], there were field complaints about water ingress inside the lens, causing lamps to burn out; furthermore, the headlamp had a poor appearance due to water infiltration. To fix the problem, it was necessary to exchange the headlamps, because the adhesives initially designed to meet the requirement that the headlamp lens prevent water penetration were incorrect. After changing the bonding adhesive, the problem was solved. Regarding the second case study, after the start of production there were records of fan malfunction causing damage and, in consequence, poor operation of the system; after replacing the part (fan), the system returned to work. In experimental tests carried out on the fan, the work team identified a failure in the attachment of the wire harness connector in the fan plug region: after some time, the connector started to detach, causing an increase in electrical current and, therefore, burning in the fan connector region. The problem was solved, eliminating the customer problem (D. CHANG, C.H. CHEN, 2014) [2] and, consequently, additional fault reports. When the DFMEA was reviewed, it turned out that the failure mode of connector detachment, which could cause a potential electrical problem, had not been identified, thus allowing the failure to occur. Examples such as these two repeat constantly in projects regardless of the branch of industry: aerospace, automotive, chemical, metallurgical and others.

6. Conclusion and final thoughts

As previously mentioned, the focus of this paper is the application of quality tools in the engineering design environment and the evidence that, even with the effective implementation of quality tool initiatives, design problems still occur and directly impact the organization's goals. It is a fact that the application of quality tool initiatives brings significant gains to the organization. However, the case studies show that even with the elaboration and application of a quality plan, the completion of DFMEA and other analyses, problems continue to exist that cause project redesign, rework, maintenance issues and increased warranty costs, and that damage the company's image and reputation. These issues confirm the need to propose a new lean process, built on the weak points of the traditional process and focused on strengthening the analysis. Product development time is becoming shorter and shorter, which requires that quality problems be prevented as early as possible, at the right time, proactively throughout the development process and right the first time. This must occur for companies to maintain their competitiveness, which is essential for their survival in the market.
It is extremely important to emphasize that the implementation of the quality plan remains essential; the purpose of this article is to show that something new must be developed to solve this dilemma. A new agile process (A. McLAY, 2014) [18] to analyze and identify failures, systems engineering concepts (BIAHMOU, 2015) [19] and the lean development of complex products [14] are the concepts used to create the new process. Future articles will focus on the proposal of a new, lean systems engineering process (OPPENHEIM, 2011) [20] for the identification and analysis of failures of complex products. The purpose is to create a process dual to the engineering process, mitigating the risks of failing to achieve what was planned in the product and its life cycle processes. This will be detailed in future articles.

References

[1] O. Canciglieri Jr. and M.L. Miyake Okimura, The Application of an Integrated Product Development Process to the Design of Medical Equipment, in: J. Stjepandić et al. (eds.): Concurrent Engineering in the 21st Century: Foundations, Developments and Challenges, Springer International Publishing, Cham, 2015, pp. 735–759.
[2] D. Chang and C.H. Chen, Understanding the Influence of Customers on Product Innovation, Int. J. Agile Systems and Management, Vol. 7, Nos. 3/4, 2014, pp. 348–364.
[3] C.R. Bertelli, Quality Tools Method for Application at Product Development, MSc Thesis, ITA (Aeronautics Technological Institute), Brazil, 2006.
[4] M.D. Griffin, How do we fix systems engineering?, 61st International Astronautical Congress, Prague, Czech Republic, 2010.
[5] R.C. Beckett, Functional system maps as boundary objects in complex system development, Int. J. Agile Systems and Management, Vol. 8, No. 1, 2015, pp. 53–69.
[6] FMEA, in: Juran's Quality Control Handbook, 4th Edition, McGraw-Hill.
[7] ISO 9000, handouts, ASQ (American Society for Quality).
[8] QS 9000, handout, Chrysler.
[9] M.R. Louthan, Overcome failure, Journal of Failure Analysis and Prevention, 2010.
[10] M. Borsato and M. Peruzzini, Collaborative Engineering, in: J. Stjepandić et al. (eds.): Concurrent Engineering in the 21st Century: Foundations, Developments and Challenges, Springer International Publishing, Cham, 2015, pp. 165–196.
[11] DFSS, handouts, ASQ (American Society for Quality).
[12] QFD, Qualiplus, ASI (American Supplier Institute).
[13] G. Taguchi, Robust Design Manual Workshop, ASI (American Supplier Institute).
[14] TRIZ, tutorials, ASQ (American Society for Quality).
[15] DFM/A, tutorials, ASQ (American Society for Quality).
[16] F. Elgh, Automated Engineer-to-Order Systems: A Task Oriented Approach to Enable Traceability of Design Rationale, Int. J. Agile Systems and Management, Vol. 7, Nos. 3/4, 2014, pp. 324–347.
[17] A. Katzenbach, Automotive, in: J. Stjepandić et al. (eds.): Concurrent Engineering in the 21st Century: Foundations, Developments and Challenges, Springer International Publishing, Cham, 2015, pp. 607–638.
[18] A. McLay, Re-reengineering the dream: agility as competitive adaptability, Int. J. Agile Systems and Management, Vol. 7, No. 2, 2014, pp. 101–115.
[19] A. Biahmou, Systems Engineering, in: J. Stjepandić et al. (eds.): Concurrent Engineering in the 21st Century: Foundations, Developments and Challenges, Springer International Publishing, Cham, 2015, pp. 221–254.
[20] B.W. Oppenheim, Lean for Systems Engineering with Lean Enablers for Systems Engineering, John Wiley & Sons, Hoboken, New Jersey, 2011.
Enhancing Robustness of Design Process in Individual Type of Production

Mitja VARL a,1, Jože TAVČAR b and Jože DUHOVNIK b
a Kolektor ETRA, d.o.o., Šlandrova ulica 10, 1231 Ljubljana, Slovenia
b University of Ljubljana, Faculty of Mechanical Engineering, Aškerčeva 6, SI-1000 Ljubljana, Slovenia
1 Corresponding Author, E-mail: mitja.varl@kolektor.com

Abstract. Robust engineering and many related engineering applications seek designs of products and processes that are insensitive to changes in the work environment as well as to variation of the components. In the individual type of production, the basic function of the product and the main design solutions are generally known, but each unique product has its own details that require an individual design approach and directly affect both the design process and the later production process. Such an industrial environment is very specific, so a robust design process plays a key role in the final value of the product. A robust process has built-in mechanisms to detect potential errors in time, to eliminate them and to initiate all the measures necessary to ensure that the same error does not occur again. The implementation methodology of these mechanisms is essential, as it should provide cost-effective and useful engineering solutions. The sample company is engaged in the development and production of large power transformers. Based on a systematic analysis of the current development and design process, we propose a multi-level, systematic approach for a complete renewal of system information and working methodology, where the reorganization of activities is anticipated to result in an increase of overall effectiveness. The paper presents the key preliminary findings and addresses how to analytically manage individual segments of the design process in order to achieve optimal conditions for an individualized design process. At the end, instructions for the implementation of improvements as well as recommendations for further activities are given. The final aim of the research is to implement the identified solutions in a real-world industrial environment, to obtain their approval and finally to establish a generalized model of support processes for individual production.

Keywords. Robust engineering, parametric design, lean methods, individual production, concurrent engineering

Introduction

Robustness means the reliable execution of the processes for which a technical system is designed and the optimal achievement of the planned objectives. People are involved in processes at different levels, whether as part of the implementation process, additional cooperation, control or supervision. Human involvement in every single part of the process increases the possibility of error. A robust process has built-in mechanisms to detect potential errors in time, to establish their definition, to limit them initially, to eliminate them in the next step and finally to trigger the necessary measures to ensure that the same error scenario does not occur again in the future.
The method of incorporating these mechanisms into the process is essential, as it should be cost-effective and useful from an engineering standpoint. The basis of a robust design process is adequate IT support, expressed in absolute control of the data related to individual products and in reliable retention of knowledge and experience. There are numerous PDM/PLM (Product Data Management / Product Lifecycle Management) solutions on the market, which can be adapted to the specific process requirements of an individual company by custom configuration. All the advantages provided by such information management systems are fully realized only through a sound combination of knowledge of the working methods, concurrent reorganization of existing work and a specific upgrade, for example an expert system. Such a system can, by continuously tracking the execution of the work process and applying appropriate corrective measures, supplement the basic system in a comprehensive manner. The design process in individual or small-scale production is characterized by the majority of design activity taking place inside the so-called golden loop of development. Individual production is usually tied to the adaptation of a basic design to an individual customer's requirements [1]. Occasionally, activities expand further into the inventive and exploratory loops, especially when the customer has specific product requirements whose fulfilment demands the realization of yet unknown solutions. Activities in these two loops can also be deliberately triggered by the company's management, primarily out of the need for constant development of the company's own products, deepening of knowledge, introduction of new working methods, and the like. The question we can ask at this point is how to analytically manage individual segments of the design process in order to achieve optimal design conditions (overall efficiency) in an industrial environment with an extremely high degree of individuality. Based on a review of the relevant scientific literature and its latest findings, the hypothesis of the research work has been set. Later on, a brief presentation of the chosen problem from a real industrial environment is given. The complex, wide and, in the literature, relatively poorly researched topic of robustness in individual production demands a comprehensive study of many disciplines among which direct or indirect logical correlations or dependencies exist. The recognized research field primarily consists of the following basic areas of expertise: engineering design methods, concurrent development, technical information systems, robust design, lean engineering, and the theory and aspects of the individual type of production. The introduction of concurrent engineering methods into an industrial environment should consider the type of production (individual/serial), product complexity and the level of design. The authors of [2] argue that numerous solutions are important for a successful implementation of concurrent engineering principles: early inclusion of customers and suppliers in the design process, adequate communication, working team formation, good process definition and sufficient IT support. They present levels of design (original design, innovative design, variation design and adaptive design). This article concentrates only on the last two levels, where the search for a new technical shape is the main goal. Every project has a well-defined goal and is usually realized for a known customer [3].
Mastery of the following fields is mandatory: team work, project management, time planning techniques, concurrent engineering, and the development of IT and communication channels [3], [4], [5]. The success of project management is based on efficient methodological and IT support. In the contemporary industrial environment, Product Lifecycle Management (PLM) and Product Data Management (PDM) systems have become necessary tools that enable companies to deal with global competitors, product individualization, shorter product lifecycles and increasing product complexity [6]. The design of high-quality products or processes at a competitive price represents a technical and economic challenge to engineers; the robust design method represents a systematic and efficient way to reach this goal [7], [8]. Lean engineering originates from the Toyota Product Development System (TPDS). The authors of [9] argue that lean principles are much harder to implement in an individual production environment than in serial production, where there are almost no changes during the production process; they present a concept of modularity that allows time-efficient realization of engineering changes. The authors of [10] claim that there are considerable differences between the value-added activities that characterize lean production on the one hand and lean development on the other. A low-variability process that is optimal for the production environment should therefore not be the final aim for the development operation: unlike the production process, development creates value-added work with new operations. Taking rational risks is a main component of effective research activity [11].

1. Presentation of a real case

The design of a new transformer belongs to the adaptive type of design, as it contains all the distinctive features of its characteristic activities. The basic working principles as well as the peripheral functional requirements are fully known, and the design model is well defined. Every time, the design begins from the same baseline, which is the selection of an appropriate pre-made parametric 3D construction. The design and functional characteristics of individual assemblies are known in advance. The parametric models consist of smart subassemblies and parts, whose design is well considered and based on the experience and knowledge of the company, as well as of a number of standard components. The essence of the problem for any new construction is thus the search for a new technical shape that optimally meets the individual requirements of each specific client. Despite well-structured and content-rich base parametric constructions, each new individual contract demands the modification of numerous details, which make each final product unique. The individualization process includes parametric changes of pre-prepared components as well as a certain amount of completely new design. Both processes are affected by the technological constraints of production and by the characteristics and rules of detailed design. The degree of complexity of the product has a significant influence on the product design process; optimization of the individual stages of the process must be carried out in accordance with the type and complexity of the product's manufacturing process. A power transformer is a highly complex product (Figures 1 and 2).
The complexity of the design is a reflection of the individual type of production, where each product is essentially a prototype with a certain degree of engineering uncertainty. The complexity of the production process is the result of extremely strict production protocols that ensure a high-quality product (low losses and low noise) with a long lifecycle (25 to 30 years). This complexity is driven by a large number of possible variations, which is a consequence of being a global manufacturer: customers from all over Europe have different requirements and expect the realization of different technical solutions. A further complexity, related to the large number of components, requires adequate logistical and IT support.

Figures 1 and 2. Example of two power transformers where the same functionality is realized with two very different shape models.

The characteristic phases of design are: planning, the conceptual phase, design of parts and modules, detailed design, testing with the application of improvements and, finally, production. The product is defined by five key parameters: function, structure, shape, material and architecture. The interaction between these parameters is carried out within the so-called golden loop of the design process [2].

1.1. Definition of robustness for large power transformers

On the basis of the scientific literature review, the term robustness can be defined for the individual production of large power transformers. Robustness can be roughly divided into the concept of robust design of the transformer and the concept of a robust production process. In practice, it appears that the biggest part of the company's financial losses is generated through errors in the design process. The mechanical design process is the one on which the progress of all further activities strongly depends. A high degree of product individuality has a major impact on the potentially reduced robustness of the design process. Individuality itself is essential for the company, because complete accommodation of specific customer needs represents an important competitive advantage. On the other hand, each new project requires new, individual project (technical) documentation. The human factor at this point is of great importance, as it directly affects the final number of detected or undetected errors. The core of the future research will thus be a search for alternatives for the structural renovation of the design process for individualized products, where the target function is the elimination of the human factor, within the limits, of course, of everyday engineering usability and economic viability.

2. Summary of the research and plan of further activities

A thorough literature review revealed that very little research has been done in the field of development and optimization of the individual type of industry, with only insignificant examples of real-life solutions. Most of the research work, methodologies and knowledge about the optimization and development of design environments comes from mass industry, mostly automotive. In theory, the use of advanced design techniques and IT solutions simplifies the development of the product. In a real engineering environment, this is only true if the different software tools, databases and relevant processes are linked together into a coherent whole.
The fragmentation of the working environment and unadjusted working operations lead to cost and time inefficiencies [6], [12]. Many references claim that companies with an established PLM system are more competitive on a global scale, manage corrective measures after mistakes better, and reach a better time-to-market for individualized products [13], [14]. Statistically, about 80% of the product price is already defined in its design phase [12], [13]. Nowadays companies face intense and increasing pressure to reduce costs, establish a shorter time-to-market and increase the added value of products by investing in their development. That pressure has led to an expansion of activities related to the development and improvement of production processes [10]. Highly individual production requires a high degree of flexibility and timely response to customer needs; it is characterized by very low levels of process stability and standardization. The principles of lean manufacturing can easily be used in mass production, while an increasing level of individuality and earlier involvement of the customer in the design stage of the product make the direct application of lean process principles considerably more difficult [9]. Accordingly, good IT solutions play a key role in the company's operations [15]. Appropriate solutions thus lie in the development of the IT environment (development of an expert system, combining existing software tools, potential introduction of PDM and PLM systems), in the introduction of universality and uniformity of working procedures [16], which are characteristically similar from project to project (increasing reliability via a system of check-lists), in an in-depth analysis of the development and design process [16], in the elimination of redundant tasks and in the introduction of fast, robust, real-time feedback loops to verify partial and overall efficiency [11].

2.1. The vision of the design process reorganization in the sample company (case study)

The vision of the reorganization of key design activities is presented here for the case of the magnetic circuit of the transformer (Figure 3). The basis of robust design is an advanced parametric 3D model with all planned functionalities included. For this purpose, we completely redefined the existing process of 3D modeling, which represents the core of the design process. The key attribute of the innovative parametric 3D assembly, which represents the baseline of each new project, is the division of import data into two main groups. In one group are the features that change from project to project and are directly linked to the list of input data (parameters). In the other are the features that depend on those in the first group, so that a mathematical relationship (relation) can be written between them. The set of independent variables was arranged by chapters and collected in one place as input data. The set of dependent variables was likewise arranged and collected elsewhere as a set of mathematical expressions. The whole package of these mathematical equations represents the so-called brain of the innovative advanced parametric start assembly.
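A minimal sketch of this two-group scheme is given below: independent parameters are collected as input data, while dependent features are expressed as relations that are re-evaluated on regeneration. All parameter names, values and relations are invented placeholders for illustration, not the company's actual model.

```python
# Sketch of the two-group parameter scheme: inputs vs. derived relations.
# Names, values and formulas are hypothetical.

# Group 1: independent variables, collected in one place as input data.
inputs = {"core_diameter": 640.0, "window_height": 1800.0, "limb_pitch": 1100.0}

# Group 2: dependent variables, written as relations over the inputs
# (the "brain" of the start assembly).
relations = {
    "yoke_length": lambda p: 2 * p["limb_pitch"] + p["core_diameter"],
    "clamp_height": lambda p: p["window_height"] + 150.0,
    "tie_rod_length": lambda p: p["window_height"] + 2 * 150.0,
}

def regenerate(params):
    """Resolve every relation against the current inputs, as a parametric
    3D modeler would during regeneration of the start assembly."""
    return {name: rule(params) for name, rule in relations.items()}

print(regenerate(inputs))
# A new project only changes the input list; the dependent geometry follows.
inputs["window_height"] = 2200.0
print(regenerate(inputs))
```

The design value of separating the two groups is exactly what the second call demonstrates: a new contract touches only the input list, while everything downstream is recomputed consistently and without manual intervention.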
The import of the input data (e.g. manually or through an expert system), followed by the resolution of the whole package of mathematical expressions, which runs in the background of the regeneration process of the 3D model, leads to extremely rapid adjustment of the initial construction to the actual customer demands. It has turned out that the adjustment of the 3D structure of the magnetic circuit can be highly automated, since the import of inputs, without the designers' interference, adjusts around 80%-90% of the starting construction. Manual adjustment is needed only for those components for which a dependent relationship with the input parameters cannot be set because of their high variability, or for components exclusively defined by the customer, which, as such, also individualize the product. The time savings resulting from the introduced improvements are considerable. Thanks to the adjustment of the starting 3D model, the design work was reduced on average by about 50% and the preparation of the corresponding technical documentation by about 25%; the total time savings for the whole operation amount to about one third of the time required before the optimization. At the same time, we have perceived a significant increase in the quality of the manufactured products: the number of errors is reduced significantly, especially due to the elimination of the human factor in the initial stage of parametric adjustment.

Figure 3. The active part of the transformer prior to entering the drying oven.

Figures 4 and 5 present a practical demonstration of the achieved results. Figure 4 (left) shows the magnetic circuit of a transformer with a rated power of 10 MVA; the weight of the structure in the figure is 7.3 tonnes. Figure 5 (right) shows a magnetic circuit with a rated power of 100 MVA, with a structure weight of 56.0 tonnes. For the design of both, the same starting 3D layout model was used. After importing the input parameters, regeneration of the model is performed. The innovative 3D parametric layout model provides a very high share of components that adapt already in the first iteration. Each outcome was obtained with approximately 30 minutes of work, which is impressive.

Figures 4 and 5. Fully parametric 3D models of the magnetic circuit, constructed in the computer-aided design software PTC Creo.

With the advanced 3D model, we set a solid foundation for the further steps, which focus on managing data and extend to the vision of a complete methodological renewal. In the next step, a detailed analysis of the data flow within the development and design process was made using an IDEF0 diagram. The analysis revealed the facts about the flow of information, which represents the central axis of the process. It confirmed the assumption of excessive fragmentation of the information field, which is a result of software fragmentation and adversely affects:
1. the time efficiency of the process,
2. the general reliability of the process,
3. the robustness of the process in terms of error management,
4. the optimality of the final structure, attained through an only one-way flow of information and the absence of iterative loops.
The analysis conducted on the example of the magnetic circuit of the transformer shows that a single realization of the process requires 12 different inputs (colors in Figure 6) from 9 different sources (numbers in Figure 6), and generates 8 different outputs into 7 different sinks; the entire process has two levels. The first phase serves the completion of the order of sheet metal for the magnetic core; the second phase, which is the main one, focuses on the actual design of the magnetic circuit.
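One way to make such an IDEF0 finding operational is to score an activity's fragmentation directly from the flow model. The sketch below does this for hypothetical stand-in data whose counts match those reported above (12 inputs from 9 sources, 8 outputs into 7 sinks); the metric itself is our illustrative choice, not one defined in the paper.

```python
# Hypothetical fragmentation score for a process activity: distinct external
# endpoints per flow. A value near 1.0 means almost every flow comes from or
# goes to a different system, i.e., a highly fragmented information field.
from collections import namedtuple

Flow = namedtuple("Flow", "name endpoint")  # endpoint = source or sink system

inputs = [Flow(f"input_{i}", f"source_{i % 9}") for i in range(12)]
outputs = [Flow(f"output_{i}", f"sink_{i % 7}") for i in range(8)]

def fragmentation(flows):
    """Distinct endpoints divided by number of flows."""
    return len({f.endpoint for f in flows}) / len(flows)

print(f"input fragmentation:  {fragmentation(inputs):.2f}")   # 9/12 = 0.75
print(f"output fragmentation: {fragmentation(outputs):.2f}")  # 7/8  = 0.88
```

Tracking such a score before and after each renovation step gives a simple quantitative check that the data paths are actually being consolidated.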
The first phase serves to complete the order of sheet metal for the magnetic core. The second phase, which is the main one, focuses on the actual design of the magnetic circuit.

The vision of the design process renovation is based on applying the main principles of concurrent engineering and lean methods, and on increasing system robustness. The presentation is limited to the case of the magnetic circuit for clarity. The proposed renovation is based on a gradual reduction of the data paths that are necessary for the completion of the process. On the one hand, we intend to achieve this with radical changes to the design environment; on the other, we plan to realize some crucial changes in the methodology and organization of work as well. The first measure is to establish a direct and functional link between the electrical designers and procurement, which would allow the first level of the design process to be abandoned. An appropriate software extension would provide the proposed functionality. Such a measure would reduce the workload of the construction department and contribute to greater robustness of the process, as the realization of any subsequent change would be directly linked to the program and no longer in the domain of individuals.

Figure 6. The current design process of the magnetic circuit with emphasis on dataflow.

The next step (Figure 7) is to optimize the flow of the input data in the main activity of the process, which is the construction of the magnetic circuit. At this point, the development and establishment of an expert system seem logical. Essentially, such a system is an intelligent computer system that uses knowledge and inference procedures to solve problems in a narrow field of expertise. The central part of the expert system is the knowledge base, which contains facts and rules – the so-called inference mechanisms that describe the relationships between those facts. The system is supplemented by two interfaces: an interface for knowledge storing and a user interface. The function of the expert system in this particular case would be to link the currently scattered information sources that are characteristic of each new project in the first phase, and later to process that information logically.

Figure 7. Renovated magnetic circuit design process.

The expert system would also supervise the generation of data, its logical processing and its final implementation in the target field. In addition to basic information distribution, it would also offer a degree of automated decision making on appropriate design solutions. Besides that, possibilities for concurrent iterative optimization loops would arise. The principal benefit of an expert system with a parametrical background would be an increase in the robustness of the entire work process, not just in setting up the initial 3D model.
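The knowledge base plus inference mechanism described above can be made concrete with a minimal forward-chaining sketch. The facts and rules below are invented placeholders, not the company's actual design knowledge; they only show how rules fire on stored facts until no new conclusions appear.

```python
# Minimal sketch of an expert-system core: a knowledge base of facts and
# if-then rules, plus a forward-chaining inference loop. The facts and
# rules are invented placeholders, not real transformer design knowledge.

facts = {"rated_power_mva": 100, "cooling": "ONAF"}

# Each rule: (condition over the facts, conclusions asserted when it fires).
rules = [
    (lambda f: f["rated_power_mva"] > 63,
     {"core_type": "five-leg"}),
    (lambda f: f.get("core_type") == "five-leg",
     {"clamping": "heavy", "drawing_template": "T5"}),
    (lambda f: f["cooling"] == "ONAF",
     {"radiator_bank": True}),
]

def infer(facts, rules):
    """Fire rules repeatedly until no new facts appear (forward chaining)."""
    changed = True
    while changed:
        changed = False
        for condition, conclusions in rules:
            try:
                applicable = condition(facts)
            except KeyError:          # rule refers to an unknown fact
                applicable = False
            if applicable and not conclusions.items() <= facts.items():
                facts.update(conclusions)
                changed = True
    return facts

print(infer(facts, rules))
```

Note how the second rule only becomes applicable after the first one has fired; chaining of this kind is what would let the system derive design decisions from a small set of imported project facts.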
After the optimization of the input data branch, the second stage of the renovation would follow, focusing on the output side. A systematic software solution for data management (a PDM system – PTC Windchill) would connect all outputs associated with the parametric 3D modeler PTC Creo. The main advantages of such systems are overall control over the output information and all adjacent files, advanced allocation of rights, a unique error management protocol and a functional link between all participants in the project without individual intervention.

2.2. Idealized design and development process

The idealized design and development process represents a long-term goal, which is pursued as part of the presented strategy (Figure 8). It requires a change in the work philosophy, since with the introduction of exclusively 3D design the discussed process enters a completely new dimension. The expert system at this stage becomes the core tool of the whole process, where the parametric 3D layout is gradually developed from conceptual solutions into a detailed study and subsequently into a detailed real construction. A comprehensive approach to problem solving that involves intensive simultaneous participation of the key members of the project team is essential – in this particular case electrical and mechanical designers and development engineers. Due to the fully parametric environment, a simple and effective connection between the modeling tools and structural strength analysis is established, wherein functional feedback loops allow the execution of multiple iterations of the actual mass optimization of the structure. The number of process inputs and outputs is at a minimum in this case. We estimate that with those measures, the data field necessary for the realization of the entire activity will be reduced by about 50% on both the input and the output side. Estimated additional time savings are between 15% and 20%, consisting of the abolished times of unnecessary (redundant) tasks, tasks that are replaced by automatic software operations, the elimination of time spent on checking steps performed automatically, as well as the time devoted to any repetition of a step due to an error. The reliability of the process will thus asymptotically approach a quotient of 1, even though it is at a very high level already (only a few minor errors per year).

Figure 8. Idealized magnetic circuit design process.

3. Conclusions

The article presents an example of the initial phase of a comprehensive renovation of the development and design process in a highly individual production environment. In the first part of the paper the theoretical background is explained, together with a thorough review of the related scientific literature. After that, a real industrial problem is presented. With reference to the detailed analysis of the current situation, we were able to develop a sound conceptual framework for the systematic renovation of a development and design process. The main objective is to increase its robustness. In practice, this means finding the most reliable and the least time-consuming way to design a product that fully meets the customer's requirements and preferences. Initial activities with already implemented improvements are presented. In the final part of the paper, plans and guidelines for future work are explained and the vision of the final system reform is set forth. For all planned steps, the expected results we intend to achieve are stated. In this context, it should be noted that a universal approach for introducing this kind of change does not exist. The introduction of lean engineering principles must be based on the individual features of each particular company. It is also necessary to take into account the fact that the approaches to introducing lean engineering principles in the production environment differ significantly from those in the development environment [11].
Furthermore, a number of encouraging results have been recorded in various adjacent fields, for example in the introduction of project teams and project management [10], the renovation of companies' IT systems with a focus on the introduction of lean principles [15], and the transition from sequential to concurrent development, which is primarily achieved via the introduction of lean principles [16].

A summary of the featured research and the presented plans for further work demonstrate that the presented issue is complex, and its professional breadth requires the cooperation of the entire company. This is also confirmed by the number of open activities that are currently being implemented in the company. Further research will focus on the design process and its directly related activities. The first objective is to produce a representative overview of the development and design process for the transformer using the value stream mapping method. Based on this analysis, critical moments that generate bottlenecks in the work process or cause redundant tasks will be identified. A key challenge remains the integration of the various software tools into an integral information system. We anticipate that appropriate measures will help us to overcome the previously identified deficiencies. The ultimate goal is to establish a renovated model of the design process that complies with the principles of individual production and with the principles of lean and concurrent development and that will, with its development into a generalized form, represent a universal solution for this type of production.

References

[1] J. Tavčar, J. Duhovnik, Engineering change management in individual and mass production, Robotics and Computer-Integrated Manufacturing, 21 (3), pp. 205–215, 2005.
[2] J. Duhovnik, J. Tavčar, Concurrent engineering in machinery, in: J. Stjepandić et al. (eds.): Concurrent Engineering in the 21st Century: Foundations, Developments and Challenges, Springer International Publishing, Cham, pp. 639–670, 2015.
[3] J. Kušar, L. Bradeško, J. Duhovnik, M. Starbek, Project management of product development, Strojniški vestnik – Journal of Mechanical Engineering, 54 (9), pp. 588–606, 2008.
[4] L. Rihar, J. Kušar, J. Duhovnik, M. Starbek, Teamwork as a precondition for simultaneous product realization, Concurrent Engineering: Research and Applications, 18 (4), pp. 261–273, 2010.
[5] L. Rihar, J. Kušar, S. Gorenc, M. Starbek, Teamwork in the simultaneous product realization, Strojniški vestnik – Journal of Mechanical Engineering, 58 (9), pp. 534–544, 2012.
[6] J. Stark, Product Lifecycle Management – 21st Century Paradigm for Product Realization, Springer-Verlag, London, 2005.
[7] S. P. Jones, George Box and robust design, Applied Stochastic Models in Business and Industry, 30, pp. 46–52, 2013.
[8] M. S. Phadke, Quality Engineering Using Robust Design, AT&T Bell Laboratories, Prentice Hall International, 1989.
[9] B. Stump, F. Badurdeen, Integrating lean and other strategies for mass customization manufacturing: a case study, Journal of Intelligent Manufacturing, 23, pp. 109–124, 2012.
[10] G. Letens, J. A. Farris, E. M. Van Aken, A multilevel framework for lean product development system design, Engineering Management Journal, 23 (1), pp. 69–85, 2011.
[11] D. Reinertsen, L. Shaeffer, Making R&D lean, Research Technology Management, 48 (4), pp. 51–57, 2005.
[12] A. Fathallah, J. Stal-Le Cardinal, J. L. Ermine, J. C. Bocquet, Enterprise modelling: building a product lifecycle management model as a component of the integrated vision of the enterprise, International Journal on Interactive Design and Manufacturing, 4, pp. 201–209, 2010.
[13] W. Liu, Y. Zeng, Conceptual modelling of design chain management towards product lifecycle management, in: S. Y. Chou et al. (eds.): Global Perspective for Competitive Enterprise, Economy and Ecology, Proceedings of the 16th ISPE International Conference on Concurrent Engineering, Springer, London, pp. 137–148, 2009.
[14] S. Rogalski, Factory design and process optimization with flexibility measurements in industrial production, International Journal of Production Research, 50 (21), pp. 6060–6071, 2012.
[15] J. Riezebos, W. Klingenberg, Advancing lean manufacturing, the role of IT, Computers in Industry, 60, pp. 235–236, 2009.
[16] B. P. Nepal, O. P. Yadav, R. Solanki, Improving the NPD process by applying lean principles: a case study, Engineering Management Journal, 23 (3), pp. 65–81, 2011.

doi:10.3233/978-1-61499-544-9-62

Using Ontology-Based Patent Informatics to Describe the Intellectual Property Portfolio of an E-Commerce Order Fulfillment Process

Abby P.T. Hsu a,1, Charles V. Trappey b, Amy J.C. Trappey a
a Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Taiwan
b Department of Management Science, National Chiao Tung University, Taiwan

Abstract. Electronic commerce (EC) is the process of selling and buying goods or services through an online platform used for conducting the necessary business communications and transactions between sellers and buyers over the Internet. EC companies sell products online with an emphasis on running the entire supply chain process efficiently. The business processes that enterprises use to conduct e-commerce business are quite valuable and can be treated as intellectual properties (IPs). Business method patents provide inventors and enterprises with protection for their unique business processes. The United States provides business method patent owners an exclusive IP right for 20 years. A good quality business method patent is considered a powerful and effective tool to generate revenue and bar potential competitors from duplicating the practices. Patent analysis can assist companies in evaluating their business strategies or redesigning their business processes. Grouping patent documents and defining a domain ontology helps companies describe technology trends and innovations. This research uses Amazon's business processes as a case example to conduct business method patent analysis, particularly considering order fulfillment as a key method to manage inventory and purchase orders. An EC ontology schema is constructed based on the key EC business processes and key-phrase extraction from the patents. By understanding Amazon's patents and their relationships to the business process, other EC enterprises can examine their own patents' strategic strengths and weaknesses. In addition, they can ensure that their business processes do not infringe upon existing EC patents.

Keywords. patent, ontology, business method, e-commerce

1. Introduction

The Internet has offered a wealth of new business opportunities.
One of the greatest opportunities is the electronic commerce (EC) application. E-commerce is the process of selling and buying goods or services through an online platform for the necessary business communications and transactions between sellers and buyers over the Internet. According to Internet Retailer [1], as shown in Figure 1, the largest U.S. online retailer in 2013 was Amazon.com, which was $49.6 billion larger than the second-largest online retailer, Apple. Successful EC companies sell products while emphasizing their supply chain process efficiency. Order fulfillment is a core step of the EC process. Consumer-direct e-commerce is compelling to customers for the online shopping experience, on-time delivery, fewer fulfillment errors, extra services, and convenience. These are services that provide value to customers [2]. Therefore, the processes that enterprises use for building an e-commerce business are valuable, especially the patented business methods. Business method patents issued in the United States provide an e-commerce firm with an exclusive right for 20 years. A high quality business method patent is a powerful and effective asset used to generate revenue and stay competitive. In view of the proliferation of Internet business method patents, e-commerce enterprises need to continuously evaluate and implement their patent strategy [3].

1 Corresponding Author, E-mail: s103034529@m103.nthu.edu.tw.

Figure 1. Top 10 largest U.S. online retailers in 2013.

This research uses Amazon.com as a case to analyze business method patents and construct an ontology of its key business processes. The scope of the research focuses on EC order fulfillment, including inventory management and outbound processes.

2. Literature review

This section introduces the definition of business method patents and the existing approaches in patent analysis and key phrase extraction.

2.1. Business method patent

Business method patents were developed to provide inventors and companies with protection for their new products, new software or new business processes. However, business methods are abstract descriptions of how companies input their resources and transform these inputs into value-added outputs. According to the United States Patent and Trademark Office (USPTO), the current US classification of business method patents is Class 705, which is defined as "Data Processing: financial, business practice, management, or cost and price determination." Meurer [4] classified business methods into two categories: administrative methods and customer service methods. Administrative methods are back-office methods that increase productivity or reduce organizational or production costs in a firm. Customer service methods yield services that are consumed by customers, or methods related to pricing, advertising, or other marketing concerns. Chang et al. [5], using cluster analysis, divided basic business method patents into three groups: Marketing, Data Security and second-generation Data Security. The Marketing group contains the most important technology for marketing business methods, such as coupons, promotion programs, POS, reservations, check-in and booking. Data Security protects transaction security and increases the customer's trust in e-commerce. Second-generation Data Security is a continuation that accommodates new technologies.
In conclusion, no matter how business methods are categorized, they are all classified in Class 705. Therefore, this research collects business method patents under Class 705 from the USPTO database and analyzes the case of Amazon.com.

2.2. Patent analysis

Patent analysis is a tool to assist companies in determining their business strategies or redesigning their business processes. Patent trend analysis indicates the growth pattern of a technology, the technological shifts that are occurring, investment opportunities in acquisitions and divestitures, and R&D planning for new product development [6]. According to Tseng et al. [7], a typical patent analysis scenario has seven processes: defining the scope of the analysis task, searching, segmenting and normalizing structured and unstructured parts, abstracting and extracting the key phrases, clustering the patents based on extracted attributes, visualizing the results, and making a suggestion. Each process has its own application, and the outcome of the patent analysis can be visualized and interpreted in different ways. For example, clustering methods group patent documents by identifying key phrases and defining a domain ontology, which helps describe technology trends, processes and innovations [8]. Jun et al. [9] used text mining and K-medoids to cluster patent documents for technology forecasting. In this research, the patent analysis focuses on the order fulfillment process and the corresponding Amazon patents, visualizes the patent distribution by constructing an ontology of the process, and draws a conclusion based on the contribution of the key processes.

2.3. Key phrase extraction for ontology schema

Key phrase extraction is considered an important step prior to patent ontology construction. Patent documents usually consist of lengthy written explanations, which makes it hard to grasp the knowledge or innovative contribution in a short period of time. Therefore, text mining methodology has been applied to extract key phrases from patent documents. Term frequency (TF) and inverse document frequency (IDF) are two major factors used in text collection and information retrieval. TF measures the occurrence frequency of a specific term in a document [10]. IDF represents the rarity of the term in a set of documents [11]. Salton and Buckley suggested that the TF-IDF method can find the representative terms of a document based on how frequently they appear across a collection of documents [12]. However, the TF factor does not take into account the length of documents. The normalized TF (NTF) approach, which considers document length and word counts, is adopted in this research [8].

3. Methodology

The methodology of this research consists of four steps. First, define the order fulfillment process and identify the key operations. Second, search for the business method patents related to the process in the USPTO database (patents searched on 2014/12/08). Third, use a text mining approach to calculate NTF term values from the abstracts and claims of the retrieved patent documents, and extract key phrases associated with order fulfillment operations from the ranked NTF. Last, group the patents and construct the ontology. The ontology shows an overview of Amazon's core patent strategy and logistics innovations.
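For the third step, the TF, IDF and NTF weights can be sketched as follows. The exact NTF normalization of [8] is not reproduced here; dividing the raw term count by the document word count is assumed as one common variant, and the toy "documents" merely stand in for patent abstracts and claims.

```python
# Sketch of TF-IDF with a length-normalized TF (NTF), assuming
# NTF = count / document word count; the exact normalization of [8] may
# differ. The toy strings below stand in for patent abstracts/claims.
import math

docs = [
    "pick path optimization within the fulfillment center",
    "inventory replenishment and inventory disposition planning",
    "delivery route planning and community delivery",
]
tokenized = [d.split() for d in docs]
n_docs = len(tokenized)

def ntf(term, tokens):
    """Term frequency normalized by document length (word count)."""
    return tokens.count(term) / len(tokens)

def idf(term):
    """Inverse document frequency: the rarity of a term across the set."""
    df = sum(term in tokens for tokens in tokenized)
    return math.log(n_docs / df) if df else 0.0

def score(term, tokens):
    return ntf(term, tokens) * idf(term)

# Rank candidate key phrases for the second "patent": frequent-in-document
# but rare-across-collection terms such as "inventory" rise to the top.
ranked = sorted(set(tokenized[1]), key=lambda t: -score(t, tokenized[1]))
for term in ranked:
    print(f"{term:15s} {score(term, tokenized[1]):.3f}")
```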
3.1. Business processes of Amazon.com

Amazon.com was the first company to operate a large-scale e-commerce book retailer, and it quickly diversified its range of products. Amazon.com has three major product suppliers: the marketplace, direct vendors, and Amazon.com itself. The Amazon marketplace enables sellers to draw on the e-commerce services and tools to present their products alongside Amazon.com on the same product page, allowing customers to compare between suppliers [13]. This research, using Amazon.com as a case example, uses the Income software to provide a holistic view of the Amazon.com business processes. Figure 2 shows the overall view of the Amazon.com business process. When the customer places an order, Amazon starts its outbound processes supported by inventory management. After the ordered items are packed, the package is delivered to the customer's delivery address by a third-party logistics company. Figure 3 shows the detailed processes and sub-processes. The detailed descriptions of each stage of the process follow Bragg's analysis [14].

Figure 2. Amazon.com business process overview.

Figure 3. Amazon.com detailed business processes.

Step 1: Inventory management. The products for sale are sent to Amazon's fulfillment centers through the inbound processes for cataloging and storage as ready-to-ship inventory. After receiving, scanning, and recording the inventory, Amazon continues monitoring the inventory level and handling the disposition operations, such as inventory replenishment.

Step 2: Upload product information. Amazon uploads or updates product information on Amazon.com for the next ordering process cycle.

Step 3: Ordering. When customers want to buy something on Amazon.com, they browse the products on the website and add items to the shopping cart. Before checkout, Amazon.com requires the customer to log in or register as a new member (Figure 4). Then the customer places the products into the shopping cart, and Amazon.com requests members to enter the shipping and billing address and the method for order fulfillment (Figure 5).

Figure 4. The drill-down log-in activity.

Figure 5. The drill-down check-out activity.

Step 4: Outbound processes. After receiving an order, Amazon checks the inventory level and assigns the order to the domestic fulfillment center that can fulfill it with the least shipping cost. Once it receives an assigned order, the fulfillment center begins to pick and sort items and combines all of the different items into one package. In this process, fulfillment center workers use radio-frequency scanners to fill the picking cart, sort the picking batch into the individual customer orders, and pack them into a package. Amazon also sorts different orders into different shipping method operations and shipping plans for customer delivery (Figure 6).

Figure 6. The drill-down outbound process activity.

Step 5: Delivery. The third-party logistics company outsourced by Amazon is responsible for delivering the package to customers based on Amazon's shipping plan.

After describing the business processes of Amazon.com, the logistics-related technologies and methods become clearly apparent. In the next section, this research focuses on the patents related to order fulfillment, including inventory management and outbound processes.
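The assignment rule of Step 4 – route the order to the domestic fulfillment center that can cover it at the least shipping cost – can be sketched as below. The center names, stock levels and cost table are hypothetical, and a real assignment system would weigh far more factors; this only illustrates the stated rule.

```python
# Sketch of the Step 4 assignment rule: choose the cheapest domestic
# fulfillment center that stocks every item of the order. All data here
# (center names, stock, shipping costs) are hypothetical placeholders.

centers = {
    "FC-WEST": {"stock": {"B00123": 4, "B00456": 0},
                "cost_to": {"85001": 3.10, "10001": 9.40}},
    "FC-EAST": {"stock": {"B00123": 1, "B00456": 7},
                "cost_to": {"85001": 9.80, "10001": 2.70}},
}

def assign(order_items, dest_zip):
    """Return the name of the cheapest center holding every ordered item,
    or None when no single center can fulfill the whole order."""
    candidates = [
        (center["cost_to"][dest_zip], name)
        for name, center in centers.items()
        if all(center["stock"].get(sku, 0) >= qty
               for sku, qty in order_items.items())
    ]
    return min(candidates)[1] if candidates else None

# FC-EAST is cheaper to ZIP 10001 but holds only one unit, so FC-WEST wins.
print(assign({"B00123": 2}, "10001"))
```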
3.2. Order fulfillment ontology and corresponding Amazon patents

Amazon.com is a consolidation center that adds value by combining different items into one single order for shipment to the customer's front door within a few days. Amazon's success depends on the fulfillment center, which is key to handling inventory and orders efficiently. Therefore, the related patents are used to construct an ontology by identifying the key phrases of the order fulfillment process. The patent documents were retrieved from the USPTO database. The key phrases are all associated with methods or technologies in logistics and inventory management, such as "disposition," "fulfillment," "delivery," and "pick path." Through the text mining methodology, the innovative logistics practices, technologies and methods derived from the patents are taken as the key phrases. The order fulfillment process ontology with the corresponding Amazon patents is shown in Figure 7.

In Figure 7, order fulfillment has two sub-processes: inventory management and outbound processes. Inventory management is divided into two sub-processes. The inbound sub-process focuses on inbound inventory positioning and data collection for optimal performance of inventory management, and has one sub-domain: defect. The defect sub-domain detects incoming shipments through a cataloging system and generates the defect rate, enabling material handling facilities to carry out corrective actions. The disposition sub-process is used to handle the stored inventory. It has three sub-domains: determine sets the target inventory-related levels or identifies identifier contents; dimensionally-constrained covers patents related to automatically estimating the dimensions of items to facilitate operations and optimize space utilization; and replenishment is the planning of a replenishment strategy against the risk of inventory shortage. The determine sub-domain has three sub-sub-domains: healthy determines a healthy inventory level of an item and initiates an appropriate action, such as disposition of unhealthy inventory; exhaustion is a method analyzing order trends and calculating the expected item depletion time from an inventory model; and identifier increases the chance of accurately recognizing products or presents position information to agents through communication devices.

Figure 7. Order fulfillment ontology with Amazon patents (searched on 2014/12/08).

Outbound processes contain five sub-processes, and the assigning and picking sub-processes also have their own sub-domains. The drop sub-domain of the assigning sub-process is a method for predicting and identifying which items can be fulfilled directly from the merchant without stocking them in fulfillment centers. The picking sub-process has two sub-domains: the pick path patents relate to optimizing the picking path or directing the movement of agents toward the targeted location in less time, and stow integrates picking and stowing operations within a single picking trip so as to decrease the labor time used to perform a given quantity of work. The ship sorting sub-process focuses on delivery plans and delivery methods, such as its tote and community delivery sub-domains, which help Amazon.com to save on package costs and lower shipping costs. The geographical sub-domain of ship sorting anticipates the customers' ordering activity and ships items to a geographical area without completely specifying the delivery address, and the route sub-domain is used to improve the real-time planning of vehicle routes.
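The ontology of Figure 7 is essentially a tree of processes, sub-processes and sub-domains with patents attached to the nodes. A minimal sketch of that structure follows; the node names mirror the walkthrough above, while the per-node patent counts are hypothetical placeholders (the actual distribution appears only in Figure 7).

```python
# Sketch of the Figure 7 ontology as a tree. The children mirror the
# sub-processes and sub-domains named above; the patent counts per node
# are hypothetical placeholders, not the real distribution of Figure 7.

ontology = {
    "order fulfillment": {
        "inventory management": {
            "inbound": {"defect": {}},
            "disposition": {
                "determine": {"healthy": {}, "exhaustion": {}, "identifier": {}},
                "dimensionally-constrained": {},
                "replenishment": {},
            },
        },
        "outbound processes": {
            "assigning": {"drop": {}},
            "picking": {"pick path": {}, "stow": {}},
            "ship sorting": {"tote": {}, "community delivery": {},
                             "geographical": {}, "route": {}},
        },
    },
}

patents = {"pick path": 3, "replenishment": 2, "route": 4}  # hypothetical

def total_patents(name, subtree):
    """Sum the patents attached to a node and all of its descendants."""
    return patents.get(name, 0) + sum(
        total_patents(child, grandchildren)
        for child, grandchildren in subtree.items())

for name, subtree in ontology["order fulfillment"].items():
    print(name, total_patents(name, subtree))
```

Aggregating counts up the tree in this way is what allows a conclusion such as "inventory management and ship sorting contain the largest number of patents" to be read directly off the ontology.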
4. Conclusions

This research uses text mining to extract the key phrases of Amazon's business method patents and constructs an ontology. The results demonstrate that inventory management and ship sorting contain the largest number of patents (Figure 7). In other words, these two processes are strategic elements of Amazon's order fulfillment process. Inventory management is important for allocating ready-to-ship inventory and for making follow-up operations more efficient while handling inventory levels and status. Ship sorting focuses on optimizing shipping routes and offering a variety of shipping methods to customers. From the perspective of competing e-commerce enterprises, these two processes represent Amazon's core patent strategy. Furthermore, the ontology of Amazon's order fulfillment process provides other enterprises with an overview. Other enterprises can examine the strengths and weaknesses of their own patent strategies compared with Amazon's patented distribution processes and logistics innovations, and proceed to improve their related business processes. Besides, by understanding Amazon's patented distribution processes, they can also ensure that their patent claims do not infringe upon the existing EC patents when associated patents are filed.

5. Acknowledgement

This research is partially supported by the Ministry of Science and Technology and the Industrial Technology Research Institute in Taiwan.

References

[1] Internet Retailer, Accessed: 3/25/2015. [Online]. Available: https://www.internetretailer.com
[2] F. Ricker, R. Kalakota, Order fulfillment: the hidden key to e-commerce success, Supply Chain Management Review 11(3) (1999), 60–70.
[3] J. C. Lang, Management of intellectual property rights: Strategic patenting, Journal of Intellectual Capital 2(1) (2001), 8–26.
[4] M. J. Meurer, Business method patents and patent floods, Washington University Journal of Law & Policy 8 (2002), 309–339.
[5] S. B. Chang, K. K. Lai, and S. M. Chang, Exploring technology diffusion and classification of business methods: Using the patent citation network, Technological Forecasting & Social Change 76(1) (2009), 107–117.
[6] R. S. Campbell, Patent trends as a technological forecasting tool, World Patent Information 5(3) (1983), 137–143.
[7] Y. H. Tseng, C. J. Lin, and Y. I. Lin, Text mining techniques for patent analysis, Information Processing & Management 43(5) (2007), 1216–1247.
[8] C. V. Trappey, A. J. C. Trappey, and C. Y. Wu, Clustering patents using non-exhaustive overlaps, Journal of Systems Science and Systems Engineering 19(2) (2010), 162–181.
[9] S. Jun, S. Park, and D. Jang, Technology forecasting using matrix map and patent clustering, Industrial Management & Data Systems 112(5) (2012), 786–807.
[10] H. P. Luhn, A statistical approach to mechanized encoding and searching of literary information, IBM Journal of Research and Development 1(4) (1957), 309–317.
[11] K. S. Jones, A statistical interpretation of term specificity and its application in retrieval, Journal of Documentation 28(1) (1972), 11–20.
[12] G. Salton, C. Buckley, Term-weighting approaches in automatic text retrieval, Information Processing and Management 24(5) (1988), 513–523.
[13] P. Ritala, A. Golnam, and A. Wegmann, Coopetition-based business models: The case of Amazon.com, Industrial Marketing Management 43(2) (2014), 236–249.
[14] S. J. Bragg, Analysis of sorting techniques in customer fulfillment centers, Master Thesis, Massachusetts Institute of Technology, 2003.

doi:10.3233/978-1-61499-544-9-71

Kinematic Model of Project Scheduling with Resource Constrained under Uncertainties

Giuliani Paulineli GARBI a,1, Geilson LOUREIRO b, Luís Gonzaga TRABASSO c, and Milton de Freitas CHAGAS a
a Main Administration, Brazilian Institute of Space Research
b Integration and Testing Laboratory, Brazilian Institute of Space Research
c Mechanical Engineering Department, Technological Institute of Aeronautics

Abstract. Projects represent the principal means of materialization of products. The inherent complexity of product projects is treated through project management techniques and approaches. Throughout the product life cycle, project management techniques and approaches are mainly involved in the planning, scheduling and control of project activities conducted in a context of resource constraints under uncertainty. In addition to the complexity of such scenarios, there are some classes of products, typical of the defense, aerospace, telecommunication, software, and biomedicine industries, which are problematic for current methods of resource-constrained project planning and scheduling under uncertainty. The existing methods fail because they suffer from one or more of the following limitations: they focus mainly on the basic RCPSP (Resource Constrained Project Scheduling Problem) model; they deal with only one source of uncertainty, mostly in the duration of activities; or they do not model uncertainties at all. This paper presents the kinematic model of project scheduling, which considers the restrictions inherent in the nature of projects: precedence among project activities; uncertainties in the duration of project activities; and uncertainties in the availability of resources for the execution of project activities. The kinematic model of project scheduling provides a graph and a mathematical model with the following advantages: estimation of the project duration and resources due to uncertainties; estimation of the uncertainties due to project duration and resources; improvement of the outcomes of planning and scheduling of project activities; and support for the dynamics of projects by providing information for a collaboration policy on durations and resources between project activities and between different projects. This article describes the Resource Constrained Project Scheduling Problem under uncertainties, discusses previous work on planning under uncertainty, and presents the kinematic model of project scheduling with resource constraints under uncertainties along with a small implementation example.

Keywords. Project scheduling, product project, resource constrained under uncertainties, project management, kinematic model.

1 Main Administration, Brazilian Institute of Space Research, Avenue Astronautas, Jardim da Granja, São José dos Campos, São Paulo, Brazil; E-mail: giulianigarbi@gmail.com.
Introduction

The research on the project scheduling problem was intensified by the recognition that the network models CPM (Critical Path Method), PERT (Program Evaluation and Review Technique) and PDM (Precedence Diagram Method) are based on the assumption that all needed resources will be available [1]. The importance of project scheduling and control is bolstered by many examples where inadequate scheduling and control are identified as the most common causes of project failure [2]. This paper presents the kinematic model for project scheduling with resource constraints under uncertainties, which considers the restrictions inherent in the nature of projects: precedence of project activities; uncertainties in the duration of project activities; and uncertainties in the availability of resources for the execution of project activities. The project schedule may be considered as an open kinematic chain, which is formed by a set of rigid links (precedence of activities) that are connected by joints (activities of the project), with one fixed extremity (the activity that represents the beginning of the project) and one free extremity (the activity that represents the end of the project). The methods for the resource-constrained project scheduling problem aim at project scheduling under uncertainties to minimize the expected total time of the project; however, these methods present some limitations [3, 4, 5]:
• Proactive or reactive methods deal with uncertainty: proactive methods work better when the uncertainty is quantifiable, reactive methods when the degree of uncertainty is too great. Research indicates the perspective of combining proactive and reactive methods.
• There is a need for new models of scheduling which account for production environment conditions.
• Most of the research to date has dealt with only one source of uncertainty, mostly in the duration of activities.
• The methods are focused mainly on the basic RCPSP model.
• The methods do not model the uncertainties.

This paper is divided into five sections: section 1 presents an introduction to the paper; section 2 presents a literature review on the resource-constrained project scheduling problem under uncertainties; section 3 presents the kinematic model concepts as well as the mathematical fundamentals; section 4 presents the application of the equations of direct and indirect kinematics to a simple schedule with uncertainties in the durations and resources of activities; and section 5 presents some conclusions on the implementation of the kinematic model for project scheduling with resource constraints under uncertainties.

1. Literature Review on Project Scheduling Problem

The literature states that project scheduling is the main cause of project failure; only 30% of projects are completed on schedule and on budget [6]. Project scheduling may be defined as the arrangement, leveling and allocation of activities regarding the duration and resources required for performing each activity [7]. Project scheduling must consider the constraints present in the nature of projects: precedence of project activities; uncertainties in the duration of project activities; and uncertainties in the availability of resources for the execution of project activities [8].
• Activities: represented as a network in a discrete and finite set of activities [9].
• Durations: the scheduling depends on the duration estimation modeling, which may be probabilistic or possibilistic [9].
• Resources: may be grouped into three categories [9]:
- Renewable: there is a certain amount of the resource available for each activity, for example, hours of employees, facilities and others.
- Non-renewable: there is a certain amount of the resource available for the entire project, for example, raw materials, financial resources, and others.
- Doubly constrained: resources are considered both renewable and non-renewable.

The challenge of project scheduling is related to the allocation of constrained resources in a multiple-project environment with different sources of uncertainty, for example, the duration of activities, resource availability, among others [9]. The formal research on project scheduling problems began after the Second World War. Until the 1950s, the challenge of project management was to determine a detailed graphical representation for the project scheduling problem, which was solved through the following approaches and techniques [10]: the Gantt diagram and the project network diagram. In the 1960s and 1970s, the need for approaches and techniques for the project scheduling problem driven by the duration of activities was met through approaches and techniques for the project scheduling problem with activities on nodes [10]: the Critical Path Method (CPM) and the Precedence Diagram Method (PDM), assuming deterministic activity durations, and the Program Evaluation and Review Technique (PERT), assuming probabilistic activity durations. The 1980s and 1990s treated the problem related to the omission of the resources required for the execution of activities, which is known as the Resource Constrained Project Scheduling Problem (RCPSP) and is represented by a deterministic CPM problem with the addition of resources as constraints [11]. RCPSP problems are treated by algorithms: exact methods are applied to projects with small instances (up to 30 activities), while for projects with large numbers of instances heuristic methods must be used [12]. From the 2000s, the project scheduling problem considering uncertainties is known as the Stochastic Resource Constrained Project Scheduling Problem (SRCPSP), which is a stochastic variant of the RCPSP and can involve many sources of uncertainty, such as activity durations, renewable resource availability, task insertion, resource consumption, and others. In general, there are four approaches to dealing with uncertainty in a scheduling environment [13]: reactive scheduling, stochastic project scheduling, fuzzy project scheduling, and proactive methods.

2. Kinematic Model of Project Scheduling with Resource Constrained under Uncertainties

The kinematic model of project scheduling with resource constraints under uncertainties deals with the movements of the schedule activities without considering the causes that originate the movement. The kinematic model of project scheduling presents two types of variables:
• Project variables: precedence, duration of activities, resources to perform the activities.
• Parameters of project activities:
- Activity variables: estimation of the duration of activities, estimation of the resources to perform the activities, estimation of the precedence of activities, critical factor of activities.
- Uncertainties of activities: uncertainties in the duration of activities, uncertainties in the resources to perform the activities.

In the kinematic model of project scheduling, the activities are represented as vectors in a coordinate system through the project variables and the parameters of project activities. Thereby, from the parameters of project activities (activity variables and uncertainties of activities) the project variables may be determined with the direct kinematic model of project scheduling; likewise, from the project variables the uncertainties of activities may be determined with the inverse kinematic model of project scheduling.

Figure 1. Kinematic model of project scheduling.

The project schedule may be considered as an open kinematic chain, which is formed by a set of rigid links that are connected by joints, with one fixed extremity and one free extremity, with the following equivalence between the components of the open kinematic chain and the schedule:
• Fixed extremity: activity that represents the beginning of the project.
• Free extremity: activity that represents the end of the project.
• Rigid links: precedence of activities.
• Revolute joints: activities of the project.

In order to model the project scheduling depending on the variables and parameters of the project, the activities are represented in a three-dimensional coordinate system:
• Abscissa (x axis): duration of project activities.
• Ordinate (y axis): resources to perform the project activities.
• Cote (z axis): precedence; this value does not have uncertainties.
• Alpha: uncertainties in the estimation of activity durations.
• Beta: uncertainties in the estimation of resources to perform the activities.

Figure 2. Three dimensional coordinate system for kinematic model.

The representation of the (direct and inverse) kinematic model for project scheduling with resource constraints under uncertainties is the outcome of the sum of the homogeneous transformation matrices between project variables and parameters of project activities. The equations of the (direct and inverse) kinematic model must consider the set of activities and resources:

A_n^0(j) = \sum_{i=0}^{n} \sum_{j=1}^{m} A_i^{i-1}(j)    (1)

A_i^{i-1}(j) = \{[H_i^{i-1}(\alpha_i^j) \cdot H_i^{i-1}(T_i^j; R_i^j)] + [H_i^{i-1}(\beta_i^j) \cdot H_i^{i-1}(T_i^j; R_i^j)]\} + H_i^{i-1}(P_i^j)    (2)

where:
• i – set of activities: i = (0, 1, 2, …, n).
• j – set of resources: j = (1, 2, 3, …, m).
• A_i^{i-1}(j) – homogeneous transformation matrix of the kinematic model for project scheduling with resource constraints under uncertainties.
• t_i – abscissa axis representing the estimation of the duration of project activities.
• r_i – ordinate axis representing the estimation of the resources to perform the activities.
• p_i – cote axis representing the precedence of activities.
• T_i^j – estimation of the duration of project activities.
• R_i^j – estimation of the resources to perform the activities.
• P_i^j – precedence of activities.
• θ_i^j – critical factor of activities, for cases where the durations and resources may be greater than double the estimations.
• α_i^j – uncertainties in the duration of project activities, represented through a rotation about the abscissa axis (t).
• β_i^j – uncertainties in the resources to perform the activities, represented through a rotation about the ordinate axis (r).
• H_i^{i-1} – basic homogeneous transformation matrix.
• H_i^{i-1}(α_i^j) – basic homogeneous transformation matrix with rotation α_i^j about the abscissa axis t_i.
• H_i^{i-1}(β_i^j) – basic homogeneous transformation matrix with rotation β_i^j about the ordinate axis r_i.
• H_i^{i-1}(T_i^j; R_i^j) – basic homogeneous transformation matrix with translations θ_i^j·T_i^j and θ_i^j·R_i^j.
• H_i^{i-1}(P_i^j) – basic homogeneous transformation matrix with translation P_i^j − [θ_i^j·(T_i^j·sβ_i^j − R_i^j·sα_i^j)] to cancel the effects of the uncertainties.

Therefore, the representation of the kinematic model for project scheduling with resource constraints under uncertainties is determined by developing its homogeneous transformation matrix. Using the shorthand cα_i^j = cos(α_i^j), sα_i^j = sin(α_i^j), cβ_i^j = cos(β_i^j) and sβ_i^j = sin(β_i^j), the result takes the homogeneous form

A_i^{i-1}(j) = \begin{bmatrix} 1 & 0 & 0 & a_i^{i-1}t_i(j) \\ 0 & 1 & 0 & a_i^{i-1}r_i(j) \\ 0 & 0 & 1 & a_i^{i-1}p_i(j) \\ 0 & 0 & 0 & 1 \end{bmatrix}    (3)

whose fourth column, after carrying out the rotations and translations of Eq. (2), reduces to

a_i^{i-1}t_i(j) = θ_i^j·T_i^j·(1 + cβ_i^j),  a_i^{i-1}r_i(j) = θ_i^j·R_i^j·(1 + cα_i^j),  a_i^{i-1}p_i(j) = P_i^j.    (4)

3. Implementation of Kinematic Model of Project Scheduling with Resource Constrained under Uncertainties

This section presents the implementation of the kinematic model for project scheduling with resource constraints under uncertainties. The equations of the direct kinematic model must be implemented from the work breakdown structure (WBS) and the execution of the project time management processes: define activities, sequence activities, estimate activity resources, estimate activity durations, and develop the schedule (arrangement, leveling and allocation of activities). To illustrate the implementation, some observations must be made about the schedule precedence diagram illustrated in Figure 3:
• The example presents a small schedule with the activities of the project's critical path.
• The criterion for determining the resources must be part of the organization's policies.
• The criterion for determining the uncertainties must be part of the organization's policies.
• The uncertainties must range between 0° (highest degree of uncertainty) and 89° (lowest degree of uncertainty). When alpha and beta equal 90°, there are neither uncertainties nor certainties. The certainties must range between 91° (lowest degree of certainty) and 180° (highest degree of certainty).
• A0: beginning of the project.
• A1: with uncertainties of duration and uncertainties of resources (j = 1 and j = 2).
• A2: with uncertainties of duration and uncertainties of resources (j = 1 and j = 2).
• A3: end of project.

Figure 3. Schedule precedence diagram of the direct kinematic model.

Eq. (5) lists the project variables and the parameters of the project activities with regard to resources for the direct kinematic model.
A_i(j) = (T_i^j; R_i^j; P_i^j), [θ_i^j; α_i^j; β_i^j], {a_i^{i-1}t_i(j); a_i^{i-1}r_i(j); a_i^{i-1}p_i(j)}    (5)

A0(1) = (0; 0; 0), [θ = –; α = –; β = –]
A1(1) = (5; 2; 1), [θ = 1; α = 45°; β = 45°]
A1(2) = (5; 1; 1), [θ = 1; α = 45°; β = 45°]
A2(1) = (3; 1; 1), [θ = 1; α = 45°; β = 45°]
A2(2) = (3; 1; 1), [θ = 1; α = 45°; β = 45°]
A3(1) = (0; 0; 1), [θ = –; α = –; β = –]

For the direct kinematic model, Figure 4 presents the project variables and the parameters of the project activities.

Figure 4. Direct kinematic model of the implementation example.

Applying Eqs. (3) and (4) with the information of Figure 4 gives the homogeneous transformation matrices for each project activity with regard to resources (with cos 45° ≈ 0.7).

Direct kinematic model for activity i = 1 and resource j = 1:
A_1^0(1) = {[H_1^0(45°, t_1)·H_1^0(5; 2)] + [H_1^0(45°, r_1)·H_1^0(5; 2)]} + H_1^0(1)    (6)
with fourth column a_1^0t_1(1) = 8.5, a_1^0r_1(1) = 3.4, a_1^0p_1(1) = 1.    (7)

Direct kinematic model for activity i = 1 and resource j = 2:
A_1^0(2) = {[H_1^0(45°, t_1)·H_1^0(5; 1)] + [H_1^0(45°, r_1)·H_1^0(5; 1)]} + H_1^0(1)    (8)
with fourth column a_1^0t_1(2) = 8.5, a_1^0r_1(2) = 1.7, a_1^0p_1(2) = 1.    (9)

Direct kinematic model for activity i = 2 and resource j = 1:
A_2^1(1) = {[H_2^1(45°, t_2)·H_2^1(3; 1)] + [H_2^1(45°, r_2)·H_2^1(3; 1)]} + H_2^1(1)    (10)
with fourth column a_2^1t_2(1) = 5.1, a_2^1r_2(1) = 1.7, a_2^1p_2(1) = 1.    (11)

Direct kinematic model for activity i = 2 and resource j = 2:
A_2^1(2) = {[H_2^1(45°, t_2)·H_2^1(3; 1)] + [H_2^1(45°, r_2)·H_2^1(3; 1)]} + H_2^1(1)    (12)
with fourth column a_2^1t_2(2) = 5.1, a_2^1r_2(2) = 1.7, a_2^1p_2(2) = 1.    (13)

Direct kinematic model for activity i = 3 and resources j = 1 and j = 2:
A_3^2(1) = A_3^2(2) = H_3^2(P_3^1) = H_3^2(P_3^2) = H_3^2(1)    (14)
with fourth column a_3^2t_3(1/2) = 0, a_3^2r_3(1/2) = 0, a_3^2p_3(1/2) = 1.    (15)

The elements of the fourth column are analyzed in order to determine the project variables for each activity of the direct kinematic model:
• Activity i = 1, resource j = 1, from Eq. (7): duration a_1^0t_1(1) = 8.5; resource a_1^0r_1(1) = 3.4; precedence a_1^0p_1(1) = 1.
• Activity i = 1, resource j = 2, from Eq. (9): duration a_1^0t_1(2) = 8.5; resource a_1^0r_1(2) = 1.7; precedence a_1^0p_1(2) = 1.
• Activity i = 2, resource j = 1, from Eq. (11): duration a_2^1t_2(1) = 5.1; resource a_2^1r_2(1) = 1.7; precedence a_2^1p_2(1) = 1.
• Activity i = 2, resource j = 2, from Eq. (13): duration a_2^1t_2(2) = 5.1; resource a_2^1r_2(2) = 1.7; precedence a_2^1p_2(2) = 1.
• Activity i = 3, resources j = 1 and j = 2, from Eq. (15): duration a_3^2t_3(1) = a_3^2t_3(2) = 0; resource a_3^2r_3(1) = a_3^2r_3(2) = 0; precedence a_3^2p_3(1) = a_3^2p_3(2) = 1.
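Under the fourth-column relations reconstructed in Eq. (4) — a_t = θ·T·(1 + cos β), a_r = θ·R·(1 + cos α), a_p = P — the worked example can be re-run with a short script. This is only a sketch of the arithmetic under that reconstruction, not the authors' implementation.

```python
# Sketch re-running the worked example under the reconstructed fourth-column
# relations of Eq. (4); it reproduces the readouts above but is not the
# authors' tool.
import math

# (T, R, P, theta, alpha_deg, beta_deg) per (activity i, resource j).
activities = {
    (1, 1): (5, 2, 1, 1, 45, 45),
    (1, 2): (5, 1, 1, 1, 45, 45),
    (2, 1): (3, 1, 1, 1, 45, 45),
    (2, 2): (3, 1, 1, 1, 45, 45),
}

def project_variables(T, R, P, theta, alpha_deg, beta_deg):
    """Fourth-column entries: duration, resource and precedence variables."""
    a_t = theta * T * (1 + math.cos(math.radians(beta_deg)))
    a_r = theta * R * (1 + math.cos(math.radians(alpha_deg)))
    return a_t, a_r, P

for (i, j), params in sorted(activities.items()):
    a_t, a_r, a_p = project_variables(*params)
    print(f"A{i}({j}): duration {a_t:.1f}, resource {a_r:.1f}, precedence {a_p}")

# Summing fourth columns along the open chain (Eq. (1)) gives the project
# duration per resource: 1.707 x (5 + 3), i.e. the ~70% increase over the
# deterministic total of 8 noted in the conclusions below.
for j in (1, 2):
    total = sum(project_variables(*activities[(i, j)])[0] for i in (1, 2))
    print(f"project duration (resource {j}): {total:.2f}")
```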
4. Conclusions and Comments

Analyzing the application of the direct kinematic model equations to the example schedule with uncertainties in the durations and resources of activities, some conclusions should be highlighted: robust project scheduling is achieved by balancing the uncertainties in durations and resources between project activities and between projects; the project activities are modeled as an open kinematic chain whose graphical and mathematical representation differs from the basic RCPSP model; the model is driven by multiple sources of uncertainty in the estimation of durations and resource availabilities; and the project variables and parameters of project activities are influenced by the sources of uncertainty.

Outcomes of the direct kinematic model for the implementation example:
• For activity i = 1: a 70% increase in the estimated duration, a 70% increase in the estimated resource (j = 1) and a 70% increase in the estimated resource (j = 2).
• For activity i = 2: a 70% increase in the estimated duration, a 70% increase in the estimated resource (j = 1) and a 70% increase in the estimated resource (j = 2).
• For the project: a 70% increase in the estimated duration, a 70% increase in the estimated resource (j = 1) and a 70% increase in the estimated resource (j = 2).

In future work, the uncertainties in durations and resources may be modeled through the inverse kinematic model. There are also opportunities regarding the development of criteria for the identification and categorization of resources, as well as criteria for the identification and categorization of uncertainties, which are treated as indirect risks.

References

[1] F. Acebes, J. Pajares, J. M. Galán, A. L. Paredes, A new approach for project control under uncertainty. Going back to the basics, International Journal of Project Management, 2014; 32(3); pp. 423–434.
[2] S. Yang, L. Fu, Critical chain and evidence reasoning applied to multi-project resource schedule in automobile R&D process, International Journal of Project Management, 2014; 32(1); pp. 166–177.
[3] J. R. M. Torres, E. G. Franco, C. P. Mayorga, Project scheduling with limited resources using a genetic algorithm, International Journal of Project Management, 2010; 28(6); pp. 619–628.
[4] C. T. Chen, R. G. Askin, Project Selection, Scheduling and Research Allocation with Time Dependent Returns, European Journal of Operational Research, 2009; 193(1); pp. 23–34.
[5] W. Herroelen, R. Leus, Project scheduling under uncertainty: Survey and research potentials, European Journal of Operational Research, 2005; 165(2); pp. 289–306.
[6] R. Magnaye, B. Sauser, P. Patanakul, D. Nowicki, W. Randall, Earned readiness management for scheduling, monitoring and evaluating the development of complex product systems, International Journal of Project Management, 2014; 32(7); pp. 1246–1259.
[7] P. Godinho, F. G. Branco, Adaptive Policies for Multi-Mode Project Scheduling under Uncertainty, European Journal of Operational Research, 2012; 216(3); pp. 553–562.
[8] G. Garbi, G. Loureiro, Shared Management of Product Portfolio, in: J. Cha et al. (eds.)
Proceedings of the 21st ISPE International Conference on Concurrent Engineering (CE2014), Sep. 8–11, 2014, Beijing, China, IOS Press, Amsterdam, pp. 537–546, 2014.
[9] G. Garbi, G. Loureiro, Integrated Development for Brazil's Space Systems Portfolio, 65th IAC International Astronautical Congress IAC2014, Toronto, 2014.
[10] E. L. Demeulemeester, W. S. Herroelen, Project Scheduling: A Research Handbook, Kluwer Academic Publishers, Boston, 2002.
[11] V. V. Peteghem, M. Vanhoucke, An experimental investigation of metaheuristics for the multi-mode resource-constrained project scheduling problem on new dataset instances, European Journal of Operational Research, 235(1); pp. 62–72, 2014.
[12] J. Weglarz, J. Josefowska, M. Mika, G. Waligora, Project Scheduling with Finite or Infinite Number of Activity Processing Modes – A Survey, European Journal of Operational Research, 2012; 208(3); pp. 177–205.
[13] G. Garbi, G. Loureiro, Business-Product-Service Portfolio Management, in: C. Bil et al. (eds.) Proceedings of the 20th ISPE International Conference on Concurrent Engineering (CE2013), Sep. 2–5, 2013, Melbourne, Australia, IOS Press, Amsterdam, pp. 137–146, 2013.

doi:10.3233/978-1-61499-544-9-81

Cloud-based Project Supervision to Support Virtual Team for Academic Collaboration

Teruaki ITO a,c,1, Mohd Shahir KASIM b,c, Raja IZAMSHAH b,c, Norazlin NASIR a,b,c, Yong Siang TEOH a,b,c
a The University of Tokushima, Japan
b Universiti Teknikal Malaysia Melaka, Malaysia
c TMAC (TokushimaU UTeM Academic Centre), Japan & Malaysia

Abstract. Concurrent Engineering (CE) aims at cost and time reduction as well as quality improvement. To achieve this, CE considers the collaboration of various activities, ranging from the design disciplines, manufacturing and assembly, marketing and purchasing, all the way to the end users. In this respect, the collaboration of people from various activities across different locations is crucial to the success of CE, where collaboration means activities not only for industry but also for academia in pursuing global research and education. TMAC (TokushimaU UTeM Academic Centre) was established in September 2014 in order to enhance the academic collaboration between the two institutions. TMAC is not a satellite office of Tokushima University at UTeM but a joint academic center which is designed to be operated by a virtual team composed of existing faculty members who serve each institution. In other words, TMAC has a unique organizational structure based on a virtual team spanning the globe. Therefore, it is a very critical project to figure out how to enhance the global collaboration among the TMAC staff. To enhance the collaboration, some existing communication tools and collaboration systems are already in use. However, a new type of cloud-based computing system is required to satisfy the specific needs of this unique organization. This paper gives an overview of TMAC and presents an idea for a cloud-based supervision system to support the virtual team organization for the global academic collaboration of TMAC, which could be applied to similar types of global collaboration.

Keywords. Remote supervision, virtual team organization, academic collaboration, cloud-based computing
1. Introduction

The number of Japanese companies expanding manufacturing business into Malaysia has grown to more than 1,400 [1], which reflects the strong relations between the two countries in terms of business and industry partnership. On the other hand, the academic partnership between the two nations is also becoming more active, as can be seen from various ongoing projects such as MJIIT (Malaysia-Japan International Institute of Technology) [2], JMTI (Japan-Malaysia Technical Institute) [3], MSSC (Kyushu Tech. Malaysia Super Satellite Campus) [4], TUT-USM Penang (Toyohashi University of Technology - Universiti Sains Malaysia Technology Collaboration Centre in Penang) [5], etc. Under these circumstances, the Tokushima University - Universiti Teknikal Malaysia Melaka Academic Center (TMAC) [6] was established in September 2014, after the MOU (Memorandum of Understanding) agreement in 2013, in order to promote further collaboration between the two institutions in terms of education and research activities. One of the critical factors in this successful establishment was the long-term reliable relationship between the two institutions built through the alumni network over the past decade, supporting various activities such as the mobility program [7] or the international conference iDECON [8][9].

TMAC has opened two offices: J-TMAC at Tokushima University (TU) and M-TMAC at Universiti Teknikal Malaysia Melaka (UTeM). It is not yet clear whether the operation of TMAC will work well, because TMAC is quite unique as a virtual organization, in which TMAC staff are supposed to work collaboratively over the computer network. All of the TMAC staff are concurrently appointed from the regular staff of TU/UTeM. Therefore, in order to make TMAC successful, the remote collaboration activities among TMAC staff play a very critical role and are the fundamental function of TMAC [10]. Various types of collaboration tools are available these days, for example, cloud-based mail systems [11], project scheduling systems [12], video conference systems [13], etc., some of which are already installed at the TMAC offices and are in use for these activities. These tools definitely support remote collaboration over the internet, even between different countries such as Japan and Malaysia [14]. However, the environment for remote collaboration still has many issues to be addressed before it satisfies the needs of TMAC and truly supports its collaboration.

Five months have passed since the first Education-and-Research Unit (ER Unit) members of TMAC were appointed in October 2014, after the establishment of TMAC. Reviewing the past five months of TMAC activities under the collaboration of ER Unit 2014, this paper presents how TMAC has worked towards collaboration so far and discusses the basic requirements of a remote collaboration support system for TMAC. First, this paper gives an overview of the TMAC academic centre to show its unique features. Then, reviewing the TMAC activities related to remote collaboration, it clarifies the critical factors of a remote collaboration support system.

1 Corresponding Author: Teruaki ITO, Institute of Technology and Science, The University of Tokushima, 2-1 Minami-Josanjima, Tokushima, 770-8506, Japan; Email: tito@tokushima-u.ac.jp
Finally, this paper proposes a cloud-based remote collaboration system, which could be the basis of project supervision for a virtual team such as the TMAC staff.

2. Overview of TMAC Academic Centre

The TokushimaU-UTeM Academic Centre, or TMAC, was established at the main campus of Universiti Teknikal Malaysia Melaka (UTeM) in September 2014 as a result of long-term academic collaboration between Tokushima University and UTeM, which started at the laboratory level, was followed by an MOU at the institutional level, and was then upgraded to the university level. The branch office of TMAC, or J-TMAC, was opened at the same time on the Josanjima campus of Tokushima University.

TMAC pursues a more advanced method of collaboration by way of a global education and research approach [15], which is not identical to what is commonly performed in typical satellite offices of host universities. One of the core ideas of the TMAC framework is to invite an education/research unit, or ER Unit, from UTeM for a TU attachment every year and to assign its members as the TMAC staff in Japan. An ER Unit is defined as a pair consisting of a student and his/her supervisor. The student is basically a double-degree program Ph.D. student who belongs to TU/UTeM. The supervisor is a Ph.D. researcher who supervises the student towards the Ph.D. degree. TMAC invites one or two ER Unit pairs every year under the financial support of TU. Figure 1 shows the organizational structure of TMAC, which is jointly operated under the collaboration of TU and UTeM. As for the educational approach, the goal of TMAC is to provide a framework of global education for both institutions, which includes lectures via teleconferencing, on-site lectures at UTeM by TU professors, English lectures organized by the ER Unit for TU students, etc. As for the research approach, the goal of TMAC is to offer a framework of global joint research including researchers not only from TU/UTeM but also from a wide range of Japanese and Malaysian universities. In 2014, two ER Unit pairs were invited and worked for TMAC.

This paper presents the joint collaboration between TU/UTeM using two case studies. One of them is a machine-oriented collaboration and the other is a software-oriented collaboration. Reviewing these case studies, this paper shows how the collaboration has been supported by the activities of TMAC and clarifies what kind of environment is required for further collaboration in TMAC.

Figure 1. Overview of the TMAC academic centre as a virtual organization.

3. Case studies of remote supervision-based projects at TMAC

ER Unit members at J-TMAC work cooperatively at TU not only with TU researchers/students but also with UTeM people under remote collaboration. This section takes up the case of ER Unit 2014 and reviews how its remote collaboration has been carried out. The members of the ER Unit were appointed as TMAC staff for one year during 2014-2015.

3.1. Remote supervision from J-TMAC over final year projects at UTeM

The final year project at UTeM, which is commonly called graduation research at TU, is one of the ultimate manifestations of what the undergraduate student has learned at the end of the four-year engineering degree program. The research topic in the project is basically relevant to what the student has learned by that time.
The final year project encourages students to explore their areas of interest in depth, to work collaboratively in a project team, and to pursue their scholarly area independently. Basically, the final year project is a series of main projects initiated by the supervisor. However, the project scope is dedicated exclusively to the students. Therefore, it is very important to design an appropriate research plan, structure, and supervision model in the final year project so that the project can be fulfilled by the project team as well as by the individual students in the team.

Typical UTeM student experiments were conducted as final year projects of UTeM over five months in 2014-2015 [16]. The experiments were mostly completed under the supervision of one of the authors from the remote site, TU; the collaboration framework is shown in Figure 2. Table 1 below shows examples of basic experiments conducted under this framework.

Figure 2. Remote supervision framework for final year projects.

Table 1. Examples of final year project experiments.
Study # | Equipment | Area of study | No. of students | No. of research assistants
1 | Electrical discharge machine | Surface roughness and electrode | 2 | NA
2 | Grinding machine | Surface roughness of nickel-based material | 1 | NA
3 | Simulation in CNC milling process | Cutting force of hard material | 1 | NA
4 | Milling machine | Tool performance and surface | 2 | NA
5 | Wire electrical discharge machine | Cutting performance of nickel-based material | 1 | 1
6 | Water hammer test rig | Accumulator performance | 1 | NA

These basic experiments were completed in the framework of remote supervision, mainly because the co-supervisor at UTeM took good care of the students performing the experiments guided by the supervisor at TU. However, the problems below were recognized during the projects because the supervision of the UTeM students in Malaysia was offered from a distant place, TU in Japan.

Research guidance problem: It was sometimes not easy to give appropriate guidance from a distance. A brief meeting in the same room would have made it easier.

Weak planning of the research project: Students tended to plan/design the research experiments in the easiest way rather than in an appropriate way. Supervising the appropriate experimental design was an issue.

Poor language proficiency: Verbal communication alone was not enough for research discussion. More direct interaction using gestures, body language and physical movement could enhance the communication.

3.2. Remote supervision from J-TMAC over FEM simulation projects at UTeM

Finite Element Method (FEM) analysis is another area covered by the researcher at J-TMAC [17]. FEM is one of the most popular approaches to validate the structure/fabrication of mechanical part designs. Based on the various kinds of input data used to model the target process, accurate results can be obtained, as shown in Figure 3. Several FEM simulation projects are ongoing under the remote supervision of the researcher, as shown in Figure 4, with the help of the local technician at UTeM.

Figure 3. Input and output data overview in FEM analysis.

Reviewing the collaboration activities during the last few months, remote communication with the students is well supported thanks to the information and communication technology (ICT) environment at TMAC.
As for the good points of remote communication, shorter meeting times, better meeting preparation by students, better engagement of students, suitability for simple problem solving, easier data exchange, etc. were recognized. As for the drawbacks, three-way calls, online machine monitoring, etc. were not available.

Figure 4. Remote supervision framework for FEM analysis projects.

3.3. Remote supervision from M-TMAC over a Ph.D. project from the viewpoint of a Double-Degree student

This section presents an example of remote supervision of a Double-Degree student of TMAC, where the supervisor is located at UTeM and his student is located at TU. The student is pursuing a Double-Degree Ph.D. from TU and UTeM during the three years 2014-2017 on the research topic of cloud-based manufacturing [18][19]. In addition, she was appointed as student staff of TMAC in the first year of her Ph.D. program.

Figure 5. PDCA cycle of a Double-Degree student under remote supervision.

Reviewing the activity of the student, a PDCA (Plan-Do-Check-Action) cycle of the activity was depicted as shown in Figure 5, which shows how the student worked under the remote supervision from UTeM. Education activities form the first phase, to obtain knowledge about the research project, with the help of the Turnitin database to learn how to write good papers under the supervisor's support. Process is the second phase, to learn systematic thinking in data/information management. Feedback is the third phase, to review the research activities via advice/instruction through interaction with the supervisor using ICT tools. Analysis is the last phase of the cycle, to adjust the research activities and upgrade them towards the next step. In addition to ICT tools such as Turnitin, Yahoo messages and Twitter, as shown in Figure 5, the student often used phone calls, a traditional communication tool that proved to be effective across the globe.

4. Review of the remote supervision examples at TMAC

4.1. Communication tools in UTeM/TMAC collaboration

The communication between the supervisor and his students was very frequent, cooperative, and close in all cases in order to collaborate between the two different locations, UTeM and TU. The typical communication tools were Email, VoIP applications, video conference systems, etc. This section shows what we found in using these tools.

Email: Email is the basic and most widely used communication medium. Its well-known advantage is that messages can be sent across different time zones, which is very suitable for overseas communication in TMAC. The function of Email is not only to exchange messages, but also to share files such as weekly reports, experimental data, statistical data and journals of current research. Email thus gives students the opportunity to write reports well and send them in a timely manner.

Wikis: Wikis, such as SharePoint, PmWiki, XWiki, etc., are a viable alternative to email, because wikis have many advantages for teamwork. Even though TMAC has not used wikis so far, a defining characteristic of wiki technology is the ease with which pages can be created and updated. Taking security issues into account, wikis are under consideration as a collaboration tool for TMAC.
IM and VoIP applications: Instant messaging (IM) and voice over internet protocol (VoIP) applications have recently become quite popular. Typical applications used in the projects were Skype, Viber, Line, and WhatsApp, all of which are free and easily accessible from mobile devices. As a result, the behavior/movement of experimental machines could be captured/monitored very clearly in high resolution and shared among the members of the experiment. The captured video could also be sent as a video clip file after the meeting to be shared among the team.

High-quality video conference system: J-TMAC at TU and M-TMAC at UTeM are connected with a high-quality video conference system (Polycom), which is installed at both sites and is ready to use anytime for TMAC collaboration activities. Thanks to its high-quality audio/video communication, TMAC members often use it for regular meetings or ad-hoc discussions over the computer network, which is very useful for remote collaboration.

4.2. Limitations in project supervision under long-distance communication

Even though the typical communication tools have the advantages mentioned in the previous section, some drawbacks were observed in the projects, as described in this section.

File sharing: Due to capacity limitations, some large files could not be sent via email or IM applications. Email file attachments require proper file management; otherwise, important information may be lost. Some integrated files linked to specific applications, such as CAD/CAM and CAE simulation software, could not be opened as a single data file in a different application. Some students captured desktop images or made video clips to share the results of experiments with the supervisor. However, these files cannot be edited for corrections by the supervisor unless both student and supervisor operate the same software concurrently to edit the original data during the video communication.

Physical data sharing: The experiments in the final year project research involve various activities including machine operation. Therefore, machine setup and machine handling have to be completed correctly in order to avoid accidents or undesirable results. However, it is not possible to share the physical data regarding the machine operation over the computer network.

Data transmission speed: Typical communication tools, such as Skype, worked very effectively in all of the projects. However, network speed and video quality were an issue. Poor internet speed also prevented group video calls in some cases.

High-quality video conference: Polycom provides high-quality video streaming and does not suffer from the data transmission problems observed with Skype. However, it could not be used in the experimental facilities because neither the machine tools nor the Polycom system was portable.

Time zone: The time-zone difference was another issue, even though it is only one hour between Japan and Malaysia; it caused, for example, difficulties in meeting scheduling and time misunderstandings.

Data handling: Communication data requires large storage space, especially recorded video data. Due to the scarcity of storage space on PCs and mobile devices, some communication data was occasionally lost completely, making it impossible to retrieve later.
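As an aside before the proposal in the next section, the file-sharing limitation above is exactly the kind of problem that link-based cloud storage addresses. The following is a minimal, hypothetical Python sketch (the class name CloudFileStore and the 4 MB chunk size are illustrative assumptions, not part of any TMAC system): files are uploaded in chunks so that no single transfer hits an attachment-style size cap, and only a short link is exchanged between student and supervisor.

import hashlib
import uuid

CHUNK_SIZE = 4 * 1024 * 1024  # assumed 4 MB cap per transfer, for illustration only


class CloudFileStore:
    """Toy stand-in for a cloud store shared by the J-TMAC and M-TMAC offices."""

    def __init__(self):
        self._blobs = {}  # content hash -> list of chunks
        self._links = {}  # share token -> content hash

    def upload(self, data: bytes) -> str:
        # Split the payload into chunks so no single transfer exceeds the cap.
        blob_id = hashlib.sha256(data).hexdigest()
        self._blobs[blob_id] = [data[i:i + CHUNK_SIZE]
                                for i in range(0, len(data), CHUNK_SIZE)]
        return blob_id

    def share(self, blob_id: str) -> str:
        # Hand out a short link token instead of mailing the file itself.
        token = uuid.uuid4().hex
        self._links[token] = blob_id
        return token

    def download(self, token: str) -> bytes:
        return b"".join(self._blobs[self._links[token]])


if __name__ == "__main__":
    store = CloudFileStore()
    video = b"\x00" * (10 * 1024 * 1024)   # e.g. a 10 MB experiment recording
    link = store.share(store.upload(video))
    assert store.download(link) == video   # the supervisor retrieves it via the link

In the proposal below, this role would be played by the cloud-based information server, with the public/private cloud split covering the security concerns noted above for wikis.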
5. A proposal of a cloud-based project supervision framework

The virtual organization of TMAC was established in a unique manner and is in operation under the global collaboration between TU and UTeM. To enhance the collaboration, some existing communication tools and collaboration systems are already in use. However, the computer network-based framework supporting this global collaboration has so far been based merely on the typical ICT tools mentioned in the previous sections. It is very critical to figure out how to enhance the global collaboration among TMAC staff, who are separated in two different countries. Therefore, a new cloud-based project supervision system, based on the idea of a cloud-based manufacturing framework, is under study to satisfy the needs of this unique organization.

The framework is composed of two types of cloud servers: a cloud-based information server and a cloud-based remote machine control server, as shown in Figure 6. The cloud-based information server is a seamless integration of an internet-based public cloud and an intranet-based private cloud. Data files can be stored on these servers and shared among the members seamlessly and with high security. As a result, the use of email attachments for data sharing would be avoided. The cloud-based remote machine control server is also an integration of public and private cloud servers, but it is directly connected to the machine tools to be controlled/monitored. As opposed to video monitoring via ICT tools, the user could directly operate/monitor the machine tools at the UTeM facilities from the J-TMAC office. The implementation of this framework is under study as a TMAC project.

Figure 6. Framework of the cloud-based project supervision system.

6. Concluding remarks

This paper gave an overview of TMAC and presented case studies of academic collaboration at TMAC under remote supervision. It then proposed the idea of a cloud-based supervision system framework to support the virtual team organization of TMAC towards global academic collaboration. Future work includes the development of the system based on this framework and its feasibility study at TMAC.

TMAC has a unique organizational structure based on a virtual team across the globe. Therefore, it is very critical to figure out how to enhance the global collaboration among TMAC staff, who work as the critical linkage between the two institutions. To enhance the collaboration, some existing communication tools and collaboration systems are already in use, as reported in this paper. However, a new type of cloud-based computing system is required to satisfy the specific needs of this unique organization.

References
[1] Japanese related companies in Malaysia, (2011), https://www.jetro.go.jp/malaysia/services/jpncoinmsia/index.html/JRC_Statistic.pdf
[2] MJIIT - Malaysia-Japan International Institute of Technology (2013), URL http://mjiit.utm.my
[3] JMTI - Japan-Malaysia Technical Institute (1998), URL http://www.jmti.gov.my/v1/
[4] MSSC (2013), URL https://www.kyutech.ac.jp/facilities/mssc/
[5] TUT-USM Penang (2013), URL ignite.tut.ac.jp/cie/penang/english/
[6] TMAC, (2014) URL http://www.my.emb-japan.go.jp/English/JIS/education/TMAC.html
[7] T. Ito, E. Mohammad and M.R.
Salleh, A Student Mobility Program through Cross-Cultural and Technical Exposure, JSME No.13-205, The 13th Design Engineering Workshop 2013, No.13, pp. 1–6, Kitakyushu, Nov. 2013.
[8] iDECON2014, URL http://idecon2014.utem.edu.my/
[9] iDECON2015, URL http://www.eng.osakafu-u.ac.jp/idecon2015/
[10] A.C. Lessing and S. Schulze, Lecturers' experience of postgraduate supervision in a distance education context: research in higher education, South African Journal of Higher Education, Vol. 17 (2003), No. 2, pp. 159–168.
[11] T. Schadler, Should Your Email Live In The Cloud? A Comparative Cost Analysis, Forrester Research, 2009.
[12] R. Kaur, S. Singh, and M. Rkshit, A Review of various Software Project Scheduling techniques, International Journal of Computer Science & Engineering Technology (IJCSET), Vol. 4 (2013), No. 7, pp. 877–882.
[13] R. Jeevitha and M.C. Kumar, Performance evaluation of video conferencing in various networking environments, International Journal of Research in Computer Applications and Robotics, Vol. 3 (2015), No. 2, pp. 105–110.
[14] T. Ito, A proposal of body movement-based interaction towards remote collaboration for concurrent engineering, Int. J. Agile Systems and Management, Vol. 7 (2014), Nos 3/4, pp. 365–382.
[15] T. Ito, Y. Kamamura, A. Tsuji, M. Hashizume and T. Moriga, Academic collaboration towards manufacturing system globalization, Proc. of Manufacturing System Division Conference 2015, No. 158, pp. 45–46. (in Japanese)
[16] M.S. Kasim, C.H. Haron, J.A. Ghani, M.A. Azam, R. Izamshah, M.A. Ali, M.S. Md Aziz, The influence of cutting parameter on heat generation in high-speed milling Inconel 718 under MQL condition, Journal of Scientific & Industrial Research, Vol. 73 (2013), pp. 62–65.
[17] R. Izamshah, N. Husna, M. Hadzley, M. Amran, M.S. Kasim, S. Amri, Determination on the Effect of Cutter Geometrical Feature for Machining Polyetheretherketone (PEEK) Using Taguchi Methods, Applied Mechanics and Materials, Vol. 699 (2015), pp. 192–197.
[18] A.P. Puvanasvaran, N. Jamibollah, and N. Norazlin, Integration of Poka Yoke Into Process Failure Mode And Effect Analysis: A Case Study, American Journal of Applied Sciences, 11(8), (2014), 1332–1342.
[19] P. Puvanasvaran, Y.S. Teoh, and C.C. Tay, Consideration of demand rate in overall equipment effectiveness (OEE) on equipment with constant process time, Journal of Industrial Engineering and Management, 6(2), (2013), 507–524.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-91

The Improved Global Supply Chain Material Management Process Framework for One-Stop Logistic Services

Abby P.T. Hsu a,1, Ai-Che Chang b, Amy J.C. Trappey a, Charles V. Trappey c, W.T. Lee d
a Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Hsinchu, Taiwan
b Institute for Information Industry, Taipei, Taiwan
c Department of Management Science, National Chiao Tung University, Hsinchu, Taiwan
d Technology Center for Service Industries, Industrial Technology Research Institute, Chutung, Taiwan

Abstract. This research focuses on the material procurement process improvement of a manufacturer under one-stop logistic services and proposes a to-be process offering vendor-managed inventory (VMI) and information integration services
provided by a one-stop logistic services provider (1SLP) to shorten the procurement and manufacturing lead time and to enhance information flow accuracy and transparency. The 1SLP is an integrator that assembles the resources, capabilities, and technologies of supply chain networks to design and implement comprehensive logistic solutions. The research develops the 1SLP process framework by dividing the service scope into four service models. We use a case example to demonstrate the improved process. The case company is a leading manufacturer of projectors for the global market. The current material procurement process causes long lead times, delays the manufacturing process, and does not integrate all information in the supply chain. In the to-be model, the 1SLP incorporates model 2 services, provides a VMI warehouse as a value-added service together with an integrated information platform to secure the buyer's inventory level, and implements appropriate logistics optimization for efficient delivery. The AnyLogic Simulation Software is used to model the current and the to-be business processes. The comparison between the as-is and the to-be models demonstrates that the improved material procurement process increases the current process's efficiency under the one-stop services without impeding product availability to the target market.

Keywords. one-stop service, vendor-managed inventory, material procurement

Introduction

Global enterprises need to broaden their cooperation and coordination with partners in the supply chain network to share risks, responsibilities, and profits [1]. The means and methods of integrating supply chain services, particularly for global logistics operations, are of growing importance. Traditional logistic services focus on the warehousing and shipping of physical goods. Nonetheless, with the rapid development of information technology and e-business models, logistic service industries have undergone several transformations. Logistic services are considered to be an integral part of ensuring the profitability of global enterprises. Trappey et al. proposed that the most urgent need of modern enterprises is to satisfy growing business demands with efficiency and transparency [2].

This research focuses on the improved material procurement process of a manufacturer under a 1SLP. The case company develops and produces projectors, with seven factories in China and its headquarters in Taiwan. The case company's current material procurement process causes long lead times, delays the manufacturing process, and lacks integration of information between the supply chain partners. Therefore, this research proposes an improved process offering vendor-managed inventory and integrated information platform services provided by a 1SLP. The process shortens the production and delivery lead time and enhances the transparency and efficiency of the required information flow. Since time and information are critical, implementing the one-stop logistic services with a 1SLP is a necessary option.

The paper is organized as follows. Section 1 reviews and discusses the background literature. The methodology is described in Section 2. Section 3 presents the case implementation, improvements, and results. Finally, the concluding remarks are provided in Section 4.

1 Corresponding Author, E-Mail: s103034529@m103.nthu.edu.tw
1. Literature review

The literature related to one-stop logistic services and vendor-managed inventory services is reviewed and discussed in this section.

1.1. One-stop logistic services

This research uses the concept of one-stop logistic services to assist the case company in improving its material procurement process. How to integrate and improve the diverse and valuable logistic services to help enterprises optimize their business operations is an important concern for logistic companies. As the complexity of supply chain network operations increases, each company must turn non-core functions over to contractors, enabling it to concentrate on its core operations. Therefore, companies attempt to outsource their logistic needs to professional service providers called third-party logistics (3PL) providers [3]. The 3PL service is regarded as a means to upgrade operations and enhance a manufacturer's competitiveness, as long as these services are effectively integrated into the manufacturer's order fulfillment processes [4].

The innovation of 1SLP services has been discussed by several authors. Integrated logistic service providers (ILSP) reflect the integration of overall logistic services, including general services, value-added services, and customized services [5]. Trappey et al. defined 1SLP services more comprehensively and classified the services into five parts: physical logistic services, information integration, value-added supply chain services, third-party cash management and sales services [6]. This research uses the 1SLP models to define the scopes and categories of the case company's needs in logistic services.

1.2. Vendor-managed inventory

Vendor-managed inventory (VMI) is a business model for improving multi-firm supply chain efficiency. In a VMI partnership, the supplier, usually the manufacturer, makes the inventory replenishment decisions for the consuming organization. This means that the vendor monitors the buyer's inventory levels and makes periodic replenishment decisions and actions regarding order quantities and shipping [7]. The benefits of VMI are cited by several authors. Disney and Towill [8] compared VMI with the traditional supply chain and stated that VMI can eliminate the bullwhip effect. VMI's transport operation costs and goods in transit are lower than in the traditional supply chain in both the short and long term [9]. An efficient VMI implementation depends on the information system. The information flow between manufacturers and suppliers is often in a state of uncertainty with chaotic communications. Traditional information sharing methods, such as EDI, lack flexibility. Hence, Liu and Sun applied the IoT to improve the effectiveness and convenience of the information flow in the VMI supply chain [10]. In this research, the purpose of the VMI service offered by the 1SLP is to connect the manufacturer with its suppliers, integrate the supply chain information and shorten the material procurement lead time.

2. Methodology

The methods and techniques used in the research are described in this section. This research introduces the framework of 1SLP services defined in four models (models 0-3), describes the case company's current material procurement process and proposes the improved process under a suitable one-stop logistic service model.
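Before the framework is detailed, the replenishment decision at the heart of the VMI service reviewed in Section 1.2 can be made concrete with a minimal Python sketch. The order-up-to policy used here is a common textbook choice and an assumption of ours; the cited works [7]-[10] do not prescribe a specific policy.

def vmi_replenishment(inventory_position: float,
                      reorder_point: float,
                      order_up_to_level: float) -> float:
    """Vendor-side decision: quantity to ship to the buyer this period.

    In a VMI partnership the vendor, not the buyer, monitors the
    inventory position and decides the replenishment quantity [7].
    """
    if inventory_position <= reorder_point:
        return order_up_to_level - inventory_position  # refill up to level S
    return 0.0  # still above the reorder point: ship nothing


if __name__ == "__main__":
    on_hand, s, S = 80.0, 40.0, 100.0   # illustrative starting stock and policy
    for day, demand in enumerate([10, 25, 30, 5, 15, 20, 10], start=1):
        on_hand -= demand                            # buyer consumes material
        shipped = vmi_replenishment(on_hand, s, S)   # vendor observes and decides
        on_hand += shipped
        print(f"day {day}: demand {demand:>2}, shipped {shipped:5.1f}, "
              f"on hand {on_hand:5.1f}")

The point of the sketch is the division of responsibility: the buyer only consumes, while the vendor observes the position and triggers shipments, which is the role the 1SLP's VMI warehouse plays in the to-be model below.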
2.1. The framework of 1SLP models

A one-stop logistic service provider (1SLP) is an integrator that assembles the resources, capabilities, and technologies of supply chain networks to design and implement comprehensive logistic solutions. The framework of 1SLP services was designed by Trappey et al. [11] and is defined in four models. As described in Table 1, a higher-level model contains the services of the previous model and enhances them with additional functions. Model 0 provides the basic physical logistics services. Model 1 focuses on the enhancement of information integration in addition to the physical logistics services and partial value-added logistics planning. Model 2, compared to model 1, offers more comprehensive information integration, value-added services and cash transactions. Model 3 is the highest level in the 1SLP models. It provides sales services which allow the 1SLP to manage the sales process after product manufacturing.

The detailed one-stop logistics needs identified in the interview with the case company are listed in Table 1. Regarding cash transactions, the case company only uses 30-day payment terms by remittance. The clients of the case company are engaged in selling projectors and marketing; hence the sales services offered by model 3 are not necessary. However, the 1SLP can assist the case company with model 2 services, including information integration, complete value-added services and material purchasing related operations. To implement model 2 of the 1SLP, this research focuses on the improvement of the case company's material procurement process to shorten the average lead time and enhance information transparency. In the next sections, the current material procurement process (as-is) and the improved process (to-be) are introduced respectively. In Section 3, the AnyLogic Simulation Software, providing a method and tool, models and simulates the proposed process improvements.

Table 1. Case company logistics needs under the 1SLP service models (a check mark denotes a demand of the case company).
Model 0: Distribution / Warehousing / Quality control (✓); Customs declaration / Reverse logistics (✓)
Model 1: Order management / Inventory status / Shipping notification (✓); Merchandise tracking (✓)
Model 2: Transportation optimization / Warehousing planning (✓); Consulting management / Optimization of resource allocation (✓); Purchase information / Purchase agreements (✓); Sales information / Accounts information; Cash receipt; Third-party payment / Accounts management / Monetary transactions
Model 3: Channel searches / Marketing management / Products displays

2.2. The manufacturer's as-is model

This section depicts the manufacturer's current material procurement process. The case company divides materials into three categories: non-key materials, key materials whose suppliers are located in China, and key materials whose suppliers are located in other countries. After the factory completes material requirements planning, the material demand is known and the procurement process is initiated. Each category of material has its own process. If the factory needs non-key materials or key materials from suppliers located in China, the factory purchases the materials itself. On the other hand, if the factory needs key materials from suppliers located in other countries, it sends the material demand to the headquarters for unified procurement across all factories.
In addition, the headquarters and the factories have an integrated information system to communicate and pass orders. When the headquarters and factories purchase material, the procurement is requested via email, EDI or the suppliers' information systems. When the material orders have been shipped to the factories, the manufacturer tracks the shipping status by phone, which requires a lot of manpower. These situations mean that the material procurement supply chain does not integrate information within the procurement process and cannot share important information. The current material procurement process causes long lead times and also delays the manufacturing process. If the customer order demands are variable, each factory has to prepare more inventory to respond to the variable orders. Therefore, this current process requires comprehensive services to solve these problems. In the next section, this research proposes the improved process using the model 2 services provided by a one-stop logistic provider (1SLP).

2.3. The manufacturer's to-be model

This section introduces the model 2 services provided by the 1SLP based on the case company's logistics needs. Model 2 in the improved process mainly focuses on information integration, comprehensive logistics value-added services and procurement operations. Therefore, the 1SLP offers a vendor-managed inventory (VMI) warehouse as a value-added service together with an integrated information platform, integrating the information flow of the supply chain and optimizing the material procurement process, resource allocation and transportation.

In the to-be model, the 1SLP provides the VMI warehouse to connect the case company with its suppliers. Since the factories are located in China, the 1SLP builds the VMI warehouse at a strategic location in China. The overall view and the physical flow, information flow and cash flow between the roles are shown in Figure 1.

Figure 1. The overall view of the manufacturer under the 1SLP's improved model 2 services.

The detailed operation process of the manufacturer's to-be model is illustrated in Figure 2. The headquarters issues blanket orders, based on demand predictions and including material quantities and multiple delivery periods, to the suppliers via the information platform, and the suppliers reply. After the suppliers confirm the material order, the VMI warehouse automatically replenishes each category of material and updates the receiving time, receiving quantity and inventory status on the information platform. When one of the factories needs materials, it sends a "call-off" message to the information platform and the 1SLP delivers those materials from the VMI warehouse without delay. After the factory receives the materials, it sends the receiving record to the information platform to verify the shipping quantity and shipping time while synchronizing the VMI's records [12]. In the to-be model, to enhance information accuracy and transparency and reduce the manpower requirement, the 1SLP provides every role in the supply chain with an information platform for placing purchase orders and exchanging information in real time. To optimize resource allocation and transportation, the 1SLP continuously controls the inventory and conducts logistics planning.
Therefore, aggregating all material inventory in the VMI warehouse provided by the 1SLP can not only decrease the lead time of the material procurement process, but also diminish the uncertainty of material supply and help reduce the reserve inventory required in each factory without hurting product availability.

Figure 2. The operation process of the manufacturer under the 1SLP's model 2 services (to-be model).

3. Case implementation

The purpose of this section is to construct the manufacturer's current material procurement processes and the improved processes for comparison using the AnyLogic simulation software.

3.1. Simulation scenario

Before illustrating the as-is and to-be processes, this section introduces the simulation scenario. The simulation scenario assumes the case company has only one product, which contains three categories of material. The simulation starts when an order is assigned to one factory and ends when manufacturing is completed. The simulation period is set to 365 days.

3.2. As-is process modeling

The as-is process is depicted in Figure 3. Each block in the process has its own properties. For example, the block named "Key Material Delivery from Supplier_Other" may take 7 to 20 days depending on the supplier location.

Figure 3. As-is process modeling.

3.3. To-be process modeling

The detailed to-be process is depicted in Figure 4. Each block in the process has its own properties. For example, the block named "Delivery from VMI" may take 0.5 to 1.5 days depending on the factory location.

Figure 4. To-be process modeling.

3.4. Simulation comparison

The simulation result is presented in Table 2. The average lead time of the improved process, including procurement and manufacturing, is shorter than that of the current process, and the number of orders processed is greater. Incorporating the model 2 services of the one-stop logistic provider (1SLP) into the to-be model improves the current process's efficiency through the VMI and the integrated information platform. In the as-is model, the processes of purchasing each category of materials are complex, especially the key material procurement. Considering the procurement cost, price discounts and the suppliers' throughputs, the case company in the as-is model adopts unified procurement for key materials from the suppliers located in other countries. However, before the headquarters purchases material, the time spent waiting for every factory's purchasing requirements is very long. The to-be model using the VMI service allows the case company to centralize purchasing directly, which eliminates duplicated purchasing operations, decreases purchasing time, and yields better material prices and better control of inventory. In addition, the 1SLP's integrated information platform accelerates the information flow, reduces the manpower requirement, and decreases the time spent purchasing and tracking merchandise. Moreover, the number of annual orders processed increases as the average lead time improves. The increase in orders processed may benefit the manufacturer in order fill rates and customer satisfaction levels.

Table 2. The comparison of the simulation results.
Model | As-Is | To-Be | Difference
Annual orders processed | 25 | 59 | +34
Avg. lead time (days) | 14.6 | 6.19 | -8.41
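The authors' comparison was produced with AnyLogic; as a rough illustration of the mechanism behind Table 2, the following Monte-Carlo sketch in Python contrasts the two delivery steps using the only two durations stated above (7 to 20 days for overseas key-material delivery in the as-is model, 0.5 to 1.5 days for delivery from the VMI warehouse in the to-be model). The durations of the remaining process steps are our assumptions, so the printed numbers are illustrative and will not reproduce Table 2.

import random

random.seed(42)


def as_is_lead_time() -> float:
    # Overseas key-material delivery takes 7-20 days (block property, Figure 3);
    # the duration of the remaining ordering/manufacturing steps is assumed.
    return random.uniform(7, 20) + random.uniform(3, 8)


def to_be_lead_time() -> float:
    # Delivery from the VMI warehouse takes 0.5-1.5 days (block property,
    # Figure 4); the remaining steps are again assumed, shortened by the platform.
    return random.uniform(0.5, 1.5) + random.uniform(2, 6)


if __name__ == "__main__":
    n = 10_000
    as_is = sum(as_is_lead_time() for _ in range(n)) / n
    to_be = sum(to_be_lead_time() for _ in range(n)) / n
    print(f"avg as-is lead time: {as_is:5.2f} days")
    print(f"avg to-be lead time: {to_be:5.2f} days")
    print(f"difference:          {to_be - as_is:+5.2f} days")

Even with crude assumptions, the dominance of the delivery leg makes clear why replacing a 7-20 day overseas shipment with a sub-two-day VMI delivery compresses the average lead time so strongly.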
4. Conclusions

The 1SLP provides enterprises with comprehensive services and integrated resources for supply chain members, and its services are designed into four models. This research applies model 2, offering physical goods logistics, information integration and value-added services, to part of the case company's supply chain. The simulation modeling and comparison show the improvement of the material procurement process through a decreased average lead time and increased orders processed. After manufacturing, the case company has to distribute the products by different means and to different destinations. At strategic locations, there are hubs managed by third-party logistics providers offering warehousing, distribution, packing and other value-added processing services. If the upstream and downstream parts of the supply chain are tightly integrated, management can be more flexible and efficient by keeping constantly aware of relevant information. Therefore, it is suggested that researchers extend the concept of the 1SLP to improve the entire supply chain.

5. Acknowledgement

This research is partially supported by the Ministry of Science and Technology and the Industrial Technology Research Institute in Taiwan.

References
[1] C.V. Trappey, A.J.C. Trappey, G.Y.P. Lin, C.S. Liu, and W.T. Lee, Business and logistic hub integration to facilitate global supply chain linkage, Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 221(7) (2007), 1221–1233.
[2] C.V. Trappey, A.J.C. Trappey, C.S. Liu, W.T. Lee, and Y.L. Hung, The design and evaluation of a supply chain logistic hub for automobile and parts distribution, Materials Science Forum 594 (2008), 119–131.
[3] K.A. Reeves, F. Caliskan, and O. Ozcan, Outsourcing distribution and logistics services within the automotive supplier industry, Transportation Research Part E: Logistics and Transportation Review 46(3) (2010), 459–468.
[4] C.V. Trappey, A.J.C. Trappey, A.Y.L. Huang, and G.Y.P. Lin, Automobile manufacturing logistic service management and decision support using classification and clustering methodologies, in: S.Y. Chou et al. (eds.) Global Perspective for Competitive Enterprise, Economy and Ecology, Springer London (2009), pp. 581–592.
[5] Y.Q. Shi, C.F. Hu, Z.Y. Zhang, and G.Y. Shu, Logistics service innovation based on the service innovation model for ILSP, The 8th International Conference on Supply Chain Management and Information Systems (SCMIS), Hong Kong, October 6–8 (2010), 1–3.
[6] A.J.C. Trappey, C.V. Trappey, D.W. Dai, S.W. Chang, and W.T. Lee, The implementation of global logistic services using one-stop logistics management, The 18th International Conference on Computer Supported Cooperative Work in Design (CSCWD), Taiwan, May 21–23 (2014), 307–312.
[7] M. Waller, M.E. Johnson, and T. Davis, Vendor-managed inventory in the retail supply chain, Journal of Business Logistics 20 (1999), 183–204.
[8] S.M. Disney, D.R. Towill, The effect of vendor managed inventory (VMI) dynamics on the Bullwhip Effect in supply chains, International Journal of Production Economics 85(2) (2003), 199–215.
[9] S.M. Disney, A.T. Potter, and B.M. Gardner, The impact of vendor managed inventory on transport operations, Transportation Research Part E: Logistics and Transportation Review 39(5) (2003), 363–380.
[10] X. Liu, Y.
Sun, Information flow management of Vendor-Managed Inventory system in automobile parts inbound logistics based on Internet of Things, Journal of Software 6(7) (2011), 1374–1380.
[11] A.J.C. Trappey, C.V. Trappey, S.W.C. Chang, W.T. Lee, and T.N. Hsu, A one-stop logistic services framework supporting global supply chain collaboration, Journal of Systems Science and Systems Engineering (2015) (accepted).
[12] eChannelOpen Inc., Accessed: 3/25/2015. [Online]. Available: http://www.echannelopen.com/solution/valuepromotion/index.htm

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-100

Using the "Model-based Systems Engineering" Technique for Multidisciplinary System Development

Carolin ECKL a,1, Dr. Markus BRANDSTÄTTER a,b and Dr. Josip STJEPANDIĆ b
a Technische Universität München, Institute of Astronautics, Garching (Germany)
b PROSTEP AG, Darmstadt (Germany)

Abstract. "Model-based Systems Engineering" is currently a hot topic at INCOSE (International Council on Systems Engineering). It involves multidisciplinary development based on the usage of models as the main artifact. The frequent use of models during the development of the pico-satellite MOVE (Munich Orbital Verification Experiment) can be attributed to the long history of the chair for astronautics at the TU München with Systems Engineering. The development of MOVE displayed many of the characteristics of a real-world multidisciplinary engineering project and resulted in a successful space flight of the engineered satellite. Within the satellite, communication was led through a central bus between the different components and required expertise and coordination from all of the involved disciplines. An equivalent task of distributing information and energy can be found in automotive engineering: in the wire-harness. In contrast to the satellite bus, it does not distribute centrally created coordination commands, but supports the orchestration between distributed systems. Even though these two systems and their development processes are inherently different, they exhibit similar difficulties during their design phase (e.g. with compatibility) and can be modeled similarly. This paper uses the design of satellite bus systems and automotive wire-harnesses as examples, describes their common pitfalls, explains "Model-based Systems Engineering" and demonstrates how the development of communication systems in both satellite and automotive engineering can benefit from relying on it in early design and concept phases.

Keywords. Systems Engineering, Model, Model-based Systems Engineering.

Introduction

The development of most technical products involves specialists from different engineering domains. A modern communication interface, for example, requires both electrical knowledge to transmit signals as well as an abstract understanding of the protocol and the hardware required to send/receive the signals. Additionally, flexibility, connectivity and performance gains demand this separation into the realms of different domains in order to be realizable at all. The number of domains involved in the development of a product influences the number of different components, because a typical breakdown of the tasks of a system regards the boundaries of domain knowledge.
The increase in the number of different and new components as well as the flexibility required from each of the components increases the complexity of the system [1]. Automotive development is at one of the extremes of complex product development, as it requires a lot of very flexible components, which interact in various configurations and variations [2]. At the other end of the spectrum, satellite development produces complex systems, where many components have to be newly developed and are specifically engineered to interact with each other. Most other engineering domains exhibit some characteristics of both types of domain presented in this paper [3].

The most important common feature of both domains is the involvement of engineers from various domains. As a result, methods for fostering multidisciplinary cooperation and alleviating the risks introduced by these challenges have been on the agenda of both engineering branches for some time, e.g. Model-based Systems Engineering (MBSE) at INCOSE (International Council on Systems Engineering) [4,5]. For example, MBSE for multidisciplinary teams has been prototyped by the German chapter of INCOSE (GfSE [6]), an organization with origins in the space industry, in the "Telescope systems modelling by SE^2" and "Space Systems Modelling" [7] projects.

1 Corresponding Author, E-Mail: c.eckl@tum.de

1. Engineering (Bus) Systems Differently

Differences in engineering between automotive and space derive mainly from the differences in the contexts and are detailed in the following subsections for the example of their bus systems.

1.1. Satellite Engineering and Bus Systems

Almost all satellite development is initiated by a customer order. The customer's use cases provide the basis for the requirements analysis. The resulting requirements reflect the wishes of only one customer for a certain purpose. Typically, this customer-driven development leads to the engineering of a single (or a few similar) satellites without the need for variation. Reliability is typically one of the highest aims due to the high system costs and the impracticality of repair in orbit.

Satellites are built with a similar general structure [8], which contains a bus system [9] that comprises all components contributing to the life support of the satellite (e.g. the power unit or the attitude control system). Equally important is the payload of the satellite, which is defined by the satellite mission. For example, if the mission were to take pictures of the earth, the payload would contain a camera to take them. Additionally, the satellite typically contains a mechanical connection of all components of the satellite (the structure), a power supply such as solar cells and battery, an attitude determination and control system (ADCS), which orientates the satellite in space, a communication unit for communicating with the ground station, a central steering unit (the on-board computer) and a thermal system, which regulates the temperature budget of the satellite. The full system "satellite" also comprises the launcher and the ground station, which largely contribute to the success of the satellite's mission.

Few suppliers are involved in the development of one of these spacecraft. For the sake of certification of the whole system, each of them has to provide full documentation of the delivered components. Especially the payload of the satellite is
usually created by one highly specialized supplier (in the case of scientific satellites it is typically the customer), who develops independently and delivers the built and tested payload. In parallel, the satellite structure is created by the developer of the satellite specifically for this satellite system. It includes a specific bus system if no commercial bus system fits the purpose. Finally, the payload and all the other satellite components are integrated and tested as a whole.

Figure 1. Simplified structure of the satellite system "MOVE".

All parts of a satellite are steered centrally by the "on-board computer" (which is part of the satellite bus). Its signals are distributed and routed to the components of the satellite through the satellite harness. Because of the dedicated master represented by the central "on-board computer", no special coordination of components for signal transmission and bus arbitration on the harness is necessary. This architectural feature contributes to the determinism and testability (and therefore reliability) of the bus system – mutual interferences and unwanted communication via the satellite harness are improbable. Each of the signals transmitted by the bus system has two facets: an observable electrical manifestation on the satellite harness and the contained information. The information can be viewed as a software signal, which is virtually transmitted between the encoder (converting the information to electrical signals) and the decoder (converting the signal back to information). Even though there are commercial-off-the-shelf alternatives for (partial) bus systems in satellites, the higher communication layers of the harness, conveying more abstract information rather than signals, have to be created specifically to allow communication with the specialized payload. In order to send and receive correct information on all of these communication layers, close communication between the supplier of the satellite bus and the developers of the components is required.

Another mechanism to increase the determinism is the state-based behavior of the satellite. This means that the satellite has at least the states "initialization mode" (the initialization phase), "nominal mode" (normal operation) and "fail-safe mode" (for error handling) [8,10]. The "on-board computer" knows the required actions for all of these states and distributes the knowledge about the current mode to the components. A transition from one state into another requires a certain trigger and/or condition to hold.

Almost all of the standard satellite components are also visible in the student project to engineer the very small satellite MOVE [11], which was developed at the chair for astronautics of the TU München. The development exhibited many of the characteristics of a real-world satellite development in its multidisciplinary approach and resulted in a successful space flight. The models, which were created after the development, show the structure and behavior of this concrete satellite.

In general, the development of technical systems such as satellites in the space industry is steered by a Systems Engineering group [12], which is responsible for the coordination and distribution of design information. It collects design information and generates an abstract model of the whole system.
The objective of the model is to provide an overview of the system for the involved engineers: the general context, behavior and structure of connected components [13]. Especially the use of an overview model during the early development phases has been well tested in the so-called concurrent design facilities [14]. During the course of the development, the Systems Engineering group continues to enhance the model. If connections to development models (such as models from CAD (Computer-aided Design), FEM (Finite Element Method) or software descriptions) are required, they are handled by links (e.g. via the OSLC (Open Services for Lifecycle Collaboration) protocol [15]). These development models detail the components, which have an abstract representation in the system model. This system model is best used throughout all development phases, especially during the early system conception and for the central component. In the case of MOVE, it is engineered in SysML (Systems Modeling Language) [16] (as displayed in Figure 1), which is used for all models in this paper. Figure 2 displays the detailed internal connections of the satellite, including the satellite bus system and its connection to the payload.

Figure 2. Details of the connections within the satellite. Usages of ports – these connections are realized in the system.

1.2. Automotive Engineering and its Bus Systems

The trigger of a development project in the automotive industry is not a customer order, but comes from the organization itself based on market analysis and studies. Abstract use cases for the product have to be anticipated. The development is based upon a "master plan", which contains all components of the car and is detailed during the course of development. The master plan lays out the component development on a time schedule, but does not track connections between the developed components. Geometrical aspects of the components are also captured in a common model that provides a sketch of the completed car, but does not hold any invisible, intangible information such as behavior or software. There is no concrete central model of the system which could provide an overview of non-geometrical connections between the components of a car. A lot of specialized design models (e.g. mechanical and electrical CAD models) are created during the early phases. The documents containing these engineering models are coordinated by Product Data Management (PDM – see e.g. [17]) systems, which contain all of the required information and may be exported to other systems. These systems contain the references to the separate models and provide possibilities to create links between them, but do not make the contents accessible for adding connections to parts of other models. A detail of the mechanical CAD model of the ignition switch of the car, for example, cannot be connected to its electrical signal, which is specified within a document containing the whole communication across a wire harness.

Every automotive development project leads to a large variety of vehicles: the customer determines the exact configuration of the car from a wide range of variation possibilities. This means that almost all individual cars are built differently. The car is built after the order is placed, but no longer undergoes a complete test. Due to the large variability, not all of the cars that can be configured can be built for testing.
Therefore, each configuration of closely connected components (of which there are far fewer than actual car configurations) has to be identified and tested before production. Since the exact configuration is not known at design time, the organization of components/control units has to be flexible. Flexibility is introduced by using bus systems, which do not require a receiver/sender at every port, and by cooperative communication, which is not steered by a central unit but rather orchestrated between the control units. In some cases, smaller components are directly steered by a composite component. The internal state of a car is defined by the current state of each component. This leads to a myriad of global states, because all combinations of component states have to be regarded. A transition cannot be defined clearly, because any of the contributing components may trigger the transition. All components which are connected to the bus system communicate with each other through this channel. Standard frameworks (such as the CAN (Controller Area Network) bus protocol [18], [19] or the MOST (Media Oriented Systems Transport) protocol [20], as described in [21]) and drivers for accessing the bus infrastructure are available – especially for extracting information from the communication. Additionally, frameworks for supporting the development (e.g. AUTOSAR [22]) are widely available. Even though this infrastructure is readily available, the bus, including the attached components, has to be thoroughly tested to rule out unwanted effects of one component on another. Each of the components that have to work together may come from a different supplier, as many suppliers are involved in the development of automotive components. Since each supplier develops its components independently and there is no immediate need for certification, the documentation remains with the suppliers. The application of SysML for model-based systems engineering (MBSE) has not been widely adopted in an automotive context, and there is no model of the whole system. Systems Engineering is usually applied on a smaller level to steer the development within one department. Therefore, the following models depict the rough structure of a fictional car and its bus system. In contrast to the satellite model, in which the "on-board computer" is responsible for coordinating the whole system, the car contains many control units which organize themselves by listening on a bus system to receive a free communication slot. Communication starts when a free time slot is detected. Similar to the satellite model, the bus system connects all control units structurally. Modern cars contain several bus systems for specialized tasks within the vehicle. Basic functions are, for example, steered through the CAN bus [18], [19], whereas entertainment functions are handled by the MOST bus [20], [21]. Because of the orchestration of components without a central control unit, all communication paths (including the paths of "virtual" software signals) as well as possible interferences with other signals have to be known to understand the communication between the components; a simplified sketch of such decentralized arbitration follows.
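As a deliberately simplified illustration of such decentralized coordination, the following Python fragment mimics CAN-style priority arbitration, where the pending frame with the lowest (dominant) identifier wins the bus; it is not an implementation of the ISO 11898 protocol, and the control-unit names and identifiers are invented.

```python
def arbitrate(pending):
    """Among all control units that want to transmit, the frame with the
    lowest identifier wins the bus; the others back off and retry later."""
    return min(pending, key=pending.get)


# Hypothetical control units with message identifiers (lower = higher priority).
pending = {"engine_ecu": 0x100, "brake_ecu": 0x0A0, "infotainment": 0x400}
assert arbitrate(pending) == "brake_ecu"
```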
2. Differences between Automotive and Space Engineering in Model-based Systems Engineering

As outlined in the previous section, both the engineered product (including the use of its bus systems) and the model-based systems engineering experience vary between the very different contexts of the automotive and space industries.

2.1. Stakeholder Analysis, Use Case Creation and Requirements Elicitation

In the development of spacecraft, the customer is known before the development is started. The customer issues the order. In contrast, the automotive engineer does not know the concrete customer, but works with a scheme developed from customer analyses and market studies. Both types of customers lead to a similar stakeholder analysis, with a more concrete or more abstract definition of the stakeholder "customer". The creation of use cases and the derivation of requirements from the use cases can be modeled equally in both contexts.

2.2. Structural Modeling

The models of the structure of a satellite and of a car differ only in small parts. A satellite model contains one representation for each component of the satellite. This one variant of the component is used within the model of the concrete satellite. This instance model is exactly equivalent to the customer order. Figure 1 displays the structure of a satellite on an abstract level. The structural model of a car contains – in general – many different components that could theoretically be built into a car. In contrast to the single-variant instance model (which would be a "100% model"), this is termed a "150% system model" (see e.g. [23]). Because of the variety of components, one instance model of a car cannot contain all possible variants. Figure 3 displays the "150% system model" of a car on an abstract level, where the customer can order at most one navigation system, in variant "Standard" or "Exclusive".

Figure 3. Structural model of a 150% car.

This type of structural model displays all possible connections between components, but does not show in detail which alternatives can be composed. Figure 3 displays that none or one type of navigation system is used – the semantic relationship between these connections is not explained in detail (the model would allow for choosing both navigation systems in parallel). This relationship is fairly simple and can be annotated, but when more variants are introduced, a full combinatorial view of the relationships is not possible.

Figure 4. Model of an instance of a car with a standard navigation system.

Instance models describing one concrete car can be derived, which serve as witnesses for correctly composed choices. One of the associated instances, or "100% models", is displayed in Figure 4 – the car where the customer chooses the standard navigation system. A minimal sketch of such a configuration check is given below.
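The Python sketch below, assuming a hypothetical two-slot catalog, makes the missing semantics explicit: the 150% catalog records how many alternatives each slot admits, and a 100% instance is only derived if the order respects those bounds.

```python
# Hypothetical 150% catalog: each slot lists its alternatives and how many
# of them may be chosen at once ("navigation" allows zero or one variant).
CATALOG = {
    "navigation": {"variants": {"Standard", "Exclusive"}, "min": 0, "max": 1},
    "engine":     {"variants": {"Diesel", "Petrol"},      "min": 1, "max": 1},
}


def derive_100_percent(order):
    """Validate a customer order against the 150% model and return it as a
    100% instance configuration; raise on an invalid combination."""
    for slot, rule in CATALOG.items():
        chosen = order.get(slot, set())
        if not chosen <= rule["variants"]:
            raise ValueError(f"unknown variant in slot {slot!r}")
        if not rule["min"] <= len(chosen) <= rule["max"]:
            raise ValueError(f"slot {slot!r} admits {rule['min']}..{rule['max']} choices")
    return order


derive_100_percent({"navigation": {"Standard"}, "engine": {"Diesel"}})   # valid
# derive_100_percent({"navigation": {"Standard", "Exclusive"},
#                     "engine": {"Diesel"}})                              # raises
```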
2.3. Behavioral Modeling

The biggest differences between the models lie in the type and complexity of the behavioral model. Since the satellite components are centrally controlled by the master control unit "on-board computer", most communication and component activities occur sequentially (Figure 5).

Figure 5. SysML state chart of a satellite including the sequential progress during its initialization.

Figure 6. SysML activity diagram of parallel activities during the initialization of an automobile.

The model of the system behavior distinguishes between models of the states, the activities and the sequences of collaboration. States are modeled similarly for the satellite and the car. The level at which this occurs is different – the satellite has defined global states, whereas the car requires the components to be defined first, after which a state machine is assigned to each of the components. The difference in the activity models is the sequential description for the satellite versus a description of highly parallel and concurrent activities in the car. The same holds for models of the communication and collaboration sequences. Even though the satellite exhibits a sequential structure at the abstract level, modeling more details leads to increased concurrency at more concrete levels. This is contrary to the model of the automobile, which exhibits concurrency at each level, but displays more determinism in the details of the components (Figure 6).

2.4. Usage of the Models

Not only the behavioral models, but also the priorities for their usage differ greatly, since the engineering of satellites is basically a one-time development, whereas automotive engineering focuses on varying, multiple realizations of a certain model. In the context of satellite development, a common, coarse model of the system supports the synchronization of views on an abstract level. The layout and behavior of the satellite system determine the necessary interfaces between the different components, which have to be realized and possibly defined. The definition of the interfaces and connections between the components provides mild support for finding simple compatibility issues early on. Also, using the abstract common model in the early phases of development allows for a less costly exploration of the implications posed by special requirements. Several alternatives can be modeled, communicated, discussed and evaluated without actually building flight hardware. And finally, the model documents the decisions made during the development of the satellite. These decisions can be the basis for knowledge transfer to subsequent satellite development projects. Automotive engineering also benefits from a common model for the synchronization of the involved disciplines. This synchronization is especially important for the interface definitions. As each car has to be defined in a variety of variants, one interface often has to be used by several components with similar functionality (such as the navigation systems in the previous example). The common "150% model" helps in finding the components connected to an interface which are affected by changes (both of the structural interface and of the behavior supplying it). Since the components connected by one interface are defined by the common model, it can also be used as a basis for selecting groups of closely connected components for systematic testing. Additionally, all possible "100% models" of the car can be created combinatorially from the model, to be used as witnesses for extreme configurations. Finally, the model can be used to document the development for reuse purposes and to satisfy process requirements (such as those imposed by [24], [25]).

3. Conclusion

Satellites and automobiles are inherently different in the main objectives that underlie their development: satellites are made to order, while automobiles are constructed with variants that can be composed in a way that suits the customer.
The satellite and automotive domains are nevertheless similar in some of the ways they construct products. Satellites are one-of-a-kind developments, which require a certain amount of manual design for each product and necessitate high reliability of all components. Cars also require manual design of each group of similar choices in separate, descriptive instances, but do this on the basis of a catalog of different applicable variants. Both domains can benefit from model-based systems development using a central model which describes the development object in detail – but in different ways: the satellite developer mainly from synchronizing global views, the inexpensive exploration of design alternatives and knowledge transfer to subsequent projects; the automobile developer from finding components affected by interface changes, validating concrete combinations of components and documenting the development.

References
[1] J.W.S. Pringle, On the Parallel between Learning and Evolution, Behaviour, Vol. 3, No. 3 (Jan. 1951), pp. 174–215.
[2] A. Katzenbach, Automotive, in: J. Stjepandić et al. (eds.): Concurrent Engineering in the 21st Century: Foundations, Developments and Challenges, Springer International Publishing, Cham, 2015, pp. 607–638.
[3] R.M. Kolonay, A physics-based distributed collaborative design process for military aerospace vehicle development and technology assessment, Int. J. Agile Systems and Management, Vol. 7, 2014, Nos. 3/4, pp. 242–260.
[4] International Council on Systems Engineering, http://www.incose.org/, accessed 04.04.2015.
[5] S. Friedenthal, R. Griego and M. Sampson, INCOSE model based systems engineering (MBSE) initiative, in: INCOSE 2007 Symposium.
[6] Gesellschaft für Systems Engineering e.V., http://www.gfse.de/, accessed 04.04.2015.
[7] Model Based Systems Engineering, http://mbse.gfse.de/, accessed 04.04.2015.
[8] W.J. Larson and J.R. Wertz (eds.), Space Mission Analysis and Design, 3rd edition, Microcosm, 1999.
[9] Spacecraft bus subsystems, http://www.lr.tudelft.nl/en/organisation/departments/space-engineering/spacesystemsengineering/expertise-areas/spacecraft-engineering/design-and-analysis/configurationdesign/subsystems/subsystems/, accessed 04.04.2015.
[10] A. Peukert, Spacecraft Architectures Using Commercial Off-The-Shelf Components, Technische Universität München – Lehrstuhl für Raumfahrttechnik, 2008.
[11] MOVE – Munich Orbital Verification Experiment, http://move2space.de/, accessed 04.04.2015.
[12] NASA Systems Engineering Handbook, 2007, http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20080008301.pdf, accessed 04.04.2015.
[13] R.C. Beckett, Functional system maps as boundary objects in complex system development, Int. J. Agile Systems and Management, Vol. 8, 2015, No. 1, pp. 53–69.
[14] M. Bandecchi, B. Melton, B. Gardini and F. Ongaro, The ESA/ESTEC Concurrent Design Facility, Systems Engineering, 2000.
[15] Open Services for Lifecycle Collaboration, http://open-services.net/, accessed 04.04.2015.
[16] OMG Systems Modeling Language, Version 1.3, Technical Report formal/2012-06-01, Object Management Group, 2012.
[17] J. Stark, Product Lifecycle Management – Volume 1: 21st Century Paradigm for Product Realisation, 3rd ed., Springer, Cham, 2015.
[18] Road vehicles – Controller area network (CAN) – Part 1: Data link layer and physical signalling, ISO 11898-1:2003, International Organization for Standardization (ISO), 2003.
[19] CAN Specification Version 2.0, Robert Bosch GmbH, 1991.
[20] MOST Specification Rev. 3.0 Errata 2, MOST Cooperation, 2010.
[21] W. Zimmermann and R. Schmidgall, Bussysteme in der Fahrzeugtechnik, Springer Fachmedien, Wiesbaden, 2014.
[22] AUTOSAR, http://www.autosar.org/, accessed 04.04.2015.
[23] A. Seiberts, M. Brandstätter and K. Schreiber, Kompositionales Variantenmanagement – Ganzheitlicher Ansatz zur Komplexitätsbeherrschung im Systems Engineering Umfeld, Tag des Systems Engineerings, 2012.
[24] Road vehicles – Functional safety – Part 5: Product development at the hardware level, ISO 26262-5:2011, International Organization for Standardization (ISO), 2011.
[25] Road vehicles – Functional safety – Part 6: Product development at the software level, ISO 26262-6:2011, International Organization for Standardization (ISO), 2011.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-110

Aircraft Bi-level Life Cycle Cost Estimation

Xiaojia ZHAO a,1, Wim J.C. VERHAGEN a and Richard CURRAN a
a Delft University of Technology, Kluyverweg 1, 2629 HS Delft, The Netherlands
1 Corresponding Author. PhD candidate, Faculty of Aerospace Engineering, Air Transport and Operations. E-mail: X.Zhao-1@tudelft.nl.

Abstract. In an integrated aircraft design and analysis practice, Life Cycle Cost (LCC) is essential for decision making. The LCC of an aircraft is ordinarily only partially estimated, with emphasis on a specific cost type; an overview of the LCC, including design and development cost, production cost, operating cost and disposal cost, is then not provided. This may produce biased cost estimates. Moreover, aircraft LCC estimation is largely dependent on the availability of input parameters, and it is often a problem for the analyst to feed a limited set of data into a detailed cost estimation process. Therefore, it is necessary to provide flexibility in conducting both high level and detail level LCC assessments based on data accessibility. An input-dependent bi-level LCC estimation method is proposed. It illustrates the comprehensive estimation of the cost elements in the LCC, with clearly defined high level and detail level analyses that form the final cost. Knowledge of the product and the life cycle process is structured based on a predefined meta-model and logic rules. The cost is then evaluated by traversing the meta-model linked with computing capabilities. The method is applied in a case study concerning the A330-200 aircraft. With the support of weight estimation and bottom-up, process-based parametric cost estimation methods, it builds up a practical costing approach for quantifying the influence of the LCC across the product life cycle.

Keywords. Life Cycle Cost, cost estimation, design for cost

Introduction

LCC analysis was initiated in the early 1970s by the US Department of Defense (DoD) [1-4]. It aimed at providing guidelines for equipment/system procurement. Gradually, life cycle costing has been employed to support decision making. Various authors have reviewed the state of the art in LCC analysis over the years [5-9]. In summary, LCC estimation tends to achieve accurate engineering simulations by modelling the product and the relevant processes, identifying cost compositions, evaluating cost driving parameters, and establishing analytical relationships, especially parametric Cost Estimation Relationships (CERs).
When reviewing the recent research on LCC estimation, most LCC models are dedicated to certain LCC components, such as manufacturing cost and operating cost. However, an integrated and systematic LCC estimation methodology is still missing. Furthermore, most LCC analyses depend strongly on the availability of the input parameters and their level of detail, which creates obstacles for analyses with limited resources. This paper presents an input-dependent bi-level LCC estimation methodology built on both high level and detail level costing methods. Emphasis is also placed on the integration of the cost module with the aircraft geometry and life cycle process details. In Section 1, the proposed framework is illustrated along with the corresponding costing methods. Section 2 shows an initial application of the method to an A330 Aircraft (A/C) case. Next, conclusions considering the framework implementation and challenges are highlighted, followed by a discussion of future steps for this development.

1. Methodology

The framework addresses a systematic LCC estimation process. Two levels of analysis methods, based on the availability of input parameters, are developed; see Figure 1. If only a weight estimation can be conducted, based on the geometric and material parameters, a high level LCC estimation adopting weight as the main cost driving parameter is implemented; if the parameters and rules relevant to geometry re-segmentation and process planning are available, the detail level cost estimation using an extended Bill of Materials (BOM) is implemented. The process planning is conducted based on the process meta-model and operational rules to obtain a product-specific LCC process. Thereafter, an extended BOM containing lists of product properties is generated for the detail level LCC estimation. After all the cost elements needed for the economic indices are obtained, the cost estimation is completed. If the cost estimation method is not designated in advance, the detail level LCC estimation supersedes the high level LCC estimation. The high level LCC estimation method is only applied when there is not enough data available for the detail level LCC estimation, i.e. when the parameters for the detail level LCC estimation are missing or cannot be derived; a sketch of this selection logic is given below.

Figure 1. Bi-level cost estimation framework.
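A minimal Python sketch of this input-dependent selection, with illustrative parameter names standing in for the real data checks, could look as follows.

```python
def select_costing_level(available,
                         detail_required,
                         weight_required=frozenset({"geometry", "material"})):
    """Prefer the detail-level estimate whenever all of its parameters can be
    supplied or derived; otherwise fall back to the weight-driven high-level
    estimate. Parameter names are placeholders, not the paper's data model."""
    if detail_required <= available:
        return "detail-level"
    if weight_required <= available:
        return "high-level"
    raise ValueError("not enough data for either LCC estimate")


level = select_costing_level(
    available={"geometry", "material"},
    detail_required={"geometry", "material", "process_rules", "extended_BOM"},
)
assert level == "high-level"
```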
1.1. Aircraft component in life cycle process

– Product model: Based on the cost type to be evaluated, the parameters needed for the estimation vary correspondingly. This leads to a pre-processing step of generating categorized data groups, which are specific to certain geometry properties and processes. Therefore, the level of detail and the emphases of the product models needed for the different cost types are distinct; at the same time, they are all based on the same master geometry. For the high level cost estimation in this research, only weight information evaluated from the master geometry is needed, whereas for the detail level cost estimation, geometry re-segmentations and/or extraction of cost-type-specific properties are needed for all four cost types involved in the LCC.

– Life cycle process model: The A/C life cycle is processed in four major phases: Research, Development, Test and Evaluation (RDT&E); production; operating & maintenance; and disposal & recycling. Each phase is further elaborated into a respective process flow meta-model. The activities involved in a specific A/C life cycle are predicted based on the process flow meta-models and the rules used for deriving detailed activities according to the design and process properties. An example of an applicable process model for the operation and maintenance life cycle phase is given in Figure 2.

Figure 2. Operation & maintenance process model.

1.2. LCC estimation

Figure 3. Life Cycle Cost breakdown.

The Cost Breakdown Structure (CBS) for the LCC is illustrated in Figure 3, with a comprehensive division summarized from cost estimation practices. It is divided into two streams on the basis of the two main stakeholders: the manufacturer and the operator. Miscellaneous costs, such as tooling and equipment depreciation cost, insurance, interest, tax and overhead, which mainly depend on the companies' strategies, are categorized and quantified as wrap-up factors. Some companies allocate percentages on the estimates to represent those cost impacts, while others assign a fixed amount to quantify their influence on the LCC. Furthermore, a more detailed cost breakdown is built up separately for the RDT&E, production, operation & maintenance and disposal & recycling cost types. The cost elements under each cost type vary based on the respective LCC characteristics. For example, recurring and non-recurring costs are defined specifically for the production cost, while for the operating and maintenance cost, direct and indirect costs are established. There are also typical cost types, such as labour cost, material cost, tooling & equipment cost, energy consumption and facility cost, which appear in all cost categories. The labour cost is generally time dependent and the material cost is product dependent, while the other cost types are often related to company policies. In this paper, the labour and material costs are the elements of primary focus, as they are less dependent on the company strategies. Therefore, they are elaborated in Sections 1.2.1 and 1.2.2.

– High level model: The high level cost model adopts weight/mass as the cost driving parameter. It is built on the product breakdown and the cost breakdown; due to the limited available knowledge, the process plan is not considered at this level, in contrast with the detail level cost estimation. The estimations are generalized in Eqs. (1)-(3):

$C_{i,j} = f_{i,j}(W)$  [when $j$ refers to labour cost]  (1)

$C_{i,j} = r_{i,j}\,W$  [when $j$ refers to material/fuel cost]  (2)

$C_{LCC} = \sum_i C_i = \sum_i \sum_j C_{i,j}$  (3)

where $f$ represents the equation evaluating the labour cost for each LCC phase. Generally, the expression is obtained from statistical data analyses based on the A/C weight $W$, using power law models or polynomial regressions. The weight contains both the flying weight and the chipped weight. $r$ stands for the labour rate in $/hr or the unit price in $/kg, $i$ symbolizes one of the four LCC phases, and $j$ is the cost item, such as the labour cost or material cost shown in the CBS under each LCC cost type. A sketch of this weight-based costing is given below.
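The following Python sketch implements Eqs. (1)-(3) for a toy cost breakdown; the power-law coefficients and unit rates are placeholders, not calibrated values from the paper.

```python
LABOUR_CER = {  # Eq. (1): C_ij = f_ij(W) = a * W**b, placeholder coefficients
    ("RDT&E",      "labour"): (12.0, 0.8),
    ("production", "labour"): (5.0, 0.9),
}
UNIT_RATE = {   # Eq. (2): C_ij = r_ij * W, placeholder $/kg rates
    ("production", "material"): 40.0,
}


def phase_item_cost(phase, item, weight_kg):
    if (phase, item) in LABOUR_CER:
        a, b = LABOUR_CER[(phase, item)]
        return a * weight_kg ** b
    return UNIT_RATE[(phase, item)] * weight_kg


def lcc(weight_kg):
    """Eq. (3): sum the cost items over all phases of the breakdown."""
    items = list(LABOUR_CER) + list(UNIT_RATE)
    return sum(phase_item_cost(p, j, weight_kg) for p, j in items)


print(lcc(120_000.0))  # weight-driven high-level LCC for a 120 t aircraft
```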
– Detail level model: The detail level model is implemented when the data relevant to the product design and its life cycle operations are accessible. In addition, inference mechanisms relevant to deriving the process properties should also be available when applying this model. The detail level cost estimation allocates the CBS cost items under each process step shown in Figure 2, with the same cost structures (Figure 3) as in the high level cost estimation. Compared with the high level model, an extra layer of process step prediction and relevant processing activities is inserted in the model between the product geometry and the cost estimation, which can also be observed in Figure 1. Labour time estimations/collections are conducted at the level of each process step. The driving parameters for the time analysis are not limited to weight/mass, but are parameters more closely linked with the process steps, based on physics and/or statistics. Once the labour times of the detailed process steps are obtained, cost-time analyses are implemented to accumulate the LCC. The general formulations of the process step cost-time evaluations are given in Eqs. (4)-(7):

$t_{i,k,j} = f_{i,k,j}(x)$  [when $j$ refers to labour cost]  (4)

$C_{i,k,j} = r_{i,k,j}\,t_{i,k,j}$  [when $j$ refers to labour cost]  (5)

$C_{i,k,j} = r_{i,k,j}\,(\Delta W)$  [when $j$ refers to material/fuel cost]  (6)

$C_{LCC} = \sum_i C_i = \sum_i \sum_k C_{i,k} = \sum_i \sum_k \sum_j C_{i,k,j}$  (7)

where $f$ represents the equation evaluating the labour time $t$ for each process step in the aircraft life cycle. Generally, the expression is obtained from statistical data analyses based on design and process parameters $x$, using power law models or physical approximations. For example, composite manufacturing processes are approximated by first order law models and further adapted to hyperbolic function models [10]. $r$, $i$ and $j$ are the same as in the high level costing. The added subscript $k$ represents the process steps derived based on the process meta-models. $\Delta W$ is the increase or decrease of the A/C weight (or of the fuel weight in a flight trip operating process) during the operation of each process step.

2. Case study: A330 flight trip operating cost example

2.1. Trip operating process

Figure 4 illustrates a typical aircraft mission profile for one flight trip operation. Time and fuel consumption are deployed on each operating process segment. The required reserve fuel includes the contingency trip fuel, the alternate fuel and the final reserve fuel. Three operating cost items, viz. the crew cost, the airport charge fee and the fuel cost, are estimated for the flight mission profile.

Figure 4. Aircraft mission profile [11-14].

2.2. High level model

The operating-relevant items in the DOC+I method [15] are adapted for the high level trip operating cost estimation; see Eqs. (8)-(12). Fewer than 20 parameters are required for this estimation:

$t_{operating,crew} = \frac{R}{V}$  (8)

$C_{operating,crew} = \left[482 + 0.590\,(MTOW/1000) + 78\,(n_{seat}/30)\right] t_{operating,crew}\,(1+r_{inflation})^{y-y_0}$  (9)

$C_{operating,fee} = \left[4.25\,(MTOW/1000) + 0.136 \times 500\,MTOW/1000\right](1+r_{inflation})^{y-y_0}$  (10)

$W_{fuel} = MTOW \left(\frac{W_{fuel}}{MTOW}\right)$  (11)

$C_{operating,fuel} = r_{operating,fuel}\,W_{fuel}$  (12)

where the operating time for the crew cost, $t_{operating,crew}$, is obtained from the range $R$ and the speed $V$. The operating crew cost $C_{operating,crew}$ is calculated based on the Maximum Take-Off Weight ($MTOW$) and the seat capacity $n_{seat}$. The operating fee $C_{operating,fee}$ is based on $MTOW$. The fuel cost is the product of the jet fuel price $r_{operating,fuel}$ and the fuel weight $W_{fuel}$, which is related to the fuel weight fraction $W_{fuel}/MTOW$. Since DOC+I adopts mid-1993 money, it is converted to 2015 money by applying a constant inflation rate $r_{inflation}$ from the reference fiscal year $y_0$ to the fiscal year $y$ in Eqs. (9) and (10). The total operating cost is then the summation of the crew, fee and fuel expenses; a sketch of Eqs. (8)-(12) follows.
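A direct transcription of Eqs. (8)-(12) into Python is given below; the crew and fee coefficients are taken as printed above (mid-1993 money), and the units of MTOW, range and speed are assumed to be consistent with the adapted DOC+I relations, so the numbers are illustrative only.

```python
def trip_operating_cost_high_level(R, V, MTOW, n_seat, fuel_fraction,
                                   fuel_price, inflation, y, y0=1993):
    """High-level trip operating cost per Eqs. (8)-(12), escalated from the
    reference fiscal year y0 to year y with a constant inflation rate."""
    t_crew = R / V                                                  # Eq. (8)
    esc = (1.0 + inflation) ** (y - y0)
    c_crew = (482 + 0.590 * MTOW / 1000
              + (n_seat / 30) * 78) * t_crew * esc                  # Eq. (9)
    c_fee = (4.25 * MTOW / 1000
             + 0.136 * 500 * MTOW / 1000) * esc                     # Eq. (10)
    w_fuel = MTOW * fuel_fraction                                   # Eq. (11)
    c_fuel = fuel_price * w_fuel                                    # Eq. (12)
    return c_crew + c_fee + c_fuel
```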
2.3. Detail level model

Time and fuel calculations for the segments of the mission profile are adopted for the detail level trip operating cost model. More than 60 parameters are needed for this estimation.

Warm-up: The warm-up crew cost $C_{operating,warm-up,crew}$ is estimated based on an average operating time $t_{operating,warm-up,crew}$ [14], the hourly rates $r_{operating,warm-up,flight-crew}$ and $r_{operating,warm-up,cabin-crew}$, and the numbers of flight and cabin crew, $n_{flight-crew}$ and $n_{cabin-crew}$; see Eq. (13). The fuel cost is obtained from the fuel rate $r_{operating,warm-up,fuel}$ and the consumed fuel mass $W_{operating,warm-up,fuel}$ according to an empirical weight fraction; see Eqs. (14)-(15):

$C_{operating,warm-up,crew} = r_{operating,warm-up,flight-crew}\,t_{operating,warm-up,crew}\,n_{flight-crew} + r_{operating,warm-up,cabin-crew}\,t_{operating,warm-up,crew}\,n_{cabin-crew}$  (13)

$W_{operating,warm-up,fuel} = \Delta W_{warm-up} = \left(1 - \frac{W_{warm-up}}{MTOW}\right) MTOW$  (14)

$C_{operating,warm-up,fuel} = r_{operating,warm-up,fuel}\,W_{operating,warm-up,fuel}$  (15)

Taxi-out and take-off: Since the time of a smooth taxi-out and take-off is very short, the crew cost due to this operating segment can be neglected. For the fuel cost evaluation, the fuel weight fraction $W_{TO}/MTOW$ is employed in Eq. (14), and the corresponding fuel weight $W_{operating,TO,fuel}$ is adopted in Eq. (15). In addition, the take-off charge is assessed as part of this segment's cost. Based on the airport charge report [16], Eq. (16) shows that this cost includes the weight-based take-off charge $r_{operating,TO,fee}\,W_{TO}$, as well as the service charge $r_{service}\,n_{seat}\,r_{occupancy}$, the security fee $r_{security}\,n_{seat}\,r_{occupancy}$ and the Passengers with Reduced Mobility (PRM) levy $r_{PRM\_levy}\,n_{seat}\,r_{occupancy}$, which are all based on the number of occupied seats $n_{seat}\,r_{occupancy}$:

$C_{operating,TO,fee} = r_{operating,TO,fee}\,W_{TO} + r_{service}\,n_{seat}\,r_{occupancy} + r_{security}\,n_{seat}\,r_{occupancy} + r_{PRM\_levy}\,n_{seat}\,r_{occupancy}$  (16)

Climb: According to the typical climb law, three climb segments are considered (Figure 4): from 0 ft (0 m) to 10,000 ft (3050 m) at a constant 250 knots Indicated Air Speed (IAS); from 10,000 ft to the crossover altitude above 30,000 ft (9140 m) at a constant 300 knots IAS; and from 30,000 ft to the Top of Climb (TOC) at 36,000 ft (11,000 m) at a constant Mach 0.80 [17]. The climb IAS should be converted to Ground Speed (GS) [14]. Therefore, the time to climb $t_{operating,climb,crew}$ is the aggregation of the times $(\Delta t_{operating,climb,crew})_l$ for each climb segment $l$, each of which is the integral of altitude $h$ over the Rate of Climb ($R/C$), which also represents the vertical velocity [11]; see Eqs. (17) and (18):

$t_{operating,climb,crew} = \sum_l (\Delta t_{operating,climb,crew})_l$  (17)

$(\Delta t_{operating,climb,crew})_l = \int_{h_{initial}}^{h_{final}} \frac{dh}{R/C}$  (18)

Assuming the R/C changes linearly with altitude, then

$(\Delta t_{operating,climb,crew})_l = \frac{h_{l+1} - h_l}{R/C_{l+1} - R/C_l} \ln\left(\frac{R/C_{l+1}}{R/C_l}\right)$  (19)

where the R/C can be evaluated according to the force equilibrium during climb [13], for which Eq. (20) applies:

$R/C = V_\infty \left[\frac{T}{W} - \frac{1}{2}\rho V_\infty^2 \left(\frac{W}{S}\right)^{-1} C_{D_0} - K\left(\frac{W}{S}\right)\frac{2}{\rho V_\infty^2}\right]$  (20)

It is calculated by applying formulas for the thrust $T$ and the density $\rho$ at altitude $h$, converting IAS to GS as $V_\infty$ ([13] and [17]), and substituting the average weight during climb $W$, the reference area $S$, the zero-lift drag coefficient $C_{D_0}$ and the drag-due-to-lift coefficient $K$. Thereafter, the operating climb crew cost can be obtained by employing Eq. (13), using $t_{operating,climb,crew}$ for the time term. A numerical sketch of the segment-time evaluation of Eq. (19) follows.
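The sketch below implements Eqs. (17)-(19), assuming a piecewise-linear rate of climb between the given altitudes; the altitude/R-C pairs are invented for illustration.

```python
import math


def climb_time(levels):
    """Sum the segment times of Eqs. (17)-(19); `levels` is a list of
    (altitude_m, rate_of_climb_m_per_s) pairs ordered from bottom to top,
    with R/C assumed to vary linearly inside each segment."""
    t = 0.0
    for (h0, roc0), (h1, roc1) in zip(levels, levels[1:]):
        if math.isclose(roc0, roc1):
            t += (h1 - h0) / roc0                      # constant-R/C limit
        else:                                          # Eq. (19)
            t += (h1 - h0) / (roc1 - roc0) * math.log(roc1 / roc0)
    return t


# Illustrative three-segment climb profile (values are made up):
print(climb_time([(0, 12.0), (3050, 9.0), (9140, 4.0), (11000, 2.5)]))
```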
The operating climb fuel cost is estimated by accumulating the fuel consumption of each climb segment based on the average Specific Fuel Consumption $(SFC)_{ave,l}$, the thrust $T_l$ and the segment time $(\Delta t_{climb})_l$; see Eq. (21). In combination with Eq. (15), adopting $W_{operating,climb,fuel}$ in the calculation, the fuel cost is obtained:

$W_{operating,climb,fuel} \approx \sum_l (\Delta W_{climb})_l = \sum_l (SFC)_{ave,l}\,T_l\,(\Delta t_{climb})_l$  (21)

Cruise: Assume the aircraft cruises at an altitude of 36,000 ft (11,000 m) at Mach 0.8. The crew cost is calculated based on the time to cruise via Eq. (8) from the high level cost model, employing the cruise range $R_{cruise}$ and the cruise speed $V_{\infty,cruise}$, and substituting the result for $t_{operating,cruise,crew}$ in Eq. (13). The Breguet range equation (Eq. (22)) is adopted to estimate the fuel consumption, for which the lift-to-drag ratio $L/D$ is needed; see Eqs. (23) and (24). The fuel cost is again obtained by using $W_{operating,cruise,fuel}$ in Eq. (15):

$R_{cruise} = \frac{L/D}{SFC\,g}\,V_\infty \ln\frac{W_{cruise,initial}}{W_{cruise,final}}$  (22)

$W_{cruise,final} = W_{cruise,initial} \exp\left(-\frac{R_{cruise}\,SFC\,g}{V_\infty\,(L/D)}\right)$  (23)

$W_{operating,cruise,fuel} = \Delta W_{cruise} = W_{cruise,initial} - W_{cruise,final}$  (24)

A minimal numerical sketch of Eqs. (23) and (24) is given below.
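In this sketch the SFC is assumed to be a thrust-specific fuel consumption in kg of fuel per newton of thrust per second, and the input values are invented, roughly A330-like numbers, not taken from the paper.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2


def cruise_fuel(W_initial_kg, range_m, sfc, V_ms, L_over_D):
    """Breguet range equation rearranged to Eqs. (23)-(24): returns the fuel
    burned during cruise as initial minus final cruise weight."""
    W_final = W_initial_kg * math.exp(-range_m * sfc * G
                                      / (V_ms * L_over_D))   # Eq. (23)
    return W_initial_kg - W_final                            # Eq. (24)


print(cruise_fuel(W_initial_kg=210_000, range_m=5_000_000,
                  sfc=1.6e-5, V_ms=236.0, L_over_D=18.0))
```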
Descent: A descent is essentially a reversed climb process. Three descent segments are considered: from the Top of Descent (TOD) at 39,000 ft (11,890 m) to 30,000 ft (9140 m) at a constant Mach 0.80; from 30,000 ft to 10,000 ft (3050 m) at a constant 300 knots IAS; and from 10,000 ft (3050 m) to 35 ft (10.7 m) at a constant 250 knots [17]. During descent, the engine thrust is normally set to flight idle, i.e. the thrust is close to zero, and the speed is controlled by the aircraft attitude [13]. Similar to the R/C, the rate of descent ($R/D$) is applied for the crew cost evaluation (Eqs. (25) and (26)), while the empirical weight fraction $W_{descent}/MTOW$ is adopted for the fuel cost calculation using Eqs. (14) and (15), replacing the corresponding weight parameter with $W_{operating,descent,fuel}$ for the descent segment:

$(\Delta t_{operating,descent,crew})_l = \frac{1}{a}\int_{l}^{l+1} \frac{d(R/D)}{R/D} = \frac{h_{l+1} - h_l}{R/D_{l+1} - R/D_l} \ln\left(\frac{R/D_{l+1}}{R/D_l}\right)$  (25)

$R/D = -\frac{V_\infty}{C_L/C_D} = -V_\infty \left[C_{D_0}\left(\frac{1}{2}\rho V_\infty^2\right)\left(\frac{W}{S}\right)^{-1} + K\left(\frac{1}{2}\rho V_\infty^2\right)^{-1}\left(\frac{W}{S}\right)\right]$  (26)

in which $a$ denotes the (assumed constant) gradient of $R/D$ with altitude within a segment.

Approach, landing and taxi-in: Similar to the taxi-out and take-off segment, the time of a smooth approach, landing and taxi-in is negligible. The fuel cost is based on the weight fraction $W_{landing}/MTOW$; Eqs. (14) and (15) apply with the counterpart parameters. The airport landing fee is considered for this segment, including the weight-based landing fee $r_{operating,landing,fee}\,MTOW$, the government noise levy $r_{gov\_noise/insulation\_levy}$ and the weight-based planning compensation levy $r_{gov\_planning\_levy} \times MTOW$ [16]; see Eq. (27):

$C_{operating,landing,fee} = r_{operating,landing,fee}\,MTOW + r_{gov\_noise/insulation\_levy} + r_{gov\_planning\_levy} \times MTOW$  (27)

Reserve: It is assumed that the reserve fuel is carried but not used; therefore, the crew cost due to the time for reserve is zero. According to Raymer [11], 5% reserve fuel and 1% trapped fuel are considered. In summary, the total trip operating cost is obtained as follows:

$C_{operating} = C_{operating,warm-up} + C_{operating,TO} + C_{operating,climb} + C_{operating,cruise} + C_{operating,descent} + C_{operating,landing} + C_{operating,reserve}$  (28)

2.4. Results

The trip operating cost (excluding maintenance and miscellaneous costs) estimated by the high level model (4.2 cents per Available Seat Kilometre (ASK)) and by the detail level model (2.9 cents/ASK) are realistic when compared to average expenses [18]. The cost shares of the cost types estimated by the high level and detail level models are illustrated in Figures 5 and 6, respectively. The fuel cost accounts for the major part of a flight trip, on which both models agree. The crew cost estimated by the high level model is lower than that of the detail level model; this is because the operating time considered by the detail level model tends toward the lower bound, based on the assumption of a time-efficient flight trip, while the airport charge per flight has increased compared with the mid-1990s, when the parametric equations of the high level model were generated.

Figure 5. High level model trip operating cost.
Figure 6. Detail level model trip operating cost.
Figure 7. Detail level model trip operating cost by operating segments.

Figure 7 shows the cost allocation for each segment of a flight operation. The actual percentages are shown for each cost type. The airport charges are allocated to the take-off and landing segments separately. A break is applied to the figure's axis to zoom in on the shares of the crew and fee expenses. This gives a detailed overview of the actual cost distribution over a flight trip, which can be used for trip operating optimization studies. It can be seen that the fuel and crew costs are incurred mostly during the cruise, climb and descent segments. Moreover, fuel is consumed during the whole process, which also explains its major impact on the flight operating cost and even on the whole LCC.

3. Conclusions and future work

This research established a generalized LCC estimation method at both the high and detail levels on the basis of data accessibility in the aircraft design phase. The high level cost estimation adopts the product weight as the cost driving parameter. It is capable of a fast cost evaluation based on the limited data available within the aircraft conceptual design phase. A process layer is introduced between the product model and the cost model for each life cycle phase to facilitate the implementation of the detail level cost estimation. This can be applied along with the conceptual design development, with gradually extended availability of design and process properties. It provides an in-depth insight into the cost distributions in an aircraft life cycle.
The proposed method is exemplified in the A330 operating cost estimation case study, which demonstrates its practical industrial relevance. Future research will focus on cost estimation strategies for the RDT&E and disposal phases of the life cycle, and on further development towards design optimization studies.

References
[1] "Life Cycle Costing in equipment procurement," Washington, D.C., 1965.
[2] United States Department of Defense, "Life Cycle Costing procurement guide (interim)," Washington, D.C., 1970.
[3] United States Department of Defense, "Life Cycle Costing in equipment procurement – case book," Washington, D.C., 1970.
[4] United States Department of Defense, "Life Cycle Costing guide for system acquisitions (interim)," Washington, D.C., 1973.
[5] Y.S. Sherif and W.J. Kolarik, "Life cycle costing: concept and practice," Omega, vol. 9, no. 3, pp. 287–296, Jan. 1981.
[6] B.S. Dhillon, Life cycle costing: techniques, models and applications, Gordon and Breach, New York, NY, 1989.
[7] W.J. Fabrycky and B.S. Blanchard, Life-Cycle Cost and economic analysis, Prentice Hall, Englewood Cliffs, NJ, 1991.
[8] Y. Asiedu and P. Gu, "Product life cycle cost analysis: state of the art review," International Journal of Production Research, vol. 36, no. 4, pp. 883–908, Apr. 1998.
[9] B.S. Dhillon, Life cycle costing for engineers, CRC Press, Taylor & Francis Group, Boca Raton, FL, 2009.
[10] L.B. Ilcewicz, G.E. Mabson, S.L. Metchan, G.D. Swanson, M.R. Proctor, D.K. Tervo, H.G. Fredrikson, T.G. Gutowski, E.-T. Neoh, and K.C. Polgar, "Cost Optimization Software for Transport Aircraft Design Evaluation (COSTADE): design cost methods," NASA Contractor Report 4737, Langley Research Center, Hampton, VA, 1996.
[11] D.P. Raymer, Aircraft design: a conceptual approach, American Institute of Aeronautics and Astronautics, Washington, D.C., 1989, p. 503.
[12] J. Roskam, Airplane design, Roskam Aviation and Engineering Corporation, Ottawa, KS, 1985.
[13] Airbus, "Getting to grips with aircraft performance," Flight Operations Support & Line Assistance, Customer Services, France, 1998.
[14] Airbus, "Getting to grips with the cost index," Flight Operations Support & Line Assistance, Customer Services, France, 1998.
[15] R.H. Liebeck, D.A. Andrastek, J. Chau, R. Girvin, R. Lyon, B.K. Rawdon, P.W. Scott, and R.A. Wright, "Advanced subsonic airplane design & economic studies," Long Beach, CA, 1995.
[16] Schiphol Amsterdam Airport, "Summary airport charges and conditions 2015," 2015.
[17] Airbus, "Getting to grips with fuel economy," Flight Operations Support & Line Assistance, Customer Services, France, 2004.
[18] IATA, "US DOT Form 41 airline operational cost analysis report," 2011.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-122

Design for Assistive Technology: a Preliminary Study

Maria Lucia MIYAKE OKUMURA 1 and Osiris CANCIGLIERI JUNIOR 2
Pontifical Catholic University of Parana, Production and System Engineering Graduate Program – PPGEPS
1 Corresponding Author. E-mail: lucia.miyake@pucpr.br
2 Corresponding Author. E-mail: osiris.canciglieri@pucpr.br

Abstract. The Integrated Product Development Process (IPDP) for Assistive Technology (AT) is a complex process that involves different areas of knowledge.
This process normally uses methods, techniques and tools that support the IPDP in a Concurrent Engineering environment, providing the integration of areas to meet the AT product requirements. In fact, it should be noted that the requirements of the AT product user reflect different needs, since these users have physical, sensory or cognitive limitations. This is the case for people with disabilities and physiological aging, whose population is increasing globally on a significant scale. Thus, there is a gap in the IPDP in comprehending and interpreting data on the users' specific needs in order to set the Product Design for Assistive Technology. This gap can be filled by a moderator that takes on an assisting function in the mediation of information between multidisciplinary areas for the development of product design oriented for AT. This paper presents a preliminary design model oriented for Assistive Technology whose structure performs a moderator role in the IPDP, in order to meet the users' expectations and allow reliable and easy information sharing between design participants. Data collection consists of a survey of existing IPDP models and configurations of "Design for" that support the stages of the development process. This approach makes it possible to detect and provide relevant data, including the most used and significant processes. As a result, we can highlight the key functions identified in the Design model for Assistive Technology.

Keywords. Concurrent Engineering, Integrated Product Development Process, Assistive Technology, People with special needs, Design for Assistive Technology.

Introduction

The development of new products is often challenged to find solutions to contemporary issues mainly related to users' needs. Product Development oriented for Assistive Technology (AT) presents a complex process involving different areas of knowledge. In this case, it relies on methods, techniques and tools that support the Integrated Product Development Process (IPDP) in a Concurrent Engineering environment, enabling the integration of areas to meet AT product requirements. In fact, the requirements of the AT product user reflect different needs, since these users have physical, sensory or cognitive limitations. This is the case for people with disabilities and physiological aging, whose population is increasing globally on a significant scale. On this point, there is a gap in the IPDP in comprehending and interpreting data on the specific needs of users in order to set the Product Design for Assistive Technology. This gap can be filled by a moderator that takes on an assisting function in the mediation of information between the multidisciplinary areas involved in the development of product design oriented for AT. The paper's objective is to present a preliminary study of a design model oriented for Assistive Technology which incorporates in its structure a moderator role in the IPDP, in order to meet the users' expectations and allow reliable and easy information sharing between design participants belonging to different areas of knowledge. The research is exploratory, with a qualitative approach that investigates in depth the concepts and techniques applied in engineering design. The research method began with a literature review and data collection, comprising a survey of the existing IPDP models and configurations of "Design for".
The data analysis aims to support the IPDP phases in detecting relevant activity data involving the most used and significant processes. As a result, we can highlight the key functions identified and correlated in the formulation of the Design model for Assistive Technology.

1. Literature Review

1.1. Integrated Design: Product Development and Concurrent Engineering

The term design comes from the act of forming an idea or plan to execute an act, or of formulating a configuration for communication and action. Regarding design elaboration, Back [1] mentions that it is an activity focused on meeting human needs, especially those that can be met by the technological factors of our culture, and consequently includes technical, human, economic, social and political factors. Therefore, Back et al. [2] affirm that the design is a plan for a project to be carried out – a product that aims to meet a need. The majority of new products are variants of existing products, derived through evolution, innovation or creative processes, which together constitute the Product Development Process. The objective of the Product Development Process is to convert the customers' needs and requirements into information that allows the design and manufacturing of a product or technical system [3]. In this way, the identification of market and customer needs is linked to proposing suitable solutions at every stage of the product life cycle: from the design elaboration, through assigning and ensuring manufacturability, to seeking quality, low cost and a competitive price. In fact, the design complexity increases in products oriented for Assistive Technology, as it is necessary to understand the users' specificity and to integrate different areas of knowledge in the product development process. Concurrent Engineering comprises a system for the integrated and parallel development of a product design and its related processes, including the phases of manufacturing and support [4]. Thus, the integration of different fields of knowledge opens the possibility of an Integrated Product Development Process (IPDP) environment, comprising a diversity of methods, tools and models. IPDP and Concurrent Engineering are highlighted concerning tool selection. Araujo and Duffy [5] affirm that the tool selection process is based on three fundamental dimensions: functionality, suitability for use and qualities. They also consider the influence of the intuition, knowledge and experience of those involved in the acquisition decision.

1.2. Assistive Technology (AT)

The Individuals with Disabilities Education Act (IDEA) and the Americans with Disabilities Act (ADA) define Assistive Technology in terms of AT devices and AT services. The term "assistive technology device" means any item, piece of equipment or product system, whether acquired commercially, modified or customized, that is used to increase, maintain or improve the functional capabilities of people with disabilities. The term "assistive technology service" means any service that directly helps a disabled person to choose, purchase or use an AT device [6, 7]. Cook and Hussey [8] define AT as "a wide range of equipment, services, strategies and practices designed and implemented to reduce the functional problems encountered by individuals with disabilities." AT devices range from simple artifacts, such as a cane, to sophisticated computer programs that aim at accessibility [9].
1.3. Assistive Technology Product User

Users of Assistive Technology products are people with disabilities, incapacities or reduced mobility [10], who through AT gain the possibility of autonomy, independence, quality of life and social inclusion. Within the theoretical foundation, people with special needs are also so designated because they demand resources, services or differentiated support to achieve their autonomy. Therefore, AT product users are people with a specific need to perform some task, and can include disabled people, pregnant women, nursing mothers, the elderly and other people with physical, sensory or cognitive limitations. It is observed that not all elderly people or pregnant women are users of AT resources. It is worth noting the worldwide phenomenon of contemporary gerontology arising from people's longevity, which is reflected in the social and economic areas [11]; that is, with the improvement of technology, life expectancy has increased, but with population aging there has been an increase in the incidence of patients, especially those over 60 years of age, with chronic, multiple or degenerative diseases. Thus, more than one billion people worldwide have some form of disability, and among them there are about 200 million with considerable functional limitations [12]. Hence, the number of people with disabilities tends to increase, following the population growth projection, which according to United Nations statistics [13] should grow from 7.2 billion in mid-2013 to 8.1 billion by 2025, 9.6 billion in 2050 and 10.9 billion in 2100, with some developed countries showing an increase of the elderly in the population density by age. The IBGE [14] presented the preliminary result of a sample from the 2010 Brazilian Census in which 23.9% of the population has at least one disability. Another aspect concerns the social factors: the WHO [12] mentions that disabled people have worse health prospects, lower levels of education, less economic participation and higher poverty rates when compared to people without disabilities. Leading authorities have joined forces in encouraging studies related to technologies oriented towards socioeconomic sustainability [12]. Among the core competencies is the investigation of the barriers impeding people with disabilities, looking for solutions that guarantee access to health, education, employment, transportation and information, which matters most in the poorest communities.

2. Methodology

The research's purpose is to shape a moderator in the IPDP that takes on a support function to mediate the information between the multidisciplinary areas involved in the process, and to present a preliminary study of a design model oriented for AT. At first, the moderator's challenge is to comprehend and interpret data on the users' specific needs in order to meet the users' requirements and allow reliable and easy information sharing between design participants belonging to different areas of knowledge. The research method began with a literature review of current Product Development models. The selected Product Development (PD) models are applied in different areas. Next, the models were classified and analyzed according to the activities undertaken in the product development life cycle, in order to define the construct of the research and list the main functions in accordance with IPDP oriented for Assistive Technology.
Table 1. IPDP Models Classification (condensed from the original table, whose cell contents did not survive extraction intact). The table positions each surveyed product development model along the three macro phases of the product development life cycle – Pre-development; Development and implementation; and Post-development – and lists the characteristic activities each author assigns to those phases. The surveyed models are: Asimov [20], Archer [21], Cain [22], Kotler [23], Bomfim, Nagel & Rossi [24], Pahl & Beitz [25], Bonsiepe [26], Barroso Neto [27], Crawford [28], Bonsiepe [29], Guideline VDI 2221 [30], Andreasen & Hein [31], Suh [32], Vincent [33], Clark & Fujimoto [34], Pugh [35], Rosenthal [36], Ullman [37], Wheelwright & Clark [38], Back [1], Cooper (Stage-Gate) [39], Bürdek [40], Clausing [41], Schulmann [42], Roozenburg & Eekels [43], Hubka & Eder [44], Dickson [45], Magrab [46], Prasad [47], Cooper et al. [48], Kaminski [49], Baxter [50], Cooper [51], the Toyota model [52], the V-model [53], Stuart Pugh [54], PRODIP [55], Pahl et al. [56], Crawford & Di Benedetto [57], Rozenfeld et al. [58], the Waterfall model [59] and Ulrich & Eppinger [60]. Across these models, Pre-development typically covers need identification, opportunity selection, scheduling and strategic planning; Development covers informational, conceptual, preliminary and detailed design, prototyping and testing; and Post-development covers production planning and execution, launch, and product monitoring, maintenance and discontinuation.

The PDP model classifications were positioned in the product development life cycle and divided into three macro phases: Pre-development, Development and implementation, and Post-development, as shown in Table 1. Within these macro phases, the activities that belong to the process were identified, with emphasis on the Development phase, which most strongly performs the role of the Moderator.

Table 2. "Design for" Models.

Design for Aesthetics – Oriented to: aesthetics. Description: adequacy of the product function with pleasing forms. PDP design phases: Conceptual and Detailed. Authors: Pahl & Beitz (1996), Macdonald (2001), Baxter (2001), Rozenfeld et al. (2006).
Design for Assembly – Oriented to: assembly. Description: verify functions, shapes and materials to simplify the assembly process. PDP design phases: Informational, Conceptual and Preliminary. Authors: Boothroyd et al. (2002), Back et al. (2008), Rozenfeld et al. (2006).
Design for Environment (Ecodesign) – Oriented to: environmental impact and production. Description: minimize environmental impact in the process, production, recycling and disposal; such designs draw on sustainability concepts, eco-tools, Green Design and Design for Recycling. PDP design phases: Designing, Production and End of the cycle. Authors: Rozenfeld et al. (2006), Back et al. (2008).
Canciglieri Jr. / Design for AT: A Preliminary Study Design for Excelence New peoduct and product lyfe cycle Results in satisfaction and efficiency of the needs set of all persons or organizations involved. Involves the techniques of Design for Manufacturability, Design for Assembly, Design for Testability and Design for Operability. Informational, Conceptual, Preliminary, Detailed and implementation Voss, Blackman, Hanson and Claxton (1996), Nunes (2004). Design for Manability, Design for Use / Ergonomic or Design for Human Factor. Relantionship between man and equipment Understand psychological and anthropometric factors. It uses the concepts of Product Ergonomics. Informational, Conceptual and Preliminary Blanchard; Fabrycky (1990), Rozenfeld et al. (2006), Back et al. (2008), Iida (2005) Design for Manufacturing Manufacturing Process Conceptual, Preliminary and Detailed Rozenfeld et al. (2006), Ulrich; Eppinger (2011), Back et al. (2008). Design for Productibility Manufacturing and Asembly Processes Design to facilitate or simplify and improve the production, manufacture of the product components making up the product. Producibility (quality and process): to facilitate and simplify the product or component production placing on the agenda: configuration, the degree that the product minimizes the labor, materials and costs. Involved Techniques: Design for Manufacturability, Design for Assembly Preliminary, Detailed, Implementation and Production Kuo, Huang e Zhang (2001), Nunes (2004) Detailed and Implementation Blanchard; Fabrycky (1990), Kaner; Bach; Pettichord (2002), Kuo, Huang e Zhang (2001), Edwards (2002), Nunes (2004) Designing, Implementation and Production Huang (1996), Rozenfeld et al. (2006) Conceptual, Preliminary and Detailed Pahl and Beitz (1996), Kuo, Huang and Zhang (2001), Edwards (2002), Nunes (2004) Informational, Conceptual and Preliminary Iida (2005), Baxter (2001), Story et al.(1998), EDeAN (2009) Design for Testability Software and Design Process Design for X Manufacturing, recycling, assembly, etc Modular Design, Design for Adaptability Design and Manufacturing Process Universal Design Design providing as comprehensive as possible of users’ skills Control the most observed information inputs and outputs. It aims to facilitate the subsequent trials / tests, modifying the design to make this possible. Set of organized rules and procedures to support the problems related to the life cycle of a product. Method applied to divide the product in components or sets of components and identify the necessary variations without affecting significantly the remaining modules or the overall product design. Design according the7 principles: equitable use, flexibility, simple and intuitive, perceptible information, tolerance for error, low physical effort and size and space for approach and use. It is used techniques such as Design for All, Inclusive Design, Desing for Disability, Design for a Broader Average, etc. 127 Adicionally, tools of ‘design for’, that are formed in the IPDP, were selected and listed, in order to analyse and set similar concepts applied on the Design Development, as shown in Table 2. 3. Analysis and Results Discussion The IPDP models’ study listed different activities and detected relevant activities data, which were included in the most used and significant processes. Then, we emphasizing the main functions that were identified and correlated to formulate the preliminary study of the AT Design Model. 
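The phase mapping in Table 2 lends itself to a small illustrative encoding. The sketch below is a minimal illustration of ours, not an artifact of the study; the names are hypothetical. It stores a few Table 2 rows and queries which 'design for' approaches act in a given IPDP design phase:

```python
# Hypothetical encoding of a few Table 2 rows: each "design for" model
# is mapped to the IPDP design phases in which it is applied.
DESIGN_FOR_MODELS = {
    "Design for Aesthetics": {"Conceptual", "Detailed"},
    "Design for Assembly": {"Informational", "Conceptual", "Preliminary"},
    "Design for Manufacturing": {"Conceptual", "Preliminary", "Detailed"},
    "Universal Design": {"Informational", "Conceptual", "Preliminary"},
}

def models_for_phase(phase: str) -> list[str]:
    """Return the 'design for' models applicable in a given IPDP phase."""
    return sorted(m for m, phases in DESIGN_FOR_MODELS.items() if phase in phases)

print(models_for_phase("Detailed"))
# ['Design for Aesthetics', 'Design for Manufacturing']
```

Such a lookup mirrors how the Moderator, discussed below, selects which design-oriented concepts to bring into each phase.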
The terms most used by the authors are: Conceptual Design, Detailed Design, Informational Design and Prototyping. However, many of the items have similar meanings or activities that were specified with different terms and are implicit in the subsequent activity, according to the analysis of activities in the Design Development phase. Therefore, detailing the process activities within the parts or phases of the design is essential. The research on "Design for" focused on tools, design methods and ways of modeling, grouped according to thematic affinity. Among the "Design for" models, it was verified that their application concentrates on the macro phase of Design Development. Another pattern was models formed through the association of other models into a new design-oriented model, for example, Design for Producibility, Design for Excellence, Design for Environment and Universal Design. Thus, the design-oriented models are elaborated and configured to simplify and facilitate part of a design process, aiming to strengthen the main aspects of its function.

3.1. Preliminary Study: Design Oriented for AT Model

The structure formed with the terms used in the preliminary study of the Design oriented for AT Model, illustrated in Figure 1, is an evolution of the IPDP oriented for AT Conceptual Framework [15, 16].

Figure 1. Preliminary Study of Design oriented for AT Model.

The IPDP oriented for AT Conceptual Framework refers to research that aims to meet the largest possible number of users; alongside it are the Society Demands related to persons with special needs. Next comes the "Moderator", whose function is to mediate the existing information and methods from different areas and whose elements constitute groups of: standardization using UNIT-ISO; sensory and/or physical impairment through the International Classification of Diseases (ICD) and the instructions of the Brazilian Association of Technical Standards (ABNT); skills identification through the methodology of Supported Employment; identification of functionality and ability through the International Classification of Functioning, Disability and Health (ICF); and other bases present in the diversity related to AT. Representations of the methods, concepts and tools used in the IPDP and Concurrent Engineering appear at the bottom of the "Moderator" in Figure 1 and support the activity-based approach for the elaboration of product designs.

3.2. Moderator Main Functions

In view of the design oriented for Assistive Technology process, the function of the Moderator is to seek characteristics in order to interpret and direct them as product requirements throughout the IPDP phases. In this way, it investigates the variables of the multidisciplinary areas that involve the user and related environments. In the following items, the activities assigned to each design macro phase and the performance of the Moderator function are described.

3.2.1. Pre-design: Planning

The Pre-Design emphasizes the investigation of the needs of users with disabilities to perform an activity, an aspect which is questioned in the project Planning.
A well-defined problematization allows identifying the opportunity for Design Development, which focuses on product planning and design to outline the technical procedure, team building, and the search for technical support and financial resources. At this planning phase, the kind of Assistive Technology is identified according to the type classification [17]: products, resources, methodologies, strategies, practices or services. Next, the AT product category is identified and classified among the AT devices or resources [18], which are: mobility aids; orthotics and prosthetics; aids for daily life and practical life; architectural designs for accessibility or personal mobility; environment control systems; augmentative and alternative communication (AAC); adaptations in vehicles; aid equipment to improve the environment, tools and machines; proper positioning; and aids especially for people with visual and hearing disabilities. Thus, the investigation of the user need and the definition of the product scope begin. This takes into account the approval of the Pre-Design phase, including strategic planning with technical and financial support.

3.2.2. Design Development Phase

The macro Design Development phase is subdivided into Informational Design, Conceptual Design, Preliminary Design and Detailed Design, which are the essence of the design model engaged in the process of establishing the AT product design. At the Informational Design stage, the technical team and the product characteristics are defined in order to constitute the requirements. The AT categories are detailed, positioning the product information and the way the product is used by customers. The way of use can be classified as follows: customized use, individual use, group use or use in universal design [15, 16]. Hence, the data survey that identifies the obstructive barriers and accessibility, the user, and the group related directly or indirectly to the use of the AT product is relevant. In this sense, the participation of professionals from multidisciplinary areas enables the clarification of the user's specifications in relation to the characteristics of the disability or limitation, complemented with information on specificity and skills. In the Conceptual Design phase, the product concept is established by gathering the product requirements. At this phase, the technical team involved in the process seeks viable alternatives to make up the product design. Thus, the role of the Moderator is to comprehend and interpret the product requirements and direct them to the technical areas that can accomplish the purpose of the product, establishing the product concept. The design oriented for AT comprises concepts that support and facilitate the user with a disability, such as Product Ergonomics and Usability. At the Preliminary Design stage, the assigned techniques are selected and distributed to design the product components. This stage tends to correlate and integrate the concepts to visualize the components and the preparation for the product prototype. The attention in this preparation is on setting up the process so that the user can participate in the prototype evaluation. Additionally, there is the importance of choosing and defining the type of material, which unfolds into safety and durability during product use and sustainability at the time of disposal.
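The classification decisions described above (AT type [17], device category [18], and way of use [15, 16]) can be pictured as a simple record that the Moderator fills in during Pre-Design and Informational Design. The sketch below is a hypothetical illustration only; the field names and example values are ours, not part of the original framework:

```python
from dataclasses import dataclass

# Hypothetical record of the classification decisions described above.
AT_TYPES = {"product", "resource", "methodology", "strategy", "practice", "service"}
WAYS_OF_USE = {"customized", "individual", "group", "universal design"}

@dataclass
class ATProductScope:
    need: str             # user need investigated in Pre-Design
    at_type: str          # one of AT_TYPES, per the type classification [17]
    device_category: str  # e.g. "mobility aids", per the categories in [18]
    way_of_use: str       # one of WAYS_OF_USE [15, 16]

    def __post_init__(self):
        if self.at_type not in AT_TYPES:
            raise ValueError(f"unknown AT type: {self.at_type}")
        if self.way_of_use not in WAYS_OF_USE:
            raise ValueError(f"unknown way of use: {self.way_of_use}")

scope = ATProductScope(
    need="reach and grasp objects on high shelves",
    at_type="product",
    device_category="aids for daily life and practical life",
    way_of_use="individual",
)
```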
In the Detailed Design phase, the product design is established: the product components are grouped and organized to structure the product prototype. The first evaluation is carried out by the technical team, which adjusts the components. Next, an evaluation is structured to be applied in the user's prototype test, to verify whether it meets and solves all the questions posed in the design planning. The evaluation result thus enables corrections to the product so that it meets the expectations of the users and teams involved in the project.

3.2.3. Post-Design: Implementation and Production

The Post-Design phase includes the transfer to production and the monitoring of the product during launch. The preparation of the documentation for production implementation is considered relevant, because this material comprises the last changes after the approval of the prototype. Furthermore, this documentation provides the foundations for feedback to the IPDP of new AT products. Monitoring the product at launch aims at upgrading it through acceptance and market behavior, and especially at determining the user's performance: whether the user managed to achieve autonomy and some personal development when using the product, as defined by Assistive Technology.

4. Conclusion

This research identified the main activities in the IPDP, which were allocated to the Moderator's function. However, it was observed during the research that many AT products may have intermediary users, such as professionals working in AT services or in product maintenance. Thus, further research is necessary in this segment, as these professionals and intermediary users influence the end user, both in the product selection and in the incentive for its use [19]. The preliminary study demonstrates the performance of Concurrent Engineering in the IPDP oriented for AT to accomplish the work of teams from different areas, enabling activities that run simultaneously between the process phases. It was thus observed that more than one area of knowledge can be present in the same process and activity. The next stage of the work, giving continuity to the research, is to address the influence of the interdisciplinary areas on the Moderator's function in the AT Design Model.

Acknowledgements

The authors are thankful for the financial support provided by the Coordination for the Improvement of Higher Level Personnel – CAPES and the Pontifical Catholic University of Paraná – PUCPR.

References

[1] N. Back, Metodologia de projeto de produtos industriais, Guanabara Dois, Rio de Janeiro, 1983.
[2] N. Back, A. Ogliari, A. Dias, J.C. Silva, Projeto integrado de produtos: planejamento, concepção e modelagem, Manole, Barueri, 2008.
[3] D.W. Smith, Introducing EDG students to the design process, in: Proceedings of the 2002 Annual Midyear Meeting of the Engineering Design Graphics Division of the American Society for Engineering Education, Berkeley, 2002.
[4] B. Prasad, Concurrent Engineering Fundamentals: integrated product and process organization, Prentice Hall, New Jersey, 1996.
[5] C.S. Araujo, A.H.B. Duffy, Assessment and Selection of Product Development Tools, in: Proceedings of the 11th International Conference on Engineering Design – ICED'97, Tampere, Finland, August 19-21, 1997, pp. 157-162.
[6] United States of America – USA, Education of the Handicapped Act Amendments of 1990, Sec.
101, Section 602: (25) "Assistive Technology Device", (26) "Assistive Technology Service", 101st Congress (1989-1990).
[7] United States of America – USA, Public Law 105–394 of Nov. 13, 1998, Assistive Technology Act of 1998: To support programs of grants to States to address the assistive technology needs of individuals with disabilities, and for other purposes, 105th Congress, 112 STAT. 3627, Congressional Record v. 144.
[8] A.M. Cook, S.M. Hussey, Assistive Technologies: Principles and Practice, 3rd ed., Elsevier, Mosby, 2008.
[9] Instituto de Tecnologia Social – ITS, Microsoft, Cartilha tecnologia assistiva nas escolas: Recursos básicos de acessibilidade sócio-digital para pessoas com deficiência, ITS – Instituto de Tecnologia Social e Microsoft Educação, 2008.
[10] Brasil, Subsecretaria Nacional de Promoção dos Direitos da Pessoa com Deficiência, Comitê de Ajudas Técnicas, Tecnologia Assistiva, CORDE, Brasília, 2009, 138 p.
[11] R.P. Veras, A Inclusão Social do Idoso: promovendo saúde, desenvolvendo cidadania e gerando renda, in: J.C. Barros Júnior (Org.), Empreendedorismo, Trabalho e Qualidade de Vida na Terceira Idade, 1st ed., Editora Edicon, São Paulo, 2009.
[12] World Health Organization, World report on disability 2011, WHO, The World Bank.
[13] United Nations, World Population Prospects, The 2012 Revision: Highlights and Advance Tables, United Nations, New York, 2013.
[14] Instituto Brasileiro de Geografia e Estatística – IBGE, Resultados preliminares da amostra do Censo Demográfico 2010, IBGE, Rio de Janeiro, 2011.
[15] M.L.M. Okumura, A engenharia simultânea aplicada no desenvolvimento de produtos inclusivos: uma proposta de framework conceitual, Dissertação de mestrado em Engenharia de Produção e Sistemas, PUC-PR, 2012.
[16] M.L.M. Okumura, O. Canciglieri Junior, Engenharia Simultânea e Desenvolvimento Integrado de Produto Inclusivo: Processo de Desenvolvimento Integrado de Produtos orientados para Tecnologia Assistiva, OmniScriptum GmbH & Co. KG (NEA), Saarbrücken, 2014.
[17] Coordenadoria Nacional para Integração da Pessoa Portadora de Deficiência – CORDE/SEDH/PR, Convenção sobre os Direitos das Pessoas com Deficiência, SISCORDE, Brasília, 2007.
[18] UNIT-ISO 9999, Norma Internacional ISO 9999:2007, Productos de apoyo para personas con discapacidad – Clasificación y terminología, La traducción de AENOR en la Norma UNE-EN ISO 9999:2007, Comité General de Normas, 2008.
[19] O. Canciglieri Junior, M.L.M. Okumura, R.I.M. Young, The Application of an Integrated Product Development Process to the Design of Medical Equipment, in: J. Stjepandić, N.J.C. Wognum and W. Verhagen (eds.), Concurrent Engineering in the 21st Century, Springer International Publishing, Switzerland, 2015.
[20] M. Asimov, Introduction to Design, Prentice-Hall, Englewood Cliffs, 1962.
[21] L.B. Archer, A View of the Nature of Design Research, in: R. Jacques and J.A. Powell (eds.), Design: Science: Method, IPC Business Press Ltd., Guildford, 1981, pp. 30–47.
[22] W.D. Cain, Engineering Product Design, Business Books, London, 1969.
[23] P. Kotler, Marketing Decision Making: A Model-building Approach, Holt, Rinehart & Winston, London, 1974.
[24] G.A. Bomfim, K.D. Nagel and L.M. Rossi, Fundamentos de uma metodologia para desenvolvimento de produtos, COPPE/UFRJ, Rio de Janeiro, 1977.
[25] G. Pahl and W. Beitz, Engineering Design: a Systematic Approach, Springer Verlag, Berlin Heidelberg, 1996.
[26] G. Bonsiepe, Teoría y práctica del diseño industrial, Gustavo Gili, Barcelona, 1978.
[27] E. Barroso Neto, Desenho Industrial: Desenvolvimento de Produtos. Oferta Brasileira de Entidades de Projeto e Consultoria, CNPq, Coordenação Editorial, Brasília, 1982.
[28] L. Crawford, Project Performance Assessment, Masters in Project Management Course, 10th-15th June, 2002, Paris, France, UTS/ESC-Lille.
[29] G. Bonsiepe, P. Kellner and H. Poessnecker, Metodologia experimental, CNPq, Brasília, 1984.
[30] VDI-Richtlinie 2221, Methodik zum Entwickeln und Konstruieren technischer Systeme und Produkte, VDI Verlag, Düsseldorf, 1993.
[31] M.M. Andreassen, L. Hein, Integrated Product Development, IFS/Springer Verlag, London, 1987.
[32] N.P. Suh, Principles of Design, Oxford University Press, New York, 1990.
[33] G. Vincent, Managing new-product development, Van Nostrand Reinhold, New York, 1989.
[34] K.B. Clark and T. Fujimoto, Product development performance: strategy, organization and management in the world auto industry, Harvard Business Press, Boston, 1991.
[35] S. Pugh, Total Design: integrated methods for successful product engineering, Addison-Wesley, Wokingham, 1991.
[36] S.R. Rosenthal, Effective product design and development: How to cut lead time and increase customer satisfaction, BOI, Illinois, 1992.
[37] D.G. Ullman, The Mechanical Design Process, McGraw-Hill, New York, 1995.
[38] S.C. Wheelwright, K.B. Clark, Revolutionizing Product Development: Quantum Leaps in Speed, Efficiency, and Quality, Free Press, New York, 1992.
[39] R. Cooper, Stage-gate Systems: a new tool for managing new products, Business Horizons, May/June 1988, pp. 63-73.
[40] B. Bürdek, Diseño: Historia, teoría y práctica del diseño industrial, Gustavo Gili, 1994.
[41] D. Clausing, Total quality development: a step-by-step guide to world-class Concurrent Engineering, ASME, New York, 1994.
[42] D. Schulmann, O Desenho Industrial, Papirus, Campinas, 1994.
[43] N.F.M. Roozenburg and J. Eekels, Product Design: Fundamentals and Methods, John Wiley & Sons, Chichester, 1995.
[44] V. Hubka and W.E. Eder, Design Science: introduction to the needs, scope and organization of Engineering Design Knowledge, Springer-Verlag, London, 1996.
[45] P. Dickson, Marketing Management, 5th ed., The Dryden Press, Fort Worth, 1997.
[46] E. Magrab, Integrated Product and Process Design and Development: the product realization process, CRC Press, Boca Raton, 1997.
[47] B. Prasad, Concurrent engineering fundamentals, Prentice Hall, New Jersey, 1997.
[48] R. Cooper, S.J. Edgett and E.J. Kleinschmidt, New Problems, New Solutions: Making Portfolio Management more Effective, Research Technology Management, Vol. 43, No. 2, 2000, pp. 18-33.
[49] P.C. Kaminski, Desenvolvendo produtos com planejamento, criatividade e qualidade, LTC, Rio de Janeiro, 2000.
[50] M. Baxter, Projeto de produto: guia prático para o design de novos produtos, tradução Itiro Iida, 2nd rev. ed., Edgard Blücher, São Paulo, 2001.
[51] R. Cooper, Winning at New Products: accelerating the process from idea to launch, 3rd ed., Perseus, Cambridge, 2001.
[52] J.M. Morgan and J.K. Liker, The Toyota Product Development System, Productivity Press, New York, 2006.
[53] V-Modell XT, version 1.3, ftp://ftp.heise.de/pub/ix/ix_listings/projektmanagement/vmodell/V-ModellXT-Gesamt-Englisch-V1.3.pdf, Accessed: 30.03.2015.
[54] R.C. Beckett, Functional system maps as boundary objects in complex system development, Int. J. Agile Systems and Management, Vol. 8, No. 1, 2015, pp. 53–69.
[55] Khadi and Village Industries Commission, Scheme For Product Development, Design Intervention and Packaging – PRODIP, Circular, Jharkhand/158/2007-08.
[56] G. Pahl, W. Beitz, J. Feldhusen and K. Grote, Projeto na engenharia: fundamentos do desenvolvimento eficaz de produtos, métodos e aplicações, trad. H.A. Werner, 6th ed., Edgard Blücher, São Paulo, 2005.
[57] C.M. Crawford and C.A. Di Benedetto, New Products Management, 8th ed., McGraw-Hill/Irwin, Homewood, 2005.
[58] H. Rozenfeld, F.A. Forcellini, D.C. Amaral, J.C. Toledo, S.L. Silva, D.H. Alliprandini and R.K. Scalice, Gestão de desenvolvimento de produto: Uma referência para a melhoria do processo, Saraiva, São Paulo, 2006.
[59] I. Sommerville, Software Engineering, 9th ed., Addison-Wesley, Boston, 2010.
[60] K.T. Ulrich and S.D. Eppinger, Product Design and Development, 5th ed., Irwin McGraw-Hill, New York, 2011.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-134

Managing Stakeholder Voices for the Development of a Novel Device for the Elbow and Forearm Rehabilitation

Aline Marian CALLEGARO1, Raffaela Leane ZENNI TANURE, Amanda Sória BUSS, Carla Schwengber ten CATEN, and Márcia Elisa SOARES ECHEVESTE
Product and Process Optimization Laboratory/Graduate Program of Industrial Engineering, Federal University of Rio Grande do Sul

Abstract. This study aims to manage requirements from critical stakeholders for the development of a novel device for elbow and forearm rehabilitation using the Customer Value Chain Analysis (CVCA) and Quality Function Deployment (QFD) tools. Results are described in accordance with the requirements engineering process adapted to this case: (i) elicitation: the requirements come from primary and secondary sources supported by the CVCA application; (ii) analysis: requirements were identified and prioritized by means of the QFD tool (quality, product and part characteristics matrices). The association of CVCA with QFD is an innovative and successful approach for mapping critical stakeholders to identify and prioritize requirements.

Keywords. Rehabilitation device, Customer Value Chain Analysis, Quality Function Deployment

Introduction

The innovation process has often been represented as a linear process which funnels customer needs through business and process filters. This method may be appropriate for some consumer products, but the traditional innovation funnel approach has inherent limitations in the medical device industry. Different stakeholders should have their voices heard throughout the innovation process. Each stakeholder has diverse and unique needs, and the needs of one may strongly affect the needs of another. The relationships between stakeholders may be tenuous [1]. Before developing any system, one must understand its requirements, the goals of what is being designed, and how it can support the goals of the individuals or businesses that will pay for the product/system. The product development process should be formalized, clarifying the product, process and resource requirements [2, 3, 4].
Most models of product development present an identification step for ideas and opportunities [5], followed by concept development and detailed design up to launch and the monitoring of the product or service in the market [6]. Discontinuation and recycling mark the end of the life cycle.

1 Corresponding Author, E-mail: nimacall@gmail.com.

Thus, the requirements analysis of different stakeholders is an essential step in the innovation process of medical devices. Inherent difficulties are present in the process of discovering and identifying stakeholders and their needs, as well as in documenting these in a form that is amenable to analysis, communication, and subsequent implementation [7]. Based on this, tools such as surveys, CVCA and QFD can be used to assist in the process. Customer Value Chain Analysis (CVCA) is a strategic and tactical tool, implemented from the organization's business model. It establishes a value map in the product definition phase: a comprehensive identification of the relevant stakeholders, their relationships with each other, and their role in the product life cycle. The output of a CVCA application can be the input for other tools such as Quality Function Deployment (QFD) [8]. QFD was proposed to collect and analyze the voice of the customer, in order to develop products with higher quality that meet or surpass stakeholder needs. The primary functions of QFD are product development, quality management, and customer needs analysis. The requirements are deployed throughout the product development process. Customer needs analysis is always the very first step of a QFD process. Essentially, there is no definite boundary to QFD's potential fields of application. It has been expanded to wider fields such as design, planning, decision-making, engineering, management, teamwork, timing, and costing. Quality management and product development are also fields for QFD applications [9]. Considering this context, the association of CVCA with QFD can assist innovation and the consequent creation of value for health products. The CVCA tool allows process mapping, helping in understanding the business unit and the product value chain and in identifying critical stakeholders [8], while the QFD method assists the quantification of requirements to align concepts and resources, increasing the team's ability to recognize the diverse requirements of the product and the priorities that define the product. Based on this, a research question was formulated: would it be possible to identify and prioritize requirements for the development of a novel device for upper limb rehabilitation using CVCA associated with the QFD tool, managed by the requirements engineering process? Thus, this paper aims to present the management of requirements from critical product value chain stakeholders for the development of a novel device for upper limb rehabilitation. The structure comprises the following sections: (i) methodology, (ii) findings, and (iii) conclusions.

1. Methodology

A model associating the CVCA tool [8] with the adapted QFD tool [10] was developed to analyze the product value chain stakeholders, identify their needs, and analyze and prioritize their requirements (see Figure 1).

Figure 1. Model integrating CVCA with QFD. Source: primary.
The CVCA tool was used to carry out the value chain analysis and identify the product value chain stakeholders. The CVCA tool has seven stages: (i) define the initial business model and its assumptions; (ii) delineate the parties involved with the product; (iii) determine how the parties relate; (iv) identify the relationships between the parties, defining the flows between them; (v) analyze the resulting CVC (customer value chain) to determine the critical customers and their propositions; (vi) include the information in the PDA (Product Definition Assessment); and (vii) use the results of the CVCA in the product. The CVCA's seventh stage consists of using the results of the value network. The first five steps, related to the customer's value chain, were used in this study [8]. The QFD tool was deployed in three matrices: the quality matrix, the product matrix, and the part characteristics matrix [11]. The results of the CVCA and QFD applications are described in accordance with the requirements engineering process adapted to this case: elicitation, analysis, documentation [2].

2. Findings

A conceptual model integrating the CVCA tool with the QFD tool was developed for the collection and prioritization of the requirements for the development of a novel rehabilitation device. Results are presented in three subsections: elicitation, analysis, documentation.

2.1. Elicitation

Requirements came from primary and secondary sources. The primary sources include interviews with product value chain stakeholders: patients, graduate students in the product development area, experts, and representatives of companies operating in the area of medical products. The critical customers identified by the CVCA answered a qualitative questionnaire and later filled out a quantitative questionnaire. Data collection was done as shown in previous studies published by the research team [11, 12]. Other requirements came from (i) a focus group with fifteen health professionals: physical therapists and occupational therapists who have worked with CPM machines for elbow and/or forearm rehabilitation, and (ii) a brainstorming session with the research team, engineers and graduate students of biomedical and industrial engineering; and from secondary sources: (iii) a qualitative study with physical therapists, patients and physicians who have worked with or used similar machines for upper and lower limb rehabilitation [12], a benchmarking study [13] and a literature review [14], as well as other literature studies, standards and laws.

2.2. Analysis

First of all, the CVCA was applied and the critical stakeholders were identified (see Figure 2). The critical stakeholders identified were clinical, product, process and reliability engineering; project and product managers; the financial sector; and quality system and regulatory affairs, internal and external to the organization. Physical therapists, occupational therapists, physicians and patients are related to clinical engineering and regulatory affairs. Thus, the researchers also considered these users as critical stakeholders and highlighted that they must necessarily be considered directly in the beginning stage of the project. The stakeholder needs were collected; attributes were deployed and prioritized; requirements were understood, and their overlaps, conflicts and prioritization were handled by means of the QFD tool [15].
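The prioritization just mentioned follows the standard QFD quality-matrix calculation: each quality characteristic's priority is the importance-weighted sum of its relationship strengths with the demanded-quality items. A minimal sketch with hypothetical numbers and labels (not data from this study):

```python
# Minimal sketch of QFD quality-matrix prioritization (hypothetical data).
# Demanded-quality items carry stakeholder importance weights; cells use
# the usual 9/3/1 relationship scale.
importance = {"patient safety": 5, "easy to use": 4, "low weight": 3}

relationships = {  # demanded quality -> {quality characteristic: strength}
    "patient safety": {"number of risk points": 9, "reliability level": 9},
    "easy to use":    {"assembly/usage facility": 9, "reliability level": 3},
    "low weight":     {"equipment weight limit": 9},
}

priority: dict[str, float] = {}
for dq, weight in importance.items():
    for qc, strength in relationships[dq].items():
        priority[qc] = priority.get(qc, 0.0) + weight * strength

for qc, score in sorted(priority.items(), key=lambda kv: -kv[1]):
    print(f"{qc}: {score}")
# reliability level: 57.0, number of risk points: 45.0,
# assembly/usage facility: 36.0, equipment weight limit: 27.0
```

The resulting ranking is what the quality matrix below reports for the real stakeholder data.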
Three QFD matrices were taken into account for this study: the quality, product, and part characteristics matrices.

Figure 2. CVCA output – parties related, flows and customer value chain analysis [16].

2.3. Documentation

The CVCA output was written as a business model, and the critical customers were highlighted. The identified attributes and requirements were deployed in matrices with their quantitative prioritization.

• Demanded Quality Deployment: the demanded quality survey answered by the critical customers identified in the CVCA output showed that the primary attributes, in order of relative importance for the product, were ergonomics, functions, aesthetics, handling, material, and components/elements. The secondary-level attributes for each primary attribute were: ergonomics – effective performance, patient and operator safety, anthropometric adjustment, and patient's comfort; functions – simple and intuitive interface, possible physiological amplitude, multiple functions, applicable to various joints; aesthetics – compact and portable, organic design, innovative, discrete; handling – easy to assemble, install, configure, adjust and use, easy to transport (accessory), dismountable, and easy to store; material – resistant to the conditions of use and maintenance, soft, breathable and non-allergenic surface for skin contact, trustworthy, easy to clean/asepsis; components/elements – low weight of the equipment, safe components, reduced maintenance (not requiring specific technical care), guaranteed replacement parts.

• Quality matrix: the application of the quality matrix showed the need to prioritize the quality characteristics in the following decreasing order: material percentage (%); anthropometric adaptation (centimeters); reliability level of the system and movements (%); degrees of flexibility of the transportation system (%); modular systems (number of parts); lifetime (years); level of maintainability (%); level of facility of assembly, installation, configuration, adjustment and usage (%); applicability to various joints of the body (number of joints); compact size (volume: centimeters × centimeters × centimeters); weight limit (grams); storage facility (%); number of risk points (number); effective performance index (%); quality standards (%); application of comfortable material (%); range of motion (degrees); reliability level of the material (%); compatibility with other devices (%); warranty on replacement parts (number of parts); possibility of assistive, active and resistive movements (number of parameters); resistance to cleaning products (%); understated look (%); and kind of innovation (radical or incremental).

• Product matrix: the application of the product matrix showed the need to prioritize the product parts in the following decreasing order: arm support, forearm support, support shaft, joystick, support base, mechanical system, electronic system, and software.

• Part characteristics matrix: after the deployment and prioritization of the parts, the part characteristics matrix was filled in and the highest-priority parts were crossed with their quality characteristics. Thus, it was possible to identify which characteristics must be controlled in the critical parts to provide the product quality.
Through the part characteristics matrix, the need was observed to prioritize the product part characteristics in the following descending order: arm support dimensions (centimeters), shoulder angle adjustment (degrees), forearm support anthropometric adjustment, arm and forearm support congruence, shaft thickness (millimeters), height adjustment, support shaft angle (90°), joystick dimensions (centimeters), support base leveling, mechanical system operating parameters, electronic system operating parameters, arm support weight (grams), arm support anthropometric adjustment, forearm support dimensions (centimeters), forearm support weight (grams), support shaft dimensions (centimeters), support shaft weight (grams), programming flexibility (1 to 10), joystick weight (grams), support base dimensions (centimeters), support base weight (grams). The study findings are in accordance with another study, which considers that information about each individual patient's characteristics and an understanding of the interactions between the device and the patient's anatomy/function are essential to ensure safe device design for the majority of medical applications [17]. Another similar study showed a spiral innovation process for the development of a medical device which considers three distinct stakeholder voices: the voice of the customer, the voice of the business and the voice of the technology. The process presented is a case study focusing on the front-end redesign of a class III medical device for an orthopedics company. Starting from project initiation and scope alignment, the process describes four phases – discover, envision, create, refine – and concludes with a value assessment of the final design features [1]. While that study reports a front-end redesign, the results of our study report technology inputs for the development process of a new medical device. After identifying the critical customers in the business model resulting from the CVCA application, open and closed questionnaires were applied to all parties involved in order to survey and prioritize their requirements. The results from this study (primary source) were considered in association with other requirements from secondary sources such as previous literature, regulations/legislation, and previous studies published by the research team. This innovative way of collecting stakeholder requirements is more complete compared to approaches that only consider what patients and clinicians want according to the literature [18]. Patients and clinicians are critical product lifecycle stakeholders, but the perceptions of all the stakeholders identified by the CVCA application should also be considered, analyzed and prioritized. The combination of strong visibility of the different stakeholders is essential. No single individual or discipline alone has the ability to create, develop, and implement an effective solution successfully. Different areas need to work together [19]. As marketers are regularly involved in understanding customers and their needs, voice-of-the-customer research is sometimes confined to marketing departments. However, it is becoming more prevalent for research teams to be multidisciplinary in order to take advantage of the diversity of skills and perspectives available within an organization. It is important to maintain a broad perspective and not limit the focus.
The initial approach of listening to the stakeholders, or customers, immediately posed the question: "who are the stakeholders?" [1]. Previous studies identified different product value chain stakeholders when the CVCA was used in association with QFD, compared to the application of QFD alone. The results of the association between the QFD and CVCA tools had an effect on the stakeholder definition and requirements elicitation phases in the development of a CPM device for elbow and forearm rehabilitation [13, 14]. Considering these previous results, this study presented a successful application of the CVCA tool associated with the QFD tool, describing the results in accordance with the requirements engineering process for the development of a novel device for elbow and forearm rehabilitation. The elicitation step showed that many requirements identified in the interviews and focus group were similar to those identified by the critical stakeholders from the CVCA application. Most of the repeated requirements were identified by means of interviews with experts. Two of the repeated requirements came from graduate students who work in the product development area. According to this study, the documentation allowed the definition of the main part characteristics that should be considered in the requirements list to be used in the device development process. Setting up the requirements is a step that starts at the beginning of the product development process. Those requirements that are absolutely necessary in order to proceed to the next working step need to be documented at the beginning of the design process. The contents of a requirements list therefore depend on the state of the product design and the stage of the design process. The list has to be continuously amended and extended. Managing requirements lists in this way avoids having to deal with questions and requirements before they can be adequately answered and specified [3].

3. Conclusion

The association of the CVCA tool with the QFD tool is an innovative approach for considering the different product value chain stakeholders in order to identify and prioritize requirements, product parts and their characteristics, managed by requirements engineering. The identified and prioritized product parts and their characteristics should be considered in the development process of the novel device for elbow and forearm rehabilitation, the case of this study. Future studies should present the product development process steps in association with requirements engineering, allowing the visualization of updates to the list of requirements.

References

[1] F.J. Ana, K.A. Umstead, G.J. Phillips, C.P. Conner, Value driven innovation in medical device design: a process for balancing stakeholder voices, Annals of Biomedical Engineering 41 (2013), 1811–1821.
[2] I. Sommerville, Integrated Requirements Engineering: a tutorial, in: IEEE Computer Society, 2005, pp. 16-23.
[3] S. Wiesner et al., Requirements Engineering, in: J. Stjepandić et al. (eds.), Concurrent Engineering in the 21st Century, Springer International Publishing, Switzerland, 2015, pp. 103-132.
[4] D. Chang, C.H. Chen, Understanding the Influence of Customers on Product Innovation, International Journal on Agile Systems and Management, Vol. 7, Nos. 3/4, 2014, pp. 348–364.
[5] G. Pahl, W. Beitz, J. Feldhusen, K.H.
Grote, Engineering design: a systematic approach, 3rd ed., Springer, London, 2007.
[6] Â.M. Marx, I.C. Paula, Proposta de uma sistemática de gestão de requisitos para o processo de desenvolvimento de produtos sustentáveis, Produção 21 (2011), 417–431.
[7] B. Nuseibeh, S. Easterbrook, Requirements engineering: a roadmap, in: Proceedings of the Conference on the Future of Software Engineering, New York, 2000, pp. 35–46.
[8] K.M. Donaldson, K. Ishii, S.D. Sheppard, Customer Value Chain Analysis, Research in Engineering Design 16 (2006), 174–183.
[9] L.-K. Chan, M.-L. Wu, Quality function deployment: a literature review, European Journal of Operational Research 143 (2002), 463–497.
[10] J.L.D. Ribeiro, M.E. Echeveste, Â.M.F. Danilevicz, A utilização do QFD na otimização de produtos, processos e serviços, FEENG, Porto Alegre, 2001.
[11] A.S. Buss, A.M. Callegaro, R.L.Z. Tanure, C.A. Monteiro, M.E.S. Echeveste, Utilização do método QFD no desenvolvimento de um equipamento para a reabilitação do cotovelo e antebraço, in: Annals of the XXXII National Meeting of Industrial Engineering, Bento Gonçalves, 2012.
[12] A.M. Callegaro, C.S.t. Caten, C.F. Jung, J.L.D. Ribeiro, Percepção de fisioterapeutas, médicos e pacientes sobre equipamentos de movimentação passiva contínua – CPM, in: XVIII SIMPEP – Symposium of Industrial Engineering, Bauru, 2011.
[13] A.M. Callegaro, C.F. Jung, C.S.t. Caten, Análise funcional e operacional de equipamentos de Movimentação Passiva Contínua para a reabilitação do cotovelo e antebraço, in: Annals of the 8th Brazilian Conference of Product Management and Development, Porto Alegre, 2011, pp. 1-11.
[14] A.M. Callegaro, C.F. Jung, C.S.t. Caten, Uma síntese sobre o desenvolvimento de Equipamentos para Movimentação Passiva Contínua como contribuição a futuras pesquisas, in: Annals of the 8th Brazilian Conference of Product Management and Development, Porto Alegre, 2011, pp. 1-12.
[15] R.L.Z. Tanure, A.M. Callegaro, A.S. Buss, Differences between Quality Function Deployment application and its association to Customer Value Chain Analysis to development of the CPM equipment, in: Annals of the XVIII International Conference on Industrial Engineering and Operations Management, Guimarães, 2012, pp. 1-10.
[16] R.L.Z. Tanure, A.M. Callegaro, A.S. Buss, I.C. Paula, Identification of critical customers: differences between the applications of QFD and CVCA methods for the development of a Continuous Passive Motion equipment, in: Annals of the 4th World Conference P&OM, Amsterdam, 2012, pp. 1-10.
[17] C. Capelli, G. Biglino, L. Petrini et al., Finite element strategies to satisfy clinical and engineering requirements in the field of percutaneous valves, Annals of Biomedical Engineering 40 (2012), 2663–2673.
[18] J.H.M. Bergmann, H. McGregor, Body-worn sensor design: what do patients and clinicians want?, Annals of Biomedical Engineering 39 (2011), 2299–2312.
[19] Y. Yazdi, S. Acharya, A new model for graduate education and innovation in medical technology, Annals of Biomedical Engineering 41 (2013), 1822–1833.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-142

Mechanisms of Dependence in Engineering Projects as Sociotechnical Systems

Bryan MOSERa,1, William GROSSMANNb and Phillip STARKEc
a Massachusetts Institute of Technology
b Virginia Polytechnic Institute and State University
c Technical University of Munich

Abstract. With coordination defined as the management of dependencies, complex engineering projects are well coordinated when teams are aware of and able to respond to demands for interaction across product and organization systems. Classic representations of dependence in project management standards and practices emphasize sequence: a single-dimensional consequence of dependence. However, the underlying mechanisms which drive dependence remain assumed or hidden, preventing analysis of their systemic consequences for scope, quality, schedule, and cost. This paper begins with a review of dependence as viewed commonly in systems engineering and project management. Building on our recent work to consider engineering projects as sociotechnical systems, we propose dependence characteristics which more meaningfully capture underlying project activity dynamics. Mechanisms are proposed for dependence which are satisfied by the interplay of demand for interaction and the supply of coordination. Attention allocation and exception handling behaviors in the project organization influence the extent of local satisfaction of dependence. Project architectural characteristics lead to emergent and systemic impacts on cost, schedule, and quality. Our next step in this research is introduced: the instrumentation of teamwork experiments to observe and validate the demand for and satisfaction of dependencies by project teams during complex project execution.

Keywords. Project Design, Complexity Management, Complex Dependencies, Interdependence, Concurrent Engineering, Program Management

Introduction

In response to the increasing sociotechnical complexity of engineering projects, an emphasis has been placed on standards and practices for systems engineering and project management. While there have been successful improvements, examples of failure in large engineering projects are sufficiently alarming to focus attention on the efficacy of these practices. We have argued that today's engineering environment of dispersed teams, concurrent work, and increasingly complex subsystems has led to a decline in teams' abilities to anticipate and respond to needed coordination. Information capture, archive, and search have become abundant and inexpensive, yet performance has not necessarily improved. Wasteful attention to information of little value can itself create risk of poor quality, schedule delays, and budget overruns. Surprisingly, existing project management (PMBOK, Prince2, P2M, etc.) and systems engineering (SEBOK) standards treat task dependence in a narrow way – as sequence. Often the activities driven by dependence are characterized as overhead or "soft" activities, with only a few consequences of dependence treated through network or topological analyses on top of these narrow representations. Yet we continue to see that activity driven by dependence in engineering can reach up to half of a project's attention, often in unexpected positions and timing across a project.

1 Corresponding Author, E-Mail: bry@mit.edu
Thus the existing handling of dependence, which assumes no, little, or uniform coordination, is inaccurate, and simply calling for better coordination is a truism. Instead we consider the design of an engineering project to reflect – within limited capabilities and capacities – when, where, and why coordination to satisfy dependence is valuable and feasible. In this paper we begin by exploring an improved definition of dependence suitable to capture the dynamics of complex engineering systems projects.

1. Context: Instability of Experience and System Complexity lead to Surprises

What we produce and how we work combine as a sociotechnical system in which products, processes, and people interact and evolve. Over time, if system characteristics and work styles align, then improved performance is promoted and predictable. If a product system changes quickly and work styles are unable to adapt, the relevance of past practices diminishes. Likewise, if ways of working change quickly without consideration of the systems produced, performance may also be at risk. In an environment where both ways of working and the complexity of systems change simultaneously, the emergent performance characteristics of an engineering project become quite difficult to anticipate. For this reason the thought leaders of scientific management a century ago promoted standard work and the reduction of variation in both parts and people. Teams in the past commonly worked on new product systems located in the same buildings, corporation, and work culture. Development phases were more likely to have been sequential, and the workforce was likely to be experienced with the previous generation of the product. Relationships – both formal and informal – existed amongst teams over time and across multiple projects. In contrast, recent complex engineering development projects are marked by meaningful changes in how product, process, and organization architectures are organized and linked. These characteristics of recent projects are beset by fundamental trends which exacerbate the instability and complexity leading to surprise. In this paper we view interactions across boundaries as activity, with ability and experience relevant to performance. Since demand for coordination changes in a sociotechnical system due to the overlap of dependence and teams, instability in product and teaming naturally leads to changed patterns of demanded interaction. We argue, consistent with Nonaka [1], that interaction patterns over a career are an aspect of tacit knowledge. Organizations may be highly capable of coordinating given past experience, through both formal and informal relationships across the system and the organization. The teams become experienced and able to recognize critical interactions in tradeoff with all of the demands on their time. We witnessed the contrary in a recent multi-billion dollar aerospace initiative, cancelled after years of planning, designing, and engineering across a highly distributed organization and product system. A lead engineer lamented that the performance of teams was lacking, even though requirements had been carefully mapped, interfaces listed, and work packages defined and assigned. In her words, the teams had no "feeling for the dependencies." The automated I.T. workflow, meant to reinforce attention to critical matters, may have had the opposite effect, instead preventing the development of tacit knowledge and experience at uncertain interactions. The teams were focused on
their own work and their side of each interface, rather than the back-and-forth interactions across dependencies.

2. Related Work

Literature on dependencies within engineering projects can mainly be found in the domains of portfolio management, process management and the study of team performance. In all of these fields much is based on the work of Thompson [2] on organizational dependencies. It is commonly stated that project activities are dependent due to resource sharing and other factors such as time constraints, project outcomes or risk profiles [3]. Understanding and managing the dependencies between projects is considered to be a critical issue [4]. Dependence of specific tasks is often discussed centered on improving product development processes [5]. The links are shown to be a result of work outputs that are passed from an upstream to a downstream task, thus making it an issue of task sequence. Only a few see task dependencies as a broader concept in need of further investigation beyond precedence, functional, and probabilistic dependencies [6], [7]. Task dependence is also central in many works concerned with team performance [8]. It is generally stated that high task dependence in teams produces positive effects [9]. Research by Shea and Guzzo [10], for instance, suggests that task dependence positively influences team efficacy, while Campion [11] finds that it correlates with employee satisfaction. Dependence also plays a large role in interpersonal relationships [12]. Thus, insights gained by analyzing the psychology and sociology of interpersonal relationships can be adapted and applied to the study of dependencies in product development. Two main theories serve as a basis for the research of dependence in interpersonal relationships: Interdependence Theory [13] and the theory of Cognitive Interdependence [14].

2.1. Dependencies in Textbooks and Project Management Standards

Tasks are the lowest-level unit of activity, an atomic element in models of projects. Dependence modeled as precedence is therefore a relationship amongst milestones [15], [16]. A task dependent on another task is characterized by precedence constraints, e.g. Finish to Start (FS). These commonly used practices also categorize dependence as Internal vs. External (do the two activities fall within or across some defined system or boundary?) and Hard vs. Soft (can the dependence change within the horizon of the project or not?). We note that the mathematical relationship describing the dependence contains very little information: only the expected schedule consequence, rather than any of the coupled characteristics of the activities. Importantly, the underlying driver of dependence – the essential meaning – is not expressed.

2.2. Dependence as Process

The Critical Path Method (CPM) and PERT were born of industrial and military projects in the 1950s. Dependence in these models is taken as discrete sequence constraints amongst tasks. PERT differs in that uncertainty in duration is considered. Typically these models are used to control critical elements of development. PERT and IDEF, task-based charting methods, model the process of tasks. As in CPM, tasks are related through time-based relations such as precedence constraints.
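How little the sequence view encodes can be seen in a minimal critical-path sketch (ours, purely illustrative; the task names and durations are hypothetical): the only information a Finish-to-Start dependence carries is which tasks must finish before another may start.

```python
# Minimal CPM forward pass over Finish-to-Start dependencies (illustrative).
durations = {"design": 10, "prototype": 6, "test": 4, "document": 3}  # days
predecessors = {"design": [], "prototype": ["design"],
                "test": ["prototype"], "document": ["design"]}

earliest_finish: dict[str, int] = {}

def finish(task: str) -> int:
    """Earliest finish time: all FS predecessors must finish before start."""
    if task not in earliest_finish:
        start = max((finish(p) for p in predecessors[task]), default=0)
        earliest_finish[task] = start + durations[task]
    return earliest_finish[task]

print(max(finish(t) for t in durations))  # 20: design -> prototype -> test
```

Nothing in this model says why "prototype" needs "design", how much of it, or what coordination the need demands; those are exactly the hidden mechanisms this paper addresses.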
Rather than a Gantt-like display against calendar time, both PERT and IDEF portray the flow of tasks as a network. Each task is a box, with dependencies shown as lines connecting the boxes. Other modeling languages (IDEF, UML, SysML, OPM) and tools have since emerged. Enterprise architectures, such as PERAM, CIMOSA, and GERAM, are used to represent a total product development process [17]. The design structure matrix (DSM) is an N-squared matrix which emphasizes the dependencies and their pattern across a set of functions or tasks. Steward published a description of DSM in 1981 [18], with the tasks as the elements along the rows and columns of the matrix. The problem addressed by DSM is to find a sequence of the tasks which avoids delay, rework, or poor quality. In some cases a subset of tightly coupled tasks is partitioned, acting as a block which as a whole satisfies the desire to have all dependencies in the lower-left triangle. Dependence has also been defined as a demand in one activity for information from/in another activity. Traditionally one might refer to the "source" of the information as the upstream activity, and the demanding "sink" activity as downstream. In a network or PERT diagram which shows activities on nodes and arrows as dependence, the arrow can be considered a pipe for the directional flow of needed information, from upstream to downstream. A measure of the dependency strength (or depth) can be represented as the amount of information available upstream that is needed downstream [6].

2.3. Coordination as the Management of Dependence

Clark and Fujimoto [19], in their characterization of the development process as a "system of interconnected problem solving cycles", observe that in theory shorter lead times can be achieved through "Integrated Problem Solving". Integrated problem solving increases the dependency amongst tasks, with increased overlap and mutual communication. However, case study data showed that the shorter lead times expected theoretically from such concurrency are difficult to realize. When teams work on dependent activities they may need to coordinate. "Different types of coordination result from different kinds of dependencies, which in turn are dependent upon different kinds of products, services, actors, work specializations, efforts, tasks and purposes." [20] Malone and Crowston, after a review of many related terms, defined coordination as "the management of dependencies" [21].
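Before moving to our definition, the DSM sequencing problem described in Section 2.2 can be made concrete with a small sketch (our illustration; the tasks are hypothetical) that reorders a binary DSM so that, in the acyclic case, all dependencies fall below the diagonal:

```python
import numpy as np

# Hypothetical binary DSM: entry (i, j) = 1 means task i depends on task j.
tasks = ["test", "design", "prototype"]
dsm = np.array([[0, 1, 1],   # test depends on design and prototype
                [0, 0, 0],   # design depends on nothing
                [0, 1, 0]])  # prototype depends on design

def sequence(dsm: np.ndarray, tasks: list[str]) -> list[str]:
    """Topologically order tasks so dependencies point below the diagonal."""
    remaining = list(range(len(tasks)))
    order = []
    while remaining:
        # pick a task with no unscheduled predecessors (acyclic case only)
        ready = next(i for i in remaining
                     if not any(dsm[i, j] for j in remaining))
        order.append(ready)
        remaining.remove(ready)
    return [tasks[i] for i in order]

print(sequence(dsm, tasks))  # ['design', 'prototype', 'test']
```

A coupled block, in which no task is free of unscheduled predecessors, is what DSM partitioning isolates as a subset to be treated together.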
2.3. Coordination as the Management of Dependence

Clark and Fujimoto [19], in their characterization of the development process as a "system of interconnected problem solving cycles", observe that in theory shorter lead times can be achieved through "Integrated Problem Solving". Integrated problem solving increases the dependency amongst tasks, with increased overlap and mutual communication. However, case study data showed that the shorter lead times expected theoretically from such concurrency are difficult to realize. When teams work on dependent activities they may need to coordinate. "Different types of coordination result from different kinds of dependencies, which in turn are dependent upon different kinds of products, services, actors, work specializations, efforts, tasks and purposes." [20] Malone and Crowston, after a review of many related terms, defined coordination as "the management of dependencies" [21].

3. Definition and Characteristics of Dependence in Engineering Projects

In our previous work we have presented a modelling framework for continuous, concurrent, and mutual dependence [7], [22]. We briefly summarize this previous work and expand the definition to describe the underlying mechanisms of dependence driving the dynamics of engineering projects as sociotechnical systems.

3.1. Dependence as Essential Need to Interact

In project management, systems engineering, computer science, information theory, linguistics and other fields "dependence" is a common term. Here we explore definitions across these different disciplinary uses of the word, beginning with a dictionary definition of dependence: "the quality or state of being dependent; especially: the quality or state of being influenced or determined by or subject to another" [23]. The nature of a system determines the meaning of influence and determination in the definition of dependence amongst elements within the system. The significance of a dependency is that the essence of an element – a task, a system, an organization, a phrase – cannot be realized without the awareness of some other element, which itself exists with a meaning of its own. Therefore, in contrast to definitions of project dependence as task sequence, our definition is tied to a broader view of dependence as need. A meaningful representation should go beyond a measure of consequence and be tied to the root cause of the need. For engineering projects, "dependence is defined as need for interaction that matters – a demand for coordination – so that an activity's outcome is successful" [7].

3.2. Dependence as Demand for Interaction during Concurrent Progress

In addition to a demand for some amount of information, one can also consider the timing of the flow. The information from an upstream task may be needed completely before another task begins, which is the meaning of the classic Finish to Start (FS) dependency. However, the tasks could also be dependent while proceeding in parallel, creating a continuous, concurrent demand for interaction. This continuous, concurrent dependence is common amongst critical design tasks, yet cannot be easily represented in the classic approaches. This same representation of dependence may be subdivided to allow further granularity as stages of concurrent progress (shown in Figure 1). The concurrent progress diagram is divided into sections which each may have a depth and shape to express the need from upstream to downstream. Likewise, in a view of concurrent progress, there may be a demand for interaction through flow of information and resources amongst tasks in both directions.

Figure 1. Concurrent Dependence with Stages from [13].

3.3. Dependence as Exception Trigger

In Figure 2, taken from our recent paper, concurrent and mutual dependency characteristics are shown in one diagram. If the state of progress is in an unconstrained area, and dependencies have been satisfied through coordination, the two tasks at that moment are effectively independent. Finally, our representation of project dependence includes not only dependence as continuous flow, but also the exception handling behavior of the project organization. Other researchers have used network diagram representations of dependence with related exception handling [25]. The concurrent and mutual dependence diagrams treat the "shaded" areas as zones of exceptional activity. Why? If a downstream task proceeds without the needed upstream information, or perhaps the received information is defective, then the progress and quality of the downstream task is at risk. In this way, the dependence, if poorly coordinated, is a source of exception propagation. It is possible that mutual progress is positioned in an area of exceptional activity due to a concurrent dependence that should otherwise be constrained: in a shaded area.
Two dependent tasks which touch and cross the boundary (referred to as the "exception boundary") will trigger errors and exception handling behavior. The dependency in Figure 2 is mutual, driven by need in both directions, from Design to Prototype activity and inversely from Prototype to Design activity (the shaded area in the upper left). Thus the open area – where the two tasks can operate as long as coordination is effective – is a zone of nominal activity. Crossing the boundary into a zone of exceptional activity triggers a demand for the organization's exception handling behavior.

Figure 2. Mutual, concurrent dependence with exception boundaries [7].

4. A Framework of Dependence Mechanisms

4.1. Characteristics of a Practical Dependence Representation

In summary, this representation of project dependence is continuous, concurrent and mutual. A dependence captures a concurrent (and possibly mutual) need for information and results as a continuous function of the progress of activities, generating demands for coordination activity. If a dependence boundary is crossed, exceptional activity is triggered. The dependence is a constraint on independent performance. This constraint is mitigated through effective coordination which satisfies the dependence by addressing an underlying need. Coordination itself is real activity, requiring awareness, attention allocation, abilities, and experience to be well performed. Therefore, coordination takes time, incurs cost, and impacts quality. We have seen across tens of projects that this simple, visual representation becomes readily useful in practical, industrial settings. By examining the mechanisms of dependence more closely, one can analyze the local and systemic impacts of the dependence on project performance. To overcome the limits of the classic view of dependence as sequence, the consequences of coordination (better or worse) should be multi-dimensional: one should forecast the systemic effect of the dependence being satisfied on project cost, schedule, and quality.

4.2. Mechanisms of Dependence

A close view of the mechanisms for satisfaction of dependence is shown in Figure 3 below. Dependence is driven by two types of causes, sources of need we term Flow and Pool causes. A flow cause of dependence is a need for results or information from another task. A pool cause of dependence is a need for a resource shared by another task. Both lead to a demand for interaction.

Figure 3. Mechanisms of Dependence from Cause to System Effects.

Awareness of the dependence and allocation of attention are the major factors influencing how, or whether, any interaction takes place. The volume, timeliness, cost, and quality of the interaction all have consequences regarding the satisfaction of the dependence. Dependency management, or coordination, may influence the demand itself, the awareness and the allocation of attention, as well as the interaction. Many classic dependency management techniques aim at improving the awareness of the dependence (e.g. CPM or DSM) or improving the interaction (e.g. action plans or standardization). The extent to which the dependence is satisfied determines the local effects, which in turn influence the systemic effects. Local effects are the immediate consequences for the tasks (e.g.
delay, costs, and rework) and the individuals (e.g. frustration or establishment of trust), whereas the systemic effects influence the significance of the local effect on product quality, the process as a whole, and the organization. These effects in turn can lead to a change in the remaining demand to interact. If the dependence is fully satisfied, the demand is effectively eliminated and thus no demand to interact remains. If the dependence is only partly satisfied, or not satisfied at all through insufficient interaction, the demand to interact may decrease or – in some cases – increase.

4.3. System Consequences of Timing and Quality of Dependence Satisfaction

Given these characteristics and mechanisms of a project dependency, how do they combine with the behaviors of teams to drive systemic results in cost, schedule, and quality? Below in Figure 4 we show a concurrent dependency with two stages. For the first 20% of the upstream design work the progress of downstream prototypes is prevented; e.g. information from that initial 20% is needed to begin the first prototype. After this point, in a second stage of dependence, as long as designs proceed (at pace) ahead of the progress of prototypes, and coordination ensures transfer of upstream results, then the prototypes can proceed without exception.

Figure 4. Systemic Consequence of Complex Dependence.

A dark dashed line shows a hypothetical mutual progress path, beginning in the lower left. The path shows designs proceeding to about a third of progress before the prototypes begin. Until about a quarter of the prototype scope is completed the mutual progress seems nominal, far from the dependency exception boundary. However, the progress upstream is then reversed (a bug is discovered), and the mutual progress is no longer in the open, unconstrained area but instead placed in the middle of the area of exceptional activity. In order to analyze the cost, schedule, and quality implications of this condition, the behaviors of the teams come into play. The mutual progress from that point, now shown in the middle of the exceptional activity area, is driven by the combination of team behaviors. Have the teams prioritized and paid attention to quality? Is this issue even noticed? If so, how does each team respond and make a decision on how to proceed? Given these teams' attention allocation and exception handling behaviors, the resulting paths could lead to a recovery of quality through rework, at some delay and increased cost. It is also possible that the quality issue is missed or ignored, leading to undetected quality issues in both the Designs and the Prototype. Significantly, in contrast to a one-dimensional analysis of schedule effects such as the critical path method, this representation and the mechanisms of dependence allow one to forecast total results as emergent: a mix of total cost, duration, and quality. Choices by teams amongst many demands on their attention, driven by behaviors and priorities, change the paths of progress within and across tasks. Typically in complex engineering programs the awareness of teams of the systemically meaningful dependencies is challenged and combined with natural limits in capacity and ability.
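The exception boundary of Sections 3.3 and 4.3 can be expressed compactly in code. The following Python sketch is our simplification with hypothetical lead parameters, not the authors' calibrated model: it classifies a joint progress state as nominal or as a crossing into one of the two zones of exceptional activity.

```python
# An illustrative sketch (our simplification, hypothetical parameters): a
# mutual, concurrent dependence expressed as exception boundaries over the
# joint progress of an upstream Design task and a downstream Prototype task.

def zone(design, prototype, downstream_lead=0.20, upstream_lead=0.50):
    """design, prototype: progress fractions in [0, 1].
    downstream_lead: share of design work needed before prototypes may pass.
    upstream_lead: how far designs may run ahead without prototype feedback."""
    if prototype > design - downstream_lead:
        return "exceptional activity in Prototype (needed design results missing)"
    if design > prototype + upstream_lead:
        return "exceptional activity in Designs (prototype feedback missing)"
    return "nominal activity (dependence satisfied through coordination)"

print(zone(0.35, 0.10))  # nominal activity ...
print(zone(0.25, 0.20))  # exceptional activity in Prototype ...
```

A progress path such as the dashed line of Figure 4 could then be replayed point by point; a reversal of upstream progress (the discovered bug) moves the state across the boundary and would, in the authors' terms, trigger the organization's exception handling behavior.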
5. Next Steps and Validation

The validation of our research into engineering projects as sociotechnical systems will require the instrumentation of performance during complex project planning and execution. For this paper's representation of dependence, we will observe projects in progress to test the dependency model's practicality and usefulness. We are preparing a platform to measure the demands on, and the attention of, teams across product, process, and project organization. The responses to dependence will be correlated with local and systemic performance. Also, we have begun a series of experiments to test the effect of increased awareness of concurrent and mutual dependence on the local and systemic performance of the engineering project.

6. Conclusions

Why do some teams, even in the face of complexity, perform with excellence? The situation faced by product development initiatives today includes changes in our products, our teams, and how we work together. Judgment and embedded practices – built on decades of traditions and standards – have lost relevance as new product, process, and team architectures emerge and overlap. This trend will continue; future engineered system projects will continue to increase in complexity, technically and organizationally. We are driven in our thinking and experiments to better understand the nature of teamwork across boundaries, to uncover performance in these complex sociotechnical systems and significantly improve schedule, cost, and quality. We have seen in industrial practice that techniques which rest on traditional planning significantly misrepresent dependence. Coordination activity requires effort, time, and cost, and for modern engineering projects can amount to one-third to half of the total attention of teams. Some assume that coordination is a qualitative rather than a real activity, and therefore do not recognize the limited capacity of teams to both work and interact with others. This paper began with a review of dependence as viewed commonly in systems engineering and project management. Building on recent work to consider engineering projects as sociotechnical systems, we described dependence characteristics which more meaningfully capture underlying project dynamics. We proposed mechanisms for dependence and their satisfaction through the interplay of demand for interaction and the supply of coordination. We have shown how attention allocation and exception handling behaviors in the project organization influence the extent of local satisfaction of dependence. In turn, project architecture leads to emergent and systemic impacts on cost, schedule, and quality. Our next step in this research focuses on the instrumentation of teamwork experiments to observe and validate the demand and satisfaction of dependencies by project teams during complex project execution.

References

[1] I. Nonaka, The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation, Oxford University Press, 1995.
[2] J.D. Thompson, Organizations in Action: Social Science Bases of Administrative Theory, Transaction Publishers, New Brunswick, 1967.
[3] D. Verma and K.K. Sinha, Toward a theory of project interdependencies in high tech R&D environments, Journal of Operations Management, Vol. 20, no. 5, pp. 451–468, 2002.
[4] A. de Maio, R. Verganti, and M. Corso, A multi-project management framework for new product development, European Journal of Operational Research, Vol. 78, no. 2, pp. 178–191, 1994.
[5] T.R. Browning, E. Fricke, and H. Negele, Key concepts in modeling product development processes, Systems Engineering, Vol. 9, no. 2, pp. 104–128, 2006.
[6] A.D. Christian, K.J. Grasso, and W.P. Seering, Validation studies of an information-flow model of design, In: Proceedings of the 1996 ASME Design Engineering Technical Conferences, 1996.
[7] B.R. Moser and R.T. Wood, Design of Complex Programs as Sociotechnical Systems, In: J. Stjepandić, N. Wognum and W.J.C. Verhagen (eds.): Concurrent Engineering in the 21st Century, Springer International Publishing, Switzerland, pp. 197–220, 2015.
[8] S.M. Gully, K.A. Incalcaterra, A. Joshi, and J.M. Beaubien, A meta-analysis of team-efficacy, potency, and performance: Interdependence and level of analysis as moderators of observed relationships, Journal of Applied Psychology, Vol. 87, no. 5, pp. 819–832, 2002.
[9] G. Van der Vegt, B. Emans, and E. Van de Vliert, Effects of interdependencies in project teams, Journal of Social Psychology, Vol. 139, no. 2, pp. 202–214, 1999.
[10] G.P. Shea and R.A. Guzzo, Groups as human resources, In: K.M. Rowland and G.R. Ferris (eds.) Research in Personnel and Human Resource Management, Vol. 5, JAI Press, Greenwich, 1987.
[11] M.A. Campion, G.J. Medsker, and A.C. Higgs, Relations between work group characteristics and effectiveness: Implications for designing effective work groups, Personnel Psychology, Vol. 46, no. 4, pp. 823–847, 1993.
[12] C.E. Rusbult and P.A. Van Lange, Interdependence, interaction, and relationships, Annual Review of Psychology, Vol. 54, no. 1, pp. 351–375, 2003.
[13] H.H. Kelley and J.W. Thibaut, Interpersonal Relations: A Theory of Interdependence, Wiley, New York, 1978.
[14] C.R. Agnew, P.A.M. Van Lange, C.E. Rusbult, and C.A. Langston, Cognitive interdependence: Commitment and the mental representation of close relationships, Journal of Personality and Social Psychology, Vol. 74, no. 4, pp. 939–954, 1998.
[15] S.J. Mantel, Project Management in Practice, 4th ed., Wiley, Hoboken, 2011.
[16] H.R. Kerzner, Project Management: A Systems Approach to Planning, Scheduling, and Controlling, Wiley, Hoboken, 2013.
[17] P. Bernus and L. Nemes, A framework to define a generic enterprise reference architecture and methodology, Computer Integrated Manufacturing Systems, Vol. 9, no. 3, pp. 179–191, 1996.
[18] D.V. Steward, The design structure system: a method for managing the design of complex systems, IEEE Transactions on Engineering Management, no. 3, pp. 71–74, 1981.
[19] K.B. Clark, Product Development Performance: Strategy, Organization and Management in the World Auto Industry, Harvard Business Press, 1991.
[20] R. Müller, Coordination in organizations, In: Cooperative Knowledge Processing, Springer-Verlag, London, 1997, pp. 26–42.
[21] T.W. Malone and K. Crowston, The interdisciplinary study of coordination, ACM Computing Surveys (CSUR), Vol. 26, no. 1, pp. 87–119, 1994.
[22] B. Moser, F. Kimura, and H. Suzuki, Simulation of distributed product development with diverse coordination behavior, In: Proc. of 31st CIRP International Seminar on Manufacturing Systems, 1998.
[23] Merriam-Webster Dictionary, Dependence, 2008. Accessed: 03.06.2015. [Online]. Available: http://www.merriam-webster.com/
[24] H.H. Baligh, R.M. Burton, and B. Obel, Organizational consultant: Creating a useable theory for organizational design, Management Science, Vol. 42, no. 12, pp. 1648–1662, 1996.
[25] Y. Jin and R.E. Levitt, The Virtual Design Team: A computational model of project organizations, Computational and Mathematical Organization Theory, Vol. 2, no. 3, pp. 171–195, 1996.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-152

A Novel Hybrid Multiple Attribute Decision Making Procedure for Aspired Agile Application

Shuo-Yan CHOU a,1, Gwo-Hshiung TZENG b,c, Chien-Chou YU a,2
a Department of Industrial Management, National Taiwan University of Science and Technology, No. 43, Sec. 4, Keelung Road, Taipei 106, Taiwan
b Graduate Institute of Urban Planning, College of Public Affairs, National Taipei University, No. 151, University Road, San Shia, New Taipei City 23741, Taiwan
c Institute of Management of Technology, National Chiao Tung University, No. 1001, Ta-Hsueh Road, Hsinchu 300, Taiwan
1 Corresponding Author; E-mail: sychou@mail.ntust.edu.tw.

Abstract. This study proposes a novel hybrid multiple attribute decision-making (HMADM) procedure to ensure that the aspiration levels of agile application outcomes are achieved. Agile is a concept widely applied by organizations worldwide to enhance their capabilities for better managing software development projects. Agile application requires a pragmatic procedure to handle decision making and continuous improvement over the application life cycle. The proposed procedure evaluates and systemizes the inter-influence effects among agile application factors in the context of an influential network relation map (INRM). The INRM helps managers find routes in making application decisions while determining improvement strategies for implementing the selected decisions toward aspiration levels. A numerical example is used to illustrate the applicability of the proposed procedure. The results show that, by applying the HMADM model, this study can provide a significant foundation to ensure that the best agile application outcomes are reached.

Keywords. Agile, HMADM (hybrid multiple attribute decision making) method, aspiration levels.

Introduction

To respond to the challenges of volatile market competition, rapid technological change, unpredictable customer demands, and shortened product life cycles, many organizations worldwide have adopted, or have considered adopting, agile methods to enhance their capabilities for transforming these challenges into opportunities for business success [1, 2]. However, many studies have noted that agile application is not a simple task; there are a number of interdependent factors/dimensions that can influence its failure or success [3~6]. Organizations must carefully assess their readiness on these factors before pursuing the path of agility [7]. If agile application does not seem appropriate for a project or organization at the outset, it can be applied partially to the most important areas and then
continuously improved and expanded to all areas [8]. Additionally, the improvement of enterprise agility should typically start with a systematic assessment of the business environment and its turbulence factors [9], followed by a long-term strategy directing improvement activities based on the actual need for agility [10]. This study aims to propose a hybrid multiple attribute decision making (HMADM) procedure to model agile application factors/dimensions in the context of an influential network relation map (INRM). This approach provides managers with systematic information for finding a route in making agile application decisions while determining strategies for accomplishing the selected decisions toward aspiration levels through continuous improvement. A numerical example is used to illustrate the applicability of the proposed procedure. The results show that this study can provide a foundation to ensure that the best agile application outcomes are achieved.

The remainder of this study is organized as follows. Section 2 reviews the agile literature in relation to our procedure. Section 3 presents the main elements within the HMADM model and introduces the proposed procedure. Section 4 uses a numerical example to illustrate the application of the proposed procedure. The last section discusses the main findings and draws the conclusions.

1. Literature review

A software development project involves complex tasks based on stakeholders' expectations and requirements that present a high degree of uncertainty. The traditional approach to managing such projects is grounded in the principles of systems engineering. It assumes that requirements are predictable and that, with extensive upfront management effort, outcomes can be managed through periodic controlling and monitoring of performance progress on project tasks in the different phases of the development life cycle [11]. Unlike the traditional process-driven method, the agile method relies on people and their creativity to cope with unpredictable requirements. Additionally, the agile method emphasizes an empowerment, collaboration, and communication style of management and thus facilitates an environment of learning and adaptation, thereby increasing the flexibility and speed with which a team responds to changing situations [1~9, 12]. However, the wide development and application of agile methods does not guarantee improved business effectiveness and competences. There are many prerequisites for agile project success [13~15]. According to Bosghossian [16], to ensure the success of agile software projects, managers must be capable of assessing and evaluating strategic-level resource planning, executive management and project management throughout the development life cycle. Nerur et al. [7] conducted a literature review and concluded that management, organization, process, people, and technology (tools and techniques) are the main components influencing the outcomes of adopting agile methodologies; each component further contains different issues to consider. Chow and Cao [18] conducted a survey of 109 agile projects across 25 countries worldwide using multiple regression techniques and concluded that (a) delivery strategy, (b) agile software engineering techniques, and (c) team capability are three critical success factors for agile software development projects. Based on all the literature discussed above, the factors influencing successful agile application can be organized into four dimensions: (1) organization, (2) people, (3) process, and (4) technology. Each dimension involves respective factors, as shown in Table 1.

Table 1. Critical factors and dimensions for successful agile application.
Organization (D1)
- Management commitment (D11): strong executive support from a committed sponsor and managers [1, 5, 11, 13, 17].
- Organizational environment (D12): cooperative culture, high value placed on face-to-face communication, a universally accepted agile method, a proper agile-style work facility, and leveraging of real-time market knowledge for project development [1, 2, 5, 7, 13, 16].
- Team environment (D13): collocation of the whole team, with a reward system for agility [5, 7, 13, 15, 16, 17].

People (D2)
- Team capability (D21): managers and team members knowledgeable in the agile process, with an adaptive management style, great motivation and the required expertise [3, 5, 7, 15, 16].
- Customer involvement (D22): customers having full authority, strong commitment and presence, and a good relationship with the project team members [5, 7, 13, 15, 17].
- Training and education (D23): people are valued as key assets for sustainable agile software development; sponsors, developers, and users maintain a constant pace of training and education for a high level of competence [5, 7, 15, 17].

Process (D3)
- Project management process (D31): following an agile-oriented process for the management of requirements, project execution, and configuration [3, 4, 5, 11].
- Robustness (D32): using time-boxed development iterations and a strong communication focus, with daily face-to-face meetings to elicit deficiencies, changes and innovations before they actually occur, in order to satisfy the customer through early and continuous delivery of valuable software [4, 5, 6, 9, 15, 17].
- Continuous improvement (D33): tailoring and adjusting specific work practices and process models [3, 4, 5, 6, 7, 9, 13, 14, 17].

Technology (D4)
- Agile software techniques (D41): appropriate technical training of the team for well-defined coding standards up front, pursuit of simple design, rigorous refactoring activities, the right amount of documentation, and correct integration testing [2, 5, 14, 17].
- Delivery strategy (D42): delivering the most important features first, and regular delivery of the overall software project [6, 13, 16].
- Information technology (D43): information technologies available on the market and the organization's existing equipment [2, 4, 6, 7, 14, 15, 17].

Based on Table 1, a HMADM procedure is proposed in the next section, enabling the evaluation of application dimensions/factors in relation to the selection of agile application decisions and the identification of improvement strategies for ensuring that the selected decisions are implemented toward aspiration levels.

2. Methodology

The HMADM model was introduced by Tzeng [19], who combined many new concepts and procedures to solve complex and dynamic real-world problems. First, the HMADM employs the Decision Making Trial and Evaluation Laboratory (DEMATEL) technique to quantify the inter-influence effects among decision factors and visualize them on an influential network relation map (INRM) for solving real-world interdependent decision problems. Second, this model provides a procedure, known as DANP (DEMATEL-based ANP), that applies the basic concept of the analytic network process (ANP) to transform the inter-influence values of DEMATEL into influential weights (IWs) for prioritizing the decision factors in decision-making. This feature enables interdependent decision situations to be viewed as decision processes and outcomes.
Third, this model adopts the principle of the "aspiration level" [20] through a modified VIKOR method (ViseKriterijumska Optimizacija I Kompromisno Rešenje, translated from Serbian as the multi-criteria optimization and compromise solution method) to avoid "choosing the best among inferior options/alternatives", i.e., to avoid "picking the best apple among a barrel of rotten apples" [19]. Combining all the above-mentioned concepts and procedures, the HMADM model can also produce valuable information for determining improvement strategies to ensure that the aspired decision outcomes are achieved. Detailed descriptions, notations and computational processes for the HMADM model can be found in [19, 20]. This study applies the HMADM model to develop a novel procedure, named the HMADM procedure, for obtaining the aspired agile application outcomes through four main stages: (1) forming an expert team, (2) developing a decision framework, (3) selecting application decisions, and (4) determining improvement strategies based on the INRM. The proposed procedure is depicted in Figure 1. The HMADM procedure can be operationalized in Microsoft Excel 2010. In the following section, we use a numerical example to illustrate how this procedure functions in practice.

Figure 1. A graphical representation of the HMADM procedure.

3. Empirical example illustrating the applicability of the proposed HMADM procedure

The case-study organization had received complaints from customers about the long delivery schedules of its hardware and software projects and was therefore considering whether to apply the agile method to shorten the delivery schedule of its software development projects. However, the organization has many units, and each unit exhibits certain differences in its infrastructure for the management of software development projects with different customers. These differences made agile application in the organization a complicated undertaking, requiring a comprehensive and systematic evaluation to select and improve the appropriate decisions that would enable the aspired application outcomes to be reached when applying agile in the different units. The organization thus applied the HMADM procedure to assess two units and obtained satisfactory outcomes.

3.1. Forming a team

A team of eight experts (ET) was formed, selected based on their proficiency in relation to agile, as assessed by a top management committee according to a set of predetermined qualifications.

3.2. Developing the decision framework

In this stage, the ET members identified 12 influencing factors as evaluation factors in 4 dimensions and established a decision framework (DF) as shown in Figure 2. The highest level of the DF is the goal: reaching the aspired agile application outcomes in two units, denoted U1 and U2; the two units also represent the alternatives to be evaluated at the fourth level of the DF. The second and third levels represent the interrelated dimensions and factors (groups of influencing factors) used to evaluate the alternatives. The fifth and final level comprises the gaps for each dimension and factor associated with each alternative, measured in terms of how far each is from the aspiration level.

Figure 2. A decision framework for agile application.

Next, the ET members evaluated the inter-influence effects among the 12 factors and averaged the results in an initial-average 12-by-12 matrix A (Table 2).
Table 2. The initial-average 12-by-12 matrix A (rows: influencing factor; columns: influenced factor).

        D11   D12   D13   D21   D22   D23   D31   D32   D33   D41   D42   D43
D11   0.000 2.875 2.875 3.375 3.000 3.125 2.750 3.000 2.625 2.500 2.625 2.125
D12   2.500 0.000 3.000 3.500 2.625 3.125 3.250 3.250 2.750 3.250 2.500 2.000
D13   2.500 2.125 0.000 2.250 2.625 2.250 2.375 2.500 2.125 2.625 2.750 2.500
D21   3.000 2.250 2.125 0.000 2.375 3.250 3.125 3.125 3.125 2.625 2.375 2.375
D22   2.750 2.375 2.500 2.500 0.000 2.375 1.625 2.875 2.250 2.875 3.000 2.375
D23   2.875 2.375 2.875 2.500 2.750 0.000 3.000 2.875 2.625 3.500 3.375 3.000
D31   2.375 2.375 2.250 2.750 2.125 3.125 0.000 3.000 2.625 2.875 2.750 2.500
D32   2.750 2.375 2.375 2.625 2.625 3.250 2.875 0.000 2.375 2.875 3.500 2.625
D33   2.875 3.250 3.500 3.375 2.375 3.500 3.250 3.500 0.000 3.625 2.625 3.125
D41   3.000 2.625 2.500 2.750 2.625 3.250 3.125 3.250 2.750 0.000 3.000 1.875
D42   3.125 3.375 2.875 3.250 3.000 3.000 3.125 3.375 2.250 3.375 0.000 3.125
D43   2.250 3.125 2.875 3.125 2.750 2.500 2.625 3.250 2.500 2.750 3.375 0.000

The initial-average matrix was then normalized into an initial-influence matrix D (Table 3) using Equation 1:

$D = A/s = [d_{ij}]_{n \times n}$, where $s = \max\left(\max_{1 \le i \le n} \sum_{j=1}^{n} a_{ij},\ \max_{1 \le j \le n} \sum_{i=1}^{n} a_{ij}\right)$.  (1)

Table 3. The initial-influence matrix D, obtained by dividing each entry of Table 2 by the scaling factor s of Equation (1).

Subsequently, through the matrix operation on D given by Equation 2, the total-influence matrix T was obtained (Table 4). In Table 4, all factors in T are further clustered into the corresponding dimensions as matrix TC, and each dimension block is averaged to obtain matrix TD:

$T = D(I - D)^{-1}$, where $\lim_{u \to \infty} D^{u} = [0]_{n \times n}$ and $I$ is the identity matrix.  (2)
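Equations (1) and (2) reduce to a few lines of numpy. The sketch below is a minimal illustration of this DEMATEL step under our reading of the text, run on a toy 3-by-3 matrix rather than the 12-by-12 data of Table 2; it also returns the row and column sums ri and ci used later in Table 7.

```python
# A minimal numpy sketch of Equations (1)-(2) (our reading of the text).
import numpy as np

def dematel(A):
    A = np.asarray(A, dtype=float)
    s = max(A.sum(axis=1).max(), A.sum(axis=0).max())  # Eq. (1): scaling factor
    D = A / s                                          # initial-influence matrix
    n = len(A)
    T = D @ np.linalg.inv(np.eye(n) - D)               # Eq. (2): total influence
    r, c = T.sum(axis=1), T.sum(axis=0)                # influence given/received
    return T, r, c

# Toy 3x3 example (illustrative values, not the paper's data)
A = [[0, 2, 3],
     [1, 0, 2],
     [3, 1, 0]]
T, r, c = dematel(A)
print(np.round(r + c, 3))  # degree of central role
print(np.round(r - c, 3))  # degree of net influence (cause if positive)
```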
Table 4. The total-influence matrix T, clustered by factors (TC) and by dimensions (TD), obtained through DEMATEL (rows: influence given; columns: influence received).

TC:
        D11   D12   D13   D21   D22   D23   D31   D32   D33   D41   D42   D43
D11   0.515 0.573 0.583 0.631 0.574 0.640 0.605 0.655 0.552 0.625 0.614 0.533
D12   0.595 0.510 0.599 0.648 0.578 0.655 0.632 0.676 0.568 0.658 0.625 0.542
D13   0.516 0.492 0.442 0.534 0.502 0.547 0.528 0.569 0.477 0.557 0.547 0.481
D21   0.579 0.544 0.550 0.528 0.544 0.628 0.599 0.641 0.550 0.612 0.592 0.526
D22   0.537 0.513 0.524 0.556 0.447 0.567 0.525 0.595 0.495 0.579 0.569 0.492
D23   0.606 0.576 0.598 0.626 0.583 0.574 0.627 0.668 0.565 0.666 0.648 0.568
D31   0.548 0.532 0.538 0.584 0.523 0.608 0.502 0.621 0.523 0.602 0.585 0.514
D32   0.580 0.554 0.563 0.605 0.558 0.635 0.601 0.568 0.538 0.627 0.628 0.539
D33   0.654 0.644 0.661 0.699 0.621 0.719 0.684 0.738 0.542 0.722 0.682 0.616
D41   0.594 0.567 0.573 0.616 0.565 0.643 0.615 0.661 0.554 0.559 0.623 0.527
D42   0.643 0.630 0.628 0.677 0.619 0.687 0.662 0.715 0.586 0.696 0.593 0.600
D43   0.580 0.585 0.588 0.631 0.573 0.630 0.608 0.667 0.553 0.637 0.637 0.480

TD:
        D1    D2    D3    D4
D1    0.536 0.590 0.585 0.576
D2    0.558 0.561 0.585 0.584
D3    0.586 0.617 0.591 0.613
D4    0.599 0.627 0.625 0.595

Based on Table 4, the ET further employed DANP to compute the IWs for the dimensions and factors. To do so, the ET normalized the total-influence matrices TC and TD into TCα and TDα; next transposed TCα into an un-weighted supermatrix W = (TCα)′; then used TDα to weight W, obtaining a weighted supermatrix Wα; and finally multiplied Wα by itself until it converged into the IWs for the factors and dimensions, as shown in Table 5. As shown in Table 5, the ET generally agreed that, in terms of the IWs, all dimensions and factors have a similar level of importance in reaching the best agile application. However, the DEMATEL outcomes (Table 4) provide managers with additional information for understanding the differing degree and direction with which each dimension and factor may influence the aspired agile application outcomes.

Table 5. The influential weights (IWs) for the factors and dimensions obtained through DANP; in the converged supermatrix lim (Wα)^z as z → ∞, every column equals the global weight vector reported below.

Factor (global IW / local IW): D11 (0.082 / 0.339), D12 (0.079 / 0.328), D13 (0.081 / 0.333), D21 (0.086 / 0.340), D22 (0.079 / 0.310), D23 (0.089 / 0.350), D31 (0.085 / 0.335), D32 (0.092 / 0.362), D33 (0.077 / 0.304), D41 (0.089 / 0.354), D42 (0.087 / 0.345), D43 (0.076 / 0.302).
Dimension (IW): D1 (0.242), D2 (0.254), D3 (0.253), D4 (0.251).
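The DANP step can be sketched in the same style. The code below is a hedged illustration under our reading of the description above: normalize TC and TD row-wise, transpose the normalized TC into an un-weighted supermatrix, weight its blocks by the normalized TD, and raise the weighted supermatrix to a high power until its columns converge to the IWs. The cluster sizes and input values are illustrative, not the paper's.

```python
# A hedged sketch of the DANP step (our reading; illustrative data).
import numpy as np

def danp_weights(TC, TD, clusters, power=200):
    TC = np.asarray(TC, dtype=float)
    TD = np.asarray(TD, dtype=float)
    TCa = TC / TC.sum(axis=1, keepdims=True)   # normalized factor matrix
    TDa = TD / TD.sum(axis=1, keepdims=True)   # normalized dimension matrix
    W = TCa.T                                  # un-weighted supermatrix
    Wa = W.copy()
    bounds = np.cumsum([0] + list(clusters))
    for a in range(len(clusters)):             # weight each cluster block
        for b in range(len(clusters)):
            ra, rb = bounds[a], bounds[a + 1]
            ca, cb = bounds[b], bounds[b + 1]
            Wa[ra:rb, ca:cb] = TDa[b, a] * W[ra:rb, ca:cb]
    Wa = Wa / Wa.sum(axis=0, keepdims=True)    # keep columns stochastic
    Wz = np.linalg.matrix_power(Wa, power)     # limit of (W^alpha)^z
    return Wz[:, 0]                            # any column holds the IWs

# Four factors in two clusters of two (toy numbers)
TC = [[0.2, 0.3, 0.4, 0.1],
      [0.3, 0.2, 0.2, 0.3],
      [0.1, 0.4, 0.2, 0.3],
      [0.2, 0.2, 0.3, 0.3]]
TD = [[0.3, 0.7],
      [0.6, 0.4]]
print(np.round(danp_weights(TC, TD, clusters=[2, 2]), 3))
```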
3.3. Selecting application decisions

In this stage, the ET generated a questionnaire to gather the opinions of users at different levels concerning the outcomes that their unit could attain through agile application, based on its current operational capability. The questionnaire used a scale from 1 to 5: "N/A (1)", "Accept (2)", "Accept and Use (3)", "Accept, Use and Improve (4)", and "Accept, Use, Improve, and Satisfy (5)". 20 and 18 respondents were interviewed in U1 and U2, respectively. The ET averaged the responses as performance values $f_{lj}$, set the worst value $f_j^-$ and the best value $f_j^*$ (the aspiration level), and then used the IWs of the DANP to compute the index values using Equations 3~6. The computational results are summarized in Table 6.

$r_{lj} = (f_j^* - f_{lj}) / (f_j^* - f_j^-)$  (3)

$S_l = \sum_{j=1}^{n} w_j r_{lj}$, where $l = 1, 2, \dots, m$ and $w_j$ are the IWs of the DANP.  (4)

$Q_l = \max_j \{ r_{lj} \mid j = 1, 2, \dots, n \}$, $l = 1, 2, \dots, m$.  (5)

$R_l = v S_l + (1 - v) Q_l$,  (6)

where $l = 1, 2, \dots, m$ and $v$ is the weight of the strategy of maximum group utility (priority improvement); $1 - v$ is the weight of individual regret.

As shown in Table 6, the improvement indices for alternatives U1 and U2 are 0.460 and 0.706, respectively. These values reveal that agile application with the required continuous improvements would enhance the performance of the software projects in U1; however, agile application may not help U2 to enhance the performance of its projects unless the current operational capabilities of U2 are further improved. Consequently, the ET selected the decisions to apply agile at U1 and to delay its application in U2 until the dimension, factor, and/or overall gaps for that unit could be improved to a level below 0.500.

Table 6. Improvement indices for factors/dimensions/alternatives obtained through the modified VIKOR.

Dimension/factor | Local IW | Global IW | Perf. U1 | Perf. U2 | Gap U1 | Gap U2
Organization (D1) | 0.242 | – | – | – | 0.454 | 0.708
Management commitment (D11) | 0.339 | 0.082 | 3.350 | 1.944 | 0.413 | 0.764
Organizational environment (D12) | 0.328 | 0.079 | 3.200 | 1.833 | 0.450 | 0.792
Team environment (D13) | 0.333 | 0.081 | 3.000 | 2.722 | 0.500 | 0.569
People (D2) | 0.254 | – | – | – | 0.350 | 0.630
Team capability (D21) | 0.340 | 0.086 | 4.000 | 2.833 | 0.250 | 0.542
Customer involvement (D22) | 0.310 | 0.079 | 3.500 | 1.722 | 0.375 | 0.819
Training and education (D23) | 0.350 | 0.089 | 3.300 | 2.889 | 0.425 | 0.528
Process (D3) | 0.253 | – | – | – | 0.349 | 0.427
Project management process (D31) | 0.335 | 0.085 | 4.510 | 4.320 | 0.123 | 0.170
Robustness (D32) | 0.362 | 0.092 | 3.100 | 3.056 | 0.475 | 0.486
Continuous improvement (D33) | 0.304 | 0.077 | 3.200 | 2.500 | 0.450 | 0.625
Technology (D4) | 0.251 | – | – | – | 0.479 | 0.620
Agile software techniques (D41) | 0.354 | 0.089 | 3.000 | 3.056 | 0.500 | 0.486
Delivery strategy (D42) | 0.345 | 0.087 | 2.950 | 2.056 | 0.513 | 0.736
Information technology (D43) | 0.302 | 0.076 | 3.300 | 2.444 | 0.425 | 0.639
Improvement index R_l | | | | | 0.460 | 0.706
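Under the same reading, the modified VIKOR step of Equations (3)–(6) is a few array operations. In the sketch below, f* = 5 and f− = 1 follow the 1-to-5 questionnaire scale above, while v = 0.5 is our assumption; it is consistent with the reported indices (with the Table 6 weights and gaps it reproduces R ≈ 0.460 for U1). The small performance matrix is illustrative.

```python
# A minimal sketch of Equations (3)-(6) (modified VIKOR with aspiration
# levels). f_star/f_worst follow the paper's 1-to-5 scale; v = 0.5 is our
# assumption, consistent with the reported improvement indices.
import numpy as np

def modified_vikor(F, w, f_star=5.0, f_worst=1.0, v=0.5):
    """F: m x n performance values (alternatives x factors); w: n weights."""
    F = np.asarray(F, dtype=float)
    w = np.asarray(w, dtype=float)
    r = (f_star - F) / (f_star - f_worst)   # Eq. (3): gap ratios
    S = r @ w                               # Eq. (4): weighted total gap
    Q = r.max(axis=1)                       # Eq. (5): maximal single gap
    R = v * S + (1.0 - v) * Q               # Eq. (6): improvement index
    return S, Q, R

# Two alternatives, three factors (illustrative values, equal weights)
F = [[3.4, 4.0, 3.1],
     [1.9, 2.8, 2.1]]
S, Q, R = modified_vikor(F, w=[1/3, 1/3, 1/3])
print(np.round(R, 3))  # smaller index = closer to the aspiration level
```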
In addition, based on Table 4, the ET computed the degree of total influence that each factor exerts on the other factors (the sum of its row), ri, and the degree of total influence that each factor receives from the other factors (the sum of its column), ci. The results are summarized in Table 7, which also reports the degree of central role, ri + ci, and the degree of net influence, ri − ci, for the factors and the dimensions.

Table 7. The total influence given and received for dimensions and factors, obtained through DEMATEL.

Dimension/factor | ri | ci | ri + ci | ri − ci
Organization (D1) | 6.859 | 6.838 | 13.829 | -0.095
Management commitment (D11) | 7.100 | 6.946 | 14.046 | 0.154
Organizational environment (D12) | 7.285 | 6.721 | 14.005 | 0.564
Team environment (D13) | 6.192 | 6.847 | 13.040 | -0.655
People (D2) | 6.865 | 7.185 | 14.050 | -0.319
Team capability (D21) | 6.893 | 7.334 | 14.227 | -0.441
Customer involvement (D22) | 6.399 | 6.687 | 13.086 | -0.289
Training and education (D23) | 7.304 | 7.533 | 14.837 | -0.228
Process (D3) | 7.220 | 7.155 | 14.374 | 0.065
Project management process (D31) | 6.679 | 7.187 | 13.866 | -0.508
Robustness (D32) | 6.996 | 7.773 | 14.770 | -0.777
Continuous improvement (D33) | 7.983 | 6.504 | 14.487 | 1.479
Technology (D4) | 7.334 | 7.100 | 14.434 | 0.234
Agile software techniques (D41) | 7.096 | 7.540 | 14.636 | -0.444
Delivery strategy (D42) | 7.736 | 7.342 | 15.078 | 0.394
Information technology (D43) | 7.169 | 6.418 | 13.587 | 0.751

As shown in Table 7, the degrees of central role of the four dimensions D1~D4 are 13.829, 14.050, 14.374 and 14.434, respectively. These values indicate that all 4 dimensions play a central role in agile application. However, among the 4 dimensions, the degrees of net influence of D3 and D4 are 0.065 and 0.234, respectively. These values imply that if the technology and the process are not well established, agile application will be negatively affected. Moreover, Table 7 also contains information on the inter-influence effects among the factors.

Based on Table 7, the ET established an INRM showing the degree and direction of the inter-influence effects among the 12 factors within the 4 dimensions associated with the best agile application in the case-study organization (Figure 3). Taking the dimensions as an example (shown at the center of Figure 3), the x-coordinate is the degree of central role, ri + ci, and the y-coordinate is the degree of net influence, ri − ci. First, we marked the coordinates of the organization (D1), the people (D2), the process (D3), and the technology (D4), which are (13.829, -0.095), (14.050, -0.319), (14.374, 0.065), and (14.434, 0.234), respectively. We then determined the arrow directions based on the degrees of total influence of the dimensions in Table 5.

Figure 3. The INRM.

3.4. Determining improvement strategies

The ET arranged a series of meetings in which all of the participants reviewed Tables 1~7 and, with reference to the INRM, analyzed the improvement strategies to be adopted. For instance, according to the sizes of the gaps to the aspiration levels in Table 6, the ET ranked the respective dimensional levels for U1 and U2 in descending order and found that the technology (D4) was a problem for both U1 and U2. Additionally, with reference to the INRM, D4 (14.434, 0.234) was in the cause group; thus, improvements in the technology (D4) would have the greatest effect in terms of improving the other dimensions and the selected application decisions. Furthermore, the INRM (Figure 3) showed that 2 of the 3 factors under the technology (D4) belonged to the cause group: information technology [D43 (13.587, 0.751)] and delivery strategy [D42 (15.078, 0.394)]. These values suggested that these 2 factors should be accorded top priority for improvement.

4. Discussions and conclusions

Several critical results were derived from the above-described numerical example and from the discussions with members of the ET concerning the agile application.
First, according to the DEMATEL results (Table 4 and Figure 3), the interdependent relationships among the 12 factors and 4 dimensions can influence the best agile application. Additionally, the DEMATEL technique can quantify and visualize the degree and direction of the inter-influence effects that each dimension and factor exerts on the others and on the best agile application outcomes. Second, the results from the modified VIKOR method with the IWs of the DANP (Table 6) confirm that the development and wide adoption of agile worldwide do not guarantee that agile application will be successful for every unit in an organization. Organizations can use our procedure to thoroughly evaluate application situations at different levels and to select application decisions that are suitable for each unit. However, the factors used should be suited to the respective situations. Third, according to the results of the modified VIKOR method, with reference to the INRM, for factors/dimensions/alternatives (Table 6 and Figure 3), each dimension/factor can create gaps of different sizes influencing the best agile application in each subordinate unit. Thus, to achieve the best agile application, organizations should identify the critical gaps in the dimensions/factors. Identifying these gaps is useful in determining improvement strategies that enable each application unit to take the most influential actions to facilitate and ensure the best agile application results.

This study has several limitations. First, the decision framework was obtained from a limited review of the literature. Further research could use other approaches to select additional factors and explore the differences and similarities between these approaches. Second, the conclusions drawn are based on a numerical example. Future research could apply our procedure to other cases and thus gain further insights into the usefulness of our procedure. These limitations provide directions for future research.

References

[1] S.L. Goldman, R.N. Nagel, and K. Preiss, Agile Competitors and Virtual Organizations: Strategies for Enriching the Customer, Van Nostrand Reinhold, 1995.
[2] K. Preiss, Agility – the origins, the vision and the reality, In: Proceedings of the International Conference on Agility (ICAM) (2005), 13–21.
[3] J.S. Reel, Critical success factors in software projects, IEEE Software, 16.3 (1999), 18–23.
[4] M. Cohn and D. Ford, Introducing an agile process to an organization, Computer, 36.6 (2003), 74–78.
[5] C. Larman, Agile & Iterative Development, Addison-Wesley, Boston, Massachusetts, 2004.
[6] B. Boehm and R. Turner, Management challenges to implement agile processes in traditional development organizations, IEEE Software, 22.5 (2005), 30–39.
[7] S. Nerur, R. Mahapatra and G. Mangalaraj, Challenges of migrating to agile methodologies, Communications of the ACM, 48.5 (2005), 72–78.
[8] S. Ambler, Agile Modeling: Effective Practices for Extreme Programming and the Unified Process, John Wiley & Sons, New York, 2002.
[9] H.S. Ismail, S.P. Snowden, J. Poolton, and R. Reid, Agile manufacturing framework and practice, International Journal of Agile Systems and Management, 1.1 (2006), 11–28.
[10] H. Sharifi and Z. Zhang, A methodology for achieving agility in manufacturing organisations: An introduction, International Journal of Production Economics, 62.1 (1999), 7–22.
[11] N.N., A Guide to the Project Management Body of Knowledge (PMBOK® Guide), Project Management Institute, Newtown Square, 2013.
[12] P. Abrahamsson, Agile Software Development Methods: Review and Analysis, VTT Publications, 2002.
[13] M. Christopher, The agile supply chain: competing in volatile markets, Industrial Marketing Management, 29.1 (2000), 37–44.
[14] J. Highsmith and A. Cockburn, Agile software development: The business of innovation, Computer, 34.9 (2001), 120–127.
[15] A. Cockburn and J. Highsmith, Agile software development: The people factor, Computer, 34.11 (2001), 131–133.
[16] P. Abrahamsson, J. Warsta, M.T. Siponen, and J. Ronkainen, New directions on agile methods: a comparative analysis, In: Proceedings of the 25th International Conference on Software Engineering, IEEE, 2003.
[17] B. Boehm, Get ready for agile methods, with care, Computer, 35.1 (2002), 64–69.
[18] T. Chow and D.-B. Cao, A survey study of critical success factors in agile software projects, Journal of Systems and Software, 81.6 (2008), 961–971.
[19] J. Liou, New concepts and trends of MCDM for tomorrow – in honor of Professor Gwo-Hshiung Tzeng on the occasion of his 70th birthday, Technological and Economic Development of Economy, 19.2 (2013), 367–375.
[20] G. Tzeng and J. Huang, Multiple Attribute Decision Making: Methods and Applications, CRC Press, Boca Raton, 2011.

Part 3
Customization & Variability Management

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-165

Implementation and Management of Design Systems for Highly Customized Products – State of Practice and Future Research

Tim HJERTBERG a,1, Roland STOLT a, Morteza POORKIANY a, Joel JOHANSSON a, Fredrik ELGH a
a School of Engineering, Jönköping University, Sweden
1 Corresponding Author, E-mail: tim.hjertberg@jth.hj.se.

Abstract. Individualized products, resource-smart design and production, and a focus on customer value have been pointed out as three opportunities for Swedish industry to stay competitive in a globalized market. All three opportunities can be realized through the efficient design and manufacture of highly customized products. However, this requires the development and integration of the knowledge-based enabling technologies of the future, as pointed out by the European Factories of the Future Research Association (EFFRA). Highly custom-engineered products require the exercise of a very rich and diverse knowledge base about the products, their production and the resources required for design and manufacture. The development and implementation of systems for automated design and production preparation of customized products is a significant investment in time and money. Moreover, our experience from industry indicates that significant efforts are required to introduce and align these kinds of systems with existing operations, legacy systems and the overall state of practice. In this paper, the support for system development available in the literature is reviewed in combination with a survey of the state of practice at four companies regarding the implementation and management of automated systems for custom-engineered products. A gap is identified and a set of areas for further research is outlined.

Keywords. Customization, design automation, implementation, management

Introduction

Customization of products is more and more frequently demanded by consumers and OEMs.
More time has to be spent on engineering work in order to create the external variety demanded by the market. At the same time, competition lowers prices, which creates a requirement for high efficiency within product development and production in order to ensure profitability [1]. Fogliatto [2] compares the current utilization of customization with [3], a literature review conducted ten years earlier, and concludes that a clear change has taken place in the manufacturing industry towards a higher degree of customization. Rudberg and Wikner [4] divide manufacturing companies into four categories depending on their ability to create customized products. The four categories, starting with the category with the lowest possibilities for customization, are: Make-To-Stock (MTS), Assemble-To-Order (ATO), Make-To-Order (MTO) and Engineer-To-Order (ETO). They further relate the four categories to the concept of Customer Order Decoupling Points (CODPs). This is done in order to demonstrate where in the product realization process the different company categories receive the customer input in the form of an order (Figure 1). Computer-based support systems which enable customization to different degrees have been created and tested in industry. Configuration systems make use of a modular product structure where the modules are combined into the available configuration most suitable for a specific customer [5, 6], and can be seen as the simplest type of system for enabling customization. There are also systems which enable a higher degree of customization through parametric design [7] and the utilization of Knowledge Based Engineering (KBE) [8]. Other systems have been created which are used to automate design as well as simulations for different purposes [9]. Implementing computer support in technology and product development, as well as in quotation and order processes, has over time proved to be beneficial for companies' efficiency and productivity.

Figure 1. The CODPs for the different company models, where larger commitment represents a higher possibility for customization [4].

However, shaping either the computer system or the company organization to get the most out of these systems has been pointed out by industry to be a hard task. Today, a need can be seen for systems which enable increased variety for the market or which are built to adapt products to a specific customer's demands. Cederfeldt and Elgh [10] conclude, from an investigation of 11 SMEs in Swedish industry, that this need exists from the industry point of view. However, more complex functionality of computer-based tools in engineering work requires a thorough adoption and management strategy. A need can be seen in industry for strategies which enable a more effective use of systems using company knowledge to support product customization. Much research has focused on different variations of functionality in the systems in order to fulfill the needs of the companies. However, there seems to be a lack of methods and tools to support the implementation and long-term management of these kinds of systems, which prevents companies from achieving the full advantage and profit from their investment.
The objective of this work is to outline the current need for research regarding the implementation and management of support systems in the engineering design process by investigating existing development methods as well as current practice in today's industry. The current state of implementation and management of systems for engineering support in technology and product development has been investigated at four companies. Besides the in-depth survey of industrial practice, existing methodologies for system development in the literature have been reviewed regarding their support for implementation and management. The results have been analyzed and discussed for the purpose of outlining the knowledge gap which identifies the need for future research. This paper is an initial step in a project running over three years. Within the project, this paper can be seen as part of the step "Research Clarification" in the design research methodology [11].

1. Methods and models supporting system development

A method for planning a new design automation system is described in [12]. A top-down approach starting from the specification of system requirements and the problem characteristics is described, followed by a mapping to appropriate methods for system realisation. The contribution does not include aspects such as user-friendliness, maintainability or documentation, despite the author's statement that they are of significant importance for success in industrial praxis. Implementation and management issues are argued to be considered only when the fundamentals of the problem have been solved. A set of criteria of system characteristics is defined in [13], including transparency, knowledge accessibility, flexibility, ease of use and longevity. Most likely, these characteristics affect system implementation and management. The criteria are to be considered and weighted in the planning of a design automation system; however, the author states that they do not give concrete answers on implementation and management issues. A procedure for the development of design automation systems has been outlined by Rask [14], where issues about documentation and maintenance are addressed by emphasizing the need for and importance of routines regarding versioning, verification and traceability. A possible means to support the updating of the knowledge base is to strive for a design automation system implementation that allows the revision and the documentation to be executed at system runtime [15]. Stokes [16] described a methodology for the development of knowledge based engineering applications called MOKA, Methodology and software tools Oriented to Knowledge Based Engineering Applications. Two central parts of the methodology are the Informal and Formal models. The Informal model is used to document and structure knowledge elicited from experts, handbooks, protocols, literature etc. The Informal model can be regarded as paper-based, with text and illustrations. The Formal model is derived from the Informal model with the purpose of modelling and structuring the knowledge in a fashion suitable for system specification and programming. The Formal model is described by an object-oriented annotation called MML (MOKA Modelling Language) that is based on UML (Unified Modelling Language). La Rocca et al. describe the Design and Engineering Engine (DEE) approach [17-19]. This approach consists of three major elements. The first element is concerned with the design process, which includes multidisciplinary optimisation. The second major element is the Multi-Model Generator (MMG), which uses the product model parameter values in combination with formalised domain knowledge to generate product models. Report files are generated and fed to the third major element, the detailed analysis modules. These modules calculate the design implications. Finally, the loop is closed by analysing the data files using convergence and evaluation checks. Curran et al. [20] extend the DEE approach into the Knowledge Nurture for Optimal Multidisciplinary Analysis and Design (KNOMAD) methodology. The KNOMAD acronym highlights the method's process of:
La Rocca et al. describe the Design and Engineering Engine (DEE) approach [17-19]. This approach consists of three major elements. The first element is concerned with the design process, which includes multidisciplinary optimisation. The second major element is the Multi-Model Generator (MMG), which uses the product model parameter values in combination with formalised domain knowledge to generate product models. Report files are generated and fed to the third major element, the detailed analysis modules. These modules calculate the design implications. Finally, the loop is closed by analysing the data files using convergence and evaluation checks. Curran et al. [20] extend the DEE approach into the Knowledge Nurture for Optimal Multidisciplinary Analysis and Design (KNOMAD) methodology. The KNOMAD acronym highlights the method's process of (K)nowledge capture, (N)ormalisation, (O)rganisation, (M)odeling, (A)nalysis and (D)elivery. These implementation steps are taken and repeated as part of the knowledge life cycle, and in this context KNOMAD nurtures the whole of knowledge management across that life cycle. Further, the method includes an approach for multidisciplinary design (optimization) as well as for knowledge capture, formalization, delivery and life cycle nurture. Despite the methods described above, and the numerous KBE applications developed and described in scientific publications, a number of issues remain to be researched. This is supported by an extensive literature review by Verhagen et al. [21], in which the major shortcomings of KBE have been outlined.

Hvam et al. [5] describe a complete and detailed methodology for constructing configuration systems in industrial and service companies. They suggest an iterative process including, among others, the activities: analysis of the product portfolio, object-oriented modelling, and object-oriented design and programming. Every activity results in a description of the problem domain at a different level of abstraction and formalisation. The analysis of the product portfolio results in a Product Variant Master (PVM) and Class Relationship Collaboration (CRC) cards. The maintenance is proposed to be organised by introducing model managers, who are responsible for the delegation, coordination, collection and documentation of domain expert knowledge. This documentation is then used by the programmers to update the system. Haug et al. [22] have developed a prototype system for the documentation of configuration systems that is founded on one data model. This documentation system is separated from the implemented product configuration system. In both cases above, documentation is considered an important enabler for efficient maintenance. Claesson [6] has introduced and developed the concept of configurable components. The concept includes a function-means model to provide design rationale for the encapsulated design solutions, which could support the understanding of the system and thereby support system implementation and maintenance; this has, however, not been surveyed. During implementation, limits are set on how the system can be used in the future, as well as on how it can be maintained and updated over time. Bermell-Garcia [23] proposes a method directed at KBE which aims to facilitate the management of applications by increasing transparency and traceability through the utilization of Enterprise Knowledge Resources (EKRs).
This method, however, considers the systems at a low level of granularity, with whole KBE applications as elements in a knowledge base, and does not explain how the applications, or the system for creating the applications, should be developed in order to be transparent and enable traceability. In [24], case studies within KBE have been investigated and five problems regarding the long-term use of KBE applications were identified:
- Poor application modeling causes knowledge loss.
- Flaws in the development language cause knowledge loss.
- Application development for the wrong reasons causes knowledge misuse.
- A low degree of standardization in applications causes high maintenance costs.
- The full potential of the knowledge is not used, due to problems in sharing and re-using knowledge.

From the reviewed literature it can be concluded that further elaboration of development methodologies is relevant in order to address the implementation and management of engineering support systems. More specifically, the factors thought to affect this are related to system transparency, traceability of knowledge and modelling of knowledge.

2. Industrial practice at the companies

The companies subject to the in-depth survey act in different areas of the market and are of varying size, from a few hundred up to 8000 employees. All companies work according to the Engineer-To-Order (ETO) model. A short description of each company follows.

Company 1 is a global actor in the area of development, production, service and maintenance of components with a high technology content for aircraft, rocket and gas turbine engines. Company 1 provides products that are completely custom engineered, in an international market with high competition. The products are integrated in complex systems working in extreme environments for long time periods, with both customer and legal demands for complete documentation and traceability. The company takes full responsibility for the functionality of its products during their operation, including service, maintenance and updates. Fulfilling these harsh requirements is a challenge, but at the same time an opportunity to sustain a competitive edge. Automation of design and production preparation through knowledge based engineering (KBE) has been used at the company for more than a decade, enabling quick adaptation to changes in customer specifications and evaluation of different design solutions.

Company 2 is the world's leading supplier of tools, tooling solutions and know-how to the metalworking industry. Company 2 is active in an internationally very competitive market and needs to constantly cut development lead time by seeking means to improve its processes and system maintenance. The company has a long-standing tradition of automation in quotation and order processes, and has adopted an engineer-to-order business model supported by systems for automated design and production preparation of customized products. A request for quotation of a custom engineered product is answered within 24 hours, including detailed design drawings and a final price. All the necessary documents and manufacturing programs are automatically generated when the bid is accepted by the customer.

Company 3 is a global manufacturer of a wide assortment of products for transporting equipment by car, including roof racks, bike and water sport carriers, and roof boxes.
Company 3 sees an opportunity to considerably cut time and cost in its development and manufacture of roof racks for cars through the implementation of a system for the customization of rack attachments to new car models. Every car model requires an individually adapted attachment consisting of a footpad and a bracket, and currently there exist more than 400 footpads and 1100 brackets. Being able to quickly launch a roof rack for a new model is considered very important, as it is common that a roof rack, with accessories to be mounted on the rack, is included as additional equipment when a new car is bought.

Company 4 is a worldwide supplier of insert stapling units for copiers, printers and document handling systems. It has recently been incorporated under a larger brand and given the directive to focus on and strengthen this position. The insert stapling units are developed and manufactured on contract with different OEMs, and every unit model has to be adapted to the system of which it will be an integrated part. A product platform has been defined to cut product cost and development lead time. The platform is based on a modular product architecture to be configured for the different OEMs' individual specifications. However, the platform covers only a limited part of the product design, and additional custom engineered parts have to be added. Activities directed towards the formalization and structuring of knowledge in applications supporting the engineering of these parts have been undertaken by individuals, but the results are not shared on a corporate level.

The industrial practice at these companies is presented within four areas.

2.1. Implementation of commercial and in-house developed systems

It could be concluded that none of the companies participating in the investigation utilizes any pre-described model or guidelines for handling the implementation of new computer-based support systems. The approach for handling implementations is decided for each individual case, and no information was found regarding consideration of lessons learned from earlier implementations. Because of this, it is more likely that problems will re-occur in different implementation projects. The company closest to having a formalized method is Company 1 which, for larger implementations, assigns an organizational change manager. It also tests all new systems on test servers before they are released into live operation. This might not always save time in the implementation process, but it can save costs by detecting errors before anything has been produced with the system. Company 2, which develops many similar applications, could benefit from more standardized, proactive thinking regarding implementation, as well as management, during the development of the applications. This is also reflected in research claiming that low standardization of such applications causes knowledge loss [21]. The creation of its own programming language can be seen as a step towards a more standardized structure of the code in new applications. Companies 3 and 4 perform all implementation completely case-based.

2.2. Management of commercial and in-house developed systems

None of the companies has a proactive approach to the management and maintenance of the systems. They all perform maintenance when new needs or problems are identified.
Overall transparency of the development processes is something Company 1 wishes to keep high. The transparency of an individual support system, however, is usually low, since there is no focus on it. The organization trusts the process assurance of all systems which are implemented. By having a low level of transparency in systems and applications, the company cuts the connection between the produced material and the knowledge which was used in the system or application. If errors occur in a product, or if an engineer wishes to re-use knowledge, the low traceability that follows from the low transparency can result in wasted time and/or lost knowledge. If defective knowledge cannot be traced, it can continue to produce errors or lowered quality in future products. This lack of transparency can also create obstacles for people trying to perform maintenance of the systems, since it can be hard to find out what has to be done. Overall, the management of systems in Company 1 can be seen as structured compared to the other companies, since it does have some specified steps to follow. Companies 2, 3 and 4 have not adopted any structured guidelines for how to perform their maintenance. In Companies 3 and 4 this is not seen as a big problem, but Company 2, which makes use of a larger number of applications and also performs more maintenance work on its systems, could benefit from a more systematic method to address this aspect.

2.3. System connection to produced material

A problem noticed during the interviews in all companies, although most applicable to Company 1, was the lack of documentation connecting system version and knowledge document version to produced material. Since both the systems and the knowledge documents affect the project output to different degrees, it is seen as relevant to save information about this connection. This is seen as important in order to ensure that an error can be traced to the knowledge which created it. Company 2 does not document connections between produced material and the systems which affect the outcome of the product. This puts Company 2 in the same situation as described for Company 1 above. Companies 3 and 4 have no need in this area.

2.4. Knowledge re-use

It can clearly be seen that Company 1 is, to a large extent, able to re-use knowledge created in technology development within product development. A problem discovered in Company 1 is that no common terminology is used when creating the documents containing the knowledge. This has caused some issues when searching for specific knowledge documents. The low degree of knowledge re-use in Company 2 is thought to be a result of low standardization in the formalization of knowledge created by the design engineers. The content of report files varies a lot from engineer to engineer, and it is not certain that the company can make use of them in new projects. A more standardized way of formalizing this knowledge could result in a higher degree of knowledge re-use, which could both save time and ensure the quality of the produced products. Company 3 is able to re-use knowledge if there exists a finished project with an output which satisfies the new requirements. If there is no matching case, they cannot make use of any formalized knowledge.
In order for Company 4 to start re-using knowledge (CAD models are possible to re-use), it would have to adopt a knowledge formalization model, suitable for the type of knowledge stored, which enables engineers to easily find a CAD model that could be used for a new set of requirements.

3. Knowledge gap and future research

As can be concluded from the review of supporting methods and models, extensive research and development have been devoted to the technical aspects of building systems for specific products, and some research has been directed towards general methods and models supporting system development; however, little attention has been paid to the actual implementation and management of systems in operation. The experience from industry indicates that significant efforts are required to introduce and align these kinds of systems with existing operations, legacy systems and the overall state of practice. System management, including adapting existing systems to changes in product technology, new product knowledge, production practices, new customers and so forth, is also challenging. Research in design automation and knowledge based engineering has not focused on implementation and management issues in industrial operation. These aspects are pointed out as important, but are merely treated as consequences of other actions, without studying the actual need, the trade-offs in development, and the supporting methods and tools required. Verhagen et al. [21] have, based on a literature review, outlined shortcomings of KBE. Four of these that could have an impact on implementation and management are: system transparency, knowledge sourcing and re-use, semantics of knowledge models, and traceability. However, the review is not supported by any survey in industry. Concerning configuration systems, documentation has been pointed out as important for maintenance, and different models and tools exist to be used as support. The main principles are interesting and most likely applicable to some degree; however, there is no evidence that the specific methods and tools can support a generative product model in an automated engineer-to-order business model.

There is currently a difference between the investigated companies in the extent of usage, both of computer support in engineering design and of formalized methods for how to implement and manage systems. Since this utilization differs between the companies, they express the need for methods of varying scope. Two main problems were identified in Company 1. The first problem regards the knowledge flow between different domains: the way of formalizing the knowledge in one domain is not always suitable for another. Here, a need is seen for a knowledge formalization model which facilitates multi-domain utilization. The other problem regards the traceability of knowledge. A finished product can be traced back to the systems used to create it; however, there is no documentation of which versions of the systems were used to create that specific product. This means that if the knowledge used in the systems is changed, the knowledge used to create the product in focus can no longer be found (a minimal sketch of such version documentation follows below). For Company 2, the main problem lies in the use of the knowledge created by the design engineers. Programmers are supposed to use this knowledge to create design automation applications, but the low standardization in how this knowledge is formalized frequently creates obstacles in their work. Company 2 also shows a low degree of re-use of knowledge created in old projects, which is believed to be a result of the method used to formalize and store knowledge.
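The following Python fragment is one minimal sketch of the version documentation whose absence is described above, linking a product instance to the system and knowledge versions used to create it. The record fields and version labels are hypothetical choices for illustration, not a description of any surveyed company's actual systems.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class TraceRecord:
    """Links one produced product instance to the exact versions used."""
    product_serial: str      # the produced instance
    system_version: str      # design automation system release
    knowledge_version: str   # revision of the knowledge documents applied
    produced_on: date

# If a defect is later traced to knowledge revision "KB-2.3", every affected
# product can be found by filtering the records on knowledge_version.
records = [
    TraceRecord("P-0001", "DAS-1.4", "KB-2.3", date(2015, 3, 2)),
    TraceRecord("P-0002", "DAS-1.5", "KB-2.4", date(2015, 4, 9)),
]
affected = [r.product_serial for r in records if r.knowledge_version == "KB-2.3"]
print(affected)  # ['P-0001']
```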
The current need in the smaller companies, Companies 3 and 4, which implement new systems in their organizations more rarely, is seen to be a more general method which can help them introduce new systems into live operation more effectively.

The need for support and further research can be summarized in three areas. (1) Models which enable companies to formalize their knowledge for multi-domain utilization: the lack of methods aiding engineers in formalizing their derived knowledge in a way that makes it understandable to people in other domains, who may be the users of the knowledge, has proven to be a cause of communication inefficiency. A need is seen for guidelines or models to follow when formalizing knowledge intended for multi-domain utilization. This is seen as relevant in order to enable engineers to communicate their knowledge in a way that supports a diversity of knowledge interpreters. (2) Documentation of the relations between produced products and the specific system versions used in their creation, in order to connect each product to the knowledge from which it was derived: increasing the possibilities of finding knowledge which has once been created or stored within a company could lead to higher transparency, and thereby decrease knowledge loss through facilitated maintenance. The transparency could be gained by creating and maintaining connections between product instances, system versions and knowledge versions. (3) Guidelines for introducing design support systems into an existing process: smaller companies with little experience of system implementations might not find it profitable to apply an extensive methodology. Here, a need is seen for more general guidelines to follow in order to obtain a successful implementation, while keeping the aspect of maintenance during use in mind.

The companies in the study are seen as representative both of the industrial frontier in the utilization of engineering support systems and of companies with less experience in the field. While Company 4 might be considered somewhat behind the average company regarding the utilization of engineering support systems, Company 1 represents the frontier, with a large number of academic employees who perform work and research in related areas, especially KBE. The company is well aware of existing methods, but cannot find in them the desired support for its activities. Company 2 is also far ahead of the average company, with an in-house developed programming language adapted for creating design automation applications and long experience within the area. Company 3 is seen as ahead of, but closer to, the average company in its utilization of engineering support systems; a few systems exist within the company, but it is new to the area, with a limited amount of experience.

4. Conclusions

Overall, it can be concluded that some of the companies, especially Company 1, have adopted structured methods for handling the implementation and management of their systems, as well as the documentation of knowledge derived from technology and product development. However, in all companies a need can be seen for methods for handling implementation and management in order to make more effective use of their systems.
A set of areas relevant for further research, which are thought to affect this, has been identified. In general, the literature conforms to these areas, and the need for further research is strengthened by the confirmation of their industrial relevance. Future work will focus on the development of methods supporting the implementation and management of engineering support systems, by consideration of the identified research gaps, through further investigations at the companies. Success criteria will be derived, and case studies will be defined and executed at the participating companies.

5. Acknowledgment

The authors would like to express their gratitude towards the participating companies in the study, as well as The Knowledge Foundation, which partly funds the project.

6. References

[1] European Commission, Directorate-General for Research, Industrial Technologies, Unit G2 - New Generation of Products, Factories of the Future Road Map, PPP Strategic Multi-annual Roadmap, 2010.
[2] F.S. Fogliatto, G.J.C. da Silveira and D. Borenstein, The mass customization decade: An updated review of the literature, International Journal of Production Economics, Vol. 138, 2012, pp. 14-25.
[3] G. Da Silveira, D. Borenstein and F.S. Fogliatto, Mass customization: Literature review and research directions, International Journal of Production Economics, Vol. 72, 2001, pp. 1-13.
[4] M. Rudberg and J. Wikner, Mass customization in terms of the customer order decoupling point, Production Planning & Control, Vol. 15, 2004, pp. 445-458.
[5] L. Hvam, N.H. Mortensen and J. Riis, Product Customization, Springer-Verlag, Berlin, 2008.
[6] A. Claesson, A Configurable Component Framework Supporting Platform-Based Product Development, Department of Product and Production Development, Chalmers University of Technology, Göteborg, 2006.
[7] K. Amadori, M. Tarkian, J. Ölvander and P. Krus, Flexible and robust CAD models for design automation, Advanced Engineering Informatics, Vol. 26, 2012, pp. 180-195.
[8] J. Johansson, Manufacturability analysis using integrated KBE, CAD and FEM, in: Proceedings of the ASME Design Engineering Technical Conference, 2008, pp. 191-200.
[9] J. Johansson, A flexible design automation system for toolsets for the rotary draw bending of aluminium tubes, in: Proceedings of the ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, DETC2007, 2008, pp. 861-870.
[10] M. Cederfeldt and F. Elgh, Design automation in SMEs - current state, potential, need and requirements, in: Proceedings of ICED 05, the 15th International Conference on Engineering Design, 2005.
[11] L.T.M. Blessing and A. Chakrabarti, DRM, a Design Research Methodology, Springer-Verlag, London, 2009.
[12] S. Sunnersjö, Planning design automation systems for product families - a coherent, top down approach, in: Proceedings of the International Design Conference, DESIGN 2012, 2012, pp. 123-132.
[13] M. Cederfeldt, Planning Design Automation: A Structured Method and Supporting Tools, Department of Product and Production Development, Chalmers University of Technology, Göteborg, 2007.
[14] I. Rask, Rule-based product development - report 1, Industrial Research and Development Corporation, Mölndal, Sweden, 1998.
[15] I. Rask, S. Sunnersjö and R. Amen, Knowledge Based IT-systems for Product Realization, Industrial Research and Development Corporation, Mölndal, Sweden, 2000.
[16] M. Stokes, Managing Engineering Knowledge: MOKA - Methodology for Knowledge Based Engineering Applications, Professional Engineering Publishing, London, 2001.
[17] G. La Rocca, L. Krakers and M.J.L. van Tooren, Development of an ICAD generative model for blended wing body aircraft design, in: 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, 2002.
[18] P. Lisandrin and M. van Tooren, Generic volume element meshing for optimization applications, in: 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, 2002.
[19] M. van Tooren, G. La Rocca, L. Krakers and A. Beukers, Design and technology in aerospace: Parametric modeling of complex structure systems including active components, in: 13th International Conference on Composite Materials, San Diego, 2003.
[20] R. Curran, W.J.C. Verhagen, M.J.L. van Tooren and T.H. van der Laan, A multidisciplinary implementation methodology for knowledge based engineering: KNOMAD, Expert Systems with Applications, Vol. 37, 2010, pp. 7336-7350.
[21] W.J.C. Verhagen, P. Bermell-Garcia, R.E.C. van Dijk and R. Curran, A critical review of Knowledge-Based Engineering: An identification of research challenges, Advanced Engineering Informatics, Vol. 26, 2012, pp. 5-15.
[22] A. Haug, A. Degn, B. Poulsen and L. Hvam, Creating a documentation system to support the development and maintenance of product configuration systems, in: Proceedings of the 2007 WSEAS International Conference on Computer Engineering and Applications, Gold Coast, Australia, 2007.
[23] P. Bermell-Garcia, W.J.C. Verhagen, S. Astwood, K. Krishnamurthy, J.L. Johnson, D. Ruiz, et al., A framework for management of Knowledge-Based Engineering applications as software services: Enabling personalization and codification, Advanced Engineering Informatics, Vol. 26, 2012, pp. 219-230.
[24] I.S. Fan, G. Li, M. Lagos-Hernandez, P. Bermell-García and M. Twelves, A rule level knowledge management system for knowledge based engineering applications, in: Proceedings of the ASME Design Engineering Technical Conference, 2002, pp. 813-821.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-175

Glencoe – a Visualization Prototyping Framework

Anna SCHMITT1, Sebastian WIERSCH2 and Stefan WEIS3
University of Applied Sciences, Trier, Rhineland-Palatinate, 54293, Germany

Abstract. Today, product manufacturers are confronted with an ever growing complexity of their products. The main complexity driver has been identified as the variability of products, which results from the constantly growing need to individualize products according to customer needs or market constraints. Car manufacturers are typical examples of such companies. With on the order of 10^8 variants of one car, it seems nearly impossible to keep an overview of the corresponding product line and of the consequences of changes to it. In the literature there are several approaches to visualize the variability of such product lines, but our review of the corresponding literature shows that there is no one-size-fits-all visualization technique.
The Glencoe project, hosted at Trier University of Applied Sciences, aims at providing a rapid visualization prototyping framework that makes it possible to quickly implement a preferred visualization technique and test it in a proof of concept under industrial conditions. The chosen framework platform allows the programmed visualization to run not only on desktop machines, but also on tablets. In this paper we present a student project realizing different views of feature trees, as well as views for logical constraints, based on the Glencoe platform.

Keywords. Visualization of product line variability, feature trees, logical constraints

1 schmiann@hochschule-trier.de
2 wierschs@hochschule-trier.de
3 weisst@hochschule-trier.de

Introduction

Customer requirements in the car industry lead to an increasing number of individual product variants (environmental protection, product safety etc.). Most of these product variabilities are nowadays realized using software product lines (SPL), by systematically reusing software artifacts in different variants of products (i.e. cars for different countries). This reuse aims at providing similar but different products at low production cost, combined with short manufacturing times, while keeping the high quality of the products. At the very beginning of designing a product line, one of the main problems with product variability is its visualization. The challenge is to keep the focus on both the variability and the similarities of the software artifacts: for example, every car has an engine, but there are different types of engine, and variability concentrates on the differences between the products (ecological vs. high-performance cars). The connections between software artifacts can be specified using constraints. The semantics of a constraint tells us whether a feature is mandatory or optional for a certain product variant. Typically, constraints are written in some formal language with precisely specified semantics, for example propositional logic, or they are represented simply as links between artifacts within the product line description. Since there are typically many constraints, and not all engineers are familiar with the description language used, there is a need to visualize the product line in a simple and understandable way. This paper tries to describe and solve this problem by finding ways to represent these constraints simply, while keeping clarity when many constraints have to be displayed [1].

1. Related work

There are already various approaches to solving the problem of visualizing variability. This chapter presents some of these solutions, introducing tables and graphs. A simple way to visualize product variants and their respective features is tables: the product name or the feature name is entered in the rows, while the variants are entered in the columns (or vice versa). This type of visualization works for small feature models, but for large models it quickly becomes confusing. In the worst case, the number of product variants grows exponentially with each additional feature. Therefore, tables are not well suited even for relatively small product lines.

Figure 1. Radial tree: the initial visualization with the root at the center on the left [3], and the focus on a child node on the right [4].

Another solution for the visualization of variability is graphs.
These graphs are mostly trees with different representations. One of these visualizations is the radial tree by John Lamping [2], which is based on hyperbolic geometry. The structure of this visualization as a radial tree can be seen in Figure 1. The focus can be changed by clicking on any visible node, which will then smoothly move into the center (Figure 1, right image). Another tree is the cone tree layout by Pablo Trinidad [5]. It is a three-dimensional way to visualize variability (Figure 2): all child nodes are arranged in a circle under their parent node, so that the nodes form a cone. The model in Figure 2 is an illustration of this view [6].

Figure 2. An example of a visualization with cone trees [7].

The feature tree is the last visualization presented in this chapter. The feature tree organizes the features of the product into feature groups, which are then merged into a feature model. In these feature trees it is possible to visualize constraints between the features in the tree. These can be implies or excludes conditions, but also other, more complex conditions. Because of the popularity of feature trees, the Glencoe project uses this type of visualization [7].

2. Glencoe

The Glencoe project is a flexible application which provides a rapid visualization prototyping framework to produce custom output in the form of mass customization. It is used to visualize complex feature models in a way that lets users analyze them, with the opportunity to choose what they want to see. The project is based on the work of Christian Bettinger, M.Sc., at Trier University of Applied Sciences, who provided the initial implementation. In a student team project, we implemented different views with the aim of visualizing these complex feature models. Every view offers options the user can choose from to adjust the visualization of constraints; these options help to analyze the complete feature model. Figure 3 shows one example view. Exclude constraints are drawn in red, while implies constraints are drawn in green. A pop-up at the bottom right of the window shows useful information such as the name of the current feature, all visible nodes, the number of features in the transitive closure, and a metric representing all nodes that are affected if the currently chosen node is changed or deleted. In every view it is possible to show only a subtree of the existing feature model.

2.1. Tree View

The first implemented view is the so-called tree view. This view is optimized for seeing the connections between the individual features, so that the hierarchical construction of the complete feature model is directly visible through the gray lines. At the top of this view is the first level with the root feature. This is followed by further features on the next level, which extend under the root feature, et cetera. The tree view provides all possible options to adjust the view. The first option is called "No constraints": only the basic tree of the feature model is shown. The second option, "All constraints", shows all available constraints. The third option is "Only on hover", which shows only the constraints of the feature node that the mouse cursor hovers over. The last option is the "Transitive closure": all constraints are shown which have a direct connection to the chosen feature, as well as the constraints which have a connection to those. The constraints in this view are drawn as lines from one feature to another. An arrow on a green line illustrates the direction of the constraint: the feature that the green arrow points to is needed for the features at the other end of the green line. The red lines indicate that only one of the features connected to the currently chosen feature can be selected; for example, it is only possible to build either an engine with 160 kW or one with 280 kW into a car, not both. These symbols make it easier to understand the complete model.
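A minimal sketch of how the "Transitive closure" option can be read is given below: starting from a feature, constraint links are followed repeatedly until no new features are reached. The feature names and constraint edges are hypothetical, and the code is our own simplified reading, not the Glencoe implementation.

```python
# Hypothetical feature model: constraint edges between feature names.
implies  = {"navigation": ["display"], "display": ["battery_hd"]}
excludes = {"engine_160kw": ["engine_280kw"], "engine_280kw": ["engine_160kw"]}

def transitive_closure(feature: str) -> set:
    """All features reachable from `feature` via implies/excludes links."""
    seen, stack = set(), [feature]
    while stack:
        f = stack.pop()
        for nxt in implies.get(f, []) + excludes.get(f, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(transitive_closure("navigation"))  # {'display', 'battery_hd'}
```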
2.2. Icicle View

The second view is the so-called icicle view (see Figure 3). It provides an overview of all the features, like the tree view, but in a different design, intended to facilitate the use of the application on touch-based devices. This view is constructed hierarchically, with the root feature at the top; the other features are displayed below it, and the constraints are illustrated by coloring the features, without arrows. In this view, only three options are available ("No constraints", "Only on hover" and "Transitive closure"). The option "All constraints" is disabled: it would show all features with constraints in their respective colors, but the connections between the individual features would not be visible. If the option "Only on hover" (Figure 3) is chosen and the mouse cursor hovers over a node, that node is highlighted in magenta, and the corresponding features are highlighted in green for implies and in red for excludes. If the option "Transitive closure" is chosen and the mouse cursor hovers over a feature, every directly connected feature is highlighted in the respective constraint color; in addition, every other feature that is connected through constraints to these features is highlighted in blue.

Figure 3. Icicle view with the option to show the constraints only when the mouse cursor hovers over the chosen feature (colored magenta).

2.3. Circle View

The third view is the circle view (Figure 4). This view tries to set the focus on the different constraints between the features. As the name suggests, this view is constructed as a circle: all features are arranged in a dynamically displayed circle whose size depends on the total number of features. Again, the root feature is placed at the top of the view, and all other features are positioned clockwise around the circle. The available options in this view are "All constraints", "Only on hover" and "Transitive closure". The option "No constraints" is disabled, because the focus of this view is the visualization of constraints. The usage of this view is the same as in the other views: when the user has chosen an option and moves the mouse cursor over a feature, the corresponding features are highlighted in the constraint color, and a line in the color of the constraint connection is drawn between the features. If the option "All constraints" is chosen, all existing constraints are shown (Figure 4).

Figure 4. Circle view with the option to show all existing constraints.

2.4. Metrics

A small pop-up (Figure 5) at the bottom right of each view shows metrics regarding the feature model.
This includes the number of features in the transitive closure and a metric representing all nodes that are affected if the currently chosen feature is changed or deleted. The metric is calculated by the following formula, where $node$ denotes the current node, $children(a)$ the set of all children (not only the direct children) of a node $a$, and $t_a$ the size of the transitive closure of a node $a$:

$$value_{node} = t_{node} + \sum_{c \in children(node)} t_c, \quad \text{where } t_x \neq t_y \text{ for all } x \neq y \text{ with } x, y \in \{node\} \cup children(node)$$

Figure 5. Pop-up showing metrics regarding the feature model.

Thus, the user gets information about the impact on the feature model of changing or deleting the currently selected node.

3. Solutions for specific problems

The Glencoe project was implemented to solve the problem of visualizing complex product lines using feature models. The aim of the application is to give the user an easy-to-use framework for creating prototypes in a short time. It is configured such that users can choose what they want to see, according to their main focus within the feature model. For example, if users want to know how strongly each feature depends on other features, they can choose the circle view to get the best overview for this analysis.

4. Technical implementation

The Glencoe software is implemented in Adobe Flash. The initial implementation was done by Christian Bettinger and has been continuously developed further by him and other students at Trier University of Applied Sciences. Because of the implementation in Adobe Flash, the software is able to run on most platforms. The development of an Android application is already in progress; there, the focus is mainly on control by touch gestures.

5. First experiences

When feature models become complex, it is hard to keep the constraints clear. This problem exists in all three views explained above. If a feature model has a large number of constraints, visualizing them with lines between the features becomes confusing, because many of the lines cross each other. It becomes even more confusing if the crossing lines have the same color. A possible solution to this problem is discussed in the next chapter, "Future work".

6. Future work

As explained above, the clarity of constraints in complex feature models can be lost because many constraints cross each other. A possible solution could be realized in the circle view by separating the circle into multiple circles. Each circle would represent a group of related features of the feature model. To keep the clarity, the individual circles must not be connected by constraints. To group the features into several circles, an algorithm could go through each feature and assign the features to circles. The model in Figure 6 demonstrates how this new circle view could look.

Figure 6. Modified circle view: the circle view separated into several individual circles.

Another solution for the circle view could be a zoom feature. When the view is completely zoomed out, only the root feature and the parent nodes of the next level are visible. The child nodes of these parent nodes are not visible at this stage. While zooming in on the view, the next level of the feature model becomes visible.
Now the child nodes of the parent nodes, and the collapsed parent nodes of the next level, are visible. Figure 7 shows a possible visualization of this modified circle view.

Figure 7. Modified circle view: the circle while zoomed out on the left and zoomed in on the right.

In addition, another solution that also includes a zoom feature is possible. While the feature model is zoomed out, the lines of neighboring features that share the same constraint are combined into one bold line. The line becomes thicker when even more neighboring features share the same constraint. This preserves the clarity of the constraints. The bold line is split up again when the user zooms in on the feature. The model in Figure 8 demonstrates how this visualization could look.

Figure 8. Modified circle view: again, the circle while zoomed out on the left and zoomed in on the right.

7. Conclusion

In this paper we presented a visualization prototyping framework that deals with the problem of visualizing variability. It comprises different views, and the user can choose between them depending on what he or she wants to see. As explained in the chapter "Related work", there are already many approaches to solving the problem of visualizing variability, but in practice it turns out that all of them share the same problem: they become confusing when the data are very complex. In the authors' opinion, the Glencoe project is not a perfect solution to the described problem either, but it offers the possibility to minimize the problem.

References

[1] G. Botterweck, S. Thiel, D. Nestor, S. bin Abid and C. Cawley, Visual tool support for configuring and understanding software product lines, in: 12th International Software Product Line Conference (SPLC 2008), Limerick, Ireland, September 2008. ISBN 978-7695-3303-2.
[2] J. Lamping, R. Ramana and P. Pirolli, A focus+context technique based on hyperbolic geometry for visualizing large hierarchies, CHI '95 Proceedings, Accessed: 06.03.2015. [Online]. Available: http://www.sigchi.org/chi95/proceedings/papers/jl_bdy.htm
[3] J. Lamping, R. Ramana and P. Pirolli, A focus+context technique based on hyperbolic geometry for visualizing large hierarchies, CHI '95 Proceedings, Accessed: 06.03.2015. [Online]. Available: http://www.sigchi.org/chi95/proceedings/papers/jl_figs/fg-eg.gif
[4] J. Lamping, R. Ramana and P. Pirolli, A focus+context technique based on hyperbolic geometry for visualizing large hierarchies, CHI '95 Proceedings, Accessed: 06.03.2015. [Online]. Available: http://www.sigchi.org/chi95/proceedings/papers/jl_figs/fg-eg2.gif
[5] P. Trinidad, A. Ruiz-Cortés, D. Benavides and S. Segura, Three-dimensional feature diagrams visualization, in: Software Product Lines, 12th International Conference, SPLC 2008, Limerick, Ireland, September 8-12, 2008, Proceedings, Second Volume (Workshops), Accessed: 06.03.2015. [Online]. Available: http://www.lsi.us.es/~trinidad/docs/trinidad08-VISPLE.pdf
[6] T. Munzner and P. Burchard, Visualizing the structure of the World Wide Web in 3D hyperbolic space, Website, Accessed: 06.03.2015. [Online]. Available: http://www.graphics.stanford.edu/papers/webviz/
[7] D. Blanke, Konzeption und prototypische Realisierung einer Visualisierung varianter Entwicklungsstrukturen (Conception and prototypical realization of a visualization of variant development structures), Master's thesis, Hochschule Trier, 2013.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-184

Consumer-Oriented Emotional Design Using a Correlation Handling Strategy

Danni CHANG, Yuexiang HUANG, Chun-Hsien CHEN1 and Li Pheng KHOO
School of Mechanical & Aerospace Engineering, Nanyang Technological University, Singapore

Abstract. Emotional product design is of great importance in new product development (NPD). In particular, Kansei engineering has been widely advocated because of its effectiveness and reliability in handling consumers' emotional requirements. However, the following key issues in Kansei engineering have not been well addressed: 1) how to capture human emotions, 2) how to identify the relationships between products and emotional needs, and 3) how a product can be improved to better fit consumers' emotional needs. This research aims at realizing a product design system for emotional effect (PDSEE) to facilitate emotional design processes. The proposed PDSEE comprises three modules, i.e. an emotional needs management module (ENMM) to capture and manipulate customer emotions, a product classification module (PCM) to examine the relationship between the product and emotions, and a product re-configuration module (PRM) to manage and analyze product attributes so as to achieve product configurations with the desired emotional impacts. To illustrate the capability of the prototype PDSEE, a case study of wedding ring design is presented. The results show that the prototype system is able to handle a large number of Kansei adjectives, address relationships between Kansei and products, and effectively identify key product parameters for designing new products with better emotional impacts.

Keywords. Product design and development, emotional design, Kansei engineering

1. Introduction

1.1. Related works & Background

In recent years, consumers have become more sophisticated and have shifted their focus from the functionality of a product to their experience with the product. A look at the current offering of hot-selling products reveals that almost all products are designed in every possible manner to arouse the customer's interest. In other words, a change from product-centered to human-centered design is in progress [1, 2]. In fact, many aspects of human-centered design are evident, such as ergonomics [3], human factors [4], affective design [5], and crowdsourcing-based customer co-creation [6]. Among these various domains, the study of the customer's emotional requirements is of great importance, since emotions are a central quality of human beings and influence most human behavior, motivation, and thought. The experience of certain kinds of emotions (e.g. joy and excitement) is itself a goal in the purchase and use of products [7]. Customer emotional factors are so powerful in the choice of products that many companies try to incorporate the desired emotional impacts into their products so as to achieve customer satisfaction. Accordingly, the incorporation of the customer's emotional factors or emotional requirements is vital.

1 Corresponding author. Tel.: +65 6790 4888. E-mail address: MCHchen@ntu.edu.sg (C.-H. Chen)
However, emotional product design is a complex task, as it involves more than one distinct discipline, from engineering to psychology [8], and simultaneously deals with human feelings or emotions, which are complex, ambiguous, and difficult to quantify. To take full advantage of emotional design in order to achieve better products, it is worthwhile to conduct an in-depth exploration of consumer-oriented emotional design.

1.2. Research issues & Objectives

Specifically, the following three important issues have to be well addressed when dealing with emotional product design. 1) How can human emotions be captured so that a consumer's emotional requirements are revealed for targeted development? Human emotions are subjective, circumstance-related, and highly individual. Finding reliable ways to identify a customer's emotional requirements of a product from the consumer's point of view becomes a must. 2) How can the relationships between products and emotional needs be established? Different consumers may not have the same feelings about a product. Methods to assess the relationships between the products and a customer's emotional requirements based on the consumer's opinions are thus needed. 3) How can products be improved in such a way that new products provide a better fit for a consumer's emotional needs? Product designers may not effectively address the key design parameters of a new product for the desired emotional impact. A systematic approach to designing products so that they are able to fulfill the desired emotional requirements is therefore required.

Accordingly, the objective of this study is to tackle the research problems identified above. More specifically, this work involves the following three targets: 1) to establish an approach to identify the consumer's emotional requirements of a product; 2) to develop a methodology for classifying products according to their emotional effect on consumers; and 3) to investigate a product feature/configuration analysis strategy to address key design parameters with emotional impacts.

2. Methodology – A product design system for emotional effect

The framework of the so-called product design system for emotional effect (PDSEE) proposed in this study is shown in Figure 1. The proposed prototype PDSEE consists of three modules: an emotional needs management module (ENMM), a product classification module (PCM), and a product re-configuration module (PRM).

Figure 1. Framework of the proposed PDSEE.

2.1. Module 1: Emotional needs management module (ENMM)

As shown in Figure 1, the ENMM deals with collecting and analyzing the customer's emotional requirements of a product. It comprises (1) an information collection procedure for collecting the raw data of customer voices and opinions; (2) a noise filtering process to eliminate redundant information in the raw data collected; (3) a verbalization step to standardize emotional needs using a unified format; and (4) a clustering algorithm to group Kansei adjectives based on customer concerns. The result is a well-formatted, comprehensive, and representative database of Kansei tags. Among these procedures, the crucial one is to develop an effective clustering method. For this purpose, a hybrid Kansei clustering method based on the Design Structure Matrix (DSM) and Graph Decomposition (GD) is proposed.
The specification of the proposed method is divided into three parts, presented as follows.

Part 1: Generation of the Kansei adjectives matrix

Firstly, as many Kansei adjectives as possible are collected; product samples are then collected and evaluated against these Kansei adjectives using a 7-point scale. From the evaluation results, statistical methods are employed to compute the correlation coefficients of the Kansei adjectives (Equations (2)-(6)), based on which DSM subsets can be constructed and a combined DSM obtained (Equation (8)). Finally, the DSM subsets and the combined DSM are partitioned, meaning that positively correlated Kansei adjectives are kept in one block and negatively correlated Kansei adjectives in another. Two lists of pseudo-code are provided to assist the partitioning process (Figures 2 and 3). The particular equations are as follows.

Product samples are denoted by $ps_u$, so the set of product samples or representatives is

$$PS = \{ps_1, ps_2, \ldots, ps_u\}, \quad u \in \mathbb{N} \tag{1}$$

The representative products are evaluated with respect to every Kansei subset by the corresponding participant set. The evaluation score for a product $ps_u$, with respect to a Kansei adjective $a_{il}$ in Kansei subset $n$, by a participant $pi_v$, is denoted by $ES_{nuilv}$:

$$ES_{nuilv} = f(ps_u, a_{il}, pi_v, n) \tag{2}$$

The mean evaluation score for a product $ps_u$, with respect to a Kansei adjective $a_{il}$ in Kansei subset $n$, is denoted by $MES_{nuil}$ and calculated by

$$MES_{nuil} = \frac{\sum_{v=1}^{k} ES_{nuilv}}{k} \tag{3}$$

where $k$ is the number of evaluators in a subset ($k \geq 15$). A matrix of mean evaluation scores for the products with respect to the Kansei adjectives in a Kansei subset $n$, $M(MES_{nuil})$, can then be established:

$$M(MES_{nuil}) = \begin{bmatrix} MES_{n11} & \cdots & MES_{n1l} \\ \vdots & \ddots & \vdots \\ MES_{nu1} & \cdots & MES_{nul_n} \end{bmatrix} \tag{4}$$

The correlation coefficients of the Kansei adjectives within each Kansei subset can be calculated based on $M(MES_{nuil})$ using Pearson's product-moment algorithm [9]:

$$\rho_{MES_{nl},\, MES_{n(l+1)}} = \frac{\operatorname{cov}(MES_{nl}, MES_{n(l+1)})}{\sigma_{MES_{nl}}\, \sigma_{MES_{n(l+1)}}} \tag{5}$$

Equation (5) defines the population correlation coefficient. Substituting estimates of the covariances and variances based on a sample gives the sample correlation coefficient, denoted by $\gamma$:

$$\gamma_{MES_{nl},\, MES_{n(l+1)}} = \frac{\sum_{k=1}^{u} (MES_{nkl} - \overline{MES_{nl}})(MES_{nk(l+1)} - \overline{MES_{n(l+1)}})}{\sqrt{\sum_{k=1}^{u} (MES_{nkl} - \overline{MES_{nl}})^2}\ \sqrt{\sum_{k=1}^{u} (MES_{nk(l+1)} - \overline{MES_{n(l+1)}})^2}} \tag{6}$$

The correlation coefficients between the Kansei adjectives in each Kansei subset are used to construct the DSM of each subset. The Kansei adjectives underlying the correlation coefficients are the nodes in each DSM:

$$DSM_n = \begin{bmatrix} 1 & \cdots & \gamma_{MES_{n1},\, MES_{nl}} \\ \gamma_{MES_{n1},\, MES_{n2}} & \ddots & \gamma_{MES_{n2},\, MES_{nl}} \\ \vdots & & 1 \end{bmatrix} \tag{7}$$

The DSM subsets are combined into an overall DSM, $DSM_0$, by taking the average of the correlation coefficients between the same pairs of nodes in each $DSM_n$:

$$DSM_0(x, y) = \begin{cases} \dfrac{1}{2}\left(DSM_n(a, b) + DSM_{n+1}(a, b)\right), & \text{if there are corresponding values in } DSM_n \\ 0, & \text{if there is no corresponding value in } DSM_n \end{cases} \tag{8}$$

Figure 2. Pseudo-code for the DSM for Kansei partitioning.
Figure 3. Pseudo-code for a combined DSM for Kansei partitioning.
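The following Python sketch walks through the core of Part 1 under simplifying assumptions of our own: random stand-in scores replace real survey data, both Kansei subsets are assumed to rate the same adjectives, and NumPy is used instead of the authors' tooling.

```python
import numpy as np

# Rows: product samples, columns: Kansei adjectives (mean evaluation
# scores, Equation (4)); values here are random stand-ins for survey data.
rng = np.random.default_rng(0)
MES_subset1 = rng.uniform(1, 7, size=(10, 4))   # 10 products x 4 adjectives
MES_subset2 = rng.uniform(1, 7, size=(10, 4))

def dsm_from_scores(mes: np.ndarray) -> np.ndarray:
    """Pairwise sample correlation of the adjective columns
    (Equations (5)-(7)); the result has 1s on the diagonal."""
    return np.corrcoef(mes, rowvar=False)

dsm1, dsm2 = dsm_from_scores(MES_subset1), dsm_from_scores(MES_subset2)
# Combined DSM (Equation (8)): average the coefficients for node pairs
# present in both subset DSMs (here, all pairs, by assumption).
dsm0 = 0.5 * (dsm1 + dsm2)
print(np.round(dsm0, 2))
```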
Part 2: Generation of Kansei adjective sets

A GD-based Kansei clustering method is proposed. Firstly, a connection ratio is introduced to represent the ratio of actual incident internal links to possible incident internal links for the vertex under consideration (Equation (11)); then a proper weight is assigned to the vertices using Equations (9) and (10); subsequently, the vertices are sorted in descending order by weight, and a strong-link decomposition pass is applied using Equations (14), (15) and (16). Through a comparison of the subgraphs, the value of the cutting function is computed by Equation (17) so as to select the largest subgraph; the operations of selecting and removing the largest remaining subgraph are then repeated until all vertices are covered and the decomposition is complete. An application program implementing the algorithm is available for the efficient and effective decomposition of graphs (Figure 4). The specific equations of the algorithm are as follows.

A weight is assigned to all the links (a link is a relationship between Kansei adjectives in $DSM_0$) and to all the vertices (the corresponding nodes, or Kansei adjectives):

$$W_j^{link} = C_j + O_j \tag{9}$$

$$W_i^{vert} = \frac{1}{h_i} \sum_{j=1}^{h_i} W_j^{link} \tag{10}$$

where $W_j^{link}$ is the link weight of the $j$-th link, $j = 1, 2, \ldots, g$ ($g$ being the total number of links); $W_i^{vert}$ is the vertex weight of the $i$-th vertex; $O_j$ is the weight index for link $j$; $C_j$ is the number of three-link circuits in the $j$-th link; and $h_i$ is the number of links incident to the $i$-th vertex.

The connection ratio for a vertex is denoted by $cr$:

$$cr = \frac{IL_{act}}{IL_{pos}} \tag{11}$$

where $IL_{act}$ is the number of actual incident internal links for a vertex and $IL_{pos}$ is the number of possible incident internal links for a vertex.

A subgraph block $sb$ is a set of vertices (Kansei adjectives, $a_{sb}$). The set of subgraph blocks is denoted by $SB$:

$$sb = \{a_{sb1}, a_{sb2}, \ldots, a_{sbw}\} \tag{12}$$

$$SB = \{sb_1, sb_2, \ldots, sb_v\} \tag{13}$$

A vertex $a$ may be an element of $sb$ if

$$cr_a \geq TV \quad (TV \text{ is a threshold value}) \tag{14}$$

or

$$\max(W_a^{link}) = W_{a_{sb}}^{link} \tag{15}$$

where $\max(W_a^{link})$ is the strongest link of a vertex $a$, and $W_{a_{sb}}^{link}$ is the value of a link from vertex $a$ to a vertex in the set $sb$.

A set $DV$ is the collection of disconnected vertices which do not belong to any $sb$:

$$DV = SB^{c} \tag{16}$$

A cutting function is used to deal with the problem of "noise" from extraneous vertices and links that are only nominally associated with nucleus vertices. The cutting function cuts low-weight links to isolate or reveal cluster nuclei. By applying a series of cutting passes, the cutting function enables several decompositions for comparison and subgraph selection. The value of the cutting function $k_i$ for a possible $i$-th cutting pass is calculated using

$$k_i = \max\left[\,cf_{i-1} + 1,\ \min_j\left(W_j^{link}\right)\right] \tag{17}$$

Many manipulations are applied to manage the $sb$. As a result, the Kansei adjective sets (KAS) are formed to help identify the Kansei clusters.

Figure 4. The application program implementing the GD algorithm [10].

Part 3: Identification of Kansei tags

This part involves four steps: (1) converting the Kansei adjectives matrix into a color map, (2) locating the Kansei adjective sets on the color map, (3) merging overlaps and removing branches, and (4) checking the individual clusters, i.e. the identified Kansei tags.
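To make the decomposition criterion tangible, the sketch below computes the connection ratio of Equation (11) for the vertices of a small, hypothetical link set and tests it against a threshold as in Equation (14). It is a simplified illustration of one step only, not the full GD algorithm.

```python
# Hypothetical constraint links between Kansei adjectives (vertex pairs).
links = {("a1", "a2"), ("a2", "a3"), ("a3", "a4")}

def connection_ratio(vertex: str, block: set) -> float:
    """Actual incident links that stay inside `block`, divided by the
    possible incident links to the other block members (Equation (11))."""
    incident = [p for p in links if vertex in p]
    internal = [p for p in incident if (set(p) - {vertex}) <= block]
    possible = len(block - {vertex})
    return len(internal) / possible if possible else 0.0

block = {"a1", "a2", "a3"}
TV = 0.4  # threshold value as in Equation (14)
for v in sorted(block):
    cr = connection_ratio(v, block)
    print(v, round(cr, 2), "keep" if cr >= TV else "re-examine")
```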
Part 3: Identification of Kansei tags

This part involves 4 steps: (1) converting the Kansei adjectives matrix into a color map, (2) addressing the Kansei adjective sets on the color map, (3) merging overlaps and removing branches, and (4) checking the individual clusters, i.e., the identified Kansei tags.

2.2. Module 2: Product classification module (PCM)

A double Semantic Differential (SD) approach is proposed for product classification. The method consists of 7 steps: (1) collecting Kansei tags and products, (2) selecting survey participants, (3) evaluating the products using the SD method (Equation 18), (4) evaluating the Kansei tags using basic emotions (Equation 20), (5) calculating the Kansei mean values and variances (Equations 21, 22 and 23), (6) calculating the adjusted Kansei mean values (Equations 24 and 25), and (7) listing the classification results (Equations 26 and 27). The particular algorithm of the emotions-based SD is presented in detail:

The representative products are evaluated with respect to every Kansei tag by the survey participants. The results are recorded in a series of matrices M_{PS_i-KT}, with rows kt_1, kt_2, ..., kt_m and columns p_1, p_2, ..., p_v:

M_{PS_i-KT} = \begin{bmatrix} s_{i11} & s_{i12} & \cdots & s_{i1v} \\ s_{i21} & s_{i22} & \cdots & s_{i2v} \\ \vdots & & & \vdots \\ s_{im1} & s_{im2} & \cdots & s_{imv} \end{bmatrix}, \quad i \in \{1, \ldots, u\}  (18)

where M_{PS_i-KT} is a matrix recording the evaluation results for product ps_i with respect to every Kansei tag by v participants; s_{imv} is the evaluation score of the i-th product sample with respect to the m-th Kansei tag by the v-th participant.

The mean evaluation scores for all products with respect to all Kansei tags can be shown in one matrix M_{PS-KT}, with rows kt_1, ..., kt_m and columns ps_1, ..., ps_u:

M_{PS-KT} = \begin{bmatrix} \sum_{n=1}^{v} s_{11n}/v & \sum_{n=1}^{v} s_{21n}/v & \cdots & \sum_{n=1}^{v} s_{u1n}/v \\ \sum_{n=1}^{v} s_{12n}/v & \sum_{n=1}^{v} s_{22n}/v & \cdots & \sum_{n=1}^{v} s_{u2n}/v \\ \vdots & & & \vdots \\ \sum_{n=1}^{v} s_{1mn}/v & \sum_{n=1}^{v} s_{2mn}/v & \cdots & \sum_{n=1}^{v} s_{umn}/v \end{bmatrix}  (19)

Each Kansei tag is evaluated with respect to every basic emotion in each basic emotions system by the same survey participants. The results are recorded in a series of matrices M_{KT_j-BES_w}, with rows be_{w1}, be_{w2}, ..., be_{wz} and columns p_1, ..., p_v:

M_{KT_j-BES_w} = \begin{bmatrix} bs_{w1j1} & bs_{w1j2} & \cdots & bs_{w1jv} \\ bs_{w2j1} & bs_{w2j2} & \cdots & bs_{w2jv} \\ \vdots & & & \vdots \\ bs_{wzj1} & bs_{wzj2} & \cdots & bs_{wzjv} \end{bmatrix}, \quad j \in \{1, \ldots, m\}  (20)

where M_{KT_j-BES_w} is a matrix recording the evaluation results for Kansei tag kt_j with respect to every basic emotion in the w-th basic emotions system by v participants; bs_{wzjv} is the evaluation score of the j-th Kansei tag with respect to the z-th basic emotion in the w-th basic emotions system by the v-th participant.

The mean evaluation scores for all Kansei tags with respect to all basic emotions in the w-th basic emotions system can be shown in one matrix M_{KT-BES_w}, with rows be_{w1}, ..., be_{wz} and columns kt_1, ..., kt_m:

M_{KT-BES_w} = \begin{bmatrix} \sum_{n=1}^{v} bs_{w11n}/v & \cdots & \sum_{n=1}^{v} bs_{w1mn}/v \\ \sum_{n=1}^{v} bs_{w21n}/v & \cdots & \sum_{n=1}^{v} bs_{w2mn}/v \\ \vdots & & \vdots \\ \sum_{n=1}^{v} bs_{wz1n}/v & \cdots & \sum_{n=1}^{v} bs_{wzmn}/v \end{bmatrix}  (21)

The variance of a Kansei tag kt_j with respect to a basic-emotion dimension be_{wz} in the w-th basic emotions system is denoted by V_{wz-j}:

V_{wz-j} = \frac{\sum_{n=1}^{v} \left( M_{KT_j-BES_w}(z, n) - M_{KT-BES_w}(z, j) \right)^2}{v}  (22)

The total Kansei variance, KV_{w-j}, for the j-th Kansei tag based on the w-th basic emotions system is defined as:

KV_{w-j} = \sum_{n=1}^{z} V_{wn-j}  (23)
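The following minimal sketch, with assumed sizes and random scores, traces Equations 19 and 21-23: participant scores are averaged per product/Kansei tag and per Kansei tag/basic emotion, and a total Kansei variance is accumulated per tag.

```python
# Sketch of Equations 19 and 21-23; all sizes and scores are toy assumptions.
import random

u, m, z, v = 4, 3, 5, 10   # products, Kansei tags, basic emotions, participants
random.seed(0)

# s[i][j][n]: score of product i on tag j by participant n        (Equation 18)
s = [[[random.randint(1, 7) for _ in range(v)] for _ in range(m)] for _ in range(u)]
# bs[k][j][n]: score of tag j on basic emotion k by participant n (Equation 20)
bs = [[[random.randint(1, 7) for _ in range(v)] for _ in range(m)] for _ in range(z)]

mean = lambda xs: sum(xs) / len(xs)

M_ps_kt = [[mean(s[i][j]) for i in range(u)] for j in range(m)]    # Equation 19
M_kt_bes = [[mean(bs[k][j]) for j in range(m)] for k in range(z)]  # Equation 21

# Equation 22: variance of tag j along basic-emotion dimension k
V = [[mean([(bs[k][j][n] - M_kt_bes[k][j]) ** 2 for n in range(v)])
      for j in range(m)] for k in range(z)]

# Equation 23: total Kansei variance per tag
KV = [sum(V[k][j] for k in range(z)) for j in range(m)]
print(KV)
```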
The total Kansei variance can be mapped into the Kansei-tag dimensions via various mapping functions. By doing so, the adjusted mean evaluation scores based on the w-th basic emotions system for products with respect to Kansei tags can be obtained using:

AM_{PS-KT-w}(x, y) = M_{PS-KT}(x, y) - F(KV_{w-x}), \quad x \in \{1, \ldots, m\},\ y \in \{1, \ldots, u\}  (24)

where AM_{PS-KT-w} is the matrix of adjusted mean evaluation scores and F is the mapping function.

Accordingly, a sequence matrix of the adjusted mean evaluation scores based on the w-th basic emotions system is denoted by Seq(AM_{PS-KT-w}); each row j (for Kansei tag kt_j) sorts the adjusted scores of the u products from maximum to minimum:

Seq(AM_{PS-KT-w}) = \begin{bmatrix} \max(AM_{PS-KT-w}(1, 1{:}u)) & \cdots & \min(AM_{PS-KT-w}(1, 1{:}u)) \\ \max(AM_{PS-KT-w}(2, 1{:}u)) & \cdots & \min(AM_{PS-KT-w}(2, 1{:}u)) \\ \vdots & & \vdots \\ \max(AM_{PS-KT-w}(m, 1{:}u)) & \cdots & \min(AM_{PS-KT-w}(m, 1{:}u)) \end{bmatrix}  (25)

Based on the sequence matrix, two ways can be applied to obtain a matrix of adjusted product classification results with respect to the Kansei tags: 1) by the number of products that is allowed in a classification result; and 2) by a threshold value that is compared with the adjusted mean evaluation score of each product. Therefore, a matrix of adjusted product classification results, AC, is:

1) AC_w = Seq(AM_{PS-KT-w})(1{:}sn), \quad sn \in [1, u]  (26)

2) AC_w = Seq(AM_{PS-KT-w})(1{:}st), \quad st \in [1, u],\ \text{iff } Seq(AM_{PS-KT-w})(j{:}st) \geq tv_j,\ j \in \{1, \ldots, m\}  (27)

where AC_w is the matrix of adjusted product classification results based on the w-th basic emotions system; sn is the number of products allowed in a classification result; st is a number between 1 and u; tv_j is a threshold value for the j-th Kansei tag.
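A compact continuation for Equations 24-27, again with toy numbers: the mapping function F below is an assumed placeholder (a simple variance penalty), not the function used by the authors.

```python
# Sketch of Equations 24-27 with assumed toy values.
m, u = 3, 4                                    # Kansei tags, products
M_ps_kt = [[5.1, 3.2, 4.4, 6.0],               # mean score per tag and product
           [2.9, 4.8, 3.7, 4.1],
           [5.5, 5.0, 2.8, 3.3]]
KV = [1.2, 0.4, 2.0]                           # total Kansei variance per tag
F = lambda kv: 0.1 * kv                        # assumed mapping function

# Equation 24: adjusted mean evaluation scores
AM = [[M_ps_kt[x][y] - F(KV[x]) for y in range(u)] for x in range(m)]

# Equation 25: per-tag product sequence from max to min adjusted score
Seq = [sorted(range(u), key=lambda y: AM[x][y], reverse=True) for x in range(m)]

sn = 2                                         # Equation 26: top-sn products
AC_by_number = [row[:sn] for row in Seq]

tv = [4.0, 3.5, 4.5]                           # Equation 27: per-tag thresholds
AC_by_threshold = [[y for y in row if AM[x][y] >= tv[x]]
                   for x, row in enumerate(Seq)]
print(AC_by_number, AC_by_threshold)
```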
2.3. Module 3: Product re-configuration module (PRM)

A personal construct theory (PCT) based approach is proposed in this module to address product attributes and suggest prototype products with the desired emotional impacts. The specification of this method is explained as follows:

Part 1: Identification of targets

This part comprises: (1) collecting the Kansei tags (via Modules 1 and 2), (2) establishing overall mind maps through a focus group, (3) coding the overall mind maps into a tree structure, (4) designing survey forms and evaluating the items in the overall mind maps, (5) calculating occurrence values and weights (Equations 28-30), (6) defining the replications, (7) determining the underlying values of the replications, (8) establishing the means-value chain, and (9) identifying the targets. The specific equations are:

S_ibbi_pv = \begin{cases} 1, & \text{if "yes" is given by the surveying participant} \\ 0, & \text{if "no" is given by the surveying participant} \end{cases}  (28)

OV_ibbi = \sum_{v} S_ibbi_pv  (29)

OW_ibbi = \frac{OV_ibbi}{v}  (30)

where S_ibbi_pv is the evaluation score of the i-th item in the b-th branch by the v-th participant; OV_ibbi is the occurrence value of the i-th item in the b-th branch; OW_ibbi is the occurrence weight of the i-th item in the b-th branch.

Part 2: Comparison between targets and candidate products

A six-step approach is used to conduct these tasks (Figure 5).

Figure 5. Steps for comparison between targets and candidate products.
Figure 6. Steps of product modification.

Part 3: Product modification

Prototype products can be obtained by modifying the values of the key product attributes. The modification process consists of four steps (Figure 6).

3. A case study

A case study on wedding ring design is used to illustrate the complete procedure of the proposed PDSEE framework. The effectiveness of the system is demonstrated using a software tool that has been realized using Microsoft Visual Studio.

Step 1: Identification of the customer's emotional needs – In this case study, the target customers were limited to local Chinese, male, aged between 21 and 28 (inclusive), and with a tertiary-level educational background. Information about people's emotional needs regarding wedding rings was collected from all available sources, and a total of 1294 pieces of information were gathered. After filtering out the non-relevant and repetitive information, 337 emotional statements were selected. Through verbalization and standardization, 168 Kansei adjectives were obtained. The collected Kansei adjectives were divided into 14 Kansei subsets. Regarding product samples, 60 men's wedding rings that were popular in the market were employed as product representatives. Having the Kansei adjectives and product samples, a 7-point scale evaluation among 210 survey participants was performed. Based on the evaluation scores, a series of computations was conducted using the equations in Section 2.1. The resulting Kansei tags are presented in Figure 7.

Step 2: Classification of products – The products (rings) were evaluated against the identified Kansei tags by 30 participants. Having the evaluation scores, the calculation of the Kansei mean values, variances and adjusted mean values can be realized using Equations 18-27, and the product classification results are presented in Figure 8.

Figure 7. Resulting Kansei tags.
Figure 8. Product classification result.

Step 3: Addressing product attributes and designing prototypes – A focus group involving a total of 30 participants was organized. The collected information was placed around the Kansei tags to form the mind map. Subsequently, survey participants were invited to judge whether an item was associated with the Kansei tag. The occurrence values and weights were calculated using Equations 28-30. Consequently, 9 items were identified as the replications of the Kansei tag "responsibility". Combining the opinions of the participants in terms of to what extent a replication met their underlying desires, the replication with the highest score was considered as the target. In this case, the target was identified as honor guards. Based on this result, designers elicited the corresponding product attributes and constructed a design structure to meet the target. With the help of a software tool (Figure 9), the basic structure of the ring configuration can be identified, and a final design was achieved as shown in Figure 10.

Figure 9. A software tool for wedding ring design.
Figure 10. A final ring design.

For the evaluation of this design, an iterative process was designed around a 7-point scale evaluation survey to examine consumer satisfaction, and the results show that this design achieved a final score higher than 6, which is higher than the scores of the existing product samples.

4. Discussion and Conclusion

A prototype product design system for emotional effect (PDSEE) is proposed in this paper.
The proposed PDSEE comprises three main modules: the emotional needs management module (ENMM), the product classification module (PCM), and the product re-configuration module (PRM), which tackle the critical issues of (1) a consistent representation of emotional needs and effective computational algorithms for identifying emotional requirements, (2) a reliable methodology for product classification, and (3) a product configuration analysis method. The case study indicates that the prototype PDSEE is effective for achieving designs with the desired emotional impacts. To further improve the PDSEE, a reliable and operable approach to effectively capture consumers' opinions and feelings remains to be established.

Acknowledgement

This research was supported by Singapore Maritime Institute research project (SMI2014-MA-06).

References

[1] D. Norman, Preface. In: H.N.J. Schifferstein and P. Hekkert (Eds.) Product Experience, Elsevier, San Diego, 2008.
[2] D. Chang, C.-H. Chen, Understanding the influence of customers on product innovation, International Journal of Agile Systems and Management, 7(3/4), 2014, pp. 348-364.
[3] K.H.E. Kroemer, H.B. Kroemer, K.E. Kroemer-Elbert, Ergonomics: How to Design for Ease and Efficiency, Prentice Hall Press, Upper Saddle River, 2001.
[4] W.S. Green, P.W. Jordan, Human Factors in Product Design: Current Practice and Future Trends, Taylor & Francis, Philadelphia, 1999.
[5] D.A. Norman, Emotional Design, Basic Books, New York, 2004.
[6] D. Chang, C.-H. Chen, K.M. Lee, A crowdsourcing development approach based on a neuro-fuzzy network for creating innovative product concepts, Neurocomputing, 142 (2014), 60-72.
[7] M.L. Richins, Consumption emotions. In: H.N.J. Schifferstein and P. Hekkert (Eds.) Product Experience, Elsevier, San Diego, 2008.
[8] A. Horiguchi, T. Suetomi, A Kansei engineering approach to a driver/vehicle system, International Journal of Industrial Ergonomics, 15 (1995), 25-37.
[9] J.L. Rodgers, W.A. Nicewander, Thirteen ways to look at the correlation coefficient, The American Statistician, 42(1), (1988), 59-66.
[10] C.-H. Chen, T. Wu, L.G. Occeña, Knowledge organisation of product design blackboard systems via graph decomposition, Knowledge-Based Systems, 15 (2002), 423-435.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-194

Model-Based Variant Management with v.control

Christopher JUNK a, Robert RÖßGER a, Georg ROCK b, Karsten THEIS c, Christoph WEIDENBACH d and Patrick WISCHNEWSKI a,1
a Logic4Business GmbH, Saarbrücken, Germany
b Hochschule Trier, Germany
c PROSTEP AG, Darmstadt, Germany
d Max Planck Institute for Informatics, Saarbrücken, Germany
1 Corresponding Author, Mail: Patrick.wischnewski@logic4business.com

Abstract. Manufacturers of products that are instances of variants out of a complex product portfolio have learnt that rigid process management is mandatory to meet today's standards of quality. An important part is formed by processes that aim at mastering variant complexity. v.control supports these by providing, for the first time, both a complex product model able to represent detailed engineering, manufacturing, logistics, finance and marketing data in the very same model, and a workbench of provably mathematically correct and rigid analysis tools. You want to know whether product changes performed by different engineers are compatible?
Press a button and v.control guarantees the consistency of all product variants. You want to know whether all the products suggested by marketing can actually be built? Press a button and v.control checks your portfolio and detects problematic variants. You are searching for a product meeting partial customer requirements that is optimal in profit? Press a button and v.control provides the optimal product cash cow. You want to make sure that your product portfolio meets future environmental regulations? Press a button and v.control identifies opportunities. You want to engineer shared parts of your product line to meet manufacturing inventory requirements? Press a button and v.control designs an optimal solution. This paper presents a detailed overview of the functionality of v.control as well as typical industrial applications successfully conducted with the help of v.control. It addresses current research in the fields of complexity management, variability management and SAT solving, and their functional integration within v.control.

Keywords. Variant Management, PLM, Product Management

Introduction

Managing complex product portfolios is one of the major challenges for manufacturers today. This is because there is a range of requirements for each individual product concerning, for example, quality, safety, market demands, compliance with legal requirements, time-to-market and product costs. This is even more challenging because the products are instances of huge product portfolios and, therefore, it is not possible to build and test each product manually to see if it meets the requirements. In particular, market demands for individual products have increased in recent years and are still increasing. As a consequence, manufacturers need to build a diversity of products in order to address their customers and keep or gain market share. Therefore, mastering the product complexity and meeting all the requirements for each of the individual products is the key to obtaining competitive advantages in the future. As a result, it is essential to develop methods and tools that support manufacturers in achieving these goals today [14].

Against the background of the increasing demand for individual products, identifying and reducing complexity in the products and processes [8,7,9] can only be one part of a solution. Even after removing "useless" variants there is still a huge complexity remaining that needs to be dealt with in the product life cycle. In [10], feature models are used to model product variability. Based on feature models, there exist commercial software tools that aim at managing products and their respective artifacts [1,3]. In addition, research has been done on developing automatic analysis procedures for feature models [12,5]. However, feature models are restricted to structural relations and cannot represent general product build rules as they are induced, e.g., by engineering requirements or marketing strategies. Furthermore, product model analysis techniques need to be extended to these richer models.
In particular, the requirements are:

• a combination of detailed product building rules from different areas such as engineering, manufacturing, logistics, finance and marketing into a single model (single source of truth)
• support for feature attributes and values, enabling detailed modeling
• analysis and optimization procedures

The model-based variant management method supported by the software tool v.control fulfills all these requirements. It provides, for the first time, both the ability to model complex products together with their detailed product information in the very same product model, and a workbench of efficient, provably mathematically correct [13,11,6] and rigid analysis and optimization procedures. v.control is able to express a diversity of product properties in its model and contains efficient analysis and optimization procedures for it.

This paper presents Model-Based Variant Management from a user and application perspective and is structured as follows: Section 1 briefly presents the Model-Based Variant Management method, Section 2 gives an overview of our software tool v.control, and Section 3 presents case studies where we have successfully applied the Model-Based Variant Management method together with v.control.

1. Model-Based Variant Management

This section presents the Model-Based Variant Management approach together with its properties for mastering the complexity of today's variant products. The central part of this method is the product model, which is introduced in the first part of this section. The second part describes the steps necessary to implement this method for a product portfolio.

1.1. Product Model

As depicted in Figure 1, a product model Φ is the central part of the Model-Based Variant Management method and has to fulfill three requirements.

Figure 1. Product Model: Collective composition that defines the product portfolio.

Firstly, the product model Φ combines the requirements, properties, interfaces and dependencies of all stakeholders involved in the product life cycle, i.e., engineering, marketing, construction, management, and after-sales. Secondly, the product model Φ is rigid in the sense of being expressed in a precise formal language with a unique semantics which represents the above constituent parts of the product. Thirdly, the product model Φ enables push-button analysis and optimization procedures.

As a consequence, the product model Φ is composed of the relevant product information from all involved departments. It is a comprehensive representation of the product portfolio containing all relevant properties of the products. The above specified requirements for the product model are achieved by a formal logic, where a logic is simply a formal and precise language with a fixed, unique semantics. This means that Φ is a set of formulas of a logic. More precisely, any "instantiation" of the product model Φ represents a product in terms of the specified properties, so Φ represents all eventual products in a compact form. The product model does not only represent all products of the portfolio; it is also the basis for a diversity of analysis and optimization operations which can be performed push button. These operations give valuable properties and information about the products back to the stakeholders that cannot be obtained otherwise.
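As a minimal illustration of what "Φ is a set of formulas of a logic" means operationally, the sketch below encodes a toy product model as propositional rules over option variables and enumerates its instantiations. The options and rules are invented; this is not v.control's modelling language or API.

```python
# Toy product model Phi as propositional rules; every satisfying assignment
# ("instantiation") is a product. Invented example, not v.control's model.
from itertools import product

options = ["engine_petrol", "engine_diesel", "roof_panorama", "trailer_hitch"]

rules = [  # each formula maps an assignment (option -> bool) to True/False
    lambda a: a["engine_petrol"] != a["engine_diesel"],      # exactly one engine
    lambda a: not a["trailer_hitch"] or a["engine_diesel"],  # hitch needs diesel
]

def is_product(assignment):
    return all(rule(assignment) for rule in rules)

all_products = []
for bits in product([False, True], repeat=len(options)):
    assignment = dict(zip(options, bits))
    if is_product(assignment):
        all_products.append(assignment)

print(len(all_products), "valid products")   # Phi represents them compactly
```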
Important properties of the product model are:

• Is the composition of all product requirements and properties consistent, i.e., can they all be fulfilled at the same time by a real product?
• What are the properties of the products that are not explicitly specified but are consequences of the specified properties?
• Are all parts of the product model eventually used in at least one product (dead feature)?
• Are there product configurations with respect to predefined properties?
• Are there redundant rules in the product model, i.e., rules that are not needed for any eventual product?
• What is the optimal product family according to attributes such as cost, profit contribution and time to build, with respect to profit, regulations and customer orientation?
• What is an optimal product portfolio with respect to customer requirements?

1.2. Implementing the Method

Implementing this method for a product portfolio involves the following steps:

• Definition of an adequate logic that can express the products, properties and attributes.
• Definition of a translation from the available product data to the formal product model.
• Integration into PLM processes and IT infrastructures.

Because products can be composed of several thousand parts and the analysis and optimization operations are, in general, computationally expensive, the definition of the right product model is essential for a successful implementation of the Model-Based Variant Management method. In addition, the product model is defined in terms of the specific properties of the products of a particular customer and adapted to its specific requirements.

1.3. Summary

This section has presented the Model-Based Variant Management approach, which combines all product-relevant information from all stakeholders into one collective model. Because this model contains all this information, sophisticated analysis and optimization operations can be performed on the product that give valuable insights into the products which could not be obtained with other methods. In addition, this section has presented the necessary steps for implementing this method for a product portfolio. The following Section 2 describes our tool v.control, which implements the Model-Based Variant Management method, and Section 3 presents industrial use cases where we have successfully used this method together with v.control.

2. v.control

v.control [2] is a tool supporting the Model-Based Variant Management method. v.control supports a variety of logics for building product models together with the respective analysis and optimization operations. v.control is very flexible and can easily be modified and integrated into existing IT landscapes and infrastructures.

There are two versions of v.control: a server version that performs scheduled analysis and optimization operations on specified product data and generates respective reports. This version is particularly useful for maintaining a high quality of product data in the development process, where the product data is frequently changing. The second version is a desktop application that provides functionality for interactively exploring the product data, and for understanding and fixing defects.
This version is used to perform What-If analyses on the product data and to understand the relationships of the individual parts and their properties. Figure 2 shows a screenshot of the interactive product explorer of v.control.

Figure 2. v.control: Interactive Product Explorer.

The product explorer is divided into five areas:

1. The list of configurations: A configuration is a temporal modification of the product model. The following modifications are possible:
   • selection/deselection of options
   • activating/deactivating of rules
   • changing of rules
   • adding rules
   These changes do not change the product model itself; rather, a temporal change of the product model is stored in the configuration. Consequently, this has no effect on the other configurations. Note that the selection/deselection of options leads from a 150% model, which does not represent a complete model, to a 100% model, which represents a concrete product. This is in contrast to the modification of rules, which changes the product model and, therefore, the set of valid products represented by the product model.
2. The product parts arranged in the structure of the product (150% BOM). Because v.control has a central product model for all aspects of the product, multiple such structures are possible, for example: one for the engineering structure, one for the electronics structure and another for the marketing structure.
3. An action history that shows the exact temporal actions performed in a particular configuration on the product model, in terms of the actions described above. Removing, reordering and temporarily disabling actions provide interactive exploration possibilities for configuration changes.
4. Area 4 contains all rules of the product model, arranged with respect to their source.
5. Area 5 is the output area. In the case of an error, the reason for the error is shown here. Otherwise, all consequences and effects on the product parts are indicated. This presents the result of a What-If analysis.

v.control can either be used to manage the master data for the product model, or it supports the process of changing the data in another master data system by allowing a change report to be exported. Aside from the product explorer, the standard desktop application of v.control allows the following operations to be performed:

• Dead-feature detection, i.e., detection of parts that can never occur in any valid product
• Consistency check, i.e., is the product model itself consistent
• Product optimization with bounds

In addition to these analysis and optimization operations, v.control supports a variety of product-specific analysis operations. These have to be adapted specifically to the structure of the product data.
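The first two operations can be made concrete with a brute-force sketch over a tiny invented rule set; a real implementation uses dedicated SAT procedures, so the code below only illustrates the definitions, not v.control's algorithms.

```python
# Brute-force illustration of the consistency check and dead-feature detection.
from itertools import product

options = ["p1", "p2", "p3"]
rules = [lambda a: not a["p1"] or a["p2"],      # p1 requires p2
         lambda a: not (a["p2"] and a["p3"])]   # p2 excludes p3

products = []
for bits in product([False, True], repeat=len(options)):
    a = dict(zip(options, bits))
    if all(r(a) for r in rules):
        products.append(a)

consistent = bool(products)                                     # consistency
dead = [o for o in options if not any(p[o] for p in products)]  # dead features
print("consistent:", consistent, "| dead features:", dead)
```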
3. Case Studies

This section presents two case studies that have successfully implemented the Model-Based Variant Management approach together with v.control, extended by product-specific analysis operations.

3.1. Consistency of Marketing and Engineering Product Data

In this case study, the goal was the implementation of a method that ensures the consistency of the engineering product data with the marketing product data for a globally operating manufacturer. Figure 3 depicts the implementation of Model-Based Variant Management into the customer process.

The engineering product data consists of rules that describe the dependencies between the parts of the product. Likewise, the marketing product data describes the products from a marketing point of view. This means the marketing departments define rules with respect to their marketing strategy, for example that part 1 shall only be sold in combination with part 2 and with the color red.

Figure 3. Consistency of engineering and marketing product data.

Both the engineering data and the marketing data change daily. Consequently, every day they produce new revisions of the respective data. Because many employees from different areas of the company are involved, this is an error-prone process. For example, it regularly happened that marketing created offers that actually could not be built. For this reason, the manufacturer wanted to implement an automatic check verifying that no invalid product was offered, and to integrate this check into their process.

The first step towards this goal was the definition of a product model

Φ_{i,j} = Φ_{E_i} ∪ Φ_{M_j}

where Φ_{E_i} is created from the engineering data in revision i and Φ_{M_j} from the marketing data in revision j, respectively. Example rules from Φ_{E_i} are "part1 requires part2", "either part1, part2 or part3 is contained in the product", and "if the attribute value x of part1 is larger than 5 then part3 is needed". Example rules from Φ_{M_j} are "red products all have part1" and "part1, part3 and part5 can only be ordered as a group and then lead to a reduction in price". All these rules have their formal logic counterparts in Φ_{E_i} and Φ_{M_j}, respectively.

After the definition of the logical model and the definition of the automatic translation of the product data, the actual analysis operations were implemented. The first check verifies the consistency between the engineering data and the marketing data, i.e., that all potential products according to the two rule sets can actually be built. In terms of the combined logical product model Φ_{i,j}, this means verifying that the product model is consistent:

Φ_{i,j} ⊭ ⊥

In addition, we implemented another check that verifies that every offered option occurs in at least one valid product. In terms of the combined logical model Φ_{i,j}, this means that for every option opt there is an instance of Φ_{i,j} containing opt:

Φ_{i,j} ∪ {opt} ⊭ ⊥

By appropriate algorithm design, this does not need to consider each option separately. After implementing this check, verifying that the engineering data and marketing data are consistent with each other is just the push of a button in v.control or, alternatively, is checked automatically via a scheduled analysis task.

3.2. Car Optimization Based on CO2 Emission

The goal of the car optimization based on CO2 emission is to find the cars with the best profit while complying with the CO2 budget regulation of the European Union [4].
The goal of this regulation is the reduction of the average CO2 emission of passenger cars. If a manufacturer does not comply with the specified budget, they have to pay penalties. Figure 4 depicts the integration of an optimization check with v.control into a PLM system that contains all master data.

Figure 4. Compute the most profitable products within a given CO2 emission bound.

In order to perform this optimization operation on part level, an extended product model is required. In addition to parts and their relations, attributes of parts like weight, fuel, gas consumption and price are required. The non-trivial objective function defines the CO2 emission for each individual product and, therefore, the respective optimization procedure operates on the whole product portfolio. Table 1 shows an example of parts extended with attributes and respective values as they may be stored in a PLM system. The function for computing the CO2 emission has the following signature, which is the objective function for finding the products with the least CO2 emission:

E_CO2: fuel × consumption → ℕ

Table 1. Example: Parts with attributes

part     | weight | fuel   | consumption | price
Engine1  | 90     | Petrol | 5.1         | 4000
Engine2  | 120    | Petrol | 7.5         | 6000
Engine3  | 110    | Diesel | 3.6         | 5000
Extra1   | 30     |        | 0.3         | 250
Extra2   | 50     |        | 0.6         | 280

In order to compute the CO2 emission of a particular product (car) with this function, the product must contain an engine, because this is the only part that has the attribute fuel. v.control verifies for each product whether it satisfies the signature of the specified objective functions; consequently, this defines the valid products. The fleet optimization with respect to cost and a given CO2 emission budget B_em is the corresponding optimization operation over the portfolio; note that Ψ is interpreted as a multi-set of products defined by the product model Φ. From 2020, the regulation of the European Parliament [4] defines a CO2 bound for B_em of 95 g CO2/km. In addition, other non-trivial bounds can be used during an optimization operation. An example of such a bound is the computation of a CO2 label [4], which relates the CO2 emission to the weight of a vehicle. Because of the tight CO2 regulations, it is crucial to respect the emission individually for each product instead of using an emission per vehicle class. This section has depicted an approach based on the Model-Based Variant Management method to implement a solution for this requirement.
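To illustrate the optimization on the data of Table 1, the sketch below enumerates single products (one engine plus optional extras), scores them with an assumed emission function and uses the list price as a stand-in for profit contribution. The per-litre CO2 factors, the demonstration bound and the profit proxy are our assumptions; v.control's objective function and its fleet-level multi-set optimization are richer than this.

```python
# Toy single-product optimization over the Table 1 data; emission factors,
# the demo bound and the profit proxy are assumptions, not v.control's model.
# (The weight attribute is omitted because this toy objective does not use it.)
from itertools import combinations

engines = {"Engine1": ("Petrol", 5.1, 4000),
           "Engine2": ("Petrol", 7.5, 6000),
           "Engine3": ("Diesel", 3.6, 5000)}
extras = {"Extra1": (0.3, 250), "Extra2": (0.6, 280)}

def e_co2(fuel, consumption):                  # g CO2/km; assumed g/l factors
    return {"Petrol": 2392, "Diesel": 2640}[fuel] * consumption / 100

B_em = 120   # demonstration bound; the 2020 fleet bound of [4] is 95 g CO2/km

best = None
for engine, (fuel, cons, price) in engines.items():
    for k in range(len(extras) + 1):
        for chosen in combinations(extras, k):
            total_cons = cons + sum(extras[e][0] for e in chosen)
            emission = e_co2(fuel, total_cons)
            value = price + sum(extras[e][1] for e in chosen)   # profit proxy
            if emission <= B_em and (best is None or value > best[0]):
                best = (value, engine, chosen, round(emission, 1))

print(best)   # (5530, 'Engine3', ('Extra1', 'Extra2'), 118.8)
```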
4. Conclusion

This paper has presented a comprehensive approach to rigid process and product management for mastering the complexity of huge variant product portfolios. This approach consists of the Model-Based Variant Management method and a respective workbench of provably mathematically correct and rigid analysis tools. The Model-Based Variant Management method aims at defining a detailed product model that contains all product-relevant data from a diversity of business departments, such as engineering, manufacturing, logistics, finance and marketing, in the very same model. This model builds the basis for the rigid analysis operations that are implemented in our tool v.control. It performs these operations push button on the whole product model and returns valuable information to the respective stakeholders which could not be obtained otherwise. These results build the foundations for informed decisions and actions concerning business strategies, corrections of defects, changes in marketing strategies, etc.

On the basis of two use cases, we have presented the potential of the Model-Based Variant Management approach in this paper. The first use case implements the approach for the marketing and engineering aspects of the product in order to verify the consistency between these two. The second use case combines product properties with costs and aims at developing business strategies with respect to the CO2 regulations of the European Parliament.

We have presented Model-Based Variant Management from a user and application perspective. Because the products of one particular customer have specific properties and requirements, the product model has to be designed and adapted for each customer individually. The specific reasoning techniques that are implemented in v.control and enable the detailed analyses shown here are beyond the scope of this paper. In general, almost any rigid analysis of an arbitrarily rich product model requires time exponential in the size of the model. However, the real-world product models that we have studied so far enjoy additional structure. In v.control, we exploit this additional structure by specific algorithms, resulting in a "push-button" behavior for all use cases presented in this paper. With respect to all use cases we have considered so far, we have been able to guarantee a response time of v.control that meets the requirements of the respective use case.

References

[1] BigLever Software Inc., http://www.biglever.com.
[2] Logic4Business GmbH, http://www.logic4business.com.
[3] pure-systems GmbH, http://www.pure-systems.com.
[4] Regulation (EC) No 443/2009 of the European Parliament and of the Council.
[5] D. Benavides, S. Segura, and A. Ruiz-Cortés, Automated analysis of feature models 20 years later: A literature review, Information Systems, 35(6):615-636, 2010.
[6] D. Dhungana, C.H. Tang, C. Weidenbach, and P. Wischnewski, Automated verification of interactive rule-based configuration systems. In: E. Denney, T. Bultan, and A. Zeller (eds.) 28th IEEE/ACM International Conference on Automated Software Engineering, ASE 2013, Silicon Valley, CA, USA, November 11-15, 2013, pp. 551-561. IEEE, 2013.
[7] H. ElMaraghy, A. Azab, G. Schuh, and C. Pulz, Managing variations in products, processes and manufacturing systems, CIRP Annals - Manufacturing Technology, 58(1):441-446, 2009.
[8] H. ElMaraghy, G. Schuh, W. ElMaraghy, F. Piller, P. Schönsleben, M. Tseng, and A. Bernard, Product variety management, CIRP Annals - Manufacturing Technology, 62(2):629-652, 2013.
[9] W. ElMaraghy, H. ElMaraghy, T. Tomiyama, and L. Monostori, Complexity in engineering design and manufacturing, CIRP Annals - Manufacturing Technology, 61(2):793-814, 2012.
[10] K.C. Kang, S.G. Cohen, J.A. Hess, W.E. Novak, and A.S. Peterson, Feature-oriented domain analysis (FODA) feasibility study, Technical report, DTIC Document, 1990.
[11] J. Larrosa, R. Nieuwenhuis, A. Oliveras, and E. Rodriguez-Carbonell, A framework for certified Boolean branch-and-bound optimization, Journal of Automated Reasoning, 46(1):81-102, 2011.
[12] M. Mendonca, A. Wasowski, and K. Czarnecki, SAT-based analysis of feature models is easy. In: D. Muthig and J.D. McGregor (eds.) SPLC, volume 446 of ACM International Conference Proceeding Series, pp. 231-240. ACM, 2009.
[13] R. Nieuwenhuis, A. Oliveras, and C. Tinelli, Solving SAT and SAT modulo theories: From an abstract Davis-Putnam-Logemann-Loveland procedure to DPLL(T), Journal of the ACM, 53(6):937-977, 2006.
[14] G. Rock, K. Theis, P. Wischnewski, Variability Management. In: J. Stjepandić et al. (eds.): Concurrent Engineering in the 21st Century: Foundations, Developments and Challenges, Springer International Publishing, Cham, 2015, pp. 491-520.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-204

View Specific Visualization of Proofs for the Analysis of Variant Development Structures

Lisa GRUMBACH
Trier University of Applied Sciences, Schneidershof, Trier, Germany

Abstract. The importance of variant development structures has increased continuously over the past few years. Nowadays the keyword is mass customization. Manufacturers have to satisfy the personal needs of their clients to keep up with competitors. Individual wishes and increasing demands of customers require the possibility of flexible and nearly limitless adaptations of a product. The result is a diversity of variants in one product line. Issues arise during the development of the corresponding development structures that are related to the complexity of the arising product data. Not only is the amount of functionality, and therefore of individual components, rising, but the interrelationships among the single components are also getting more complex. The number of newly evolving variants once a feature is added increases, in the worst case, exponentially. The resulting complexity cannot be handled manually. Thus, a formal logic based approach has to be used to describe the underlying variability model of the product structure. These formal specifications provide a basis for algorithms which analyse the structures in terms of finding all kinds of errors, such as inconsistencies or dead features. Such results include formal proofs, which reason about the derivation of the found errors. As the users who construct and manage the development structures typically have no expert knowledge about formal languages and proofs, the analysis output has to be represented in a role- and user-specific way. The presented work concentrates on an approach to visualize the formal results in an understandable, adaptable and user-oriented fashion. Different concepts are elaborated, which cover the information needs of specific user groups to match their respective knowledge level. As feature models are used to represent variant development structures in a simple and compact manner, they serve as the basic visualization technique. Other views represent the proof, in fact a resolution graph, a proof tree and a proof step. One possibility to understand the proof is to simulate through the individual steps. Each of the features and relationships which play a role in the current step are highlighted in the feature model. The mapping between proof steps and features and their relationships simplifies comprehension. Based on these concepts a prototype is implemented, whose functionality respects the common human-computer interaction requirements. To conclude, the result is summarized and prospects on future increments, further concepts and possible improvements are given.

Keywords. Visualization, Proof, Variance, Resolution
Introduction

Variants are standard supply by now in almost every industry. Issues occur because the development structures are constantly changing. In former times, one simple product consisted of a countable number of single parts whose composition was fixed. Mass production did not tolerate any individualization of a product. The formal specification was simple and of little scope. If there was any problem with a newly introduced feature, the so-called local hero was able to solve it with minimal effort, because he knew all of the specification.

Nowadays, as the industry is focused on the customer, variant diversity increases continually, and with it the complexity of product specifications. If a new component is added to the development structure, not only does one new configuration evolve, but the number of different variants explodes exponentially. It is not possible to preserve an overview of the specification, let alone to find inconsistencies or solve problems. Therefore automatic analyses are needed, which discover inconsistencies or dead features. The output of such analyses is a formal proof, which is based on SAT instances, as the specification has been translated into a formal language beforehand. As several user groups with different levels of knowledge and information demands may need to understand the reason for an inconsistency, it is necessary to visualize the line of proof, and thus the cause, in an adequate way for each of the user groups.

The result of the analysis needs to be processed so that the user is able to understand the proof. Information is visualized and adapted to his specific needs. The main aim is to support the user via arbitrary interaction with the developed system, which means he can choose anything from predetermined modes of visualization to manual configuration of the display. For a first impression of the current state of research, two tools were found, which are described in the next section.

1. Existing tools

Existing tools which visualize SAT instances are examined to find useful possibilities for supporting the user's understanding. DPvis and SATIn were both developed at the University of Tübingen and are presented in the next subsections.

1.1. DPvis

DPvis, which stands for Visualizing the Davis-Putnam Procedure, was developed by Edda-Maria Dieringer and Carsten Sinz ([1]). The Davis-Putnam Procedure, also referred to as the DPLL algorithm, is an algorithm which determines whether a propositional logic formula is satisfiable. The purpose of DPvis is to analyse SAT instances concerning their internal structure with regard to tractability. The run of a DPLL algorithm can be simulated. This is illustrated via two different views, namely the variable interaction graph and a search tree. The user can step through each phase of the DPLL algorithm and is navigated by the search tree, which shows the state of the allocated variables. The variable interaction graph displays the remaining variables and their relationships. If a valid assignment of values to variables which satisfies the formula is found, this is shown to the user. As this visualization concept only demonstrates the algorithm for testing the satisfiability of a formula, and not the cause of a possible failure, it creates no additional benefit for the investigated use case, and no aspect of it is considered further.
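For readers unfamiliar with the procedure DPvis animates, a compact DPLL variant (unit propagation plus branching) can be written in a few lines; this is the textbook algorithm, not code from DPvis.

```python
# Compact textbook DPLL: clauses are lists of ints, positive literal = variable
# set to True. Returns a satisfying assignment or None if unsatisfiable.
def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    changed = True
    while changed:                                   # unit propagation
        changed, simplified = False, []
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                             # clause already satisfied
            rest = [l for l in clause if abs(l) not in assignment]
            if not rest:
                return None                          # conflict: clause falsified
            if len(rest) == 1:                       # unit clause forces a value
                assignment[abs(rest[0])] = rest[0] > 0
                changed = True
            simplified.append(rest)
        clauses = simplified
    if not clauses:
        return assignment                            # all clauses satisfied
    var = abs(clauses[0][0])                         # branch on a free variable
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

print(dpll([[1, 2], [-1, 2], [-2]]))   # {x,y}, {-x,y}, {-y}: unsatisfiable -> None
```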
1.2. SATIn

SATIn, short for SAT Insights, is a tool for visualizing SAT instances and was developed by Stephan Kottler ([2]). The input, a SAT instance in CNF format, is translated into different graphs, which can be chosen manually and are displayed simultaneously. Variables, literals and clauses are set into relation by arbitrary combination, so that the interrelationships of the individual components become apparent. One possible user interaction is selecting a node in any graph, whereupon the system responds by highlighting this specific node in every other displayed graph. Through these references, which are shown to the user, he is able to build a consistent mental model on the basis of different perspectives. This concept is adopted in the implemented prototype.

With these concepts as a foundation, existing structures are combined in the developed prototype to form a complete visualization concept, which should support the user in understanding a proof, and thus the cause of the inconsistency in the development structures. Each of the used structures and their purposes are elucidated in detail in the next section.

2. Visualization structures

Different visualization structures are applied in the prototype to display various information. The individual structures vary in level of detail, interaction possibilities and kind of information. Adequate visualization concepts to increase understanding are used and described in detail in this section.

2.1. Feature Model

A basic graphical representation of development structures are feature models. They consist of a tree-like structure, where nodes represent the individual components, called features, and edges the interrelationships between them. Additionally, there are cross-tree constraints outside the edges, to express relationships between nodes which are not connected within the tree structure. The hierarchy determines the allocation between single features. The notation and interpretation used are adopted from Kang ([3]). Advantages of feature models are a simple, clear and compact visualization of all the components and their interrelationships. As the user should be familiar with feature models, the visualization is based on these, hence the user is able to build comprehension upon a familiar structure. Another benefit of feature models is the possibility of translating them into a formal language with the help of simple rules ([4]), which makes automatic analyses easier. A disadvantage is that the cross-tree constraints often cause inconsistencies, because they generate complexity, as their impacts are not directly obvious.

2.2. Proof

The proof which results from the performed analysis is based on resolution. As the feature model can be translated into formal language, namely into a set of given clauses, the proof consists of these clauses, which resolve to several other clauses during resolution; if the model is invalid, these can finally be resolved to the empty clause. The proof is displayed as a set of clauses, unmodified, shown in a simple listed text form. Each line represents one clause with the following scope:

ID [REF_ID1, REF_ID2]: {LITERAL1, LITERAL2, ...}

The clauses are numbered consecutively with a unique identifier (ID) to allow referencing. The lines contain the literals of the clauses (LITERAL1, ...), and if the clause is inferred from two other clauses, the identifiers of the latter are mentioned (REF_ID1, ...).
An example is given as follows: Given a SAT instance

S_2 = (C_1, C_2, C_3, C_4, C_5) = (\{x, y\}, \{\neg x, y\}, \{\neg y\}, \{y\}, \{\})

the corresponding proof is displayed in the following form:

C1: {x, y}
C2: {¬x, y}
C3: {¬y}
C4 [C1, C2]: {y}
C5 [C3, C4]: {}

This visualization structure is used to give an overview of the complete proof. Experts, who have basic knowledge of formal languages and resolution-based proofs, might need just this representation to understand the cause of the problem. For other user groups this information is processed, more specifically arranged in a reasonable order, and split into single steps. This is implemented with the two following structures.

2.3. Proof Step

The proof step picks one specific clause and, with the help of the referencing identifiers, shows the two resolving clauses. The content of all the clauses lets the user trace the resolution and does not overburden him with needless information. The understanding is supported by using the established notation of a step in a resolution-based proof. The notation can be seen for the resolution of clauses C1 and C2 from the example of the previous paragraph:

C1: {x, y}    C2: {¬x, y}
-------------------------
        C4: {y}

With this small amount of information collected in a single view, the unexperienced user is not overstrained and is able to concentrate on one single step of the proof.

2.4. Resolution Graph

As resolution is the proof technique the analyses are built on, a visualization concept for it is needed. It is used to give an overview of the proof structure and its components, and can thus also be applied for navigation. The resolution graph used is a modification of the definition of Carsten Sinz ([5]), which is as follows: Given a SAT instance S over a set of variables V, undirected edges are drawn between clauses C_1 and C_2 if a variable x ∈ V exists such that x ∈ C_1 and ¬x ∈ C_2. Clauses which are adjacent in the resolution graph lead to a resolvent.

Example: Given a SAT instance S_1 = (C_1, C_2, C_3) = (\{x, y\}, \{\neg x, y\}, \{\neg y\}), the corresponding resolution graph is as in Figure 1.

Figure 1. Resolution Graph

This sort of resolution graph describes the connections of the clauses in a model and reveals possible resolving variables. The graph only contains the existing clauses, which result from the model, without analysing it. As the resolution graph is used for manual navigation, the clauses which appear during the line of proof should also be visualized, as they are the ones of interest. This is achieved through a slightly modified resolution graph, which is comparable to the common notation of a resolution proof tree. Given the SAT instance S_2 = (C_1, C_2, C_3, C_4, C_5) = (\{x, y\}, \{\neg x, y\}, \{\neg y\}, \{y\}, \{\}), the generated modified resolution graph is displayed in Figure 2. If, in the following text, a resolution graph is mentioned, the one referred to is always the modified one.

Figure 2. Modified Resolution Graph
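The listed proof format of Section 2.2 lends itself to mechanical replay: from each line's references one can recompute the resolvent and, at the same time, obtain the edges of the modified resolution graph. The sketch below does this for the example instance S_2; the data structures are our own choice, not the prototype's.

```python
# Replay the example proof of Section 2.2 and derive the edges of the
# modified resolution graph. "-x" denotes the negated literal of "x".
proof = {                                  # ID: ([REF_ID1, REF_ID2], literals)
    "C1": ([], {"x", "y"}),
    "C2": ([], {"-x", "y"}),
    "C3": ([], {"-y"}),
    "C4": (["C1", "C2"], {"y"}),
    "C5": (["C3", "C4"], set()),
}

def neg(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

edges = []
for cid, (refs, clause) in proof.items():
    if not refs:
        continue                           # given clause, nothing to check
    (r1, c1), (r2, c2) = [(r, proof[r][1]) for r in refs]
    pivots = {l for l in c1 if neg(l) in c2}
    assert len(pivots) == 1, "exactly one resolved variable expected"
    pivot = pivots.pop()
    assert (c1 - {pivot}) | (c2 - {neg(pivot)}) == clause, f"bad step {cid}"
    edges += [(r1, cid), (r2, cid)]        # parents point to the resolvent

print("refutation:", proof["C5"][1] == set(), "| edges:", edges)
```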
3. Implemented Prototype

In this section the implemented prototype is presented. The scope of the graphical user interface is described, as well as the application concepts which are offered to the user. The framework employed for visualizing the graphs and other structures is also mentioned.

3.1. Graphical User Interface

The graphical user interface is divided into four minor areas:

• Feature Model
• Resolution Graph
• Formal Proof
• Single Proof Step

A rough structure is recognizable in Figure 3. The feature model takes half of the space, as it represents the model and is a familiar structure for every user. The inconsistency results from an interrelationship within the model.

Figure 3. Structure of the graphical user interface

The individual concepts and interactions implemented in the prototype that help the user in understanding the proof are presented in the next section.

3.2. Application concepts

• Show/hide different views
  - manually, for each view separately
  - via pre-built modes for certain user groups
• Collapse/expand parts of the feature model
• Zoom in/out of parts of the feature model
  - continually via scrolling
  - step by step via double click
• Shift the feature model via drag'n'drop
• Interactive run-through of the proof, and thus adaptation of the focus in all of the views
  - sequentially going through the steps of the proof (backward and forward), by clicking buttons of the graphical user interface or pressing the left or right arrow key
  - arbitrarily, through selective choice of nodes in the resolution graph
• Highlighting of corresponding components in the individual views
• Fade out/hide irrelevant information, such as a subtree of the feature model

With these concepts the user is supported in understanding the line of proof. There are not only pre-built modes and predetermined visualization processes for unexperienced users; users are also able to choose freely which views are shown and in which sequence the line of proof is visualized. The building of a mental model can take place in different contexts concerning user experience, level of knowledge or information demands.

3.3. Framework

D3.js ([6]) is a framework based on JavaScript. It was developed mainly by Mike Bostock, Vadim Ogievetsky and Jeffrey Heer, and is derived from the framework Protovis. D3 stands for Data-Driven Documents, which describes the application area. Its objective is to improve web-based development, with the main focus on dynamic and interactive data visualization. One advantage of D3.js is the direct access to the DOM objects of the HTML document, hence direct manipulation of these is possible. Furthermore, data can be bound to elements of the visualization, which initiates an automatic update if the data is modified. Automatic layout algorithms improve the intuitive understanding of the user. D3.js fits the considered use case, as it combines data with graphic visualization in a comfortable manner. Updates of the graphical user interface are performed automatically if data changes. D3.js offers layout algorithms for graph structures, as is necessary for implementing this concept. The use of JavaScript as the programming language and the web-based development create a very flexible usage context.

4. Further improvements

In this section a prospect on future increments is given, which implicates improvements or alternative implementations.

4.1. Proof Tree

As an alternative to the proof, which is adopted unchanged from the input and listed in simple text form, an improvement to this representation is a structured, ordered presentation as a proof tree. The line of proof is combined with the former presentation. References to resolving clauses are used to position the clauses with their content nearby. One node represents one clause. The child nodes are those which led to the parent clause during the resolution. For an example of a proof tree, see Figure 4.
The advantage is the structured representation: the user does not have to search for referenced clauses to understand a connection. A disadvantage might be the display of redundant information: one clause could be used for the resolution of two other clauses, and using the described concept it would therefore be displayed twice.

Figure 4. Proof Tree

Another benefit is the hierarchical structure. An introduced interaction might be collapsing or expanding parts of the proof, so that the user is able to hide information of no interest. Especially for large proofs this function is very useful, as parts of the proof can be hidden, and the user is able to concentrate on a small, simple subproof and join the results from those.

4.2. Constraint Interaction Graph

Some research has been done on visualizing the structure of a SAT instance at a level which is more abstract than the variable interaction graph of DPvis, which was presented in Section 1.1. Anthony Mak ([7]) introduces the constraint graph, which sets variables and constraints into relation. Some users of the developed tool might only be aware of cross-tree constraints from the feature model, and if the line of proof is visualized with references to these structures, it might be a lot easier to understand for these unexperienced users. Four different constraint graphs are presented with the following exemplary set of clauses:

C1: A ⇒ B
C2: C ⇒ (D ∨ E)
C3: F ⇒ C
C4: G ⇒ A

Rossi ([8]) describes four different possibilities of relating clauses and variables graphically:

• Primal Constraint Graph or Variable View (see Figure 5): Edges indicate a common occurrence in one clause, which labels the edge.

Figure 5. Primal Constraint Graph

• Dual Constraint Graph or Constraint View (see Figure 6): Edges indicate the occurrence of one variable in several clauses.

Figure 6. Dual Constraint Graph

• Constraint Hypergraph or Bipartite View (see Figure 7): Edges indicate the occurrence of a variable in a clause.

Figure 7. Constraint Hypergraph

• Hyperplane View (see Figure 8): Circles incorporate the variables occurring in the clause, which labels the circle.

Figure 8. Hyperplane View

These graphs could be integrated into the visualization concept, to build the understanding upon a level which is closer to the user. The logical interrelationships between components in the development structures become much clearer; thus the user is able to understand the problematic constraints and does not need to understand issues from another level of expertise.
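For the exemplary clauses C1-C4, the primal and dual views can be derived mechanically from the stated edge rules. The sketch below does so for the variable sets of the clauses (polarity does not matter for these views); the data structures are our own choice, following Mak and Rossi only in the edge definitions.

```python
# Primal (variable) and dual (constraint) views for C1..C4; a clause such as
# C1: A => B contributes its variable set {A, B}.
from itertools import combinations

clauses = {"C1": {"A", "B"}, "C2": {"C", "D", "E"},
           "C3": {"F", "C"}, "C4": {"G", "A"}}

# Primal view: edge between variables occurring together in a clause,
# labelled by that clause
primal = {tuple(sorted(pair)): cid
          for cid, variables in clauses.items()
          for pair in combinations(variables, 2)}

# Dual view: edge between clauses sharing at least one variable
dual = {(c1, c2): clauses[c1] & clauses[c2]
        for c1, c2 in combinations(clauses, 2)
        if clauses[c1] & clauses[c2]}

print(primal)   # e.g. ('A', 'B'): 'C1', ('C', 'F'): 'C3', ...
print(dual)     # {('C1', 'C4'): {'A'}, ('C2', 'C3'): {'C'}}
```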
Further improvements were presented, which may be implemented in the future, to move to another level of meta-language and offer more support to inexperienced users.

References

[1] C. Sinz, E.-M. Dieringer, DPvis - A Tool to Visualize the Structure of SAT Instances, 2005. Symbolic Computation Group, WSI Computer Science, University of Tübingen, 72076 Tübingen, Germany.
[2] S. Kottler, Backdoors in SAT-Instanzen, 2007. Eberhard-Karls-Universität Tübingen, Fakultät für Informations- und Kognitionswissenschaften.
[3] K.C. Kang, S.G. Cohen, J.A. Hess, W.E. Novak, A.S. Peterson, Feature-Oriented Domain Analysis (FODA) Feasibility Study, 1990. Carnegie Mellon University, Software Engineering Institute.
[4] D. Benavides, S. Segura, A. Ruiz-Cortés, Automated Analysis of Feature Models 20 Years Later: A Literature Review, 2010. Dpto. de Lenguajes y Sistemas Informáticos, University of Seville.
[5] C. Sinz, Visualizing SAT Instances and Runs of the DPLL Algorithm, 2007. Johannes Kepler University.
[6] M. Bostock, V. Ogievetsky, J. Heer, D3: Data-Driven Documents, 2011. IEEE Trans. Visualization & Comp. Graphics (Proc. InfoVis). http://vis.stanford.edu/papers/d3.
[7] A. Mak, Constraint Graph Visualization, 2005. National ICT Australia.
[8] F. Rossi, P. Van Beek, T. Walsh, T. Frühwirth, L. Michel, C. Schulte, Handbook of Constraint Programming, 2006.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-214

Measuring and Evaluating Source Code Logs Using Static Code Analyzer

Gang SHEN1, Fan LUO and Gang HONG
School of Software Engineering, Huazhong University of Science and Technology, Wuhan, China 430074
1 Corresponding Author, E-Mail: gang_shen@hust.edu.cn.

Abstract. In this paper, we investigate the evaluation of the logs in software source code. Logs usually play a critical role in detecting, tracing and removing bugs during the software development process. More importantly, after delivery to the customers, software developers have to rely on the log records to locate bugs rather than reproduce the software defects in the customer's environment. Therefore, there are strong quality and efficiency demands for good logs in the software source code. We model the source code as a hierarchy of graph models representing the relationships of the different components of the source code, namely function call stacks, control flows and data flows. Fundamental metrics can then be generated by static analysis of these models, and the metrics are further processed to form measurements of the log statements in the following dimensions: correctness, efficiency, maintainability, extensibility and conformity. We use a few open source projects to demonstrate the effectiveness of the proposed approach and discuss directions for future research.

Keywords. Multi-agent, neural networks, message passing

Introduction

A software developer can hardly avoid scrutinizing the printed logs record by record when challenged by stressful but time-consuming troubleshooting assignments, either in the development stage or after the software's delivery to customers.
As a collection of temporally ordered records describing detailed run-time information about the activities of the application programs, operating systems and the related users, source code logs play an increasingly important role in today's software industry. In general, there are many reasons why developers and customers may have to rely on the logs to debug the software as well as to evaluate its quality: first, since the logs may contain informative semantics, a developer can understand the software behavior in an effective and efficient way; second, after the software is delivered and installed in the customer's environment, the logs provide a convenient profile to check without interrupting the execution of the software; third, auditing the logs can help the customer find potential weaknesses such as security loopholes[1-3]. However, in some circumstances the lack of an industry-wide standard for source code logs leaves the developers to decide at random where to place a log and what to put into it, and in other cases, even if the software organization has adopted certain practices for the coders to follow, it is usually costly to enforce the observation of all the rules[4, 5]. In this paper, we investigate the problem of measuring and evaluating source code logs using static source code analysis tools[6-8]. We are motivated by the following objectives of measuring the source code logs: first, measurement and analysis is a critical process area for any software organization focusing on process improvement[11], and logging the software run-time behaviors provides key information for software maintenance and thus becomes a software artifact to measure and evaluate; second, the metrics of source code logs lay a foundation for quantitatively understanding and assessing the work of different developers and the quality of different projects; finally, the data collected from the evaluation of the source code logs may give management insight into the organization-wide status of quality and productivity with respect to the source code logs and may further lead to the adoption of standardized logging in the code. In order to achieve the above goals, we must directly handle a few obstacles, including the diversity of logging statements, the poor readability of some logs, the complicated connections among different logs, and the vast amount of information contained. In practice, software engineers use static code analyzers to automatically detect existing issues in their code before officially delivering it. In contrast to dynamic analysis tools, a static analyzer searches for hidden problems in the program by scanning the source code, without the need to execute the program, and thus can be viewed as an efficient alternative to running the program for test. There has been considerable interest in applying static code analyzers to enhance the quality of both software and logs[5, 8]. In this paper, we use Clang to obtain the basic source code analysis results[12]. The contributions of this paper are the proposed multidimensional log evaluation model and the related metrics for log quality and productivity measurement. We also develop a prototype source code analysis system for C programs. The rest of the paper is organized as follows.
In Section 1, we present the evaluation model and the metrics, as well as the algorithms used to retrieve the metrics from the source code; in Section 2, we discuss the design of a prototype analysis system that can be used for evaluating C projects. The experiments performed on some open source projects are presented in Section 3, showing the effectiveness of the proposed approach. Finally, in Section 4 we conclude the paper with remarks on the proposal and on future research.

1. Evaluation Model and Metrics

One can find many different versions of the definitions and elaborations of software quality. Generally speaking, software quality can be considered as the degree to which the software satisfies the requirements, and software quality measurement is used to quantify the extent to which a system or software possesses desirable characteristics. More specifically, the software quality model is defined by the ISO/IEC standard 9126-1 and categorized into six main attributes, namely functionality, reliability, usability, efficiency, maintainability and portability[9]. In addition to the quality model, ISO/IEC 9126-1 defines the internal metrics as those that do not rely on software execution, i.e., the static measures. In order to apply only static code analysis to measure the comprehensive aspects of the source code logs, we need to take into account the external metrics and the quality-in-use metrics of the logs, that is to say, to examine how well the logs function in determining the health status of the running software and how well the logs help detect the faulty parts of the software when the customer reports a problem.

Figure 1. Patches for Apache logs.

Unlike the general quality factors of the software as a whole, the quality of source code logs is rarely specified in the requirements and specifications. The quality of the logs usually depends on the expertise of the software developers. Nonetheless, an experienced engineer may write inconsistent logs for different source files if the schedule and other pressures vary. From time to time, the project team has to review the code and manually enhance the logs in it. An example is the enhancement of the open source project Apache, which has over 900 log points: when bugs related to logging were reported, enhancement had to be carried out as the remedy[5] (see Figure 1). A unified evaluation model is thus fundamental for a software organization to produce quality source code logs consistently. By introducing this model, a software organization is expected to save cost in debugging problems and to increase the productivity of code review and inspection. Based on the ISO/IEC standard 9126-1, we propose a multidimensional model to evaluate the quality of the logs in software source code; the major quality attributes (internal and external quality) are:

1. Correctness: 1.1 Accuracy; 1.2 False alarm rate; 1.3 Miss rate
2. Efficiency: 2.1 Log density; 2.2 Average analysis time; 2.3 Variable coverage
3. Conformity: 3.1 Location conformity; 3.2 Content conformity
4. Maintainability: 4.1 Repetition rate; 4.2 Search-ability; 4.3 Traceability
5. Extensibility: 5.1 Trigger complexity; 5.2 Branch complexity; 5.3 Extensible margin

Figure 2. Source code log quality model.

1.1 Accuracy

Denote the set of points that need to be logged as A, and the set of logged points as B. We note that each software organization may have its own criteria for logging faults or warnings; for example, a suspicious activity may be logged in one particular application while being omitted in another. Also, the log statements may come in very different forms. In implementation, there should be sufficient flexibility to configure the rules that detect these points. The logging accuracy is then given by |A ∩ B| / |A|, where |·| represents the cardinality of a finite set.

1.2 False alarm rate

Falsely reported logs may increase the complexity of auditing and analyzing the logs. The false alarm rate is |B \ A| / |B|.

1.3 Miss rate

Similarly, a missed log makes the log report less informative for a debugger locating the defects. The miss rate is defined as |A \ B| / |A|; it can equivalently be derived from the accuracy as one minus the accuracy.
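These three correctness metrics reduce to simple set arithmetic (note that they are consistent with Table 3 in Section 3, where correctness and miss rate sum to 100%). A minimal C++ sketch follows, with hypothetical (file, line) pairs as log point identifiers:

    #include <algorithm>
    #include <iostream>
    #include <iterator>
    #include <set>
    #include <utility>

    // A log point is identified here by a (file id, line) pair -- a simplifying assumption.
    using Point = std::pair<int, int>;

    static double intersectionSize(const std::set<Point>& a, const std::set<Point>& b) {
        std::set<Point> out;
        std::set_intersection(a.begin(), a.end(), b.begin(), b.end(),
                              std::inserter(out, out.begin()));
        return static_cast<double>(out.size());
    }

    int main() {
        std::set<Point> A = {{1, 10}, {1, 42}, {2, 7}};  // points that need to be logged
        std::set<Point> B = {{1, 10}, {2, 7}, {2, 99}};  // points actually logged

        double common     = intersectionSize(A, B);
        double accuracy   = common / A.size();               // |A intersect B| / |A|
        double missRate   = 1.0 - accuracy;                  // |A \ B| / |A|
        double falseAlarm = (B.size() - common) / B.size();  // |B \ A| / |B|

        std::cout << accuracy << ' ' << missRate << ' ' << falseAlarm << '\n';
    }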
2.1 Log density

Suppose the total number of lines of code is N, and the number of log statements is A; the log density is then defined as A/N.

2.2 Average analysis time

The average time complexity from the entry of the function containing a log statement to the log point is used to measure the time spent by a program to reach this log.

2.3 Variable coverage

Suppose the set of variables in a function that may trigger the execution of a log statement is S, and the set of variables logged in that statement is a subset s; the variable coverage is then defined as |s|/|S|. If a log is enhanced to achieve better fault-locating results, the coverage will increase[5].

3.1 Location conformity

It is desirable that developers handle the severe problems of a program with explicit statements like return (FAILURE). Supposing we know the locations of the fault points, the log statement should be placed right after the detection of the fault and right before the return. Another rule is that one should avoid putting a log statement into a loop, which causes the repeated information to be printed many times.

3.2 Content conformity

Each software organization may adopt its own standard for what should be included in a log statement. The log statements should follow the standard to encode the defined fields.

4.1 Repetition rate

Let D denote the set of repeated logs; the repetition rate is then |D|/|B|. Repeated logs consume precious computation resources and make the log report larger than needed. There are several conditions that may lead to the repetition of an identical log. If several log statements lie on the same unconditional execution path and contain repeated information, they should be treated as repeated logs. If one log requests many log adaptors, the adaptors will generate unnecessary printed copies.

4.2 Search-ability

When the log report contains a large number of records, it becomes hard for a developer to find the logs of interest. If each log statement bears some special keywords or phrases, these terms can be used as its identity, and one may search for the logs of interest with the help of automatic string-matching tools. If the logs can be organized into a certain number of fine-grained classes specifying a log's attributes, the search-ability metric tests whether a log statement has one or more defined class identity keywords.

4.3 Traceability

This metric is used to check whether a printed log record can be traced back to its location in a source file. For example, in a C file, we may add "__FILE__, __FUNCTION__, __LINE__" information to a log statement to link it to the file name, function name and line number of this statement.
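For illustration, a macro of the kind this metric rewards might look as follows; this is a hedged sketch, not a form prescribed by the paper or by any standard (note that __FUNCTION__ is a widely supported compiler extension, __func__ being the standard spelling):

    #include <cstdio>

    // Prefix every record with file, function and line, so that the printed
    // log can be traced back to its exact source location.
    #define LOG_ERROR(msg) \
        std::fprintf(stderr, "[%s:%s:%d] %s\n", \
                     __FILE__, __FUNCTION__, __LINE__, (msg))

    int main() {
        LOG_ERROR("connection failed");  // prints e.g. [demo.cpp:main:12] connection failed
        return 0;
    }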
5.1 Trigger complexity

If the condition that triggers the log statement consists of a number of Boolean operations on Boolean expressions, that number is defined as the trigger complexity. Complex conditions make the analysis of the log records difficult, leaving little room to extend the log information.

5.2 Branch complexity

Starting from the entry of a function, if the execution path to a log statement contains a number of conditional branches, that number is defined as the log's branch complexity.

5.3 Extensible margin

If the log information is hard coded, the log may no longer be useful when conditions vary or the code evolves. This metric specifies how flexible the log statement is with respect to its context.

It is worth mentioning that the metrics can be used to measure the log quality of a single source file or of a set of files, for instance a configuration of a project. The metrics can also be further processed to discover higher-level properties of the software. One possible application is that they can be applied jointly as a similarity measure for two or more projects in terms of their logging patterns. In some cases, many source files are involved in assessing log quality, and the distribution of a metric is then an interesting subject to study. Because the source files may have evolved over a long span of time, and many developers may have contributed to writing the logs, it is not a straightforward task to find a proper prior for the metric distributions. Instead of assuming that the metrics follow a particular random process, we use the information entropy to characterize the uniformity of a metric's distribution. In many fields, entropy is a measure of uncertainty and complexity[10]. A high entropy value represents high uncertainty, or in other words, a more uniform distribution. Let us discretize the range of a metric into separate bins, and denote the normalized occurrence of bin i as p_i; the information entropy of this metric's distribution is then calculated as H = -Σ_i p_i log2 p_i. Similarly, mutual information can be used to measure the joint distribution of two distinct metrics. Let X and Y be two metrics; their mutual information is then evaluated as I(X; Y) = H(X) + H(Y) - H(X, Y), where H(X, Y) is the joint entropy of X and Y.
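As a quick sketch of this computation (the 10-bin discretization and base-2 logarithm are assumptions consistent with the roughly 3.1-bit entropy values reported in Table 5 below; the per-file values here are made up):

    #include <algorithm>
    #include <cmath>
    #include <iostream>
    #include <vector>

    // Shannon entropy (base 2) of a normalized histogram.
    static double entropy(const std::vector<double>& p) {
        double h = 0.0;
        for (double pi : p)
            if (pi > 0.0) h -= pi * std::log2(pi);
        return h;
    }

    int main() {
        // Correctness values of 10 files, binned into 10 equal-width bins.
        std::vector<double> values = {0.12, 0.18, 0.33, 0.35, 0.48,
                                      0.52, 0.55, 0.61, 0.74, 0.95};
        std::vector<double> hist(10, 0.0);
        for (double v : values)
            hist[std::min(9, static_cast<int>(v * 10.0))] += 1.0 / values.size();

        std::cout << "entropy = " << entropy(hist) << " bits\n";
        // Mutual information of two metrics X and Y follows the same pattern:
        // I(X;Y) = entropy(pX) + entropy(pY) - entropy(joint histogram of X and Y).
    }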
2. Design and Implementation of the Analysis Tool

In this section, we discuss the design and implementation of the web-based log quality evaluation and analysis tool we developed for demonstration purposes. A three-layer architecture is designed to facilitate the interactions between the user and the server: because the static code analyzer is a command line program, the presentation layer visualizes the analysis results and helps the users better understand the statistics; the logic layer wraps the processing and transformation functionalities into modules, and the static analysis of the loaded code is done with the help of LLVM-Clang, an open source static code analyzer; the source files and extracted data are stored and manipulated in the persistent layer. The Low Level Virtual Machine (LLVM) was developed at the University of Illinois at Urbana-Champaign as an open source project, providing a modularized, reusable set of services and utilities[llvm]. Clang is an LLVM-based compiler written in C++, distributed under the LLVM-BSD license, and often used as the frontend of LLVM for the compilation of C/C++/Objective-C/Objective-C++ programs[12]. Compared with other mainstream compilers, Clang possesses the following advantages: fast compilation with low memory usage, clear and expressive diagnostic descriptions, compatibility with GCC, and ready integration into IDEs.

Figure 3. The architecture of the log quality analyzer (presentation layer: browsers; logic layer: file/project processing, AST analysis, metrics extraction, configuration, visualization/report; persistent data layer: files and database).

The log quality analyzer was developed with the Spring 3, MyBatis and Spring MVC frameworks. The functionalities include loading a single C/C++ source file or the files of a project, analyzing the files and generating the abstract syntax trees, computing the log quality metrics, visualizing the quality measurements and generating reports. The static scanning of the source files is implemented with Clang, to get the control flow and data flow information. Users can configure the necessary user-specific rules, such as the regular expressions for log points and fault points. These rules can then be used to match the points on the ASTs (Abstract Syntax Trees) by calling APIs in the ASTMatchers library. libASTMatchers provides simple but powerful matching utilities, which may match the nodes of the AST against the source code or extract information at other AST levels. The nodes of a Clang AST belong to a collection of basic types, such as Declaration and Statement. The top element of a Clang AST is called the translation unit declaration; the key AST nodes, i.e., the structures of a program, are derived from Type, Declaration, DeclarationContext or Statement, or their combinations. The information about all Clang AST nodes is contained in the ASTContext, which can be accessed via getTranslationUnitDecl. In order to get detailed information about the AST nodes, one can start from the root translation unit declaration and visit the other nodes recursively. For example, suppose we want to match a log function with the name log_error_write(); we can then use a StatementMatcher to find the invocations of the function with that name. The Clang AST based static code analysis consists of three steps: creating the consumer, which preprocesses high-level structures; adding the AST context, which contains long-lived objects; and parsing the AST. When the argument "-analyze" is passed to clang -cc1, the compiler creates the corresponding consumers according to the arguments that follow. For example, if the argument is "-warn-dead-store", an AnalysisConsumer object will be created, and a function pointer ActionWarnDeadStores will be appended to the container named FunctionActions. These arguments are defined as macros in the file Analyses.def. Subsequently, the context is created. Built-in types like void, bool and int are placed into the global type list and assigned unique IDs before the translation unit declaration is created. After initialization, the lexer and syntax parser are created to work on the token streams. Once a top-level Declaration has been analyzed, one may call the consumer's HandleTopLevelDecl interface to access the analysis results as needed.
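A hedged sketch of such a matcher is shown below. It uses the public ASTMatchers API of a recent Clang (names such as getBeginLoc differ in older releases), and the driver setup via libTooling is omitted:

    #include "clang/ASTMatchers/ASTMatchFinder.h"
    #include "clang/ASTMatchers/ASTMatchers.h"
    #include "llvm/Support/raw_ostream.h"

    using namespace clang;
    using namespace clang::ast_matchers;

    // Prints the source location of every call to log_error_write().
    class LogCallPrinter : public MatchFinder::MatchCallback {
    public:
      void run(const MatchFinder::MatchResult &Result) override {
        if (const auto *Call = Result.Nodes.getNodeAs<CallExpr>("logCall")) {
          Call->getBeginLoc().print(llvm::outs(), *Result.SourceManager);
          llvm::outs() << "\n";
        }
      }
    };

    // Register the matcher: call expressions whose callee is named log_error_write.
    void registerLogMatcher(MatchFinder &Finder, LogCallPrinter &Printer) {
      Finder.addMatcher(
          callExpr(callee(functionDecl(hasName("log_error_write")))).bind("logCall"),
          &Printer);
    }

Run over a project such as lighttpd, this reports the location of each log point, from which metrics such as the log density can be accumulated.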
Control flow analysis and data flow analysis form the basis for the further handling of the quality metrics, because they help decide what should be placed into a log statement if the log is to help locate the cause of a fault more precisely. Control flow analysis is the prerequisite step, because it provides the (symbolic) execution path selection information; with this information we can divide the statement blocks into must-do, may-do and never-do classes, depending on whether they lie on the paths leading to the fault point. Examining a control flow graph, a cut node is a node that cuts a connected graph into several separate connected graphs if it is removed together with its associated edges. In the CFG (Control Flow Graph) starting from the function entry and exiting at a fault point, any node blocking all paths from entry to exit is critical and thus belongs to must-do. We use a modified version of the algorithm for finding articulation vertices in undirected graphs (listed in Table 1, where s is the entry and e is the exit) to find the cut nodes in a CFG and determine the path attributes; a sketch of one straightforward implementation of this classification is given at the end of this section. (In Figure 4, the must-do statements are highlighted in red, the may-do parts are marked in blue, and the rest is never-do with respect to the log_error_write() function.)

Table 1. Path labeling algorithm PL(s,e).
Step 1: For all nodes v, let type[v] = NEVER-DO, FDFS#[v] = 0 and BDFS#[v] = 0.
Step 2: Starting from s, do a forward depth-first search and update FDFS#[v] for all reachable nodes v as the number of distinct paths via v.
Step 3: Starting from e, do a backward depth-first search and update BDFS#[v] for all nodes v reachable to e as the number of distinct paths via v.
Step 4: For all nodes v: if FDFS#[v] = 1 or BDFS#[v] = 1, let type[v] = MUST-DO; else if FDFS#[v] > 0 and BDFS#[v] > 0, let type[v] = MAY-DO.
Step 5: Return the array type[].

In data flow analysis we need to determine whether a variable on an execution path is active. An active variable is one that is not evaluated at a point but may be set later; evaluation of a variable revokes its activeness. By recursively searching the dependencies in the evaluation statements for the variables in the conditional branches (if/while/do-while/switch), we may determine the active variables that may influence the execution of a log statement. The values of these variables serve as indicators of whether a path is selected at run time, and hence they should be included in the log for more precise defect locating.

Figure 4. Visualized control flow analysis results.

In our implementation, we make use of the interfaces and libraries in libTooling to generate the analysis. If a project instead of a single source file is to be analyzed, the expansion of the headers, the substitution of macros and the preprocessing of the configurations need to be handled first with the help of LLVM.
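The sketch below implements the must-do/may-do/never-do classification directly from the cut-node definition above: a node is MUST-DO exactly when removing it leaves the fault point e unreachable from the entry s. It is a simple reachability-based variant for illustration, not the exact PL(s,e) procedure of Table 1:

    #include <functional>
    #include <iostream>
    #include <vector>

    enum class PathType { NEVER_DO, MAY_DO, MUST_DO };

    // Nodes reachable from `start`, optionally treating one node as removed.
    static std::vector<bool> reach(const std::vector<std::vector<int>>& adj,
                                   int start, int removed = -1) {
        std::vector<bool> seen(adj.size(), false);
        std::function<void(int)> dfs = [&](int v) {
            if (v == removed || seen[v]) return;
            seen[v] = true;
            for (int w : adj[v]) dfs(w);
        };
        dfs(start);
        return seen;
    }

    // Classify every CFG node with respect to entry s and fault point e.
    // radj is the reverse adjacency list, used for backward reachability.
    std::vector<PathType> classify(const std::vector<std::vector<int>>& adj,
                                   const std::vector<std::vector<int>>& radj,
                                   int s, int e) {
        auto fwd = reach(adj, s), bwd = reach(radj, e);
        std::vector<PathType> type(adj.size(), PathType::NEVER_DO);
        for (int v = 0; v < static_cast<int>(adj.size()); ++v) {
            if (!(fwd[v] && bwd[v])) continue;  // not on any s -> e path
            bool cut = (v == s || v == e) || !reach(adj, s, v)[e];
            type[v] = cut ? PathType::MUST_DO : PathType::MAY_DO;
        }
        return type;
    }

    int main() {
        // Diamond CFG: 0 -> {1,2}, 1 -> 3, 2 -> 3; entry s = 0, fault point e = 3.
        std::vector<std::vector<int>> adj  = {{1, 2}, {3}, {3}, {}};
        std::vector<std::vector<int>> radj = {{}, {0}, {0}, {1, 2}};
        for (PathType t : classify(adj, radj, 0, 3))
            std::cout << static_cast<int>(t) << ' ';  // prints: 2 1 1 2
    }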
3. Experiments and Analysis

In this section, we discuss the performance of the proposed approach by applying the implemented tool to some open source projects. Specifically, we use the evolutionary versions of the lighttpd master for the test (https://github.com/lighttpd), namely lighttpd1.4-master1, lighttpd1.4-master2, lighttpd1.4-master3 and lighttpd1.4-master4 (see Table 2). In the test, we set the fault points as statements such as "goto error" and "return -1". Versions 1, 2 and 3 all contain 33 .c files, while version 4 has 32 .c files to be analyzed.

Table 2. The size of tested projects.
Project               Lines of code   Number of log points   Number of fault points
lighttpd1.4-master1   42187           569                    858
lighttpd1.4-master2   42097           568                    854
lighttpd1.4-master3   42426           583                    885
lighttpd1.4-master4   42003           571                    875

The sizes of these projects remain very stable, but the metrics show that the evolution of the versions is accompanied by a slow improvement of the source code quality; see Table 3.

Table 3. The quality metrics of tested projects.
Project               Correctness   False alarm rate   Miss rate
lighttpd1.4-master1   48.9510%      26.1863%           51.0490%
lighttpd1.4-master2   48.8290%      26.5845%           51.1710%
lighttpd1.4-master3   49.3785%      25.0429%           50.6215%
lighttpd1.4-master4   50.7429%      22.2417%           49.2571%

We also calculated the distributions of the metrics over the source files using the entropy values. In the first three versions, all projects have the same number of source files. If we use bins of 10% to discretize the correctness into 10 bins, the file distribution is given in Table 4. As shown in Table 5, we see a general trend of improvement of the metric uniformity in the source code evolution process.

Table 4. The distribution of correctness among files.
Project               10%   20%   30%   40%   50%   60%   70%   80%   90%   100%
lighttpd1.4-master1   2     4     2     3     4     5     5     0     0     3
lighttpd1.4-master2   2     4     2     2     5     4     4     1     0     3
lighttpd1.4-master3   2     4     1     3     5     4     4     1     0     3

Table 5. The comparison of metric entropies.
Project               Log point density entropy   Correctness entropy   False alarm entropy
lighttpd1.4-master1   3.145                        3.095                 2.858
lighttpd1.4-master2   3.105                        3.095                 2.858
lighttpd1.4-master3   3.241                        3.169                 2.769
lighttpd1.4-master4   3.139                        3.165                 2.707

4. Conclusions

The logs in the source code are important components of the source files. In the different phases of software development, engineers utilize the logs for various purposes. In general, after the software is handed over to the customer's premises, the log reports generated by executing the software are in many cases the only resource that the developers can rely on to debug existing defects. In order to improve the productivity of locating defects, the logs have to provide sufficient information to determine the cause of printing the related records. In the meantime, the software organization may impose further requirements on the logs, such as readability. In this paper, we first proposed a quality model for source code logs, with reference to the ISO/IEC standard 9126. We then presented our investigation of the evaluation of log quality with the help of the static code analyzer Clang. We developed a three-layer architecture for a quality assessment tool that visualizes the results in web browsers. The abstract syntax trees generated by Clang encode expressive diagnostic information, which may be further processed to derive the log quality metrics. We introduced information entropy and mutual information to measure the distribution of the metrics. In the test applications, a few open-source projects were assessed using the tool, and the extracted metrics were consistent with our observations. However, the quality of the logs is partly dependent on manually coded rules for Clang to match, such as those for fault points and log points. While it is acceptable to limit the developers to a small set of log functions, it would be much more helpful to detect the fault points in the code automatically by analyzing semantics. This will be an attractive subject for our future research.
References

[1] W. Xu, L. Huang, A. Fox et al., Detecting large-scale system problems by mining console logs. In: Proc. of SOSP '09, New York, USA, ACM Press, pp. 117-132, 2009.
[2] D. Yuan, H. Mai, W. Xiong et al., SherLog: Error diagnosis by connecting clues from run-time logs. In: Proc. of ASPLOS '10, New York, ACM Press, pp. 143-154, 2010.
[3] W. Lee, S.J. Stolfo, K.W. Mok, Mining in a data-flow environment: Experience in network intrusion detection. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD-99), ACM Press, pp. 126-131, 1999.
[4] T. Nurkiewicz, 10 Tips for Proper Application Logging. http://www.javacodegeeks.com/2011/01/10tips-proper-application-logging.html
[5] D. Yuan, J. Zheng, S. Park, Y. Zhou, S. Savage, Improving software diagnosability via log enhancement. In: Proc. of ASPLOS '11, New York, ACM Press, pp. 3-14, 2011.
[6] I. Aghav, V. Tathe, A. Zajriya et al., Automated static data flow analysis. In: 2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT), IEEE, pp. 1-4, 2013.
[7] C.B. Chirila, D. Juratoni, D. Tudor et al., Towards a software quality assessment model based on open-source statical code analyzers. In: 6th IEEE International Symposium on Applied Computational Intelligence and Informatics, IEEE, pp. 341-346, 2011.
[8] N. Nagappan, L. Williams, J. Osborne et al., Providing test quality feedback using static source code and automatic test suite metrics. In: Proceedings of the 16th IEEE International Symposium on Software Reliability Engineering, IEEE, pp. 94-103, 2005.
[9] H.-W. Jung, S.-G. Kim, C.-S. Chung, Measuring software product quality: A survey of ISO/IEC 9126. IEEE Software, pp. 88-92, September/October 2004.
[10] O. Panchenko, S.H. Mueller, A. Zeier, Measuring the quality of interfaces using source code entropy. In: 16th International Conference on Industrial Engineering and Engineering Management, IEEE, pp. 1108-1111, 2009.
[11] M. Nakamura, T. Hamagami, A software quality evaluation method using the change of source code metrics. In: 23rd International Symposium on Software Reliability Engineering Workshops, IEEE, pp. 65-69, 2012.
[12] Clang Compiler User's Manual, http://clang.llvm.org/docs/UsersManual.html

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-224

Mass Properties Management in Aircraft Development Process: Problems and Opportunities

Vera DE PAULA1 and Henrique ROZENFELD
University of São Paulo-USP
1 Corresponding Author, E-Mail: verabp@gmail.com

Abstract. Product development is a challenging process for any product. However, as aircraft are highly complex and must comply with a multiplicity of interconnected requirements, their design process represents an outstanding challenge. Among the various requirements to be met by a new aircraft are cost, performance and sustainability requirements. The mass properties of the aircraft are related to the fulfillment of these three requirements, so it is crucial that realistic estimates of the aircraft mass properties be used during early conceptual design, and that they be strictly controlled during the later stages. The main mass properties of an aircraft are: weight, center of gravity, and moments and products of inertia.
The Mass Properties Management (MPM) is an iterative process that has to deal with information at the lowest level of the system and yet be robust enough to answer at the aircraft level; it shall provide accurate and timely mass properties data for design optimization decisions. The main objective of this work is to analyze the MPM problems and solutions encountered during aircraft development, to identify opportunities and to propose best practices for MPM process improvement. A synthesis of a literature review and the analysis of an exploratory case study in an aeronautical development company were conducted, and the primary result is an understanding of the mass properties relationships within the aircraft development process, considering its main perspectives: activities and deliverables, roles and responsibilities, goals and tools. This is descriptive research, and the methodology adopted is the longitudinal participatory case study, since one of the authors has been participating in the MPM during an aircraft development process for three years. A synthesis of the MPM theory is proposed in the form of a process reference model representing the perspectives mentioned, in order to allow comparison with the findings of the case study. The research instruments used were a logbook and a questionnaire for interviewing major stakeholders of the product development process. The findings of this work highlight the differences between MPM theory and practice, mainly regarding technical integration, strategy and responsibilities in the aircraft development process.

Keywords. mass properties, aircraft development process, requirements, systems engineering

Introduction

Global aviation demand has historically shown strong growth, and that trend is projected to continue; air transport forecasts suggest a 4.7% yearly average growth over the next twenty years, meaning that over the next 15 years passenger traffic will almost double [1]. Such growth potentially conflicts with national and international emissions targets, and reducing the aviation emissions of the global aircraft fleet is likely to come from policies aimed at influencing the rate of technology development [2]. New technologies should increase engine efficiency and decrease structural and systems weight in future aircraft developments [2]. Aircraft weight reduction is an important area of improvement for future developments: a 30% reduction in aircraft weight could reduce cruise fuel consumption by 7 to 15% [3]. The weight is a critical variable in guaranteeing the achievement of the aircraft performance requirements; weight and center of gravity impact the cruise efficiency, the runway length for takeoff and landing, the climb and descent rates, and the payload and range [4]. Furthermore, the major cost calculation methods used in the initial development phase apply the aircraft empty weight as an input ([4], [5], [6]). As those methodologies essentially quantify the cost of each raw material used, it is possible to identify a proportional relationship between aircraft weight and cost. If the aircraft empty weight increases during the detail design process, it will require a more than proportional increase in takeoff gross weight to maintain the capability to perform the sizing mission.
Weight added to the aircraft structure requires additional wing area for greater lift, additional engine thrust, and additional fuel to provide the same range. Thus, an initial 1-pound increase in structural weight ultimately results in a 2- to 10-pound increase in aircraft weight [7]. Frequently, when weight growth occurs late in the development cycle, the propulsion system developers are tasked to produce more thrust to ensure meeting the vehicle performance parameters. All of these interactive weight issues impact control surface effectiveness and control system gains. This vicious interplay between the various subsystems is a contributor to program delays [8]. Thus, it is crucial that realistic estimates of empty weight be used during early conceptual design and that the weight be strictly controlled during the later stages of design [4]. The ability to retard weight growth during design and development can be the discriminator of the success of an aircraft [9]. Experience has shown that any vehicle has a tendency to increase its weight during design, construction and validation [10]. There are many cases of programs cancelled due to weight increase [9]. The determination of aircraft mass properties is an integration of all areas involved in the aircraft development process; it depends upon the definition of the airframe segments, systems and operational items [4]. Figure 1 illustrates that the mass properties information at the aircraft level is an integration of the systems at the lowest level. The Society of Allied Weight Engineers (SAWE) [11] specifies that the objective of the Control and Manage Mass Properties Process is "to provide aircraft products with mass properties that ensure system product performance requirements are met or exceeded. The process shall provide accurate and timely mass properties data to the chief engineer for making design optimization decisions balancing contract cost, schedule and performance requirements". The Mass Properties Management (MPM) is an iterative process which has to supply timely mass properties data for trade-offs and decisions during the development process. The MPM tracks the aircraft empty weight during all phases of the development process, ensuring aircraft compliance with the performance, cost and sustainability requirements. The main mass properties of an aircraft are: weight, center of gravity, and moments and products of inertia [12].

Figure 1. Aircraft mass properties at the aircraft level: integration of airframe, systems and operational items.

As the success of MPM depends upon achieving technical integration, and achieving it in an effective and efficient manner, the objective of this work is to analyze the MPM problems and solutions encountered during aircraft development. The ability to retard weight growth during design and development is a challenge to all aircraft manufacturers, and this work is aimed at identifying opportunities and proposing practices for MPM process improvement. The proposed work differs from what is commonplace in the aircraft design process because of its holistic and integrated approach. This paper is divided into six sections: introduction, aircraft development process and MPM, systems engineering and technical integration, methodology, process dimensions and variables, and conclusions and future work. Firstly, the introduction presents the context and the main contributions of this work.
Secondly, the aircraft development process and MPM section discusses the mass properties relationships within the aircraft development process. Thirdly, the systems engineering and technical integration section presents the relationship between MPM and systems engineering. Then, the methodology section presents the main phases conducted in this work. The main results are presented in the process dimensions and variables section, considering the main MPM perspectives: activities and deliverables, roles and responsibilities, goals and tools. Finally, the conclusions and future work section presents the next steps and the future of this research area.

1. Aircraft Development Process and MPM

Torenbeek [13] identified the phases of the aircraft development process as shown in Figure 2. In the Conceptual Design phase, basic questions of configuration arrangement, size, weight and performance are answered; during Preliminary Design, specialists design and analyze their portion of the aircraft [4]. In the Detailed Design phase, the actual pieces to be fabricated are designed; it is the phase in which the whole is broken down into individual pieces [4]. The MPM is present in all phases of development, and has to accurately estimate the vehicle's mass properties considering the characteristics and maturity of each phase [10].

Figure 2. Aircraft Development Process (adapted from [13]).

Andrew [9] studied the development process of four different aircraft, analyzing the empty weight gain between the value originally specified in the conceptual design phase and the aircraft empty weight at entry into service. It was found that during the development phases all projects had problems controlling the weight, and the average increase in empty weight was 19% (the lowest value was 7% and the highest was 36%). During the development process of the aircraft with the lowest weight gain, it was identified that the company adopted a strategy associated with weight known as Profile Planned Value (PVP). In this strategy, the empty goal weight of the aircraft in the early stages of development (Preliminary Design) should be 5% lower than calculated, and the strategy allows an increase of 2% during the detailed design phase, 1% during the fabrication phase and 2% during the testing phases (Figure 3). Thus, if the aircraft meets the goal in all stages, then the final weight of the aircraft will be the same as initially specified. This strategy was developed after studying the weight growth of the different aircraft, and it is considered an effective approach for new developments [9]. In addition, Andrew [9] emphasizes that for the aircraft with the lowest weight increase, the leadership had adopted a "pound in, pound out" mandate, where every weight increase should be followed by a weight decrease. In this project, the leadership believed that the overall health of an aircraft development program was reflected in the weight status charts [9].
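To make the PVP arithmetic concrete, consider a worked example; treating the percentage allowances additively against the specified empty weight W_spec is our reading of [9]:

    W_target(Preliminary Design) = 0.95 · W_spec
    W_final ≤ (0.95 + 0.02 + 0.01 + 0.02) · W_spec = 1.00 · W_spec

For a specified empty weight of 20,000 lb, the preliminary design target is 19,000 lb, and the allowed growth of 400 lb (detailed design), 200 lb (fabrication) and 400 lb (testing) brings the aircraft back to exactly 20,000 lb at entry into service.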
Functional characteristics of the subsystems enable decoupling, which allows for the distribution of the workload to the design functions and allows for the use of industrial specialization [14]. The way forward to meet the challenges is to develop an integrated, effective and efficient process for the aircraft development process, known as systems engineering (SE). SE is a holistic approach to a product that comprises several components and it 228 V. De Paula and H. Rozenfeld / Mass Properties Management in Aircraft Development Process involves interaction between disciplines and has become the state-of-the-art methodology for organizing and managing aerospace production [15]. Figure 3. Profile Planned Value (PVP): Typical development weight growth is 12%( weak weight control) and 5% (strong weight growth). PVP strategy is important to ensure a strong weight control [9]. 2. System Engineering and Technical Integration ISO/IEC 15288 [16] defines system as “a combination of interacting elements organized to achieve one or more stated purposes” and INCOSE [17] defines SE as ‘‘an interdisciplinary, collaborative approach that derives, evolves, and verifies a lifecycle balanced system solution’’. ISO/IEC 15288 [16] applies to the full life cycle of systems, including conception, development, production, utilization, support and retirement of systems. The life cycle processes of this International Standard can be applied concurrently, iteratively and recursively to an aircraft and its elements. Technical integration is identified in this Standard as one of the main process of system life cycle and is defined as “a process that combines system elements to form complete or partial system configurations in order to create a product specified in the system requirements” [16]. It is considered that, technical integration is the fundamental fiber that goes through every aspect of the aerospace system design and it is accomplished with an increasing intensity as the design process progresses through the various phases [14] Lawrence and Lorsch [18] studied the contradiction in the design process between the specialization into disciplines and technologies, what they called differentiation, and the need of integration, what they defined as the process of achieving unity of effort among the various subsystems. They defined an organization as “a system of interrelated behaviors of people who are performing a task that has been differentiated into several distinct sub systems, each subsystem performing a portion of the task, and the effort of each being integrated to achieve effective performance of the system”. V. De Paula and H. Rozenfeld / Mass Properties Management in Aircraft Development Process 229 SAWE [12] identified that the MPM is part of the overall systems engineering. The aircraft mass properties contains every part of the overall design and lays out where all those parts are located, how their mass affects the total design, and how the aggregate compares to the limits or goals of the design. The mass properties data has to be continuous updated, and the total compared to calculated design considering goals and limits of the design. Hammond [14] identifies that the key to providing a quality product is the process used to bring it into being, and everything that exists is the result of a process. An elementary process is a simple input, a transformation, and an output. Complex processes are sets of elementary processes, performed in sequence and in parallel. 
The output of one elementary process becomes the input to the next; consequently, the preceding process has an influence on the following one. SAWE [12] states that during all phases of the aircraft development process there should be an iterative MPM process that is scalable from the lowest design team level up through the system level. The process consists of eight systems engineering-based sub-process elements, categorized as Management or Technical. The identified sub-processes in the Management category are: Plan Mass Properties Technical Effort, Manage Mass Properties Risk, Develop Mass Properties Metrics and Control Mass Properties Baseline. The sub-processes of the Technical category are: Analyze Mass Properties Requirements, Allocate Mass Properties Requirements, Optimize Mass Properties and Verify and Validate Mass Properties (Analysis and Measurement). The MPM process should be redefined to fit better with SE, with a strong emphasis on integrating information across and through the disciplines. The key to successful systems engineering is the environment in which it is practiced; good communication throughout the disciplines is paramount [14][19].

3. Methodology

This study is operations management research; it has the concurrent need of generating knowledge for the academy and for the professional community [20]. It is a cross-disciplinary study aimed at solving a real-world problem. Figure 4 represents the general methodology of this project. Firstly, Phase 1 consists of the problem clarification and the description of the existing process and situation. After the understanding of the problems, a literature review was conducted considering the main factors that influence the MPM (Phase 2). Phase 3 consists of a longitudinal participatory case study; one of the authors has been participating in the MPM during an aircraft development process for three years. The research instruments used during this phase were a logbook and a questionnaire for interviewing major stakeholders of the product development process. Finally, Phases 2 and 3 converge, and a synthesis of the MPM theory is proposed in the form of a process model representing the main process perspectives: activities and deliverables, roles and responsibilities, goals and tools (Phase 4). A process model is the key to the effective integration of the project system models and the effective management of projects [21]. The MPM process model treats processes as systems, using an SE approach [22].

Figure 4. Research Methodology.

4. Process dimensions and variables

Aguilar-Savén [23] defines business processes as "the combination of a set of activities within an enterprise with a structure describing their logical order and dependence whose objective is to produce a desired result". This concept of business process replaces the classical functional vision with a horizontal view, where the unit of analysis becomes a chain of activities / events [24]. Product development processes are business processes with some particular characteristics related to the fact that product development involves creativity and innovation and is nonlinear and iterative. Although certain activities may repeat, the desired overall result is unique.
The main characteristics that differentiate product development processes are: the outputs of activities in most business processes can be verified immediately, while the outputs of many product development activities, such as information, cannot be verified until much later; product development is a highly multidisciplinary endeavor, with many interdependencies among activities; product development processes tend to be more parallel than sequential; and dependencies in product development processes are not clear, because a number of assumptions tend to be undocumented [21]. According to Silva and Rozenfeld [24], the product development process consists of four dimensions, which should work in an integrated way: goals / strategy (involving portfolio management, performance evaluation, cross-functional relationships and partnerships with suppliers); organization / roles and responsibilities (involving the organizational structure and leadership, teamwork culture and learning conditions); activities / information (the set of specific operational activities performed in the product development process and the corresponding handled information); and resources / tools (the techniques, methods, tools and systems used to support the development of the product).

Figure 5. MPM dimensions (adapted from [24]).

Each of these dimensions has its own set of variables². Collectively, these dimensions and their variables provide a sufficient basis to model and link the MPM to the SE development process. Hence, the MPM process variables describe the MPM features and are the main part of the proposed conceptual model. The variables identified in the literature review and in the exploratory case study were classified into these four dimensions. The case study was conducted in order to increase the understanding of mass properties management in the aircraft development process. Semi-structured interviews were conducted with people from different technologies / disciplines and different hierarchical positions, contributing to the synthesis and to the identification of the variables. Table 1 shows the variables identified for each dimension of the MPM process, the type of each variable (ordinal or binary) and the literature references used in the identification of each variable.

² The term variable in this research has the meaning of the properties and attributes used by other authors ([1], [21], [23]).

The activities / information dimension has three variables: weight estimation is critical, weight sub-process categories, and weight saving award. Firstly, weight estimation is considered a crucial activity in the MPM synthesis; this variable evaluates the expertise of the engineers involved in this activity, mainly for the conceptual design and preliminary design phases. The variable weight sub-process categories is related to the classification of MPM activities in the organization, which can be technical, management or integrative, and is present in all product development phases. Finally, weight saving award is related to the activity of recognizing the weight savings of all parties involved in the project, during detailed design and later phases. The goals / strategy dimension has five identified variables: target weight for each phase, local target weight, leadership support, weight information in trade-offs, and use of new technologies. The variables target weight for each phase, local target weight and leadership support were identified in the synthesis as essential and should be present in MPM.
The variable weight information in trade-offs evaluates the weighing of the weight information against cost and schedule in the decision-making process. The use of new technologies evaluates the level of application of new technologies in the product; a positive relation between the use of new technologies and MPM was identified in the synthesis. These five variables should be embedded in all phases of aircraft development, but the variable related to the use of new technologies is more present at the beginning of development, until detailed design.

Table 1. MPM Variables.
Dimension | Variable | Type | References
Activities / Information | Weight estimation is critical | Ordinal (scale) | [4], [9], [11], [12], Case Study
Activities / Information | Weight sub-process categories | Ordinal (scale) | [11], [16], Case Study
Activities / Information | Weight saving award | Binary (yes/no) | [11]
Goals / Strategy | Target weight for each phase (strategy such as PVP) | Binary (yes/no) | [9], [11], Case Study
Goals / Strategy | Local target weight | Binary (yes/no) | [11], Case Study
Goals / Strategy | Leadership support | Binary (yes/no) | [9], [12], Case Study
Goals / Strategy | Weight information in trade-offs | Ordinal (scale) | [11], [25], Case Study
Goals / Strategy | Use of new technologies | Ordinal (scale) | [2], [26]
Organization / Roles and Responsibilities | Technical integration | Binary (yes/no) | [14], [12], Case Study
Organization / Roles and Responsibilities | MPM: people's responsibility | Ordinal (scale) | [11], [4], Case Study
Organization / Roles and Responsibilities | Aeronautical culture | Ordinal (scale) | [11], Case Study
Resources / Tools | Methods and tools for calculation in each development phase | Ordinal (scale) | [15], Case Study
Resources / Tools | Integrated mass properties database | Ordinal (scale) | [26], Case Study
Resources / Tools | Integration database / CAD / DMU | Ordinal (scale) | [15], Case Study
Resources / Tools | Risk management database | Ordinal (scale) | [26], Case Study
Resources / Tools | Automated weight status visibility | Ordinal (scale) | [26]
Resources / Tools | Engineering change management database with mass properties information | Ordinal (scale) | [11], [16]

The organization / roles and responsibilities dimension has three variables: technical integration, MPM: people's responsibility, and aeronautical culture. Firstly, technical integration was identified in the synthesis as essential to MPM; this variable evaluates the formal existence of this role in the organization. The variable MPM: people's responsibility evaluates the MPM workload among designers, engineers and managers. Finally, aeronautical culture identifies whether the MPM has the empowerment to request changes and drive the design. These three variables should be embedded in all phases of aircraft development. The resources / tools dimension has six variables: methods and tools for calculation in each development phase, integrated mass properties database, integration database / CAD / DMU, risk management database, automated weight status visibility, and engineering change management database with mass properties information.
In the synthesis it was identified that there are specific methods and tools for mass properties calculation in each phase of development; this variable evaluates the manufacturer's adaptability and the quality of the calculation. The synthesis showed the importance of the quality of the MPM databases; variables such as the mass properties database, the risk management database and the engineering change management database with mass properties information were identified as essential to guarantee data quality in all product development phases. The integration of those databases with the CAD/DMU database was identified as essential in all product development phases. Finally, the databases should be integrated with an automated weight status visibility during all product development phases. The relationship between the MPM process variables and the aircraft development process is illustrated in Figure 6. Each dashed rectangle is one of the dimensions: activities / information, resources / tools, goals / strategy, and organization / roles and responsibilities. Each blue rectangle represents a variable and the respective phase in the aircraft development process. The variables of resources / tools were divided into three groups: databases (identified inside the light blue rectangle), integration (identified by each arrow) and methods and tools for each phase. The variables of organization / roles and responsibilities are indicated in gray rectangles because they should be considered part of the whole development and of the whole organization.

Figure 6. MPM process and aircraft development process.

5. Conclusions and Future Works

In the proposed model, the organization / roles and responsibilities dimension is an integrative dimension. The variables identified in this dimension are key to enabling synergy between the MPM process and the SE development process. The synthesis of the literature and the exploratory case study converged in the identification of the variables. The importance of technical integration, strategy and responsibilities in the aircraft development process for the success of MPM was highlighted. The identified variables can be used as a basis for conducting a case study at an aircraft manufacturer. Hence, the next steps of this research consist of defining structured questions for each variable and conducting the case study. After the case study analysis, it will be possible to identify opportunities and propose practices for MPM process improvement, improving the manufacturer's ability to retard weight growth during design and development.

References

[1] AIRBUS, Global Market Forecast, Flying on Demand 2014-2033, 2014.
[2] L. Dray, An analysis of the impact of aircraft lifecycles on aviation emissions mitigation policies, Journal of Air Transport Management, Vol. 28, Institute for Aviation and the Environment, Cambridge University, pp. 62-69, 2013.
[3] D.L. Greene, Energy-Efficiency Improvement Potential of Commercial Aircraft, Annual Review of Energy and the Environment, pp. 537-573, 1992.
[4] D.P. Raymer, Aircraft Design: A Conceptual Approach, 2nd ed., American Institute of Aeronautics and Astronautics, Washington, 1992.
[5] L.M. Nicolai, Fundamentals of Aircraft Design, Aerospace Engineering, University of Dayton, Ohio, 1975.
[6] J. Roskam, Aircraft Design, Hardbound, Kansas, 1990.
[7] D.L. Greene, Commercial Air Transport Energy Use and Emissions: Is Technology Enough?, Conference on Sustainable Transportation – Energy Strategies, 1995.
[8] E.M. Kraft, Integrating Computational Science and Engineering with Testing to Re-engineer the Aeronautical Development Process, 48th AIAA Aerospace Sciences Meeting Including the New Horizons Forum and Aerospace Exposition, 2010.
[9] W.G. Andrew, Do Modern Tools Utilized in the Design and Development of Modern Aircraft Counteract the Impact of Lost Intellectual Capital within the Aerospace Industry, Master Thesis, University of Massachusetts, 2001.
[10] W. Boze, P. Hester, Quantifying Uncertainty and Risk in Vehicle Mass Properties Throughout the Design Development Phase, 68th Annual SAWE Conference, 2009.
[11] SAWE, Society of Allied Weight Engineers, Inc., Recommended Practice Number 7: Mass Properties Management and Control for Military Aircraft, Revision Letter D, 2004.
[12] SAWE, Society of Allied Weight Engineers, Inc., TO1, 2011.
[13] E. Torenbeek, Synthesis of Subsonic Airplane Design: An Introduction to the Preliminary Design of Subsonic General Aviation and Transport Aircraft, Delft University Press, 1982.
[14] E.W. Hammond, Design Methodologies for Space Transportation Systems, AIAA Education Series, DOI: 10.2514/4.861734, 2001.
[15] M. Price, S. Raghunathan, R. Curran, An integrated systems engineering approach to aircraft design, Progress in Aerospace Sciences, Vol. 42, pp. 331–376, 2006.
[16] ISO/IEC/IEEE Systems and Software Engineering – System Life Cycle Processes, IEEE STD 15288-2008, pp. c1–84, 2008.
[17] INCOSE-TP-2003-016-02, Systems Engineering Handbook, INCOSE Technical Product, 2004.
[18] P.R. Lawrence, J.W. Lorsch, Differentiation and integration in complex organizations, Administrative Science Quarterly, Vol. 12, pp. 1–47, 1967.
[19] R.M. Kolonay, A physics-based distributed collaborative design process for military aerospace vehicle development and technology assessment, Int. J. Agile Systems and Management, Vol. 7, Nos. 3/4, pp. 242–260, 2014.
[20] C. Karlsson, Researching Operations Management, 1st ed., Routledge, New York, 2009.
[21] T.R. Browning, E. Fricke, H. Negele, Key Concepts in Modeling Product Development Processes, published online in Wiley InterScience, 2005.
[22] R.C. Beckett, Functional system maps as boundary objects in complex system development, Int. J. Agile Systems and Management, Vol. 8, No. 1, pp. 53–69, 2015.
[23] R.S. Aguilar-Savén, Business process modelling: review and framework, International Journal of Production Economics, Vol. 90, pp. 129–149, 2003.
[24] L.S. Silva, H. Rozenfeld, Modelo de avaliação da gestão do conhecimento no processo de desenvolvimento do produto: aplicação em um estudo de caso, Revista Produção, Vol. 13, No. 2, 2003.
[25] E.M. Murman, M. Walton, E. Rebentisch, Challenges in the better, faster, cheaper era of aeronautical design, engineering and manufacturing, The Lean Aerospace Initiative, Massachusetts Institute of Technology, 2000.
[26] H. Dahm, Computer Aided Weight and Cost Management in Vehicle and Aircraft Industry, Society of Allied Weight Engineers, Inc., TGM GmbH, 2007.

Part 4
Production-Oriented Design & Maintenance and Repair

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-239

Product Development Model Oriented for R&D Projects of the Brazilian Electricity Sector – MOR&D: A Case Study

João Adalberto Pereira a,1, Osíris Canciglieri Júnior b,2 and André Eugênio Lazzaretti c,3
a COPEL – Companhia Paranaense de Energia
b PUCPR – Pontifical Catholic University of Paraná
c LACTEC – Institute of Technology for Development

Abstract. The current article demonstrates the practical application of MOR&D through a case study, proposing a production line design for a device that was created and developed previously in R&D projects within the R&D Program of ANEEL (National Electric Energy Agency). The lack of specific methodological models for product development in R&D projects in the Brazilian electricity sector generated the need for this specific knowledge. From comprehensive research on established models of product development, the authors developed a suitable model for planning designs framed within the R&D Program of the Brazilian Electric Sector regulated by ANEEL. The developed model promotes integration and dynamism between development stages and multidisciplinary teams. The model, named MOR&D, is comprehensive enough to meet most types of R&D projects assigned to the ANEEL R&D Program and contains the main recurring concepts in product development, which is the reason for its flexibility and adaptability.

Keywords. Product development models, Brazilian Electricity Regulatory Agency, electricity sector, concurrent engineering.

Introduction

The operation and maintenance services of electricity distribution networks often require electricians' teams to work very close to the energized conductors of the medium voltage network, in 13.8 kV and 34.5 kV systems. For this reason, multiple occurrences are registered annually in which electricians, inadvertently or accidentally, do not respect the safety distances. In view of this reality and the dynamics of the work performed by electricians, COPEL (Companhia Paranaense de Energia) chose to develop, within the Research and Development Program of the Brazilian Electric Sector [3, 4], an electronic device based on sensing the electric field generated by the energized distribution lines, as an accessory to be attached to safety helmets, aiming to warn electricians about excessive proximity to the energized medium voltage network.

1 joao.adalberto@copel.com
2 osiris.canciglieri@pucpr.br
3 lazzaretti@lactec.org.br

However, recent research showed that, up to now, there was a gap characterized by the absence of a suitable model for R&D projects of the Brazilian Electric Sector with the potential to place products on the market [9]. In this sense, through a broad review of PDP methodologies, the authors proposed a development model fully aligned with the R&D Program guidelines, whose application is demonstrated in this text.

1. Design Description

The maintenance performed by electricians on electricity distribution lines presents a high risk of serious injury from electric shock, particularly in cases where the electrician exceeds the minimum safe working proximity recommended in standards. The fact that there is no Personal Protective Equipment (PPE) on the market for this condition justified the choice made by COPEL to perform two R&D projects, within the ANEEL R&D Program, for the development of technology in the form of an accessory to be attached to the safety helmet used by electricians.
In the first project, as shown in Figure 1, characterized as Experimental Development (ED) [5] in the ANEEL Innovation Chain [6], the main technical features for electric field detection within the safety distance established by standards [1] were defined, and the feasibility was verified of a small device, with adequate autonomy and reliability, that could be adapted as an accessory to the safety helmet, which is usually standardized by the power company. In the second project, subsequent to the first and characterized as Head Series (HS) [7], the aim was to develop a design with appropriate ergonomics, such that the equipment could be produced as an industrial series. Following the ANEEL Innovation Chain (Figure 1), the MOR&D (Product Development Model Oriented for the R&D Projects of the Brazilian Electricity Sector [8, 9]) is used to structure the development stages of a project proposal characterized as a Pioneer Lot (PL), presented in the sequence, for the "Electric Field Sensor as Safety Accessory for Helmets (PPE)" equipment, referred to from here on as the "Helmet Sensor".

Figure 1. ANEEL Innovation Chain and the history of the pioneer lot project proposal.

2. Methodology

The research strategy was a case study with a qualitative approach, in which the unit of analysis was the R&D projects as defined by the ANEEL R&D Program [6]. As the research method, the strategies used to prepare the Helmet Sensor equipment [5, 7] were considered first, followed by the application of MOR&D [9] in structuring the design of the new PL project.

3. Considered Criteria

3.1. Technical Criteria

Because this research is directed at electricity distribution networks, the variety of standardized network arrangements in COPEL Distribution was considered, so that the configuration of the electric field at the safety distance can be established for each case. Figure 2 shows some of the standard arrangements. For this R&D project, the structure used as the basis of the study is number 1 in Figure 2 (standard N1), due to its higher frequency of use in the company's distribution network. It was used to create models for electric field simulation in software applications such as COMSOL Multiphysics [10] for the 13.8 kV and 34.5 kV voltage levels, according to the dimensions illustrated in Figure 3.

Figure 2. Distribution network arrangements used as reference [5].

Figure 3. Simulation model [5].

In one example of simulation based on the three-dimensional model as described, it can be observed that the x-direction, indicated in Figure 3, has a lower intensity compared to the other directions (y and z). Figure 4 illustrates the three-dimensional simulation result using the COMSOL software [5]. This illustration represents the pattern of the electric field at an instant of time, in the three directions, on two different planes: one perpendicular to the line at the centre of the evaluated span and the other near the post.

Figure 4. Simulation [5].

The plane near the post shows the influence of the insulators, crossarm and post on the electric field configuration. However, the simulation suggests that the presence of the post does not significantly alter the safety distances adopted. Thus, for the other analyses in this project only the two main components of the electric field (y and z) will be considered.
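For orientation only, the order of magnitude of the fields these simulations resolve can be checked against a textbook single-conductor estimate: an infinite line charge above ground gives E ≈ V / (r ln(2h/a)) at radial distance r from the conductor. This is not the authors' model — the conductor height h and radius a below are assumed values, and the N1 three-phase geometry (and, as Figure 5 shows, the electrician's body) changes the field considerably — but it indicates the field levels a helmet-mounted sensor must discriminate:

```python
import math

def field_v_per_m(v_line_to_ground, r, h=10.0, a=0.005):
    """First-order field magnitude (V/m) at radial distance r (m) from a
    single energized conductor of radius a (m) at height h (m) above ground,
    using the infinite line-charge approximation with an image conductor.
    h and a are assumed, illustrative values, not the N1 dimensions."""
    return v_line_to_ground / (r * math.log(2 * h / a))

SAFETY_DISTANCE = 0.6  # m, the 60 cm safety distance range cited later

for v_line_to_line in (13.8e3, 34.5e3):       # network voltages from the paper
    v_lg = v_line_to_line / math.sqrt(3)      # line-to-ground voltage
    e = field_v_per_m(v_lg, SAFETY_DISTANCE)
    print(f"{v_line_to_line/1e3:4.1f} kV network: ~{e:5.0f} V/m at {SAFETY_DISTANCE} m")
```

In effect, a device of the kind described below compares its measured field against a calibrated threshold of roughly this order for each voltage level.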
Another factor considered in the project is shown in Figure 5, which demonstrates the influence of an electrician located within the safety distance range (60 cm from the conductor) and at 0.3 m from the central conductor (Figure 3), making clear that the electric field changes considerably when the electrician is present at that location.

Figure 5. Influence of an electrician in the safety distance range [5, 7].

Given the limit conditions presented, and aiming to alert the electrician to the imminent danger, the project proposed the development of an electronic device in industrial format, at the minimum scale needed to allow field testing and market evaluation (Pioneer Lot). The base technology involved is the measurement of the electric field near the medium voltage network. The device must have reduced size and weight, have low cost, and be able to be incorporated into standard safety helmets in accordance with the norms. It must also emit audible and visual signals every time the electrician approaches the medium voltage grid. Further, it should work permanently without user intervention, be powered by a long-life battery, visually indicate the low-charge condition of the battery, and have a self-test function.

3.2. Regulatory Criteria

The Research and Development Program of the Brazilian Electric Sector, derived from Law No. 9991 of July 2000 [11], established compulsory investment in research and development (R&D) by the concessionaires, licensees and authorized companies of the electricity sector. The framework conditions for R&D projects in the program are established by the National Electric Energy Agency (ANEEL) and are described in the "Manual for the Research and Development Program of the Brazilian Electricity Sector" [6].

The activities related to R&D projects, according to ANEEL's Manual, are those of a creative or entrepreneurial nature, with a technical and scientific basis, aimed at generating knowledge or the innovative application of existing knowledge, including research into new applications. Following the classic line established internationally [12, 13], but with the intention of reaching the productive chains, ANEEL classified R&D projects into six categories, represented in the "ANEEL Innovation Chain" (see Figure 1). The possibility of implementing projects in order to improve products for industrial production and marketing makes clear the ANEEL R&D Program's intention of encouraging technological innovation as well as the development of practical solutions for the daily operations of energy companies.

The merit of an R&D project is defined by ANEEL through four primary criteria that should be considered during the planning process: Originality, Applicability, Relevance and Reasonableness of Costs [6], where Originality is an eliminatory factor in the proposal evaluation. This criterion is assessed according to the Challenges (complexity) and the Technological Advances and Innovation inherent in the techniques applied to the project development and in the products generated by them.

3.3. Development Methodology Criteria

The MOR&D (Product Development Model Oriented for the R&D Projects of the Brazilian Electricity Sector) was used to structure the project development phases. The model was designed based on the ANEEL R&D Program guidelines and in line with key concepts and techniques recurrent in the Industrial Product Development Process (IPDP) [9].
Based on Product Engineering concepts, the MOR&D considers the interaction between the various stages of a project and the formation of multidisciplinary teams, which results in well-structured projects fitted to the criteria set by ANEEL. The MOR&D consists structurally of three macro-phases: Pre-development, Development and Post-development, represented in Figure 6. The macro-phases are subdivided into six sequential phases: Initiation, Planning, Design, Implementation, Production and Maintenance, which are divided into 14 steps of different activities. The model suggests IPDP tools distributed according to their applications throughout the stages of the model. The MOR&D was designed to be dynamic, adapting to the whole range of R&D projects in the electricity sector.

Figure 6. MOR&D [9].

4. Project Formalization

The MOR&D proposes interaction between the Initiation and Planning phases [9] (Figure 7). This process is characterized by constant revisions of the design proposal so that it is in full accordance with the customers' wishes, the power utility's strategies and the criteria of the ANEEL R&D Program. Thus, in the Initiation phase, the starting point was the Statement of Demand, characterized by the need for the specialized equipment; for the Strategic Tests step, the concessionaire's approval of the results of the ED and HS projects carried out previously (Figure 1) was considered.

The preparation of the proposal continued with the Scope Definition for the PL project. In this step, as a complement to the results obtained previously, customer requirements were reviewed, organized and prioritized, allowing the definition of the basic guidelines for the PL Project Planning, which contains the demands for the project and the expertise of the team, both summarized in Table 1. Also implicit in the scope are the results obtained in the previous ED and HS projects: the technology basis and the preliminary industrial design for the product, respectively [5, 7].

Figure 7. Pre-development Macro-phase [9].

Table 1. Demand prioritization and necessary expertise for the PL project.

| Order | PL project demands | Expertise (disciplines) |
| 1 | Review the sensor HS project | Physics, Materials, Electrical, Mechanical Designer |
| 2 | Development of sensor test device | Electronics, Mechanic, Industrial Designer |
| 3 | Production line design | Industrial Projects, Mechanic, Electrotechnical |
| 4 | Production of Pioneer Lot | Industrial Projects, Electronic, Mechanic |
| 5 | Test pilot lot under field conditions | Industrial Projects, Electronics, Electrotechnical, Mechanics |
| 6 | End of production line project | Industrial Projects, Marketing experts |
| 7 | Product marketing plan outline | Marketing and market study experts |
| 8 | Document and product launch | Marketing experts |

Further, in the Planning phase, steps 9, 10 and 11 were considered for Development, as suggested by MOR&D (Figure 8); these deal with the Implementation phase processes for manufacturing the previously developed equipment, and the dynamic between these stages is illustrated in Figure 9.
However, given the need for adjustments to the prototype developed in the previous HS project (Figure 1), a strong dynamic between the Design and Implementation phases was also felt in the preparation of the activities related to the PL project, as suggested in Figure 9, so that there was a better match of the technology to the production processes and a refinement of the product, fully meeting the customers' wishes. Therefore, step 8 was also added to the development methodology for this new project.

Figure 8. Development steps according to MOR&D [9].

Figure 9. Development macro-phase (Implementation phase) [9].

As part of the Planning phase, and following the flowchart of Figure 10, we sought to formalize the activities of the steps. For this, use was made of the specific template for PL projects presented by the authors in the thesis document describing the MOR&D [9]. The colours that identify the stages of the project in the template correspond to the colours assigned to the activities scheduled in Figure 10, allowing a rapid cognitive association of MOR&D with the proposed PL project.

Figure 10. Activities assigned to the Implementation phase [9].

The execution of the steps that address the refinement of the technologies applied to the new product is assigned to the research team that developed the previous ED and HS projects (Figure 1), supplemented by specialists in product engineering and marketing, who concurrently direct the product design towards a production line that meets the market. Finally, the project proposal was formalized in specific documentation containing the Description of the Project, the Expenses Worksheet, the XML format file for registration with ANEEL, and the contract between the research institution, the safety material industry and the electricity concessionaire.

Figure 11. Post-Development macro-phase.

[Template – MOR&D: a fold-out chart listing the PL project's 34 numbered development activities — from project planning and initial studies, through concept redefinition, prototype functional, physical and certification tests, and production line construction and refinement, to market assessment and delivery of the functional production line with complete technical documentation — scheduled month by month over three years against the MOR&D steps and gates, with suggested IPDP tools (FAST, FMEA, RB, DFX, WBS, QFD).]
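The template content itself can be captured in a light data structure for project tracking. In the sketch below, the activity descriptions are taken from the template, but the phase assignments and month spans are illustrative assumptions rather than the project's actual plan:

```python
# A few template rows: (activity, MOR&D phase, start month, end month).
# Descriptions come from the template; phases and months are illustrative.
TEMPLATE = [
    ("Project planning and initial studies",    "Planning",       1,  3),
    ("Concept redefinition and specifications", "Design",         4,  7),
    ("Functional tests with prototypes",        "Implementation", 12, 15),
    ("Production line technical documentation", "Implementation", 28, 31),
]

def activities_in_phase(phase):
    """List the template activities assigned to one MOR&D phase."""
    return [desc for desc, ph, *_ in TEMPLATE if ph == phase]

def gate_passed(phase, month):
    """A phase gate could be modelled as: every activity of the phase
    has finished by the given project month."""
    return all(end <= month for _, ph, _, end in TEMPLATE if ph == phase)

print(activities_in_phase("Implementation"))
print(gate_passed("Design", 8))  # True with these illustrative dates
```

A structure of this kind makes the template's gates checkable: a gate passes only once all of the phase's scheduled activities have closed.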
5. Final Considerations

The case study showed that the MOR&D is comprehensive and flexible enough to adapt the sequence of the project steps to the development stage of the product technology. The template assigned to the model enables and directs the flow of activities for the project execution, facilitating management during the development. This work demonstrated that the application of the model is promising, making it possible to optimize the development process, detect potential problems beforehand, improve the allocation of staff and resources and, consequently, reduce re-engineering costs, so that higher quality products can be achieved with greater efficiency.

Acknowledgments

The authors are thankful for the financial and technical support provided by the Companhia Paranaense de Energia (COPEL), the Pontifical Catholic University of Paraná (PUCPR) and the Institute of Technology for Development (LACTEC).
References

[1] ABNT, Exposição a Campos Elétricos de 50 e 60 Hz, Norma ABNT, 2000.
[2] ANSI/IEEE, IEEE Guide for Maintenance Methods on Energized Power Lines, IEEE Std 516-2003 (Revision of IEEE Std 516-1995).
[3] A.E. Lazzaretti, P.M. Souza, Sensor de proximidade de rede de distribuição energizada como acessório de capacete de segurança, Final Project Report, LACTEC/COPEL, Curitiba, Paraná, Brazil, 2009.
[4] A.E. Lazzaretti, M.A. Ravaglio, G.P. Resende, S. Ribeiro, R.J. Bachega, E.L. Kowalski, V. Swinka Filho, P.M. Souza, A.O. Borges, J.P. Lima, M.G.D. Voos, Simulação e medição de campos elétricos em linhas de distribuição para desenvolvimento de acessório de capacete de segurança, Proceedings of Congreso Internacional sobre Trabajos con Tension y Seguridad en Transmision y Distribucion de Energia Electrica (IV CITTES-CIER), Buenos Aires, Argentina, 2009.
[5] A.E. Lazzaretti, P.M. Souza, Sensor de proximidade de rede de distribuição energizada como acessório de capacete de segurança, Final Project Report, LACTEC/COPEL, Curitiba, Paraná, Brazil, 2009.
[6] ANEEL, Manual do programa de pesquisa e desenvolvimento tecnológico do setor de energia elétrica, Brasília, DF, Brazil, 2012. Available at: http://www.aneel.gov.br.
[7] A.E. Lazzaretti, P.M. Souza, Desenvolvimento de Cabeça de Serie de Sensor de Proximidade de Redes de Distribuição como Acessório de Capacete de Segurança, Final Project Report, LACTEC/COPEL, 2013.
[8] J.A. Pereira, O. Canciglieri Jr., A.E. Lazzaretti, A.M.A. Guimarães, Product Development Model for Application in R&D Projects of the Brazilian Electricity Sector, in J. Cha et al. (eds.), Moving Integrated Product Development to Service Clouds in the Global Economy, Proceedings of the 21st ISPE Inc. International Conference on Concurrent Engineering, IOS Press, Amsterdam, pp. 33–45, 2014.
[9] J.A. Pereira, Modelo de Desenvolvimento Integrado de Produto Orientado aos Projetos de P&D do Setor Elétrico Brasileiro – MOP&D, Tese de Doutorado, PUCPR, Curitiba, PR, Brazil, 2014.
[10] COMSOL Multiphysics: The Platform for Physics-Based Modeling and Simulation. Available at: http://www.comsol.com/comsol-multiphysics.
[11] Lei No. 9.991 de 24 de julho de 2000, Diário Oficial da União, Brasília, DF, Brazil, 2000.
[12] Oslo Manual: Guidelines for Collecting and Interpreting Innovation Data, OECD (Organisation for Economic Co-operation and Development), Rio de Janeiro, RJ, Brazil, 2005.
[13] Frascati Manual: Proposed Standard Practice for Surveys on Research and Experimental Development, OECD (Organisation for Economic Co-operation and Development), Paris, 2002.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-249

Sustainment Management in the Royal Australian Navy

Robert HENRY a,1 and Cees BIL b
a BAE Systems Australia, Hydrographic In Service Support, Cairns, QLD 4870, Australia
b School of Aerospace, Mechanical and Manufacturing Engineering, RMIT University, Melbourne, VIC 3001, Australia

Abstract. The Australian Defence Force (ADF), like many industries, faces ongoing challenges in the support of its assets, acquisitions, budgets and workforce management.
Unlike other industries, the ADF is heavily affected by changes in Government, changes in Government policy direction, the diversity of potential conflict scenarios and the manner in which budgets are set. This has led to a series of decisions that have addressed problems in the short term but have not adequately considered the long-term implications. More often than not, Government directives to deliver improved efficiencies come with a corresponding budget cut and a direction to maintain services and capability, without any real guidance on how this could or should be achieved, and this continues to impact the organisation long after the incumbent Government has left office. The resultant problem areas within ADF maritime, such as engineering and maintenance, are covered by a number of reports, including the more recent Rizzo Report. The purpose of this paper is to look at the area of Sustainment Management within the ADF from a maritime perspective, and at the holistic view that defence industries need to consider in the development of their Support Solutions when entering into support arrangements such as Alliances and In-Service Support contracts.

Keywords. sustainment management, maintenance, Royal Australian Navy

Introduction

Many large organisations face continued pressure to ensure maximum plant up-time and availability while reducing support costs; the Australian Defence Force (ADF) and its prescribed supplier, the Defence Materiel Organisation (DMO), are no different in this regard. The strategies implemented are very dependent on a range of factors including competition, the quality and level of available data for analysis, the desire for change, organisational culture, organisational structure, finances, change management practices within organisations, politics and many others. These factors have a heavy influence on outcomes and generally are not easily changed [1]. The Rizzo Reform Team has identified that effecting the cultural changes necessary to improve engineering outcomes within the Royal Australian Navy (RAN) is expected to take a generation, depending on the consistency of approach.

1 Robert Henry, Maintenance Manager, Hydrographic In Service Support, BAE Systems Australia, Cairns, QLD 4870, Australia. Email: bob.r.henry@baesystems.com

Organisations may have a dominant engineering functional basis, have a heavy plant asset base, or have little in the way of primary plant, but all have an operational focus of some description. Balancing these apparently opposing requirements can often be difficult, and it is made more complex when the organisation's prime objectives are not made the centrepiece of each department's objectives and managed with an overarching coordinated approach [2]. Within the ADF, staff retention, staff mobility and the costs of training add considerable cost and complexity to this issue, which has had negative impacts on engineering management within the RAN, as identified by the Rizzo Report [3].

Since the late 1990s, the Australian Defence Organisation (ADO) has continued its shift from self-reliance to ever greater levels of industry support where its functions have been deemed not part of the core business of war-fighting, or not considered frontline support services required to support this purpose.
This fundamental shift saw the introduction of Class Logistics Organisations, later named System Program Offices (SPOs), that were focused on specific asset classes and often partnered with an industry service provider. This model has continued to develop and evolve, gaining greater pace when the DMO was created in 2000 as an independent statutory organisation and prescribed supplier to Defence. The original model was supposed to retain military and Australian Public Service (APS) personnel in executive, finance, engineering and governance roles, with industry partners providing the legwork, brought in as required to meet project needs. This concept was supposed to allow for the flexible resizing of the organisation to support the varying workload. Unfortunately, this vision failed to be realised, resulting in inefficient and costly structures. In more recent times, primarily due to budget pressures, freezes on recruitment and other Government reviews into staffing levels of public servants, the original concept is likely to gain greater traction.

The process of outsourcing services has come at a cost to the ADO as a whole through the effective deskilling of critical engineering functions, as identified in a number of reports, including the more recent Rizzo Report [3]. An article appearing in the Australian Defence Magazine has gone further, revealing that the Commission of Audit review into government programs identified that the DMO has neither been effective nor enhanced accountability, due to a range of skills shortages and high staff turnover [4]. Interestingly, this was highlighted in the Mortimer Review, with a range of recommendations made to improve commercial practices, skills, risk management and workforce numbers [5].

What does this mean to defence industries? The obvious answer is a possible business opportunity, but a discussion of business opportunities, or how to win them, is not part of this paper. The more likely answer is that defence industry may be asked to take on responsibility beyond that of the traditional supplier of materials and services. As discussed in an editorial article titled "Time to let go of the Valley of Death and make a decision", many defence-related industries have focused on a relatively narrow part of the spectrum and have primarily been concerned with acquisition and construction projects, calling on the Government to provide certainty for the Australian Defence shipbuilding program in order to maintain skills [6]. While this maintains a large workforce with a range of skills that may not be easy to pull together again at short notice, the notion that it maintains a highly skilled workforce that would somehow irrecoverably disperse is questionable.

Most of the skills needed in the bulk construction phases of shipbuilding are transferable between Defence and commercial work, and are also required to support the maintenance and modification programs that are largely performed in-country. Specifically, the areas often cited as a concern for skills retention in the shipbuilding sector are more related to maintaining employment opportunities than to skills retention. Additionally, the skills that are valuable for retention, including engineering for systems integration, continue to be required in support of modification programs, along with the high-end technical skills required to support the vessels past the build phase.
From the author's direct experience over the last 20 years, the most difficult skills to retain and replicate from a Sustainment perspective are those related to complex systems diagnostics and field repairs of military systems/equipment, which up until the last decade or so largely originated from within Defence, due to the lack of equivalent commercial training.

Generally, reviews into Defence have focused on the performance of either a single service, the DMO or the partnership. These reviews have placed many of the engineering and sustainment performance problems squarely at the feet of the ADO. This may be valid when considering the performance of the governance function and commercial acumen, but industry also needs to stand up and be counted for its underperformance. From the author's experience of being on both sides of the fence, it is evident that the base problems affecting the ADO are equally applicable to industry in the Sustainment support of military systems/equipment. What are these shortcomings, and are they easily addressable?

1. Problem Description

Under pressure to reduce manning levels on board Her Majesty's Australian Ships, as this represents the single biggest cost of running a naval vessel, the RAN began requesting replacement vessels that were considered to be minimum manned, in line with commercial shipping trends. New vessels have more automated control and equipment requiring less routine maintenance to be performed while at sea, and hence a smaller, highly skilled technical workforce as part of the crew. This shifted the focus from providing engineers, technicians and tradesmen as a significant crew component to one of addressing the core operational requirements of war-fighting and seamanship, in line with the RAN's mission statement "to fight and win at sea". One aspect of this approach was to distribute the function normally handled by a "whole of ship" maintenance planning cell amongst various "Work Centres". The term Work Centre came about in response to the Computerised Maintenance Management System (CMMS) progressively rolled out across the maritime fleet. This has led to circumstances resulting in poor outcomes, including adverse press concerning the materiel state of ships and, consequently, a number of Government-driven reviews. This problem is further compounded by the organisational size, workplace culture, geographical diversity, diversity of equipment, the manner in which capability procurement activities are implemented, separate training programs and the diversity of activities that ADF personnel are expected to perform.

Over the last decade or so, maintenance management as a discipline has been increasingly recognised as something other than a sub-function of engineering management or project management. In response to industry demand, a number of universities now offer postgraduate qualifications in maintenance management or asset management. In comparison, the RAN dropped maintenance management training as it rolled out its CMMS and increasingly relied on the DMO to pick up the shortfall. The DMO became a prescribed service provider under the Financial Management and Accountability Act 1997 in 2000. The services provided in support of maritime maintenance have generally been with regard to the contracting and contract management of maintenance tasks identified as being outside of the crew's capability or capacity.
Initially, there were no defined accountabilities or performance measures; this gap was identified in the Mortimer Review, which included recommendations for Defence and the DMO to enter into Materiel Sustainment Agreements (MSAs) [5]. Though the recommendation was taken up, implementation across the RAN was inconsistent and dependent on the relationship between the various SPOs and Capability Groups. Around the same timeframe, the RAN reduced the recognised maintenance categories from Organisational (ship level), Intermediate (navy support workshops) and Depot (those functions not within Defence capability) down to two levels: Organic (ship level) and External (not conducted by ships' crews). One of the unfortunate side effects of this decision was that training programs supporting intermediate-level maintenance were either reduced or dropped, with nothing put in place to fill the gap. The underlying assumption was that industry would fill it, but the reality is far more complex in areas where the technology is unique to Defence or where other limitations exist.

Figure 1. Differences in Work Allocation.

Figure 1 is a simplistic view of how job allocation for maintenance was broken into three levels, initially determined at the acquisition stage based on the level of training required, the facilities required, test equipment, parts availability, required turnaround times, certification and performance testing, and access to and availability of intellectual property, amongst others. The first part of Figure 1 shows a clean allocation, which in reality was never so neat. The second part of the figure depicts the current practice in the maritime context, but the formal training that was originally available for the Intermediate work scope has largely disappeared. This aspect had a significant impact, as the training and development of maintenance staff in the intermediate facilities allowed the development and retention of corporate knowledge.
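The acquisition-time criteria just listed behave like a rule set, so the Organic/External split can be made concrete. The following sketch is our illustration only — the field names echo the criteria above, and the seven-day cut-off is an invented threshold, not RAN policy:

```python
from dataclasses import dataclass

@dataclass
class MaintenanceTask:
    """Fields echo the allocation criteria listed for Figure 1."""
    name: str
    crew_trained: bool         # required training/certification held on board
    onboard_facilities: bool   # tooling and test equipment available at sea
    ip_available: bool         # intellectual property accessible to the crew
    turnaround_days: int       # required turnaround time

def allocate(task: MaintenanceTask, max_organic_days: int = 7) -> str:
    """Assign a task to Organic (ship level) or External maintenance,
    mirroring the two-level scheme; the day cut-off is illustrative."""
    organic_ok = (task.crew_trained and task.onboard_facilities
                  and task.ip_available
                  and task.turnaround_days <= max_organic_days)
    return "Organic" if organic_ok else "External"

print(allocate(MaintenanceTask("fire pump overhaul", True, False, True, 14)))
# -> External (no on-board facilities, turnaround too long)
```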
2. Key Issues

From a defence naval perspective, there are a number of areas that need to be addressed, and these could be equally applicable to a range of other service support arrangements. This poses a number of questions, summarised in this section, that will be considered by this paper in line with the principles of System Support Engineering, focused on Sustainment (maintenance) related issues:

1. What does the support environment look like? Support contracts are let for any number of reasons, and understanding the rationale and objectives can be crucial to success. Are there any constraints being applied, or mandated tools, equipment or facilities being provided by the customer? What could be argued by the customer to be cost-saving measures could unwittingly be a set of liabilities.
2. What's in the contract? This is often fundamental to many of the problems encountered between the DMO and Industry, and to the subsequent disconnected expectations of the RAN. Do all parties understand what's being asked for, and is it realistic? As basic as it seems, often the contract is written by one group of parties and handed over to another to implement based on a briefing. While the contracts are made available, the page/word count, complexity and legalese in which they are written tend to discourage reading beyond the introductory sections. The result is that at least one party often doesn't understand its obligations or what is required to be delivered. In this regard, relationships between the parties can become unnecessarily strained when the difference between the parties becomes too large, placing them in an adversarial position.
3. What are the explicit and implicit requirements? The explicit components are normally easy to manage, on the proviso that they are realistic; the implicit ones, including unstated expectations and intent, are the ones that often bite. Often, aspects are covered in a series of related reference material, written in a manner that assumes the other party has pre-acquired knowledge.
4. Are the references used in the contract available, and are they understood? In the last two major support projects the author has worked within, this was a fundamental failing, in that some publications were not made available prior to contract signature, and in other cases assumptions were made that were never suitably questioned by either party. From the Contracting party's perspective, potential Contractors contribute to this situation by providing assurances that the contract is understood.
5. What tools have been mandated? This refers to the corporate software tools that the Customer may want the service provider to use. These can and do impact staff workloads, but this is rarely able to be evaluated upfront; tools may also be introduced or modified after the contract start date with an assumed or asserted nil impact assessment by the customer in regard to the required level of effort. Generally, this is not intentionally misleading, just a reflection of the individual's direct experience with the tools.
6. Are there Key Performance Indicators (KPIs) identified in the contract that are quantified and defined up front, or are they listed as generic performance areas with KPIs that are to be determined and agreed post contract signature? Difficult-to-measure or difficult-to-quantify KPIs are not uncommon. More to the point, incorrectly qualified KPIs can inadvertently set up undesirable behaviours.
7. Are training requirements or competency profiles identified upfront, and is there sufficient information to determine this? Working within Defence, there are a number of applications and systems that are unique and not accessible via normal commercial training providers. In some cases, the contract may identify that the training is to be at the Contractor's expense or made available as a one-off delivery in the early days of the contract period. If this is the case, there is an expectation that the Contractor will use this opportunity to develop and implement their own training programs to ensure continued service delivery.
8. Are the stakeholders clearly and easily identifiable? The ADO is a complex entity to deal with, not least as a result of frequent name changes, the geographical diversity of support business units and relatively short staff posting cycles. A common mistake is to focus only on the business unit managing the contract.
9. What is the framework/context in which the contract will operate? This question relates to the regulatory and statutory frameworks. Maritime assets by their very nature are required to be supported in all Australian states and territories and, on occasion, during overseas deployments. The question then becomes one of international vs federal vs state vs RAN requirements, and often all of the above are applicable.
10. What are the cultural influences and differences between defence and industry that may affect performance?
This is a broad question, but it will be looked at in the context of the contract, attitudes, stability and change management.

3. Moving Forward

While the purpose of the paper is not to propose an organisational structure, a review of some structures associated with Defence Maritime will be used to highlight some aspects of the structural framework that are both good and poor, to discuss some possible areas of improvement, and to provide clarification of requirements.

Step 1

As a starting point, what are the key elements that need to be understood by all parties involved in the Support Solution? This is succinctly identified by the following elements extracted from DI(N)LOG 47-3 – Regulation of Technical Integrity of Australian Defence Force Maritime Materiel [7]:
(a) People – individuals and the Technical Support Network – qualified, authorised and competent;
(b) Systems – a Quality Management System (QMS) appropriate for the type of work performed, with a minimum standard certified to AS/NZS ISO 9001:2000 or an equivalent standard acceptable to the Chief of Naval Engineering (CNE);
(c) Processes – procedures and plans – evidence of compliance; and
(d) Data and facilities – use of relevant and authorised data and facilities appropriate for the activity being performed.

Responsibility for ensuring most of these elements are in place and met rests with the Contracting Authority, with points (a) and (b) being the responsibility of the Authorised Engineering Organisation (AEO). In most instances, both of these functions are now managed by each System Program Office (SPO). Generally, the level of assessment is only undertaken as far as the Prime Contractor's management team and a review of their management plans. It is often assumed that the Prime Contractor flows these requirements down through their own organisation structure and onto any subcontractors engaged.

In order to develop a support solution, a potential service provider needs to gain a level of understanding of the elements contained within ABR 6492 – Navy Technical Regulations Manual [8]. This is a five-volume set, with Vol 1 – Policy and Vol 2 being the prime volumes of interest and the remaining volumes providing simplified overviews. Vol 2 is broken into 8 sections that cover specific areas and follow the same basic format: chap 1 – policy, chap 2 – regulations, chap 3 – organisational and individual responsibilities and chap 4 – guidance (Figure 2).

Figure 2. Section headings from NTRM Vol 2.

Step 2

It should come as no surprise that many of the SPOs arrange themselves to functionally align with the Naval Technical Regulatory Framework (NTRF) and the Defence Procurement Policy Manual (DPPM) where staffing levels permit. Understanding this allows an organisation providing support services to determine likely organisational interface requirements. Within this structure, there are three different types of governance function:
1. Executive – this is led by the SPO Director. Individuals within each functional area may be delegated an Executive Authority based on the levels of risk and/or finance involved;
2. Engineering – this is led by the Chief Engineer, and Level 2 delegates are assigned by CNE. This person may be authorised to delegate lower levels of engineering authority within some of the functional areas. These are voluntary roles, awarded based on demonstrated understanding of the NTRF, qualifications, experience and the type of engineering function being performed; and
3. Finance – this may be assigned to and led by the SPO Director or the Business Manager. Individuals within each functional area may be delegated a level of financial authority, dependent on the nature of the tasks managed, and this requires completion of a Simple or Complex Procurement course.

The exercising of executive and engineering or finance delegations cannot be performed by the same person against the same task. Many support organisations have similar structures for exercising different types of authorities and delegations, though this may not be as transparent.

Figure 3. Comparison of NTRF Structure to a possible SPO structure.

Within the functional areas depicted in Figure 3, the governance functions are depicted in the upper three coloured boxes as shown; they provide support and direction to personnel awarded delegations in each of the other functional areas.

Step 3

Understand the "scope", which encompasses both explicit and implicit requirements. This notion is also applicable in determining the boundaries of authority and responsibility. Using the Hydrographic In Service Contract as an example, it is evident that areas of Sustainment Management are being compromised in the mistaken belief that, since the Contractor has been tasked with compiling information, they are also responsible for the endorsement and acceptance of that information, even though that authority has not been given. This is an example of confusing a Governance function with a support activity.

4. Regulatory Framework

Across the ADF, different views exist concerning the technical and statutory regulatory framework and its applicability to a particular asset or to the organisation as a whole. Defence has traditionally enjoyed a view that aspects of the regulatory framework are not applicable due to the nature of its core business activity. This view was recently challenged by the Australian Government in support of the WHS Act 2011, resulting in the ADF agreeing that, outside of direct combat situations, there were no operational situations precluding compliance with the Act. However, there remains a persistent view within the ADF community that these aspects are only a guide and therefore somehow not applicable if they are inconvenient, cause an increase in costs or add to the overall job complexity. Fortunately, this view continues to decline as one of the Rizzo reform activities continues with a focus on cultural change.

Just as in other areas of the Australian industrial context, there is a raft of regulatory pitfalls that need to be navigated, including some that are not so familiar to industry, such as the Weapons of Mass Destruction Act. Defence Maritime in particular has to contend with Australian law as well as a number of international laws pertaining to shipping, though there are some peculiarities associated with warships that make it difficult to comply with some IMO codes, and these are addressed by the Naval Ship Code (NSC) introduced circa 2007. Each arm of the ADF has its own implementation of a Regulatory Framework that is required to be implemented in line with Defence Instruction (General) LOGistics (DI(G) LOG) 4-5-012 – Regulation of technical integrity of Australian Defence Force materiel. In support of this requirement, the RAN version is DI(N) LOG 47-3 – Regulation of technical integrity of Australian Defence Force maritime materiel.
This requires that ADF maritime materiel is fit for service and poses no hazards to personnel, public safety or the environment. The Navy Technical Regulatory System aims to ensure that ADF maritime materiel is designed, constructed and maintained to approved standards, by competent and authorised individuals, who are acting as members of authorised engineering organisations and whose work is certified as correct [9].

Relatively recently, the ADO began consolidating a number of manuals into a single electronic manual set known as the Defence Logistics Manual (DEFLOGMAN). This is being undertaken to reduce the level of duplication and standardise the approach taken in support of a range of policies and activities. Maintenance in particular is now covered by DEFLOGMAN, Part 2, Volume 10, which provides clarification concerning applicability to both ADF members and contractors performing maintenance. The maintenance policy spelt out in this manual reinforces the requirement to ensure that both ADF members and industry are appropriately qualified and authorised to perform the work, using approved documentation, to approved standards, and that the work is certified as correct. DI(G) LOG 4-5-020 – Defence Engineering and Maintenance Manual (Chief of Defence Force, 2013) covers a new governance framework being introduced across the ADO and must be read in conjunction with DEFLOGMAN. DI(G) LOG 4-5-020 represents a further consolidation of more than 10 separate policy manuals affecting engineering and maintenance activities conducted by both the ADO and industry.

What does this mean to industry in a support contract? Fundamentally, it means that industry needs to understand the requirements identified in the current ASDEFCON templates being used. In particular, it means that industry partners need to take note of the high-level references used and to follow and understand a myriad of related references.

5. Conclusion

In an increasingly complex environment, the approach taken by the defence services has the appearance of searching for the proverbial silver bullet to solve a wide range of woes. In reality, the problems are becoming more and more complex and require a combination of strategies that need to involve a wide range of stakeholders, to ensure that impacts or risks are not unintentionally transferred to other elements of the support chain, noting that they may not be able or geared to cope with them. The "silver bullet", also commonly called the "magic bullet" or "magic wand", is simply a metaphor referring to an expectation that a straightforward or simple solution will resolve a complex issue. This can be, and often is, an illusion based on the notion that a "new" method, a new technological solution or a new contract can provide an immediate fix to complex issues, or provide problem resolution at a substantially lower cost and within an improved timeframe. Over the years, the defence maritime sector has continued searching for the silver bullet in an attempt to reduce costs and improve outcomes. Unfortunately, this has not been the result, as identified in the Rizzo Report [3].

What does this mean to a defence sustainment engineering services provider? Overall, it means that industry needs to become more familiar with the defence regulatory frameworks, corporate systems, corporate tools, technical risk management and stakeholders in order to gain a better understanding of the complex chain of interactions.
This is not an easy task, and it is made more difficult by the poor understanding of the legal framework among many of the decision makers working within the RAN and the SPOs. To highlight this facet, a common misconception espoused by some senior management personnel within the DMO is the observation that there is a Financial Management Act (FMA) that they are accountable to but there isn't an Engineering Management Act, or that the FMA somehow takes precedence over the other applicable legislative instruments. This particular argument doesn't hold in Queensland, which has a Professional Engineers Act 2002 requiring registration of practising Engineers providing Engineering services. While this does not directly affect military personnel, it does affect industry service providers. The Board of Professional Engineers of Queensland has recently proposed an amendment to broaden the definition of Engineering Services to include operations and maintenance [10].

References

[1] S.P. Robbins, B. Millett, R. Cacioppe, T. Waters-Marsh, Organisation Behaviour, 3rd ed., Prentice Hall, Frenchs Forest, 2011.
[2] K. Bartol, M. Tein, G. Mathews, D. Martin, Benefits of goals, in: A. Brackley du Bois (ed.), Management: A Pacific Rim Focus, 3rd ed., McGraw Hill Australia Pty Ltd, Macquarie Park, pp. 174–176, 2001.
[3] P. Rizzo, Plan to Reform Support Ship Repair and Management Practices, Department of Defence, Canberra, 2011.
[4] T. Muir, Abolish the DMO?, Australian Defence Magazine, June, 22(6), p. 12, 2014.
[5] D. Mortimer, Going to the Next Level – The Report of the Defence Procurement and Sustainment Review, Strategic Communications and Ministerial Services, DMO, Canberra, 2008.
[6] K. Ziesing, Time to let go of the Valley of Death and make a decision, Australian Defence Magazine, March, 22(3), p. 4, 2014.
[7] N.N., DI(N)LOG 47-3 – Regulation of Technical Integrity of Australian Defence Force Maritime Materiel, AL2 ed., Department of Defence (Navy Headquarters), Canberra, 2009.
[8] N.N., ABR 6492 – Navy Technical Regulations Manual (NTRM), AL1 ed., Director of Technical Regulations – Navy, Department of Defence, Canberra, 2003.
[9] N.N., DI(G) LOG 4-5-020 Defence Materiel Engineering and Maintenance Manual, 1st ed., Chief of Defence Force, Department of Defence, Canberra (ACT), 2013.
[10] N.N., Proposed Amendments to the Professional Engineers Act 2002, The Board of Professional Engineers of Queensland, Queensland Government, Brisbane (QLD), 2014.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-259

Application of Lean Methods into Aircraft Maintenance Processes

Borut POGAČNIK 1, Jože TAVČAR 2, Jože DUHOVNIK 2
1 Adria Airways Tehnika, MRO services, d.d., Zgornji Brnik 130h, 4210 Brnik Aerodrom, Slovenija
2 University of Ljubljana, Faculty of Mechanical Engineering, Aškerčeva 6, SI-1000 Ljubljana, Slovenija

Abstract. Aircraft Maintenance and Repair Organizations (MRO) have to be competitive and attractive for existing and new customers. Aircraft ground time at the MRO has to be as short as possible, as well as cost efficient, without reducing the quality of the accomplished work.
Applying lean methods to aircraft maintenance processes means a continuous improvement process and the elimination of non-value-added activities during the maintenance check. On the one hand there is an obligation to follow the prescribed procedures; on the other, there is pressure for time and cost reduction. The paper presents the application of lean methods to aircraft maintenance processes. A comprehensive study of lean methods was done in the first phase. Selected methods were then applied in pilot projects. The promising results focused subsequent activities on the optimization of logistics. Several conclusions from the pilot project can be generalized to similar processes and organizations.

Keywords. Lean methods, aircraft maintenance, MRO, value-added activities, concurrent engineering

Introduction

Since air traffic is growing rapidly, the MRO market is also growing. Many aircraft operators worldwide are buying new aircraft in order to reduce operational and maintenance costs. In recent years the fuel price has had a big impact on operational costs. Consequently, more pressure has been placed on the reduction of maintenance costs in order to keep profitability. MROs therefore have to find their internal reserves, optimize their internal processes and focus on satisfying their customers, with the goal that customers will return and ask for another check.
__________________________________________
1 Corresponding Author; E-mail: borut.pogacnik@aateh.si.

Generally, in the aircraft business the rise of new competitors will force continuous consolidation in the entire supply chain, as has already been experienced in the automotive industry during the past two decades [1]. On the other hand, the structure of an MRO has to be very well adapted to the specific aircraft type and the required maintenance type. Manpower has to be experienced, tooling has to be adapted, and the stock of spare material has to be matched to needs and expectations, based as much as possible on experience from previous checks. The MRO business is project-based and depends on the condition of the specific aircraft [2]. This means that the majority of the job is defect-based and cannot be completely predicted in advance. In some cases (on the basis of previous checks of similar aircraft, aircraft flight hours and flight cycles) a prediction of the condition of the aircraft is possible, but there can always be surprises, positive or negative. The lean manufacturing approach was originally developed for production environments with pre-defined job steps that repeat continuously through the manufacturing cycle [2]. Some adjustments therefore have to be made to the lean tools and principles to make them applicable to the MRO business. In any case, the goal is to improve the organization's performance on the operational metrics that make a competitive difference, by engaging employees in a hunt to eliminate waste. Before implementing lean thinking, it is recommended to measure the readiness of the enterprise for the introduction of lean into the current processes [3].

1. Basics of lean theory

There are five lean principles [4]:
• Value
• Value Stream
• Flow
• Pull
• Perfection
M. Jasiulewicz-Kaczmarek wrote that lean manufacturing is the practice of eliminating waste in every area of production, including customer relations (sales, delivery, billing, service and product satisfaction), product design, supplier networks, production flow, maintenance, engineering, quality assurance and factory management. Its goal is to utilize less human effort, less inventory, less time to respond to customer demand, less time to develop products and less space, to produce top-quality products in the most efficient and economical manner possible [5]. For C. Jagadees, lean maintenance is not a cost-cutting tool but a methodology to reduce waste and improve maintenance efficiency. Reducing waste leads to cutting costs, but cutting costs does not always lead to reducing waste [6]. The elimination of these wastes looks simple, but their identification is often difficult [7]. Applying lean methods to aircraft maintenance processes means applying continuous improvement to the aircraft maintenance process. One of the most important aspects of lean maintenance is developing an understanding of the maintenance process [8]. The goal is to minimize waste in terms of non-value-added activities, such as waiting time, motion time, set-up time, etc. [9].

Lean maintenance means the delivery of maintenance services to ultimate customers with as little waste as possible. This means the elimination of everything in the maintenance value stream that does not add value to the customer or the product [6]. Value creation and understanding value from the perspective of the ultimate customer are two basic items in lean [10]. For an MRO, the ultimate customer could be considered to be the aircraft operator, who wants the maintenance to be done on time and within budget; the pilot, who requires that all the equipment operates within specification all the time; the passenger, who requires the aircraft to depart on time and the entertainment system to operate; and the airworthiness authorities and manufacturers, whose standards have to be adhered to [4]. There are three different types of value activity within an organization [4]:
• Value-Adding Activities (VAA) – activities which are valuable through the eyes of the customer (the customer is prepared to pay for them).
• Non-Value-Adding Activities (NVAA) – activities which the customer considers non-valuable; they represent waste (Muda).
• Necessary but Non-Value-Adding Activities (NNVAA) – activities which the customer considers non-valuable, but which are necessary in the process.
Muda is the Japanese word for waste and is central to understanding value. Wastes are categorized into seven Muda types: Inventory, Motion, Over-production, Waiting, Processing, Corrections and Transportation. A value stream map is a tool that helps to visualize a system by representing its information and material flows [11].

2. MRO Environment

Basically, aircraft maintenance can be divided into Line Maintenance and Base Maintenance:
• Line Maintenance activities include pre-flight checks/technical assistance, daily/weekly checks, aircraft servicing, refueling/defueling assistance, de-/anti-icing, supervision control, coordination of unscheduled technical support, etc. These types of checks/activities usually require only short stoppages of the aircraft.
• Base Maintenance activities require a longer aircraft ground time and are planned in advance in accordance with aircraft flying hours/cycles. This group includes C-checks, D-checks, 6-year checks, 12-year checks and other heavy maintenance checks. Many unpredicted defects are usually found during these checks.
On the other hand, MROs can be classified using various criteria [12]:
• Classification of the MRO industry based on the 'type-function':
- Heavy Maintenance Visit
- Engine Overhaul
- Component Overhaul
- Line Maintenance
- Avionics
- Retro-fits and Conversions
• Classification of the MRO industry based on the organizational structure:
- Independent / third-party MRO organization
- Airline-operated / owned MRO organization

3. Analysis of a typical project in an MRO organization through lean eyes

The process analysis was done at a European MRO. Its capability list includes line maintenance checks, A-checks, C-checks, D-checks, 6Y-checks and 12Y-checks on the Airbus A320 family, the Bombardier CRJ100/200 and the Bombardier CRJ700/900/1000 family. Besides these, its capability list includes QEC (quick engine change) removal/installation and inspection, line maintenance and troubleshooting on V2500, CFM56-5, CF34-3 and CF34-8 aircraft engines. A 10-day C-check on an Airbus A321 aircraft was taken as the sample check for this analysis. The check itself was planned approximately a month in advance. The project members from the engineering, purchasing and operations departments were known two weeks before the aircraft came into the hangar, and the customer representative was announced two days before the start of the check. Work orders were checked ten days, and spare material was ordered one week, before the start of the check. Approximately 70 NRCs (Non-Routine Cards)/defect cards were raised during the check. The project was extended by one day.

As shown in Figure 1, the sample check was first analyzed in terms of the definition of project milestones and project phases. After that, the time schedule for the project milestones and phases within the time period given to the project was estimated. In the third step, the existing main tasks were analyzed in terms of their duration, and the ratio between VAA and NVAA was calculated. In the fourth step, the lead mechanics and the heads of each workshop completed a questionnaire collecting data on project anomalies (Muda) and project deficiencies; they were also asked about possible improvements for each Muda or project deficiency. In the last, fifth step, a new time schedule of the main tasks and a new VAA/NVAA ratio were calculated on the basis of the questionnaire results.

During the project analysis, all the implemented work activities were divided into two groups: VAA and NVAA. The VAA group included the following activities:
• Inspections – on the basis of these activities the customer is allowed to extend the aircraft's airworthiness, and therefore they represent VAA.
• Modifications – after the accomplishment of a requested modification, the customer expects a positive impact on the aircraft's D&C Rate (Delay and Cancellation Rate), ABTO Rate (Aborted Take-off Rate), IFSD Rate (In-Flight Shut-Down Rate), SV Rate (Shop Visit Rate), etc., and consequently lower operational costs; therefore these activities also represent VAA.
• Incoming defects – usually these defects are known from the recent past and were deferred until the aircraft maintenance check. At the time of the check the customer knew about them and wanted them eliminated; therefore these activities also represent VAA.
On the other hand, all the findings discovered during the check present new/additional costs for the customer and therefore belong to the NVAA group. The NVAA group includes all the activities in the following sub-groups:
• Acceptance
• Preparation
• Defect Rectification (Findings only)
• Close-up
• Tests

Figure 1. Analysis Flow Chart.

As shown in Figure 2, it was discovered that almost two thirds of all the work represents NVAA, and only one third of the implemented work represents VAA. A comparison of the working hours used in every project phase was made. A closer look at Figure 3 shows that the preparation and close-up phases together take 37.5% of the used project hours, which is more than all the hours used for VAA (34.35%) together. If the Findings part of the defect rectification phase is added, these activities represent 58% of the total project time.

Figure 2. Work activities ratio between VAA and NVAA, and percentage of time consumed by each phase of work.

Figure 3. Work activities sub-groups portions.
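To make the bookkeeping behind Figures 2 and 3 concrete, the following sketch classifies phase hours into VAA and NVAA and computes the shares quoted above. Only the group totals (34.35% VAA, the 37.5% preparation/close-up share and the 58% figure) are taken from the paper; the split inside each group is a hypothetical assumption for illustration, expressed as a percentage of total project hours.

```python
# A minimal sketch of the VAA/NVAA bookkeeping used in the analysis.
# Values are hypothetical percentages of total project hours; only the
# group totals match the figures reported in the paper.
VAA_PHASES = {"inspections": 20.00, "modifications": 8.00,
              "incoming defects": 6.35}
NVAA_PHASES = {"acceptance": 3.00, "preparation": 20.00,
               "defect rectification (findings)": 20.50,
               "close-up": 17.50, "tests": 4.65}

total = sum(VAA_PHASES.values()) + sum(NVAA_PHASES.values())  # 100.0

vaa = sum(VAA_PHASES.values()) / total * 100
prep_closeup = (NVAA_PHASES["preparation"] + NVAA_PHASES["close-up"]) / total * 100
with_findings = prep_closeup + NVAA_PHASES["defect rectification (findings)"] / total * 100

print(f"VAA: {vaa:.2f}%  NVAA: {100 - vaa:.2f}%")      # 34.35% / 65.65%
print(f"Preparation + close-up: {prep_closeup:.1f}%")  # 37.5%
print(f"... plus findings: {with_findings:.1f}%")      # 58.0%
```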
Further on, unnecessary project events, which have a negative impact on the working hours used and consequently on the final project price, were analyzed. The most prominent events, irrespective of the group to which they belong, were:
• The tooling loan was planned for the first day of the check, although it was known in advance that the tooling would be needed only on the third day of the check.
• Due to the simultaneous start of several projects (aircraft checks) on the same day, there was a lack of manpower in the first days of the project. Consequently, the preparation and inspection phases were completed one day later, and the ground time of the aircraft was extended by one day.
• Due to the aircraft's position in hangar 1, some workshops were not close to the aircraft (they were on the opposite side). Consequently, many parts transports and manpower motions were required during the inspection and re-assembly phases of the project. Figure 4 shows the routes from the aircraft to the composite and paint workshop and from the aircraft to the cabin and interior workshop [13].
• A lack of some consumable material was discovered in the middle of the check. It had to be ordered additionally at a higher priority level and was consequently more expensive.
• In the re-installation phase of the project, a few man-hours were lost waiting for material. The particular material could not be released from the store due to certificate issues.
• Based on the aircraft's flight hours/flight cycles and the job cards (inspections) known in advance, some spare material was ordered beforehand on the basis of previous experience and expectations, but it was not used during the project.
• On the last day, the aircraft's departure was delayed by a few additional hours because of uncompleted work orders, which could easily have been closed one day before the end of the project.

Figure 4. Analyzed aircraft position in hangar.

Looking through lean eyes, all the above events represent NVAA, or waste; as such they are unnecessary in the project and have to be eliminated from the process.

4. Corrective Actions

Generally, improvements and other changes are implemented during regular processes, and they must therefore be carried out rapidly to prevent delays [14]. In case of any interruption during the implementation of an improvement, the possibility of a quick reaction towards a final solution has to be available. It should be taken into account that process improvements always involve a high level of unpredictability [15]. Frequently, additional research, cooperation with external suppliers, customers and authorities, and other approvals are required. Figure 2 shows that the most time-consuming NVAA portions of the project are:
• the preparation phase
• the close-up phase
• the Findings part of the defect rectification phase
Therefore, the biggest impact on project working hours and cost savings, in combination with the elimination of unnecessary project events, can be found in these three phases. Various suggestions based on the questionnaire were examined. The most promising are listed below.

Preparation phase:
• Due to the types of inspections known in advance, and consequently the known requests for the removal of access panels and passenger seats, the aircraft should be positioned closer to the composite and cabin workshops, where the panels and passenger seats are inspected and repaired. The optimum position would be in hangar 2; hangar 1 should be used for other types of checks. The estimated time saving for access panel and passenger seat removal is more than 13%. This is quite a large number, but considering that all the passenger seats, cabin interior panels, floor panels, galleys and toilets would be closer to the composite and cabin workshops, the number becomes more plausible. This action reduces Muda of Transportation and Muda of Motion. Figure 5 shows the better aircraft location with respect to the check type.
• Due to the simultaneous (although unplanned) start of several projects on the same day, a lack of manpower occurred in the preparation and inspection phases. In every sub-phase of the preparation phase, some time can be saved with proper manpower planning. This action reduces Muda of Waiting.
• By starting the loan period on the day the tooling is actually required, the loan costs of the tooling can be reduced. This action reduces Muda of Over-production.

Close-up phase:
• As in the preparation phase, Muda of Transportation and Muda of Motion can be reduced by a better position of the aircraft in the hangar.
• A component certificate issue extended the installation time of the particular component by 2 hours. With the certificates settled, 2 hours, or 6.7% of the installation time of that component, could be gained. This action eliminates Muda of Waiting and Muda of Correction.
• Duly closed work orders reduce the time spent in the close-up phase. This action eliminates Muda of Waiting.

Findings part of the defect rectification phase:
• The delivery time of some spare material for defect rectification can be reduced by completing the inspection phase earlier (with sufficient manpower). This action eliminates Muda of Waiting.
• By eliminating the lack of consumable material, time and cost savings in the process are possible. Although the savings are difficult to estimate, having the material on time and in position represents a saving for the project. This action reduces Muda of Waiting.
• With the aim of being well prepared for the project, some spare material (a component) was ordered on the basis of previous experience. However, it was not used on the project afterwards, because there was no finding. The component was returned to the supplier, and two-way transportation costs and a return fee to the supplier had to be paid. Given the planned 10-day check, the particular inspection could have been done at the very start of the project, and the potential failure of the component confirmed or rejected in the first days, which would still have allowed the component to be delivered on time. This action eliminates Muda of Transportation and Muda of Inventory.

Figure 5. New aircraft position in hangar.

As shown in Figure 6, after the implementation of the corrective actions, the new VAA represents 36.62% of all the activities, and NVAA is reduced to 63.38% of all the project activities.

Figure 6. Work activities ratio between new VAA and new NVAA.

5. Conclusion

By the above-mentioned actions, 112 working hours could be saved on the particular project, i.e. the project time could be reduced by 6.2%. As shown in Table 1, the ratio between VAA and NVAA changes by more than 2 percentage points in favour of VAA.

Table 1. Corrective actions overview.

ACTIVITY                                      VAA [%]   NVAA [%]
Before implementation of corrective actions    34.35     65.65
After implementation of corrective actions     36.62     63.38
CHANGE                                         +2.27     -2.27
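As a quick plausibility check on these figures (a back-of-the-envelope sketch; the total project hours are not stated in the paper and are inferred here from the reported 112 hours being 6.2% of the project time): if all 112 saved hours come out of NVAA, the VAA hours stay constant while the total shrinks, which reproduces the new ratio in Table 1.

```python
# Back-of-the-envelope check of Table 1. The total is inferred from the
# reported "112 working hours = 6.2% of project time"; it is not stated
# in the paper.
saved_hours = 112
total_before = saved_hours / 0.062          # ~1806 project hours (inferred)

vaa_hours = 0.3435 * total_before           # VAA share before: 34.35%
total_after = total_before - saved_hours    # all savings come out of NVAA

vaa_after = vaa_hours / total_after * 100
print(f"total: {total_before:.0f} h -> {total_after:.0f} h")
print(f"VAA after: {vaa_after:.2f}%  NVAA after: {100 - vaa_after:.2f}%")
# -> VAA after: 36.62%, NVAA after: 63.38%, matching Table 1
```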
This analysis presents the benefits of implementing lean in an MRO organization. It is important that the changes do not compromise the quality of the service, and indeed that the changes in the processes positively affect quality. In any case, lean implementation in MRO processes requires close cooperation between the involved company departments, such as the quality, marketing, engineering, operations and purchasing departments, as well as top management. Besides this, all employees must be aware of the importance of the never-ending implementation of improvements in MRO processes. By carefully defining the inputs that introduce waste and NVAA into the process, further improvements of the VAA/NVAA ratio are possible. With the implementation of optimization tools such as genetic algorithms, various wastes can be eliminated from the process, and the best solutions for each individual project, in terms of aircraft position in the hangar, spare material, required tooling, manpower, etc., can be planned in advance with a higher level of reliability.

6. References

[1] R. Curran, X. Zhao and W.J.C. Verhagen, Concurrent engineering and integrated aircraft design, in: J. Stjepandić, N. Wognum, W.J.C. Verhagen (eds.), Concurrent Engineering in the 21st Century: Foundations, Developments and Challenges, Springer International Publishing, Cham, pp. 571-605, 2015.
[2] G. Nanova, L. Dimitrov, T. Neshkov, C. Apostolopoulos, P.T. Savvopoulos, Lean manufacturing approach in aircraft maintenance repair and overhaul, Recent, Vol. 13, No. 3(36), November, pp. 330-339, 2012.
[3] A. Al-Ashaab et al., Lean product development performance measurement tool, Proceedings of the 11th International Conference on Manufacturing Research (ICMR2013), Advances in Manufacturing Technology XXVII, 19-20 September 2013, Cranfield, 2013.
[4] S. Murphy, The Status of Lean Implementation within South African Aircraft Maintenance Organisations, Johannesburg, 2011.
[5] M. Jasiulewicz-Kaczmarek, Integrating lean and green paradigms in maintenance management, Preprints of the 19th World Congress, The International Federation of Automatic Control, Cape Town, South Africa, August 24-29, 2014, pp. 4471-4476.
[6] C. Jagadees, Enhancing equipment availability and production efficiency with lean maintenance and its application in oil production process, IORS 2015, Oil and Natural Gas Corporation Limited, Mumbai, India, 2015.
[7] G.F. Barbosa, J. Carvalho, E.V.G. Filho, A proper framework for design of aircraft production system based on lean manufacturing principles focusing to automated processes, The International Journal of Advanced Manufacturing Technology, Vol. 72, Issue 9-12, pp. 1257-1273, 2014.
[8] G. Clarke, G. Mulryan, P. Liggan, Lean maintenance – a risk-based approach, The Official Magazine of ISPE, Vol. 30, No. 5, September/October 2010.
[9] S. Kolanjiappan, K. Maran, Lean philosophy in aircraft maintenance, Journal of Management Research and Development, Vol. 1, No. 1, January-December 2011, pp. 27-41.
[10] M.E. Johnson, S.I. Dubikovsky, Incorporating Lean Six Sigma into an Aviation Technology Program, Purdue University, Department of Aviation Technology, West Lafayette, Indiana, USA, 2010.
[11] S. Kannan, Y. Li, N. Ahmed, Z. El-Akkad, Developing a Maintenance Value Stream Map, Department of Industrial and Information Engineering, The University of Tennessee, Knoxville, 2010.
[12] P. Ayeni, T. Baines, H. Lightfoot, P. Ball, State-of-the-art of 'lean' in the aviation maintenance repair overhaul industry, Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, Vol. 225, No. 11, November 2011, pp. 2108-2123.
[13] N.N., Adria Tehnika, Part-145 Approved Maintenance Organisation, Certificate No. SI.145.100, Adria Airways Tehnika, vzdrževanje letal d.d., Zgornji Brnik 130h, 4210 Brnik Aerodrom, Slovenia, MOE – Maintenance Organisation Exposition, Rev. 10, 4 July 2014.
[14] J. Tavčar, J. Duhovnik, Engineering change management in individual and mass production, Robotics and Computer-Integrated Manufacturing, Vol. 21, No. 3, pp. 205-215, 2005.
[15] J. Duhovnik, J. Tavčar, Concurrent engineering in machinery, in: J. Stjepandić, N. Wognum, W.J.C. Verhagen (eds.), Concurrent Engineering in the 21st Century: Foundations, Developments and Challenges, Springer International Publishing, Cham, pp. 639-670, 2015.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-269

A Supporting Model for the Dynamic Formation of Supplier Networks

Kellyn Crhis TEIXEIRA 1 and Milton BORSATO 2
Federal University of Technology – Paraná, Av. Sete de Setembro 3165, Curitiba, PR 80230-901, Brazil

Abstract. Supply chains have become an important focus for competitive advantage. The performance of a company increasingly depends on its ability to maintain effective and efficient relationships with its suppliers and customers. The extended enterprise (i.e. one composed of several partners) needs to be formed dynamically in order to be agile and adaptable. According to the Digital Manufacturing paradigm, companies have to be able to quickly share and disseminate information regarding the planning, design and manufacturing of products. Additionally, they must be responsive to all technical and business determinants, as well as be assessed and certified for guaranteed performance. The current research intends to present a solution for the dynamic composition of the extended enterprise, formed to take advantage of market opportunities quickly and efficiently. A protocol model has been elaborated, inspired by the OSI reference model and with reference to the Supply Chain Operations Reference model (SCOR®). This model provides a framework for linking customers and suppliers. It is presented in the form of seven layers that relate to the steps for negotiating the participation of candidate companies in the dynamic establishment of a network for responding to a given demand for developing and manufacturing products, as follows: request for information; request for qualification; alignment of strategy; request for proposal; request for quotation; compatibility of process; and compatibility of system. An information model has been defined, based on the concepts of SCOR® as well. The protocol model has been implemented by means of process modeling according to the BPMN standard and, in turn, as a web-based application that runs the process through its several steps, using forms to gather data. An application example in the context of the oil and gas industry is used to demonstrate the solution concept.

Keywords. Supply chain, SCOR, model-based enterprise

1. Introduction

Supply chains have become an important focus for competitive advantage. The performance of a single company increasingly depends on its ability to maintain effective and efficient relationships with suppliers and customers [1].
1 Corresponding author. Tel.: +55-41-3248-6744; mobile: +55-41-9631-2656; e-mail: kellyncrhis@hotmail.com.
2 Corresponding author. Tel.: +55-41-3029-0609; e-mail: borsato@utfpr.edu.br.
Extended Enterprise is the most descriptive term highlighting the managerial aspects of
It proposes a scenario of great agility and adaptability, where an extended enterprise is evaluated, certified for assured performance and built from numerous best-in-class suppliers. Its operation would allow information sharing, costeffective and accelerated design, and optimized manufacturing processes. One of the core problems in the dynamic formation of supply chains is the proper selection of partners to realize the goal of the whole network. Developing appropriate supply chain strategies that align effective supply chain practices with information quality can be challenging [1], as stakeholders from the supply and demand side are brought together to share and understand design information [4]. Most of the research studies conducted with in the major topic of supply chain formation focus on costs. Kim and Cho [4] present a method for the supply chain formation problem by using negotiation agents for information sharing. Both internal and external factors are considered for making a decision. All members are rewarded simultaneously and thus consequently accelerate performance of the whole supply chain. This enables resource allocation and pricing to be made more efficiently. Negotiation is grounded in costs, which is verified with suppliers and manufacturers. Best combinations are chosen as to obtain lower cost values. Kim and Segev [5] propose a mechanism, named Multi-Component Contingent Auction (MCCA) for combining the suppliers with the minimum total cost. A major problem in MCCA is the number of computations required for winner determination. The current research intends to present a solution for the composition of the extended enterprise, formed to take advantage of market opportunities quickly and efficiently. The research aims to answer the following question considering the above problems: what would a model, which can be used for dynamically forming supplier networks, look like? This work has as contribution a proposal of protocol model, information model and process model. The present article is structured as follows: Section 1 presents introduction, Section 2 shows theoretical background, Section 3 details the methodological aspects, Section 4 shows results and discussion and Section 5 presents conclusion. 2. Theoretical Background The extended enterprise (EE) framework arose in high-tech industries with large chains of suppliers to face the current challenges related to innovation and competition in complex scenarios [6]. EE is defined as a long-term cooperation and partnership based on information and knowledge exchange and the coordination of the manufacturing activities of collaborating independent enterprises and related suppliers. Enterprise modeling is considered as the process of building models from the whole or part of the enterprise such as process models, data models, resource models etc [7]. In this context, different modeling approaches such as OSI, SCOR and BPMN can be combined and applied. OSI is a standard description or a reference model. It is a conceptual blueprint of how communication should take place. On the other hand, SCOR is a supply chain reference model with standardized terminology and processes. K.C. Teixeira and M. Borsato / A Supporting Model for the Dynamic Formation of Supplier Networks 271 And BPMN is a standard notation for capturing business processes and graphically representing them. These approaches are detailed in the following sections. 2.1. 
2. Theoretical Background

The extended enterprise (EE) framework arose in high-tech industries with large chains of suppliers, to face the current challenges related to innovation and competition in complex scenarios [6]. EE is defined as a long-term cooperation and partnership based on information and knowledge exchange and on the coordination of the manufacturing activities of collaborating independent enterprises and related suppliers. Enterprise modeling is considered the process of building models of the whole or part of the enterprise, such as process models, data models, resource models, etc. [7]. In this context, different modeling approaches such as OSI, SCOR and BPMN can be combined and applied. OSI is a standard description, or reference model: a conceptual blueprint of how communication should take place. SCOR, on the other hand, is a supply chain reference model with standardized terminology and processes. And BPMN is a standard notation for capturing business processes and representing them graphically. These approaches are detailed in the following sections.

2.1. OSI Model

ISO (the International Organization for Standardization) introduced the OSI (Open System Interconnection) standard in 1984. The model is intended to make network troubleshooting faster and easier [8]. OSI is a standard description, or reference model, defining how messages should be transmitted between any two points in a telecommunication network. A reference model is a conceptual blueprint of how communication should take place. It addresses all the processes required for effective communication and divides them into logical groupings called layers. When a communication system is designed in this manner, it is known as a layered architecture. OSI is not a physical model, though; rather, it is a set of guidelines that application developers use to create and implement applications that run on a network. It also provides a framework for creating and implementing networking standards, devices and internetworking schemes [9].

2.2. SCOR

The Supply Chain Operations Reference model (SCOR®) is the product of the Supply Chain Council, Inc. (SCC), a global non-profit consortium whose methodology and benchmarking tools help organizations make dramatic and rapid improvements in supply chain processes. SCC membership is open to all companies and organizations interested in applying and advancing the state of the art and its practices [10]. SCOR is a reference model with standardized terminology and processes. It was first defined in 1996, and since then several companies have adopted the SCOR model and methodology [11]. SCOR provides a unique framework that links business processes, metrics, best practices and technology into a unified structure, to support communication among supply chain partners and to improve the effectiveness of supply chain management and of related supply chain improvement activities [10]. Currently, SCOR is in Version 11.0. Revisions of the model are made when SCC members determine that changes should be made to facilitate the use of the model in practice. The SCOR model has been associated with phases for fulfilling a customer's demand and is organized around six primary management processes: Plan, Source, Make, Deliver, Return and Enable [10].

2.3. BPMN

Business Process Model and Notation (BPMN) is the de facto standard for representing, in a very expressive graphical way, the processes occurring in virtually every kind of organization [12]. BPMN is a standard notation for capturing business processes, especially at the level of domain analysis and high-level systems design [13]. To obtain more agility and efficiency, a higher degree of automation is required [14]. Software tools are used for representing BPMN. Bonita, a software suite, has been pointed out as one of the best open-source tools for modeling and publishing BPMN 2.0 processes, based on a survey conducted among the members of a LinkedIn group related to BPMN [12].

3. Methodological Aspects

The present work has been conducted in four phases, as described by the flowchart in Figure 1. In phase 1, literature research was carried out on the topics of the OSI model, SCOR and BPMN. In phase 2, a protocol model was drawn up. This protocol model is a diagram that indicates the steps to be followed in order to select companies as suppliers. Next, the requirements and information needed to assess suppliers were defined, based on SCOR, resulting in an information model.
Still in this phase, the negotiation process was modeled in BPMN with the community edition of the software tool Bonita [15], which allows its implementation in a web-like environment. In phase 3, a product related to the oil and gas industry was selected as an application example. Information regarding the singularities of this context was gathered for use in the example. A simulation of the modeled process was carried out using the automated process created in Bonita, from the moment a given demand is initiated by the customer until the selection of suppliers is accomplished. In phase 4, the analysis of the results was carried out.

Figure 1. Flowchart of research activities.

4. Results and Discussion

The resulting protocol model provides a framework for linking customers and suppliers. This protocol model, represented in Figure 2, was elaborated and inspired by the OSI reference model. In this model, seven layers were defined: Request for information; Request for qualification; Alignment of strategy; Request for proposal; Request for quotation; Compatibility of process; and Compatibility of system.

Figure 2. Protocol Model.

The layers in the protocol model represent the steps for negotiating the participation of candidate companies in the dynamic establishment of a network for responding to a given demand for the development and manufacturing of products. Candidate companies move on to the next step each time they are approved, until they are finally approved to fulfill the demand. A brief explanation of each layer follows:
• Request for information (RFI): in this step, information is collected about which products and services are to be provided, in order to verify whether they meet a business need;
• Request for qualification (RFQ): in this step, information is collected on the qualifications of suppliers, specifically certifications for the production processes, and on whether environmental requirements are fulfilled;
• Alignment of strategy: in this step, information is verified about the mission, vision and values of the supplier, and whether they are aligned with those of the customer company;
• Request for proposal: in this step, information is verified about the companies' history, financial information, expertise, compliance with technical specifications, and time-to-supply requirements;
• Request for quotation: in this step, the prices of products or services are checked;
• Compatibility of process: in this step, information is collected on the compatibility of production processes;
• Compatibility of system: in this step, information related to the compatibility between IT systems is checked.
The contents of the negotiation with the supplier are based on the Supply Chain Operations Reference model (SCOR®). As a result, an information model for all negotiation steps has been built. For example, Table 1 describes the information model developed for the Alignment of Strategy layer. Process mapping was performed according to BPMN. The following activities were modeled: demand presentation by the client; receipt and analysis of the demand; definition of the specifications related to product and process; as well as the steps involved in supplier selection.

Table 1. Information model of the layer named Alignment of Strategy.
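A minimal sketch of how the layered negotiation above could be driven in software is given below. This is a hypothetical illustration of the gating logic, not the authors' Bonita implementation; the layer names come from the protocol model, while the evaluation function and the supplier data are invented placeholders.

```python
# Hypothetical sketch of the seven-layer gating logic of the protocol model.
LAYERS = [
    "Request for information", "Request for qualification",
    "Alignment of strategy", "Request for proposal",
    "Request for quotation", "Compatibility of process",
    "Compatibility of system",
]

def evaluate(candidate: dict, layer: str) -> bool:
    """Placeholder for the procurement professional's per-layer review;
    here each candidate simply carries a pre-set pass/fail per layer."""
    return candidate["approvals"].get(layer, False)

def negotiate(candidate: dict) -> tuple:
    """Advance the candidate layer by layer; stop at the first rejection."""
    for layer in LAYERS:
        if not evaluate(candidate, layer):
            return (False, f"rejected at layer: {layer}")
    return (True, "approved to fulfill the demand")

supplier = {"name": "Supplier A",
            "approvals": {l: True for l in LAYERS[:4]}}   # fails at layer 5
print(negotiate(supplier))
# -> (False, 'rejected at layer: Request for quotation')
```

A candidate that returns True at every layer would, in the terms used above, be finally approved to fulfill the demand; a rejected candidate exits the negotiation at the reported layer.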
The overall process was then broken into subprocesses: Demand; RFI; RFQ; Alignment of Strategy; Request for Proposal; Request for Quotation; Compatibility of Process; and Compatibility of System. The resulting business process model is presented in Figure 3.

Figure 3. Overall process model.

Next, the subprocesses were detailed, and the individual tasks were assigned to actors, as represented in Figure 4. It is then possible to include forms related to the tasks (Figure 5), which map directly to the information model created previously.

Figure 4. Representation of tasks and actors in a subprocess.

Figure 5. Inclusion of forms associated with tasks.

After the process and subprocesses are modeled, and the tasks, actors and forms are defined, it is possible to "run the process" in the form of a web portal, as depicted in Figures 6 and 7.

Figure 6. "Running" the process.

Figure 7. Starting the process.

An application of the process and information model was conducted. For that purpose, a common product in the oil and gas industry was selected: a transportation skid. In this case, a transportation skid is to be designed to interface with the Ocean Epoch's skidding system [16]. In the first step of the example, the customer accesses the web portal and defines the demand. The customer can describe the product and the quantity, and upload a file with details of the demand in order to assist the analysis that follows. The next task is performed by the tender professional, who examines whether the customer's proposal is part of the company's scope of action. This actor accesses the data and indicates whether the opportunity is related to the scope of the company. If the opportunity is not related to the company, the tender professional sends a message to the customer through the portal, and the customer can access the answer. If the opportunity is related to the company, the next task is sent to Engineering, which decides whether it is technically feasible. If so, the process proceeds through the tasks related to product and process specifications, as the accountable actors get engaged and respond. With the specifications defined, the company starts to contact candidates to become suppliers. The negotiation is performed through tasks and accessed through the web portal. In each step, the supplier fills in the forms with the required information; after a procurement professional examines the information, he or she selects whether or not the supplier is approved for the next step. If the supplier is not approved for the next step, the procurement professional sends a message that the supplier can access via the web portal. If the supplier is approved for the next step, a new task becomes available for accomplishment.

Figure 8. Tasks related to the Supplier – subprocess Alignment of Strategy.

Figures 8 and 9 show the negotiation in the Alignment of Strategy step. Figure 8 shows the tasks (1, 2, 3) accomplished by the supplier, whereas Figure 9 shows the tasks (4, 5, 6, 7, 8) accomplished by the procurement professional. After all the subprocesses are completed, a given supplier may actually be considered part of the network to be engaged for the given demand.

Figure 9. Tasks in subprocess Alignment of Strategy, to be assigned to Procurement.
5. Conclusion

In the present work, a layered protocol model has been developed to support the dynamic formation of supply networks, triggered by a given demand. As the negotiation advances through its steps, information about the demand is revealed only as needed, and is completely exposed only once the selection of suppliers is complete. Using SCOR as a basis for building the information model allows companies to develop internal procedures that support standards for supply chain operations. In addition, BPMN can support such standards and allow the integration of processes between customers and suppliers within the Extended Enterprise. Modeling business processes in tools like Bonita also allows the direct creation of software applications that are driven by customized processes. Nevertheless, some limitations have been identified as to how adherent to BPMN tools such as Bonita presently are. For example, simulating negotiation processes with several suppliers simultaneously (i.e. multiple instances) is an issue yet to be solved. In the case of the application example used for validation, each supplier would have to run a separate instance of the process, not necessarily simultaneously. For future work, this interaction of competing suppliers would be desirable.

References

[1] J. Leukel, V. Sugumaran, Formal correctness of supply chain design, Decision Support Systems, Vol. 56, 2013, pp. 288-299.
[2] N.A. Dobrynin, Extended enterprise as a new context for manufacturing, Vestnik Samara State University of Economics, Vol. 5, 2010, pp. 15-16.
[3] IMTI, Manufacturing Success in the 21st Century: A Strategic View, IMTI, Inc., Oak Ridge, Tennessee, 2000.
[4] H.S. Kim, J.H. Cho, Supply chain formation using agent negotiation, Decision Support Systems, Vol. 49, 2010, pp. 77-90.
[5] J.B. Kim, A. Segev, Multi-component contingent auction (MCCA): a procurement mechanism for dynamic formation of supply networks, International Conference on Electronic Commerce, Vol. 5, 2003, pp. 78-86.
[6] S. Alguezaui, R. Filieri, A knowledge-based view of the extending enterprise for enhancing a collaborative innovation advantage, Int. J. Agile Systems and Management, Vol. 7, 2014, pp. 116-131.
[7] E.I. Neaga, J.A. Harding, An enterprise modeling and integration framework based on knowledge discovery and data mining, Int. Journal of Production Research, Vol. 43, 2005, pp. 1089-1108.
[8] M. Kayri, I. Kayri, A proposed "OSI based" network troubles identification model, International Journal of Next-Generation Networks (IJNGN), Vol. 2, No. 3, September 2010.
[9] G. Bora, S. Bora, S. Singh, S.M. Arsalan, OSI reference model: an overview, International Journal of Computer Trends and Technology (IJCTT), Vol. 7, No. 4, January 2014.
[10] Supply Chain Council, Inc., SCOR: The Supply Chain Reference, United States of America, 2012.
[11] F. Persson, SCOR template – a simulation-based dynamic supply chain analysis tool, Int. J. Production Economics, Vol. 131, 2011, pp. 288-294.
[12] M. Chinosi, A. Trombetta, BPMN: an introduction to the standard, Computer Standards & Interfaces, Vol. 34, 2012, pp. 124-134.
[13] R.M. Dijkman, M. Dumas, C. Ouyang, Semantics and analysis of business process models in BPMN, Information and Software Technology, Vol. 50, 2008, pp. 1281-1294.
[14] P. Moynihan, W. Dai, Agile supply chain management: a services system approach, Int. J.
Agile Systems and Management, Vol. 4, No. 3, 2011, pp. 280-300.
[15] Bonitasoft, Bonita Open Source Workflow & BPM software, http://br.bonitasoft.com/, accessed 22 Oct. 2014.
[16] AME Offshore Solutions, http://www.amepl.com.au/theme/ameplcomau/assets/public/File/pdf/c4ca4238a0b923820dcc509a6f75849b_1.pdf, accessed 1 March 2015.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-279

Data Flow To Manufacturing Simultaneous With Design Phase

Dilşad ILTER 1, Gülden ŞENALTUN 2 and Can CANGELIR 3
Turkish Aerospace Industries Inc., Ankara, Turkey 06980
1 Corresponding Author. E-mail: dilsad.ilter@tai.com.tr
2 Corresponding Author. E-mail: gsenaltun@tai.com.tr
3 Corresponding Author. E-mail: ccangelir@tai.com.tr

Abstract. Designing and producing an item of the best quality is a common concern for companies. Decreasing the time consumed in the design and production stages is a newer concern in an accelerating world. The design and production phases cannot simply be accelerated without conceding quality, so managing the design- and production-related stages in parallel is an option for reducing the time consumed. The concurrent engineering philosophy suggests different ways to achieve this goal. In the traditional approach, production and the planning of production-related stages (NC programming, tool design, raw material procurement) start after the design phase is completed. This approach causes time loss, and project management struggles to shorten the project schedule. At this point, concurrent engineering offers a solution that enables the procurement of raw materials and the making of production arrangements (NC programming, tool design, manufacturing planning) in accordance with data flowing from design while the project is still in the design phase. A tight project schedule can be relieved in two ways with this concurrent engineering approach. The first is backdating production arrangements and ordering raw materials earlier, while the project is still in the design phase. The second is taking preventive actions by discovering prospective design-related problems early. In this paper, the management of the data flow from the design department to the manufacturing department is studied. In order to obtain the best outcomes, the management of this data flow with a PDM (Product Data Management) tool is clearly stated in terms of requirements, restrictions, and the type and planning of data. In addition, the management of changes in the data shared with the manufacturing planning department is studied. All the topics studied in this paper are supported with an industry implementation.

Keywords. concurrent engineering, project management, product lifecycle management (PLM), product data management (PDM), information flow

Introduction

Concurrent Engineering (CE) spans the complete product lifecycle from feasibility and concept studies through manufacturing and marketing to disposal and recycling, including quality, cost, schedule and user requirements. Concurrent Engineering is defined by the Institute of Defense Analysis (IDA) as "the systematic approach to the integrated concurrent design of products and related processes, including manufacture and support". Thus, product lifecycle management confronts the need to balance a fast response to changing customer demands with competitive pressure to seek cost
reductions in sourcing, manufacturing and distribution ([1]; [2]; [3]; [4]). In the traditional approach, these stages occur in succession. This sequence causes a longer product lifecycle and exceeded budget limitations. In an accelerating world, companies' goals have focused on time and money, which leads to producing in the shortest time at minimum cost. With this evolution, after the 1970s (the era of mechanization) and 1975 (the era of departmentalization) [5], different disciplines started working together on project schedules. At this point, concurrent engineering suggests managing the product lifecycle simultaneously. Simultaneous management of the product lifecycle helps project management to shorten the project schedule and gain time for unpredictable issues. A coordinated working environment between departments and a shortened product lifecycle contribute to the profit of the company.

In our company, changes in design data are managed with revisions. When a part is first created, it is saved in our PDM tool with revision "A". The designer can work on revision "A" until the part gets a status. If the designer needs to make any changes to the part after it gets "Released" status, the revision of the part is progressed to the next revision, "B". With this process, changes to a part are recorded in the PDM tool and can be easily accessed.

Projects follow a schedule in order to deliver their end product on time. Within this schedule, project management may identify some critical processes. In order not to cause schedule delays, project management reschedules a critical process by undertaking all the associated responsibilities. In our organization, this rescheduling, i.e. backdating pre-production processes such as NC programming and tool design, was conducted before the related part was released, and manufacturing engineers communicated with design engineers via e-mail. Manufacturing engineering obtained its information from the PDM tool. The problem was staying informed about changes to a part. Since the part is not yet "Released", it is possible that the designer is still working on it. If manufacturing engineering checks the part today, it is not certain that the part will stay the same in the following days. Manufacturing engineering can only be informed by design engineering, and the design engineer may forget to inform the manufacturing engineer. In order to prevent errors due to this lack of information, manufacturing engineering had to compare the design data from two different days. Because of this control phase, there were additional workload and time losses in the manufacturing engineering processes. To get rid of this additional work and to have a more systematic data flow, the "Data Drop" process is proposed. With this process, revision management is done in the PDM tool, and manufacturing engineering can work on its planning without having to check the design data several times. In this paper, the "Data Drop" process is studied.

1. "Data Drop" Process

The product lifecycle is composed of different phases (see Figure 1), and each phase has specific activities that contribute to building the product. Manufacturing-related activities start after CDR (Critical Design Review), during the development phase. However, design changes may be needed due to manufacturing requirements. If all manufacturing-related activities start only after CDR, there will be a huge number of changes, which causes a longer product lifecycle.
In order to avoid these problems and shorten the product lifecycle, concurrent engineering suggests different methods of designing and developing products. With these concurrent engineering approaches, the number of changes decreases and the product lifecycle shortens.

The purpose of "Data Drop" is to prepare preliminary manufacturing data while the design activities are still proceeding. In order to start the manufacturing arrangements, the related departments agree on the required data during the preliminary and detailed design phases, although this data is used in the development phase. Thus, development phase processes can be backdated.

Figure 1. Product Lifecycle Phases [6].

With this process, some manufacturing activities, like tool design, can be started earlier. Departments work concurrently, with controlled data transactions, and technical mistakes can be discovered at early stages. The time saved in the project schedule with the implementation of "Data Drop" can be seen in Figure 2.

Figure 2. Time comparison between the traditional and the concurrent approach.

"Data Drop" imitates the Kanban principle for scheduling the information needed. Kanban is a visual replenishment signaling system that effectively connects the supplying and consuming processes that exist throughout the entire value stream [7]. In the literature, scheduling can be done in two ways: backward scheduling or forward scheduling. Forward scheduling starts as soon as the requirements are known, while backward scheduling begins with the due date and schedules the final operation first. "Data Drop" information planning is based on backward planning. In order to prepare the manufacturing planning, the manufacturing engineering department needs information. This information should be shaped and documented through collaborative work between design engineering and manufacturing engineering. The details of the information transactions are kept in the "Stage Management Plan".

Since the information prepared at different times will vary in its degree of design completeness, "Data Drop" can be done in stages. The number of stages and their contents can differ from case to case; these differences are caused by the differing maturity of the design, which defines its development. For this case, a two-stage flow is defined as an example.

At stage 1, 3D modelling has not yet been completed, but information for the following activities can be provided:
• Procurement of raw material
• Freezing the Edge of Part (EOP)
• Freezing the surface geometry of designed metal parts for the preparation of casting molds
• Jig and tool design
• Manufacturing processes and NC programming
• NC programming and mold design for CFC (Carbon Fiber Composite)

At stage 2, 3D modelling is completed, so information for the following activities can be provided:
• Jig and tool manufacturing and verification
• Preparation and verification of manufacturing and assembly processes
• Verification and confirmation of the NC programming design
• Freezing the BOM for detailed part production or material planning (especially the procurement of materials that have a lead time longer than 2 months)
• Freezing dimensions for raw material procurement

After the project schedule is finalized and the resource planning is clarified, the overlap between the schedules of the design activities and of tool design, NC programming and procurement is examined. This examination helps to find 'Long Lead Items'. With respect to this information, the "Data Drop" plannings for these items are set so that the project can stay on schedule; a small sketch of this backward computation follows below. Also, the design groups should support this process by giving priority to providing the required data at the planned time.
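The sketch below shows the backward computation under hypothetical dates and lead times (the 16-week material lead time echoes the composite-part example in Section 4.1): starting from the planned production date, each activity's lead time is subtracted to find the date by which its input data must be dropped.

```python
from datetime import date, timedelta

# Backward-scheduling sketch: subtract lead times from the production date
# to find when each "Data Drop" must happen. All values are hypothetical.
production_start = date(2015, 9, 1)

lead_times_weeks = {                  # activity -> weeks before production
    "raw material procurement": 16,   # long-lead composite material
    "jig and tool design": 6,
    "NC programming": 3,
}

for activity, weeks in sorted(lead_times_weeks.items(),
                              key=lambda kv: -kv[1]):
    drop_by = production_start - timedelta(weeks=weeks)
    print(f"data for {activity!r} must be dropped by {drop_by}")
# The earliest date is driven by the 'Long Lead Items' and fixes Data Drop 1;
# later drops can carry the more mature design data.
```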
The "Data Drop" schedule is set starting from the end of the project schedule and going back to the beginning. Figure 3 shows a project schedule in which the production date of a part has been set.

Figure 3. Manufacturing Planning.

With respect to Figure 3, it can be determined which information is needed and when it can be clarified. This helps to set the "Data Drop" schedule. With a schedule study, the time at which design can provide the needed information can be compared with the time at which that information is needed, so that an optimized schedule of activities like tool design and tool manufacturing can be set. After this study, and considering the progress of the design, the timing of the information needed to conduct the tool manufacturing activities can be optimized. The "Stage Management Plan" can be prepared with this information (see Figure 5).

Figure 4. "Data Drop" Schedule.

2. Stage Management Plan

The "Stage Management Plan" defines:
• the content of the data to be published,
• the risks that are taken with the use of this data, and
• the pre-manufacturing arrangements.
The "Stage Management Plan" is prepared by the Manufacturing Engineering Department, acknowledged by the Design Engineering and Manufacturing Engineering Departments, and authorized by Project Management and Program Management. The advantage of the "Stage Management Plan" is that it makes updates of the plan's content official if the industrial process changes. In the "Stage Management Plan", all parts are grouped with respect to their types, such as ribs, bearings, frames and composite parts. The requirements of detail parts and of assemblies are set separately. The plan should be updated when the following cases occur:
• there is a change that affects data established in a "Data Drop";
• there is a schedule update of the project that affects the required data;
• a new "Data Drop" is needed because of a change.
If a risk was stated beforehand and a proposed risky change is related to several parts stated in the "Stage Management Plan", that plan can be revoked by mutual decision.

Figure 5. Example of a Stage Management Plan.

3. "Data Drop" Management in PDM

"Dolezal states that product structure is composed of main and sub-components in a hierarchical way. It refers to system architecture, internal structure, and relationship of system components and associated configuration documentation" ([6]; [8]). The product structure lives throughout the whole product lifecycle. All parts are managed with revisions in the product structure. When the design of a part is completed, the designer starts a workflow. The other departments examine and approve the part during this workflow. When the workflow ends, the part gets "Released" status. If a design change is needed after the part gets "Released" status, the revision number/letter increases (Figure 6). The latest revision of the design data is connected to the product structure. However, a "Data Drop" is a snapshot at a specific time, and this snapshot should be managed without affecting the product structure. To solve this problem, there should be a separate revision for the "Data Drop" information, and that revision should not be connected to the product structure. This revision is created in the PDM tool with a combination of a letter and a number, such as "A01"; if new "Data Drop" information is needed, the new "Data Drop" revision is created as "A02". An example of this can be seen in Figure 6.
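A minimal sketch of this dual revision scheme follows (a hypothetical data structure, not the actual PDM tool's API; whether the snapshot numbering restarts for each design revision is an assumption made here for illustration):

```python
# Hypothetical sketch of design revisions ("A", "B", ...) connected to the
# product structure vs. "Data Drop" snapshot revisions ("A01", "A02", ...)
# that are deliberately not connected to it.
class PartRevisions:
    def __init__(self, part_number: str):
        self.part_number = part_number
        self.design_rev = "A"    # connected to the product structure
        self.drop_seq = 0        # counter for snapshot revisions

    def revise_after_release(self) -> str:
        """Change after 'Released' status: progress A -> B -> C ...;
        the snapshot numbering restarts for the new design revision."""
        self.design_rev = chr(ord(self.design_rev) + 1)
        self.drop_seq = 0
        return self.design_rev

    def data_drop(self) -> str:
        """Publish a snapshot revision (A01, A02, ...) for manufacturing;
        it is not connected to the product structure."""
        self.drop_seq += 1
        return f"{self.design_rev}{self.drop_seq:02d}"

p = PartRevisions("PN-0001")          # part number is invented
print(p.data_drop(), p.data_drop())   # A01 A02  (snapshots of design rev A)
print(p.revise_after_release())       # B
print(p.data_drop())                  # B01
```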
4. Examples of a "Data Drop" Process

Tool design for some parts may take a long time, which would cause delays in the project schedule; for such cases the "Data Drop" process is applied. The information needed to start tool design studies is sent to the related department with wide tolerance values at the beginning. As the design proceeds, this information converges step by step to the released data. During this period, the tool design department can immediately start its design activities, foreseeing that the tolerances will change but that the basic tool structure will not. Meanwhile, the design department continues to improve and elaborate the design, trying not to digress from the constraints of the "Data Dropped" information. Thus the work of the two groups is carried out concurrently on evolving data.

At stage 1, denoted "Data Drop 1", the edge of part and the length, height and width dimensions with wide tolerances are delivered. At this stage, raw material is procured, production process design starts, and, in case the assembly operation requires special jigs and tools, their design starts as well.

At stage 2, denoted "Data Drop 2", the hole definitions and the length, height and width dimensions with minimum tolerances are delivered. At this stage, the production process is validated, the design of numeric control programs starts, and jig and tool design finishes.

At stage 3, denoted "Data Drop 3", the 3D model is finalized. At this stage, the production process is released, jig and tool design is validated, and work orders are launched, but production does not start before the release of the design. Figure 6 clarifies these steps.

Figure 6. "Data Drop" of a Part.

4.1. Detail Part Example for a Composite Part

The lead time of composite materials ranges from 4 to 70 weeks, with an average of 16 weeks. In the traditional approach, companies order the needed composite material only after the design of the composite parts is released, which delays the start of composite part production by 16 weeks. In our company, we apply the "Data Drop" process to bring the ordering of composite materials forward. The "Data Drop" steps are as follows:

- Data Drop 1: the "A01" revision of the part is published, giving the following information:
  - composite material specification (material form, thickness, width, areal weight, etc.),
  - Edge of Part (EoP) dimensions with ±1 mm tolerance.
  With Data Drop 1, the material order is placed 16 weeks before the release of the related part, in accordance with the "Stage Management Plan".
- Data Drop 2: the "A02" revision of the part is published, giving the following information:
  - EoP dimensions with zero tolerance,
  - the surface of the part.
  With Data Drop 2, tool design activities are started 3 to 4 weeks before manufacturing.

4.2. Assembly Example

In our company, we also apply the "Data Drop" process to assembly tool design and modification. Procurement orders are placed for the materials and standard parts under the related assembly. The "Data Drop" stages are as follows:

- Data Drop 1: the "A01" revision of the assembly is published.
This revision gives the following information:
  - the released/frozen detail part list in the assembly BOM (Bill of Material),
  - design information (interface points, tool coordination data, etc.) for jig and tool design.
  With Data Drop 1, manufacturing activities for detail parts and jig/tool design are started.
- Data Drop 2: the "A02" revision of the assembly is published, giving the following information:
  - the modified released/frozen detail part list in the assembly BOM,
  - the list of long lead standard parts in the assembly BOM,
  - the list of long lead materials of detail parts.
  With Data Drop 2, manufacturing activities for detail parts are adjusted to the modified BOM, the preliminary study of assembly manufacturing planning is started, and the procurement of long lead items is arranged.
- Data Drop 3: the "A03" revision of the assembly is published, giving the following information:
  - the modified design information for jig and tool design,
  - the modified released/frozen detail part list in the assembly BOM.
  With Data Drop 3, jig/tool design is finalized and their manufacturing and verification is started.

5. Conclusion

Shortening the product lifecycle is a common problem that companies face in order to remain competitive. Approached from the concurrent engineering philosophy, one way of shortening the product lifecycle and reducing design changes is to flow data to manufacturing simultaneously with design. This approach is presented and explained in this paper as "Data Drop": a process in which design data is shared with other departments of the company while the design of the part is still in progress. The paper explains the "Data Drop" process in terms of PDM tool management and planning. A "Data Drop" is a snapshot of the design at the time it is taken, and also a description of the design for manufacturing arrangements.

To manage the "Data Drop" process properly, the timing and planning of the stages are important. For this reason, at the beginning of a project the design and manufacturing schedules are compared and the importance of overlapping them is established; the outcome of this study should be recorded in the "Stage Management Plan".

Tool design processes were brought forward in the projects that implemented "Data Drop", and it was observed that the project schedule shortened by 20 to 70 workdays in these projects; the variation in the shortening period is caused by the different part types. Comparing product lifecycles with and without the "Data Drop" process shows that implementing it is more profitable, time-saving and desirable for companies.

References

[1] R. Addo-Tenkorang, Concurrent Engineering (CE): A Review Literature Report, Proceedings of the World Congress on Engineering and Computer Science 2011, Vol. II, 2011.
[2] L. Combs, The right channel at the right time, Industrial Management, Vol. 46, No. 4, 2004, pp. 8-16.
[3] M. Conner, The supply chain's role in leveraging PLM, Supply Chain Management Review, Vol. 8, No. 2, 2004, pp. 36-43.
[4] K. O'Marah, The business case for PLM, Supply Chain Management Review, Vol. 7, No. 6, 2003, pp. 16-18.
[5] B. Prasad, Concurrent Engineering Fundamentals: Integrated Product and Process Organization, New Jersey, 1996.
[6] G. Şenaltun, C. Cangelir, Software Management in Product Structure, Product Lifecycle Management: Towards Knowledge-Rich Enterprises, 2012, pp. 369-378.
[7] J.C. Vatalaro, Implementing a Mixed Model Kanban System: The Lean Replenishment Technique for Pull Production, New York, 2003.
[8] W. Dolezal, Success Factors for DMU in Complex Aerospace Product Development, Technische Universität München, 2007.

An Architecture for Remote Guidance Service

Pekka SILTANEN (corresponding author: P.O. Box 1000, FI-02044 VTT, Finland; e-mail: pekka.siltanen@vtt.fi), Seppo VALLI, Markus YLIKERÄLÄ, Petri HONKAMAA
VTT Technical Research Centre of Finland LTD
doi:10.3233/978-1-61499-544-9-288

Abstract. Modern maintenance service requires better support for maintenance teams when local maintenance personnel do not have the knowledge to manage complicated maintenance tasks and need to be guided by a remote expert. In this paper, a software architecture is proposed for a service that enables a remote expert to guide a maintenance person via a video connection. The service allows maintenance technicians to send a video stream from the site to a remote support center, where an expert can give feedback and instructions by adding virtual objects (e.g. pointers or 3D models of the object being maintained) to the video stream. The maintenance technician can see the virtual objects in the live video stream on her mobile terminal. In the proposed service, the maintenance person has a mobile terminal equipped with a screen and a camera (such as a smartphone or tablet, or in the future an Augmented Reality headset), capable of sending and receiving video streams. The remote expert has a standard computer with a modern browser. Both users are connected to an application server running a WebRTC-capable media server, such as the open source Kurento platform. Augmenting the virtual objects into the video stream is implemented on the media server, and the manipulated stream is then rerouted to the users. Previous architectures are either based on proprietary technologies (e.g. Microsoft HoloLens or native mobile terminal apps) or restricted by the capabilities of the mobile terminals (e.g. browser-based Augmented Reality applications). The proposed architecture allows the best available video manipulation technologies to be used even if they are not implemented in the mobile terminal.

Keywords. Remote guidance, collaborative augmented reality

Introduction

In today's industrial business, where manufacturing companies are shifting from pure manufacturing to more service-oriented business models, field service is becoming increasingly important. Two trends seem to be growing simultaneously: on the one hand, industrial products have become more complicated; on the other hand, service operations are outsourced to parties that may have little experience with a product manufactured on the other side of the world. It therefore becomes essential that experts are able to guide service technicians through the task at hand, even remotely without being on-site. The importance of mobile technology in helping the service technician has been researched extensively [1]. In this paper, we concentrate on using collaborative augmented reality [2] for remote guidance.

The use scenario described in this paper is the following: a local maintenance technician with basic knowledge of the task at hand requires guidance from an expert in a remote maintenance center (Figure 1). The local technician has a mobile terminal equipped with a video camera, e.g. a tablet or a head-mounted display.
The local technician sends a live video stream to the remote maintenance center. The local technician and the remote expert can both see the live video, and the remote expert can add virtual objects into it. Virtual objects can be, for example, warning signs showing dangerous zones in the working environment, 3D-model-based animations describing an assembly sequence, or simply pointers showing a point of interest to the local maintenance technician. The virtual objects are linked to image features, so that even if the camera moves, the virtual objects' positions relative to the real world do not change. The positioning is done automatically by tracking visual features in the video and calculating the virtual object positions from these features.

Figure 1. Use scenario.

Recent advances in Augmented Reality (AR) technology (such as the Microsoft HoloLens AR headset) promise a bright future for AR applications. However, these technologies are often proprietary, or they do not allow remotely located users to see and manipulate the same video stream simultaneously. In this paper, we propose a collaborative augmented reality service architecture for the remote guidance system described above. The research questions addressed are the following:

What kind of architecture is needed for a remote guidance service that can be used on different types of mobile terminals with as little terminal-specific coding as possible? Modern mobile application development environments try to enable writing mobile application code once and generating native code for different operating systems. However, more complicated applications, such as Augmented Reality, normally use several third-party libraries that need to be compiled and configured for each operating system, and making operating-system-specific versions adds a significant amount of extra work to software development. We want to study how the implementation can be divided between client and server as efficiently as possible.

How can this kind of architecture be implemented without vendor-specific software, preferably using open source tools? The current interest in AR and telepresence applications has led big software vendors to introduce their own plans for this kind of service (e.g. Microsoft HoloLens). However, there are several open source efforts that can be used as building blocks for such a system, and we want to show how such a service can be implemented using open source tools.

1. Background

1.1. Augmented Reality

In Augmented Reality, camera-captured views of real-world environments are enriched (augmented) with digital 3D objects and effects in real time. The challenges for the speed, accuracy and quality of the augmentation can be appreciated by comparing them with the effort needed to produce special effects for movies. Figure 2 illustrates the idea of Augmented Reality visualization.

Figure 2. Augmented reality visualization pipeline.

Augmenting real-world scenes with 3D objects requires tracking the camera's position and view direction relative to the scene. An often used method, called marker tracking [3], is to detect the 3D positions of graphical "markers" (predefined 2D graphical elements) in the camera view (Figure 2).
However, this presumes that the markers can be placed in the environment beforehand and that their visibility does not unduly disturb the user. Instead of markers, a more elegant, and more demanding, solution is offered by feature-based, markerless tracking [4]. This is based on detecting distinctive natural features of the scene from various viewpoints, which, after various filtering and matching processes, results in a set of representative feature points in three dimensions. These 3D feature points are then used for rendering the augmented information without the need for artificial markers.

Augmented Reality is a well-known concept in the area of maintenance support. The idea of using AR for industrial applications dates back to the early 1990s [5]. Since then, a huge number of projects have implemented AR applications for different industrial scenarios; Nee et al. [6] list more than a hundred industrial AR research publications. There are also several commercial software vendors offering tools and services for industrial maintenance, such as Total Immersion (2) and Inglobe Technologies (3). However, these technologies are mainly visualization tools, allowing users to see predefined 3D scenes positioned in the video stream.

In the area of Collaborative Augmented Reality, where augmentations are defined on the fly, far fewer applications are available. One of the earliest demonstrations of AR in collaboration tasks was the Studierstube project [7], a telepresence application in which a group of people located in the same space could see the same virtual objects, each from their own viewing angle. Among the first to use AR in remote collaboration were Kato and Billinghurst [8] (1999), who augmented 2D images of remote participants into a video stream using marker tracking. Collaborative remote guidance has been studied by, e.g., Huang et al. [9] and Reitmayr et al. [10], where a remote expert guides another user by communicating via a simultaneously viewed video stream. Huang et al. propose a system in which the remote expert's hand gestures are shown on the other user's terminal and the user tries to imitate them. Reitmayr et al. describe an application in which the remote expert can select a position in the video shot by the other user and add annotations such as texts, 3D arrows and object highlights; these annotations can then be seen by both users.

(2) http://www.t-immersion.com
(3) http://www.inglobetechnologies.com

1.2. Video Communication

There have been two main architectures for implementing video communication services: server-centric and peer-to-peer [11]. In the server-centric architecture, a central hub acts as a mediator, forwarding the video and audio streams between the clients; there are no direct transmissions between the users. In the peer-to-peer architecture, the video and audio streams are transmitted directly between the clients.

WebRTC (Web Real-Time Communication) is an API specification enabling browser-to-browser (peer-to-peer) communication. It originated as a Google development project but was open sourced and is currently being developed by the W3C. WebRTC defines JavaScript APIs that allow a browser application to access the camera and microphone [12]. Together with HTML5 video capabilities, this enables showing the video stream from a local camera on a web page. For transmitting a video stream to another browser, WebRTC defines the PeerConnection API [13].
The PeerConnection API defines methods for creating the peer-to-peer connections, i.e. letting the browsers negotiate the offering of a video stream to another browser and the acceptance of the other browser's offer. The offers and answers are serialized in Session Description Protocol (SDP) format [14]. The session descriptions can be delivered between browsers using any messaging method, e.g. WebSockets; a minimal relay of this kind is sketched after the layer list below. The main components of WebRTC are shown in Figure 3. There are also several lower-level specifications, used e.g. for video coding, connection establishment and communication security, but they are left out of this discussion.

Figure 3. Main components of WebRTC.

1.3. Video Stream Manipulation by a Media Server

WebRTC currently concentrates on client technology for peer-to-peer communication. However, there are cases where peer-to-peer communication is not enough, such as recording of the communication, integration with legacy communication platforms, and new types of services like computer vision and media augmentation. Such services can be implemented by a media server that is capable of delivering WebRTC streams. By definition, a media server is a device that stores and shares media; it can, however, be extended to manipulate the video streams shared with the clients.

The open source Kurento platform [15] is a media server that enables stream manipulation. Kurento provides a WebRTC-capable media plane on top of the GStreamer media-handling framework, enabling WebRTC communication to be manipulated on the server, and it offers a framework for developing multimedia services with different types of media functionality, e.g. media transport, media encoding/decoding, and media processing. Application development in Kurento is based on two main concepts: media elements and media pipelines. A media element is a functional unit performing an action on a media stream; media elements can receive a media stream, send it to other elements, or process it. A media pipeline is a sequence of media elements receiving, sending and processing media. An example of a media pipeline with different media elements is shown in Figure 4.

Figure 4. Kurento media elements and media pipeline.

2. Remote Guidance System Architecture

2.1. System Structure

In the proposed architecture, all functionalities, including Augmented Reality interaction, should preferably be usable in any client environment, e.g. as a native mobile application or as a browser application. The possibilities for running complex or computationally demanding applications are therefore limited by the processing power of the client. For AR functionality, this holds especially true for the required marker/feature detection and tracking algorithms. The problem is solved by implementing marker detection and tracking transparently on a server with the required processing power, and implementing only the user interaction features on the client terminal. The system is a three-layer architecture with the following layers (Figure 5):
- a media server that can receive and send WebRTC-based video streams and manipulate the stream,
- an application server that implements the application logic, such as controlling user sessions, initializing stream manipulation, and routing the streams between clients,
- a client application implementing the user interface.
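Since WebRTC leaves the signaling channel to the application, in this architecture the application server is the natural place to relay session descriptions between browsers. The following minimal Java WebSocket endpoint sketches such a relay; the endpoint path and the broadcast-to-all-other-peers simplification are our own assumptions, and a real application would route messages by session and message type.

```java
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// Minimal signaling relay: WebRTC itself does not transport SDP offers/answers,
// so the application server forwards them between the browsers. The path below
// and the simple broadcast strategy are illustrative assumptions only.
@ServerEndpoint("/signaling")
public class SignalingEndpoint {

    private static final Map<String, Session> peers = new ConcurrentHashMap<>();

    @OnOpen
    public void onOpen(Session session) {
        peers.put(session.getId(), session);
    }

    @OnClose
    public void onClose(Session session) {
        peers.remove(session.getId());
    }

    // A message is assumed to be a serialized SDP offer/answer (or ICE candidate);
    // it is forwarded unchanged to every other connected peer.
    @OnMessage
    public void onMessage(String sdpMessage, Session sender) throws IOException {
        for (Session peer : peers.values()) {
            if (!peer.getId().equals(sender.getId()) && peer.isOpen()) {
                peer.getBasicRemote().sendText(sdpMessage);
            }
        }
    }
}
```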
In the current prototype, the client is an HTML5/JavaScript browser application.

Figure 5. Three-layer architecture.

A prototype of the generic architecture above is being implemented. The prototype uses marker-based tracking and allows the remote expert to add virtual elements into the video stream relative to markers placed in the environment. If it is not possible to add markers to the maintenance target, the remote expert can select any area of the video and add virtual elements relative to that area using markerless tracking. The prototype's system components and more detailed architecture are described in the next chapters.

2.2. Compeit Prototype

In the prototype implemented in the Compeit project (4), the central component is the Kurento media server, which can receive and send WebRTC-based video streams and manipulate them through a pipeline of stream manipulation filters. The most important filter used in the prototype is ArMarkerDetector [16], a video manipulation filter produced in the Nubomedia (5) project and extended by the Compeit project to enable interaction between the filter and the application server. The ArMarkerDetector filter takes a WebRTC video stream as input and tracks markers in the stream. After detecting a marker, or noticing that a marker has moved, it sends a message to the application server, enabling the user interface to, e.g., create hotspots on the video for user interaction. ArMarkerDetector also manipulates the video stream by drawing 3D objects on top of the marker, scaled and oriented according to the size and orientation of the marker.

In the prototype, a Tomcat HTTP/WebSocket server is used as the application server, implementing the application logic such as controlling user sessions, initializing media server pipelines and attaching video manipulation filters to them, and routing the WebRTC connections between the media server and the clients. The application logic is implemented in Java, utilizing the Kurento Java API; a sketch of this pipeline pattern follows the capability lists below. The client application user interface is implemented using the open source jQuery and Kurento JavaScript libraries and the Bootstrap responsive user interface framework. It communicates with the application server using WebSockets and with the media server using WebRTC: WebRTC is used for sending/receiving video streams, and WebSockets for delivering user interactions to the application server. Since the client/server communication is based on open standards, the client could also be implemented as, e.g., a native mobile application without changing the server components.

(4) http://www.compeit.eu
(5) http://www.nubomedia.eu

The prototype implements the scenario functionality described earlier. Using the prototype, the remote expert can:
- add virtual elements to the video stream, relative to the markers attached, e.g., to the machine to be maintained,
- link additional content to the markers, e.g. detailed instructions, letting the local maintenance technician access the content by clicking a highlighted hot-spot on the video.

The prototype allows the local maintenance technician to:
- send a video stream from his/her own environment to the remote maintenance center,
- view the video, with the annotations added by the remote expert, in a browser,
- use the highlighted areas as hot-spots for accessing additional content.
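The application logic just described follows the standard Kurento Java API pattern of building a media pipeline and attaching a filter to it. The following sketch illustrates that pattern only: the media server URI is a placeholder, the stream is simply looped back to the technician rather than routed between two users, and the stock FaceOverlayFilter stands in for the ArMarkerDetector filter, whose exact Java binding is not given in the paper.

```java
import org.kurento.client.FaceOverlayFilter;
import org.kurento.client.KurentoClient;
import org.kurento.client.MediaPipeline;
import org.kurento.client.WebRtcEndpoint;

// Sketch of the server-side pipeline pattern used by Kurento applications:
// incoming WebRTC stream -> manipulation filter -> outgoing WebRTC stream.
public class GuidancePipeline {

    public static String buildPipeline(String technicianSdpOffer) {
        // Placeholder URI; the media server location is deployment-specific.
        KurentoClient kurento = KurentoClient.create("ws://localhost:8888/kurento");
        MediaPipeline pipeline = kurento.createMediaPipeline();

        // Media elements: a WebRTC endpoint for the technician's stream and a
        // filter element (FaceOverlayFilter as a stand-in for ArMarkerDetector).
        WebRtcEndpoint technician = new WebRtcEndpoint.Builder(pipeline).build();
        FaceOverlayFilter augmenter = new FaceOverlayFilter.Builder(pipeline).build();

        // Media pipeline: technician video -> augmentation -> back to technician.
        technician.connect(augmenter);
        augmenter.connect(technician);

        // SDP negotiation with the technician's browser.
        String sdpAnswer = technician.processOffer(technicianSdpOffer);
        technician.gatherCandidates();
        return sdpAnswer;
    }
}
```

In the real prototype, the filter additionally notifies the application server over WebSockets when a marker is detected or moved, as described above.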
In the prototype, the local maintenance technician shoots a video on his/her mobile terminal and the expert in the remote maintenance center views it. The technician can move the camera around, letting the remote expert see the maintenance target from different angles, for example. The remote expert can define positions of virtual 3D objects relative to the markers found in the video. The video is sent to the media server and manipulated by the ArMarkerDetector filter, which augments the virtual objects into the video in the correct positions and orientations. User interactions between the clients and the server are delivered as WebSocket messages, and the video streams are delivered using WebRTC (Figure 6).

Figure 6. Architecture of the prototype.

The communication between the client applications and the media server is described in Figure 7. For clarity, the role of the application server (delivering messages between the media server and the clients) is left out of the figure.

Figure 7. Marker tracking communication sequence.

The prototype can use predefined markers, e.g. QR-code-type symbols, attached to the maintenance target environment to enable the positioning of the virtual objects. Marker tracking is not practical in all industrial cases, because the markers may get damaged or dirty, which makes tracking difficult. Therefore, we have also implemented a markerless tracking method, based on the same architecture but requiring no predefined markers (Figure 8). Using markerless tracking, virtual objects can be positioned essentially anywhere in the video stream, and a feature detection algorithm keeps track of the features found in the video; this allows a virtual object to be kept in the same position in the video even when the camera moves.

When using markerless tracking, the maintenance technician shoots a video of the maintenance target, and the live video is delivered to the remote maintenance center. The remote expert can then select a rectangular area of the video, and this area is used as the target that is tracked in the video. There is no need to add any physical marker to the maintenance target, because the virtual objects can be attached to the selected image area. However, markerless tracking technology may fail if the lighting conditions are unfavourable or if there are not enough features in the selected area.

Figure 8. Markerless tracking communication sequence.

Figures 9 and 10 show examples of the markerless tracking user interfaces in the prototype. In Figure 9, the remote expert selects a feature area to be tracked in the video stream by selecting a rectangular area of the video.

Figure 9. Feature set selected from remote video.

In Figure 10, the remote expert has positioned a virtual arrow in the video stream, pointing in a direction the remote expert wants to emphasize. The maintenance technician can see the arrow augmented on the video stream on his/her mobile device. The position and direction of the arrow remain the same relative to the selected features, even when the camera position or viewing angle changes.

Figure 10. Maintenance technician's user interface on a mobile device.

The ArMarkerDetector filter uses VTT's own Alvar (6) library for implementing the feature tracking and augmentation.

(6) http://virtual.vtt.fi/virtual/proj2/multimedia/alvar/
3. Conclusions and Future Research

Augmented reality creates opportunities far beyond the technologies currently used for guiding maintenance personnel. This paper proposed an architecture, based on an open source media server, for implementing a guidance system using collaborative augmented reality. The architecture separates the implementation of computationally demanding operations, such as tracking and video augmentation, from terminal-specific user interactions, e.g. touch handling on mobile terminals, mouse clicks on desktops, or gesture/voice-based interaction with head-mounted displays. To verify the architecture, a prototype was built. The novel element of the architecture is in particular the use of an open source media server to implement tracking and augmentation.

In the near future we will continue developing the prototype, e.g. providing a better user interface for the remote expert to position the virtual objects. We also plan to extend the research from industrial maintenance to consumer-oriented use cases, such as collaborative video watching, where remotely located people can watch the same video and highlight parts of it for others using the same markerless tracking method.

References

[1] M.M. Herterich, C. Peters, F. Uebernickel, W. Brenner and A.A. Neff, Mobile work support for field service: a literature review and directions for future research, in: 12th International Conference on Wirtschaftsinformatik, March 4-6, 2015.
[2] M. Billinghurst and H. Kato, Collaborative augmented reality, Communications of the ACM 45(7), 2002, pp. 64-70.
[3] M. Billinghurst, M. Hakkarainen and C. Woodward, Augmented assembly using a mobile phone, in: 7th International Conference on Mobile and Ubiquitous Multimedia, 2008.
[4] E. Rosten, R. Porter, and T. Drummond, Faster and better: a machine learning approach to corner detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(1), 2010, pp. 105-119.
[5] T.P. Caudell and D.W. Mizell, Augmented reality: an application of heads-up display technology to manual manufacturing processes, in: Twenty-Fifth Hawaii International Conference on System Sciences, 1992.
[6] A.Y.C. Nee, S.K. Ong, G. Chryssolouris and D. Mourtzis, Augmented reality applications in design and manufacturing, CIRP Annals - Manufacturing Technology 61(2), 2012, pp. 657-679.
[7] Z. Szalavári, D. Schmalstieg, A. Fuhrmann and M. Gervautz, "Studierstube": an environment for collaboration in augmented reality, Virtual Reality 3(1), 1998, pp. 37-48.
[8] H. Kato and M. Billinghurst, Marker tracking and HMD calibration for a video-based augmented reality conferencing system, in: 2nd IEEE and ACM International Workshop on Augmented Reality, 1999.
[9] W. Huang, L. Alem, and F. Tecchia, HandsIn3D: supporting remote guidance with immersive virtual environments, in: 14th IFIP TC 13 International Conference, 2013.
[10] G. Reitmayr, E. Eade, and T.W. Drummond, Semi-automatic annotations in unknown environments, in: 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, 2007.
[11] Y. Liu, G. Yang and L. Chao, A survey on peer-to-peer video streaming systems, Peer-to-Peer Networking and Applications 1(1), 2008, pp. 18-28.
[12] D. Burnett and A. Narayanan, Media capture and streams, World Wide Web Consortium WD WD-mediacapturestreams-20120628, 2012.
[13] A. Bergkvist, D.C. Burnett, C. Jennings, A. Narayanan, WebRTC 1.0: Real-time Communication Between Browsers, Working draft, W3C, 2013.
[14] M. Handley, C. Perkins, and V. Jacobson, SDP: Session Description Protocol, 2006, Accessed 10.4.2013. [Online]. Available: http://tools.ietf.org/html/rfc2327
[15] L.L. Fernandez, M.P. Diaz, R.B. Mejias, F.J. Lopez, J.A. Santos, Kurento: a media server technology for convergent WWW/mobile real-time multimedia communications supporting WebRTC, in: 14th International Symposium and Workshops on a World of Wireless, Mobile and Multimedia Networks, 2013.
[16] P. Honkamaa, S.-M. Mäkelä, M. Ylikerälä, J.L. Fernandez, I. Gracia, Nubomedia project deliverable D4.5.1: Augmented Reality media element prototypes v1, 2015.

Impact of Non-Functional Requirements on the Products Lines Lifecycle

German URREGO-GIRALDO (a) (corresponding author, e-mail: gaurrego015@gmail.com), Gloria GIRALDO (b) and Myriam DELGADO (a)
(a) Facultad de Ingeniería, Universidad de Antioquia, Medellín, Colombia; (b) Facultad de Minas, Universidad Nacional, Sede Medellín
doi:10.3233/978-1-61499-544-9-298

Abstract. Major progress in Information and Communication Technologies and the growing development of web systems highlight the importance of Non-Functional Requirements (NFR) in the construction of knowledge and information systems. Requirements of this kind have always been present as indicators of system quality. However, the traditionally supplementary role of implicitly expected quality attributes turns into an explicit, central role in specialized technological solutions centered on massive interactions of agents who access communication networks at any time, anywhere, by any means. The dynamics of collective work; of the diversification, integration and deepening of knowledge treatment in every field of society; and of the massive interaction of persons, machines and objects support the emergence of new product development approaches and lifecycles. In Software Product Lines, for example, the changes in the lifecycle mean that Non-Functional Requirements must be defined in earlier phases than in the conventional software lifecycle. This paper explains why Non-Functional Requirements are introduced in the earliest phases of the Product Lines lifecycle and what this means. Its goal is to highlight the changes in where Non-Functional Requirements are introduced across the phases of the product lines lifecycle and their impact on every development phase.

Keywords. Non-Functional Requirements, Products Lines, Product lifecycle

Introduction

The product line concept was born from industry's need to offer new competitive products in a timely manner, in order to secure a market participation that ensures the accomplishment of high-level organizational objectives. This purpose becomes dynamic in the form of computer-based product lines, supported by the fundamental work treated in [1]. Economic and technological changes have increased interest in these approaches, pulling development in software engineering and in particular in requirements engineering; important contributions in this conjuncture are found in [2], [3]. In the same direction, other contributions to computer-based product lines, and also to Software Product Lines, appear, among others, in [4], [5].
Basic contributions from Software Engineering and Requirements Engineering are also recognized, such as the concept formulated by Ross in [7], who considered system requirements as constraints; the characterization of quality requirements proposed by T.P. Bowen in [8]; and the concept of soft goals for expressing non-functional requirements treated by Mylopoulos et al. in [9]. These contributions have founded and stimulated research in the field of requirements engineering.

Many concepts supporting the discovery, negotiation, specification, operationalization and, in general, the treatment and use of functional and non-functional requirements are present in existing methods. For example, the goal-based approaches documented in the literature, such as those considered in [10], [11], [12], offer important views and strategies for constructing high-quality systems. A summary of goal-based approaches up to the year 2000 is presented in [13]; further developments are found in [14-26].

The treatment of non-functional requirements, initially defined as quality attributes, becomes increasingly important. This is driven, among other factors, by the irruption of technology and information processing into all spaces of society, nature, the economy and individual life; by the massive use of communications supporting complex systems, machine-to-machine interactions and the Internet of Things; and by mobility, ubiquity, automation and the treatment of big data in real and virtual environments. Contributions in these directions are found in [19], [20], [21]. Regarding the use of non-functional requirements for ensuring system quality, notable examples are the quality models proposed by McCall in [27], Dromey in [28] and Boehm in [29], the ISO model in [30], and the FUREPS model in [31].

Based on work of our research team in the field of non-functional requirements, published in [32], and two works by engineering students referred to in [33], 234 non-functional requirements mentioned in the literature have been studied in order to identify the phase of the software lifecycle in which each requirement must first be identified, as well as the relationships among these requirements. These concerns are not sufficiently treated in the field of Software Product Lines, specifically in the processes of the lifecycle phases, which center on domain analysis and on the construction and assembly of assets, in order to take advantage of variability concepts and to increase the reuse of components and software products. The identification of the non-functional requirements belonging to each phase of the Software Product Lines lifecycle, and the comparison and evaluation of this assignment against that of the software lifecycle, is the main subject of this paper.

The content of this paper is structured as follows. After this Introduction, Section 1 extends the product lifecycle to the lifecycle of product lines. The analysis of the lifecycle of product lines is the subject of Section 2. The assurance of Non-Functional Requirements in the phases of the product lines lifecycle is treated in Section 3. Section 4 contains the conclusion and future work, followed by the acknowledgments and the references.
1. Extension of the Product Lifecycle to the Lifecycle of Products Lines

Aiming to take advantage of the variability concept and to support the reuse of components and products, the lifecycle of SPLs considers a more detailed and specific treatment of the processes in each lifecycle phase. In this way, a more precise assignment of NFRs to specific processes and their involved objects is made. The diversity of possible strategies for implementing particular Non-Functional Requirements in detailed processes influences the products more directly and meaningfully.

Variability and reuse suggest an industrial perspective for Software Product Line production, equal to that of the manufacturing sector. In this sense, the lifecycle of SPLs may be framed in three big categories of phases: preproduction, production and postproduction, as shown in Figure 1.

Figure 1. Processes of the Life Cycle of Products Lines.

Preproduction contains three phases: products definition; analysis of assets and products; and design of assets and products. Production integrates, in turn, three phases: production planning; processes for the elaboration of assets and products; and testing and tuning of assets and products. The postproduction category seeks to put the products in the hands of consumers and to evaluate their satisfaction with the products; it contains three phases: disposition of products for distribution; products distribution; and post-distribution services and impact evaluation. Every process of each phase is disaggregated, in turn, into three expressive and manageable processes, in which the activities that the developers and users of the product line know and apply appear. The names of the 27 processes are included in Figure 1; for the sake of simplicity, their descriptions are not presented here.

2. Analysis of the Lifecycle of Products Lines

Furthermore, a simplified view of the SPL lifecycle, in Figure 2, defines a basic architecture considering four modules: Domain Modeling, Assets Modeling, Products Configuration, and Products Assembling.

Figure 2. Simplified Lifecycle of Products Lines.

The first two modules belong to Domain Engineering and the others constitute Application Engineering. Domain Modeling and Products Configuration define the Problem Space and correspond to a description of context and domain knowledge, while Assets Modeling and Products Assembling define the Solution Space and gather a materialization of context and domain knowledge.

This simplified architectural core of an SPL represents the use of Domain and Application Engineering for treating the descriptive and causative knowledge of the SPL. The first aspect to highlight in the SPL lifecycle is the explicit use of the engineering concept, aiming to pass from conceptual modeling to materialized assets and products, that is, to transform conceptual domain and context knowledge into partial and complete concrete solutions. In this sense, engineering leads, on one side, to converting domain knowledge into concrete components enclosing functionalities useful for the construction of products. Components constitute partial solutions able to contribute to a complete solution materialized in a product.
Domain Engineering transforms descriptive conceptual domain knowledge into causative materialized domain knowledge. On the other side, engineering drives the transformation of product requirements and conceptual products into concrete products, which gather the functionalities of appropriate components and constitute complete solutions. Application Engineering transforms descriptive conceptual context knowledge into causative materialized context knowledge.

A second aspect to remark on in the SPL lifecycle is the related treatment of the Problem Space and the Solution Space. Both spaces contain domain knowledge and context knowledge. Domain knowledge is contained in domain objects of two types: conceptual and concrete objects. The domain knowledge of conceptual objects is expressed in the characteristics of these objects identified in the Problem Space; this knowledge is materialized in components, which are concrete domain objects belonging to the Solution Space. Context knowledge of the Problem Space is represented by agents able to act or interact with agents of this or other contexts. These agents embody characteristics of the domain objects, configuring a large set of conceptual products, which are materialized in the Solution Space as concrete products. The concrete products are, in turn, agents able to act and interact with agents of this or other contexts, and they embody components that express, as materialized objects, the domain knowledge of the Solution Space.

This interrelated treatment of the domain and context knowledge belonging to the problem and solution spaces is expressed in the detailed processes of the SPL lifecycle involved in Domain and Application Engineering. The SPL lifecycle extends the same nine phases proposed for the conventional software lifecycle in order to treat product and asset knowledge in an integral way. This diversified treatment needs specialized processes in order to consider multiple domain models; diverse mechanisms and strategies for the configuration and instrumentation of assets and products; the construction of assets and products; and the assurance of asset compatibility and interoperability. The engineering foundation of SPL processes, and their detailed disaggregation, allow the incorporation of implemented NFRs, identified in specific phases of the SPL lifecycle, as treated in the next section.

3. Definition of Non-Functional Requirements in the Life Cycle of Software Products Lines

Particular concepts involved in the definition of the Product Lines approach, such as core and supplementary characteristics, multiple products, domain coverage, component orientation, market segmentation, etc., determine specific exigencies for Non-Functional Requirements. Those particular concepts are materialized in definite phases of the Product Lines lifecycle, and each exigency is related to the phase where the corresponding concept first becomes applicable. Accordingly, NFRs appear in the Product Lines lifecycle no later than in the same phase as in the typical product lifecycle, and many NFRs are incorporated in an earlier phase. Many NFRs belonging to the Analysis and Design phases of a product, for example, are attained in the Product Lines lifecycle in an earlier phase than in the typical product lifecycle, while some others are achieved in the same phase, as shown in Figure 3.

Figure 3. NFRs of SPL Achieved in Advance, in the Definition Phase.
This situation is explained in this article in relation to software products and Software Product Lines, aiming to show the impact of NFRs on the Product Lines lifecycle. In this section, the advance achievement of NFRs in product lines is analyzed in comparison with the NFRs related to software products. In the first column of Figure 3, the NFRs that SPL approaches may define earlier than traditional software approaches are marked with the letters PL; the traditional assignments are marked with the letter X. Indeed, SPL approaches establish these NFRs in the Definition phase, while some of these NFRs belong to the Design phase of traditional software products and most correspond to the Analysis phase.

The early applicability of these NFRs indicates the advantage of conceptualizing software products as software product lines. This approach is rapidly enriched with pertinent capabilities for satisfying needs and exigencies; the early materialization of these capabilities founds the product architecture and endows the lifecycle phases with applicable NFRs materialized in software components.

In the same way, Figure 4 shows the NFRs of SPLs to be defined first in the Analysis phase of their lifecycle; all of these NFRs are assured only in the Design phase of the traditional software lifecycle. Obtaining NFRs in the analysis phase means that the logical model of SPL solutions contains architectural expressions and some technological elements, materials and dimensions provided by the rich knowledge and implemented NFRs coming from the product definition phase.

The earlier obtaining of NFRs in the SPL lifecycle means that SPLs acquire and incorporate more meaningful knowledge earlier than traditional software methods do. This knowledge, characterized by NFRs, determines the advantages of SPLs and their strength in contributing to software industrialization. Indeed, reuse and variability, as the main challenges of SPLs, are explicitly treated and are potentiated by the other particular NFRs. The detailed management of NFRs in SPLs and their operationalization are the subject of other ongoing work.

4. Conclusion and Future Work

The results of the analysis of the most cited NFRs in Software Engineering help to clarify the advantages of the SPL approach over traditional software methods from an industrial perspective. Reusability, for example, may be achieved in SPLs in the analysis phase, while in traditional software products this attribute is obtained later, in the design phase. This means that the configuration of the assets and products of a product line incorporates the reusability attribute early, in a conceptual way, avoiding the effort, cost, risk and restrictions of attempting to introduce that attribute with technological resources later, in the design phase.

Moreover, variability is ensured in the definition phase of SPLs, while in the traditional software product lifecycle this attribute is attained in the analysis phase. The advance in variability is related to the need for a more complete and rapid discovery of domain knowledge and for the early identification of interested agents in the SPL approach. This exigency leads to a deeper understanding of the domain and to a clear visualization of a diversity of products. It founds and stimulates a rigorous insight into the studied realities and connects this knowledge directly with the identification of the diversity of social, economic, environmental, cultural and technological aspects to be considered in a rich offer of new products.
Indeed, in the perspective of developing an SPL, the question immediately arises of the greater number and specificity of characteristics and the relationships among them, as well as of the multiple agents interested in a gamut of final products and in the development and benefits of particular phases of the product lifecycle.

Figure 4. NFRs of SPL Achieved in Advance, in the Analysis Phase.

NFRs strengthen the conceptual and architectural models implicit in the SPL approach. NFRs determine the usefulness of a thing and its capacity to satisfy exigencies, indicating the functionalities and properties of products, components and the relationships among components to be achieved in the implementation of NFRs. Thus a basic conceptualization and architecture of a product (in general, a solution) is obtained from a precise understanding of the central NFRs. The early identification of NFRs helps to define strategies for the conceptualization and design of products directly connected with the production and postproduction phases, where the usefulness of the products and the satisfaction of needs are confirmed. Wide and deep knowledge of the domain and context allows the early introduction of meaningful NFRs, which enrich and drive the logical configuration of products, component-based product assembly, and the other phases of the SPL lifecycle. The NFRs of SPLs express direct contributions to processes and products along the lifecycle, which constitutes an effective way of improving Product Lines engineering and traditional Software Engineering methods. The implementation of SPL NFRs under different strategies, the construction of NFR assets, and the creation of NFR product lines, such as a quality-labels line for a products line, are ongoing work.

5. Acknowledgment

This work was elaborated within the projects "Conformación, evolución y consistencia de soluciones basadas en el concepto de línea de productos, en organizaciones y en la minería de datos" and "Desarrollo de soluciones para soportar la completitud y la corrección de líneas de productos con aplicación a la ingeniería de software", identified with codes 111556933404 and 111556933192, respectively. Both projects are funded by COLCIENCIAS, the Colombian agency for the support of scientific research and technological development. The models were elaborated by the research team ITOS of Antioquia University and by the research team Software Engineering of the National University of Colombia.

References

[1] D. Parnas, On the design and development of program families, IEEE Transactions on Software Engineering, SE-2(1), 1976, pp. 1-9.
[2] K. Pohl, Requirements Engineering: Fundamentals, Principles, and Techniques, Springer-Verlag, Berlin Heidelberg, 2010.
[3] K. Pohl, G. Böckle, and F. van der Linden, Software Product Line Engineering: Foundations, Principles and Techniques, Springer-Verlag, Berlin Heidelberg, 2005.
[4] D. Benavides, S. Segura, A. Ruiz-Cortés, Automated analysis of feature models 20 years later: a literature review, Information Systems, 35(6), 2010, pp. 615-636, http://www.researchgate.net/publication/223760542_Automated_analysis_of_feature_models_20_years_later_A_literature_review/links/0046352bd57ee8f1c9000000, Accessed 20 May 2015.
[5] R. Mazo, A Generic Approach for Automated Verification of Product Line Models, PhD dissertation, Paris 1 Panthéon-Sorbonne University, Paris, 2011.
[6] S. Wiesner et al., Requirements Engineering, in: J. Stjepandić et al. (eds.), Concurrent Engineering in the 21st Century, Springer International Publishing, Switzerland, 2015, pp. 103-132.
[7] D. Ross and K. Schoman, Structured analysis for requirements definition, IEEE Transactions on Software Engineering, Vol. 3, No. 1, 1977, pp. 6-15.
[8] T.P. Bowen, G.B. Wigle, J.T. Tsai, Specification of Software Quality Attributes, Report of Rome Air Development Center, 1985.
[9] J. Mylopoulos, K.L. Chung, B.A. Nixon, Representing and using non-functional requirements: a process-oriented approach, IEEE Transactions on Software Engineering, Special Issue on Knowledge Representation and Reasoning in Software Development, Vol. 18, No. 6, June 1992, pp. 483-497.
[10] C. Rolland, C. Souveyet, C. Salinesi, Guiding goal modelling using scenarios, TSE Special Issue on Scenario Management, 1998.
[11] A. van Lamsweerde, Goal-oriented requirements engineering: a guided tour, RE'01 International Joint Conference on Requirements Engineering, IEEE, Toronto, August 2001, pp. 249-263.
[12] A. Anton, Goal-based requirements analysis, ICRE'96, Colorado Springs, IEEE, 1996.
[13] A. van Lamsweerde, Requirements engineering in the year 00: a research perspective, Proc. 22nd International Conference on Software Engineering, Limerick, ACM Press, June 2000.
[14] D. Blanes, E. Insfran, and S. Abrahão, Requirements engineering in the development of multi-agent systems: a systematic review, International Conference on Intelligent Data Engineering and Automated Learning, Burgos, Spain, 2009, pp. 510-517.
[15] E. Bjarnason, P. Runeson, M. Borg, M. Unterkalmsteiner, E. Engström, B. Regnell, G. Sabaliauskaite, A. Loconsole, T. Gorschek, R. Feldt, Challenges and practices in aligning requirements with verification and validation: a case study of six companies, Empirical Software Engineering, Springer-Verlag, Berlin Heidelberg, 2013.
[16] E. Bjarnason, Integrated Requirements Engineering: Understanding and Bridging Gaps within Software Development, Doctoral Dissertation, Department of Computer Science, Lund University, Sweden, 2013.
[17] C. Krueger, K. Jackson, Requirements engineering for systems and software product lines, 2009, http://www.biglever.com/extras/RE_for_SPL.pdf, Accessed 25 May 2015.
[18] B.H.C. Cheng, J.M. Atlee, Research directions in requirements engineering, FOSE '07: 2007 Future of Software Engineering, IEEE Computer Society, Washington, 2007, pp. 285-303.
[19] I. Goksun, Requirements engineering for mobile systems, Master's Thesis, San Jose State University, 2005.
[20] D. Chang, C.-H. Chen, Understanding the influence of customers on product innovation, Int. J. Agile Systems and Management, Vol. 7, Nos 3/4, 2014, pp. 348-364.
[21] O. Vermesan et al., Internet of Things Strategic Research Roadmap, http://www.internet-of-things-research.eu/pdf/IoT_Cluster_Strategic_Research_Agenda_2011.pdf, Accessed 25 May 2015.
[22] A. McLay, Re-reengineering the dream: agility as competitive adaptability, Int. J. Agile Systems and Management, Vol. 7, No. 2, 2014, pp. 101-115.
[23] V. Shukla, Comprehensive Methodology for Complex Systems' Requirements Engineering and Decision Making, Doctoral thesis, Institut National des Sciences Appliquées de Toulouse, Université de Toulouse, 2014.
[24] S. Alguezaui, R. Filieri, A knowledge-based view of the extending enterprise for enhancing a collaborative innovation advantage, Int. J. Agile Systems and Management, Vol. 7, No. 2, 2014, pp. 116-131.
[25] S. Wiesner et al., Requirements Engineering, in: J. Stjepandić et al. (eds.), Concurrent Engineering in the 21st Century, Springer International Publishing, Switzerland, 2015, pp. 103-132.
[26] F. Elgh, Automated engineer-to-order systems: a task-oriented approach to enable traceability of design rationale, Int. J. Agile Systems and Management, Vol. 7, Nos 3/4, 2014, pp. 324-347.
[27] J.A. McCall, P.K. Richards, and G.F. Walters, Factors in Software Quality, Nat'l Tech. Information Service, Vols. 1, 2 and 3, 1977.
[28] R.G. Dromey, A model for software product quality, IEEE Transactions on Software Engineering, No. 2, 1995, pp. 146-163.
[29] B.W. Boehm, J.R. Brown, and M. Lipow, Quantitative evaluation of software quality, Proceedings of the 2nd International Conference on Software Engineering ICSE '76, IEEE Computer Society Press, Los Alamitos, 1976, pp. 592-605.
[30] ISO, International Organization for Standardization, ISO 9126-1:2001, Software engineering - Product quality - Part 1: Quality model, 2001.
[31] S. Chulani, B. Ray, P. Santhanam, R. Leszkowicz, Metrics for managing customer view of software quality, METRICS 2003, IEEE International Symposium on Software Metrics, 2003, pp. 189-198.
[32] G. Urrego-Giraldo, G.L. Giraldo, Contextualized achievement of engineer's competences for sustainable development, Global Engineering Education Conference, IEEE, 2014, pp. 713-720.
[33] G. Urrego-Giraldo, G.L. Giraldo, Differentiated contribution of context and domain knowledge to products lines development, in: J. Cha et al. (eds.), Moving Integrated Product Development to Service Clouds in the Global Economy, IOS Press, Amsterdam, pp. 239-248.

Manufacturing Resource Servitization Based on SOOA

Wensheng XU (corresponding author, e-mail: wshxu@bjtu.edu.cn), Lingjun KONG, Jianzhong CHA
School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, Beijing, China
doi:10.3233/978-1-61499-544-9-308

Abstract. Cloud manufacturing is a new service-oriented networked manufacturing model in which manufacturing resources can be shared in the cloud for customers to use as needed. Traditional resource servitization methods are mostly based on SPOA (Service Protocol-Oriented Architecture), which lacks the flexibility to cope with the dynamic nature of the network and of the manufacturing resources. In this paper, based on the characteristics of manufacturing resources and the network, SOOA (Service Object-Oriented Architecture) is adopted as the underlying service architecture for resource servitization, and an SOOA-based resource servitization method is proposed in which three stages are involved: service interface definition, manufacturing resource encapsulation, and dynamic provisioning of manufacturing services. Through these three stages, different kinds of manufacturing resources, seen from different perspectives (software and hardware resources, static and dynamic resources, unique and replaceable resources), can all be provisioned and accessed as manufacturing services on the dynamic network with standardized interfaces and SOOA characteristics. This approach supports the connect-in of manufacturing resources from resource providers and adapts very well to the dynamic nature of manufacturing resources and the network.
Keywords. Cloud Manufacturing, Resource Servitization, Service Object-Oriented Architecture, Service Encapsulation

Introduction

The target of cloud manufacturing is to provide reliable manufacturing services to customers throughout the manufacturing lifecycle, with high quality but low cost, at any time, based on customer demands. To achieve this target, a huge manufacturing resource cloud pool needs to be built to share all the manufacturing resources with customers. The resource servitization technique is a core technique for building the manufacturing resource cloud pool, with which manufacturing resources can be servitized to become manufacturing services in the cloud environment. Researchers in this field have carried out a great deal of work on manufacturing resource servitization, mainly of three types: (1) resource servitization based on WSDL [1, 2, 3]; (2) resource servitization based on WSRF [4]; (3) resource servitization based on ontology or the semantic web [5, 6]. Most resource servitization methods are based on SPOA (Service Protocol-Oriented Architecture), which is a kind of SOA and is protocol-specific, i.e. the communication protocol is decided by the service provider and the service requester needs to abide by that protocol – for example SOAP (Simple Object Access Protocol) in Web Services or IIOP (Internet Inter-ORB Protocol) in CORBA. This type of architecture based on fixed protocols lacks flexibility and efficiency in some situations, and resource servitization based on SPOA has some limitations in the cloud manufacturing environment. In this paper, the characteristics of manufacturing resources are summarized, and SOOA (Service Object-Oriented Architecture) is analyzed for cloud manufacturing. The manufacturing resource servitization method based on SOOA is then proposed, and three key stages are discussed: manufacturing service interface definition, manufacturing resource encapsulation, and dynamic provisioning of manufacturing services. A prototype manufacturing resource servitization platform is then developed to implement and verify the proposed servitization method. Finally, conclusions are drawn.

1. Requirement analysis of manufacturing resource servitization

Three main characteristics of manufacturing resources and their effects on resource servitization are as follows. (1) There are various types of manufacturing resources in enterprises. From the perspective of resource formation and functions, resources can be classified into several types: software resources such as office software and CAx engineering software; hardware resources such as monitoring devices or manufacturing equipment; intelligence resources such as domain engineers or experts; knowledge resources such as design standards libraries or design model libraries; manufacturing capacity resources such as capabilities of requirement analysis, concept design, structural design or 3D printing; etc. From the perspective of resource stability, resources can be classified into static resources and dynamic resources. Static resources are expected to work online continuously and reliably, such as the authentication system for an enterprise portal or the file storage servers. Dynamic resources may go offline or shut down, as planned or unexpectedly, such as a machine that powers off for maintenance.
From the perspective of resource customers, resources can be divided into unique resources and replaceable resources. If, in some situation, a customer (or application) requires a specified resource with a certain identity or at a certain location, and this resource cannot be replaced by other resources, then this resource is a unique resource for the customer. For example, different monitoring devices at different locations in a workshop play different roles, so when data must be acquired from a specific monitoring device at a specific location, this hardware resource cannot be replaced by monitoring devices at other locations even when they are of the same type. If a resource can be replaced by another resource of the same type for a customer, then this resource is called a replaceable resource. For example, in a product design process, any ANSYS software tool at any location may suffice for the design process. All the above manufacturing resources might need to be shared in cloud manufacturing to facilitate convenient design or manufacturing applications and activities. (2) Manufacturing resources are inherently highly dynamic. For example, the status, properties and capacity of a resource might change (for example, enhance or degrade, power on or power off) continually throughout the lifecycle of the resource. The network environment to which the resources are connected and through which users access them is also highly dynamic: the topological structure of the network may change, and network connections may become unstable. Deutsch summarized eight fallacies about the network, which vividly reflect its dynamic nature [7]. Resource servitization methods need to adapt to the dynamic nature of manufacturing resources and the network environment in order to share manufacturing resources effectively and provide services in a reliable way. (3) Access protocols of manufacturing equipment or software resources are normally proprietary. Equipment or software providers either did not anticipate that all resources might one day be connected by network, or they intentionally adopt exclusive or private access protocols in order to retain customers; therefore, in reality, most manufacturing equipment and software adopts proprietary access protocols [8]. To adapt to this reality, the servitization method should keep the access protocol independent for each manufacturing resource and should not impose a unified communication protocol for accessing resources; for example, SOAP in Web Services or IIOP in CORBA should not be a prerequisite for resource servitization. Indeed, a unified protocol architecture may in some cases adversely affect the communication efficiency between customers and resources. Currently there are mature technical implementations of SOOA, such as the Jini technology [9], which supports Plug&Play of software and hardware in the dynamic network environment. Interface standardization is the key factor for successful application of SOOA, and the interface is the application protocol between service providers and service requesters. If a service of a certain type provided by a provider conforms to the standardized interface, then the service can be published through that interface and can be identified and accessed by requesters.
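To make this contract concrete, the following minimal sketch (plain Java; the class and endpoint names are invented for illustration, and a real Jini deployment would add registries and leases) shows a standardized interface, a provider proxy that hides a proprietary protocol, and a requester that depends only on the interface:

    // Hypothetical sketch: a standardized service interface decouples the
    // requester from the provider's communication protocol.
    import java.io.Serializable;

    // The standardized interface: the only contract requesters depend on.
    interface MillingService {
        String submitJob(String ncFilePath);   // returns a job ticket
    }

    // Provider-side proxy shipped to requesters; internally it may speak any
    // proprietary protocol (raw sockets, vendor API, ...) without the
    // requester ever knowing.
    class MillingServiceProxy implements MillingService, Serializable {
        private final String endpoint; // e.g. "tcp://mill-07:9000" (made up)
        MillingServiceProxy(String endpoint) { this.endpoint = endpoint; }
        public String submitJob(String ncFilePath) {
            // ... open proprietary connection to 'endpoint' and send the job ...
            return "ticket-42"; // placeholder result
        }
    }

    // Requester-side usage: obtain the proxy by interface type and invoke it.
    class Requester {
        static String run(MillingService service) {
            return service.submitJob("/jobs/bracket.nc");
        }
    }

The requester compiles against MillingService alone, so the provider is free to change the transport behind the proxy without breaking any requester.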
On the other hand, as an independent functional party, a service requester can take advantage of this standardized interface and search for its needed services through the service registry. When invoking the service through the service proxy provided by the service provider, the requester does not need to know the underlying protocol, so SOOA is a protocol-neutral architecture and can support any proprietary communication protocol chosen by the service provider. Through the service property mechanism provided by SOOA, detailed information about the resource can be described: the service provider can wrap the resource's description and semantic information into the service attributes and publish them to the service registry along with the service proxy, so that service requesters can search for and select the needed service based on those attributes. Moreover, the service lease mechanism and the service remote monitoring mechanism in SOOA are designed for the dynamic network, so service requesters can adapt well to the dynamics of the resources and the network. One of the most important engineering applications of SOOA is FIPER (Federated Intelligent Product Environment) [10], which is built on Jini and encapsulates CAD, CAE and PDM software, optimization tools and cost models as dynamic Jini services with explicit service interfaces. These advanced features make SOOA an excellent architecture for manufacturing resource servitization on the dynamic network [11, 12].

2. The process of manufacturing resource servitization

Based on the above analysis, a manufacturing resource servitization method based on SOOA is proposed, as shown in Figure 1. Three stages are involved in the servitization process: interface definition, resource encapsulation, and dynamic provisioning, through which various types of manufacturing resources can be encapsulated as manufacturing services with standardized interfaces and the SOOA features.

Figure 1. Three stages in manufacturing resource servitization.

2.1. Manufacturing service interface definition

Interface standardization is an important requirement for resource servitization. Only standardized service interfaces of manufacturing services can be recognized in the Manufacturing Service Platform (MSP), so that the functions and capabilities encapsulated behind the standardized interfaces can be utilized by requesters. The Manufacturing Service Interface (MSI) can be defined as MSI = (F, I, R_FI), where F is the function set (each f_i ∈ F is called a function path), I is the interface set (each i_j ∈ I is called a function interface), and R_FI ⊆ F × I is the binding relation between function paths and function interfaces. The MSI is the interface template for the service implementation of manufacturing resources. A service encapsulates one or more manufacturing resources and can provide specific functions.
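As a sketch of how this definition could be represented in code (the notation and type names are ours, not the platform's):

    import java.util.*;

    // Sketch of MSI = (F, I, R_FI): function paths, function interfaces,
    // and the binding relation between them.
    final class MSIModel {
        record FunctionPath(String path) {}           // e.g. "MF1.MF12.MF121.MF1211"
        record FunctionInterface(String signature) {} // e.g. "FEAAnalysis"
        record Binding(FunctionPath f, FunctionInterface i) {}

        final Set<FunctionPath> functions = new HashSet<>();
        final Set<FunctionInterface> interfaces = new HashSet<>();
        final Set<Binding> rFI = new HashSet<>();     // R_FI ⊆ F × I

        // Look up the interface(s) bound to a given function path.
        List<FunctionInterface> interfacesFor(FunctionPath f) {
            List<FunctionInterface> result = new ArrayList<>();
            for (Binding b : rFI) if (b.f().equals(f)) result.add(b.i());
            return result;
        }
    }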
The interface of the service implementation must conform to the corresponding MSI; the manufacturing service can then be recognized by requesters in the MSP, and the resource functions of the service providers can be utilized correctly. The MSI includes not only the input and output information of the manufacturing service and the service descriptions, but also a tree structure of functions. Requesters can check the descriptions of the interface to find the required functions; a text string is used as the interface signature to identify the functions provided by a resource. The interface method information defines the different operation methods and the types of the input and output information, including integers, strings, files, etc. By using the interface path and the interface signature together, a needed MSI for a specific function can be found conveniently and quickly in the whole manufacturing function tree. Based on the above MSI conceptual model, a Service Interface Description Language (SIDL) based on XML is proposed to describe the manufacturing function tree structure and the MSI on a manufacturing function (MF) node. The grammar structure tree of SIDL is shown in Figure 2. Figure 3 is a fragment of a modeling example using SIDL; it defines a service interface "FEAAnalysis" for finite element analysis computing. The interface path of this service interface is "MF1.MF12.MF121.MF1211", which means: "design function.structure design function.FEA function.FEA computing function". The interface signature is "FEAAnalysis", there is an operation "FEAAnalysis", and the input and output are the mesh model "meshModel" and the analysis report "analysisReport" respectively, both of "File" type. If a manufacturing service intends to provide the FEA computing function, it can encapsulate an FEA computing software tool such as ANSYS and implement this service interface, so that the encapsulated resource can be recognized and utilized by requesters.

Figure 2. Structure tree of the XML Schema syntax of SIDL.
A SIDL model example:

    <SIDLRoot xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:noNamespaceSchemaLocation="..\schemas\sidl.xsd">
      <description>Manufacturing function interface root node</description>
      <function id="MF1" name="design" path="MF1">
        <description>Design function</description>
        <function id="MF11" name="conceptual design" path="MF1.MF11">
          <description>Conceptual design function</description>
          <function id="MF111" name="Requirement analysis" path="MF1.MF11.MF111">
            <description>Requirement analysis function</description>
            <interface>
              <description>Requirement analysis interface</description>
              <signature name="RequirementAnalysis" id="MF1.MF11.MF111" type="DesignService"/>
              <operate>
                <description>Requirement analysis interface operation 1</description>
                <body name="requirement analysis"/>
                <input inputType="string" name="requirementAnalysisRequest"/>
                <output outputType="string" name="requirementAnalysis"/>
              </operate>
            </interface>
          </function>
        </function>
        <function id="MF12" name="structure design" path="MF1.MF12">
          <description>Structure design function</description>
          <function id="MF121" name="FEA" path="MF1.MF12.MF121">
            <description>FEA analysis</description>
            <function id="MF1211" name="Analysis" path="MF1.MF12.MF121.MF1211">
              <description>Analysis computing</description>
              <interface>
                <description>FEA analysis computing</description>
                <signature name="FEAAnalysis" id="MF1.MF12.MF121.MF1211" type="DesignService"/>
                <operate>
                  <description>Analysis computing operation 1</description>
                  <body name="FEAAnalysis"/>
                  <input inputType="File" name="meshModel"/>
                  <output outputType="File" name="analysisReport"/>
                </operate>
              </interface>
            </function>
          </function>
        </function>
      </function>
    </SIDLRoot>

Figure 3. Fragment of a modeling example in SIDL.

2.2. Manufacturing resource encapsulation

After defining service interfaces, manufacturing resources can be encapsulated to implement these service interfaces. Each resource may have multiple functions, and multiple resources can perform a function together, so a resource may implement multiple service interfaces, and multiple resources can also join together to implement one service interface. According to the features of the different types of resources, different strategies are used to encapsulate them as services. Software resources and knowledge resources may be encapsulated to implement the needed interfaces directly. Hardware resources without network capability need a network connection module to become a software component first, and can then implement the needed interfaces. For intelligence resources, human-machine interfaces and task management are needed, after which they can be encapsulated as SOOA services; human experts can interact with requesters through these interfaces. Capacity resources normally contain all kinds of software, hardware, knowledge and intelligence resources, so these internal resources are first encapsulated inside the capacity resource, then the human interface and task management are developed to interact with the internal resources, and finally the whole is encapsulated as an SOOA service; human coordinators can coordinate between service requesters and internal resources. The manufacturing resource encapsulation strategies for the different types of resources are shown in Figure 4.

Figure 4. Manufacturing resource encapsulation.
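As a hedged illustration of the software-resource strategy, the sketch below wraps a fictitious command-line FEA solver behind an interface matching the "FEAAnalysis" operation of Figure 3; the solver executable and its flags are invented:

    import java.io.File;

    // Standardized interface corresponding to the SIDL operation in Figure 3:
    // input is a mesh model file, output is an analysis report file.
    interface FEAAnalysis {
        File feaAnalysis(File meshModel) throws Exception;
    }

    // Encapsulation of a software resource: the wrapper launches a (fictitious)
    // batch solver and returns the report, so the resource becomes a service.
    class BatchSolverService implements FEAAnalysis {
        public File feaAnalysis(File meshModel) throws Exception {
            File report = new File(meshModel.getParent(), "analysisReport.out");
            Process p = new ProcessBuilder("solver.exe", "-b",
                    meshModel.getAbsolutePath(), "-o", report.getAbsolutePath())
                    .start();
            if (p.waitFor() != 0) throw new IllegalStateException("solver failed");
            return report;
        }
    }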
2.3. Manufacturing service dynamic provisioning

The manufacturing service programs produced by the encapsulation stage need to be deployed into the manufacturing service container, registered at the service registry, and then provided to requesters from the cloud. Dynamic connect-in of the host computers of the manufacturing resources should be realized, such as the computing server on which the engineering software tools run, the master computer of advanced experiment equipment, or the communication desktop computer of a domain expert. Remote hot deployment of manufacturing services and version control of service programs should be supported, as should management of the services in the service container, such as service start-up, pause, resume and stop. The structure of the manufacturing service container is shown in Figure 5. It mainly includes one SOOA container service and multiple SOOA manufacturing services. The SOOA container service is an SOOA service itself, and it is responsible for providing all the functions of resource connect-in, the service running environment, and service management. The SOOA manufacturing services correspond one-to-one to the functions of the manufacturing resources.

Figure 5. Structure of the manufacturing service container.

3. Implementation of the manufacturing resource servitization system

Based on the above servitization method, a prototype manufacturing resource servitization platform was developed. The platform framework is shown in Figure 6. The platform can be divided into five layers: resource layer, encapsulation layer, service layer, task layer and portal layer. The resource layer includes the different kinds of manufacturing resources. The encapsulation layer is responsible for encapsulating manufacturing resources as SOOA services using the encapsulation tool. The service layer contains the various manufacturing services. The task layer has five functions: receiving submitted tasks, scheduling tasks based on scheduling strategies, executing tasks by invoking remote engineering software services, retrieving task results, and monitoring task state. The user portal layer is the entry portal of the web-based system. A service encapsulation operator can perform encapsulation operations on the resources through the encapsulation layer, and a task submitter can submit a task, retrieve task results and monitor task state through the task layer. An example of encapsulating a HyperMesh software tool and then accessing and utilizing this service is shown in Figure 7. The experiments showed that this platform can encapsulate and provision various resource services effectively.

Figure 6. The framework of the manufacturing resource servitization platform.

Figure 7. HyperMesh software is encapsulated as a manufacturing service: (1) encapsulating HyperMesh; (2) accessing and utilizing the HyperMesh service.
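As an illustration of the container management functions described in Section 2.3, a minimal management interface might read as follows (a sketch with our own names; the prototype's actual API is not published here):

    // Hypothetical sketch of the SOOA container service's management surface:
    // resource connect-in, hot deployment, and service lifecycle control.
    interface ContainerService {
        void deploy(String serviceArchive, String version); // remote hot deployment
        void start(String serviceName);
        void pause(String serviceName);
        void resume(String serviceName);
        void stop(String serviceName);
    }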
4. Conclusions

Manufacturing resources of various forms and features can be encapsulated and provisioned in a manufacturing service cloud through the servitization method based on SOOA. Since the dynamic nature of the manufacturing resources and the network is already handled by the underlying SOOA architecture, this servitization method can deal with static as well as dynamic resources, hardware as well as software resources, and unique as well as replaceable resources, and it adapts well to the dynamic network environment. Resources with proprietary communication protocols can also be encapsulated and provisioned in the manufacturing service cloud. Initial practice has shown the effectiveness of this approach. In future work, building on resource servitization, the coordination of resource services in the manufacturing service cloud will be studied further.

Acknowledgment

This work is supported by the National Natural Science Foundation of China (51175033) and the National High Technology Research and Development Program of China (2013AA041302).

References

[1] Y. Yin, Z. Zhou, Y. Chen, et al., Information service of the resource node in a manufacturing grid environment, International Journal of Advanced Manufacturing Technology, 2008, 39(3-4): 409-413.
[2] F. Tao, Y. Hu, Z. Zhou, Study on manufacturing grid & its resource service optimal-selection system, International Journal of Advanced Manufacturing Technology, 2008, 37(9-10): 1022-1041.
[3] S. Shu, R. Mo, H. Yang, et al., An implementation of modeling resource in a manufacturing grid for resource sharing, International Journal of Computer Integrated Manufacturing, 2007, 20(2-3): 169-177.
[4] L. Wu, X.X. Meng, S.J. Liu, Research on resource service encapsulation in manufacturing grid, Chinese Journal of Computer Integrated Manufacturing Systems, 2008, 14(9): 1837-1844 (in Chinese).
[5] Y. Hu, F. Tao, D. Zhao, et al., Manufacturing grid resource and resource service digital description, International Journal of Advanced Manufacturing Technology, 2009, 44(9-10): 1024-1035.
[6] J.W. Yin, W.Y. Zhang, M. Cai, Weaving an agent-based semantic grid for distributed collaborative manufacturing, International Journal of Production Research, 2010, 48(7): 2109-2126.
[7] P. Deutsch, The eight fallacies of distributed computing, https://blogs.oracle.com/jag/resource/Fallacies.html, Accessed 2 June 2015.
[8] H. Wong, Developing Jini applications using J2ME technology, Pearson Education, Boston, 2002.
[9] N. Weibel, R. Belotti, M.C. Norrie, et al., Web services technologies: SOAP vs. Jini, Swiss Federal Institute of Technology, Zürich, 2002.
[10] M. Sobolewski, Technology Foundations, In: J. Stjepandić et al. (eds.) Concurrent Engineering in the 21st Century, Springer International Publishing, Switzerland, 2015, pp. 67-99.
[11] R.M. Kolonay, A physics-based distributed collaborative design process for military aerospace vehicle development and technology assessment, Int. J. Agile Systems and Management, Vol. 7, 2014, Nos. 3/4, pp. 242-260.
[12] M. Sobolewski, Unifying Front-end and Back-end Federated Services for Integrated Product Development, In: J. Cha et al. (eds.) Moving Integrated Product Development to Service Clouds in the Global Economy, IOS Press, Amsterdam, 2014, pp. 3-16, http://ebooks.iospress.nl/publication/37838, Accessed 25 May 2015.

Transdisciplinary Lifecycle Analysis of Systems R. Curran et al. (Eds.) © 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-318

An Approach to Assess Uncertainties in Cloud Manufacturing

Yaser Yadekar, Essam Shehab¹ and Jorn Mehnen
Manufacturing Department, Cranfield University, UK
¹ Corresponding author. E-mail: e.shehab@cranfield.ac.uk

Abstract. As new technologies and advanced networks play an increasingly important role in manufacturing, many enterprises suffer from unknown and unpredictable situations, termed "uncertainties". The aim of this paper is to provide an approach to evaluate the importance of uncertainties in Cloud Manufacturing. The Simple Multi-Attribute Rating Technique (SMART) was used in this research to assess uncertainties that exist in Cloud Manufacturing. Additionally, a Microsoft Excel assessment tool has been developed to help decision makers identify uncertainties and determine the weight of each uncertainty in Cloud Manufacturing.

Keywords. Cloud Manufacturing, Uncertainties, Simple Multi-Attribute Rating Technique (SMART)

Introduction

Technology plays an ever more important role in linking enterprises and markets. The development of new technologies has helped enterprises to support their decision-making processes, to gain competitive advantage, and to enter new markets globally. New technologies such as Cloud Computing, the Internet of Things, Virtualization, and Web Services, with the support of existing advanced manufacturing networks, have the ability to change and restructure manufacturing systems in the manufacturing industry [1]. However, the manufacturing industry faces many problems with existing manufacturing networks that affect the whole life cycle of the manufacturing process. These problems include manufacturing resource sharing, accessibility of equipment, and knowledge sharing [1,2,3,4]. With the emergence of new technologies, a new manufacturing paradigm, called "Cloud Manufacturing", has arisen and received attention from both researchers and professionals over the past few years [5]. This paradigm allows sharing of manufacturing resources, capabilities, and knowledge between different parties (manufacturing units, suppliers, other enterprises and customers) [6], reduction in costs, and maximization of productivity, business agility and innovation [7]. Applying new and complex technologies and networks in enterprises can create unknown and unpredictable situations, known as "uncertainties". Every enterprise tries to avoid, at any cost, the undesirable state of 'uncertainty' in its systems, as more uncertainty in a problem leads to less understanding of that problem [8]. The remainder of this paper is structured as follows: Section 1 provides a brief description of the Cloud Manufacturing concept; Section 2 explains the proposed methodology; Section 3 presents an overview of uncertainty assessment for Cloud Manufacturing; Section 4 demonstrates the development of an assessment tool; finally, Section 5 concludes the paper and discusses future work.

1. Cloud Manufacturing Concept

Cloud Manufacturing is a new paradigm which has resulted from changes in global market demands, the invention of new technologies, and developments in advanced communication networks [9]. For the whole life cycle of manufacturing, this new paradigm offers faster, safer, more reliable, high-quality, cheap and on-demand manufacturing services [10]. Figure 1 shows traditional manufacturing and Cloud Manufacturing.
In traditional manufacturing, the customer's drawing is transferred into CAD and CAM systems to generate G-code for a machine to manufacture the part. This can be done using manual or mechanised transformation techniques. In Cloud Manufacturing, by contrast, the manufacturing resources and manufacturing capabilities needed for the whole lifecycle of a product are transferred into the Cloud. This can be done using intelligent and automatic techniques.

Figure 1. Traditional manufacturing and Cloud Manufacturing.

2. Research Methodology

Initially, a combination of a literature review (journal papers, reports and documents), interviews, a questionnaire, a Delphi survey, and workshops with experts was used in this research to identify uncertainties and to determine the most important dimensions in Cloud Manufacturing [11,12]. From this, a total of 32 potential uncertainty factors were identified, along with four important dimensions: Security, Performance, Cost and Regulatory. Subsequently, the Simple Multi-Attribute Rating Technique (SMART) was identified from the literature as a suitable approach to assess the importance (weight) of uncertainty in Cloud Manufacturing. This technique is one of several elicitation-based weighting methods in multiple-criteria decision making (MCDM) that use experts' or stakeholders' judgment to weight the importance of multiple categories and their alternatives.

3. Uncertainty Assessment

After identifying potential uncertainties, each uncertainty needs to be evaluated. This evaluation delivers a rating for the various uncertainties that is then used to determine strategies and decisions on how to deal with uncertainty in Cloud Manufacturing. The process of uncertainty assessment is conducted in three essential phases: identify all potential uncertainties in the Cloud Manufacturing system; estimate the importance (weight) of each uncertainty; and rate the uncertainties according to their weights. Multiple-criteria decision analysis (MCDA) is a technique in the operations research discipline that can handle and solve issues involving multiple factors, large amounts of information and knowledge, and different alternatives [13]. There are different elicitation-based weighting methods in the MCDM approach that use experts' or stakeholders' judgment to weight the importance of categories and alternatives [14]. Weighting techniques include: the Simple Multi-Attribute Rating Technique (SMART), which uses direct entry of relative scores and weights for criteria and alternatives; the Swing technique, which weights decision criteria over a lowest-to-highest-level range; and the Analytic Hierarchy Process (AHP), which employs pairwise comparison of alternatives on a ratio scale. The Simple Multi-Attribute Rating Technique (SMART) was proposed by Edwards in 1971 [15], and has become a commonly used tool for decision-makers in the real world [16]. The advantages of this technique are that it is simple to implement, its alternatives are independent, it enables the eliciting of numerical judgments, it deals with both qualitative and quantitative criteria, it produces a linear form, and entering scores and weights is straightforward. The downside of this technique is its inability to capture all details and complexities of the real problem [17].
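In symbols (notation ours), SMART first normalises the dimension ratings into weights and then aggregates each alternative's scaled values into a total score:

\[ w_d = \frac{r_d}{\sum_{k=1}^{n} r_k}, \qquad S_u = \sum_{d=1}^{n} w_d \, v_{u,d} \]

where $r_d$ is the 10-100 rating of dimension $d$, $w_d$ its normalised weight, and $v_{u,d}$ the 0-10 value assigned to uncertainty $u$ on dimension $d$.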
3.1. Uncertainty Identification

Identifying the types and sources of uncertainties that exist in the project or system is the first stage in uncertainty assessment; documenting uncertainties in the early stage of the project is an essential step to provide knowledge about each uncertainty. Table 1 shows the uncertainty factors.

Table 1. Uncertainty factors.

  Data Breach
  Data Control
  Data Location
  Data Loss or Leakage
  Insecure Cloud Service Interfaces
  Application Security
  Cloud Service Interface Data Transmission Security
  Cloud Service Interface Development Security
  Remote-Access Cloud Service Security
  Intellectual Property (IP) Protection
  Encryption Levels
  Scalability
  Bandwidth
  Cloud Service Availability
  Machine Availability
  System Integrity
  Data Interoperability/Standardization
  Machine Protection
  Latency
  Fault Tolerance
  Revision Request
  Disaster Recovery
  Authentication Mechanism
  Administrative Management
  Permission Control
  User Boundary
  Quality Control and Assurance
  Training
  Standards
  Unexpected Cost/Price Changes
  Quality of Service (QoS)
  Vendor Lock-in

3.2. Uncertainty Evaluation

Uncertainty importance can be interpreted as how an uncertainty might affect Cloud Manufacturing in different dimensions. Measuring the importance of uncertainty can be an exhausting step in the uncertainty assessment process because of the nature of uncertainty. To determine the importance (weight) of uncertainty in Cloud Manufacturing, the multiple-criteria decision analysis (MCDA) approach was adopted in this research. This approach is a structured framework that provides advanced calculation methods for both qualitative and quantitative decision criteria [13]. MCDA is an umbrella term for methods and tools that support decision makers in situations where there are several conflicting criteria [18,19]. The SMART technique was chosen for this phase as the most appropriate MCDM technique for this research, because of the advantages mentioned above. Following the SMART methodology:

1. The decision maker is the expert or tool user.
2. The user selects 10 uncertainties to be analysed: Data Location, Data Loss or Leakage, Applications Security, Bandwidth, Service Availability, Machine Availability, Latency, Authentication Mechanism, Training and User Boundary.
3. The identified Cloud Manufacturing dimensions are Security, Performance, Cost, and Regulatory.
4. The user ranks the dimensions from most to least important: 1) Security; 2) Performance; 3) Regulatory; 4) Cost.
5. The user rates the dimensions: Security = 90, Performance = 80, Regulatory = 50, Cost = 30.
6. The weight for each dimension is calculated (Table 2).

Table 2. Uncertainty dimension weights.

  Dimension     Weight   Normalised Weight
  Security      90       90/250 = 0.36
  Performance   80       80/250 = 0.32
  Regulatory    50       50/250 = 0.20
  Cost          30       30/250 = 0.12

7. Values are assigned for each uncertainty on each dimension, on a scale from 0 to 10.
8. The score for each uncertainty is calculated by multiplying each scaled value by the corresponding dimension weight and then summing over the dimensions (Table 3).
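Steps 5-8 can be reproduced with a short sketch (Java; helper names are ours); the printed score matches the "Data Location" row of Table 3 below:

    // Sketch reproducing the SMART calculation of the worked example.
    class SmartExample {
        public static void main(String[] args) {
            // Ratings for Security, Performance, Regulatory, Cost (step 5).
            double[] ratings = {90, 80, 50, 30};
            double total = 0;
            for (double r : ratings) total += r;                           // 250
            double[] w = new double[ratings.length];
            for (int d = 0; d < w.length; d++) w[d] = ratings[d] / total;  // 0.36, 0.32, 0.20, 0.12

            // Values for "Data Location" on each dimension, scale 0-10 (step 7).
            double[] dataLocation = {9, 5, 9, 2};
            double score = 0;
            for (int d = 0; d < w.length; d++) score += w[d] * dataLocation[d];
            System.out.printf("Data Location total: %.2f%n", score);       // 6.88
        }
    }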
Table 3. Uncertainty total weights.

  Uncertainty                Security (0.36)  Performance (0.32)  Regulatory (0.2)  Cost (0.12)  Total
  Data Location              9                5                   9                 2            6.88
  Data Loss                  10               7                   5                 5            7.44
  Applications Security      10               8                   4                 5            7.56
  Bandwidth                  5                10                  5                 9            7.08
  Service Availability       3                10                  4                 8            6.04
  Machine Availability       3                9                   4                 8            5.72
  Latency                    3                7                   3                 5            4.52
  Authentication Mechanism   9                8                   8                 7            8.24
  Training                   2                6                   6                 7            4.68
  User Boundary              7                8                   8                 3            7.04

4. Tool Development

The goal of developing a Microsoft Excel assessment tool is to help decision makers identify and assess uncertainties in Cloud Manufacturing. The tool is divided into three stages: an input data stage, to record the relevant uncertainties in Cloud Manufacturing; an assessment stage, to evaluate the severity of each uncertainty by measuring its importance (weight); and an output information stage, to provide a report on the uncertainties in the project that includes the rating of each uncertainty. The approach to determining uncertainty importance (weight) is based on the Simple Multi-Attribute Rating Technique (SMART). In this technique, the user is required to rank the four previously identified dimensions of Cloud Manufacturing according to their judgment (1 being most important). The user also rates the dimensions by assigning numerical ratio judgments of the relative importance of attributes (on a scale from 10 to 100). SMART then calculates the weight for each dimension by dividing its importance rating by the total of all ratings. The next step is to score each uncertainty on each dimension with a value on a scale from 0 to 10, after which SMART calculates the total weight for each uncertainty. Finally, after the weight for each relevant uncertainty has been calculated, a report is generated in the register page that provides information on uncertainty prioritisation. The prioritisation score of each uncertainty is obtained from the uncertainty's weight, and the uncertainty's severity is expressed as Low, Medium or High. Figure 2 shows the uncertainty importance page and the register page.

Figure 2. Uncertainty Importance page and Register page.

5. Conclusions

Uncertainties in Cloud Manufacturing can be a major obstacle to Cloud Manufacturing implementation, because uncertainty involves both unquantifiable and quantifiable factors and provides little information about its complexity. In this paper, the Simple Multi-Attribute Rating Technique (SMART) has been presented as an approach to measure the importance (weight) of uncertainty in Cloud Manufacturing. This approach uses experts' or stakeholders' judgment to weight the importance of each uncertainty in four different dimensions. As a result, the approach delivers a rating for the uncertainties that can be used to determine strategies and decisions on how to deal with uncertainty in Cloud Manufacturing. It is suggested that future research applies different assessment methods to uncertainties in Cloud Manufacturing and also assesses uncertainties at different levels, such as the status of the uncertainty knowledge base, in order to quantify uncertainties.

References

[1] X. Xu, From cloud computing to cloud manufacturing, Robotics and Computer-Integrated Manufacturing, 28 (1) (2012), 75-86.
[2] X. Gao, M. Yang, Y. Liu, and X. Hou, Conceptual model of multi-agent business collaboration based on cloud workflow, Journal of Theoretical and Applied Information Technology, 48 (1) (2013), 108-112.
[3] Y. Laili, F. Tao, L. Zhang, and B.R. Sarker, A study of optimal allocation of computing resources in cloud manufacturing systems, The International Journal of Advanced Manufacturing Technology, 63 (2012), 671-690.
[4] O.F. Valilai, and M. Houshmand, A collaborative and integrated platform to support distributed manufacturing system using a service-oriented approach based on cloud computing paradigm, Robotics and Computer-Integrated Manufacturing, 29 (1) (2013), 110-127.
[5] W. Li, and J. Mehnen (eds.), Cloud Manufacturing: Distributed Computing Technologies for Global and Sustainable Manufacturing, Springer, London, 2013.
[6] Y. Yadekar, E. Shehab, and J. Mehnen, Challenges of Cloud Technology in Manufacturing Environment, In: Proceedings of the 11th International Conference on Manufacturing Research (ICMR 2013), Cranfield University, 2013, pp. 177-182.
[7] L. Ren, L. Zhang, L. Wang, F. Tao, and X. Chai, Cloud manufacturing: key characteristics and applications, International Journal of Computer Integrated Manufacturing (2014), 1-15, http://dx.doi.org/10.1080/0951192X.2014.902105.
[8] T.J. Ross, J.M. Booker, and A.C. Montoya, New developments in uncertainty assessment and uncertainty management, Expert Systems with Applications, 40 (3) (2013), 964-974.
[9] Y. Yadekar, E. Shehab, and J. Mehnen, A Taxonomy for Cloud Manufacturing, In: Proceedings of the 12th International Conference on Manufacturing Research (ICMR 2014), Southampton Solent University, 2014, pp. 103-108.
[10] L. Zhang, Y. Luo, F. Tao, B.H. Li, L. Ren, X. Zhang, and Y. Liu, Cloud manufacturing: a new manufacturing paradigm, Enterprise Information Systems, 8 (2) (2014), 167-187.
[11] Y. Yadekar, E. Shehab, and J. Mehnen, Uncertainties in Cloud Manufacturing, In: J. Cha et al. (eds.) Moving Integrated Product Development to Service Clouds in the Global Economy, IOS Press, Amsterdam, 2014, pp. 297-305.
[12] Y. Yadekar, E. Shehab, and J. Mehnen, Taxonomy and Uncertainties in Cloud Manufacturing, Int. J. Agile Systems and Management, Vol. 8, No. 3/4, in press.
[13] D. Jato-Espino, E. Castillo-Lopez, J. Rodriguez-Hernandez, and J.C. Canteras-Jordana, A review of application of multi-criteria decision making methods in construction, Automation in Construction, 45 (2014), 151-162.
[14] T. Myllyviita, P. Leskinen, and J. Seppälä, Impact of normalisation, elicitation technique and background information on panel weighting results in life cycle assessment, The International Journal of Life Cycle Assessment, 19 (2) (2014), 377-386.
[15] E. Løken, Use of multicriteria decision analysis methods for energy planning problems, Renewable and Sustainable Energy Reviews, 11 (7) (2007), 1584-1595.
[16] E.K. Zavadskas, Z. Turskis, S. Kildienė, State of art surveys of overviews on MCDM/MADM methods, Technological and Economic Development of Economy, 20 (1) (2014), 165-179.
[17] W. Edwards, Social utilities, Engineering Economist, 6 (1971), 119-129.
[18] W. Edwards, and F.H. Barron, SMARTS and SMARTER: Improved Simple Methods for Multiattribute Utility Measurement, Organizational Behavior and Human Decision Processes, 60 (3) (1994), 306-325.
[19] P. Goodwin, G. Wright, and L.D. Phillips, Decision analysis for management judgment, Wiley, London, 2004.
Part 5
Design Methods & Knowledge-Based Engineering

Transdisciplinary Lifecycle Analysis of Systems R. Curran et al. (Eds.) © 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-327

Howtomation© Suite: A Novel Tool for Flexible Design Automation

Joel JOHANSSON¹
Mechanical Engineering, School of Engineering, Jönköping University, Sweden
¹ Corresponding author. E-mail: joel.johansson@jth.hj.se

Abstract. This paper shows how to achieve flexibility in design automation systems through the introduction of knowledge objects and through the adoption of an object-oriented view of the product structure. To demonstrate the ideas, a novel tool called Howtomation© Suite (for automated know-how) is presented. The new tool handles the addressed issues and has been successfully implemented at one company. That implementation is described at the end of the paper.

Keywords. Design Automation, Knowledge Based Systems, Engineer to Order, Knowledge Base, Knowledge Object, Manufacturability Analysis, Injection Molding.

Introduction

The ability to design and manufacture individualized products increases the competitiveness of manufacturing companies and is sometimes their very business case [1]. Three opportunities have been pointed out for Swedish industry to stay competitive on the global market: individualized products, resource-smart design and production, and a focus on customer value [2]. These opportunities can be realized through efficient design and manufacture of customized products. However, that requires developing and integrating knowledge based systems for products and production [3]. The research presented in this paper is part of a research project aiming at these targets. The realization of individually engineered products can be supported by the adoption of an automated engineer-to-order (ETO) [4] approach in the quotation, development, and production preparation processes. Automating the ETO processes allows a company to efficiently adapt its products to vast variations in customers' specifications, bringing more value to the customer and profit to the company through efficient use of engineering staff, material, and manufacturing resources. The core of such a company is the exercising of a rich and diverse knowledge base about the products, their production and the resources required for design and manufacture, enabling the company to go quickly from quotation to engineering the product and on to production, all while maintaining the most competitive pricing. Successfully developing, implementing, and maintaining that core activity requires the development and implementation of computer systems for efficient design of product variants with associated specifications for automated manufacturing. Since the development of a design automation system is a significant investment in time and money, and since experience has shown that problems often arise when such systems are to be implemented in current operations (i.e. after the proof-of-concept phase), it is vital to discuss some critical issues. One of the few things we can infer about the future is that things are going to change. That is why flexibility is the main focus of this paper.
Further, since manufacturing companies deal with physical products, geometry is another important aspect addressed here.

1. System architecture for flexibility

Knowledge based systems (KBS), a result of decades of research within the field of artificial intelligence, have proven applicable to a wide range of engineering design issues [5] and also form the foundation of the architecture described in this paper. A KBS has two vital components: the knowledge base and the inference engine. A KBS is based on the strategy of formalizing the knowledge to be automated, storing it in the knowledge base, and letting the inference engine search for consistent states of the knowledge, i.e. states where no conflicting statements exist. In practice this means that the formalized knowledge is separated from the computer routines that apply the knowledge. The knowledge stored in the knowledge base can be of different kinds, and there are many ways in which the inference engine can act [6-8]. The complexity of an artifact can be measured in two dimensions: its physical realization, and the knowledge required to comprehend it. There are artifacts that cannot be made by a single person, and there are artifacts that cannot be comprehended by a single person [8]. Just as the former calls for decomposing the product into modules, the latter calls for dividing the knowledge into chunks. Consequently, two ways of achieving flexibility are identified: one is to apply an object-oriented approach to the knowledge base; the other is to apply an object-oriented approach to the product structure.

1.1. Object Oriented Knowledge Base: Knowledge objects

Object-oriented programming offers the possibility to develop highly flexible software. To apply object-oriented programming to the knowledge base, a class of objects called knowledge objects has been proposed in [9, 10]. Figure 1 illustrates one way of implementing the knowledge object class. As seen, a knowledge object contains a list of input parameters (realized as a manager object that is basically a collection), a list of output parameters, and an execution method for processing input parameters to make unknown output parameters known. Other fields may be added to a knowledge object to make the system well-functioning. Proposed additional fields are listed and explained in Table 1 (which is not a complete list of the fields used in the system described at the end of the paper).

Figure 1. Knowledge object class definition.

Table 1. Attributes useful to implement for the knowledge object class.

  Field Name                 Purpose
  Active                     Controls whether the knowledge object is active or not.
  Categories                 Categorizes the knowledge automated by the knowledge object.
  Constraints                Specifies when the knowledge object is applicable.
  ExecutionArguments         Serve as sockets to the computer routine specified as execution method.
  ExecutionDuration          Keeps track of how long it takes to execute the knowledge object.
  ExecutionMessage           Stores the resulting messages of executing the method.
  ExecutionMethod            The method to run when executing the knowledge object.
  HistoricalValues           Stores result values.
  InputParameters            List of the input parameters for the knowledge object.
  Name                       The name of the knowledge object.
  Optimizable                Specifies whether the knowledge object can be put into an optimization loop.
  OutputParameters           List of the output parameters for the knowledge object.
  Owner                      Specifies the user responsible for the accuracy of the knowledge automated by the knowledge object.
  Precision                  Specifies the precision of the knowledge represented by the knowledge object.
  RememberHistoricalValues   Specifies whether to store input and output values.
  Status                     Indicates the current status of the knowledge object.
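A skeletal rendering of Figure 1 (a simplified sketch in Java; the actual suite is built on .NET, and the field names follow Table 1) could read:

    import java.util.*;
    import java.util.function.Function;

    // Skeleton of the knowledge object class of Figure 1 (simplified sketch).
    class KnowledgeObject {
        String name;
        boolean active = true;
        List<String> inputParameters = new ArrayList<>();
        List<String> outputParameters = new ArrayList<>();
        // Execution method: maps known input values to computed output values.
        Function<Map<String, Object>, Map<String, Object>> executionMethod;

        // A knowledge object is triggered when it is active and all its
        // input parameters are known.
        boolean isTriggered(Map<String, Object> knownParameters) {
            return active && knownParameters.keySet().containsAll(inputParameters);
        }

        // Execute: compute outputs from inputs and add them to the known set.
        void execute(Map<String, Object> knownParameters) {
            knownParameters.putAll(executionMethod.apply(knownParameters));
        }
    }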
When developing the knowledge objects, they should be defined in a way that makes them autonomous. The methods used to process the parameters should preferably be automated external software applications, so that the knowledge representation lies outside of the knowledge handling system. The external applications should be selected so that the resulting design automation system contains user-readable and understandable knowledge and is easy to use. The benefits of developing autonomous knowledge objects that use common and widespread applications as methods are twofold: the knowledge can be used manually without the design automation system, and it is easy to find people skilled enough to use the very same knowledge the design automation system does; it makes the knowledge more human-readable.

1.2. Object Oriented Product Structure

The second way of achieving a high degree of flexibility in a design automation system is by adopting object orientation in the product structure. When taking such a perspective on the product structure, all components are viewed as objects with dimensions or other features as attributes. These objectified components can subsequently be wrapped as knowledge objects where the attributes serve as input or output parameters. This can be achieved using any parametric CAD-system. When taking an object-oriented view of the components, it has proven to be good practice to communicate dimension values and suppression states of features through user-defined parameters. The parameters should then be put at an appropriate level in the product structure, which is in the tree leaves if possible. Where a parameter affects several components, it is put at the lowest possible level in the product structure above the components it controls, see Figure 2. Any rule or calculation is put at the same level as its dependent parameters. Parameters in assemblies are inherited by descendant subassemblies and components and are repeated in them. In practice this means that the introduction of a parameter in a subassembly controls any such parameter in its descendant components. This behavior can be achieved with built-in functionality in the most common CAD-systems (the functions have different names, for example publications, external links, references), but can also be achieved by macro programming. Consequently, when a descendant component is replaced, it will automatically be updated to the parameters in the ascendant assembly.

Figure 2. Parameters and rules should be put in leaf nodes as far as possible. Parts and components inherit parameters from the ascending assembly. The brackets surrounding the rule nodes indicate that, if any, only rules intuitively connected to the geometry should be put in the CAD-models.

1.3. Inference engine

The inference engine is used to automate the formalized knowledge stored in the knowledge base. The inference engine arranges the knowledge in the knowledge base in an executable order. Two main types of search-based inference engines exist: forward-chaining and backward-chaining [11].
A forward-chaining (also called data-driven) mechanism uses the information initially presented to fire all applicable rules. The method has two steps. In the first step, triggered rules are listed. In the second step, an appropriate rule from the triggered ones is selected and fired. After firing the selected rule, all triggered rules are listed again, and so on, until no triggered rules are found. If knowledge objects are used to build the knowledge base, the inference engine searches for knowledge objects with all input parameters known. It then selects one of the found knowledge objects and executes the method defined in that knowledge object to calculate output parameters from the input parameters. When the method has run, the stock of known parameters is updated, and either a new search for executable knowledge objects is initiated (depth-first) or the next knowledge object among the found executable ones is executed (breadth-first). A backward-chaining mechanism (also called goal-driven) is fed with goal states. The mechanism then searches backward to see how to end up in that state. When knowledge objects are used, the knowledge base is searched for knowledge objects to fire in order to find the queried parameters. The user is then asked to put in the required information. The backward-chaining mechanism is more effective at runtime than the forward-chaining one, because executions of unnecessary methods are avoided. Event handling is available in modern operating systems, and it is proposed here that the inference engine should make use of these functions. That gives an event-based, forward-chaining search mechanism that works as follows. When a parameter is changed, an event is raised in the system notifying that a change has occurred. This triggers an update of the conflict set. If there are knowledge objects left in the conflict set, one of them is selected for execution, in accordance with the implemented selection rules. When the object is executed, its output parameters are changed, the conflict set is updated, and so on. When the inference engine is implemented using event handling, a significant number of loop algorithms are avoided, and when the system runs, the inference engine is triggered automatically on change.
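Building on the KnowledgeObject sketch above, the event-driven forward-chaining loop could be sketched as follows (again simplified; real selection rules and event wiring are richer):

    import java.util.*;

    // Event-driven forward chaining: a parameter change updates the conflict
    // set and fires executable knowledge objects until none remain (sketch).
    class InferenceEngine {
        final List<KnowledgeObject> knowledgeBase = new ArrayList<>();
        final Map<String, Object> known = new HashMap<>();

        // Called whenever a parameter changes (the "event").
        void onParameterChanged(String name, Object value) {
            known.put(name, value);
            KnowledgeObject next;
            while ((next = selectFromConflictSet()) != null) {
                next.execute(known); // outputs become known; conflict set changes
            }
        }

        // Conflict set = triggered objects with at least one unknown output.
        private KnowledgeObject selectFromConflictSet() {
            for (KnowledgeObject ko : knowledgeBase)
                if (ko.isTriggered(known)
                        && !known.keySet().containsAll(ko.outputParameters))
                    return ko;
            return null;
        }
    }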
2. Dealing with geometry

What makes computer systems for automated engineering design outstandingly hard to develop is that geometry is a big share of the problem domain. Geometrical problems have proven hard to automate; some examples are found in [12]. Two main strategies exist to deal with this situation: one is to create template CAD-models that are parametrically and/or topologically modified, and the other is to generate the geometry programmatically. The former strategy is here referred to as template-based systems and the latter as generative systems; it is of course possible to combine the two strategies into hybrid systems. The advantage of the template-based approach is that it is easy to predict the outcome of the system: what you see is what you get. In the CAD-system the engineers can define parameters and add rules for updating dimensions and suppressing/activating features to make the components take on shapes corresponding to the given parameters. The drawback is that the template-based approach is not scalable: over time, as more and more features are added, the CAD-models become hard to maintain and hard to instantiate. This is because the template models in fact contain the complete design space of the component, so when the models are instantiated, the entire design space is instantiated again. The system gets fragile when such CAD-models start to be instantiated into assemblies. The advantage of generating the CAD-models programmatically is that the resulting models are lightweight compared to the template approach. It is also possible to make the generated models much more general, so that the resulting models may look completely different. The drawback is that it is hard to predict the outcome, and that it is hard for the engineers to modify the models, as they are represented by programming code. Real systems are of course hybrids.

2.1. Where to put the knowledge repository

It is possible to implement the knowledge base in stand-alone automated engineering design systems or in CAD-integrated KBE-systems (KBE is the acronym for Knowledge Based Engineering). When a CAD-integrated KBE-system is used to implement the knowledge base, the rules are listed in the model tree among the different features. This can be valuable since it is easy to see which geometries the rules are connected to. It also makes the user feel familiar with the user interface. But the knowledge base in such a system can be cumbersome to understand when it contains a vast number of rules compared to the number of geometry features. This is especially true if many of the rules do not deal with the geometry. In such cases a stand-alone automated engineering system should be used. Another issue to consider is that when using a CAD-integrated KBE-system, the knowledge is bound to the CAD-system. This means the knowledge base will be difficult to translate to other CAD-systems. In stand-alone systems, knowledge is automated outside the CAD-models, and design proposals can be generated in native or neutral CAD-formats. Another benefit of putting the knowledge in a system outside the CAD-system is the distinct interface between CAD and the knowledge, usually realized by a set of parameters that helps clarify which parameters govern the design. These parameters are the attributes of the objects when taking the object-oriented perspective of the product structure. One drawback of the stand-alone approach is that it can be hard to implement a knowledge base containing mostly geometric relationships in a stand-alone KBE-system.

3. Constraints, constraints, and constraints

Product and production development involves the identification and propagation of active constraints. In product development these constraints are often referred to as the dimensioning parameters, and in production development they are often referred to as the production window. These constraints originate from the laws of physics, legacy, economics, or customers, and affect the physical realization of the product. They can be explicitly defined using parameters implemented in the knowledge base, becoming input parameters, or implicitly defined by introducing new parameters that become output parameters. More abstractly, there are also constraints on the knowledge used to derive the product, indicating its valid range.
Take, for instance, the commonly used slender rod assumption. It is said to be valid if the length of the rod is much greater than its cross section (a factor of 10 is widely used), which is a constraint on the knowledge itself. Finally, if optimization algorithms are introduced to search for optimum solutions, mathematically modeled constraints have to be induced from the above-mentioned constraints. Hence, it is important for design automation systems to be able to process constraints. A theoretical foundation for this is found, for example, in [13, 14].

4. Howtomation Suite

To verify the concepts presented in the previous sections, a novel tool was developed and applied to a real-life example. The Howtomation Suite is based on the Microsoft .NET platform and consists of five parts: core definitions for parameters, an inference engine, knowledge objects, graphical user interface components, and a constraint solver. The company where the Howtomation Suite was applied develops and manufactures heated runner systems for injection molding of plastic materials.

4.1. Hot runners for injection molding

One reason for introducing automation at the company was that the product is an ETO (engineer-to-order) product: every produced hot runner system is unique. The runner systems differ in layout; see for instance Figure 3, where an example of the X-shaped layout is shown. There are also H-shaped layouts, circular layouts, and custom layouts. The runners are connected to the tooling cavity through in-gates. The number of in-gates is up to 48 for a single system, see Figure 3. There are 5 series of in-gates that can all have two different types of bushings, be of lengths ranging up to 600 mm, and have 9 different types of end caps. Figure 3. Example of hot runner system with 48 gates in X-layout.

4.2. Knowledge Objects

The design of a runner system starts with planning the layout, which is done manually and results in a CAD-model containing a sketch with lines schematically illustrating the runner system. The subsequent steps are automated and are visualized by the Howtomation Suite as shown in Figure 4 and Figure 5, where executed knowledge objects are green, triggered knowledge objects are yellow, and unreachable knowledge objects are red. Also, known parameters are green while unknown parameters are red. The visualization facility is used during the automation phase (design mode) but is not visible to the engineers making use of the system. Figure 4. Prior to a run, there exist three triggered knowledge objects in the knowledge base (the picture should be viewed in color). Figure 5. Knowledge objects are executed by the inference engine to turn unknown parameters into known ones. The figure shows two executed objects (1 and 3). The added numbers correspond to Table 2 (the picture should be viewed in color). The process automated at the company includes 8 knowledge objects of 5 different kinds. The first three objects perform combined selections (often referred to as configuration), applying the constraint solver included in the Howtomation Suite, which was developed based on the theories in [13, 14]. The first combined selection is based on 11 parameters and realizes the company's product catalog. In that selection, the constraint solver has to deal with 17 574 796 800 possible and impossible combinations of in-gate parameters. The validity check of the combinations is done through 7 constraints.
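As a rough illustration of such a combined selection, a small catalog can be filtered against validity constraints as in the Python sketch below. The parameter domains and the two constraints are invented for the example and are far smaller than the real catalog; the production system relies on the constraint-processing algorithms of [13, 14] rather than on naive enumeration.

```python
# Combined selection as constraint filtering over a toy in-gate catalog.
# Domains and constraints are invented; a real solver prunes domains
# instead of enumerating every combination.
from itertools import product

domains = {
    "series":  [1, 2, 3, 4, 5],
    "bushing": ["A", "B"],
    "length":  list(range(100, 601, 100)),   # mm
    "end_cap": list(range(1, 10)),
}

def valid(sel):
    # Invented validity constraints coupling the parameters.
    if sel["series"] <= 2 and sel["length"] > 400:
        return False          # short series only available in limited lengths
    if sel["bushing"] == "B" and sel["end_cap"] > 6:
        return False          # bushing B excludes some end caps
    return True

names = list(domains)
solutions = [dict(zip(names, combo))
             for combo in product(*domains.values())
             if valid(dict(zip(names, combo)))]
print(len(solutions), "valid gate configurations out of 540")
```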
The other two selections are selecting an appropriate template CAD-model for instantiation and selecting the number of heating elements. The other five knowledge objects connect to the CAD-system to retrieve information about active CAD-models, instantiate template CAD-models, update family tables, and insert components into assemblies. See Table 2 for details about the hot runner knowledge base.

Table 2. The knowledge objects of the hot runner knowledge base. Numbers refer to Figure 5.
1. Combined Selection | Displays a dialog where the engineer can configure the gates based on customer enquiries, see Figure 6.
2. File Selection | Combined selection to pick an appropriate template CAD-model.
3. Heaters | Combined selection to identify how many heating elements should be used.
4. Get Folder | Gets the directory folder of the active SolidWorks model.
5. Active Model | Gets the file path of the active SolidWorks model.
6. Instantiate Model | Creates copies of the template CAD-models in the folder of the active CAD-model.
7. Insert Hot Runners | Assembles in-gate instances into the CAD-model based on selected points or sketches (sometimes up to 48 times).
8. Update Family Table | Updates the family table to match current parameter values.

When ready, the knowledge base is executed in release mode, which means that the graphical user interface shown in Figure 4 and Figure 5 is not visible. To make the knowledge easily accessible to the engineers, a button was added to the SolidWorks user interface to execute the knowledge base. During run-time, knowledge objects may query the engineer for input: for instance, when the first knowledge object is executed, a selection dialog box shows up where user requirements are filled in, and at the end of the process the engineer is asked to indicate where in the CAD-model the gates are to be inserted. Figure 6. At run-time the knowledge objects are not visible to the engineer but may be interactive; here two dialog boxes show up during execution of the knowledge base. The execution is started from the CAD-system.

4.3. Object orientation applied to the hot runner product structure

When formalizing the knowledge and the product structure, a previously developed (huge) design table containing the complete design space (17.5·10^9 combinations) was sliced down to a set of design tables with no rules connected to the template CAD-models. The rules were put in the Howtomation Suite instead. Previously, the complete design space of the gates was instantiated together with each instance of the gates (48 times in Figure 3), which made the CAD-system break down after up to an hour of crunching. Now the gate instances resulting from the automated process are lightweight.

4.4. Geometrical problems to handle

There was one geometrical problem to overcome during the automation process, and it was encapsulated into the knowledge object that inserts the gates into the main assembly (no. 7 in Figure 5 and Table 2). When defining the layout of the hot runner system, the gate locations are defined by a 2D sketch. The lines in the sketch define the channels of the runner system, and the end points of the sketch define the locations of the gates. A routine had to be developed that loops through all the curves of the sketch to identify the endpoints.
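A minimal version of such an endpoint routine can be sketched outside any CAD API as follows. It assumes, as a simplification, that the gate locations are the free endpoints of the sketch, i.e. points used by exactly one line segment; the coordinates and tolerance are chosen arbitrarily.

```python
# Identify gate locations as the free endpoints of a 2D channel sketch.
from collections import Counter

def gate_points(segments, tol=1e-6):
    def key(p):                  # quantize so nearly equal points merge
        return (round(p[0] / tol), round(p[1] / tol))
    counts = Counter(key(p) for seg in segments for p in seg)
    return [p for seg in segments for p in seg if counts[key(p)] == 1]

# X-shaped layout: four channels meeting at the origin -> four gates.
x_layout = [((0, 0), (1, 1)), ((0, 0), (-1, 1)),
            ((0, 0), (1, -1)), ((0, 0), (-1, -1))]
print(gate_points(x_layout))     # the four outer corner points
```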
5. Conclusion

Products and their underlying knowledge change over time, and an automated engineering design system needs to be flexible so that product components and pieces of knowledge can easily be added, updated, or deleted without disrupting the operation of the system. Adopting an object-oriented perspective on the product structure and introducing knowledge objects to define autonomous chunks of knowledge have proven successful in achieving such flexibility when developing design automation systems. In this paper, a platform for automating engineering activities by implementing these ideas was presented, together with an in-production application: hot runner systems for injection molding of plastics. The knowledge objects introduced in that system support the selection of gates for the hot runners using constraint-processing algorithms. Subsequently, template CAD-models are selected, instantiated, updated and assembled. The execution of the knowledge objects is controlled by an inference engine, and to make the system easy to maintain, a graphical user interface was developed.

Acknowledgements

The work has been carried out within the project IMPACT, funded by the Knowledge Foundation (KK-stiftelsen), Sweden. The Howtomation© suite was tested at the company MasterFlow, and the author is grateful for their enthusiasm and willingness to adapt the system.

References
[1] L. Hvam, N.H. Mortensen, J. Riis, Product Customization, 2008. Available from: http://public.eblib.com/choice/publicfullrecord.aspx?p=336869.
[2] Vinnova, Challenge-driven innovation - Vinnova's new strategy for strengthening Swedish innovation capacity, Vinnova information VI 2011:07, Vinnova, Stockholm, 2011.
[3] N.N., Factories of the future: multi-annual roadmap for the contractual PPP under Horizon 2020, 2013.
[4] J. Gosling, M.M. Naim, Engineer-to-order supply chain management: A literature review and research agenda, International Journal of Production Economics, 122(2), pp. 741-754, 2009.
[5] A.A. Hopgood, Intelligent Systems for Engineers and Scientists, CRC Press, Boca Raton, 2001.
[6] J.-W. Choi, Architecture of a knowledge based engineering system for weight and cost estimation for a composite airplane structures, Expert Systems with Applications, 36(8), pp. 10828-10836, 2009.
[7] J. Wang, A cost-reducing question-selection algorithm for propositional knowledge-based systems, Annals of Mathematics and Artificial Intelligence, 44(1-2), pp. 35-60, 2005.
[8] C.Y. Baldwin, Design Rules: The Power of Modularity, MIT Press, Cambridge, 2000.
[9] F. Elgh, J. Johansson, Knowledge Object - a Concept for Task Modelling Supporting Design Automation, in: J. Cha et al. (eds.), Proceedings of the 21st ISPE International Conference on Concurrent Engineering, September 8-11, 2014, Beijing, China, IOS Press, Amsterdam, pp. 192-203, 2014.
[10] J. Johansson, A flexible design automation system for toolsets for the rotary draw bending of aluminium tubes, in: Proceedings of the 2007 ASME IDETC (DFMLC), 2007.
[11] G.F. Luger, Artificial Intelligence: Structures and Strategies for Complex Problem Solving, Addison-Wesley, Harlow, 2005.
[12] J. Johansson, F. Elgh, How to successfully implement automated engineering design systems: Reviewing four case studies, in: C. Bil (ed.), Proceedings of the 20th ISPE International Conference on Concurrent Engineering (CE2013), September 2-5, 2013, Melbourne, Australia, IOS Press, Amsterdam, pp. 173-182, 2013.
[13] R. Dechter, Constraint Processing, Morgan Kaufmann, San Francisco, 2003.
[14] T. Frühwirth, S. Abdennadher, Essentials of Constraint Programming, Springer, Berlin, 2003.
Transdisciplinary Lifecycle Analysis of Systems R. Curran et al. (Eds.) © 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-544-9-337

Generic Functional Decomposition of an Integrated Jet Engine Mechanical Sub System Using a Configurable Component Approach

Visakha RAJA a,b,1 and Ola ISAKSSON a,b
a GKN Aerospace Sweden AB, SE 461 81, Trollhättan, Sweden
b Chalmers University of Technology, Department of Product and Production Development, SE 412 96, Göteborg, Sweden

Abstract. A procedure is proposed to functionally decompose an already existing, integrated mechanical jet engine subsystem. An integrated sub system is a system where the same design object satisfies multiple functions, which is typically the case in aircraft engine sub systems and components. A generic decomposition method will allow implementation and use in automated design systems and will function as a means to build experiences into platforms. Using the procedure, an enhanced function-means tree (E F-M tree), consisting of functional requirements, means to satisfy the requirements, and constraints, was created for the integrated jet engine component. The E F-M tree is then used to generate a hierarchy of configurable components (CCs). A configurable component (CC) is a stand-alone conceptual object that contains the functional requirement, the means to satisfy the requirement (or design solution), and the constraints at a certain level of the E F-M tree. A specific CC hierarchy configuration results in the description of the product concerned. The usage of the CC hierarchy as design documentation, as well as a template from which to derive other designs, is demonstrated. Finally, limitations of describing product functional requirements using the CC method and recommendations for further development of the method are discussed.

Keywords. Integrated design, functional decomposition, enhanced function-means tree, configurable components

1. Introduction

To be competitive in the market, companies need to prepare their products for upcoming, novel developments. For a tier one aircraft sub systems supplier like GKN Aerospace, this means quickly integrating the sub systems into alternative system architectures introduced by their customers. One such situation is engine - sub system integration. Different engine architectures demand different designs of sub systems. Due to market pressure, the sub system designs must be defined and evaluated at minimal cost and within a short timeframe. To deliver solutions quickly, in-depth knowledge about the products that the company designs and manufactures is necessary. Such knowledge can be captured in a product platform.
1 Corresponding author; E-mail: visakha.raja@gknaerospace.com
In order to create a platform, it is necessary to understand the functions and interactions of important product features. This necessitates a systematic way of documenting the design: what is required of the product, what features satisfy the requirements, and what the limitations of the features are. The design documentation should be such that addition, change or deletion is possible, making it a continually expanding database.
An option should exist to link such a database to a CAD system that can generate various input-dependent configurations. Such a system will thus result in a platform for the products concerned. The first step in making such a database is to carefully identify and relate the functions the structure is intended to satisfy, the means to satisfy those functions, and constraints if any. Once the identification is done, the function-means [1] and constraints should be represented in an easily understandable form. For a modular product or an assembled product, different modules or assemblies satisfy different functions; the identification and representation of function-means is thus straightforward for such products. However, for an integrated product system, where the same part satisfies a multitude of functions, identification of function-means is difficult. This paper examines how the said identification and representation can be done for an integrated architecture product system. The structure considered in this paper is known as a "cold structure".

1.1. A cold jet engine structure

The "cold structure" used as an example in this paper is a static component in a turbofan engine, sufficiently complex to view as a sub-system by itself. The major function of this cold structure is to connect compressors and guide the flow between them. It is referred to as cold since the temperatures it is exposed to (~250 °C) are considerably lower than in other parts of the engine, such as the combustor or the turbines (~1200 °C). In a two-shaft turbofan engine, such a cold structure connects the low pressure compressor (booster) with the high pressure compressor. In a three-shaft engine, the cold structure connects the intermediate pressure compressor with the high pressure compressor. Depending on the engine architecture (two/three shaft, geared) and the engine manufacturer (OEM), the cold structure may be referred to by different names (Intermediate Casing or IMC, for example), but in this report a generic structure with the most general functionalities is considered for modeling. Figure 1 shows the position of a cold structure in a two-shaft turbofan engine. Figure 1. Position of cold structure in a two spool turbofan engine (generic figure).

2. Current practice

For mechanical component and sub-system design, engineering design activities are driven by "statements of work" documents, as agreed with the system integrator, in this case the engine OEM. Such documents state design requirements and criteria, as well as how information is shared between the integrating team and the component design team. The requirements stem from Requirements Engineering work, where practices differ yet have in common that requirements should be expressed in terms of function and performance, irrespective of a design solution. In practice, it is quite difficult to separate out the dependency on the choice of product solution, since requirements are derived from a system solution with a certain component solution in mind. From a component and sub system developer's point of view, it is highly desirable 1) to enable re-use of best practices and experiences, and 2) to customize a product solution based on the desired behavior "profile", i.e. whether to optimize for low weight or for manufacturing robustness, as an example. Such decisions are made in any design project, and typically require trading alternative concepts against each other.
Hence, it is desired to understand a component's performance and behavior in a systematic way. At present, such trades are captured in "Product Platforms" [2][3][4], which have been developed together with Chalmers over the last years. It is also noted that how to trade the performance and behavior of the targeted design solution is an integral part of the dialogue within the design team and together with the integrator. At present, the decision support for this work is limited to "structural analysis" such as Pugh matrices, QFD analysis or FMECA work [5], where the function is separated out from the design solution. There is, however, no established support for how to systematically express a component's performance and behavior from a functional perspective. F-M studies are being made, yet not as part of standard work, but rather as an analysis tool for training or for post-design engineering work.

2.1. Literature review

There exists a variety of approaches to functional description and decomposition. Aurisicchio et al. [6] classify methods of representing a product's functional breakdown into form-dependent methods and form-independent methods. Form-dependent methods are those that depend on the shape of the product (the components in the assembly). Form-independent methods do not depend on the shape of the component and propose generic solutions. In order to generate the functions, methods such as the functional analysis system technique or the subtract-and-operate procedure can be used. For an already existing product, form-dependent methods are more intuitive to use. In order to represent the functional breakdown, Aurisicchio et al. [6] propose a functional analysis diagram (FAD). The FAD aims to combine a CAD model and functional descriptions of different parts in the model using day-to-day language. Levandowski et al. [7] made an early effort to represent an integrated component using the configurable component approach. However, that study was aimed at exhibiting the capabilities of the configurable component method as a means of product platform creation, and did not focus on how to extract functions and means from an integrated component like a cold structure and represent them as a configurable component. In contrast to Levandowski et al. [7], we focused on the functional decomposition itself and the subsequent representation of the identified function-means-constraints using the configurable component approach.

3. The E F-M tree and Configurable Component (CC) concept

3.1. The enhanced function means tree

A function-means tree (F-M tree) is a graphical representation of a need and the solution that satisfies the need, for example the need to cut vegetables satisfied by a chef's knife. Andersson et al. [8] propose the enhanced function-means tree as a function-means tree enhanced to also include the constraints associated with the solutions. This enables visualizing a complete picture of the product's functional breakdown. The elements of a function-means tree according to Andersson et al. [8] considered in this paper are:
- Functional requirement: Functional requirements are what a product or element of a product does in order to contribute to a certain purpose by creating an internal or external effect [8].
- Means: Means are physical or abstract entities chosen during the design process to fulfill the functional requirements.
Means are referred to as Design Parameters (DP) in [8], though in this paper, following [7], they are referred to as Design Solutions (DS).
- Constraints: Constraints are non-functional requirements that do not have a specific solution but rather bound, and add value to, the solution space; examples are weight, cost, reliability, safety and ergonomics.
It is therefore possible to represent product information as an E F-M tree hierarchy. Figure 2 shows an E F-M tree for a kitchen knife. Figure 2. E F-M tree for a chef's knife. FR, DS and C representations adapted from [7].

3.2. The configurable component (CC) method

The CC method is described in [9]. The method was developed as a concept model for developing computer-based product platforms. In its simplest form, when the functional requirements, means and constraints at a certain level of the E F-M tree are taken together, the resulting construct is a CC. In the CC model, means (as defined in section 3.1) are termed design solutions (DS). Therefore, in its most basic form a CC contains Functional Requirements (FR), Design Solutions (DS) and Constraints (C). The E F-M tree hierarchy thus turns into a number of interconnected CCs. A CC is a stand-alone object that can call other CC objects. A particular CC can be made such that it satisfies the constraints in a certain manner, e.g. choose the 3rd dimension value from an available list of 5. The CC has then been configured according to a requirement, hence the name "configurable" component. It can also be that a design solution is assigned a certain dimension. When a CC is configured (assigned values, selections made, or similar), this is called an instantiation. A product will then be a collection of all configured (or instantiated) CCs. Multiple CCs can communicate with each other and can have interfaces. CC interfaces are not considered in this paper; details about them can be found in [9]. According to the definition in [9], inside a certain CC an FR is satisfied by one and only one DS. Multiple FRs and DSs inside a single CC would make the configuration of the component difficult, as well as decisions as to which solution needs to be applied; in other words, the CC would not exist as a stand-alone entity and configuration would be difficult. When a DS satisfies an FR, the relation (indicated by arrows) between them is termed "isb - is solved by". Similarly, the relation between a DS and a C is termed "icb - is constrained by". When a DS refers to secondary FRs, the relation is termed "rf - requires function". When a CC refers to other CCs, the relation is termed "icu - is composed using". The terms isb, icb, rf and icu can be noted in Figure 3. Other relational terms also exist in the CC definitions but are not considered in this paper. In contrast to the definition of means in section 3.1, the DSs in a configurable component are generic in nature. Multiple DSs can be solved by a component (CO) [7]. This is further exemplified in section 4. Detailed applications of the configurable component method can be found in [3] and [4]. Figure 3. Creation of CCs from E F-M tree. Adapted from [9].
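To make the construct concrete, a minimal Python sketch of a CC could look as follows. The classes and the flange example values are illustrative assumptions for this paper's discussion, not the formal CC model of [9].

```python
# Minimal sketch of the basic CC construct: one FR, solved by exactly one DS
# (isb), bounded by constraints (icb); rf and icu are kept as plain lists.
import copy
from dataclasses import dataclass, field

@dataclass
class DesignSolution:
    name: str
    constraints: list = field(default_factory=list)   # icb: is constrained by
    requires: list = field(default_factory=list)      # rf: requires function

@dataclass
class ConfigurableComponent:
    fr: str                                           # the single FR of this CC
    ds: DesignSolution                                # isb: is solved by
    uses: list = field(default_factory=list)          # icu: is composed using
    values: dict = field(default_factory=dict)        # filled when instantiated

    def instantiate(self, **values):
        """Configuring the CC (assigning values/selections) = instantiation."""
        inst = copy.deepcopy(self)                    # the CC stays a template
        inst.values.update(values)
        return inst

flange_cc = ConfigurableComponent(
    fr="act as component interface",
    ds=DesignSolution("flange", constraints=["inner diameter",
                                             "outer diameter",
                                             "flange thickness"]))
# Invented dimensions: one configured flange out of many possible ones.
front_flange = flange_cc.instantiate(inner_diameter=500, outer_diameter=560,
                                     flange_thickness=12)
```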
4. Application of configurable component method to an integrated system

For the configurable component method to be applied to an already existing product, a decomposition of the functions of the product must be performed. The method used for functional decomposition is that suggested in Ullman [10]. For a certain product, it involves disassembling each constituent component and listing its functions. For an integrated component, the same component satisfies a number of functions, and different sections of the component can be thought of as satisfying different functions. Similar to noting functions for each constituent component, functions can be noted for each section. Figure 4 shows the cold structure designed by GKN Aerospace in connection with the EU project Environmentally Friendly Aero-Engine, VITAL [11]. The most important sections, their functions, constraints and interfaces are noted down in a table. Each row of the table forms the basis for a configurable component: the functions of the section are the functional requirements, the section itself is the design solution, and the constraints are the constraints. The collection of all rows is the listing of function-means-constraints for the integrated mechanical component in its entirety. Not all functions and design solutions concerning a cold structure are shown in this paper; only a selected number of functions are shown. Figure 4. Cold structure designed for the VITAL program [11], with vanes, flanges and thrust lugs indicated. Figure does not correspond to the part marked in Figure 1.

The method used can be summarized in the following steps:
1. Prepare an exhaustive list of functions that the integrated component is required to satisfy
2. Separate the component into identifiable sections
3. Identify constraints associated with each section
4. Create a table in which each identified product section is assigned the functions that it satisfies and the associated constraints
5. Create an E F-M tree for each section connecting functional requirements, design solutions and constraints
6. Prepare configurable components from the E F-M tree table
While performing step 4, further functional requirements might be identified. These additional functional requirements should be added to the list created in step 1.

Table 1. Functions, solutions and constraints.
1. Thrust lugs | Functions: transfer thrust loads to aircraft | Constraints: length of thrust lug arm; diameter of thrust lug; thickness of thrust lug; angle of inclination of thrust lug; distance between the arms of the thrust lug
2. Flanges | Functions: act as component interfaces | Constraints: inner diameter; outer diameter; flange thickness
3. Vanes | Functions: connect flow annulus walls; transfer rotor loads to engine outer frame; induce changes in flow properties | Constraints: vane thickness; vane forming methods (cast/sheet metal); vane height; vane length (actual chord, axial chord)

The configurable component derived from the function-means-constraints for the thrust lug (first row of Table 1) is shown in Figure 5. Figure 5. CC corresponding to thrust lug section as design solution. There are a number of sections that satisfy the function "to act as component interfaces". Thus a CC is made for flanges with the applicable constraints. The instantiation of the flange CC results in different flanges for the components. Figure 6. Flange CC and its instantiation. It can be noted from Table 1 that the same sections satisfy multiple functions. As stated in section 3.2, in a CC, a functional requirement is satisfied by one and only one design solution.
Therefore, for a certain functional requirement, the closest concept name that indicates a solution is noted with its constraints, which in turn are satisfied by the section (the section becomes a CO, as stated in section 3.2) mentioned in the table. Thus, the vane section has three functions to satisfy, which are met by three design solutions expressed in general terms, which in turn are provided by the section vane (similar to what has been done for flanges, instantiation is also possible for vanes or thrust lugs, though this is not shown here). Figure 7. CC for Vane. The collection of all CCs forms the integrated component. The following sub-sections detail two application cases for such a functional decomposition.

4.1. Application Case 1: Design documentation

Along with the table that lists the functional requirements and design solutions, the collection of CCs forms a basis for documenting the design. It is possible to get an overview of the component design: which functions required the inclusion of which sections, and the constraints associated with the sections. Once the E F-M tree/CC structure is made, it is possible to visually identify the contribution of different product sections towards satisfying different functional requirements.

4.2. Application Case 2: Generation of new designs

With reference to Figure 7, the section "vane" satisfies three different functional requirements. If the vanes are manufactured using a sheet metal forming method, they may not have enough strength to carry rotor loads towards the engine outer frame. Therefore, the load transfer function needs to be satisfied using another section. As an example, strut rods can satisfy the function of load transfer, and they can be located inside the vanes. If such is the case, the CC listing can be changed as shown in Figure 8. Figure 8. Separation of function from a section and formation of new design. Only two CCs are now satisfied by the vane section; the other CC is satisfied by the strut section. Thus a function is separated from a section and assigned to a new section in order to create a new solution. In this case the functional breakdown structure forms the basis of a product platform.

5. Concluding discussion and further work

A procedure has been proposed to represent the function-means-constraints of an integrated jet engine component. The function-means-constraints representation was then used to create configurable components, which are stand-alone objects that contain a certain functional requirement, the design solution that satisfies the requirement, and the constraints associated with the design solution. The method was applied to a representative complex component, a cold structure. The generated CC structure was demonstrated as design documentation as well as a template to generate additional designs. It is possible to perform a functional decomposition using the procedure, though it is hard to perform without contextual knowledge. The decomposition could not be derived uniquely; rather, there exist alternative ways of decomposition that still follow the suggested procedure. From a system viewpoint this is a limitation. Also, some lack of clarity exists as to the level down to which the decomposition should be done. These limitations are discussed in detail below.

5.1. Way of identifying sections

Separation of the product into sections does not follow any uniquely defined rule. This was done according to already existing knowledge about the system.
It may be possible to decompose the product into sections in more than one way. Consequently, different CCs get generated. It is difficult to discern whether the CCs generated following a certain section identification scheme are correct or not. Objective criteria must exist to evaluate which way of section identification is most appropriate. This can be related to the self-sufficiency of the resulting CCs, in that all relevant constraints are identified and quantified.

5.2. Granularity for CCs

By granularity, the number of levels at which configurable components exist is meant [9]. In this paper, only one level of configurable components (CCs) was identified, though CCs can work on multiple levels. The single level of CCs comes from the single level of the E F-M tree. A guideline is desired in terms of the number of levels of the CC structure for efficient operation. In general, it can be stated that CCs should exist at the level at which the design solutions (DSs) and constraints (Cs) are sufficiently simple, and the resulting CCs are self-contained and easily re-usable. The CC method is a powerful way of representing the function-means of a structure. It captures the functions, means and constraints in an effective and re-usable manner. If made sufficiently generic (a suitable level of granularity), it is possible to generate solutions automatically corresponding to requirements arising from higher levels. As an example, the architecture for a sub-system can then be automatically derived from a CC model of an engine.

Acknowledgement

This work has been financially supported by NFFP, the national aeronautical research programme, jointly funded by the Swedish Armed Forces, the Swedish Defense Materiel Administration (FMV) and the Swedish Governmental Agency for Innovation Systems (VINNOVA).

References
[1] M.M. Andreasen, Machine Design Methods Based on a Systematic Approach—Contribution to a Design Theory, Lund University, Sweden, 1980.
[2] A. Claesson, A Configurable Component Framework Supporting Platform-Based Product Development, Chalmers University of Technology, Göteborg, 2006.
[3] M.T. Michaelis, Co-Development of Products and Manufacturing Systems Using Integrated Platform Models, Chalmers University of Technology, Göteborg, 2013.
[4] C.E. Levandowski, Platform Lifecycle Support Using Set-Based Concurrent Engineering, Chalmers University of Technology, 2014.
[5] K.T. Ulrich and S.D. Eppinger, Product Design and Development, McGraw-Hill, New York, 1995.
[6] M. Aurisicchio, R. Bracewell, and G. Armstrong, The Function Analysis Diagram: Intended Benefits and Coexistence with Other Functional Models, Artificial Intelligence for Engineering Design, Analysis and Manufacturing 27, no. 3 (2013): 249-257.
[7] C. Levandowski, M.T. Michaelis, and H. Johannesson, Set-Based Development Using an Integrated Product and Manufacturing System Platform, Concurrent Engineering 22, no. 3 (2014): 234-252.
[8] J. Malmqvist, Improved Function-Means Trees by Inclusion of Design History Information, Journal of Engineering Design 8, no. 2 (1997): 107-117.
[9] H. Johannesson and A. Claesson, Systematic Product Platform Design: A Combined Function-Means and Parametric Modeling Approach, Journal of Engineering Design 16, no. 1 (2005): 25-43.
[10] D.G. Ullman, The Mechanical Design Process, McGraw-Hill, Boston, 2010.
[11] Transport - Research & Innovation, European Commission, VITAL - Environmentally Friendly Aero-Engine. Last update 20/02/2012. http://ec.europa.eu/research/transport/projects/items/vital_en.htm
Transdisciplinary Lifecycle Analysis of Systems R. Curran et al. (Eds.) © 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-544-9-347

A Study on Marine Logistics System for Emergency Disaster Control

Heng WANG 1 and Kenji TANAKA
Department of System Innovation, The University of Tokyo

Abstract. This paper proposes a marine logistics system for emergency disaster control. We optimize marine logistics by determining the assignment of ships to transportation routes. To determine this assignment, we reduce it to an assignment problem and solve it using linear programming. Using this model, we can calculate the conditions necessary to meet the evacuees' demand for relief supplies. We use the model to evaluate marine logistics in the event of a Tonankai Earthquake, and conclude that 24 ships and 240,000 tons of relief supplies would be needed.

Keywords. marine logistics, emergency disaster control, linear programming, model, simulation

Introduction

Natural disasters have caused serious damage to society many times in the past. Even today, despite the measures we take, natural disasters still cause serious damage. Therefore, it is important to take further measures for recovery from disasters. When a natural disaster occurs, we have to provide relief supplies, such as water and food, for evacuees. Most relief supplies are transported by land logistics. On the other hand, marine logistics has some advantages in emergency transportation. It is believed that marine logistics suffers less damage than land logistics when a natural disaster occurs. When a natural disaster occurs close to the sea, we can provide relief supplies for evacuees more efficiently and effectively by using marine logistics. In this paper, we optimize marine logistics by determining the assignment of ships to transportation routes.

1. Literature review

There are some studies about land and marine logistics in times of disaster. Most of these studies only propose qualitative methods [1-3]. Although a few studies propose quantitative methods, for example optimization of land logistics in times of disaster [4-7] or marine logistics simulation in times of disaster [8], there have been no studies that optimize marine logistics in times of disaster. Therefore, this paper develops a marine logistics system for emergency disaster control, including an optimization algorithm.
1 Corresponding Author, E-mail: k.oh.920718@gmail.com.

2. Marine logistics system for emergency disaster control

2.1. Outline of model

We have developed a mathematical model of marine transportation in times of disaster. Using this model, we can calculate the conditions necessary to meet the evacuees' demand for relief supplies: for example, how many ships are needed, how many quays are needed at each port, or how to assign ships to transportation routes. Figure 1 shows the outline of this model. Figure 1. Outline of the marine logistics system for emergency disaster control. First, we pick available start ports, goal ports and ships for marine logistics. Then, we calculate the demand for relief supplies in the damaged area.
This calculation is based on the population of the damaged area, the status of disaster damage, and the necessary amount and kind of relief supplies per person. Next, we determine the assignment of ships to transportation routes based on these conditions. We assume that ships transport relief supplies by a shuttle service between a start port and a goal port. A start port is a port in a non-damaged area, and a goal port is a port in the damaged area. We optimize marine logistics by determining the assignment of ships to transportation routes. To determine this assignment, we reduce it to an assignment problem and solve it using linear programming. From this optimization, the conditions necessary to meet the evacuees' demand for relief supplies can be calculated: for example, how many ships are needed, how many quays are needed at each port, or how to assign ships to transportation routes. Then, we carry out a marine logistics simulation based on the calculated conditions. After the simulation has run, we evaluate whether the evacuees' demand for relief supplies can be met under these conditions, and we decide the necessary conditions, such as how many ships are needed and how to assign ships to transportation routes.

2.2. Pick up available ports and ships

In order to use marine transportation in times of disaster, we first pick the available ports and ships. We decide the area to which we have to transport relief supplies, judging from the status of disaster damage. Then, within the damaged area, we choose the area that faces the sea and where marine transportation can be used. We pick the ports in this area as the goal ports. After picking the goal ports, we pick the start ports: the ports which are not damaged by the disaster and which are around the goal ports. Having picked start ports and goal ports, we can define the transportation routes. When we define n_start_port as the number of start ports and n_goal_port as the number of goal ports, the number of transportation routes is n_start_port × n_goal_port. We confirm the distance of each transportation route. Then, we pick the available ships. It is said that ferries and roll-on/roll-off ships are suited to marine transportation in times of disaster; therefore, we use these kinds of ships in our model. We choose ferries and roll-on/roll-off ships which operate regularly in Japan, and confirm their capacity and speed.

2.3. Calculate demand on relief supplies

We can calculate the demand for relief supplies in the damaged area. In our model, we define the demand of each goal port as the demand of the evacuees around that goal port for relief supplies. First, we pick supplies which may be necessary for evacuees in times of disaster. We make a list like Table 1, which shows the kind and amount of relief supplies necessary for evacuees.

Table 1. List of relief supplies.
Kind of goods | Necessary amount (per person per day) | Target age | Type of goods
A | M_A1 | Division 1 | α
B | M_B2, M_B3 | Division 2, Division 3 | α
C | M_C2 | Division 2 | β
... | ... | ... | ...

Kind of goods means the kind of relief supply which may be necessary for evacuees in times of disaster, such as water and food. Necessary amount means the amount of the relief supply per person and per day; we define M_ij as the weight of relief supply i necessary for age division j per person and per day. Target age indicates which age divisions need that kind of relief supply. Type of goods means the type of the relief supply: we divide relief supplies into two types according to the way the demand occurs. Demand for type α relief supplies occurs regularly, such as water and food. On the other hand, demand for type β relief supplies occurs just after the disaster, such as blankets and toilets.
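The paper does not state the demand calculation as a formula. Under the definitions above, a plausible formalization, assuming P_{l,j} denotes the number of evacuees of age division j around goal port l and d the number of days considered, would be:

```latex
% Demand of goal port l for relief supply i over a period of d days.
% Assumption: type-alpha demand accrues daily, type-beta demand occurs once.
D_{l,i} =
\begin{cases}
  d \sum_{j} P_{l,j}\, M_{ij} & \text{if supply } i \text{ is of type } \alpha,\\
  \sum_{j} P_{l,j}\, M_{ij}   & \text{if supply } i \text{ is of type } \beta.
\end{cases}
```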
2.4. Determine the assignment of ships to transportation routes

We determine the assignment of ships to transportation routes based on the port and ship conditions and the demand for relief supplies. In our model, we assume that ships transport relief supplies by a shuttle service between a start port and a goal port, and we optimize marine logistics by determining the assignment of ships to transportation routes. To determine this assignment, we reduce it to an assignment problem and solve it by means of linear programming. First, we define the objective function and the variables of the assignment problem. We define x_ij as the decision variable and f as the objective function. If ship i is assigned to transportation route j, x_ij is equal to 1; if ship i is not assigned to transportation route j, x_ij is equal to 0. With the variables defined in this way, Eq. (1) gives the definition of the objective function. The best assignment maximizes the objective function; therefore, we determine x_ij so that the objective function reaches its maximum. Here, a_ij is the transportation amount when ship i is assigned to transportation route j.

f = \sum_{i=1}^{n_{ship}} \sum_{j=1}^{n_{route}} a_{ij} x_{ij}   (1)

Next, we define the constraints. To determine the best assignment, we define three constraints:
- One ship is only assigned to one transportation route
- α_k to β_k ships are assigned to the transportation routes related to start port k
- γ_l to δ_l ships are assigned to the transportation routes related to goal port l
The first constraint means that one ship is never assigned to several transportation routes. The second constraint means that there is a proper number of ships to be assigned to a start port; this is because the number of quays differs at every start port. The third constraint means that there is a proper number of ships to be assigned to a goal port; this is because the demand for relief supplies differs at every goal port. Therefore, the best assignment is not the one which simply maximizes the transported amount of relief supplies, but the one which is matched to the demand of the goal ports. These three constraints are expressed by Eqs. (2) to (4), where the routes are numbered consecutively by goal port, with the start port varying within each block of n_start_port routes.

\sum_{j=1}^{n_{route}} x_{ij} \le 1   (2)

\alpha_k \le \sum_{i=1}^{n_{ship}} \sum_{j=0}^{n_{goal\_port}-1} x_{i(k + j \cdot n_{start\_port})} \le \beta_k   (3)

\gamma_l \le \sum_{i=1}^{n_{ship}} \sum_{j=(l-1) \cdot n_{start\_port}+1}^{l \cdot n_{start\_port}} x_{ij} \le \delta_l   (4)

n_route: number of transportation routes
n_ship: number of available ships
n_start_port: number of start ports
n_goal_port: number of goal ports
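As an illustration, the assignment optimization can be set up with an off-the-shelf LP solver. The sketch below uses invented data (6 ships, 2 start ports, 2 goal ports, and bounds alpha_k = gamma_l = 1, beta_k = delta_l = 4) and solves the LP relaxation with SciPy; the paper's own solver and data are not given, and for guaranteed 0/1 assignments a MILP solver could be substituted.

```python
# Ship-to-route assignment as an LP. Route index j = l * n_start + k
# enumerates (start port k, goal port l), matching Eqs. (2)-(4).
import numpy as np
from scipy.optimize import linprog

n_ship, n_start, n_goal = 6, 2, 2
n_route = n_start * n_goal
a = np.random.default_rng(0).uniform(50, 150, (n_ship, n_route))  # tons/period

c = -a.flatten()                        # maximize f  <=>  minimize -f
A_ub, b_ub = [], []

for i in range(n_ship):                 # Eq. (2): each ship on <= 1 route
    row = np.zeros(n_ship * n_route)
    row[i * n_route:(i + 1) * n_route] = 1
    A_ub.append(row)
    b_ub.append(1)

def bound_ships(route_ids, lo, hi):     # Eqs. (3)/(4): lo <= #ships <= hi
    row = np.zeros(n_ship * n_route)
    for i in range(n_ship):
        for j in route_ids:
            row[i * n_route + j] = 1
    A_ub.append(row)
    b_ub.append(hi)                     # upper bound
    A_ub.append(-row)
    b_ub.append(-lo)                    # lower bound, negated

for k in range(n_start):                # routes leaving start port k
    bound_ships([k + l * n_start for l in range(n_goal)], lo=1, hi=4)
for l in range(n_goal):                 # routes arriving at goal port l
    bound_ships(list(range(l * n_start, (l + 1) * n_start)), lo=1, hi=4)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(0, 1))
print(res.x.reshape(n_ship, n_route).round(2), "total:", -res.fun)
```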
2.5. Marine logistics simulation

2.5.1. Outline of marine logistics simulator

We carry out a marine logistics simulation based on the assignment calculated by the method explained in Section 2.4. The conditions necessary to achieve sufficient marine logistics in times of disaster, for example how many ships are needed, can be determined by this simulation. Figure 2 shows the outline of the marine logistics simulator. Figure 2. Outline of the marine logistics simulator. First, the simulator reads the data necessary for the simulation: the port data, ship data, and demand data determined in Sections 2.3 and 2.4. Next, we determine whether the simulator uses the changing route algorithm. The changing route algorithm is an optimization algorithm to reduce the number of ships necessary for transporting relief supplies; it is explained in detail in Section 2.5.2. Then, the simulator determines the first assignment of ships to transportation routes by means of linear programming. If we determine not to use the changing route algorithm, the simulator carries out the simulation based on this assignment until the end of the simulation period. If we determine to use the changing route algorithm, the simulator carries out the simulation based on this assignment until the time when the simulator judges that the assignment should change. Then the simulator determines another assignment of ships to transportation routes based on the demand and supply of every goal port at that time, and carries out the simulation again based on the newer assignment until the simulator judges that the assignment should change again. The simulator repeats this processing until the end of the simulation period. When the simulation ends, we obtain the output data: the assignment of ships to routes, the supply of relief supplies at every goal port, and the stock of relief supplies at every start port. After obtaining the simulator data, we change the number of ships and the simulator carries out the simulation again. Through this loop processing, we can obtain results for all combinations of number of ships and changing route pattern.

2.5.2. Changing route algorithm

The assignment of ships to transportation routes is recalculated whenever the number of goal ports changes. A change in the number of goal ports happens when enough relief supplies have been transported to a goal port: after that time, the ships assigned to this goal port can be assigned to another goal port. Therefore, transport efficiency can be improved by using this algorithm. The new assignment is determined based on the demand and the supply of relief supplies at that time: the greater a goal port's remaining demand for relief supplies, the more ships are assigned to it.

2.6. Decide necessary condition

The result data of assignment, stock, and supply are acquired from the marine logistics simulation. We decide the necessary conditions by evaluating these data. To evaluate the simulation results, we use the fill rate of relief supplies as the key performance indicator. We focus on the relationship between the number of ships used for transportation and the fill rate of relief supplies. Next, we decide the kind and amount of stock of the necessary relief supplies. To carry out the assignment calculated by the simulator, a shortage of relief supplies must not occur at any start port until the end of the simulation period. Therefore, we have to store enough relief supplies to cover the necessary kind and amount at each start port. In our model, the necessary relief supplies at a start port are defined as the kind and amount of relief supplies exported from the start port during the simulation period.

3. Case study

3.1. A scenario of simulation

This case study applies the marine logistics system to calculate the conditions necessary to carry out marine logistics in the event of a Tonankai Earthquake. The Tonankai Earthquake is a serious earthquake which is believed likely to occur in the near future in Japan.
Figure 3 shows the expected distribution of seismic intensity in the event of a Tonankai Earthquake. We calculate the conditions necessary for transporting relief supplies to the region which is seriously damaged and faces the sea. Figure 3. Expected distribution of seismic intensity in the event of a Tonankai Earthquake.

3.2. Carrying out marine logistics simulation

First, we pick the goal ports and the start ports. We pick twelve goal ports and four start ports based on the expected distribution of seismic intensity. In addition, we calculate the distance from each start port to each goal port. Next, we calculate the demand for relief supplies. Table 2 shows the kind and amount of relief supplies necessary for evacuees. In this table, goods for children include milk powder and babies' diapers, and goods for aged people include food for the aged and diapers for the aged.

Table 2. List of necessary relief supplies.
Kind of goods | Necessary amount (per person per day, ton) | Target age | Type of goods
Water | 0.003 | Children, Adults, Aged people | α
Food | 0.00075 | Children, Adults | α
Goods for children | 0.0013 | Children | α
Goods for aged people | 0.00105 | Aged people | α
Blanket | 0.002 | Children, Adults, Aged people | β
Temporary toilet | 0.0016 | Children, Adults, Aged people | β

Then, we estimate how many evacuees will arise in each region based on the scenario. From the expected evacuees and the list of relief supplies, we calculate the demand for relief supplies in each region. After the port and ship conditions and the demand for relief supplies are determined, we carry out the marine logistics simulation. The simulation period is set to seven days. We calculate the minimum number of ships necessary to cover the demand in this period. Figures 4 to 6 show the state of demand, supply and transportation routes in the simulation (the changing route algorithm is used in these figures). Figure 4. The state immediately after the earthquake. Figure 5. The state four days after the earthquake. Figure 6. The state seven days after the earthquake.

3.3. Result of simulation

After the simulation, we evaluate the result. Figure 7 shows the relationship between the number of ships and the fill rate of relief supplies. We can see that 35 ships are needed for a 100 percent fill rate when the changing route algorithm is not used. On the other hand, only 24 ships are needed for a 100 percent fill rate when the changing route algorithm is used. This result suggests that the changing route algorithm decreases the number of ships necessary for transportation. Figure 7. Difference in the number of necessary ships with and without the changing route algorithm. Next, we determine the kind and amount of stock of relief supplies. Figure 8 shows the necessary stock of relief supplies at each start port when 24 ships are used under the changing route algorithm. Figure 8. Necessary stock of relief supplies in each start port.

4. Conclusion

We have designed a marine logistics system for emergency disaster control. This system includes the changing route algorithm, which decreases the number of ships necessary for the transportation of relief supplies.
We apply this system to calculate the conditions necessary to carry out marine logistics in the event of a Tonankai Earthquake. The result of the simulation shows that 24 ships and 240,000 tons of relief supplies would be needed.

References
[1] R.A. Hadiguna, I. Kamil, A. Delati, R. Reed, The Tohoku disasters: Chief lessons concerning the post disaster humanitarian logistics response and policy implications, International Journal of Disaster Risk Reduction, 9 (2014), 38-47.
[2] R. Kaynak, A.T. Tuğer, Coordination and collaboration functions of disaster coordination centers for humanitarian logistics, Procedia - Social and Behavioral Sciences, 109 (2014), 432-437.
[3] M.A.G. Bastos, V.B.G. Campos, R.A.M. Bandeira, Logistic processes in a post-disaster relief operation, Journal of Operations Management, 111 (2014), 1175-1184.
[4] T. Namimatsu, H. Tamura, M. Sengoku, S. Shinoda, T. Abe, Analysis of algorithms using the set division on a delivery problem, IEICE Technical Report, 97(403) (1997), 41-48.
[5] H. Kuse, Y. Yano, Logistics Planning for Disaster Prevention, City Planning Review, 60(3) (2011), 87-90.
[6] Y. Senda, A. Suzuki, The 2013 Spring National Conference of the Operations Research Society of Japan, Abstracts (2013), 98-99.
[7] Y. Lin, R. Batta, P.A. Rogerson, A. Blatt, M. Flanigan, A logistics model for emergency supply of critical items in the aftermath of a disaster, Socio-Economic Planning Sciences, 45(4) (2011), 132-145.
[8] T. Majima, D. Watanabe, K. Takadama, M. Katsuhara, A Development of Transportation Simulator for Relief Supply in Disasters, SICE Journal of Control, Measurement, and System Integration, 6(2) (2013), 131-136.

Transdisciplinary Lifecycle Analysis of Systems R. Curran et al. (Eds.) © 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-544-9-357

A Guideline for Adapted System Dynamics Modeling of Rework Cycles in Engineering Design Processes

Elisabeth SCHMIDT 1, Daniel KASPEREK and Maik MAURER
Institute of Product Development, Technische Universität München, Germany

Abstract. A substantial variety of rework cycle system dynamics (SD) models that are capable of simulating the influence of rework on project parameters exists in the literature. Although all of them use the rework cycle concept, these models vary in their structure and quantification, as they are adapted to capture specific process features. The difficulty of grasping the variety of diverse rework cycles and the variability in modeling goals are obstacles for modelers in finding the right model. The aim of this article is to provide a guideline that will help developers create adapted rework cycles for engineering design processes. Through literature research on different rework cycle SD models, specific elements are classified in a guideline. With knowledge of the variety of rework cycle SD models and their capabilities, recommendations can be made to developers on how to build a model that captures the necessary requirements for their research focus.

Keywords. Engineering Design Process, System Dynamics, Rework Cycle

Introduction

Engineering design processes (EDP) are characterized by their dynamic and creative behavior and their unpredictable results.
For the planners and managers within companies, it is of interest to learn more about the dynamic process behavior in order to distribute resources appropriately, as well as for cost and schedule calculation [1]. The use of SD provides a way to simulate processes and thus enables modelers to foresee the process behavior. In order to best reproduce the process behavior, modelers include certain features in their models to reflect special process characteristics. This strategy allows for the pursuit of various research objectives and will be referred to below as adapted modeling.

Table 1. Purposes of rework cycles allocated to the references.
Purpose of rework cycle | References
Phase concurrency | [2-13]
Human factors | [4; 12-18]
Staffing | [4; 12; 14; 16-21]
Outsourcing | [17]
Testing | [8; 22]
Tipping point | [21; 23]
Cost and schedule foresight | [24; 25]
Process improvement | [26; 27]

1 Corresponding Author, E-mail: schmidte@mit.edu.

Table 1 gives an overview of the possible purposes of rework cycles that can be found in the SD literature. Developers may include structures to capture the impacts of project staffing, phase concurrency, testing processes and more. There are rework cycles which pursue more than one of the purposes listed in Table 1 and thus contain several structures to simulate these features. Depending on the selection of purposes, the rework cycle needs to be adapted to enable the simulation of these features. Adaptations of SD models vary in their size, structure and quantification based on their application. Due to the variety of rework cycles and the variability in modeling goals, modelers have difficulties finding the right model for their process. A guideline that is specifically directed at this modeling problem cannot be found within the literature research of this study. Therefore, we propose a guideline that supports modelers in choosing and adapting existing rework cycle concepts for their particular needs.

1. Literature-based guideline for adapting rework cycles

The developed guideline presents different structures that modelers use to technically implement certain behaviors of rework cycles. These structures are referred to in this paper as adaptations. The adaptations are added to a basic rework cycle model in order to generate a model that simulates the considered EDP more accurately. The adaptations are summarized in the adaptation scheme shown in Figure 1. Figure 1. Rework cycle adaptation scheme. Based on the minimal rework cycle, there are two ways to adapt the rework cycle: SFC adaptations and causal link adaptations. The adaptation scheme is built on a basic rework cycle model; in Figure 1, this model is located in the middle. It is characterized by its simple structure, which contains the minimum number of independent stocks to model processes with rework [22]. Moreover, the model is considered simple because of its constant rates.
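To fix ideas, the basic model can be simulated in a few lines. The following Python sketch uses invented parameter values and simple Euler integration; the stock and rate names mirror the generic rework cycle, not any specific published model.

```python
# Minimal sketch of the basic rework cycle: three stocks and constant rates.
dt = 0.25                       # weeks
work_to_do, undiscovered_rework, work_done = 100.0, 0.0, 0.0
completion_rate = 4.0           # tasks/week attempted
error_fraction = 0.2            # share of finished tasks that is flawed
discovery_rate = 1.0            # tasks/week of rework discovered

t = 0.0
while work_to_do + undiscovered_rework > 0.01 and t < 200:
    accomplished = min(completion_rate, work_to_do / dt)
    discovered = min(discovery_rate, undiscovered_rework / dt)
    # Flows: flawed work goes to undiscovered rework, the rest is done;
    # discovered rework returns to the work-to-do stock.
    work_to_do += (discovered - accomplished) * dt
    undiscovered_rework += (accomplished * error_fraction - discovered) * dt
    work_done += accomplished * (1 - error_fraction) * dt
    t += dt

print(f"project finished after ~{t:.1f} weeks, {work_done:.1f} tasks done")
```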
The adaptation scheme divides the SFC implementations into three categories:

A1) Consideration of intermediate states
A2) Consideration of important additional co-processes
A3) Consideration of the influence of process concurrency in multi-phase projects

Another means for adapting rework cycle SD models is causal link expansion (Figure 1, right). Certain process behaviors can be represented by adding variables and the dependencies between them. In most cases, additional variables are used to influence the rates of rework cycles. Using the initial model in the middle of Figure 1, there are three rates that can be affected by the values of other factors:

B1) Variable work accomplishment rate
B2) Variable rework generation rate
B3) Variable rework discovery rate

Alongside these six ways of adapting SD models, there are also other ways to model certain features which do not fit into one of the presented categories (e.g. in [20; 26]). However, these adaptations appear only in sporadic cases and thus are not included in the scheme. The SFC and causal link adaptations, which were assessed as appearing frequently within the research of this study, are explained in the following.

1.1. Consideration of intermediate states (A1)

In many rework cycles, more states (described by stocks) are considered than the three basic ones illustrated in Figure 1. Along with additional stocks, new flows are also included, which can be useful for modeling different rates of process steps. An often used intermediate stock is "Work in Progress". Figure 1 shows, in the top left corner (A1), a rework cycle which includes this stock. As a consequence, the rework rate can differ from the original completion rate. In some processes this is a benefit, since the assumption of a constant rate for both original work and rework would be an improper simplification. Table 2 lists the references which include intermediate states similar to the one under A1) in Figure 1. As the naming differs in most cases, Table 2 includes a column listing the names of the additional stocks.

Table 2. Intermediate states considered in rework cycles.

  Intermediate states                            References
  Tasks Completed not Checked, Tasks Approved    [2]
  Tasks Pending Test                             [22]
  Work in Progress                               [5]
  Tasks to Be Reworked                           [7; 8]
  Tasks in Testing                               [20]
  Work in Quality Assurance                      [28]
  Quality Assurance Backlog                      [21]
  Known Rework                                   [12; 24; 29; 30]

The modeler of an SD model is advised to rethink the structure of the observed EDP and which states need to be captured in the rework cycle. Each crucial state has to be modeled by one stock, and distinct process steps by distinct flow rates. The inclusion of the corresponding number of stocks and flows allows for a more precise modeling of the process. This is especially useful for cases in which the rates of the process steps are significantly different and the use of an average rate for the entire workflow would be insufficient. Examples of such different rates are resource-requiring and non-resource-requiring flows [5]. Another advantage of distinct stocks becomes apparent with regard to evaluation purposes: distinct stocks allow for separate monitoring of the progress of different process parts, so the causes of delays to the project end date can be located more easily. These reasons advocate the use of intermediate states in SD models.
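As a sketch of adaptation A1, the basic loop above can be extended with a "Work in Progress" stock, so that the rate at which tasks are started and the rate at which they are finished can differ; the parameter values are again illustrative assumptions.

```python
# Rework cycle with an intermediate "Work in Progress" stock (adaptation A1).
DT = 0.25
QUALITY = 0.8
START_RATE = 12.0     # tasks moved into work in progress per week
FINISH_RATE = 8.0     # tasks finished (and checked) per week
DISCOVERY_RATE = 4.0

work_to_do, wip, undiscovered, accomplished = 100.0, 0.0, 0.0, 0.0
t = 0.0
while work_to_do + wip + undiscovered > 0.5:
    start = min(START_RATE, work_to_do / DT)
    finish = min(FINISH_RATE, wip / DT)        # distinct rate for this step
    discovery = min(DISCOVERY_RATE, undiscovered / DT)
    work_to_do += (discovery - start) * DT
    wip += (start - finish) * DT
    undiscovered += (finish * (1 - QUALITY) - discovery) * DT
    accomplished += finish * QUALITY * DT
    t += DT
print(f"finished after ~{t:.1f} weeks; {accomplished:.0f} tasks accomplished")
```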
1.2. Consideration of important additional co-processes (A2)

Some researchers use co-flow structures to consider the dynamics of auxiliary side processes such as hiring, training or testing. The inclusion of these co-flow structures allows for a more precise modeling of the observed EDP, because the interactions of the side processes and the rework cycle are captured in such models. As shown in Figure 2, the side processes are modeled with co-flows that influence the rework cycle; in Figure 2, these side processes are staffing and testing. The available staff at a certain point can be calculated as the accumulated difference between the "Hiring" and "Turnover" rates. In this example the staff level influences the "Work Accomplishment" rate, such that the rate increases when more employees are available to do the work. This influence is graphically represented by an information flow from the stock to the valve. In the other co-flow of Figure 2, tests are processed from the "Test to Do" stock over a "Testing Rate" to the "Test Done" stock. The number of completed tests impacts the "Rework Discovery" rate.

Figure 2. SD models of a rework cycle and co-flows for staffing and testing.

The co-processes need to be included in the SD model if the main process is affected and the co-processes show dynamic behavior during the execution of the main process. When comparing the behavior of SD models with and without these side processes, the simulated project durations drift further apart the more influence the side processes have on the rates of the main SFC. For example, in projects in which only a few new employees are hired to contribute to the work accomplishment, the decrease in simulated project duration compared to the basic model is smaller than in projects with many additional employees. Hence, the benefit of modeling the co-flow correlates with the impact of the hiring side process. Some developers also integrate co-flows in order to gain additional information during the simulation. In those cases the co-flow does not necessarily affect the rework cycle, such as the change co-flow in [10], which is included to calculate the resulting costs but does not impact the rework cycle.

Table 3. Co-processes considered in rework cycles as parallel co-flows.

  Co-processes          References
  Testing               [8; 22]
  Error rectification   [7]
  Phase task change     [16]
  Hiring                [12-15; 17; 18]
  Training              [13; 15; 17; 18]
  Change generation     [6; 10]
  Change discovery      [6]
  Completed work        [17]
  Expended effort       [17]
  Error generation      [13]

Table 3 lists various co-processes that are included in existing SD models, allocated to their authors. This list serves as a guide for future modelers as to which side processes may be included in a model. Depending on the side processes of the observed EDP and their influence on the rework cycle, the modeler needs to include co-flows in the SD model.
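A sketch of adaptation A2 under the same illustrative assumptions: a staffing co-flow whose stock (available staff) scales the work accomplishment rate, mirroring the hiring/turnover co-flow of Figure 2.

```python
# Rework cycle with a staffing co-flow (adaptation A2): the staff stock,
# accumulated from hiring minus turnover, drives the accomplishment rate.
DT = 0.25
QUALITY = 0.8
TASKS_PER_PERSON = 0.5        # tasks per person per week (illustrative)
HIRING, TURNOVER = 1.0, 0.4   # people per week
DISCOVERY_RATE = 4.0

staff = 10.0
work_to_do, undiscovered, accomplished = 100.0, 0.0, 0.0
t = 0.0
while work_to_do + undiscovered > 0.5:
    staff += (HIRING - TURNOVER) * DT                     # staffing co-flow
    completion = min(staff * TASKS_PER_PERSON, work_to_do / DT)
    discovery = min(DISCOVERY_RATE, undiscovered / DT)
    work_to_do += (discovery - completion) * DT
    undiscovered += (completion * (1 - QUALITY) - discovery) * DT
    accomplished += completion * QUALITY * DT
    t += DT
print(f"~{t:.1f} weeks, with {staff:.1f} staff at the end")
```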
1.3. Consideration of the influence of process concurrency in multi-phase projects (A3)

Another important SFC adaptation of the basic rework cycle is the consideration of the effects of process concurrency. This feature often appears in models of multi-phase projects, where the impact of iteration caused by releasing flawed work to subsequent phases is considered. This relationship can also be called coordination. In the models of [2; 16], coordination is represented by including a "Coordination" stock in the rework cycle. Unlike the additional stocks for intermediate states introduced above, the "Coordination" stock is not part of the process chain but generates an additional iteration loop. It enables flawed tasks from other phases to accumulate in this stock until they can be coordinated; in the real process, this could happen, for example, in meetings with the responsible employees of the involved phases [2]. The authors of [4; 6; 8; 9] dispense with the coordination stock and use only a corruption flow to model the influence of process concurrency. Figure 3 shows the rework cycle of [4], which iterates flawed work from the "Accomplished Work" stock to the "Remaining Work" stock via a corruption flow. The rate of the flow is calculated with the variables "Cor FW task" and "Cor BW task", which quantify the rework from other process steps flowing either to the particular subsequent (forward corruption) or previous (backward corruption) process step.

Figure 3. Rework cycle with corruption flow (adapted from [4]).

The inclusion of a coordination or corruption flow is necessary when modeling multi-phase processes in which the phases are worked on concurrently. In the case of such phase overlap, the next phase starts before the previous phase has been completed. Flawed tasks of the previous phase may therefore not have been discovered yet and are nevertheless released to the next phase. When the flaw is eventually discovered, the work unit needs to be sent back to the phase in which the flaw was generated. For this reason the basic rework cycle, with its single rework discovery flow, is not sufficient: it only accounts for phase-internal flaws. A second iteration loop is needed to consider the rework of tasks whose flaws were discovered in a different phase.

Table 4. SFC implementation of coordination in multi-phase projects.

  SFC adaptations               References
  Coordination stock & flow     [2; 16]
  Corruption flow               [4; 6; 8; 9]

An overview of existing SFC adaptations for modeling the influence of phase concurrency is provided in Table 4.
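A sketch of adaptation A3, modeled loosely after the corruption-flow structure of [4]: flawed work returns from the "Accomplished Work" stock to the "Remaining Work" stock through a second iteration loop. The corruption fraction is an illustrative assumption.

```python
# Rework cycle with a backward corruption flow (adaptation A3): work released
# to a concurrent downstream phase flows back when flaws are discovered there.
DT = 0.25
WORK_RATE = 10.0            # tasks per week moved to accomplished work
BACKWARD_CORRUPTION = 0.02  # weekly fraction of released work sent back

remaining, accomplished = 100.0, 0.0
t = 0.0
while remaining > 1.0 and t < 80:
    completion = min(WORK_RATE, remaining / DT)
    corruption = accomplished * BACKWARD_CORRUPTION   # second iteration loop
    remaining += (corruption - completion) * DT
    accomplished += (completion - corruption) * DT
    t += DT
print(f"phase essentially complete after ~{t:.1f} weeks")
```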
1.4. Variable work accomplishment rate (B1)

The work accomplishment during EDPs can be influenced by various factors. In SD models, these influences are modeled by causal links which affect the work accomplishment rate. For this reason, the adaptation scheme allocates the modeling of variable work accomplishment to the causal link adaptations. The influencing variables are listed and categorized in Table 5. The company characteristic quality is a constant factor that defines the percentage of flawless work and thus the portion of work that moves from the "Work to Do" stock to the "Work Accomplished" stock. The management lever time to completion is a factor that arises from the project schedule. The allocated resources directly influence the work accomplishment, whereas pressure, overtime and organizational changes define productivity and therefore affect the accomplishment of work indirectly. Similarly, the human factors influence productivity: different skill levels and morale are characteristics of individuals that impact work completion, and the performance of a team varies over time due to the development of synergies and with group size.

Table 5. Influences on work accomplishment and associated references.

  Category                Factor                                   References
  Company characteristic  Quality                                  [2; 4; 8; 12; 17; 29; 30]
  Management levers       Pressure                                 [5; 12; 30]
                          Resources                                [4; 5; 7; 19; 21; 22; 28-30]
                          Overtime                                 [5; 12; 30]
                          Organizational changes                   [12]
                          Time to completion                       [17; 30]
  Human factors           Skill level                              [15; 17; 30]
                          Morale                                   [12; 30]
                          Synergy                                  [12]
                          Group size                               [4; 12]
  Process factors         Available work                           [2; 4; 5; 7; 8; 10; 13; 28; 30]
                          Completed work in previous phase         [8]
                          Undiscovered rework in previous phase    [10]

Within the category of process factors, start conditions and iteration effects are differentiated. Start conditions represent the prerequisites for a phase or task to start; for example, work completion cannot start unless there is work available. Other factors, such as the amount of undiscovered rework in a previous phase, continuously affect the work accomplishment throughout the whole process. The necessity of each factor listed in Table 5 in an SD model differs. The factor quality has to be included in every model because it defines the share of flawed work units that creates the rework cycle in the first place. The other factors listed in Table 5 are optional, and the benefit of considering them in the causal links of SD models depends on their influence on the rates. This influence can be significant: as stated earlier, pressure, overtime and organizational changes as well as the human factors influence productivity, and the resulting productivity has been found to vary by a factor of two [12]. This productivity factor, multiplied with the work accomplishment rate, can change the process duration significantly. In such cases an adapted work accomplishment rate is better suited than a constant rate as in the basic model. Values for the quantification of the listed factors can be found in the literature referenced in Table 5.

1.5. Variable rework generation rate (B2)

Rework generation within an EDP in most cases cannot be described with a constant rate. Therefore many SD model developers include additional variables to capture the varying behavior of rework generation. Table 6 brings together various influences on the generation of rework.

Table 6. Influences on rework generation and associated references.

  Category                Factor                                   References
  Company characteristic  Quality                                  [2; 4; 5; 7; 8; 10; 12; 17; 22; 29; 30]
                          Work completion                          [4; 7; 8; 10; 12; 17; 22; 29; 30]
                          Target design maturity                   [5]
  Project factor          Believed design maturity                 [5]
  Management lever        Time to completion                       [17]
  Iteration effect        Undiscovered rework in previous phase    [10]
  Other                   Obsolescence                             [12]

Table 6 shows that rework generation depends on work completion. The reason for this relationship is that the rework generation rate is often calculated as the product of the work completion (or accomplishment) rate and 100% minus the quality. Another company characteristic is target design maturity. This variable is used in [5] and compared with the project factor believed design maturity in order to trigger iteration in the rework cycle as well as the start of a subsequent phase. The value of the variable time to completion terminates the process in the example of [17], and with it the rework generation corresponding to the work accomplishment ends. The model of [10] captures the fact that the amount of undiscovered rework in the previous phase increases the error generation in the next phase.
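A sketch of adaptation B2, using the product form stated above (accomplishment rate times 100% minus quality) and, as an illustrative assumption of our own, a quality that degrades with schedule pressure.

```python
# Variable rework generation (adaptation B2): the rate is the product of the
# accomplishment rate and (1 - quality); here quality erodes under pressure.
def rework_generation(accomplishment_rate: float, base_quality: float,
                      schedule_pressure: float) -> float:
    # Pressure in [0, 1] linearly erodes quality by up to 20 points here.
    quality = max(0.0, base_quality - 0.2 * schedule_pressure)
    return accomplishment_rate * (1.0 - quality)

print(rework_generation(10.0, base_quality=0.9, schedule_pressure=0.0))  # 1.0
print(rework_generation(10.0, base_quality=0.9, schedule_pressure=1.0))  # 3.0
```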
Another factor that can initiate rework is obsolescence, meaning that already accomplished work becomes obsolete and thus worthless [12]. As explained in Section 1.4, the benefit of adding these factors to the causal links influencing the rework generation rate is case-dependent. One example that demonstrates the importance of an adapted rework generation rate through causal links can be found in the study of [10]. Through optimization, the authors found that undiscovered rework in the previous phase can increase the rework generation in the following phase by up to 50%. With such a strong dependency, neglecting the factor as in the basic model would yield improperly shortened project durations in simulations compared to the adapted model.

1.6. Variable rework discovery rate (B3)

The discovery of errors in tasks that have been worked on is usually modeled as variable. Many authors include dependencies on other variables in their models to achieve a better approximation of the process they want to observe. Factors found in the literature are listed in Table 7 and allocated to different categories.

Table 7. Influences on rework discovery and associated references.

  Category                 Factor                               References
  Company characteristics  Probability of discovering errors    [2; 8; 22]
                           Undiscovered rework                  [4; 8; 10]
                           Quality                              [2; 8; 22]
                           Quality assurance / testing          [2; 8; 9; 21; 22]
  Project factors          Problem complexity                   [5]
                           Perceived progress                   [12; 17]
  Management levers        Pressure                             [21]
                           Resources                            [28]
  Iteration effects        Dependence on previous phase         [2; 4; 7; 8]
                           Rework completion                    [8]
                           Quality assurance / testing          [2; 8]
                           Progress                             [10]
  Other                    Time                                 [4; 5; 10]

Company characteristics are, for example, the probability of discovering errors and the amount of undiscovered rework. The higher these variables are, the more rework is discovered within a given time unit. The higher the probability of discovering errors, the longer the project duration, but also the better the quality of the completed work. The company might also conduct testing or quality assurance processes, which likewise influence error finding. The management lever pressure is modeled so as to increase the fraction of rework, and thus the discovery rate, in the rework cycle of [21]. Iteration effects can originate from elements of previous phases or of subsequent phases. A multiplicative relation between the rework rate of the previous phase and the corruption rate, which is combined with the rework discovery rate, is described in [4; 8]. Impacts from subsequent phases can be defined, for example, by the quality assurance of subsequent phases [8] or the work progress in those phases [10]. In the example of [10], the progress of the subsequent phase reduces the time constant for rework discovery in the previous phase by up to 50%. For such remarkable dependencies, the simulated project duration is shorter for adapted rework cycles than for the basic model, which does not account for the reduced time constant for rework discovery.
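A sketch of adaptation B3 under illustrative assumptions: the rework discovery rate depends on the undiscovered-rework stock, a discovery probability and the progress of a testing co-flow. The functional form is ours, not taken from the cited models.

```python
# Variable rework discovery (adaptation B3): discovery grows with the
# undiscovered-rework stock and with the fraction of tests already done.
def rework_discovery(undiscovered: float, p_discover: float,
                     tests_done_fraction: float) -> float:
    return undiscovered * p_discover * (0.5 + 0.5 * tests_done_fraction)

print(rework_discovery(20.0, p_discover=0.3, tests_done_fraction=0.0))  # 3.0
print(rework_discovery(20.0, p_discover=0.3, tests_done_fraction=1.0))  # 6.0
```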
2. Reflection and discussion

The development of a guideline for the adaptation of rework cycles in SD provides future SD modelers with the necessary information and consolidates the existing literature. The guideline details the major components of SD models and compares various rework cycles. Suggestions are made as to when an adaptation of the rework cycle is more useful than the basic model. One point that still needs to be addressed is the initial evaluation of the guideline. A validation will require modelers to apply the guideline, preferably to a practical process. Moreover, the process to be simulated by rework cycles would have to be complex enough that each feature presented in the guideline is applied. One proposal for subsequent research is therefore the success evaluation of the guideline. This requires practical EDPs as modeling subjects as well as resources such as SD modelers who apply the guideline during modeling and give feedback.

3. Conclusion

The aim of this study is to create a guideline for the modeling of rework cycles in SD, adapted to simulate certain features of specific EDPs. The need for this guideline was derived from the desired support for SD modelers; another driver for this research effort was the lack of a comparable guideline in the reviewed literature. During the literature research, 25 rework cycles by various authors were examined regarding their purpose and structure. The rework cycles were analyzed with respect to how they simulate certain characteristics of the EDP and how they differ from the simplest version of a rework cycle. Adaptations are applied to adjust the model so that it simulates the considered EDP appropriately. Two kinds of adaptations were identified: SFC adaptations and causal link adaptations. SFC adaptations consist of the SD elements stock and flow and are mainly included in order to model intermediate states, parallel co-processes and iteration loops that realize coordination within the rework cycle. Causal link adaptations, on the other hand, consist of information flows and variables and capture the impact of stocks or variables on rates. The three basic rates (work accomplishment, rework generation and rework discovery) were examined concerning their influences. For each rate, influencing factors modeled in existing rework cycles were gathered and listed in the corresponding tables. Finally, the guideline was discussed, and the further research needed for its evaluation was identified.

Acknowledgement

We thank the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) for funding this project as part of the collaborative research centre 'Sonderforschungsbereich 768 - Managing cycles in innovation processes - Integrated development of product-service systems based on technical products'.

References

[1] M. Kreimeyer and U. Lindemann, Complexity Metrics in Engineering Design: Managing the Structure of Design Processes, Springer, Berlin, 2011.
[2] D.N. Ford and J.D. Sterman, Dynamic modeling of product development processes, System Dynamics Review, Vol. 14, 1998, No. 1, pp. 31-68.
[3] D.N. Ford and J.D. Sterman, Overcoming the 90% syndrome: Iteration management in concurrent development projects, Concurrent Engineering, Vol. 11, 2003, No. 3, pp. 177-186.
[4] D. Kasperek, M. Lindinger, S. Maisenbacher, and M. Maurer, A structure-based System Dynamics Approach for Assessing Engineering Design Processes, in: 2014 International System Dynamics Conference, System Dynamics Society, Delft, The Netherlands, 2014.
[5] H.N. Le, A Transformation-Based Model Integration Framework to Support Iteration Management in Engineering Design, PhD Thesis, University of Cambridge, Cambridge, UK, 2012.
[6] S. Lee and I.S. Lim, Degree of Overlapping Design Activities in Vehicle Development: A System Dynamics Approach, Asian Journal on Quality, Vol. 8, 2007, No. 2, pp. 128-144.
[7] J. Lin, Overlapping in Distributed Product Development, in: 2006 International System Dynamics Conference, System Dynamics Society, Nijmegen, The Netherlands, 2006.
[8] J. Lin, K.H. Chai, Y.S. Wong, and A.C. Brombacher, A dynamic model for managing overlapped iterative product development, European Journal of Operational Research, Vol. 185, 2008, No. 1, pp. 378-392.
[9] F. Nasirzadeh, M. Khanzadi, A. Afshar, and S. Howick, Modeling quality management in construction projects, International Journal of Civil Engineering, Vol. 11, 2013, No. 1, pp. 14-22.
[10] K. Parvan, H. Rahmandad, and A. Haghani, Empirical Study of Design-Construction Feedbacks in Building Construction Projects, in: 2013 International System Dynamics Conference, System Dynamics Society, Cambridge, MA, 2013.
[11] A. Powell, K. Mander, and D. Brown, Strategies for lifecycle concurrency and iteration: A system dynamics approach, Journal of Systems and Software, Vol. 46, 1999, No. 2, pp. 151-161.
[12] K. Reichelt and J. Lyneis, The dynamics of project performance: benchmarking the drivers of cost and schedule overrun, European Management Journal, Vol. 17, 1999, No. 2, pp. 135-150.
[13] S. Ruutu, P. Ylén, and M. Laine, Simulation of a distributed design project, in: 2011 International System Dynamics Conference, System Dynamics Society, Washington, DC, 2011.
[14] T. Haslett and S. Sankaran, Applying Multi-Methodological System Theory to Project Management, in: 53rd Annual Meeting of the ISSS, Brisbane, Australia, 2009.
[15] J.I.M. Hernández, J.R.O. Olaso, and A.G. López, Technology Assessment in Software Development Projects Using a System Dynamics Approach: A Case of Application Frameworks, in: F.P.G. Márquez and B. Lev (eds): Engineering Management, InTech, Rijeka, 2013, pp. 119-142.
[16] T. Laverghetta and A. Brown, Dynamics of naval ship design: a systems approach, Naval Engineers Journal, Vol. 111, 1999, No. 3, pp. 307-323.
[17] S. Lisse, Applying system dynamics for outsourcing services in design-build projects, Journal of Project, Program & Portfolio Management, Vol. 4, 2013, No. 2, pp. 20-36.
[18] J. Williford and A. Chang, Modeling the FedEx IT division: a system dynamics approach to strategic IT planning, Journal of Systems and Software, Vol. 46, 1999, No. 2, pp. 203-211.
[19] L.J. Black and N.P. Repenning, Why firefighting is never enough: preserving high-quality product development, System Dynamics Review, Vol. 17, 2001, No. 1, pp. 33-62.
[20] N.P. Repenning, A dynamic model of resource allocation in multi-project research and development systems, System Dynamics Review, Vol. 16, 2000, No. 3, pp. 173-212.
[21] T. Taylor and D.N. Ford, Tipping point failure and robustness in single development projects, System Dynamics Review, Vol. 22, 2006, No. 1, pp. 51-71.
[22] H. Rahmandad and K. Hu, Modeling the rework cycle: capturing multiple defects per task, System Dynamics Review, Vol. 26, 2010, No. 4, pp. 291-315.
[23] H. Rahmandad, Dynamics of platform-based product development, in: 2005 International System Dynamics Conference, System Dynamics Society, Boston, MA, 2005.
[24] K. Cooper and G. Lee, Managing the dynamics of projects and changes at Fluor, in: 2009 International System Dynamics Conference, System Dynamics Society, Albuquerque, NM, 2009.
[25] J.M. Lyneis, K.G. Cooper, and S.A. Els, Strategic management of complex projects: a case study using system dynamics, System Dynamics Review, Vol. 17, 2001, No. 3, pp. 237-260.
[26] G. D'Avino, P. Dondo, and V. Zezza, Reducing ambiguity and uncertainty during new product development: a system dynamics based approach, in: Technology Management: A Unifying Discipline for Melting the Boundaries, IEEE, Portland, OR, 2005.
[27] N.P. Repenning and J.D. Sterman, Capability traps and self-confirming attribution errors in the dynamics of process improvement, Administrative Science Quarterly, Vol. 47, 2002, No. 2, pp. 265-295.
[28] N.R. Joglekar and D.N. Ford, Product development resource allocation with foresight, European Journal of Operational Research, Vol. 160, 2005, No. 1, pp. 72-87.
[29] K.G. Cooper, Naval ship production: A claim settled and a framework built, Interfaces, Vol. 10, 1980, No. 6, pp. 20-36.
[30] K.G. Cooper, J.M. Lyneis, and B.J. Bryant, Learning to learn, from past to future, International Journal of Project Management, Vol. 20, 2002, No. 3, pp. 213-219.

doi:10.3233/978-1-61499-544-9-367

Design Optimization of Electric Propulsion of Flying Exploratory Autonomous Robot

Mateusz WĄSIK1, Wojciech SKARKA
Institute of Fundamentals of Machinery Design, Faculty of Mechanical Engineering, Silesian University of Technology, Poland

1 Corresponding Author: Institute of Fundamentals of Machinery Design, Faculty of Mechanical Engineering, Silesian University of Technology, 44-100 Gliwice, Konarskiego 18a Str., Poland; E-mail: wasik.mateusz@hotmail.com.

Abstract. The aim of this paper is to optimise the design parameters of the electric propulsion of an exploratory autonomous robot designed for flight in specified, extremely unfavourable environments. Many needs in the field of flying autonomous robots remain unsolved; in particular, a lack of solutions for drones flying in unpredictable and turbulent conditions has been identified. Examples of such needs include the exploration of planets whose atmospheres differ from Earth's, and, on Earth, drones used by mountain and marine emergency services operating in extremely unfavourable weather conditions. The main issue on the propulsion design side is the influence of turbulent wind, sometimes combined with precipitation, on the flight mechanics. Another important concern is the strength of the propulsion of the flying object. A further aspect with a profound effect on the operation of such a robot drive is its energy balance, together with the desire to ensure long-term operation and maximum performance. Each of these tasks has so far been solved separately, and even then optimization is a major challenge; multi-criteria optimization that accounts for the coupled phenomena occurring on the robot is a much bigger problem. In this paper the requirements for the propulsion are presented and concept solutions for the specified unfavourable environments are proposed. Even defining the environmental-condition assumptions that frame the optimization tasks can be a problem: excessively harsh conditions lead to solutions unacceptable in practice. The following features of the conceptual drone propulsion design are analysed: aerodynamics, strength, stiffness, mass and mobility. The analyses lead to the optimization of the shape and design of the drone propulsion system.
Keywords. Finite element method, Autonomous flying robot, Damage resistance, Optimisation

Introduction

Continuous technical development brings fresh, innovative solutions and at the same time dictates new trends. Commonly applied solutions are becoming inadequate for newly established and evolving needs. The development of related fields of science eliminates many contradictions in the application of a given technology in aviation and creates new opportunities for developers. The need for optimization in order to reduce fuel consumption and the emission of harmful chemicals into the atmosphere, combined with economics, has become a major factor driving the development and introduction of innovative aviation propulsion. On the demand side there are also existing, previously unsolved problems of the classical methods of implementing aviation propulsion. An example of such a problem is the growing popularity, particularly noticeable in recent years, of small remote-controlled flying objects, commonly called drones, whose use has opened new opportunities in the field of aviation as well as become a source of new needs. This article is a basis for further research on finding solutions to some of these needs in the field of aviation propulsion.

1. Needs analysis

The needs analysis was based on projects currently implemented in national R&D programmes, on problems in today's aerospace and aviation industry, and on areas not directly related to aviation regulation. The following problems were recognized and highlighted as needing solutions in the field of aviation propulsion.

1.1. Innovative Aviation INNOLOT Programme

The Innovative Aviation INNOLOT Programme [1] is a nationwide programme established to increase the competitiveness of the Polish economy in the field of high-technology products for the aviation industry. INNOLOT is an initiative created on the basis of an agreement between the National Centre for Research and Development (NCBR) and the Polish Aviation Technology Platform, which includes the Aviation Valley and Wielkopolski aviation clusters and the Federation of Aviation Companies Bielsko. Potential participants in the project are industry, research institutes and universities. The programme's main assumptions are to increase the number of deployed innovative solutions in the aviation sector and to strengthen collaboration between research organisations and entrepreneurs in the R&D of the Polish aviation sector. The project identified the need for an innovative propulsion system (ISN), an innovative rotorcraft (IA) and an innovative aircraft (IS).

1.2. Exploration of space

Analysis of NASA's HAVOC project [2], which aims to build a flying object to explore the planet Venus, revealed the following need. Consider the situation in which an autonomous flying robot explores a planet distant from Earth. The explored planet has a substantially different atmosphere, not comparable to Earth's: usually turbulent, with local, hard-to-predict turbulent states or significant rarefaction. On its pioneering exploratory mission the robot has no access to the energy resources known on Earth, which creates the need to ensure a positive energy balance.
The probe operates in an extremely harsh environment. The explorer's distance from the nearest centres of human civilization reduces or completely prevents continuous communication because of the response time and quality. Navigation in unknown, unexplored areas, combined with the lack of a continuous link to the command centre, requires full autonomy. Because of the nature of the mission, no servicing is available, so the ability to work in a state of incomplete performance is required: for example, one broken rotor blade in a propeller propulsion must not disable operation, only reduce its efficiency.

1.3. Rescue services

The following problem was identified in discussions with representatives of mountain rescue services that use drones for exploration missions. Consider an alternative situation, characterized by less extreme working conditions, in which a flying autonomous robot takes part in a mountain rescue mission. The priority criterion in rescue missions is the time needed to reach the victim. Commonly used flying devices are suitable for use in Earth's atmosphere; however, specific conditions can disrupt the flight. Especially for flying robots with relatively low mass, flight mechanics can be significantly disrupted, or flight prevented altogether, in turbulent and dynamically variable conditions. An unpredictable turbulent environment unfavourable to flight requires a very rapid response of the drive unit for stabilization, and robot autonomy in selecting settings for the current conditions. The unit operates in terrain where the possibilities of landing to replenish the power source and to perform service operations are strongly limited by the natural topography. The need for quick access to the device generates a tight energy balance and requires optimization of its drive in order to minimize energy consumption.

2. Review of recent electric propulsions for drones

This section gives an overview of the types of aviation propulsion commonly applied to flying robots, discusses their basic principles of operation and application, and notes, for each drive, the existing scientific problems resulting from its construction and method of operation. This chapter contains only descriptions of the ways the drive can be implemented.

2.1. Propeller propulsion

The most common type of propulsion in drones is the combination of a propeller and an electric motor. A rotor is a driving element that converts the energy delivered as torque (rotation) into thrust [3]. This energy conversion results from the interaction of the propeller with the medium, the surrounding air. The application of propeller propulsion is associated with several scientific problems examined under Earth conditions [4]. The main problems, related to the construction and the aerodynamic effects on the propeller surface, are variable efficiency at different flight velocities and low resistance to mechanical damage. An important role, from the point of view of optimizing the propulsion system for work in various weather conditions, is played by the number of rotor blades.

2.2. Tunneled propeller propulsion

Another kind of propeller propulsion is propulsion with tunneled rotors.
The principal idea of this type of propulsion is the same as for the conventional one, with the difference that the propellers are placed inside tubes (tunnels) [5]. The main changes introduced by tunneled propellers in comparison with the conventional solution are a reduction of the shaft angle and a reduction of the navigational draft. Because of the strong interdependence between the propeller and the tunnel geometry, they need to be designed as one system. Further optimization problems can be identified, such as the propeller diameter, the propeller/hull tip clearance and the tunnel/propeller hull trim control [6]. Tunneled propellers are commonly used in water-jet propulsion [7].

2.3. Airship

The airship is a flying object constructed of a vessel made of thin, light and flexible material, a frame which gives it the proper shape, and the propulsion [8]. The airship hovers in the air because it is filled with a lighter-than-air gas, usually hydrogen or helium. For moving through the air and manoeuvring, a drive is used, usually a propeller combined with an electric engine. Its major scientific problems are the type of gas filling the shell, the impact of the environment on the filling gas (in ways different from terrestrial weathering), issues of the strength of the coating in the external environment, stability in a turbulent atmosphere, the selection of propulsion for flight stabilization, and low resistance in turbulent atmospheric conditions [9].

2.4. Protection of the propulsion

There are different types of protection for drone propulsion. One method of protecting the propulsion from damage is to avoid the damage by applying protection elements to the flying object; an example is the Gimball drone [10], which carries a protective shield made of a net. Another practice is to use a control system which keeps the drone flying with a damaged propulsion (e.g. one broken blade in one of the propellers), although with reduced flight capability.

3. Protection shield types

For the research, models with three different types of protection shield were created and analysed. The models were based on the existing drone with a protection shield called Gimball [10]. The general shape of the shield is based on a sphere. The three different patterns creating the shields are shown in Figure 1. The material used for the modal and strength analyses is a carbon-fibre composite with a longitudinal fibre arrangement.

4. Numerical methods

4.1. Optimisation

Based on the authors' experience with different kinds of optimization [11-15], multi-criteria optimization tasks do not give specific solutions: the result of a multi-criteria optimization task is the Pareto front, a set of solutions. From a design point of view a specific result is important, not a set of possible solutions. It was therefore decided to define the scientific problem as a single-criterion task. The single optimization criterion consists of the aerodynamic features; the strength and modal features are treated as constraints. The optimization methods used are described in detail in the methodology subsection.

Figure 1. The types of shields: a) shield with triangular net, b) shield with quadratic net, c) shield with a combination of hexagonal and pentagonal net.
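The following is a minimal sketch of this single-criterion formulation, assuming hypothetical design variables (strut width and net cell spacing) and simple analytic stand-ins for the HyperWorks aerodynamic, strength and modal analyses; only the structure (an aerodynamic objective with strength and modal inequality constraints) mirrors the approach described above.

```python
# Single-criterion optimization: minimize drag area subject to strength and
# modal constraints. All functions and coefficients are illustrative stand-ins.
from scipy.optimize import minimize

def drag_area(x):
    strut, spacing = x                      # strut width, cell spacing [mm]
    return 0.02 + 0.5 * strut / spacing     # denser net -> more drag

def stress_margin(x):                       # >= 0 means stress is acceptable
    strut, spacing = x
    return 1.0e6 - 2.0e4 * spacing / strut

def mode_margin(x):                         # >= 0 keeps modes above 1100 Hz
    strut, spacing = x
    return 6.0e3 * strut / spacing - 1.1e3

result = minimize(
    drag_area, x0=[2.0, 10.0],
    bounds=[(1.0, 5.0), (10.0, 50.0)],
    constraints=[{"type": "ineq", "fun": stress_margin},
                 {"type": "ineq", "fun": mode_margin}])
print(result.x, drag_area(result.x))
```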
4.2. Methodology

4.2.1. Aerodynamics

The simulation was conducted in a virtual aerodynamic tunnel created in the HyperWorks software with the AcuSolve solver. The tunnel dimensions are: length 5 m, height 1 m and width 1 m. The drone was placed at a distance of 1 m from the inlet of the wind tunnel and 0.7 m above its bottom. Two types of aerodynamic analyses were conducted [16; 17]: the impact of the protection shield on the air resistance [18-20] during flight at a velocity of 5 m/s, and the impact of the protection shield on the propeller slipstream with a turbulent flow velocity of 50 m/s. The flow velocity was approximately calculated for a twin-blade 6-inch propeller with 80% efficiency and an 11.1-12.6 V BLDC motor with a speed constant (Kv) in the range from 2300 to 3650 rpm/V. The model was treated as symmetric in one plane, and the ground was static. As boundary conditions, standard wind tunnel conditions were taken: air temperature 25 degrees C, air pressure 1018 hPa and air density 1.225 kg/m3.

4.2.2. Strength analysis

The simulation was conducted for two cases. In both cases the drone hits an obstacle at a flight velocity of 5 m/s and the protection shield acts as a bumper, absorbing the impact energy. The extreme situation was analysed in which an extremely rigid obstacle stops the drone's flight and the protection shield takes over the impact of the collision. The collision reaction force is applied to a node of the shield. In the first analysed case the shield is hit at the node where it is mounted to the gimbal; in the second case it is hit at the node farthest from the mounting place. The gimbal is mounted at two opposite nodes [21-24].

4.2.3. Modal analysis

The modal analysis was conducted for the shield fixed at the two opposite gimbal mounting nodes. The shield has only one degree of freedom: rotation around the gimbal axis. The first eight modes were calculated. The main idea of the modal analysis [25; 26] is to check whether the protection shield's natural vibration frequencies differ from the vibration frequencies generated by the propulsion, i.e. the twin-blade 6-inch propeller with 80% efficiency and the 11.1-12.6 V BLDC motor with Kv between 2300 and 3650.
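As a rough cross-check of the 50 m/s inflow value, the propeller slipstream can be approximated by the pitch speed derived from the motor data given above. The 4-inch pitch is our assumption (the paper specifies only the 6-inch diameter); across the stated Kv and voltage range the estimate spans roughly 35-60 m/s, bracketing the 50 m/s used in the simulation.

```python
# Slipstream estimate from motor Kv, supply voltage and an assumed pitch.
INCH = 0.0254
pitch_m = 4 * INCH      # assumed 4-inch propeller pitch (not in the paper)
efficiency = 0.8        # propeller efficiency stated in the paper

for kv, volts in [(2300, 11.1), (3650, 12.6)]:
    rpm = kv * volts                        # unloaded motor speed estimate
    v = rpm / 60.0 * pitch_m * efficiency   # pitch speed x efficiency [m/s]
    print(f"Kv={kv}, U={volts} V -> ~{v:.0f} m/s")
```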
5. Results

The results are divided by analysis type, and the results for the best shield patterns are presented. The goals were to minimise the flight resistance and the impact of the protection shield on the flight mechanics. The constraining assumptions are the resistance of the protection shield to damage from collisions and the absence of any influence of propulsion vibrations on the shield's strength.

5.1. Aerodynamics

Table 1 contains the results of the aerodynamic optimization, i.e. the best results achieved for each shield net pattern. The hexagonal + pentagonal net pattern has the lowest drag coefficient and, in connection with the lowest drag area, also the lowest blockage ratio, which means the lowest aerodynamic resistance. Figures 2 and 3 show the air flow through the shield with the combination of hexagonal and pentagonal net for the different flow types.

Table 1. Comparison of the CFD analysis results for the different shield patterns.

  Inside the protection shield:
  Shield pattern               Drag coefficient   Drag area [m2]   Blockage ratio [%]
  Triangular net               0.1684             0.0779           12.654
  Quadratic net                0.1399             0.06             9.763
  Hexagonal + pentagonal net   0.09135            0.031            5.673

  Outside the protection shield:
  Shield pattern               Drag coefficient   Drag area [m2]   Blockage ratio [%]
  Triangular net               0.1903             0.083            14.677
  Quadratic net                0.1581             0.068            11.034
  Hexagonal + pentagonal net   0.12785            0.047            7.833

Figure 2. Air stream lines through the shield with the combination of hexagonal and pentagonal net: outside flow.

Figure 3. Air stream lines through the shield with the combination of hexagonal and pentagonal net: outside turbulent flow.

5.2. Strength analysis

Table 2 presents the results of the strength analysis. All of the net topologies fulfil the strength assumptions. The von Mises stress for all patterns is safely low in comparison with the carbon-fibre composite's Young's modulus (90-420 GPa), and the displacements are acceptable in comparison with the outer diameter of the protection shield (300-400 mm). The best strength results were achieved for the triangular net. Figures 4 and 5 show the stress and the displacements in the shield with the combination of hexagonal and pentagonal net.

Table 2. Comparison of the strength analysis results for the different shield patterns.

  Load at the gimbal mounting node:
  Shield pattern               Displacement [mm]   Stress [Pa]
  Triangular net               5.13                4.325e5
  Quadratic net                11.43               7.088e5
  Hexagonal + pentagonal net   9.12                2.574e5

  Load at the node farthest from the gimbal mounting node:
  Shield pattern               Displacement [mm]   Stress [Pa]
  Triangular net               6.07                3.538e5
  Quadratic net                12.76               6.648e5
  Hexagonal + pentagonal net   9.463               1.436e5

Figure 4. Visualisation of the displacement magnitudes in the shield with the combination of hexagonal and pentagonal net. Left: crash force applied at the node far from the gimbal mounting. Right: force applied at the gimbal mounting.

Figure 5. Visualisation of the von Mises stress distribution in the shield with the combination of hexagonal and pentagonal net. Left: crash force applied at the node far from the gimbal mounting. Right: force applied at the gimbal mounting.

5.3. Modal analysis

Table 3 presents the results of the modal analysis. The ranges of the main vibration modes for all patterns are far from the propulsion vibration range (400-1100 Hz), so all of them fulfil the design assumption. Figure 6 shows the first four vibration modes of the shield with the combination of hexagonal and pentagonal net.

Table 3. Comparison of the modal analysis results for the different shield patterns.

  Shield pattern               First mode [Hz]   Second mode [Hz]   Third mode [Hz]   Fourth mode [Hz]
  Triangular net               0.913             5.545e3            1.632e4           1.878e4
  Quadratic net                1.2903            1.021e4            3.004e4           3.221e4
  Hexagonal + pentagonal net   1.7244            7.355e3            2.098e4           2.132e4

Figure 6. Visualisation of the first four vibration modes in the shield with the combination of hexagonal and pentagonal net.
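A sketch of the modal-clearance check behind Table 3, under one plausible reading of the excitation band: the paper states 400-1100 Hz, and a comparable band follows from the motor data (shaft rate up to the two-blade blade-pass rate). The mode frequencies are those of Table 3.

```python
# Check that no shield mode falls inside the propulsion excitation band.
BLADES = 2
modes_hz = {                                   # first four modes, Table 3
    "triangular": [0.913, 5.545e3, 1.632e4, 1.878e4],
    "quadratic": [1.2903, 1.021e4, 3.004e4, 3.221e4],
    "hex+pent": [1.7244, 7.355e3, 2.098e4, 2.132e4],
}
shaft_hz = (2300 * 11.1 / 60.0, 3650 * 12.6 / 60.0)   # ~426 to ~767 Hz
band = (shaft_hz[0], shaft_hz[1] * BLADES)            # up to blade-pass rate

for name, modes in modes_hz.items():
    clear = all(not (band[0] <= f <= band[1]) for f in modes)
    print(f"{name}: clear of {band[0]:.0f}-{band[1]:.0f} Hz band: {clear}")
```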
6. Conclusions

In this paper, different topological models for the protection of the propulsion of flying autonomous robots were created. Each topology was aerodynamically optimised with strength and modal constraints. The results for the best representatives of three main topological groups are presented: protection shields created by a triangular net, a quadratic net, and a combination of hexagonal and pentagonal nets. As shown in the Results section, the best results are achieved for the combination of hexagonal and pentagonal nets. By optimising the topology of the protection shield, a minimal influence on the aerodynamics of the drone was achieved. All of the protection types are strong enough to protect the propulsion mounted inside without self-damage during a collision, and the ranges of the main vibration modes are far from the propulsion vibration range (400-1100 Hz). Compared with the other solutions, the best solution, the combination of hexagonal and pentagonal nets, has the lowest air drag and adequate strength, and compared with the heaviest solution, the quadratic net, it has 70% lower mass. The knowledge acquired from this research forms the basis for the design of an autonomous exploratory drone for missions in dangerous and unpredictable environments.

References

[1] Polish National Centre for Research and Development (NCBR), Innovative Aviation INNOLOT Programme, http://www.ncbir.pl/programy-krajowe/programy-sektorowe/innolot, Accessed 25 Feb 2015.
[2] NASA, Project HAVOC, http://sacd.larc.nasa.gov/branches/space-mission-analysis-branch-smab/smab-projects/havoc, Accessed 25 Feb 2015.
[3] S. Gudmundsson, General Aviation Aircraft Design: Applied Methods and Procedures, Chapter 14: The Anatomy of the Propeller, Elsevier, Oxford, 2014, pp. 581-659.
[4] L. Zhengchu, W. Xunnian, C. Hong, W. Liu, Experimental study on the influence of propeller slipstream on wing flow field, Journal of Experiments in Fluid Mechanics, Vol. 14, (2000), 44-48.
[5] D.L. Blount, Design of propeller tunnels for high-speed craft, Donald L. Blount and Assoc., http://www.boatdesign.net/forums/attachments/boat-design/70266d1336602693-prop-pocket-design-propeller-tunnels.pdf, Accessed 25 Feb 2015.
[6] G. Borda, Propulsion Experiments with a Tunnel Hull Planing Craft to Determine Optimum Longitudinal Placement of Propellers and Effects of Nozzle Sideplates in the Tunnels, DTNSRDC Report SPD-717-02, 1982.
[7] E.E. West, L.B. Crook, A Velocity Survey and Wake Analysis for an Assault Support Patrol Boat (ASPB) Represented by Model 5104, NSRDC T&E Report 149-H-05, 1967.
[8] G.A. Khoury, Airship Technology, Cambridge University Press, Cambridge, 2012.
[9] Z. Zheng, W. Huo, Z. Wu, Autonomous airship path following control: Theory and experiments, Control Engineering Practice, 21 (2013), 769-788.
[10] Flyability SA, Gimball drone, http://www.flyability.com, Accessed 25 May 2015.
[11] W. Skarka, Application of numerical inverse model to determine the characteristic of electric race car, in: I. Horvath, Z. Rusak (eds.), Tools and Methods of Competitive Engineering, TMCE 2014 Symposium, Budapest, Hungary, May 19-23, 2014, pp. 263-274.
[12] M. Targosz, M. Szumowski, W. Skarka, P. Przystałka, Velocity Planning of an Electric Vehicle Using an Evolutionary Algorithm, in: J. Mikulski (ed.), Activities of Transport Telematics, 13th International Conference on Transport Systems Telematics, TST 2013, Katowice-Ustroń, Poland, October 23-26, 2013, Springer-Verlag, Berlin Heidelberg, 2013, pp. 171-177.
[13] M. Targosz, W. Skarka, P. Przystałka, Simulation and optimization of prototype electric vehicle: methodology, in: D. Marjanovic et al. (eds.), Proceedings of the 13th International Design Conference, Dubrovnik, Croatia, May 19-22, 2014, Vol. 2, Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb, 2014, pp. 1349-1360.
[14] M. Wąsik, Influence of the windscreen's inclination angle on the aerodynamic drag coefficient of cars participating in the Shell Eco-marathon race, based on numerical simulations, in: J. Maczak (ed.), XIV International Technical Systems Degradation Conference, Liptovsky Mikulas, 8-11 April 2015, Polskie Naukowo-Techniczne Towarzystwo Eksploatacyjne, Warszawa, 2015, pp. 158-160.
[15] M. Wąsik, Methodology of aerodynamic analysis of cars participating in the Shell Eco-marathon race based on the HyperWorks software, in: J. Maczak (ed.), XIII International Technical Systems Degradation Conference, Liptovsky Mikulas, 23-26 April 2014, Polskie Naukowo-Techniczne Towarzystwo Eksploatacyjne, Warszawa, 2014, pp. 119-120.
[16] J.D. Anderson, Computational Fluid Dynamics: The Basics with Applications, McGraw-Hill, New York, 1995.
[17] J.H. Ferziger, M. Peric, Computational Methods for Fluid Dynamics, Springer-Verlag, New York, 2002.
[18] B. Etkin, Dynamics of Flight: Stability and Control, John Wiley & Sons, Michigan, 1982.
[19] H. Smith, Illustrated Guide to Aerodynamics, TAB Books, Blue Ridge Summit, 1992.
[20] R.M. Kolonay, A physics-based distributed collaborative design process for military aerospace vehicle development and technology assessment, Int. J. of Agile Systems and Management, 7(3/4) (2014), 242-260.
[21] W. Weaver Jr., P.R. Johnston, Finite Elements for Structural Analysis, Prentice-Hall, Englewood Cliffs, 1984.
[22] T.R. Chandrupatla, A.D. Belegundu, Introduction to Finite Element Method in Engineering, Prentice-Hall, London, 1991.
[23] R.D. Cook, Concepts and Applications of Finite Element Analysis, John Wiley, New York, 1981.
[24] M. Kleiber (ed.), Handbook of Computational Solid Mechanics: Survey and Comparison of Contemporary Methods, Springer-Verlag, Berlin Heidelberg, 1998.
[25] N.M.M. Maia, J.M.M. Silva, Theoretical and Experimental Modal Analysis, Research Studies Press, Taunton, 1997.
[26] J. He, Z.-F. Fu, Modal Analysis, Butterworth-Heinemann, Oxford, 2001.

doi:10.3233/978-1-61499-544-9-377

Towards Cloud Big Data Services for Intelligent Transport Systems

Gavin KEMP1, Genoveva VARGAS-SOLAR2,3, Catarina FERREIRA Da SILVA1, Parisa GHODOUS1, Christine COLLET2 and Pedropablo LOPEZ AMAYA1
1 Université Lyon 1, LIRIS, CNRS, UMR5202, bd du 11 novembre 1918, Villeurbanne, F-69621, France
2 LIG, Grenoble Institute of Technology, 681 rue de la Passerelle, Saint Martin d'Hères, France
3 LIG-LAFMIA, CNRS, 681 rue de la Passerelle, Saint Martin d'Hères, France

Abstract. In recent years, the increase in computation power and data storage has opened new perspectives for data analysis. The possibility of analysing big data brings to light obscure yet useful correlations in the data and thus previously undiscovered knowledge. Applying big data analytics to transport data has brought a better understanding of transport networks, revealing unexpected choke points in cities.
This technology is still largely inaccessible to small companies, due to their limited computational resources, and complex for large ones, due to the time needed to develop a big data analytical system. Using the high scalability of the cloud and specialized services in a service-oriented architecture, new perspectives open up for developing efficient and scalable big data infrastructures adapted to transport systems. This paper presents such a big data infrastructure based on a service-oriented architecture.

Keywords. ITS, Big Data, Cloud Services, NoSQL

Introduction

During the last five years, the problem of providing intelligent real-time data management using cloud computing technologies has attracted attention from both academic researchers [1; 2] and industrial practitioners such as Google BigQuery, IBM and Thales. They mostly concentrate on modelling stream traffic flow, yet they barely combine different data flows with other big data to provide new Intelligent Transport Services (ITS). ITS apply technology for integrating computers, electronics, satellites and sensors to make every transport mode (road, rail, air, water) more efficient, safe and energy-saving. ITS effectiveness relies on the prompt processing of the acquired transport-related information for reacting to congestion and dangerous situations and, in general, for optimizing the circulation of people and goods. The integration, storage and analysis of huge data collections must be adapted to support ITS in providing solutions that can improve citizens' lifestyle and safety. In order to address these challenges, it is important to consider the aspects that big data introduces according to the properties of the 5V model [3]: Volume, Velocity, Variety, Veracity and Value.

Volume and velocity (i.e., the continuous production of new data) have an important impact on the way data is collected, archived and continuously processed. Transport data are generated at high speed by arrays of sensors or by multiple events produced by devices and transport media (buses, cars, bikes, trains, etc.). These data need to be processed in real time, in near real time, in batch, or as streams, and important decisions must be made about the distributed storage supports that maintain these data collections and the analysis cycles applied to them. The data collected in transport scenarios can be very heterogeneous in format and model (unstructured, semi-structured and structured) and in content. Data variety imposes new requirements on data storage and database design, which should adapt dynamically to the data format, in particular by scaling up and down.

ITS and associated applications aim at adding value to the collected data. Adding value to big data depends on the events the data represent and on the type of processing operations applied to extract such value (i.e., stochastic, probabilistic, regular or random); given the degree of volume and variety, it can require substantial computing, storage and memory resources. Value can also be related to the quality of big data (veracity), concerning (1) data consistency, related to its associated statistical reliability, and (2) data provenance and trust, defined by the data's origin and its collection and processing methods, including a trusted infrastructure and facility.
Processing and managing big data, given its volume and veracity and given the greedy algorithms that are sometimes applied to it, for example for adding value and making it useful for applications, requires enabling infrastructures. Cloud architectures provide virtually unlimited resources that can support big data management and exploitation. The essential characteristics of the cloud lie in on-demand self-service, broad network access, resource pooling, rapid elasticity and measured services [4]. These characteristics make it possible to design and implement services for big data management and exploitation that use cloud resources to support applications such as ITS.

The objective of our work is to manage and aggregate cloud services for managing big data and assisting decision making in transport systems. This paper therefore presents our approach for developing data storage, data cleaning and data integration services that together form an efficient decision support system. Our services implement algorithms and strategies that consume storage and computing resources of the cloud; appropriate consumption models will therefore guide their use. The remainder of the paper is organized as follows. Section 1 describes related work. Section 2 introduces our approach for managing transport big data on the cloud to support intelligent transport system applications. Section 3 presents cloud big data services for transport together with a case study that validates our approach, before the paper concludes and discusses future work.

1. Related work

This section focuses on big data transport projects, in particular on optimizing taxi usage, and on big data infrastructures and applications for transport data events. Transdec [5] is a project to create a big data infrastructure adapted to transport. It is built on three tiers comparable to the MVC (Model, View, Controller) model for transport data: the presentation tier, based on Google Maps, provides an interface to express queries and expose the results; the query interface provides standard queries for the presentation tier; and the data tier is a spatiotemporal database built from sensor data and traffic data. This work provides an interesting query system that takes into account the dynamic nature of town data and provides time-relevant results in real time.

Urban Insight [6] is a project studying European town planning. In Dublin, the team works on event detection through big data, in particular on an accident detection system using video streams from CCTV (Closed Circuit Television) and crowdsourcing. Using data analysis they detect anomalies in the traffic and identify whether an anomaly is an accident or not; when there is ambiguity, they rely on crowdsourcing to get further information. The RITA project [7] in the United States tries to identify new sources of data provided by connected infrastructure and connected vehicles, working to propose more data sources usable for transport analysis. L. Jian et al. [8] propose a service-oriented model to encompass the data heterogeneity of several Chinese towns: each town maintains its data and a service that allows other towns to understand that data, and these services are aggregated to provide a global data sharing service. These papers propose methodologies to acknowledge data veracity and integrate heterogeneous data into one query system. An interesting line of work would be to produce predictions based on these data in order to build decision support systems.
H. V. Jagadish et al. [3] propose a big data infrastructure based on five steps: data acquisition; data cleaning and information extraction; data integration and aggregation; big data analysis; and data interpretation. X. Chen et al. [9] use Hadoop-GIS to extract information on demographic composition and health from spatial data. J. Lin and D. Ryaboy [10] present their experience at Twitter extracting information from logs; they conclude that an efficient big data infrastructure is a balance between speed of development, ease of analysis, flexibility and scalability. Proposing a big data infrastructure on the cloud will make developing big data infrastructures more accessible to small businesses for several reasons: little initial investment, ease of development through Service-Oriented Architecture (SOA), and the use of services developed by specialists in each service.

N. J. Yuan et al. [11], Y. Ge et al. [12] and D.-H. Lee et al. [13] worked on transport projects to help taxi companies optimize their taxi usage. They work on optimizing the odds of a client needing a taxi meeting an empty taxi, and on optimizing travel time from taxi to client, based on historical data collected from running taxis. Using knowledge from experienced taxi drivers, they built a map of the odds of passenger presence at collection points and direct the taxis based on that map. These research works do not use real-time data, which makes it complicated to make accurate predictions and to react to unexpected events; they also use data limited to GPS and taxi usage, whereas other data sources could be accessed and used.

D. Talia [14] presents the strengths of using the cloud for big data analytics, in particular from a scalability standpoint, and proposes the development of infrastructures, platforms and services dedicated to data analytics. J. Yu et al. [15] propose a service-oriented data mining infrastructure for big traffic data, a full infrastructure with services such as accident detection. For this purpose they produce a large database from the data collected by individual companies. Individual services would have to duplicate the data to be able to use it. This makes for highly redundant data, as the same data is stored by the centralised database, the application and probably the data producers. What is more, companies could be reluctant to give away their data with no control over its use. H. Demirkan and D. Delen [16] propose a service-oriented decision support system using big data and the cloud, in which a data service provides a centralised database that applications can query.

The state of the art reveals a limited use of predictions from big data analytics for transport-oriented systems. The heavy storage and processing infrastructures needed for big data, together with the currently available data-oriented cloud services, make possible the continuous access and processing of real-time events to gain constant awareness and to produce big data-based decision support systems, which can help take immediate informed actions. Cloud-based big data infrastructures often concentrate on massive scalability but do not propose a cheap method for simply aggregating big data services.

2. Managing transport data in smart cities

Consider the scenario of a taxi company that needs to embed decision support in electric vehicles to help their global optimal management.
The company uses electric vehicles that implement a decision cycle to reach their destination while ensuring optimal recharging through mobile recharging units. The decision-making cycle aims at ensuring vehicle availability, both temporally and spatially, and service continuity by avoiding congestion areas, accidents and other exceptional events. The taxis and the mobile devices of users are equipped with video cameras and location trackers that can emit the location of the taxis and people. For this purpose, we need data on the position of the vehicles and their energy levels, a mechanism to communicate unexpected events, and the usage and location of the mobile recharging stations.

Figure 1. Big Data services (data harvesting services: REST, Flume; data cleaning and processing services: Pig, Hadoop; data storage services: MongoDB, Neo4j, CouchDB; integration and aggregation: extended UnQL platform; data analytics and decision-making support services).

Figure 1 shows the services that this application relies on. These services concern data acquisition, cleaning and information extraction on one side of the spectrum and, on the other side, big data analysis, integration and aggregation services, and decision-making support.

Figure 2. Big Data infrastructure.

In cloud computing everything is viewed as a service (XaaS). As a consequence, cloud software (SaaS) is built as an aggregate of services exploiting services available on the cloud infrastructure (IaaS). In this spirit, we build a big data infrastructure (Figure 2) where individual companies, each specialised in its own level of big data, can propose services at each level. This also means that companies wanting a big data infrastructure will be able to simply build it from an aggregation of services proposed by specialised companies. The next section explains how the services are built to ensure the high scalability of the cloud at a controlled cost.

3. Some cloud big data services for transport

3.1. Data acquisition service

The first step of a big data infrastructure is collecting the big data itself. This is basically hardware and infrastructure services that transfer the data acquired by vehicles, users and sensors deployed in cities (e.g. roads, streets, public spaces) to NoSQL data stores adapted to the format of the data. This is done by companies and entities, such as towns or companies managing certain public spaces, which have data collecting facilities. These companies propose and sell their data on a cloud infrastructure; clients using the same cloud infrastructure could then pay to access the data. In our case, the university's OpenStack infrastructure [17] is used. Using object storage such as Swift [18] together with document stores such as MongoDB [19], these companies obtain a cheap, highly scalable and sharable data store. The sharding capability of these data stores offers high horizontal scalability, but also faster analysis through MapReduce and data availability.

In the taxi scenario, data from the town and from the vehicles would be stored on several object data stores, known as containers in Swift. Since several companies are involved, they will store their data on separate data stores. When a client wishes to access the data, the company would propose REST (Representational State Transfer) [20] services that the client can query to access the data.
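As an illustration of this acquisition pattern, the following minimal Python sketch harvests the latest records from a provider's real-time REST service and appends them to a MongoDB container acting as the historical store. The endpoint URL, field names and collection names are hypothetical placeholders, not the actual services of the platform.

    # Sketch of a data acquisition client: harvest via REST, store in MongoDB.
    import requests
    from pymongo import MongoClient

    REALTIME_URL = "https://provider.example/api/traffic/latest"  # hypothetical endpoint

    def harvest_latest():
        # Fetch the latest real-time records exposed by a provider's REST service.
        response = requests.get(REALTIME_URL, timeout=10)
        response.raise_for_status()
        return response.json()  # assumed to be a list of JSON records

    def store_as_historical(records, mongo_uri="mongodb://localhost:27017"):
        # Append harvested records to the MongoDB container used as historical store.
        collection = MongoClient(mongo_uri)["transport"]["historical"]
        if records:
            collection.insert_many(records)

    store_as_historical(harvest_latest())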
These companies propose different services for historical data and for real-time data. The historical data service provides authentication to the client so that they can apply the OpenStack MapReduce service (Sahara) [21] to their container. The real-time data service provides a JSON or XML file of the latest data produced. We are implementing and testing the data acquisition service. This service uses NodeJS modules to acquire city data from the Grand Lyon [22], but also data from Twitter and from the Bing search engine, using REST requests. Still using REST requests, the service then posts the data onto a MongoDB database container for storage as historical data. The service provides functions to access data via REST, either with the key to the data store, when querying or analysing the historical data, or with the latest file acquired, when using the real-time data service. The data is stored as XML, JSON, or the original image or PDF file before data extraction.

3.2. Information extraction and cleaning service

The next step is cleaning and data extraction. This consists of both extracting the information from unstructured data and cleaning the data. It could be done by the company producing the data or by an independent company, depending on the level of structuring of the data. Highly structured data would likely be cleaned by the company producing it, as they understand its production best and thus know how best to clean it up. For highly unstructured data, like sound or video, highly specialised experts would be needed to extract the information. Using MapReduce, the company acquiring the data, or the company contracted to do it, would perform statistical analyses to spot, for example, outliers in the data. This is important because, for example, a malfunction in a sensor loop could either ignore passing traffic or register non-existing traffic. Cleaning these events matters, since inaccurate data produced by a faulty sensor can break a model.

3.3. Integration and aggregation services

The objective of big data analytics is to use the large volume of data to extract new knowledge, for example by searching for patterns in the data. This often implies data coming from a wide variety of sources, which means the data has to be aggregated into a usable format for the analytics tools. This service therefore provides real-time data aggregation and historical data aggregation. The real-time data aggregation service gets the data from the individual data stores' real-time data services and produces a formatted file simply by fusing together the data provided by all the real-time data acquisition services. We thus aggregate data from the city, the state of recharging stations and the location of people, based on time stamps or GPS locations.
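A minimal sketch of this real-time fusion step follows. The service URLs, field names and the freshness window are hypothetical placeholders, assuming each acquisition service returns a JSON document carrying a Unix time stamp.

    # Sketch of the real-time aggregation service: fuse the latest documents
    # from several acquisition services into one formatted record.
    import requests

    SERVICES = {  # hypothetical real-time data services
        "city_traffic": "https://city.example/api/latest",
        "recharging_stations": "https://energy.example/api/latest",
        "vehicle_positions": "https://fleet.example/api/latest",
    }

    def fuse_latest(window_s=60):
        # Collect the latest document from each service, then keep only the
        # sources whose time stamps fall within a common freshness window.
        fused, newest = {}, 0
        for name, url in SERVICES.items():
            doc = requests.get(url, timeout=10).json()
            fused[name] = doc
            newest = max(newest, doc.get("timestamp", 0))
        return {name: doc for name, doc in fused.items()
                if newest - doc.get("timestamp", 0) <= window_s}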
The historical data aggregation service will have to achieve a similar result, but over the data stores. The problem is that data spread over several separate data stores is not in a usable format, while importing all the data into one new, huge data store would duplicate already existing resources, making the service potentially excessively expensive; temporary stores would likewise take long to build when terabytes of data have to be imported, and would be costly in network usage as well as time consuming. To solve this problem, this service will propose a query interface for simple querying, and a processing service that processes the data mass by converting a simple programming language into UnQL queries to collect and pre-process the data before it is integrated into a model.

3.4. Big data analytical and decision support services

The whole point of big data is to identify and extract information from the mass of data, and predictive tools can be developed to anticipate the future. The role of the analytical service is to provide a computational model of the historical data, together with the algorithm applied to the individual pieces of data. Thus, using the model provided by the analytical service and the algorithm applied to the real-time data, we can recognise similar situations and act accordingly.

The decision support service is composed of several services. On the strategic level, using the model and the algorithm proposed by the big data analytical services, the decision support service provides an interface exposing the data situation in real time, but also predictions of events. For example, by regularly observing an increase in the population in one place and traffic jams 30 minutes later, we can deduce cause and effect and intervene in future situations so that the taxis avoid and evacuate that area. This service also generates data on the decisions taken by the strategists, to build more elaborate models that include the consequences of those decisions and thereby provide better decision support. On the vehicle level, services will provide advice to the vehicle for optimal economic driving based on the driving conditions; a database where the information on the dangers of the road is stored is also provided.
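The cause-and-effect example above can be illustrated with a simple lagged correlation test, sketched below in Python. The series names and sampling assumptions are hypothetical; the 30-minute lag comes from the example in the text.

    # Sketch: does crowd density at time t predict congestion at t + lag?
    import numpy as np

    def lagged_correlation(density, speed, lag_steps):
        # Correlate population density at time t with traffic speed at t + lag.
        # A strongly negative coefficient suggests crowds precede congestion.
        x = np.asarray(density)[:-lag_steps]
        y = np.asarray(speed)[lag_steps:]
        return np.corrcoef(x, y)[0, 1]

    # Example: 5-minute samples, 30-minute lag -> lag_steps = 6
    # corr = lagged_correlation(density_series, speed_series, lag_steps=6)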
4. Conclusion

This paper proposes a set of big data services as the starting point of a dedicated and flexible infrastructure for managing and exploiting transport data. Our approach uses NoSQL systems deployed in a multi-cloud setting and makes sharding decisions to ensure data availability. Our transport data service architecture is validated in a scalable and adaptable ITS case study of electric vehicles using big data analytics on the cloud. This provides a global view of the current status of town transport, helps make accurate strategic decisions, and ensures maximum security for the vehicles and their occupants. For the time being, our transport data services concentrate on improving design issues with respect to NoSQL support. We are currently measuring performance with respect to different sizes of data collections; we have noticed that NoSQL provides reasonable response times once an indexing phase has been completed. We intend to study the use of indexing criteria and to provide strategies for dealing with continuous data. These issues concern our future work.

Acknowledgement

We thank the Région Rhône-Alpes, which finances the thesis work of Gavin Kemp by means of the ARC 7 programme (http://www.arc7-territoiresmobilites.rhonealpes.fr/), as well as the competitiveness cluster LUTB Transport & Mobility Systems, in particular Mr. Pascal Nief, Mr. Timothée David and Mr. Philippe Gache, for putting us in contact with local companies and projects to gather use case scenarios for our work.

References

[1] V. Gulisano, R. Jiménez-Peris, M. Patiño-Martínez, C. Soriente, and P. Valduriez, "StreamCloud: An elastic and scalable data streaming system," IEEE Trans. Parallel Distrib. Syst., vol. 23, pp. 2351–2365, 2012.
[2] F. Lecue, S. Tallevi-Diotallevi, J. Hayes, R. Tucker, V. Bicer, M. L. Sbodio, and P. Tommasi, "STAR-CITY," in Proceedings of the 19th International Conference on Intelligent User Interfaces, IUI '14, 2014, pp. 179–188.
[3] H. V. Jagadish, J. Gehrke, A. Labrinidis, Y. Papakonstantinou, J. M. Patel, R. Ramakrishnan, and C. Shahabi, "Big data and its technical challenges," Commun. ACM, vol. 57, no. 7, 2014.
[4] P. Mell and T. Grance, "The NIST Definition of Cloud Computing," Recommendations of the National Institute of Standards and Technology, 2008.
[5] U. Demiryurek, F. Banaei-Kashani, and C. Shahabi, "TransDec: A spatiotemporal query processing framework for transportation systems," IEEE, pp. 1197–1200, 2010.
[6] A. Artikis, M. Weidlich, A. Gal, V. Kalogeraki, and D. Gunopulos, "Self-adaptive event recognition for intelligent transport management," pp. 319–325, 2013.
[7] D. Thompson, G. McHale, and R. Butler, "RITA," 2014. [Online]. Available: http://www.its.dot.gov/data_capture/data_capture.htm.
[8] L. Jian, J. Yuanhua, S. Zhiqiang, and Z. Xiaodong, "Improved design of communication platform of distributed traffic information systems based on SOA," in 2008 International Symposium on Information Science and Engineering, 2008, vol. 2, pp. 124–128.
[9] X. Chen, H. Vo, A. Aji, and F. Wang, "High performance integrated spatial big data analytics," in Proceedings of the 3rd ACM SIGSPATIAL International Workshop on Analytics for Big Geospatial Data, BigSpatial '14, 2014, pp. 11–14.
[10] J. Lin and D. Ryaboy, "Scaling big data mining infrastructure: The Twitter experience," ACM SIGKDD Explor. Newsl., vol. 14, no. 2, p. 6, Apr. 2013.
[11] N. J. Yuan, Y. Zheng, L. Zhang, and X. Xie, "T-Finder: A recommender system for finding passengers and vacant taxis," IEEE Trans. Knowl. Data Eng., vol. 25, pp. 2390–2403, 2013.
[12] Y. Ge, H. Xiong, A. Tuzhilin, K. Xiao, M. Gruteser, and M. Pazzani, "An energy-efficient mobile recommender system," in Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '10, 2010, p. 899.
[13] D.-H. Lee, H. Wang, R. Cheu, and S. Teo, "Taxi dispatch system based on current demands and real-time traffic conditions," Transp. Res. Rec., vol. 1882, pp. 193–200, 2004.
[14] D. Talia, "Clouds for scalable big data analytics," Computer, vol. 46, no. 5, pp. 98–101, 2013.
[15] J. Yu, F. Jiang, and T. Zhu, "RTIC-C: A big data system for massive traffic information mining," in 2013 International Conference on Cloud Computing and Big Data, 2013, pp. 395–402.
[16] H. Demirkan and D. Delen, "Leveraging the capabilities of service-oriented decision support systems: Putting analytics and big data in cloud," Decis. Support Syst., vol. 55, no. 1, pp. 412–421, 2013.
[17] OpenStack, "OpenStack," 2015. [Online]. Available: http://www.openstack.org/.
[18] OpenStack, "Swift," 2015. [Online]. Available: http://docs.openstack.org/developer/swift/.
[19] P. J. Sadalage and M. Fowler, NoSQL Distilled, 2012.
[20] F. Valverde and O. Pastor, "Dealing with REST services in model-driven web engineering methods," Jan. 2009.
[21] OpenStack, "Sahara," 2015. [Online]. Available: http://docs.openstack.org/developer/sahara/.
[22] GrandLyon, "Smart Data," 2015. [Online]. Available: http://data.grandlyon.com/.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-386

Cooling and Capability Analysis Methodology: Towards Development of a Cost Model for Turbine Blades Film Cooling Holes

Javier CONTINENTE a, Essam SHEHAB a,1, Konstantinos SALONITIS a, Sree TAMMINENI b and Phani CHINCHAPATNAM b
a Manufacturing Department, School of Aerospace, Transport and Manufacturing, Cranfield University, UK
b Product Cost Engineering, Rolls-Royce plc., UK
1 Corresponding Author, e-mail: e.shehab@cranfield.ac.uk

Abstract. This paper outlines a methodology incorporating cooling performance and manufacturing process capabilities to evaluate turbine blade film cooling cost. The manufacturing process considered is the electro discharge machining of cooling holes on turbine blades. The proposed methodology allows estimating the impact of geometry, cooling and capability variables on the unit cost of the turbine blade. A procedure based on CFD simulation is proposed to quantify the effects of geometry parameters, and of their variability due to manufacturing, on cooling performance. Similarly, the influence of hole design on process capabilities is addressed, and a method to evaluate it is put forward. The methodology has been applied to two case studies, namely cooling analysis for the heat transfer coefficient and assessment of non-conformance in different regions of an aerofoil.

Keywords. Film cooling holes, manufacturing capabilities, cooling analysis, EDM

Introduction

Turbine blades are among the most critical components in modern aircraft engines, which are pushed to their limits by the constant demand for better efficiency, making cooling a fundamental design requirement. However, performance is not the only objective: delivering cost-effective solutions is an obligation for the leading competitors. Film cooling holes have become one of the key features for achieving the cooling requirements, and they account for a significant share of the manufacturing cost. Therefore, there is a need to develop prediction models and tools that allow optimising cost at the design stage. Unit and lifecycle cost depend on the production process and on the in-service temperature of the blade respectively, making these the main cost drivers. Complex inter-relationships exist between geometry, cooling performance and manufacturing process capabilities, and they affect the component cost. The purpose of this work is to present a methodology to tackle cooling and capability analysis in order to derive a system which can predict the impact of these parameters on the manufacturing process cost of the turbine blade. Those predictive abilities can be integrated into a cost model in future development. Moreover, this paper demonstrates a couple of examples of the proposed methods as they are applied to cooling and capability analysis. Electro Discharge Machining (EDM) is the manufacturing process referred to in this research. It is a widespread technology for drilling holes in turbine blades, and many researchers have taken up the challenges of cost modelling it [1].

1. Background

1.1. Turbine Blade Cooling

One of the major concerns of the aerospace industry is to increase engine efficiency, reducing both operating costs and pollutant emissions.
Therefore, since the beginning of modern aviation, raising the combustion gas temperature has been a trend to improve overall efficiency. In current engines this temperature exceeds the melting point of the nickel superalloy blades used in turbines, so advanced cooling methods are required to protect the blades [2]. In modern jet engines, high pressure turbine blades are cooled using a combination of four systems: complex internal passages, film cooling, thermal barrier coatings and aerodynamic design to reduce heat loads [3]. Film cooling consists of manufacturing rows of holes that supply cool air, bled from the compressor, to create a film over the aerofoil, leading to a reduction of temperature on the metal surface and preventing excessive component wear, known as burn-off. The configuration of these holes needs to achieve a balance between cooling effects and cooling air consumption, which has a negative impact on engine efficiency [4]. This trade-off process is complex and difficult to optimise [5]. One of the main improvements in film cooling is the production of non-round shaped holes, which improve the distribution of air on the surface and reduce aerodynamic losses, resulting in higher cooling efficiency [3]. The main parameters used to evaluate cooling performance are the cooling effectiveness and the heat transfer coefficients (HTC) over the interior and exterior walls of the blade. The relation of those variables to the geometry of the holes has been the object of study for many researchers, using an experimental approach as in [6] and [7], or CFD simulation as in [8]. In addition to topology, the influence of manufacturing variability and of the specification limits has been addressed [9], showing a significant impact on performance and therefore on the in-service life of the blades.

1.2. Electro Discharge Machining

EDM is a non-traditional machining process widely used in aerospace manufacturing when dealing with hard materials. The intense heat produced by electric sparks, generated between the component and an electrode, melts the material, thereby allowing controlled erosion. These sparks occur in the gap separating the two electrodes, the tool and the component, which is filled with a dielectric liquid or gas. The capability of drilling tiny holes in superalloys, and the fact that there is no contact between the piece and the electrode, are its main advantages when compared to traditional machining methods [10]. There are several variants of EDM which share the same principles, but for the production of film cooling holes the ones used are die-sinking and fast hole drilling. Die-sinking EDM was the first type developed. It consists of an electrode with the negative shape of the required geometry, which is moved over the workpiece to reproduce the features. Discharges take place while both are submerged in the dielectric inside a tank that contains the fluid. A CNC control displaces the electrode head or the piece to reach the machining positions needed, while a servo is in charge of advancing the electrode forwards and backwards. Fast hole drilling refers to an EDM variant especially developed for drilling holes, with improved capabilities in terms of removal rate when compared to the original method [11]. This method requires the use of a hollow electrode which is rotated continuously while the dielectric fluid is pumped through it. The rotation provides concentricity and helps with the flushing process, while the high-pressure dielectric forced out tends to centre and stiffen the electrode [12].
2. Proposed Methodology

A unit and lifecycle cost model for film cooling holes has to integrate parameters from several analyses to produce quality predictions. Figure 1 shows the main cost drivers for EDM cooling holes and the analyses that model their interactions. Cooling performance is associated with the part life, so it has an impact on the lifecycle cost, while geometry and process capabilities are related to the manufacturing cost. As displayed in the figure, the variables can be linked together through a cooling analysis and a capability analysis, which are the building blocks for developing a comprehensive cost model in future work. Generating a methodology for those analyses is the aim of the current research, and it is presented in the following sections.

Figure 1. Main cost drivers for EDM cooling holes and the analyses that model their interactions (hole geometry: diameter, angles, shape, position; cooling performance: cooling effectiveness, heat transfer coefficients, manufacturing variability; process capabilities: non-conformance ratio; all feeding into an integrated unit and lifecycle cost model).

2.1. Cooling Analysis

The main objective of analysing the thermodynamic process of film cooling is to evaluate the impact of geometric parameters on cooling variables and to find relationships to model that interaction. To satisfy this purpose, CFD simulation is the core method employed to investigate how geometric parameters influence cooling performance. The methodology for this part of the research is summarised in Figure 2. This kind of procedure has usually been carried out by researchers when comparing different hole shapes and dimensions; however, the reason for performing this analysis here is to obtain results for the specific topology and conditions established during design, which may not be found in the literature or covered in detail.

As illustrated in Figure 2, the first step is to create a model of the geometry which can be described parametrically in order to vary dimensions and cover different configurations. A single-hole model is used to investigate how different variables change the cooling parameters, rather than a full blade model, which would also require a lot of computational resources. Nevertheless, to characterise aspects like hole position, a multi-hole model must be built to take into account the interaction between the flows of adjacent holes. The simplified model is created and parametrised by importing the geometry developed by the designers. The computational domain considered has a rectangular duct where the hot stream flows and another rectangular body that simulates the interior passage of the blade; between these there is a third body reproducing the blade wall, which contains the hole.

Figure 2. Proposed methodology for cooling analysis (the topology of different holes feeds a parametric geometry model; relevant factors and levels feed a design of experiments; together with the external conditions these drive the CFD simulation, followed by analysis and modelling of the response significance and model).

Once the model is ready, it is necessary to specify which parameters will be varied and how. This is conducted with design of experiments, selecting factors and levels for the simulations. Low and high levels are calculated to ensure that a geometry can be generated for all combinations of factors; this is fundamental to avoid introducing a set of values that may result in an impossible model, which would crash the software.
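A minimal sketch of generating such a full factorial design is given below, assuming two levels per factor. The factor names and level values mirror those used later in the case study of Section 3.1 (Table 2); the helper function itself is illustrative, not the authors' tooling.

    # Sketch: full factorial design over two-level geometric factors.
    from itertools import product

    factors = {  # levels as used in the case study (Table 2)
        "hole_angle": (30, 60),
        "fan_angle": (2, 8),
        "L_D_ratio": (1, 3),
        "fan_width": (1.6, 3.6),
        "laidback_angle": (2, 10),
    }

    def full_factorial(factors):
        # One dict of factor values per design point (2^5 = 32 points here).
        names = list(factors)
        return [dict(zip(names, combo))
                for combo in product(*(factors[n] for n in names))]

    design_points = full_factorial(factors)
    assert len(design_points) == 32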
A sensitive aspect of CFD analysis is the mesh: ensuring a good quality mesh is fundamental to obtaining accurate results. When the geometry of the model is changed, the mesh should be tested until the number of elements is suitable and the orthogonal quality is acceptable. As a rule of thumb, a hexahedral mesh usually produces better results with fewer iterations, and the number of elements should be around a million for an accurate solution. When the objective is to analyse the impact of manufacturing variability and tolerances, the mesh should be finer around the hole in order to capture the differences between cases. Boundary conditions for inlets and outlets are also provided to the solver before initiating the calculations. The output variables and their computed domain are set at this point; for instance, the cooling effectiveness could be required at a particular point, or averaged over a surface or a line. Finally, after the CFD simulation is done, the relevant variables are presented as a table for all the case points. It is possible to plot graphical representations, enhancing visualisation of the solution and facilitating checking and comparison with other work. The significance level of each factor is calculated and presented with charts for identification of the relevant variables, which are then used to obtain a regression equation or a response surface.

2.2. Capability Analysis

The capability analysis is developed to clarify the influence of geometry on the accuracy and repeatability of EDM, and to provide predictors to estimate process capabilities such as the non-conformance ratio or the deviation from nominal specifications. The methodology proposed in Figure 3 starts with the collection of data from two main sources: the inspection systems at the manufacturing facility, and the 3D models and drawings used for design. The quantity and quality of the information gathered are fundamental to achieving precise results; therefore numeric data is preferred and should be prioritised among the different types of records available. The subsequent stages depend on the nature of the variables inspected and the size of the dataset, and consequently these will determine the model achieved at the end of the analysis.

Figure 3. Proposed methodology for capability analysis (data collection from the facility system, 3D models, drawings and experts' knowledge; pre-processing; calculation of outputs per feature; maturity analysis and maturity index; selection of mature data; combination of outputs of different parts and with geometry; visualisation and modelling).

Next, the inspection data is pre-processed to generate files with raw data appropriately formatted for calculating the required variables. Inspected parts are sorted by date, outliers are removed and, if necessary, values are normalised. Nominal values and specification limits are added and linked to the features they define. The structure is created for all parts being analysed, that is, the different blades for which data has been gathered. It is then possible to obtain the capabilities associated with individual features.
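As an illustration of this per-feature step, the short sketch below computes a non-conformance ratio for every inspected hole from pre-processed records. The column names and data layout are hypothetical placeholders; real data would come from the facility collection system.

    # Sketch: non-conformance ratio per inspected feature (hole).
    import pandas as pd

    def nonconformance_per_feature(df: pd.DataFrame) -> pd.Series:
        # Assumed columns: 'hole_id', 'measured', 'lsl', 'usl'
        # (lower/upper specification limits linked to each feature).
        out_of_spec = (df["measured"] < df["lsl"]) | (df["measured"] > df["usl"])
        # Fraction of inspected parts out of specification, per hole.
        return out_of_spec.groupby(df["hole_id"]).mean()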
At this point, a maturity analysis is carried out in order to establish whether the manufacturing process has achieved a stable condition in the long term. Because the EDM process is optimised as a component begins its production life, it is fundamental to ascertain the current status before using the data. This is established by interpreting the evolution of the outputs, for instance non-conformance, over a relatively long period of time; for the same purpose, experts' opinions can be incorporated. Once the maturity level is known, and depending on the amount of data available, the information corresponding to the learning process can be dismissed if enough points remain, or a maturity index can be assigned to the part. If some data is removed, the target variables are recalculated. The values for every single feature inspected on the different parts are now combined, taking into account the maturity index when necessary. Following this, the geometric parameters obtained from models or drawings are added to the dataset. This information is likewise pre-processed as required to make it possible to compare all the different parts; for instance, the hole entry coordinates are feature-scaled to a 0 to 1 range. Manufacturing experts' knowledge is incorporated here in the form of a virtual part whose data comes from rules based on their experience.

The last step of this approach is to investigate the variation of capabilities with the factors and the correlation between parameters. The analysis outcome is a model relating geometry inputs to capabilities. This model is subject to the type of data available, so it could adopt different forms, such as regression equations, classification rules, decision charts or capability maps; selecting one or another is a task left to the analyst's criterion. Nevertheless, visualisation is always useful and is therefore included as part of the process.

3. Case Studies: Cooling Analysis for HTC and Non-conformance Capability

3.1. Cooling Analysis for HTC

In the following, the methodology described above is employed to investigate the influence of several geometric parameters on the external heat transfer coefficient of a single cooling hole. To perform the CFD calculations, the finite element analysis software used is ANSYS 14.5 in conjunction with the solver FLUENT. A simplified model of a fan-shaped hole is built and parametrised in relation to the diameter. The exterior dimensions of the domain are six diameters in width and 30 diameters in length, while the heights of the duct and passage are four and two diameters respectively. The wall width is dynamically adjusted in relation to the other parameters. The hole topology used is a cylindrical hole with forward expansion, or laidback, and a fan-shaped exit. Boundary conditions for inlets and outlets are shown in Figure 4; they have been selected similarly to the ones used in other studies found in the literature. The external walls of the model are adiabatic, while the blade wall has the heat transfer properties of a nickel alloy. To solve the problem, a hexahedral mesh has been generated with approximately 900,000 elements. The blowing ratio is 1.1 and the turbulence intensity is set to 5%. The variables employed are hole angle, laidback angle, fan angle, length-to-diameter ratio and fan width. For all of these, the lower and upper values used in simulation have been calculated so that the resulting geometry is feasible, as shown in Table 1. Utilising those values, a full factorial table was generated to populate the inputs for 32 design points of CFD calculation.
Figure 4. Model diagram and boundary conditions for CFD analysis (hot-gas duct: velocity inlet V = 130 m/s, T = 540 K, with pressure outlet at T = 540 K; inner passage: velocity inlet V = 40 m/s, T = 298 K, with pressure outlet at T = 298 K; labelled bodies: hole, blade, duct, inner passage).

Table 1. Factors and levels for geometric variables.

Factor           Low level   High level
Hole angle       30          60
Laidback angle   2           20
Fan angle        2           15
L/D ratio        1           4
Fan width        1.5         3.6

In this case study the outcome is the HTC, derived by means of the temperatures and heat flux solved in the simulation. Equation (1) calculates the HTC with film cooling, where q_w is the heat flux through the wall, T_b is the bulk temperature and T_w is the temperature of the wall averaged over the volume of solid:

h_f = q_w / (T_b - T_w)   (1)

The bulk temperature is used as a reference in internal forced convection situations, and here it is calculated with Equation (2) for the volume of flow downstream of the hole, where the capital U is the velocity of the main stream, the small u is the absolute velocity of individual particles, and V and T are volume and temperature respectively:

T_b = (1 / (U V)) \int_V u T dV   (2)

For the purpose of comparing different cases, a dimensionless variable is considered by dividing the coefficient by the one obtained without film cooling, h_0. Table 2 contains the results for the dimensionless HTC. The most relevant factors and interactions are determined considering all main effects and first order interactions with a significance level of 95%; those effects are presented in Figure 5.

Figure 5. Pareto chart of standardised effects for dimensionless HTC (α = 0.05).
Figure 6. Surface response analysis for dimensionless HTC.

As the chart shows, the most important impact on the cooling variables is due to the hole angle, the fan width and the length-to-diameter ratio, in that order of importance. The other two main factors, fan angle and laidback angle, have been found not to be significant; this should be confirmed in further simulations for different blowing ratios. After the main effects, the most relevant first order interactions are hole_angle·L/D_ratio and L/D_ratio·fan_width. Finally, before continuing with simulations of other conditions to confirm the results, a surface response analysis has been carried out with the two principal variables driving the HTC. This is an example showing that it is possible to obtain a model relating cooling performance to geometry. Figure 6 presents a contour plot of the dimensionless HTC versus hole angle and fan width; Equation (3) shows the model derived:

h_f/h_0 = 0.7162 - 0.00652·hole_angle - 0.2219·fan_width + 0.000113·hole_angle^2 + 0.05023·fan_width^2 - 0.000948·hole_angle·fan_width   (3)
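A model of the form of Equation (3) can be derived by ordinary least squares over the design points of Table 2 below; the following sketch illustrates such a fit, not the authors' exact regression procedure.

    # Sketch: fit hf/h0 = b0 + b1*a + b2*w + b3*a^2 + b4*w^2 + b5*a*w,
    # where a = hole_angle and w = fan_width, by ordinary least squares.
    import numpy as np

    def fit_quadratic_surface(hole_angle, fan_width, hf_h0):
        a, w, y = map(np.asarray, (hole_angle, fan_width, hf_h0))
        X = np.column_stack([np.ones_like(a), a, w, a**2, w**2, a * w])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef  # b0..b5, comparable to the terms of Equation (3)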
Table 2. Dimensionless heat transfer coefficient.

StdOrder  hole_angle  fan_angle  L/D_ratio  fan_width  laidback_angle  hf/h0
 1        30          2          1          1.6        2               0.3358
 2        60          2          1          1.6        2               0.4175
 3        30          8          1          1.6        2               0.3421
 4        60          8          1          1.6        2               0.4127
 5        30          2          3          1.6        2               0.3528
 6        60          2          3          1.6        2               0.4201
 7        30          8          3          1.6        2               0.3649
 8        60          8          3          1.6        2               0.4141
 9        30          2          1          3.6        2               0.2577
10        60          2          1          3.6        2               0.3870
11        30          8          1          3.6        2               0.2580
12        60          8          1          3.6        2               0.3825
13        30          2          3          3.6        2               0.3438
14        60          2          3          3.6        2               0.4049
15        30          8          3          3.6        2               0.3434
16        60          8          3          3.6        2               0.4016
17        30          2          1          1.6        10              0.3387
18        60          2          1          1.6        10              0.4070
19        30          8          1          1.6        10              0.3670
20        60          8          1          1.6        10              0.4081
21        30          2          3          1.6        10              0.3531
22        60          2          3          1.6        10              0.4096
23        30          8          3          1.6        10              0.3710
24        60          8          3          1.6        10              0.4091
25        30          2          1          3.6        10              0.2832
26        60          2          1          3.6        10              0.3658
27        30          8          1          3.6        10              0.2883
28        60          8          1          3.6        10              0.3656
29        30          2          3          3.6        10              0.3708
30        60          2          3          3.6        10              0.3827
31        30          8          3          3.6        10              0.3503
32        60          8          3          3.6        10              0.3834

3.2. Positional Non-Conformance Capability

The methodology proposed in the previous section has been applied to create a prediction map for non-conformance due to positional misplacement. First, data is collected from the measuring system software in the form of several files containing the raw inspection information. That information is generated by Coordinate Measuring Machines (CMM), which determine the position of some of the cooling holes, usually along two perpendicular coordinates. The CMMs provide accurate results, so these are considered high-quality data. Records for seven blades have been gathered, corresponding to a time period of almost two years on average, as shown in Table 3.

Table 3. Summary of data collected.

Part name   Part count   Holes inspected per part   Data period (months)
Blade A     6309         44                         26
Blade B     6278         51                         18
Blade C     2431         45                         23
Blade D     7621         18                         31
Blade E     4905         19                         20
Blade F     12255        20                         28
Blade G     5229         36                         11

Once the data is compiled and sorted and outliers are removed, the non-conformance ratio is evaluated for every hole, considering both measured coordinates when available. Maturity can then be assessed by exploring the evolution of this variable, looking for stability over the long term. Maturity has also been scrutinised by manufacturing experts to confirm that the EDM process has reached an optimised status. Design parameters, including the location of the cooling holes, have been extracted from drawings and 3D geometry models, and have been used to divide the aerofoil into twelve areas. Non-conformance has been computed for those zones employing the values calculated for the holes drilled in each area. The results are visualised using a colour map, as presented in Figure 7.

Figure 7. Level of non-conformance over the aerofoil, by zone (top, middle and bottom of the suction side, leading edge, pressure side and trailing edge; data have been normalised due to confidentiality).

The results plotted in Figure 7 reveal that the areas with higher non-conformance are located at the bottom of the leading and trailing edges, whereas the suction and pressure sides present generally lower values. According to this map, it is possible to estimate the ratio of holes out of tolerance for a new part once it reaches a mature status.
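The zone-level map can be thought of as a simple aggregation of the hole-level ratios, as the hedged sketch below shows; the zone labels and column names are hypothetical, and the normalisation mirrors the confidentiality treatment of Figure 7.

    # Sketch: aggregate hole-level non-conformance into the twelve aerofoil zones.
    import pandas as pd

    def zone_map(holes: pd.DataFrame) -> pd.Series:
        # Assumed columns: 'zone' (one of twelve areas, e.g. 'leading_edge_bottom')
        # and 'nc_ratio' (non-conformance ratio of each hole).
        zones = holes.groupby("zone")["nc_ratio"].mean()
        return zones / zones.max()  # normalised, as in Figure 7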
4. Conclusion and Future Work

Across this research work, a methodology for investigating the relationships between cooling performance, geometry and manufacturing capabilities has been proposed. An example of its use to obtain the response of the HTC has been provided, showing the hole angle and the lateral expansion of the hole to be the most significant geometric parameters. In addition, manufacturing data from position inspection has been utilised to create a non-conformance map of an aerofoil. The proposed analyses are developed as individual blocks that can be integrated in future steps into a comprehensive cost model for EDM production of cooling holes. Further work will be carried out to derive results for different types of holes and fabrication conditions, and finally to develop the mentioned cost model. Additional effort should be made to validate the findings and to automate the acquisition and integration of data.

References

[1] J. Continente, E. Shehab, K. Salonitis, S. Tammineni and P. Chinchapatnam, Challenges of cost modelling for electro discharge machining and laser drilling manufacturing processes, 12th International Conference on Manufacturing Research (ICMR2014), pp. 83–88.
[2] M.H. Wang and D. Zhu, Simulation of fabrication for gas turbine blade turbulated cooling hole in ECM based on FEM, Journal of Materials Processing Technology 209 (2009), no. 4, 1747–1751.
[3] R.S. Bunker, A review of shaped hole turbine film-cooling technology, Journal of Heat Transfer 127 (2005), no. 4, 441–453.
[4] R. Roy, Adaptive Search and the Preliminary Design of Gas Turbine Blade Cooling Systems, PhD thesis, University of Plymouth, 1997.
[5] Y. Lu, D. Allison and S.V. Ekkad, Turbine blade showerhead film cooling: Influence of hole angle and shaping, International Journal of Heat and Fluid Flow 28 (2007), no. 5, 922–931.
[6] W.F. Colban, K.A. Thole and D. Bogard, A film-cooling correlation for shaped holes on a flat-plate surface, Journal of Turbomachinery 133 (2011), no. 1.
[7] R.P. Schroeder and K.A. Thole, Adiabatic effectiveness measurements for a baseline shaped film cooling hole, ASME Turbo Expo 2014: Turbine Technical Conference and Exposition, GT 2014, American Society of Mechanical Engineers (ASME), 2014.
[8] K.-D. Lee and K.-Y. Kim, Shape optimization of a fan-shaped hole to enhance film-cooling effectiveness, International Journal of Heat and Mass Transfer 53 (2010), no. 15–16, 2996–3005.
[9] R.S. Bunker, The effects of manufacturing tolerances on gas turbine cooling, Journal of Turbomachinery 131 (2009), no. 4, 1–11.
[10] K.H. Ho and S.T. Newman, State of the art electrical discharge machining (EDM), International Journal of Machine Tools and Manufacture 43 (2003), no. 13, 1287–1300.
[11] F.N. Leao, I.R. Pashby, M. Cuttell and P. Lord, Optimisation of EDM fast hole-drilling through evaluation of dielectric and electrode material, in: ABCM (ed.), 18th International Congress of Mechanical Engineering, 2005.
[12] O. Yilmaz and M.A. Okka, Effect of single and multi-channel electrodes application on EDM fast hole drilling performance, International Journal of Advanced Manufacturing Technology 51 (2010), 185–194.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-396

A Methodology for Mechatronic Products Design Applied to the Development of an Instrument for Soil Compaction Measurement

Mauricio MERINO PERES a, Iana G.
CASTELO BRANCO b and Andréa Cristina dos SANTOS ab,1
a Mechatronic System Graduate Program, University of Brasília, Brazil
b Production Engineering Department, University of Brasília, Brazil
1 Corresponding Author; E-mail: andreasantos@unb.br

Abstract. Checking the compaction state of the soil is fundamental for the development and growth of planting productivity. The most practical method for this is to determine the penetration resistance using an impact penetrometer. The measurement is considered indirect, because the data obtained during measurement are fed into an equation in order to determine the degree of compaction; this can introduce uncertainty and may turn out to be inaccurate with respect to the data obtained. The most significant advances presented here are the contributions of the bibliographic references to future research, which highlight that the developments already carried out do not fully meet the needs present in the penetrometer. Despite the great diversity of goals and methodologies employed, a common necessity and motivation in these studies is the determination of the degree of compaction and the contribution of this information to decision making. As it stands, the measurement of soil compaction is a long process. We therefore argue that a compaction-measuring instrument based on the principle of the penetrometer must be developed, conducting a series of activities that allow the generation of the technical specifications of a mechatronic product. The activities that give rise to the project specifications are: identifying customer needs; analysing the needs relating to the new product; identifying product quality metrics; analysing manufacture and competing products; and defining the product specifications. The QFD "house of quality" is suggested to assist in defining the product specifications. In this context, there are criteria to be adopted in the decision-making process; however, the success of the project is reflected in the technical specifications. With this article we aim to fill gaps that otherwise rely only on experience at the time of developing a mechatronic product design. The article aims to clarify and detail the project specifications that are used as a source of information and that enable success in the development of a new instrument for measuring soil compaction. The main contribution of this article is the presentation of the model, methods and techniques for developing the technical specifications of a mechatronic product design, applied to an instrument for measuring soil compaction.

Keywords. Soil, mechatronic design, soil compaction, methodology of design, conceptual design

Introduction

Soil compaction is a process resulting from intensive manipulation of the soil, which modifies its characteristics, such as porosity, due to particle saturation. Soil compaction is desired in building design but is highly prejudicial to agricultural activities. Agricultural production is negatively affected by soil compaction because it hinders root growth, causing problems for the normal development of planting. Tillage systems should offer favourable conditions for crop growth and development; however, depending on the soil, climate, crop and its treatment, degradation of the physical quality of the soil can be promoted [1]. There are different methods and techniques for measuring soil compaction.
Most involve laboratory testing using various physical parameters, such as density, porosity and resistance to penetration, as proxies to determine soil compaction. Soil resistance to penetration is considered the most appropriate property to express the degree of soil compaction and, consequently, the ease of root penetration [2]. However, the resistance of soils measured by penetrometers is correlated to soil density and is also a function of moisture content; therefore, measurements of soil moisture must accompany the determination of resistance [3]. The most practical method for determining the resistance of soil to penetration is by means of an impact penetrometer. Since the 1980s many attempts have been made to improve the instrument, with great strides, but there is still no solution that integrates the various aspects of instrumentation needed to improve the reading characteristics and the reliability of the instrument while keeping the initial conditions of low cost and easy transport. Exploratory studies carried out in partnership with EMBRAPA (Brazilian Agricultural Research Corporation) emphasise the necessity of developing a new instrument with low operation and maintenance costs. Instrumentation is studied by sciences such as mechatronics and systems control, which apply and develop measurement and control techniques for equipment and industrial processes. Within this context, the development project of the monitoring instrument may be performed using reference models for mechatronic product development.

This article aims to present the methods and techniques used for elaborating the technical specifications of the design of a new instrument for measuring soil compaction. It is divided into three parts: the first presents the literature review that drives the design of the soil compaction measuring instrument; the second presents how the development of the specifications of the instrument was conducted; and the last part presents the final considerations.

1. Design of Complex Systems

Due to the integration and interconnection of mechanical and electrical engineering and information technology, mechatronic products can be considered complex products, which present a high probability of scope changes during development along the project [4]. Complex systems have a high number of elements interacting with each other, known as sub-systems, which consist of simpler functional devices that are composed of parts [5]. For [6], complex systems may exhibit one or more of the following features:

• some degree of unpredictability;
• a variety of components that are highly interconnected, where the behaviour of the entire system depends significantly on the interactions of its components;
• some level of autonomy;
• components that are individual or autonomous elements, such as software that self-modifies;
• adaptive, able to learn, grow and respond to environmental stimulation;
• self-organised, with the whole system emerging as a result of cooperation and competition among its components;
• emergent properties and behaviours, considered key features.

Thus, it is important to emphasise the variety of characteristics that may be present in a complex system, and that these features may require different treatment in terms of its analysis.
According to [7], in order to understand the complexity of product design it is possible to work with the definition used in cybernetics, where simple problems are characterised by having few parameters and low interdependence among them, complicated problems have a high number of parameters and intense connectivity, and, finally, complex problems add the dynamics factor to the definition of complicated problems. A traditional modelling approach to complex systems is to decompose them into subsystems, which are easier to understand; to observe the relationships between the subsystems that affect the behaviour of the whole system; and to monitor the inputs, outputs and impacts on the system [8].

2. Mechatronic Design

The main references for the product design process, regarding the presentation of methods and techniques, focus on mechanical system design; among them, three stand out [9], [10], [11]. The V model for the mechatronic product design process is presented in [4], [12]. This reference model is defined in the VDI 2206 (German standard), which was built from systems engineering theory. According to [13], the systems engineering process in product development includes the following basic tasks: "define the objectives of the product; establish product performance requirements (requirement analysis); establish the functionality of the product (functional analysis); develop alternative design concepts of the product (architecture synthesis); select a baseline product design (selection of balanced product design); verify that the baseline product design meets requirements (verification); validate that the baseline product design satisfies its users (validation); and iterate the above process through lower levels (cascading product requirements to lower levels through allocation of functions and design synthesis). ... These tasks are performed iteratively, following five types of loops" [13, p. 28].

These five types of loops are applied iteratively in order to modify the product architecture and configuration until a balanced product design is reached. The requirement loops help in refining the definition of requirements, as these are used in analysing the required functions and in allocating them to systems, subsystems and components at various levels. The design loops involve iterative application of the functional analysis and allocation results to design the product such that the entire product, with interfaces between the various systems, subsystems and components, can perform so as to meet all its requirements. The control loops make sure that the right issues are considered and analysed at the right time, and that the right decisions are made to control the three basic tasks: requirements analysis, functional analysis and allocation, and design synthesis. The verification loops involve conducting tests on the designed product, its systems, subsystems and components, to ensure that all requirements are met at every level. The validation loop involves tests and evaluations conducted to ensure that the product will meet all its stated customer needs [13].

The requirements of a system are generally classified into functional and non-functional. The functional requirements are the ones that must be fulfilled to comply with the objectives of operating or using the product. Customer requirements are expectations about the product (or system) in terms of mission, objectives, functions, environment and constraints, which are pointed out by the customers themselves or by stakeholders. These requirements are specified to set the target values for measuring product attributes. Source [13] goes beyond functional and customer requirements and points out other types of project requirements; among them, the human factors requirements stand out in this project.

Source [14] implemented a range of methods and techniques during the design process to improve endoscope use, aimed at greater integration of the engineering development team with the users (the medical staff). The development of the new endoscope involved the following steps:

1. Determination of the design focus area: deciding the direction in which the development efforts will be allocated towards adding value to the product and understanding the needs of users. Surveys on user expectations were conducted early in development.
2. Creation of the task stream and system application, to understand the context of use.
3. Determination of the problems to be solved and of the goals of the design.
4. Creation of a workflow with all the problems targeted by the project that need to be solved: knowing which problems are to be attacked, a new task stream is generated with the solution already in mind and incorporating the desired product characteristics; that is, this step generates idealised performances of the product.
5. Translation of the desired task stream into a functional structure containing the system functionalities to be created; the functional structure is then added to the desired task stream.
6. Selection and configuration of the building elements (physical elements), which are the primary concepts: an attempt is made to translate the created functions (functional structure) into physical elements and an overview of system operation, through schematics and models.
7. Decomposition of the physical elements into manageable modules: dividing the architecture into modules for implementation can be a laborious task, but if done correctly it allows greater customisation for the end customer.
Customer requirements are expectations about the product (or system) in terms of mission, objectives, functions, environment and constraints which are pointed out by customers itself of stakeholders. These requirements are specified to set the target values for measuring product attributes. Source [13] sets it beyond functional and customer requirements and then points out other types of project requirements. Among them, in the project, stand out in the human factors requirements. Source [14] had implemented a range of methods and techniques during the design process to improve endoscope use. These were aimed at greater integration of the engineering development team with the users (the medical staff). The development of the new endoscope involved the following steps: 1- Determination design focus area- is decided about the direction in which the development efforts will be allocated towards adding value to the product and understand the needs of users. Surveys were conducted on user expectations in early development. 2- Creation of the job stream and system application to understand what the context of use. 3- Problem determination to be solved and what are the goals of design. 4- Creation of workflow, with all the problems targeted by the project and that need to be solved - Knowing what problems to be attacked is then generating a new stream of jobs already in mind showing the solution and entering the desired products characteristics, that is, this step is generated idealized performances of the product. 5- Translation of wanted stream tasks on a functional structure containing system functionalities to be created - The functional structure is then added to the wanted stream tasks. 6- Selection and configuration of the building elements, physical elements, which are the primary concepts - An attempt is made to translate the created functions (functional structure) in physical elements and an overview of system operation, through schematic and models. 7- Decomposition of physical elements into manageable modules - Carry out the division of architecture modules for implementation can be a laborious task, but if done correctly can allow greater customization for the end customer. 400 M. Merino Peres et al. / A Methodology for Mechatronic Products Design Source [15] introduces a model to integrate the traditional requirements process into Axiomatic Design Theory and proposes a method to structure the requirements process. The method includes a requirements classification system to ensure that all requirements information can be included in the Axiomatic Design process, a stakeholder classification system to reduce the chances of excluding one or more key stakeholders, and a table to visualize the mapping between the stakeholders and their requirements. The ex-ante approach, inspired by the axiomatic design (AD), proposed by source [16]. Is a step-by-step guide designed to leader the correct translation of customer needs into product requirements. The procedure consists of an iterative elaboration and reorganize the information related to the radical innovation of complex products. According to [16], the most important methods for the development of this model are: the method of Kano; Cascini's method; the method of source [15] and QFD. Is new approach proposed by the authors is suitable for the improvement of designs of complex products, such as automotive devices where the method has been tested, or biomedical, operating with high standards of quality. 3. 
There are different methods and techniques for measuring soil compaction. Most involve laboratory testing of physical parameters of the soil that serve as indicators of compaction, such as density, porosity and resistance to penetration, the last of which can be measured in the field with instruments like penetrometers, as shown in Table 1.

Table 1. Methods and techniques for measuring physical parameters of soil compaction.
- Porosity and density of soil: volumetric ring method [17], waxed divot method [17], computed tomography method [18] and neutron-gamma surface probes [17].
- Resistance to penetration: different types of penetrometers are used to determine the physical properties of the soil, such as the dynamic penetrometer, the static penetrometer, the penetrograph and the pocket penetrometer [19], [3].
- Water and moisture infiltration: alternative methods for measuring soil water content and hydraulic conductivity include the tensiometer, the neutron probe and the time domain reflectometry (TDR) technique [18], [20], [21], [23].
- Compaction (trench examination): the simplest alternative method to identify compacted layers in the field is the opening of trenches and surface observation. This method has limitations, since it only allows a compacted layer to be identified without characterizing it; that is, it is not possible to define the degree of compaction, nor how much it is affecting crop growth and productivity, nor to decide safely on the need for a motor-mechanized operation for ground decompression [24].
- Compression curve: directly related to the determination of moisture and an important contributor to the determination of soil compaction. The soil compaction curve is determined by the Proctor test, normal or modified, widely employed in civil engineering; however, its use in the agronomic setting has limitations, because the fitting of the curve relies on the reuse of a single sample, regardless of the original structure of the soil [25].

The measurement method that offers the greatest accuracy, the highest reliability in obtaining data and the shortest response time is the nuclear type. However, this method involves high costs, besides facing cultural and legal difficulties in field research carried out in Brazil: nuclear-type equipment, for example, is usually held up in Brazilian airports due to current legislation, and producers of ecological products reject this method of measurement for cultural reasons. Hence, from this review of the methods for measuring soil compaction, it is necessary to integrate two measurements, penetration-resistance reading and moisture reading, in the same technical system.
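To make the intended integration concrete, the following minimal sketch (in Python) pairs a penetration-resistance reading, computed with the standard cone-index relation CI = F/A, with a TDR moisture reading in a single record. The cone geometry, function names and field values are illustrative assumptions, not the actual design of the instrument developed here.

```python
# Hypothetical sketch of an integrated penetrometer + TDR sample record.
# Assumed geometry and values; only the relation CI = force / cone base area
# is the standard penetrometer formula.
import math

CONE_BASE_DIAMETER_M = 0.0128                       # assumed 12.8 mm cone
CONE_BASE_AREA_M2 = math.pi * (CONE_BASE_DIAMETER_M / 2) ** 2

def cone_index_kpa(force_n: float) -> float:
    """Penetration resistance as cone index (kPa) from measured force (N)."""
    return force_n / CONE_BASE_AREA_M2 / 1000.0

def combined_reading(depth_m: float, force_n: float, vol_moisture: float) -> dict:
    """One integrated sample: depth, resistance and TDR volumetric moisture."""
    return {
        "depth_m": depth_m,
        "resistance_kpa": cone_index_kpa(force_n),
        "moisture_m3_m3": vol_moisture,             # from the TDR probe
    }

# Example: 90 N at 0.10 m depth with 0.23 m3/m3 moisture
print(combined_reading(0.10, 90.0, 0.23))
```

Recording both values against the same depth in one record is exactly what the single-purpose methods reviewed above cannot provide.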
4. Preparation of Project Specifications to Measure Soil Compaction

The proposed approach draws on the contributions of sources [14], [15], [16] in the use of methods and techniques such as QFD, the Kano method and the table for mapping customers' needs. In addition, it uses complex systems engineering techniques drawn from the V model, which allows mechatronic systems to be developed following a specific development order. The proposed model lies within the analysis and identification areas of the V model (system level and subsystem level), with emphasis on the requirements analysis loops of systems engineering. However, the methods and techniques have not been adopted in their entirety, due to the complexity of mechatronic products, which varies from project to project. According to source [26], managing the complexity of mechatronic product design requires a properly defined process model, whose procedures involve the "stakeholders" of the different domains and whose activities need to be performed, coordinated and synchronized.

To clarify who the customers and actors, direct and indirect, involved in the design are, it is necessary to survey them and place them within the product life cycle, treating them as stakeholders; this facilitates the statement of requirements. Surveying the project stakeholders requires thorough research into the problem, so that all the actors can be seen, from the supply chain to the project team and the instrument manufacturer. The technique used for identifying who the stakeholders are, and how they interact with the design, is the mapping of the phases of the product life cycle. The principal methods used to compile information on the needs of the stakeholders across the product life cycle were interviews, questionnaires, direct observations, and the mapping of the activity stream for measuring soil compaction with existing equipment. For this survey the design team received the support of EMBRAPA (Brazilian Agricultural Research Corporation), mainly concerning the difficulties and facilities of use of existing methods and techniques; Figure 1 illustrates the activities in the field.

Figure 1. Activity system for the determination of soil compaction using the penetrometer. A) Process of measuring soil compaction (tiered triangle); B) main phases of the triangle for the calculation of soil compaction.

Table 2 illustrates part of the survey of customers' needs by means of the project stakeholders found throughout the life cycle. This is a key step towards the specifications of a mechatronic product of a complex system, with the purpose of building an appropriate prototype. An attribute is a characteristic that the product must have in order to sell well; instead of determining product requirements directly from the customer needs, the requirements specified to achieve the product attributes can be defined as the attribute requirements [13]. Constraints are "needs" that must be present in the finished product, for example those imposed by the regulations and laws of the country. The next step aims to determine the instrument specifications. The most widely used method is Quality Function Deployment (QFD), specifically the first matrix, the house of quality.
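Before Table 2 is presented, the mapping it encodes can be illustrated with a small sketch. The stakeholder names and need sets below are a hypothetical subset, and the importance roll-up (counting how many stakeholders raise a need) is one simple proxy for the prioritisation a first house of quality performs, not the procedure actually used in this project.

```python
# Illustrative sketch, not the authors' tool: a stakeholder-to-needs matrix
# like Table 2, with a simple importance roll-up. Names are hypothetical.
needs = ["different measurements", "easy operation", "easy maintenance",
         "ergonomic", "light", "meets standards and laws"]

# An 'X' in Table 2 becomes set membership here.
stakeholder_needs = {
    "EMBRAPA":              {"different measurements", "meets standards and laws"},
    "rural producer":       {"easy operation", "easy maintenance", "light"},
    "INMETRO laboratories": {"meets standards and laws"},
}

# Need importance = how many stakeholders raised it (a common, simple proxy).
importance = {n: sum(n in s for s in stakeholder_needs.values()) for n in needs}
for need, score in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(f"{need}: {score}")
```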
Table 2. Matrix supporting the elicitation of project attributes and requirements (excerpt). Stakeholders (EMBRAPA; cooperatives / rural producers / universities; INMETRO laboratories) are classified by type (direct or indirect, internal or external) and by life-cycle stage (project, operation), and their needs (have different measurements; be easy to operate; be easy to maintain; be ergonomic; be light; conform to the technical standards and laws) are marked with an X against the requirement categories ergonomics, aesthetics, economy, security, reliability and legality. [The cell-level markings did not survive extraction.]

Source [16] presents a step-by-step procedure aiming to guide the designer through a correct translation of needs into product requirements. Its peculiarity is that the method intervenes at the very early stage of translating the Voice of the Customer (VOC) into CTQs (critical-to-quality characteristics), unlike classical approaches in which this translation occurs in successive steps. The procedure consists of an iterative elaboration and reorganization of the information related to the radical innovation of complex products. The method drives the QFD team through an iterative revision of needs, critical-to-quality characteristics and design parameters in order to pursue a genuinely well-balanced structure. Table 3 illustrates part of the technical specifications of the soil compaction measuring instrument, elaborated from the first QFD matrix. Although a number of methods and techniques were used to support the generation of specifications, they were not sufficient to establish the specifications of the software. To do so, it is necessary to carry out the conceptual design phase and the allocation of requirements to physical functions in order to determine the software specifications.

Table 3. Part of the technical specifications of the instrument for measuring soil compaction (excerpt).
- High level of measurement configuration: goal - a system with multiple measurement settings (moisture, penetration resistance, density), configured through programming software (graphical interface); unwanted output - an instrument with few configuration and measurement options.
- High calibration facility: goal - easy instrument calibration, with equipment to perform calibrations, adjustments and uncertainty evaluations; unwanted output - a penetrometer that is hard to adjust and calibrate.
- High resolution of the measured signal: goal - high signal quality and resolution, with software configuration of the penetrometer sensors; unwanted output - penetrometers with bad signal reception and low quality.
The units of measurement listed include %, °C, kPa and W. [The unit-to-specification mapping did not survive extraction.]

5. Final Remarks

This paper reviewed the sources of the technical specifications of the product and the templates used, through a literature review. Different sources and models were identified; however, they were not adopted in their entirety, due to the complexity of mechatronic products. Therefore, the model presented in this paper is proposed to integrate the different areas of mechatronic products. The main references for the product design process, with their presentation of supporting methods and techniques, focus on mechanical systems design. In this article we followed the path of systems engineering to design a soil compaction measuring instrument. Throughout the literature review various models for mechatronic product design were found, but they present limited information on how to conduct the design process. The main reference found was [16], which formed the basis for this research.
From this reference, a number of articles were sought to aid in the development of the activities. This first part of the research included a series of field visits to get to know the existing analysis methods, with the purpose of better understanding the design problem. The site visits and conversations with experts who use existing equipment produced a set of needs, which were transformed into different types of requirements. The main challenge in this project was to create an inexpensive concept with appropriate technology that meets the needs of the different stakeholders. The next steps of this research involve the development of the conceptual design phase, including development testing and analysis to find the best path for the development of the soil compaction tool.

References

[1] C.A. Tormena, et al., Densidade, porosidade e resistência à penetração em Latossolo cultivado sob diferentes sistemas de preparo do solo, Scientia Agricola, Vol. 59, No. 4, pp. 795-801, 2002.
[2] D. Cerqueira Silveira, et al., Relação umidade versus resistência à penetração para um Argissolo Amarelo distrocoeso no recôncavo da Bahia, Revista Brasileira de Ciência do Solo, Vol. 34, No. 3, pp. 659-667, 2010.
[3] J.M. Manieri, Utilização de um penetrômetro de impacto combinado com sonda de TDR para medidas simultâneas de resistência e de umidade do solo na avaliação da compactação em cana-de-açúcar, Campinas, 2005.
[4] P. Hehenberger, et al., Hierarchical design models in the mechatronic product development process of synchronous machines, Mechatronics, 20, pp. 864-875, 2010.
[5] A. Kossiakoff, W.N. Sweet, Systems Engineering Principles and Practice, John Wiley & Sons, Hoboken, 2003.
[6] D.W. Hybertson, Model-Oriented Systems Engineering Science: a Unifying Framework for Traditional and Complex Systems, Auerbach Publications, Boca Raton, 2009.
[7] U. Lindemann, M. Maurer, T. Braun, Structural Complexity Management: an Approach for the Field of Product Design, Springer, Berlin, 2009.
[8] T.R. Browning, Applying the Design Structure Matrix to System Decomposition and Integration Problems: A Review and New Directions, IEEE Transactions on Engineering Management, Vol. 48, No. 3, 2001.
[9] G. Pahl et al., Projeto na Engenharia: métodos e aplicações, 6th ed., Edgard Blucher, 432 pp., 2005.
[10] H. Rozenfeld et al., Gestão de Desenvolvimento de Produtos: uma referência para a melhoria do processo, Saraiva, São Paulo, 542 pp., 2006.
[11] N. Back et al., Projeto Integrado de Produtos: Planejamento, Concepção e Modelagem, Manole, São Paulo, 601 pp., 2008.
[12] V.S. Vasić, M.P. Lazarević, Standard Industrial Guideline for Mechatronic Product Design, FME Transactions, Faculty of Mechanical Engineering, Belgrade, 36, pp. 103-108, 2008.
[13] V.D. Bhise, Designing Complex Products with Systems Engineering Processes and Techniques, Taylor and Francis, 462 pp., 2014.
[14] J.G. Ruiter, M.C. Voort, G.M. Bonnema, User-centred system design approach applied on a robotic flexible endoscope, Computer Science, 16, pp. 581-590, 2013.
[15] M.K. Thompson, Improving the requirements process in Axiomatic Design Theory, CIRP Annals - Manufacturing Technology, 62, pp. 115-118, 2013.
[16] G. Montelisciani et al., Ordering the Chaos: a Guided Translation of Needs into Product Requirements, Procedia CIRP, 21, pp. 403-408, 2014.
[17] L.F. Pires et al., Comparação de métodos de medida da densidade do solo, Acta Scientiarum Agronomy, Vol. 33, pp. 161-170, 2011.
[18] O.A. Camargo, L.R. Alleoni, Reconhecimento e medida da compactação do solo, 2006. Available at: http://www.infobibos.com/Artigos/2006_2, Accessed: October 4th, 2014.
[19] E. Torres, O.F. Saraiva, Camadas de impedimento mecânico do solo em sistemas agrícolas com a soja, Embrapa Soja, 1999.
[20] I. Vásques Garcia, Prototipo de un penetrómetro cónico de impacto y su validación de uso en suelos forestales, Montecillo, Texcoco, Estado de México, 2010.
[21] J.A. Azevedo, E.M. Da Silva, Tensiômetro: dispositivo prático para controle da irrigação, Embrapa Cerrados, 1999.
[22] C.M. Vaz et al., Validação de 3 equipamentos de TDR (Reflectometria no Domínio do Tempo) para a medida da umidade de solos, Embrapa, São Carlos, 2004.
[23] M.D. Sá, J.D. Santos Júnior, Compactação do solo: consequências para o crescimento vegetal, EMBRAPA Cerrados, Planaltina, DF, 2005.
[24] F.T. Ramos et al., Curvas de compactação de um Latossolo Vermelho-Amarelo: com e sem reúso de amostras, Revista Brasileira de Engenharia Agrícola e Ambiental, Vol. 17, No. 2, pp. 129-136, 2013.
[25] J. Gausemeier et al., Integrative development of product and production system for mechatronic products, Robotics and Computer-Integrated Manufacturing, 27, pp. 772-778, 2011.
[26] R.C. Beckett, Functional system maps as boundary objects in complex system development, Int. J. Agile Systems and Management, Vol. 8, No. 1, pp. 53-69, 2015.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-406

Process Knowledge Model for Facilitating Industrial Components' Manufacturing

Jingyu SUN a,1, Kazuo HIEKATA a, Hiroyuki YAMATO a, Pierre MARET b, Fabrice MUHLENBACH b
a Graduate School of Frontier Sciences, The University of Tokyo, Japan
b Laboratoire Hubert Curien, Université Jean Monnet, France

Abstract. This paper proposes a process knowledge model for facilitating the manufacturing of complicated industrial components by suggesting a combination of manufacturing action elements for a given situation. The proposed knowledge model regards the generalized manufacturing process as a sequence of the situations under which experienced workers made their decisions and the actions which the workers took. Each situation and action can be broken down into a set of element situations and element actions. Based on interviews with expert workers and the recorded manufacturing data, pairs of situation elements and action elements are extracted, scored and stored in the knowledge model. The most efficient action suggested by the knowledge model is a combination of the action elements fuzzily inferred to be the most effective ones for the given situation. In this model, the effectiveness of each situation and action element pair is evaluated and scored based on the subsequent manufacturing steps and manufacturing milestones from the recorded data. The proposed knowledge model proved effective when applied to a series of 3D manufacturing data obtained in a shipyard.

Keywords.
Process knowledge model, curved shell plate, manufacturing action, fuzzy knowledge

Introduction

In today's industry, the manufacturing processes of relatively complicated industrial components that are produced by humans step by step, such as the curved shell plate in a shipyard, embody a great deal of knowledge and skill that has proved hard to elicit and disseminate. Without efficient knowledge elicitation and dissemination, the inheritance of these skills and experiences costs more time and energy, and the automatic machining of these components becomes extremely difficult, even impossible. One reason why the knowledge embedded in these manufacturing processes is difficult to elicit is that there is not yet an efficient way to describe manufacturing processes involving multiple expert workers; it is therefore not possible to analyze these processes efficiently even when enough data has been recorded. Besides, the knowledge provided by different expert workers should be properly evaluated and restructured before being used in future daily manufacturing.

1 Student, Graduate School of Frontier Sciences, the University of Tokyo, Building of Environmental Studies, Room #274, 5-1-5, Kashiwanoha, Kashiwa-city, Chiba 277-8563, Japan; Tel: +81 (4) 7136 4626; Fax: +81 (4) 7136 4626; Email: sun@is.k.u-tokyo.ac.jp; http://www.nakl.t.u-tokyo.ac.jp/

This paper proposes a process knowledge model for facilitating the manufacturing of complicated industrial components. The manufacturing process of a complicated component is modeled as a sequence of the situations under which experienced workers made their manufacturing decisions and the actions which the workers took under each given situation. Every situation and action can be broken down into a set of elements representing the detailed items constituting that manufacturing step. The element pairs of situation and action are then evaluated using the recorded manufacturing data and stored in the knowledge model. When manufacturing a new component under a similar situation, the most efficient action is suggested by the constructed knowledge model in the form of a combination of the action elements fuzzily inferred to be the most effective ones under the given situation. Building the knowledge model consists of the following steps:

a. Cluster components' situation elements.
b. Cluster experts' action elements.
c. Find or estimate the subsequent manufacturing to evaluate the action effectiveness.
d. Build an element rule base from the weighted (IF <situation> THEN <action>) sets.
e. Construct the knowledge model with an efficient inference process for suggesting the most effective action element combination.

The proposed knowledge model proved effective when applied to a series of curved shell plates' 3D manufacturing data obtained in the shipyard. The knowledge model is built using the recorded data of two plates with the same target shape and under similar situations. The effectiveness of each situation and action element pair is weighted by evaluating the subsequent manufacturing steps recorded in the shipyard. When manufacturing a new plate under a similar situation to the former two, the constructed knowledge model successfully suggests an efficient combination of manufacturing action elements after comparing the situation elements and weighting the existing action elements.
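As a reading aid for step (d) above, the following is a minimal sketch, assuming simple vector encodings, of what a weighted (IF <situation> THEN <action>) rule base could look like. The field names, tolerance and numbers are illustrative, not the implementation used in this study.

```python
# Hypothetical sketch of the weighted rule base of step (d). Each rule maps
# a situation-element vector to an action-element vector with a weight.
from dataclasses import dataclass

@dataclass
class ElementRule:
    situation: tuple   # situation-element vector, e.g. curvature errors
    action: tuple      # action-element vector, e.g. line location / strength
    weight: float      # effectiveness score from subsequent manufacturing

rule_base = [
    ElementRule(situation=(0.0, 1.0, 0.5), action=(4.0, 0.5), weight=3.2),
    ElementRule(situation=(0.0, -1.0, 0.5), action=(8.0, 1.0), weight=2.8),
]

def matching_rules(observed: tuple, tol: float = 0.5):
    """Rules whose situation part is close to the observed situation."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [r for r in rule_base if dist(r.situation, observed) <= tol]

print(matching_rules((0.0, 0.9, 0.5)))   # only the first rule matches
```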
1. Related Work

Hunt and Rouse proposed a fuzzy rule-based model of human problem solving. In this model, knowledge is stored in the form of symptomatic rules (S-rules) and topographic rules of the form IF <situation> THEN <action> [1]. A rule is considered for use when its situation part matches the observed state of the world. As prior work of this study, a framework for capturing the knowledge produced during the curved shell plate's manufacturing, in which the knowledge is articulated in Nested Ripple Down Rules [2] tree format, was proposed by Sun et al. The framework provides beginning workers with the capability to quickly design a manufacturing plan covering all aspects of the design process, as the expert workers do. The demerit of this system is that many interviews with the expert workers have to be conducted to elicit the most efficient knowledge; moreover, during the interviews even the most experienced workers could sometimes fail to describe or prove which of the manufacturing plans they provided is the most effective, and why. In this paper, to elicit the knowledge in the manufacturing process more objectively, a knowledge model based mainly on the recorded data of the manufacturing process and requiring fewer interviews is proposed.

Figure 1. System overview of the NRDR knowledge base [2].

2. Proposed Framework for Capturing Knowledge

Overview. First, a process model regarding the generalized manufacturing process as a sequence of situations and actions, as shown in Figure 2, is built; it can represent the most generalized manufacturing processes existing in today's industry. Facing one group of similar given situations $S$ of the component, multiple experts (or one expert in different periods) may use different manufacturing plans $A$, and based on the different manufacturing plans there are different subsequent manufacturing steps until the component reaches the designed target situation $S_{target}$.

Figure 2. Process modeling.

To analyze how proper each action (from $A_1^{i+1}$ to $A_N^{i+1}$) is when facing the situation $S_1^i$ (or a group of similar situations), the subsequent manufacturing from $S_1^i$, as shown in Figure 3, should be evaluated. Multiple representative milestones $(M_1, \dots, M_D)$ are set. The count $d$ of the total subsequent manufacturing steps and the count $D$ of the subsequent milestones are found. The concept is that the better action is the one that makes the manufacturing steps numerically fewer (smaller $d$) and more comprehensible (smaller $D$).

Figure 3. Subsequent manufacturing analysis.

The structure of the whole knowledge modeling process implementing the proposed approach is illustrated in Figure 4. The data to be analyzed, representing the manufacturing situations and the multiple actions taken by experienced workers, is recorded in the manufacturing scenario DB. After a series of processes, introduced below, an expert knowledge model with a set of elicited rules is constructed. Then an effective inference process is instituted, facilitating the use of the constructed knowledge model.

Figure 4. Knowledge model overview.
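Before the formal definitions below, the subsequent-manufacturing analysis of Figure 3 can be sketched in a few lines; the state names and milestone set here are invented for illustration.

```python
# Sketch of the subsequent-manufacturing analysis of Figure 3: given the
# recorded states that followed an action, count the remaining steps d and
# the representative milestones D passed before the target shape is reached.
def subsequent_counts(states, milestones):
    d = len(states)                                # remaining manufacturing steps
    D = sum(1 for s in states if s in milestones)  # milestones passed
    return d, D

after_action_1 = ["S2", "M1", "S3", "M2", "S4"]    # recorded sequence, action 1
after_action_2 = ["S2", "M1", "S3"]                # recorded sequence, action 2
milestones = {"M1", "M2"}

for name, seq in [("action 1", after_action_1), ("action 2", after_action_2)]:
    d, D = subsequent_counts(seq, milestones)
    print(name, "d =", d, "D =", D)   # smaller d and D indicate the better action
```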
2.1. Element clustering of situation and action

Assuming both the situations and actions can be broken down into a series of elements, the basic inferences from situation to action, and the clustering of different situations and actions, are conducted on these elements:

a. $S^x \rightarrow (s_1^x, s_2^x, \dots, s_\varepsilon^x)$
b. Cluster situation elements using $d_{xy} = \sqrt{s_l^x \cdot s_m^y}$
c. $A^x \rightarrow (a_1^x, a_2^x, \dots, a_\delta^x)$
d. Cluster action elements using $d_{xy} = \sqrt{a_l^x \cdot a_m^y}$

2.2. Action effectiveness evaluation

The actions' effectiveness is then evaluated based on the subsequent manufacturing steps' count $d$ and the subsequent manufacturing milestones' count $D$:

a. Find or estimate the count $D$ of milestones from $M_1$ to $M_D$.
b. Find or estimate the count $d$ of necessary manufacturing steps from $S_1^i$ to $S_1^{i+d}$.
c. Assign two weights to each $A_x^{i+1}$ using $D$ and $d$, and a general weight $\omega_x^{i+1}$; $\gamma_1$ and $\gamma_2$ can be changed when the model is applied to different components:

$W_x^{i+1} = \frac{1}{2\pi\sigma^2} e^{-\frac{(D_x)^2}{2\sigma^2}}, \qquad w_x^{i+1} = \frac{1}{2\pi\sigma^2} e^{-\frac{(d_x)^2}{2\sigma^2}}, \qquad \omega_x^{i+1} = \frac{\gamma_1 W_x^{i+1} + \gamma_2 w_x^{i+1}}{\gamma_1 + \gamma_2}$

2.3. Element rule generating

The knowledge model constructed from the evaluated, weighted rules is built according to the following steps:

a. Generate element rules of the form (IF $s_l$ THEN $a_k$). When the similarities (the clustering results) of both the situation and the action elements are large enough, two element rules can be regarded as the same rule.
b. Decide the fuzzification and defuzzification regulations according to the specific manufacturing process.
c. Give the weights calculated in 2.2 to the generated element rules.

2.4. Inference process

When using the knowledge model to manufacture a component under a certain situation, the inference process of the knowledge model is as follows:

a. Substitute the component's situation $S^f = (f_1, f_2, \dots, f_\delta)$ into the rule base of 2.3.
b. Get every inferred element action: $A^{inf} = (a^{inf}_1, a^{inf}_2, \dots)$.
c. Give each element action $a^{inf}_z$ in $A^{inf}$ the weight $\overline{\omega^{inf}} = \frac{\sum (\gamma_1 W_X^{i+1} + \gamma_2 w_X^{i+1})}{\gamma_1 + \gamma_2}$, where $W_X^{i+1}$ and $w_X^{i+1}$ are the weights of the actions in which $a^{inf}_z$ exists.
d. Choose the top $\delta$ element actions ranked by $\overline{\omega^{inf}}$ and build the resulting action: $A^r = (a^{\sigma}_1, a^{\sigma}_2, \dots)$.

2.5. Fuzzification and defuzzification process

Because not all the situation elements match perfectly with the former ones used to construct the knowledge model, and a rule should not be fully followed when it has a low strength, the whole inference process should be fuzzified, as in the following figure, when generating the output suggested actions. In the fuzzification of the input data, $W_N$ is the general weight of each rule and $VT_N$ is the situations' similarity between the known rule and the input rule; $O$ is the output strength of each selected rule.

Figure 5. Fuzzification process.
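The weighting of Section 2.2 can be sketched as follows. The values of $\sigma$, $\gamma_1$ and $\gamma_2$ are not stated in the paper, so the numbers used here, together with the per-action normalization, are assumptions chosen only so that the output lands near the Table 4 values reported later in Section 3.

```python
# Sketch of the action-effectiveness weighting of Section 2.2.
# Sigma and gamma values are assumed; the paper does not state them.
import math

def gaussian(count, sigma):
    # The paper's form: (1 / (2*pi*sigma^2)) * exp(-count^2 / (2*sigma^2))
    return (1.0 / (2.0 * math.pi * sigma**2)) * math.exp(-count**2 / (2.0 * sigma**2))

def action_weights(counts, sigma):
    """Normalize the Gaussian weights of competing actions so they sum to 1."""
    raw = [gaussian(c, sigma) for c in counts]
    total = sum(raw)
    return [r / total for r in raw]

# Milestone counts D = (6, 8) and step counts d = (3, 2) for the two
# competing actions of plates 1 and 2, as reported in Section 3.
W = action_weights([6, 8], sigma=6.32)   # milestone weights
w = action_weights([3, 2], sigma=1.58)   # step weights
g1, g2 = 5.0, 1.0                        # assumed gamma values
omega = [g1 * Wx + g2 * wx for Wx, wx in zip(W, w)]
print(omega)   # close to Table 4's 3.202 and 2.798 under these assumptions
```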
3. Experiment Using a Curved Shell Plate's 3D Manufacturing Data

3.1. Overview

A knowledge model was constructed using the recorded manufacturing scenarios of two curved shell plates; the constructed knowledge model then successfully suggested a set of manufacturing plans for another plate. When manufacturing a curved shell plate, the actions taken by the workers always relate to the curvature errors, relative to the design shape, at each frame line (horizontal line on the plate). In this experiment, the curvature errors at each frame line are virtualized and stored in a 3D environment, represented in the form of vectors (one vector per frame line).

3.2. Knowledge model construction

(A) Knowledge model construction. Two plates whose target shapes are totally symmetrical are used to construct the knowledge model. Firstly, as shown in Figure 6, the distance-error color maps of these two plates are similar to each other, and the distance-error histograms are compared using the histogram intersection method:

$HI(I, M) = \sum_{j=1}^{n} \min(I_j, M_j) = 0.877$

Since the similarity is 0.877, the two plates can be regarded as being in a similar general situation.

Figure 6. Distance-error color maps and histograms of plate 1 (left) and plate 2 (right).

Figure 7. Curvature-error color maps of plate 1 (left) and plate 2 (right).

The curvature errors on each corresponding frame line are then virtualized as shown in Figure 7. In Table 1 the curvature errors are normalized, and each frame line (F1, F2, ..., F5) has a vector constituted by the curvature-error values at uniformly-spaced points on the frame line.

Table 1. Curvature errors at each frame line of plate 1 and plate 2. [The numerical table did not survive extraction; each column holds the normalized error vector of one frame line, with values between -1 and 1.]

Figure 8. Manufacturing actions of plate 1, $A_1^{i+1}$ (left), and plate 2, $A_2^{i+1}$ (right).

The actions on each corresponding frame line are virtualized as shown in Figure 8. In Table 2 the actions' locations and strengths are calculated, and each frame line has a vector constituted by the location values and the normalized manufacturing strength values at uniformly-spaced points on the frame line.

Table 2. Action properties (locationX, locationY, length and normalized length for a1-a5) at each frame line of plate 1 (left) and plate 2 (right). [Numerical values did not survive extraction.]

Table 3. Similarities of situation (left) and action (right) elements at each frame line: s1 = 0.219, s2 = 0.64, s3 = 0.81, s4 = 0.24, s5 = 0.608; a1 = 0.995, a2 = 0.953, a3 = 0.9132, a4 = 0.9597, a5 = 0.8702.

As shown in Table 3, the situation elements at frames 2, 3 and 5 can be regarded as the same (scoring over 0.6), and the action elements at frames 1, 2 and 4 can be regarded as the same (scoring over 0.95). Based on the recorded data, the counts of the subsequent milestones and manufacturing steps of plate 1 are 6 and 3, while the counts for plate 2 are 8 and 2, respectively.
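The histogram-intersection test used above is short enough to state directly. Only the formula $HI(I, M) = \sum_j \min(I_j, M_j)$ is from the paper; the bin values below are made up for illustration.

```python
# Histogram intersection on normalized histograms (each sums to 1.0):
# identical histograms score 1.0, disjoint ones 0.0.
def histogram_intersection(I, M):
    return sum(min(i, m) for i, m in zip(I, M))

# Two illustrative normalized distance-error histograms
h1 = [0.05, 0.20, 0.40, 0.25, 0.10]
h2 = [0.10, 0.25, 0.35, 0.20, 0.10]
print(histogram_intersection(h1, h2))   # 0.90 here; 0.877 for plates 1 and 2
```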
Applying the weighting of Section 2.2 to these milestone and step counts, the calculated weights for the two actions are shown in Table 4, and the rule base with the rules' normalized weights is constructed as in Table 5.

Table 4. Weights of actions. For $A^1$: milestone weight $W_X^{i+1}$ = 0.58662, step weight $w_X^{i+1}$ = 0.26894, combined weight 3.20203. For $A^2$: milestone weight 0.413382, step weight 0.731059, combined weight 2.797971.

Table 5. Constructed rule base. Each rule maps a situation element s(plate, frame) to an action element a(plate, frame) with a weight and a normalized weight: rules supported by both plates carry weight 6 (normalized 1), rules from plate 1 alone carry weight 3.202029 (normalized 0.5336715), and rules from plate 2 alone carry weight 2.797971 (normalized 0.4663285). [The full rule listing did not survive extraction.]

(B) Manufacturing using the constructed knowledge model. In this experiment, the knowledge model constructed in step (A) suggested a manufacturing action element set for another plate. Firstly, as shown in Figure 9, the situation of plate 3 is analyzed in the same way as plates 1 and 2. The curvature-error vectors at each frame line are generated and compared with those of plates 1 and 2; the result is shown in Table 7.

Figure 9. Plate 3's situation analysis.

Table 6. Curvature errors at each frame line of plate 3. [Numerical values did not survive extraction.]

Table 7. Situation similarities at each frame line between plates 1 and 3 (0.167, 0.267, 0.616, 0.587, 0.683; normalized $VT_N$: 0.173, 0.277, 0.639, 0.609, 0.709) and between plates 2 and 3 (0.538, 0.214, 0.877, 0.408, 0.964; normalized $VT_N$: 0.5582, 0.2218, 0.9098, 0.4235, 0.9996).

Considering both the general weight $W_N$ of each rule and the situation similarity $VT_N$ between the known rule and the input rule, the output strength $O$ of each selected rule is as given in Table 8.

Table 8. Output suggested action elements and weights. For example, s(2,5) = s(1,5) → a(2,1) = a(1,1) with $W_N$ = 1, $VT_N$ = 0.709, $O$ = 0.709, and s(1,3) → a(2,4) = a(1,4) with $W_N$ = 1, $VT_N$ = 0.639, $O$ = 0.639; the remaining rules carry $W_N$ = 0.5336715, $VT_N$ between 0.609 and 0.709, and output strengths between 0.533 and 0.639.

Therefore, the action elements suggested by the system are as shown in Figure 10; the manufacturing lines drawn bolder should be manufactured with more strength than the others, according to the output strength $O$.

Figure 10. Suggested manufacturing actions for plate 3.

4. Conclusion and Future Work

A knowledge model was proposed for facilitating the manufacturing of complicated industrial components. The process of the components' manufacturing design was modeled, and multiple situation elements and action elements were clustered. In the knowledge model construction step, the manufacturing actions proposed by different experts' action clusters were evaluated and scored into a rule set by evaluating the subsequent manufacturing patterns (milestones and steps) of the recorded manufacturing processes, based on analysis of the existing manufacturing DB.
A knowledge model including the evaluated rule set and an efficient inference process, able to propose the most proper manufacturing action, was constructed. The proposed knowledge model proved effective when applied to a series of curved shell plates' manufacturing data obtained in the shipyard: a knowledge model was constructed using the existing manufacturing data, and when manufacturing a new plate under a similar situation, the constructed knowledge model successfully suggested an efficient combination of manufacturing action elements. In the future, more experiments will be conducted to construct knowledge models for other kinds of complicated industrial components.

Acknowledgement

This manuscript is an output of the Joint Study supported by the Région Rhône-Alpes. The authors would like to thank Jean Monnet University, the Relations & Mobilités Internationales and CILEC, which gave a lot of support to this project.

References

[1] R.M. Hunt, W.B. Rouse, A fuzzy rule-based model of human problem solving, IEEE Transactions on Systems, Man and Cybernetics, Vol. SMC-14, No. 1, pp. 112-120.
[2] J. Sun, K. Hiekata, H. Yamato, N. Nakagaki, A. Sugawara, Virtualization and automation of curved shell plates' manufacturing plan design process for knowledge elicitation, Int. J. Agile Systems and Management, Vol. 7, Nos. 3/4, 2014, pp. 282-303.
[3] B.R. Gaines, P. Compton, Induction of Ripple-Down Rules Applied to Modeling Large Databases, Journal of Intelligent Information Systems, 5 (1995), 211-228.

Part 6
Multidisciplinary Product Management

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-419

Evaluation of Support System Architecture for Air Warfare Destroyers

John P.T. MO a,1 and Douglas THOMPSON b
a RMIT University, Australia
b ASC Pty Ltd, Australia

Abstract. The Australian Government and its group of defence organisations have struggled over the years to effectively support defence capabilities throughout their life. Numerous reports pointed to management and structural failures, from the procurement phase, when insufficient budget was apportioned to the through-life costs, through to failure to learn from previous projects, poor risk management practices, lack of responsibility and accountability, and failure to communicate adequately between stakeholders. Government thinking has favoured the use of performance-based contracts known as Contracts for Availability (CfA). However, research has shown that effective CfA-type contracts only work when proven enterprise architecture frameworks are adopted. This paper evaluates a number of reference architecture frameworks (AFs) and suggests a suitable AF for establishing a through-life support system for the Hobart Class Air Warfare Destroyer (AWD) throughout its 30-year in-service life.

Keywords. Enterprise architectural framework, Support system architecture, Performance based contracting, Contracts for availability, Service oriented architecture

Introduction

The Australian Government has struggled over the years to effectively support defence capabilities throughout their life.
Numerous reports, including the Kinnaird review [1], the Mortimer review [2], the Rizzo review [3] and the Black review [4], showed that there were management and structural failures within the support organizations. Problems started during the procurement phase, when insufficient budget was apportioned to the through-life costs, and extended to failure to learn from previous projects, poor risk management practices, lack of responsibility and accountability, and failure to communicate adequately between stakeholders. Current government thinking is to enter into performance-based contracts that engage a prime contractor, who is fully responsible for managing all relationships with suppliers and sub-contractors. However, research has shown that the effectiveness of this type of contract depends on the relationship and system compatibility between customer and suppliers; the result is a risk of uncertainty in guaranteeing the availability and capability of the system being supported [5].

The Hobart Class Air Warfare Destroyer (AWD) is a completely new class of ship currently under construction for the Royal Australian Navy (RAN). The design of the AWD Through Life Support (TLS) organisation structure is extremely complex, due partly to the complexity of the ship and partly to the large number of stakeholders that need to interact to create an effective support solution for the ships. This offers the opportunity to design the support system from scratch using a proven Architecture Framework (AF). However, there are many reference AFs, and the process of selecting a suitable one is a complex and time-consuming exercise, as it involves multiple stakeholders, an understanding of the processes, determination of the requirements of the organisation, and knowledge of the available AFs. This paper discusses the complexity of evaluating the benefits of the AFs to the AWD and proposes an easy-to-follow process. The support organisation, once populated with all the roles, responsibilities, communication lines, processes and procedures described by the Enterprise Architecture (EA), can be used as the guide to establish the support system that maintains a consistent service level throughout the AWD's 30-year in-service life.

1 Corresponding Author, E-mail: john.mo@rmit.edu.au.

1. Literature Review

The British Ministry of Defence (MoD) started to create a type of service contract that required the contractor to provide guaranteed maintenance, documentation support, spares, tools, configuration management and ship's husbandry services. However, the drive to reduce costs introduced considerable risks into the support of equipment. Haddon-Cave [6] believed that the overriding imperative during this period was to deliver the cuts and changes required.

1.1. Performance Based Services

The US Department of Defense (DoD) addressed this issue with Performance Based Logistics (PBL) to generate cost savings; the general aim was to improve supply chain performance as a means of generating savings [7]. Recently, the Australian requirements for in-service support contracts, detailed in ASDEFCON (Support) Version 3, point towards a Contract for Availability (CfA) type of contract [8]. Under a CfA, industry is required to provide the availability of equipment to enable Defence to meet a capability requirement while reducing costs by 20% within five years.
However, the issue is that the primes are then left to manage the relationships with suppliers and subcontractors to meet the agreed performance requirements of the support contract. To investigate these issues, Wood and Tasker [9] used an example to illustrate the need for a contractor to apply service thinking in the design of a complex sustainment system, particularly if there is an urgent surge requirement to meet an immediate threat. Ng et al. [10] proposed that high levels of availability and capability could only be achieved by a contractor with greater levels of dependency on its customer and their resources; unfortunately, this requirement is complicated by the realisation that the contractor has little or no control over those resources. Partridge and Bailey [11] observed that while Defence has a great desire to pursue a service-oriented approach to the delivery of capability, it does not actually understand what constitutes a service, or recognize what types of services are used by Defence.

1.2. Service System Structure

The Hobart Class AWD is planned to be in service with the RAN for 30 years. Over this period the ships will have to be supported by an array of services to ensure they remain operationally capable, fit for service, adequately manned, safe to use and technologically relevant. The Hobart Class AWD support organisation will have a large number of stakeholders with an interest in how the AWDs operate and perform. These stakeholders can be split into two distinct groups: the users and the support people. The users, in the case of the Hobart Class AWD, are the RAN. The Future Maritime Operating Concept 2025 (FMOC-2025) depicts the varied nature of the tasks the crew of the Hobart Class AWD will be required to perform; each of its quadrants has blurred boundaries, depicting the possible overlap of tasks with an overriding possibility of lethal threat no matter the task.

To facilitate the support of the AWD, a Systems Program Office (SPO) responsible for the TLS of the Hobart Class AWD was implemented. The SPO will execute its responsibilities for engineering and maintenance management and corporate governance through materiel support contracts. The current strategy is for the SPO to outsource the majority of the support tasks for the Hobart Class AWD to external suppliers while maintaining overall responsibility for the efficiency and effectiveness of the support, including management of the budget. The SPO will act as the information conduit between the external suppliers and the various RAN and other Defence agencies that are stakeholders in the Hobart Class AWD. Figure 1 shows how the AWD SPO stakeholder relationship structure would appear, taking into account the relationships that the RAN and Defence have with internal and external stakeholders. The complexity of these relationships gives an insight into the difficulties posed in maintaining the relationships and accounting for all stakeholder interests when making decisions.

Figure 1. Hobart Class AWD SPO stakeholder relationship diagram.

1.3. Enterprise architecture framework

An enterprise architecture defines the methods and tools needed to identify and carry out change [12]. Enterprises need a lifecycle architecture that describes the progression of an enterprise from the point of realisation that change is necessary through to setting up a project for implementation of the change process. Therefore, it is
crucial that the resulting enterprise product-service is supported by a systematic design methodology that helps management develop well-defined policies and processes across organisational boundaries and implement the changes in all the enterprises concerned during the process. The international standard ISO/IEC/IEEE 42010 [13] defines enterprise architecture as the fundamental organisation of a system, embodied in its components, their relationships to each other and to the environment, and the principles guiding its design and evolution. Lankhorst [14] described enterprise architecture as the synergy between art and science used to describe concepts such as functionality and complexity when designing complex structures. It is clear from this literature that an AF can provide a clear mission for the enterprise and a defined structure that allows information from previously unrelated domains within the enterprise to come together in a form that can be understood by all.

ISO recognises 64 different AFs. These AFs range in scope from being designed solely for the use of French weapons systems manufacturers through to completely open systems. Some are designed purely for IT developers, some are adaptations of older methods, and some are industry specific, for example to finance or security. An AF provides the structure for collecting data on how the enterprise is constructed [15]. It provides information on how the hardware, software and networks interact across the various systems and organisations that comprise the enterprise, and it provides a suitable methodology for accessing, organising and displaying the collected information. The following AFs are short-listed based on their previous use in defence as well as their popularity in complex engineering organizations.

1.3.1. Zachman enterprise architecture

Zachman [16] proposed that the way forward to build an enterprise in which different departments produced different solutions to essentially the same problem was to develop an enterprise architecture that standardised the language used, thereby overcoming one of the biggest issues: communication. The Zachman framework is a set of structured objects for which explicit expressions are required for creating, operating and changing the objects. The Zachman framework does not include any methodology for creating the EA; it is designed to be used as a guide for the EA structure, not as a process for creating the structure.

1.3.2. The Open Group Architecture Framework (TOGAF)

TOGAF is an open-source AF freely available to be used by anyone. This feature can be a bonus, as it has a wide variety of documentation, commentary and users communicating on the internet and through user groups [17]. The open-source nature is, however, a drawback, in that any usage of TOGAF has to be heavily customised to meet the specific needs of a particular user, and the heavy customisation then leads to a need to produce extensive documentation and training. TOGAF is more advanced than Zachman in that it includes a meta-model, but it does not include the pre-defined viewpoints that exist in some other AFs.

1.3.3. Federal Enterprise Architecture (FEA)

FEA is managed by the Executive Branch of the U.S. Federal government and is the AF mandated by Federal law to be used by all Agency Heads [18]. The purpose of
using FEA is to accelerate business transformation and the integration of new technology by providing standardisation, common design principles, scalability and a repeatable project methodology that aids inter-agency planning, decision making and management. FEA utilizes six reference models to support common approaches to the standardization of strategy, business and technology. FEA is widely used within the U.S. federal government and enables interaction across multiple agencies and departments within government circles. However, FEA is primarily designed for use during information technology procurement projects: it meets the needs of IT system development teams very well, but the DoD is excluded.

1.3.4. Department of Defense Architecture Framework (DoDAF)

DoDAF differs from Zachman, TOGAF and FEA in its use of views and viewpoints to help the user visualise and understand the complexities of the model [19]. DoDAF defines six core processes, supplemented with a Meta Model to create consistency of language across the usage of all six. The semantics and format of the data exchanged between the architecture, the analysis tools and the architecture databases are consistent and provide a basis of understanding across all stakeholders. DoDAF describes a set of models for visualising data through graphic, tabular and textual means to facilitate the use of information at the data layer.

1.3.5. Ministry of Defence Architecture Framework (MoDAF)

MoDAF is used by the British MoD to support its project planning and change management activities. MoDAF provides managers with a comprehensive tool to aid the understanding of the key factors they need to consider when making decisions. The MoD works closely with its international allies to ensure that, when operating in coalition operations, capability information is shared to support interoperability. To facilitate this requirement, MoDAF was developed from DoDAF, but modified by the MoD to include Strategic, Acquisition and Service Oriented viewpoints [20].

2. Architecture Framework Evaluation Methodology

The qualitative comparison in Table 1 shows that there are pros and cons for each of the enterprise architectures short-listed for selection. Unless a "new" enterprise architecture that combines all the pros and eliminates all the cons can be found, the AWD support organization must make a choice from one of these architectures. Hence, a multi-criteria decision analysis is required to select the AF best suited to the organisation. The choice of multi-criteria decision method depends on a correct understanding of the philosophy behind each of the methods [21]. The weighted linear average method, probably the simplest and most widely used in industry, is chosen in this analysis for its simplicity and ease of communication with stakeholders. The essence is to determine a reasonable set of weights for the criteria; in this case there is no literature highlighting particular criteria, and hence all criteria carry equal weight. An AF should be scored against a range of criteria to determine its suitability based on the needs of the organisation. The criteria are divided into five broad categories, viz. (1) Objectives; (2) Properties; (3) Components; (4) Functions; and (5) Services; a minimal sketch of the resulting score roll-up follows.
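As a concrete reading of the method, the sketch below applies the equal-weight linear average to hypothetical ratings. It uses one illustrative criterion per category, whereas the study expanded the five categories into 26 criteria, and the scores are invented, not the workshop's actual results.

```python
# Equal-weight linear-average scoring as described above.
# Criteria names and ratings are illustrative assumptions only.
criteria = ["objectives", "properties", "components", "functions", "services"]
weights = {c: 1.0 for c in criteria}      # no literature favours any criterion

scores = {   # hypothetical 1-5 ratings per criterion
    "Zachman": [3, 2, 2, 2, 1],
    "TOGAF":   [3, 3, 3, 3, 2],
    "FEA":     [3, 3, 3, 3, 3],
    "DoDAF":   [4, 4, 4, 4, 3],
    "MoDAF":   [4, 4, 4, 5, 4],
}

def weighted_average(ratings):
    total_w = sum(weights.values())
    return sum(w * r for w, r in zip(weights.values(), ratings)) / total_w

# Rank the candidate AFs by their weighted average score
for af, ratings in sorted(scores.items(), key=lambda kv: -weighted_average(kv[1])):
    print(f"{af}: {weighted_average(ratings):.2f}")
```

With equal weights the method reduces to a plain mean per candidate; unequal weights would simply re-scale each criterion's contribution.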
Table 1. Summary of pros and cons of the five candidate enterprise architectures.
- Zachman. Pros: externalises the meanings of enterprise objects; good as a guide for the EA structure. Cons: does not include a method to create the EA; no information on processes.
- TOGAF. Pros: open source, accessible by anyone; plenty of documentation, commentary and user communications. Cons: must be heavily customized; needs extensive training.
- FEA. Pros: used by US Federal agencies. Cons: primarily designed for procuring IT systems.
- DoDAF. Pros: uses different views to explore the EA; data visualized in graphics. Cons: more complex EA structure.
- MoDAF. Pros: interoperable with DoDAF and the other AFs used by Britain's allies. Cons: more complex EA structure.

Each of the short-listed AFs is rated against the 26 criteria, expanded from the 5 categories, on a score-sheet. This gives a visual representation of the strengths and weaknesses of the selected AFs. The score-sheet (Figure 2) also provides the facility in this study to rate each of the criteria from most to least important. From these results either the best-fit candidate can be chosen or, if none of the candidates fulfils the criteria at a satisfactory level, a hybrid solution can be selected based on the scoring information.

The process of scoring the short-listed AFs against the organisation criteria using the suggested methodology can be very subjective. Ideally, the group of stakeholders who helped develop the organisation criteria would be involved in the AF selection process; this same group could then score each of the short-listed AFs and compare their score sheets. Using this group methodology helps to eliminate preference or bias. Hence, a small group of five integrated logistics and management system practitioners with wide-ranging experience in developing and supporting defence projects was gathered to workshop the criteria questions. A series of meetings was held over a number of weeks to inform the team members of the reasons behind the need to select an AF, the purpose and benefits to be gained from using an AF, the details of the short-listed AFs, and the criteria to be examined. The purpose of these meetings was to ensure that each of the team members fully understood what question was being asked of the AF being examined, and that they had sufficient background and detail to be able to score the AF against the criteria. Once the individual tasks were completed, the team met to discuss their individual scores and to explain the reasoning behind any scores that differed wildly from the group average. Once a consensus was reached on the score for each of the criteria questions, the results were recorded in the score-sheet.

From the summary score sheet seen in Figure 3, the best choice of AF for the Hobart Class AWD TLS is MoDAF. The list of short-listed AFs shows a progression through the history of AFs; Zachman, who first posited the idea of a framework for organising the documents and processes of an organization, scores lowest. The selection of MoDAF as the recommended AF for use by the Hobart Class AWD TLS organisation might appear to rest on the assumption that, being the newest of the AFs examined, it must be the best suited. This is not strictly true, as each of the AFs examined had deficiencies which counted against them when they were scored against the assessment criteria.

Figure 2. Completed Score Sheet.

Figure 3. Completed Summary Score Sheet.
As mentioned earlier, TOGAF is freely available and is widely used because of that, but it is limited in some of its functionality and requires considerable customisation to fit the organisation's needs. FEA, the AF dictated by the U.S. Government, is a more complete AF than Zachman or TOGAF, but it does not include the pre-defined views of the meta-model data of DoDAF and MoDAF and is more slanted towards information technology procurement than complex hardware systems. MoDAF, and DoDAF upon which it is based, are designed primarily for use with military systems and also include the customer perspective. MoDAF is based on DoDAF but goes further by modifying the viewpoints to cover Operational, Acquisition and Strategic views. MoDAF and DoDAF both include comprehensive user training information and meet the general requirements of an AF able to support the complex relationship structure likely in the Hobart Class TLS organisation more completely than the other AFs. It is for these reasons that MoDAF scores the highest and is therefore the recommended AF for the Hobart Class AWD TLS.

3. Conclusion

Complex engineering systems such as the AWD require a properly designed support system for their service life. The decision to adopt an AF to assist the Hobart Class AWD TLS organisation is not a trivial task. This paper contributes to the practice of AF design by illustrating an easy-to-follow selection process that meets the requirements of the organisation. The process has two steps. The first step is to shortlist a few potential AFs so that the scope of investigation can be much better defined. The starting point for selecting an AF is to determine how the selection process will occur and to establish some basic criteria. The selected AF needs to meet the requirements of the stakeholders of the through-life support program, and determining the exact needs of the organisation can become a complex problem. The second step is to apply the weighted linear average method with the assistance of engineers experienced in the support area. The method of selection used here was based on a scoring system with 26 criteria in 5 categories. A group of experienced logistics and management system practitioners assessed the short-listed AFs against the selected criteria and recorded the scores. According to the score-sheets, the AF that best fits the requirements of the Hobart Class AWD under the scoring system used is MoDAF. This is not an unexpected result, as MoDAF is the latest iteration of an ongoing design process that started with the Zachman framework and has been through numerous design and requirements changes to meet the ever-increasing needs of a wide range of users and stakeholders.

References

[1] M. Kinnaird, Defence Procurement Review 2003, Department of Defence Publications, http://www.defence.gov.au/publications/dpr180903.pdf, Accessed: May 30th, 2015.
[2] D. Mortimer, Going to the Next Level - The Report of the Defence Procurement and Sustainment Review, 18 September, 2008, http://www.defence.gov.au/publications/mortimerreview.pdf, Accessed: May 30th, 2015.
[3] P.J. Rizzo, Plan to Reform Support Ship Repair and Maintenance Practices, July, 2011, http://www.defence.gov.au/publications/reviews/rizzo/Review.pdf, Accessed: May 30th, 2015.
[4] R. Black, Review of the Defence Accountability Framework, Department of Defence, Canberra, 2011, www.defence.gov.au/Publications/Reviews/Black/black_review.pdf, Accessed: May 30th, 2015.
[5] J.P.T. Mo, Performance Assessment of Product Service System from System Architecture Perspectives, Advances in Decision Sciences, Volume 2012, Article ID 640601, 19 pages.
[6] C. Haddon-Cave, The Nimrod Review: an independent review into the broader issues surrounding the loss of the RAF Nimrod MR2 aircraft XV230 in Afghanistan in 2006, 28 October, 2009, https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/229037/1025.pdf, Accessed: May 30th, 2015.
[7] C.J. Hockley, J.C. Smith, and L.J. Lacey, Contracting for Availability and Capability in the Defence Environment, In: I. Ng et al. (eds.) Complex Engineering Service Systems, Springer-Verlag, London, pp. 237-256, 2011.
[8] Defence Materiel Organisation, ASDEFCON (Support) V3.0, 2011, http://www.defence.gov.au/dmo/DoingBusiness/ProcurementDefence/ContractinginDMO/ASDEFCON/ASDEFCON-Spt.aspx, Accessed: May 30th, 2015.
[9] L. Wood and P. Tasker, Service Thinking in Design of Complex Sustainment Solutions, In: I. Ng et al. (eds.) Complex Engineering Service Systems, Springer-Verlag, London, pp. 397-416, 2011.
[10] I. Ng, G. Parry, R. Maull, and D. McFarlane, Complex Engineering Service Systems: A Grand Challenge, In: I. Ng et al. (eds.) Complex Engineering Service Systems, Springer-Verlag, London, pp. 439-454, 2011.
[11] C. Partridge and I. Bailey, An Analysis of Services, Ver 1.3, Model Futures, 11 May, 2010, http://www.modelfutures.com/file_download/17/MOD+CIO+-+Service+Analysis+Report+-+v1.3.pdf, Accessed: May 30th, 2015.
[12] P. Bernus, L. Nemes, A framework to define a generic enterprise reference architecture and methodology, Computer Integrated Manufacturing Systems, Vol.9 (1996), No.3, pp. 179-191.
[13] ISO/IEC/IEEE 42010, Systems and software engineering — Architecture description, 24 Nov, 2011.
[14] M. Lankhorst, Enterprise Architecture at Work - Modelling, Communication and Analysis, 3rd ed., Springer-Verlag, Berlin Heidelberg, 2013.
[15] L. Urbaczewski and S. Mjdalj, A Comparison of Enterprise Architecture Frameworks, Issues in Information Systems, Vol.7 (2006), No.2, pp. 18-23.
[16] J. Zachman, A Framework for Information Systems Architecture, IBM Systems Journal, 26(3), (1987), 276-292.
[17] The Open Group, Welcome to TOGAF - The Open Group Architecture Framework, Version 9.01, 2006, http://pubs.opengroup.org/todaf/, Accessed: May 30th, 2015.
[18] Executive Branch, U.S. Government, The Common Approach to Federal Enterprise Architecture, May, 2012, https://www.whitehouse.gov/omb/e-gov/FEA, Accessed: May 30th, 2015.
[19] Chief Information Officer, US Department of Defense, DoD Architecture Framework Version 2.02, 2015, http://dodcio.defense.gov/TodayinCIO/DoDArchitectureFramework.aspx, Accessed: May 30th, 2015.
[20] U.K. Ministry of Defence, MOD Architecture Framework Overview, Guidance - MOD Architecture Framework, Published: 12 December, 2012, https://www.gov.uk/mod-architecture-framework#modaf-meta-model-and-modaf-ontological-data-exchange-mechanism, Accessed: May 30th, 2015.
[21] K. Steele, Y. Carmel, J. Cross, C. Wilcox, Uses and Misuses of Multi-Criteria Decision Analysis (MCDA) in Environmental Decision-Making, the Australian Centre of Excellence for Risk Analysis, final report, 2008, http://www.acera.unimelb.edu.au/materials/endorsed/0607_0610.pdf, Accessed: May 30th, 2015.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-429

Towards a Proposed Process to Manage Assumptions during the In-Service Phase of the Product Lifecycle

John ILEY a,1 and Cees BIL b
a Nova Systems, Mile End SA 511, Australia
b School of Aerospace, Mechanical and Manufacturing Engineering, RMIT University, Melbourne, VIC 3001, Australia
1 Corresponding Author: John Iley, Logistics Engineering Manager, Nova Systems. Email: john.Iley@novasystems.com

Abstract. Assumptions affect our everyday life and, more specifically in business, they can have a profound effect on an organisation's support services delivery performance if they prove to be wrong. The literature shows the need to identify assumptions and to expose any implicit assumptions that may have a major impact on the business objectives. There are also examples of methodologies suggesting that there are advantages in managing assumptions for strategic decision making. However, there is little research dealing with the ongoing assessment or review of assumptions when delivering support services during the In-Service phase of a product's lifecycle. The outputs of various government audit reports and independent enquiries suggest that erroneous assumptions are a potential cause of death or serious injury and of substantial increases in costs or reductions in capability of major capital systems. This suggests that a method for the ongoing review and management of assumptions would be beneficial as an aid to the successful delivery of support services. An industry project was observed to gain an insight into how projects deal with assumptions. A number of methods from the strategic planning, risk management and reliability engineering domains are compared, and these form the basis for a proposed process to manage assumptions in the In-Service phase of a system's lifecycle.

Keywords. assumptions management, In-Service Support, Product Life-Cycle

1. Introduction

Nova Systems (Nova) is a Professional Service Provider specialising in the provision of engineering and management services that provides industry and government with independent expertise in delivering complex projects and solving technologically challenging problems. Nova has recently started working on two projects that are of significantly higher value and duration than any other work carried out in the past. Both operate under a performance-based contracting construct that puts an element of Nova's future profit at risk. Consequently, Nova is keen to reduce and manage the uncertainty that goes with long-term projects. One element of uncertainty could be linked to the assumptions that underpin a project's execution and their likelihood of proving false during the lifetime of the project. Potentially this could be addressed through a formal method that reduces the possible impact of assumptions that prove to be false when delivering support services to enduring systems. This, in turn, may improve Nova's service delivery and reduce the risk of losing the profit at risk. The lifecycles of enduring systems (e.g. rail, ships, process plants, aircraft, lighthouses, nuclear power stations) are generally measured in the tens of years, and in some instances the lifecycle could be over 100 years.
These systems bring with them the need for ongoing support that can cost much more than their original purchase price [1]. Effective planning is required to gain the maximum utility from these systems during their in-service or operational phase in a way that meets the organisation's desired outcomes. Although the physical attributes of the system may be known today, humans are sadly lacking in their ability to predict the future and hence the conditions the system may encounter during its lifetime. This leads to a level of uncertainty during the planning activity, and consequently assumptions are made about future states that may or may not prove to be accurate. The time horizon for these assumptions could be short term or long term, and their potential impact could range from no effect to catastrophic. This naturally leads to the concept that some assumptions are more important than others and should warrant an increased level of scrutiny [2][3].

Assumptions are a part of our everyday life. Without them the decision-making process would grind to a halt and nothing would get done. This extends into engineering, where engineers are expected to use assumptions in their application of engineering methods when solving complex problems or applying appropriate techniques [4]. All business plans (whether they be strategic plans, project management plans, integrated logistics support plans, support services management plans, asset management plans or similar) are based to some extent on assumptions, either explicitly stated or implied. Typical assumptions may include the availability of appropriately qualified personnel, equipment operating periods and rates of effort, the availability of support equipment, and potential changes (or lack thereof) to relevant legislation. If the assumptions are not challenged or tested during the planning process, or as the business context changes, then it is probable that business outcomes could be impacted to the detriment of the business and the customer. Peter Drucker (1994), writing about what he termed "The Theory of the Business", highlights the potential impact of not reviewing and possibly changing assumptions as changes to the operating environment evolve [3]. What may have been sound assumptions at the beginning of an enterprise's or system's life may no longer hold true and, if this is not recognised, can lead to significant loss of revenue or even collapse [3].

2. Comparison of assumption identification and assessment methodologies

This section describes and discusses various methods found in the literature that consider assumptions and their potential impact as part of a business planning process. These methods will be compared to each other and to FMECA and risk management processes. The approaches or methods are:

2.1 Assumption-Based Planning

The aim of Assumption-Based Planning is to expose as many load-bearing assumptions as possible so that they can be appropriately treated in the planning process [2]. It is focused on improving an existing plan rather than the delivery of the planned activities. It is important to note that the Assumption-Based Planning method is aimed at those assumptions that are affected by what the future holds, and not those about how it is hoped the plan will perform [2].
This is a subtle distinction that appears to be a way of reducing the number of assumptions that need to be assessed, and it provides focus on those assumptions associated with the way that the world may behave in the future. Although not particularly relevant to the Assumption-Based Planning process, Dewar makes an important, albeit subtle, point that assumptions need not be explicit to everyone; just to the planners and decision makers in situations where exposure to a wider audience may put the organisation at a disadvantage [2]. The Assumption-Based Planning process is depicted in Figure 1 and comprises five steps:

• Step 1 is the analysis of the plans to identify the explicit and implicit assumptions on which they are based.
• Step 2 then identifies those assumptions on which the success of the plan rests (what Dewar calls the 'load-bearing assumptions') and those that are most likely to be overturned by future events (the 'vulnerable assumptions'). Assumptions that are both 'load-bearing' and 'vulnerable' are of particular interest, as their impact could be significant if the assumption proves to be false.
• Step 3 identifies thresholds or events that, when detected, indicate that a vulnerable assumption has either failed or is about to fail. These thresholds or events are termed 'signposts', and if a signpost event occurs then action is required.
• Step 4 considers actions that can be taken to support the success of the assumption, i.e. to reduce the possibility of it proving false. They are intended to deal with the vulnerability of load-bearing assumptions and are actions taken to reduce or eliminate any uncertainty in a vulnerable, load-bearing assumption. In the Assumption-Based Planning vocabulary these are called 'shaping' actions.
• Step 5 determines the actions needed to prepare for the possibility of a load-bearing, vulnerable assumption failing. In the Assumption-Based Planning vocabulary these are called 'hedging' actions.

Figure 1. Assumption-Based Planning [2].

The Assumption-Based Planning method provides a strong structure for identifying assumptions, assessing their impact should they prove false, and determining appropriate actions to either eliminate or reduce the probability of an assumption proving false, or actions to take in the event that it does fail. This appears to be similar to risk management methods. The weakness of the process is that there is no continuous review loop once the planning stage has been completed, although it is quite possible that the success or otherwise of the plan could be used as an input to future planning rounds.
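To make the five steps concrete, the classification logic can be rendered as a small data structure and filter. This is a minimal sketch; the record fields and example assumptions are our own illustrative assumptions, not Dewar's:

```python
from dataclasses import dataclass, field

@dataclass
class Assumption:
    text: str
    load_bearing: bool                             # Step 2: does the plan's success rest on it?
    vulnerable: bool                               # Step 2: could plausible events overturn it?
    signposts: list = field(default_factory=list)  # Step 3: warning events that trigger action
    shaping: list = field(default_factory=list)    # Step 4: actions that help it hold true
    hedging: list = field(default_factory=list)    # Step 5: actions if it fails anyway

def critical(assumptions):
    """Only assumptions that are both load-bearing and vulnerable warrant
    signposts, shaping actions and hedging actions (Steps 3-5)."""
    return [a for a in assumptions if a.load_bearing and a.vulnerable]

plan = [
    Assumption("Qualified maintainers remain available", True, True,
               signposts=["vacancy rate exceeds 10%"],
               shaping=["fund an apprenticeship pipeline"],
               hedging=["pre-negotiate contractor surge support"]),
    Assumption("Operating tempo stays at planned rates", True, False),
]
for a in critical(plan):
    print(a.text, "->", a.signposts)
```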
2.2 Critical Assumption Planning

Critical Assumption Planning is a cyclical planning method predominantly aimed at new business ventures, with the intention of challenging and testing assumptions based on the premise that "surfacing and testing assumptions is the essence of running and managing a new business venture" [5]. The method essentially develops a plan to test the assumptions on which a business venture is based and to use the results to refine the business plan. The process is depicted in Figure 2 and comprises six steps (1. Knowledge Base Assessment; 2. Critical Assumption Identification; 3. Test Program Design; 4. Funding Request; 5. Test Implementation; 6. Venture Reassessment), of which Steps 1, 2 and 6 are relevant to the subject of assumption management in a support system environment. Step 3 includes an element of contingency planning that deals with alternative actions to take should an assumption prove false when it is tested.

Figure 2. Critical Assumption Planning [5].

2.3 Active Threat and Opportunity Management (ATOM)

ATOM is a process for the management of risk and opportunity throughout the whole project lifecycle, from project initiation to closeout or handover. According to the authors it provides "a simple method for effective risk management". It includes an explicit analysis step to identify and assess assumptions during the 'Identification' stage of the process as part of a risk workshop [6]. Figure 3 illustrates the ATOM process steps (Initiation, Identification, Assessment, Quantitative Risk Analysis, Response Planning, Reporting, Implementation, Review and Post-project Review). It is noticeable that there are similarities between this process and the previous planning approaches or methods, and that it includes a continuous review cycle.

Figure 3. ATOM process [6].

The assumption identification process depicted in Figure 4 is as follows [6]:

• Examine the project's documentation; this could be bid documents, business plans or management plans. The expectation is that the documentation should contain all the assumptions and constraints that affect the project, but this is not always the case, and implicit assumptions held by stakeholders have to be exposed [6].
• Identify and list the implicit assumptions through a facilitated discussion between all stakeholders based on work breakdown or risk breakdown structures.
• Continue the facilitated discussion to validate each assumption. This is likely to identify assumptions that can be considered safe, i.e. unlikely to prove false, and these can be excluded as potential risks. Note that the exclusion at this stage does not mean that the assumptions are ignored. They are revisited whenever a risk review is conducted.
• Determine the extent to which the remaining assumptions may affect the desired outcomes and then raise risks as necessary.

Figure 4. The assumption assessment process.

Although the ATOM methodology is applied to risk and opportunity management, the assumption and constraints analysis could be adapted for the general management of assumptions throughout the In-service (operations) phase of a system's lifecycle [6]. This would entail replacing the 'raise risk' step with one that captures the way in which the assumption could be monitored, any mitigation or prevention actions, and potential recovery plans should the assumption prove false. The ATOM methodology has strong similarities with the Risk Management process described in AS/NZS ISO 31000 [7].
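The two-question filter of Figure 4 maps directly onto a loop over the assumption list. A minimal sketch, with hypothetical field names and invented example assumptions:

```python
def assess_assumptions(assumptions):
    """Figure 4 logic: an assumption becomes a candidate risk only if it both
    could prove false AND would affect project performance; all others are
    set aside, not ignored, and revisited at every risk review."""
    risks, safe = [], []
    for a in assumptions:
        if a["could_prove_false"] and a["affects_performance"]:
            risks.append(a)   # the 'raise risk' step of the ATOM workshop
        else:
            safe.append(a)    # excluded for now, revisited later
    return risks, safe

risks, safe = assess_assumptions([
    {"text": "Spare parts lead time stays under 30 days",
     "could_prove_false": True, "affects_performance": True},
    {"text": "Test range access is granted as scheduled",
     "could_prove_false": True, "affects_performance": False},
])
```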
2.4 Comparison with FMECA and Risk Management processes

Table 1 presents a comparison of the approaches described above with the FMECA and Risk Management processes for a number of factors that could be applicable to the management of assumptions during the In-service (operations) phase of a system's lifecycle. Because the ATOM methodology is similar to the Standards Australia Risk Management approach, it has been excluded from the table. The factors and the questions they pose are:

• Environment or System Boundary. This sets the context for each of the methods. Although Assumption-Based Planning does not appear to have a stage that sets the environment or system boundary, this may be because its starting point is an already developed plan, which in theory contains the context for that plan [8].
• Identification. Does the method have a step that identifies the analysis object? In the case of the approaches above this would be assumptions, in the FMECA the failure modes, and in risk management the identification of relevant risks.
• Impact. Are the potential impacts of a failure of the analysis object captured?
• Causes. Is the way in which the analysis object can fail captured? In the case of an assumption this translates to identifying the possible reasons for it to prove false. According to Table 1, possible causes are only identified in the FMECA and Risk Management processes. This is a weakness of the other three approaches, although it is possible that the signposts step of Assumption-Based Planning is attempting to identify possible causes. If causes are not identified then it is difficult to see how treatment and mitigation strategies can be defined or implemented [8].
• Likelihood of occurrence. Within a timeframe of interest, will an assumption fail or a risk event occur? In FMECA terminology this is the failure rate associated with the particular failure mode.
• Treatment or mitigation strategies. Does the method include a step to identify and document possible mitigations or treatments in the event that an assumption proves false or a risk event materialises? For failure modes this factor relates to existing controls that prevent a failure or reduce its probability of occurring. Treatment beyond existing controls would require feedback into the overall engineering processes to effect a design change or implement new controls.
• Review. Are reviews a part of the methodology? Reviews may be one-off events, as is the case for a FMECA unless it is updated at some point in the future, or regular events, as is usually the case in a risk management regime. In the case of Scenario-based Strategic Planning the monitoring process looks at the plan's overall effectiveness rather than specific assumptions. Should effectiveness drop off, then the assumptions would be revisited as part of the overall process [9].

Table 1. Comparison of planning methods with FMECA and Risk Management.

Environment / System Boundary:
- Assumption-Based Planning: No.
- Critical Assumption Planning: Step 1 (knowledge base assessment) strives to understand the business context and what is already known and unknown.
- Scenario-based Strategic Planning: Yes; the framing checklist is used to create a common understanding of the scope of the project.
- FMECA: Yes; system description used.
- Risk Management: Yes; establishing the external context.

Identification:
- Assumption-Based Planning: Step 1 identifies the assumptions in the plan.
- Critical Assumption Planning: Step 2 identifies critical assumptions.
- Scenario-based Strategic Planning: Yes; 360° stakeholder feedback process.
- FMECA: Yes; list the potential failure modes.
- Risk Management: Yes; list the potential risks.

Impact:
- Assumption-Based Planning: Step 2 identifies load-bearing and vulnerable assumptions and determines those assumptions that are worthy of further analysis.
- Critical Assumption Planning: Part of Step 2; use of business models to determine the potential impact of assumptions on business outcomes.
- Scenario-based Strategic Planning: Yes; Impact / Uncertainty Grid.
- FMECA: Yes; describe the effect on the system of the failure mode.
- Risk Management: Yes; describe the potential impact if the risk materialises.

Causes:
- Assumption-Based Planning: No explicit step.
- Critical Assumption Planning: No explicit step.
- Scenario-based Strategic Planning: No explicit step.
- FMECA: Yes.
- Risk Management: Yes.

Likelihood of occurrence:
- Assumption-Based Planning: Possibly part of Step 2, when determining the vulnerability of an assumption in the plan's lifetime.
- Critical Assumption Planning: No.
- Scenario-based Strategic Planning: Yes; forms part of the Impact / Uncertainty Grid, qualitative in the form of levels of uncertainty.
- FMECA: Yes; failure rate or qualitative scale.
- Risk Management: Yes.

Treatment or mitigation strategies:
- Assumption-Based Planning: Steps 3, 4 and 5, termed signposts, shaping actions and hedging actions. Signposts are warning signs that should result in management action. Shaping actions are intended to help the assumption hold true for the duration of the plan. Hedging actions prepare for the assumption to fail (what can be done now to mitigate the potential effect, or what has to be in place should the assumption prove false).
- Critical Assumption Planning: Contingency planning element of Step 3; assumption test program.
- Scenario-based Strategic Planning: Yes; strategy definition.
- FMECA: Yes; through feedback into the engineering process. Also identifies existing control measures that may mitigate potential failures and their impact.
- Risk Management: Yes; action plans to remove the risk or deal with it when it materialises and becomes an issue.

Review:
- Assumption-Based Planning: No explicit step.
- Critical Assumption Planning: Step 6 (venture reassessment) monitoring, although this is not an explicit monitoring of the assumptions but more to do with monitoring the plan's effectiveness.
- Scenario-based Strategic Planning: Yes; monitoring of the plan's overall effectiveness.
- FMECA: Yes, as part of the engineering management process.
- Risk Management: Yes; regular review as mandated by management plans and corporate instructions.

3. Proposed assumptions management process model

Complex enduring systems tend to have Support System solutions and associated support services with lifecycles that can be measured in the tens if not hundreds of years, and they may have multiple stakeholders. This sort of environment is intrinsically uncertain and affected by unplannable events [10]. As Hillson points out, "no one knows the future with perfect certainty", and making assumptions is a way of dealing with uncertainty by simplifying matters [11]. The issue is that the assumptions made today may either fail or become irrelevant due to changing circumstances. Furthermore, successful delivery of support services is reliant on the various assumptions holding true. It is clear that assumptions are an endemic part of any system support solution or support service. Over time, as the situation changes, it would be prudent to revisit and reassess the assumptions in the light of experience gained, current knowledge and future directions. At the moment there does not appear to be a formal process that routinely reviews the assumptions on which a support solution or its associated support services are based. This section describes a proposed process, based on the literature and on project observations, that is intended to help projects pay more attention to assumptions.

3.1 Characteristics of an assumption management process

The analysis of the various approaches to assumption identification, risk management and FMECA suggests that an assumption management process should include the following:

• The context within which the assumptions exist is defined. What are the circumstances that lead to the need for an assumption?
• Identification of all the assumptions affecting the service delivery.
• Analysis of the identified assumptions to determine their potential impact on service delivery.
• Judgement about the importance or otherwise of the impact of the assumptions should they prove false.
• Determination of the likelihood that the assumption will prove false.
• Identification of possible indicators that the assumption is heading towards proving false. These indicators should enable action to be taken before there is any significant impact on service delivery.
• Identification of possible actions that can be taken to reduce the possibility of an assumption proving false.
• Identification of possible actions to take if the assumption does prove to be false.
• Recording of the results of the analysis outcomes.
• Updating of plans with the outcomes of any assumption analysis activity.
• Regular review, including when changes to the external environment occur.
• The process is continuous.

3.2 The lifecycle view of assumption management

Assumptions come into being the moment that any planning activity commences. They are usually used to fill gaps in knowledge or uncertainty about the future. The lifecycle starts with the analysis of support services definitions and requirements and an understanding of the support system and contract requirements. The analysis of these artefacts will identify the known facts and the gaps in knowledge (certainty and uncertainty). The known facts will feed directly into the planning process, whilst the gaps in knowledge will be treated as assumptions or risks, and these will then be fed into the planning process. The planning process may turn the gaps in knowledge into known facts, and this will lead to a revision of the assumptions or risks. Conversely, the planning process may result in more gaps in knowledge and uncertainty, and these will need to be put through the assumption and risk analysis processes before being included in any resultant plan. Once the plan is established and the support services are delivered, the plan will be reviewed and this may require the assumptions to be revised. This could be through adding new assumptions, revising existing assumptions, or retiring assumptions because they are no longer relevant to the delivery of support services.

3.3 Proposed assumption management process

Figure 5 depicts the proposed assumption management process. For simplicity the context and continuous review steps are not included. The process starts with the identification of all assumptions contained in the plans or derived from the contract requirements, support services requirements, support system description, support services definition and any other relevant source of information used to plan support services. At this step it is important to identify as many of the explicit and implicit assumptions as possible. A technique such as 'looking for wills and musts' should be utilised to seek out the implicit assumptions [2]. Engagement with stakeholders is another good method for determining assumptions [9]. When listing the assumptions, Hillson recommends writing them in the form of 'IF this assumption proved false, THEN the effect on the project would be…' [11]. This approach assists the assessment stage, with the 'IF' side addressing the likelihood of the assumption failing and the 'THEN' side the impact if the assumption did fail [11]. Once as many assumptions as possible have been identified and listed, the next step is to analyse each assumption.
Starting with the first assumption, two questions are asked that reduce the list of assumptions to those that could affect service delivery performance and should therefore be monitored and managed going forward. The first question filters out assumptions that are highly unlikely to fail during the lifetime of the project. The second decides whether, if the assumption did prove false, there would be a significant impact on the service delivery. In either case, if the answer is "no", the details are recorded for future reference. If an assumption is likely to prove false and would have an impact on the performance of the support services, the next steps in the process assess the likelihood of occurrence and determine possible actions that either eliminate or reduce the likelihood of occurrence, give advance warning of impending failure, or are to be taken if the assumption does fail. The process is then repeated for the remaining assumptions until there are no more to be assessed, at which point the results of the analysis are incorporated into the relevant plans.

Figure 5. Proposed assumptions management process.

One aspect of the assessment process is the decision about whether an assumption could prove false. There are two aspects to this question worthy of further discussion: the level of confidence that can be afforded to the assumption, and how vulnerable the assumption is to a change in environmental circumstances. When the timeframe is relatively short, there is likely to be high confidence that the assumption will hold true, and it is less likely to be vulnerable to unforeseen changes in the operating environment. However, the opposite is true when the timeframe is relatively long, such that confidence would be low and vulnerability would be high. Hence, when determining whether an assumption could prove false, the planning timeframe must be taken into consideration.
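Combining the characteristics listed in Section 3.1 with the Figure 5 flow, the loop could be prototyped along the following lines. This is a sketch under our own naming assumptions, not an implementation from the paper; the IF/THEN fields follow Hillson's recommended phrasing [11], and the example assumption is invented:

```python
def manage_assumptions(assumptions, plans):
    """Sketch of the Figure 5 flow. Each assumption is recorded in Hillson's
    'IF ... THEN ...' form; all field names are illustrative assumptions."""
    register = []
    for a in assumptions:                            # 'Select Assumption'
        entry = {"if": a["if"], "then": a["then"]}
        if a["could_prove_false"] and a["affects_delivery"]:
            entry["likelihood"] = a["likelihood"]    # judged against the planning timeframe
            entry["indicators"] = a["indicators"]    # advance warning of impending failure
            entry["preventive"] = a["preventive"]    # reduce the possibility of failing
            entry["recovery"] = a["recovery"]        # actions if the assumption fails
        register.append(entry)                       # 'Record details'
    plans["assumption_register"] = register          # 'Update Plan(s)'
    return plans

plans = manage_assumptions(
    [{"if": "spares arrive within 30 days", "then": "availability targets are missed",
      "could_prove_false": True, "affects_delivery": True, "likelihood": "medium",
      "indicators": ["supplier lead time trending up"],
      "preventive": ["second-source critical spares"],
      "recovery": ["draw on pooled fleet spares"]}],
    {},
)
```

In a real support-services setting the register would be revisited at each regular review, mirroring the continuous review step omitted from Figure 5 for simplicity.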
4. Conclusions

Assumptions underpin most, if not all, In-service support plans and the delivery of support services. Once the plan is established and being executed, the assumptions are rarely, if ever, revisited until either the contract is renewed or an incident occurs that affects support services performance. If assumptions are managed to the same degree as risk and opportunity, then it is possible that there will be fewer surprises during the In-service phase of a system's lifecycle. Assumptions that prove false can have catastrophic consequences, and for this reason alone it would be prudent to ensure that all significant assumptions are explicitly identified and recorded. Many assumptions go unnoticed because they are implicit, and these are the most difficult to identify and bring into the open. Some implicit assumptions are the result of the organisation's culture and are treated as fact without challenge. This has led to the situation outlined in the Rizzo Report [12], where the RAN assumed a ship was 'safe to sail' unless proven otherwise, or the Nimrod aircraft crash, where it was dangerously assumed that because there had been no accidents the aircraft was therefore intrinsically safe [13].

Various methodologies that identify, assess and treat assumptions were compared with each other and with risk management and reliability engineering. They show common themes between the various approaches, including setting the context, identifying the assumptions, assessing their impact on the project and likelihood of occurrence, and then determining an appropriate treatment. Not all methods included a review step, but in the context of support services delivery over a long period this would be a sensible step to ensure that the assumptions are still relevant or are becoming vulnerable to the possibility of proving false.

A technique that identifies implicit assumptions contained in a plan was trialled in an industry project environment. The results were quite interesting, and the exercise did reveal a number of assumptions that, should they prove false, could have a significant impact on the project's success.

A structured process for assessing, categorising and managing assumptions in a support services context is proposed to improve overall service delivery by potentially reducing the adverse impact of an assumption proving to be false. This is based on the methods outlined in the risk management and strategic planning literature. The proposed process could benefit Nova Systems as it continues to provide engineering services to its many clients and moves into longer-term contracts involving the Integrated Support Contractor construct. In the wider Support Systems community, a thorough understanding of the potential impact of assumptions that prove to be false, and putting more effort into the identification of implicit assumptions, would benefit the design of Support Systems and the associated delivery of support services.

References

[1] AMC, Defining Asset Management, The Asset Journal, Vol 8, Iss 2, pp. 42-43, 2014.
[2] J.A. Dewar, Assumption-Based Planning: A tool for reducing avoidable surprises, RAND, Cambridge University Press, 2002.
[3] P. Drucker, The Theory of the Business, Harvard Business Review, pp. 95-104, 1994.
[4] Engineers Australia, Stage 1 Competency Standard for Professional Engineer, 2013, http://www.engineersaustralia.org.au/sites/default/files/shado/Education/Program%20Accreditation/130607_stage_1_pe_2013_approved.pdf, accessed 28 October 2014.
[5] H.B. Sykes, D. Dunham, Critical Assumption Planning: A practical tool for managing business development risk, Journal of Business Venturing, Vol 10, 413-4, 1995.
[6] D. Hillson, P. Simon, Practical Risk Management: The ATOM Methodology, Second Edition, Management Concepts, Inc., Kindle Edition (Kindle Location 1374), 2012.
[7] Standards Australia, AS IEC 60812-2008, Analysis techniques for system reliability - Procedure for failure mode and effects analysis (FMEA), 2008, http://www.saiglobal.com.ezproxy.lib.rmit.edu.au/PDFTemp/osu-2014-06-30/5871338651/60812-2008.pdf
[8] J.A. Dewar, C.H. Builder, W.M. Hix, M.H. Levin, Assumption-Based Planning: a planning tool for very uncertain times, RAND Corporation, 1993.
[9] B. Schwenker, T. Wulf (eds.), Scenario-based Strategic Planning, Roland Berger School of Strategy and Economics, Springer Fachmedien, Wiesbaden, 2013.
[10] A. De Meyer, C.H. Loch, M.T. Pich, Management of novel projects under conditions of high uncertainty, Judge Business School, University of Cambridge, 2006.
[11] D. Hillson, Assume nothing, challenge everything!, Project Manager Today, Feb 2008, p. 38, 2008.
[12] P.J. Rizzo, Plan to Reform Support Ship Repair and Management Practices, Commonwealth of Australia, 2011.
[13] C. Haddon-Cave, The Nimrod Review, The Stationery Office, London, 2009.
Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-439

Four Practical Lessons Learned from Multidisciplinary Projects

Evelina DINEVA a,1, Thomas ZILL a, Uwe KNODT a and Björn NAGEL b
a Air Transportation Systems, Deutsches Zentrum für Luft- und Raumfahrt e.V. (German Aerospace Center)
b EIWis, Deutsches Zentrum für Luft- und Raumfahrt e.V. (German Aerospace Center)
1 Corresponding Author, E-mail: evelina.dineva@dlr.de

Abstract. In this study we perform extended Lessons Learned interviews concerning collaboration in multidisciplinary engineering projects at DLR, the German Aerospace Center. Interviews are held with all available members of several project teams. The success of a project is evaluated against a) standard Lessons Learned scales (as used in Project Management, PM) and b) the perceived satisfaction of the interviewees. Furthermore, we inquire about the nature of collaboration and individual experiences. From the interviews we could identify the influence of four main factors: i) organizational structure; ii) organizational practice; iii) leadership and iv) continuity. The above factors cannot be captured with standard PM methods. These rather need to be "discovered anew" by project managers. The goal of our investigation is to let the practice of good "real life" project management flow into future projects and into education.

Keywords. lessons learned, expert interview, empirical research, project management, best practice, multidisciplinary collaboration, collaborative design, multidisciplinary design and optimisation (MDO), participatory MDO (pMDO)

Topics. Knowledge-Based Engineering; Transdisciplinary Engineering

Introduction

Lessons Learned is a tool of Project Management (PM) that serves to capture meta-data about the course of a project [1,2]. Members of the project are debriefed about what went wrong and what went well during the project run time, and the causes are analysed as well. The Lessons Learned, which typically follow after the project is closed, are often omitted because they are not directly relevant for the accomplished or closed project but are intended to help improve subsequent projects. To obtain meta-knowledge about the course of a project and to apply it are, however, two quite different tasks than accomplishing project goals or managing a project. While enterprises are usually quite skilled in the latter tasks, they often lack the responsibility to conduct, disseminate, and apply Lessons Learned; again, Lessons Learned procedures are simply omitted. Note also that providing explicit knowledge and putting this knowledge to work in subsequent projects are different steps of a Knowledge Management process [3].

As a remarkable example, NASA (the National Aeronautics and Space Administration in the USA) is known for its good practices of applying Lessons Learned. NASA has a dedicated Knowledge Management department, which conducts, applies and disseminates Lessons Learned [4]. The Knowledge Management and the Project Management departments at DLR (the German Aerospace Center: Deutsches Zentrum für Luft- und Raumfahrt e.V.)
are also working to establish a Lessons Learned culture (see [3] for a description and [5,6] for examples such as the PM-days exchange at DLR). In addition, systems institutes at DLR that run collaborative facilities [7,8] are also interested in learning from collaborative experiences. What seem to be individual attempts might, however, become standard in the near future, as the ISO 9001 norm for Quality Management [ISO-ref] is about to be updated (this year, 2015) to include Knowledge Management strategies, which focus on identifying, gathering, maintaining, and applying implicit and explicit knowledge [9]. This change will affect millions of enterprises worldwide, including several DLR institutes that are ISO certified for Quality Management.

In the current study, we collect and analyze meta-knowledge obtained in the course of a project. Our focus thereby is on collaboration and on personal experiences, both of which are substantial extensions to standard Lessons Learned methods. We ask about collaboration explicitly because aerospace projects nowadays are very complex and involve a wide range of disciplines [10–18]. While these studies tackle the software tools, hardware tools, organisation, and facilities for collaboration, our inquiry is about the human factors of collaboration. The inquiry into personal experiences supports this by providing information on how individuals approach and improve their participation in collaborative projects. These are clearly empirical questions, which we investigate with a series of Lessons Learned interviews.

1. Preliminary Lessons Learned Study

1.1. Material

A standard Lessons Learned questionnaire was extended with questions about collaboration and about personal experiences. The questionnaire is in German and contains a cover page, followed by four sections, which inquire about:

• cover page: data about the interview (when, where, who), the project (name, scope, size, duration), and the interviewee (participation);
• project progress: preceding and subsequent projects, project goals, and events that may have had an impact on the project;
• analysis of collaboration: team satisfaction with the project, nature of shared experiences, communication;
• personal experiences: personal satisfaction with the project, required and obtained skills, and sources of skills;
• resume: what was good, what was not so good, and what we have learned from the project.

Scales, on which participants can place a mark between zero and one to rate the item in question, were used to indicate the level of team and personal satisfaction, and multiple-choice questions were used for sources of experiences (examples of "scale" and "multiple-choice" questions are presented for the improved questionnaire in Tables 1 and 2). Participants were also asked to sketch how the project was organized. All other questions required only verbal responses. The first questionnaire was slightly restructured for a better flow of the interview. One scale, asking about the project completeness, was thereby omitted (it is re-entered in the more recent version, Section 2.1).

Table 1. Please rate the relevance of the following experiences for your contribution: (i) technical expertise; (ii) understanding of other disciplines; (iii) communication and social interaction; (iv) other.

Table 2. What are the sources of your relevant qualifications or experiences?
Answer grid of Table 2 (per source / qualification from question 3.6): (a) university; (b) advanced training; (c) mentors; (d) "on the job"; (e) teaching; other.

Tables 1 and 2 are examples of translated (from German) non-verbal questions, "scale" and "multiple-choice", as used in the most recent version of the questionnaire.

1.2. Participants

Six DLR employees were interviewed about their experiences in one or more of four projects, A, B, C, and D. All but one participant were from the same department as the interviewer. The focus of the interviews was on projects B and D, which do not have much overlap of team members. Projects A, B, and C were consecutive projects with some overlap of team members. Projects A and C were inquired about in less detail alongside project B, to the extent to which members of project B were also involved in A and C. Due to a relatively high fluctuation rate at DLR, consecutive projects are often manned with a significant proportion of crew turnover. Thus, for project A only some more senior team members were available for the interviews. Participants were informed about the goals, procedure, risks and data handling of the study, and they consented to participate [19].

1.3. Procedure

The interviews took place in an office or in a common area, where distractions were limited. All interviews were conducted in German. Short interruptions occurred but were not critical. The interview was semi-structured in that the interviewer mostly followed the questionnaire (Section 1.1), but reordering was done when it appeared helpful. Where appropriate (i.e., triggered by the context), additional questions were asked, too. The interviewer took written notes of the participants' verbal answers. Where applicable (e.g., scales and sketches) the interviewee filled in his or her reply.

1.4. Analysis and Results

Based on just six interviews, statistical analysis cannot be performed. However, the pattern of satisfaction with the projects A to D remains quite stable when more subjects are added (Figure 1). This pattern shows that satisfaction for the consecutive projects A to C increases from A to C. This improvement was also explicitly discussed in the verbal reports. In addition, for project D, where satisfaction is about average as for B, the core issue for dissatisfaction was the same: frustration from wasting one's own time hunting for deliverables on which one's own work depends. The most interesting data is the improvement from A to B to C. The overall satisfaction pattern, together with the verbal reports, suggests that four factors could be identified as likely to be critical for the course of a project.

Figure 1. Average satisfaction rates in projects A–D, split over the interview series, S1 and S2.
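For transparency, the aggregation behind Figure 1 is simply a per-project, per-series average of the 0-1 satisfaction marks. A minimal sketch with invented ratings (the study's actual values were marks on paper scales):

```python
from collections import defaultdict
from statistics import mean

# (project, series, satisfaction on the 0-1 scale) - invented example values
ratings = [("A", "S1", 0.40), ("B", "S1", 0.55), ("B", "S1", 0.50),
           ("C", "S1", 0.70), ("D", "S2", 0.50), ("C", "S2", 0.75)]

by_key = defaultdict(list)
for project, series, value in ratings:
    by_key[(project, series)].append(value)

for key in sorted(by_key):                  # average per project and series
    print(key, round(mean(by_key[key]), 2))
```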
Firstly, the organizational structure matters: DLR, with almost 8000 employees, provides a large scope of engineering disciplines and is therefore capable of supporting large-scale multidisciplinary projects. This influences collaboration at several levels: a) DLR has the capacity to attract experts in a wide range of disciplines. This is reflected in the interviews by particularly high ratings of the institutes' disciplinary expertise from all participants. b) Some interviewees also mention that working on relevant large-scale tasks, which smaller organizations cannot offer, is an important source of their motivation. c) Given that DLR institutes are distributed over Germany, multidisciplinary teams typically are not co-located. As a consequence, communication among team members is to a large extent not in person but by means of mail, phone, video, data, and file exchange.

Secondly, the organizational practice plays a role: At DLR, it is common that experts are involved in several projects with different priorities. For project managers and institute directors, it is very difficult to coordinate who of their crew is needed when and in which project. This situation may cause delivery delays of work packages, which often entail frustration as they invoke unnecessary requests for a deliverable or cause a delay in another project. About half the interviewees report being annoyed when they need to hunt for deliverables. Note that this is despite the fact that communication between coworkers is virtually throughout loyal and friendly; delays are explicitly attributed to the fact that colleagues are working on alternative projects.

Thirdly, leadership plays a critical role: Both project success and high satisfaction of the team members correlate with the ability of the managers to provide intrinsic motivation. The respective managers report on their explicit efforts to align project goals with personal goals and with the goals of the involved departments. Their networking actions (globally, to position oneself in the organization; locally, to bring the team together) also pay off.

Fourthly, continuity and intensity of collaboration matter: With the frequency and duration of collaboration, the team members get to know one another and gain insight into each other's disciplines. For the consecutive projects A, B, and C, which have a significant crew overlap, the participants report that:

a) team members gain knowledge of how one's own work influences others;
b) they learn whom to ask which questions;
c) the appreciation of big-picture goals grows;
d) team members develop an understanding of the underlying multi-participatory process.

These factors have been explicitly enhanced from project B to C by introducing so-called Design Camps, where all team members are co-located to work together on relatively simple but multi-disciplinary tasks. This increases the intensity of collaboration, also by allowing minds to exchange. In short: teams progress from a multi- to a trans-disciplinary approach. With that, the motivation and the effectiveness of a team also grow.

1.5. Conclusion Study One

The above results, although interesting, need to be regarded within the boundaries of a quite small group of participants. But they are sufficient to state the obvious: multidisciplinary projects often are spatially distributed, and this is a critical factor that needs to be considered. In many projects at DLR, the project members are not co-located (and from the experience of working in such projects, one might conclude that this is the rule rather than the exception for most current projects that are too complex to be handled by single specialized departments or institutes). The issues of leadership and of continuity and intensity of collaboration need to be regarded against the background of the organisational structure and practice: how to create motivation, and how to bring minds together, when people meet in person just a few times a year?
For the second round of interviews, the questionnaires were extended to inquire directly about issues of motivation and distribution of collaboration.

2. Expanded Lessons Learned Study

The first interview series included only six interviewees, and the analysis of the preliminary results offered insights on how to improve the questionnaire. Critically, the interviewer also visited several other departments to gain Lessons Learned data from their perspective.

2.1. Material

The questionnaire for this study was structured similarly to the first one (Section 1.1). The following changes were made: a) The section on collaboration was substantially extended to cover the issue of distributed teams by inquiring about the distribution of communication with the partners and about the spatial distribution of communication frequencies. b) The section on personal experience was extended with questions about motivation, in order to better investigate the link between the networking efforts of project leaders and the motivation of team members. c) Questions often do not belong to just one section, and some were rearranged in order to allow for a more fluent interview (e.g. fluctuations were initially inquired about in the section on collaboration, now in the section on project progress). d) In order to simplify the evaluation of the interviews, whenever possible the questions were rephrased as multiple-choice selections or scales. This allows categorization or numerical evaluation of responses, which is a much faster procedure than the comparison of verbal responses (where there are many ways to say the same thing, such that the interpretation becomes subject to the judgment of the researcher rather than the interviewees). e) Whether project goals were jeopardized by changes of personnel (due to personal or departmental fluctuations), or by new evidence or new technical problems, is asked explicitly. These factors were largely taken into account in the preliminary study as well, but now asking to (not exclusively) categorize among possible jeopardizing factors allows these factors to be directly compared against one another.

2.1.1. Participants

In the second round of interviews, 14 additional participants were interviewed, and several participants of the previous round volunteered to answer all new questions in the updated questionnaire. Participants were informed about the goals, procedure, risks, and data handling of the study, such that they could provide informed consent to participate [19]. Critically, however, for the second round of interviews we could recruit participants who were from different departments and were located in other cities than the interviewer. This is important to gain a critical view from outside one's own projects for a more balanced comparison. Of the new participants, one was involved in and interviewed about projects B and C, and another one just about project B from the preliminary study (Section 1). Three additional participants reported about project D. The remaining nine interviews were about eight novel projects, E to L, i.e. two participants reported on the same project. (Such a wide scope of projects was not intended when planning the study, but scheduling interviews with people who are very busy and who are far away is very difficult, and it will take a lot of time to collect more data for several key projects.)

2.1.2. Procedure

The procedure was very similar to the previous one.
Unlike the first round of interviews, all but one participant were from different departments than the interviewer and were thus not familiar with the current study and its goals. Thus, most interviews were introduced or debriefed on what the study is about and how it relates to the work of the authors. The actual interviews were about 30 to 90 minutes long; most of them were in the mid range of 45 to 60 minutes. Overall, interviews were longer. Next to the extended questions (i.e. filling in tables or selecting from multiple graphs), the longer duration is due to the fact that additional time was provided for the interviewees to ask questions. For the second round, the interviews were also audio-recorded. Two of the interviews were conducted in English, and the interviewer translated and explained the questions where the interviewees had to fill in their answers.

2.1.3. Analysis and Results

Despite the increased sample size (from 6 to a total of 20 interviewees), there are very few interviews per project, see Table 3. Therefore, and to further investigate the results from the preliminary study, the current analysis also has projects A to D in focus. For these projects, the evidence for the results from the preliminary study was strengthened: Figure 1 compares the team and the personal satisfaction rates for the different subsets of interviews, from the preliminary study (S1) and the follow-up study (S2). There are no apparent changes between the patterns from S1 to S2. This is further supported by the verbal reports from the follow-up study: on the one hand, alongside dissatisfaction, frustration is reported for the projects where some departments are involved with little manpower; on the other hand, the increased co-located work intensity (e.g. Design Camps) was mentioned as fruitful for the collaboration and coincided with higher satisfaction rates and higher motivation.

Table 3. Distributions of interview data over projects and studies.

Project  Interviews S1 [total (dep)]  Interviews S2 [total (dep)]  Interviews All [total (dep)]
A        2 (2)                        0 (0)                        2 (2)
B        5 (5)                        2 (0)                        7 (5)
C        3 (3)                        1 (0)                        4 (3)
D        1 (0)                        3 (0)                        4 (0)
E        0 (0)                        1 (0)                        1 (0)
F        0 (0)                        1 (0)                        1 (0)
G        0 (0)                        1 (0)                        1 (0)
H        0 (0)                        1 (1)                        1 (1)
I        0 (0)                        2 (0)                        2 (0)
J        0 (0)                        1 (0)                        1 (0)
K        0 (0)                        1 (0)                        1 (0)
L        0 (0)                        1 (0)                        1 (0)
Sum      11 (10)                      15 (1)                       26 (11)

The improved questionnaire has the potential to reveal correlations between, for instance, satisfaction rates and events that jeopardize the goal of a project, see Figure 2. Interestingly, where satisfaction rates are high, the rate of jeopardy to the project goals is rather low, and vice versa. In a few of the reported projects, where crew was removed entirely from a project (because of institutional restructuring in one of the departments), the impact on the project success and on the participants was quite strong (in the numerical data, this can be seen for project G). However, for future interviews, we would need to better distinguish between the likelihood and the strength of the impacts from unexpected events. (For instance, one interviewee rated the goals of project L to be extremely jeopardized if a key department were to remove crew from the project, but such an event is not really anticipated.)

Figure 2. Average satisfaction rates (top) versus perceived average jeopardy rates (bottom) in projects A–L.
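Once more interviews are available, the inverse relationship suggested by Figure 2 could be checked numerically. A sketch with invented per-project values:

```python
from statistics import correlation  # available since Python 3.10

satisfaction = [0.40, 0.55, 0.70, 0.50, 0.30]  # per project, invented values
jeopardy     = [0.60, 0.50, 0.20, 0.50, 0.80]  # perceived jeopardy, invented values

# A negative Pearson coefficient would support 'high satisfaction, low jeopardy'.
print(round(correlation(satisfaction, jeopardy), 2))
```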
While individual fluctuations can be buffered quite well, for several projects the shifts of institutional goals are reported to jeopardize project goals; how strong that effect is depends on how relevant the work packages are for the whole project. Recall that networking activities of project leaders can pay off: one strategy is to align the project with institutional and with personal goals already in the planning phase. This strategy might in fact be very relevant for securing the successful accomplishment of a project.

2.2. Conclusion Study Two

Overall, the conclusions from the preliminary study were confirmed, both by the numerical data and in the verbal reports. With the extended Lessons Learned questionnaire, there is a potential to discover how critical events can impact the goals of, and the satisfaction with, a project. High satisfaction and goal achievement seem to correlate; however, much more data needs to be generated in order to statistically confirm such a claim. This is still work in progress, given that scheduling and conducting interviews with experts in a distributed organization like DLR is very time consuming. This, in turn, speaks for the fact that a distributed work environment like DLR, with 16 locations in Germany and an additional four abroad, poses its own challenges for collaborative multi-disciplinary projects.

3. General Conclusion

Our Lessons Learned approach is a first step toward revealing the dynamics of interdisciplinary collaboration. In interdisciplinary projects with a large scope, four factors seem to play a critical role: i) organizational structure; ii) organizational practice; iii) leadership and iv) continuity. All these factors are interconnected: good leadership (e.g., goal alignment strategies) can help to avoid typical problems of a distributed work environment. During the course of a project, continuity, realized as the frequency and intensity of meetings between the different disciplines, also seems to be an important factor. Although expensive (it takes time and money to conduct a working meeting with a large group of experts), continuity pays off in that it allows minds to meet and to move from interdisciplinary to transdisciplinary project work.

The explicit information about the dynamics of collaboration within multidisciplinary projects is intended to be used within the organisation, DLR, by means of an improved standard process for internal Lessons Learned [3]. In collaboration with the Technical University in Hamburg, the TUHH, the Lessons Learned approach should inform the establishment of project-oriented class work in aircraft engineering.

References

[1] G. Probst, S. Raub and K. Romhardt, Wissen managen. Wie Unternehmen ihre wertvollste Ressource optimal nutzen, 6. Auflage, Dr. Th. Gabler Verlag, Wiesbaden, 2010.
[2] Project Management Institute, A Guide to the Project Management Body of Knowledge (PMBOK Guide), Project Management Institute, Inc., Newtown Square, Pennsylvania, 2008.
[3] E. Dineva, A. Bachmann, U. Knodt and B. Nagel, Human expertise as the critical challenge in participative multidisciplinary design optimization: An empirical approach, In: J. Cha, S.-Y. Chou, J. Stjepandić, R. Curran and W. Xu (eds): Moving Integrated Product Development to Service Clouds in the Global Economy, Vol. 1 of Advances in Transdisciplinary Engineering, IOS Press, Amsterdam, pp. 223–232, 2014.
[4] A. Laufer, T. Post and E.J.
Hoffman, Shared Voyage: Learning and Unlearning from Remarkable Projects, History Division, Office of External Relations, National Aeronautics and Space Administration, NASA, Washington, DC, 2005.
[5] A. Mann, Einführung eines projektübergreifenden Ressourcenmanagements, in: PM Days, Köln, Deutschland, June 2014. DLR Projektmanagementsupport. (PM: Project Management).
[6] E. Grunewald, Wieviel ist Europa?, in: PM Days, Köln, Deutschland, June 2011. DLR Projektmanagementsupport. (PM: Project Management).
[7] E. Dineva, A. Bachmann, E. Moerland, B. Nagel and V. Gollnick, New methodology to explore the role of visualisation in aircraft design tasks: An empirical study, Int. J. of Agile Systems and Management, 7:220–241, 2014.
[8] A. Braukhane and O. Romberg, Lessons learned from one-week concurrent engineering study approach, in: International Conference on Concurrent Enterprising (ICE), June 2011.
[9] T. Steininger, K. North and A. Brandner, Die neue ISO 9001:2015 – Wissensmanagement wird Pflicht!, Wissensmanagement, 2014.
[10] R. M. Kolonay, A physics-based distributed collaborative design process for military aerospace vehicle development and technology assessment, Int. J. of Agile Systems and Management, 7(3/4):242–260, 2014.
[11] E. Moerland, B. Nagel and R.-G. Becker, Collaborative understanding of disciplinary correlations using a low-fidelity physics based aerospace toolkit, in: 4th CEAS Air & Space Conference, Linköping, Sweden, 2013. Flygtekniska Förening.
[12] B. Nagel, T. Zill, E. Moerland and D. Böhnke, Virtual aircraft multidisciplinary analysis and design processes – lessons learned from the collaborative design project VAMP, in: The 4th International Conference of the European Aerospace Societies (CEAS), Linköping, Sweden, 2013.
[13] A. Bachmann, J. Lakemeier and E. Moerland, An integrated laboratory for collaborative design in the air transportation system, in: J. Stjepandić et al. (eds.), Concurrent Engineering Approaches for Sustainable Product Development in a Multi-Disciplinary Environment, Trier, Germany, September 2012. 19th ISPE International Conference on Concurrent Engineering, Springer-Verlag, London, pp. 1009–1020, 2013.
[14] B. Nagel, D. Böhnke, V. Gollnick, P. Schmollgruber, A. Rizzi, G. La Rocca and J.J. Alonso, Communication in aircraft design: Can we establish a common language?, in: 28th International Congress of the Aeronautical Sciences, ICAS 2012, Brisbane, Australia, 2012.
[15] D. Seider, P. Fischer, M. Litz, A. Schreiber and A. Gerndt, Open source software framework for applications in aeronautics and space, in: IEEE Aerospace Conference, Big Sky, MT, USA, 03–10 March 2012.
[16] A. Braukhane and D. Quantius, Interactions in space system design within a concurrent engineering facility, in: The 2011 International Conference on Collaboration Technologies and Systems (CTS), Philadelphia, PA, USA, May 23–27, 2011.
[17] D. Schubert, A. Weiss, O. Romberg, S. Kurowski, O. Gurtuna, P. Arthur and G. Savedra-Criado, A new knowledge management system for concurrent engineering facilities, in: 4th International Workshop on System & Concurrent Engineering for Space Applications, SECESA 2010, 2010.
[18] K. M. Gough, B. D. Allen and R. M. Amundsen, Collaborative Mission Design at NASA Langley Research Center, Systems Engineering, 1(4):523–525, 2005.
[19] DGP and BDP, Ethische Richtlinien der DGPs und des BDP, 2005. Deutsche Gesellschaft für Psychologie e.V. and Berufsverband Deutscher Psychologinnen und Psychologen e.V.
Accessed December 1st, 2013.

Part 7
Sustainable Product Development

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-451

A Feasibility Study of Remote Inverse Manufacturing

Nozomu MISHIMA a, Ooki JUN a, Yuta KADOWAKI a, Kenta TORIHARA a, Kiyoshi HIROSE a, Mitsutaka MATSUMOTO b
a Graduate School of Engineering and Resource Science, Akita University
b National Institute of Advanced Industrial Science and Technology

Abstract. Material recycling of small-sized e-waste such as mobile phones is an emerging issue in Japan. Manual disassembly is a suitable process to enhance the quality of recycling; however, its labor cost is one of the largest problems. Current recycling processes using physical separation methods are efficient, but those methods require large facilities and case-by-case adjustment of the processes. The authors believe human vision to be the most flexible and reliable method to separate valuable parts from non-valuable parts. This paper proposes a basic concept of "cloud inverse manufacturing" based on remote operation via the internet. In this concept, cloud operators of the system participate in the physical separation processes as if they were playing an online game. The paper evaluates the separation efficiency of crushed particles and shows that metal-rich particles can be recognized by vision. The paper also presents the design of a prototype manipulator to separate target particles from a conveyer. The paper concludes that the concept is promising for carrying out low-cost and high-quality recycling of small-sized e-waste.

Keywords. e-waste recycling, remote operation, visual separation, material composition

Introduction

Material recycling of small-sized e-waste is an emerging issue in Japan, since legislation expanding the target of recycling to small-sized EEE was recently enforced. It is well known that some electronic products contain considerable amounts of valuable metals; such used products are often called an urban mine [1]. The total amount of valuable metals contained in small-sized e-waste sometimes reaches a few percent of total consumption in Japan [2]. However, one of the serious problems of small-sized e-waste recycling is that the value of the materials recoverable from used products is not enough to cover the recycling cost [3]. Under the legislation, unlike for large-sized e-waste, recycling fees are not collected from consumers. Thus, the recycling social system, meaning collection, transport, disassembly, metal recovery in some form, etc., should be self-profitable; otherwise, no one will engage in such recycling industries. In the recycling system for small-sized e-waste, one key issue is that the valuable materials are scattered across a huge number of used products and are usually not easy to collect. In addition, in order to operate the recycling system efficiently, it is necessary to concentrate metal compositions; then it is possible to reduce the amount of non-valuable materials to be treated in the system. In a practical recycling process, manual disassembly is a suitable process to enhance the quality of recycling, which means that it can enrich the metal concentration efficiently.
However, the labor cost of manual disassembly is one of the largest problems. To reduce the recycling cost, the authors already proposed the concept of remote recycling in 2008 and noted that the concept is suitable for small-sized e-waste. Current recycling processes using physical separation methods (magnetic, pneumatic, electro-static, etc.) are efficient and clean, but those methods require large facilities and case-by-case adjustment of the processes. The authors consider human vision to be the most flexible and reliable method to separate valuable parts from non-valuable parts. Usually, separation by humans would cost too much; but by using so-called cloud power via the internet, such labor cost can be avoided. This paper proposes a basic concept of "cloud inverse manufacturing" based on remote operation via the internet. In this concept, cloud operators of the system participate in the physical separation processes as if they were playing an online game. If an attractive operation can be implemented, it may become possible to reduce manual operation cost drastically. As the first step of the concept, the paper evaluates the separation efficiency of roughly crushed particles of used mobile phones by hand-picking. If the percentage of metals can be enriched only by human vision, this supports the feasibility of the concept.

1. Situations Regarding Recycling of Small-sized E-waste

Small-sized e-waste contains considerable amounts of rare earths and critical metals. Table 1 shows the amounts of valuable metals contained in various kinds of small-sized EEE (electrical and electronic equipment). The numbers in the table are tons, and "0" means that the corresponding materials must be contained, but in amounts smaller than meaningful amounts. Table 2 shows what percentage of the annual consumption of Japan can be covered by recycling of small-sized e-waste, plus the relative importance of used mobile phones among all the products. The table shows that used mobile phones are the most important target of metal recovery, since sometimes nearly half of the total material amounts are contained in mobile phones. For Palladium, Tantalum, Gold and Silver, material recovery from small-sized e-waste is important for Japanese industry, and the used mobile phone is very important among all the small-sized e-waste. Although the amount covers only a few percent of consumption, having multiple sources of resources is strategically important for being economically competitive in the global resource market. Material recycling of small-sized e-waste is thus important in the aspects of economy, environment and society. In addition, the cost-profit ratio is always a problem in material recycling. A cost-profit analysis of used mobile phones has been carried out in a previous survey [4]. Table 3 shows a simple cost estimation of mobile phone recycling. Under the new recycling legislation for small-sized e-waste, since no recycling fee is collected from consumers, the social systems to recycle small-sized e-waste should be independently operated with an affordable cost-profit balance. However, Table 3 shows that the cost exceeds the profit which can be recovered from the used product. It is said that most of the valuable metals in used mobile phones are contained in the PCB (printed circuit board). So, the key issue is how to separate the PCB efficiently from the other parts, in which plastics are dominant and not so valuable. Although manual disassembly is effective for high-quality material recycling, manual disassembly of the PCB is one of the biggest cost drivers.
Therefore, a countermeasure to reduce the time and cost of the manual disassembly process is strongly needed to establish an effective social system for small-sized e-waste recycling.

Table 1. Recoverable material amount from various kinds of small-sized e-waste (tons) [2].

Product | Pd | Ta | W | Nd | Dy | La | Au | Ag | Cu
Mobile phone | 0.55 | 4.12 | 3.44 | 3.93 | 0.08 | 1.22 | 2.1 | 12.2 | 486
Portable game player | 0.05 | 0.94 | 0.14 | 0.73 | 0.02 | 0.44 | 0.4 | 1.3 | 265
Non-portable game player | 0.03 | 0.24 | 0.13 | 0.12 | 0.01 | 0.05 | 0.1 | 2.1 | 67
Portable CD/MD player | 0.01 | 0.18 | 0 | 0.01 | 0 | 0 | 0 | 0.1 | 9
Digital audio player | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Digital camera | 0.07 | 2.85 | 0.24 | 0.12 | 0.02 | 0.05 | 0.3 | 2.4 | 87
Driving navigation system | 0.09 | 0.99 | 0.14 | 0.28 | 0.07 | 0.07 | 0.1 | 1.4 | 109
Video camera | 0.21 | 2.19 | 0.15 | 0.21 | 0.02 | 0.11 | 0.1 | 2.2 | 49
DVD player | 0.09 | 2.51 | 0.47 | 0.44 | 0.11 | 0.26 | 0.4 | 5.4 | 507
Others | 0.08 | 3.20 | 1.31 | 0.08 | 0.14 | 0.35 | 2.5 | 24.9 | 2262

Table 2. Potential coverage of annual consumption of certain materials by small-sized e-waste recycling.

Element | Coverage of annual usage (%) | Relative importance of mobile phones among all the small-sized e-waste (%)
Pd | 2.4 | 46.6
Ta | 4.37 | 10.0
W | 0.08 | 57.3
Nd | 0.16 | 66.1
Dy | 0.11 | 16.7
La | 0.08 | 47.7
Au | 2.91 | 35
Ag | 2.30 | 23.5
Cu | 0.23 | 12.7

Table 3. Cost and profit estimation of e-waste.

Product category | Average material value (JPY/unit) | Total cost for recycling (JPY/unit) | Average labor cost (JPY/unit)
Mobile phones | 112 | Not estimated | 145

2. Proposal of Cloud Inverse Manufacturing

2.1. Basic concept of remote recycling

Table 3 shows that the labor costs to recycle mobile phones should be drastically decreased in order to establish a self-profitable recycling system. There are still discussions about the recycling process for used mobile phones: some recyclers are implementing manual disassembly, and some are focusing on automatic separation after pulverization. Reducing disassembly cost is one of the keys to improving the cost-profit balance of mobile phone recycling. Thus, the idea of remote recycling is to replace manual disassembly with physical crushing and manual separation. In addition, the manual separation process is carried out at locations where labor costs are relatively inexpensive. This will be effective in reducing the recycling cost. However, exporting used products which contain considerable critical metals and rare earths is not welcomed from the viewpoint of Japanese resource-securing policy. At the same time, the outflow of "waste" is restricted by the Basel Convention. Thus, in our former paper [5], we proposed a remote recycling system utilizing remote operation technologies, named tele-inverse manufacturing. The feature of the model is that the operations for recycling are carried out via remote operation. Figure 1 illustrates a schematic view of the system located at local sites. In our previous paper, regarding PC recycling, it was estimated that the average labor cost in a developing country or area (e.g. China) would be about 30% of that in Japan. If this analogy can also be applied to mobile phone recycling, the labor cost of mobile phone recycling can be reduced to about 44 JPY. Of course, there will be some additional cost in implementing remote operation; but, at least, this suggests that the total recycling cost can be greatly reduced when the labor cost is an important cost factor. As for mobile phone recycling, another cost estimation has been published by a governmental agency [6].
That estimation covers the transportation cost and the labor cost in the individual stores where used mobile phones are accepted and treated properly to erase personal information, transfer data to a new phone and handle some paperwork; the total is about 72 JPY per unit. Thus, the reduced labor cost of mobile phone recycling obtained by applying remote recycling (44 JPY), plus the other recycling costs (72 JPY), is almost the same as the average material value (112 JPY) shown in Table 3. This estimation roughly suggests that mobile phone recycling can become profitable merely by implementing remote recycling at locations where labor costs are inexpensive (see the short calculation sketched at the end of this section).

Figure 1. Schematic image of remote separation using visual information.

2.2. System proposal: utilization of the internet

A basic system for remote recycling can be imagined using current technologies. But recent progress in information technologies and the spread of PCs and smartphones enable an even more interesting system. In the modern network society, there is a huge labor power, the so-called cloud, behind the internet. For example, there is a subproject of the "Search for Extraterrestrial Intelligence (SETI)" project called "SETI@home" [7]. Any internet user can participate by running a free program that downloads and analyzes radio telescope data to search for extraterrestrial intelligence. The project is free for participants; at the same time, this means that the labor power of the participants is free for the organizer. So, if an attractive scheme, a social significance and a technological set-up can be provided, it will be possible to ask internet users to participate in remote recycling operations. In addition, online PC games are rather common these days. We hereby propose an "online material separation game." If it is possible to develop game-like software which can sort particles into recycling bins and synchronize the vision, the screen and the practical manipulation, the system can become an interesting hobby. And, by this scheme, the labor cost for recycling will be zero. The following problems have to be solved to implement such a system:

- Hardware system consisting of a conveyer, sorting devices, PC, sorting bins, web camera and so on.
- Dismantling system for batteries and LCDs (liquid crystal displays).
- Rough crushing machine for target products.
- Proper and understandable explanation of the social significance of recycling of small-sized e-waste.
- Easy-to-use software which can be downloaded from the project website.
- Attractive scheme to introduce people to the remote recycling operation; a game-like scheme will be suitable here too.
- Algorithm to translate operations on screen into manipulation commands.
- Quality assurance system for when the separation by network users is insufficient.
- Method to avoid demand conflicts.
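Before turning to the technical feasibility, the cost argument of Section 2.1 can be condensed into a short calculation. The following is a minimal sketch in Python; all figures come from the text and Table 3, and the 30% labor-cost ratio is the authors' earlier PC-recycling estimate, so the result is indicative only.

```python
# Back-of-envelope check of the remote-recycling cost argument (Section 2.1).
material_value = 112       # JPY/unit, average recoverable value (Table 3)
labor_cost_japan = 145     # JPY/unit, manual disassembly labor in Japan (Table 3)
remote_ratio = 0.30        # assumed labor-cost ratio at a remote site [5]
other_costs = 72           # JPY/unit, transport and in-store handling [6]

labor_cost_remote = labor_cost_japan * remote_ratio  # about 44 JPY/unit
total_cost = labor_cost_remote + other_costs         # about 116 JPY/unit

print(f"total cost {total_cost:.0f} JPY vs. material value {material_value} JPY")
# Roughly break-even: remote separation brings the recycling cost down to the
# order of the recoverable material value, which is the paper's argument.
```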
3. Technical feasibility of the concept

3.1. Separation characteristics

As the first step in assessing the technical feasibility of remote recycling, experiments were carried out to determine the separation characteristics. In the experiment, the PCB and metallic parts were disassembled manually (Figure 2(a)); the battery and the LCD were dismantled at this time, and these two components were excluded from the following procedure. Then, a transparent UV fluorescent liquid was painted on the PCB and metal parts (Figure 2(b)). As a preparation for the experiment, we had to establish the basic possibility of recognizing PCB-origin and evidently metallic particles. By painting UV fluorescent liquid and lighting a UV lamp afterward, it is possible to verify whether the target particles could be recognized. The disassembled parts were crushed by a rotary cutting mill into particles of about 2–3 mm, and PCB-origin particles and other particles were mixed well. The operator then tried to separate PCB-origin particles and metal particles only by vision. After the separation, the UV light was switched on to verify whether the operator had separated the target particles successfully. Table 4 shows the averages of the separation experiments on 10 samples treated by one operator. "Separated" means the total weight of the particles that were recognized as metals by the operator, and "not-separated" means the weight of the particles that were considered non-metal (plastics). Thus, the "Yes" entries in the "Separated" rows and the "No" entries in the "Not-separated" rows correspond to correctly recognized particles. Figure 2(c) and (d) show the average appearance of separated and not-separated particles. ν in the table indicates the Newton efficiency [8], which is often used in evaluating the separation efficiency of two groups of particles; it can be calculated using Eqs. (1) to (3). ν reaches 0.869, while 0.8 is the usual threshold for judging a physical separation efficient enough.

Table 4. Average results of the visual separation experiments (10 samples, one operator).

Fraction | PCB or metal? | Average weight (g)
Not-separated | Yes | 0.048
Not-separated | No | 1.090
Not-separated total | | 1.138
Separated | Yes | 0.728
Separated | No | 0.081
Separated total | | 0.809
Total weight | | 1.946
ν | | 0.869

$\nu = \gamma_o + \gamma_u - 1 = \gamma_o - (1 - \gamma_u)$ (1)

$\gamma_o = \dfrac{x_o \cdot O}{x_f \cdot F}$ (2)

$1 - \gamma_u = \dfrac{(1 - x_o) \cdot O}{(1 - x_f) \cdot F}$ (3)

where γ_o: Newton ratio of correctly separated particles; γ_u: Newton ratio of correctly not-separated particles; x_o: metal fraction of the separated particles; x_f: metal fraction of all treated particles; O: weight of separated particles; F: total weight of treated particles.

(a) Disassembled (b) Painted (c) Non-separated (d) Separated
Figure 2. Steps of the visual separation experiment.
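To make the computation behind Table 4 concrete, the Newton efficiency can be recomputed from the reported average weights using Eqs. (1)–(3). The following minimal Python sketch (the function and variable names are ours) reproduces ν ≈ 0.869:

```python
# Newton efficiency of a two-way separation, computed from the average
# weights in Table 4 (grams).
def newton_efficiency(metal_sep, plastic_sep, metal_not, plastic_not):
    O = metal_sep + plastic_sep        # weight of the separated fraction
    F = O + metal_not + plastic_not    # total weight of treated particles
    x_o = metal_sep / O                # metal fraction in the separated output
    x_f = (metal_sep + metal_not) / F  # metal fraction in the feed
    gamma_o = (x_o * O) / (x_f * F)              # Eq. (2): metal correctly separated
    loss = ((1 - x_o) * O) / ((1 - x_f) * F)     # Eq. (3): misplaced plastics, 1 - gamma_u
    return gamma_o - loss              # Eq. (1): nu = gamma_o - (1 - gamma_u)

nu = newton_efficiency(metal_sep=0.728, plastic_sep=0.081,
                       metal_not=0.048, plastic_not=1.090)
print(f"Newton efficiency = {nu:.3f}")  # ~0.869, above the usual 0.8 threshold
```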
3.2. Material composition of separated particles

It was indicated that it is basically possible to separate metal-rich particles from other particles using only visual information. This fact alone only means that visual separation is possible; the paper furthermore tried to show that such separation is meaningful. If the concentrations of valuable materials are higher in the separated particles than in the PCB itself, it is possible to say that the visual separation of metal-rich particles is meaningful. And since the cost of rough crushing plus visual separation can be lower than that of manual disassembly when remote operation is utilized, remote recycling can then be considered economically feasible. Table 5 shows the material compositions measured by XRF (X-ray fluorescence) analysis [9]. Figure 3 is a viewgraph based on Table 5.

Table 5. Composition of metals.

Material | Formula | Separated particles (%) | Disassembled PCB (%) | Mixed mobile phone particles (%)
Titanium dioxide | TiO2 | 0.087 | 0.552 | 1.714
Chrome | Cr | 5.438 | 0.387 | 0.134
Nickel | Ni | 5.999 | 1.758 | 0.77
Copper | Cu | 12.431 | 17.873 | 7.659
Aluminum oxide | Al2O3 | 13.164 | 27.355 | 22.22
Silver | Ag | 0.446 | 0.188 | 0.18
Zinc | Zn | 0.882 | 0.668 | 0.391
Tin | Sn | 0.365 | 2.33 | 1.892
Di-iron trioxide | Fe2O3 | 26.160 | 2.062 | 1.645
Total percentage of metals (%) | | 72.846 | 53.640 | 50.151

Figure 3. Comparison of material composition (%) for separated particles, disassembled PCB and mixed particles.

3.3. Discussions

According to Figure 3, the following points can be made:

- Concentrations of Cr, Ni and Fe2O3 in the visually separated particles are much higher than in crushed PCB and in mixed particles of whole mobile phones.
- The concentration of Cu was clearly high in the PCB, and it was difficult to separate Cu-rich particles by vision.
- Al2O3 is contained not only in the PCB but also in other parts of mobile phones, and it was difficult to separate by vision.
- It seems that Cu and Al2O3 are contained in particles that the operator did not recognize as metals; a different strategy will be necessary.
- Sn is also contained not only in the PCB but also in other parts of mobile phones, and it seems that Sn is contained in particles that the operator did not recognize as metals.
- Zn is contained mainly in the PCB, and it seems difficult to recognize Zn-rich particles.
- Ag is contained both in the PCB and in other parts, and it seems possible to separate it by visual information. However, since the contained percentage is small, further measurements will be necessary.

Figure 4. Recoverable monetary value from mobile phones per year, by material (Palladium, Tantalum, Gold, Silver); vertical axis: recoverable money (million JPY), 0–2,500 [11].

As the results show, Fe, Cr and Ni were recognizable by vision. It was assumed that these materials had metallic glosses and were easy to separate. On the other hand, Cu, Al and Sn were included in the particles that the operator did not recognize as metals; not only color information but also some knowledge of mobile phone design might be necessary. At the beginning, the paper named Au, Ag, Pd and Ta as the important target materials for recycling. Figure 4 shows the potentially recoverable value of used mobile phones. Since about 37 million units of mobile phones reached end-of-life in 2011 [10], if we could collect all the used mobile phones, the recovered value would be estimated at 4 billion JPY. The monetary values of these four materials occupy almost half of the total material value of used mobile phones, and the value of Gold occupies about 85% among the four materials. However, the amount of Gold is too small to detect by XRF analysis; among the four important targets, only Silver could be detected. Since this result is not enough to judge whether visual separation is efficient enough, more precise measurements of the material composition are needed to know whether the material values of used mobile phones can be efficiently recovered by visual separation of crushed particles.

4. Conclusions

Because of the newly started recycling legislation for small-sized e-waste, the paper explained that an economically feasible recycling system will be necessary. Since the new legislation does not require consumers to pay a recycling fee, the cost issue is more critical than for large and medium-sized e-waste: the recycling system must be self-profitable. To reduce the labor-driven recycling cost, the authors proposed a concept for implementing remote recycling operations in a previous paper. The basic idea is to replace manual disassembly with rough crushing and manual separation from a remote site where labor cost is relatively inexpensive. In addition, this paper proposed to utilize cloud labor power via the internet. This idea might make it possible to reduce the recycling cost drastically. At the same time, the idea will be helpful in preventing the outflow of e-waste overseas and in securing critical metals in the domestic market.
Considering the technological difficulties of remote operation and practical recycling processes, the paper proposed that the separation of PCB-origin particles can be operated remotely, based on visual information. As the result of experiments to separate metal-rich particles by visual information, it was possible to enrich the concentrations of metals compared with manually disassembled PCB. Although some materials were difficult to recognize only by vision, it was basically proved that remote operation based on visual information is feasible. Since this paper only proposed the basic concept, it is necessary to prototype the sorting system and try to operate the system remotely. Although the prototyping is behind schedule, the basic feasibility of the concept shown in this paper can boost the process. As for the separation of valuable parts, different strategies to separate copper etc. will be necessary. In addition, since materials present in small amounts, such as Gold, sometimes occupy a relatively large percentage of the total material value of a mobile phone, more precise measurement methods must be employed to determine the economic feasibility of the concept. Although there will be many problems to be solved, the authors conclude that remote recycling is a promising way to operate a social system for the material recycling of small-sized e-waste efficiently and profitably.

References

[1] K. Halada, Material Japan 46, 543–548, (2007).
[2] Ministry of Environment, Ministry of Economy, Trade and Industry, Report of the study group about recovery of the rare metal and proper processing of used small household appliances, (2010). (In Japanese)
[3] K. Takahashi, et al., Resource Recovery from Mobile Phone and the Economic and Environmental Impact, J. Japan Inst. Metals, Vol. 73, No. 9, pp. 747–751 (2009). (In Japanese)
[4] http://www.meti.go.jp/press/20100622003/20100622003-2.pdf, accessed 31/03/14. (In Japanese)
[5] M. Matsumoto, et al., Proposal and feasibility assessment of tele-inverse manufacturing, International Journal of Automation Technology, Vol. 3, No. 1, pp. 11–18 (2009).
[6] Ministry of Environment, Ministry of Economy, Trade and Industry, Report of the study group about recovery of the rare metal and proper processing of used small household appliances, (2011).
[7] http://setiathome.ssl.berkeley.edu/, accessed 06/04/14.
[8] S. Aravamudhan, N. Premkumar, S.S. Yerrapragada, B.P. Mani, K. Viswanathan, Separation based on shape Part II: Newton's separation efficiency, Powder Technology, Vol. 39, Issue 1, 1984, pp. 93–98.
[9] B. Beckhoff, B. Kanngießer, N. Langhoff, R. Wedell, H. Wolff, Handbook of Practical X-Ray Fluorescence Analysis, Springer, 2006.
[10] Association for Electric Home Appliances, Annual Report of Home Appliances Recycle – FY2011, pp. 19–21 (2012). (In Japanese)
[11] K. Mishima, N. Mishima, A Basic Study on the Effectiveness of Counterplans to Promote Take-back of Mobile Phones, Proceedings of CIRP/LCE2013, paper 162, 2013.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-461

Proposal for Intelligent Model Product Definition for Meeting the RoHS Directive

José Altair Ribeiro dos SANTOS 1 and Milton BORSATO
Federal University of Technology – Parana, Av.
Sete de Setembro 3165, Curitiba, PR 80230-901, Brazil

1 Corresponding author. Tel.: +55-41-3268-3207; mobile: +55-41-9203-6202; e-mail: jose@nhs.com.br.

Abstract. With increasing environmental awareness in society, manufacturing companies have begun to realize that they can benefit from the integration of environmental considerations into their products and processes. Although the electrical and electronics industries have developed uniquely fast in the world market, their products have typically been a major cause of the continuing deterioration of the environment and depletion of natural resources. Regulatory directives such as the Restriction of Certain Hazardous Substances (RoHS) have been created to prevent the misuse of hazardous materials in product specifications, so companies have been compelled to assess relevant information on material use at the right moment and depth at certain stages of a product's lifecycle. The present paper proposes the application of semantic models for helping companies meet the requirements established by the RoHS Directive. A model, created in the form of an ontology, establishes semantic relationships between stages of the product lifecycle, product structure and business objects. Business processes modeled in the form of activity and information flows are linked to RoHS requirements, which can be viewed through the generation of reports in the Essential Project open source framework. The resulting semantic model is therefore useful for converting environment-related needs into design requirements through a product development process that addresses the RoHS Directive. A consumer electronics product has been selected for demonstrating the feasibility of the proposed solution.

Keywords. Intelligent Product Description, RoHS, Knowledge-based Engineering, Model-based Enterprise

Introduction

Manufacturing businesses must provide a robust infrastructure for global communication and for customer-focused system design, manufacturing and lifecycle management, in order to design, build, deliver and support innovative products and services that directly address the needs and desires of many customers [1]. The concept of the Integrated Enterprise assumes connection and collaboration between people, systems, processes and technologies to ensure that the right people and the right processes have the right information and the right resources at the right time [1]. For the Next-Generation Manufacturing Technology Initiative (NGMTI) [2], the Model-Based Enterprise (MBE) consists of an integrated, all-digital organization that can support all essential functions. At the same time, increasing environmental awareness has made manufacturing companies realize that the integration of environmental considerations into their businesses brings strategic benefits for both their products and the environment [3].

Electro-electronic products have typically been a major cause of the continuing deterioration of the environment and depletion of natural resources. Directives such as RoHS are imposed on electro-electronics manufacturers to reduce the environmental impact generated by poorly designed products. Meeting the requirements of such directives has become mandatory for a product to remain competitive in the international market [4]. Chandrasegaran et al. [5], Kim et al. [6] and Chen et al. [7] use intelligent models expressed in the form of ontologies for product modeling.
Chandrasegaran et al. [5] describe a model that simulates the manufacture and use of the product. Kim et al. [6] use a definition model to assist in the capture and sharing of information regarding the assembly of a product, fostering collaboration between developers and the production line, which propagates the restrictions and specific facts of the assembly line to the design environment. Chen et al. [7] present a definition model for multi-level assemblies, which enables the transfer of information between different phases of design for manufacturing, using a top-down approach and capturing information at different levels of abstraction. The present work aims to propose an intelligent model that can assist companies in incorporating requirements in the definition phase and using them throughout the product lifecycle. An intelligent model of product definition in the form of an ontology that accommodates specific requirements, such as those related to the RoHS Directive, can potentially enable the creation of more sustainable products. This paper is organized as follows: Section 1 presents the theoretical background; Section 2 explains how the research was conducted; Section 3 shows the main results; and, finally, Section 4 presents conclusions.

1. Theoretical Background

Four topics were considered essential for creating the model definition: Product Lifecycle Metamodels, Product Definition Models, Semantic Models and the RoHS Directive. These concepts and fundamentals bring together the necessary expertise to propose alternative ways to describe a product, whereby specific requirements, such as those related to the RoHS Directive, can be served over a product lifecycle in the context of intelligent manufacturing.

1.1. Product Lifecycle Metamodels

According to Van Gigch [8], models are representations resulting from a process of converting our view of reality. Examples of models range from the plan of a residence to a flowchart that represents an algorithm or a foam mock-up of a new type of vehicle. Distinct modeling techniques can be applied for defining the steps to be followed during product development and beyond, but it is somehow necessary to determine the "whats and hows" of the process. Metamodels specify how specific models are to be constructed; in other words, metamodels are models of models. Many authors have proposed product lifecycle metamodels, although not always under the above-cited concept [9,10]. Most of them keep similar features, such as a phase-gate approach and deliverables at each gate. The present work is based on the metamodel presented by Back et al. [10], not only because it provides the necessary framework for accommodating activities related to the elicitation of RoHS requirements and their application, but also because it has been largely used in other scientific works in the context of the Brazilian industry.

1.2. Product Definition Models

Conceiving an intelligent model for product definition means deploying a tool in a computational medium that supports all stages of the product lifecycle. It links diverse perspectives of the product, such as those relating to manufacturing, functional descriptions and requirements for meeting regulatory demands. It is a specific model for representing a product, one which is able to bridge specifications brought up in the Informational Design phase to geometric details that are to be used in the Detailed Design phase [5].
1.3. Semantic Models

A semantic model is a set of information in the form of an ontology expressed in the Resource Description Framework (RDF) language [11], which can be provided in an integrated implementation as a metamodel. This metamodel sets an information standard for a particular market segment, providing an integrated resource structure for business operations. An integrated semantic model based on real-world problems would most likely support the integration of related operational data in a given business environment. Semantic information models provide an abstraction of the real world of business and assets in a graphical model. Through it, software applications can access information from disparate systems with multiple access methods; the model can be consulted through services or through the implementation of a query interface [12].

1.4. The RoHS Directive

The RoHS Directive limits the use of the substances Lead (Pb), Cadmium (Cd), Mercury (Hg), Hexavalent Chromium (Hex-Cr) and flame retardants such as Polybrominated biphenyls (PBB) and Polybrominated diphenyl ethers (PBDE). Electronic products and their supply chain should have these substances monitored and receive a certificate of conformity in order to enter 25 European Union countries and several US states [13]. According to Gong and Chen [14], the first official research related to RoHS in Taiwan, in 2004, showed that 86% of 272 products from companies in the area of personal computers, servers and mobile telephony implemented management processes geared to meeting environmental regulations at some level. The following section shows the steps followed in the present research.

2. Methodological Aspects

The construction of the model was based on the synthesis of two methods for creating ontologies, Kactus and Grüninger & Uschold, as demonstrated in Santos [15]. According to Kactus, some processes are necessary for creating ontologies, like requirements specification, conceptual modeling and integration. For Grüninger and Uschold, other steps are necessary to build ontologies, such as addressing motivation scenarios, informal competency questions, formal terminology, formal competency questions, formal axioms and theorems. A combination of both methods, organized in six steps, was selected to conduct the present work. In the first step, reference models used in product development were investigated, and a model was chosen to serve as the basis for constructing the lifecycle phases to be followed in designing electronic products. Forms of representation that could be used to represent knowledge in each phase of the selected reference model were investigated. Business processes to be worked on at each stage of the lifecycle were defined. A macro diagram representing the lifecycle model, the forms of representation and the business processes linked to them was conceived. In the second step, information models to support the different phases of the product lifecycle were created. Technologies to support the application of semantic models were also investigated at this stage. In the third step, three implementation options were considered. The first option was the use of information artifacts, such as those proposed by the Open Applications Group (OAGi) [16], which are components of information models described in eXtensible Markup Language (XML).
Although information artifacts in XML allow aggregating and integrating features, very little or no semantics is kept in their relationships, which is undesirable for a knowledge-based integration solution. Another approach investigated was the use of artifacts with business modeling in Business Process Modeling Notation (BPMN) [12,17]. However, implementing this form of representation would require the construction of a corresponding OWL (Ontology Web Language) [18] model based on BPMN elements in an ontology editor such as Protégé, and the presentation of results would be limited to queries on the relationships of the ontology classes. The last approach investigated was the use of a semantic model created in the Essential framework [19]. This framework allows the complete modeling of the proposal, as it supports the Enterprise Architecture concept [1]. A report generator coupled to the framework makes it possible to visualize results more clearly. Figure 1 shows the Essential framework structure, its relationships and the role of each component.

Figure 1. Framework Essential.

In the fourth step, ontology components were modeled in Protégé [20] within the Essential framework. In the fifth step, individuals were added to the ontology based on the information gathered in the first step. In the sixth and final step, the ontology was tested against six competence questions for validating the model. The next section presents the results obtained.

3. Results

The product chosen for the model validation step was a 300 VA voltage stabilizer. Through the theoretical development of this equipment, to be manufactured by a company's Department of Electronics, a list of materials was put together, and RoHS requirements were addressed for each component. The definition model is built in the Essential framework as a set of semantic definitions of knowledge related to the company's work organization, using a specific setup in conjunction with Protégé. The use of the two tools allows the representation of concepts through classes of individuals allocated hierarchically in metaclasses made available by the Essential framework. Three first-order classes, represented by pre-existing metaclasses, were used:

- Application_Layer: used to represent the behavior of the systems that are in use in the organization, i.e. business, function-specific applications;
- Business_Layer: used to represent information relating to business processes belonging to the model, i.e. business processes, characters involved, flows, tasks, activities, sub-processes, resources, skills, methods and tools to assist in the processes;
- Information_Layer: used to represent information related to data handled by the application and business layers, i.e. parts, materials, standards and assemblies.

Other existing classes were used to accommodate information that would be important in the product definition model, such as:

- Application_Service: in this class, an individual named "Model_of_Product_Definition_for_Meeting_the_RoHS_Directive" was created; this object is the definition model, and thus class Application_Service is the main class of the ontology;
- Application_Function: in this class, individuals that represent the macro phases of the product lifecycle model, based on the metamodel proposed by Back et al. [10], were created;
- Business_Process: in this class, individuals representing the business processes which manipulate the product definition model were created;
- Application_Provider: this class is used to represent the lifecycle metamodel and the macro phases that compose it;
- Application_Function_Implementation: in this class, individuals that represent the activities for meeting the RoHS Directive requirements, referred to each macro phase of the product lifecycle metamodel, were created.

3.1. Example of Model Application

An exploratory procedure was developed aiming to validate the model by answering six questions that summarize its main functions. The questions, answered by consulting the ontology in Protégé, are described as follows.

I – How is a product recognized as RoHS compliant?

The requirements for meeting the directive, which lead to RoHS compliance, are described in the model through individuals of class Data_Object_Attribute. The query presented in Figures 2 and 3 shows the requirements that must be met before an assembly is approved.

Figure 2. Query for accessing RoHS compliance requirements.

Figure 3. Query results with the list of requirements to be met.

II – How can one connect a RoHS requirement to a particular component?

To answer this question, the model uses class Information_Layer and subclass Information_View, which represents structured information such as a bill of materials (BOM). An individual can be created in the model using this class, under the name "List of materials 300VA Stabilizer." Class Information_View has a built-in ontology axiom that connects it to class Data_Object. Class Data_Object contains information about mechanical and electronic components, which can be used to feed the BOM. The framework interface can be used for inserting components into the BOM. Figure 4 shows how the model supports the inclusion of class Data_Object instances.

Figure 4. Access to the Data_Object class to populate the Bill of Materials through the Essential menu.

Information on components, such as dimensional specifications and RoHS requirements, is instantiated through class Data_Object_Attribute. This class is related to instances of class Data_Object, as shown in Figure 5.

Figure 5. RoHS requirement statement regarding the component category.

The processes related to standards and procedures for meeting RoHS requirements (e.g. acid digestion by microwaves) are related to class Data_Object instances through the Supporting Data Objects field. Class Business_Process individuals can express the relationship a given process has with an individual of class Information_View, such as the Bill of Materials (BOM), through field Information_Used, as shown in Figure 6.

Figure 6. Relationship between processes for approval of components and items in the BOM.

The relationships previously mentioned allow the creation of a BOM that is related both to processes and to RoHS requirements. Thus, one can model a BOM with part numbers, RoHS requirements and process information, which are necessary for component validation. In other words, queries may be used for selecting components and sub-assemblies present in the BOM. The query in Figure 7 shows the components of the 300 VA stabilizer, obtained by applying a filter on slot Data_Category with the value "Used in stabilizing 300VA".

Figure 7. Query with components of the BOM of the 300VA Stabilizer.
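The queries above run inside the Essential/Protégé toolchain, whose metaclasses are not reproduced here. Purely as an illustration of the underlying idea, the sketch below renders one hypothetical BOM item as RDF and poses a competence-question-style SPARQL query using the rdflib Python library; every name in it (the namespace, the properties and the diode individual) is our assumption, not the paper's model.

```python
# Hypothetical RDF rendering of one BOM item linked to a RoHS restriction,
# queried with SPARQL via rdflib. All identifiers are illustrative only.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/rohs#")
g = Graph()

g.add((EX.Diode1N4007, RDF.type, EX.Data_Object))
g.add((EX.Diode1N4007, EX.partOf, EX.Stabilizer300VA_BOM))
g.add((EX.Diode1N4007, EX.restrictedSubstance, EX.Pb))
g.add((EX.Pb, EX.maxConcentrationPpm, Literal(1000)))  # RoHS limit for Pb: 0.1%

# Which items in the BOM carry which restriction, and with what limit?
results = g.query("""
    PREFIX ex: <http://example.org/rohs#>
    SELECT ?item ?substance ?limit WHERE {
        ?item ex:partOf ex:Stabilizer300VA_BOM ;
              ex:restrictedSubstance ?substance .
        ?substance ex:maxConcentrationPpm ?limit .
    }""")
for item, substance, limit in results:
    print(item, substance, limit)
```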
III – How can one tell if a component in an assembly meets the RoHS Directive? What are the applicable assessment methods and tools to determine whether an item complies with the RoHS Directive?

This question is answered by listing the product's assemblies and sub-assemblies, which are expressed through class Information_View, as shown in Figure 8.

Figure 8. Relationship among individuals of the Information_View class, representing assemblies and sub-assemblies.

Components within the assembly are individuals of class Data_Object. They have relationships with class Data_Object_Attribute individuals, which store information about RoHS requirements and other dimensional information. Figure 9 shows attributes belonging to the diode component, such as its RoHS requirements, its documentation for dimensional verification and the confirmation that it complies with the directive.

Figure 9. Relationship between individuals of the Data_Object and Data_Object_Attribute classes that store RoHS and dimensional specification requirements.

Procedures, methods and tools for component conformity assessment are modeled as individuals of class Business_Process, as shown in Figure 10.

Figure 10. Procedures modeled as individuals of the Business_Process class.

The processes are related to the BOM through field Information_Used, which contains an axiom relating classes Information_View and Business_Process.

IV – How can one find out about materials or processes that replace substances restricted by the RoHS Directive?

The answer to this question may be obtained by the query pictured in Figure 11. Through the query, one can select individuals of class Data_Object for obtaining processes and materials that replace harmful substances. By clicking on a substance or process in that category, one gains access to the substance that replaces it.

Figure 11. Query with materials and processes that replace the use of harmful substances.

V – How can one tell if a product is included (or not) in the RoHS Directive?

The answer to this question is given through a query in which one can select, by class Data_Category, individuals of class Data_Object representing items included (or not) in the RoHS Directive. Items are individuals of class Data_Object; their classification is made through the Data_Category slot. Figure 12 shows examples of product categories that are not covered by the directive (e.g. photovoltaic panels), and Figure 13 shows product categories that are covered (e.g. toys).

Figure 12. Query demonstrating categories of products that are not included in the RoHS Directive.

Figure 13. Query demonstrating product categories that are covered by the RoHS Directive.

VI – How can one find out which substances should be restricted in a product for meeting RoHS Directive requirements?

The query in Figure 14 shows which substances, restricted by the RoHS Directive, are present in a given product. The query relates individuals of class Data_Object with individuals of class Data_Category; Data_Category is a property of class Data_Object.

Figure 14. Query that lists restricted substances, in accordance with the RoHS Directive.

4. Final Remarks

Currently, manufacturing companies require assistance from certifying laboratories to conform to environmental directives. Professional consultants are often needed when products are ready, for homologation purposes.
This work aims to contribute by making directive requirements available early in a given project and by tracking the product until the end of its lifecycle. The results obtained from the information model were illustrated by reports generated in the Essential framework. The present work demonstrates how smart models can be used throughout the product lifecycle to better define products. The integration of the information needed for complying with the RoHS Directive in the form of product lifecycle metamodels can prove to be a viable means for the generation of sustainable products. The present work contributes not only to industrial best practices regarding the integration of RoHS-related information for product definition, but also to the scientific community, as this project can be used as a basis for structuring information systems such as Product Lifecycle Management (PLM), Enterprise Resource Planning (ERP), project management software and computer-aided design and engineering software. The proposed model is not intended to replace the work of laboratories and consultants, but to add valuable knowledge to the product development process. Even with the use of the intelligent model, certification work by accredited laboratories will still be required, but the possibilities to define a product according to the directive will be expanded. Products defined in the model may have specification changes at any time, so the procedures described in the proposed model can be applied as necessary to validate compliance with the directive. In the case of changes in the text of the directive, the model can be upgraded simply by editing the corresponding ontology using Protégé. The proposed model may be used, together with other unambiguous description models, for proving the concept of model-based organizations. New software applications are still needed to empower the use of intelligent models for embedding design rationale and support for decision-making, so that designers are freed for the task of creation in its essence.

References

[1] V. Fortineau, T. Paviot, and S. Lamouri, Improving the interoperability of industrial information systems with description logic-based models—The state of the art, Computers in Industry 64 (2013), 363–375.
[2] NGMTI, Strategic Investment Plan for the Model-Based Enterprise, Next Generation Manufacturing Technologies Initiative (2005).
[3] A. Brescansin, Regulamentação Ambiental e Estratégia: Uma Análise da Adoção à Restrição do uso de Substâncias Perigosas da Diretiva Europeia RoHS por Fabricantes de Computadores Pessoais Estabelecidos no Brasil, in: II Simpósio Internacional de Gestão de Projetos (II Singep), São Paulo, SP, Brasil, 2014.
[4] L.H.d. Costa, A diretiva RoHS e os desafios para seu atendimento no setor eletroeletrônico: Estudo de Caso em Empresa de Eletrodomésticos – Linha Branca, MBA em Gestão Ambiental e Práticas de Sustentabilidade, Instituto Mauá de Tecnologia, 2011.
[5] S.K. Chandrasegaran, K. Ramani, R.D. Sriram, I. Horváth, A. Bernard, R.F. Harik, and W. Gao, The evolution, challenges, and future of knowledge representation in product design systems, Computer-Aided Design 45 (2013), 204–228.
[6] K.-Y. Kim, D.G. Manley, and H. Yang, Ontology-based assembly design and information sharing for collaborative product development, Computer-Aided Design 38 (2006), 1233–1250.
[7] X. Chen, S. Gao, Y. Yang, and S.
Zhang, Multi-level assembly model for top-down design of mechanical products, Computer-Aided Design 44 (2012), 1033–1048.
[8] J.P. Van Gigch, System Design Modeling and Metamodeling, Springer, 1991.
[9] R.G. Cooper, Stage-gate systems: a new tool for managing new products, Business Horizons 33 (1990), 44–54; G. Schuh, H. Rozenfeld, D. Assmus, and E. Zancul, Process oriented framework to support PLM implementation, Computers in Industry 59 (2008), 210–218.
[10] N. Back, A. Ogliari, A. Dias, and J.C.d. Silva, Projeto integrado de produtos: Planejamento, concepção e modelagem (2008).
[11] O. Lassila and R.R. Swick, Resource Description Framework (RDF) model and syntax specification, (1999).
[12] G. Vetere and M. Lenzerini, Models for semantic interoperability in service-oriented architectures, IBM Systems Journal 44 (2005), 887–903.
[13] L.A. Cairns, Ensuring RoHS 2 success with agility, Solid State Technol. 56 (2013), 33.
[14] D.-C. Gong and J.-L. Chen, Critical control processes to fulfil environmental requirements at the product development stage, International Journal of Computer Integrated Manufacturing 25 (2012), 457–472.
[15] K.C.P.d. Santos, Utilização de ontologias de referência como abordagem para interoperabilidade entre sistemas de informação utilizados ao longo do ciclo de vida de produtos, (2013).
[16] N. Ivezic, B. Kulvatunyou, and V. Srinivasan, On Architecting and Composing Through-life Engineering Information Services to Enable Smart Manufacturing, Procedia CIRP 22 (2014), 45–52.
[17] N. Lohmann, Compliance by design for artifact-centric business processes, Information Systems 38 (2013), 606–618.
[18] I. Horrocks, B. Parsia, P. Patel-Schneider, and J. Hendler, Semantic web architecture: Stack or two towers?, in: Principles and Practice of Semantic Web Reasoning, Springer, 2005, pp. 37–41.
[19] The Essential Project, 2015.
[20] J. Tao, A.V. Deokar, and O.F. El-Gayar, An ontology-based information extraction (OBIE) framework for analyzing initial public offering (IPO) prospectus, in: System Sciences (HICSS), 2014 47th Hawaii International Conference on, IEEE, 2014, pp. 769–778.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-471

Towards a Green and Sustainable Software

Hayri ACAR a, Gülfem I. ALPTEKIN b, Jean-Patrick GELAS c, Parisa GHODOUS a
a University of Lyon, LIRIS, France
b Galatasaray University, Turkey
c ENS Lyon, LIP, UMR 5668, France

Abstract. Information and Communication Technologies (ICTs) are responsible for around 2% of worldwide greenhouse gas emissions [1]. At the same time, the use of mobile devices (smartphones, tablets, etc.) is continually increasing. Due to the accessibility of the Internet and cloud computing, users will use more and more software applications, which will further increase gas emissions. Thus, an important research question is: how can we reduce or limit the energy consumption related to ICT and, in particular, related to software? For a long time, proposed solutions focused only on hardware design; in recent years, however, the software aspects have also become important. Our first objective is to compare the studies in the research area of energy-efficient/green software.
Relying on this survey, we will propose a methodology to measure the energy consumed by software at runtime.

Keywords. Green Software, Green IT, Sustainable Software, Energy Efficiency.

Introduction

The availability of various services (e.g. eBank, eHospital) through the cloud has facilitated daily life. It allows energy and money savings by preventing people from having to travel to accomplish a small task (for instance, checking one's bank account). Furthermore, the availability of these services through mobile devices and their wide usage has a positive impact on energy saving. It is also worthwhile to consider technology addicts developing or using applications and software when estimating the growing impact of software on energy consumption. The emission of greenhouse gases is being reduced thanks to technological progress; however, the increasing number of application users causes additional consumption. Therefore, in order to achieve better efficiency, developers need to be guided to optimize their developments and establish green software. In this paper, we present a state of the art for these research questions by summarizing and comparing related works in this field. We aim at establishing an estimation model for the consumed energy, and we then investigate its performance and accuracy on a development project. The model will be used as an energy consumption measurement tool that guides developers in building greener software.

1. Related work

Hardware methods to measure energy consumption are, in most cases, based on measurement instruments such as power meters or printed circuits. Thus, they cannot measure virtual machines, whose usage is becoming more widespread. In addition, the use of such equipment itself consumes energy and adds cost, which is not desirable in a measurement model. Software tools, on the other hand, are based on computational models of energy consumption and provide an estimation. A lack of accuracy and comprehensiveness can cause incorrect and unsatisfactory results, because simplifications adopted in estimating the energy consumption for a specific area will not be valid in another area. Therefore, when using such a tool, it is necessary to be more precise by taking into account all components of a computing device, such as a PC, tablet, smartphone or server, that are likely to consume energy. With these ideas in mind, we compiled a list of energy measurement tools that have been proposed in recent years (Table 1).

1.1. Joulemeter

The energy consumption of a virtual machine, a computer or a piece of software is estimated by Joulemeter, which measures the hardware resources (CPU, disk, memory, screen, etc.) that are used [2]. The tool performs an auto-calibration by retrieving the values of the power consumed in the idle state, at the maximal and minimal frequencies, and by the monitor; these values can also be entered manually. The calculations are then made using these parameter values. The energy consumption of the main components and the total power supposed to be consumed by the device are visualized. The tool can also measure the consumption of one specific process: the user enters the name of the process, which can be found in the task manager's processes tab. Thus, in real time, the variation of the power due only to the CPU can be observed for this given process. It is possible to log the power consumed by this particular process to a file. This tool only allows estimating the energy consumption of a process.
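As a rough illustration of what such a per-process estimate involves (this is not Joulemeter's actual, calibrated algorithm), the following Python sketch attributes a share of an assumed CPU package power to a single process using psutil; the 35 W peak value is a placeholder.

```python
# Crude per-process CPU power attribution, Joulemeter-like in spirit only.
import sys
import psutil

P_CPU_MAX = 35.0  # watts at full load; assumed, normally obtained by calibration

proc = psutil.Process(int(sys.argv[1]))  # PID passed on the command line
for _ in range(5):                       # five one-second samples
    # Process.cpu_percent can reach 100 * ncpu, so normalize by the core count.
    share = proc.cpu_percent(interval=1.0) / 100.0 / psutil.cpu_count()
    print(f"~{P_CPU_MAX * share:.2f} W attributed to PID {proc.pid}")
```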
The power consumed by this particular process can be recorded to a file. Note, however, that the tool only estimates energy consumption at the granularity of a whole process.

1.2. vEC

Virtual Energy Counters (vEC) estimates the energy consumption of a given process [3]. The main components, such as the cache, the main memory and the buses, are considered in order to provide a quick estimate of the energy consumption. The tool is built on top of the Perfmon user library for the UltraSPARC platform, and the authors argue that it is easily extendable to other platforms.

1.3. Orion

Orion is also a tool for estimating the energy consumed by an application [4]. Compared with the other tools, it takes into account the communication components, which are neglected in many cases in favour of the processor and the memory alone. Orion is a suite of dynamic and leakage power models for the various architectural components of on-chip networks, developed to enable rapid power-performance trade-offs at the architectural level.

1.4. Span

Span provides live, real-time power information for the phases of running applications [5]. Based on a power model, this approach aims to help developers by synchronizing power dissipation with the source code: external API calls correlate the power estimation with the source code of the application. This work differs from the others in that the authors study energy consumption at the source-code level. Unfortunately, developers must instrument the code manually.

1.5. PowerAPI

PowerAPI estimates the energy consumption of processes in real time, using information collected from the hardware (CPU and network) through the operating system [6]. The tool estimates the power consumption of each running application based on its process Id. It is limited to the power consumption of the CPU and the network card, and does not take the disk or the memory into account.

1.6. Other energy estimation tools

This area of study is new, and software-based tools for estimating energy consumption are still limited. Existing tools only measure the energy consumption of a program as a whole, without providing specific details, and most often some components are neglected during the measurements. Other tools include:
• a framework for reducing power consumption, proposed by P.K. Gupta and G. Singh [7];
• SimplePower, a cycle-accurate simulator that estimates CPU power consumption [8];
• GREENSOFT, a method for measuring power consumption that combines a hardware part (a power meter) with a software part (a data aggregator and evaluator) in order to produce a report [9].

Table 1 lists, for each tool, the power model used to estimate energy consumption, and notes the limits on accuracy that follow from its incompleteness. In the next section, an improved tool based on a power model that takes all components into account is defined.

Table 1. Energy consumption measurement software tools.

Tool | Power model | Symbols | Remarks
Joulemeter | E = E_cpu + E_memory + E_disc | E_cpu, E_memory, E_disc: CPU, memory and disk energy usage | Only estimates the energy consumption of an application as a whole.
vEC | E = E_bus + E_cell + E_pad + E_main | E_bus: bus energy; E_cell: cache energy; E_pad: pad energy; E_main: main memory energy | Limited to estimating the power consumption due to memory.
Orion | E = E_read + E_write | E_read: read energy; E_write: write energy | Communication components are considered.
Span | P(t_j, f_i) | P: power dissipation; f: CPU frequency; t: training benchmarks | Code must be added manually to the software to show which parts of the code are involved in the energy consumption.
PowerAPI | P = P_comp + P_com | P_comp: CPU power; P_com: power consumed by the network card for transmitting the software's data | Only the CPU and the network are considered.
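To make the component-sum models of Table 1 concrete, the following is a minimal sketch, in the Java used later for TEEC, of how such a model is evaluated; the class names and sample figures are illustrative assumptions, not code from any of the surveyed tools.

// Minimal sketch of a component-sum power model in the spirit of Table 1.
// Names and sample values are illustrative, not taken from any surveyed tool.
public final class ComponentSumPowerModel {

    // Per-component power samples, in watts, for one measurement interval.
    record PowerSample(double cpu, double memory, double disk, double network) {}

    // Total power is the sum of the modelled components; a component that is
    // absent from a given device simply contributes zero.
    static double totalPowerWatts(PowerSample s) {
        return s.cpu() + s.memory() + s.disk() + s.network();
    }

    // Energy over an interval: E = P * dt (joules = watts * seconds).
    static double energyJoules(PowerSample s, double intervalSeconds) {
        return totalPowerWatts(s) * intervalSeconds;
    }

    public static void main(String[] args) {
        PowerSample sample = new PowerSample(12.5, 2.1, 1.4, 0.8); // assumed values
        System.out.printf("P = %.2f W, E over 1 s = %.2f J%n",
                totalPowerWatts(sample), energyJoules(sample, 1.0));
    }
}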
2. Proposed Software Model

2.1. Green process

Every development process for a computer program follows a specific sequence of phases in order to complete the project. After each phase, a green analysis step can be inserted to check whether that phase has respected all the criteria that allow energy consumption to be reduced. If the criteria of a phase are not validated by the green analysis, then, depending on which specifications were not met, development returns to the previous step, or even all the way back to the requirements analysis step.

Figure 1. Green software engineering process.

The process described in [10] captures the progress of a development project comprehensively; our descriptive diagram is offered in Figure 1.

• Requirements: the first step in building a software product. This stage describes the tasks that the product will perform; the aim is to meet customer demands.
• Design: the defined requirements are used to create the system architecture. The classes and the relationships among them are defined at this stage.
• Implementation: the program is implemented according to its design. Developers should choose the most appropriate programming language.
• Tests: this step checks whether the software meets its requirements and uncovers faults or defects. The tests are defined at the end of the requirements phase (QCHP), before the design and implementation steps, to show that the specifications have been understood. Using different testers allows developers to see whether the requirements are correct and consistent. The energy consumption measurement tool is used here to determine whether the program can be improved.
• Usage: this step defines how the software product can be used by the user in a green manner. The responsibility belongs to the user, but also to the engineers themselves. The user should be trained to use the software, because improper handling can cause errors in the program.
• Maintenance: newer versions or enhancements usually involve changes, which the developers need to handle; they should also be aware that the cost is proportional to the energy wasted. Several types of program errors force a return to the implementation phase, and sometimes more complicated errors force the developer back to the first step of requirements analysis. The maintenance process must be carried out in the most energy-efficient manner.
• Disposal: software and hardware must be replaced when keeping them up to date is no longer profitable, when they are no longer used, or when they have become obsolete. This step considers both the software and the hardware running it; disposing of old hardware also consumes energy.
• Green analysis: this step can be added at the end of each of the above in order to improve energy efficiency; it evaluates the greenness of the software.
2.2. Power model

Every energy consumption estimation tool is based on a power model that takes different electronic components into consideration, depending on its area of operation. In our case, we establish a power model that takes into account all the components of the device, even those whose consumption is negligible, so that our tool can be generic. If a component does not exist in a particular case, its consumption is simply set to zero. Moreover, we base the formulas on parameters supplied by the provider, in order to facilitate the calculations.

The power consumed by the software can be separated into a static and a dynamic part, as given in Eq. (1):

P_{software} = P_{software,dynamic} + P_{software,static}   (1)

The part drawn through the hardware components can be expanded as follows:

P_{software,dynamic} = P_{CPU,dynamic} + P_{CPU,static} + P_{memory,dynamic} + P_{memory,static} + P_{disk,dynamic} + P_{disk,static} + P_{network,dynamic} + P_{network,static}   (2)

Integrating Eq. (2) into Eq. (1), we obtain:

P_{software} = P_{CPU,dynamic} + P_{CPU,static} + P_{memory,dynamic} + P_{memory,static} + P_{disk,dynamic} + P_{disk,static} + P_{network,dynamic} + P_{network,static} + P_{software,static}   (3)

Regrouping the static and the dynamic terms of Eq. (3), we deduce the following equations:

P_{static} = P_{CPU,static} + P_{memory,static} + P_{disk,static} + P_{network,static} + P_{software,static}   (4)

P_{dynamic} = P_{CPU,dynamic} + P_{memory,dynamic} + P_{disk,dynamic} + P_{network,dynamic}   (5)

As a result, Eq. (1) can be rewritten as:

P_{software} = P_{dynamic} + P_{static}   (6)

The static power is fixed by the hardware components as produced by the manufacturer and cannot be improved in software; we are therefore interested only in the dynamic power consumed by the software, and we establish a power measurement formula for each component.

2.2.1. CPU

CPU power consumption depends on several factors. It is approximately proportional to the CPU frequency and to the square of the CPU voltage:

P_{CPU} = C \cdot V^2 \cdot F   (7)

where C is a constant depending on the capacitance, V is the voltage and F is the frequency. Since we only want the power consumed by a given program, the CPU usage percentage of its process Id is determined and multiplied by the total consumed power:

P_{CPU,id} = P_{CPU} \cdot N_{id} / 100   (8)

where N_{id} is the CPU usage (in percent) of the software. For example, if the CPU as a whole draws P_{CPU} = 20 W and the monitored process accounts for N_{id} = 30, Eq. (8) attributes 20 × 30/100 = 6 W to that process.

2.2.2. Other components

For this preliminary study, the observations are limited to the power consumed by the CPU, but our energy measurement tool can be extended quickly and easily to the other components in the near future.

2.3. TEEC (Tool to Estimate Energy Consumption): Design & Implementation

According to [6], Java is the language with the least energy consumption during the compilation and execution stages; Java is therefore chosen as the development language. The Sigar library [11] provides information about CPU usage, including the usage percentage of each process and the number of cores used; the Id of the running process can thus be identified and retrieved. In addition, global variable data providers are set up, which allow the energy to be estimated and the corresponding value to be assigned. Java Agents are also utilized: these are software components that provide instrumentation capabilities to an application, such as redefining the content of a class as it is loaded at run time.
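The following is a minimal sketch of such a TEEC-style agent, assuming the Sigar API for per-process CPU usage; apart from premain(), which the Java agent mechanism mandates, the class name, the sampling loop and the assumed whole-CPU power value are illustrative, not the authors' code.

import java.lang.instrument.Instrumentation;
import org.hyperic.sigar.Sigar;
import org.hyperic.sigar.SigarException;

// Sketch of a measurement agent: premain() starts a daemon thread that
// periodically applies Eq. (8) to the current process. Illustrative only.
public final class TeecAgent {

    // Assumed whole-CPU power in watts, standing in for PCPU of Eq. (7).
    private static final double P_CPU_WATTS = 20.0;

    public static void premain(String args, Instrumentation inst) {
        Thread sampler = new Thread(TeecAgent::sampleLoop, "teec-sampler");
        sampler.setDaemon(true);
        sampler.start();
    }

    private static void sampleLoop() {
        Sigar sigar = new Sigar();
        try {
            long pid = sigar.getPid(); // Id of the monitored process
            while (true) {
                // Sigar reports the process CPU share as a fraction in [0, 1];
                // multiplied by 100 it plays the role of Nid in Eq. (8).
                double nId = sigar.getProcCpu(pid).getPercent() * 100.0;
                double pCpuId = P_CPU_WATTS * nId / 100.0; // Eq. (8)
                System.out.printf("CPU share %.1f%% -> %.2f W%n", nId, pCpuId);
                Thread.sleep(500);
            }
        } catch (SigarException | InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}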
Concretely, coding a Java agent requires writing a Java class that has a premain() method with the following signature:

public static void premain(String args, Instrumentation inst)

The manifest file MANIFEST.MF has to contain at least:

Manifest-Version: 1.0
Premain-Class: package.Agent

To run the agent, the following command is used:

java -javaagent:Agent.jar -cp folder/sigar.jar package.MainApplication

The model can be illustrated as in Figure 2.

Figure 2. Operation of the proposed power model.

3. Experiments

The proposed tool is first tested with a program that requires a lot of computation, and therefore makes heavy use of the CPU. As the proposed tool only measures the power consumed by the CPU, the measurement is then at its most precise and accurate. The Fibonacci sequence is implemented, i.e. the sequence of integers in which each term is the sum of the two preceding terms. The CPU information obtained with the Sigar library on our machine is given in Figure 3.

Figure 3. CPU information obtained with Sigar.

Furthermore, the task manager is observed before and after the execution of the program to demonstrate that only the CPU is impacted: CPU usage increases from a few percent to around thirty percent, stays at that level until the end of the program's execution, and then returns to a few percent.

With the proposed tool, TEEC, the power consumption of the Fibonacci sequence is estimated for both the recursive and the iterative method. The generated test computes the first 45 values of the Fibonacci sequence with the recursive method; for the iterative method, the first 5000 values are computed (both variants are sketched at the end of this section). The results are represented in Figure 4.

Figure 4. Power consumption of the Fibonacci sequence with TEEC.

The results are compared with those of the Joulemeter application for the particular process, identified by its Id and name (Figure 5).

Figure 5. Power consumption of the Fibonacci sequence with Joulemeter.

First, it can be observed that quite similar results are obtained for the running application, which shows the effectiveness of the proposed tool and computational model. Moreover, the results reveal that the iterative method is quicker and consumes less power than the recursive method. As future work, the measurements will be validated on other applications to demonstrate the precision and accuracy of the proposed model.
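For reference, the two Fibonacci variants compared above can be sketched as follows; this is an illustrative reconstruction, since the paper does not list its test code.

// Illustrative reconstruction of the two Fibonacci variants measured with
// TEEC; the paper does not list its test code, so all names are assumptions.
public final class FibonacciBench {

    // Recursive variant: exponential time, hence a heavy CPU load for n = 45.
    static long fibRecursive(int n) {
        return n < 2 ? n : fibRecursive(n - 1) + fibRecursive(n - 2);
    }

    // Iterative variant: linear time, cheap even when called for the first
    // 5000 values (long overflows past fib(92), which does not affect the
    // power comparison, only the numeric results).
    static long fibIterative(int n) {
        long a = 0, b = 1;
        for (int i = 0; i < n; i++) {
            long next = a + b;
            a = b;
            b = next;
        }
        return a;
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 45; i++) fibRecursive(i);   // recursive test
        for (int i = 1; i <= 5000; i++) fibIterative(i); // iterative test
    }
}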
4. Conclusion and Perspectives

Our contribution to the power measurement literature will continue with improvements to the estimation of the consumption of the other components, such as the memory, disk and network, which are neglected in the related models in the literature. This will give a higher accuracy in estimating the energy consumption of a program. Using Java agents, methods will be re-instrumented automatically in order to observe their energy consumption, and we will seek to locate more precisely the most energy-intensive pieces of code in each function, to help developers optimize their code.

The similar energy estimation tools in the literature have been analyzed in this paper. The research area of green software development is relatively new, and most of the tools only provide an estimation of the energy consumption of an application without involving the source code. Moreover, the recent tools that have begun to take the source code into account do not consider all the components that consume energy and/or require the code to be instrumented manually; hence these tools lack precision and are difficult to use. Following this state of the art, an energy consumption estimation tool has been proposed. It has been implemented so as to measure only the consumption due to the CPU, but it can be extended to other components quickly and easily in future studies. The proposed tool will be improved further, and we plan to dynamically identify the locations of the most energy-consuming pieces of code, allowing developers to optimize their own code and obtain greener software.

References

[1] S. Mingay, Green IT: The New Industry Shock Wave, Gartner, Presentation at Symposium/ITXPO Conference, 2007.
[2] A. Kansal, F. Zhao, J. Liu, N. Kothari and A. Bhattacharya, Virtual Machine Power Metering and Provisioning, ACM Symposium on Cloud Computing (SOCC), 2010.
[3] I. Kadayif, T. Chinoda, M. Kandemir, N. Vijaykrishnan, M.J. Irwin and A. Sivasubramaniam, vEC: Virtual Energy Counters, ACM SIGPLAN-SIGSOFT Workshop on Program Analysis for Software Tools and Engineering, 2001, 28–31.
[4] H.-S. Wang, X. Zhu, L.-S. Peh and S. Malik, Orion: A Power-Performance Simulator for Interconnection Networks, 35th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO-35), 2002.
[5] S. Wang, H. Chen and W. Shi, SPAN: A software power analyzer for multicore computer systems, Sustainable Computing: Informatics and Systems 1 (2011), 23–34.
[6] A. Noureddine, A. Bourdon, R. Rouvoy and L. Seinturier, A Preliminary Study of the Impact of Software Engineering on GreenIT, First International Workshop on Green and Sustainable Software, June 2012, Zurich, Switzerland, 21–27.
[7] P.K. Gupta and G. Singh, A Framework of Creating Intelligent Power Profiles in Operating Systems to Minimize Power Consumption and Greenhouse Effect Caused by Computer Systems, Journal of Green Engineering (2011), 145–163.
[8] W. Ye, N. Vijaykrishnan, M. Kandemir and M.J. Irwin, The Design and Use of SimplePower: A Cycle-Accurate Energy Estimation Tool, Design Automation Conference, 2000.
[9] E. Kern, M. Dick, S. Naumann, A. Guldner and T. Johann, Green software and green software engineering – definitions, measurements, and quality aspects, ICT4S 2013: Proceedings of the First International Conference on Information and Communication Technologies for Sustainability, Zurich, February 14–16, 2013.
[10] S.S. Mahmoud and I. Ahmad, A Green Model for Sustainable Software Engineering, International Journal of Software Engineering and Its Applications 7 (2013), No. 4.
[11] R. Morgan and D. MacEachern, SpringSource Hyperic SIGAR, 2010. Accessed: 03.06.2015. [Online]. Available: https://support.hyperic.com/display/SIGAR/Home

doi:10.3233/978-1-61499-544-9-481

Sustainable Product Development: Ecodesign Tools Applied to Designers

Pâmela T. FERNANDES 1 and Osíris CANCIGLIERI JUNIOR 2
Pontifical Catholic University of Paraná (PUCPR) – Polytechnic School – Production and System Engineering Graduate Program (PUCPR/PPGEPS)
1 Corresponding Author, E-mail: pamelafernandes_di@hotmail.com.
2 Corresponding Author, E-mail: osiris.canciglieri@pucpr.br.

Abstract.
In general, the concepts associated with sustainable product development aim to improve the product development process so as to reduce the environmental loads linked to products. Guidelines, checklists and other tools have long been developed for product design, with different levels of complexity, structures and phases of application in the development process. These characteristics raise many obstacles to the use of such tools by designers, especially during the creation phase of products, the most important moment, in which the specifications responsible for the product's level of sustainability are defined. Two main characteristics can be associated with the difficulties designers face in applying these tools. The first is the type of information needed for their use: many ecodesign tools demand a long application time and require a large amount of information about the project, which most of the time is not yet available in the early stages of the development process. The second is the specific know-how that designers generally cannot provide. The objective of this paper is to identify the main ecodesign tools applied by designers throughout the product development process over the last decade. The results indicate that tools using qualitative information have a better application potential for designers during the creative phase of the process and may increase the product's sustainability indices throughout its life.

Keywords. Ecodesign, tools, product design, sustainability

Introduction

The insertion of the sustainable development concept requires appropriate methods and tools in the product creation process to support this approach throughout the product's life cycle. Multiple expressions, such as design for environment, ecological design, environmental design, green design, life cycle design and ecodesign, can be found in the literature on sustainable product development [1]. In essence, all of them integrate environmental considerations into the product development process (PDP) and aim to reduce impacts throughout the product's lifespan. To cover all of them, the term ecodesign is adopted in this study as representative of all these expressions.

Sustainable product development is essentially related to consumer goods, which suggests that the environmental problems associated with products are also design problems. At this point, designers can and must encourage more sustainable consumption habits through the development of ecologically responsible products [2]. The awareness that designers need to consider environmental aspects in addition to technological and market-derived requirements from the early design process [3], and that the decisions made at this stage have a higher impact in terms of energy, cost and sustainability, has created the need to bring project knowledge typically required in the later stages into the earlier stages of the design process [4]. The PDP generally involves the activities of planning, conceptual development, detailed development, prototyping, production and market launch, and product review [5].
All these activities belong to a single phase of the product life cycle – product development – but they need to consider all the other phases that the product passes through (raw materials, manufacture, transport, distribution, use and end-of-life) to ensure a sustainable approach (Figure 1). Thus, different guidelines, checklists and other tools have been developed in the literature to help designers in this task.

Figure 1. Product life cycle.

Bovea and Pérez-Belis [6] review the methods and tools that evaluate environmental requirements and their integration into the design process, and describe three key factors that should make up an ecodesign tool: the early integration of environmental aspects into the PDP, the life cycle approach, and a multi-criteria approach. Many of the tools developed in the literature seek to address these three key factors, mainly by bringing complete product life cycle information into the initial stages of the development process. Research based on interviews with designers about the use of DFE methods and tools showed that, from the designers' point of view, methods and tools must be intuitive, logical and easy to use, besides facilitating communication within the PDP [7]. Nevertheless, countless obstacles still stand in the way of the effective use of these tools, especially because the product information needed for an environmental assessment of the project is not accessible during product creation, the stage at which the specifications responsible for the product's level of sustainability are defined. Two main characteristics can be associated with the difficulties designers face in applying these tools. The first is the type of information needed for their use: the tools demand a long application time and require a large amount of information about the project, which most of the time is not yet available in the early stages of the development process. The second is the specific know-how that designers generally cannot provide: the available tools are very specialized (they need training), unnecessarily complex (they are easily forgotten) and have high data requirements [7, 8].

In this context, the objective of this paper is to evaluate the publications of the last decade and to identify the main features of the ecodesign tools applied by designers throughout the PDP. These are classified according to the stage of the PDP where they can be applied, the nature of the data (qualitative or quantitative), the format of their utilization, and their key purpose.

1. Methodology

To identify the main guidelines, checklists and tools applied by designers throughout the sustainable product development process, a qualitative bibliographic survey was developed around the following question: over the last decade, which tools have been applied by designers throughout sustainable product development? The papers selected for analysis were those studies that discuss designers' activities in the PDP and the tools applied by them within a sustainability context. Table 1 summarizes the research structure.

Table 1. Research structure.

Research step | Activities | Papers found | Findings
Step 1: Selection and analysis of papers that discuss the designer's activity in the PDP. | Search the words Product Development and Product Design together with Designer (peer-reviewed journals). | 315 papers | List of papers that address PDP and designer issues.
Step 2: From the papers selected in Step 1, identify which ones address sustainability. | Search the words Sustainability, Sustainable, TBL (triple bottom line), DFE (design for environment) or Ecodesign in the title, abstract, keywords and text. | 32 papers | List of papers that address PDP and designer issues and contain the sustainability-related words.
Step 3: From the papers selected in Step 2, select which ones approach guidelines, checklists and tools applied by designers. | Read the papers and identify the guidelines, checklists and tools applied by designers. | 10 papers | Overview of guidelines, checklists and tools applied by designers – ecodesign tools.
Step 4: From the papers selected in Step 3, classify the ecodesign tools. | Identify, for each ecodesign tool, the PDP stage where it can be applied, the nature of the data (qualitative or quantitative), the format of its utilization and its key purpose. | 10 papers | Classification of the ecodesign tools.
2. Ecodesign Tools

According to Karlsson and Luttropp [9], the goal of product development is to contrive; thus, ecodesign tools must be made for designers and for their working situations in the PDP, based on the design and engineering process and integrated with the environmental sciences. In the last decade there have not been many advances in the development of new ecodesign tools: as identified by Devanathan et al. [1], ecodesign tools remain either overly qualitative and subjective or quantitatively complex. Table 2 summarizes the ecodesign tools evaluated in the bibliographic survey. The tools can be divided into three main approaches: design for environment (DFE), life cycle assessment (LCA) and end-of-life (EoL).

Table 2. Summary of the ecodesign tools evaluated in the bibliographic survey.

Approach | Tool | Author(s) | PDP stage | Data | Format | Key purpose
DFE | Ten Golden Rules | Luttropp and Lagerstedt (2006) | Conceptual | Qual. | Guidelines | Facilitates the integration of environmental demands into the PDP through ten main guidelines that should be customized to the project in development.
DFE | GFDA | Kuo et al. (2006) | Detailed and prototype | Qual./Quant. | Calculations | Procedures to evaluate product design alternatives based on environmental considerations using fuzzy logic.
DFE | QFDE | Zhang et al. (2011) | Planning and conceptual | Qual./Quant. | References/Calculations | A QFDE method that creates a shift from initial customer requirements to DFE-oriented technical parameters.
DFE | MATto | Allione et al. (2012) | Conceptual and detailed | Qual./Quant. | Guidelines | A list of ecodesign guidelines (quantitative and qualitative parameters/eco-properties) to help material selection.
DFE | EcoCAD | Chang and Lu (2014) | Conceptual and detailed | Qual./Quant. | Software/Guidelines | Enables designers to develop products with improved material toxicity and ease-of-disassembly characteristics.
LCA | KALCAS | Park and Seo (2006) | Conceptual | Quant. | Software | Improves design efficiency by managing high-level product information and LCA results.
LCA | edDSS | Poudelet et al. (2012) | Conceptual | Quant. | Calculations/Software | Provides economic and environmental decision criteria to support designers in the early design phases.
LCA | Eco-OptiCAD | Russo and Rizzi (2014) | Conceptual and detailed | Qual./Quant. | Software/References | Supports the designer in choosing the best shape–material–production triad, identifying the minimum environmental impact while meeting both the structural and the functional requirements of the product.
EoL | ELSEM | Remery et al. (2012) | Conceptual and detailed | Qual./Quant. | Guidelines/Calculations | Evaluates the various options for the EoL scenario of a product during the early design phase.
EoL | EoL Index | Lee et al. (2014) | Planning and conceptual | Quant. | Calculations/Software | Enables designers to make decisions on design alternatives using information from the EoL stage.
2.1. Design for Environment (DFE)

DFE is a prescriptive practice which suggests a number of ways in which environmental considerations can be integrated into product and process engineering design procedures. It enables environmental issues to be treated as business opportunities, developing environmentally compatible products and processes while maintaining product, price, performance and quality standards [10]. DFE aims to minimize the environmental impact of products and addresses environmental concerns at all stages of development: production, transport, use, maintenance and end-of-life. From a design point of view, the use of DFE tools can be associated with the characteristics of each stage of the life cycle, providing requirements which must be inserted in the initial stages of the PDP [11] to ensure that the environmental impacts of the product are taken into account before any decision making is compromised [12].

Of the papers analyzed, five take a DFE approach. The Ten Golden Rules proposed by Luttropp and Lagerstedt [13] are ten generic ecodesign guidelines that should be applied to facilitate the integration of environmental demands into the PDP, improving the environmental performance of a product during its concept phase. They have to be customized to a particular case to be useful in product development. The ten guidelines can be summarized as follows:

1. Do not use toxic substances, and use closed loops for necessary but toxic ones;
2. Minimize energy and resource consumption in the production and transport phases;
3. Use structural features and high-quality materials to minimize weight, if such choices do not interfere with functional priorities;
4. Minimize energy and resource consumption in the usage phase, especially for products whose most significant impacts occur during usage;
5. Promote the repair and upgrading of products;
6. Promote long life, especially for products whose most significant impacts occur during usage;
7. Reduce maintenance needs to ensure a longer product life, investing in better materials, surface treatments or structural arrangements to protect products from dirt, corrosion and wear;
8. Prearrange upgrading, repair and recycling through accessibility, labelling, modules, breaking points and manuals;
9. Promote upgrading, repair and recycling by using few, simple, recycled, unblended materials and no alloys; and
10. Use as few joining elements as possible, and use screws, adhesives, welding, snap fits, geometric locking, etc. according to the life cycle scenario.

Kuo et al. [14] present the Green Fuzzy Design Analysis (GFDA) to evaluate product design alternatives based on environmental considerations using fuzzy logic.
The structure of its environmentally conscious design indices includes five aspects: energy, recycling, toxicity, cost and material. The most desirable design alternative can then be selected with a fuzzy multi-attribute decision-making technique. Zhang et al. [15] present a QFDE method which creates a shift from initial customer requirements to DFE-oriented technical parameters. The authors categorize customer requirements into functional, performance and environmental customer requirements, analyzed through decomposition, transformation, mergence and supplementation of a semantic method. Finally, a QFDE method is proposed to convert customer requirements into technical parameters, so that these can be supported in the subsequent design process. Allione et al. [8] present MATto, a list of ecodesign guidelines (quantitative and qualitative parameters) that provides a deep analysis of the perceptual performances and eco-properties of the materials in the database and helps designers choose the most suitable materials for a green product. The guidelines derive from three main eco-strategies: the use of resources with a low environmental impact, the extension of the material's life, and environmental ethics and policies. Chang and Lu [16] developed a software tool named EcoCAD to provide support for DFE. It enables designers to spend less time on environmental evaluations and to develop products with improved material toxicity and ease-of-disassembly characteristics during the CAD modeling stage itself.

The main characteristic of the tools that take a DFE approach is that they include guidelines somewhere in their structure. This is most evident when the tool is used close to the beginning of the PDP, which consequently makes the tool more qualitative than quantitative. For this reason, qualitative tools end up being the ones most used by designers, since designers are the main actors in the early stages of the PDP.

2.2. Life Cycle Assessment (LCA)

Nowadays, one of the most recognized tools for measuring environmental performance along the product life cycle and quantifying its consumption of resources is LCA. LCA focuses on the potential impacts throughout a product's life, encompassing the extraction and processing of raw materials, manufacturing, distribution, use, recycling and final disposal. It considers resource use, human health and ecological consequences, and typically does not address the economic or social aspects of a product [17]. According to ISO 14040 [17], LCA can assist in: identifying opportunities to improve the environmental performance of products along their life cycle; decision-making in industry and in governmental and non-governmental organizations (such as strategic planning, priority setting and product and process design or redesign); the selection of relevant indicators of environmental performance; and marketing (as in the implementation of an ecolabel, an environmental claim or an environmental product declaration).

Of the papers analyzed, three take an LCA approach. Park and Seo [18] developed a knowledge-based approximate life cycle assessment system (KALCAS) for product concept development, which aims at improving design efficiency by managing product information from artificial neural networks and LCA results.
Poudelet et al. [19] proposed a business process reengineering methodology that can be used to develop decision-support systems and to support an ecodesign approach (edDSS). The authors provide a tool for a known particular case which enables the use of LCA results in the early design phases and supports economic and environmental decision criteria for designers, using LCA as a predictive tool rather than a retrospective one. Russo and Rizzi [3] propose Eco-OptiCAD, a computer-aided methodology based on the integration of structural optimization and LCA tools. It supports the designer during product development, highlighting when and where the core of the environmental impact lies, and provides effective tools to address such impacts while ensuring the structural and functional requirements. It foresees the use of virtual prototyping tools (such as 3D CAD), finite element analysis and structural optimization, function modeling methodology and LCA tools.

Despite numerous attempts to integrate LCA into the early stages of the PDP, and despite advances in different solutions to facilitate the use of LCA, the current methods are still very inconvenient for designers. None of the approaches presented by the authors analyzed has the fast and easy application that most of them cite as necessary for the effective use of these tools. LCA assesses the environmental impacts related to all the stages of a product's life, including raw material, refined material, design, manufacture, transportation, maintenance and disposal. This makes it necessary to collect wide-ranging and complex information about the product, which is generally not available during its creation process. The main reasons why designers do not adopt the tool are:

• LCA needs a lot of data; for a product under development it is difficult to gather all of the required data and information [8, 18, 19].
• A complete LCA analysis is costly and time consuming [8, 18, 19].
• The scope of the information collection is wide and leads to complex inventories and assessments [18].
• LCA interpretation requires specific high-level expertise and can be difficult to communicate to non-environmental experts [18, 19].
• The use of LCA is limited to the analysis of existing products, or of well-defined products at the final stages of the development process [20].
• LCA brings quantitative, statistical results, whereas the PDP is an iterative and dynamic process in which product parameters are constantly being changed [18].

2.3. End-of-Life (EoL)

End-of-life is an approach to the management of products at the end of their lifespan. This theme has been growing rapidly among product manufacturers, primarily in relation to the requirements imposed by environmental regulations and consumers, and secondly because it offers economic advantages through the savings in materials and energy otherwise required for the extraction of raw materials [10].

Of the papers analyzed, two take an end-of-life approach. Remery et al. [21] present the End-of-Life Scenario Evaluation Method (ELSEM), which evaluates the options for the EoL scenario of a product during the early design phase. It helps designers construct arguments for the decision-making process by analyzing the various EoL treatments through decision matrices. Lee et al. [22] proposed an End-of-Life Index that enables designers to make decisions on design alternatives for optimal product EoL performance.
It acts as a design advisory for designers, yielding aggregate values that represent the relative performance of a design under the available EoL options, using information from the end-of-life stage. Many other approaches can be associated with EoL, such as disassembly, reuse, remanufacture, recycling and so on. However, the EoL activities are currently not integrated with the activities of the product life cycle, which according to Lee et al. [22] makes EoL focus more on remediation than prevention. At this point, EoL shares the LCA dilemma: the lack of data in the initial stages of the PDP compromises the effectiveness of the approach.

3. Discussion and Conclusions

The first feature that can be used to differentiate the ecodesign tools is the nature of their data: qualitative or quantitative. The qualitative tools are easier and quicker to use and offer advantages in situations where the product's environmental properties and the information about its inputs and outputs are not clear. The main feature of these tools is the way they assess the environmental aspects of the product life cycle, normally based on checklists and guidelines which suggest "best practices" to minimize impacts throughout all the life cycle phases or in some specific phase [8]. The quantitative tools are more suitable when a detailed environmental profile of a product is needed [6]. They require a lot of data about the product and generally have their theoretical background in the life cycle assessment of products (LCA software, LCA material and process databases) and in the inventory of inputs and outputs along its phases, providing a quantitative assessment of its environmental impacts at the global, macro-regional or local level [8].

Another important feature of the ecodesign tools is related to their purpose; here it is essential to keep in mind the activities that are developed during the PDP. Looking at an overview of the creative process (Table 3), the PDP shows three main stages: planning (needs analysis and requirement definition), conceptual development (idea generation) and detailed development (technical project). In this process, the designer is responsible for transforming the needs identified in the market into the technical and aesthetic characteristics of the products. This activity involves both the creative process of aesthetic features and the functional elements,
which are subsequently worked on in the detailed development stage together with the engineering team, in order to fulfill the technical specifications and limitations associated with the production process, the company's objectives, the target audience and the project budgets (both for production and for the price that will be applied to the product).

Table 3. Main activities developed in the PDP by different knowledge areas, and the nature of their data.

PDP stage | Area | Main activities | Data
Planning | Marketing | Identify target market needs; define target customer; analyze product context | Qualitative
Planning | Marketing + Design | Identify best sustainable practices; assess legal issues; assess competitors | Qualitative
Planning | Engineering | Assess new technologies | Qualitative
Conceptual development | Design | Consider platform, family and architecture of the product; generate ideas | Qualitative
Conceptual development | Design + Engineering | Preliminary project; development of mockup; task analysis | Qualitative + Quantitative (task analysis: Qualitative)
Detailed development | Engineering | Assess production processes; strategy for supplier groups; detail project; development of prototype | Qualitative + Quantitative
Detailed development | Engineering + Design | Test prototype with user | Qualitative + Quantitative

In Table 3 it is possible to observe that the activities carried out up to the conceptual development stage rely predominantly on theoretical information derived from the opportunity identified in the planning stage, which explains the qualitative nature of the data. This is especially important during the idea development process, where the fewer the limitations, the greater the possibilities of developing innovative ideas. According to Allione et al. [8], qualitative tools are more useful during the first stages of the PDP because they guide designers through environmental criteria towards the right environmental choices for obtaining an eco-product. Therefore, the tools developed for use during this process should not block the creative activity of the designer, and this characteristic may be largely responsible for the adoption of tools that use qualitative information in the early stages of the PDP [6].

From the detailed development stage onwards, quantitative information appears in the activities. This information relates to the engineering actions and derives mainly from the technical development of the project, which at this stage already holds more precise information about its processes and allows some technical assessments of the possible impacts caused by the product to begin. At this stage the use of quantitative tools can already give some results about the project; however, although they provide more accurate results than the qualitative tools, their main dilemma remains: how to get complete information about the whole product life cycle if it is still a project under development? This can be considered the main reason why quantitative tools are not adopted by designers in the product creation process. Thus, even though they are very subjective and do not provide concrete solutions, the qualitative tools that raise issues like "do not use toxic substances" or "minimize energy and resource consumption" end up being more popular among designers, and consequently in industry, especially in small and medium-sized companies [1].

However, the analysis of the articles seems to indicate a trend towards new tools that seek to integrate qualitative data, which are easier to use in the daily activities of the designer in the PDP, with quantitative data, which provide more accurate and measurable results for choosing the most sustainable project alternatives. This may be the new direction in the search for tools and methodologies that can be easily integrated into the activities of designers and engineers and generate effective results for sustainable product development. From the analyzed articles, some recommendations for the development of new tools can be highlighted, as follows:

• Communication: the tools should facilitate communication within the activities of the PDP, among its actors (marketer, designer, engineer and others) and among different projects (thereby also working as a learning tool).
• Complexity: the tools should be easy to understand and use.
They should be easy to integrate into the activities, adaptable to the projects and companies (where possible), and should not require much time for implementation (both for learning and for use).
• Data: the tools should require data appropriate to the stage of product development where they will be applied. Tools that need a lot of data (mainly quantitative) are less friendly to the creative process; on the other hand, in the stages where the data are available, the use of these tools can yield more accurate results about the project.
• Ethics: considerations about society, health and safety, the rational use of resources, ethics and policy have been introduced as law in many countries. It is therefore important that ecodesign activities relate to global and local priorities as well as to these ethical issues.
• Results: from the overview of the product life cycle, the tools should provide guidelines for selecting the best choices for the project during the creative process, and should evaluate the product once the information about the project is more accurate.

References

[1] S. Devanathan, D. Ramanujan, W.Z. Bernstein, F. Zhao and K. Ramani, Integration of Sustainability Into Early Design Through the Function Impact Matrix, Journal of Mechanical Design 132 (2010), 081004.
[2] A. Marchand and S. Walker, Product development and responsible consumption: designing alternatives for sustainable lifestyles, Journal of Cleaner Production 16 (2008), 1163–1169.
[3] D. Russo and C. Rizzi, Structural optimization strategies to design green products, Computers in Industry 65 (2014), 470–479.
[4] S.K. Chandrasegaran, K. Ramani, R.D. Sriram, I. Horváth, A. Bernard, R.F. Harik and W. Gao, The evolution, challenges, and future of knowledge representation in product design systems, Computer-Aided Design 45 (2013), 204–228.
[5] ISO 14062, Environmental Management – Integrating Environmental Aspects Into Product Design and Development, 2002.
[6] M.D. Bovea and V. Pérez-Belis, A taxonomy of ecodesign tools for integrating environmental requirements into the product design process, Journal of Cleaner Production 20 (2012), 61–71.
[7] M. Lindahl, Engineering designers' experience of design for environment methods and tools – Requirement definitions from an interview study, Journal of Cleaner Production 14 (2006), 487–496.
[8] C. Allione, C. De Giorgi, B. Lerma and L. Petruccelli, From ecodesign products guidelines to materials guidelines for a sustainable product. Qualitative and quantitative multicriteria environmental profile of a material, Energy 39 (2012), 90–99.
[9] R. Karlsson and C. Luttropp, EcoDesign: what's happening? An overview of the subject area of EcoDesign and of the papers in this special issue, Journal of Cleaner Production 14 (2006), 1291–1298.
[10] K. Ramani, D. Ramanujan, W.Z. Bernstein, F. Zhao, J. Sutherland, C. Handwerker, J.-K. Choi, H. Kim and D. Thurston, Integrated Sustainable Life Cycle Design: A Review, Journal of Mechanical Design 132 (2010), 091004.
[11] P.T. Fernandes and O. Canciglieri Jr., Desenvolvimento integrado do produto e as inter-relações com o ciclo de vida, Revista Sodebras 9 (2013), 3–10.
[12] C.M. Rose, Design for environment: a method for formulating product end-of-life strategies, Doctoral dissertation, Stanford University, 2000.
[13] C. Luttropp and J. Lagerstedt, EcoDesign and the ten golden rules: generic advice for merging environmental aspects into product development, Journal of Cleaner Production 14 (2006), 1396–1408.
[14] T.-C. Kuo, S.-H. Chang and S.H. Huang, Environmentally conscious design by using fuzzy multi-attribute decision-making, The International Journal of Advanced Manufacturing Technology 29 (2006), 419–425.
[15] L. Zhang, Y. Zhan, Z.F. Liu, H.C. Zhang and B.B. Li, Development and analysis of design for environment oriented design parameters, Journal of Cleaner Production 19 (2011), 1723–1733.
[16] H.-T. Chang and C.-H. Lu, Simultaneous Evaluations of Material Toxicity and Ease of Disassembly during Electronics Design, Journal of Industrial Ecology 18 (2014), 478–490.
[17] ISO 14040, Environmental performance evaluation – life cycle assessment – principles and framework, International Organization for Standardization, Geneva, Switzerland, 1997.
[18] J.-H. Park and K.-K. Seo, A knowledge-based approximate life cycle assessment system for evaluating environmental impacts of product design alternatives in a collaborative design environment, Advanced Engineering Informatics 20 (2006), 147–154.
[19] V. Poudelet, J.-A. Chayer, M. Margni, R. Pellerin and R. Samson, A process-based approach to operationalize life cycle assessment through the development of an eco-design decision-support system, Journal of Cleaner Production 33 (2012), 192–201.
[20] D. Millet, L. Bistagnino, C. Lanzavecchia, R. Camous and T. Poldma, Does the potential of the use of LCA match the design team needs?, Journal of Cleaner Production 15 (2007), 335–346.
[21] M. Remery, C. Mascle and B. Agard, A new method for evaluating the best product end-of-life strategy during the early design phase, Journal of Engineering Design 23 (2012), 419–441.
[22] H.M. Lee, W.F. Lu and B. Song, A framework for assessing product End-of-Life performance: reviewing the state of the art and proposing an innovative approach using an End-of-Life Index, Journal of Cleaner Production 66 (2014), 355–371.

doi:10.3233/978-1-61499-544-9-492

Sustainable Consumption and Ecodesign: a Review

Vitor DE SOUZA 1 and Milton BORSATO
Federal University of Technology – Parana, Av. 7 de setembro 3165, 80230-901, Curitiba, Brazil
1 Corresponding Author, E-Mail: vsouza@utfpr.edu.br.

Abstract. Sustainable products have arrived on the market and compete with traditional products with one extra challenge: they normally tend to be more expensive. Customers have yet to pay more for these products, even as they become alert to environmental impacts; the gap between knowledge and action is narrowing only slowly. One of the factors behind this sluggish behavior is that the standing economic equation does not cover environmental issues. Traditional approaches to feasibility studies have yet to change, as initiatives such as the eco-cost emerge. From a Product Development Process point of view, customer satisfaction and selling price are two very important inputs that drive the process from the very beginning and determine whether the resulting product will achieve market success or not. A more sustainable product tends to be more durable, as it reduces scrap.
Currently, the world economic chain is configured around the pace at which products are consumed, as consumers are accustomed to traditional products and their durability. As the world grows more sustainable, the economic chain will become sustainability-driven and will therefore require more durable products and services. Using a bibliometric approach, this article reviews sustainable product development and its interface with these economic and customer perception issues. First, research domains were prescribed and keywords were defined to narrow the search. Databases were selected and searched using Boolean codes. The resulting list was then reviewed and filtered, refining the search according to defined criteria. Findings include (but are not limited to) a growth in the number of published articles year after year, as well as a list of the journals with the most publications in this research field. Finally, this article points to new trends to be explored within product development.

Keywords. ecodesign, sustainable consumption, willingness to pay.

Background

Efforts towards a more sustainable society still face many barriers. Ljungberg [1] defined four major problems left without a solution: excess consumption, resource depletion, air pollution and population growth. These problems can be directly linked to the standing global economic development model, which sets a highly accelerated consumption pattern and high levels of competition between enterprises, causing deep environmental damage, resource scarcity and many other undesirable side effects. For any company to succeed in the business market, the cost-benefit balance is of extreme importance, and every new product under development is affected by it [2]. In 2001, only 15% of companies based their price formation on potential customers' behavior [3], generating "flawed" products, which in turn will probably cause a lot of environmental damage and, most importantly, be held accountable for such damage [4].

Product design and development thus appears as one of the critical moments for raising sustainability issues: the earlier one can come up with solutions that drive product development towards a better environmental performance, the better. Ecodesign emerges as the ensemble of tools and methods intended to deploy sustainability issues during the early stages of product design and development. But is that enough to effectively diminish environmental damage? If customers support sustainability as a concept but do not purchase the sustainable product instead of the traditional one, there is no decrease in environmental impacts; customers are still shy about moving from ideology to action [5]. Understanding the factors that influence the market performance of these products is essential [6], or a sustainable product is about to become another ecological burden, generating more environmental impact instead.

The goal of this research is to present the results of a bibliometric-based review in this context and to point out trends for future research within sustainable (and competitive) product development. The methodology used as the basis for this article is commonly known as ProKnow-C, a systematic flow chart for state-of-the-art reviews. Due to the huge number of articles retrieved, no article was read in full.
The present article is structured as follows: section 1 reviews the literature; section 2 covers the findings; section 3 contains the conclusion and future trends; last, the references are presented.

1. Literature Review

Bibliometric research is a quantitative disclosure process that uses a well-defined article base for the information and knowledge management of a certain scientific matter, achieved through document accounting [8, 9]. ProKnow-C, developed by the MCDA (Multicriteria Methodology Decision Support) Lab of the Federal University of Santa Catarina, is a methodology that defines a flow chart for bibliometric reviews. This research was undertaken following some of its procedures, as described by Tasca et al. [8]:

1.1. Bibliographic Portfolio Creation

To identify the state-of-the-art research in a specific context, a raw article database was obtained after defining research domains (also called research axes). Based on the procedures taken in Hare and McAloone [10], the starting point was to define three domains of research, according to the main purpose of the investigation: the relation between consumption, product development and strategic management. Then, keywords were proposed for each domain, as presented in Table 1. Next, Boolean codes were defined as recommended in ProKnow-C. The research databases were chosen considering their connection with the engineering area of research, and are listed below:

• Emerald Insight;
• Engineering Village;
• IEEE;
• Periodicos Capes;
• Proquest (ERIC);
• Scopus;
• Science Direct;
• Springer;
• Web of Science;
• Wiley.

The keywords were combined in a search sentence with the Booleans, i.e. ("willingness to pay" OR "value" OR "competitive advantage") AND ("eco-innovation" OR "cradle-to-cradle" OR "ecodesign" OR "eco-design" OR "Life cycle Assessment" OR "resource depletion") AND (sustainability) AND ("product design" OR "project management"). For each database, the articles were retrieved and every search result was combined in a proper article management software (e.g. Mendeley).

Table 1. Research domains and associated keywords.

Domain | Keywords
Strategic Management | competitive advantage; willingness to pay; value
Sustainable Development | cradle-to-cradle; ecodesign; eco-design; eco-innovation; sustainability; lifecycle assessment
Engineering Design | product design; project management

The chronological period selected for this research was from 2004 to 2015. Next, repetitions were excluded and citation counting was performed: each and every article was looked up in Google Scholar for its citation number.

1.2. Articles base filtering – Repository K

Using appropriate criteria, the articles went through a filtering process. The first criterion used was: article title aligned with the research objective. Then, representativeness was evaluated based on a Google Scholar search for the number of citations per article and its ratio of the total. A minimum representativeness factor of 15 citations was defined. This number was used to split the entire list into two repositories: K – articles with confirmed scientific recognition (444 articles) – and P – articles with scientific recognition yet to be confirmed (1075 articles). After reading the titles of all articles in repository K, all the authors present were placed in the authors' base. For the K repository, the abstract of every article was read and a finer evaluation was made, considering its relationship with the research objective.
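As a concrete illustration of this splitting rule, the following is a minimal sketch (in Java, with an illustrative record type and sample data; it is not part of the ProKnow-C tooling):

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Minimal sketch of the Repository K/P split described above: articles with
// at least 15 Google Scholar citations go to K (confirmed scientific
// recognition), the rest to P. Types and sample data are illustrative.
public final class RepositorySplit {

    record Article(String title, int year, int citations) {}

    static final int MIN_CITATIONS = 15; // representativeness factor from 1.2

    static Map<String, List<Article>> split(List<Article> portfolio) {
        return portfolio.stream().collect(Collectors.groupingBy(
                a -> a.citations() >= MIN_CITATIONS ? "K" : "P"));
    }

    public static void main(String[] args) {
        List<Article> raw = List.of(
                new Article("Eight types of product-service system", 2004, 571),
                new Article("A recent, not yet cited study", 2014, 3));
        Map<String, List<Article>> repos = split(raw);
        System.out.println("K: " + repos.getOrDefault("K", List.of()).size()
                + ", P: " + repos.getOrDefault("P", List.of()).size());
    }
}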
1.3. Articles base filtering – Repository P

Repository P was divided into two groups: articles older than two years and articles newer than two years. The older articles were checked against the authors' base, and those not written by any of its authors were excluded; the newer articles were evaluated for potential future scientific recognition, and the others were excluded.

2. Findings

The first results found for each database are presented in Table 2, totalling 1,686 articles. Science Direct returned the majority of articles, followed by Springer, Wiley and Emerald Insight. Capes (a Brazilian portal which combines results from multiple databases) did not retrieve a significant quantity, and IEEE returned only one result.

Table 2. Returned articles per database researched.

Database | Number of articles returned
Science Direct | 696
Springer | 500
Wiley | 249
Emerald Insight | 100
Proquest | 77
Scopus | 28
Engineering Village | 21
Web of Science | 8
Capes | 6
IEEE | 1
Total | 1,686

The total number of articles published per year is presented in Figure 1. As a major trend, the number of published articles is growing steadily year after year, as environmental damage grows.

Figure 1. Number of articles published per year (2002–2014).

The publications with the largest number of published articles are presented in Table 3, together with their impact factors. The journal that published the highest quantity is the Journal of Cleaner Production (JCP), which also has the highest impact factor of all – 3.590. Some of the publications are book chapters and are therefore not granted an impact factor. Procedia CIRP refers to proceedings compilations and also does not have a JCR impact factor.

Table 3. Number of published articles per publication with impact factor.

Journal name | Number of published articles | Impact Factor(1)
Journal of Cleaner Production | 254 | 3.590
Journal of Industrial Ecology | 45 | 2.713
Business Strategy and the Environment | 40 | 2.877
Resources, Conservation and Recycling | 36 | 2.692
Design for Innovative Value Towards a Sustainable Society | 33 | 0(2)
Glocalized Solutions for Sustainability in Manufacturing | 30 | 0(2)
The International Journal of Life Cycle Assessment | 29 | 3.089
International Journal of Production Economics | 24 | 2.081(3)
Materials & Design | 16 | 3.171
Clean Technologies and Environmental Policy | 16 | 1.671
Procedia CIRP | 14 | 0(4)
Leveraging Technology for a Sustainable World | 14 | 0(4)
Total | 551 |

(1) Information taken from journal webpages. (2) Compilation books from Springer. (3) Data obtained from the 2013 JCR Impact Factor list. (4) CIRP-related proceedings publications.

Table 4 shows the 10 most cited articles for this research. Most of the articles are reviews of methods and tools. Others discuss the application of sustainability in decision-making, stakeholder influences and business strategy. The most cited, [11], brings insights about the challenges that organizations will have to bear with the rise of sustainability.
It covers business network challenges and strategy proposals, and it defines an ecosystem framework, based on interconnectedness and on sustainable creativity and innovation drivers, that can help enterprises adapt to these market transformations. Source [12] investigates Product-Service Systems (PSS) and delivers a classification of eight types of PSS that exist in SusProNet, a sustainability network within the EU. It also defines the economic and environmental potential of each type according to defined criteria. It concludes that many of these types only bring results (although some contradiction is found) in the environmental aspect, and that only one could really deliver economic improvements. It closes by proposing that there are research opportunities concerning customer value – intangible value, in this case – for PSS users. Another article that has to be highlighted is [13], which reviews sustainability assessment methodologies, including indicators for different purposes (among them the "level-4" step points Supply Chain and Product Life-Cycle Indicators) and assessment frameworks from the GRI (Global Reporting Initiative), the UNCSD (United Nations Commission on Sustainable Development) and many others, depending on the type of study one intends to perform.

Table 4. Most cited articles¹.

Article | Google Scholar citations | Publication Type | Year
The keystone advantage: what the new dynamics of business ecosystems mean for strategy, innovation, and sustainability | 661 | Book | 2004
Eight types of product–service system: eight ways to sustainability? Experiences from SusProNet | 571 | Paper | 2004
Stakeholder influences on sustainability practices in the Canadian forest products industry | 568 | Paper | 2005
An overview of sustainability assessment methodologies | 508 | Paper | 2010
Sustainable construction—The role of environmental assessment tools | 432 | Paper | 2008
Environmentally conscious manufacturing and product recovery (ECMPRO): A review of the state of the art | 390 | Paper | 2010
Industrial Product-Service Systems—IPS2 | 383 | Paper | 2010
Application of multicriteria decision analysis in environmental decision making | 381 | Paper | 2005
A survey of unresolved problems in life cycle assessment | 378 | Paper | 2008
PROMETHEE: A comprehensive literature review on methodologies and applications | 373 | Paper | 2010

¹ As of March, 2015.

The ECMPRO (Environmentally Conscious Manufacturing and Product Recovery) review [14] also charts the progress within its field of research: the paper describes areas that have developed as well as new areas of study. It highlights the need for research into methodologies that consider both product and process designs, and also the need to introduce sustainability principles and concepts into engineering curricula in order to improve sustainability consciousness. This research was conducted in a similar way to [10]. Nonetheless, in this work the domain "Environmental Science" was replaced by "Sustainable Development", and the keywords selected within each domain were changed accordingly. The methodological procedures were also different: this work follows the ProKnow-C bibliometric review method. This research aimed to identify opportunities where ecodesign meets market competitiveness and consumption, whereas the main target of [10] was to define trends for eco-innovation research.

3.
Conclusions and Further Research

The review procedures and keywords returned many results concerning the implementation of ecodesign in product development processes and systems. This suggests that, with narrower domains and keyword choices, the results would have been better driven towards the competitiveness of sustainable products.

A challenge in this research was the need to switch between data management tools. Mendeley permitted list consolidation, but not for the results found in Springer, and it did not allow the list to be compiled and organized with the citation count as a classification criterion either. Microsoft Excel was used instead to fulfil these needs.

There is a considerable number of trends being investigated in the context of this review. Five trends, selected for their level of challenge, innovation, effectiveness and criticality, are featured and described in the topics below. A more specific focus has been given to strategy and early product definition than in [10], where eco-innovation approaches were identified.

- Sustainable Costing: one of the major challenges for sustainability is to be able to define the "non-sustainability" cost of any product or strategy. There are a few attempts in this research field; a widely considered one is the Eco-cost/Value Ratio [15], which aims to establish a relationship between environmental gains and economic investments (a toy illustration is given after this list). Another attempt is LCCA (Life Cycle Cost Analysis), which tries to identify the costs related to all lifecycle phases of a product. There is an opportunity for research here, and one of the major difficulties is to establish a way to define the economic value gained with the implementation of sustainable initiatives.
- Consumption Degrowth: degrowth stands for a paradigm that contradicts traditional political ideologies (capitalism, socialism and others) which have been proven "incapable of delivering balance for human kind" [16]. This is a difficult concept to understand, especially in developing countries. In [17], an increased product lifecycle span leads to Sustainable Consumption, an in-between of 'green growth' and 'recession'. These concepts would certainly have major impacts on ecodesign activities, and therefore lodge a research opportunity.
- Sustainable Design Education: there is a need for higher education to develop sustainability awareness and competences in students, especially in engineering courses. Some initiatives have proven effective, engaging students in industrial and environmental product design challenges with excellent results [18].
- Green Supply Chain Performance: is it possible for a supplier to increase environmental and economic performance at the same time? Some studies [19] already suggest that this is possible, while other studies focus on metrics to measure it [20]; it is therefore a trend to be explored.
- Resource Depletion Mapping: as resources grow scarce, some of them – the non-renewable resources – even appear to vanish completely. Understanding which, how and when this could happen is of extreme importance for taking countermeasures, such as governmental policies. Some work has already started with some of the most used substances, minerals and metals [21], but many are yet to be explored. In terms of product development, these initiatives can be combined with Consequential LCA methods for decision-making processes.
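As a toy illustration of the Eco-cost/Value Ratio idea referenced in the Sustainable Costing trend above, the sketch below compares two hypothetical product variants. The figures are invented and not taken from [15], which should be consulted for the actual eco-cost accounting behind the ratio.

```python
# Toy illustration of the Eco-cost/Value Ratio (EVR) [15]; all figures invented.
def evr(eco_costs: float, value: float) -> float:
    """EVR = eco-costs / value; a lower ratio means more value delivered per
    unit of environmental burden, i.e. a more eco-efficient product."""
    return eco_costs / value

conventional = evr(eco_costs=18.0, value=60.0)   # hypothetical baseline: 0.30
redesigned = evr(eco_costs=9.0, value=75.0)      # hypothetical redesign: 0.12
print(f"conventional EVR = {conventional:.2f}, redesigned EVR = {redesigned:.2f}")
```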
Acknowledgements

The authors would like to thank all members involved in the Sustainable Manufacturing Program Group of PPGEM/UTFPR-CT, in particular Cassia Ugaya, Carla Estorilio and Ligia Franzosi.

References

[1] L.Y. Ljungberg, Materials selection and design for development of sustainable products, Materials & Design, 28(2) (2007), 466-479.
[2] C. Breidert, M. Hahsler, and T. Reutterer, A review of methods for measuring willingness-to-pay, Innovative Marketing, 2(4) (2006), 8-32.
[3] K.B. Monroe, and J.L. Cox, Pricing practices that endanger profits: How do buyers perceive and respond to pricing?, Marketing Management, 10(3) (2001), 42-42.
[4] W. McDonough, and M. Braungart, The Upcycle: Beyond Sustainability – Designing for Abundance, Macmillan, New York, 2013.
[5] J. Kaenzig, S.L. Heinzle, and R. Wüstenhagen, Whatever the customer wants, the customer gets? Exploring the gap between consumer preferences and default electricity products in Germany, Energy Policy, 53 (2013), 311-322.
[6] D. Pujari, Eco-innovation and new product development: understanding the influences on market performance, Technovation, 26(1) (2006), 76-85.
[7] C. Bakker, F. Wang, J. Huisman, and M. den Hollander, Products that go round: exploring product life extension through design, Journal of Cleaner Production, 69 (2014), 10-16.
[8] J.E. Tasca, L. Ensslin, S. Rolim Ensslin, and M.B. Martins Alves, An approach for selecting a theoretical framework for the evaluation of training programs, Journal of European Industrial Training, 34(7) (2010), 631-655.
[9] C.A. Araújo, Bibliometria: evolução histórica e questões atuais, Em Questão, 12(1) (2007), 11-32.
[10] J.A. Hare, and T.C. McAloone, Eco-innovation: The opportunities for engineering design research, in: D. Marjanovic et al. (eds.): Proceedings of the 13th International Design Conference, Dubrovnik, Croatia, May 19-22, 2014, Vol. 2, Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb, 2014, pp. 1631-1640.
[11] M. Iansiti, and R. Levien, The keystone advantage: what the new dynamics of business ecosystems mean for strategy, innovation, and sustainability, Harvard Business Press, Boston, 2004.
[12] A. Tukker, Eight types of product–service system: eight ways to sustainability? Experiences from SusProNet, Business Strategy and the Environment, 13(4) (2004), 246-260.
[13] R.K. Singh, H.R. Murty, S.K. Gupta, and A.K. Dikshit, An overview of sustainability assessment methodologies, Ecological Indicators, 9(2) (2010), 189-212.
[14] M.A. Ilgin, and S.M. Gupta, Environmentally conscious manufacturing and product recovery (ECMPRO): a review of the state of the art, Journal of Environmental Management, 91(3) (2010), 563-591.
[15] J. Vogtländer, The Eco-costs/Value Ratio (EVR): materials and ecological engineering, Uitgeverij Æneas, Boxtel, 2002.
[16] J. Martínez-Alier, U. Pascual, F.D. Vivien, and E. Zaccai, Sustainable de-growth: Mapping the context, criticisms and future prospects of an emergent paradigm, Ecological Economics, 69(9) (2010), 1741-1747.
[17] T. Cooper, Slower consumption: reflections on product life spans and the "throwaway society", Journal of Industrial Ecology, 9(1-2) (2005), 51-67.
[18] S. Lockrey, and K.B. Johnson, Designing pedagogy with emerging sustainable technologies, Journal of Cleaner Production, 61 (2013), 70-79.
[19] H. Hayami, M. Nakamura, and A.O.
Nakamura, Economic performance and supply chains: The impact of upstream firms' waste output on downstream firms' performance in Japan, International Journal of Production Economics, 160 (2015), 47-65.
[20] P. Ahi, and C. Searcy, An analysis of metrics used to measure performance in green and sustainable supply chains, Journal of Cleaner Production, 86 (2015), 360-377.
[21] T. Prior, D. Giurco, G. Mudd, L. Mason, and J. Behrisch, Resource depletion, peak minerals and the implications for sustainable resource management, Global Environmental Change, 22(3) (2012), 577-587.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-500

Reducing the Energy Consumption of Electric Vehicles

Wojciech SKARKA 1
Silesian University of Technology, Faculty of Mechanical Engineering, Institute of Fundamentals of Machinery Design, Gliwice, Poland

Abstract. Currently, more and more attention is paid to energy consumption, and this is especially noticeable in electric vehicles. Generally, there are two methods of reducing the energy consumption of an electric vehicle. The first is introducing new technical solutions in the design. The second is driving and performing other operating tasks in a manner that reduces energy consumption. A special case of the use of these methods is in vehicles built specifically for energy-efficient racing, where the criterion of energy saving is much more critical than in ordinary commercial vehicles. The task of ensuring minimum energy consumption during driving is quite demanding, and decisions taken as early as the vehicle concept stage can have a great impact on the final result. The paper presents the experience gained by the Silesian University of Technology team while constructing high-performance electric vehicles designed for the Shell Eco-marathon, from the methodology selected for the design of such vehicles, through the solution of partial design issues for selected vehicle components, to studies carried out both on test stands and on the racetrack. Special attention was paid to the method of determining the influence of individual design features on energy consumption while reducing their interactions. The second part of the paper deals with the evaluation of the driving strategy during the race and its impact on reducing energy consumption. The method for determining the outcome of a set strategy and for selecting the race strategy is presented, along with the optimal strategy under specific structural and environmental constraints. The paper summarizes several years' experience in the design and testing of two vehicles built for the Shell Eco-marathon, a world-famous competition for Prototype and UrbanConcept battery-powered electric vehicles.

Keywords. Efficiency, energy consumption, race car, optimisation, simulation model, design, driving strategy

Introduction

The Shell Eco-marathon, the biggest race of energy-saving vehicles, is held annually in three different places, in Europe, Asia and North America [1]. The task is to cover a given route in a given time using the smallest amount of energy. The results are given in the form of the number of kilometers driven per unit of energy/fuel.
1 Corresponding Author, E-mail: wojciech.skarka@polsl.pl

The organizers of the Shell Eco-marathon determine the schedule for test and measured rides. As part of the competition, each team has several attempts within three days, each to be taken within a several-hour time span. There is a division into several categories and classes: the UrbanConcept category, where vehicles resemble small city cars, and the Prototype class. The course of the competition differs between these categories: in the UrbanConcept category, the vehicles are required to stop at certain intervals (approx. every 1.6 km) to simulate city driving, while the Prototype vehicles are allowed to ride freely. Additionally, there is a division depending on the drive unit; in each class the following drives are distinguished: Gasoline, Hydrogen, Battery Electric, Diesel, Alternative Diesel and Ethanol. The competition is tough for the drivers because there is always a large number of vehicles on the track; in addition, each of the vehicles follows its own strategy, so vehicles drive at different speeds and continuously overtake each other. Consequently, the danger of collisions increases, which significantly impedes the implementation of the pre-planned strategy.

The prototype electric-powered vehicle (MuSHELLka) was designed and built by members of the Student Scientific Association of Machine Design [2] at the Institute of Fundamentals of Machinery Design at the Silesian University of Technology in Gliwice. The vehicle was designed and built within one academic year by a team of a dozen people, students and academics of the Faculty of Mechanical Engineering at the Silesian University of Technology. The vehicle was designed according to the strict race regulations for the Prototype class and battery electric category. In the last three years it took part in the European edition of the competition, with a score of 487.3 km/kWh in 2014. Apart from the vehicle in the Prototype category, the team also competed in the UrbanConcept battery electric category (Bytel), and the team is currently preparing a vehicle for UrbanConcept Hydrogen.

1. Planning development tasks

The vehicle shown in Figure 1 is a three-wheeled monocoque structure whose outer layer is made of a sandwich composite structure based on epoxy resin and carbon, aramid and glass fabric [3].

Figure 1. MuSHELLka vehicle – prototype, battery electric.

Designing a vehicle for such specific racing applications is an extremely difficult task because the most important design decisions are made at the beginning of the concept phase. It is very difficult to assess their impact on the final outcome, and even more so to choose the optimal solution. The aim of the designers is to minimize energy consumption under certain race conditions. From the very beginning of the work on the various concepts, the question was how to evaluate the current vehicle concept and its individual systems at any given time; in other words, how to transfer the ongoing assessment of general ideas and more detailed design solutions into a formal optimization problem, so that at any point in the design process one can choose the best from a set of proposed solutions. The solution is to use a simulation model designed for the analysis of the vehicle under the conditions of the race track. Therefore, from the very beginning of the work on the racing vehicle, the simulation model was built.
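As a rough illustration of what such a model computes, the sketch below integrates the longitudinal dynamics of a vehicle subject to aerodynamic drag and rolling resistance and reports the resulting efficiency in km/kWh. It is a minimal stand-in for, not a reproduction of, the team's MATLAB model; apart from the drag coefficient, which matches the wind-tunnel value reported in Section 2.2, all parameter values are assumptions chosen for illustration.

```python
# Minimal longitudinal-dynamics sketch (not the team's MATLAB model);
# parameter values below are illustrative assumptions.
RHO = 1.2            # air density [kg/m^3]
G = 9.81             # gravitational acceleration [m/s^2]
M = 140.0            # vehicle + driver mass [kg] (assumed)
CX, SX = 0.23, 0.30  # drag coefficient (Sec. 2.2) and assumed frontal area [m^2]
CRR = 0.0025         # rolling-resistance coefficient (assumed)
ETA = 0.85           # assumed battery-to-wheel drivetrain efficiency

def simulate(f_drive, t_end=600.0, dt=0.1):
    """Forward-Euler integration of m*dv/dt = F_drive - F_aero - F_roll;
    returns covered distance [m] and electrical energy drawn [kWh]."""
    v = s = e_wheel = 0.0
    t = 0.0
    while t < t_end:
        f_res = 0.5 * RHO * CX * SX * v ** 2 + CRR * M * G
        v = max(v + (f_drive - f_res) / M * dt, 0.0)
        s += v * dt
        e_wheel += f_drive * v * dt     # mechanical work at the wheel [J]
        t += dt
    return s, e_wheel / ETA / 3.6e6     # J -> kWh, incl. drivetrain losses

dist_m, e_kwh = simulate(f_drive=6.0)
print(f"{dist_m/1000:.2f} km on {e_kwh*1e3:.1f} Wh "
      f"-> {dist_m/1000/e_kwh:.0f} km/kWh")
```

With these assumed parameters the sketch settles at roughly 28 km/h and an efficiency on the order of several hundred km/kWh, the same order of magnitude as the 487.3 km/kWh competition result mentioned above.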
It should be emphasized that the degree of complexity of the simulation model, and the confidence placed in its results, correspond to the level of development of the vehicle and to the team's level of knowledge of the race and the car. The original model, built in the first moments of the project, thus bears little resemblance to the current, advanced simulation model. The assumption of continuous development of the simulation model had a significant impact on the assumptions about the general form of the model, i.e. modularity, so that particular parts can be developed separately and, if necessary, replaced.

Modelling and simulation of hybrid and electric vehicles has been a focus of interest in the last few years. The need for modelling and simulation of electric and hybrid vehicles was described in [4]; the authors presented, in particular, physics-based modelling. In [5] another point of view is presented: the paper considered the REVS (Renewable Energy Vehicle Simulation) environment, a simulation and modelling package developed at the University of Manitoba. REVS is composed of several components, such as electric motors, internal combustion engines, batteries, chemical reactions, fuzzy control strategies, renewable energy resources and support components, that can be integrated to model and simulate hybrid drive trains in different configurations. Seref Soylu's book [6] investigates the modelling and simulation of electric vehicles and their components; mathematical models for mechanical and control devices and their components are proposed to make this reference a guide for everyone who wants to prototype electric vehicles.

During the project realization the following stages of the development of the model were assumed:

- development of a preliminary version of the simulation model;
- improvement of the individual parts of the simulation model;
- verification and tuning of the model based on the results of test rides on the target racetrack in Rotterdam;
- improvement of the simulation model based on the results of the verification research in specialized studies;
- improvement of the modeling and optimization methods and further tuning of the model parameters;
- customization of the simulation model to changing environmental conditions.

The original form of the model was based mainly on the mathematical description of the physical phenomena of a moving, driven mechanical system subjected to external forces such as aerodynamic drag, rolling resistance, etc. At a further stage, the individual modules of the model were developed based on research results from the bench stands and preliminary test drives. For the purpose of this verification, special bench test stands were created – an engine test bench, a drivetrain test bench [7], etc. Based on these results, the model parameters were fine-tuned to increase the accuracy of operation. A series of verification tests with the use of specialized equipment was carried out, e.g. the study of vehicle aerodynamics at the Institute of Aviation in Warsaw. The results of these studies were integrated into the simulation model. This allowed a significant increase in confidence in the performance of the simulation model, as opposed to the preliminary results, which were perceived with a high degree of uncertainty. The next step was to tune the simulation model based on the results of test drives in conditions similar to the race [8].
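Such tuning can be viewed as a small parameter-identification problem: adjust the model coefficients until the simulated response matches a recorded test drive. The SciPy sketch below is only a hedged illustration of this idea on a synthetic coast-down trace, not the team's actual procedure; their reverse simulation model is described in [14], and all numbers here are assumptions.

```python
# Illustrative parameter identification: fit Cx and Crr so that a simulated
# coast-down speed trace matches a "measured" one (synthetic, not telemetry).
import numpy as np
from scipy.optimize import least_squares

RHO, G, M, SX = 1.2, 9.81, 140.0, 0.30   # assumed constants

def coast_down(params, v0=10.0, dt=0.1, n=200):
    """Simulate an unpowered roll-out from v0 and return the speed trace."""
    cx, crr = params
    v, trace = v0, []
    for _ in range(n):
        f_res = 0.5 * RHO * cx * SX * v ** 2 + crr * M * G
        v = max(v - f_res / M * dt, 0.0)
        trace.append(v)
    return np.array(trace)

np.random.seed(0)
v_measured = coast_down([0.23, 0.0025]) + np.random.normal(0, 0.02, 200)

fit = least_squares(lambda p: coast_down(p) - v_measured,
                    x0=[0.4, 0.005], bounds=([0.05, 0.0005], [1.0, 0.02]))
print("identified Cx = %.3f, Crr = %.4f" % tuple(fit.x))
```

Run on this synthetic data, the fit recovers values close to the "true" Cx = 0.23 and Crr = 0.0025 used to generate the trace, which is exactly the kind of consistency check such tuning provides.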
As there is no possibility of accessing the racetrack on the streets of Rotterdam for testing outside the competition, a loop track was organized on the test track of the FIAT automotive company in Tychy, Poland. In the subsequent stages, it is planned to adapt the model to changing driving conditions on the track, as well as to various vehicle parameters and different types of vehicles. The simulation model is developed in the MATLAB environment, and its details are described in [8,9].

2. The choice of design approaches

Choosing the right design approach is the most important factor influencing the sporting outcome and performance of the vehicle. It should be realized that the solutions adopted at the beginning in most cases can no longer be changed and additionally impact other vehicle systems. Other factors that also affect the outcome, such as the chosen strategy or the current preparation of the vehicle, can be adjusted to the existing conditions within a given time limit. The assumption of using a simulation model to evaluate design decisions determined the direction of the constructors' steps from the very beginning. The first preliminary calculations showed the impact of different design characteristics on the resistance forces (Figure 2).

Figure 2. Comparison of various types of characteristics on vehicle resistance forces.

For this type of vehicle and scenario, by far the largest impact on energy consumption comes from air resistance; however, it is also important to take into account the rolling resistance of the wheels. In contrast, the resistance of the bearings used in the vehicle is negligible at the preliminary design stage, assuming regularity of the bearing arrangements. As a result, perhaps surprisingly, the weight of the vehicle, taken into account in the latter two categories, is not the primary factor influencing the outcome.

2.1. The choice of shape of the vehicle

It is hardly possible to describe all the aspects of the selection of design features. For the purpose of this paper, the methodology for selecting these characteristics will be described using the example of the shape characteristics of the vehicle. Choosing the right design solution and its features is always specific to a given feature. The choice of different aspects of the MuSHELLka vehicle design has been described in detail in other papers [10], [11], [12]. During design, Knowledge-Based Engineering methods were used [13], utilizing CATIA Knowledgeware. The fundamental decision on the concept of the shape was taken on the basis of the statutory regulations planned and published by the organizers. Amendments to the regulations introduced, inter alia, a gradual reduction in the required turning radius and a ban on rear-wheel steering. As a consequence, the concept previously used by most teams would in the future no longer meet the basic requirements of driving stability. So far, most of the teams have applied structures already built a few years ago, with all wheels integrated into the body, resulting in a significant reduction in air resistance. In order for our team to be in line with the planned requirements, we decided to move the wheels out of the vehicle body, which resulted in worse outcomes than the other teams. But our destination in mind was racing in 2014.
Since the aerodynamic drag force

\[ F_x = c_x S_x \rho \frac{V^2}{2} \tag{1} \]

depends on two design factors – the shape coefficient (\(c_x\)) and the area of the vehicle's frontal projection (\(S_x\)) – and one operational factor – the velocity (\(V\)) – the main goal while looking for solutions was to minimize the frontal area and optimize the shape. When minimizing the frontal area, the greatest influence came from ergonomic factors. First, drivers who met the requirements of the regulations and had the best geometrical parameters were selected; then their body shapes were modeled, and the minimum shape necessary for their safe driving was chosen (Figure 3).

Figure 3. Ergonomic analyses of the body shape.

Another limitation which proved tricky was the outer shape and, in particular, the developable surface shape of the transparent windscreen of the vehicle. Technological and economic aspects played a decisive role in adapting a glider windscreen, which we had received, to the shape of the vehicle body. Therefore, the calculations and the choice of the optimal shape of the cockpit focused on adapting the upper parts of the body to the lower part, together with the flexible beam of the chassis and other details such as the hubcaps and the steering knuckle. The analysis was carried out using CFD software (Figure 4).

Figure 4. CFD analysis of vehicle body.

2.2. Verification of vehicle shape parameters

The CFD analysis results allowed us to select better solutions by means of a comparative method [10]. Nevertheless, there were some doubts concerning the final value of the drag. As the wind tunnel tests were planned for a later date, a numerical method was developed to verify the selected design features of the vehicle by using a reverse simulation model, which is described in detail in [14]. The basic assumption of the method is the use of an appropriately adjusted, modified simulation model to calculate the value of a selected parameter of a given structural feature, e.g. the Cx drag coefficient, based on the known results of test drives. The simulation model, by means of optimization methods, finds the value of the searched-for feature such that the difference between the test drive results and the simulation is as small as possible [14]. The final value was additionally verified later in further research in the wind tunnel at the Institute of Aviation in Warsaw [10]. The result obtained by the reverse simulation model method (Cx = 0.225) is almost identical to the one obtained in the wind tunnel tests (Cx = 0.23).

3. Developing the strategy for driving during the race

Developing the strategy for driving during the race includes:

- the elaboration of a simulation model and the formulation of the optimization task;
- verification tests;
- the development of an optimal strategy for the Rotterdam track.

3.1. Formulation of optimization task

Another significant issue that must be taken into account in designing electric vehicles is the optimization of the relevant features of every part of the developed system. A comprehensive review of the latest research and development trends in this domain can be found in [15]. Recently, more and more attention has been paid to the second group of problems, where advanced control methods have mainly been developed. A great number of these methods are strongly connected to the problem of fuel saving. Keulen et al. [16] proposed velocity trajectory optimization for hybrid electric vehicles in order to minimize fuel consumption. Their approach enables fuel savings of up to 5% compared, e.g., to a cruise controller.
The authors of the paper [17] present a path and speed planner for automated public transport vehicles in unstructured environments. The proposed method makes it possible to compute analytically a comfort-constrained profile of velocities and accelerations of the electric vehicles. Another path planning method is suggested by Farooq et al. [18]: they used a soft computing method, so-called particle swarm optimization, in order to minimize the length of the path and to meet constraints on the total travelling time, total time delay due to signals, total recharging time, and total recharging cost. A very interesting approach is shown in [19]: the paper considered the simultaneous optimisation of either drive train or driving strategy variables of the hybrid electric vehicle system through the use of a multi-objective evolutionary optimiser.

In general, the planning of the competition strategy can be formulated as an optimization problem (described in detail in [8], [9]) in which the best possible trajectory of the linear velocity is sought. As expected, it can be achieved by optimizing the velocity set-points as a function of the distance. The main purpose of the optimization process is to adjust the values of the velocity set-points at different points of the laps in order to minimize a multiple-objective function F, which can be formulated taking into account the following criteria. The first criterion is correlated with the total energy consumption. The second one is associated with the required distance that should be covered during a competition. The last objective is connected with the second one and deals with the set limit value of the travel time that should not be exceeded. Assuming that all of these objectives are not contradictory, the optimization task can be written as follows [8]:

\[ \min_{v_c} \; F(v_c) = \left[ f_1(v_c), \; f_2(v_c), \; f_3(v_c) \right]^T \quad \text{subject to} \quad v_{ci}^{(L)} \le v_{ci} \le v_{ci}^{(U)}, \quad i = 1, 2, \ldots, i_{max} \tag{2} \]

where \(v_{ci}^{(L)}\) and \(v_{ci}^{(U)}\) are the lower and upper values of the boundary constraints, which should be chosen taking into account the properties of the electric vehicle, and \(i_{max}\) denotes the total number of parts of the race path used to digitize the raceway laps.

The optimization problem described above can be solved in several ways. Generally, multi-objective problems do not have a single global solution, and it is reasonable to investigate a set of points, each of which satisfies the objectives \(f_i\). A well-grounded approach to searching for an optimal solution is the global criterion method [20,21], in which the objectives \(f_1\), \(f_2\) and \(f_3\) are combined to form a single function. One of the most general indirect utility functions in this matter can be expressed in its simplest form as the weighted exponential sum:

\[ U(v_c) = \sum_{i=1}^{3} w_i f_i(v_c) = w_1 \left[ \frac{1}{\eta_{sim}} \right]^{\lambda_1} + w_2 \left[ H(d_{cv} - d_{sim}) \, \frac{d_{cv} - d_{sim}}{d_{cv}} \right]^{\lambda_2} + w_3 \left[ H(t_{sim} - t_{cv}) \, \frac{t_{sim} - t_{cv}}{t_{cv}} \right]^{\lambda_2} \tag{3} \]

where:
- \(H\) is the Heaviside step function;
- \(f_i\) and \(w_i\) indicate the i-th criterion and its importance (the value of the parameter \(w_i\) should be chosen arbitrarily from the range [0, 1]);
- the exponent \(\lambda_i\) determines the extent to which the method is able to capture all of the Pareto-optimal points for either convex or non-convex criterion spaces;
- \(\eta_{sim}\) [km/kWh] is an estimator of the efficiency of the system, calculated on the basis of the total energy consumption during the ride;
- \(d_{cv}\) and \(d_{sim}\) [m] are the reference path length and the value of the covered distance obtained as a result of the simulation;
- \(t_{cv}\) and \(t_{sim}\) [s] represent the set limit value of the travel time and the travel time calculated on the basis of the simulation;
- \(v_c\) is the velocity vector, where \(v_{ci}\) [m/s] is defined for a certain section of the route (the size of this vector depends on the complexity of the route).

The first component of the objective function is responsible for minimizing energy consumption. The two other components are penalty factors acting as limitations imposed on the average velocity of the race car. Their goal is to ensure that the vehicle drives the assumed route in the optimum time. Various kinds of algorithms can be applied to solve the problem formulated in the form of (2) or (3). On the one hand, standard optimization methods, e.g. gradient/Jacobian/Hessian-based algorithms, cannot be effectively employed in this context due to the form of the objective functions \(f_2\) and \(f_3\), as well as the non-deterministic parts of the simulation model. On the other hand, for these types of problems, stochastic optimization methods in their classic form, e.g. Monte Carlo techniques, are very often unable to find an accurate solution with guaranteed polynomial-time convergence. For these reasons, the optimal solution (the minimum of the objective function U) is searched for using evolutionary algorithms (EAs). EAs are known as methods for solving either single- or multi-objective optimization problems [20, 22]. For the purpose of race strategy planning, the EA solution described in [8,9] was used.

3.2. Verification tests

As early as the formulation stage of the overall optimization, alternative forms of the objective function, calculation methods and specific calculation parameters leading to results as close as possible to reality were considered.

Figure 5. Speed comparison determined through optimization and measured during the test.

Verification of the calculation results was carried out in a planned way that respects the fundamental limitations, such as the lack of access to the target path of the street track in Rotterdam. Therefore, access to a test track was organized, and the geometric parameters of the track were verified in detail, including basic weather data records. In addition, the research verification methodology included driving scenario restrictions limiting the driving by the driver, and an established accuracy of mapping the speed profile during the race. Thus, the race was divided into three types of laps, i.e. start (1 lap), central (8 laps) and final (1 lap), and it was assumed that for each of these types of laps a separate velocity profile would be implemented. Furthermore, a subdivision of each lap into pre-established sections, with speeds varying linearly between the calculated speeds required at the ends of these sections, was assumed. The lengths of the sections were selected experimentally, with regard to both the driver's perception and the complexity of the calculations. Figure 5 shows an example graph of the vehicle speed on the first lap of the test track. The figure includes the speed which was calculated for the optimal strategy and the speed of travel registered during the ride when this strategy was being implemented. The correlation coefficient is 0.82.
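To ground the formulation of Section 3.1, the sketch below evaluates a penalized objective in the spirit of Eqs. (2)-(3), with the exponents \(\lambda_i\) set to 1 and the Heaviside factors expressed via max(·, 0), and improves the velocity set-points with a simple (1+1) evolution strategy. The surrogate simulator and all constants are illustrative assumptions; the actual EA and simulation model used by the team are described in [8,9].

```python
# Toy velocity-profile optimization in the spirit of Eqs. (2)-(3);
# simulate() is a crude steady-speed surrogate, not the real simulation model.
import random

RHO, G, M = 1.2, 9.81, 140.0             # assumed physical constants
CX, SX, CRR, ETA = 0.23, 0.30, 0.0025, 0.85
D_CV, T_CV = 16_000.0, 2_400.0           # required distance [m], time limit [s]
N_SEG = 10                               # equal-length route sections
SEG = D_CV / N_SEG
W1, W2, W3 = 1.0, 10.0, 10.0             # illustrative weights w1, w2, w3
V_LO, V_HI = 4.0, 12.0                   # bounds on the velocity set-points

def simulate(vc):
    """Per-section drive energy and travel time at constant section speeds."""
    e_j = sum((0.5 * RHO * CX * SX * v**2 + CRR * M * G) * SEG for v in vc) / ETA
    t_sim = sum(SEG / v for v in vc)
    eta_sim = (D_CV / 1000.0) / (e_j / 3.6e6)   # efficiency [km/kWh]
    return eta_sim, D_CV, t_sim

def U(vc):
    eta_sim, d_sim, t_sim = simulate(vc)
    u = W1 / eta_sim                            # f1: energy consumption
    u += W2 * max(D_CV - d_sim, 0.0) / D_CV     # f2: distance shortfall penalty
    u += W3 * max(t_sim - T_CV, 0.0) / T_CV     # f3: overtime penalty
    return u

vc = [random.uniform(V_LO, V_HI) for _ in range(N_SEG)]
best = U(vc)
for _ in range(5000):                           # simple (1+1) evolution strategy
    trial = [min(max(v + random.gauss(0.0, 0.3), V_LO), V_HI) for v in vc]
    u_trial = U(trial)
    if u_trial < best:
        vc, best = trial, u_trial
print("optimized set-points [m/s]:", [round(v, 2) for v in vc])
```

In this toy setting the optimizer settles near the slowest profile that still respects the time limit, which is the qualitative behaviour the penalty terms of Eq. (3) are designed to produce.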
3.3. Development of an optimal strategy for the race in Rotterdam

The strategy for running the race is selected directly before the race. The race is preceded by a test drive in which the team applies the previously verified strategy, and the final scenario is constructed in agreement with the drivers, who assess the feasibility of the calculated strategy. A strategy elaborated on the basis of numerical simulations and optimization leads to results up to twice as good as a strategy determined by the drivers themselves.

4. Conclusions

Reducing the energy consumption of an electric vehicle taking part in the Shell Eco-marathon has been addressed with a specially developed methodology, including the development of a simulation model and a reverse simulation model, simulation experiments, laboratory experiments, verification testing, identification and optimization of the design features, as well as the design and verification of driving strategies. The development methodology introduced in the project takes into account the analysis of the impact of decisions on energy consumption. This analysis is done at each stage and provides a quantifiable answer to the question of what impact a planned action or decision has on the final outcome of the race. A key element of this methodology is the simulation model. During the work, a series of numerical tests, bench tests and tests during the race were undertaken, which confirmed the adopted way of proceeding and the preliminary estimates of the impact on the final result of the race. The final score is the result of compromises, adopted mainly in view of the planned changes to the regulations and of the economic and sporting risks. The possibilities of developing the existing structure are limited due to the need for changes in the main components of the structure. After three years of study, the team has the experience and a complete vision of the possibilities for improving the result, allowing much better new structures to be built.

References

[1] Shell Eco-marathon Europe, http://www.shell.com/global/environmentsociety/ecomarathon/events/europe.html (accessed 04-2015).
[2] Students Scientific Association of Machinery Design, www.mkm.polsl.pl (accessed 04-2015).
[3] K. Sternal, A. Cholewa, W. Skarka, M. Targosz, Electric Vehicle for the Students' Shell Eco-Marathon Competition. Design of the Car and Telemetry System, in J. Mikulski (ed.): Telematics in the Transport Environment, 12th International Conference on Transport Systems Telematics, TST 2012, Katowice-Ustroń, Poland, October 10–13, 2012, Communications in Computer and Information Science, Springer-Verlag, Berlin Heidelberg, 2012, pp. 26-33.
[4] D.W. Gao, C. Mi, A. Emadi, Modeling and Simulation of Electric and Hybrid Vehicles, Proceedings of the IEEE, Vol. 95, No. 4, pp. 729-745, April 2007.
[5] R. Ghorbani, E. Bibeau, P. Zanetel, A. Karlis, Modeling and Simulation of a Series Parallel Hybrid Electric Vehicle Using REVS, American Control Conference, ACC '07, 9-13 July 2007, pp. 4413-4418.
[6] S. Soylu, Electric Vehicles – Modelling and Simulations, Intech Open, Rijeka, 2011.
[7] M. Targosz, Test bench for efficiency evaluation of belt and chain transmission, Proceedings of the XII International Technical System Degradation Conference, PNTTE, Liptovsky Mikulas, Slovakia, 03-06.04.2013, pp. 142-143.
[8] M. Targosz, W. Skarka, P. Przystałka, Simulation and optimization of prototype electric vehicle – methodology, In: D. Marjanovic et al.
(eds.): Proceedings of the 13th International Design Conference, Dubrovnik, Croatia, May 19-22, 2014, Vol. 2, Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb, 2014, pp. 1349-1360.
[9] M. Targosz, M. Szumowski, W. Skarka, P. Przystałka, Velocity Planning of an Electric Vehicle Using an Evolutionary Algorithm, In: J. Mikulski (ed.): Activities of Transport Telematics, 13th International Conference on Transport Systems Telematics, TST 2013, Katowice-Ustroń, Poland, October 23–26, 2013, Springer-Verlag, Berlin Heidelberg, 2013, pp. 171-177.
[10] W. Danek, J. Gniłka, M. Pawlak, Aerodynamic analysis of wheeled vehicle MuSHELLka, Modelling and Optimization of Physical Systems, 17th International Seminar of Applied Mechanics, Ustroń, 13.09-15.09.2013, pp. 7-12.
[11] W. Skarka, M. Otrębska, P. Zamorski, K. Cichoński, Advanced driver assistance systems for electric race car, In: I. Horvath, Z. Rusak (eds.): Tools and Methods of Competitive Engineering, TMCE 2014 Symposium, Budapest, Hungary, May 19-23, 2014, pp. 1487-1494.
[12] W. Skarka, M. Otrębska, P. Zamorski, K. Cichoński, Designing safety systems for an electric racing car, In: J. Mikulski (ed.): Activities of Transport Telematics, 13th International Conference on Transport Systems Telematics, TST 2013, Katowice-Ustroń, Poland, October 23-26, 2013, Springer-Verlag, Berlin, pp. 139-146.
[13] W. Skarka, Using Knowledge-based Engineering Methods in Designing with Modular Components of Assembly Systems, In: D. Marjanovic et al. (eds.): Proceedings of the Design 2010 11th International Design Conference, Dubrovnik, May 17-20, Vol. 1-3, pp. 1837-1846.
[14] W. Skarka, Application of numerical inverse model to determine the characteristic of electric race car, In: I. Horvath, Z. Rusak (eds.): Tools and Methods of Competitive Engineering, TMCE 2014 Symposium, Budapest, Hungary, May 19-23, 2014, pp. 263-274.
[15] R.V. Rao, V.J. Savsani, Mechanical Design Optimization Using Advanced Optimization Techniques, Springer-Verlag, London.
[16] T. van Keulen, B. de Jager, D. Foster, and M. Steinbuch, Velocity trajectory optimization in Hybrid Electric trucks, In: Proceedings of the American Control Conference (ACC), IEEE, 2010, pp. 5074-5079.
[17] J. Villagra, V. Milanés, J. Pérez, and J. Godoy, Smooth path and speed planning for an automated public transport vehicle, Robotics and Autonomous Systems, Volume 60, Issue 2, 2012, pp. 252–265.
[18] U. Farooq, Y. Shiraishi, and S.M. Sait, Multi-Constrained Route Optimization for Electric Vehicles (EVs) using Particle Swarm Optimization (PSO), Intelligent Systems Design and Applications (ISDA), 2011, pp. 391-396.
[19] R. Cook, A. Molina-Cristobal, G. Parks, C. Osornio Correa, and P. John Clarkson, Multi-objective Optimisation of a Hybrid Electric Vehicle: Drive Train and Driving Strategy, Evolutionary Multi-Criterion Optimization, Lecture Notes in Computer Science, Volume 4403, Springer-Verlag, Berlin, 2007, pp. 330-345.
[20] C. Bil, Multidisciplinary Design Optimization: Designed by Computer, In: J. Stjepandić et al. (eds.): Concurrent Engineering in the 21st Century, Springer International Publishing, Switzerland, 2015, pp. 421-454.
[21] R.T. Marler, J.S. Arora, Survey of multi-objective optimization methods for engineering, Structural and Multidisciplinary Optimization, 26, 2004, pp. 369-395.
[22] K. Deb, Multi-objective optimization using evolutionary algorithms, Wiley, Hoboken, 2009.
Part 8
Service-Oriented Design

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-513

Technical-Business Design Methodology for PSS

Margherita PERUZZINI a,1, Eugenia MARILUNGO b and Michele GERMANI b
a "Enzo Ferrari" Engineering Department, University of Modena and Reggio Emilia, via Vivarelli 10, 41125 Modena (Italy)
b Dept. of Industrial Engineering and Mathematical Sciences, Università Politecnica delle Marche, via Brecce Bianche 12, 60131 Ancona (Italy)

Abstract. Concurrent Design (CD) is a systematic approach to integrated product design that emphasizes the response to customer expectations and the combination of creativity and engineering. Such a concept also lies at the basis of the Product-Service System (PSS), which represents a valid way for companies to add value to their products, create new value propositions, and easily improve their solution portfolio. Indeed, fulfilling customer needs is fundamental for creating successful industrial PSSs (IPSSs), which aim at combining products and services into a marketable solution. However, the integration of technical and business aspects is crucial to success. In this context, this paper proposes an integrated methodology for PSS addressing both technical and business aspects; it adopts a QFD-based approach to structure PSS information along the different process stages, considering four main domains: customer, functional, assets and network. It allows a technical feasibility analysis to be carried out and a business framework to be defined at the same time, so as to have a robust design concept and a reliable business model from the early design stages. The method is based on the direct involvement of the customer voice, according to the CD paradigm. The proposed method also allows the network of stakeholders to be defined earlier and the network itself to be dynamically reconfigured along the process, promoting the creation of the lean enterprise.

Keywords. Product-Service System (PSS); PSS design; Quality Function Deployment (QFD); Business Model (BM); Industrial case study

Introduction

The modern economy is changing from producing material goods to offering functions that combine products and services. For instance, IKEA started offering "to create a home" instead of selling furniture, while Rolls-Royce offered "total care" and "power by the hour" rather than selling jet engines or spare parts. As a consequence, Product-Service Systems (PSSs) are assuming growing importance in industry. They integrate tangible artefacts and intangible services to achieve sustainability, improve enterprise competitiveness, and better meet customer needs [1]. Designing PSSs represents a new perspective for traditional manufacturing companies, which establish their business on producing goods, allowing them to evolve their business model toward a service-oriented scenario and to adopt a new interpretation of the basic design concepts that embraces both products and services [2].

1 Corresponding Author. Margherita Peruzzini, University of Modena and Reggio Emilia, via Vivarelli 10, 41125 Modena (Italy), e-mail: margherita.peruzzini@unimore.it
The benefits behind PSS are numerous: for the customer, who utilizes the function provided; for the provider, who can optimize manufacturing and maintenance; and for the environment and society at large, since less waste is produced [3]. However, creating PSSs obviously affects the manufacturer's design and development process; for instance, designers have to consider more process variables and merge both tangible and intangible assets, while the development process must be reorganized and its activities properly synchronized. Several methodologies to manage PSS can be found in the literature: in particular, the network on Sustainable Product-Service Systems Development (SUSPRONET) identified the thirteen most important methodologies for PSS design and development [4]. Most of them focus on the technical development stages, while some focus on innovation aspects and sustainability. However, they face PSS in two distinct ways: from a technical perspective or from a business perspective. In this direction, new approaches have been proposed [5] and some prototype solutions achieved [6]. They demonstrated that important changes affect the company processes in creating PSSs: firstly, the traditional product lifecycle has to be enhanced to include service management; secondly, the product-oriented company model must be extended to realize a service-oriented ecosystem [7]. The interrelations between physical products and intangible services also have to be managed, by creating new relationships among the stakeholders [8]. Finally, the role of human factors is fundamental to robustly define the customers' expectations and, consequently, create functions satisfying the customer needs.

This paper promotes the combination of technical and business aspects in defining a design methodology for PSS. In particular, it supports PSS design from the earliest lifecycle stages until the definition of the production network, by combining technical aspects, User-Centred Design (UCD) principles, and business modelling. The design is structured into steps where technical design activities and business-centred activities run in parallel, to effectively support feasibility analysis and the comparison of alternative use scenarios from the early stages.

1. Research background

PSS arises from the idea of the Extended Product (EP) [9], according to which services can be used to extend the product features in order to differentiate the product itself and support its use through the integration of tangible assets (i.e. materials, technologies, processes, and everything typically related to the product) and intangible assets (i.e. skills, competencies, services, and all information related to human factors). The final result is a complex system able to combine products, services, enabling technologies and the proper infrastructure to realize the desired functions. Due to its complexity, designing PSSs requires not only developing numerous and interrelated activities, but also involving different types of actors in order to cover the various resources required [10]. The stakeholders are also required to extend their responsibility over the lifecycle, to properly produce, deliver, manage, reuse, remanufacture, and recycle the PSS [10]. This situation may cause incompatibilities and inevitably induce technical and business conflicts, which can be responsible for falling performance [11].
For conflict management in traditional product development, several approaches have been developed [12]. However, few studies deal with these conflicts from the PSS viewpoint. Recently, some researchers have paid attention to methods and tools to design PSS-tailored networks: Krucken and Meroni [13] proposed a pro-active approach based on communication and strategic conversations among the partners; Wang and Durugbo [14] proposed fuzzy techniques to evaluate levels of transition from product-focused operations to service-oriented operations; Watanabe and Shimomura [15] and Nemoto et al. [16] paid attention to describing the interactions among the ecosystem partners.

In recent years, business aspects have also attracted more and more attention. The existing literature indicates that defining a Business Model (BM) can be useful to implement PSS: despite the paucity of guidelines, the definition of a proper BM is crucial for a successful PSS [17]. BM refers to the logic of the firm, the way it operates and how it creates value for its stakeholders; since PSS is a new proposition based on added value, a good BM definition is fundamental for PSS. Morris et al. [18] analysed the most recent works about BM frameworks and provided an appreciated overview. A plethora of elements has been defined: nomenclature and arrangement vary depending on the research perspective, but some common aspects can be distinguished, such as the Value Proposition or Customer Value Proposition. The CANVAS model presented by Osterwalder [19] is probably one of the most robust models, able to describe the business organization through 9 basic building blocks covering four main areas: product, customer interface, infrastructure management, and financial aspects. Barquet et al. [17] pushed the use of BM to support PSS definition, while Guidat et al. [20] gave a set of guidelines to define innovative BMs for remanufacturing, to convert products into PSSs. However, finding guidelines on how to use such models for PSS concretely in the design process is difficult, and a general-purpose model is still missing.

Finally, another important aspect of PSS is sustainability. Tan et al. [21] stated that PSS approaches are sustainable innovation strategies in a total lifecycle perspective. The concept of lifecycle has traditionally been applied to physical products, and thus to manufacturing companies. A recent study proposed a review of such approaches from different viewpoints: from value to cost, functions, qualities, or performances [22].

In this context, the Concurrent Design (CD) approach can be used for integrated product or project design by emphasizing the response to customer expectations and the combination of creativity and engineering [23]. CD focuses mainly on the early phases of a project or product and aims at translating multidisciplinary design meetings into concrete parameters, supported by experts representing all perspectives of the product/project lifecycle (e.g. technical, cost, risk, schedule), using a common reference model to exchange information. This approach brings traceability and enables a faster, more efficient and reliable project execution. It could validly be applied to PSS to model its complexity and to move from customer expectations to technical issues; indeed, PSSs require strong activity integration and concurrency by definition, as well as a systematic representation of the PSS lifecycle. A lifecycle for PSS has been proposed by Kimita et al.
[24]: it consists of four phases (value analysis, design, execution and evaluation). The value analysis is crucial, as the goals are extracted there to start the design phase. There are few studies about how to structure this phase for PSS, ranging from axiomatic design [25] to service-engineering modelling methods [26]. Axiomatic design aims at mapping processes with respect to four domains: customer needs, functional requirements, design parameters, and process variables. Firstly, the customer needs are converted into functional requirements as the minimum set of independent requirements that completely characterize the functional needs of the design solution. Then, functional requirements are embodied into design parameters, and design parameters determine the process variables. Kimita et al. [24] proposed a design method to address conflicts in PSS development by adopting axiomatic design to map the PSS entities. Furthermore, other techniques can be used to detail the design stage, such as Quality Function Deployment (QFD), the Analytic Hierarchy Process (AHP) and Hierarchical Task Analysis (HTA). QFD aims at bringing a new product model to market [27] and consists of several activities supported by various matrices able to translate the customers' requirements into the appropriate technical requirements [28]. QFD can also address sustainability by incorporating environmental aspects [29]. Differently, AHP decomposes a complex system into a hierarchy to capture the basic elements of the problem, in order to solve multi-solution problems affected by various factors (i.e. functions, aesthetics, safety, cost, operation, reliability, lifecycle variables) [30]. Similarly, HTA [31] addresses functional requirements as well as the specific actions that are required to satisfy these requirements. Recently, the effectiveness of QFD and hierarchical modelling for analysing the relations between customer needs and technical requirements in PSS design has been demonstrated [32, 33]. Evidence of QFD adoption to incorporate the voice of customers at the early stages of PSS design and to develop PSS is presented in [34, 35].

2. The T-B design methodology for PSS

From the previous literature review, a set of important findings has been derived for the development of a new methodology:
- a lifecycle approach is needed when talking about PSS;
- a technical and systematic design approach is necessary to face PSS design issues;
- a business approach is necessary to map the relationship between the company and the customer, which is fundamental to make a PSS run effectively;
- a business ecosystem must be defined to consider the relations among stakeholders.

On this basis, the research defines a new methodology combining technical and business aspects (T-B) along the four domains characterizing PSS: customer, functional, asset, and network. The reference design process is adapted from Pahl et al. [36], and the different design stages are organized into a set of inputs and outputs according to the QFD philosophy. Indeed, the method is structured according to a set of correlation matrices that map the correlations among the most significant aspects at the different design stages, where the output of one matrix becomes the input of the following one. Four matrices, referring to the four considered domains, are identified.

2.1. The T-B methodology steps

The T-B methodology starts from the customer domain and moves through the other ones (functional, asset and network). The method can be synthetized in six main steps:

Step 1. Definition of the customer needs and demands: it is based on the adoption of UCD techniques depending on the specific sector and the market typology (i.e. focus groups, interviews, desk research, ethnography, personas) and is generally carried out by the marketing staff. It allows a set of needs and their relative weights to be defined, expressed according to a 5-point Likert scale. Similarly, demands are elicited: they are extremely important for PSS since they represent the key features to achieve a high perceived value. Data are collected in Matrix 1, which matches needs and demands by expressing the inner correlation between them according to a 0-3-9 scale (0 represents no correlation, 9 the highest correlation). Ethnography and surveys [37] are used for eliciting such correlations. Ethnography consists of providing a qualitative description of the human social condition, based on fieldwork and the observation of users in their natural setting. A survey is added to make the user study more interactive and also to collect the users' feedback directly. In this way it is possible to find out the most relevant needs by summing all the correlation values along each matrix column and considering the highest results. Similarly, the most significant demands are obtained by applying the sum-product function along each row, where the need weight is the multiplier for each value, and considering the highest results (a sketch of this scoring is given below). At the same time, from a business viewpoint, two areas of the BM can be defined according to CANVAS (i.e. customer needs and value proposition as a synthesis of the demands).
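The sketch below mirrors the Matrix 1 scoring just described: column sums to rank the needs, and a weighted sum-product along each row to rank the demands. The needs, demands, weights and 0-3-9 correlations are invented for illustration and are not taken from the case study.

```python
# Minimal sketch of the Matrix 1 scoring described in Step 1; all data invented.
import numpy as np

needs = ["easy to use", "low running cost", "quiet operation"]
need_weights = np.array([5, 4, 2])  # 5-point Likert weights per need

demands = ["remote monitoring", "eco wash cycle", "extended warranty"]
# rows = demands, columns = needs; entries on the 0-3-9 correlation scale
matrix = np.array([[9, 3, 0],
                   [0, 9, 3],
                   [3, 0, 9]])

# Most relevant needs: sum of the correlation values along each column.
need_scores = matrix.sum(axis=0)
# Most significant demands: sum-product along each row, with the need
# weight as the multiplier for each value.
demand_scores = matrix @ need_weights

print("need scores  :", dict(zip(needs, need_scores)))
print("demand scores:", dict(zip(demands, demand_scores)))
```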
The method can be synthesized in six main steps:

Step 1. Definition of the customer needs and demands: it is based on the adoption of UCD techniques depending on the specific sector and market typology (e.g. focus groups, interviews, desk research, ethnography, personas) and is generally carried out by the marketing staff. It allows defining a set of needs and their relative weights, expressed on a 5-point Likert scale. Similarly, demands are elicited: they are extremely important for PSS, since they represent the key features to achieve a high perceived value. Data are collected in Matrix 1, which matches needs and demands by expressing the inner correlation between them on a 0-3-9 scale (0 represents no correlation, 9 the highest correlation). Ethnography and surveys [37] are used for eliciting such correlations. Ethnography provides a qualitative description of the human social condition based on fieldwork and observation of users in their natural setting. Surveys are added to make the user study more interactive and to collect the users' feedback directly. In this way it is possible to find out the most relevant needs by summing all the correlation values along each matrix column and considering the highest results. Similarly, the most significant demands are obtained by applying the sum-product function along each row, where each correlation value is multiplied by the corresponding need weight, and considering the highest results (a minimal computational sketch of this scoring is given at the end of this section). At the same time, from a business viewpoint, two areas of the BM can be defined according to CANVAS (i.e. customer needs, and the value proposition as a synthesis of the demands).

Step 2. Definition of the user tasks: it adopts a specific UCD technique (i.e. role-playing) to highlight the tasks to be executed to satisfy the selected needs, and it moves from the customer to the functional domain. Role-playing is performed by experts who play characters in the real context of use, simulating the actions and moods of the consumers. It allows a vivid and focused exploration of situations and generation of ideas, in order to "be in the moment" and share the customers' experiences [38]. From a technical viewpoint, tasks are necessary to deal with PSS functions; from a business viewpoint, tasks are connected to the PSS customer relationship, since they express how the users interact with the PSS, how they communicate with the PSS providers, under which circumstances and how frequently, how they access both products and services, and so on.

Step 3. Requirements elicitation and functions definition: it moves entirely into the functional domain and transfers well-known activities of traditional design into the PSS context. The most significant demands from Step 1 are organized into a list of basic, technical and attractive requirements by the HTA technique [31]. HTA addresses the underlying mental processes that give rise to errors during task execution, connecting the higher-level mental functions with the specific actions required to satisfy the requirements. After that, the PSS functions are defined from the correlation between requirements and tasks, carried out by the Functional Analysis System Technique (FAST) according to Kano's model [39]. In this step the combined contributions of the marketing staff, the technical staff and the service personnel are fundamental.
From a business perspective, the defined functions allow eliciting the BM key activities.

Step 4. Assets definition: it focuses on the definition of the T/I assets needed to realize the PSS so as to satisfy the value proposition and the customer needs. It starts from the ecosystem analysis and maps all the potential partners and their features (e.g. skills, competences, services, products, response time, cost, compliance with regulations). Once the partners are fully described, functional modelling is used to relate functions and T/I assets. The Unified Modelling Language (UML) is used to model the detailed PSS functional structure and identify the necessary assets. From a technical viewpoint, the result is the list and description of the needed T/I assets. From a business viewpoint, the assets represent the key resources and the distribution channels.

Step 5. Partners' selection: this step moves into the network domain and aims at selecting the most appropriate partners on the basis of the correlation between the assets and the specific partners' resources and availability, taking risk assessment into account. Risk assessment focuses on the supply chain, and Supply Chain Risk Management (SCRM) methods are used: they consider risks within the supply chain in terms of supply costs, delivery time, supplier reliability and supply quality, as well as risks external to the supply chain, according to a coordinated approach amongst the chain members to reduce the vulnerability of the supply chain as a whole [40]. For the external risks, the so-called Social, Technological, Economical, Environmental and Political (STEEP) analysis is applied. At this stage the main actors are the marketing, technical and purchasing staff. This step is mainly business-oriented, since it aims at defining the key partners in the BM.

Step 6. Service modelling: this step uses the blueprinting technique [41] to model the service processes, starting specifically from the defined CANVAS BM. Blueprinting is a customer-focused approach widely used in service innovation: it allows visualizing the PSS processes, connecting the underlying support processes throughout the organization, and helping to define channels (information flow channels as well as distribution and delivery channels).

Figure 1 shows the method matrices and the information defined as inputs and outputs in the four considered domains. Information in red represents the main outputs of each step, while blue labels refer to the business aspects.

Figure 1. T-B design methodology for PSS
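As announced in Step 1, the following minimal sketch shows how the Matrix 1 scoring could be computed; all weights and correlation values are illustrative, not taken from the case study. The same weighted scheme chains through the following matrices, where the highest-ranked outputs of one matrix seed the rows of the next.

```python
# Sketch of the Step 1 scoring (illustrative values).
# Columns = needs (ranked by plain column sums), rows = demands
# (ranked by a sum-product with the 5-point need weights), as in the text.

need_weights = [5, 3, 4]        # Likert weights, one per need
correlations = [                # 0-3-9 scale: 0 = none, 9 = strongest
    [9, 0, 3],                  # demand 1 vs. needs 1..3
    [3, 1, 0],                  # demand 2
    [0, 9, 1],                  # demand 3
]

# Most relevant needs: sum the correlation values down each column.
need_scores = [sum(row[j] for row in correlations)
               for j in range(len(need_weights))]

# Most significant demands: sum-product of each row with the need weights.
demand_scores = [sum(w * c for w, c in zip(need_weights, row))
                 for row in correlations]

print("need scores:  ", need_scores)    # [12, 10, 4]  -> need 1 ranks first
print("demand scores:", demand_scores)  # [57, 18, 31] -> demand 1 ranks first
```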
3. The industrial use case

3.1. The industrial context and issues

The industrial case study concerns the design of an innovative PSS solution for Washer Dryers (WD). The traditional product is currently designed and produced by an Italian company, a leader in household appliances, with a worldwide network of suppliers and commercial branches. The company is interested in innovating its current business through services. Recently, it worked on connectivity issues and proposed a set of connected devices (e.g. washing machines, dryers, fridges, ovens) addressing the smart home concept. However, the company is still producing and selling products, while services remain little more than commercial add-ons, so that the real benefits for final users are still hidden due to the lack of analysis of the business aspects. At present, technical aspects are faced at the beginning of the design activity, while business aspects are defined only later, during the implementation stage. The goal is to innovate the offer by creating an appealing PSS solution that integrates technical and business design. The main challenge is satisfying the real market needs and identifying the right business model able to satisfy the customers' expectations.

3.2. Method application to design a new PSS

This section summarizes the method application by presenting the results obtained in the form of matrices. Figure 2 shows the results obtained in Matrix 1 and Matrix 2. In Matrix 1 needs and demands are elicited, and the ranking reveals the most important ones (highlighted in light blue). Then, the most important needs are used to define the tasks and requirements by, respectively, role-playing and HTA. From the beginning of the design stage the related BM starts to be filled: the value proposition is derived from the most significant demands and the customer segments from the analysis of the needs, while key activities and the customer relationship are defined from the functional analysis. Figure 3 presents the results from Matrix 3 and Matrix 4. In Matrix 3 the PSS assets are derived from the ecosystem analysis on the basis of the above-mentioned functions, while Matrix 4 correlates the assets and the specific partners' resources. At a business level, the key resources and key partners are identified. The last steps relate to the definition of the BM (Figure 4) and to service modelling by blueprinting (Figure 5).

For the new PSS for WDs, the target consumers are "house managers" (people who are very active at home and pay great attention to home management in general) and "efficiency seekers" (usually young people who like to be efficient and smart, and are attracted by new technological solutions). The customer relationship is mainly based on the use of a mobile-web application and a 24/7 call centre for assistance. Key partners and key activities are directly related to the previous analyses. The costs consider WD production, the service architecture and End-of-Life management. Finally, the revenue streams are created according to a Pay-per-Service model.
Figure 2. Matrix 2 (correlation between tasks and PSS requirements) (A) and Matrix 3 (correlation between T/I assets and PSS functions) (B)
Figure 3. Matrix 4 (correlation between T/I assets and PSS functions) (A) and Matrix 5 (correlation between T/I assets and partner resources) (B)

Figure 4. CANVAS BM for the designed PSS

Figure 5. Service modelling for the designed PSS (blueprinting)

The case study demonstrated the validity of the method in organizing and structuring the design knowledge, as well as in guiding designers and managers towards a complete PSS model definition. The method application overcomes the main limitations of the previous design practice, supported by traditional tools (brainstorming and focus groups), which generated confused PSS concepts, absence of priority among functions, difficult service process mapping, and critical partners' selection. Monitoring the design team, we found that the method application avoids confusion and reduces the time to market.

4. Conclusions
The research proposed a new design methodology for PSS to overcome the main limitations of the traditional product-centred process, which hinder manufacturing companies moving to PSS. The new method integrates technical and business aspects according to a strategic approach. It provides structured guidelines for designers and managers to define both the technical requirements and the business model according to a QFD-based procedure. In this way technical aspects are considered more consciously, and the evaluation of business aspects is anticipated, providing advantages also in the technical area. An industrial case study is then presented to demonstrate how the proposed method can successfully integrate technical and business aspects. The method has proved to validly guide designers and managers in defining the new PSS value proposition, the PSS network and the main service processes. The main limitation concerns manual execution, which can be burdensome for complex PSSs. Future work will be oriented to providing a software tool supporting the methodology.

References

[1] M.J. Goedkoop, C.J.G. Van Halen, H.R.M. Riele, P.J.M. Rommens, Product-Service Systems, Ecological and Economic Basics, PWC, The Hague, 1999.
[2] E. Manzini, C. Vezzoli, Product–service systems and sustainability. Opportunities for sustainable solutions. United Nations Environment Programme, Division of Technology Industry and Economics, Production and Consumption Branch, CIR.IS Politecnico di Milano, Milan, 2002.
[3] J. Östlin, E. Sundin, M. Björkman, Business drivers for remanufacturing, Proc. 15th CIRP LCE (2008), 581-586.
[4] A. Tukker, U. Tischner, New Business for Old Europe. Product-service development, competitiveness and sustainability, Greenleaf Publications, 2006.
[5] Y. Ducq, C. Agostinho, D. Chen, G. Zacharewicz, R.J. Goncalves, Generic Methodology for Service Engineering based on Service Modelling and Model Transformation. Manufacturing Service Ecosystem. Achievements of the European 7th FP FoF-ICT Project MSEE: Manufacturing SErvice Ecosystem (Grant No. 284860). Eds. Weisner S, Guglielmina C, Gusmeroli S, Doumeingts G. (2014), 41-49.
[6] M. Peruzzini, A White Goods Manufacturing Service Ecosystem. Manufacturing Service Ecosystem. Achievements of the European 7th FP FoF-ICT Project MSEE: Manufacturing SErvice Ecosystem (Grant No. 284860). Eds. Weisner S, Guglielmina C, Gusmeroli S, Doumeingts G. (2014), 158-165.
[7] M. Peruzzini, M. Germani, C. Favi, Shift from PLM to SLM: a method to support business requirements elicitation for service innovation. Proc. PLM12, Montreal, Canada (2012), 1-15.
[8] A. Ghaziani, M. Ventresca, Keywords and cultural change: Frame analyses of Business Model public talk, 1975 to 2000. Sociological Forum 20 (4) (2005), 523-529.
[9] K.D. Thoben, H. Jagdev, J. Eschenbaecher, Extended Products: Evolving Traditional Product Concepts. Proc. 7th International Conference on Concurrent Enterprising, Bremen (2001).
[10] O. Mont, Clarifying the concept of product–service system, Journal of Cleaner Production 10 (3) (2002), 237-245.
[11] M. Duarte, G. Davies, Testing the conflict–performance assumption in business-to-business relationships, Industrial Marketing Management 32 (2003), 91-99.
[12] M.Z. Ouertani, Supporting conflict management in collaborative design: An approach to assess engineering change impacts, Computers in Industry 59 (9) (2008), 882-893.
[13] L. Krucken, A.
Meroni, Building stakeholder networks to develop and deliver product-service-systems: practical experiences on elaborating pro-active materials for communication, Journal of Cleaner Production 14 (2006), 1502-1508.
[14] X. Wang, C. Durugbo, Analysing network uncertainty for industrial product-service delivery: A hybrid fuzzy approach, Expert Systems with Applications 40 (2013), 4621-4636.
[15] K. Watanabe, Y. Shimomura, Design of Cooperative Service Process for Effective PSS Development. The Philosopher's Stone for Sustainability, Proc. 4th CIRP IPS2, Tokyo, Japan (2012), 321-326.
[16] Y. Nemoto, F. Akasaka, Y. Shimomura, Knowledge-Based Design Support System for Conceptual Design of Product-Service Systems. Product-Service Integration for Sustainable Solutions, LNPE, Proc. 5th CIRP IPS2, Bochum, Germany (2013), 41-52.
[17] A.P.B. Barquet, M.G. de Oliveira, C.R. Amigo, V.P. Cunha, H. Rozenfeld, Employing the business model concept to support the adoption of product–service systems (PSS), Industrial Marketing Management 42 (5) (2013), 693-704.
[18] M. Morris, M. Schindehutteb, J. Allen, The entrepreneur's business model: toward a unified perspective, Journal of Business Research 58 (2005), 726-735.
[19] A. Osterwalder, Y. Pigneur, Business Model Generation: A Handbook for Visionaries, Game Changers, and Challengers, Wiley, Hoboken, 2010.
[20] T. Guidat, A.P. Barquet, H. Widera, H. Rozenfeld, G. Seliger, Guidelines for the definition of innovative industrial product-service systems (PSS) business models for remanufacturing. Procedia CIRP 16 (2014), 193-198.
[21] A.R. Tan, D. Matzen, T. McAloone, S. Evans, Strategies for Designing and Developing Services for Manufacturing Firms, Proc. 1st CIRP IPS2, Cranfield, UK (2009).
[22] K. Kimita, Y. Shimomura, Development of the Design Guideline for Product-Service Systems. Proc. 6th CIRP IPS2, Windsor, Canada (2014).
[23] H.R. Parsaei, Concurrent Engineering: Contemporary Issues and Modern Design Tools, Springer Science & Business Media, 1993.
[24] K. Kimita, F. Akasaka, S. Hosono, Y. Shimomura, Design Method for Concurrent PSS Development. Proc. 2nd CIRP International Conference on Industrial Product-Service Systems, Sweden (2010).
[25] N.P. Suh, Axiomatic Design Theory for Systems, Research in Engineering Design 10 (4) (1998), 189-209.
[26] T. Sakao, Y. Shimomura, Service Engineering: A Novel Engineering Discipline for Producers to Increase Value Combining Service and Product, Journal of Cleaner Production 15 (6) (2007), 590-604.
[27] Y. Akao, Quality Function Deployment: Integrating Customer Requirements into Product Design, Productivity Press, Cambridge, MA, 1990.
[28] L. Cohen, Quality Function Deployment, How to Make QFD Work for You, Addison-Wesley, 1995.
[29] T. Sakao, K. Watanabe, Y. Shimomura, A method to support environmentally conscious service design using Quality Function Deployment (QFD), Environmentally Conscious Design and Inverse Manufacturing, 3rd International Symposium on EcoDesign, IEEE (2003), 567-574.
[30] T.L. Saaty, The Analytic Hierarchy Process, McGraw-Hill, New York, 1980.
[31] B. Kirwan, L.K. Ainsworth, A Guide to Task Analysis, Taylor & Francis, London, 1992.
[32] X. Geng, X. Chu, D. Xue, Z. Zhang, An integrated approach for rating engineering characteristics' final importance in product-service system development, Computers & Industrial Engineering 59 (2010), 585-594.
[33] S. Wiesner, M. Peruzzini, G. Doumeingts, K.D.
Thoben, Requirements Engineering for Servitization in Manufacturing Service Ecosystems (MSEE), Y. Shimomura and K. Kimita (eds.), The Philosopher's Stone for Sustainability, 2013, 291-296.
[34] S.Z. Mazo, M. Borsato, An Enhanced Tool for Incorporating the Voice of the Customer in Product-Service Systems, Int. J. Mech. Eng. Autom. 1 (2) (2014), 57-76.
[35] M. Peruzzini, M. Germani, Design for sustainability of product-service systems. Int. J. Agile Systems and Management 7 (3/4) (2014), 206-219.
[36] G. Pahl, W. Beitz, J. Feldhusen, K.H. Grote, Engineering Design: a systematic approach, Springer-Verlag, London, 1994.
[37] H. Sharp, Y. Rogers, J. Preece, Interaction Design, 2nd ed., John Wiley & Sons, 2007.
[38] K.T. Simsarian, Take it to the Next Stage: The Roles of Role Playing in the Design Process. CHI 2003, Ft. Lauderdale, Florida, USA (2003).
[39] K. Matzler, H.H. Hinterhuberb, How to make product development projects more successful by integrating Kano's model of customer satisfaction into quality function deployment, Technovation 18 (1) (1998), 25-38.
[40] U. Jüttner, H. Peck, M. Christopher, Supply chain risk management: outlining an agenda for future research. International Journal of Logistics: Research & Applications 6 (4) (2003), 197-210.
[41] M.J. Bitner, A.L. Ostrom, F.N. Morgan, Service Blueprinting: A Practical Technique for Service Innovation, California Management Review, 2008.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-523

A Service-oriented Architecture for Ambient-assisted Living

Margherita PERUZZINI a,1 and Michele GERMANI b
a "Enzo Ferrari" Engineering Department, University of Modena and Reggio Emilia, via Vivarelli 10, 41125 Modena (Italy)
b Dept. Industrial Engineering and Mathematical Sciences, Università Politecnica delle Marche, via Brecce Bianche 12, 60131 Ancona (Italy)
1 Corresponding Author: Margherita Peruzzini, University of Modena and Reggio Emilia, via Vivarelli 10, 41125 Modena (Italy), e-mail: margherita.peruzzini@unimore.it

Abstract. Ambient-Assisted Living (AAL) is currently an important research and development area, mainly due to the rapidly ageing society, the increasing cost of health care, and the growing importance that individuals place on living independently. The general goal of AAL solutions is to apply ambient-assisted intelligence to enable people with specific demands (e.g. handicapped or elderly) to live in their preferred environment longer, by means of tools (e.g. smart objects, mobile and wearable sensors, intelligent devices) that are sensitive and responsive to the presence of people and their actions. The research describes the design and development of a novel service-oriented system architecture where different smart objects and sensors are combined to offer ambient-assisted living intelligence to older people. The design stage is driven by a user-centred approach to define an interoperable architecture, and by human-oriented principles to create usable products and well-accepted services. Such an architecture has been realized in the context of an Italian research project funded by the Marche Region and promoted by INRCA (National Institute on Health and Science of Aging) in the framework of smart homes for active ageing and ambient-assisted living. The result is an interoperable and flexible platform that allows creating user-centred services for independent living.

Keywords.
Service-Oriented Architecture; Smart Home; Smart Object; Ambient-Assisted Intelligence; Ambient-Assisted Living.

Introduction

Nowadays Ambient-Assisted Living (AAL) represents one of the most flourishing fields of application of Information and Communication Technology (ICT) and Service Engineering, owing to the growing importance of assistive services. This phenomenon directly derives from the continuous global population ageing and the growing necessity to help people with specific demands (e.g. the elderly) to live longer in their preferred environment by increasing their autonomy, monitoring their actions and providing care [1]. Indeed, developments in assistive technology are likely to make an important contribution to the care of elderly people in institutions and at home, thanks to the adoption of so-called ambient-assisted intelligence systems: remote monitoring, electronic sensors and equipment such as fall detectors or door monitors can improve older people's safety, security and ability to cope at home. Care at home is often preferred by patients themselves and is usually less expensive for care providers than institutional alternatives.

In recent years, some attempts have been made to explore the use of intelligent objects, also called Smart Objects (SOs), to provide assistance and monitor the users' wellbeing in support of independent living. In most cases, ambient-assisted intelligence can be added to traditional objects to create intelligent devices able to collect information about the environment and the people living in it, and to create high-level information that can be re-used to design and configure products and services according to the users' needs. However, the main problem of AAL systems is their complexity and poor acceptability [2]. In this context, only starting from the analysis of the users' demands and requirements and adopting User-Centred Design (UCD) principles allows designing highly usable systems, able to collect the right set of data from both users and environment and to optimize the human-machine interaction [3]. Furthermore, only a highly interoperable approach leads to a flexible and scalable system with high performance [4, 5].

The present research proposes a new model to realize a service-oriented architecture for AAL: the system includes smart products, an ambient-assisted intelligence, and user-centred services to support active ageing and independent living. The system is defined on the basis of an interoperable system architecture that manages every kind of product or service as a SO a priori, without knowing its specifications in advance or adhering to specific communication standards.

1. Research background

The notion of SO describes a technology-enhanced everyday object equipped with sensors and memory so as to have communication and data exchange capabilities [6, 7]. A SO is usually able to capture information about its surroundings, communicate with other devices and react according to previously defined rules [8]. Such information can refer to object properties (physical properties or text), interaction information (e.g. position of handles, buttons, grips), object behaviour (based on state variables) or agent behaviours (rules that an agent should follow when using the object) [9].
From an assistive point of view, a SO interacts with humans directly and supports the users in accomplishing their own tasks in an intuitive way [10]. As applied to people with disabilities or frailties in general, a SO can be considered as "a device that allows an individual to perform a task that they would otherwise be unable to do" [2]. Nowadays numerous SOs are commercially available at low cost, and many researchers are intensively working on their application to support the elderly at home. The applicability of SOs in monitoring the users' conditions and providing the most appropriate support has been tested and demonstrated in numerous EU projects, e.g. AGNES (Successful Ageing in a Networked Society) [11], HAPPY AGEING (A Home based APProach to the Years of AGEING) [12], PAMAP (Physical Activity Monitoring for Ageing People) [13]. At the same time, in America, the MIT Media Center is developing projects in this direction [14]. Furthermore, an inventory of the main characteristics and functionalities of such products for assistive purposes for the elderly has recently been defined by [15]: it provides a useful overview, for both researchers and designers, of the potential application of SOs in supporting the independent living of elderly people and of the main functional domains of activities that can be performed in a domestic environment with their support. However, an effective introduction of SOs into a real home environment is still hard to achieve: acceptability and interoperability are the first two issues to solve.

At the same time, the home automation industry is growing and its market is expected to expand over the next few years [16]. Forecasts estimate that the global home automation market will reach $51.77 billion by 2020, with an annual growth rate of 17.74% from 2014 to 2020 [17]. However, the analysis showed that the large-scale, full implementation of interoperable home automation systems to create truly smart living environments is still far from reality. This is due to several factors: the lack of standardization, high costs for system integration and maintenance, and lack of interoperability due to the use of proprietary systems and closed communication protocols. In recent years, in fact, the number of applications requiring the cooperation of several heterogeneous devices has grown very rapidly: components belonging to very different technologies, characterized by different complexity and produced by different vendors, are often used together within the same home automation ecosystem. In the literature, different models of interoperability have been proposed [18]; they are typically organized into three levels (syntactic interoperability, network interoperability, and basic connectivity interoperability). An important aspect in ensuring the intelligence of a home automation environment, and in addressing the issues of interoperability, is to appropriately model the heterogeneous information, such as environmental parameters, devices and their characteristics, the users and their preferences, the operating environment, etc. [19, 20, 21]. Furthermore, a proper semantic description of the devices involved is fundamental to enable interoperability: it allows the devices' search and identification, and lets high-level services be easily created and managed.
Dibowski [22] provided a framework of ontologies to describe smart home devices as well as their functionalities, platforms and producers in the domain of building automation. For instance, DogOnt has been specifically designed to model an environment describing intelligent home automation devices, their state, their functionality and notifications, and the architectural components [23]. Moreover, context-aware applications can use this information to support decision-making and to enable services that support users in their everyday activities (e.g. the CONON ontology [24], the CoDAMoS ontology [25], ThinkHome [26], the BOnSAI ontology [27]).

2. The AAL system definition

2.1. The HicMO project

The HicMO project has been promoted by INRCA (National Institute on Health and Science of Aging) and funded by the Marche Region, in Italy. It involved 12 companies, both Large Enterprises (LEs) and Small and Medium Enterprises (SMEs), and 1 research centre; it started in February 2013 and has just ended (February 2015). HicMO is the acronym of "Hic Manebimus Optime", a Latin sentence by Tito Livio (Ab Urbe condita libri, V, 55) which stands for "here we will live very well" and expresses the general approach of the future users of HicMO technologies. The project aimed to develop an ICT system architecture able to manage smart products and services supporting active ageing through an ambient-assisted living approach. The concept of SO assumes great importance for the project purposes; in the HicMO context a SO is "any object or home device that is able to communicate its interactions with the user in order to monitor the daily activities, to understand the habits, to detect any abnormal behaviour that may highlight situations of hardship or danger, or the symptoms of some incipient illness". The present research moves within the boundaries of the HicMO project and, in particular, defines the HicMO system architecture and describes the HicMO system prototype, which overcomes the main limitations of previous systems in terms of both usability and interoperability.

2.2. The project approach

In order to deal with usability and interoperability issues, HicMO is based on an Assistive Integration Platform (AIP) that allows both hardware and software integration and, in particular, enables communication between any product or service to create high-level AAL functionalities supporting older people in their home environment. This is possible thanks to the abstraction of the SO concept and the adoption of a proper ontology to interpret and manage input/output data and information. According to the HicMO approach, every entity can be represented as a SO independently from its specific characteristics, and can be described by a proper ontology; on this basis, new product functions and service applications can be created by ad-hoc system intelligence. The platform (AIP) is composed of a low-level layer and a high-level layer: the low-level layer exchanges input/output data with every SO connected to the platform, while the high-level layer interprets the collected data according to the defined ontology. Interaction between the AIP and the SOs represents the core of the system. In particular, a Reference Model is defined to describe any SO entity and to manage any data exchange in the different cases: data provider, data consumer, service provider, and service consumer.
In this way communication between the AIP and any SO is enabled a priori, without exactly knowing the specific features of the SO in advance. A SO can be a physical entity or a virtual entity, possibly created by the aggregation of multiple sensors or devices. Similarly, services can also be managed by the AIP as SOs: they can be provided by means of products as well as web services, and they exchange data and information with the other entities connected to the system. Such an architecture guarantees system flexibility, scalability, interoperability and configurability. Indeed, HicMO is not a closed entity but a dynamic system able to configure its functions and services according to external conditions, which are directly defined considering the specific users' needs.

2.3. The system design

The entities constituting the HicMO system architecture have been defined according to a UCD methodology that combines the Delphi and Quality Function Deployment (QFD) techniques. The method focuses on the analysis of the users' needs and the identification of the main system requirements, as well as the selection of the most proper technologies and the definition of a suitable architecture to assure interoperability, configurability and high usability. The method consists of a double phase of evaluation involving experts in different disciplines according to the Delphi technique; the method phases are organized in four steps and formalized according to the QFD approach to better highlight the data involved and the correlations required, as described in a previous research work [28]. The former technique (i.e. Delphi) aims to facilitate idea exchange between experts through the comparison of different opinions and their progressive convergence on common key points, allowing the identification of relevant topics [29]. QFD allows the easy correlation between the users' needs and the technical specifications of the available systems and technologies, in order to find out the best system design, as demonstrated in [28]. The method is based on the following steps:
1. Analysis of the users' needs: it involves experts and uses interviews and ad-hoc questionnaires to depict a wide market analysis, identify the market segments and define target users and needs;
2. Mapping of assistive devices' functionalities: it involves technicians, designers and experts in SOs and Assistive Technologies (ATs) to define the functionalities that can be provided by the different technologies and their correlation with the specific devices, by focus groups and brainstorming;
3. Correlation between users' needs and functionalities: it involves both experts and sample users to evaluate and compare how the different functionalities can support the users' needs. It uses the visible planning technique to support a pragmatic and effective discussion;
4. Elicitation of the system requirements: it relates the necessary functionalities with the technological parameters for system requirements elicitation by adopting the Design Structure Matrix (DSM) method.
In the present research, such a method has been applied to a wide user sample and a real system architecture has been defined (Figure 1). Technical requirements have been investigated to create a unique integrated platform; communication exchange requirements have also been analysed.
The architecture is based on the following items: 1) an ad-hoc hardware data gateway able to concentrate heterogeneous data from different objects; 2) a set of adaptors, such as smart plugs and smart adapters, able to turn a simple object into a SO by providing the higher intelligence needed to exchange data with other systems; 3) a dedicated software intelligence able to recognize and manage the different objects in a homogeneous way; 4) a set of local gateways able to exchange data with standard home automation protocols, such as Bticino, Konnex (KNX) and Modbus, as sub-systems.

In more detail, the AIP is the core entity of the system: it includes a Low Level Gateway (LLG) to physically connect the SOs, and a High Level Service (HLS) to manage data exchange and SO interrelations, with a system intelligence enabling the service functionalities and coordinating the SOs' actions and reactions. In this way, data are collected from and sent to any SO by the LLG, while data exchange is coordinated by the AIP. The LLG manages a set of different home devices through different communication protocols, from Bluetooth to ZigBee and Wi-Fi. At the same time, the AIP is connected to external gateways by means of a router, in order to exchange information with standard home automation protocols and manage devices also in those environments. In this way the AIP can create specific services and assistive functionalities, tailored to the users' needs, by exploiting all the devices' individual functions in an interoperable context.

As a result, the HicMO system can receive data from a wide variety of sensors (i.e. environmental, wearable, medical, etc.) and, at the same time, from commercial smart objects as well as standard home automation systems, and it can interpret those data by creating a set of higher-level information that the system objects and services can use to generate personalized reactions and provide ad-hoc functionalities according to the specific user needs and context-based conditions. Such behaviour merges both assistive and home automation features.

On the one hand, typical home automation features refer to those functions based on the detection of environmental parameters and performed by independent devices that do not act directly on the person but create a supporting environment. Indeed, the HicMO system includes a set of environmental sensors, which detect environmental conditions (e.g. temperature, humidity, air quality, lighting, infrared images) to provide measurements and generate alerts when needed (for instance, when fixed thresholds are exceeded or data processing highlights dangerous or abnormal conditions). Furthermore, the system includes a tracker that indirectly detects the user's position in certain places and during certain actions, by placing tags close to reference points (e.g. door handles, appliances, drawers) in combination with an RFID tag worn by the user (on a ring). Also sensorized kitchen items, in particular hangings, drawers and intelligent household appliances (e.g. the fridge), provide information about user accessibility and can be remotely controlled according to the user's permissions. Furthermore, wearable devices such as a shirt and shoes give information about their fit and the user's positioning and motion. Finally, the integration with standard home automation systems like Bticino and KNX allows detecting environmental parameters (e.g. temperature, presence, etc.) and controlling cabled devices (i.e. the lighting system and home automation controls).

Figure 1. HicMO system architecture
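To make the role of the LLG more concrete, the sketch below shows one way in which protocol-specific readings could be normalized into a common SO record before being handed to the HLS; the class, field and adapter names are hypothetical illustrations, not the actual HicMO API.

```python
from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class SoReading:
    """Protocol-agnostic record passed from the LLG to the HLS (hypothetical)."""
    so_id: str
    quantity: str
    value: float
    unit: str

def from_zigbee(frame: Dict[str, Any]) -> SoReading:
    # Hypothetical ZigBee temperature frame: value reported in tenths of a degree.
    return SoReading(frame["ieee_addr"], "temperature", frame["t_raw"] / 10.0, "degC")

def from_bluetooth(packet: Dict[str, Any]) -> SoReading:
    # Hypothetical BLE pulse-oximeter packet: SpO2 already expressed in percent.
    return SoReading(packet["mac"], "spo2", float(packet["spo2"]), "%")

# The LLG dispatches on the transport and yields uniform records for the HLS:
adapters = {"zigbee": from_zigbee, "bluetooth": from_bluetooth}
raw = [("zigbee", {"ieee_addr": "00:15:8d:00", "t_raw": 215}),
       ("bluetooth", {"mac": "c4:be:84:12", "spo2": 97})]
for transport, payload in raw:
    print(adapters[transport](payload))
```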
On the other hand, typical assistive features refer to direct-monitoring functions that act directly on the users and their behaviours, such as monitoring vital signs or monitoring the users' actions. Firstly, the HicMO system includes a telemedicine platform that allows measuring vital signs through the integration of different devices: a digital sphygmomanometer to measure blood pressure (minimum and maximum pressure) and heart rate, a digital scale to measure the user's weight, a pulse oximeter to measure the oxygen saturation in the blood, and an electrocardiograph (ECG) to measure the ECG waveform. Numerous other devices can be added in a similar way. Furthermore, a sensorized shirt provides data related to the user's movements (by means of an accelerometer and a gyroscope) and his/her sweating by means of the Galvanic Skin Response (GSR), while sensorized shoes measure the performed steps, the consumed energy, the duration and frequency of the pitch and other parameters involved in the gait, which can be used to monitor the user's state of health. Data are exchanged and managed in an appropriate way by the AIP to create intelligent behaviours. Interaction among devices is guaranteed by the following conditions:
1. all devices are seen as software entities (i.e. SOs) able to communicate by the http/https protocol, so that their low-level specifications are not important for the AIP;
2. any SO is implemented via web services and its interface is univocally described by its XML file, defined by an XSD (XML Schema Definition) file which refers to common rules. In order to facilitate the creation of the XML files, an ad-hoc web form has been realized to guarantee syntactic coherence.
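As an illustration of condition 2, the snippet below builds the kind of XML interface description a SO might expose; the element and attribute names are invented for the example (the actual HicMO schema is not reported here), and in practice the file would be validated against the common XSD with a schema-aware XML library.

```python
import xml.etree.ElementTree as ET

# Hypothetical SO interface description of the kind condition 2 implies.
so = ET.Element("smartObject", id="hicmo.shirt.01", type="wearable")
iface = ET.SubElement(so, "interface", protocol="https")
ET.SubElement(iface, "output", name="gsr", unit="microsiemens")
ET.SubElement(iface, "output", name="acceleration", unit="m/s2")
ET.SubElement(iface, "input", name="samplingRate", unit="Hz")

# The serialized description is what the ad-hoc web form would generate.
print(ET.tostring(so, encoding="unicode"))
```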
3. The system prototype

3.1. System prototyping

The AIP, consisting of the HLS and the LLG, has been developed in its HW and SW components, together with the specific gateways enabling communication towards the Bticino and Konnex platforms. At the same time, some new devices have been developed in both their HW and SW parts, involving the manufacturing companies producing the relative products and the electronics companies developing sensors, firmware and software. Finally, the system prototype is composed of:
- the AIP, made up of the local gateway (LLG) to connect the SOs and the high-level service (HLS) to properly formalize and interpret the SOs' data;
- a set of commercial SOs monitoring environmental parameters (i.e. temperature, air quality, humidity, lighting, energy consumption, infrared images);
- a set of commercial SOs monitoring medical parameters (i.e. oximeter, sphygmomanometer, electrocardiograph, scale, glucometer);
- a telemedicine platform to interpret data from the medical SOs and enable telemedicine services;
- a learning platform providing multimedia material to train the users in the correct use of the system devices;
- a set of SOs specifically developed for the HicMO project, such as the sensorized shirt measuring acceleration, speed values and GSR; the sensorized shoes monitoring the user's movements and the quality of the steps; a tracker composed of a wearable part mounted on a ring and a fixed part attached to significant points or items (doors, handles, etc.); and an intelligent fridge equipped with an internal refrigerated drug drawer whose opening/closing and temperature are strictly controlled;
- two adapters: a SmartTV adapter to make the existing TV act as a system interface, and a SmartPlug adapter to make any object act as a smart object by exchanging information, especially about its use and energy consumption;
- two local gateways dedicated to connecting the AIP to the Bticino and Konnex home automation systems respectively, according to a Bus-to-Ethernet approach;
- a system interface to visualize data and manage the system SOs by tablet.

Figure 2 shows some of the prototypes. Examples of intelligent behaviours are the following: monitoring the user's daily activities through data acquisition from the SOs and processing of the collected data; promoting a correct lifestyle by providing reminders and monitoring the user's actions, such as remembering to take drugs or to control vital parameters; generating alarms on detection of high-risk events, such as fainting, falling or not taking drugs, or of abnormal behaviours that may potentially indicate danger; improving comfort through dynamic environmental configuration according to user behaviours, for instance by properly adjusting lights or temperature when the user goes to rest or goes out; using multiple user interfaces or data views, thanks to the management of alerts and reminders on different devices controlled by the same central intelligence, such as a SmartTV, a tablet, a smartphone or specific devices' interfaces; creating an interoperable environment by combining inputs and outputs from different SOs, such as smart objects, commercial home automation devices and sensorized items; and integrating with commercial health applications, like e-Health (iOS) and Google Fit (Android), to which information derived from the system platform can be sent and used for more general purposes by other applications.

Regarding security aspects, the platform recognizes the specific user (thanks to the HicMO Tracker) who is executing a specific action; as a consequence, the related data are referred to that user and managed securely. A specific security protocol is implemented by the telemedicine system in order to certify the medical and vital data.

Figure 2. Example of HicMO SOs prototypes
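A minimal sketch of the alarm-type behaviours listed above is given below: fixed thresholds on incoming readings trigger alerts to be routed to the user interfaces. The quantities, bands and function names are hypothetical, chosen only to illustrate the rule pattern.

```python
from typing import Optional

THRESHOLDS = {                     # quantity -> (min, max) acceptable band
    "temperature": (16.0, 28.0),   # degC, environmental comfort band (illustrative)
    "spo2": (92.0, 100.0),         # %, vital-sign safety band (illustrative)
}

def check(quantity: str, value: float) -> Optional[str]:
    """Return an alert message if the reading falls outside its band."""
    lo, hi = THRESHOLDS.get(quantity, (float("-inf"), float("inf")))
    if not lo <= value <= hi:
        return f"ALERT: {quantity}={value} outside [{lo}, {hi}]"
    return None

for quantity, value in [("temperature", 31.2), ("spo2", 96.0), ("spo2", 88.0)]:
    msg = check(quantity, value)
    if msg:
        print(msg)   # in HicMO this would be routed to a SmartTV, tablet or caregiver
```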
3.2. Service use cases

The platform enabled four different services, which have been implemented as use cases. The services refer to the following aspects:
- Accessibility control: the system is able to enable or disable certain actions (e.g. opening drawers, cabinets, doors or windows, starting certain devices) for authorized users only. Users are automatically recognized by the HicMO Tracker, and the system enables/disables the SOs' functions according to the user's rights;
- Drug assumption: the system checks whether each user, among those that need to take specific drugs, takes the right drugs at the right time and, in case of failure, sends memos and eventually notifies family members and caregivers;
- Vital signs monitoring: the system knows the health condition monitoring required for each user, so it can remind each user specifically when to check his/her parameters and which ones (e.g. weight, blood pressure, glycaemia);
- Environmental management: the system is able to properly manage the home environment (e.g. lighting, temperature, alarm system) according to the users' needs and the occurrence of specific situations (e.g. going to bed, waking up, watching television, doing physical exercise).

4. Conclusions

The paper presents a service-oriented architecture to support active ageing in the context of an Italian research project. The research described the project approach, focused on interoperability, flexibility and scalability, the user-centred methodology supporting the system design according to the users' needs, and the system prototype within the HicMO project. The HicMO project is a valuable example of a system successfully defined according to the specific needs of end-users while tackling interoperability issues at the same time. For these reasons, it overcomes most of the current system architectures in the AAL context. Indeed, the HicMO system is highly flexible, since it can include any type of object or service coherent with the HicMO reference model; open, since it can theoretically integrate with any communication protocol and existing smart object; and scalable, since it can be easily configured and adapted to different contexts of application. Furthermore, it can be applied to modern houses as well as to existing buildings. Future work will test the system prototypes with final users, in order to check the general level of system usability and user satisfaction as well as the specific functionalities of both products and services.

References

[1] N.N., Ageing in the Twenty-First Century: A Celebration and A Challenge, Published by the United Nations Population Fund (UNFPA), New York, and HelpAge International, London, http://unfpa.org/ageingreport.
[2] D. Cowan, A.R. Turner-Smith, The Role of Assistive Technology in Alternative Models of Care for Older People. In: A. Tinker et al., Alternative Models of Care for Older People, research for The Royal Commission on Long Term Care, The Stationery Office, London, 2 (1999), 325-346.
[3] R. Bevilacqua, M. Di Rosa, E. Felici, V. Stara, F. Barbabella, L. Rossi, Towards an impact assessment framework for ICT-based systems supporting older people: Making evaluation comprehensive through appropriate concepts and metrics. Proc. ForitAAL 2013 (2013).
[4] D. Chen, G. Doumeingts, F. Vernadat, Architectures for enterprise integration and interoperability: Past, present and future. Computers in Industry 59 (7) (2008), 647-659.
[5] M. Gaynor, F. Yu, C.H. Andrus, S. Bradner, J. Rawn, A general framework for interoperability with applications to healthcare. Health Policy and Technology 3 (1) (2014), 3-12.
[6] H.W. Gellersen, A. Schmidt, M. Beigl, Adding Some Smartness to Devices and Everyday Things, Proc.
of WMCSA 2000 – 3rd IEEE Workshop on Mobile Computing Systems and Applications, IEEE Computer Society, Monterey, USA (2000).
[7] G.T. Ferguson, Have your objects call my objects, Harvard Business Review 80 (6) (2003), 138-143.
[8] M. Ziefle, C. Rocker, Acceptance of pervasive healthcare systems: A comparison of different implementation concepts, Pervasive Computing Technologies for Healthcare (PervasiveHealth), 4th International Conference on (2010), 1-6.
[9] M. Kallman, D. Thalmann, Modeling Objects for Interaction Tasks, Proc. Eurographics Workshop on Animation and Simulation, Springer (1998), 73-86.
[10] J. Bohn, V. Coroama, M. Langheinrich, F. Mattern, M. Rohs, Living in a World of Smart Everyday Objects - Social, Economic, and Ethical Implications, Human and Ecological Risk Assessment 10 (2004), 763-785.
[11] AGNES, http://www.agnes-aal.eu/
[12] HAPPY AGING, http://happyageing.info
[13] HERA, http://www.aal-europe.eu/projects/hera/
[14] S.S. Intille, P. Kaushik, R. Rockinson, Deploying Context-Aware Health Technology at Home: Human-Centric Challenges, In: H. Aghajan, J.C.A. Augusto, and R. Delgado (eds.) Human-Centric Interfaces for Ambient Intelligence, Elsevier, 2009.
[15] R. Bevilacqua, S. Ceccacci, M. Germani, M. Iualè, M. Mengoni, A. Papetti, Smart Object for AAL: a Review. In: Ambient Assisted Living, Italian Forum 2013 (2013).
[16] Juniper Research, Smart Home Ecosystems & the Internet of Things: Strategies & Forecasts 2014-2018, 2014.
[17] Markets and Markets, Smart Homes Market by Products (Security, Access, Lighting, Entertainment, Energy Management Systems, HVAC, Ballast & Battery Pack), Services (Installation & Repair, Renovation & Customization) and Geography - Analysis & Global Forecast (2013-2020), 2014.
[18] T. Perumal, A. Ramli, C.Y. Leong, S. Mansor, K. Samsudin, Interoperability for Smart Home Environment Using Web Services, International Journal of Smart Home 2 (4), 2008.
[19] D. Chang, C.H. Chen, Understanding the Influence of Customers on Product Innovation, Int. J. Agile Systems and Management 7 (3/4) (2014), 348-364.
[20] T. Ito, A proposal of body movement-based interaction towards remote collaboration for concurrent engineering, Int. J. Agile Systems and Management 7 (3/4) (2014), 365-382.
[21] R.C. Beckett, Functional system maps as boundary objects in complex system development, Int. J. Agile Systems and Management 8 (1) (2015), 53-69.
[22] H. Dibowski, K. Kabitzsch, Ontology-Based Device Descriptions and Device Repository for Building Automation Devices, EURASIP Journal on Embedded Systems (2011), 1-17.
[23] D. Bonino, F. Corno, DogOnt - Ontology Modeling for Intelligent Domotic Environments, In: A.P. Sheth et al. (eds.) 7th International Semantic Web Conference, ISWC 2008, Karlsruhe, Germany, Springer-Verlag (2008), 790-803.
[24] X.H. Wang, T. Gu, D.Q. Zhang, H.K. Pung, Ontology based context modeling and reasoning using OWL, Proc. Second IEEE Annual Conference on Pervasive Computing and Communications Workshops (2004), 18-22.
[25] D. Preuveneers, Y. Berbers, Automated context-driven composition of pervasive services to alleviate non-functional concerns, International Journal of Computing and Information Sciences 3 (2) (2005), 19-28.
[26] C. Reinisch, M. Kofler, F. Iglesias, W. Kastner, ThinkHome: Energy Efficiency in Future Smart Homes, EURASIP Journal on Embedded Systems 1 (2011).
[27] T.G. Stavropoulos, D. Vrakas, D. Vlachava, N.
[28] M. Peruzzini, M. Germani, Designing a user-centred ICT platform for active aging, 10th International Conference on Mechatronic and Embedded Systems and Applications (MESA), IEEE/ASME, pp. 1-6, 2014.
[29] M.I. Yousuf, Using Experts' Opinions through Delphi Technique, Practical Assessment, Research & Evaluation, 12 (4) (2007).

Transdisciplinary Lifecycle Analysis of Systems R. Curran et al. (Eds.) © 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-544-9-533

Studies of Air Transport Management Issues for the Airport and Region

Z.W. ZHONG1, Y.Y. TEE and Y.J. LIN
School of Mechanical & Aerospace Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798, Singapore

Abstract. We analysed the effects of increasing low-cost carrier flights on the airport runway capacity. Different runway utilisation approaches were simulated to study their pros and cons. The single arrival and single departure runway approach gives the best results for runway capacity and safety measures. The approach can be applied to runway 3, which can be allocated to only arrival or only departure flights during certain time slots to cope with the ever-increasing growth of flight movements in the region. We also examined the flight delays in December 2013. An ANOVA and a t-test were performed with two sets of the delays, finding that thunderstorms significantly affect the duration of flight delays. The delays were translated into the number of flights lost due to the inefficiency caused by thunderstorms. This number was then used in a derived equation to compute the profit loss due to the thunderstorms in 2013. A simulation and modelling function has been established by the ATMRI to conduct analyses of airspace structures and traffic flows throughout the region.

Keywords. Low-cost carrier flights, airport runway capacity, flight delays, thunderstorm, simulation and modelling

Introduction

International air travel has become more affordable over the years with the introduction of low-cost carriers in Asia Pacific. As a result, this region has seen exponential growth in its air traffic movements. Asia Pacific has been recording faster airport traffic growth than regions such as Europe and North America, largely due to its emerging markets and developing economies. This is especially seen in the demand and supply for low-cost carriers in South East Asia. Low-cost carriers now hold over 50% of the region's aviation market, up from zero percent 10 years ago. The airport here, being one of the major air traffic hubs in South East Asia, faces a future capacity overload due to this exponential growth. Although new terminals and a runway have been proposed and are being built, the coming years may still pose a huge challenge to the airport's growth if its existing capacity is not maximised to cope with the increasing demand. Hence, one of our studies sought to analyse the effects of future increases in low-cost carrier flights on the airport runway capacity. Different runway utilisation approaches were simulated and analysed to study their pros and cons on the airport runway capacity [1].
1 Corresponding Author, E-mail: mzwzhong@ntu.edu.sg.

Frequent lightning and thunderstorm weather may inevitably bring operating inefficiency and induce flight delays. We examined the flight delays in December 2013. An ANOVA and a t-test were performed with two sets of the delays, finding that thunderstorms significantly affect the duration of flight delays [2].

1. Airport runway capacity

The number of terminal gates, the taxiways and the parking positions were all modelled exactly as at the airport studied, to obtain high accuracy during simulations. The flight data obtained from the airport website and Flightstats.com were compiled. For the simulation data to be comparable to the flight movements handled by the airport runways, only flights that were not cancelled were taken into consideration. The following steps were taken to set up the baseline simulation model.

Step 1: Input of Flight Plan. 59 flight plans were entered into the simulation model. The arrival and departure times of the aircraft were set such that the aircraft would land on the runways at their scheduled times. The specific aircraft for each flight plan was chosen from the existing list of aircraft available in the model in accordance with the actual flight data. For the specific simulation models studied, the runway for the flight movements was individually input into the flight plan.

Step 2: Wake Turbulence Matrix. To conduct the simulation in accordance with the required wake turbulence separation and staggered runway separation, both turbulence separations were checked against the existing separation matrix and input as a factor into the simulation model.

Step 3: Runway Schedule and Operations. As this study also sought to identify the best runway operation approach for peak-hour air traffic at the airport, the runway schedule was edited for both runways such that they fulfilled the various simulation models carried out.

Step 4: Estimation of Flight Plan Arrival and Departure Time. Once the set-up of the runway schedules was done, the estimated flight plan arrival and departure times were calculated using the simulation software.

Step 5: Runway Occupancy Time Modelling. The runway occupancy time for both arrivals and departures was recorded. An additional data collection method was input into the simulation for it to capture both arrival and departure times for the analysis.

Step 6: Simulation and Collection of Results. The simulation was conducted and the various data required for this study were output into Excel files for further analyses and comparisons.
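The study used a dedicated airport simulation package; purely to illustrate the underlying idea (serving a schedule of movements on a runway subject to minimum separations), the following minimal sketch, with invented separation values and a first-come-first-served discipline, shows how peak-hour throughput and delay emerge from such a model. It is not the tool or the data used in the study.

import itertools

# Illustrative minimum gaps between successive movements, in seconds.
SEPARATION = {("heavy", "medium"): 120, ("heavy", "heavy"): 90,
              ("medium", "medium"): 60, ("medium", "heavy"): 60}

def serve(schedule):
    """schedule: list of (scheduled_time_s, wake_class); returns (served, total_delay_s)."""
    prev_time, prev_class, served, total_delay = 0.0, None, 0, 0.0
    for sched_t, wake in sorted(schedule):
        gap = SEPARATION.get((prev_class, wake), 0.0) if prev_class else 0.0
        start = max(sched_t, prev_time + gap)   # wait for the required separation
        total_delay += start - sched_t
        prev_time, prev_class, served = start, wake, served + 1
    return served, total_delay

# Example: 70 movements pushed into one peak hour, alternating wake classes.
demand = [(i * 3600 / 70, "heavy" if i % 3 == 0 else "medium") for i in range(70)]
n, delay = serve(demand)
print(n, "movements, average delay", round(delay / n), "s")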
Table 1 summarises the models simulated. Factors such as the number of flight movements completed within the peak hour, runway delay and holding duration were analysed to determine which runway operation method would be the most suitable for the airport to cope with future flight movements. Table 2 summarises the simulation results of models 1 and 2. Model 2c had the best overall results with a total of 64 flight movements achieved within the hour. However, as none of the models could cope with the projected flight movements in 2020, models 3a-3c had to be simulated to check whether the runway capacity could cope with the increased demand.

Table 1. Summary table of the simulation models.

Simulation model | Runway operations method                         | Number of flight movements (year)
1a               | Segregated Parallel                              | 59 (2015)
1b               | Arrivals/Departures Mixed Parallel               | 59 (2015)
2a               | Segregated Parallel                              | 70 (2020)
2b               | Arrivals/Departures Mixed Parallel               | 70 (2020)
2c               | Semi-mixed Parallel                              | 70 (2020)
3a               | Mixed Independent                                | 70 (2020)
3b               | Semi-Mixed Independent                           | 70 (2020)
3c               | Dedicated Medium-Sized Aircraft Arrival Runway   | 70 (2020)

Table 2. Summarised simulation results of models 1 and 2.

Simulation model | Flights departed within peak hour / projected | Flights landed within peak hour meeting safety requirement / projected
1a               | 25/25                                          | 34/34
1b               | 25/25                                          | 34/34
2a               | 29/29                                          | 32/41
2b               | 29/29                                          | 33/41
2c               | 27/29                                          | 37/41

Figure 1 shows that model 3a has the highest runway delay, of 5 min and 57 sec. Hence, despite being able to accommodate all 70 flight movements within the peak hour, model 3a has a relatively significant departure runway delay. This is because, in this model, runway C has 23 departures compared with 6 departures on runway L. Hence significant runway delay can be experienced by departure flights on runway C while waiting for incoming arrival flights. In model 3b, where there is a dedicated runway for departures with a mixture of 7 arrivals, the runway delay is significantly lower, at only 2 min and 13 sec.

Figure 1. Average runway departure delay comparison of models 2 and 3.

Figure 2 shows that the holding durations of models 2a-2c are significantly higher than those of models 3a-3c, which indicates that the holding duration is a significant sign that the runway capacity of the airport has been pushed beyond its maximum limit. None of the models 2a-2c can accommodate all 70 flight movements within the peak hour. Model 3b has a relatively significant average holding duration of 4 min and 4 sec compared to models 3a and 3c.

Figure 2. Average holding duration comparison of models 2 and 3.

Hence, there are pros and cons to models 3a/3c and 3b, as they experience relatively significant delays for departure and arrival flights respectively. However, as arrival flights use more fuel and are more dangerous to keep holding in the air for a long period of time, models 3a and 3c appear to be better runway operation methods for the airport than model 3b. The departure runway delay experienced in models 3a and 3c can possibly be reduced by revising departure time slots within the peak hour, while air traffic controllers can also adjust the departing traffic flow accordingly. On the other hand, the delays experienced by arrival flights are much harder to control and reduce, due to factors such as weather conditions and the cruising speeds of different aircraft. By adopting an independent runway operation, it is possible to handle the projected future flight movements. However, to reduce significant delays for arriving flights, it is optimal to adopt a mixed traffic approach on both runways or to allocate a specific runway to arriving flights based on their weight categories. Both approaches result in low delays for arrival flights, while the delay for departure flights might be reduced through methods such as revised departure time slots and air traffic controller adjustments.

2. Flight delays due to thunderstorms

Extraction of the data from flightstats.com was conducted manually, which turned out to be tedious.
Hence, the study could only extract data for December 2013. The weather data were obtained from the National Environment Agency and included with the flight data. For example, there were thunderstorms during the first two hours of 1 December 2013; this is an example of a weather-interfered day, i.e. a day on which thunderstorms occurred. In this study, such weather-interfered days were termed weather-days and their counterparts non-weather-days. A comparison between weather-days and non-weather-days gives the differences in the delays, which were likely to be caused by thunderstorms. Hauf and Sasse pointed out that a thunderstorm might affect traffic an hour before and after its reported time [3]. This implies that approaching thunderstorms may disrupt the traffic in the region surrounding the airport, and that after the thunderstorm has passed, the aftermath may cause an unsteady state in the traffic. Hence, thunderstorm delays and non-thunderstorm delays were not chosen from the same day in our analysis. This explains the rationale for defining weather-days and non-weather-days for comparison. On the other hand, differences in operating efficiency and entirely different causes and natures of delays may exist over time. Hence, the comparison between a weather-day and a non-weather-day would not be sensible if it were made between the weather-day of 1 December 2013 and the non-weather-day of 31 December 2013. From the non-weather-days, reference days (ref-days) were selected for comparison with the weather-days. To preserve the similarity of the operational environments as much as possible, these ref-days were chosen to be days near the weather-days. With the ref-day, the delay of every flight that occurred during a thunderstorm (weather-day) was compared with the computed average delay of its corresponding ref-day. The difference in the values was treated as the weather-induced delay, and the total delays were summed up. In the airport context, the cost was considered using the revenue and expenses taken from the airport annual reports. Profits before tax were chosen in this calculation to avoid the non-linear tax rate. A figure of 430,000 optimal flight movements was obtained from an article [4] and email conversations. This figure was set based on current operating conditions. The number of 430,000 flights was used in this study as a benchmark for computing the optimal condition and comparing it with the actual flight movement condition. The combined duration of all the thunderstorm delays was used separately in the optimal and actual scenarios. In the optimal condition, the total thunderstorm delay was converted to an optimal number of flights lost. Similarly, the total thunderstorm-induced delay was converted to the actual number of flights lost, based on the existing efficiency rate of operations. The two figures obtained give the range of how many flights were lost. These two figures were used in the equation to obtain monetary values of the delays induced. The hypothesis that thunderstorms induce higher delays than normal operational delays was tested using an ANOVA and a t-test. This study performed the tests using the hourly delays on weather-days and ref-days. The null hypothesis was that thunderstorms had no effect on the delay duration beyond normal operations. From the ANOVA, the F-value obtained was 6.63, compared to an F-critical of 4.03. From the t-test, a t-value of 2.58 was obtained, compared to a two-tail t-critical of 2.02. Both tests indicate that thunderstorms had a significant impact on the aviation delays. The variances of weather-days and non-weather-days were 514,079 and 176,295 respectively. Clearly, weather-days exhibit greater uncertainty, so it is expected that thunderstorms lead to greater delay times in operations. The two groups have a significant difference in their means, which supports the conclusion that thunderstorms incur longer delays than normal operations.
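This test setup is straightforward to reproduce; the sketch below uses synthetic delay values (not the study's data) purely to illustrate the procedure. Note that with only two groups the one-way ANOVA F-statistic is simply the square of the two-sample t-statistic.

import numpy as np
from scipy import stats

# Hypothetical hourly average delays in minutes (synthetic, for illustration only).
weather_days = np.array([28.0, 35.5, 19.2, 41.0, 22.3, 30.8, 25.1, 38.6])
ref_days     = np.array([14.1, 12.7, 16.9, 11.4, 15.2, 13.8, 12.0, 14.6])

# Two-sample t-test (equal variances assumed, as in the classical ANOVA setup).
t, p_t = stats.ttest_ind(weather_days, ref_days, equal_var=True)

# One-way ANOVA across the same two groups; here F == t**2.
f, p_f = stats.f_oneway(weather_days, ref_days)

print(f"t = {t:.2f} (p = {p_t:.4f}), F = {f:.2f} (p = {p_f:.4f})")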
Delays due to other variable causes were unknown in this study. The non-weather-day delay was assumed to be the normal operational delay of the airport, and the calculation of the total delays in December 2013 proceeded on this basis. All the thunderstorm-affected flights on weather-days were extracted. The difference between the delay duration of these flights operated during thunderstorm hours and the average delay in the same hour on the corresponding ref-day was calculated and termed the After Normal Delay. For example, a thunderstorm occurred on 26 December 2013 at 3 p.m. (weather-day); on its ref-day (24 December 2013) at 3 p.m., the hourly average delay was calculated to be 6 min and 5 sec. This hourly average delay of approximately 6 min was deducted from the delays of all the flights operated from 3 p.m. to 4 p.m. Any flight with a delay of less than 6 min was counted as zero minutes of After Normal Delay. All the delays in that hour were summed up, averaged and listed. The total After Normal Delays summed to 113 hr and 3 min. This implies that thunderstorms induced an additional 113 hr of delays on top of the normal operational delays. It is worth noting that the exact total delay on these weather-days was 180 hr and 5 min, and the average delay on weather-days was 20 min and 19 sec. Flights before or after the thunderstorms were not considered in this study, although numerous studies suggest that these flights would be affected too [3]. This was to prevent overcomplication, as the details and the nature of those delays were not available. The study extended the same methodology to the non-weather-days, and a total delay of 3385 hr and 10 min was obtained, giving an average of 13 min and 27 sec. All delays on weather-days and non-weather-days for the month of December 2013 summed to 3565 hr and 15 min. Thunderstorm-induced delays thus contributed about 5% of the total delays. This figure is close to the numbers reported in the overseas literature. In Figure 3, a graph of profit against the number of flights operated presents a well-fitting curve. A polynomial regression of profit against the number of flights was obtained with a good fit of R² = 0.9999. The polynomial regression equation is shown in Equation (1), where y is the profit in units of $1000 and x is the number of flights:

y = 3.8753 × 10⁻¹⁰ x³ − 0.000247513 x² + 51.99714817 x − 3217944   (1)

The airport can handle an optimal number of 430,000 flight movements a year [4], with two runways and current operational scenarios. It should be noted that flight movements are not static and should be less than the maximal throughput. Thus, this optimal condition is different from the ideal scenario in which an airport always operates at its maximal throughput. The so-called optimal condition may include insignificant delays as essential operational delays and may fluctuate throughout the operations. This study assumed that 430,000 flight movements were static throughout the year for simplicity.
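The cubic fit and the loss estimate can be reproduced in a few lines. The sketch below uses made-up (flights, profit) points and a hypothetical efficiency assumption purely to illustrate the procedure; the study's actual inputs come from the airport annual reports.

import numpy as np

# Hypothetical annual data points: flights operated and profit before tax ($1000).
flights = np.array([250_000, 270_000, 290_000, 310_000, 330_000, 345_000])
profit  = np.array([320_000, 420_000, 530_000, 650_000, 780_000, 880_000])

# Fit a cubic polynomial y = a*x^3 + b*x^2 + c*x + d, as in Equation (1).
poly = np.poly1d(np.polyfit(flights, profit, deg=3))

# Convert thunderstorm-induced delay hours into an equivalent number of flights
# lost (hypothetical rate), then into a profit loss via the fitted curve.
delay_hours = 113.05                        # After Normal Delays from the study
movements_per_hour = 430_000 / (365 * 24)   # optimal benchmark spread over a year
flights_lost = delay_hours * movements_per_hour
actual = 340_000                            # hypothetical annual movements
loss = poly(actual) - poly(actual - flights_lost)
print(f"~{flights_lost:.0f} flights lost, estimated profit loss ${loss:.0f}k")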
Figure 3. Profit versus number of flights (profit in $ millions against number of flights in thousands; linear regression: R² = 0.9317; cubic polynomial regression: R² = 0.9999).

From [5, 6], the ratio of departure to arrival flights from 2007 to 2013 was obtained. Taking account of this ratio, the study assumed that the proportion of departures to arrivals was equal and that, at an optimal state, one runway would be dedicated to departures and the other to arrivals. On this basis, the profit loss due to thunderstorm-induced delays was estimated for the year 2013.

3. Regional airspace capacity enhancement

Previously, there was no ASEAN-wide institution that undertook research to enhance efficiency and capacity for an eventual seamless ASEAN airspace from both the operator's and the airspace user's perspectives. Therefore, a functioning ASEAN simulation and modelling capability has been established by the ATMRI (Air Traffic Management Research Institute) to carry out analyses of airspace structures and traffic flows throughout the ASEAN region and to provide solutions for capacity and efficiency enhancements using modelling and simulation tools. The needs of the ASEAN region can be met initially by the ATMRI, which can subsequently share its experience with the rest of the region. The research subjects include network and local capacity planning, airspace improvement, new route structures and procedures, etc. The ATMRI has acquired the tool SAAM, an integrated system developed by EUROCONTROL, to undertake the work, and its researchers have been trained by EUROCONTROL to be proficient in the use of SAAM. With the initial task completed successfully, ATMRI researchers are able to examine route structures from origin to destination, within and outside ASEAN. The ASEAN modelling and simulation function has enabled the establishment of the current baseline of traffic demands. The current traffic demands have been processed from the air traffic data provided by the ASEAN Member States. This initial analysis has established the capacity/demand baseline and the future demands on the ASEAN ATM system.

4. Summary

One study found that the single arrival, single departure runway approach currently adopted by the airport displayed the best results in terms of runway capacity and safety measures for flight movements within a peak hour. As there is only one traffic flow per runway, there is less waiting time for arrival and departure flights, since the scheduling can be done independently and will not be affected by delays in either traffic flow. The runway approach with the best results can also be applied to runway 3. It could be allocated to only arrival or only departure flights during certain time slots, depending on the traffic mix and flow, especially in the future, to cope with the ever-increasing growth of flight movements in the region. Another study analysed the thunderstorm-induced delays in December 2013. The delay was translated into the number of flights lost due to the inefficiency caused by thunderstorms. This number was then used in a derived equation to estimate the profit loss due to the thunderstorms in 2013.
With the initial task for regional airspace capacity enhancement completed successfully, ATMRI researchers can examine route structures within and outside ASEAN. The function has enabled the establishment of the current baseline of traffic demands.

Acknowledgement

The first two authors thank ATMRI for the support provided to ATM studies at NTU.

References

[1] Y.Y. Tee, Effects and optimisation of increasing low-cost carrier flights on runway capacity, Report of Nanyang Technological University, Singapore, 2015.
[2] Y.J. Lin, A Study of Cost of Thunderstorm Delay, Report of Nanyang Technological University, Singapore, 2014.
[3] T. Hauf, M. Sasse, The Impact of Thunderstorms on Landing Traffic at Frankfurt Airport (Germany) - A Case Study, 10th Conference on Aviation, Range, and Aerospace Meteorology, Portland, Oregon, 2002 (paper 5.12).
[4] Civil Aviation Authority of Singapore, 2012, Bridging Skies, Enhancing Air Traffic Capacity for Future Growth, https://www.bridgingskies.com/enhancing-air-traffic-capacity-for-future-growth-2/, Accessed: 28 Sep 2014.
[5] Department of Statistics Singapore, 2014, Yearbook of Statistics Singapore, http://www.singstat.gov.sg/publications/publications_and_papers/reference/yearbook_of_stats.html, Accessed: 28 Sep 2014.
[6] Department of Statistics Singapore, 2014, Transport and Communications, http://www.singstat.gov.sg/statistics/browse_by_theme/transport.html, Accessed: 28 Sep 2014.

Transdisciplinary Lifecycle Analysis of Systems R. Curran et al. (Eds.) © 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-544-9-541

Service-oriented Life Cycles for Developing Transdisciplinary Engineering Systems

Michael Sobolewski a,b,1 and Raymond Kolonay a
a Air Force Research Laboratory, WPAFB, Ohio 45433
b Polish-Japanese Institute of IT, 02-008 Warsaw, Poland

Abstract. A transdisciplinary computational model requires extensive computational resources to study the behavior of complex engineering systems by computer simulation. The large system under study, consisting of hundreds or thousands of variables, is often a complex engineering design system for which simple, intuitive analytical solutions are not readily available. In this paper the basic concepts of mogramming (modeling and programming, or both) for N3 (three-dimensional design structure matrix) diagramming in the Service-ORiented Computing EnviRonment (SORCER) are presented. On the one hand, mogramming with service variables allows for computational fidelity with multiple services, evaluations, and sources of data. On the other hand, any combination of local and remote services in the system can be described as a collaborative service federation of engineering applications, tools, and utilities. A service-oriented life cycle for all phases of mogram-based systems development reflecting N3 diagramming is presented. In particular, all basic phases from inception through analysis, design, construction, transition, and maintenance are outlined in a service-oriented framework for deploying transdisciplinary engineering design systems.

Keywords. MADO, SDLC, service-orientation, N2 and N3 diagrams, exertion-oriented programming, mogramming, transdisciplinary systems, SORCER
Introduction

Multidisciplinary Analysis and Design Optimization (MADO) is a domain of research that studies the application of numerical analysis and optimization techniques to the design of engineering systems involving multiple disciplines. The formulation of MADO problems has become increasingly complex as the number of engineering disciplines and design variables included in typical studies has grown from a few dozen to thousands when applying high-fidelity physics-based modeling early in the design process [1]. The Service-ORiented Computing EnviRonment (SORCER) is a true service-oriented MADO environment that has been developed and applied to solve multidisciplinary design-optimization problems [2][3][4][6]. A service is the work performed in which a service provider (one that serves) exerts acquired abilities to execute a computation. A Service-oriented Architecture (SOA) is a software architecture using loosely coupled service providers that introduces a service registry as a third component added to the client-server architecture. The registry allows service providers to be found in the network.

1 Corresponding Author, E-Mail: sobol@sorcersoft.org.

A Service-object-oriented Architecture (SOOA) is an SOA in which communication is based on remote message passing, with the ability to pass data using any wire protocol chosen by a remote object (provider) to satisfy efficient communication with its requestors. In SOOA, the proxy object used by the requestor is created, registered, and owned by the provider.
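The registry/proxy pattern can be pictured with a toy, language-agnostic sketch (Python here for brevity; this is not SORCER's API, and all names are invented for illustration): a provider registers a proxy under a service type, and a requestor binds to whatever provider is currently registered.

# Toy service registry: providers register proxies under a service type;
# requestors look up a proxy at runtime instead of hard-wiring a provider.
registry: dict[str, object] = {}

class OptimizerProxy:
    """Stand-in for a provider-owned proxy; a real one would make remote calls."""
    def do_analysis(self, data):
        return {"optimum": min(data)}  # placeholder computation

# Provider side: create, own, and register the proxy under its service type.
registry["Optimization"] = OptimizerProxy()

# Requestor side: bind by service type at runtime, then invoke the service.
service = registry["Optimization"]
print(service.do_analysis([3.2, 1.7, 2.9]))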
Service-oriented programming (SOP) is a programming model organized around service activities rather than service provider actions, and around service collaborations rather than service provider subroutines. The approach is about the end user specifying service collaborations (activities) rather than the programmer developing subroutines (actions) of a single service provider. Historically, a program has been viewed as a subroutine (callable unit) that takes input data, processes it, and produces output data. The programming challenge was seen as how to write subroutines, not how to manage data, and not how to manage collaborations of services. Object-oriented programming shifted the focus from subroutines to data management: objects with encapsulated data managed by subroutines (methods). The SOP challenge is refocused on the collaboration of local/remote autonomous services. Service-oriented programming takes the view that what we really care about are the service collaborations we want to manage, rather than the subroutines with data required to manage them. The first step in SOP is to identify all the services the end user needs and how they relate to each other in a compound service request, an exercise often known as service modeling. Once services have been identified, the corresponding service providers define the kind of data they contain (data contexts) and the subroutines that can process that data. Each distinct subroutine is known as a service action (operation) defined by the provider's service type, which is used as a reference to service providers. Service providers communicate with well-defined declarative service requests called context models or imperative service requests called service exertions. We refer to a context model and a service exertion as a model and an exertion, respectively, unless otherwise stated. Compound requests are called service mograms; these are aggregations of both models and exertions (service models) expressed in a relevant service-oriented language. Mograms express work to be done by collaborating service providers, so they are front-end (abstract) services with respect to the actualized service collaborations that are their back-end (concrete) services. The MADO engineers (end users) usually create the front-end service collaborations, while software engineers develop the service providers, i.e., the actions of individual service providers.

The N2 (N-squared) diagram, or design structure matrix [5], represents the functional or physical interfaces between system elements depicted as diagonal nodes, with connectors showing data flow between nodes. It is used to systematically define and analyze the functional and physical interfaces of a system. It is an engineering tool for creating front-end MADO services or applications, for example in combination with the essential design factor matrix [6] or with the full-scale UML-based flavor using SysML [7] tools. The analysis process and data flow represented as an N2 diagram for the design of the next-generation efficient supersonic air vehicle (ESAV) is discussed in [8][9]. SORCER is based on SOOA, using service signatures dependent on service types (provider-requestor contracts) that play the role of references to service providers and allow binding to local or remote services (tools, applications, and utilities) at runtime [2][3]. In this service-oriented (SO) representation, systems, subsystems and components are implemented as scalable, dynamic, and transdisciplinary collaborations of local/remote services. Both system and subsystem components represented by N2 diagrams are expressed in SORCER as mograms [10][11]. Five types of context models and three types of exertions are distinguished in SORCER. With the high expressive power of mograms composed of models and exertions, the N2 diagram can be expanded recursively in the third direction with nested mograms as subsystems that are again N2 diagrams (multiple layers of interconnected N2 diagrams). Also, each node in the N2 diagram can be defined with multiple fidelities that expand the N2 diagram in the third dimension (a single node substituted with multiple nodes as multi-fidelity mograms). We call hierarchically organized mogram-based diagrams with multi-fidelity components and flow of control N3 (N-cubed) diagrams. Both N2 and N3 diagrams are discussed in Section 1.

In most service systems the focus is on back-end aggregation of services into a single provider, thus having more services performed by the same provider or by the same provider node, e.g., an application server. In either case these new services are still elementary services to the end user. This type of back-end aggregation, done by software developers, is called service assembly, in contrast to the MADO aggregation corresponding to the N3 diagram created by the end user. The front-end aggregation, in contrast, is a service composition and requires service-oriented languages to express, declaratively and imperatively, compositions of hierarchically organized services with multiple fidelities. Two service-oriented languages, the declarative Context Modeling Language (CML) and the imperative Exertion-Oriented Language (EOL), are discussed in Section 2.
Two ways of defining service composition coexist within SORCER: declarative context models and imperative exertions with SO flow of control. A context model is a collection of interrelated service variables (functional compositions) called service entries. Imperative service compositions, i.e., object compositions (composite design pattern [13]) with flow of control, are called exertions. Both models and exertions use service signatures to bind at runtime to the corresponding service providers. A dynamic collection of service providers requested for the actualization of a model or exertion is called a service federation. Note that a service collaboration is an activity, while a service federation is just the collection of service providers needed for the collaboration. Values of dependent variables in context models can be evaluated by exertions, and context models can be used as service components of exertions. Therefore, in SORCER, a MADO system represented by an N3 diagram is a hierarchically organized aggregation of models and exertions with multiple fidelities. The remainder of this paper is organized as follows: Section 1 briefly describes problem solving with N2 and N3 diagramming; Section 2 describes SO mogramming for N3 diagramming; Section 3 describes life cycles for developing transdisciplinary MADO systems; finally, Section 4 concludes with final remarks and comments.

1. Problem Solving with N-squared (N2) and N-cubed (N3) diagramming

Sometimes it is just hard to get started with service-oriented MADO. Faced with a long problem or project description, it is not clear what the required order of activities and related actions to perform is. Project descriptions usually include just an overview of the project, because there are actually many ways to solve the problem or achieve a required purpose. How do we actually approach the problem? One way to think about a problem is to consider it as interactions between uniform services within a system. Two methods of this form of interpretation are the top-down approach and the bottom-up approach. The top-down approach is considered the "compound service" approach, because the general idea of the system is first formulated declaratively, without getting down to the lowest-level entities related to the implementation of individual services (actions) and the data used. The compound service, or activity, is then broken down into slightly smaller services. Those services are then split again until we reach the very bottom level of elementary services (actions). The bottom-up approach considers the lowest-level entities first and their interaction with one another, which builds subsystems usually representing imperative processes. These subsystems interact with each other to form greater subsystems and slowly build our way up to the complete system. Top-down and bottom-up describe two different methods of thinking: working at the top is considered strategic and declarative, while working at the bottom is tactical and imperative. How a given situation is actually perceived and processed will vary with the person, the experience, and the runtime environment chosen. However, the approach is to do whatever is best for managing the complexity of the solution by a combination of both declarative and imperative thinking. In declarative programming a process is expressed by a functional composition, while in imperative programming it is expressed by an algorithm.
An algorithm is a procedure for solving a problem in the form of a self-contained, step-by-step set of services (operations) to be performed, with explicit control flow defined. The emphasis on explicit control flow distinguishes an imperative programming language from a declarative programming language. The N2 diagram design structure represents the functional or physical interfaces between system elements depicted as diagonal services, with connectors showing data flow between services. In Fig. 1 the nodes of the N2 diagram represent the services e1, m1, m3, and e6, and the diagram as a whole defines the functional composition e6(e1, m1(e1), m3(e1)), where the parameters of the strict function e6 are evaluated sequentially in the order specified. In general, N2 diagramming is a graphical representation of a functional composition. The composition is defined in a declarative modeling language with no explicit flow of control for branching and looping. An expanded N2 diagram with multi-fidelity diagonal services that can be hierarchically organized with component diagrams, along with flow of control, is called an N3 (N-cubed) diagram. An example of an N3 diagram that expands the N2 diagram from Fig. 1 is depicted in Fig. 2. It represents the following functional composition: [g: e1,2; e6,3] e6,*(e1,2(mx,1), m1(e1,1), m3,1(mx,1, ez,2, e1,1)), where [g: e1,2; e6,3] denotes a guard for e6 defining the loop under condition g: if g is true then e1,2, else e6,3. The current fidelity e6,* of e6 is determined by this self-aware service at runtime. The third dimension here is represented by the multiplicity of the service nodes e1, m3, and e6 with fidelities e1,1 and e1,2 for e1; m3,1 and m3,2 for m3; and e6,1, e6,2, and e6,3 for e6. Additionally, each service node can be hierarchically nested with its own N3 diagrams; for example, m3,1 depends on the two diagrams N32,1 and N32,2, and e1,1 on N31.

Figure 1. N2 diagram for the composition e6(e1, m1(e1), m3(e1)).
Figure 2. N3 diagram for the composition [g: e1,2; e6,3] e6,*(e1,2(mx,1), m1(e1,1), m3,1(mx,1, ez,2, e1,1)).

The matrix crossing points (small circles) represent connectors that match outputs to the relevant inputs between heterogeneous and autonomous services. In SORCER a service-oriented process represented by an N3 diagram can be defined declaratively as a model with the Context Modeling Language (CML), or algorithmically as an exertion with the Exertion-Oriented Language (EOL), or with both languages at the same time as a mogram; see the following section for details. Within EOL, a control flow exertion (conditional exertion) is a statement whose execution results in a choice being made as to which of two or more execution paths should be followed. Multi-fidelity diagonal services are represented by instances of the multi-fidelity service of the context model type.
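Read simply as nested function calls, the N2 composition of Fig. 1 can be sketched in a few lines of ordinary code (Python here, purely illustrative; this is not SORCER syntax and the function bodies are invented placeholders):

# The N2 composition e6(e1, m1(e1), m3(e1)) as plain nested functions.
def e1():        return 2.0          # placeholder analysis result
def m1(a):       return a + 1.0      # placeholder model of e1's output
def m3(a):       return a * 3.0      # another placeholder model
def e6(a, b, c): return a + b + c    # combining service

a = e1()                    # e1 is evaluated once and shared, as in the diagram
result = e6(a, m1(a), m3(a))
print(result)               # 2.0 + 3.0 + 6.0 = 11.0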
2. Service-oriented mogramming

A service mogram is a service model that is executed by a dynamic federation of services. In other words, a mogram exerts the collaborating service providers in a service federation created at runtime. Mograms are specified in the Service Modeling Language (SML), which consists of two parts: the Context Modeling Language (CML) and the Exertion-Oriented Language (EOL). The former is used to specify data models (data contexts) for exertions and collections of interrelated functional compositions, i.e., context models. While CML is used for declarative service-oriented programming, EOL is focused on object-oriented composites of services, i.e., exertions.

A model is a declarative representation of something, especially a system, phenomenon, or service, that accounts for its properties and is used to study its characteristics, expressed in terms of service variables associated with functional compositions. In every computing process variables represent data elements, and the number of variables increases with the complexity of the problems being solved. The value of a service variable is not necessarily part of an equation or formula as in mathematics; its value is the result of a service execution or the service itself. Handling large sets of interconnected variables for transdisciplinary computing requires adequate programming methodologies. In SORCER the interrelated service variables of a model are called entries. An entry used in a model refers by name (path) to one of the pieces of data, its value. A value can be explicit or calculated by a subroutine. A parameter is a special kind of entry, named in a subroutine by its path (semantic name) and returning the value of the entry. The values, called arguments, that are used in subroutines are defined by the input entries of the model. Most parameters are functionals, i.e., functions that take functions as their arguments. A selected subset of output entries defines a studied response of the model. Just as in standard mathematical usage, the argument is the actual input passed to a subroutine, whereas the parameter is the variable inside the implementation of the subroutine. Depending on the type of subroutine (evaluation, invocation, service, or composite evaluation), we distinguish four types of basic context models (EntModel, ParModel, SrvModel, VarModel), with ServiceContext as the data model for exertions, as depicted in the right part of Fig. 3. Multi-fidelity diagonal services are represented by instances of the multi-fidelity service of the context model type MultiFidelityService.

Figure 3. The UML diagram of SORCER top-level interfaces and classes.

A sketch of a context model is expressed in CML as follows:

Service e1 = exertion(sig("doAnalysis", Optimization.class), context(...));
...
Service em = exertion(...);
Service m1 = model(ent(...));
...
Service mn = model(ent(...));
Condition g = condition(...);
Service mo = model( // order of entries does not matter
    ent("e1", fi("e1,1", e1)),
    ent("m1", m1, args("e1")),
    ...
    ent("m3", fi("m3,1", m3, args("N32,1", "N32,2", "e1"))),
    ...
    outEnt("e6", loop(g, e6, args("m3", "m1", "e1"))),
    response("e6"));

Model out = exert(mo);              // evaluate the model mo
Context cxt = result(out);          // get the evaluation result
cxt = response(mo);                 // get declared response e6
Object obj = response(mo, "e1");    // get declared response e1

The first part of the mogram declares the component services used in the model mo. All entries in the model define subroutines with arguments defined by other entries in the same model. At the end of the mogram the basic CML operators are illustrated for model evaluation and for obtaining results.

Exertions are structured by the composite design pattern [13], with elementary exertions called tasks and compound exertions of Block and Job types, as seen in the left part of Fig. 3. A block represents a concatenation of exertions with block-structured programming combined with flow-of-control exertions. Jobs represent object-oriented composites (workflows with pipes for data flow).
Therefore, a mogram is either a model (a declarative SO program) or an exertion (an imperative SO program), or a hierarchical hybrid of both, as defined by the UML sketch in Fig. 3. Netlets are expressions in SML that are interpreted as SML scripts (text files) with the SORCER network shell (nsh). Technically, netlets are both Groovy scripts and Java sources, and can therefore be interpreted with a Groovy shell or compiled with a Java compiler. The former gives the agility of running MADO analyses and optimization and modifying the netlets with no need for an IDE. The latter, with a Java IDE, allows for efficient development (compilation and debugging). Owing to this dual nature, netlets can be developed much more easily with Java IDEs and frequently updated as executable text files. In Java sources netlets can be used directly as services to provide implementations of service providers that can publish standard service types implemented by mograms. A sketch of an exertion-oriented program is expressed in EOL as follows:

Service e1 = exertion(sig(...), context(result("out/par"), ...));
...
Service em = exertion(...);
Service m1 = model(ent(...));
...
Service mn = model(ent(...));
Condition g = condition(...);
Service xrt = loop(g, block( // services are ordered for execution
    fi("e1,1", e1),
    m1,
    fi("m3,1", m3),
    e6,
    context(..., result("opti/value"))));

Exertion out = exert(xrt);               // execute xrt
Context cxt = context(out);              // get the result context
cxt = value(exertion);                   // get declared value at opti/value
Object obj = value(xrt, "result/value"); // get value at the path result/value

The first part of the above exertion-oriented program declares the component mograms used in the main exertion xrt. The execution of the exertion xrt is defined by the concatenation of component mograms (service-oriented statements) to be executed with the semantics of block-structured programming. At the end of the above mogram the usage of the basic EOL operators is illustrated for executing exertions and obtaining results. Note that both the main model mo and the main exertion xrt implement the same N3 diagram N30 in Fig. 2. This demonstrates that the main mograms representing N3 diagrams can be implemented either way, with the declarative (CML) or the imperative (EOL) language at the top level.

3. Life cycles for developing service-oriented MADO systems

A systems development life cycle (SDLC) is composed of a number of distinct work phases that are used by engineers and system developers to plan for, design, build, test and deliver systems represented by N3 diagrams. An N3-based SDLC aims to produce high-quality systems that meet or exceed customer expectations, based on requirements represented by hierarchically organized N3 diagrams. A well-defined SDLC process enables the delivery of transdisciplinary systems, which move through each clearly defined phase of the generic template of planning, creating, testing, and deploying an information system. In systems engineering, the increasing level of service-orientation (everything as a service) and the increasing number of legacy and new network services supplied by different development groups and organizations also increase systems distribution and heterogeneity. To reliably manage this increasing level of distribution and heterogeneity, the SORCER environment has been expanded to support N3-based mogramming combined with its unique SDLC phases: inception, analysis, design, construction, transition, and maintenance.
1. Inception
a) Determine which process better represents the problem being solved: top-down or bottom-up problem solving, or the hybrid approach
b) For top-down solutions use CML modeling, for bottom-up use EOL programming, for hybrid solutions use SO mogramming with CML/EOL or EOL/CML
c) Identify relevant existing services and those not yet available
d) Identify the service UIs required for the end users

2. Analysis
a) Define N3 diagrams representing the MADO process, with N3 nodes and N3 components for the hierarchically organized MADO system identified in 1b
b) If multiple fidelities are required, define corresponding high-fidelity alternatives for the corresponding nodes in the N3 diagrams defined in 2a
c) Define service signatures for all local/remote services used in the N3 diagrams
d) For all service types used in signatures, define the Java interfaces that specify the behavior of the service providers
e) Define entries in the data contexts and context models, along with the connectors needed to support seamless data flow across N3 diagrams
f) Decide what codes to acquire/buy/develop in support of the service providers that implement the service types defined in 2d
g) Define the service UIs required for the end users identified in 1d

3. Design
a) Design detailed service types (Java interfaces) for all N3 analysis interfaces defined in 2d
b) Design all service providers in support of the service types designed in 3a
c) Design component mograms, with the API or SML/EOL, for the N3 diagrams defined in 2a
d) Design front-end netlets in SML/EOL for the N3 diagrams defined in 2a and the end users identified in 1d
e) Design the required service UIs for the corresponding service providers as defined in 2g

4. Construction
a) Use SORCER project templates for developing and testing service providers/requestors and service UIs
b) Implement all service types designed in 3a
c) Implement all service providers designed in 3b
d) Implement the component mograms designed in 3c
e) Implement the service UIs designed in 3e
f) Implement the netlets designed in 3d as standalone files executable with the nsh shell
g) Deploy the SORCER operating system for development
h) Deploy all required services for development
i) Test service provider classes with local signatures
j) Test remote service providers with the service types implemented in 4b
k) Test all service UIs implemented in 4e
l) Test all component mograms implemented in 4d
m) Test all netlets representing N3 diagrams implemented in 4f

5. Transition
a) Deploy the SORCER operating system for production
b) Deploy all required services for production
c) Transition netlets to the end users
d) Provide support for updating and executing netlets by the end users
e) Demonstrate all service UIs' functionality to the end users
f) Demonstrate/run/modify netlets with the nsh shell
g) Capture MADO inputs and results with the related netlets, along with the required codes and sources, as part of the corporate design history
h) If updates are needed to the N3 diagrams, go to 2

6. Maintenance
a) Maintain the repository of all codes and sources for developed services, with the unique service IDs used in N3 mograms, for later reuse of persisted solutions
b) Maintain continuous integration testing of all N3 mograms/netlets/service UIs
c) If minor updates of mograms/netlets/service UIs are required, go to 2 or 3
d) If essential updates of mograms/netlets/service UIs are required, go to 1
4. Conclusions

Using the presented higher-level SO abstractions for mogramming reduces the complexity of creating and using transdisciplinary MADO systems. These network-centric collaborative systems are created at runtime by teams of engineers working together and using many shared services that can be provisioned autonomically on demand. Domain-specific SO languages are for humans, unlike software languages that are for computers; they are intended to express domain-specific complex MADO processes and related solutions. Two programming languages (CML and EOL) for SO computing are introduced in this paper in the context of N2 and N3 diagramming. The SORCER network shell (nsh) manages the corresponding service federations at runtime for the N2 and N3 diagrams expressed by mograms. SDLC phases of mogram-based systems development are presented with the semantics of N3 diagramming. These continue to evolve, with a focus on the development of interactive tools that facilitate the easy creation and testing of graphical N3 diagrams. In particular, all the basic phases from inception through analysis, design, construction, transition and maintenance are tested and continuously improved for developing aerospace transdisciplinary engineering systems. The SORCER mogramming environment supports the two-way convergence of modeling and programming. It allows for flexible problem-solving solutions as presented in Sections 1 and 2. On the one hand, EOL is uniformly converged with CML to express front-end service exertions. On the other hand, CML is uniformly converged with EOL to express front-end declarative context models. Both front-end exertions and models can be used as service providers directly within SORCER. The evolving SORCER platform (the GitHub open source project [14]) introduces front-end mogramming languages [11] and an API with a modular service-oriented operating system [2]. It adds two entirely new layers of abstraction to the practice of SO computing. The presented SO MADO approach has been verified and validated in research projects at the Multidisciplinary Science and Technology Center, AFRL/WPAFB [8][15][16].

Acknowledgement

This work was supported by the Air Force Research Laboratory, Aerospace Systems Directorate, Multidisciplinary Science and Technology Center, contract number FA8650-10-D-3037, Service-Oriented Optimization Environment for Distributed High Fidelity Engineering Design Optimization.

References

[1] R.M. Kolonay, A physics-based distributed collaborative design process for military aerospace vehicle development and technology assessment, International Journal on Agile Systems and Management, Vol. 7 (2014), Nos. 3/4, pp. 242-260.
[2] M. Sobolewski, Service oriented computing platform: an architectural case study, in: R. Ramanathan, K. Raja (eds.) Handbook of research on architectural trends in service-driven computing, IGI Global, Hershey, 2014, pp. 220-255.
[3] M. Sobolewski, Unifying Front-end and Back-end Federated Services for Integrated Product Development, in: J. Cha et al. (eds.) Moving Integrated Product Development to Service Clouds in the Global Economy, IOS Press, Amsterdam, 2014, pp. 3-16, http://ebooks.iospress.nl/publication/37838, Accessed: 25 May 2015.
[4] M. Sobolewski, Technology Foundations, in: J. Stjepandić et al. (eds.) Concurrent Engineering in the 21st Century, Springer International Publishing, Switzerland, 2015, pp. 67-99.
[5] T.R. Browning, Applying the Design Structure Matrix to System Decomposition and Integration Problems: A Review and New Directions, IEEE Transactions on Engineering Management, Vol. 48 (2001), No. 3, pp. 292-306.
[6] L. Nan, W. Xu and J. Cha, A Hierarchical Method for Coupling Analysis of Design Services in Distributed Collaborative Design Environment, International Journal on Agile Systems and Management, Vol. 8, 2015, Nos. 3/4, in press.
[7] L. Delligatti, SysML Distilled: A Brief Guide to the Systems Modeling Language, Addison-Wesley Professional, Upper Saddle River, 2013.
[8] S.A. Burton, E.J. Alyanak, and R.M. Kolonay, Efficient Supersonic Air Vehicle Analysis and Optimization Implementation using SORCER, 12th AIAA Aviation Technology, Integration, and Operations (ATIO) Conference and 14th AIAA/ISSM, AIAA 2012-5520.
[9] M. Sobolewski, S. Burton, and R. Kolonay, Parametric Mogramming with Var-oriented Modeling and Exertion-Oriented Programming Languages, in: C. Bil et al. (eds.) Proceedings of the 20th ISPE International Conference on Concurrent Engineering, IOS Press, 2013, pp. 381-390, http://ebooks.iospress.nl/publication/34826, Accessed: 25 May 2015.
[10] A. Kleppe, Software Language Engineering, Addison-Wesley Professional, Upper Saddle River, 2009.
[11] M. Sobolewski and R. Kolonay, Unified Mogramming with Var-Oriented Modeling and Exertion-Oriented Programming Languages, Int. J. Communications, Network and System Sciences, 5 (2012), No. 9, http://www.scirp.org/journal/PaperInformation.aspx?paperID=22393, Accessed: 25 May 2015.
[12] M. Sobolewski, Object-Oriented Service Clouds for Transdisciplinary Computing, in: I. Ivanov et al. (eds.) Cloud Computing and Services Science, Springer Science+Business Media, New York, 2012, doi:10.1007/978-1-4614-2326-3_1.
[13] T. Bevis, Java Design Pattern Essentials, Ability First Limited, Leigh-on-Sea, 2012.
[14] SORCER Project, http://sorcersoft.org/project/site/, Accessed: 25 May 2015.
[15] R.M. Kolonay and M. Sobolewski, Service ORiented Computing EnviRonment (SORCER) for Large Scale, Distributed, Dynamic Fidelity Aeroelastic Analysis & Optimization, International Forum on Aeroelasticity and Structural Dynamics, IFASD 2011, 26-30 June 2011, Paris.
[16] R.M. Kolonay, E.D. Thompson, J.A. Camberos and F. Eastep, Active Control of Transpiration Boundary Conditions for Drag Minimization with an Euler CFD Solver, AIAA-2007-1891, 48th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Honolulu, 2007.

Part 9
Product Lifecycle Management

Transdisciplinary Lifecycle Analysis of Systems R. Curran et al. (Eds.) © 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-544-9-555

A Gingival Mucosa Geometric Modelling to Support Dental Prosthesis Design

Rodrigo Meira de ANDRADE a,1, Anderson Luis SZEJKA a,b,2, Osiris CANCIGLIERI JUNIOR a,b,3
a Pontifical Catholic University of Paraná - Control and Automation Engineering Department (PUCPR / ECA), Paraná, Brazil
b Pontifical Catholic University of Paraná (PUCPR) - Polytechnic School, Production and System Engineering Graduate Program (PUCPR / PPGEPS), Paraná, Brazil

Abstract. The integration between the health and engineering areas promotes the increasing precision of medical devices, making the diagnostic process progressively more accurate and safe.
In this context, the integration between Product Engineering and Dentistry has provided the development of tools that translate human reality through mathematical concepts, aiding complex processes such as dental implantation. This research presents the development of a system for teeth and gum detection using Matlab®, aiming at the acquisition of the point cloud for 3D reconstruction of the teeth and gums in a CAD system. The system was developed to support dentists in their diagnostics, offering easily understandable information and creating a tool that can help in the elaboration of auxiliary masks for the dental implant process. The developed system also allows easy manipulation of DICOM images, the use of edge filters, the highlighting of specific ranges of bone density, and image reconstruction from different views. The use of mathematical concepts in some functionalities of the system allowed the automatic identification and processing of image parts, eliminating the possibility of human error.

Keywords. Dental Implant, Computerized Tomography, DICOM, Medical Image Processing.

Introduction

The integration between the health and engineering areas promotes the development of increasingly precise medical devices, offering information that supports more accurate and safe decision-making in the diagnostic process.

1 Control and Automation Engineer at Pontifical Catholic University of Paraná (PUCPR), R. Imaculada Conceição, 1155, Prado Velho, Curitiba, CEP 80215-901, PR, Brazil; Tel: +55 (0) 41 32711304; Fax: +55 (0) 41 32711345; E-mail: rodrigormda@outlook.com.
2 Ph.D. Research Student of the Production and System Engineering Graduate Program (PPGEPS) at Pontifical Catholic University of Paraná (PUCPR), Rua Imaculada Conceição, 1155, Prado Velho, Curitiba, CEP 80215-901, PR, Brazil; Tel: +55 (0) 41 32712425; Fax: +55 (0) 32711345; E-mail: anderson.szejka@pucpr.br.
3 Professor in the Department of Production Engineering at Pontifical Catholic University of Paraná (PUCPR), Rua Imaculada Conceição, 1155, Prado Velho, Curitiba, CEP 80215-901, PR, Brazil; Tel: +55 (0) 32711304; Fax: +55 (0) 41 32711345; E-mail: osiris.canciglieri@pucpr.br.

Oral Implantology has become an important branch of dentistry because it rehabilitates partially or totally edentulous patients, providing well-being and improving the patient's quality of life. However, reduced and inaccurate information makes the definition of the dental implant imprecise and may cause premature failure, bone loss, implant rejection and infections [1][2][3]. Currently, dental implant planning and implant definition are based on the surgeon's experience and visual analysis of the tomographic images. Dental implantation is a multivariable and complex process and, although research is being carried out on dental implant determination through informational resources extracted from Computerized Tomography (CT scan) images in the DICOM (Digital Imaging and Communications in Medicine) format [4][5][6][7], the reconstruction of the gingival mucosa still relies on intrusive methods such as conventional impression. In this context, this research focuses on the development of a bone structure and gingival mucosa recognition tool, aiming to support the dental surgeon throughout the dental implant planning process by providing measurable information which makes the procedure safer. Furthermore, the research contributes to the medical bioprocessing field. Thus, the main objective of the research was to extract accurate information about bone and gingival mucosa geometry through medical image processing techniques.

1. Theoretical Basis

Medical images are widely used in patient diagnostics and there are several approaches for image acquisition through equipment such as X-ray, CT scan, Magnetic Resonance Imaging (MRI), and ultrasound scan [8]. The majority of this equipment produces digital images in the DICOM format, which are tomographic images in grey shades that follow the Hounsfield scale. These files have a header with relevant information about the image and the patient's biological data [9]. CT scanners have high precision and generate a large quantity of images over a small area of analysis. The generation of digital images saves physical space and time, since it allows a great variety of data manipulation, for example the application of mathematical filters that exploit the device's precision, which would not be possible with the naked eye. The Hounsfield scale is a linear transformation of the material density attenuation coefficient, measured in shades of grey. The scale normally has a resolution of 12 to 16 bits, that is, from 4,096 to 65,536 shades of grey; usually, the more shades of grey measured, the better the precision. These shades of grey are captured by the CT scanner and stored in a linearization that uses the air and water densities as references, within normal conditions of temperature and pressure. The air density, in Hounsfield Units (HU), is defined as −1000 HU, while the water density is defined as 0 HU [10]. For a correct exhibition of a DICOM image on the Hounsfield scale, the image has to be equalized with reference to the shades of grey, where air (−1000 HU) is represented as 0 (the darkest shade of grey, black), and the highest HU value detected in the image is represented by the highest value of the grey-shade scale. This scale is usually used with a 16-bit resolution, where the highest value is 65,535 (white). The Hounsfield scale [11] presents a direct relation between bone tissue and the intensity level in the grey scale. Once the image is equalized to the grey scale, it is possible to apply image filters, which are mathematical processes used to improve the image quality, lessening noise, highlighting borders, attenuating dark shades, etc. [12][13].
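The raw-value-to-HU conversion and the grey-scale equalization described above can be sketched in a few lines. The snippet below is a hypothetical illustration (not the authors' Matlab® implementation; the file name is invented), using pydicom's standard rescale attributes:

import numpy as np
import pydicom

ds = pydicom.dcmread("slice0001.dcm")  # hypothetical file name

# Raw stored values -> Hounsfield Units via the linear rescale in the header.
hu = ds.pixel_array.astype(np.float64) * float(ds.RescaleSlope) \
     + float(ds.RescaleIntercept)

# Equalize to a 16-bit grey scale: air (-1000 HU) -> 0 (black),
# the maximum HU found in the image -> 65535 (white).
grey = (hu - (-1000.0)) / (hu.max() - (-1000.0)) * 65535.0
grey = np.clip(grey, 0, 65535).astype(np.uint16)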
Thus, the main objective of this research was to extract accurate information about bone and gingival mucosa geometry through medical image processing techniques.

1. Theoretical Basis

Medical images are widely used in patient diagnostics, and there are several approaches to image acquisition through equipment such as X-ray, CT scan, Magnetic Resonance Imaging (MRI) and ultrasound scan [8]. Most of this equipment produces digital images in the DICOM format: tomographic images in shades of grey that follow the Hounsfield scale. These files carry a header with relevant information about the image and the patient's biological data [9]. CT scanners have high precision and generate a large quantity of images over a small area of analysis. Generating digital images saves physical storage space and time, since they allow a great variety of data manipulation, for example the application of mathematical filters that exploit the device's precision beyond what is possible to the naked eye.

The Hounsfield scale is a linear transformation of the material's density attenuation coefficient, expressed in shades of grey. The scale normally has a resolution of 12 to 16 bits, that is, from 4096 to 65536 shades of grey; usually, the more shades of grey measured, the better the precision. These shades of grey are captured by the CT scanner and stored in a linearization that uses the densities of air and water as references, under normal conditions of temperature and pressure. The air density, in Hounsfield Units (HU), is defined as -1000 HU, while the water density is defined as 0 HU [10]. For a correct exhibition of a DICOM image on the Hounsfield scale, the image has to be equalized with respect to the shades of grey, so that air (-1000 HU) is represented as 0, the darkest shade of grey (black), and the highest HU value detected in the image is represented by the highest value of the grey-shade scale. This scale is usually used with a 16-bit resolution, where the highest value is 65535 (white). The Hounsfield scale [11] presents a direct relation between bone tissue and grey-scale intensity.

Once the image is equalized to the grey-scale level, it is possible to apply image filters, which are mathematical processes used to improve image quality by reducing noise, highlighting borders, attenuating dark shades, etc. [12][13]. Most border filters follow the same logical reasoning: they examine the grey levels in an image and detect steep changes in value. These filters range from basic concepts to highly complex ones that use, for example, artificial intelligence techniques.

Particle Swarm Optimization (PSO) is an artificial intelligence approach classified as evolutionary computation; it was proposed in 1995, drawing on the bird-flocking models of the biologist Frank Heppner [14]. Swarm Intelligence is the name of a sub-area of computational intelligence that covers a set of methodologies and techniques inspired by the "collective intelligence" observed in some animal species, such as social insects. The term "swarm intelligence" has been used increasingly, although the expression may suggest that the biological inspiration of these methods comes only from bees [15].
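For reference, the canonical PSO update rules — standard in the literature [16] and paraphrased by the description that follows, not quoted from this paper — can be written as

$$v_i^{k+1} = w\,v_i^{k} + c_1 r_1 \left(P_{Best,i} - x_i^{k}\right) + c_2 r_2 \left(G_{Best} - x_i^{k}\right), \qquad x_i^{k+1} = x_i^{k} + v_i^{k+1},$$

where \(x_i\) and \(v_i\) are the position and velocity of particle \(i\), \(w\) is the inertia weight, \(c_1\) and \(c_2\) are acceleration coefficients, and \(r_1\), \(r_2\) are random numbers drawn uniformly from \([0, 1]\).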
PSO is a methodology driven by the simulation of social behaviour between individuals of a population, rather than by the evolution of nature as in DNA-based genetic algorithms. In PSO, particles move through the search space looking for the best solution to the problem. In the PSO formulation, the best solution found by a particle is stored in the variable PBest (personal best), and the best PBest of the entire population is assigned to the variable GBest (global best), which is retained until another PBest yields a better value for solving the problem [16].

Image filters serve many purposes and are used in various fields of knowledge; one example is the use of border filters in Oral Implantology. Oral Implantology is the field of dentistry that treats edentulism through dental implants and is considered a true revolution in dentistry, recognized since the early 1980s. The longevity of the treatment, the possibility of repeating the insertion process in case of rejection or problems with the implant, and the simplicity of the technique when surgical protocols are respected have all contributed to its rapid development [17][18]. Successful dental implantation requires careful planning, ranging from the selection of the implant to the surgical procedures. In the implant planning process, factors such as bone density, connection type, length, diameter and prosthesis inclination should be taken into account [1][19][20][21]. Additionally, according to [17], the gingival thickness and the prosthesis space should also be evaluated.

2. Research Development

This work builds on previous research [22][23][24][25], which explored the determination of the most suitable dental implant based on DICOM processing. As a starting point, methods and tools able to manipulate CT images and extract information regarding the bone and the gingival mucosa were identified in the literature. Figure 1 presents the four stages of the conceptual design: i) reading and manipulation of the DICOM file; ii) detection of the area of interest; iii) image processing for teeth and gum delineation; and iv) geometric definition of teeth and gums through point cloud creation.

Figure 1. Conceptual Design phases.

The research used CT images in DICOM format in axial cuts obtained from a partially edentulous patient. For this specific case, the DICOM file contained 216 cuts, 1 mm apart, each with a resolution of 640x640 pixels. DICOM images are volumetric, allowing their grouping and the virtual 3D reconstruction of the geometry. The axial images provided by the CT scanner (DICOM format) have the axes X (width) and Y (length), and stacking them into a three-dimensional matrix provides the Z axis, representing depth. Thus, the three-dimensional matrix can display, in addition to the axial cut, the frontal and lateral cuts. The images created by the frontal cutting were obtained by decomposing the three-dimensional matrix into two-dimensional matrices, as illustrated in Figure 2. The figure shows the DICOM image in three perspectives: axial (detail A), frontal (detail B) and lateral (detail C). It is worth emphasizing that analytical geometry equations can be used to rotate the matrix and reconstruct the image at any angle, but this technique was not necessary in this study.
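As an illustration of the slice handling described above, the following Matlab® sketch builds the three-dimensional matrix from an axial DICOM series, linearizes the raw grey values to Hounsfield Units and decomposes the matrix into the three cutting planes. It is a minimal sketch: the folder name, the slice indices and the presence of the RescaleSlope/RescaleIntercept header fields are assumptions for the example, not details taken from the paper.

```matlab
% Minimal sketch: build a 3D volume from an axial DICOM series and derive
% frontal and lateral cuts. Folder name and slice indices are hypothetical.
files = dir(fullfile('dicom_series', '*.dcm'));    % e.g. 216 axial cuts, 1 mm apart
info  = dicominfo(fullfile('dicom_series', files(1).name));
V = zeros(double(info.Rows), double(info.Columns), numel(files));
for k = 1:numel(files)
    raw = double(dicomread(fullfile('dicom_series', files(k).name)));
    % Linearize raw grey values to Hounsfield Units (air = -1000 HU, water = 0 HU);
    % assumes the header carries the usual rescale fields.
    V(:,:,k) = raw * info.RescaleSlope + info.RescaleIntercept;
end
axialCut   = V(:,:,108);            % detail A: cut delivered by the scanner
frontalCut = squeeze(V(200,:,:))';  % detail B: 2D matrix decomposed along Y
lateralCut = squeeze(V(:,320,:))';  % detail C: 2D matrix decomposed along X
imshow(axialCut, [-1000, max(axialCut(:))]);  % equalized grey-scale display
```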
The axial cut image (Figure 2, detail A) was defined as the object of study, since it is obtained directly from the scanner and requires no additional processing, which ensures the accuracy and reliability of the information obtained and reduces the image processing time. The axial cut image files in DICOM format, containing 216 slices through the patient's jaw, were stored in the three-dimensional matrix in the Matlab® environment [6][12] in order to determine the area of interest for the dental implant insertion.

Figure 2. DICOM image perspectives.

The PSO algorithm was used to detect the area of interest, since its application eliminates information unnecessary for the processing and highlights the information regarding the teeth, gums and the bone of the dental arch. The algorithm analyses the slices individually, differentiating the teeth from other human body tissues based on the Hounsfield density scale. Figure 3 illustrates the application of the PSO algorithm to an axial cut image in DICOM format; the 3D peaks shown in the figure represent the bones and teeth, which have the highest density in a DICOM image.

Normally the image as a whole contains much useless information; image filters are mathematical processes that improve image quality by removing noise, smoothing regions, sharpening borders, etc. Basically, all border filters follow the same idea: they identify the grey-scale levels and abrupt changes between them. An example is the sharpening filter, which highlights the image borders. Sharpening filters analyse all image pixels, subtracting the pixel at position "X-1" from the pixel at position "X". The result of this process is a highlighting of image borders and a reduction of unnecessary information, in binary format (0,1); in other words, the black level (value 0) is assigned to constant image areas and the white level to areas of abrupt change [12]. In this context, three filters are generally used: Roberts, Prewitt and Sobel. Roberts' filter uses a cross gradient, i.e. the difference between brightness values along directions rotated by 45°. Prewitt's filter uses standard coefficients in a 3x3 matrix, which smooths the contrast and reduces the effect of noise. Sobel's filter is similar to Prewitt's, with the same matrix dimension (3x3) but different coefficients; as a result, Sobel's filter produces an image with less pronounced highlights than Prewitt's filter [13].

Figure 3. PSO algorithm applied to a DICOM image.

The algorithm executes a simple logic to determine the extreme points containing teeth and thereby delimit the area of interest. Figure 4 shows the result of applying the PSO algorithm to a DICOM image. After the detection of the region of interest, the Sobel, Prewitt and Roberts border detection filters, which are able to identify the contours of the teeth and bone, were applied individually to the region of interest, resulting in three distinct images. Since none of the filters alone was able to delineate the tooth contours accurately, an algorithm was developed with a simple logic that merges the three filtered images using recurrent "FOR" loops and the logical operator "OR", generating a new image with a more precise delineation of the tooth contours (Figure 5); the figure shows the delineation result obtained with the developed algorithm. A minimal sketch of this merging step is given after Figure 4.

Figure 4. Detected area of interest.
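The sketch below illustrates the filter-merging step just described; the paper's recurrent FOR loops with the OR operator are expressed here as a vectorized logical OR, and the variable names and per-slice export are illustrative assumptions, not code from the original system.

```matlab
% Minimal sketch: merge the three border filters over the detected region of
% interest `roi` (grey-scale image) and collect contour points for one slice.
eSobel   = edge(roi, 'sobel');
ePrewitt = edge(roi, 'prewitt');
eRoberts = edge(roi, 'roberts');
% None of the filters alone delineates the tooth contours completely, so the
% three binary edge maps are merged with a logical OR.
merged = eSobel | ePrewitt | eRoberts;
% Contour pixels become (x, y) coordinates; repeating this for every slice
% and appending the slice number as the z coordinate yields the point cloud
% that is later exported to an Excel(R) spreadsheet and imported into CAD.
[y, x] = find(merged);
z = repmat(sliceIndex, numel(x), 1);   % sliceIndex: current axial cut number
xlswrite('point_cloud.xlsx', [x, y, z]);
```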
Figure 5. Teeth delineation result using the developed algorithm.

To identify the presence of the gingival mucosa near the teeth, it was necessary to map the regions adjacent to the teeth and prosthesis, owing to the high density values that characterize them. This facilitated the image processing, since the noise was insignificant and could therefore be disregarded. For gingival mucosa detection, the developed algorithm first surveys the region of the teeth and adds a 0.5 cm margin around this region in order to identify the possible gum region. The 0.5 cm margin excludes gingival mucosa regions that are not of concern, such as the palate. After mapping the region, the algorithm highlights the edges of densities within the Hounsfield range of -50 HU to 100 HU to detect the entire gingival mucosa. Figure 6 shows the gum delineation process: i) detail A shows the image in which the gum edge will be detected; ii) detail B illustrates the highlighting of the densities within the -50 HU to 100 HU range in the previous image; and iii) detail C shows the image of the outlined gum.

Figure 6. Gum delineation process.

The creation of the point cloud and its export to a CAD system environment aimed at the virtual 3D geometric reconstruction of the teeth and gums, supporting the design work and facilitating the creation of prostheses and moulds. A point cloud is basically a spreadsheet listing all the points of a particular model, each point indicated by its Cartesian coordinates x, y and z. After the teeth and gum delineation, all the detected points of the gingival mucosa and teeth are exported to an Excel® spreadsheet to facilitate the import of the point cloud into any CAD software (here, SolidWorks®), since most CAD packages provide tools to import data from Excel® spreadsheets. For a better understanding, Figure 7 shows a 3D geometric model imported into a CAD system.

Figure 7. 3D geometric model in SolidWorks®.

3. Results

The system was developed for the detection of teeth and gums in the Matlab® environment, in order to obtain the point cloud for three-dimensional reconstruction in a CAD system. The system allows the insertion, manipulation and visualization of the patient's axial, frontal and lateral DICOM images, the use of edge detection filters, the determination of specific bone density ranges and the reconstruction of different image views. Figure 8 shows the interface and the tools that can be applied to the image. The red dashed rectangle marks the area of interest identified by the system, to which the image processing will be applied; this avoids the noise and unwanted information of the whole image and consequently reduces significantly the computational time for calculating and applying the filters.

Figure 8. Developed software interface.

The analysis and validation of the system were performed through two experimental studies. In the first case, CT images of a patient in a DICOM file with 216 axial slices at a resolution of 640x640 pixels were used (Figure 9).
The figure shows the steps of detecting the teeth and gum contours in case study 1, as follows: i) image selected for processing (detail A); ii) detection of the region of interest (detail B); iii) setting of the Hounsfield scale for teeth detection to the 600 HU to 7190 HU range (detail C); iv) determination of the tooth region with a 0.5 cm margin around it for gum detection (detail D); and v) detection of the gum (green) and teeth (red) (detail E). The highlighted points form the point cloud that is exported to Excel®, validating the system (detail F).

Figure 9. Sequence of image processing to detect teeth and gum – Case Study 1.

In case study 2, tomographic images of a patient in a DICOM file with 518 axial slices at a resolution of 640x640 pixels were used, as shown in Figure 10. The figure shows the same steps for the detection of teeth and gum: i) image selected for processing (detail A); ii) detection of the region of interest (detail B); iii) setting of the Hounsfield scale for teeth detection to the 600 HU to 7190 HU range (detail C); iv) determination of the tooth region with a 0.5 cm margin around it for gum detection (detail D); and v) detection of the gum (green) and teeth (red) (detail E). The highlighted points form the point cloud that is exported to Excel®, validating the system (detail F).

The experimental studies indicated that the developed system is able to extract the geometric information of DICOM images accurately, export it to Excel®, import it into the CAD system and perform the 3D virtual geometric reconstruction.

Figure 10. Sequence of image processing to detect teeth and gum – Case Study 2.

4. Conclusion

This article presented the development of a system able to detect the geometry of the teeth and gum in the MATLAB® environment. The system generates a point cloud from CT slices in DICOM format, aiming at the virtual three-dimensional reconstruction of teeth and gums in a CAD system. The system supports dentists by providing easily understandable information and by serving as a basis for making guide masks that help in the dental implant process. The use of mathematical concepts in several features of the system allowed the automatic identification and processing of image regions, reducing the possibility of human error. However, it is important to note that the final diagnostic decision must be made solely by the specialist, since the system, despite its low failure rate, is still only a support tool in the dental implant decision-making process. The tendency is towards the increasingly common use of software to support specialists in decision-making, helping them in their diagnoses and generating information that is easy to interpret. To extend this research, it is necessary to explore the potential of this model against other approaches, such as Zedview.

References

[1] T. Li, K. Hu, L. Cheng, Y. Ding, Y. Ding, J. Shao, and L. Kong, Optimum selection of the dental implant diameter and length in the posterior mandible with poor bone quality – A 3D finite element analysis, Applied Mathematical Modelling, 35 (2011), 446–456.
[2] E. C. L. C. M. Dias, Análise descritiva do grau de adaptação de pilares protéticos a implantes osseointegráveis e seu efeito na infiltração bacteriana: um estudo in vitro, Dissertation (M.Sc.), University of Grande Rio, 2007.
[3] A. D.
Pye, D. E. A. Lockhart, M. P. Dawson, C. A. Murray, and A. J. Smith, A review of dental implants and infection, Journal of Hospital Infection, 72 (2009), 104–110.
[4] P. Mildenberger, M. Eichelberg, and E. Martin, Introduction to the DICOM standard, European Radiology, 12 (2001), 920–927.
[5] R. N. J. Graham, R. W. Perriss, and A. F. Scarsbrook, DICOM demystified: A review of digital file formats and their use in radiological practice, Clinical Radiology, 60 (2005), 1133–1140.
[6] D. Grauer, L. S. H. Cevidanes, and W. R. Proffit, Working with DICOM craniofacial images, American Journal of Orthodontics and Dentofacial Orthopedics, 136 (2009), 460–470.
[7] Z. Zhou, B. J. Liu, and A. H. Le, CAD–PACS integration tool kit based on DICOM secondary capture, structured report and IHE workflow profiles, Computerized Medical Imaging and Graphics, 31 (2007), 346–352.
[8] S. C. White, E. W. Heslop, L. G. Hollender, K. M. Mosier, A. Ruprecht, and M. K. Shrout, Parameters of radiologic care: An official report of the American Academy of Oral and Maxillofacial Radiology, Oral Surg Oral Med Oral Pathol Oral Radiol Endod, 91 (2001), 498–511.
[9] J. Medina, S. Jaime-Castillo, and E. Jiménez, A DICOM viewer with flexible image retrieval to support diagnosis and treatment of scoliosis, Expert Systems with Applications, 39 (2012), 8799–8808.
[10] R. Assenciros, Fusão de imagens médicas para aplicação em sistemas de planejamento de tratamento em radioterapia, Ph.D. Thesis, University of São Paulo, 2006.
[11] C. E. Misch, Implantes Dentários Contemporâneos, 2nd ed., Santos Livraria Editora, São Paulo, 2000.
[12] R. C. Gonzalez, R. E. Woods, S. L. Eddins, Digital Image Processing Using MATLAB, 2nd ed., McGraw-Hill, Berkshire, 2009.
[13] R. C. Gonzalez, R. E. Woods, Digital Image Processing, 3rd ed., Addison-Wesley Longman, Boston, 2008.
[14] J. F. Kennedy, R. C. Eberhart, and Y. Shi, Swarm Intelligence, Morgan Kaufmann, San Francisco, 2001.
[15] L. S. Coelho and C. A. Sierakowski, A software tool for teaching of particle swarm optimization fundamentals, Advances in Engineering Software, 39 (2008), 877–887.
[16] G. T. Pulido, C. A. C. Coello, and L. V. Santana-Quintero, EMOPSO: A Multi-Objective Particle Swarm Optimizer with Emphasis on Efficiency, Lecture Notes in Computer Science, 4403 (2007), 272–285.
[17] F. D. Neves, A. J. Fernandes Neto, G. A. S. Barbosa, and P. C. Simamoto Júnior, Sugestão de Sequência de Avaliação para a Seleção do Pilar em Próteses Fixas Sobre Implantes/Cimentadas e Parafusadas, Revista Brasileira de Prótese Clínica e Laboratorial, 27 (2003), 535–548.
[18] P. I. Brånemark, G. A. Zarb, and T. Albrektsson, Introduction to Osseointegration, in: Tissue-Integrated Prostheses, Quintessence Books, Chicago, 1985.
[19] S. Olate, M. C. N. Lyrio, M. Moraes, R. Mazzonetto, and R. W. F. Moreira, Influence of diameter and length of implant on early dental implant failure, Journal of Oral and Maxillofacial Surgery (American Association of Oral and Maxillofacial Surgeons), 68 (2010), 414–419.
[20] B. C. P. Moraes, Avaliação da angulação e inclinação dos dentes anteriores por meio da tomografia computadorizada por feixe cônico, em pacientes com fissura transforame incisivo unilateral, M.Sc. thesis, University of São Paulo, 2010.
[21] M. F. Haddad, E. P. Pellizzer, J. V. Q. Mazaro, F. R. Verri, and R. M. Falcón-Antenucci, Conceitos básicos para a reabilitação oral por meio de implantes osseointegrados – parte II: influência da inclinação e do tipo de conexão, Revista Odontológica de Araçatuba, 29 (2008), 24–29.
[22] A.
L. Szejka, M. Rudek, and O. Canciglieri Júnior, Methodological Proposal to Determine a Suitable Implant for a Single Dental Failure Through CAD Geometric Modelling, in: C. Bil et al. (eds.), 20th ISPE International Conference on Concurrent Engineering, IOS Press, Amsterdam, 2013, pp. 303–313.
[23] A. L. Szejka, M. Rudek, and O. Canciglieri Júnior, Engineering inference mechanisms reasoning system in design for dental implant, WIT Transactions on the Built Environment, 145 (2014), 549–557.
[24] A. L. Szejka, O. Canciglieri Júnior, M. Rudek, and H. Panetto, A Conceptual Knowledge-link Model for Supporting Dental Implant Process, Advanced Materials Research, 945–949 (2014), 3424–3429.
[25] D. J. Czelusniak, A. L. Szejka, and O. Canciglieri Júnior, Agents Software with Ontologies in Expert Systems to Support Dental Prosthesis Design Decisions, Advanced Materials Research, 945–949 (2014), 3430–3437.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-565

Engineering Collaboration in Mechatronic Product Development

Sergej BONDAR a, Henry BOUWHUIS b and Josip STJEPANDIĆ a,1
a PROSTEP AG, Germany
b Sensata Technologies Holding N.V., The Netherlands

Abstract. Sensata Technologies Holding N.V., a global industrial technology company, is a leader in the development, manufacture and sale of sensors and controls. It produces a wide range of customized, innovative sensors and controls for mission-critical applications such as thermal circuit breakers in aircraft, pressure sensors in automotive systems, and bimetal current and temperature control devices in electric motors. Business centers and manufacturing sites in twelve countries are involved in the engineering process for over 50 OEMs all over the world. Ensuring that the engineered data is delivered to the recipient in the right format, quality and time is crucial in the collaboration between Sensata and its customers. As the number of partners and CAD systems (which differ by release) increases the complexity exponentially, a direct transfer and translation service represents a major challenge for Sensata's IT, especially since know-how protection is becoming increasingly important and is usually part of the exchange process. Sensata has tackled the challenge by establishing a focal point for data exchange and translation (including a knowledge protection process) that can be controlled in a simple way to ensure Sensata's compliance. The OpenDESC.com service, utilized by Sensata as this focal point, is capable of sending data to all Sensata partners, applying the desired translation and knowledge protection settings and keeping the standards at the newest releases. In this way, Sensata is independent of the standards and tools its partners use; a partner's transition to JT or other formats therefore does not affect the engineering process at Sensata, which leads to large gains in efficiency and quality as well as cost savings. This paper describes the requirements of engineering collaboration in mechatronic product development and the implemented solution.

Keywords. Engineering Collaboration, Mechatronic Product Development, CAD Data Exchange Service Center

Introduction

About 50 different sensors are installed in an ordinary middle-class car, and this number increases every year.
Sensors ensure a cleaner environment, driving safety, higher fuel efficiency and efficient energy consumption (Figure 1) [1][2]. Such expanding application fields generate huge growth potential for the sensor industry worldwide. Sensata Technologies is one of the globally leading manufacturers of safety-critical sensors and controllers for the automotive industry, but also for other industries such as aerospace, shipbuilding, railway, domestic appliances, air-conditioning, photovoltaics and mobile communication. The international company, headquartered in Attleboro, MA (USA), generated sales of 2.4 billion dollars in the last fiscal year. With 17,000 employees, it is the global leader in high-level pressure sensors. Thanks to the growing demand for sensors and a series of strategic acquisitions, Sensata has grown dynamically over recent years. The product portfolio of the company covers 17,000 separate articles (Figure 2).

1 Corresponding Author; E-mail: josip.stjepandic@opendesc.com.

Figure 1. Growth drivers for the sensor industry.

Nowadays, powerful sensors are developed using modern IT tools such as CAD, PDM and validation tools. Many automotive suppliers develop their products with the CAD systems of the respective customer [3]. The sensor manufacturer Sensata avoids the costly maintenance of a zoo of dozens of different CAD systems by translating its 3D models and 2D drawings instead. Since making this move, there have been hardly any complaints, although the automotive manufacturers are becoming ever more demanding [4]. This paper describes the challenge, the solution and the practice of engineering collaboration in mechatronic product development at Sensata, with a focus on data exchange in the customer process. After an explanation of the background and related work in the relevant research fields, we illustrate the application case of Sensata.

1. Background and Related Work

The mechatronic development process has been a subject of research for many years. For a general classification, two types of mechatronic systems, which illustrate the wide range of mechatronics, can be distinguished: systems based on the spatial integration of mechanics and electronics, and multi-body systems with a controlled movement behavior [5]. The aim of the first type of system is to accommodate a high number of mechanical and electrical function carriers in a small installation space. The essential capability of this system integration lies in miniaturization, lower manufacturing cost and higher reliability. Assembly and connection technologies with specific characteristics, such as MID (Molded Interconnect Devices), are the prime focus. A particular consequence of the development of such products is that the product concept is already determined by the production technology. This yields the necessity to develop the product and the production system virtually, concurrently and integrally, using comprehensive validation procedures [6][7].

Figure 2. Sensata's product portfolio.

The latter type of system is about improving the movement behavior of multi-body systems. For this purpose, sensors detect information about the environment, but also about the system itself. This information is subsequently processed and, with the aid of actuators, suitable reactions to improve the movement behavior are triggered in the respective context.
Control systems engineering is the major task in the development of products of this type. Guideline 2206 of the VDI (Association of German Engineers) gives the practitioner a guide for the development of such systems [8]. The interoperability of the disciplines involved in the product development process is often not yet mastered. As before, the technical system is considered primarily from the point of view of a rather isolated specialist discipline and domain. At the latest when contributions from the different fields of study are merged into the product as a whole, time-consuming and costly iterations arise. This results in a need to address methods, tools and procedures for model-driven and synchronized development [9][10][11]. Mechatronic products are still not reliable enough, which is reflected, for example, in increasing goodwill and warranty costs in the automotive industry. There is a substantial need for action regarding the prediction and assurance of the reliability of mechatronic systems, as well as monitoring, inspection, testing and diagnostic procedures [12].

Mechatronic system design requires a high degree of integration; the complex mechatronic system is therefore often divided into simpler subsystems or components, while the complex design project calls for the coordination of resources and persons in order to be successful [13]. Collaboration is a measure for enhanced agility [14]. As a result, the collaboration among different individuals and disciplines during the mechatronic system design process plays a key role in ensuring that the results of their efforts are successful, and especially in obtaining an integrated system [15][16][17].

Basically, there are two collaboration levels. The first focuses on the collaboration of individuals, in other words the interaction between designers, which can be called low-level collaboration (micro level). The second, called high-level collaboration, emphasizes the collaboration among different disciplines or domains (macro level) [18][19]. Low-level collaboration, which takes place among individuals, is highly important for mechatronic system design. A design project is often decomposed into tasks or subtasks, and each task or subtask is assigned to individuals. The project management is based on the organization of the available resources to accomplish these tasks or subtasks. Here collaboration is significant, because one task may only be able to start after several other tasks have been completed, and any individual in the project should be able to determine the status of the project [20][21]. Traditionally, low-level collaboration is realized through informal communication supported by face-to-face meetings or communication equipment (mail, telephone, teleconference). Compared with low-level collaboration, high-level collaboration emphasizes multi-discipline/domain collaboration [22]. High-level collaboration focuses not only on assembling the discipline-specific designs, but pays special attention to the design interfaces among them as well. It helps to achieve a sound, synergetic integration of the components. In the case of Sensata, all the aforementioned design models are relevant at both the high and the low collaboration level.
The aforementioned design models provide available approaches for mechatronic system design. Three criteria can be used for the classification of collaboration [18][23]:

(1) Concurrent design: concurrent design of the expert knowledge is very important to shorten the design process, as rapidly changing markets lead to shortened product development lifecycles; the sequential design model is the only one that cannot provide concurrent design [1][6].
(2) Macro-level collaboration: the exchange of domain-specific design data and simulation results supports multi-physics simulation during the mechatronic system design process. Actuators (electronics and mechanics), embedded control systems (electronics and software) and sensors (mechanics, optics and software) are considered the links between the different expert components; such links exist in every design model discussed above, but are not explicit in the V-model and VDI 2206 [8].
(3) Micro-level collaboration: specific management of the collaboration between the expert disciplines allows engineers to manage the design data systematically and to obtain more integrated mechatronic products. The hierarchical design model is considered to realize this specific management of collaboration only "partially", because all design parameters and requirement parameters affecting multiple disciplines are represented at the mechatronic coupling level of hierarchical design models [18][24].

In the case of Sensata, concurrent design is most important in the customer process, while macro-level and micro-level collaboration are relevant for the internal processes.

2. Use Case

The product portfolio of Sensata covers 17,000 separate items, of which about 1.3 billion units are shipped to customers every year. Recently, Sensata set standards in terms of innovation with the development of a pressure sensor that is used in the cylinder heads of internal combustion engines to optimize the compression of the mixture; in this way, the CO2 emissions can be reduced significantly. Sensata has a presence in 15 countries worldwide with development and production facilities. The business center in Almelo (The Netherlands) is responsible for the design and development of pressure sensors for the automotive and commercial vehicle sectors (Figure 3). The sensors for the automotive industry are usually developed to customer order for a particular model or model series. Due to the various spatial installation situations, there are numerous design variants, and the ability to develop all these variants quickly is an important competitive advantage for a supplier [14][25]. A key challenge for product developers on the supplier's side is the demanding documentation obligation towards their clients, because the sensors are used in safety-critical applications and must not fail. Such a supplier is therefore forced to exchange its product documentation frequently with its customers in an appropriate way [26]. Large automotive manufacturers evaluate this ability of their suppliers in the purchasing process [27][28].

Figure 3. Different sensors in an automobile.

3. Stringent Documentation Obligation

The requirements of the automotive manufacturers and their large system suppliers have become more stringent in recent years in terms of product documentation.
Formerly, suppliers were allowed to deliver their CAD data in neutral formats such as STEP. Today, most original equipment manufacturers (OEMs) require not only the 3D models in native formats for approval, but also geometrically associative 2D drawings generated according to their guidelines [29][30]. Consequently, the translation of CAD data into the different customer-required formats can hardly be fully automated. Sensors are mechatronic products in which the mechanical components play an important role for reliability. At Sensata, these are always designed with the 3D CAD system SolidWorks, which is installed at around 100 workplaces worldwide and can be used simultaneously under a floating license. To speed up coordination with the clients, product developers may deliver their product geometry in neutral formats during the design phase. At the latest for the approval, however, the CAD data has to be translated into the appropriate target format of CATIA V5, NX or PTC Creo.

Figure 4. CAD exchange and translation in www.opendesc.com.

While the developers at headquarters translate their CAD data in-house, the European business centers have always used an external service provider for CAD translation. Previously, however, they had to take care of uploading the data to the client systems and portals themselves, which required human resources with the appropriate know-how. In order to reduce the corresponding costs, Sensata decided four years ago to replace its former partner with the translation and exchange service OpenDESC.com (Figure 4). There are not many alternative service providers globally who offer both services from a single source. The service provider must know the target systems of the OEMs, their configurations and start models, and which CAD data the customers need. Even when Sensata gains a new customer, the service provider must be able to provide its service after the transition phase [31]. Further requirements, such as intellectual property protection, can also be taken into account here [32].

4. Translation of Drawings

Using the OpenDESC.com pipeline, the translation of 3D models can be largely automated. For this purpose, specific workflow methods are pre-defined. However, fitting the data to the OEM-specific standards mostly requires manual intervention by translation experts who, for example, need to set the appropriate start model or customize the profile for the quality check, depending on the particular recipient. They also take care of the visual inspection of the data to be translated, which sometimes contains errors already in the original system that must be corrected prior to translation [33].

When translating the CAD models, the associativity between the 3D geometry and the derived 2D drawings is lost. Since the OEMs increasingly require associative drawings in native formats for documentation purposes, these relations must subsequently be restored. For this purpose, appropriate templates were developed for the different target systems, making it possible to partially automate the process of correlating model and drawing. However, some manual work is always required due to the complexity of the drawings.

Figure 5. User interface of OpenDESC.com.
The users do not send their files directly to OpenDESC.com, but put them in a special transfer directory together with the information about the intended recipients. From there, the key user responsible for data translation collects the files and uploads them via an encrypted connection to OpenDESC.com, where the data is translated. For quality control, Sensata receives the translated drawings and models as 2D and 3D PDF documents. At the same time, the translated files are provided on the platform ready for dispatch in the CAD format of the respective OEM, so that after approval they can be sent automatically to the receiver or made available for download (Figure 5). Usually, such a translation order does not take longer than two days.

5. Experiences and Achievements

OpenDESC.com keeps track of the data exchange and informs the sender whether the data has been properly delivered. When the transmission to the OEM is finished, or when the data exchange experts have received an end-to-end response for a transmission via OFTP, they send a time-stamped copy of the transmitted data to the corresponding Sensata employee, who stores the data in the PDM system Agile. This allows the sensor manufacturer to prove the version and the sending time to the client at any time, independently of the service collaboration with OpenDESC.com.

Since sensors are not particularly large, the volume of data to be translated is only a few megabytes. The number of translation jobs has increased continuously thanks to the company's growth and the growing number of development projects in recent years. Currently, about 100 new sensors are translated annually into the formats of the customers at the sites in the Netherlands, Belgium and France, which actively use OpenDESC.com. It is expected that the number of translation jobs will increase with the integration of additional locations and the acquisition of new customers in the next few years (Figure 6).

Figure 6. CAD translation via www.OpenDESC.com.

A key advantage of the translation and exchange service is the full transfer of responsibility to the service provider [34]. As a result, no erroneous data, or data that does not meet the formal requirements of the OEM, is sent to the client. By using OpenDESC.com, the quality of the outgoing data was improved significantly, and Sensata hardly gets any data back rejected by the customers. The improved data quality can be ascribed to good knowledge of the OEM requirements on the one hand, and to the thorough data quality check with appropriate checking tools prior to translation on the other [30]. If the original data does not meet the quality requirements, the job is simply aborted and restarted after the data has been corrected.

How much money Sensata saves through outsourcing cannot be quantified precisely, owing to the entirely changed workflows. In order to translate and exchange the data themselves, the company would need at least one license for each CAD system and an operator who can use it. In addition, trained personnel would be required to keep the IT environment up to date. In sum, these are fixed costs of several hundred thousand euros, which exceed the annual service costs. A big advantage of outsourcing is the resilience against unpredictable changes in the IT environment when customers update their CAD and PDM equipment. Keeping up with such changes is quite demanding for a single company, even if you know what to do.
It is thus not only more cost-effective but also more reliable to subcontract the data translation and data transfer to an external provider.

6. Conclusions and Outlook

Manufacturers of typical supply parts such as sensors are forced to adapt their business processes to a plethora of customers with different process requirements. Engineering collaboration in mechatronics can be subdivided into different levels, and different approaches are available, not only to meet the requirements of the customers but also to achieve operational excellence [35]. In a dynamic collaborative environment like the global automotive industry, working conditions are subject to continuous change. Suppliers who work together with different OEMs and tier-1 suppliers constantly have to cope with new requirements relating to exchange partners, data formats, system environments to be supported, quality and security requirements, etc. If they take data communication with their customers into their own hands, this means that they have to constantly adapt their data translation and exchange processes to the ever-changing requirements. To prevent an explosion of the fixed costs for the setup and maintenance of such a communication infrastructure, collaboration with a competent service provider can be an interesting alternative, as it not only cuts costs but also facilitates making the exchange processes uniform and thus ensures a higher level of flexibility, reliability and traceability.

Whether outsourcing is worthwhile depends on various hard-to-predict factors with changing impact, such as the number of exchange partners involved, the volume of data, requirements regarding data quality, etc. The example provided by Sensata, however, makes it clear that the ROI for such an investment can be calculated relatively well. Based on contractual provisions with the service provider, the probably most important argument that can justify such a decision is the high customer satisfaction index. As further consolidation and unification of the CAD and PDM market cannot be expected, suppliers will have to keep handling this constellation of heterogeneous target nodes in the supply network. Future development lies in the further automation of the whole communication process and the provision of standard communication software products for OEM-to-supplier communication based on recent standards (STEP AP242 and JT), in order to avoid expensive point-to-point connections [4].

References

[1] B.T. Fijalkowski, Automotive Mechatronics: Operational and Practical Issues, Springer-Verlag, Heidelberg, 2011.
[2] K. Reif, Automotive Mechatronics: Automotive Networking, Driving Stability Systems, Electronics, Springer Fachmedien, Wiesbaden, 2015.
[3] M. Borsato, M. Peruzzini, Collaborative Engineering, in: J. Stjepandić et al. (eds.), Concurrent Engineering in the 21st Century: Foundations, Developments and Challenges, Springer International Publishing, Cham, 2015, pp. 165–196.
[4] A. Katzenbach, Automotive, in: J. Stjepandić et al. (eds.), Concurrent Engineering in the 21st Century: Foundations, Developments and Challenges, Springer International Publishing, Cham, 2015, pp. 607–638.
[5] M. Jouaneh, Fundamentals of Mechatronics, Cengage Learning, Stamford, 2013.
[6] S.I. Weiss, Product and Systems Development: A Value Approach, John Wiley & Sons, Hoboken, 2013.
[7] A. A. Alvarez Cabrera, K. Woestenenk, T.
Tomiyama, An architecture model to support cooperative design for mechatronic products: A control design case, Mechatronics, Vol. 21 (2011), pp. 534–547.
[8] N.N., VDI-Richtlinie 2206, Entwicklungsmethodik für mechatronische Systeme, 2004.
[9] Y. Ni, J.F. Broenink, A co-modelling method for solving incompatibilities during co-design of mechatronic devices, Advanced Engineering Informatics, Vol. 28 (2014), pp. 232–240.
[10] H. Komoto, T. Tomiyama, A framework for computer-aided conceptual design and its application to system architecting of mechatronics products, Computer-Aided Design, Vol. 44 (2012), pp. 931–946.
[11] J.M. Torry-Smith, N.H. Mortensen, S. Achiche, A proposal for a classification of product-related dependencies in development of mechatronic products, Research in Engineering Design, Vol. 25 (2014), pp. 53–74.
[12] S. Sierla, I. Tumer, N. Papakonstantinou, K. Koskinen, D. Jensen, Early integration of safety to the mechatronic system design process by the functional failure identification and propagation framework, Mechatronics, Vol. 22 (2012), pp. 137–151.
[13] S. Alguezaui, R. Filieri, A knowledge-based view of the extending enterprise for enhancing a collaborative innovation advantage, Int. J. Agile Systems and Management, Vol. 7 (2014), No. 2, pp. 116–131.
[14] A. McLay, Re-reengineering the dream: agility as competitive adaptability, Int. J. Agile Systems and Management, Vol. 7 (2014), No. 2, pp. 101–115.
[15] G. Barbieri, C. Fantuzzi, R. Borsari, A model-based design methodology for the development of mechatronic systems, Mechatronics, Vol. 24 (2014), pp. 833–843.
[16] A. Lanzotti, F. Renno, M. Russo, R. Russo, M. Terzo, Design and development of an automotive magnetorheological semi-active differential, Mechatronics, Vol. 24 (2014), pp. 426–435.
[17] M. Törngren, A. Qamar, M. Biehl, F. Loiret, J. El-khoury, Integrating viewpoints in the development of mechatronic products, Mechatronics, Vol. 24 (2014), pp. 745–762.
[18] C. Zheng, J. Le Duigou, M. Bricogne, B. Eynard, Survey of Design Process Models for Mechatronic Systems Engineering, 10e Congrès International de Génie Industriel CIGI2013, June 12–13, 2013.
[19] C. Zheng, M. Bricogne, J. Le Duigou, B. Eynard, Survey on mechatronic engineering: A focus on design methods, Advanced Engineering Informatics, Vol. 28 (2014), pp. 241–257.
[20] C. Acosta, V. J. Leon, C. Conrad, C. O. Malave, Global Engineering: Design, Decision Making, and Communication, CRC Press, Boca Raton, 2010.
[21] A. Villa, Managing Cooperation in Supply Network Structures and Small or Medium-sized Enterprises: Main Criteria and Tools for Managers, Springer-Verlag, London, 2011.
[22] R.C. Beckett, Functional system maps as boundary objects in complex system development, Int. J. Agile Systems and Management, Vol. 8 (2015), No. 1, pp. 53–69.
[23] C. Emmer, A. Fröhlich, V. Jäkel, J. Stjepandić, Standardized Approach to ECAD/MCAD Collaboration, in: J. Cha et al. (eds.), Moving Integrated Product Development to Service Clouds in the Global Economy, Proceedings of the 21st ISPE Inc. International Conference on Concurrent Engineering, IOS Press, Amsterdam, 2014, pp. 587–596.
[24] A. Biahmou, A. Fröhlich, J. Stjepandić, Improving interoperability in mechatronic product development, in: K.D. Thoben et al. (eds.), Collaborative Value Creation throughout the Whole Lifecycle, Proceedings of the PLM10 International Conference, Inderscience, Geneve, 2010, pp. 510–521.
[25] M.
Stevenson, The role of services in flexible supply chains: an exploratory study, Int. J. Agile Systems and Management, Vol. 6 (2013), No. 4, pp. 307–323.
[26] N.N., VDA Empfehlung 4961/3, Abstimmung der Datenlogistik in SE-Projekten, VDA, Frankfurt, 2012.
[27] D.W. Cho, Y.H. Lee, S. H. Ahn, M. K. Hwang, A framework for measuring the performance of service supply chain management, Computers & Industrial Engineering, Vol. 62 (2012), pp. 801–818.
[28] H. Carvalho, S.G. Azevedo, V. Cruz-Machado, An innovative agile and resilient index for the automotive supply chain, Int. J. Agile Systems and Management, Vol. 6 (2013), No. 3, pp. 258–278.
[29] S. Bondar, L. Potjewijd, J. Stjepandić, Globalized OEM and Tier-1 Processes at SKF, in: J. Stjepandić et al. (eds.), Concurrent Engineering Approaches for Sustainable Product Development in a Multi-Disciplinary Environment, Springer-Verlag, London, 2013, pp. 789–800.
[30] S. Bondar, C. Ruppert, J. Stjepandić, Ensuring data quality beyond change management in virtual enterprise, Int. J. Agile Systems and Management, Vol. 7 (2014), Nos. 3/4, pp. 204–223.
[31] S. Bondar, J.C. Hsu, J. Stjepandić, Network-Centric Operations during Transition in Global Enterprise, Int. J. Agile Systems and Management, Vol. 8 (2015), No. 3, in press.
[32] J. Stjepandić, H. Liese, A.C. Trappey, Intellectual Property Protection, in: J. Stjepandić et al. (eds.), Concurrent Engineering in the 21st Century: Foundations, Developments and Challenges, Springer International Publishing, Cham, 2015, pp. 607–638.
[33] T. Fischer, H.P. Martin, M. Endres, J. Stjepandić, O. Trinkhaus, Anwendungsorientierte Optimierung des neutralen CAD-Datenaustausches mit Schwerpunkt Genauigkeit und Toleranz, VDA, Frankfurt, 2000.
[34] F.J. Contractor, V. Kumar, S. K. Kundu, T. Pedersen, Global Outsourcing and Offshoring: An Integrated Approach to Theory and Corporate Strategy, Cambridge University Press, Cambridge, 2011.
[35] E. Hofmann, P. Beck, E. Füger, The Supply Chain Differentiation Guide: A Roadmap to Operational Excellence, Springer-Verlag, Berlin Heidelberg, 2012.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-575

Leveraging 3D CAD Data in Product Life Cycle: Exchange – Visualization – Collaboration

Alain PFOUGA and Josip STJEPANDIĆ
PROSTEP AG, Darmstadt, Germany

Abstract. Since their practical introduction in the 1970s, virtual product data have emerged as a major source of technical intelligence in manufacturing. Modern organizations have since developed and continuously improved strategies, methods and tools to feed the individual needs of business domains, multidisciplinary teams and the supply chain, in order to master the growing complexity of virtual product data and manufacturing processes. Three principal activities are associated with the repurposing of virtual product data: the exchange, visualization and communication of manufacturing intelligence from the perspective of its virtual product representation. One development approach alongside PLM, which declares the 3D CAD model the record of authority and the source from which all other documentation flows, is Model-Based Design (MBD).
By emphasizing the use of digital CAD files for collaboration from the beginning of development, it lays the ground for a fully integrated and collaborative environment founded on a 3D model-based definition that is detailed, documented and shared across the enterprise to enable rapid, seamless and affordable deployment of products from concept to disposal. Since the practical introduction of virtual product data in the 1970s, several CAD interoperability and visualization formats have indeed been developed to support the aforementioned strategies. Most of them, however, have not yet provided the expected outcome, mainly due to their lack of versatility and their primary focus on selected business needs only. This paper analyses methods and tools used in virtual product development to leverage 3D CAD data across the entire life cycle. It presents a set of versatile concepts for mastering exchange, aware and unaware visualization, and collaboration from single technical packages fit purposely for different domains and disciplines.

Keywords: 3D, Visualization, Collaboration, Data Exchange, CAD, PDM/PLM.

Introduction

The introduction of virtual product data and predominantly the usage of Computer-Aided (CAx) systems have fundamentally transformed product development. Particularly the application of 3D CAD and PLM strategies has led to higher productivity, better quality and a simultaneous reduction of overall development time and costs. The fundamental advantages provided by the introduction of the aforementioned methods and tools have likewise contributed to growing complexity. Combined with the various domain- and organization-specific software applications accompanying new product development trends, the pace of change, the volume of data and the amount of knowledge embedded in virtual product data are growing exponentially.

New product development methods such as Concurrent Design (CD) and Simultaneous Engineering (SE) have been widely adopted. They declare design and manufacturing engineering tasks to be integrated functional units, which can be performed concurrently in the extended enterprise. In this context, it is fundamental to achieve great accuracy in providing the right data, within the right application context, to the right party. Modern organizations achieve this successfully if their core product development activities are contextually linked together. These main activities consist of the exchange and re-use of product-relevant data across different applications, domains and disciplines. The visualization of virtual product models with purposeful disclosure of the author's intents, and communication, are the two other main product development activities; the latter implies a richer collaboration experience throughout engineering and is integrated across the entire supply chain. Mastering quality, product design and configurations, bills of materials, changes and releases requires an overall product and process integration, which takes care of the differences in coordination workflows, engineering domains, methods and tools of the parties involved in the development process.

1. The challenge with 3D enabled CAD interoperability formats

Several interoperability data formats have emerged in the past. Basically, there are two primary types of formats: proprietary and open formats. Proprietary formats are vendor-specific. They are used to describe product data in the majority of authoring tools in the marketplace.
Descriptions of these formats are generally regarded as intellectual property by the software vendors and are protected appropriately. Due to their lack of openness, they are essentially less suitable for collaboration in the extended enterprise and in the context of this paper; thus, they will not be considered further. Open formats, on the other hand, are often developed to enable interoperability between applications. They provide definitions which are openly specified and accessible to third parties (application vendors and customers) who wish to make data available from and to their own applications. Open formats, and particularly international standards, are by their nature stable and evolve slowly. However, they protect the investment in tools, methods and processes by ensuring that the data they encapsulate is always capable of being leveraged downstream and recoverable from an archive repository [5].

Figure 1. Continuous development of collaboration standards, from VDAFS (3D surface models) and IGES (drawings, 3D parts) through STEP AP203/AP214 (3D parts, assemblies, PDM data) and W3C XML Schema/PLM Services to STEP AP242 (XML, kinematics) and JT (ISO 14306, visualization), spanning heterogeneous 2D/3D CAD systems, TDM/PDM systems, portals and marketplaces, and global backbones [6].

It goes without saying that formats such as IGES (Section 1.1), DXF and STEP (Section 1.2), 3D XML or JT (Section 1.3) are widely adopted (Figure 1) and have contributed to greater momentum in product development.

1.1. IGES – Initial Graphics Exchange Specification

IGES is a file format which defines a vendor-neutral data format establishing information structures for the digital representation and exchange of product definition data. It was initially published in 1980 by the U.S. National Bureau of Standards (NBS) as NBSIR 80-1978. It supports the exchange of geometric, topological and non-geometric product definitions among Computer-Aided Design and Computer-Aided Manufacturing (CAD/CAM) systems, including administrative identifications, design or analysis idealized models, shapes with their physical characteristics, and processing and presentation information. Applications supported by IGES thus include traditional engineering drawings and design, models for simulation analysis, and other manufacturing functions.

1.2. STEP ISO 10303 – STandard for the Exchange of Product data

The development of STEP started in 1984 as a worldwide collaboration. The goal was to define a mechanism capable of describing product data throughout the lifecycle of a product, independent of any particular system; such an attempt was made for the very first time. The nature of its description makes STEP suitable not only for neutral file exchange, but also as a basis for implementing and sharing product databases and for archiving. Typically, STEP is used to exchange data between CAD, computer-aided manufacturing, computer-aided engineering, product data management/EDM and other CAx systems. STEP addresses product data from mechanical and electrical design, geometric dimensioning and tolerancing, analysis and manufacturing, with additional information specific to various industries such as automotive, aerospace, building construction, shipbuilding, oil and gas, process plants and others. Unlike modern formats such as
Unlike modern formats such as JT, STEP does not provide "lightweight" representations of a product or object, nor does it concern itself with compression. This makes STEP not the first choice for visualization in downstream processes. STEP is the most important and largest effort ever established in the engineering domain and has replaced various CAD exchange standards that were established before its wide industrial adoption. It is developed and maintained by the ISO technical committee TC 184.

1.3. JT ISO 14306 – Jupiter Tessellation

The JT format described in ISO 14306:2012 is used primarily in industrial use cases as the means for capturing and repurposing lightweight 3D product definition data [4]. It is a binary file format whose development started in 1990. JT is used both as a data exchange format between design partners and manufacturers and for visualization applications such as digital preassembly (also called digital mock-up or DMU) and generalized visualization, more commonly referred to as view/measure/mark-up (VMM). According to Opsahl [5], one of the key characteristics that distinguishes JT from other formats is its "duality": it can be used where data is exchanged from one application to another, as well as where visualization is desired.

Figure 2. Capabilities of the widely used CAD data interoperability standard formats.

As a matter of fact, among all the aforementioned proprietary and open formats, none has the versatility and capability on its own to equally sustain the diverse requirements of engineering collaboration [7] in the extended enterprise and, further, beyond the product development stages of the product lifecycle. Either they are not easily accessible, or they do not have sufficient capacity for sharing all relevant product data across different applications, domains and teams. Or they do not provide sufficient tools and SDKs to support and customize the collaboration experience. Or their industrial use is very low, or they simply are not ratified by a recognized standards organization, which makes them strategically unsustainable for modern organizations. The industrial application of these 3D formats has, moreover, centered on the transport of specific data sets, mainly for the purpose of visualization, data exchange or bulk migration (Figure 2) in downstream processes, whose underlying goals are the presentation and transformation of native 3D CAD geometry from an authoring application into an alternative format. The resulting data are finally translated into a proprietary format of a third-party application for use in e.g. design, validation, viewing or long-term archiving. In the normal case, and as far as engineering collaboration is concerned, the different parts describing an affected request and their virtual product data are submitted through different channels and towards numerous authoring systems, be it a request for information, work, change or approval. E-mail, CAD and various data exchange applications, as well as a multitude of data communication channels, are used alike. Fundamentally, this approach limits the leveraging of product data across lifecycle stages, domains and supply chains, because the required information is delivered in disconnected parcels. These have to be collected systematically and realigned to each other on reception to be consumed effectively. In many cases, they have to be translated into the recipient's workspace.
The missing link between the parcels, though, is an issue which leads to unnecessary management overhead for many organizations. As far as manufacturing is concerned, this means that development partners who have to support different systems and configurations are busy adapting and integrating data instead of using them directly.

2. Current approaches for improved 3D-based engineering collaboration

Lifecycle collaboration is more versatile than providing chunks of data. It is more than disconnected product structure, visualization or 3D design! It is the consistent combination of all relevant data streams, put in context with a recipient who consumes these data to better perform a set of product development tasks. Regarding this, research and industrial communities are investigating approaches that incorporate different types of information.

2.1. JT/STEP Integration

There is one effort – the first of its kind – aiming at the smart combination of the two international standards STEP and JT to establish a process-oriented solution for supporting automotive data exchange requirements. The manufacturing community has recognized that JT itself can only reach its full potential when applied in combination with the smart XML functionalities of the new Application Protocol (AP) 242 of the STEP standard [4]. In this perspective, STEP AP 242 should become the process backbone for e.g. assembly, metadata and kinematics, whereas JT is the enabler for lightweight visualization of 3D data.

2.2. VDA recommendation 4953-2

The recommendation 4953-2 is a proposal of the German Automotive Association (VDA), which describes concepts and means to replace the conventional 2D drawing (as the leading carrier of product information) by documentation on the basis of a technical data container [8]. The scope of this recommendation is a document-based container which comprises mandatory and optional contents, using 3D technologies and providing linked metadata. It aims at eliminating the need, existing in many areas, for the derivation and management of 2D-based collaboration and technical documentation (Figure 3). VDA 4953 describes the structure and handling of product data embedded in a technical container as well as its architecture. A 3D content with annotated geometry representation is the major mandatory content, where JT (ISO 14306) is recommended for use. A structured metadata content, which is not embedded into but linked with the 3D content, forms another mandatory part. VDA 4953-2 recommends the STEP AP242 BO XML format (ISO 10303-242) for storing metadata and PDF/A (ISO 19005) for their presentation inside the container. Optional contents can be embedded and should be of a file format that can be used for long-term archiving. The German automotive OEM Volkswagen has published and introduced such a container, using PDF as the container and JT for storing the 3D product data. An external viewer is launched interactively to present and query JT objects from the PDF/A presentation layer for metadata.

Figure 3. Processes addressed within VDA 4953-2 (create product data in authoring applications and information systems such as a 3D CAD system and a PDM system; generate the document container with mandatory metadata and 3D data plus optional content such as additional technical documents; publish; consume the product data in use cases such as visualization, data exchange, data processing and archiving).
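The container structure described by the recommendation can be sketched as a small data model. The following C++ fragment is purely illustrative; the type and field names are the author's own shorthand for the mandatory and optional contents named above, not identifiers from VDA 4953-2 itself.

#include <string>
#include <vector>

struct ContainerPart {
    std::string name;     // file name inside the container
    std::string format;   // e.g. "JT", "STEP AP242 BO XML", "PDF/A"
};

struct TechnicalDataContainer {
    ContainerPart geometry;               // mandatory: annotated 3D content, JT recommended
    ContainerPart metadata;               // mandatory: structured metadata, linked (not embedded)
    ContainerPart presentation;           // PDF/A presentation layer for the metadata
    std::vector<ContainerPart> optional;  // extras in long-term-archivable formats

    // The container is publishable only if both mandatory parts are present.
    bool isPublishable() const {
        return !geometry.name.empty() && geometry.format == "JT"
            && !metadata.name.empty();
    }
};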
2.3. Model-Based Definition

Model-Based Definition (MBD) is a concept for managing engineering and manufacturing information using 3D models as the primary source and record of authority for all other product data related to design, process planning, manufacturing, test, services and the overall product lifecycle [5] [10]. MBD at its core is truly not pushing a format or a tool. It rather defines a "3D Master" with its associated descriptions and technical files to push interoperability one step further. It can thus be implemented with various standard formats such as STEP, JT or PDF.

3. Improving lifecycle collaboration with 3D PDF

3D PDF describes a PDF/E (ISO 32000, ISO 24517) document containing 3D data in PRC (Section 3.1) or in Universal 3D (ECMA-363) format. Unlike traditional interoperability formats, PDF supports the creation of authored, fit-for-purpose documents used for the distribution, display and collection of data relevant to fulfilling a job role. This information is represented as 3D information together with data types such as 2D drawings, audio, video, animations and images (Figure 4) – all encapsulated in a ubiquitously consumable form that includes forms, templates, digital rights management and signatures [5]. As a "transport container", and besides the ubiquitous availability of the Adobe Acrobat Reader in almost any organization, PDF provides the options to consume 3D without the need for an extra plug-in or application, or to feed product data such as 3D geometry and metadata through an exchange process or to a specialized visualization application, thereby leveraging the relevant infrastructure. An entire business logic defining interactions with embedded data of any type can be implemented through programmatic routines in languages supported by reader applications, such as JavaScript (ISO-16262). As far as manufacturing is concerned, a 3D PDF document provides fundamental descriptions to achieve simultaneous engineering and concurrent design based on the virtual product in aerospace, automotive or shipbuilding. It is used to improve visualization and productivity in architecture, engineering and construction (AEC) through enhanced collection and delivery of information. It is likewise applicable in 3D-based medical imaging workflows to improve 3D diagnosis and therapy.

Figure 4. Contents of a 3D PDF document.

3.1. PRC ISO 14739 – Product Representation Compact – The PDF/E 3D interoperability format

PRC is a compact 3D file format that can be used independently for representing 3D CAD-based models. It is designed to be included in PDF (ISO 32000) and other similar document formats for the purpose of 3D visualization and exchange [9]. It can be used for creating, viewing and distributing 3D data in document exchange workflows. With PRC, documents can be created that are interoperable with computer-aided applications such as CAD or CAM. In this regard, PRC is in many respects equivalent to traditional CAD formats such as STEP or JT (Figure 5). It is optimized to store, load and display various kinds of 3D data, especially data coming from CAD systems. It can deliver a much higher compression rate for large CAD files without losing accuracy, quality or efficiency. PRC unites features to handle CAD product structure, 3D visualization and accurate graphical description of virtual products as well as Product and Manufacturing Information (PMI).
PMI comprises non-geometric attributes which are available in CAD models and which are necessary for manufacturing components. These include geometric dimensions and tolerances, 3D annotations, surface finish and material specifications. The PRC format offers semantic PMI in machine-readable data structures, which can be processed in downstream phases.

Figure 5. Comparison of the different CAD interoperability formats.

3.2. Scenario and use cases

The following scenario outlines the great value of 3D PDF technology in the extended enterprise. It describes a solution where a universal representation of the digital product is required for different kinds of downstream users, but without the need to unnecessarily disclose a vast amount of native CAD data. Using PRC and 3D PDF, the built solution provides a reliable 3D reproduction of geometric features, views, annotations and product configurations, which is fundamental to support visualization, paperless inspection and reporting, and faster approval and review. The underlying application of 3D PDF technology furthermore leverages existing CAD and PLM cornerstone systems, while maintaining compliance with corporate policies such as those related to data quality, exchange and intellectual property protection. The scenario, which can be reduced or extended to real-world research and business cases, is described in Figure 6. This process covers various aspects specified in the VDA 4953-2 recommendation, except that it relies on PRC instead of JT for geometry representation and is not restricted to the use of STEP AP 242 for the representation of engineering metadata. 3D PDF is furthermore a key enabler for MBD, providing all functions required to reuse and leverage 3D data in downstream processing. In this scenario, a designer, who manages the product data inside a corporate information system, creates geometrical shapes using a 3D CAD system. He also designs views and annotations, which are needed to derive comprehensive, fit-for-purpose technical documentation for downstream usage. Documents published in the information system are translated automatically to 3D PDF using predefined templates and agreed-upon XML standard descriptions for metadata. Additional technical documents and forms aiming at seamless collaboration with the document can be embedded inside the PDF container (Figure 4).

Figure 6. Reference process for 3D PDF based collaboration (design & PLM: a native CAD model whose lifecycle is managed within the PLM application, authored in a 3D CAD system and information management systems such as PDM and ERP; convert: a 3D PDF generator produces a vendor-neutral PRC data model inside an intelligent 3D PDF file containing the exact geometry representation from the source CAD system, with optional DRM protection or activation of reader extensions; publish: via the corporate data exchange platform; consume: a portable PLM document, a technical data package represented by an interactive 3D PDF, in Adobe Acrobat/Reader for visualization, first article inspection, review and documentation, paperless collaboration, interactive work instructions, etc.).

The conversion layer provides optional encryption and extension mechanisms, which are useful to grant or restrict access to data embedded inside the 3D PDF container [11].
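The convert/publish steps of Figure 6 can be sketched as a short pipeline. None of the types or functions below belong to a real SDK; they are hypothetical stand-ins that only mirror the stages of the reference process under the assumptions just described.

#include <iostream>
#include <string>
#include <vector>

struct NativeModel { std::string pdmId; std::string cadFile; };
struct Prc   { std::vector<unsigned char> bytes; };   // exact geometry as PRC
struct Pdf3D { std::string file; bool drmProtected = false; };

// Stubbed steps: a real system would call a CAD-to-PRC converter and a
// template-driven 3D PDF generator here.
Prc convertToPrc(const NativeModel&) { return Prc{}; }
Pdf3D fillTemplate(const Prc&, const std::string& xmlMetadata,
                   const std::string& pdfTemplate) { return Pdf3D{pdfTemplate}; }
void protect(Pdf3D& doc, bool drm) { doc.drmProtected = drm; }
void publish(const Pdf3D& doc) { std::cout << "published " << doc.file << "\n"; }

int main() {
    NativeModel model{"PDM-4711", "bracket.CATPart"};  // identifiers are made up
    Prc prc = convertToPrc(model);                     // convert
    Pdf3D doc = fillTemplate(prc, "metadata.xml", "template.pdf");
    protect(doc, /*drm=*/true);                        // optional rights management
    publish(doc);                                      // corporate exchange platform
}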
The 3D PDF document generated thereon can be consumed internally or sent to development partners through corporate data exchange mechanisms for engineering data. In both cases, recipients have the ability to easily visualize embedded contents and interactively interrogate product data from within the free Acrobat Reader. They can extract files as well as machine-readable data, or synchronize PDF data with third-party applications. The 3D PDF document can furthermore be enriched, through the insertion of animations and supplementary data from further systems, to support interactive work instruction use cases. The final document is a rendition of multiple data gathered from many applications and through form fields, all accessible from a single technical data package within the Acrobat Reader used to consume these product data. Further detailed information on 3D PDF use cases can be retrieved from sources [12] [13].

Figure 7. Example of a 3D PDF container having multiple data streams.

4. Summary and closing thoughts

3D interoperability makes an important contribution to engineering collaboration. Several formats created to that end have successively dealt with the challenges of their time. Some of these, such as STEP, are highly verbose formats which gradually encapsulate all information necessary to define a product, its manufacture and its lifecycle support. Others focus on lightweight visualization use cases and cope better with the increasing size and complexity of data [5]. Traditional formats like STEP and JT, though, are not capable of supporting the publishing activity in a broader fashion. New tendencies therefore aim at strengthening these individual formats through combination with complementary standards or by using document-based approaches. Unlike STEP or JT, 3D PDF can serve multiple purposes and leverages 3D data downstream throughout the product lifecycle to create, distribute and manage ubiquitous, highly consumable, role-specific rich renditions. 3D PDF is a fundamentally different approach from the traditional experience established in product development – it is an exceptionally proficient contextual aggregation of multi-domain and multi-disciplinary product data. The manufacturing community should embrace it as an addition to, and a great improvement of, current engineering collaboration standards. All engineering components required for its description have meanwhile been published as international standards.

References

[1] J. Kluger, Simplexity: Why Simple Things Become Complex (And How Complex Things Can Be Made Simple), Hyperion Books, 2008.
[2] P. Pfalzgraf, A. Pfouga, T. Trautmann, Cross Enterprise Change and Release Processes based on 3D PDF, in J. Stjepandić et al. (eds.) Concurrent Engineering Approaches for Sustainable Product Development in a Multi-Disciplinary Environment, Springer-Verlag, London, 2013.
[3] A. Katzenbach, S. Handschuh, S. Vettermann, JT Format (ISO 14306) and AP 242 (ISO 10303): The Step to the Next Generation Collaborative Product Creation, in E. Kovács, D. Kochan (eds.) Digital Product and Process Development Systems - IFIP TC 5 International Conference, Proceedings, Springer-Verlag, Berlin Heidelberg, 2013.
[4] ISO 14306 - Industrial automation systems and integration — JT file format specification for 3D visualization, ISO, 2012.
[5] D. Opsahl, Positioning 3DPDF in Manufacturing - How to Understand 3DPDF when Compared to Other Formats, White paper by 3D PDF Consortium, 2012. (http://www.3dpdfconsortium.org)
[6] A. Katzenbach, Automotive, in J. Stjepandić et al. (eds.) Concurrent Engineering in the 21st Century - Foundations, Developments and Challenges, Springer International Publishing, Switzerland, 2015.
[7] A. Fröhlich, 3D Formats in the Field of Engineering — a Comparison, White Paper, PROSTEP AG, 2013.
[8] VDA 4953-2 Zeichnungslose Produktdokumentation [Drawing-less product documentation], VDA Recommendations, 19 March 2015. https://www.vda.de/en/services/Publications/Publication.~1263~.html
[9] Document management – 3D use of Product Representation Compact (PRC) format, ISO, 2012.
[10] F. Tian, H. Zhang, X. Chen, H. Zhou, D. Chen, A graphical symbol for machining process information description using Model-Based Definition technology, Trans Tech Publications, Switzerland, 2014.
[11] Data Security and Know-How Protection, PROSTEP AG (White paper, http://www.3dpdf.com/nc/en/server-solution/white-paper-data-security.html), 2014.
[12] 3D PDF technology, PROSTEP AG (White paper, http://www.3dpdf.com/nc/en/server-solution/whitepaper-3d-pdf-technology.html), 2012.
[13] A. Katzenbach, S. Handschuh, R. Dotzauer, A. Fröhlich, Product Lifecycle Visualization, in J. Stjepandić et al. (eds.) Concurrent Engineering in the 21st Century - Foundations, Developments and Challenges, Springer International Publishing, Switzerland, 2015.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-585

The Research of Music and Emotion Interaction with a Case Study of Intelligent Music Selection System

Li-Wei KO, Kai-Hsiang CHUANG and Ming-Chuan CHIU1
National Tsing Hua University, Taiwan

Abstract. Music plays an important role in human history. Since music can influence people's emotion and performance, it has been widely applied to enhance the efficiency of particular tasks, such as sports or medical applications. For example, a jogger can run a longer distance with suitable music. However, people usually select music according to personal preference; in this manner, the anticipated goal of the task might not be reached. Moreover, manual music selection is a time-consuming activity. Therefore, selecting the proper music from a huge music database has become a major issue. The aim of this study is to establish an intelligent music selection system. A method based on the Music Information Retrieval (MIR) technique, which can query and retrieve certain types of music, was developed, and the system was realized as application software. The system is not confined to selecting appropriate music by tracing the user's heart rate variability; more importantly, it can keep users in a positive emotional state. This study expects to provide effective music patterns, contributing to therapeutic music creation and to applications in the relevant field of music therapy.

Keywords. Data Mining, Music information retrieval (MIR), Music therapy, Heart rate variability (HRV)

Introduction

Music plays an important role in human history, even more so in the digital age. With the advance of science and technology, people can easily store thousands of songs on a personal computer or mobile device. As a result, searching for a song in a tremendous music database has become a critical issue. This issue brought about the prevalence of music information retrieval (MIR).
Music information retrieval (MIR) is an emerging research area that receives growing attention from both the research community and the music industry. It addresses the problem of querying and retrieving certain types of music from a tremendous database. However, selecting music manually is still a time-consuming activity. Further, many users do not know how, or are reluctant, to select songs. Users intend to listen to music that conforms to their current mood or to improve their emotion through music therapy. Nevertheless, only a few music information retrieval systems take music therapy as the basis for classification. Furthermore, few music selection systems can select appropriate music automatically according to the user's emotion.

1 Corresponding author, E-mail: mcchiu@ie.nthu.edu.tw

Therefore, the aim of this study is to establish an intelligent music selection system which selects appropriate music by detecting the user's heart rate variability, in order to maintain the user's emotion in a positive state and to keep the user's heart rate variability within the range of the health standard. This study establishes a music database through design of experiment, taking 15 pieces of pure music as training data. Subjective feelings (positive and negative affect) and objective data (heart rate variability) were observed to classify the categories. Data mining was applied as the research method, and amplitude, frequency and tempo were analyzed. Finally, 300 popular songs were input as testing data and the system was verified by experiment. The paper is organized as follows. In chapter 1, we discuss the model of affect, music features and music therapy. Chapter 2 illustrates the methodology and the framework of this study. The experimental analysis is discussed in chapter 3. Conclusions and potential research issues for future study are given in chapter 4.

1. Literature Review

1.1 Music and Emotion

Many related studies have discussed the interrelationship between music and emotion. However, most of them only expound the emotion of the music itself, rather than the changes in the audience's mood after listening to music (Kenny, 2004; Juslin & Laukka, 2004). This study uses the circumplex model of affect (Russell, 1980), as shown in Figure 1. The horizontal and vertical axes are pleasure/displeasure (positive/negative) and the degree of arousal. The valence dimension represents the extent of pleasure received by the individual in the emotional experience: positive stands for pleasure, while negative stands for displeasure. The degree of arousal refers to the intensity felt by the individual in the emotional experience. Through the crossing of the two dimensions, this model can be presented as a two-factor model consisting of Positive Affect and Negative Affect (Watson, et al., 1988). Positive Affect (PA) reflects the extent to which a person feels enthusiastic, active, and alert. High PA is a state of high energy, full concentration, and pleasurable engagement, whereas low PA is characterized by sadness and lethargy. In contrast, Negative Affect (NA) is a general dimension of subjective distress and unpleasurable engagement that subsumes a variety of aversive mood states, including anger, contempt, disgust, guilt, fear, and nervousness, with low NA being a state of calmness and serenity.
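The quadrant logic of the circumplex model is easy to make concrete. The following C++ sketch is illustrative only; the scaling of valence and arousal to signed values around zero is an assumption, not part of the study, and the quadrant labels anticipate the four music types (I-IV) used later in the paper.

#include <iostream>

enum class MusicType { I_MiseryArousal, II_PleasureArousal,
                       III_MiserySleepiness, IV_PleasureSleepiness };

// Map a (valence, arousal) pair to the four circumplex quadrants.
MusicType classify(double valence, double arousal) {
    if (valence <  0 && arousal >= 0) return MusicType::I_MiseryArousal;
    if (valence >= 0 && arousal >= 0) return MusicType::II_PleasureArousal;
    if (valence <  0)                 return MusicType::III_MiserySleepiness;
    return MusicType::IV_PleasureSleepiness;
}

int main() {
    // A pleasant, low-arousal piece falls into quadrant IV.
    std::cout << static_cast<int>(classify(0.6, -0.4)) << "\n";
}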
The aim of our research is to divide music into positive music and negative music, which inspire people to generate positive affect and negative affect respectively.

Figure 1. Circumplex Model of Affect (Russell, 1980).

1.2 Emotion and Heart Rate Variability

The emotions that people experience while interacting with their environment are associated with varying degrees of physiological arousal (Levenson, 2003). Heart rate variability (HRV) is used to measure the impact of the interaction between the sympathetic and parasympathetic nerves. The generated autonomic nervous message represents the capability of monitoring emotional reactions. Over the last 30 years, HRV analysis has become more and more popular as a non-invasive research and clinical tool for indirectly investigating both cardiac and autonomic nervous system function in health and disease. Most studies have explored the relationship between music and emotion; nonetheless, related applications that select music based on HRV are still few. Owing to equipment limitations, this paper applies the time-domain variable SDNN as the judging criterion. Based on the Yerkes-Dodson law, arousal theory stipulates that some intermediate level of arousal is optimal for performance (Eysenck & Eysenck, 1985). The Yerkes-Dodson law suggests that arousal and performance have an inverted-U relationship, as shown in Figure 2. That is, task performance is impaired when motivation is either very low or very high, and performance is maximized at some intermediate level of "optimal" motivation. However, little research has brought the Yerkes-Dodson law into practice.

Figure 2. The Yerkes-Dodson law.

1.3 Music Features

Music features are utilized to represent the music; they include three types – low-level, mid-level and top-level – as shown in Figure 3. This study analyzes the pitch, volume and tempo of the music and observes human emotion by experiment, whose results bridge the gap by inferring high-level features from the low- and mid-level features. The music features adopted in this research – volume, pitch and tempo – are discussed in detail below.

Figure 3. Characterization of music features.

Volume is an acoustic feature that is correlated to the samples' amplitudes within a frame: the greater the amplitude, the greater the volume. Watson (1942) found that louder songs were characterized as extremely exciting or happy, while softer songs were peaceful or serious. Pitch is another important feature of audio signals. Pitch is determined by the frequency of vibration and the size of the vibrating object. An object with slower or bigger vibration causes a lower pitch; faster or smaller vibration results in a higher pitch. Research has shown that small variation of pitch may relate to less active emotion, while large variation of pitch may relate to intense emotion (Fairbanks & Hoaglin, 1941; Fairbanks & Pronovost, 1939).
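Before turning to tempo, the frame-level volume feature just described can be sketched in a few lines. This C++ fragment is an illustration of one common low-level estimate (per-frame RMS amplitude), not the study's actual implementation.

#include <cmath>
#include <cstddef>
#include <vector>

// Per-frame volume approximated as the RMS amplitude of the samples in the frame.
std::vector<double> frameVolume(const std::vector<double>& samples,
                                std::size_t frameSize) {
    std::vector<double> rms;
    for (std::size_t i = 0; i + frameSize <= samples.size(); i += frameSize) {
        double sum = 0.0;
        for (std::size_t j = i; j < i + frameSize; ++j)
            sum += samples[j] * samples[j];
        rms.push_back(std::sqrt(sum / frameSize)); // larger RMS = louder frame
    }
    return rms;
}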
Tempo indicates how slowly or quickly the piece should be played. Tempo is one of the major musical features that can distinguish happy from sad (Gabrielsson & Lindstrom, 2001; Schellenberg et al., 2000). The specific manipulation of musical tempo affects the perception of joyfulness and sadness (Dalla Bella et al., 2001; Khalfa et al., 2005). Particular tempi are usually related to specific emotions, such as fast tempo with joy and slow tempo with sadness and tenderness (Davitz, 1964; Fonagy & Magdics, 1963). According to the above literature review, this study adopted the most widely used and available musical features – volume, pitch, and tempo – as analysis indicators.

1.4 Summary

Although many studies have declared that music can enhance human performance in some tasks, few studies clearly establish the relationship between music and performance (Mindlab, 2014; Satoh et al., 2014; Gray, 2013). Moreover, little research has discussed music classification based on different degrees of arousal or measured the influence between music and emotion by heart rate variability. For these reasons, this study predicts the user's emotion by detecting HRV and improves the user's performance by playing appropriate music. This study establishes an emotional music database through data mining. Finally, this research develops an intelligent music selection system to improve the user's performance.

2. Method

The aim of this paper is to establish an intelligent music selection system which selects appropriate music by detecting the user's heart rate variability. The methodology of this paper is divided into two parts. Phase I establishes a music database through data mining: 15 pure songs are adopted as training data and divided into four categories; data mining is applied as the research method, and volume, pitch and tempo are analyzed; finally, 300 popular songs are input as testing data to forecast the music category. Phase II is the establishment of the intelligent music selection system. This study integrates a wearable device, a smartwatch, with application software to measure the subject's heart rate variability. Music is selected for the subject according to the heart rate variability. Finally, the heart rate variability is detected and controlled within the range of the health standard.

2.1 Phase I: Music Database Establishment

Data mining is a series of processes for extracting interesting information from a database by processing, transforming, mining and evaluating it. Data mining is a part of knowledge discovery in databases (KDD) (Fayyad et al., 1996). This study employed data mining as the research method, extracting 15 songs from numerous pieces of music and analyzing the information of the music, including volume, pitch, tempo and the impact on human heart rate variability. The process of the Phase I methodology is shown in Figure 4 and introduced in detail below.

Figure 4. The process of the Phase I methodology.

Step 1: Collect Music
The music collected in this study was divided into testing data and training data. 15 pure songs were adopted as training data from Hsu and Chiu (2014), while 300 songs were collected as testing data from KKBOX®, a popular music platform.

Step 2: Determine the Type of Music
This study divided the types of music, based on the circumplex model of affect, into four categories: Misery and Arousal (I), Pleasure and Arousal (II), Misery and Sleepiness (III) and Pleasure and Sleepiness (IV). The determination of the music type is shown in Figure 1.

Step 3: Select Feature
The goal of feature selection in pattern recognition is to select the most influential features from the original feature set to construct a classifier that gives better performance (Thomas, 1994). This study selected the music features which are used in emotion-based music information retrieval (MIR) classification and have a significant impact on human emotions: volume, pitch, and tempo.
Step 4: Analyze Decision Tree
The decision tree algorithm is one of the classification methods of data mining, which can classify data automatically by classification rules. The algorithm utilized in this study is Classification & Regression Trees (C&RT), proposed by Breiman et al. (1984). The process is to construct a very complex tree and then prune it to an optimal tree according to cross detection and the test results. The C&RT algorithm consists of three main steps. Initially, the tree is constructed using recursive binary splitting. Each node is divided into two data subsets. In the process of splitting nodes, the Gini index is adopted to judge the heterogeneity of the nodes. The formula is as follows (Breiman et al., 1984):

Gini(t) = 1 − ∑_{i=1..n} p_i²   (1)

where t represents a given node with n classes and p_i represents the probability of the i-th class at node t.

Gini_split(S) = (N1/N)·gini(S1) + (N2/N)·gini(S2)   (2)

The data subset S is divided into S1 and S2, the sizes of the two subsets are N1 and N2, and N represents the total number of samples. From the above formula it can be observed that the smaller the Gini_split value is, the purer the sample data in the sub-nodes will be; therefore, the probability of classification error will be lower.

The second step is pruning the tree to the appropriate size; the guideline of pruning is to use the calculated error rate or error cost as the basis for judging the decision tree pruning. The criterion of pruning in this process is the resubstitution estimate. R(T) is the resubstitution estimate of the tree: the smaller R(T) is, the larger the number of terminal nodes; in other words, the more cutting points in the tree, the larger the structure of the tree. The calculation formulas are shown in (3)-(7) (Breiman et al., 1984):

R(T) = ∑_{t∈T̃} R(t)   (3)
R(t) = γ(t)·p(t)   (4)
γ(t) = 1 − max_j p(j|t)   (5)
p(j|t) = p(j,t)/p(t)   (6)
p(t) = ∑_j p(j,t)   (7)

T: the maximal tree generated in the previous step; T̃: the set of terminal nodes of T; t: a node in the tree; R(t): resubstitution estimate of node t; γ(t): error rate of node t; p(j|t): the probability that node t is in class j; p(t): the probability of node t over all samples; p(j,t): the probability that a sample falls into node t and belongs to class j.
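The splitting criterion of formulas (1) and (2) is compact enough to state directly in code. The following C++ sketch is illustrative only (the study itself used SPSS® Modeler, and the function names are the author's own):

#include <vector>

// Gini(t) = 1 - sum_i p_i^2, with p the class probabilities at a node.
double gini(const std::vector<double>& p) {
    double s = 0.0;
    for (double pi : p) s += pi * pi;
    return 1.0 - s;
}

// Gini_split(S) = (N1/N)*gini(S1) + (N2/N)*gini(S2); the candidate split with
// the smallest value yields the purest pair of child nodes.
double giniSplit(const std::vector<double>& p1, double n1,
                 const std::vector<double>& p2, double n2) {
    double n = n1 + n2;
    return (n1 / n) * gini(p1) + (n2 / n) * gini(p2);
}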
Step 5: Extract Pattern and Construct Music Database
In step 5, the effective thresholds and scores of the music features, which form the music patterns of positive music and negative music, are proposed. The results are described in Chapter 3.

2.2 Phase II: Intelligent Music Selection System Construction

After establishing the music database, this study developed the intelligent music selection system in the Java language. A Samsung® Galaxy Note 3 Android smartphone and a wearable device, the Samsung® Gear Live smartwatch, were used as experimental equipment. In operation, the system detects the HRV of the user and selects proper music accordingly. The interface of the application software is illustrated in Chapter 3.

3. Case Study

3.1 Phase I: Music Database Establishment

3.1.1 Data Collection
This study adopted the experimental results from Hsu and Chiu (2014), which consist of the valence degree and heart rate variability of 15 songs, as training data. Through the crossing of the two dimensions, the model can be divided into four types: misery and arousal (I), pleasure and arousal (II), misery and sleepiness (III) and pleasure and sleepiness (IV).

3.1.2 Model Establishment
In this study, the music type forecasting model was established from the music features, corresponding to the music type. The type of music was defined by the valence and arousal degree. The music feature information of the training data was utilized to identify the key factors that influence the classification of the type. Then, the music type of the testing data was predicted from the music features of the testing data.

3.1.3 Model Selection
This research selected three candidate models through the auto numeric function of SPSS® Modeler. C&RT decision tree, Regression and General Linear were the candidate models of Classifier 1 (forecasting the valence), while C&RT decision tree, CHAID and Regression were the candidate models of Classifier 2 (forecasting the arousal). The analysis results are shown in the following tables.

Table 1. Auto numeric analysis of valence.
Model           Correlation  Fields  Relative Error
C&RT            1            3       3.677
Regression      1            3       1.072
General Linear  1            3       1.072

Table 2. Auto numeric analysis of heart rate variability.
Model       Correlation  Fields  Relative Error
C&RT        1            3       1.111
CHAID       0.972        1       0.071
Regression  0.221        3       1.283

To select the optimal model, the validity of the model was verified by five-fold cross validation. The training data were divided into a training set (80%) and a validation set (20%) by random sampling. In statistics, the mean absolute error (MAE) is a quantity used to measure how close forecasts or predictions are to the eventual outcomes; the smaller the MAE, the better the prediction. The MAE is an average of the absolute errors e_i, given by:

MAE = (1/n) ∑_{i=1..n} |f_i − y_i| = (1/n) ∑_{i=1..n} |e_i|   (8)

The MAE was calculated five times, and the average MAE was used as the selection criterion. In the cross validation of valence, the average MAE of the C&RT decision tree was 0.8186, which was lower than the average MAE of Regression (1.7736) and General Linear (1.7736). In the cross validation of heart rate variability, the average MAE of the C&RT decision tree was 0.2554, which was lower than the average MAE of CHAID (0.3634) and Regression (0.3210). According to the comparison results of the cross validation, the C&RT decision tree was the optimal model for forecasting valence and heart rate variability. Therefore, this research predicted the valence and heart rate variability by C&RT decision tree and, further, predicted the music type of the testing data. The analysis results are shown in Tables 3 and 4.

3.1.4 Type Forecasting
Through C&RT decision tree analysis, the valence and heart rate variability of 300 popular songs were forecasted. 81 songs were predicted as type I, 156 songs as type II, 22 songs as type III and 41 songs as type IV.

Table 3. The cross validation results of valence.
Model           MAE(1st)  MAE(2nd)  MAE(3rd)  MAE(4th)  MAE(5th)  Average
C&RT            0.672     0.733     0.297     1.353     1.038     0.8186
Regression      0.588     0.752     0.85      2.18      4.498     1.7736
General Linear  0.588     0.752     0.85      2.18      4.498     1.7736

Table 4. The cross validation results of heart rate variability.
Model       MAE(1st)  MAE(2nd)  MAE(3rd)  MAE(4th)  MAE(5th)  Average
C&RT        0.315     0.106     0.368     0.231     0.257     0.2554
CHAID       0.610     0.230     0.455     0.234     0.288     0.3634
Regression  0.161     0.43      0.406     0.221     0.379     0.3210
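For reference, the MAE criterion of formula (8) used to produce these tables is a few lines of code. The sketch below is illustrative only and assumes the forecast and observation vectors have equal, non-zero length.

#include <cmath>
#include <cstddef>
#include <vector>

// MAE = (1/n) * sum_i |f_i - y_i|, with e_i = f_i - y_i the absolute errors.
double mae(const std::vector<double>& f, const std::vector<double>& y) {
    double sum = 0.0;
    for (std::size_t i = 0; i < f.size(); ++i)
        sum += std::fabs(f[i] - y[i]);
    return sum / f.size();
}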
The positive music was employed to construct the music database, which could relieve the user's emotion. Therefore, the positive music (197 songs) was discussed further in our research and divided into four music databases by arousal degree: Extremely High, High, Low, and Extremely Low. The classification results of the positive music are shown in Table 5.

Table 5. The classification results of the positive music.
Type            Arousal degree (SDNN)  Quantity
Extremely High  0.75~1                 152
High            0.5~0.75               4
Low             0.25~0.5               28
Extremely Low   0~0.25                 13

3.2 Phase II: Intelligent Music Selection System Establishment

3.2.1 Application software design
The application software interface design of the system, comprising the Heart and Music pages, is introduced in this section, as illustrated in Figure 5. The application software was developed as a mobile application (APP) on a cell phone. The page includes the four music databases, each containing a different type of music, as shown in Figure 6. If users want to select music by themselves, they can select music directly from a music database. After clicking on a music database, the user enters the "Music Playlist" page. The application software interface is shown in Figure 7. If the user clicks a song, the music plays.

3.2.2 The result of the user experience experiment
The system was evaluated in a small-sample pretest. Two records of heart rate variability from the pretest experiment are shown in Figure 9. As shown in these two records, the SDNN values tended more and more towards the center. The system was thus shown to be able to influence human heart rate variability and help people reach optimal performance.

Figure 5. Software Interface.
Figure 6. Emotion Recognition.
Figure 7. Music Playlist.
Figure 9. Application software interface - Music Playlist.

4. Conclusions

This research constructed an emotional music database through data mining, and then integrated it with wearable devices and application software which can help users select appropriate songs and hence enhance their performance. The present study is the first music selection system which can select music for users based on heart rate variability. Moreover, this study is conducted on the basis of the Yerkes-Dodson law, so the system design is intelligent, innovative and personalized. In practice, it can be applied not only to individuals but also to every family member living in a smart home. Lastly, this system meets the requirements of both controlling a media player through wearable devices and health monitoring. Alternatively, this system could be utilized in music therapy and other music-playing-related industries. In the future, our research hopes to extend to more diverse music, such as building a cloud database for users to download classified music, or establishing the classifiers in the application software, which would then allow users to download their favorite songs to the application software for future analysis. Also, this study will validate the system in various usage scenarios.
For instance, selecting suitable music for users who burn the midnight oil, to boost their morale; or selecting appropriate music for exercisers, to reduce sports injuries and achieve better performance. Finally, this study hopes to remotely monitor users' physiological conditions, especially for the elderly and patients with particular diseases.

References

[1] L. Breiman, J. Friedman, C.J. Stone, R.A. Olshen, Classification and regression trees, CRC Press, Belmont, 1984.
[2] C-Y. Chuang, Music therapy, Psychological, Taiwan, 2004.
[3] S. Dalla Bella, I. Peretz, L. Rousseau, N. Gosselin, A developmental study of the affective value of tempo and mode in music, Cognition 80 (2001), 1–9.
[4] J.R. Davitz, The communication of emotional meaning, McGraw Hill, Oxford, 1964.
[5] H.J. Eysenck, M.W. Eysenck, Personality and individual differences: A natural science approach, Plenum, New York, 1985.
[6] M.A. Friedl, C.E. Brodley, Decision tree classification of land cover from remotely sensed data, Remote Sensing of Environment 61(3) (1997), 399-409.
[7] Z. Fu, G. Lu, K.M. Ting, D. Zhang, A survey of audio-based music classification and annotation, IEEE Transactions on Multimedia 13(2) (2011), 303-319.
[8] G. Fairbanks, W. Pronovost, An experimental study of the pitch characteristics of the voice during the expression of emotion, Communications Monographs 6(1) (1939), 87-104.
[9] G. Fairbanks, L.W. Hoaglin, An experimental study of the durational characteristics of the voice during the expression of emotion, Communications Monographs 8(1) (1941), 85-90.
[10] I. Fonagy, K. Magdics, Emotional patterns in intonation and music, Zeitschrift für Phonetik 16(1-3) (1963), 293-326.
[11] A. Gabrielsson, E. Lindstrom, The influence of musical structure on emotional expression, in: P.N. Juslin, J.A. Sloboda (eds.), Music and Emotion: Theory and Research, Oxford University Press, Oxford, pp. 223–248, 2001.
[12] E. Gray, Study: Listening to music while studying could enhance intelligence (2013). Retrieved April 8, 2014, from http://kdvr.com/2013/09/11/study-listening-to-music-while-studying-could-enhanceintelligence/
[13] Y-W. Hsu, M.C. Chiu, S.L. Hwang, Investigating the Relationship between Therapeutic Music and Emotion: A Pilot Study on Healthcare Services, in J. Cha et al. (eds.) Moving Integrated Product Development to Service Clouds in the Global Economy, Proceedings of the 21st ISPE Inc. International Conference on Concurrent Engineering, IOS Press, Amsterdam, 2014, pp. 688–697.
[14] P.N. Juslin, P. Laukka, Expression, perception, and induction of musical emotions: A review and a questionnaire study of everyday listening, Journal of New Music Research 33(3) (2004), 217-238.
[15] D. Kenny, Treatment Approaches for Music Performance Anxiety: What works?, Music Forum, 2004.
[16] S. Khalfa, D. Schon, J.L. Anton, C. Liégeois-Chauvel, Brain regions involved in the recognition of sadness and happiness in music, Neuroreport 16(18) (2005), 1981–1984.
[17] R.W. Levenson, Blood, sweat, and fears, Annals of the New York Academy of Sciences 1000(1) (2003), 348-366.
[18] Mindlab, Does playing music at work increase productivity? (2014). Retrieved April 8, 2014, from http://themindlab.co.uk/
[19] J.A. Russell, A circumplex model of affect, Journal of Personality and Social Psychology 39(6) (1980), 1161.
[20] M. Satoh, J.I. Ogawa, T. Tokita, N. Nakaguchi, K. Nakao, H. Kida, H. Tomimoto, The effects of physical exercise with music on cognitive function of elderly people: Mihama-Kiho project, PLoS ONE 9(4) (2014), e95230.
[21] G.E. Schellenberg, A.M. Krysciak, J.R. Campbell, Perceiving emotion in melody: interactive effects of pitch and rhythm, Music Perception 18 (2000), 155–171.
[22] J. Serra, E. Gómez, P. Herrera, X. Serra, Chroma binary similarity and local alignment applied to cover song identification, IEEE Transactions on Audio, Speech, and Language Processing 16(6) (2008), 1138-1151.
[23] J. Shen, J. Shepherd, B. Cui, K.L. Tan, A novel framework for efficient automated singer identification in large music databases, ACM Transactions on Information Systems (TOIS) 27(3) (2009), 18.
[24] E.V. Thomas, A primer on multivariate calibration, Analytical Chemistry 66(15) (1994), 795A-804A.
[25] J.C. Tsai, The Cognitive Psychology of Music, National Taiwan University Press, 2013.
[26] K.B. Watson, The nature and measurement of musical meanings, Psychological Monographs: General and Applied 54(2) (1942), i-43.
[27] D. Watson, L.A. Clark, A. Tellegen, Development and validation of brief measures of positive and negative affect: the PANAS scales, Journal of Personality and Social Psychology 54(6) (1988), 1063.
[28] Y.H. Yang, Y.C. Lin, Y.F. Su, H.H. Chen, A regression approach to music emotion recognition, IEEE Transactions on Audio, Speech, and Language Processing 16(2) (2008), 448-457.

Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-595

The Design Process Structural & Logical Representation in the Concurrent Engineering Infocommunication Environment

Denis TSYGANKOV1, Alexander POKHILKO, Andrei SIDORICHEV, Sergey RYABOV, Oleg KOZINTSEV
Ulyanovsk State Technical University, 32 North Venets st., 432027 Ulyanovsk, Russian Federation

Abstract. The design of composite objects is distinguished by the presence of technical requirements for the entire device, without separate requirements for the components. The results of the design activity are the technical documentation and a 3D assembly. Generation of a 3D assembly includes adding the 3D models of the components and introducing the conjugations between them. For corrections and adjustments of the 3D assembly, the parts have to be replaced and their conjugations installed manually, which takes considerable time. The proposed solution is to establish structural and logical relationships between the components of the 3D assembly model. This allows components to bind to structural elements of each other. The interconnections between the components can determine their location and conjugation with each other, and allow the final design solutions to be modified and exported without disrupting their integrity, which is not provided by any of the modern CAD systems.

Keywords. CAx technologies, design activities, automation, design solutions, 3D image, interoperability, design process.

Introduction

Accomplishment of interoperability remains a major challenge. It complicates the complete exchange of project activity results between different CAD systems. However, the currently offered approaches for solving this problem cannot be implemented, for various reasons [1], [2]. The theoretical basis of this research is presented and described below.
Design solution representation based on the technology of structural and logical linking of design procedures, using the free geometric kernel Open CASCADE Technology, is an approach that allows us to solve the problem of interoperability in 3D design systems and to ensure the preservation of the structural and logical integrity of design decisions [3]. The solutions are described as the set of procedures required for their formation [4].

1 Corresponding author, E-mail: furius73@gmail.com

1. Design process procedural representation

The design process can be represented by a sequence of design procedures, each of which has a specific physical meaning. The simplest option – the linear performance of design stages – is shown in Fig. 1 (left).

Figure 1. Design process procedural representations.

The scope statement comes first, as stage Rsin, which is the set of input data and requirements for the future product. Each next step is carried out after successful completion of the previous one; the input data of the i-th step are the output of the (i−1)-th step. Moreover, steps may have different variants of transforming source data into output data; these are called alternative branches. For a linear sequence of steps, the design solution Dsout is formed by the formula:

Dsout = (Dpo1,1) ∪ (Dpo2,1) ∪ … ∪ (Dpon,1)   (1)

Various solutions can be prepared by the sequence of design steps represented by formula (1). However, such a sequence cannot describe a whole class of geometric objects, since it provides no variability in the completion of the design procedures. A sequence which has two variants of the design route at the first step of the project is shown in Figure 1 (center). Here, the values of the input and intermediate design parameters determine the subsequent design stage – Do12 or Do22 – each of which leads to its own design solution Dsiout. For this sequence, the design solution Dsout is expressed by the formula:

Dsout = {[(Dpo1,1) ∪ … ∪ (Dpon,1)] ∩ [(Dpo1,2) ∪ … ∪ (Dpon,2)]}   (2)

Building a 3D model of a cylindrical part can be considered a typical example: it can be obtained with different software as well as in different ways (by drawing a sketch and rotating it, by building a loft, and others). The most general case – the design tree, a sequence of steps each of which has several alternative branches – is presented in Figure 1 (right). In the general case, the set of n possible continuations formed as a result of the i-th step of the project applies at the (i+1)-th stage. As a result, this sequence allows a set of design solutions to be obtained at the output, each of which is unique but takes into account the specifics of the project tree routes (like the design of objects of a single class). In the general case, the design solution Dsout is formed according to the following formula:

Dsout = (Dpo1,0) ∪ {[(Dpo2,1) ∪ (Dpo3,1.1) ∩ … ∩ (Dpon,1.n)] ∩ [(Dpo2,2) ∪ (Dpo3,2.1) ∩ … ∩ (Dpon,2.n)] ∩ … ∩ [(Dpo2,m) ∪ (Dpo3,m.1) ∩ … ∩ (Dpon,m.n)]}   (3)

Since each branch of the project route can be characterized by a certain property that distinguishes one solution from another, the represented sequence may be used in the design of complex technical objects.
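The procedural representation above can be captured in a small data structure. The following C++ sketch is the author's illustration, not part of the cited implementation: a design route is an ordered list of procedures, each tagged with the alternative branch taken, so that different branch choices yield different solutions of the same class.

#include <string>
#include <vector>

struct DesignProcedure {
    int serial;        // dn_i: execution order of the step
    int branch;        // da_i: alternative branch taken (0 = no alternatives)
    std::string name;  // physical meaning of the step
};

using DesignRoute = std::vector<DesignProcedure>;

int main() {
    // One route through the design tree: common steps use branch 0,
    // step 3 picks alternative 2. Each distinct branch vector produces
    // a distinct design solution.
    DesignRoute route = { {1, 0, "enter scope statement"},
                          {2, 0, "derive intermediate parameters"},
                          {3, 2, "build 3D image (variant 2)"} };
    (void)route;
}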
2. Design procedures structural & logical representation

The initial data for the design is the scope statement, which can be described as a set of input design parameters:

Rsin = { (dpN1, dpN2 … dpNn), (dpQ1, dpQ2 … dpQm) }   (4)

where Rsin is the set of input design parameters, and dpNi and dpQi are design parameters taking quantitative and qualitative values, respectively. The scope statement is defined by the user. The parameters dpNi and dpQi can be set by interactive input of arbitrary values as well as by the choice of normalized values. This is because they are limited by various conditions. Thus, dpNi parameters can take a range of values [dpNi.min … dpNi.max], whereas dpQi parameters take only discrete values [dpQ1, dpQ2 … dpQn]. The admissible values of parameters of both types are predefined according to the design algorithm and other standards. The resulting 3D model is generated according to the output design parameters:

Rdout = { (dpA1, dpA2 … dpAn), (dpC1, dpC2 … dpCm) }   (5)

where Rdout is the set of output design parameters, and dpAi and dpCi are design parameters whose values are determined automatically and interactively by the user, respectively. Each of the parameters affects the formed solution, making the design unique over all output parameters. Automatic determination of the values of the output parameters dpAi occurs according to a predetermined algorithm – the design technique – which contains a minimal required set of initial data at its output. It should be noted that the specific value of the i-th parameter affects the decision, as does the system of their values; this factor is used in selecting the optimal structure of the design solution. Some output parameters dpCi may have several alternative values that correspond to the technical task. Therefore, the user must select the preferred option "manually". In general, the output design parameters Rdout are determined by the function:

Rdout = f(Rsin, Rsmd, Comfn)   (6)

where Rsmd is a set of intermediate design parameters and Comfn is the set of design parameters of the n components included in the designed device. The set of parameters Comfn completely determines the three-dimensional images of all the components of the composite device. In the design, any device is represented as a system of interconnected components; therefore it can be formally represented in the following form:

RtDForm = { Comf1, Intf1, Comf2, Intf2 … Comfn, Intfn }   (7)

where RtDForm is the formal representation of the composite device and Intfi is the set of interconnections of the design parameters of the i-th component. The set of design parameters of a component (Comfi) is similar to formula (4):

Comf1 = { (dpcN1, dpcN2 … dpcNn), (dpcQ1, dpcQ2 … dpcQm) }   (8)

where dpcNi and dpcQi are design parameters taking numerical and qualitative values. The set Intfi includes three subsets: interconnections with the input and output design parameters, f(Rsin) and f(Rdout), and interconnections with the parameters of other components, f(Comfi):

Intfi = { f(Rsin), f(Rdout), f(Comf1 … Comfn) }   (9)

Interconnections with the input parameters f(Rsin) determine each of the components and arrange them into a system in accordance with the terms of reference. Interconnections with the output parameters f(Rdout) provide qualitative and quantitative changes in the system when the terms of reference change. Interconnections with the parameters of other components f(Comfi) allow the components to be matched, ensuring the integrity of the design solution. In general, the design process is a set of design procedures that display it as a sequence of steps with physical meaning. When designing, there are two kinds of design procedures: the first is the design procedure for constructing the 3D image of the i-th component, dP3Di, and the second is the design procedure for establishing interrelations between the i-th and the n-th components, dPInti,n. In this case, the design process DesPForm can be formally represented in the following form:

DesPForm = { (dn1, da1, dP3D1) … (dnn, dan, dP3Dn), (dPInt1,2 … dPInt(n−1),n) }   (10)

where dni is the serial number of execution of the i-th project procedure (dP3Di) and dai is the number of the alternative branch to which the current design process belongs. The alternative branch number determines the project route in accordance with the specified design parameters, making the solution unique.

3. Approbation

A waveguide horn antenna is considered as an example. It includes components such as the open end of the waveguide, sectorial E- and H-horns, and pyramidal and wedge-shaped horns (Fig. 2).

Figure 2. Horn waveguide antennas.

Initially, the design parameters common to all components of this device class must be selected. In the case of waveguide horn antennas, these are the height and width of the waveguide section a×b, the flange type Tf, the waveguide section length Lw, the wall thickness of the waveguide tw, the material m and the type of deposition sm. Then, the parameters peculiar to some objects are selected. In the case of the sectorial and tapered horns, these are the width of the horn in the E- and H-planes, rE and rH, and the length of the horn in the corresponding plane, lE and lH.
Interconnections with the parameters of other components f(Comfi) allow match the components, ensuring the integrity of design solutions. In general, the design process is a set of design procedures that display it as a sequence of steps with physical meaning. When designing, there are two design procedures: first one – design procedure for constructing 3D-image of the i-th component dP3Di, and the second one – the design procedure for establishing interrelations between the i-th and the n-th components of dPInti,n. In this case, the design process DesPForm can be formally represented in the following form: P Int (10) Des Form { ( dn1 , da1 , dP13 D ) ... ( dnn , dan , dPn3 D ), ( dP1Int , 2 ... dP( n1 ), n ) } , dni – the serial number of execution of the i-th project procedure (dP3Di), dai – branch number of alternative, which owns the current design process. Number of branches alter the native project determines the route in accordance with the specified design parameters, making a unique solution. 3. Approbation Waveguide horn antenna is considered for example. It includes components such as the open end of the waveguide sectorial E- and H-horns, pyramidal and wedge-shaped horns (Fig. 2). Figure 2. Horn waveguide antennas. Initially, design parameters common for all components of this devices class must be selected. In the case of waveguide horn antennas, there are the height and width of the waveguide section a×b, flange type Tf, the waveguide section length Lw, the wall thickness of the waveguide tw, the material m and the type of deposition sm. Then, the parameters peculiar for some objects are selected. In this case of sectorial and tapered horn there are width of the horn in the E- and H-plane rE, rH, and the length of 599 D. Tsygankov et al. / The Design Process Structural & Logical Representation the horn in a plane lE, lH. There are same parameters for pyramidal horn, but in this case instead the length of the horn in a particular plane has a length in both planes l. When all parameters are set, their values are checked. The fact that some parameters may take the whole interval of values, and the other – on only discrete values. Since all input parameters specified by the user, they are limited to certain conditions. In case set values fall in the range, arbitrary values entered by the user interactively [Pmin ... Pmax] (the difference between the two closest values depend on the entered sampling step Δt); and discrete values selected by the user interactively from a predefined “normalized” values, strict adherence between selected option and the offered one is required condition in this case. Normalized values are determined by reference and scientific and technical literature, which is applicable to the planned technical objects. Horn antennas are part of the microwave waveguide path, and hence the value of the normalized design parameters are determined in accordance with Russian national standards – GOST 20900-2015 and GOST 13317-89. Table 1 shows the design parameters of the waveguide horn antennas, their lettering and type of input values. Table 1. Design parameters of horn waveguide antennas class. 
Table 1 shows the design parameters of the horn waveguide antenna class, their symbols, and the type of input values.

Table 1. Design parameters of the horn waveguide antenna class.

Design parameter                  Symbol   Input type
The waveguide height and width    a×b      Selection of normalized values
Flange type                       Tf       Selection of normalized values
Material                          m        Selection of normalized values
Type of deposition                sm       Selection of normalized values
The waveguide wall thickness      tw       Selection of normalized values
The waveguide length              Lw       Interactive input of values
The E-plane horn width            rE       Interactive input of values
The H-plane horn width            rH       Interactive input of values
The E-plane horn length           lE       Interactive input of values
The H-plane horn length           lH       Interactive input of values
The both-planes horn length       l        Interactive input of values

After the input type of each parameter's values is specified, the formal representation of the design process for the device class is formed. It is a series of design procedures that make up the design process. The formal representation of the horn waveguide antenna class has the form:

FrMd_Horn^Ant = { (di_1, dv_1, Dpc_1^a×b), (di_2, dv_2, Dpc_2^Tf), (di_3, dv_3, Dpc_3^m), (di_4, dv_4, Dpc_4^sm), (di_5, dv_5, Dpc_5^tw), (di_6, dv_6, Dpe_6^Lw), (di_7, dv_7, Dpe_7^rE), (di_8, dv_8, Dpe_8^rH), (di_9, dv_9, Dpe_9^lE), (di_10, dv_10, Dpe_10^lH), (di_11, dv_11, Dpe_11^l), (di_12, dv_12, Dpb_12^Oew), (di_13, dv_13, Dpb_13^Hes), (di_14, dv_14, Dpb_14^Hhs), (di_15, dv_15, Dpb_15^Hp), (di_16, dv_16, Dpb_16^Hw) },    (11)

FrMd_Horn^Ant – the set of design procedures required to design the components of the horn waveguide antenna; di_i – the serial number of execution of the i-th design procedure, i = 1...16; dv_i – the number of the alternative branch that contains the i-th design procedure; Dpc_i – design procedures for selecting normalized values of the input parameters a×b, Tf, m, sm, tw, for i = 1...5; Dpe_i – design procedures for interactively entering the values of the input parameters Lw, rE, rH, lE, lH, l, for i = 6...11; Dpb_i – design procedures for creating the 3D components of the waveguide antenna: the open end of the waveguide (i = 12), the sectorial E- and H-horns (i = 13, 14), and the pyramidal and wedge-shaped horns (i = 15, 16). The alternative branch number dv_i is unique for each component of the class; it defines the design route. If a design procedure is common to all components, dv_i = 0.

The design procedure Dpc_i for selecting normalized values of the input design parameters is a function whose output contains only discrete values of the design parameters. For example, when the wavelength λ is entered interactively, the procedure selects the corresponding height and width of the waveguide section a×b, as shown in Fig. 3. The number of possible discrete values of a×b, each corresponding to a design route at the procedure output, is limited.

Figure 3. The structure of the design procedure for selecting normalized values of the input design parameters.

The design procedure Dpe_i for interactively entering parameter values is a function that yields a set of parameter values at its output; their number is controlled by the sampling step Δt – a quantitative measure that distinguishes the two nearest values. For example, when setting the waveguide length Lw, the user can enter values from 0 (equivalent to the absence of the waveguide) to Lw.max. In this case, all the values obtained at the output of the procedure belong to a single design route, but each makes the final design solution unique. Fig. 4 shows design solutions associated with one design route that differ only in the value of one design parameter – the width of the horn aperture in the H-plane, rH.

Figure 4. Design solutions corresponding to one branch of the design route.
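The formal representation (11) lends itself to a table-driven encoding: each triple (di, dv, Dp) becomes a descriptor, and a design route executes the shared rows (dv = 0) plus the rows of its own branch. The sketch below makes that assumption; the branch numbering and the ProcKind/ProcDescriptor names are illustrative, not taken from the paper.

    #include <vector>

    enum class ProcKind { SelectNormalized /* Dpc */, EnterInteractive /* Dpe */, Build3D /* Dpb */ };

    // One triple (di, dv, Dp) of formula (11): execution order, alternative
    // branch (0 = common to every design route), kind, and target.
    struct ProcDescriptor {
        int serial;          // di
        int branch;          // dv
        ProcKind kind;
        const char* target;  // parameter entered/selected or component built
    };

    // The horn antenna class of formula (11), abridged; branch numbers assumed.
    const std::vector<ProcDescriptor> frMdHorn = {
        {1,  0, ProcKind::SelectNormalized, "a x b"},
        {2,  0, ProcKind::SelectNormalized, "Tf"},
        // ... Dpc for m, sm, tw and Dpe for Lw, rE, rH, lE, lH, l ...
        {12, 1, ProcKind::Build3D, "open end of the waveguide"},
        {13, 2, ProcKind::Build3D, "sectorial E-horn"},
        {14, 3, ProcKind::Build3D, "sectorial H-horn"},
        {15, 4, ProcKind::Build3D, "pyramidal horn"},
        {16, 5, ProcKind::Build3D, "wedge-shaped horn"},
    };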
The design procedure Dpb_i for constructing the 3D image is the main procedure of the program implementation. It reflects the operating principle of the software tools in general. The IDEF0 model of the design procedure for constructing the 3D image is shown in Fig. 5.

Figure 5. Design procedure for creating the 3D model.

This is a complex procedure: it is an ordered sequence of design procedures. These procedures are functions of the input (or intermediate) values of the design parameters. In addition, each procedure has a serial number and an alternative branch number.

The design procedures included in the procedure for constructing 3D images are specified on the basis of a structural analysis of all the components that make up the class; the aim of this analysis is to identify the common and unique components that have physical meaning. The horn antenna includes five components. The common parts are the flange and the waveguide segment; the unique parts are the sectorial horn, the pyramidal horn, and the wedge-shaped horn. The separation of the horns is explained by the different sets of input parameters and design operations required for their construction. The procedural model of the designed devices is formed after these components have been specified.

4. Design process representation based on Open CASCADE Technology

The design procedure for creating the 3D components of horn antennas has the following form:

PrcMd_Horn^Ant = { (di_1, dv_1, Dpf^Fl), (di_2, dv_2, Dpf^Wg), (di_3, dv_3, Dpf^Sh), (di_4, dv_4, Dpf^Ph), (di_5, dv_5, Dpf^Wh) },    (12)

PrcMd_Horn^Ant – the set of design procedures involved in the construction of 3D models of the waveguide antenna components; Dpf^Fl – the design procedure for constructing the flange; Dpf^Wg – for constructing the waveguide; Dpf^Sh – for constructing a sectorial horn; Dpf^Ph – for constructing a pyramidal horn; Dpf^Wh – for constructing a wedge-shaped horn.

A separate function is formed for each of the component parts; its argument is the defined set of design parameters that determines the component's model. The function building the 3D model of a waveguide flange is presented below as an example.

    TopoDS_Shape CSAPR_WG_Antenna::Build_Flange(float a, float b)
    {
        Set_Size(a, b);  // determine local parameters (h, r, A, B, d used below)
        TopoDS_Shape Base = B_base(a, b, h, r);        // build the flange base
        TopoDS_Shape WaveGuide = B_wg(a, b, Base);     // build the waveguide
        TopoDS_Shape Flange = B_Holes(A, B, a, b, d, h, WaveGuide);  // build the holes
        return Flange;
    }

Listing 1. Software implementation of the design procedure for constructing the flange.

The procedures B_base, B_wg, and B_Holes are composed of operations included in the kernel of the Open CASCADE libraries. Each function builds a 3D model completely determined by its input parameters; the created models can also be combined and subtracted – all these operations are provided by the Open CASCADE kernel libraries. Thus, the initial design parameters are distributed over the procedures that require them for execution; a design parameter may be initial for all procedures or for a single procedure.

The structure of the design solution formation is based on the procedural model. Depending on the selected component, the design solution DSol_Horn^3D is formed by the following formula:

DSol_Horn^3D = { (1,0, Dpf^Fl) ∪ (2,0, Dpf^Wg) ∪ [ (3,1, Dpf^Sh) ∩ (3,2, Dpf^Sh) ∩ (3,3, Dpf^Ph) ∩ (3,4, Dpf^Wh) ] },    (13)

As a result, five different types of design solutions can be obtained at the output; they correspond to the components of the class shown in Fig. 2.
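Formula (13) could be realized by always executing the common procedures and fusing in the shape produced by the selected alternative branch. The following sketch uses Open CASCADE's Boolean fuse operation; Build_Flange corresponds to Listing 1, while the other Build_* helpers (shown here as free-function stand-ins) and their parameter lists are assumed by analogy.

    #include <TopoDS_Shape.hxx>
    #include <BRepAlgoAPI_Fuse.hxx>

    // Hypothetical component builders, analogous to Build_Flange in Listing 1.
    TopoDS_Shape Build_Flange(float a, float b);
    TopoDS_Shape Build_Waveguide(float a, float b, float Lw);
    TopoDS_Shape Build_SectorialHorn(float a, float b, float r, float l);
    TopoDS_Shape Build_PyramidalHorn(float a, float b, float l);
    TopoDS_Shape Build_WedgeHorn(float a, float b, float r, float l);

    enum class HornRoute { SectorialE, SectorialH, Pyramidal, Wedge };

    // Assemble one design solution per formula (13): the common procedures
    // (1,0) and (2,0) always run; one alternative branch contributes the horn.
    TopoDS_Shape BuildSolution(HornRoute route, float a, float b, float Lw,
                               float rE, float rH, float lE, float lH, float l) {
        TopoDS_Shape solution = BRepAlgoAPI_Fuse(Build_Flange(a, b),
                                                 Build_Waveguide(a, b, Lw)).Shape();
        TopoDS_Shape horn;
        switch (route) {
            case HornRoute::SectorialE: horn = Build_SectorialHorn(a, b, rE, lE); break;
            case HornRoute::SectorialH: horn = Build_SectorialHorn(a, b, rH, lH); break;
            case HornRoute::Pyramidal:  horn = Build_PyramidalHorn(a, b, l);      break;
            case HornRoute::Wedge:      horn = Build_WedgeHorn(a, b, rH, lH);     break;
        }
        return BRepAlgoAPI_Fuse(solution, horn).Shape();
    }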
Thus, in some cases solutions may be identical for different design routes. For example, a wedge-shaped horn with the condition lE = lH and a pyramidal horn with the condition l = lE = lH coincide. Although the 3D models are the same, the solutions belong to different branches of the design routes. The output values of the design parameters (such as beamwidth, gain, etc.) are also the same, confirming the equivalence of these solutions.

5. Conclusions

A program shell was developed on the basis of the formal representation. It displays the components of the class in a window; after a component is selected, a panel is shown in which the values of the design parameters are set, and the design solution is formed on their basis. The software implementation of the formed procedural model consists of the code of the design procedures and of the Open CASCADE Technology kernel operations entering into their structure, arranged in the order determined by the formed design decision. The structural-logical binding of design procedures performed during software implementation makes it possible to design a whole class of devices while remaining within the strict framework of rules and algorithms defined by standards and specifications. Consequently, it suffices to define the source data – the terms of reference – to obtain an output solution that satisfies them. The resulting models can be stored in the ISO 10303 STEP format, which enables them to be opened, processed, and preserved in any modern CAD system.
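As an illustration of the last point, exporting a finished TopoDS_Shape with Open CASCADE's STEP writer might look roughly as follows; the function name and file path are placeholders, and error handling is reduced to checking the return status.

    #include <TopoDS_Shape.hxx>
    #include <STEPControl_Writer.hxx>
    #include <IFSelect_ReturnStatus.hxx>

    // Save a generated model (e.g., a horn antenna solution) as an ISO 10303
    // STEP file so it can be opened and processed in any modern CAD system.
    bool ExportToStep(const TopoDS_Shape& shape, const char* path) {
        STEPControl_Writer writer;
        if (writer.Transfer(shape, STEPControl_AsIs) != IFSelect_RetDone)
            return false;  // shape could not be translated
        return writer.Write(path) == IFSelect_RetDone;
    }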
Transdisciplinary Lifecycle Analysis of Systems
R. Curran et al. (Eds.)
© 2015 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
doi:10.3233/978-1-61499-544-9-603

Search Engine Optimization Process: A Concurrent Intelligent Computing Approach

Sylvain SAGOT a,1, Alain-Jérôme FOUGÈRES a,b and Egon OSTROSI a
a UTBM, University of Technology of Belfort-Montbéliard, IRTES-M3M, 90010 Belfort, France
b ESTA, School of Business & Engineering, 90010 Belfort, France
1 Corresponding author, E-mail: sylvain.sagot@utbm.fr

Abstract. Changing customer behaviors, growing competition, and the increasing number of websites have forced companies to improve their visibility on the Internet. Search engines are widely used by customers, and companies have to be present in search engine results pages to be able to reach their clients. Different techniques have been developed to optimize a website's ranking on search engines, such as Search Engine Optimization (SEO). Search engine optimization is the process of improving a website's position in Internet search engine results. However, search engines protect their ranking models, and results are slow and difficult to obtain. Thus, the non-transparency of ranking models, the large number of interactions, and the uncertainty of results make the SEO process a complex problem. To enable its adaptability and sustainability in a dynamic and uncertain environment, the SEO process requires the elaboration of holistic and concurrent engineering approaches. In this paper, we use the multi-agent paradigm, which is appropriate for solving concurrent and distributed problems. By decomposing the SEO process into sub-entities represented by communities of autonomous agents – requirements, functions, constraints, and solutions – we are able to analyze the interactions and actions that take place in this process. A multi-agent based simulation using data from pharmacy websites was developed to test our approach.

Keywords. Search engine optimization, multi-agent system, concurrent engineering, engineering meta-model

Introduction

The use of the Internet has been significantly impacted by the development of search engines in the mid-1990s [1], [2]. Nowadays, Internet users widely rely on search engines to find relevant information, and the constant increase in website development intensifies the competition. That is why companies' websites have to be present on search engine results pages if they hope to reach their clients. Different techniques have been developed to optimize a website's ranking on search engines, such as Search Engine Optimization (SEO) [3]. Search engine optimization is the process of improving a website's position in Internet search engine results. By using several techniques, SEO practitioners can improve a website's ranking in order to attract qualified traffic [4], [5].

The purpose of search engines is to bring users the most relevant information according to their search terms; this is why search engine algorithms are constantly changing. They have to be adapted to technological evolutions and users' behaviors. Although search engine algorithms are not made public, search engines offer some advice to webmasters on how to improve their website's ranking [6]. The problem for SEO practitioners is that results are uncertain and slow to obtain, and that the SEO process is not clearly defined. Thus, the non-transparency of ranking models, the large number of interactions, and the uncertainty of results make the SEO process a complex problem.

Concurrent engineering approaches make it possible to improve processes in multiple disciplines, especially the product development process. The design of complex processes such as the SEO process could also be improved by using concurrent engineering approaches.
Like the product development process, the SEO process begins with a client's requirement that the SEO practitioner must be able to understand in order to offer relevant solutions. These solutions have to be in harmony with the client's needs in order to reduce process delays and improve results. SEO can therefore be considered a concurrent and dynamic process. The SEO process involves the cooperation of many distributed entities: requirements, functions, solutions, and constraints. This distribution and cooperation make the SEO process suitable for modelling as a multi-agent system.

Thus, the goal of this paper is to propose a concurrent intelligent computing approach for an intelligent SEO process. A model of a multi-agent system for the intelligent SEO process and its implementation are proposed. In the proposed model, agents are organized into communities. Four communities are proposed: a community of requirement agents, a community of function agents, a community of solution agents, and a community of constraint agents. The ranking emerges from concurrent intra-community and inter-community interactions.

The paper is structured as follows. Section 2 proposes a meta-model for the SEO process. Section 3 proposes the abstract formulation of the multi-agent system: the model of the multi-agent system