Introduction to Software Testing
This extensively classroom-tested text takes an innovative approach to
explaining software testing that defines it as the process of applying a few
precise, general-purpose criteria to a structure or model of the software. The text incorporates
cutting-edge developments, including techniques to
test modern types of software such as OO, web applications, and embedded
software. This revised second edition significantly expands coverage
of the basics, thoroughly discussing test automation frameworks, and adds
new, improved examples and numerous exercises. Key features include:
- The theory of coverage criteria is carefully, cleanly explained to help
  students understand concepts before delving into practical applications.
- Extensive use of the JUnit test framework gives students practical
  experience in a test framework popular in industry.
- Exercises feature specifically tailored tools that allow students to
  check their own work.
- Instructor's manual, PowerPoint slides, testing tools for students, and
  example software programs in Java are available from the book's website.
Paul Ammann is Associate Professor of Software Engineering at George
Mason University. He earned the Volgenau School's Outstanding Teaching
Award in 2007. He led the development of the Applied Computer
Science degree, and has served as Director of the MS Software Engineering
program. He has taught courses in software testing, applied
object-oriented theory, formal methods for software engineering, web
software, and distributed software engineering. Ammann has published
more than eighty papers in software engineering, with an emphasis
on software testing, security, dependability, and software engineering
education.
Jeff Offutt is Professor of Software Engineering at George Mason University.
He leads the MS in Software Engineering program, teaches
software engineering courses at all levels, and developed new courses
on several software engineering subjects. He was awarded the George
Mason University Teaching Excellence Award, Teaching with Technology,
in 2013. Offutt has published more than 165 papers in areas such
as model-based testing, criteria-based testing, test automation, empirical
software engineering, and software maintenance. He is Editor-in-Chief
of the Journal of Software Testing, Verification and Reliability; helped
found the IEEE International Conference on Software Testing; and is
the founder of the μJava project.
INTRODUCTION TO
SOFTWARE
TESTING
Paul Ammann
George Mason University
Jeff Offutt
George Mason University
University Printing House, Cambridge CB2 8BS, United Kingdom
One Liberty Plaza, 20th Floor, New York, NY 10006, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
4843/24, 2nd Floor, Ansari Road, Daryaganj, Delhi – 110002, India
79 Anson Road, #06-04/06, Singapore 079906
Cambridge University Press is part of the University of Cambridge.
It furthers the University’s mission by disseminating knowledge in the pursuit of
education, learning, and research at the highest international levels of excellence.
www.cambridge.org
Information on this title: www.cambridge.org/9781107172012
DOI: 10.1017/9781316771273
© Paul Ammann and Jeff Offutt 2017
This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements, no reproduction of any part
may take place without the written
permission of Cambridge University Press.
First published 2017
Printed in the United States of America by Sheridan Books, Inc.
A catalogue record for this publication is available from the British Library.
Library of Congress Cataloguing in Publication Data
Names: Ammann, Paul, 1961– author. | Offutt, Jeff, 1961– author.
Title: Introduction to software testing / Paul Ammann, George Mason University, Jeff Offutt, George Mason University.
Description: Edition 2. | Cambridge, United Kingdom; New York, NY, USA: Cambridge University Press, [2016]
Identifiers: LCCN 2016032808 | ISBN 9781107172012 (hardback)
Subjects: LCSH: Computer software–Testing.
Classification: LCC QA76.76.T48 A56 2016 | DDC 005.3028/7–dc23
LC record available at https://lccn.loc.gov/2016032808
ISBN 978-1-107-17201-2 Hardback
Additional resources for this publication at https://cs.gmu.edu/~offutt/softwaretest/.
Cambridge University Press has no responsibility for the persistence or accuracy of
URLs for external or third-party Internet Web sites referred to in this publication
and does not guarantee that any content on such Web sites is, or will remain,
accurate or appropriate.
Contents
List of Figures page ix
List of Tables xii
Preface to the Second Edition xiv
Part 1 Foundations 1
1 Why Do We Test Software? 3
1.1 When Software Goes Bad 4
1.2 Goals of Testing Software 8
1.3 Bibliographic Notes 17
2 Model-Driven Test Design 19
2.1 Software Testing Foundations 20
2.2 Software Testing Activities 21
2.3 Testing Levels Based on Software Activity 22
2.4 Coverage Criteria 25
2.5 Model-Driven Test Design 27
2.5.1 Test Design 28
2.5.2 Test Automation 29
2.5.3 Test Execution 29
2.5.4 Test Evaluation 29
2.5.5 Test Personnel and Abstraction 29
2.6 Why MDTD Matters 31
2.7 Bibliographic Notes 33
3 Test Automation 35
3.1 Software Testability 36
3.2 Components of a Test Case 36
3.3 A Test Automation Framework 39
3.3.1 The JUnit Test Framework 40
3.3.2 Data-Driven Tests 44
3.3.3 Adding Parameters to Unit Tests 47
3.3.4 JUnit from the Command Line 50
3.4 Beyond Test Automation 50
3.5 Bibliographic Notes 53
4 Putting Testing First 54
4.1 Taming the Cost-of-Change Curve 54
4.1.1 Is the Curve Really Tamed? 56
4.2 The Test Harness as Guardian 57
4.2.1 Continuous Integration 58
4.2.2 System Tests in Agile Methods 59
4.2.3 Adding Tests to Legacy Systems 60
4.2.4 Weaknesses in Agile Methods for Testing 61
4.3 Bibliographic Notes 62
5 Criteria-Based Test Design 64
5.1 Coverage Criteria Defined 64
5.2 Infeasibility and Subsumption 68
5.3 Advantages of Using Coverage Criteria 68
5.4 Next Up 70
5.5 Bibliographic Notes 70
Part 2 Coverage Criteria 73
6 Input Space Partitioning 75
6.1 Input Domain Modeling 77
6.1.1 Interface-Based Input Domain Modeling 79
6.1.2 Functionality-Based Input Domain Modeling 79
6.1.3 Designing Characteristics 80
6.1.4 Choosing Blocks and Values 81
6.1.5 Checking the Input Domain Model 84
6.2 Combination Strategies Criteria