CH - 08
Software Testing
Topics
◦ Software Testing
◦ Validation and Verification
◦ Testing for development
◦ Test-driven development
◦ Types of software testing
Software testing
Software is tested before it is used to show that it behaves as intended
and to discover defects. When you test software, you execute a program
using artificial data.
You check the results of the test run for errors, anomalies, or
information about the program's shortcomings.
◦ Testing can show the presence of errors, NOT their absence.
◦ Testing is part of a broader verification and validation process, which
  also includes static validation techniques.
 Aims of software testing
◦ Demonstrate to the developer and the customer that the software meets
  its requirements.
◦ For bespoke software, this means there should be at least one test for
  every requirement in the requirements document. For generic software
  products, it means there should be tests for all of the system features,
  plus combinations of these features, that will be included in the
  product release.
◦ Discover situations in which the behavior of the software is incorrect,
  undesirable, or does not conform to its specification.
◦ Defect testing aims to find and root out undesirable system behaviors
  such as system crashes, data corruption, incorrect computations, and
  unwanted interactions with other systems.
 Validation and defect testing
◦ The first goal is validation testing. You expect the system to perform
  correctly, using a set of test cases that reflect the system's expected
  use.
◦ The second goal is defect testing. The test cases are designed to expose
  defects. Test cases for defect testing can be deliberately obscure and
  need not reflect how the system is normally used.
 Test process objectives
◦ Validation testing: demonstrating to the developer and the system
  customer that the software meets its requirements. A successful test
  shows that the system operates as intended.
◦ Defect testing: discovering faults or defects in the software whose
  behavior is incorrect or not in conformance with its specification. A
  test is successful if it makes the system perform incorrectly and so
  exposes a defect in the system.
An input-output program testing model
 Validation VS Verification
◦ Verification: "Are we building the product right?"
 ◦ The software should conform to its specification.
◦ Validation: "Are we building the right product?"
 ◦ The software should do what the user really requires.
  V&V confidence
The goal of V&V is to create trust that the system is "fit for purpose."
Based on the system's goal, the user's expectations, and the marketing
environment:-
◦ Software objectives: Depending on how important the program is to an
  organization, the level of confidence varies.
◦ Expectations of users: Users' expectations for some software types may be low.
◦ Environment for marketing: Finding program flaws may not be as crucial as getting
  a product to market quickly.
  Testing and inspections
Software inspections analyze the static system representation to discover
problems (static verification). Tool-based document and code analysis
may also be used.
Software testing (dynamic verification): exercising and observing product
behavior. The system is executed with test data and its operational
behavior is observed.
Relation of Testing and inspections
  Software inspections
These involve examining the source representation closely in order to
discover anomalies and defects.
Inspections do not require execution of a system, so they may be used
before implementation.
They may be applied to any representation of the system, such as
requirements, designs, configuration data, test data, etc.
They have been shown to be an effective technique for discovering
program errors.
 The benefits of inspections
◦ During testing, errors can mask (hide) other errors. Because inspection
  is a static process, you don't have to be concerned with interactions
  between errors.
◦ Incomplete versions of a system can be inspected without additional
  costs. If a program is incomplete, you need to develop specialized test
  harnesses to test the parts that are available.
◦ As well as searching for program defects, an inspection can also
  consider broader quality attributes of a program, such as compliance
  with standards, portability, and maintainability.
  Testing and inspections
Inspections and testing are complementary, not opposing, verification
techniques.
Both should be used during the V & V process.
Inspections can check conformance with a specification, but not
conformance with the customer's real requirements.
Inspections cannot check non-functional characteristics such as
performance, usability, etc.
 Phases of testing
◦ Development testing is the process of testing a system as it is
  being created in order to identify issues and defects.
◦ Release testing refers to the process of having a different
  testing team test a complete system before it is made
  accessible to users.
◦ During user testing, actual or future users of a system test the
  system in their own settings.
Process flow diagram for software testing
  Testing for development
The term "development testing" refers to all testing activities carried out by the team
building the system.
 ◦ Unit testing: individual program units or object classes are tested. Unit
   testing should focus on testing the functionality of objects or methods.
 ◦ Component testing: wherein various independent pieces are merged to create
   composite components. Component interface testing should be the main
   emphasis of component testing.
 ◦ System testing: A system's components may need to be integrated in some cases,
   and the entire system may need to be tested. System testing should place a strong
   emphasis on component interactions.
  Unit testing
Individual components are tested separately during a unit test. It involves testing for
defects.
◦ Individual functions or methods contained within an object.
◦ Object classes with several attributes and methods. Complete test coverage of
  a class entails
  ◦ testing all operations associated with an object
  ◦ setting and interrogating all object attributes
  ◦ exercising the object in all possible states.
Because the information to be tested is not localized, inheritance makes it more
difficult to design object class tests.
◦ Composite components with clearly defined interfaces provide access to their
  functionality.
Example of Weather station unit testing
  Unit testing -> Test
  automation
Unit testing should, whenever possible, be automated to allow for quick
execution and verification.
When performing automated unit testing, you write and execute your
program tests using a test automation framework (like JUnit).
Unit testing frameworks provide generic test classes that you extend to
create specific test cases. They can then run all of the tests that you
have implemented and report, often through a GUI, on the success or
failure of the tests.
  Unit testing -> Test automation
  components
A setup phase in which the system is initialized with the test case's inputs
and anticipated outputs.
A call section where the object or method under test is called.
An assertion part where the result of the call is compared with the
expected result. If the assertion evaluates to true, the test has passed;
if false, it has failed.
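The setup/call/assertion structure described above can be sketched with Python's `unittest` framework, which plays the same role as JUnit. The function under test, `celsius_to_fahrenheit`, is a hypothetical example, not part of the chapter:

```python
import unittest


def celsius_to_fahrenheit(celsius):
    """Hypothetical unit under test: converts Celsius to Fahrenheit."""
    return celsius * 9 / 5 + 32


class TestCelsiusToFahrenheit(unittest.TestCase):
    def test_freezing_point(self):
        # Setup: initialize the test case's input and expected output.
        given = 0
        expected = 32
        # Call: invoke the method under test.
        result = celsius_to_fahrenheit(given)
        # Assertion: compare the call's outcome with what is expected.
        self.assertEqual(result, expected)
```

Running `python -m unittest` in the containing directory would discover and execute this test and report whether it passed.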
  Unit testing -> Unit test
  case selection
The test cases should demonstrate that the component you are testing
performs as intended when utilized as expected.
Test cases should show any flaws in the component if there are any.
This results in two categories of unit test cases:
The first of these should reflect normal operation of the program and
should show that the component works as expected.
The other type of test case should be based on testing experience of
where common problems arise. It should use abnormal inputs to check that
these are properly processed and do not crash the component.
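A minimal sketch of the two categories of unit test case, using a hypothetical `safe_divide` component (not from the chapter):

```python
def safe_divide(a, b):
    """Hypothetical component: returns a / b, or None for a zero divisor."""
    if b == 0:
        return None
    return a / b


# Category 1: normal operation - the component performs as intended.
assert safe_divide(10, 2) == 5.0

# Category 2: abnormal input - a known problem area (division by zero)
# is processed properly instead of crashing the component.
assert safe_divide(10, 0) is None
```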
 Unit testing -> Testing
 approaches
◦ Using partition testing, you can find input groups that share traits and
  ought to be handled similarly. Within each of these groupings, you
  should select tests.
◦ Using testing guidelines to select test cases is known as "guideline-
  based testing." These recommendations are based on prior knowledge
  of the kind of mistakes that programmers frequently make when
  creating components.
 Unit testing -> Partition
 testing
The input data and output results of a program often fall into
different classes where all of the members of a class are related.
Each of these classes is an equivalence partition or domain where
the program behaves in an equivalent way for each class member.
Test cases should be chosen from within each partition.
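A small illustration of equivalence partitioning, assuming a hypothetical `classify_exam_score` program that treats scores 50-100, scores 0-49, and out-of-range values as three partitions:

```python
def classify_exam_score(score):
    """Hypothetical program: maps a 0-100 score onto a grade band."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"


# One test case is chosen from each equivalence partition, since the
# program treats every member of a partition in the same way.
assert classify_exam_score(75) == "pass"   # partition: 50..100
assert classify_exam_score(30) == "fail"   # partition: 0..49
try:
    classify_exam_score(-5)                # partition: invalid inputs
except ValueError:
    pass
```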
  Unit testing -> Testing
  guidelines
Test software with sequences that have only a single value. Use sequences of
different sizes in different tests.
Derive tests so that the first, middle, and last elements of the sequence are
accessed.
General testing recommendations:-
 ◦ Choose inputs that force the system to generate all error messages.
 ◦ Design inputs that cause input buffers to overflow.
 ◦ Repeat the same input or series of inputs numerous times.
 ◦ Force invalid outputs to be generated.
 ◦ Force computation results to be too large or too small.
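The sequence guidelines above might look like this in practice; `sequence_summary` is a hypothetical function under test, invented for illustration:

```python
def sequence_summary(seq):
    """Hypothetical unit under test: returns the first, middle, and
    last elements of a non-empty sequence."""
    return seq[0], seq[len(seq) // 2], seq[-1]


# Guideline: test with a sequence that has only a single value.
assert sequence_summary([42]) == (42, 42, 42)
# Guideline: use sequences of different sizes; access the first,
# middle, and last elements.
assert sequence_summary([3, 1, 4, 1, 5]) == (3, 4, 5)
assert sequence_summary([1, 2]) == (1, 2, 2)
# Guideline: force the system to generate an error (empty sequence).
try:
    sequence_summary([])
except IndexError:
    pass
```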
Equivalence partitioning
  Component testing
Software components are often composite components that are made up of
several interacting objects. For example, in the weather station system,
the reconfiguration component includes objects that deal with each
aspect of the reconfiguration.
You access the functionality of these objects through the defined
component interface.
Testing composite components should therefore focus on showing that the
component interface behaves according to its specification. You can
assume that unit tests on the individual objects within the component
have been completed.
  Component testing ->
  Interface testing
The goal is to detect faults due to interface errors or invalid
assumptions about interfaces.
Interface types:-
◦ Parameter interfaces: data is passed from one procedure or method to
  another.
◦ Shared memory interfaces: a block of memory is shared between procedures
  or functions.
◦ Procedural interfaces: a subsystem encapsulates a set of procedures that
  can be called by other subsystems.
◦ Message passing interfaces: subsystems request services from other
  subsystems.
Interface test relation
 Component testing ->
 Interface issues
◦ Interface misuse: a calling component calls another component and makes
  an error in its use of the interface, for example, by passing parameters
  in the wrong order.
◦ Interface misunderstanding: the calling component has incorrect
  assumptions about the behavior of the called component.
◦ Timing errors: the called and the calling components operate at
  different speeds, and out-of-date information is accessed.
 Recommendations for
 interface testing
◦ Design tests so that the parameters to a called procedure are at the
  extreme ends of their ranges.
◦ Always test pointer parameters with null pointers.
◦ Design tests that cause the component to fail.
◦ Use stress testing in message passing systems.
◦ In shared memory systems, vary the order in which components are
  activated.
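Two of these recommendations (null references and extreme parameter values) sketched against a hypothetical `format_name` procedure; in Python, `None` plays the role of a null pointer:

```python
def format_name(first, last):
    """Hypothetical called procedure with a two-parameter interface."""
    if first is None or last is None:
        raise ValueError("name parts must not be None")
    return f"{last}, {first}"


# Recommendation: always test pointer (reference) parameters with nulls.
try:
    format_name(None, "Turing")
except ValueError:
    pass

# Recommendation: use parameters at the extreme ends of their ranges.
assert format_name("", "") == ", "                # empty strings
assert format_name("A" * 10_000, "B")             # very long input
```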
  System testing
System testing during development involves integrating components to
create a version of the system and then testing the integrated system.
The focus in system testing is on testing the interactions between
components. System testing checks that components are compatible,
interact correctly, and transfer the right data at the right time across
their interfaces.
System testing tests the emergent behavior of a system.
  System and component
  testing
During system testing, pre-made systems and previously created
reusable components may be mixed with newly constructed
components. Then, the entire system is evaluated.
Now, components created by several subteams or team members can be
integrated. System testing is a collective action rather than an individual
one.
Some businesses may use a separate testing team for system testing
without involving designers or programmers.
  Use-case testing
The use cases developed to identify system interactions can be used as a
basis for system testing.
Because each use case usually involves several system components, testing
the use case forces these interactions to occur.
The components and interactions that are being evaluated are
documented in the sequence diagrams linked to the use case.
Example: sequence diagrams used to create test cases
  Testing policy
Since comprehensive system testing is not achievable, testing policies
that specify the minimum level of system test coverage can be created.
Testing policies examples:-
◦ All system functions that are accessed through menus should be tested.
◦ Combinations of functions that are accessed through the same menu, such
  as text formatting, must be tested.
◦ Where user input is required, all functions must be tested with both
  correct and incorrect input.
  Test-driven development
With test-driven development (TDD), you integrate testing and code
development when creating programs.
Tests are developed before code, and 'passing' the tests is the key factor
in development.
You create tests for each increment of code as you create it. The code
you have written must pass its test before you can move on to the next
increment.
TDD was first used in conjunction with agile techniques like Extreme
Programming. It can also be applied to development procedures that are
plan-driven, though.
The process of test-driven development
 Process actions for TDD
◦ Select the initial functionality increment that is required. Normally,
  implementing this should only involve a few lines of code.
◦ Write an automated test for this functionality.
◦ Execute the test and any other tests that have been added. Since the
  functionality has not yet been implemented, the new test will fail.
◦ After the functionality has been added, rerun the test.
◦ Once all tests run successfully, move on to implementing the next chunk
  of functionality.
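The TDD cycle above, compressed into a single hypothetical example (`cart_total` is an invented increment of functionality, not from the chapter):

```python
# Steps 1-2: the chosen increment is "compute a shopping-cart total",
# so the automated test is written first.
def test_cart_total():
    assert cart_total([2.50, 1.25]) == 3.75
    assert cart_total([]) == 0

# Step 3: running the test at this point would fail, because
# cart_total has not been implemented yet.

# Step 4: implement just enough functionality to pass the test.
def cart_total(prices):
    return sum(prices)

# Step 5: rerun the test; once it (and all earlier tests) pass,
# move on to the next increment of functionality.
test_cart_total()
```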
 Test-driven development
 advantages
◦ Code protection: All of the code you write has at least one test as each
  segment of code has an associated test.
◦ Testing for regression: As a program is created, a regression test suite is
  incrementally created.
◦ Streamlined bug fixing: Where the issue is should be clear when a test
  fails. It is necessary to review and edit the recently written code.
◦ System documentation: the tests themselves are a form of documentation
  that describes what the code should be doing.
  Regression testing
Regression testing involves running tests on the system to ensure that no
previously functional code has been "broken" by changes.
Regression testing is expensive when done manually, but simple and
straightforward when done automatically: all tests are rerun every time
the software is changed.
Tests must run 'successfully' before a change is committed.
  Release testing
Release testing is the process of evaluating a specific system release that
is intended for use by users outside the development team.
The primary goal of the release testing process is to convince the
supplier of the system that it is good enough for use.
Release testing, therefore, has to show that the system delivers its
specified functionality, performance, and dependability, and that it
does not fail during normal use.
Release testing is usually a 'black-box' testing process where tests are
derived only from the system specification.
  Release testing and system
  testing
System testing includes release testing.
Important distinctions: -
◦ A distinct team that wasn't involved in the system's development should
  undertake release testing.
◦ The development team should focus their system testing efforts on
  identifying systemic flaws (defect testing). The purpose of release testing
  (validation testing) is to verify that the system satisfies its specifications and
  is suitable for usage by the wider public.
  Requirements based testing
Conditions for the Mentcare system:-
 ◦ If a patient has a history of allergies to a specific medicine, the system
   user will receive a warning notice if that medication is prescribed.
 ◦ A prescriber must explain why they choose to disregard an allergy
   warning if they do so.
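A hedged sketch of a test derived from the first requirement above; `check_prescription` is a hypothetical stand-in for the Mentcare prescribing logic, which is not specified in the chapter:

```python
def check_prescription(patient_allergies, drug):
    """Hypothetical Mentcare-style check: returns a warning message when
    a drug the patient is allergic to is prescribed, otherwise None."""
    if drug in patient_allergies:
        return f"WARNING: patient has a recorded allergy to {drug}"
    return None


# Test derived directly from the requirement: prescribing a medication
# the patient is allergic to must produce a warning for the system user.
assert check_prescription({"penicillin"}, "penicillin") is not None
# And no warning is issued for a drug with no recorded allergy.
assert check_prescription({"penicillin"}, "ibuprofen") is None
```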
Example of a requirements-based test in Mentcare
  Performance testing
Part of release testing may involve testing the emergent properties of a
system, such as performance and reliability. Tests should reflect the
profile of use of the system.
Performance tests usually involve planning a series of tests where the
load is steadily increased until the system performance becomes
unacceptable.
Stress testing is a form of performance testing where the system is
deliberately overloaded and its failure behavior is investigated.
  User testing
User or customer testing is a stage in the testing process in which users
or customers provide input and advice on system testing.
Even after thorough system and release testing has been done, user
testing is still crucial.
This is because a system's performance, usability, dependability, and
robustness are all greatly influenced by factors related to the user's
workplace. These are impossible to duplicate in a testing setting.
 User testing types
◦ Alpha Test: Together with the development team, users test the
  program at the developer's facility.
◦ Beta Test: Users are provided access to a program release so they can
  experiment with it and talk to the system developers about any
  problems they encounter.
◦ Acceptance Test: customers test a system to decide whether or not it is
  ready to be accepted from the system developers and deployed in the
  customer environment.
The acceptance testing process
  Acceptance testing and agile
  methodologies
In agile approaches, the user/customer participates in the development
team and is in charge of deciding whether the system is acceptable.
Tests are defined by the user/customer and are integrated with other
tests in that they are run automatically when changes are made.
There is no separate acceptance testing process.
The main problem here is whether or not the embedded user is "typical"
and can represent the interests of all system stakeholders.
 Summary
◦ Testing is limited to identifying program errors. It cannot prove that all flaws have
  been fixed.
◦ The software development team is in charge of development testing. Before a
  system is made available to clients, it should be tested by a different team.
◦ Testing during development includes unit testing, which involves testing individual
  items, component testing, which involves testing linked groups of objects, and
  system testing, which involves testing partial or full systems.
◦ To "break" software when testing it, employ test scenarios that have been
  successful in identifying errors in other systems.
 Summary
◦ Wherever possible, you should write automated tests. The tests are
  embedded in a program that can be run every time a change is made to a
  system.
◦ Tests are written before the code that will be tested in the test-first approach to
  development.
◦ In scenario testing, test cases are created by imagining a typical usage scenario.
◦ In order to determine whether the program is suitable for deployment and use in
  its operating environment, a user testing procedure called acceptance testing is
  used.
 Exercise Questions
In software testing, what is regression testing?
What exactly is a test environment?
What does a software testing bug mean?