
Software Testing

Chapter 1 - Basics of Software Testing and Testing Methods (14 Marks)
Define Software testing. ✅
Software Testing is the process of evaluating a software application to find errors,
gaps, or missing requirements.

It is done to ensure that the software functions as expected and meets user needs.

Define Static and Dynamic Testing. (2M, W-19) ✅


1. Static Testing:

Static testing is a verification activity.

Static Testing is a type of software testing where the code is checked without
actually running it.

This includes reviewing the code, design documents, and other related documents to
find errors early on.

Examples include inspection, reviews, and walkthroughs.

2. Dynamic Testing:

Dynamic testing is a validation activity.

Dynamic Testing involves testing the software by actually executing the code.

This type of testing checks the functionality of the software by running it and
verifying that it behaves as expected.

Examples include unit tests, integration tests, system tests, and acceptance tests.

Enlist objectives of software testing. (2M, W-19, W-22, W-23, S-24) ✅


1. Find defects made by the programmer while developing the software.

2. Gain confidence in the software's quality.

3. To make sure that the final product meets user and business requirements.

4. Prevent defects from occurring.

5. To provide customers with a quality product.

6. To create good test cases; a good test case is one that has a high chance of finding
errors that have not yet been discovered.

7. It ensures that the application meets the Business Requirement Specifications (BRS) and
the System Requirement Specifications (SRS) completely.

Define Bug, Error, Fault and Failure. (2M, W-19, W-22, W-23, S-22) ✅
1. Bug: An error present in the software at the time of execution.

2. Error: An error is a human action that produces the incorrect result.

3. Fault: An incorrect step, process, or data definition in a computer program.

4. Failure: A failure occurs when the external behavior of a system does not match the
system specification.

Compare Verification and Validation. (2M, W-22, W-23) ✅


Verification | Validation
Verification is the static testing. | Validation is the dynamic testing.
It does not include the execution of the code. | It includes the execution of the code.
Focus is on checking if the product is built right. | Focus is on checking if the right product is built.
Methods used in verification are reviews, walkthroughs, inspections, and desk checking. | Methods used in validation are Black Box Testing, White Box Testing, and non-functional testing.
The quality assurance team performs verification. | Validation is done by the testing team on the software code.
Verification is done before validation. | Validation is performed after verification.
The cost of fixing errors found in verification is lower than fixing errors found in validation. | The cost of fixing errors found in validation is higher than fixing errors found in verification.

Enlist any four skills for Software Tester. (2M, S-22, S-24) ✅
1. Communication Skills

2. Analytical Skills

3. Knowledge of testing tools

4. Planning

5. Basic Knowledge of Programming

6. Negotiation Skills

7. Attention to detail

Describe Boundary Value Analysis with example. (4M, W-19, W-23) ✅


Most software defects occur near boundaries or limits of variable values.

Boundary Value Analysis (BVA) is a testing technique where test cases are designed using
values at the boundaries of input limits.

Positive testing uses input values within the boundary limits, while negative testing
uses values outside the boundaries.

BVA focuses on finding errors at the boundaries of the input domain rather than in the
center.

Each boundary has both valid and invalid values, and test cases are created for both
scenarios.

BVA is primarily a black box testing technique, but can also be applied in white box
testing, especially for data structures like arrays, stacks, and linked lists.

This technique helps identify test cases that are more likely to detect defects
effectively.

Examples:

1. Example 1: A system accepts numbers from 1 to 10. All other values are invalid.

Test boundary values: 0, 1, 2, 9, 10, 11.

2. Example 2: An exam has boundaries for pass, merit, and distinction:

Pass: 40% (Test values: 39, 40, 41).

Merit: 75% (Test values: 74, 75, 76).

Distinction: 85% (Test values: 84, 85, 86).

Invalid values: below 0 and above 100 (e.g., -1 and 101).

This approach checks values just inside, on, and just outside the boundaries to make sure
they are handled correctly.
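
A minimal JUnit 5 sketch of Example 1 is shown below. The RangeValidator class and its isValid() method are assumed here purely for illustration; they are not part of the original notes:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Hypothetical unit under test: the system accepts only values from 1 to 10.
class RangeValidator {
    static boolean isValid(int n) {
        return n >= 1 && n <= 10;
    }
}

class RangeValidatorBoundaryTest {

    @Test
    void valuesOnAndJustInsideTheBoundariesAreAccepted() {
        assertTrue(RangeValidator.isValid(1));   // lower boundary
        assertTrue(RangeValidator.isValid(2));   // just inside the lower boundary
        assertTrue(RangeValidator.isValid(9));   // just inside the upper boundary
        assertTrue(RangeValidator.isValid(10));  // upper boundary
    }

    @Test
    void valuesJustOutsideTheBoundariesAreRejected() {
        assertFalse(RangeValidator.isValid(0));  // just below the lower boundary
        assertFalse(RangeValidator.isValid(11)); // just above the upper boundary
    }
}

The six test values (0, 1, 2, 9, 10, 11) are exactly the boundary values listed in Example 1.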

Describe use of Decision Tables with example. (4M, W-19) ✅


Decision tables are a black-box testing technique used to determine test scenarios for
complex business logic.

Decision tables organize complex business rules, making them useful for both developers
and testers.

Decision tables can assist in test design even if they are not part of the requirements,
as they show the effects of different input combinations and how the software should
respond.

They help developers improve their work and build better relationships with testers.

Testing all possible input combinations can be challenging because there may be too many
combinations.

It is not always possible to test every combination, so selecting the right ones to test
is very important.

If you don’t select combinations systematically, you might end up testing the wrong ones,
making the testing less effective.

Example:

Conditions | TC1 | TC2 | TC3 | TC4
Request Login | 0 | 1 | 1 | 1
Valid username entered | X | 0 | 1 | 1
Valid password entered | X | X | 0 | 1
Offer recover credentials | 0 | 1 | 1 | 0
Activate entry box for username | 0 | 1 | 1 | 0
Activate entry box for password | 0 | 0 | 1 | 0
Enter privilege area | 0 | 0 | 0 | 1

Where:

1 = True

0 = False

X = No action (Doesn't matter)
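
A rough Java sketch (not from the notes) of how the four columns of this decision table can be encoded and used as expected results in test cases; the class and method names are hypothetical, and Java 16+ is assumed for the record syntax:

class LoginDecision {

    // Actions produced by one combination of conditions (one table column).
    record Actions(boolean offerRecoverCredentials,
                   boolean activateUsernameBox,
                   boolean activatePasswordBox,
                   boolean enterPrivilegeArea) {}

    static Actions decide(boolean requestLogin, boolean validUsername, boolean validPassword) {
        if (!requestLogin) {
            // TC1: no login requested, so no action at all.
            return new Actions(false, false, false, false);
        }
        if (!validUsername) {
            // TC2: invalid username: offer recovery, re-activate the username box.
            return new Actions(true, true, false, false);
        }
        if (!validPassword) {
            // TC3: invalid password: offer recovery, re-activate both entry boxes.
            return new Actions(true, true, true, false);
        }
        // TC4: everything valid: enter the privileged area.
        return new Actions(false, false, false, true);
    }
}

Each column TC1 to TC4 of the table corresponds to one call of decide() with a distinct combination of condition values, which gives the expected actions for that test case.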

State Entry and Exit Criteria for software testing. (4M, W-22, W-23) ✅
Entry Criteria for Software Testing:

Entry criteria are the minimum conditions that must be met before testing can begin.

These criteria are defined and approved during the test planning phase and are part of
the test plan.

Typical Entry Criteria:

All necessary hardware and software must be installed, configured, and working
properly.

Required documentation, including design and requirement details, must be available for testing.

Testing tools must be installed and functioning correctly.

All team members involved in testing should be trained to use the testing tools.

Test data must be prepared and available.

The test environment, including hardware, software, and lab setup, should be ready for
use.

Exit Criteria for Software Testing:

Exit criteria are the minimum conditions that must be met to end a specific phase of
testing.

These criteria are documented and approved during the test planning phase.

The criteria confirm the completion of the testing phase or project.

Typical Exit Criteria:

All planned test cases must be executed.

Required test coverage levels should be met.

No high-priority bugs should remain open.

All critical defects should be fully tested and resolved.

The project budget should be within limits.

The testing schedule should be completed as planned.

Describe Code Complexity. (4M, S-22) ✅


Code complexity is a type of testing that verifies both the system design and the coding
of the system.

It involves inspections, reviews, and walkthroughs to assess complexity systematically.

Complex programs are harder to maintain in the future, which can lead to problems.

More complex code can create new defects, especially when complex decision loops are
executed.

Complicated code and designs can lead to failures under complex conditions.

Cyclomatic complexity is a metric used to determine the complexity of code functionality.

It calculates the minimum number of paths needed to cover all possible paths through a
module.

The more decisions a system makes, the more tests are required.

Cyclomatic complexity helps define the minimum number of test cases needed for a module
and is used throughout the Software Development Life Cycle (SDLC).

It is calculated using the formula CC = E - N + 2P, where E is the number of edges, N is the number of nodes, and P is the number of connected components in the control flow graph (for a single connected graph this reduces to CC = E - N + 2).

Cyclomatic complexity is the basis of basis path testing, which derives the independent paths to be tested.

The development team can use a Control Flow Graph (CFG) to show the flow of the program.

A CFG is a directed graph where nodes represent different sections of the program, and
edges represent the program's flow between them.
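
A hypothetical worked example (assumed, not taken from the notes) of applying the metric to a small method; for structured code the number of decision points plus one gives the same value as the graph formula:

public class GradeExample {

    // Each 'if' below is a decision point in the control flow graph.
    static String grade(int marks) {
        if (marks >= 85) return "Distinction"; // decision 1
        if (marks >= 75) return "Merit";       // decision 2
        if (marks >= 40) return "Pass";        // decision 3
        return "Fail";
    }

    public static void main(String[] args) {
        // CC = 3 decisions + 1 = 4, so at least four test cases are needed,
        // one for each independent path through grade():
        System.out.println(grade(90)); // Distinction
        System.out.println(grade(80)); // Merit
        System.out.println(grade(60)); // Pass
        System.out.println(grade(20)); // Fail
    }
}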

Differentiate Smoke and Sanity Testing. (4M, S-22) ✅


Smoke Testing | Sanity Testing
Initial testing done to check if the basic functionality of the software works. | Focuses on verifying specific bug fixes or new features after changes.
Performed on every new build. | Performed after receiving a stable build with minor changes.
Broad and shallow testing of the application. | Narrow and deep testing, focusing on specific areas.
A subset of acceptance testing. | A subset of regression testing.
Usually scripted and documented. | Usually unscripted and informal.
Covers all major functionalities of the application. | Focuses on only the areas affected by recent changes.
Example: Checking if the application opens and basic functions work. | Example: Verifying that a login bug is resolved.

Differentiate Quality Assurance and Quality Control. (4M, S-22, S-23, S-24) ✅
Quality Assurance (QA) | Quality Control (QC)
QA is a managerial tool. | QC is a corrective tool.
Focuses on processes. | Focuses on the product.
QA involves prevention-oriented tasks. | QC involves detection-oriented tasks.
QA is the process of ensuring quality in processes. | QC is the process of verifying the quality of the product.
Measures processes, finds weaknesses, and suggests improvements. | Measures products, finds defects, and suggests improvements.
SQA (Software Quality Assurance) ensures quality in software development processes. | SQC (Software Quality Control) ensures quality in the final software product.
Activities include defining processes, audits, and training. | Activities include reviews and testing.
Verification is an example of QA. | Validation or testing is an example of QC.

Differentiate between Static and Dynamic Testing. (4M, S-23, S-24) ✅


Static Testing | Dynamic Testing
Static testing is a verification activity. | Dynamic testing is a validation activity.
Static testing checks the code without running it. | Dynamic testing involves testing the software by executing the code.
It includes reviewing code, design documents, and related documents to find errors early on. | It checks the functionality of the software by running it and verifying its behavior.
Done early in the development process. | Done after the code is developed.
The work product is reviewed using checklists, standards, and knowledge to locate defects. | Methods evaluate the product based on defined requirements and designs, marking it as 'pass' or 'fail'.
Examples include inspection, reviews, and walkthroughs. | Examples include unit tests, integration tests, system tests, and acceptance tests.

Differentiate between White Box and Black Box Testing. (4M, W-22, S-24) ✅
WBT (White Box Testing) | BBT (Black Box Testing)
Knowledge of the software is needed in detail. | Detailed knowledge of the software is not needed.
It is also called Clear box, Glass box, or Transparent box testing. | It is also called Closed box or Dark box testing.
Testing can be performed by developers and professional testers. | Testing can be performed by end users or anyone.
It is suitable for algorithm testing. | It is not suitable for algorithm testing.
It is more time consuming. | It is less time consuming.
It is structural testing of the system. | It is behavioural testing of the system.
Can be based on detailed design documents. | Can be based on the Requirement Specification document.
This testing is best suited for lower levels of testing like Unit Testing or Integration Testing. | This type of testing is ideal for higher levels of testing like System Testing or Acceptance Testing.

Describe V-Model with diagram. (6M, W-23) ✅


The V-model is a type of Software Development Life Cycle (SDLC) model that follows a
sequential process shaped like a "V."

It is also called the Verification and Validation model.

Each development stage has a corresponding testing phase.

Development and testing phases are directly linked; the next phase starts only after the
previous one is completed.

The left side of the "V" represents Verification phases, while the right side represents
Validation phases.

The two sides connect at the coding phase.

Verification Phases (Left Side of the V) -

1. Requirement Analysis:

This phase involves talking to customers to understand their needs and expectations.

It’s also known as Requirement Gathering.

2. System Design:

This phase focuses on designing the system and setting up the necessary hardware and
communication for product development.

3. Architectural Design:

The system design is broken down into smaller modules, each handling different
functions.

This phase clarifies how data is transferred and how modules communicate with each
other and external systems.

4. Module Design:

The system is further divided into small modules.

Detailed designs for these modules are specified, known as Low-Level Design (LLD).

Validation Phases (Right Side of the V) -

1. Unit Testing:

Unit Test Plans are created during the module design phase.

These plans are executed to find and fix bugs in individual code units.

2. Integration Testing:

After unit testing is complete, integration testing is performed.

This checks that the different modules work together correctly.

It verifies the communication between modules based on the architectural design.

3. System Testing:

This phase tests the entire application, focusing on its functionality, interdependencies, and communication.

It checks both functional and non-functional requirements of the application.

4. User Acceptance Testing (UAT):

UAT is conducted in an environment similar to the real-world production environment.

This testing confirms that the delivered system meets user requirements and is ready
for use.

Chapter 2 - Types and Levels of Testing (18 Marks)


List levels of Testing. (2M, W-22) ✅
1. Unit Testing

2. Integration Testing

3. System Testing

4. Acceptance Testing

Define Stubs and Drivers. (2M, W-23) ✅


Drivers:

Drivers are dummy modules that are always used to simulate the high-level modules.

Drivers are only used when main programs are under construction.

Drivers are used in bottom-up integration.

Stubs:

Stubs are dummy modules that are always used to simulate the low-level modules.

Stubs are used when sub-programs are under construction.

Stubs are used in the top-down approach.
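
A minimal Java sketch of the two ideas; all names here (TaxService, BillingModule, BillingDriver) are hypothetical and only illustrate where a stub and a driver sit:

// The real low-level module is still under construction, so a STUB stands in for it.
interface TaxService {
    double taxFor(double amount);
}

class TaxServiceStub implements TaxService {
    // Returns a fixed, canned answer instead of real tax logic.
    public double taxFor(double amount) { return 0.0; }
}

// Higher-level module under test; it calls the (stubbed) low-level module.
class BillingModule {
    private final TaxService taxService;
    BillingModule(TaxService taxService) { this.taxService = taxService; }
    double total(double amount) { return amount + taxService.taxFor(amount); }
}

// DRIVER: a throwaway calling program that exercises the module under test
// because the real calling (main) program is not available yet.
class BillingDriver {
    public static void main(String[] args) {
        BillingModule billing = new BillingModule(new TaxServiceStub());
        System.out.println("Total = " + billing.total(100.0)); // expected 100.0 with the stub
    }
}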

What is GUI testing? Give one example (2M, W-23) ✅


GUI stands for Graphical User Interface where you interact with the computer using images
rather than text.

GUI Testing is the process of checking these visual elements like screens, buttons,
menus, toolbars, and dialog boxes to ensure they function properly.

GUI Testing focuses on what the user sees, ensuring the layout and design work correctly.

Examples of GUI testing include:

1. Check Screen Validations

2. Verify All Navigations

3. Verify Data Integrity

4. Verify the object states

5. Verify the date Field and Numeric Field Formats

State any two examples of integration testing. (2M, W-19) ✅


1. Checking the connection between the login page and the home page, i.e., after entering
login details, the user should be taken to the home page.

2. Checking the connection between the login page and the mailbox.

3. Checking the connection between the mailbox and the delete mails function.

4. Checking the connection between the home page and the profile page, i.e., the profile
page should open up.

State the process of Performance Testing. (2M: S-22, 4M, S-24) ✅

1. Identify Test Environment:

Understand the physical test environment and the production environment.

Identify the available testing tools that can be used.

Gather details about the hardware, software, and network setup used for testing.

2. Determine Performance Criteria:

Define the acceptable performance standards, including response time, resource usage,
and throughput (the amount of data that can be processed).

3. Plan and Design:

Analyze the different types of end users who will interact with the system.

Plan the performance tests and determine what data will be collected during the
testing process.

4. Configure Test Environment:

Set up the testing environment before running tests.

Arrange the necessary tools and resources required for the testing process.

5. Implement Test Design:

Create the performance tests based on the design plan.

6. Run the Test:

Execute all the test cases and closely monitor the testing process.

7. Analyze and Retest:

Analyze the test results and share them with the team.

If necessary, make adjustments to improve performance and retest the system.
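
In practice, steps 5 and 6 are usually implemented with a dedicated tool such as JMeter or LoadRunner; the sketch below only illustrates the idea with plain Java threads. The URL and the number of virtual users are assumptions, not values from the notes:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class SimpleLoadRunner {
    public static void main(String[] args) throws Exception {
        String url = "http://localhost:8080/login";   // assumed test environment
        int virtualUsers = 50;                        // assumed load level

        HttpClient client = HttpClient.newHttpClient();
        ExecutorService pool = Executors.newFixedThreadPool(virtualUsers);
        List<Future<Long>> timings = new ArrayList<>();

        // Fire the requests concurrently and record each response time.
        for (int i = 0; i < virtualUsers; i++) {
            timings.add(pool.submit(() -> {
                long start = System.nanoTime();
                HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
                client.send(request, HttpResponse.BodyHandlers.ofString());
                return (System.nanoTime() - start) / 1_000_000; // response time in ms
            }));
        }

        long total = 0;
        for (Future<Long> t : timings) total += t.get();
        pool.shutdown();
        System.out.println("Average response time: " + (total / virtualUsers) + " ms");
    }
}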

Enlist any two advantages of acceptance testing. (2M, S-22) ✅


1. Ensures User Satisfaction: It helps confirm that the software meets the user's needs and
requirements, ensuring they are satisfied with the product.

2. Validates Business Requirements: It checks if the software fulfills the business goals
and requirements, making sure it solves the intended problems.

3. Reduces Risks: By testing in real-world scenarios, it identifies potential issues early, reducing the risk of failure after release.

4. Improves Quality: It ensures the software works as expected in the production environment, improving its overall quality.

5. Helps in Decision Making: It provides valuable feedback for making decisions about
whether to proceed with the release or make improvements.

Differentiate between Alpha and Beta testing. (2M: S-24, 4M: W-19, W-23) ✅
Alpha Testing | Beta Testing
Internal testing phase. | External testing phase.
Performed by developers and internal QA team. | Performed by end users or selected external testers.
Involves both black box and white box testing techniques. | Uses only black box testing.
Conducted in a controlled, internal environment. | Conducted in a real-world environment.
It is less effective compared to beta testing. | It is more effective compared to alpha testing.
Long execution cycle may be required for alpha testing. | Only a few weeks of execution are required for beta testing.
Its purpose is to identify and fix issues before public release. | Its purpose is to validate the product in real-world conditions and get user feedback.
In alpha testing, developers can quickly address and fix critical issues right away. | In beta testing, most of the feedback and issues are collected to be addressed in future updates of the product.

Differentiate between Drivers and Stubs. (4M, W-19) ✅


Stubs | Drivers
Stubs are dummy modules used to simulate low-level modules. | Drivers are dummy modules used to simulate high-level modules.
Stubs are used when sub-programs are under construction. | Drivers are used when main programs are under construction.
Stubs are the called programs. | Drivers are the calling programs.
Stubs are used in the top-down approach. | Drivers are used in the bottom-up approach.
Stubs help test the main module by replacing missing sub-modules. | Drivers help test lower modules by replacing missing main modules.

Describe Unit Testing. (4M, S-22) ✅


Unit testing is also called module testing because it tests individual units or modules
of code.

A unit is the smallest part of software, typically with one or a few inputs and a single
output.

Each test checks a single module based on the design document.

It is the level of software testing where individual components of software are tested.

In procedural programming, a unit can be a single program, function, or procedure.

In object-oriented programming, the smallest unit can be a class.

Unit testing focuses on functionality and reliability of each module. The entry and exit
criteria are usually the same for all modules.

Unit testing is done in a test environment before moving to integration testing.

The main goal is to check if the smallest program of the software works as expected based
on the code.

Each unit is tested individually before integration, to catch defects early.

Unit testing often requires the use of drivers and stubs.

Drivers: Act as the calling units.

Stubs: Act as the called units.
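
A minimal JUnit 5 sketch of a unit test; the Calculator class is a hypothetical unit used only for illustration:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// The smallest testable unit: a single class with one method.
class Calculator {
    int add(int a, int b) { return a + b; }
}

class CalculatorTest {
    @Test
    void addReturnsTheSumOfTwoIntegers() {
        // The unit is exercised in isolation, with a known input and expected output.
        assertEquals(276, new Calculator().add(176, 100));
    }
}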


Illustrate process of bi-directional integration testing. State its two advantages and
disadvantages. (4M, W-23)

The bidirectional integration testing strategy combines both Top-Down and Bottom-Up
approaches.

In this method, top modules and lower modules are tested simultaneously, allowing
integration testing to occur in both directions.

Stubs and Drivers are both used in this process.

This strategy is particularly beneficial for large projects with multiple subprojects and
is suitable for systems developed using a spiral model.

Both approaches can be executed in parallel, increasing efficiency.

Advantages:

1. Useful for large enterprises and projects with several subprojects.

2. Suitable for spiral development models with large modules functioning like systems.

3. Combines Top-Down and Bottom-Up approaches based on the development schedule.

4. Units are tested and integrated to form a complete system.

5. Integration testing is performed in a downward direction.

Disadvantages:

1. Testing costs are very high due to dual approach execution.

2. Not suitable for smaller systems with high interdependencies.

3. Only practical when individual subsystems are nearly as robust as the full system.

4. Requires testers with different skill sets for various levels of testing.

Explain Security Testing in detail. ✅

Security testing checks if an information system protects data and maintains
functionality at the same time.

It verifies six key principles:

1. Confidentiality: Ensures data is only accessible to authorized users.

2. Integrity: Confirms data is accurate and unaltered.

3. Authentication: Verifies user identities to ensure that only legitimate users can
access the system.

4. Authorization: Ensures users have permission to access certain resources based on their roles.

5. Availability: Ensures the system remains accessible to authorized users when needed.

6. Non-repudiation: Prevents users from denying their actions, ensuring accountability.

Security testing ensures the application is secure from data theft, unauthorized access,
and cyber threats, particularly on the web.

Areas Tested:

Data security

Password protection

Prevention of unauthorized access

Network security

Example:

A student management system is insecure if the admission branch can edit exam branch
data without proper authorization.

Additional Security Tests:

1. Unauthorized access to secure web pages should not be allowed.

2. Restricted files should not be downloadable without proper access control.

3. Sessions should automatically end after a period of inactivity to protect user data
from unauthorized access.

Explain Top-down and Bottom up integration testing. (4M, W-22) ✅


Top-Down Integration Testing:

Testing is performed from the top-level modules down to the lower-level modules.

Higher-level modules are tested and integrated first, followed by the integration of
lower-level modules.

Modules are integrated from main modules to sub-modules, either in depth-first order or
breadth-first order.

Stubs are used to simulate the lower-level modules that are not yet developed or ready
for testing.

These stubs act as placeholders for missing modules, allowing testing to proceed without
waiting for the complete system.

Bottom-Up Integration Testing:

Testing is performed from the lower-level modules up to the higher-level modules.

Lower-level modules are tested and integrated first, followed by the integration of
higher-level modules.

Modules are integrated from sub-modules to main modules, either in depth-first order or
breadth-first order.

Drivers are used to simulate the higher-level modules that call the lower-level modules.

These drivers act as placeholders for missing higher-level modules, allowing testing to
proceed without waiting for their availability.

Describe Graphical User Interface (GUI) Testing and its important traits (4M, W-22, S-24) ✅
A computer application can have two types of interfaces:

1. Command Line Interface (CLI) – where you type text commands.

2. Graphical User Interface (GUI) – where you interact with the system using images,
buttons, menus, and icons instead of typing commands.

GUI Testing is the process of checking these visual elements like screens, buttons,
menus, toolbars, and dialog boxes to ensure they function properly.

GUI Testing focuses on what the user sees, ensuring the layout and design work correctly.

Important Traits of GUI Testing:

1. Screen Validation – Ensure all fields and labels are displayed correctly.

2. Navigation – Verify that buttons, links, and other elements direct users to the right
place.

3. Usability – Check that the interface is easy to use and understand.

4. Data Integrity – Ensure data entered through the interface is correctly saved and
retrieved.

5. Object States – Verify elements (like buttons) change their state correctly (enabled,
disabled, etc.).

6. Field Formats – Ensure proper format for date, numeric, and other input fields.

Advantages of GUI Testing:

A good GUI makes the app look nice and easy to use, which helps users like it more.

A well-designed interface gives users a better experience when using the app.

Consistent design and layout make it easier for users to understand and navigate the
app.
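
GUI checks like these are often automated with a tool such as Selenium WebDriver. The sketch below is only an illustration; the URL and the element ids ("username", "password", "loginBtn") are assumptions and not taken from the notes:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginGuiTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("http://localhost:8080/login");

            // Object state check: the login button should start out enabled.
            boolean loginEnabled = driver.findElement(By.id("loginBtn")).isEnabled();
            System.out.println("Login button enabled: " + loginEnabled);

            // Screen validation and navigation check: fill the form and submit.
            driver.findElement(By.id("username")).sendKeys("testuser");
            driver.findElement(By.id("password")).sendKeys("P@ssw0rd");
            driver.findElement(By.id("loginBtn")).click();

            // After navigation the page title should change to the home page.
            System.out.println("Page title after login: " + driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}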

Explain Regression Testing. State when the Regression testing shall be done. (4M, W-23) ✅
Regression Testing is a type of testing used to confirm that recent changes in the
program or code have not negatively affected existing features.

It is a black box testing technique.

It involves running previously executed test cases to ensure that the existing
functionalities still work correctly.

This testing ensures that new changes in the code don’t cause problems in the existing
functionality.

When to Perform Regression Testing:

After Changes in the Software: Test whenever the code is updated.

After Fixing Bugs: Run tests after fixing bugs to ensure no new problems occur.

When Adding New Features: Check that old features still work after adding new ones.

After Code Refactoring: Test to confirm that reorganizing the code doesn’t cause
issues.

Before Software Release: Perform testing before delivering the software to users or
moving it to production.

Types of Regression Testing:

1. Retest All: This method involves re-running all test cases to ensure everything
functions correctly.

2. Regression Test Selection: Instead of testing all cases, this approach selects only
relevant test cases for re-execution.

3. Prioritization of Test Cases: Test cases are selected based on their priority or
business needs, focusing on the most important functionalities first.


State approaches that are considered during Client Server Testing. (4M: W-19, 6M: W-23, S-24)

The Client-Server application involves two systems: the Client and the Server. These two
systems communicate and interact with each other over a computer network.

In Client-Server application testing, the client requests specific information from the
server, and the server responds with the requested information. This process involves two
layers, which is why it is also called two-tier application testing.

Testing Approaches for Client-Server Systems:

1. Component Testing:

Create a plan for testing the client and server separately.

Use a client simulator for testing the server and a server simulator for testing the
client.

For testing the network, both simulators are used at the same time.

2. Integration Testing:

After testing the server, client, and network individually, they are combined for
system testing.

Communication between client and server is tested in integration testing.

3. Performance Testing:

This checks how the system performs when many clients are communicating with the
server at once.

Volume testing and stress testing are used to test the system under both normal and
maximum loads.

4. Concurrency Testing:

This is important for client-server systems because multiple users may try to access
the same data at the same time.

It tests how the system behaves in such situations.

5. Disaster Recovery Testing:

Communication between the client and server can break due to various reasons like
server failure, client failure, or connection problems.

The system should have a plan for what happens in case of failure, and this needs to
be tested.

6. Testing for extended periods:

In a client-server application, the server is expected to run 24/7 for a long time.

Testing needs to be done for a long time to check if the network or server performance
gets worse over time due to any issues.

7. Compatibility Testing:

Servers may use different hardware, software, or operating systems than what is
recommended.

The client may differ from the expected environment setup.

Testing should ensure that performance is maintained across different hardware and
software configurations.

Describe Load Testing and Stress testing. (4M: W-22, 6M: W-23) ✅
Load Testing:

Load testing checks how a system performs under increasing amounts of load until it
reaches its limit.

The load can come from more users or more transactions at the same time, measuring how
the system behaves under this pressure.

It is done in a controlled environment to see how different systems handle similar loads.

The main goal is to monitor the system’s response time and how well it works when there
is heavy use.

Load testing is successful if the system runs the test cases without errors within a set
time limit.

It tests the software under expected loads, like when many users or large files are being
processed.

Examples of Load Testing:

Sending large print jobs to test a printer.

Editing a big document to test a word processor.

Reading and writing a lot of data on a hard disk continuously.

Running many applications on a server at the same time.

Testing an email server by opening thousands of mailboxes.

Stress Testing:

Stress testing checks how a system behaves when there aren’t enough hardware resources,
like low memory or slow processors.

It is used to find the system's breaking point by testing with loads higher than
expected.

Stress testing focuses on two main things:

Response Time: how quickly the system reacts.

Throughput: how much work the system can handle.

This is sometimes called "Fatigue Testing" because it pushes the system beyond its
limits.

Example of Stress Testing:

Running a word processor with very low memory and disk space to see how the software
handles it.

Trying to use thousands of connections on an internet server to check how it performs.

Reducing resources like memory or CPU speed to see if the system can still function
without crashing or losing data.
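
A very small stress-probe sketch (an assumption for illustration, not a standard tool): it keeps allocating memory until the JVM runs out, which mimics running the software with insufficient resources. It can be run with a deliberately small heap, for example java -Xmx32m MemoryStressProbe:

import java.util.ArrayList;
import java.util.List;

public class MemoryStressProbe {
    public static void main(String[] args) {
        List<byte[]> blocks = new ArrayList<>();
        int allocatedMb = 0;
        try {
            while (true) {
                blocks.add(new byte[1_000_000]); // grab memory in roughly 1 MB blocks
                allocatedMb++;
            }
        } catch (OutOfMemoryError e) {
            blocks.clear(); // release the held memory so the report below can run
            System.out.println("Breaking point reached after about " + allocatedMb + " MB.");
            // A robust application should degrade gracefully at this point,
            // not crash or lose data.
        }
    }
}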

Chapter 3 - Test Management (14 Marks)
Define Test Plan. (2M, S-23, S-24) ✅
A Test Plan is a detailed document that outlines the testing strategy, goals, schedule,
time estimates, deliverables, and resources needed to test a software product.

It helps determine how much effort is required to check the quality of the application
being tested.

The test plan acts as a guide for conducting software testing in an organized way, which
is closely monitored and controlled by the test manager.

Enlist any two activities involved in test planning. (2M, W-19) ✅


1. Scope Management: Deciding which features will be tested and which won’t.

2. Test Approach/Strategy: Deciding what types of testing will be done, like configuration,
integration, or localization.

3. Setting Test Criteria: Defining clear rules for when testing starts and ends, and how
different features will be tested.

4. Identifying Responsibilities, Staffing and Training needs: Deciding who will do what
tasks and what training is needed.

State any four need to prepare a test plan. (2M, W-22) ✅


1. A test plan ensures that all functional and design requirements are implemented according
to the documentation.

2. It provides detailed aspects such as test scope, estimation, and strategy.

3. The plan helps determine the time, cost, and effort involved in testing.

4. It helps in determining the quality of software applications.

5. A test plan provides a schedule for testing activities.

6. The document can be used for similar projects.

7. It helps to understand the testing process.

Explain Types of test deliverables. (2M: S-22, 4M: W-22) ✅


Test Deliverables are documents provided to stakeholders at different stages of the
software development process.

These deliverables are given before, during, and after the testing phase.

Some common test deliverables include:

Test Case Documents

Test Plan

Test Strategy

Test Scripts

Test Data

Test Traceability Matrix

Test Results/Reports

Test Summary Report

Installation/Configuration Guides

Defect Reports

Release Notes

Test Plan describes how to check if the software meets its requirements and customer
needs. It includes goals for quality, the resources needed, the schedule, methods, and
tasks.

Test Cases outline the specific items to test and the steps to follow to verify the
software works as expected.

Bug Reports list problems found during testing. These reports can be written on paper or
kept in a database.

Metrics, Statistics, and Summaries show the progress of testing. They include charts,
graphs, and written reports to track how testing is going.

Explain Test Case specifications. (4M, S-22) ✅


Test case specifications (TCS) are developed from the test plan and form the second phase
of development.

TCS give testers an idea of which scenarios will be tested, how they will be tested, and
how often they should be tested.

Test case specifications explain how to implement the test cases described in the test
plan.

They are useful because they provide detailed specifications of each test item.

A test case is a set of inputs, execution conditions, and pass/fail criteria.

TCS represents the requirements that need to be fulfilled by one or more actual test
cases.

TCS should include the following items:

Test Case ID

Test Case Description

Pre-conditions

Steps

Input Data

Expected Result

Actual Result

Status (Pass/Fail)


Describe Test Infrastructure Management/Components with diagram (4M, W-19, W-22, S-23, S-24)

Testing needs a strong infrastructure that should be planned in advance.

This infrastructure consists of three main elements:

1. Test Case Database (TCDB):

The TCDB stores all important information about the test cases used in an
organization.

Here are some of the main components:

1. Test Case: Stores all the basic information about each test.

Attributes: Test Case ID, Name, Owner.

2. Test Case Product Cross-Reference: Links each test case to the feature it
tests.

Attributes: Test Case ID, Module ID.

3. Test Case Run History: Records when a test was run and the results.

Attributes: Test Case ID, Date of Test, Time Taken, Status.

4. Test Case-Defect Cross-Reference: Connects test cases with any defects found.

Attributes: Test Case ID, Defect ID.

2. Defect Repository:

A defect repository holds all necessary information about defects found in a product.

It serves as a communication tool for the team.

It includes:

Details of each defect

Information on test cases related to the defect

Fix details and solutions

Communication regarding the defect

3. Configuration Management Repository and Tools:

Software Configuration Management (SCM) is a process to organize and control changes in software development.

It manages version control to ensure the correct versions of files and components
are used.

It ensures:

Changes to test files are made in a controlled fashion and only with proper
approvals.

Changes made by one test engineer are not accidentally lost or overwritten by
other changes.

Each change produces a distinct version of the file that is re-creatable at any
point in time.

Everyone gets access to only the most recent version of the test files.

State contents of Test Summary Reports used in Test Reporting / Describe process of preparing Summary Report in Test Planning (Both have same answer). (4M, W-19, W-22, S-22, S-24)

Test reporting helps communicate progress and results throughout the testing process.

There are three main types of test reports:

1. Test Incident Report

2. Test Cycle Report

3. Test Summary Report

Test Summary Report:

This is the final report created at the end of a testing cycle.

It helps decide if the product is ready for release.

It summarizes the test cycle's results and comes in two forms:

1. Phase-wise Test Summary: This report is made at the end of each testing phase.

2. Final Test Summary Report: This combines the details of all phases into one report.

A Good Test Summary Report includes:

1. Test Summary Report Identifier: A unique identifier for the report.

2. Description: A brief overview of the test items being reported, with their test IDs.

3. Variances: Any deviations from the test plans or procedures.

4. Summary of Results: A summary of all test results, including incidents that were
resolved and their solutions.

5. Assessment and Recommendation for Release: A final evaluation of whether the product
is ready for release, with a recommendation for moving forward.

Describe standards included in Test Management (Internal and External). (4M, W-19) ✅
Test management standards guide how testing activities are organized and executed.

They are divided into internal and external standards.

Internal Standards:

1. Naming and Storage Conventions for Test Artifacts:

Every test document (like test specifications, test cases, and test results) should
have a clear, meaningful name.

This makes it easier to:

Identify the specific functionality being tested.

Trace the functionality back to its corresponding set of tests.

For example, module names could be M01, M02, and file types could be .sh or .SQL .

2. Documentation Standards:

Files should include:

Header comments at the top, explaining what the test does.

Inline comments throughout the file to clarify the code.

Change history showing updates made to the test file.

3. Test Coding Standards:

Include guidelines for:

Correct initialization.

Naming variables in a standard way.

Encouraging reusability of test elements.

Providing consistent interfaces with external systems like the OS or hardware.

4. Test Reporting Standards:

Ensure that all stakeholders receive regular, consistent updates on test progress.

Reports should follow a standard format with specific details and content.

External Standards:

These are set by outside organizations and outline how the product should meet certain
requirements.

1. Customer Standards:

Defined by the customer based on their business needs or requirements.

2. National Standards:

Defined by the regulatory bodies in the country where the supplier or customer
operates.

3. International Standards:

Global standards that apply to all customers worldwide.

Prepare test plan for creating saving account at Bank (Also write Test Cases). ✅
Test Plan Identifier: TP_BA_001
Introduction:

This document outlines the test plan for the savings account creation process at the
bank.

The purpose is to ensure that the application functions correctly, is user-friendly, and
meets all regulatory requirements.

Test Items:

Account registration

User verification

Initial deposit processing

Confirmation of account creation

Features to Be Tested:

Input validation for personal information

User experience of the registration form

Security features for data protection

Email/SMS notifications

Approach:

Functional and Non-functional testing

Positive and Negative testing

Intuitive Testing (ad hoc)

Regression testing

Pass/Fail Criteria:

All critical test cases must pass.

95% test coverage of requirements.

No critical defects remaining.

The test report will be compiled and approved by the team lead and customer.

Suspension Criteria and Resumption Requirements:

Suspension: Testing will be paused if a critical defect is found.

Resumption: Testing resumes once the defect is resolved.

Test Deliverables:

Test Plan

Test Case Specification

Test Cases

Test Summary Report

Test Tasks:

Writing test cases

Conducting tests

Documenting results

Compiling final test reports

Environmental Needs:

Banking application interface

User devices (desktop, mobile)

Testing tools (e.g., JIRA, Selenium)

Responsibilities:

Functionality Responsible

Account Registration Test Engineer 1

User Verification Test Engineer 2

Deposit Processing Test Engineer 3

Notification System Test Engineer 4

Staffing and Training Needs:

Knowledge of banking regulations

Knowledge of various types of testing including functional and non-functional.

Familiarity with security practices

Schedule:
All testing activities and final delivery are due by 01/07/2025 at 5:00 PM.

Risks and Contingencies:

Insufficient human resources to meet deadlines

Changes in product requirements

Approvals:

Test Manager

Quality Manager

Test Engineers

Test Cases for Creating a Savings Account:

Test Case Description | Input Data | Expected Result | Actual Result
Verify account registration with valid data | Name: Jay Kadlag, Email: jkcool11@gmail.com, Phone: 1234567890, Address: 123 Main St | Account should be created successfully, confirmation received | Account created successfully with confirmation email sent
Verify registration with invalid email | Name: Jay Kadlag, Email: jkcool11@com, Phone: 9876543210, Address: 456 Elm St | Error message should display for invalid email | Error message displayed: "Invalid email format."
Verify user verification process | Verification code: 123456 | Account should be verified successfully | Account verified successfully
Verify initial deposit processing | Amount: 1000 | Deposit should be processed, account balance updated | Deposit processed, new balance is $1000
Check confirmation notification | N/A | User should receive a notification of account creation | Notification received: "Your savings account has been created."
Check account registration with missing fields | No input | Error message should display for missing fields | Error message displayed: "Please fill out all required fields."

Write the test cases for Notepad application (Any 8 test cases) (4M, W-22) ✅
Test Case Description | Steps | Expected Result | Actual Result
Select All Text | Click on "Edit" > "Select All" | All text should be selected. | All text was selected.
Cut Text | Click on "Edit" > "Cut" | Selected text should be cut from the document. | Selected text was cut from the document.
Copy Text | Click on "Edit" > "Copy" | Selected text should be copied to the clipboard. | Selected text was copied to the clipboard.
Paste Text | Click on "Edit" > "Paste" | Text from the clipboard must be pasted. | Text from the clipboard was pasted.
Delete Text | Click on "Edit" > "Delete" | Selected text should be deleted. | Selected text was deleted.
Find Text | Click on "Edit" > "Find" | Cursor must jump to the text found. | Cursor jumped to the text found.
Replace Text | Click on "Edit" > "Replace" | All occurrences should be replaced. | All occurrences were replaced.
Save Document | Click on "File" > "Save" | Document must be saved successfully. | Document was saved successfully.

Design Test Cases for withdraw amount from ATM (6M, S-22). ✅

Test Case Description | Input Data | Expected Result | Actual Result
Withdraw cash with a valid account and valid amount | Amount: 500 | Transaction should be successful and the amount should be dispensed | Amount successfully withdrawn
Withdraw cash with insufficient balance | Amount: 20000 | Error message should appear indicating insufficient balance | Error message displayed
Withdraw amount exceeding daily limit | Amount: 100000 | Error message should appear indicating daily withdrawal limit exceeded | Error message displayed
Withdraw cash with an invalid PIN | PIN: 1234 | Error message should appear indicating incorrect PIN | Error message displayed
Withdraw cash from a frozen account | Amount: 1000 | Error message should appear indicating the account is frozen | Error message displayed
Withdraw less than the minimum allowed withdrawal amount | Amount: 10 | Error message should appear indicating minimum withdrawal limit | Error message displayed

Prepare test plan for Identified Mobile Application. ✅


Test Plan Identifier: TP_MA_001

Introduction:
This document outlines the test plan for the identified mobile application, aimed at
ensuring the application meets its functional and non-functional requirements, is user-
friendly, and performs effectively across various devices.
Test Items:

User authentication

Data entry and retrieval

User interface and experience

Connectivity features (e.g., Wi-Fi, mobile data)

Push notifications

Features to Be Tested:

Login and registration functionality

Data synchronization

In-app purchases

Search functionality

Settings and preferences management

Approach:

Functional and Non-functional testing

Positive and Negative testing

Intuitive Testing (ad hoc)

Regression testing

Pass/Fail Criteria:

All critical and high-priority test cases must pass.

Minimum test coverage of 95% on all requirements.

Any critical defects must be resolved before release.

The test report will be compiled and approved by the team lead and customer.

Suspension Criteria and Resumption Requirements:

Suspension: Testing will be paused if critical defects are identified.

Resumption: Testing resumes once defects are resolved and retested.

Test Deliverables:

Test Plan

Test Cases

Test Reports

Test Tasks:

Developing test cases

Executing tests

Documenting results

Compiling a final test report

Environmental Needs:

Mobile devices (Android, iOS)

Testing tools (e.g., Appium, JIRA)

Responsibilities:

Functionality Responsible

User Authentication Test Engineer 1

Data Entry/Retrieval Test Engineer 2

UI/UX Testing Test Engineer 3

Connectivity Features Test Engineer 4

In-app Purchases Test Engineer 5

Staffing and Training Needs:

Familiarity with mobile application testing

Knowledge of testing tools and methodologies

Understanding of usability principles

Schedule:
All testing activities and final delivery are due by 06/12/2019 at 5:00 PM.
Risks and Contingencies:

Limited device availability for testing.

Changes in requirements during the testing phase.

Approvals:

Test Manager

Test Engineers


Design test cases for Online Mobile Recharge (data fields are mobile number, state, email-id, recharge amount). (6M, S-22)

Test Case Description | Input Data | Expected Result | Actual Result
To check if the mobile number format is valid | Mobile Number: 9876543210 | The mobile number should be valid | The mobile number is valid
To check if the mobile number format is invalid | Mobile Number: 9876ABCD | The mobile number should be invalid | The mobile number is invalid
To check if the recharge amount is valid | Recharge Amount: 500 | The recharge amount should be valid | The recharge amount is valid
To check if the recharge amount is invalid (negative value) | Recharge Amount: -100 | The recharge amount should be invalid | The recharge amount is invalid
To check if the recharge amount is less than the minimum limit | Recharge Amount: 10 | An error message should appear: "Amount too low" | An error message appears: "Amount too low"
To check if the recharge amount exceeds the maximum limit | Recharge Amount: 10000 | An error message should appear: "Amount exceeds limit" | An error message appears: "Amount exceeds limit"
To check if the recharge process works with a missing recharge amount | Mobile Number: 9876543210, Email ID: test@example.com | An error message should appear: "Recharge amount is required" | An error message appears: "Recharge amount is required"

With respect to client-server testing, design test cases for Online Payment Transfer at banking system. (6M, S-22)

Test Case Description | Input Data | Expected Result | Actual Result
To check if the client initiates a payment transfer | Sender Account: 1234567890, Receiver Account: 9876543210, Amount: 1000, Payment Method: Credit Card | The client should send a valid payment request to the server | The server receives a valid payment request
To check if the server processes the payment request | Payment Request: Sender Account: 1234567890, Receiver Account: 9876543210, Amount: 1000 | The server must validate the payment details and balance | The server validates the payment details and confirms balance
To check if the client handles insufficient balance | Sender Account: 1234567890, Receiver Account: 9876543210, Amount: 50000, Payment Method: Debit Card | The client must display "Insufficient balance" error | The client displays "Insufficient balance" error
To check if the client receives a success confirmation | Sender Account: 1234567890, Receiver Account: 9876543210, Amount: 1000, Payment Method: Credit Card | The client must display "Payment Successful" message | The client displays "Payment Successful" message
To check if the server updates account balances | Sender Account: 1234567890, Receiver Account: 9876543210, Amount: 1000 | The server must deduct from sender and add to receiver's account | The server updates both accounts correctly
To check if the server validates the payment method | Payment Method: Debit Card, Sender Account: 1234567890, Receiver Account: 9876543210, Amount: 1000 | The server must validate the payment method and process transfer | The server validates the payment method and processes the transfer
To check if the server validates payment request authenticity | Payment Request: Sender Account: 1234567890, Receiver Account: 9876543210, Amount: 1000, Invalid Authentication Token | The server must reject the request with "Invalid authentication" error | The server rejects the request with "Invalid authentication" error

Design test cases for Simple Calculator Application. (6M, W-22) ✅


Test Case Description | Input Data | Expected Result | Actual Result
To add two integers and display the result | 176 + 100 | 276 | 276
To subtract two integers and display the result | 176 - 100 | 76 | 76
To multiply two integers and display the result | 100 x 20 | 2000 | 2000
To divide two integers and display the result | 100 / 5 | 20 | 20
To clear the screen | Press "C" | Symbol "0" appears | Symbol "0" appears
To delete digits one by one | Press "DEL" | One digit deleted | One digit deleted

Design test cases for Railway Reservation System. (6M, W-19, S-23) ✅

Test Case Description | Input Data | Expected Result | Actual Result
To verify login field with valid credentials | Any valid login name (e.g., abcxyz) | It should accept the login name. | It accepted the login name.
To verify password field with valid credentials | Any valid password (e.g., P@ssw0rd) | It should accept the password. | It accepted the password.
To verify password field with invalid credentials | Any invalid password (e.g., 12345) | An error message should appear indicating invalid credentials. | An error message appeared indicating invalid credentials.
To verify date of journey input format | Any date (e.g., 12/25/2024) | The date format must be validated correctly (e.g., DD/MM/YYYY). | The date format was validated correctly.
To verify date of return journey input format | Any return date (e.g., 12/30/2024) | The return date format should be validated correctly (e.g., DD/MM/YYYY). | The return date format was validated correctly.
To verify boarding station input | Any station name (e.g., Mumbai Central) | The station name should be validated and accepted if valid. | The station name was validated and accepted.
To verify train number input | Any train number (e.g., 12345) | The train number must be validated and accepted if valid. | The train number was validated and accepted.

Prepare test plan for ‘Cam Scanner’ which is installed on mobile. (6M, W-22) ✅
Test Plan Identifier: TP_10
Introduction:

This document outlines the test plan for the CamScanner application installed on mobile
devices.

The primary objective of testing is to verify the correct operation of the application's
functionalities and its ease of use.

Test Items:

Scanning documents

Editing documents

PDF conversion

Features to be Tested:

Document Scanning

Document Editing

PDF Conversion

Approach:

Functional and Non-functional testing

Positive and Negative testing

Intuitive Testing (ad hoc)

Regression testing

Pass/Fail Criteria:

All critical and high-priority test cases must pass.

Minimum test coverage of 95% on all requirements.

Any critical defects must be resolved before release.

The test report will be compiled and approved by the team lead and customer.

Suspension Criteria and Resumption Requirements:

Suspension: Testing will be paused if critical defects are identified.

Resumption: Testing resumes once defects are resolved and retested.

Test Deliverables:

Test Plan

Test Cases

Test Report

Test Tasks:

Develop the test plan

Write and execute test cases

Establish success criteria for testing

Conduct testing and evaluate results

Prepare test reports

Environmental Needs:

Mobile Phone

CamScanner Installed

Responsibilities:

Functionality Responsible

Scan Document Test Engineer 1

Edit Document Test Engineer 1

PDF Conversion of Document Test Engineer 3

Staffing and Training Needs:

Knowledge of the CamScanner application

Practical application of basic test design techniques

Understanding of various testing types, including functional and non-functional testing

Schedule:

All tasks must be completed, and the project delivered by 25/01/2023 at 5:00 PM.

Risks and Contingencies:


Potential risks include:

Insufficient human resources to meet deadlines

Changes in product requirements

Approvals:

Team Lead

Test Engineer 1

Test Engineer 2


Write a program for calculating even numbers from 1 to 20 and design test cases for the same. (6M, W-22)

Java Program to calculate even numbers from 1 to 20:

public class EvenNumbers {


public static void main(String[] args) {
System.out.println("Even numbers from 1 to 20:");
for (int i = 1; i <= 20; i++) {
if (i % 2 == 0) {
System.out.println(i);
}
}
}
}

Test Cases:

Test Case Description | Input | Expected Result | Actual Result
Check initial condition of for loop | Initial value | Initial value of for loop should be 0 or 1 | Initial value of for loop is 1
Check final condition of for loop | Final condition | Final condition should be "< 20" or "<= 20" | Final condition is "<= 20"
Check the increment operator | Increment check | Increment operator should increment by 1 | Counter is incremented by 1
Check output | Output check | Even numbers should be displayed on output screen | It is displaying even numbers
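
The checks in the table above can also be automated. Below is a minimal sketch of an automated unit test in Java; it assumes JUnit 5 is available on the classpath and uses a small helper method, isEven, introduced here only for illustration (it mirrors the i % 2 == 0 condition in the program above).

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

public class EvenNumbersTest {

    // Hypothetical helper mirroring the condition used in the program above.
    static boolean isEven(int n) {
        return n % 2 == 0;
    }

    @Test
    public void boundaryValuesAreClassifiedCorrectly() {
        assertFalse(isEven(1));  // initial value of the loop is odd
        assertTrue(isEven(2));   // first even number printed
        assertTrue(isEven(20));  // final value of the loop (i <= 20) is even
    }

    @Test
    public void exactlyTenEvenNumbersBetween1And20() {
        int count = 0;
        for (int i = 1; i <= 20; i++) {
            if (isEven(i)) {
                count++;
            }
        }
        assertEquals(10, count);  // 2, 4, ..., 20
    }
}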


Design test cases for Web pages testing of any Website (take a suitable example) (6M, W-22)

Test Case Description | Input Data | Expected Result | Actual Result
Check cursor position at email field | Click on email or mobile number field | Cursor should be placed in the field | Cursor placed correctly
Check cursor position at password field | Click on password field | Cursor should be placed in the password field | Cursor placed correctly
Check the continue button functionality | Click on continue button | Should redirect to the password page | Redirected to password page
Check readability of font on login page | Visually inspect the text on the login page | Contents should be readable | Contents are readable
Verify login button functionality | Click on the login button | Should proceed to the user's account | Successfully logged in
Test hyperlink functionality | Hover over and click on a hyperlink (e.g., "Help") | Cursor changes, redirects to respective page | Cursor changed, redirection OK

Prepare six test cases for marketing site www.flipkart.com (6M, W-23) ✅
Test Case Description | Steps | Expected Result | Actual Result
To check if the username is accepted | Enter username: Abc123 | It should accept the username | It accepts the username
To verify if the password is accepted | Enter password: Co5i518 | It should accept the password | It accepts the password
To check if the home page is displayed after login | Click on login button | Home page should be displayed after login, and username shown | Home page is displayed after login, and username is shown
To verify redirection to the product specification page | Click on a product | User should be redirected to the product specification page | User redirected to product specification page
To check if the product is added to the cart | Click on add to cart | The product should be added to cart | The product added to cart
To verify if the total amount is displayed in the cart | Click on go to cart | The total amount of all items in cart should be displayed | The total amount of all items in cart is displayed
To check if the item is removed from the cart | Click on remove from cart | The item should be removed from the cart | The item is removed from the cart
To verify if the checkout page is displayed | Click on checkout button | The checkout page should be displayed with payment options | The checkout page is displayed with payment options
To check if the order details are received after payment | Complete payment process | User should get order details by message or email | User gets order details by message or email


Prepare test plan along with test cases for Edit Notepad functionality. (6M: W-19, 4M: W-23)

Test Plan:

Test Plan Identifier: TP_10

Introduction:

This document outlines the test plan for the EDIT functionality of Notepad.

The goal is to ensure the correct operation of all features and assess the application's
ease of use.

Test Items:

Document operations (selecting, cutting, copying, etc.)

Features to Be Tested:

Select All Text

Cut Text

Paste Text

Delete Text

Copy Text

Find and Replace Text

Accessing Help

Time and Date Option

Approach:

Functional and non-functional testing

Positive and negative testing

Intuitive testing (ad hoc)

Regression Testing

Pass/Fail Criteria:

All critical and high-priority test cases must pass.

Minimum test coverage of 95% on all requirements.

Any critical defects must be resolved before release.

The test report will be compiled and approved by the team lead and customer.

Suspension Criteria and Resumption Requirements:

Suspension: Testing will be paused if critical defects are identified.

Resumption: Testing resumes once defects are resolved and retested.

Test Deliverables:

Test Plan

Test Cases

Test Report

Test Tasks:

Writing the Test Plan

Creating Test Cases

Developing Success Criteria

Conducting Tests and Evaluating Results

Compiling Test Reports

Environmental Needs:

Notepad application

Computer

Windows OS

Responsibilities:

Functionality | Responsible
Select All Text | Test Engineer 1
Cut Text | Test Engineer 1
Paste Text | Test Engineer 1
Copy Text | Test Engineer 1
Find Text | Test Engineer 2
Replace Text | Test Engineer 2
Delete Selected Text | Test Engineer 2

Staffing and Training Needs:


To perform testing, team members must have:

Knowledge of Notepad functionality

Practical application of test design techniques

Understanding of functional and non-functional testing types

Schedule:
All tasks and project delivery are due by 06/12/2019 at 5:00 PM.

Risks and Contingencies:


Possible risks include:

Insufficient human resources to meet deadlines

Changes in product requirements

Approvals:

Team Lead

Test Engineer 1

Test Engineer 2

Test Engineer 3

Test Engineer 4

Test Cases:

Test Case Description | Steps | Expected Result | Actual Result
Test the Select All option | Click on "Select All" | All text should be selected | All text is selected
Test the Cut option | Select the text and click on "Cut" | Selected text should be cut | Selected text is cut
Test the Paste option | Click on "Paste" | Contents should be pasted | Contents are pasted
Test the Delete option | Select the text and click on "Delete" | Contents should be deleted | Contents are deleted
Test the Undo option | Click on "Undo" | Last action should be undone | Last action undone
Test the Redo option | Click on "Redo" | Last action should be redone | Last action redone


With respect to GUI testing write the test cases for Amazon/Flipkart login form. (6M, W-19,
S-22)

Test Case Description | Input data | Expected Result | Actual Result
Check cursor position at email field | Click on email or mobile number field | Cursor should be placed in the field | Cursor placed correctly
Check cursor position at password field | Click on password field | Cursor should be placed in the password field | Cursor placed correctly
Check the continue button functionality | Click on continue button | Should redirect to the password page | Redirected to password page
Check readability of font on login page | Visually inspect the text on the login page | Contents should be readable | Contents are readable
Verify login button functionality | Click on the login button | Should proceed to the user's account | Successfully logged in
Test hyperlink functionality | Hover over and click on a hyperlink (e.g., "Help") | Cursor changes, redirects to respective page | Cursor changed, redirection OK

Write important six test cases for the ‘Login Form’ of the Facebook website. (6M, W-23) ✅
Test Case Description | Input Data | Expected Result | Actual Result
Username field is left blank | Leave username blank | It should display ‘Enter Username’ | It displays ‘Enter Username’
Enter invalid username | abc | It should prompt ‘Couldn’t find your account’ | It prompts ‘Couldn’t find your account’
Enter valid username and invalid password | Username: abc123, Password: 123 | It should display ‘Wrong password’ message | It displays ‘Wrong password’ message
Enter valid username and no password | Username: abc123, Password: N/A | It should display ‘Enter password’ | It displays ‘Enter password’
Enter valid username and password | Username: abc123, Password: co5i22518 | It should display the user's Facebook account page | It displays the user's Facebook account page
Click on ‘Forgotten password?’ | Click ‘Forgotten password?’ | It should go to the Find your account page | It goes to the Find your account page


Prepare test cases for College Admission form / Design test cases for Hostel admission form
(Both questions have same answer) (6M, W-19)

Test Case Description | Input Data | Expected Result | Actual Result
Submit admission form with valid data | Name: Jay Kadlag, DOB: 11/09/2004, Email: jkcool11@gmail.com | Form submitted successfully and confirmation message displayed | Confirmation message displayed
Submit form with missing mandatory fields | Name: (Blank), DOB: 01/01/2004 | Error message should appear for missing mandatory fields | Error message displayed
Submit form with invalid email | Email: jkcool[at]example | Error message should appear for invalid email format | Error message displayed
Submit form with age below the eligible limit | DOB: 01/01/2008 | Error message should appear for age restriction | Error message displayed
Submit form with valid data but optional fields blank | Name: Jay Kadlag, Email: jkcool11@gmail.com | Form submitted successfully, optional fields should not affect submission | Form submitted successfully
Submit form with special characters in name field | Name: @Jay!Kadlag | Error message should appear for invalid characters in name field | Error message displayed

Chapter 4 - Defect Management (12 Marks)


Define Defect. (2M, W-19, S-24) ✅
A defect is an error or bug found in the application.

While designing and building the software, a programmer may make mistakes or errors.

These mistakes result in flaws within the software, which are called defects.

Write any two root causes of defect. (2M, W-23) ✅


1. Miscommunication of requirements introduces errors in code.

2. Lack of design experience leads to poor system design.

3. Lack of coding practice results in inefficient or incorrect code.

4. Unrealistic time schedule for development causes rushed work, leading to defects.

5. Multiple changes in the requirements lead to confusion and inconsistencies in the final
product.

State any four defect reporting guidelines. (2M, S-22) ✅


1. Clear Description: Provide a clear and concise description of the defect, including steps
to reproduce it.

2. Environment Details: Mention the software version, operating system, hardware, and other
relevant environment details where the defect was found.

3. Severity and Priority: Specify the severity (impact on functionality) and priority
(urgency to fix) of the defect.

4. Expected vs. Actual Results: Clearly state what was expected to happen and what actually
happened, highlighting the discrepancy.

Give Defect classification and its meaning. (2M, W-22, S-24) ✅


1. Requirement/Specification Defects:

These defects arise when the product doesn't meet the customer's needs. They can occur
due to:

Customer Gap: The customer is unable to clearly define their requirements.

Producer Gap: The development team fails to create the product according to the
requirements.

2. Design Defects:

Design defects occur when the system components, their interactions, or their
connection to external software/hardware are poorly designed.

This includes mistakes in how the design is created or used.

3. Coding Defects:

These defects arise from improperly initialized or declared variables, or issues with
database creation.

Good coding also requires proper comments to ensure the code is readable and
maintainable.

4. Testing Defects:

These include problems with test cases and procedures, such as missing, incomplete, or
incorrect tests.

Draw Defect Prevention Process Cycle. State working of every phase. (4M, S-22) ✅

1. Identify Critical Risks:

Start by identifying the critical risks that could affect the project or system.

These risks can include defects that might occur during the project.

Recognizing these risks is the first step in the detection process.

Examples of Risks:

Missing important requirements

Poor performance of the system

Incompatibility between hardware and software

Critical software applications not functioning correctly

Delayed hardware delivery

Challenges with installing new hardware

Users not being able to participate in the project

2. Estimate Expected Impact:

For each critical risk, evaluate the financial consequences if the risk becomes a
problem.

The expected impact can be calculated using the formula: E = P × I

Where:

E = Expected impact of the risk

P = Probability of the risk occurring

I = Financial impact in dollars if the risk occurs
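
Example (illustrative figures only): if a risk has a probability of P = 0.10 of occurring and a financial impact of I = $50,000 if it does occur, its expected impact is E = 0.10 × $50,000 = $5,000.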

3. Minimize Expected Impact:

After identifying the risks, focus on eliminating them wherever possible.

For risks that cannot be fully eliminated, work to decrease the chances of them
becoming issues.

Three Strategies for Minimizing Impact:

Eliminate the risk completely

Reduce the probability of the risk occurring

Lower the impact if the risk does become a problem

Explain Defect Management Process with suitable diagram. (4M, W-23, S-23, S-24) ✅

1. Defect Prevention:

Defect Prevention is about improving the quality and productivity of a software product by preventing defects before they occur.

While it’s nearly impossible to eliminate all defects, the goal is to reduce the risk
of defects.

This is achieved through the implementation of techniques, methodologies, and standard processes.

The focus is on removing the possibility of defects before they happen.

2. Deliverable Baseline:

Once a defect is fixed, retested, and confirmed as closed, the product is recreated.

If the newly created product meets the acceptance criteria, it is baselined.

Only baselined work products can move to the next stage of the process.

3. Defect Discovery:

A defect is considered discovered when it is brought to the attention of the developers and is acknowledged as a valid defect.

The team should identify defects early before they become major issues.

Once a defect is found, it should be reported for resolution. Developers should also
acknowledge the defect as valid.

4. Defect Resolution:

The development team works to prioritize, schedule, and fix the defect, documenting
the resolution process.

This also includes notifying the tester to verify that the defect has been properly
resolved.

5. Process Improvement:

All defects are a result of failures in the processes used to create software.

Defects provide an opportunity to identify and address issues with the processes,
leading to improvements.

Better processes lead to better software products with fewer defects.

6. Management Reporting:

Analysis and reporting of defect information help management with tasks like risk
management, process improvement, and project management.


Enlist any four attributes of defect. Describe them with suitable example. (4M: W-23, 2M: S-23)

1. Defect ID:

Identifies the defect as there might be many defects in the system.

Example: D1, D2, etc.

2. Defect Name:

Name of the defect that explains it briefly.

It must be short but descriptive.

Example: Login error.

3. Project Name:

Indicates the project name where the defect is found.

Example: Library Management System.

4. Module / Sub-module Name:

The module or sub-module where the defect is found.

Example: Login form.

5. Severity:

Declared in the test plan.

Example: High, Medium, or Low.

6. Priority:

Defines how the project schedules the defects for fixing.

Example: High, Low, Moderate.

7. Summary:

Provides a short description of the defect.

8. Description:

Describes the defect in detail.

9. Status:

A dynamic field showing the current status of the defect.

Examples: Open, Assigned, Resolved, Closed, Hold, Deferred, Reopened, etc.

10. Reported by / Reported on:

Indicates who found the defect and when it was reported.

11. Assigned to:

Specifies the tester or team member assigned to fix the defect.
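
In a defect-tracking tool, these attributes are usually stored as fields of a defect record. The sketch below is only an illustration in Java; the class and field names are assumptions, not the schema of any particular tool.

// Illustrative defect record holding the attributes listed above.
public class Defect {
    String defectId;      // e.g., "D1"
    String defectName;    // e.g., "Login error"
    String projectName;   // e.g., "Library Management System"
    String moduleName;    // e.g., "Login form"
    String severity;      // High / Medium / Low
    String priority;      // High / Moderate / Low
    String summary;       // short description of the defect
    String description;   // detailed description of the defect
    String status;        // Open, Assigned, Resolved, Closed, Deferred, Reopened, ...
    String reportedBy;    // who found the defect
    String reportedOn;    // when it was reported
    String assignedTo;    // tester or developer assigned to fix it
}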

Prepare defect report after executing test cases for any Login form. (4M, W-22) ✅
ID: F1

Project: Facebook Login System

Product: http://www.facebook.com

Release Version: v1.0

Module: Home Page > Login

Detected Build Version: v1.1

Summary: Login fails with valid credentials, showing an error message.

Description: Users are unable to log in with valid credentials. The system displays an
error message despite entering correct login details.

Steps to Replicate:

1. Open the Facebook website.

2. Enter a valid username and password in the login fields.

3. Click on the "Login" button.

Expected Results: The user should be successfully logged in and redirected to their
account's home page.

Actual Results: The system displays an error message saying "Invalid username or
password," even though the entered credentials are correct.

Attachments: Screenshot of the error message.

Remarks: This causes significant user inconvenience as they cannot access their accounts
even with correct credentials.

Defect Severity: Critical

Defect Priority: High

Reported By: Test Engineer 1

Assigned To: XYZ

Status: Assigned


State different techniques for finding defect and describe any one with example. (4M, W-19)

There are different techniques to find defects in software, including:

1. Static Techniques:

Static techniques check the software and related documents without running them. This
process is also known as desk checking, verification, or white box testing.

It includes activities like reviews, walkthroughs, inspections, and audits.

Reviewers use checklists, standards, and their own knowledge to find defects based on
set criteria.

It's called "static" because no code or product is executed. This technique ensures
the product meets the requirements.

Example: Reviewing code for proper formatting and naming conventions without actually
running it.

2. Dynamic Techniques:

Dynamic testing is a validation technique that involves running the software (or parts
of it) to compare its behavior with expected outcomes.

It includes black box testing methods like system testing and unit testing.

These tests evaluate if the product meets the requirements and design specifications,
marking it as "pass" or "fail".

Example: Running a login feature to check if a user can successfully log in with valid
credentials and fails with invalid ones.

3. Operational Techniques:

Operational techniques involve checking whether the processes used for development and
testing are being followed properly and are effective.

This includes auditing work products and revisiting defects before and after fixing.

It may also involve quick tests like smoke testing and sanity testing to check if the
product works as expected.

Example: Performing smoke testing to ensure the basic functionalities of an app work
after a new feature is added, like checking if the homepage loads without errors after
a minor update.

Describe Defect Life Cycle with neat diagram. (6M, W-23) ✅

1. New: This is the initial state when a defect is posted for the first time.

2. Assigned: After a tester posts a bug, the tester’s lead checks that the bug is valid and
assigns it to the appropriate developer or development team.

3. Open: In this state, the developer has started to analyze and work on fixing the defect.

4. Test/Retest: At this stage, the tester retests the updated code provided by the developer
to see if the defect has been fixed.

5. Deferred: A defect is marked as deferred if it will be fixed in a future release. Reasons include low priority, time constraints, or minimal impact on the software.

6. Rejected: If the developer believes that the defect is not valid, they can reject it,
changing its status to "rejected".

7. Reopen/Reassigned: If the defect still exists after the developer has claimed to fix it,
the tester can change the status to "reopened", and the defect will go through the
lifecycle again.

8. Verified: After retesting the defect and confirming it has been fixed, the tester changes
the status to "verified".

9. Closed: Once the defect is fixed and tested by the tester, if everything is satisfactory,
the status is changed to "closed". This means the defect is resolved, tested, and
approved.
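
As an illustration only (not part of any standard), the states above can be represented as a simple Java enumeration:

// Illustrative enumeration of the defect life cycle states described above.
public enum DefectStatus {
    NEW,        // defect posted for the first time
    ASSIGNED,   // validated by the lead and assigned to a developer
    OPEN,       // developer is analysing and fixing the defect
    TEST,       // fix delivered; tester retests the updated code
    DEFERRED,   // fix postponed to a future release
    REJECTED,   // developer considers the defect invalid
    REOPENED,   // defect still exists after the claimed fix
    VERIFIED,   // tester confirms the fix works
    CLOSED      // defect resolved, tested, and approved
}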

Prepare defect report for Library Management System. (With Test cases). (6M, S-24) ✅
Defect Report for Library Management System:

ID: L1

Project: Library Management System

Product: http://www.librarysystem.com

Release Version: v2.0

Module: Home Page > Borrow Books

Detected Build Version: v2.1

Summary: Unable to borrow books, error message displayed when trying to borrow more than
3 books.

Description: Users are unable to borrow more than 3 books at a time due to an error
message being displayed.

Steps to Replicate:

1. Open the Library Management System website.

2. Log in with a valid user account.

3. Navigate to "Borrow Books" under the main menu.

4. Select 4 or more books and click "Borrow."

Expected Results: Users should be able to borrow up to the allowed limit (e.g., 5 books),
or the system should display an appropriate error message when the limit is exceeded.

Actual Results: The system displays an error message and blocks the borrowing process for
more than 3 books.

Attachments: Screenshot of the Error message produced.

Remarks: Causes inconvenience as users are unable to borrow up to the maximum allowed
books.

Defect Severity: High

Defect Priority: High

Reported By: Test Engineer 1

Assigned To: XYZ

Status: Assigned

Test Cases for Library Management System:

Test Case Description | Actual Input | Expected Output | Actual Output
Verify user login with valid credentials | 1. Enter Username: "22203A0011" 2. Enter Password: "sandip@1234" | Login should be done successfully. | Login is done successfully.
Verify user login with invalid credentials | 1. Enter Username: "22203A0011" 2. Enter Password: "wrongpassword" | It should display message "Invalid Username or Password." | It is displaying message "Invalid Username or Password."
Issue a book with a valid name | Enter Book Name: "Data structure using C" | 1. Book should be issued to student. 2. Book marked as borrowed. | 1. Book is issued to student. 2. Book is marked as borrowed in the database.
Validate error for invalid book name | Enter Book Name: "123abcd" | It should display message "Enter Valid Book Name." | It is displaying message "Enter Valid Book Name."
Check message for unavailable book | Enter Book Name: "Data structure and algorithm" | It should display message "Book not available." | It is displaying message "Book not available."
Return a borrowed book | Return a borrowed book. | It should mark Book as returned in student and library database. | It is marking Book as returned in student and library database.
Calculate fine for late return | Return a book after the due date. | It should display calculated fine in student login. | Fine calculated and displayed in student login.
Send notification after borrowing | 1. Enter Book Name: "Data structure and algorithm" 2. Collect book from librarian. | Student should receive notification for successfully borrowed books. | Student receives notification for successfully borrowed books.
Check book availability status | Enter Book Name: "Operating System Concepts" | It should display the availability status of the book (available/not available). | It is displaying the availability status of the book (available/not available).


Prepare defect report after executing test cases for withdraw of amount from ATM Machine.
(6M, W-22)

ID: R1

Project: Cash Simulator (ATM)

Product: http://www.motc.gov.qa/en/ditoolkit/cash-machine-simulator-atm

Release Version: v1.0

Module: Home Page > Our Programs > Digital Inclusion Tools

Detected Build Version: v1.1

Summary: Limited options for cash withdrawal denominations, restricting withdrawals to a maximum of 3000.

Description: There is no option to withdraw more than 3000.

Steps to Replicate:

1. Open the website.

2. Select "Our Programs."

3. Go to "Digital Inclusion Tools" and select "Cash Machine Simulator (ATM)."

4. Choose a language and skip to the simulator.

5. Insert the card.

6. Select the account type.

7. Go to "Other Functions" and select "Cash Withdrawal."

Expected Results: The withdrawal function should offer more denomination options, or it should accept the amount as input from the user.

Actual Results: The simulator displays only limited options for cash withdrawal
denominations.

Attachments:

Remarks: Causes inconvenience to the user in terms of limited cash withdrawal options.

Defect Severity: High

Defect Priority: High

Reported By: Test Engineer 1

Assigned To: XYZ

Status: Assigned


Draw diagram for Defect Life Cycle and write example for Defect Template. (6M, W-19, S-24)

Defect Life Cycle:

Defect Template Example:

ID: R1

Project: Cash Simulator (ATM)

Product: http://www.motc.gov.qa/en/ditoolkit/cash-machine-simulator-atm

Release Version: v1.0

Module: Home Page > Our Programs > Digital Inclusion Tools

Detected Build Version: v1.1

Summary: Limited options for cash withdrawal denominations, restricting withdrawals to a
maximum of 3000.

Description: There is no option to withdraw more than 3000.

Steps to Replicate:

1. Open the website.

2. Select "Our Programs."

3. Go to "Digital Inclusion Tools" and select "Cash Machine Simulator (ATM)."

4. Choose a language and skip to the simulator.

5. Insert the card.

6. Select the account type.

7. Go to "Other Functions" and select "Cash Withdrawal."

Expected Results: The withdrawal function should offer more denomination options, or it should accept the amount as input from the user.

Actual Results: The simulator displays only limited options for cash withdrawal
denominations.

Attachments:

Remarks: Causes inconvenience to the user in terms of limited cash withdrawal options.

Defect Severity: High

Defect Priority: High

Reported By: Test Engineer 1

Assigned To: XYZ

Status: Assigned

Chapter 5 - Testing Tools and Management (12 Marks)


Enlist any four software testing tools. (2M, W-23, S-22) ✅
1. Selenium

2. TestComplete

3. LoadRunner

4. Cucumber

5. QuickTest Professional (QTP)

6. Cypress

State any four advantages of using tools. (2M, W-19) ✅


1. Time-Saving: Automated testing tools can execute tests faster than humans, allowing
testers to focus on other important tasks.

2. Consistency: Tests can be repeated exactly the same way every time, reducing human errors
like forgetting steps or making mistakes, which helps identify defects accurately.

3. Simulated Testing: Tools can create multiple virtual users or data sets, allowing for
effective testing in a controlled environment before the product is released.

4. Test Case Design: Automated tools can help design test cases, ensuring better coverage
compared to manual testing.

5. Reusability: Automated tests can be reused across different software versions, even if
the user interface changes.

6. Error Reduction: Automation reduces the likelihood of human errors that can occur during
manual testing.

7. Internal Testing: Tools can easily check for issues like memory leaks and test coverage.

8. Cost Reduction: By speeding up the testing process, automation can lower overall costs
associated with software development.

State the Need of Automated testing tools. (2M: W-22, S-24, 4M: S-22) ✅
An automated testing tool can replay recorded actions, compare results to expected
behavior, and report success or failure to a test engineer.

Once automated tests are created, they can be easily repeated and extended to perform
tasks that manual testing can't handle.

Automated software testing saves time and money.

Once automated tests are set up, they can be run repeatedly at no extra cost and are much
faster than manual tests.

Automated testing improves accuracy, as even careful testers can make mistakes during
tedious manual testing.

Automated tests perform the same steps precisely every time and always record detailed
results. They can also run on multiple computers with different setups.
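
For example, a scripted login check of the kind described above might look as follows with Selenium WebDriver in Java. This is only a sketch under assumptions: the URL, the element locators (username, password, loginButton) and the expected page title are made up for illustration, and a ChromeDriver must be installed on the test machine.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginAutomationSketch {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();   // assumes ChromeDriver is installed
        try {
            driver.get("https://www.example.com/login");              // hypothetical URL
            driver.findElement(By.id("username")).sendKeys("abc123"); // hypothetical locator
            driver.findElement(By.id("password")).sendKeys("co5i518");
            driver.findElement(By.id("loginButton")).click();

            // Compare the actual result with the expected behaviour and report it.
            if (driver.getTitle().contains("Home")) {
                System.out.println("PASS: user was logged in and reached the home page");
            } else {
                System.out.println("FAIL: login did not reach the home page");
            }
        } finally {
            driver.quit();   // always release the browser
        }
    }
}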

Give any four differences between manual and automated testing. (S-24) ✅
Manual Testing | Automation Testing
Test cases are executed manually. | Test cases are executed with the help of tools.
Time required to execute test cases is high. | Time required to execute test cases is low.
Initial investment required for manual testing is lower. | Initial investment for automation testing is higher.
Manual testing may include human errors. | Automation testing is more accurate with tools/scripts.
It provides human observation to assess user experience. | It cannot guarantee user friendliness or experience.
Suitable for almost any software product. | Suitable only for stable systems and mainly for regression.

State and explain any four benefits of Automation in testing. (4M, W-23) ✅
1. Reduces Testing Time:

Software needs to be tested repeatedly during development, especially after code changes or for new releases.

Manual testing is time-consuming and expensive.

Automated tests, once created, can run repeatedly at no extra cost and much faster
than manual tests.

2. Improves Accuracy:

Manual testing can lead to mistakes, especially during repetitive tasks.

Automated tests perform tasks with consistent precision, reducing human errors.

3. Delivers High-Quality Products:

Automated testing ensures thorough testing without relying on individual testers’ experience, and it avoids manual errors.

It runs steps accurately, ensuring consistent results and unbiased reporting.

4. Allows Testing with Different Data and Configurations:

Automated tests can run multiple times with various inputs and across different
devices or configurations.

Thousands of test cases can be executed in a single run, which is not feasible with
manual testing.

5. Saves Resources:

Manual regression testing can be tedious and time-consuming.

Automated testing tools create reusable test scripts, eliminating the need to write or
execute tests manually, saving both time and effort.

State eight limitations of Manual Testing. (4M, W-19, W-22, W-23, S-22, S-23, S-24) ✅
1. Slow and Expensive: It takes a lot of time and costs more.

2. Labor-Intensive: Testing manually requires a lot of effort and takes a long time to
finish.

3. Doesn't Scale Well: As the software becomes more complex, testing becomes much harder and
takes even more time and money.

4. Inconsistent Results: Manual tests can vary because different testers may perform the
same test in different ways, leading to inconsistent results.

5. Lack of Training: Many testers may not be properly trained, which affects the quality of
testing.

6. Difficult to Spot Small GUI Issues: It’s hard to notice small differences in object sizes
and color combinations manually.

7. Not Suitable for Large or Time-Sensitive Projects: Manual testing is inefficient for big
or urgent projects.

8. No Batch Testing: Every test needs human interaction; there’s no way to automate multiple
tests at once.

9. Hard to Compare Large Data Sets: It’s impractical to manually compare large amounts of
data.

10. Slow Maintenance Changes: Processing changes during software maintenance takes more time
with manual testing.


Enlist factor considered for selecting testing tool for Test Automation. (4M, W-19, W-22, W-
23, S-22, S-23)

When selecting a tool, it's important to consider the following factors:

1. Assess the organization’s readiness for change: Check if the organization is prepared to
adapt new tools and processes.

2. Identify areas where the tool can improve testing processes: Figure out which parts of
the organization will benefit from tool support to make testing better.

3. Evaluate tools based on clear requirements and criteria: Compare tools against specific
needs and objective standards.

4. Do a proof-of-concept: Test the tool to see if it works as expected and meets the goals
set for it.

5. Evaluate the vendor or open-source support: Check the quality of vendor services, such as
training and support, or the community support if it's an open-source tool.

6. Plan for internal implementation: Make a plan for introducing the tool, including
training and mentoring for those who are new to it.

Describe Object Oriented Metrics in testing. (4M, W-19) ✅


1. Source Code Size Metrics:

Lines of Code (LOC): Measures the overall system size, indicating the amount of work
completed. Used to estimate time and costs.

Effective Lines of Code (eLOC): Measures only the lines of code that contain actual
code, excluding comments and blank lines, providing a better idea of the work done.

Comment Line and Comment Percent: Measures the amount of comments in the code. Proper
commenting is essential for easier maintenance. A minimum of 20% comments is
recommended.

Blank Line and White Space Percent: Measures blank lines in code, which affects
readability.

File Count Metric: Counts files based on file extensions, showing the distribution of
source code types.

2. Procedural Metrics:

Cyclomatic Complexity: Measures the number of decision points (like “if”, “for”, “while”) in a function. It helps identify areas that need inspection, redesign, or more testing (a short illustrative sketch follows this list).

3. Class Metrics:

Class Volume: Measures the amount of information within a class, using the number of
variables and methods.

Average LOC per Class/Method: Measures the size of a class or method, helping to
understand system complexity.

Method Metrics:

Number of Parameters per Method: Measures the number of parameters passed to a method.

Weighted Methods per Class: Measures complexity by considering the methods in a class.

Maximum Nesting Level: Measures how deep the nesting of statements is within a
method.

Method Rank: Measures the importance of a method within a class.

Coupling Metrics:

Afferent Coupling: Measures how many other methods depend on a given method.

Efferent Coupling: Measures how many methods a given method depends on. Keeping
this low improves stability.

4. Inheritance Metrics:

Height of Inheritance Tree: Measures how deep a class is in the inheritance hierarchy.
A deeper hierarchy means more inherited methods, increasing complexity.
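
As referenced under Cyclomatic Complexity above, the short Java sketch below (an illustrative example, not from the syllabus) has three decision points (the for loop and two if statements), giving a cyclomatic complexity of 3 + 1 = 4.

public class ComplexityExample {

    // Three decision points (for, if, if) → cyclomatic complexity = 3 + 1 = 4
    public static int sumOfPositiveEvens(int[] values) {
        int sum = 0;
        for (int v : values) {       // decision point 1
            if (v > 0) {             // decision point 2
                if (v % 2 == 0) {    // decision point 3
                    sum += v;
                }
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumOfPositiveEvens(new int[] {1, 2, -4, 6}));  // prints 8
    }
}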

State advantages and disadvantages of using Tools. (4M, W-22) ✅


Advantages of Using Testing Tools:

1. Time-Saving: Automated testing tools can execute tests faster than humans, allowing
testers to focus on other important tasks.

2. Consistency: Tests can be repeated exactly the same way every time, reducing human errors
like forgetting steps or making mistakes, which helps identify defects accurately.

3. Simulated Testing: Tools can create multiple virtual users or data sets, making it
possible to test in a controlled environment before the product is released.

4. Better Test Case Design: Automated tools can help design test cases, ensuring better
coverage compared to manual testing.

5. Reusability: Automated tests can be reused across different software versions, even if
the user interface changes.

6. Error Reduction: Automation helps to minimize human errors that can occur during manual testing.

7. Internal Testing: Tools can easily check for issues like memory leaks and test coverage.

8. Cost Reduction: By speeding up the testing process, automation can reduce overall costs
of software development.

Disadvantages of Using Testing Tools:

1. Unrealistic Expectations: People may expect tools to perform miracles without understanding their limitations.

2. Misjudged Efforts: Underestimating the time and resources needed to implement tools can
lead to frustration.

3. Maintenance Challenges: Maintaining automated tests can require more effort than
expected.

4. Over-Reliance: Relying too much on tools can lead to neglecting important manual testing
practices.

Elaborate term Metrics and Measurement and write need of Software Measurement. (6M, W-19) ✅
A metric is a measurement of the degree that any attribute belongs to a system, product,
or process.

For example, counting the number of errors per hour of work is a metric.

A measurement indicates the size, quantity, or amount of a specific attribute in a product or process.

For example, counting the total number of errors in a system is a measurement.

A metric provides a quantitative measure of how well a system, its components, or a process meets certain attributes.

Metrics are often referred to as "standards of measurement".

Software Metrics are specific metrics used to evaluate the quality of a software project.

Simply, a metric is a unit that describes a specific attribute, functioning as a scale for measurement.

Need for Software Measurement:

1. To determine the quality of the current product or process.

2. To predict future qualities of the product or process.

3. To improve the quality of a product or process.

4. To determine the state of the project in relation to budget and schedule.
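
As a small illustration of how measurements become a metric, the Java sketch below computes defect density (defects per thousand lines of code); the figures are assumed values used only for demonstration.

public class DefectDensityExample {
    public static void main(String[] args) {
        // Measurements (assumed values for illustration only)
        int totalDefects = 45;    // measurement: total defects found in the system
        int linesOfCode = 15000;  // measurement: size of the product in LOC

        // Metric: defects per thousand lines of code (KLOC)
        double defectDensity = totalDefects / (linesOfCode / 1000.0);
        System.out.println("Defect density = " + defectDensity + " defects per KLOC"); // 3.0
    }
}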

Design test cases for MS word application using an Automation tools. (6M, W-22) ✅
Test Case Description | Input Data | Expected Result | Actual Result
Check whether Undo in Edit main menu undoes the previous action | Perform any action, then click Undo in the Edit menu | Previous action should be undone | Previous action was undone
Check whether the Undo button in right-click context menu undoes the previous action | Perform any action, then right-click and click Undo | Previous action should be undone | Previous action was undone
Check whether the Undo button in the Edit main menu is disabled when there are no previous actions | Open the application without performing any action, then check the Undo button | Undo Button should be disabled | Undo Button was disabled
Check whether the Undo button in right-click context menu is disabled when there are no previous actions | Open the application without performing any action, then right-click and check the Undo button | Undo Button should be disabled | Undo Button remained disabled
Check whether hotkey (CTRL+Z) responds when there are no previous actions | Open the application without performing any action, then press CTRL+Z | No response is expected | No response
Check whether the Cut option in Edit main menu cuts the selected text | Select some text, then click Cut in the Edit menu | Selected text should be cut | Selected text was cut
Check whether the Cut option in Edit Menu is disabled when no text is selected | Open the application without selecting any text, then check the Cut option | Cut Option should be disabled | Cut Option was disabled

The above test cases will be executed on automation tools like AutoIT, QTP, etc.
