
Chapter 1: Introduction to Test Case Design - Long Answer Questions (4 Marks Each)

1. What is a test case? How to design it? Which techniques are used for designing a test
case?

A test case is a set of conditions, inputs, actions, and expected results developed to verify a
particular functionality of a software application. It ensures that the software performs
according to the requirements.

To design a test case, one must:

- Define test inputs
- Set entry and exit criteria
- Describe the execution steps
- Identify the expected outcomes

Test case design techniques are used to systematically develop test cases:

1. Specification-based (Black-box testing): Derived from requirements and functional specifications.
2. Structure-based (White-box testing): Based on the internal structure/code of the software.
3. Experience-based techniques: Rely on the tester’s experience with similar systems.

These techniques aim to ensure test reliability, validity, and effectiveness.

2. What is software testing? Why is it needed? Also explain its purpose.

Software testing is the process of executing a software system to detect bugs, errors, or
defects. It involves validating that the application meets specified requirements and functions
correctly.

Why it's needed:

- To identify defects early
- To ensure quality and reliability
- To meet client requirements
- To prevent failures in the production environment

Purpose of software testing:

- To validate that the software performs as intended.
- To verify that it meets the defined criteria.
- To uncover errors such as syntax errors, logic errors, or defects.
- To enhance user satisfaction and system performance.

3. What is a test tool? What is its purpose? Also give names of popular testing tools.

A test tool is a software application that aids testers in executing software testing tasks such
as planning, executing, reporting, and tracking defects.

Purpose:

- Enhances speed, accuracy, and efficiency
- Supports automation, defect tracking, and test management
- Ensures reliability in repetitive testing

Popular testing tools:

- Jira – Issue and project tracking
- Bugzilla – Defect tracking and test management
- Redmine – Web-based project management
- MantisBT – Bug tracking in PHP
- Backlog, BugNet, Trac – Other defect tracking tools

4. Which are the different features to be considered while doing software testing?
Explain.

Important features to consider in software testing include:

1. High Probability of Detecting Errors: Testers should anticipate where the software
may fail, e.g., divide-by-zero situations.
2. No Redundancy: Avoid repetitive test cases to optimize time and resources.
3. Choose the Most Appropriate Test: Prioritize tests that are more likely to reveal
defects.
4. Moderate Complexity: Tests should be simple yet effective, avoiding
overcomplication or oversimplification.

These features help ensure comprehensive coverage and quality assurance during the testing
process.

5. Draw and explain in short, IEEE 829 Test Case Specification Template outline.

The IEEE 829 test case specification template includes:

- Test Case Specification Identifier – Unique ID for the test case
- Test Items – Features to be tested
- Input Specifications – Data values, states, timing
- Output Specifications – Expected outcomes
- Environmental Needs – Hardware/software required
- Special Procedural Requirements – Setup/teardown procedures
- Inter-Case Dependencies – Linkages with other test cases

This structure ensures that test cases are standardized, reusable, and traceable.

6. Explain the various fields of a basic test case template in Excel that can be used for
either manual or automated testing.

A basic Excel test case template includes:

- Test Case ID – Unique identifier (e.g., TC_UI_01)
- Test Description – Brief statement of the test's purpose
- Preconditions – Requirements before test execution
- Execution Steps – Step-by-step instructions
- Expected Result – Anticipated outcome
- Actual Result – Observed outcome
- Status – Pass/Fail
- Comments – Notes on results or blockers

Additional fields may include Test Priority, Designed By, Execution Date, etc.

This format facilitates organized, traceable testing documentation.

7. Explain Entry Criteria by giving an example.

Entry criteria are the conditions that must be fulfilled before testing can commence.

Example: In the SpeedyWriter application,

- Bug tracking system must be in place
- Test environment must be configured
- Code must be unit-tested
- Fewer than 50 “must-fix” bugs should be open

These criteria ensure that the software is stable and ready for the next phase of testing.

8. Which are different bug tracking tools? Explain any two.

Bug tracking tools help in identifying, recording, and managing software bugs.

1. Jira:

- Commercial issue and project tracking tool
- Supports recording, reporting, and workflow management
- Integrates with GitHub
- Allows custom workflows and advanced search

2. Bugzilla:

- Widely used, open-source tool
- Supports multiple OSes such as Linux, Windows, and Mac
- Offers email notifications, advanced search, and strong security

These tools streamline the defect management lifecycle.

9. Elaborate Exit Criteria with an example.

Exit criteria define when testing activities can be concluded.

Example: In the SpeedyWriter system test:

- No crashes/stoppages occurred in 3 weeks
- All “must-fix” bugs are resolved
- All test cases have been executed
- Project management agrees that the product meets expectations

This ensures that the product is stable, reliable, and ready for deployment.

10. How to identify errors and bugs in a given application?

To identify errors and bugs:

- Understand types of errors: syntax errors (code typos), logic errors (wrong logic), missing/extra code
- Understand bugs as flaws causing incorrect behavior
- Use bug tracking tools like Jira or Bugzilla
- Track deviations between expected and actual outcomes
- Classify defects: Functional, Logical, etc.
- Use structured test cases for identification

This helps ensure early detection and resolution of issues.
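
For instance, the short sketch below (a hypothetical example, not from the original notes) contrasts a syntax error, which the compiler catches, with a logic error, which only testing reveals:

// Syntax error: missing semicolon - rejected by the compiler.
// int total = 0

// Logic error: compiles and runs, but returns the wrong result.
int average(int a, int b) {
    return a + b / 2;   // defect: only b is divided by 2; should be (a + b) / 2
}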

11. How to design test cases in MS Excel? Describe with example.

Designing test cases in Excel:

- Use rows/columns to document test data
- Include fields like Test Case ID, Description, Steps, Input, Expected Result, Actual Result, Status
- Use formulas for repetitive entries
- Maintain one workbook with multiple sheets (test conditions, data, environment)

Example: For Facebook login:

- Input: Username and password
- Expected Output: Redirect to user homepage
- Actual Output: Login success/failure
- Status: Pass or Fail

Excel allows easy documentation, updates, and import to test management tools.

Chapter 2: Test Cases for Simple Programs – Long Answer Questions

1. What is meant by test case design techniques? Enlist them.

Test case design techniques are systematic approaches used to create test cases that validate
whether a software system behaves according to its specifications and requirements. The goal
of test case design is to ensure comprehensive testing with maximum effectiveness and
efficiency, by identifying the minimal number of test cases that provide maximum coverage.

These techniques help uncover defects in the software by selecting appropriate input values
and predicting the expected output, ensuring that both valid and invalid scenarios are
covered.

Types of test case design techniques:

1. Specification-Based Techniques (Black-box Testing):
   - These are based on system requirements and functionality, without considering internal code.
   - Examples: Equivalence Partitioning, Boundary Value Analysis, Decision Table Testing.
2. Structure-Based Techniques (White-box Testing):
   - These require knowledge of the internal code structure.
   - Examples: Statement Coverage, Branch Coverage, Path Coverage.
3. Experience-Based Techniques:
   - These depend on the experience, intuition, and domain knowledge of the tester.
   - Examples: Error Guessing, Exploratory Testing.

These techniques are applied based on the type of application, risk involved, and test
objectives.

2. What is decision coverage testing? Explain with an example.

Decision coverage testing, also known as branch coverage testing, is a white-box testing
technique where all possible decision outcomes in the program (true and false) are tested at
least once.

This ensures that every if condition or branching logic in the code is tested for both true and
false outcomes.

Example:

if (x < 20)
    System.out.println("x is less than 20");
else
    System.out.println("x is 20 or more");

Test Cases for 100% Decision Coverage:

- Test Case 1: x = 10 → Condition is true → Executes first statement.
- Test Case 2: x = 30 → Condition is false → Executes else part.

Both outcomes of the decision are executed, ensuring full decision coverage.

This type of coverage helps validate logic and control flow in the program.

3. Explain statement coverage testing with an example.

Statement coverage testing is a white-box testing technique used to ensure that every
executable line of code is executed at least once.

It helps detect portions of the code that are not executed under certain conditions, and thus
potentially untested.

Example:

if (a > b)
    max = a;
else
    max = b;

To achieve 100% statement coverage:

- Test Case 1: a = 10, b = 5 → Executes max = a;
- Test Case 2: a = 4, b = 9 → Executes max = b;

Both statements inside the conditional branches are executed.

Statement coverage helps identify untested parts of the code, though it may not detect
missing logic conditions.

4. Explain in brief, branch coverage testing.

Branch coverage testing ensures that every possible branch from each decision point is
executed at least once.

It validates that each possible path in a decision-making statement is covered, whether it's an
if, else, or switch-case branch.

Key Points:

- More thorough than statement coverage
- Covers both true and false outcomes of decisions

Example: For a loop or if-else, the test must include cases that take each path, as the sketch below shows.

Branch coverage is important for identifying potential decision-based bugs.
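
The sketch below (a hypothetical example, not from the original notes) shows why branch coverage is stricter than statement coverage:

// Hypothetical method under test.
int abs(int x) {
    int result = x;
    if (x < 0)          // decision with true and false branches
        result = -x;
    return result;
}

// Test 1: x = -3 executes every statement (100% statement coverage).
// Test 2: x = 5 is still required for branch coverage, because skipping
// the if body (the false branch) is a path of its own.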

5. Give the classification of loop testing. Explain any one in detail.

Classification of Loop Testing:

1. Simple Loops
2. Nested Loops
3. Concatenated Loops
4. Unstructured Loops

Explanation of Simple Loops: These loops execute a block of code repeatedly based on a
condition.

Test Cases for Simple Loops:

- Zero Iterations: Condition is false from the beginning.
- One Iteration: Loop executes only once.
- Multiple Iterations: Loop runs for several cycles.
- Maximum Iterations: Loop runs for its upper limit.
- Exceeding Limits: Check behavior if the loop exceeds its limit (to detect infinite loops).

Proper loop testing helps in identifying logic issues and performance bottlenecks.
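
A minimal sketch of these cases (the method and its upper limit are hypothetical, not from the original notes):

// Hypothetical method under test: prints a greeting n times; n is expected to be 0-100.
void greet(int n) {
    for (int i = 0; i < n; i++)
        System.out.println("Hello");
}

// Simple-loop test inputs:
//   n = 0    -> zero iterations (loop body never runs)
//   n = 1    -> exactly one iteration
//   n = 50   -> multiple iterations
//   n = 100  -> maximum expected iterations
//   n = 101  -> exceeding the limit; verify behavior is still safe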

6. What is unstructured loop testing?

Unstructured loop testing deals with loops that are not well-structured, such as those with
multiple entry and exit points, or those that use goto statements.

These loops are difficult to maintain, test, and understand due to their irregular control flow.

Challenges:

- No clear exit condition
- Difficult to cover all paths
- Increased chance of infinite loops or skipped logic

Such loops are discouraged in modern software design due to their complexity and
unpredictability.
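
Java has no goto, so the hedged sketch below approximates the problem with a loop that has several exit points; readLine and process are hypothetical helpers:

// Each break is a separate exit path that testing must cover.
while (true) {
    String line = readLine();       // hypothetical helper
    if (line == null) break;        // exit 1: end of input
    if (line.isEmpty()) break;      // exit 2: blank line terminates early
    process(line);                  // hypothetical helper
}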

7. Write a note on path coverage testing.


Path coverage testing is a white-box technique that ensures every possible independent path
through a code unit is executed at least once.

It is based on control flow graphs, where each path represents a sequence of instructions or
decisions.

Objective:

- Cover all linearly independent paths in the program.
- Measure complexity using Cyclomatic Complexity.

Formula: V(G) = E - N + 2P, where:

- E = Number of edges
- N = Number of nodes
- P = Number of connected components (usually 1)
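
A worked example (illustrative, not from the original notes): for the simple if/else in Question 2, the control flow graph has 4 nodes (the decision, the two print statements, and the join point) and 4 edges, with P = 1, so V(G) = 4 - 4 + 2 = 2. Two linearly independent paths (the true path and the false path) must therefore be exercised.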

Path coverage provides a more in-depth analysis than branch or statement coverage.

8. Apply statement coverage testing for the following code by considering some test
cases:

public class IfelseEx {
    public void check_evenodd(int number) {
        if (number % 2 == 0)
            System.out.println("even number");
        else
            System.out.println("odd number");
    }
}

Test Cases:

- Input: number = 4 → Output: even number
- Input: number = 5 → Output: odd number

Result: Both if and else statements are executed. Hence, 100% statement coverage is
achieved.
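
The two inputs can also be written as unit tests; the sketch below assumes JUnit 4 is available, which is not part of the original answer:

import org.junit.Test;

public class IfelseExTest {
    // Together, these two tests execute every statement in check_evenodd.
    @Test
    public void evenInputExecutesIfBranch() {
        new IfelseEx().check_evenodd(4);   // prints "even number"
    }

    @Test
    public void oddInputExecutesElseBranch() {
        new IfelseEx().check_evenodd(5);   // prints "odd number"
    }
}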

9. Apply loop testing to the following code by considering some test data:

class TestClass {
    public void display_no(String args[]) {
        int i = 1;
        while (i < 6) {
            System.out.println("Hello World");
            i++;
        }
    }
}

Test Objective: Test how the loop behaves for expected and boundary values.

Test Data and Results:

- Initial value of i = 1
- Loop condition: i < 6

Expected Output: "Hello World" is printed 5 times.

The loop executes for values i = 1, 2, 3, 4, 5 and exits when i becomes 6.

This validates loop conditions and ensures there are no infinite loops.

10. Apply decision coverage testing for the following code by considering some test
cases:

void example(int x) {
    if (x < 20)
        System.out.println("x < 20");
    else
        System.out.println("x >= 20");
}

Test Cases:

- x = 10 → Executes x < 20
- x = 25 → Executes x >= 20

Decision Outcomes:

- True: x < 20
- False: x >= 20

Result: Both decision outcomes are tested. Hence, 100% decision coverage is achieved.

11. Apply branch coverage testing for the following code:

READ username
READ password
IF count(username) < 8
PRINT “Enter a valid username.”
ENDIF
IF count(password) < 5
PRINT “Enter a valid password”
ENDIF
IF count(username & password) < 1
PRINT “Please Fill the Username & Password Fields”
ELSE
PRINT “Login Successfully”
ENDIF

Test Cases:

1. username = "abc", password = "1234"
   - Username length < 8 → Prompt for valid username
   - Password length < 5 → Prompt for valid password
2. username = "testUser1", password = "12345"
   - Both lengths valid
   - Combined length ≥ 1 → "Login Successfully"
3. username = "", password = ""
   - Combined length < 1 → "Please Fill the Username & Password Fields"

Result: With the third case added, each of the three IF conditions is exercised for both its true and false outcomes, so full branch coverage is achieved.

Chapter 3: Test Cases and Test Plan – Long Answer Questions

1. What are components of test plan? Explain in detail.

A test plan is a document that defines the scope, approach, resources, schedule, and activities
intended for testing a software product. It serves as a blueprint that guides the testing process
from start to finish and ensures that the product is tested systematically and efficiently.

Key Components of a Test Plan:

1. Test Plan Identifier: A unique identifier to distinguish the test plan.
2. Introduction: A brief summary of the software product, the purpose of the test plan, and its scope.
3. Test Items: A list of features, functionalities, or modules to be tested.
4. Features to be Tested: Specifies the functions or features that will be covered during
the test.
5. Features Not to be Tested: Identifies functionalities or modules that are out of the
scope for this phase.
6. Test Approach: Defines the testing techniques to be used (black-box, white-box,
exploratory), the levels of testing, and the types of testing.
7. Pass/Fail Criteria: Defines what determines the success or failure of a test case.
8. Suspension and Resumption Criteria: Specifies when testing should be halted and
under what conditions it should resume.
9. Test Deliverables: Lists all documents and outputs to be delivered (test cases, test
data, test reports, defect logs, etc.).
10. Testing Tasks: Describes individual tasks and their dependencies.
11. Environmental Needs: Includes both software and hardware requirements for
testing.
12. Responsibilities: Assigns responsibilities to team members (testers, test leads,
developers).
13. Schedule: Provides a timeline for the testing process, including milestones.
14. Risks and Mitigation: Identifies possible risks and outlines strategies to reduce or
eliminate them.
15. Staffing and Training Needs: Specifies if special skills or training are required for
the testing team.
16. Approvals: Contains the sign-off from stakeholders who approve the plan.

2. What is test case? Explain general test case template briefly.

A test case is a documented set of conditions, inputs, expected outcomes, and actual
outcomes that are developed to verify that a specific software feature works as intended. A
test case checks whether the software application behaves correctly under various conditions.

General Test Case Template Includes:


1. Test Case ID: A unique ID used to identify and refer to the test case.
2. Test Case Name/Title: Short name indicating the purpose of the test.
3. Objective: The aim of the test (e.g., to verify login with valid credentials).
4. Preconditions: Any conditions or setup required before executing the test.
5. Test Steps: Detailed and sequential actions to execute the test.
6. Input Data: Values required to perform the test (e.g., username and password).
7. Expected Result: The expected outcome if the system works correctly.
8. Actual Result: The outcome observed after execution.
9. Status: Pass or Fail, depending on the comparison between expected and actual
results.
10. Comments: Additional observations or references.

Using a standardized template ensures clarity, traceability, and repeatability in the testing
process.

3. Design test cases for an application “login page activity”.

Test cases for login page:

Test Case 1: Valid Login

- Test Case ID: TC_Login_001
- Description: Login with valid username and password.
- Preconditions: User account is already registered.
- Test Steps:
  1. Navigate to login page
  2. Enter valid username and password
  3. Click login button
- Input: username = "user123", password = "pass123"
- Expected Result: Redirects to dashboard
- Actual Result: As per execution
- Status: Pass/Fail

Test Case 2: Invalid Password

- Test Case ID: TC_Login_002
- Description: Login with valid username but incorrect password
- Input: username = "user123", password = "wrongpass"
- Expected Result: Error message: "Invalid credentials"

Test Case 3: Blank Fields

- Test Case ID: TC_Login_003
- Description: Attempt login with blank username and password
- Expected Result: Prompt to enter credentials

These test cases cover positive and negative login scenarios.


4. Explain test steps with example.

Test steps are a set of instructions followed by a tester to execute a test case. They provide
clear, repeatable actions that need to be performed to verify a particular functionality.

Example: Testing login feature

- Test Case ID: TC_Login_001
- Objective: Verify successful login
- Test Steps:
  1. Open browser and navigate to login page
  2. Enter valid username
  3. Enter valid password
  4. Click login
  5. Verify redirection to home/dashboard

Each step must be clear, simple, and allow other testers to follow the same path.

5. What is test scenario? What is the difference between test cases and test scenario?

A test scenario is a high-level documentation that describes what to test. It covers one or
more end-to-end functionalities of the application.

Example: “Verify that registered users can log in.”

Difference Between Test Case and Test Scenario:

| Aspect        | Test Case                                    | Test Scenario                           |
|---------------|----------------------------------------------|-----------------------------------------|
| Detail Level  | Highly detailed                              | High-level description                  |
| Components    | Steps, input, expected output, actual result | Functionality to be tested              |
| Documentation | More time-consuming and descriptive          | Requires less documentation             |
| Purpose       | Ensures precise validation of functionality  | Ensures coverage of system requirements |
| Reusability   | Can be reused multiple times                 | Often expands into multiple test cases  |

Test scenarios help derive test cases and ensure broader test coverage.

6. Explain parameters of test report.

A test report provides a summary of testing activities and results. It is prepared after the
execution of test cases and highlights both progress and quality metrics.
Key Parameters in a Test Report:

1. Module/Scenario: The name of the functionality tested.
2. Complexity Level: Categorized as Simple, Medium, or Complex.
3. Test Level: Indicates the type (Unit, Integration, System).
4. Test Case Description: A short explanation of what is tested.
5. Execution Date: Date on which testing was performed.
6. Tester Name: Person who executed the test.
7. Status: Pass, Fail, Blocked, Not Run.
8. Bug ID: Unique identifier of the defect found.
9. Severity: Criticality of the defect (Low, Medium, High, Critical).
10. Remarks: Additional information or comments.

The report offers a quick view of testing effectiveness and software quality.

7. How to prepare test plan?

Preparing a test plan involves defining the scope, strategy, resources, schedule, and
deliverables of the testing process. It acts as a guide for the testing team and ensures that all
testing activities are carried out systematically and efficiently.

Steps to prepare a test plan:

1. Understand the Product: Analyze the software requirements specification (SRS), business requirements, and functionalities.
2. Define the Scope: Clearly mention what features will be tested and what will not be tested.
3. Identify Testing Types: Decide on the levels of testing (unit, integration, system) and
types of testing (functional, regression, usability, etc.).
4. Define Test Strategy: Select testing techniques (black-box, white-box), test design
approach, and entry/exit criteria.
5. Determine Test Deliverables: List documents such as test plans, test cases, defect
logs, test reports, etc.
6. Identify Resources and Roles: Allocate team members for different roles and tasks.
7. Schedule: Prepare a timeline that includes all test activities, milestones, and
deadlines.
8. Estimate Effort and Costs: Include time, tools, infrastructure, and manpower
requirements.
9. Identify Risks and Mitigation Plans: List potential risks such as delay in
environment setup or unclear requirements and specify solutions.
10. Review and Approve: Share the draft plan with stakeholders for feedback and get it
signed off.

8. How to write test cases for given application?


To write test cases for an application, follow a systematic approach that aligns with the
application’s requirements and expected user behavior.

Steps to write test cases:

1. Understand Requirements: Read the SRS or user stories thoroughly.
2. Identify Features to Test: List out the functionalities that need to be verified.
3. Break Down Scenarios: Convert functionalities into test scenarios.
4. Use a Standard Format: Use fields like Test Case ID, Title, Steps, Input Data,
Expected and Actual Result.
5. Write Clear Test Steps: Ensure that each step is executable and unambiguous.
6. Specify Test Data: Use valid, invalid, and boundary inputs.
7. Define Expected Results: Match this with the application’s expected behavior.
8. Review: Conduct peer review for coverage and correctness.

Example: For a login application:

- Test Case ID: TC_Login_004
- Title: Login with blank username and password
- Steps: Leave both fields blank and click Login
- Expected Result: Error message should appear

9. Design a test report for Web application.

A test report provides a summary of the testing activities, including test results, defect status,
and coverage metrics. It is an essential deliverable that communicates test progress and
quality.

Sample Test Report Table:

| Test Case ID  | Module         | Scenario                | Date       | Tester   | Status | Bug ID | Severity |
|---------------|----------------|-------------------------|------------|----------|--------|--------|----------|
| TC_Login_001  | Login Page     | Valid login credentials | 12-04-2025 | A. Mehta | Pass   | –      | –        |
| TC_Login_002  | Login Page     | Invalid password        | 12-04-2025 | A. Mehta | Fail   | BUG101 | Medium   |
| TC_Search_001 | Search Feature | Search with invalid key | 13-04-2025 | S. Khan  | Fail   | BUG102 | High     |
| TC_Cart_001   | Cart Module    | Add item to cart        | 13-04-2025 | S. Khan  | Pass   | –      | –        |

This report helps stakeholders analyze release readiness.

10. Prepare test plan to check login credentials.

Objective: To validate the login functionality for correct and incorrect user credentials.
Scope: Login page of the application where users input credentials to access the system.

Test Items:

- Username field
- Password field
- Login button
- Forgot password link

Features to be Tested:

- Valid login
- Invalid login
- Blank fields
- SQL injection prevention

Test Deliverables:

- Test Plan Document
- Test Case Document
- Defect Report
- Test Summary Report

Resources:

- 1 Test Lead, 2 Test Engineers

Schedule: Testing to be conducted from 15th April to 18th April.

Environment: Windows 10, Google Chrome v100+, Firefox, Localhost Server

Risk: Delay in credential setup

Mitigation: Use dummy credentials if needed

Approvals: QA Manager, Project Manager

11. Explain test strategy in detail.

A test strategy is a high-level document that outlines the general approach, goals, and
guidelines for software testing in an organization or project. It provides direction and
establishes a standard framework for testing activities.

Key Elements of Test Strategy:

1. Scope and Objectives: Define what to test and goals to be achieved.
2. Testing Levels: Specify unit, integration, system, and acceptance testing.
3. Testing Types: Functional, non-functional, performance, usability, security, etc.
4. Test Design Techniques: Black-box, white-box, risk-based, exploratory.
5. Test Data Strategy: Define how test data will be generated, masked, and maintained.
6. Configuration Management: Guidelines for version control of test assets.
7. Test Environment: Describe tools, OS, hardware, and network configuration.
8. Defect Management: Workflow for reporting, tracking, and closing bugs.
9. Metrics and Reporting: List key metrics like test coverage, defect density, and test
execution status.
10. Tools to be Used: Automation tools (Selenium, QTP), Defect tracking tools (Jira,
Bugzilla).

The strategy ensures that the testing is consistent, measurable, and traceable across all project
phases.

12. With the help of diagram describe test summary report.

A test summary report is a document that summarizes the testing activities and results of the
testing cycle. It is prepared after testing is completed and serves as a communication tool to
stakeholders.

Contents of Test Summary Report:

- Summary of Testing Activities
- List of Test Cases Executed
- Number of Passed/Failed/Blocked Cases
- Severity-wise Defect Summary
- Evaluation Against Entry/Exit Criteria
- Defect Density and Closure Rates
- Risks Identified During Testing
- Recommendations
- Sign-off Section

Diagram (Sample Table Format):

| Criteria                  | Value |
|---------------------------|-------|
| Total Test Cases Executed | 120   |
| Passed                    | 112   |
| Failed                    | 8     |
| Blocked                   | 0     |
| Open Defects              | 5     |
| Critical Defects          | 2     |
| Exit Criteria Met         | Yes   |
| Recommended for Release   | Yes   |

The report helps management make informed decisions on product readiness.



13. Explain test incident report in detail.

A test incident report is a structured document that captures any unexpected behavior, failure,
or deviation from expected outcomes during test execution. It helps ensure that such issues
are formally recorded, analyzed, and resolved appropriately.

The report serves both as a communication tool between testers and developers and a tracking
mechanism for defects or anomalies in the software.

Essential Elements of a Test Incident Report:

1. Incident ID: A unique identifier assigned to each incident report for easy tracking
and reference.
2. Title: A concise summary or name describing the nature of the incident.
3. Summary: A brief narrative highlighting the issue and its context.
4. Test Case Reference: The ID or name of the test case where the issue was
encountered.
5. Steps to Reproduce: A sequential list of actions taken during test execution that led
to the incident. This helps developers simulate the same scenario to investigate and fix
the issue.
6. Expected Result: The correct or intended system behavior as defined in the
requirements or test case.
7. Actual Result: The behavior observed during testing, which differs from the expected
result.
8. Environment Details: Information about the environment where the test was
conducted, such as:
o Operating System
o Browser and version
o Hardware
o Network configuration
9. Severity: The seriousness of the issue based on its impact on the functionality or user
experience. Common levels: Critical, High, Medium, Low.
10. Priority: Indicates how urgently the issue needs to be resolved.
11. Attachments: Supporting evidence such as screenshots, error logs, screen recordings,
or stack traces.
12. Reported By / Date: Name of the tester and the date when the issue was logged.
13. Status: The current state of the incident (e.g., New, Open, Assigned, Resolved,
Rejected, Closed).

Purpose of a Test Incident Report:

- To formally communicate bugs to the development team.
- To ensure traceability of defects.
- To provide metrics for quality assurance and process improvement.

14. How to prepare test report for test cases executed? Describe in detail.

A test report for executed test cases is a comprehensive document that summarizes the testing
activities performed, test results obtained, and defects identified during a specific testing
phase. It serves as evidence of test execution and supports the decision-making process
regarding software release.

Steps to Prepare a Test Report:

1. Summarize Execution Status:
o Mention the total number of test cases executed.
o Provide counts of passed, failed, blocked, and not run test cases.
o Include a pass percentage and failure rate.
2. Document Defect Summary:
o Provide a table of all defects found during execution, including:
 Defect ID
 Test Case ID
 Module/Feature
 Severity
 Status
3. Requirement Traceability:
o Map executed test cases to the respective requirements or user stories to
confirm full coverage.
4. Environment Details:
o Describe the hardware/software environment used during testing.
5. Visuals and Charts:
o Use pie charts, bar graphs, and trend lines to show defect distribution, test case
execution rate, and other metrics.
6. Highlights and Observations:
o Note any critical failures, showstopper bugs, or test blockers.
o Mention modules with higher failure rates or repeated issues.
7. Recommendations:
o Based on results, suggest areas for improvement, need for regression testing,
or deferral of certain features.
8. Conclusion:
o State whether the product is ready for release or further testing is required.
9. Approvals and Sign-off:
o Include a section where the QA lead and stakeholders can formally approve
the test results.

Sample Summary Table:

| Test Suite    | Total | Pass | Fail | Blocked | Defects Raised |
|---------------|-------|------|------|---------|----------------|
| Login Module  | 10    | 9    | 1    | 0       | 2              |
| Search Module | 15    | 13   | 2    | 0       | 3              |
| Cart Module   | 12    | 12   | 0    | 0       | 0              |

Purpose of the Test Report:

- To provide clarity to stakeholders on testing progress and product quality.
- To serve as a historical record for audits and compliance.
- To support go/no-go release decisions.

Chapter 4: Defect Report - Long Answer Questions

1. What is a defect? List its causes in software.


A defect, often called a bug or fault, is an inconsistency between expected and actual behavior of a
software application. It arises when the application does not function according to specified
requirements or user expectations. Defects can manifest as errors in logic, incorrect data processing,
crashes, or any unexpected result.

Common root causes of defects include:

- Incomplete, ambiguous, or misunderstood requirements

- Design flaws or misinterpretation of design specifications

- Coding errors such as syntax mistakes, logic errors, or incorrect algorithms

- Inadequate unit testing leading to undetected errors in modules

- Environmental issues, e.g., incorrect configurations or incompatible platforms

- Poor communication among stakeholders, developers, and testers

- Insufficient training or experience of development and testing teams

- Time pressure leading to rushed coding and shortcuts

- Inadequate version control or configuration management

- Lack of or inadequate code reviews and walkthroughs

2. Explain defect life cycle with help of detailed diagram.


The Defect Life Cycle describes the various stages a defect goes through from its discovery to
closure. Key stages include:

1. New: A tester logs a new defect after observing a deviation.
2. Assigned: The defect is reviewed and assigned to a developer for fixing.
3. Open: The developer begins analyzing and working on the defect.
4. Fixed: Developer implements a fix and marks the defect as fixed.
5. Retest: Tester verifies the fix by retesting the scenario.
6. Reopened: If the defect persists, it is reopened and sent back to the developer.
7. Verified: If the defect no longer occurs, the tester verifies and marks it as verified.
8. Closed: After verification, the defect is closed.
9. Deferred/Rejected/Duplicate: Special states where defects may be postponed, marked invalid, or
recognized as duplicates.

Detailed Diagram (textual representation):

New → Assigned → Open → Fixed → Retest → Verified → Closed
                                  ↳ Reopened (returns to Open)
New → Deferred / Rejected / Duplicate (special states)

3. Explain different states of defect in defect workflow.

Common defect states:

- New: Initial state when a defect is first logged.
- Assigned: Developer has taken ownership and will work on it.
- Open: Developer is actively working on the defect.
- Fixed: Developer believes the defect is resolved.
- Retest: Tester verifies the fix.
- Reopened: Defect persists after retest.
- Verified: Tester confirms the defect no longer occurs.
- Closed: Defect is officially closed.
- Deferred: Fix postponed to a future release.
- Rejected: Defect deemed invalid or non-reproducible.
- Duplicate: Defect already exists under another ID.

4. Explain the attributes of defect report.


A Defect Report typically contains the following attributes:

- Defect ID: Unique identifier for the defect.
- Title/Summary: Brief description of the issue.
- Description: Detailed explanation of the defect and its impact.
- Steps to Reproduce: Exact sequence of actions to trigger the defect.
- Test Data: Specific input values used.
- Expected Result: What should happen.
- Actual Result: What actually happens.
- Severity: Impact level (Critical, Major, Minor, Trivial).
- Priority: Fix order (High, Medium, Low).
- Status: Current state (New, Assigned, Fixed, etc.).
- Environment: Hardware, software, OS, browser details.
- Reported By / Date Raised: Tester name and date.
- Assigned To: Developer responsible for the fix.
- Fixed In Build: Version/build number where fix is applied.
- Comments/Attachments: Screenshots, logs, or other supporting files.

5. Explain functional defect in detail.


A functional defect is a deviation from specified functional requirements. It occurs when a feature
does not perform its intended function. For example, if a login function accepts incorrect passwords
or fails to authenticate valid users, it is a functional defect. These defects are identified through
functional testing and are prioritized based on impact on business workflows.

6. Explain classification of defects.


Defects can be classified by:

- Severity: Critical, Major, Minor, Trivial.
- Priority: High, Medium, Low.
- Type/Nature: Functional, Performance, Usability, Security, Compatibility.
- Phase Injected: Requirements, Design, Coding, Testing, Deployment.
- Frequency: Reproducible, Intermittent, Rare.
- Origin: Internal (found by development/test team), External (found by end users).

7. Explain components of Defect Report.


Components include:

- Defect Identification: ID, title, module information.
- Description: Detailed narrative of the issue.
- Reproduction: Steps, test data, environment.
- Impact Analysis: Severity, priority, affected business processes.
- Status Workflow: Current state and history of transitions.
- Assignment & Resolution: Owner, fix details, verification notes.
- Attachments & References: Screenshots, logs, related requirements.

8. Write short note on coding defect.


A coding defect arises from mistakes in source code implementation. Common causes include syntax
errors, incorrect logic, off-by-one errors in loops, and misuse of APIs. These defects are often caught
during unit testing or code reviews. Effective practices like pair programming, static analysis, and
automated unit tests help reduce coding defects.
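
A hypothetical illustration of a classic coding defect (the array and loop are examples, not from the original notes):

int[] data = {10, 20, 30};

// Defective: i <= data.length reads one element past the end and
// throws ArrayIndexOutOfBoundsException on the final iteration.
for (int i = 0; i <= data.length; i++)
    System.out.println(data[i]);

// Corrected: i < data.length visits exactly data.length elements.
for (int i = 0; i < data.length; i++)
    System.out.println(data[i]);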

9. What are software defects by severity?


Severity levels:

- Critical: System crash or data loss with no workaround.
- Major: Core functionality broken but workaround exists.
- Minor: Non-critical feature malfunction, minimal impact.
- Trivial: Cosmetic issues or typos with no functional impact.
- Enhancement: Request for new functionality or improvement.

10. Write short note on testing defect.


A testing defect refers to flaws in the test process or artifacts. Examples include missing test cases
for critical scenarios, incorrect test data, ambiguous test steps, and misconfigured test
environments. Testing defects undermine test coverage and can allow software defects to go
undetected.

11. Explain defect report template with example.


A defect report template provides a structured format to capture defects consistently. Example
layout:

| Field              | Description                                                              |
|--------------------|--------------------------------------------------------------------------|
| Defect ID          | DFT-101                                                                  |
| Title              | Login fails with valid credentials                                       |
| Module             | User Authentication                                                      |
| Steps to Reproduce | 1. Navigate to login page<br>2. Enter valid user/pass<br>3. Click Login  |
| Expected Result    | User dashboard loads                                                     |
| Actual Result      | Error message displayed: 'Invalid credentials'                           |
| Severity           | Major                                                                    |
| Priority           | High                                                                     |
| Status             | New                                                                      |
| Environment        | Windows 10, Chrome v90                                                   |
| Reported By / Date | Alice / 18-Apr-2025                                                      |
| Assigned To        | Developer Bob                                                            |
| Comments           | Screenshot attached                                                      |

Chapter 5: Testing Tools - Long Answer Questions

1. What is manual testing? What are limitations of manual testing?


Manual testing is the process of manually executing test cases without the use of automation tools.
A tester follows defined test steps, inputs data, observes application behavior, and compares actual
results against expected outcomes. Manual testing is essential for exploratory, usability, and ad-hoc
testing.

Limitations of manual testing include:

- Slow and labor-intensive, leading to higher costs for repetitive testing

- Not scalable for large or complex systems; increases test effort exponentially

- Inconsistent execution due to human factors and fatigue; results may vary per tester

- Difficult to cover all test scenarios thoroughly under time constraints

- Prone to human error, overlooking defects in routine tests

- Challenges in reproducing defects precisely without detailed records

- Limited reusability; manual tests must be redone for each release or iteration

2. Explain different types of testing tools. Explain four of them in short.


Testing tools support various phases and activities of the software testing lifecycle. Types include:

• Test Management Tools: Plan, organize, and track testing activities (e.g., TestRail).
• Defect Tracking Tools: Log and manage defects (e.g., Jira, Bugzilla).
• Functional Test Automation Tools: Automate UI/API tests (e.g., Selenium, Katalon).
• Performance Testing Tools: Measure system performance under load (e.g., JMeter, LoadRunner).
• Security Testing Tools: Identify vulnerabilities (e.g., OWASP ZAP, Burp Suite).
• Continuous Integration Tools: Integrate automated tests into pipelines (e.g., Jenkins).
• Test Data Management Tools: Create and manage test data sets.

Four tools explained briefly:

Selenium: An open-source framework for automating web browsers. Supports multiple languages
and browsers. Ideal for functional regression tests.

JMeter: An open-source tool for load and performance testing of web applications and services.
Allows scriptable test plans and extensive reporting.

Jira: A widely used issue and project tracking tool. Enables logging, prioritizing, and tracking defects
and tasks with customizable workflows.

OWASP ZAP: An open-source security scanner for web applications. Provides automated scans,
manual penetration testing utilities, and vulnerability reporting.

3. What are the major criteria for tool selection?


Key criteria for selecting a testing tool include:
• Compatibility: Supports the application’s technology stack and platforms.
• Usability: Ease of learning, scripting, and maintenance.
• Integration: Seamless integration with CI/CD pipelines, test management, and defect tracking
tools.
• Cost: Licensing, maintenance, and training expenses.
• Scalability: Handles increasing test workloads and concurrent executions.
• Community and Support: Active user community, documentation, and vendor support.
• Extensibility: Ability to customize and extend tool functionality via plugins or APIs.
• Reporting and Analytics: Comprehensive reporting, dashboards, and metrics.
• Reliability and Performance: Stable execution under various conditions.

4. Define Automation testing tool. How to make use of Automation tools?


An automation testing tool is software that enables automated execution of test cases using scripts
or keywords. It interacts with applications under test, simulating user actions, verifying results, and
logging outcomes without manual intervention.

Using automation tools effectively involves:

- Identifying suitable test cases for automation (e.g., repetitive, regression, data-driven tests).

- Setting up the test environment and installing necessary tool dependencies.

- Designing a robust test framework (e.g., Page Object Model, data-driven or keyword-driven
framework).

- Writing, organizing, and maintaining test scripts using best practices and coding standards.

- Integrating tests into CI/CD pipelines for automated execution on code commits or scheduled runs.

- Generating and analyzing detailed reports to track test results, defects, and trends.

- Regularly reviewing and refactoring test suites to accommodate application changes and improve
coverage.
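
As a hedged sketch of the Page Object Model mentioned above (the class name and element IDs are hypothetical; assumes the Selenium Java bindings are on the classpath):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Minimal Page Object: locators live in one class, so a UI change
// requires an edit here rather than in every test script.
public class LoginPage {
    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public void login(String user, String pass) {
        driver.findElement(By.id("username")).sendKeys(user);   // hypothetical element IDs
        driver.findElement(By.id("password")).sendKeys(pass);
        driver.findElement(By.id("loginBtn")).click();
    }
}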

5. Describe features of Selenium testing tool.


Selenium is a popular open-source automation framework for web applications. Key features
include:

• Multi-language Support: Write tests in Java, Python, C#, Ruby, JavaScript, and others.
• Cross-browser Testing: Compatible with Chrome, Firefox, Safari, Edge, and more via WebDriver.
• Distributed Testing: Selenium Grid allows parallel execution across multiple machines and
browsers.
• Integration: Works with testing frameworks (e.g., TestNG, JUnit) and CI tools (e.g., Jenkins).
• Community and Ecosystem: Extensive plugins, libraries, and active community support.
• Flexibility: Automate complex user interactions via WebDriver API and support for advanced web
technologies.

6. What are the benefits of Automation testing?


Automation testing offers:

• Faster Test Execution: Run large test suites quickly and repetitively without manual effort.
• Improved Test Coverage: Automate scenarios difficult or time-consuming to test manually.
• Consistency: Eliminate human errors and ensure tests run identically each time.
• Reusability: Leverage reusable scripts and modular frameworks across releases.
• Early Defect Detection: Integrate into CI to catch regressions immediately after code changes.
• Cost Savings Over Time: Lower manual effort and accelerate release cycles, reducing long-term
costs.
• Reporting and Metrics: Automated reports provide clear insights into quality and trends.

7. Explain key features of any three popular open-source automation software testing tools.

Three open-source automation tools and their key features:

1. **Katalon Studio** (free to use, though not fully open source):
- Built-in keywords for web, API, mobile, and desktop testing.
- Record-and-playback interface and scripting mode for advanced tests.
- Integrations with CI/CD, Jira, Git, and test management tools.

2. **Appium**:
- Mobile automation for native, hybrid, and mobile web apps.
- Supports iOS and Android using the WebDriver protocol.
- Language-agnostic; write tests in any WebDriver-compatible language.

3. **Robot Framework**:
- Keyword-driven testing framework supporting web, API, database, and more.
- Extensible via Python or Java libraries (SeleniumLibrary, AppiumLibrary).
- Human-readable test cases and rich reporting.

8. Write the detailed steps for using Selenium to test an application.

Steps to use Selenium for testing:

1. **Set up the environment**: Install Java/Python, Selenium WebDriver, and browser drivers.
2. **Choose a framework**: Select TestNG/JUnit for Java or pytest/unittest for Python.
3. **Create project structure**: Organize folders for tests, page objects, data, and utilities.
4. **Develop page objects**: Implement Page Object Model classes encapsulating page elements
and actions.
5. **Write test scripts**: Use test framework annotations to define test methods invoking page
object actions.
6. **Configure test data**: Externalize inputs using CSV, Excel, JSON, or database sources.
7. **Implement logging and reporting**: Integrate logging (Log4j) and generate HTML/XML reports.
8. **Execute tests**: Run tests locally or on Selenium Grid for parallel execution.
9. **Analyze results**: Review reports for pass/fail status and capture screenshots on failures.
10. **Maintain scripts**: Refactor page objects and tests for new UI changes and enhancements.
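
A minimal end-to-end sketch of these steps (assumes the Selenium Java bindings with chromedriver on the PATH; the URL and element IDs are hypothetical):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginSmokeTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();   // environment set up beforehand (step 1)
        try {
            driver.get("http://localhost:8080/login");                    // hypothetical URL
            driver.findElement(By.id("username")).sendKeys("user123");    // hypothetical IDs
            driver.findElement(By.id("password")).sendKeys("pass123");
            driver.findElement(By.id("loginBtn")).click();

            // Analyze the result (step 9): verify the expected redirect.
            if (driver.getCurrentUrl().contains("dashboard")) {
                System.out.println("PASS: redirected to dashboard");
            } else {
                System.out.println("FAIL: landed on " + driver.getCurrentUrl());
            }
        } finally {
            driver.quit();   // always release the browser
        }
    }
}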

9. Write note on Selenium tool.


Selenium is an industry-standard open-source suite for automating web browsers. It provides:
- **Selenium IDE**: Browser plugin for recording and playing tests.
- **Selenium WebDriver**: Core API for driving browsers programmatically.
- **Selenium Grid**: Infrastructure for distributed and parallel test execution.
Its flexibility, community support, and compatibility with major browsers make it a preferred choice
for web test automation.
