STT Imp
1. What is a test case? How to design it? Which techniques are used for designing a test
case?
A test case is a set of conditions, inputs, actions, and expected results developed to verify a
particular functionality of a software application. It ensures that the software performs
according to the requirements.
Test case design techniques are used to systematically develop test cases. Commonly used techniques include equivalence partitioning, boundary value analysis, decision table testing, state transition testing, and error guessing.
2. What is software testing?
Software testing is the process of executing a software system to detect bugs, errors, or defects. It involves validating that the application meets specified requirements and functions correctly.
3. What is a test tool? What is its purpose?
A test tool is a software application that aids testers in executing software testing tasks such as planning, executing, reporting, and tracking defects.
Purpose: to improve the efficiency, accuracy, and repeatability of testing while reducing manual effort.
4. Which are the different features to be considered while doing software testing?
Explain.
1. High Probability of Detecting Errors: Testers should anticipate where the software
may fail, e.g., divide-by-zero situations.
2. No Redundancy: Avoid repetitive test cases to optimize time and resources.
3. Choose the Most Appropriate Test: Prioritize tests that are more likely to reveal
defects.
4. Moderate Complexity: Tests should be simple yet effective, avoiding
overcomplication or oversimplification.
These features help ensure comprehensive coverage and quality assurance during the testing
process.
5. Draw and explain in short, IEEE 829 Test Case Specification Template outline.
The IEEE 829 Test Case Specification template outlines the following sections:
1. Test Case Specification Identifier
2. Test Items
3. Input Specifications
4. Output Specifications
5. Environmental Needs
6. Special Procedural Requirements
7. Inter-case Dependencies
This structure ensures that test cases are standardized, reusable, and traceable.
6. Explain the various fields of a basic test case template in Excel that can be used for
either manual or automated testing.
A basic Excel test case template typically contains the fields: Test Case ID, Test Scenario/Description, Preconditions, Test Steps, Test Data, Expected Result, Actual Result, and Status (Pass/Fail). Additional fields may include Test Priority, Designed By, Execution Date, etc.
Entry criteria are the conditions that must be fulfilled before testing can commence.
These criteria ensure that the software is stable and ready for the next phase of testing.
Bug tracking tools help in identifying, recording, and managing software bugs.
1. Jira: A commercial tool from Atlassian for logging, prioritizing, and tracking defects and tasks, with customizable workflows and agile boards.
2. Bugzilla: An open-source, web-based bug tracking system from Mozilla that supports defect logging, searching, email notifications, and reporting.
This ensures that the product is stable, reliable, and ready for deployment.
• Understand types of errors: syntax errors (code typos), logic errors (wrong logic), missing/extra code
• Understand bugs as flaws causing incorrect behavior
• Use bug tracking tools like Jira or Bugzilla
• Track deviations between expected and actual outcomes
• Classify defects: Functional, Logical, etc.
• Use structured test cases for identification
Excel allows easy documentation, updates, and import to test management tools.
Chapter 2: Test Cases for Simple Programs – Long Answer Questions (Full Answers as
per Page 2.30–2.31)
Test case design techniques are systematic approaches used to create test cases that validate
whether a software system behaves according to its specifications and requirements. The goal
of test case design is to ensure comprehensive testing with maximum effectiveness and
efficiency, by identifying the minimal number of test cases that provide maximum coverage.
These techniques help uncover defects in the software by selecting appropriate input values
and predicting the expected output, ensuring that both valid and invalid scenarios are
covered.
These techniques are applied based on the type of application, risk involved, and test
objectives.
Decision coverage testing, also known as branch coverage testing, is a white-box testing
technique where all possible decision outcomes in the program (true and false) are tested at
least once.
This ensures that every if condition or branching logic in the code is tested for both true and
false outcomes.
Example:
if (x < 20)
System.out.println("x is less than 20");
else
System.out.println("x is 20 or more");
Test Cases for 100% Decision Coverage:
• x = 10 → prints "x is less than 20" (true outcome)
• x = 25 → prints "x is 20 or more" (false outcome)
Both outcomes of the decision are executed, ensuring full decision coverage.
This type of coverage helps validate logic and control flow in the program.
Statement coverage testing is a white-box testing technique used to ensure that every
executable line of code is executed at least once.
It helps detect portions of the code that are not executed under certain conditions, and thus
potentially untested.
Example:
if (a > b)
max = a;
else
max = b;
Test Cases:
• a = 5, b = 3 → executes max = a;
• a = 2, b = 7 → executes max = b;
Together these test cases execute every statement at least once. Statement coverage helps identify untested parts of the code, though it may not detect missing logic conditions.
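A runnable sketch of this fragment makes the coverage check mechanical; here the fragment is wrapped in a hypothetical max method (not from the source) so inputs that execute each statement can be asserted:

```java
public class MaxExample {
    // The fragment under test: selects the larger of a and b.
    static int max(int a, int b) {
        int max;
        if (a > b)
            max = a;   // executed when a > b
        else
            max = b;   // executed when a <= b
        return max;
    }

    public static void main(String[] args) {
        // Two inputs together execute every statement at least once.
        System.out.println(max(5, 3)); // takes the if-branch
        System.out.println(max(2, 7)); // takes the else-branch
    }
}
```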
Branch coverage testing ensures that every possible branch from each decision point is
executed at least once.
It validates that each possible path in a decision-making statement is covered, whether it's an
if, else, or switch-case branch.
Key Points:
• Branch coverage subsumes statement coverage, since executing every branch executes every statement.
• Each decision point requires at least one test per possible outcome.
Loop testing is a white-box technique that focuses on the validity of loop constructs. Four types of loops are commonly tested:
1. Simple Loops
2. Nested Loops
3. Concatenated Loops
4. Unstructured Loops
Explanation of Simple Loops: These loops execute a block of code repeatedly based on a
condition.
Proper loop testing helps in identifying logic issues and performance bottlenecks.
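Simple loop testing conventionally exercises the loop at its boundaries: skipping it entirely, running it once, and running a typical number of passes. A minimal sketch, using a hypothetical iterations helper (not from the source) to count body executions:

```java
public class SimpleLoopExample {
    // Counts how many times the loop body executes for an upper bound n.
    static int iterations(int n) {
        int count = 0;
        for (int i = 0; i < n; i++) {
            count++; // loop body
        }
        return count;
    }

    public static void main(String[] args) {
        // Boundary-oriented loop tests: skip the loop, run once, run n times.
        System.out.println(iterations(0)); // loop skipped entirely
        System.out.println(iterations(1)); // exactly one pass
        System.out.println(iterations(5)); // typical number of passes
    }
}
```

Checking the count at each boundary catches off-by-one errors in the loop condition, which is the main target of simple loop testing.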
Unstructured loop testing deals with loops that are not well-structured, such as those with
multiple entry and exit points, or those that use goto statements.
These loops are difficult to maintain, test, and understand due to their irregular control flow.
Challenges:
• Multiple entry and exit points make the control flow difficult to model and to cover.
• goto statements obscure loop boundaries, complicating control flow graphs.
• Independent test paths are hard to enumerate and reproduce.
Such loops are discouraged in modern software design due to their complexity and
unpredictability.
Path coverage testing is a white-box technique that aims to execute every independent path through the program at least once. It is based on control flow graphs, where each path represents a sequence of instructions or decisions.
Objective: to determine the number of independent paths to test, using the cyclomatic complexity formula:
V(G) = E - N + 2P
where:
E = Number of edges
N = Number of nodes
P = Number of connected components (usually 1)
For a single if/else statement the control flow graph has 4 nodes and 4 edges, so V(G) = 4 - 4 + 2 = 2 independent paths.
Path coverage provides a more in-depth analysis than branch or statement coverage.
8. Apply statement coverage testing for the following code by considering some test
cases:
Test Cases:
Result: Both if and else statements are executed. Hence, 100% statement coverage is
achieved.
9. Apply loop testing for the following code by considering some test cases:
class TestClass {
    public void display_no(String args[]) {
        int i = 1;
        while (i < 6) {
            System.out.println("Hello World");
            i++;
        }
    }
}
Test Objective: Test how the loop behaves for expected and boundary values.
Initial value of i = 1
Loop condition: i < 6
This validates loop conditions and ensures there are no infinite loops.
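The loop's behavior can be checked mechanically by counting body executions instead of printing; this sketch mirrors display_no with a hypothetical countPrints method (the refactoring is illustrative, not from the source):

```java
public class LoopCount {
    // Mirrors display_no(): i starts at 1 and loops while i < 6.
    static int countPrints() {
        int i = 1;
        int count = 0;
        while (i < 6) {
            count++; // stands in for System.out.println("Hello World")
            i++;
        }
        return count;
    }

    public static void main(String[] args) {
        // The loop terminates and the body runs for i = 1..5.
        System.out.println(countPrints()); // prints 5
    }
}
```

Confirming the count is exactly 5 verifies both the loop condition (i < 6) and termination, which is the test objective stated above.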
10. Apply decision coverage testing for the following code by considering some test
cases:
example(int x) {
if (x < 20)
print("x < 20");
else
print("x >= 20");
}
Test Cases:
x = 10 → Executes x < 20
x = 25 → Executes x >= 20
Decision Outcomes:
True: x < 20
False: x >= 20
Result: Both decision outcomes are tested. Hence, 100% decision coverage is achieved.
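These two test cases can be automated; the sketch below returns the branch taken instead of printing, so both decision outcomes are assertable (returning a string rather than printing is an assumption for testability):

```java
public class DecisionExample {
    // Mirrors example(int x): returns which decision outcome executed.
    static String example(int x) {
        if (x < 20)
            return "x < 20";   // true outcome
        else
            return "x >= 20";  // false outcome
    }

    public static void main(String[] args) {
        System.out.println(example(10)); // true branch
        System.out.println(example(25)); // false branch
    }
}
```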
11. Apply branch coverage testing for the following pseudocode by considering some test cases:
READ username
READ password
IF count(username) < 8
PRINT “Enter a valid username.”
ENDIF
IF count(password) < 5
PRINT “Enter a valid password”
ENDIF
IF count(username & password) < 1
PRINT “Please Fill the Username & Password Fields”
ELSE
PRINT “Login Successfully”
ENDIF
Test Cases:
• username = "" and password = "" → all three conditions are true: the username warning, the password warning, and "Please Fill the Username & Password Fields" are printed.
• username = "johnsmith1" and password = "secret" → all three conditions are false: only "Login Successfully" is printed.
Result: All three IF conditions are tested for both true and false outcomes. Thus, full branch
coverage is achieved.
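A Java translation of the pseudocode makes the branch-coverage claim checkable; the validate method and the returned message list are illustrative assumptions, since the pseudocode only prints:

```java
import java.util.ArrayList;
import java.util.List;

public class LoginValidator {
    // Returns the messages the pseudocode would PRINT for the given inputs.
    static List<String> validate(String username, String password) {
        List<String> messages = new ArrayList<>();
        if (username.length() < 8)
            messages.add("Enter a valid username.");
        if (password.length() < 5)
            messages.add("Enter a valid password");
        if ((username + password).length() < 1)
            messages.add("Please Fill the Username & Password Fields");
        else
            messages.add("Login Successfully");
        return messages;
    }

    public static void main(String[] args) {
        // Empty inputs drive all three IF conditions true.
        System.out.println(validate("", ""));
        // Valid-length inputs drive all three conditions false.
        System.out.println(validate("johnsmith1", "secret"));
    }
}
```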
Chapter 3: Test Cases and Test Plan – Long Answer Questions (Full Answers as per
Page 3.26)
A test plan is a document that defines the scope, approach, resources, schedule, and activities
intended for testing a software product. It serves as a blueprint that guides the testing process
from start to finish and ensures that the product is tested systematically and efficiently.
A test case is a documented set of conditions, inputs, expected outcomes, and actual
outcomes that are developed to verify that a specific software feature works as intended. A
test case checks whether the software application behaves correctly under various conditions.
Using a standardized template ensures clarity, traceability, and repeatability in the testing
process.
Test steps are a set of instructions followed by a tester to execute a test case. They provide
clear, repeatable actions that need to be performed to verify a particular functionality.
Each step must be clear, simple, and allow other testers to follow the same path.
5. What is test scenario? What is the difference between test cases and test scenario?
A test scenario is a high-level documentation that describes what to test. It covers one or
more end-to-end functionalities of the application.
The key difference is the level of detail: a test scenario is a one-line, high-level statement of what to test, whereas a test case is a detailed specification of how to test it, including steps, data, and expected results; one scenario usually maps to several test cases.
Test scenarios help derive test cases and ensure broader test coverage.
A test report provides a summary of testing activities and results. It is prepared after the
execution of test cases and highlights both progress and quality metrics.
Key Parameters in a Test Report:
• Number of test cases planned, executed, passed, failed, and blocked
• Defects found, with their severity and current status
• Test coverage achieved against requirements
• Test environment and execution dates
The report offers a quick view of testing effectiveness and software quality.
Preparing a test plan involves defining the scope, strategy, resources, schedule, and
deliverables of the testing process. It acts as a guide for the testing team and ensures that all
testing activities are carried out systematically and efficiently.
A test report provides a summary of the testing activities, including test results, defect status,
and coverage metrics. It is an essential deliverable that communicates test progress and
quality.
Objective: To validate the login functionality for correct and incorrect user credentials.
Scope: Login page of the application where users input credentials to access the system.
Test Items:
Username field
Password field
Login button
Forgot password link
Features to be Tested:
Valid login
Invalid login
Blank fields
SQL injection prevention
Test Deliverables:
Test cases, test execution logs, defect reports, and a test summary report
Resources:
Testers assigned to the module, a configured test environment (browsers, devices), and prepared test data
A test strategy is a high-level document that outlines the general approach, goals, and
guidelines for software testing in an organization or project. It provides direction and
establishes a standard framework for testing activities.
The strategy ensures that the testing is consistent, measurable, and traceable across all project
phases.
A test summary report is a document that summarizes the testing activities and results of the
testing cycle. It is prepared after testing is completed and serves as a communication tool to
stakeholders.
| Criteria                   | Value |
|----------------------------|-------|
| Total Test Cases Executed  | 120   |
| Passed                     | 112   |
| Failed                     | 8     |
| Blocked                    | 0     |
| Open Defects               | 5     |
| Critical Defects           | 2     |
| Exit Criteria Met          | Yes   |
| Recommended for Release    | Yes   |
A test incident report is a structured document that captures any unexpected behavior, failure,
or deviation from expected outcomes during test execution. It helps ensure that such issues
are formally recorded, analyzed, and resolved appropriately.
The report serves both as a communication tool between testers and developers and a tracking
mechanism for defects or anomalies in the software.
1. Incident ID: A unique identifier assigned to each incident report for easy tracking
and reference.
2. Title: A concise summary or name describing the nature of the incident.
3. Summary: A brief narrative highlighting the issue and its context.
4. Test Case Reference: The ID or name of the test case where the issue was
encountered.
5. Steps to Reproduce: A sequential list of actions taken during test execution that led
to the incident. This helps developers simulate the same scenario to investigate and fix
the issue.
6. Expected Result: The correct or intended system behavior as defined in the
requirements or test case.
7. Actual Result: The behavior observed during testing, which differs from the expected
result.
8. Environment Details: Information about the environment where the test was
conducted, such as:
o Operating System
o Browser and version
o Hardware
o Network configuration
9. Severity: The seriousness of the issue based on its impact on the functionality or user
experience. Common levels: Critical, High, Medium, Low.
10. Priority: Indicates how urgently the issue needs to be resolved.
11. Attachments: Supporting evidence such as screenshots, error logs, screen recordings,
or stack traces.
12. Reported By / Date: Name of the tester and the date when the issue was logged.
13. Status: The current state of the incident (e.g., New, Open, Assigned, Resolved,
Rejected, Closed).
14. How do you prepare a test report for executed test cases? Describe in detail.
A test report for executed test cases is a comprehensive document that summarizes the testing
activities performed, test results obtained, and defects identified during a specific testing
phase. It serves as evidence of test execution and supports the decision-making process
regarding software release.
| Field | Description |
|----------------------- |-------------------------------------------------------|
| Defect ID | DFT-101 |
| Title | Login fails with valid credentials |
| Module | User Authentication |
| Steps to Reproduce | 1. Navigate to login page<br>2. Enter valid user/pass<br>3. Click Login |
| Expected Result | User dashboard loads |
| Actual Result | Error message displayed: 'Invalid credentials' |
| Severity | Major |
| Priority | High |
| Status | New |
| Environment | Windows 10, Chrome v90 |
| Reported By / Date | Alice / 18-Apr-2025 |
| Assigned To | Developer Bob |
| Comments | Screenshot attached |
Chapter 5: Testing Tools - Long Answer Questions
Limitations of manual testing:
- Not scalable for large or complex systems; increases test effort exponentially
- Inconsistent execution due to human factors and fatigue; results may vary per tester
- Limited reusability; manual tests must be redone for each release or iteration
• Test Management Tools: Plan, organize, and track testing activities (e.g., TestRail).
• Defect Tracking Tools: Log and manage defects (e.g., Jira, Bugzilla).
• Functional Test Automation Tools: Automate UI/API tests (e.g., Selenium, Katalon).
• Performance Testing Tools: Measure system performance under load (e.g., JMeter, LoadRunner).
• Security Testing Tools: Identify vulnerabilities (e.g., OWASP ZAP, Burp Suite).
• Continuous Integration Tools: Integrate automated tests into pipelines (e.g., Jenkins).
• Test Data Management Tools: Create and manage test data sets.
Selenium: An open-source framework for automating web browsers. Supports multiple languages
and browsers. Ideal for functional regression tests.
JMeter: An open-source tool for load and performance testing of web applications and services.
Allows scriptable test plans and extensive reporting.
Jira: A widely used issue and project tracking tool. Enables logging, prioritizing, and tracking defects
and tasks with customizable workflows.
OWASP ZAP: An open-source security scanner for web applications. Provides automated scans,
manual penetration testing utilities, and vulnerability reporting.
The test automation process typically involves:
- Identifying suitable test cases for automation (e.g., repetitive, regression, data-driven tests).
- Designing a robust test framework (e.g., Page Object Model, data-driven or keyword-driven
framework).
- Writing, organizing, and maintaining test scripts using best practices and coding standards.
- Integrating tests into CI/CD pipelines for automated execution on code commits or scheduled runs.
- Generating and analyzing detailed reports to track test results, defects, and trends.
- Regularly reviewing and refactoring test suites to accommodate application changes and improve
coverage.
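The data-driven idea above can be sketched concretely: each row of a table supplies an input and an expected result to one generic test. The grade function here is a hypothetical system under test, not from the source:

```java
public class DataDrivenSketch {
    // Hypothetical function under test: pass/fail grading of a score.
    static String grade(int score) {
        return score >= 60 ? "PASS" : "FAIL";
    }

    // Data-driven execution: each row is {input, expected result}.
    static int countFailures(String[][] rows) {
        int failures = 0;
        for (String[] row : rows) {
            String actual = grade(Integer.parseInt(row[0]));
            if (!actual.equals(row[1]))
                failures++; // a real framework would report the failing row
        }
        return failures;
    }

    public static void main(String[] args) {
        String[][] rows = { {"75", "PASS"}, {"59", "FAIL"}, {"60", "PASS"} };
        System.out.println(countFailures(rows)); // prints 0
    }
}
```

Externalizing the rows to CSV or Excel, as the steps above suggest, changes only where the table is loaded from, not the test logic.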
• Multi-language Support: Write tests in Java, Python, C#, Ruby, JavaScript, and others.
• Cross-browser Testing: Compatible with Chrome, Firefox, Safari, Edge, and more via WebDriver.
• Distributed Testing: Selenium Grid allows parallel execution across multiple machines and
browsers.
• Integration: Works with testing frameworks (e.g., TestNG, JUnit) and CI tools (e.g., Jenkins).
• Community and Ecosystem: Extensive plugins, libraries, and active community support.
• Flexibility: Automate complex user interactions via WebDriver API and support for advanced web
technologies.
• Faster Test Execution: Run large test suites quickly and repetitively without manual effort.
• Improved Test Coverage: Automate scenarios difficult or time-consuming to test manually.
• Consistency: Eliminate human errors and ensure tests run identically each time.
• Reusability: Leverage reusable scripts and modular frameworks across releases.
• Early Defect Detection: Integrate into CI to catch regressions immediately after code changes.
• Cost Savings Over Time: Lower manual effort and accelerate release cycles, reducing long-term
costs.
• Reporting and Metrics: Automated reports provide clear insights into quality and trends.
7. Explain key features of any three popular open source automation software
testing tools.
Three open-source automation tools and their key features:
1. **Cypress**:
- Open-source, JavaScript-based end-to-end testing that runs alongside the browser.
- Automatic waiting and time-travel debugging for flake-resistant tests.
- Integrations with CI/CD pipelines, dashboards, and reporting tools.
2. **Appium**:
- Mobile automation for native, hybrid, and mobile web apps.
- Supports iOS and Android using the WebDriver protocol.
- Language-agnostic; write tests in any WebDriver-compatible language.
3. **Robot Framework**:
- Keyword-driven testing framework supporting web, API, database, and more.
- Extensible via Python or Java libraries (SeleniumLibrary, AppiumLibrary).
- Human-readable test cases and rich reporting.
8. Write the detail steps for using selenium to test the application.
Steps to use Selenium for testing:
1. **Set up the environment**: Install Java/Python, Selenium WebDriver, and browser drivers.
2. **Choose a framework**: Select TestNG/JUnit for Java or pytest/unittest for Python.
3. **Create project structure**: Organize folders for tests, page objects, data, and utilities.
4. **Develop page objects**: Implement Page Object Model classes encapsulating page elements
and actions.
5. **Write test scripts**: Use test framework annotations to define test methods invoking page
object actions.
6. **Configure test data**: Externalize inputs using CSV, Excel, JSON, or database sources.
7. **Implement logging and reporting**: Integrate logging (Log4j) and generate HTML/XML reports.
8. **Execute tests**: Run tests locally or on Selenium Grid for parallel execution.
9. **Analyze results**: Review reports for pass/fail status and capture screenshots on failures.
10. **Maintain scripts**: Refactor page objects and tests for new UI changes and enhancements.
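Step 4's Page Object Model can be shown structurally; in this sketch a hypothetical FakePage stands in for Selenium's WebDriver so the shape of a page object is visible without a browser (none of these class names are Selenium's API):

```java
import java.util.HashMap;
import java.util.Map;

public class PomSketch {
    // Stand-in for a browser page: element locator -> typed text.
    static class FakePage {
        final Map<String, String> typed = new HashMap<>();
        void type(String locator, String text) { typed.put(locator, text); }
    }

    // Page object: locators and user actions for one page live in one class,
    // so a UI change touches only this class, not every test script.
    static class LoginPage {
        static final String USERNAME = "#username";
        static final String PASSWORD = "#password";
        private final FakePage page;

        LoginPage(FakePage page) { this.page = page; }

        LoginPage enterCredentials(String user, String pass) {
            page.type(USERNAME, user);
            page.type(PASSWORD, pass);
            return this; // fluent style, common in page objects
        }
    }

    public static void main(String[] args) {
        FakePage page = new FakePage();
        new LoginPage(page).enterCredentials("alice", "secret");
        System.out.println(page.typed.get("#username")); // prints alice
    }
}
```

In a real Selenium project, FakePage would be a WebDriver instance and type() would call findElement(...).sendKeys(...); the test scripts in step 5 would interact only with LoginPage.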