STQA Module3
Module 3
Test Management
• Test management is a process in which testing activities are managed to ensure
high-quality, thorough testing of software applications.
• It consists of tracking, organizing, and controlling the testing process, and maintaining
visibility into it, in order to deliver a high-quality software application.
• It also provides the initial plan and discipline specifications for the software testing process.
Responsibilities
● Works in collaboration with the test analyst and technical test analyst to select and
customize appropriate templates, and establishes standards.
● Provides the facilities to track and control testing throughout the project.
● Provides a clear understanding of the testing activity both before the project starts and
after it completes.
The test management process has two main parts:
Planning :
● Risk analysis
● Test Estimation
● Test planning
Execution :
● Testing Activity
● Issue Management
● Test report and evaluation
Test Management: Test Organization
• Test organization in test management refers to structuring the testing team and defining
roles and responsibilities to ensure efficient and effective testing activities.
• It involves establishing clear communication channels, collaboration processes, and
assigning tasks based on team members' expertise.
• Effective test organization ensures that all testing goals are clearly defined and that the
team works cohesively towards those goals
1. Defining Roles and Responsibilities
• Test organization establishes who is responsible for which testing activities.
• This includes defining roles like test manager, test lead, test analyst, and test
executor.
• It also outlines responsibilities for specific tasks like test case design, test execution,
defect management, and test environment setup.
• Clear role definition prevents confusion and ensures accountability
2. Structuring the Testing Team
• Test organization determines the structure of the testing team, which may be
project-based, matrix-based, or a combination of both.
• It considers the skills and expertise required for different testing activities and ensures
the team has the necessary resources.
• The structure should align with the overall software development lifecycle (Waterfall,
Agile, etc.).
• Different approaches, like in-house testing, outsourcing, or crowd testing, also
influence the test organization.
3. Facilitating Communication and Collaboration:
• Effective test organization establishes clear communication channels between team
members, stakeholders, and other development teams.
• It promotes collaboration through tools and processes, ensuring that everyone is on the
same page and working towards common goals.
• This can involve regular team meetings, communication platforms, and shared
documentation
4. Ensuring Alignment with Project Goals
• Test organization ensures that testing activities are aligned with the overall project goals
and timelines.
• It helps in identifying potential risks early in the testing cycle and developing
mitigation strategies.
• By defining clear objectives and metrics, test organization enables the team to track
progress and ensure that the testing process is effective in delivering a high-quality
product
Test planning
• Test planning in software testing is the foundational phase where a comprehensive
document, known as a Test Plan, is created.
• This document serves as a roadmap for the entire testing process, outlining the
approach, scope, resources, schedule, and activities required to ensure the quality and
performance of a software application or system.
Key aspects of test planning include
Product Analysis:
Understanding the software under test, its requirements, functionalities, and potential
risks.
Risk Management:
Identifying potential risks to the testing process and outlining mitigation strategies.
Deliverables:
Defining the expected outputs of the testing process, such as test cases, test reports, and
defect logs.
Detailed Test Design and Test Specification
Detailed test design is the process of translating high-level test objectives and strategies
into concrete, executable test cases.
● Software metrics help identify areas for improvement, track progress, and make
data-driven decisions for continuous enhancement.
Software Quality Metrics
● Definition:
Software quality metrics are quantifiable measures used to assess various aspects of
software, including functionality, performance, usability, and reliability.
● Purpose:
They provide insights into the efficiency and effectiveness of the software
development process and help ensure the final product meets quality standards.
● Examples:
Defect density, test coverage, code complexity, performance, and usability are
examples of software quality metrics.
Quality Assurance
● Quality assurance (QA) in Software Testing (STQA) is a systematic process of
ensuring that software products meet predefined quality standards and customer
expectations.
● It involves a range of activities focused on preventing defects and improving the
overall quality of the software development process.
● QA aims to build confidence that the software will fulfill its intended purpose and
satisfy stakeholders.
Benefits of Software Metrics
● Improved Software Quality: By tracking metrics, developers can identify and
address issues early in the development lifecycle, leading to higher quality software.
● Decision Making: Such metrics come in handy when estimating the influence of
decisions made. PMs and CEOs can sort out objectives and priorities and avoid impulsive
resolutions. Metrics help them make deliberate compromises, optimize the project, and
achieve the goals of software quality assurance.
● Data Sorting: You can use metrics to reduce misunderstandings and ambiguities in
complex projects. Through them, the software organization gets objective information.
● Priorities: With metrics, managers no longer have difficulty tracking, identifying, or
prioritizing the project's issues, and they can communicate them at all organizational
levels.
● Progress Management: Is the project meeting the schedule? Is everything going well? It is
important to control the work's progress and results, and always have answers to these
questions. Such metrics show the software product's status as well as its quality and
changes.
● Management Strategy: There are some risks that you have to instantly estimate, control,
and prioritize. Metrics help to manage such issues and avoid costly fixes later. They
reveal errors, help correct technical parts of the project, and facilitate management
strategies.
Aspects of software quality (how well a software product meets user needs and expectations)
Types of Software Quality Metrics
- Quality Metrics are a broad category of measures used to assess the overall quality of a
product or process.
Agile metrics- An agile metric is useful when you want to improve the development
process. It takes into account lead and cycle time, velocity, as well as open and close
percentage.
Lead time- This is the time the engineers spend coming up with ideas, designing,
developing, and finishing the software project. When you shorten the time, you can release
the product faster and get the consumer’s attention. Since they will not be made to wait for
a long time, their loyalty will increase.
Cycle time - It can be difficult to grasp the difference between these two definitions
(Cycle time and Lead time), but they are not the same. The cycle period starts with the
app’s development and ends when it is complete, while the lead time starts with receiving
the order and finishes with its delivery.
Velocity - This estimates the time the programmers will need to develop a product. It
helps to understand how much time the team needs for each stage. Thus, you can make a
plan for future products according to already existing analyses.
Production metrics - This metric estimates the amount of work that the developers have
already performed, their productivity, and speed. It can be checked by the active days,
failures and repair time, productivity, task scopes, and other factors.
Active days - This is the time the developers/testers spend on coding/testing. It does not
include any other type of minor activities, such as planning. This metric helps to identify
the hidden costs.
Failure and repair time - When developing a product from scratch, you can never avoid
mistakes and bugs. That’s why all you can do is note the time the engineers spend on
solving the problem.
Productivity. It is difficult to accurately measure this aspect, but each developer’s code
volume can be used as a reference.
Task scopes. This is the volume of code that a developer can produce every year. Seems
weird, but it helps to calculate how many engineers you will need for a project.
Code churn. This is the volume of the code that has been modified in the product.
Security responses metrics - As the name implies, the aim of these metrics is to ensure
the security of the product. When measuring software quality, you need to check how the
app responds to security. It is a very important stage since the number of hacker attacks
rises every day. It is important to check how fast your project can detect a problem and
eliminate it, or at least alarm the IT manager about it.
Dependencies age - Another indicator that shows the product’s quality is your
dependencies. You should make sure all the dependencies in your base work properly.
Some of them may need to be updated.
Size-oriented measurements - Such metrics use KLOC (thousand lines of code) to
quantify the size of the code and report bugs, errors, and costs per 1,000 lines.
They help measure the app's quality relative to its size and code accuracy.
Function-oriented methods - This metric shows how much business functionality you
can get from the product. It stands for the main quantifier and analyses all the available
information such as user input and requests, reports, messages on the errors, and user
requests.
Defect metrics - The number of defects is the number one indicator of the software's
quality.
A pull request (PR) in the context of STQA is a formal mechanism used in collaborative
software development, particularly with version control systems like Git, which track and
manage changes in software.
It serves as a proposal to merge changes from a feature branch, where development and
testing have occurred, into a main or target branch of the codebase.
• Testing metrics are quantifiable measures used to track, assess, and control the software
testing process.
• They provide insights into the efficiency, effectiveness, and quality of testing activities,
helping teams identify potential issues, optimize processes, and make informed
decisions.
• Example of a testing metric:
A common metric is the defect density, which is calculated by dividing the number of
defects found by the size of the software (e.g., lines of code). A high defect density may
indicate a need for improved testing or development practices.
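As a minimal sketch, the defect density calculation described above can be expressed in a few lines of Python (the figures in the example call are illustrative):

```python
def defect_density(defects_found: int, lines_of_code: int) -> float:
    """Defects per thousand lines of code (KLOC)."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return defects_found / (lines_of_code / 1000)

# Example: 30 defects found in a 15,000-line module.
print(defect_density(30, 15_000))  # 2.0 defects per KLOC
```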
Key Types of Testing Metrics
Test Execution Metrics:
Monitor the progress of test execution, including the number of test cases executed, their pass/fail
status, and any blocked or skipped tests.
Defect Metrics:
Track the number of defects found, their severity and priority, and the rate of defect discovery and
resolution.
Test Coverage Metrics:
Measure how much of the application or system has been covered by the test cases, ensuring
comprehensive testing.
Test Effort Metrics:
Analyze the time, resources, and cost associated with testing activities.
Product Metrics:
Focus on the quality attributes of the software product, such as defect density (defects per unit of
code) and code coverage.
Project Metrics:
Monitor the overall progress of the testing project, including resource utilization and adherence to
timelines.
Benefits of using Testing Metrics
• Improved Testing Efficiency:
By identifying bottlenecks and areas for improvement, metrics help optimize the testing process.
• The purpose of software testing metrics is to increase the efficiency and effectiveness of the
software testing process while also assisting in making better decisions for future testing by
providing accurate data about the testing process.
• A metric expresses the degree to which a system, system component, or process possesses a certain
attribute in numerical terms.
• The weekly mileage of an automobile compared with the ideal mileage specified by its
manufacturer is a good illustration of a metric.
Importance of Metrics in Software Testing
Test metrics are essential in determining the software's quality and performance. Developers may use
the right software testing metrics to improve their productivity.
● Early Problem Identification: By measuring metrics such as defect density and defect arrival
rate, testing teams can spot trends and patterns early in the development process.
● Allocation of Resources: Metrics identify regions where testing efforts are most needed, which
helps with resource allocation optimization. By ensuring that testing resources are concentrated
on important areas, this enhances the strategy for testing as a whole.
● Monitoring Progress: Metrics are useful instruments for monitoring the
advancement of testing. They offer insight into the quantity of test cases that have
been run, their completion rate, and if the testing effort is proceeding according to
plan.
● Continuous Improvement: Metrics offer input on the testing procedure, which helps
to foster a culture of continuous development.
Types of Software Testing Metrics
Software testing metrics are divided into three categories:
1. Process Metrics: A project's characteristics and execution are defined by process metrics. These
features are critical to improving and maintaining the SDLC (Software Development Life Cycle)
process.
2. Product Metrics: A product's size, design, performance, quality, and complexity are defined by
product metrics. Developers can improve the quality of their software development by utilizing
these features.
3. Project Metrics: Project Metrics are used to assess a project's overall quality. It is used to
estimate a project's resources and deliverables, as well as to determine costs, productivity, and
flaws.
Test Metrics Life Cycle
The various stages of the test metrics lifecycle are-
• Analysis:
○ The metrics must be recognized.
○ Define the QA metrics that have been identified.
• Communicate:
○ Stakeholders and the testing team should be informed about the requirement for metrics.
○ Educate the testing team on the data points that must be collected in order to process the
metrics.
• Evaluation:
○ Data should be captured and verified.
○ Using the data collected to calculate the value of the metrics
• Report:
○ Create a strong conclusion for the report.
○ Distribute the report to the appropriate stakeholder and representatives.
○ Gather input from stakeholder representatives.
Formula for Test Metrics
1. Executed Test Cases Percentage: To get the percentage execution status of the test cases, the
following formula can be used:
Percentage test cases executed = (Number of test cases executed / Total number of test
cases written) x 100
Test Case Effectiveness: This measures how effective the test cases are at detecting defects.
Test Case Effectiveness = (Number of defects detected / Number of test cases run) x 100
2. Passed Test Cases Percentage: This metric indicates the percentage of test cases that pass.
Passed Test Cases Percentage = (Number of test cases passed / Total number of tests
executed) x 100
3. Failed Test Cases Percentage: This metric measures the proportion of all failed test
cases.
Failed Test Cases Percentage = (Total number of failed test cases / Total
number of tests executed) x 100
4. Blocked Test Cases Percentage: During the software testing process, this parameter
determines the percentage of test cases that are blocked.
Blocked Test Cases Percentage = (Total number of blocked tests / Total number of tests
executed) x 100
5. Fixed Defects Percentage: Using this measure, the team may determine the percentage
of defects that have been fixed.
Fixed Defects Percentage = (Total number of defects fixed / Total number of defects
reported) x 100
6. Rework Effort Ratio: This measures the share of the effort in a phase that was spent on rework.
Rework Effort Ratio = (Actual rework effort spent in that phase / Total actual effort
spent in that phase) x 100
7. Accepted Defects Percentage: This measures the percentage of defects that are accepted as
valid out of the total defects reported.
Accepted Defects Percentage = (Defects Accepted as Valid by Dev Team / Total Defects
Reported) x 100
8. Defects Deferred Percentage: This measures the percentage of the defects that are deferred for
future release.
Defects Deferred Percentage = (Defects deferred for future releases / Total Defects
Reported) x 100
1. Percentage test cases executed = (No of test cases executed / Total no of test cases
written) x 100
= (164 / 200) x 100
= 82
2. Test Case Effectiveness = (Number of defects detected / Number of test cases run) x 100
= (20 / 164) x 100
= 12.2
3. Failed Test Cases Percentage = (Total number of failed test cases / Total number of tests
executed) x 100
= (60 / 164) * 100
= 36.59
4. Blocked Test Cases Percentage = (Total number of blocked tests / Total number of tests
executed) x 100
= (4 / 164) * 100
= 2.44
5. Fixed Defects Percentage = (Total number of flaws fixed / Number of defects reported) x
100
= (12 / 20) * 100
= 60
6. Accepted Defects Percentage = (Defects Accepted as Valid by Dev Team / Total Defects
Reported) x 100
= (15 / 20) * 100
= 75
7. Defects Deferred Percentage = (Defects deferred for future releases / Total Defects
Reported) x 100
= (5 / 20) * 100
= 25
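The worked example above can be reproduced with a small Python sketch, using the counts from the example (200 test cases written, 164 executed, 60 failed, 4 blocked, 20 defects reported, 12 fixed, 15 accepted, 5 deferred):

```python
def pct(part: float, whole: float) -> float:
    """Percentage of part over whole, rounded to two decimals."""
    return round(part / whole * 100, 2)

executed_pct  = pct(164, 200)  # percentage of test cases executed
effectiveness = pct(20, 164)   # test case effectiveness
failed_pct    = pct(60, 164)   # failed test cases percentage
blocked_pct   = pct(4, 164)    # blocked test cases percentage
fixed_pct     = pct(12, 20)    # fixed defects percentage
accepted_pct  = pct(15, 20)    # accepted defects percentage
deferred_pct  = pct(5, 20)     # defects deferred percentage

print(executed_pct, effectiveness, failed_pct, blocked_pct,
      fixed_pct, accepted_pct, deferred_pct)
# 82.0 12.2 36.59 2.44 60.0 75.0 25.0
```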
Test Process: Estimation model for testing effort
● Software testing estimation is a management activity to calculate and approximate
time, resources and expenses needed to complete test execution in a specified
environment.
● In the context of Software Testing and Quality Assurance (STQA), test estimation
models are used to predict the effort, time, and resources required for testing
activities. These models help in planning, scheduling, and managing the testing
process effectively.
What to estimate in Software Tests?
● Time required: Time is the key to a project’s success. Estimating time in software
testing helps sync other processes for maximum efficiency.
● Cost required: Costs estimation helps in budgeting the project accordingly and this
also helps in calculating the ROI (return on investment).
● Resources required: Team member(s) needed to complete the testing cycle along
with other hardware and software resources required.
● Skills required: Skill-set of the team members to run the tests successfully.
● Risks involved: Identification of risks involved help in finding the right solutions to
mitigate the risks beforehand for smooth operations.
Common Test Estimation Models
● Work Breakdown Structure
● Functional Point Analysis
● Wideband Delphi Method
● 3-Point Software Estimation Test
● Use-Case Methodologies
● Distribution in Percentage
● Method of Ad-Hoc
Work Breakdown Structure (WBS)
● Work Breakdown Structure (WBS) can be easily understood as a method of breaking
down large tasks into smaller, easily executable groups. The aim of this method is to
make tasks more approachable and manageable.
● In WBS, a testing task gets broken down into smaller modules, and those modules are
further divided into measurable sub-modules.
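A minimal Python sketch of WBS-style estimation, in which a testing task is broken into sub-modules, each sub-task is estimated, and the totals are summed. The module names and hour figures are illustrative assumptions, not from the text:

```python
# WBS: each top-level module broken into estimable sub-tasks (hours).
# The names and figures below are hypothetical examples.
wbs = {
    "Login testing":    {"test design": 4, "test execution": 6, "defect retest": 2},
    "Checkout testing": {"test design": 6, "test execution": 10, "defect retest": 4},
}

# Total estimate is simply the sum of the smallest measurable units.
total_hours = sum(h for module in wbs.values() for h in module.values())
print(total_hours)  # 32
```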
Function Point Analysis (FPA)
● This method estimates testing effort based on the functionality of the software. It
involves identifying and assigning "functional points" to various features and then
using these points to calculate the estimated effort.
● FPA is based on specification documents, such as the SRS or design documents. Again, as
with WBS, the project is split into modules. Each module is assigned a function point
(FP) value depending on its complexity. Simple tasks get lower points, difficult tasks
get higher points. Total effort is calculated by the formula:
● Total Effort = Total FP x Estimate Defined per FP
What is "Estimate Defined per FP"?
● It's the average amount of effort (e.g., person-hours, cost) needed to develop or test
one Function Point.
● This value is typically determined by the development team or test manager based on
past experience, team skill level, and project characteristics.
● For example, if a team estimates that it takes an average of 10 hours to develop one
FP, then "Estimate Defined per FP" would be 10 hours.
Example
● Let’s consider the total effort with respect to cost and take the estimate defined per FP
as equal to $100/points.
● The whole project is divided into three groups of modules:
● Complex modules (FP is 5) — 2 pieces
● Medium modules (FP is 3) — 10 pieces
● Simple modules (FP is 1) — 5 pieces
● Total effort = 45 x 100 = $4,500
● This means that to complete the project, you need $4,500.
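The FPA example above can be checked with a short Python sketch:

```python
# (FP per module, number of modules), from the example:
# 2 complex modules (FP 5), 10 medium (FP 3), 5 simple (FP 1).
modules = [(5, 2), (3, 10), (1, 5)]
estimate_per_fp = 100  # dollars per function point

total_fp = sum(fp * count for fp, count in modules)
total_effort = total_fp * estimate_per_fp

print(total_fp, total_effort)  # 45 4500
```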
Wideband Delphi Method
● This is a group-based estimation technique where a panel of experts provides anonymous estimates. The
estimates are then discussed and refined iteratively until a consensus is reached, reducing individual bias.
● This is one of the most widely used testing estimation techniques based on surveys of the experts involved in
the testing process. The essence is that a panel of experts discuss the given task under the guidance of a
manager and make anonymous personal forecasts (how many man-hours this task will take), providing the
reasons for their opinions.
● As a rule, after the first round, the range of answers is quite wide. Then, the experts are encouraged to revise
their answers taking into account other members’ judgments. Several rounds may take place until the range of
answers decreases and the average value can be calculated. The process is finished after a predefined criterion
(i.e. after a limited number of rounds, or if the consensus is achieved, or when the results are stable).
● Delphi technique is very simple and quite reliable due to the participation of experienced people and
maintained anonymity. It gives qualitative and quantitative results and can be combined with other methods.
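A toy Python simulation of the Delphi convergence described above. The rule that each expert revises halfway toward the group mean is an assumption for illustration (in practice, experts revise after discussing their reasons), as is the stopping threshold:

```python
def delphi(estimates, spread_limit=5, max_rounds=10):
    """Iterate rounds until the spread of estimates is small enough.

    Returns (consensus estimate, number of rounds used).
    """
    for round_no in range(1, max_rounds + 1):
        mean = sum(estimates) / len(estimates)
        if max(estimates) - min(estimates) <= spread_limit:
            return round(mean, 1), round_no
        # Assumed revision rule: each expert moves halfway toward the mean.
        estimates = [(e + mean) / 2 for e in estimates]
    return round(sum(estimates) / len(estimates), 1), max_rounds

# Four experts' initial man-hour forecasts (illustrative figures).
estimate, rounds = delphi([80, 120, 150, 200])
print(estimate, rounds)  # 137.5 6
```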
3-Point Software Estimation Test
● This technique involves calculating three estimates for each task: an optimistic
estimate (O), a most likely estimate (M), and a pessimistic estimate (P). These three
values are then used to calculate a weighted average, providing a more realistic
estimate.
● This is a statistical method, but it also breaks down the task into subtasks (in this it is similar
to WBS). Then, three possible scenarios should be estimated for each sub-task.
● The best case: assuming that you have a talented team and all of the necessary resources,
and assuming that no problem occurs and everything goes right, you can complete the task,
for example, in 100 man-hours (B). This is an optimistic scenario.
● The most likely case: assuming that you have a good team, enough resources, and almost
everything goes right, although some problems may occur, you can complete the task in 150
man-hours (M). This is a normal scenario.
● The worst case: assuming that your team is not experienced, everything goes wrong and
you have to solve numerous problems, you can complete the task in 200 man-hours
(W). This is a pessimistic scenario.
Example 1:
● Consider a test case for "User Login Functionality."
● Optimistic Estimate (O):
● 2 hours (assuming a perfectly stable environment, no defects, and quick execution).
● Most Likely Estimate (M):
● 4 hours (considering minor environment setup, potential small defects, and standard execution time).
● Pessimistic Estimate (P):
● 8 hours (accounting for complex defects, environment instability, or unexpected dependencies).
● Calculation using the PERT formula:
● Expected Estimate (E) = (O + 4M + P) / 6
● E = (2 + 4 * 4 + 8) / 6
E = (2 + 16 + 8) / 6
E = 26 / 6
E ≈ 4.33 hours
Example 2. Thus, you have three values:
B = 100
M = 150
W = 200
Now, you can calculate the average value for the test estimation (E) using the following
formula:
E = (B + 4M + W) / 6
E = (100 + 4 x 150 + 200) / 6 = 150 man-hours
As the average value may fluctuate a little bit, to be more accurate, you need to
calculate standard deviation (SD) — the limits within which E may change. The formula is as
follows:
SD = (W – B) / 6
SD = (200 – 100) / 6 = 16.7 man-hours
● You can present the final estimate as this: the team needs 150 +/- 16.7 person-hours to
accomplish the sub-task.
● Three-Point Estimation is one of the most effective methods for software testing when you have
practice and data from the previous projects and an ability to apply them. The essence of this
method is to find out the best and the worst working conditions for your team.
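The calculation from Example 2 can be sketched in Python:

```python
def pert(best: float, likely: float, worst: float):
    """Three-point (PERT) estimate and standard deviation."""
    expected = (best + 4 * likely + worst) / 6
    std_dev = (worst - best) / 6
    return round(expected, 2), round(std_dev, 2)

# B = 100, M = 150, W = 200 man-hours, as in Example 2.
e, sd = pert(100, 150, 200)
print(e, sd)  # 150.0 16.67
```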
Use-Case Methodologies
● A use case can be defined as a scenario in which a system receives structured
instructions from a user to fulfill a goal.
● Every use case contains the following elements:
● The actor – Also known as the user interacting with the process. They are external to the
system, it can be a single person, a group of people, or an external system.
● The goal – The final successful outcome
● The system – The system on which the functions are performed to reach the desired
goal
● In the use case methodology, every possible outcome is recorded between the actor or the
user and the system in a pre-set environment and related to a specific goal. A document is
created describing all the steps taken by the user to complete the tasks.
● A use case document can help the developers to zero down on the errors that can occur
during the exchange between the user and the system, and resolve them.
Distribution in Percentage
● In this method, once all the different stages of the testing cycle are determined, every
stage is assigned or assessed in terms of percentages. This is to discern how much
effort should be put into each stage of the testing cycle.
• During any project, it is always best to determine the effort that should be put
into each stage, so that bigger problems can be tackled easily and all the resources are
not spent on the smaller issues.
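A minimal Python sketch of percentage distribution. The stage names and percentage splits are illustrative assumptions, not from the text:

```python
total_hours = 400  # assumed overall testing budget in hours

# Assumed percentage split across the stages of the testing cycle.
split = {"test planning": 10, "test design": 30,
         "test execution": 45, "defect retesting": 10, "reporting": 5}

# The percentages must account for the whole cycle.
assert sum(split.values()) == 100

effort = {stage: total_hours * p / 100 for stage, p in split.items()}
print(effort["test execution"])  # 180.0
```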
Method of Ad-Hoc
● In some cases, less formal or structured methods might be used, relying on expert judgment
and experience without a strict model.
● Ad hoc testing is an undocumented testing practice in which there are no test cases, and the
test is performed randomly with the aim of breaking the system and checking its
responsiveness. It is very useful for finding possible errors or defects at an early stage.
● Ad-hoc testing works on error guessing, where arbitrarily, any component or part of the
system is picked or ‘guessed’ to have an error and to possibly find the source of the error.
Error guessing is generally pulled off by an experienced person who is well versed with the
system in question.
● It is an unstructured testing method, aiming at finding defects randomly; thus, there is no
documentation. Therefore, defects cannot be mapped to test cases. This can make
reproducing the defect quite difficult.
● There are different types of AdHoc testing:
● Buddy Testing: This Ad-Hoc testing takes place after the module has undergone unit
testing. The testing is usually carried out by a team of at least two people or two
buddies who mutually work on identifying defects in the same module. This team
consists of at least one software developer and one software tester.
● Pair testing: The team consists of only testers rather than one tester and one
developer. This team of at least two testers can be chosen according to having
different levels of understanding and experience of the system so that they can
mutually support each other in sharing ideas, views, knowledge, and finding defects
in the system. One can take the role of the tester, and the other can note down the
findings.
● Monkey Testing: This testing method is called monkey testing because of its
randomness. Test inputs are random, and corresponding outputs are observed for the
software under test. Any errors, inconsistencies, or system crashes are determined
based on the outputs.
Testing process: Information flow matrix
● An information flow matrix is not a standard or commonly recognized artifact used
for testing. The term "information flow matrix" might be a misinterpretation or a less
common term for related concepts like a traceability matrix or a test matrix.
● Traceability Matrix:
● Purpose: A traceability matrix links different artifacts of the software development
lifecycle, ensuring that requirements are covered by design, code, and test cases. It
demonstrates the flow of information from high-level requirements down to
individual test cases.
● Usage in Testing: It helps in ensuring complete test coverage, identifying missing
test cases, and performing impact analysis during changes. For example, a row might
represent a requirement, and columns might represent corresponding design
specifications, code modules, and test cases, indicating their relationships.
Key Parameters included in a Traceability Matrix Template
● Requirement ID: A unique identifier for each specific requirement (e.g., functional,
non-functional).
● Requirement Description: A detailed explanation of the requirement.
● Test Case ID: A unique identifier for each test case designed to verify a requirement.
● Test Case Description: A description of the steps and expected outcomes of the test case.
● Test Execution Status: The current status of the test case execution (e.g., Pass, Fail,
Blocked, Not Run).
● Defect ID: If a test case fails, the ID of the associated defect.
● Defect Status: The current status of the defect (e.g., Open, Closed, Reopened).
● Risks (Optional): Any identified risks associated with the requirement or its testing.
● Priority (Optional): The priority level of the requirement or test case.
● Source of Requirement (Optional): Whether it's a new requirement, change request, etc.
Creating a Traceability Matrix
● Identify Requirements:
● Gather all project requirements and assign unique IDs.
● Outline Test Cases:
● List all test cases designed to verify the requirements and assign unique IDs.
● Choose a Tool:
● Select a tool for creating the matrix, such as Microsoft Excel, Google Sheets, or a dedicated
test management tool.
● Populate the Matrix:
● Link requirements to their corresponding test cases, test results, and any defects.
● Maintain and Update:
● Regularly update the matrix to reflect changes in requirements, test status, and defect
resolution.
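As a minimal sketch, a traceability matrix can be modeled as a mapping from requirement IDs to test case IDs, which makes coverage gaps easy to detect (the IDs below are illustrative):

```python
# Each requirement ID maps to the test cases that verify it.
matrix = {
    "REQ-01": ["TC-01", "TC-02"],
    "REQ-02": ["TC-03"],
    "REQ-03": [],  # no linked test case yet: a coverage gap
}

# Requirements with no linked test cases are not covered.
uncovered = [req for req, cases in matrix.items() if not cases]
print(uncovered)  # ['REQ-03']
```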
● Test Matrix:
● Purpose: A test matrix is a table that organizes test cases based on various parameters
like test types, test levels, or test environments. It provides a structured overview of
the testing scope and helps in managing test execution.
● Usage in Testing: It can show the relationship between test cases and test conditions,
helping to ensure that different scenarios are covered. For instance, a matrix could
map test cases to specific features, platforms, or data sets to be tested.
Test Process: Function point and test point analysis
● FPA is a software measurement technique used to estimate the size and complexity of a software system based on its
functionality, primarily from a user's perspective. It quantifies the "function points" by analyzing various user-centric
elements like:
● External Inputs: Data or control information entered by the user.
● External Outputs: Data or control information presented to the user.
● External Inquiries: User requests for information that retrieves data.
● Internal Logical Files: User-identifiable groups of logically related data or control information residing within the
system's boundary.
● External Interface Files: User-identifiable groups of logically related data or control information referenced by the
system but maintained by another system.
● Each of these elements is assigned a complexity level (simple, average, complex) and a corresponding weight. These
weighted values are summed to calculate the Unadjusted Function Points (UFP). A Value Adjustment Factor (VAF),
derived from General System Characteristics (GSCs) like data communication, distributed processing, etc., is then applied
to the UFP to arrive at the Adjusted Function Points (AFP), which provides a measure of the software's functional
size. FPA is often associated with white-box testing estimations, as it delves into the system's internal structure and
functionality.
● TPA is a technique used specifically for estimating black-box testing effort. It leverages the
functional points derived from FPA to estimate the testing effort required for user-facing
functionalities without needing knowledge of the internal code structure. TPA typically
considers three main entities for estimation:
● Function Points: The output from a Function Point Analysis, representing the size and
complexity of the functionalities.
● Complexity Factors: Factors influencing testing effort, such as technical complexity,
environmental complexity, and organizational complexity.
● Skill Factors: The experience and expertise of the testing team.
● By combining these factors, TPA provides an estimate of the test points, which can then be
translated into estimated testing effort, time, and resource requirements for black-box testing
activities.