
Lecture 2 **************************************

Causal Analysis & Resolution: identify the causes of outcomes, prevent recurrence of bad outcomes, and enable recurrence of positive outcomes. It eliminates rework and improves quality; best performed right after a problem is first identified.
Causal Analysis in Agile Teams: collect impediment and retrospective data during each iteration; implement selected improvements during tasks when process performance exceeds expectations or fails to meet its quality needs.

Root Cause Analysis (RCA): the CMMI mechanism for analyzing defects to identify their causes. It identifies whether a defect was a “testing miss”, a “development miss”, or a “requirement or design miss.”

Types of Root Causes:


I. Human Cause: a human-made error.
II. Organizational Cause: a flawed process that people used to make decisions.
III. Physical Cause: a physical item failed in some way.

Perform a Root Cause Analysis


• Define the Problem
• Collect Data
• Identify Possible Causal Factors
• Identify the root cause using root cause analysis tools
• Recommend & Implement Solutions

Root Cause Analysis Tools:


Brainstorming: everyone in the group gives ideas in rotation, collecting different viewpoints.
5 Whys: inspect a problem in depth, asking “why” repeatedly until it reveals the real cause.
Fishbone Diagram: cause-and-effect diagram showing the problem at the head and contributing factors along the backbone, under categories and sub-categories.
Scatter Plots: visual representations of the relationship between two sets of data to test correlation between variables; independent variable on the x-axis, dependent variable on the y-axis.
Pareto Chart: bar graph that groups the frequency distribution to show the relative significance of causes of failure; makes it easy to see the most common issues (80% of effects come from 20% of causes; see the sketch after this list).
FMEA: Failure Mode and Effects Analysis, a method used during the product design lifecycle to identify potential problems and solve them.
  Failure modes: identifies all ways a problem could occur within a process.
  Effects analysis: evaluates the causes and consequences of each failure mode.
Internal Audit: the organization shall conduct internal audits at planned intervals to provide information on whether the quality management system conforms to the organization’s requirements.
Action Proposal: description of necessary tasks, of the outcome to be addressed, and of the root cause.
Action Plan: develop action plans to implement selected action proposals; includes schedule and documentation of implemented actions.
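A minimal Python sketch of the Pareto idea above, using made-up defect-cause counts (all numbers are illustrative): rank causes by frequency and flag the vital few that account for roughly 80% of failures.

from collections import Counter

# Hypothetical defect counts grouped by cause (illustration only).
causes = Counter({"requirements": 42, "coding": 31, "environment": 9,
                  "test data": 6, "documentation": 4})
total = sum(causes.values())

cumulative = 0
for cause, count in causes.most_common():
    cumulative += count
    print(f"{cause:14} {count:3} {cumulative / total:6.1%}")
    if cumulative / total >= 0.8:
        print("-- roughly 80% of effects reached; focus improvement here first --")
        break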

Internal Audit Steps


1. Plan, establish, and maintain an audit program
2. Define the audit criteria
3. Ensure objectivity of the audit process
4. Report audit results to management
5. Take appropriate corrective actions
6. Retain documented information as evidence
7. Follow up on issues
Management Reviews: top management reviews the organization’s quality management system to ensure its continuing suitability.

Continuous Improvement: the organization continually improves the suitability and effectiveness of its quality management system.

Ex: Plan-Do-Check-Act (PDCA): a tool used to solve problems more efficiently and prevent recurring mistakes.
• PLAN: plan what needs to be done; break it into smaller steps to build a proper plan with fewer failures.
• DO: apply everything that was considered during the previous stage. Standardization: make sure that everybody knows their roles and responsibilities.
• CHECK: audit your plan’s execution and see if your initial plan worked.
• ACT: if everything seems right, apply your initial plan.

RACI Model/Matrix: shows how each person contributes to a project.
• Responsible: the person who makes sure that the right things happen at the right time.
• Accountable: the “owner” for ensuring that something gets done correctly; makes sure the responsible person/team knows the expectations of the project and completes work on time.
• Consulted: people who provide their input to the task. Several people may be consulted for a task.
• Informed: people who need to be kept informed on the progress of the task.

Cost of Quality (CoQ): a method to calculate the costs companies incur ensuring that products meet, or when they fail to meet, quality standards.
I. Prevention Cost: associated with activities specifically designed to prevent poor quality in products.
II. Appraisal Cost: activities designed to evaluate products to assure conformance to quality requirements.
III. Internal Failure Cost: the product fails to conform to quality specifications before shipment.
IV. External Failure Cost: the product fails to conform to quality specifications after shipment.
V. Cost of Poor Quality (COPQ): the costs that are generated as a result of producing defective material.

COQ Goals: increased efficiency; risk reduction and protecting your brand; lower costs; reduced waste.
COQ Limitations: does not solve quality problems; inability to quantify hidden quality costs. (A worked example follows.)
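A small worked example of the CoQ categories above, with assumed cost figures: total CoQ is the sum of the four categories, and CoPQ is the failure portion.

costs = {
    "prevention": 12_000,        # e.g., training, design reviews
    "appraisal": 8_000,          # e.g., inspections, internal audits
    "internal_failure": 5_000,   # defects caught before shipment
    "external_failure": 15_000,  # warranty charges, client complaints
}
coq = sum(costs.values())
copq = costs["internal_failure"] + costs["external_failure"]
print(coq, copq)  # 40000 20000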
b. Which root cause analysis tool should be used in the following cases:
1. We identify if there is a relationship between two sets of data to test correlation between variables: Scatter Plot
2. We outline severity, occurrence, and detection ratings of failures to calculate risk and determine the next steps: FMEA
3. We need to prioritize the most common causes of failure according to their frequency distribution: Pareto Chart
c. Identify the cost type for each of the following costs of quality:
Inspections: Appraisal cost. Warranty charges: External failure cost.
d. Describe steps of internal audit process:
▪ Planning: Define the scope and objectives.
▪ Preparation: Develop the audit plan and checklist.
▪ Execution: Conduct the audit by collecting and evaluating evidence.
▪ Reporting: Document the findings and provide recommendations.
▪ Follow-up: Ensure corrective actions are implemented.
▪ Closure: Conclude the audit and verify the effectiveness of corrective actions.
III. Explain activity conducted in each phase of the PDCA/Deming cycle:
o Plan: Define objectives and processes required to deliver the desired results.
o Do: Implement the plan and execute the process.
o Check: Monitor and evaluate the implemented process against the objectives.
o Act: Take actions to improve the process based on the evaluation.
Root Cause Type Identification
• Picking the wrong person for a task. Type: Organizational
• The server is not booting up. Type: Physical
• Instructions not accurately followed. Type: Human
Cost Specification
• Periodic review of documentation: Prevention cost
• Client complaints: External failure cost
• Design verification: Prevention cost.
• Defective products, before shipment to client: Internal failure cost
• New employee training: Prevention cost
• Internal auditing: Appraisal cost
Quality Tools and Techniques Identification
1. Visual representations of a relationship between two sets of data to test correlation Scatter Plot
2. Includes a severity occurrence and detection rating to calculate risk and determine the next steps (FMEA)
3. Looking at any problem asking questions and avoid answers that are too simple or overlook important details:
Five Whys Analysis
4. Every contributing cause, potential effects can be shown under categories and sub-categories Fishbone Diagram
5. Bar graph groups the frequency distribution to show the relative significance of causes of failure: Pareto Chart
6. Group activity to collect different viewpoints encouraging a deeper level of critical thinking Brainstorming.
The outputs of the management reviews shall include decisions and actions related to all the following, except:
a. opportunities for improvement
b. need for changes to the quality management system
c. audit results
d. resource needs
Which of the following is not included in an action plan?
a. affected stakeholders
b. schedule
c. expected cost
d. results achieved
Cost incurred when a product fails to conform to quality specifications before shipment to a customer: Internal failure
• Principle that states that 80% of effects come from 20% of causes: Pareto principle
• Management meeting at planned intervals to ensure the company quality management system’s continuing suitability and adequacy: Management review
• Method for calculating the costs companies incur ensuring that products meet quality standards, as well as the costs of producing goods that fail to meet quality standards: CoQ
• Organization conducts examinations at planned intervals to check if the quality management system conforms to the requirements of the International Standard and is effectively implemented and maintained: Internal audit
• Percentage of an organization’s total capacity or effort that is used to overcome the cost of poor quality: Quality loss
• Matrix that shows how each person contributes to a project: RACI matrix
• A mechanism of analyzing defects to identify their sources: Root cause analysis

Lecture 3 *********
• An error made by a human being produces a defect (bug/fault) in the program which, if executed in the code, may lead to a failure.

Testing objectives
1. Evaluate work products such as requirements. Build confidence. Find and prevent defects. Reduce risks.
Seven Testing Principles
1. Testing shows the presence of defects: testing shows defects but cannot prove that there are no defects.
2. Exhaustive testing is impossible: testing everything is not feasible except for trivial cases.
3. Early testing: testing activities should start as early as possible in the software or system development lifecycle.
4. Defect clustering: a small number of modules contain most of the defects discovered during pre-release testing.
5. Pesticide paradox: if the same tests are repeated, these tests will eventually no longer find any new defects.
6. Testing is context-dependent: testing is done differently in different contexts.
7. Absence-of-errors fallacy: finding and fixing defects does not help if the system built is unusable.
• Confirmation bias: makes it difficult to accept information that disagrees with currently held beliefs.
• If the test basis (for any level or type of testing) has measurable coverage criteria defined, those criteria can act as key performance indicators that drive the activities demonstrating achievement of software test objectives.
• Testing Throughout the SDLC
• Waterfall model: testing only occurs after all other development activities are done.
• Incremental model: testing a system in pieces, incrementally.
• Iterative model: groups of features are tested together in a series of cycles.
tester's role during each SDLC phase.
1. Requirement Analysis: Understand and analyze the requirements to create test plans and test cases.
2. Design: Review design documents and create detailed test cases based on the design.
3. Implementation: Develop automated tests and prepare for test execution.
4. Testing: Execute test cases, report defects, and verify fixes.
5. Deployment: Perform final acceptance testing and ensure a smooth transition to production.
6. Maintenance: Conduct regression testing and support ongoing maintenance activities.

Static testing depends on manual examination of work products or tool-driven evaluation of the code, and can find maintainability defects.

Static Testing Reviews


Informal reviews are characterized by not following a defined process & not having formal documented output.
Formal reviews are characterized by team participation, documented results of the review, and documented procedures

Static testing types: 1. Peer Review. 2. Walkthrough. 3. Technical review. 4.Inspection

Activities of Formal Review/Inspection (the most formal review type; led by a trained moderator; requires pre-meeting preparation):

1. Planning: identify scope, resources/timeframe, roles, entry & exit criteria, review characteristics
2. Initiate review (kick-off): distribute documents and material; explain scope/process
3. Individual review: take notes on defects and questions; note potential defects
4. Review meeting: discuss or log issues, with documented results
5. Fixing and reporting: create defect reports; the author fixes the defects
6. Follow-up: check whether defects have been addressed; gather metrics; check exit criteria
Formal Review Roles
Manager: decides on the execution of reviews, allocates time in project schedules
Moderator/Facilitator: the person who leads, plans, and runs the review
Author: the writer, or the person who has responsibility for the document(s) to be reviewed
Reviewers: individuals with a specific technical background who identify and describe defects
Scribe: documents all problems; with the advent of tools to support the review process, especially the logging of defects, this role is often supported by tools

Success Factors for Reviews


• review has clear predefined objectives.
• right people for the review
• Any checklists used address the main risks and are up to date
• Training especially for formal techniques such as inspection

Limitations of Static Testing


Static techniques can’t check conformance with the customer’s real requirements and can’t check non-functional characteristics.

Review Techniques
• Ad hoc: reviewers read the work product sequentially, identifying issues; highly dependent on reviewer skills.
• Checklist-based: reviewers detect issues based on checklists. A review checklist consists of a set of questions based on potential defects, which may be derived from experience.
• Scenarios and dry runs: reviewers are provided with structured guidelines on how to read through the work product; scenarios provide guidelines on how to identify defect types.
• Perspective-based reading: reviewers take on different stakeholder viewpoints in individual reviewing; different stakeholder viewpoints lead to more depth in individual reviewing.

Testing Methodologies
Black-Box Testing Method
• Testing without any knowledge of the interior workings. • The tester is unaware of the system architecture and does not have access to source code. • User stories are used as the test basis.

White-Box Testing Method


• White-box testing is the detailed investigation of the internal logic and structure of the code. The tester needs to know the internal workings of the code and needs to look inside the source code. • Applicable at all test levels; the code itself is used as the test basis.

Software Testing Types


▪ Functional Testing: evaluates the functions that the system should perform. Testing levels:
• Component/unit testing: the process of testing individual components in isolation, done by the development team.
• Integration testing: after all units are tested, integrate them and check the interactions among the units.
  Component integration testing focuses on interactions and interfaces between integrated components; responsibility of developers.
  System integration testing focuses on interactions and interfaces among systems and packages; responsibility of testers.
• Driver: a driver calls the component to be tested.
• Stub: a stub is called from the software component to be tested.
• System testing: a level of testing that validates the complete and fully integrated software product. The purpose of a system test is to evaluate the end-to-end tasks the system can perform. Usually performed by independent testers. Systems under test may include applications, hardware/software systems, and operating systems.
• Acceptance Testing: common forms of acceptance testing include the following:
1. User acceptance testing: users provide input and feedback on system behavior, building confidence that users can use the system to meet their needs.
2. Operational acceptance testing: the system is tested by operations staff in a production environment.
3. Contractual and regulatory acceptance testing:
   Contractual acceptance: performed against a contract’s acceptance criteria.
   Regulatory acceptance: performed against any applicable regulations.
4. Alpha and beta testing: used by developers of commercial off-the-shelf (COTS) software to get feedback from users.
   ❖ Alpha testing: the production team tests the software in a lab environment at the developer’s site.
   ❖ Beta testing: after alpha, a beta version is released to allow users to experiment with it in their own environment.
▪ Non-functional Testing: “how well” the system behaves. Ex: Load = large number of users / Stress = too many users / Volume = large amount of data.
1. Usability: whether the software product is easy to use, learn, and understand from the user’s perspective.
2. Maintainability: It measures the effort needed to make specified modifications.
3. Efficiency: This evaluates the relationship between the software’s performance and the resources it uses under
specific conditions. Does the software use resources efficiently?
4. Portability: the ability of software to be transferred from one environment to another.
5. Interoperability: whether the software product works with other software applications, as required by the users.
6. Localization: checking default languages, date, time formats if software is designed for a particular location
7. Recovery: how quickly and effectively the application recovers after crashes

▪ Structural/White-box Testing: derives tests based on the system’s internal structure or implementation. Internal structure may include code, architecture, workflows, and/or data flows within the system. Applicable at all test levels.

▪ Change-related Testing: when changes are made to a system, testing should be done to confirm whether the changes have corrected the defect. Applicable at all test levels.
Confirmation testing: the software is re-tested to confirm that the original defect has been successfully removed.
Regression testing: checking whether a fix for one thing accidentally breaks something else in the software; making sure that changes don’t cause new problems.

Maintenance Testing: after deploying a system we still need to make changes to it, so maintenance testing makes sure that changes to the software system (like fixing defects or adding features) don’t accidentally break other parts of the system that were working fine.
• Maintenance involve planned releases and unplanned releases (hot fixes).
• Triggers for maintenance: modification, migration, and retirement.
• Impact analysis is useful for regression testing during maintenance testing.

• Debugging is a development activity, not a testing activity. Debugging identifies and fixes the source of defects.
10. Reviewers are provided with structured guidelines on how to read through the work product based on expected usage:
a. Checklist-based  b. Ad-hoc  c. Dry run  d. Perspective-based reading
1. Who is responsible for testing in the following test levels: operation acceptance/unit/system testing.
• Operational Acceptance Testing: operations/systems administration staff
• Unit Testing: Developers
• System Testing: Independent test team
2. List six steps of the formal review/inspection process:
1. Planning: Define the scope and schedule the review.
2. Kick-off: Distribute documents and explain the review objectives.
3. Preparation: Reviewers examine the documents individually.
4. Review Meeting: Discuss and identify defects collectively.
5. Rework: Author corrects the identified defects.
6. Follow-up: Verify corrections and close the review.
What does the tester do to facilitate testing in the case of missing/not-developed components during integration testing?
Use stubs and drivers to simulate the missing components (see the sketch below).
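A minimal sketch of the idea, with hypothetical component names: the stub stands in for a called component that isn't built yet, and the driver stands in for a missing caller.

def tax_service_stub(amount):
    # Stub: replaces the unfinished tax component with a canned 10% answer.
    return amount // 10

def compute_invoice_total(amount, tax_fn=tax_service_stub):
    # Component under test: calls the (stubbed) tax component.
    return amount + tax_fn(amount)

def driver():
    # Driver: calls the component under test and checks the result.
    assert compute_invoice_total(200) == 220
    print("integration check passed with the stub in place")

driver()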
Testing used by developers of commercial off-the-shelf (COTS) software to get feedback from potential/existing users before the software product is put on the market: Beta testing.
Identify three guidelines to the successful conduct of the review process.
• Define clear objectives and scope: ensure that the purpose and scope of the review are well-defined and understood by all participants.
• Involve the right stakeholders: include individuals with the necessary expertise and perspectives to provide valuable feedback.
• Prepare adequately: participants should review the material beforehand and come prepared with comments and questions.

Lecture 4 ***************************** STLC (Software Testing Life Cycle)


1. Requirement Analysis: analyze the SRS; prepare the RTM (Requirement Traceability Matrix); identify testing types and techniques; prioritize the features that need focused testing; identify test environment details.
Deliverables: • RTM • Automation Feasibility Report

2. Test Planning: preparation of the test plan/strategy document.
• Activities: 1. Test tool selection. 2. Test effort estimation. 3. Resource planning. 4. Determining roles & responsibilities.
• Deliverables: • Test plan/strategy document • Effort estimation document

3. Test Case Development: creation, verification, and rework of test cases and test scripts. Test data is identified, created, reviewed, and then reworked as well.

• Activities to be done: • Create test cases & automation scripts & test data
• Deliverables: • Test cases/script • Test data

4. Environment Setup: determine the software and hardware conditions under which the work product is tested.
• Prepare hardware and software requirements for the test environment.
• Set up the test environment and test data.
• Deliverables • Environment ready with test data set up.

5. Test Execution: the tester carries out the testing based on the test plans and the test cases prepared. Bugs are reported back to the development team for correction, followed by retesting.

• Map defects to test cases in RTM.


• Retest the Defect fixes.
• Document test results, and log defects for failed cases.
• Deliverables • Completed RTM with execution status. • Test cases updated with results. • Defect reports

6. Test Closure: taking lessons from the current test cycle to remove process bottlenecks for future test cycles.
• Prepare Test closure report
• Prepare test metrics
Deliverables: • Test Closure report • Test metrics

Software Testing Estimation: predicting how long a task will take to complete.


• Metrics-based technique: estimating the test effort based on metrics of similar projects or based on typical values
• Expert-based technique: estimating the test effort based on experience of the owners of the testing tasks or experts

testing estimation techniques


Work breakdown structure: breaking the test project down into modules, sub-modules, and tasks, and estimating effort/duration for each.
Experience-based Testing Estimation: assumes you have already tested similar applications and collected metrics from similar projects.
PERT Estimation: three types of estimates, most likely (M) / optimistic (O) / pessimistic (P), combined as (O + 4M + P) / 6 (see the sketch below).
Wideband Delphi: the WBS is distributed to a team of 3-7 members for re-estimating the tasks; the final estimate is the result of the summarized estimates based on team agreement.
Percentage distribution: all the phases of the SDLC are assigned effort as a percentage.
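A minimal sketch of the PERT formula above; the sample hours are made up for illustration.

def pert_estimate(optimistic, most_likely, pessimistic):
    # Three-point PERT estimate: (O + 4M + P) / 6
    return (optimistic + 4 * most_likely + pessimistic) / 6

# A test task estimated at 4h (O), 6h (M), 14h (P) -> (4 + 24 + 14) / 6 = 7.0 hours
print(pert_estimate(4, 6, 14))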

Software Test metrics: used for estimating the progress, quality, and health of a software testing effort, and for improving the efficiency and effectiveness of the software testing process. Ex (a small calculation sketch follows the list):

• Schedule slippage = (actual end date – estimated end date) / (planned end date – planned start date) * 100
• Number of tests run per time = number of tests run / total time
• Fixed Defects Percentage = (defects fixed / defects reported) * 100
• Percentage of test cases executed = (number of test cases executed / total number of test cases written) * 100
• Requirement Creep = (total number of requirements added / number of initial requirements) * 100
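The formulas above as plain functions, exercised with assumed figures (45 of 50 defects fixed, 180 of 200 cases run, 10 requirements added on top of 80 initial ones).

def fixed_defects_pct(defects_fixed, defects_reported):
    return defects_fixed / defects_reported * 100

def pct_test_cases_executed(executed, total_written):
    return executed / total_written * 100

def requirement_creep(added, initial):
    return added / initial * 100

print(fixed_defects_pct(45, 50))          # 90.0
print(pct_test_cases_executed(180, 200))  # 90.0
print(requirement_creep(10, 80))          # 12.5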

Entry Criteria • set of conditions that should be met before starting the software testing.
Ex: 1. Environment available and ready for use. 2. Testing tools ready for use. 3. Testable code available.

Exit Criteria • set of conditions that should be completed to stop the software testing.
Ex: all planned tests have been run. • Verify that requirement coverage has been met. • There are NO critical or high-severity defects left unresolved. • Verify that all high-risk areas are completely tested.

Test Progress Monitoring and Control


• Test Progress Monitoring: provides feedback and visibility about test activities. Data is collected manually or automatically and may be used to measure exit criteria; metrics are used to assess progress against the planned schedule and budget.
Metrics types:
• Percentage of work done in test case preparation (or percentage of planned test cases prepared)
• Percentage of work done in test environment preparation
• Test case execution (number of test cases run/not run, test cases passed/failed)
• Defect information (defects found & fixed, failure rate, re-test results)
• Test Reporting: summarizing information about testing effort, during and at the end of test activity/level.
Test Work Products Examples
• Test planning: work products include one or more test plans.
• Test monitoring and control: work products include test reports (test progress reports, test summary reports).
• Test analysis: work products include defined and prioritized test conditions.
• Test design: results in test cases and sets of test cases to exercise the test conditions defined in test analysis.
• Test implementation: work products include test procedures, the sequencing of test procedures, and test suites.
• Test execution: work products include documentation of the status of individual test cases or test procedures, and defect reports.
• Test completion: work products include test summary reports and action items.

• Test Reporting Audience: Tailor test reports based on the project context and the report’s audience.
• Test control describes any guiding or corrective actions taken as a result of information and metrics gathered
and reported.
Test strategy vs. Test plan
• Test strategy: high-level document | Test plan: detailed document
• Test strategy: static document, rarely changed | Test plan: dynamic document, can be changed
• Test strategy: defined at organization level | Test plan: defined at project level
• Test strategy: derived from business requirements | Test plan: derived from product description, use cases
• Test strategy: developed by the project manager | Test plan: prepared by the test manager
• Test strategy: defines the overall approach and goals | Test plan: contains the plan for all testing activities to be done
• Test strategy contents: scope and overview, test approach, testing tools, summary | Test plan contents: test scope, resources, test levels and types, testing metrics
Test Strategy Types


a. Process-Compliant (Standard-Compliant): design and implement tests based on external rules and standards.
b. Directed (or Consultative): driven by advice from stakeholders, experts, and domain specialists outside the test team.
c. Regression-Averse: aims to prevent existing capabilities from regressing (breaking); reuses test cases.
d. Reactive: tests are designed, implemented, and executed in response to knowledge gained from prior tests.

Test Scenario: gives the idea of what we have to test; a test scenario is like a high-level test case.
Test Case: test cases are the sets of positive and negative executable steps of a test scenario. Once the test scenarios are prepared, reviewed, and approved, test cases are prepared based on these scenarios.
Use cases are very useful for designing acceptance tests with customer participation.

Logical vs. Physical Test Cases


• The logical test case (high-level) describes what is tested and not how it is done.
• physical test case (low-level) describes in practical terms what must be done.
• Positive test cases ensure that users can perform appropriate actions when using valid data.
• Negative test cases performed to try to “break” the software by performing invalid actions, by using invalid data.

Defect report: after uncovering a defect (bug), testers generate a formal defect report, stating the problem as clearly as possible so the defect can be found and fixed easily.

Defect severity: the effect of the bug on the application.

Defect priority: the impact of the bug on the customer’s business.
Test Procedures: the sequence of actions for executing one or more test cases. Defining the test procedures requires carefully identifying constraints and dependencies that might influence the test execution sequence.

Test execution schedule: defines the order in which the test procedures (grouped into test suites) are to be run.
Requirements Traceability Matrix (RTM): links requirements to test cases during the validation process.

Importance of Traceability: analyzing the impact of changes, making testing auditable, meeting IT governance criteria, improving the understandability of test progress reports.

Test Closure Report: created once the testing phase is successfully completed by meeting the exit criteria; gives a detailed analysis of the bugs removed and errors found, and a summary of all the tests conducted.
6. Test strategy type where tests are designed and implemented and may immediately be executed in response to knowledge gained from prior test results (rather than being pre-planned):
a. Process-compliant  b. Reactive  c. Model-Based  d. Methodical
7. To be testable, acceptance criteria should address all the following topics, except:
a. Business rules
b. Quality characteristics
c. External interfaces
d. Skills of scrum team
Which requirements don’t have test cases? Complex requirements.
9. Which of the following scenarios has high priority and low severity?
A. Spelling mistake of a company name on the home page
B. On a bank website, an error message pops up when a customer clicks on the transfer money button
C. FAQ page takes a long time to load
D. Crash in some functionality which is going to be delivered
Exercise: Specify Scenario Severity and Priority

• Submit button not working on the login page; customers can’t log in. Priority: High, Severity: High
• Crash in functionality which is going to be delivered after a couple of releases. Priority: Low, Severity: High
• Spelling mistake of the company name on the homepage. Priority: High, Severity: Low
• FAQ page takes a long time to load. Priority: Low, Severity: Low
• Company logo or tagline issues. Priority: High, Severity: Low
• Error message pops up when a customer clicks on the transfer money button. Priority: High, Severity: High
• Font family, font size, color, or spelling issue in the application or reports. Priority: Low, Severity: Low
Identify testing life cycle phases and deliverable from each phase.

• Requirement analysis: RTM, Automation feasibility report


• Test Planning: test plan/ test strategy
• Test case development: test case / test script
• Environment setup: environment ready with test data set up
• Test Execution: RTM with execution status , defect report
• Test Closure: test closure report ,test metrics
Mention three entry and three exit criteria.
Entry Criteria: 1. Environment available and ready for use. 2. Testing tools ready for use. 3. Testable code available.
Exit Criteria: all planned tests have been run; requirement coverage has been met; there are NO critical or high-severity defects left unresolved.
• Explain three differences between test strategy and test plan.
o Test Strategy:
1. High-level document
2. Defines the overall approach and goals.
3. Usually organization-wide
o Test Plan:
1. Detailed document
2. Specifies how testing will be conducted for a specific project
3. Project-specific
What is meant by test procedure? A detailed sequence of actions for executing a test, including setup, execution steps,
and postconditions.
1. Give an example of a test work product for each of the following test activities: test planning, monitoring, analysis,
design, implementation, execution, and completion.
1. Test Planning: Test Plan
2. Monitoring: Test Progress Reports
3. Analysis: Defect Reports
4. Design: Test Cases
5. Implementation: Test Scripts
6. Execution: Test Logs
7. Completion: Test Summary Report
Identify Test Strategy Type
• Tests rely on making systematic use of predefined set of tests/test conditions, such as taxonomy of common
types of failures, list of important quality characteristics, or company-wide look-and-feel standards.
o Methodical Strategy
• Tests are designed and implemented and may immediately be executed in response to knowledge gained from
prior test results rather than being pre-planned.
o Reactive Strategy
• Tests are designed based on some required aspect of the product, such as function, business process, internal
structure, non-functional characteristic.
o Analytical Strategy
• Tests include reuse of existing test ware (especially test cases and test data) and test suites.
o Regression-averse
• Tests driven primarily by the advice, guidance, or instructions of stakeholders, business domain experts, or technology experts, who may be outside the test team or outside the organization itself.
o Consultative Strategy
Identify Testing Estimation Technique
• Breaking down test project into modules; sub-modules; functionalities; tasks and estimate effort/duration (WBS)
• Assumes that you already tested similar applications in previous projects and collected metrics from those projects. Experience-based Testing Estimation
• Three types of estimations most likely/optimistic/pessimistic are calculated for each activity. PERT Estimation
• The work breakdown structure is distributed to a team comprising 3-7 members for re-estimating the tasks; the final estimate is the result of the summarized estimates based on team agreement. Wideband Delphi

Lecture 5 & 6 Test Case Design Techniques***************


Black-box Test Case Design Techniques
1. Equivalence Partitioning (EP)
• Equivalence partitioning divides data into partitions in such a way that all the members of a given partition are expected to be processed in the same way; there are equivalence partitions for both valid and invalid values.
• Valid partitions contain values accepted by the component; invalid partitions contain values rejected by the component.
• Each value must belong to one and only one equivalence partition.
• Using one value from each partition in test cases achieves 100% coverage.

2. Boundary Value Analysis (BVA)


An extension of equivalence partitioning, used ONLY when the partition is ordered (numeric or sequential data). Take the test conditions as partitions and design the test cases using the boundary values.
• For example, to test a field which accepts only an amount more than 10 and less than 20, we take the boundaries as 10-1, 10, 10+1, 20-1, 20, 20+1 (instead of using lots of test data, we just use 9, 10, 11, 19, 20, and 21; see the sketch below).
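A small sketch of both techniques, built around the 10-to-20 example above; the EP representative values are illustrative picks.

def boundary_values(low, high):
    # BVA for an ordered partition bounded by low and high:
    # each boundary, plus the values just inside and just outside it.
    return [low - 1, low, low + 1, high - 1, high, high + 1]

print(boundary_values(10, 20))  # [9, 10, 11, 19, 20, 21]

# Equivalence partitioning: one representative value per partition is enough.
ep_cases = {"invalid (too low)": 5, "valid": 15, "invalid (too high)": 25}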
3. Decision Table Testing
✓ Used for testing the implementation of system requirements that specify how different combinations of conditions result in different outcomes.
✓ Decision tables are a good way to record complex business rules that a system must implement.
✓ The tester identifies conditions (often inputs) and the resulting actions (often outputs); these form the rows of the table.
✓ The minimum coverage standard for decision table testing is to have at least one test case per decision rule in the table (see the sketch below).
✓ When 100% coverage is achieved, ALL decision outcomes are exercised.
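A minimal sketch of one-test-case-per-rule coverage; the loan-approval rule and condition names are hypothetical, not from the lecture.

from itertools import product

# Hypothetical rule: a loan is approved when the applicant has
# sufficient income AND a clean credit history.
conditions = ("sufficient_income", "clean_credit")

# One test case per decision rule: enumerate every True/False combination.
for rule in product((True, False), repeat=len(conditions)):
    income_ok, credit_ok = rule
    expected = "approve" if income_ok and credit_ok else "reject"
    print(dict(zip(conditions, rule)), "->", expected)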

4. State Transition
• A system may exhibit a different response depending on current conditions or previous history (its state).
• Allows the tester to view the software in terms of its states, the transitions between states, the inputs or events that trigger state changes, and the resulting actions.
• A state table shows the relationship between states and inputs, and can highlight possible transitions that are invalid (see the sketch below).
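A minimal sketch with assumed states and events (a simple ATM card session, not from the lecture): the state table maps (state, event) pairs to next states, and pairs not listed are invalid transitions.

TRANSITIONS = {
    ("idle", "insert_card"): "card_inserted",
    ("card_inserted", "pin_ok"): "authenticated",
    ("card_inserted", "pin_bad"): "idle",
    ("authenticated", "eject_card"): "idle",
}

def next_state(state, event):
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"invalid transition: {event!r} in state {state!r}")
    return TRANSITIONS[(state, event)]

# A state-transition test walks a sequence of events and checks each resulting state.
assert next_state("idle", "insert_card") == "card_inserted"
assert next_state("card_inserted", "pin_ok") == "authenticated"
assert next_state("authenticated", "eject_card") == "idle"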

5. Use Case Testing

• Test cases check how the system behaves in different situations; they cover basic, exceptional, and error scenarios, and involve collaboration between the system under test and actors (like users).
• Each use case has:
o Preconditions: Conditions that must be met for the use case to work.
o Postconditions: Observable results and the final system state after completing the use case.
• Designing test cases from use cases combines with other techniques.
• Use cases are valuable for designing acceptance tests with customer participation.

Structure-based (White-box) Techniques.


Applicable at ALL test levels, but statement and decision testing are most used at the component test level.

1. Statement Testing & Coverage:
• Statement testing focuses on exercising the executable statements in the code.
• Coverage is measured as the percentage of executed statements out of the total number of executable statements.
• Test cases are derived to increase statement coverage.
• To calculate statement coverage, find the shortest paths covering all nodes.
2. Decision Testing & Coverage:
• Decision testing targets the decisions in the code (e.g., IF statements).
• Test cases cover all decision outcomes (true and false branches).
• Coverage is the percentage of executed decision outcomes out of the total number of decision outcomes.
• Achieving 100% decision coverage tests all outcomes, even implicit ones.
• Branch coverage: find the minimum set of paths that covers all edges (branches).
• Decision coverage is stronger than statement coverage: 100% decision coverage guarantees 100% statement coverage, but not vice versa (see the sketch below).
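A minimal sketch of why decision coverage is the stronger criterion; the discount function is illustrative.

def apply_discount(price, is_member):
    if is_member:            # one decision with two outcomes (True / False)
        price = price * 0.9
    return price

# apply_discount(100, True) executes every statement -> 100% statement
# coverage, yet only the True outcome is exercised (50% decision coverage).
# Adding apply_discount(100, False) covers the False outcome as well,
# giving 100% decision coverage, which always implies 100% statement coverage.
assert apply_discount(100, True) == 90.0
assert apply_discount(100, False) == 100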
3. Experience-Based Techniques:
• Rely on the tester’s skills and experience with similar applications and technologies.
• Identify special tests that formal methods miss, especially when combined with systematic techniques.
• Error guessing: a commonly used experience-based technique in which testers anticipate defects based on experience; enumerate a list of possible defects and design tests that attack those defects (called a fault attack).
• Exploratory testing: informal (not pre-defined) tests are designed, executed, logged, and evaluated dynamically during test execution; the results are documented to learn more about the system. Often conducted using session-based testing.
  • Test design and execution happen simultaneously, guided by a documented test charter.
  • Useful when specifications are lacking or time pressure is high.
• Checklist-based testing: execute tests to cover the test conditions found in a checklist.
Testing Roles
• At the component testing and component integration testing levels, the tester role is done by developers.
• At the system test and system integration test levels, it is done by an independent test team.
• At the operational acceptance test level, the role of a tester is done by operations/systems administration staff.
• At the user acceptance test level, the role of a tester is done by business analysts, subject matter experts, and users.

Test Organization and Independence


• The effectiveness of finding defects by testing & reviews can be improved by using independent testers

Benefits: independent testers are unbiased and see different defects; an independent tester can verify assumptions made by people during specification and implementation of the system.
Drawbacks: isolation from the development team; developers may lose a sense of responsibility for quality.

Configuration Management: its purpose is to establish and maintain the INTEGRITY of the work products of the system through the whole life cycle; during test planning it helps to uniquely identify the tested items.
Risk-based Testing: risk can be defined as the chance of an event or threat occurring and resulting in an undesirable problem. The level of risk is determined by likelihood and impact.
Used to focus the effort required during testing; used to decide where/when to start testing and to identify areas that need more attention.
**Resulting product risk information is used to guide test activities. Elaborate:
Determine the test techniques to be employed. • Determine the extent of testing to be carried out. • Prioritize testing to find the critical defects as early as possible.

Risk Types
1. Project Risks: situations that have a negative effect on a project’s ability to achieve its objectives. Ex (important):
• Organizational factors: personnel and political issues; skill, training, and staff shortages; failure by the team to follow up on information; problems with testers communicating their needs.
• Technical issues: test environment not ready on time; low quality of the design, code, or test data; problems in defining the right requirements.
• Supplier issues: failure of a third party to deliver a necessary product or service; contractual issues.

2. Product Risks: the work product may fail to satisfy the legitimate needs of its users. Ex:
• System architecture may not support some non-functional requirement(s)
• A particular computation may be performed incorrectly
• Response-times may be inadequate for a high-performance transaction processing system
• User experience (UX) feedback might not meet product expectations

Defect Management important


• Discrepancies between actual and expected outcomes need to be logged as incidents.
• Incident shall be investigated and may turn out to be a defect.
• Defects shall be tracked from discovery & classification to correction and confirmation of the solution.
• organization should establish a defect management process and rules for classification.
• The process must be agreed with all those participating in defect management: designers, developers, testers.
• Defects may be raised during development.

Bug Life Cycle (Defect Age: the time gap between the date of detection and the date of closure)
New: when a defect is logged and posted for the first time.
Assigned: after the tester has posted the bug and it is approved as genuine, it is assigned to the development team.
Open: the developer is working on the defect fix; if the defect isn’t appropriate, the developer changes its state to Duplicate, Deferred, etc.
Fixed: when the developer makes the necessary code changes and verifies them, the bug is passed to the testing team.
Pending retest: retesting is pending on the testers’ end.
Retest: the tester retests the changed code to check whether the defect got fixed or not.
Verified: the tester tests the bug again after it got fixed; if the bug no longer exists, the status is changed to “verified”.
Reopen: if the bug still exists even after it was fixed by the developer.
Closed: if the tester feels that the bug no longer exists in the software.
Duplicate: if the bug is reported twice.
Rejected: if the developer feels that the bug is not genuine, he rejects it.
Deferred: expected to be fixed in the next releases; the priority of the bug is low.
Not a bug: if there is no change in the functionality of the application.

Test Tools Purposes


❖ Improve the efficiency of test activities by automating repetitive tasks
❖ Increase reliability of testing
❖ Improve quality of test activities by allowing more consistent testing
❖ Supporting manual test activities throughout the test process
Benefits and risks of using Tools.
Potential benefits: reduction in repetitive manual work • greater consistency and repeatability • easier access to information about testing.
Potential risks: expectations for the tool may be unrealistic • the time, cost, and effort for the initial introduction of a tool may be underestimated • the tool may be relied on too much • a new platform or technology may not be supported by the tool.

Test execution tools execute test objects using automated test scripts; this requires significant effort. Approaches:
• Data-driven testing: separating the test inputs and expected results (usually into a spreadsheet) and using a more generic test script (see the sketch below).
• Keyword-driven testing: every testing action, like opening or closing a browser, a mouse click, or keystrokes, is described by a keyword such as openbrowser or click.
• Model-Based testing (MBT): uses a model of the system under test (e.g., a UML diagram) to generate test cases; the model is generally produced by a system designer.
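A minimal sketch of the data-driven approach; the login function and the data rows are hypothetical stand-ins for a real system under test and its spreadsheet of test data.

import csv, io

def login(username, password):
    # Hypothetical system under test.
    return username == "admin" and password == "secret"

# The test inputs and expected results live in a data table (normally a
# spreadsheet or CSV file); one generic script executes every row.
TEST_DATA = """username,password,expected
admin,secret,True
admin,wrong,False
guest,secret,False
"""

for row in csv.DictReader(io.StringIO(TEST_DATA)):
    actual = login(row["username"], row["password"])
    assert actual == (row["expected"] == "True"), f"failed for {row}"
print("all data-driven cases passed")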
After completing the tool selection and a successful proof-of-concept evaluation, introducing the selected tool into an
organization generally starts with a pilot project. Why?
Gain in-depth knowledge about tool strengths and weaknesses, assess benefits, standardize usage, and minimize risks
before full-scale adoption.

Exercise Identify Approach Used in Testing Tool


1. Generic script processes action words describing the actions to be taken, which then calls scripts to process the
associated test data. Keyword-Driven Testing
2. Separate out the test inputs and expected results, usually into a spreadsheet, and uses a more generic test
script that can read the input data and execute the same test script with different data. Data-Driven Testing
3. Enable a functional specification to be captured in the form of a model, such as an activity diagram. This task is
generally performed by a system designer. Model-Based Testing
III. Consider a field that holds exactly 3-digit characters. Give examples of test cases that should be used to achieve 100% coverage when applying the Equivalence Partitioning technique:
• Valid: 123, 456, 789
• Invalid: 12, 1234, ABC
Bonus question: give examples of test cases when applying the BVA technique:
• Lower boundary: 99, 100
• Upper boundary: 999, 1000

II. The way in which independent testing is implemented varies depending on the software development lifecycle model. Mention one advantage and one drawback of using independent testers:
Advantage: independent testers are unbiased and see different defects. Drawback: isolation from the development team.
Black-box testing technique useful for testing the implementation of system requirements that specify how different combinations of conditions result in different outcomes: Decision Table Testing
Testing technique where informal (not pre-defined) tests are designed, executed, logged, and evaluated dynamically during test execution; the test results are used to create tests for the areas that may need more testing: Exploratory Testing
This type of testing is used to focus the effort required during testing; it is used to decide where/when to start testing and to identify areas that need more attention: Risk-Based Testing
Black-box testing technique is used only when data is numeric or sequential. Boundary Value Analysis (BVA)
Tool used to establish and maintain integrity of work products (components, data and documentation) of the
system through the whole life cycle. Configuration Management Tool
How to measure decision testing coverage? The percentage of executed decision outcomes out of the total number of decision outcomes.

When is exploratory testing most appropriate? When there is limited documentation or time pressure
What is common minimum coverage for decision table testing? least one test case per decision rule in the table.
To achieve 100% coverage in equivalence partitioning, test cases must cover all identified partitions by using --
--. Complete. one value from each partition.
Statement and decision testing are mostly used at the component test level. Complete.
Risk level of an event can be determined based on ---- and ----. Complete. Likelihood and impact.
Contrast benefits and drawbacks of hiring independent testers.
• Benefits: Unbiased perspective, specialized skills.
• Drawbacks: Potential lack of domain knowledge, higher costs.
Identify three potential risks of using tools to support testing.
Tool expectations may be unrealistic • the time, cost, and effort for the initial introduction of a tool may be underestimated • the tool may be relied on too much.
Black-box testing technique very useful for designing acceptance tests with customer/user participation: Use case testing
Replace with Key Terms
1. The time gap between date of detection & date of closure. Defect Age

Identify Type of Project Risk
1. Test environment not ready on time: Technical issues
2. Skills, training, and staff shortages: Organizational issues
3. Contractual issues: Supplier issues
4. Personnel issues: Organizational issues
5. Improper attitude, such as not appreciating the value of finding defects during testing: Organizational issues
6. Low quality of the design, code, or configuration data: Technical issues
Lecture 7_ Fundamentals of Agile Software Development
Whole-Team Approach
• The team should be relatively small (3-9 members) and share the same workspace (co-location). Work progress is communicated through daily stand-up meetings involving all members. The whole team is responsible for quality in Agile projects; testers work closely with both developers and business representatives.

Early and Frequent Feedback: Agile projects have short iterations, enabling the project team to receive early and continuous feedback on product quality.
Benefits of early and frequent feedback:
1. Avoiding requirements misunderstandings.
2. Discovering, isolating, and resolving quality problems early.

Aspects of Agile Approaches

1- Agile Approaches
1. Scrum: sprint duration is 2-4 weeks. Team performance is measured by velocity: the amount of backlog items the team can finish during one sprint. The measurement unit for the effort to complete a user story is story points.

2. Extreme Programming (XP)


Planning game, 2 levels of plans in XP (important):
• Release planning: the team and stakeholders collaboratively decide the requirements and features that can be delivered into production.
• Iteration planning: the team picks up the most valuable items from the list, breaks them down into tasks, estimates them, and commits to delivering them at the end of the iteration.
• In both levels there are 3 steps: exploration, commitment, steering (important).

Simple Design and Refactoring: start with a simple design; code is frequently refactored so that it stays maintainable. Simple designs make the ‘definition of done’ easier. XP teams also conduct small experiments called spikes.

Minimum Viable Product (MVP): a cross-functional team in XP releases a Minimum Viable Product frequently.
Advantages:
• Helps to break complex modules down into small chunks of code.
• Helps the development team and the on-site customer to demonstrate the product and focus only on the least amount of highest-priority work.

System Metaphor: a naming convention practice in design and code so teams have a shared understanding.
Pair Programming: two programmers operating on the same code and unit test cases on the same system. One plays the pilot (driver) role and focuses on writing clean code that runs; the second plays the navigator role, focusing on the big picture and reviewing the code.

Collective Code Ownership: the XP team always takes collective ownership of the code. Success or failure is a collective effort and there is no blame game. There is no single key player, so if there is a bug or issue any developer can fix it.

Test-driven Development: tests are written before the code, and ‘passing’ the tests is the critical driver of development. Code is developed incrementally, along with a test for each increment.

Slack Time: like TDD, continuous integration, and refactoring, slack time improves quality. The team does no actual development during slack time; it acts as a buffer to deal with uncertainties and issues. Teams use slack time to pay down technical debt by refactoring code.
3. Kanban: visualize and optimize the flow of work within a value-added chain. Used when work arrives in an unpredictable fashion and when you want to deploy work as soon as it is ready.
• Kanban Board: the value chain to be managed is visualized by a Kanban board. Each column shows a station, a set of related activities. Tasks to be processed are symbolized by tickets moving from left to right across the board.
• Work-in-Progress (WIP) Limit: the amount of parallel active tasks is strictly limited.
• Kanban lead time: the time between a request being made and a task being released (important).
• Kanban cycle time: measures actual work-in-progress time; tracks how long a task stays in the different stages (see the sketch below).***
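A minimal sketch of the two metrics above, with made-up timestamps: lead time runs from request to release; cycle time only counts from the moment work actually starts.

from datetime import datetime

requested = datetime(2024, 3, 1, 9, 0)
work_started = datetime(2024, 3, 3, 9, 0)
released = datetime(2024, 3, 6, 9, 0)

lead_time = released - requested        # 5 days
cycle_time = released - work_started    # 3 days
print(lead_time.days, cycle_time.days)  # 5 3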

2- Collaborative User Story Creation


• User Story: the Agile form of a requirements specification; describes how the system should behave for a single feature (small enough to be completed in a single iteration). It must address both functional and non-functional characteristics. Each story includes acceptance criteria, written to capture the requirements from the perspectives of developers, testers, and business representatives.
• Epics: larger collections of related features, sub-features that make up a single complex feature, or user stories.
• An Agile team considers a task finished when its set of acceptance criteria has been satisfied.
• Collaborative authorship of user stories can use techniques such as brainstorming and mind mapping; to write effective user stories, use the INVEST technique.
• 3Cs Concept
• Card: the physical medium describing a user story; it identifies the requirement.
• Conversation: explains how the software will be used; the conversation can be documented or verbal.
• Confirmation: the acceptance criteria discussed in the conversation are used to confirm that the story is done.

3- Retrospectives: a meeting held at the end of each iteration to discuss what was successful and what could be improved: process, people, organization, relationships, tools. All team members provide input on both testing and non-testing activities.

4- Continuous Integration (CI): delivery of a product increment requires reliable, working, integrated software at the end of every sprint.
Continuous integration addresses this challenge by merging all changes made and integrating all changed components regularly, at least once a day. In XP, developers work on local versions of the code, then integrate their changes every few hours or at least daily. After every code compilation and build the software is integrated and tested; if tests fail, they are fixed.
Continuous Integration Automated Activities (important; exam: mention the 6 automated activities in order, as in the sketch below)
1. Static code analysis: execute static code analysis and report the results
2. Compile: compile and link the code, generate executable files
3. Unit test: execute the unit tests
4. Deploy: install the build into a test environment
5. Integration test: execute the integration tests and report the results
6. Report: post the status of all these activities
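A minimal sketch of the six activities as a pipeline; the commands are placeholders for whatever tools a team actually uses (linter, compiler, test runner, deployment script), not a prescribed toolchain.

import subprocess

PIPELINE = [
    ("static analysis",   ["flake8", "src/"]),
    ("compile/build",     ["python", "-m", "compileall", "src/"]),
    ("unit tests",        ["pytest", "tests/unit"]),
    ("deploy to test",    ["./deploy_to_test_env.sh"]),   # hypothetical script
    ("integration tests", ["pytest", "tests/integration"]),
]

for name, cmd in PIPELINE:
    if subprocess.run(cmd).returncode != 0:
        print(f"REPORT: {name} failed - build stopped")   # activity 6: report status
        break
else:
    print("REPORT: all stages passed")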
Continuous Integration Challenges: CI tools must be introduced and maintained • the CI process must be defined and established • test automation requires additional resources and can be complex to establish.
1. The Extreme Programming team is not doing actual development during this time but acts as a buffer to deal with uncertainties and issues:
a. Slack  b. Cycle  c. Sprint  d. Lead
2. The 3Cs of writing user stories include all of the following, except:
a. Confirmation  b. Conversation  c. Check  d. Card
3. Methodology used in situations where work arrives in an unpredictable fashion or when you want to deploy work as soon as it is ready, rather than waiting for other work items:
a. Scrum  b. Kanban  c. Extreme Programming  d. Waterfall

Agile Principles _ lecture 8


• Stabilization iterations occur periodically to resolve any remaining defects; it is good to address defects remaining from the previous iteration at the beginning of the next iteration.
• Test automation at all levels of testing occurs in many Agile teams.
• A higher percentage of the manual testing on Agile projects tends to be done using experience-based techniques.
• Developers focus on creating unit tests.
• Testers should focus on creating automated integration, system, and system integration tests.
• Changes to existing features have testing implications, especially regression testing implications.
• The use of automated testing is one way of managing the amount of test effort associated with change.

Test Activities: during an iteration a user story progresses sequentially through the following test activities:
➢ Unit testing, typically done by the developer.
➢ Feature acceptance testing, sometimes broken into two activities:
➢ Feature verification testing, often automated, may be done by developers or testers, and involves testing against the user story’s acceptance criteria.
➢ Feature validation testing, usually manual, to determine whether the feature is fit for use and to receive real feedback from the business stakeholders.
➢ A parallel process of regression testing occurs throughout the iteration.
➢ There may be a system test level that involves executing functional and non-functional tests.

Project Work Products important


• Business-oriented work products: describe what is needed and how to use it.
• Development work products: describe how the system is built, implement the system code, or evaluate individual pieces of code.
• Test work products: describe how the system is tested, actually test the system, or present test results.

Documentation in Agile Projects: it is common practice to avoid producing vast amounts of documentation and to focus more on having working software with automated tests; teams balance increasing efficiency by reducing documentation against providing sufficient documentation to support business, testing, and development.

Test Status and Progress Testers in Agile teams utilize various methods to record test progress and status
• Test progress and status are delivered to the team via media such as wiki dashboards and dashboard-style emails, or verbally during stand-up meetings.
• Agile teams may use tools that automatically generate status reports based on test results and task progress.
• This method of communication also gathers metrics from the testing process.
• Teams may use burndown charts to track progress across the entire release and within each iteration.
• A burndown chart represents the amount of work left to be done against the time allocated to the release or iteration (illustrated below).
• To provide a visual representation of the team's status, story cards and development and test tasks are captured on a task board.
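As a small illustration of the data behind a burndown chart, the sketch below computes the remaining work per day; the effort figures are invented.

```python
# Sketch: the data a burndown chart plots - remaining work (story points) per day.
# The figures below are invented for illustration.
total_points = 40
completed_per_day = [0, 5, 3, 6, 4, 7]   # points finished on each day of the iteration

remaining = total_points
for day, done in enumerate(completed_per_day, start=1):
    remaining -= done
    print(f"Day {day}: {remaining} points remaining")
# Plotting 'remaining' against 'day' gives the burndown line; the ideal line
# falls linearly from total_points to zero over the iteration.
```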

Managing Regression Risk important


• Along with testing the code changes made in the current iteration, testers need to verify that no regression has been
introduced into features that were developed and tested in previous iterations.
• The risk of introducing regressions in Agile development is high due to extensive code churn.
• To maintain velocity without incurring a large amount of technical debt, it is critical that teams invest in test automation at all test levels as early as possible.
• All test assets should be kept up to date with each iteration. It is recommended that all test assets be maintained
in a configuration management tool to enable version control.
• Testers need to allocate time in each iteration to review manual and automated test cases from previous and current iterations.
Automated Acceptance Tests: run regularly as part of the continuous integration full system build. They run against a
complete system build at least daily but are not run with each code check-in; they provide feedback on product quality
with respect to regression since the last build, but they do not provide the status of overall product quality.

Agile Testing Techniques important


1. Test-driven Development (TDD): a test captures the programmer's concept of the desired functioning of a small piece of code (a sketch follows the table below)

Benefits of Test-driven Development:
• Code coverage: every code segment has at least one associated test, so all code written has at least one test
• Regression test suite: a regression test suite is developed incrementally as the program is developed
• Simplified debugging: when a test fails, it should be obvious where the problem lies; then check and modify the code
• System documentation: the tests themselves are a form of documentation that describes what the code should be doing
Problems of Test-driven Development:
• Programmers prefer programming to testing
• Some tests can be very difficult to write incrementally
• It is difficult to judge the completeness of a set of tests
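A minimal TDD-style sketch (pytest assumed as the test runner): the tests are written first and capture the intended behavior of a small piece of code; the apply_discount function and its behavior are hypothetical.

```python
# Tests written first (they fail until the code below exists and is correct).
def test_discount_is_applied_to_total():
    assert apply_discount(total=80.0, percent=50) == 40.0

def test_zero_percent_leaves_total_unchanged():
    assert apply_discount(total=50.0, percent=0) == 50.0

# Simplest code that makes the tests pass; refactor later if needed.
def apply_discount(total: float, percent: float) -> float:
    return total * (1 - percent / 100)
```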

2. Behavior-Driven Development (BDD): an extension of TDD. It is feature testing of the expected behaviors of an application as a whole. Teams that apply BDD extract one or more scenarios from each user story and then formulate
them as automated tests. A scenario represents a single behavior under specific conditions.
• BDD creates a functional test (a higher-level test) for a requirement; the test fails at first because the feature does not exist yet.

GWT Formula/Gherkin: BDD tests are written in a readable, business-oriented language such as Given-When-Then (Gherkin) to
ensure that tests are aligned with business requirements (a sketch follows).
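A sketch of one scenario in Given/When/Then form; it is expressed here as a plain pytest-style test with the Gherkin wording kept in comments, and the account behavior is an invented example.

```python
# Scenario: withdrawing from an account with sufficient funds (hypothetical).
#   Given an account with a balance of 100
#   When the user withdraws 30
#   Then the balance is 70
def test_withdraw_with_sufficient_funds():
    # Given: an account with a balance of 100
    balance = 100
    # When: the user withdraws 30
    balance -= 30
    # Then: the balance is 70
    assert balance == 70
```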

3. Acceptance Test-Driven Development (ATDD): an extension of the TDD approach. • It facilitates improved collaboration between testers, developers, and business participants. • Plain language is used to write acceptance tests
based on a shared understanding of the user story requirements. • ATDD considers the user acceptance test criteria as the foundation for development.
TDD vs. BDD vs. ATDD
• Definition: TDD is a development technique focused on individual units; BDD is a development technique focused on expected behavior; ATDD is focused on meeting the needs of the user
• Focus: TDD on unit tests; BDD on regression tests; ATDD on writing acceptance tests
• Understanding tests: TDD tests are written by and for developers; BDD and ATDD tests are written for anyone to understand
Test Pyramid is a model showing that different tests may have different granularity.
• Different goals are supported by different levels of test automation.
• There is a large number of tests at the lower levels; as development moves to the upper levels, the number of tests decreases.
• Usually, unit and integration level tests are automated and are created using API-based tools.
• At system and acceptance levels, the automated tests are created using GUI-based tools.

Automating API Tests: API tests ensure data flow and integration points are working as expected.
• Automating at this layer can also be much quicker than having to log into the UI and populate the information on-screen in order to trigger the API.
• Automating APIs directly also helps execute non-functional tests (see the sketch below).
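A hedged illustration of a minimal API test in Python using the requests library; the base URL, endpoint, and expected fields are hypothetical assumptions, not part of the syllabus.

```python
import requests

BASE_URL = "https://api.example.com"   # hypothetical endpoint

def test_get_user_returns_expected_fields():
    # Functional check on the integration point: status code and data shape.
    response = requests.get(f"{BASE_URL}/users/1", timeout=5)
    assert response.status_code == 200
    body = response.json()
    assert "id" in body and "name" in body

def test_response_time_is_acceptable():
    # A simple non-functional check, feasible because no UI login is needed.
    response = requests.get(f"{BASE_URL}/users/1", timeout=5)
    assert response.elapsed.total_seconds() < 1.0
```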

Agile Tester Skills


• Be positive
• Accurately evaluate and report test results, test progress, and product quality
• Collaborate, work in pairs with programmers, respond to change quickly
• Plan and organize their own work.

Sprint zero is the first iteration of the project where many preparation activities take place.

Quality Risks in Agile Projects


• Quality risk analysis takes place at two points:
• Release planning: business representatives who know the features in the release provide a high-level overview of the
risks, and the whole team, including the tester(s), may assist in risk identification and assessment
• Iteration planning: whole team identifies and assesses the quality risks.

Estimating Testing Effort Based on Content and Risk team estimates testing effort using techniques like planning
poker and T-shirt sizing. Risk analysis helps prioritize items in the backlog, but sometimes not all items can be com-
pleted in a single iteration.

Acceptance Criteria Topics: to be testable, acceptance criteria address the following topics


• Functional behavior: the externally observable behavior with user actions as input, operating under certain configurations
• Quality characteristics: how the system performs the specified behavior
• Scenarios (use cases): a sequence of actions between an external actor (e.g., a user) and the system to accomplish a task
• Business rules: activities performed in the system under certain conditions defined by outside constraints
• External interfaces: between the system and the outside world
• Constraints: design and implementation constraints that will restrict the options for the developer
• Data definitions: the customer may describe the format, data type, allowed values, and default values for a data item
“Done” in unit testing:
• 100% decision coverage where possible
• Static analysis performed on all code
• No unresolved major defects
• All code, unit tests, and unit test results reviewed
• All unit tests automated
“Done” in integration testing:
➢ All functional requirements tested
➢ All interfaces between units tested
➢ No unresolved major defects
➢ All defects found are reported
➢ All regression tests automated
“Done” in system testing:
▪ End-to-end tests of user stories, features, and functions
▪ All user personas covered
▪ The most important quality characteristics of the system covered
▪ Testing done in a production-like environment(s)
▪ All regression tests automated
▪ All defects found are reported and possibly fixed
▪ No unresolved major defects
“Done” for user stories:
✓ The user stories selected for the iteration are complete, understood by the team, and have testable acceptance criteria
✓ All elements of the user story are specified and reviewed, including the user story acceptance tests, and completed
✓ The tasks necessary to implement and test the selected user stories have been identified and estimated
“Done” for a feature:
❖ All user stories, with acceptance criteria, are approved by the customer
❖ The design is complete, with no known technical debt
❖ The code is complete, with no known technical debt or unfinished refactoring
❖ Unit tests have achieved the defined level of coverage
❖ Integration tests and system tests for the feature have been performed according to the defined coverage criteria
❖ No major defects remain to be corrected; feature documentation is complete
“Done” for the iteration:
• All features for the iteration are ready and individually tested
• Any non-critical defects that cannot be fixed within the constraints of the iteration are added to the product backlog
• Integration of all features for the iteration is completed and tested
• Documentation is written, reviewed, and approved
Key difference between the Definition of Done and acceptance criteria:
The DoD includes requirements such as fully integrated and peer-reviewed code, all unit tests passed, and completed documentation.
The DoD is a universal standard applicable to every user story; acceptance criteria are specific to each user story.

Defect intensity: how many defects are found per day or per transaction.
Defect density: number of defects found compared to the number of user stories.
Tools in Agile Projects Some Agile teams use an all-inclusive tool (Application Lifecycle Management ALM) that provides
features relevant to Agile development.
• wikis • instant messaging, and • desktop sharing

Wikis: allow teams to build and share an online knowledge base on tools and techniques for development and testing activities, for example:
• Tools and techniques for developing and testing found to be useful by other members of the team
• Metrics, charts, dashboards on product status, which is useful when the wiki is integrated with other tools
• Conversations between team members,
Desktop Sharing and Capturing Tools: used for distributed teams, product demonstrations, code reviews, and even pairing.
• Capturing product demonstrations at the end of each iteration, which can be posted to the team's wiki.
Automated test execution tools: Specific tools are available to support test-first approaches, such as TDD, BDD
Exploratory test tools: Tools capture and log activities performed on an application during an exploratory test session.
4. Testing technique where stakeholders write acceptance tests in plain language based on a shared understanding of the user story requirements: ATDD
Allows teams to build up an online knowledge base of tools and techniques for development and testing activities: Wiki (other options: burndown chart, Pareto chart, fishbone diagram)
8. The definition of “Done” in unit testing can be any of the following, except:
a. 100% decision coverage
b. No known unacceptable technical debt remaining in the design and code
c. All user personas covered (the exception; this belongs to system testing)
d. Static analysis performed on the code
Black-box testing technique for designing acceptance tests with user participation: Behavior-Driven Development (BDD)

Lecture 9: Database Testing


1. Structural Database Testing: a technique that validates all elements inside the data repository that are used for data storage
2. Schema Testing (mapping testing): validates the various schema formats associated with the database and verifies
whether the mapping formats of tables/views/columns are compatible with the mapping formats of the user interface

ACID Properties of Transactions

➢ Atomicity: ensures that either all parts of a transaction are completed successfully, or none of them are, leaving
the system in a consistent state
➢ Consistency: a transaction must preserve the database's integrity constraints; for example, an integrity constraint may state that the balance in the accounts table should remain consistent
➢ Isolation: ensures that each transaction is executed in isolation from other concurrently executing transactions. Isolation violations include:
Dirty read: one transaction reads data that has been modified but not yet committed by another transaction
Non-repeatable read: one transaction reads the same data twice, but the data has been modified by another transaction in between
Phantom read: one transaction reads a set of records matching certain conditions while another transaction inserts or deletes records matching the same condition
➢ Durability: once a transaction is committed, its changes are permanent and will survive system failures
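A small sketch of checking atomicity using Python's built-in sqlite3 module: a transfer that violates a constraint must roll back both legs, leaving the balances unchanged. The accounts table and values are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts ("
             "id INTEGER PRIMARY KEY, "
             "balance INTEGER NOT NULL CHECK (balance >= 0))")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 50)])
conn.commit()

try:
    with conn:  # transaction: commits on success, rolls back on exception
        conn.execute("UPDATE accounts SET balance = balance + 200 WHERE id = 2")
        # The debit below violates the CHECK constraint and aborts the transaction.
        conn.execute("UPDATE accounts SET balance = balance - 200 WHERE id = 1")
except sqlite3.IntegrityError:
    pass  # expected: the whole transaction is rolled back

# Atomicity check: neither leg of the failed transfer was applied.
balances = dict(conn.execute("SELECT id, balance FROM accounts"))
assert balances == {1: 100, 2: 50}
```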
1. Structural Database Testing
Keys and Indexes Testing

• Test queries with and without indexes to measure performance differences


• Verify that indexes are correctly defined on columns used in WHERE clauses (see the sketch below)
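A hedged sketch of both checks using Python's sqlite3: timing a lookup before and after creating an index, then confirming via EXPLAIN QUERY PLAN that the WHERE-clause column is served by the index. The table, data volume, and index name are invented.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 1000, i * 0.5) for i in range(100_000)])

def timed_lookup() -> float:
    # Time a single indexed-column lookup.
    start = time.perf_counter()
    conn.execute("SELECT COUNT(*) FROM orders WHERE customer_id = 42").fetchone()
    return time.perf_counter() - start

without_index = timed_lookup()
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
with_index = timed_lookup()
print(f"without index: {without_index:.6f}s, with index: {with_index:.6f}s")

# Verify the WHERE-clause column is actually served by the index.
plan = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchall()
assert any("idx_orders_customer" in str(row) for row in plan)
```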
Stored Procedures Testing

• Whether the manual execution of the Stored Procedure provides the end-user with the required result?
• Whether the manual execution of the Stored Procedure ensures the table fields are being updated as required
• Whether the execution of the Stored Procedures enables the implicit invoking of the required triggers
Trigger Testing

• Validation of the required Update/Insert/Delete triggers functionality in the realm of the application under test
• Whether the required coding conventions have been followed during the coding phase of the Triggers?
• Whether the trigger updates the data correctly once it has been executed?
Database Server Validation

• Check the database server configurations as specified by the business requirements


• Check the authorization of the required user to perform only those levels of actions
2. Functional Database Testing
A type of database testing used to validate the functional requirements of a database from the end-user's perspective.

• Test whether the transactions and operations performed by end users on the database work as expected.
• Whether a field marked as mandatory incorrectly allows NULL values on that field?
• Whether all similar fields have the same names across tables?
Checking data integrity and consistency

• Whether the data stored in the tables is correct and as per the business requirements?
• Whether there is any unnecessary data present in the application under test?
• Whether transactions are performed according to the business requirements and whether the results are correct?
• Whether the data has been properly committed once the transaction has been successfully executed?
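A minimal sketch of two of these integrity checks with sqlite3: a mandatory (NOT NULL) field must reject NULLs, and committed data must be visible from a fresh connection. The schema and values are invented.

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "shop.db")
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")

# Mandatory-field check: a NULL in a required column must be rejected.
try:
    conn.execute("INSERT INTO customers (email) VALUES (NULL)")
    raise AssertionError("NULL accepted in a mandatory field")
except sqlite3.IntegrityError:
    pass  # expected

# Commit check: committed data must be visible from a fresh connection.
conn.execute("INSERT INTO customers (email) VALUES ('a@example.com')")
conn.commit()
conn.close()

fresh = sqlite3.connect(db_path)
assert fresh.execute("SELECT COUNT(*) FROM customers").fetchone()[0] == 1
```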

Login and User Security


• Whether the data is secured from unauthorized access
• Check that sensitive data like passwords and credit card numbers are encrypted.
• Whether there are different user roles created with different permissions?
• Whether all the users have the required levels of access to the specified database
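A small sketch of the encryption/plaintext check with sqlite3 and hashlib; in practice a salted password-hashing scheme (e.g., bcrypt or argon2) would be used, and the sha256 call here only keeps the sketch dependency-free. The users table is invented.

```python
import hashlib
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password_hash TEXT)")

plain = "s3cret!"
# Store only a hash of the password, never the plaintext.
conn.execute("INSERT INTO users VALUES (?, ?)",
             ("alice", hashlib.sha256(plain.encode()).hexdigest()))

# Security check: the plaintext password must never appear in the table.
stored = conn.execute("SELECT password_hash FROM users WHERE name = 'alice'").fetchone()[0]
assert stored != plain
assert plain not in stored
```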
3. Non-Functional Database Testing
Categorized as required by the business requirements; these can be load testing, stress testing,
security testing, compatibility testing, and so on.

To check tables, use the USE keyword to switch to the respective database and the SHOW TABLES keyword to list the tables in that database.
Minimum system equipment requirement: the minimum system configuration that will allow the system to meet the expectations of the stakeholders.
Challenges
• Manual Complexity: Manual database testing can become intricate due to data volume, multiple relational data-
bases, and data complexity, leading to testing challenges.
• Automation Costs: Implementing automation tools can raise project costs, impacting budget considerations.
• Deep Expertise Needed: testers must possess in-depth knowledge of the database.
How can Automation Help? 1. It speeds up the testing process, reduces errors, and increases test coverage. 2. It performs
repetitive tests, freeing up time for testers to focus on complex testing scenarios. 3. Automation ensures consistency in testing.
