SQA Summary
Causal Analysis & Resolution: identify the causes of outcomes, prevent the recurrence of bad outcomes, and promote the recurrence of positive outcomes. Eliminates rework & improves quality; best performed right after the problem is first identified.
Causal Analysis in Agile Teams: collect impediment & retrospective data during each iteration; implement selected improvements during tasks when process performance exceeds expectations or fails to meet its quality objectives.
Root Cause Analysis (RCA): CMMI mechanism for analyzing defects to identify their causes. Identifies whether the defect was due to a "testing miss", a "development miss", or a "requirements or design miss".
Continuous Improvement: the organization continually improves the suitability and effectiveness of its quality management system.
Ex: Plan-Do-Check-Act (PDCA), a tool used to solve problems more efficiently & prevent recurring mistakes.
PLAN • Plan what needs to be done; break the work into smaller steps to build a proper plan with fewer failures.
DO • Apply everything that was considered during the previous stage. Standardization: make sure that everybody knows their roles and responsibilities.
CHECK • Audit your plan's execution and see if your initial plan worked.
ACT • If everything seems perfect, apply your initial plan (adopt it as the standard).
RACI Model/Matrix: shows how each person contributes to a project.
• Responsible: the person who makes sure that the right things happen at the right time.
• Accountable: the "owner" for ensuring that something gets done correctly; makes sure the responsible person/team knows the expectations of the project and completes the work on time.
• Consulted: provide their input to the task; several people may be consulted for a task.
• Informed: the people who need to be kept informed on the progress of the task.
Cost of Quality (CoQ): a method for calculating the costs companies incur to ensure products meet quality standards, plus the costs of products that fail them.
I. Prevention Cost: activities specifically designed to prevent poor quality in products.
II. Appraisal Cost: activities designed to evaluate products to assure conformance to quality requirements.
III. Internal Failure Cost: the product fails to conform to quality specifications before shipment.
IV. External Failure Cost: the product fails to conform to quality specifications after shipment.
V. Cost of Poor Quality (COPQ): the costs generated as a result of producing defective material.
Lecture 3 *********
• An error made by a human being produces a defect (bug/fault) in the program, which, if executed, leads to a failure.
Testing objectives
1. Evaluate work products such as requirements. Build confidence. Find and prevent defects. Reduce risks.
Seven Testing Principles
1. Testing shows the presence of defects: testing can show that defects are present but cannot prove that there are no defects.
2. Exhaustive testing is impossible: Testing everything is not feasible except for trivial cases.
3. Early testing: Testing activities should start as early as possible in the software or system development lifecycle.
4. Defect clustering: A small number of modules contain most of the defects discovered during pre-release testing.
5. Pesticide paradox: if the same tests are repeated over and over, they will eventually no longer find any new defects.
6. Testing is context-dependent: testing is done differently in different contexts.
7. Absence-of-errors fallacy: Finding and fixing defects does not help if the system built is unusable
• Confirmation bias: it is difficult to accept information that disagrees with currently held beliefs.
• Test objectives are useful if the test basis (for any level or type of testing) has measurable coverage criteria defined.
• Key performance indicators drive the activities that demonstrate achievement of software test objectives.
• Testing Throughout SDLC
• Waterfall model: testing only occurs after all other development activities are done
• Incremental model: testing a system in pieces incrementally.
• Iterative model: groups of features tested together in a series of cycles
Tester's role during each SDLC phase:
1. Requirement Analysis: Understand and analyze the requirements to create test plans and test cases.
2. Design: Review design documents and create detailed test cases based on the design.
3. Implementation: Develop automated tests and prepare for test execution.
4. Testing: Execute test cases, report defects, and verify fixes.
5. Deployment: Perform final acceptance testing and ensure a smooth transition to production.
6. Maintenance: Conduct regression testing and support ongoing maintenance activities.
Static testing depends on manual examination of work products or tool-driven evaluation of the code & can find maintainability defects.
Activities of Formal Review/Inspection (the most formal review type • led by a trained moderator • requires pre-meeting preparation):
1. Planning: identify scope, resources/timeframe, roles, entry & exit criteria, and review characteristics
2. Initiate review (kickoff): distribute documents & material, explain scope/process
3. Individual review: take notes on defects and questions, noting potential defects
4. Review meeting: discussing or logging, with documented results
5. Fixing and reporting: create defect reports; the author fixes the defects
6. Follow-up: checking whether defects have been addressed, gathering metrics, checking exit criteria
Formal Review Roles
Manager / decides on the execution of reviews, allocates time in project schedules
Moderator (Facilitator) / the person who leads, plans, and runs the review
Author / the writer or person who has responsibility for the document(s) to be reviewed
Reviewers / individuals with a specific technical background who identify and describe defects
Scribe / documents all problems; with the advent of tools to support the review process (especially the logging of defects), this role may be performed by a tool
Review Techniques
• Ad hoc: reviewers read the work product sequentially, identifying issues; highly dependent on reviewer skills.
• Checklist-based: reviewers detect issues based on checklists. A review checklist consists of a set of questions based on potential defects, which may be derived from experience.
• Scenarios and dry runs: reviewers are provided with structured guidelines on how to read through the work product; scenarios provide guidelines on how to identify specific defect types.
• Perspective-based reading: reviewers take on different stakeholder viewpoints in individual reviewing; different stakeholder viewpoints lead to more depth in individual reviewing.
Testing Methodologies
Black-Box Testing Method
• Testing without having any knowledge of the interior workings • The tester is unaware of the system architecture and
does not have access to source code. • User stories are used as test basis.
▪ Structural/White-box Testing: derives tests based on the system's internal structure or implementation. Internal structure may include code, architecture, workflows, and/or data flows within the system. Applies at all test levels.
▪ Change-related Testing: when changes are made to a system, testing should be done to confirm whether the changes have corrected the defect. Applies at all test levels.
Confirmation testing: the software should be re-tested to confirm that the original defect has been successfully removed.
Regression testing: checking if a fix for one thing accidentally breaks something else in the software. It’s making sure
that changes don’t cause new problems.
Maintenance Testing: after deploying a system, we need to make changes to it; maintenance testing makes sure that changes to the software system (like fixing defects or adding features) don't accidentally break other parts of the system that were working fine.
• Maintenance involves planned releases and unplanned releases (hot fixes).
• Triggers for maintenance: modification, migration, and retirement.
• Impact analysis is useful for regression testing during maintenance testing.
• Debugging development activity, not a testing activity. • Debugging identifies and fixes the source of defects
10. Reviewers are provided with structured guidelines on how to read through the work product based on its expected usage: Checklist-based   Ad hoc   • Dry run   Perspective-based reading
1. Who is responsible for testing in the following test levels: operation acceptance/unit/system testing.
• Operational Acceptance Testing: End users or operational team
• Unit Testing: Developers
• System Testing: Independent test team
List six steps of the formal review/inspection process:
1. Planning: Define the scope and schedule the review.
2. Kick-off: Distribute documents and explain the review objectives.
3. Preparation: Reviewers examine the documents individually.
4. Review Meeting: Discuss and identify defects collectively.
5. Rework: Author corrects the identified defects.
6. Follow-up: Verify corrections and close the review.
II. What does the tester do to facilitate testing in case of missing/not-developed components during integration testing?
Use stubs and drivers to simulate the missing components (see the sketch below).
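A minimal sketch of the stub-and-driver idea in Python; all class and method names are illustrative, not from the course. The stub returns canned responses in place of the missing payment component, and the driver exercises the component under test:

```python
# Stub: stands in for a component that is missing or not yet developed,
# so integration testing of OrderProcessor can proceed.
class PaymentServiceStub:
    def charge(self, amount):
        # Canned response instead of real payment behavior
        return {"status": "approved", "amount": amount}

# Component under test, which depends on the (missing) payment service.
class OrderProcessor:
    def __init__(self, payment_service):
        self.payment_service = payment_service

    def place_order(self, amount):
        result = self.payment_service.charge(amount)
        return result["status"] == "approved"

# Driver: calls the component under test with the stub plugged in.
if __name__ == "__main__":
    processor = OrderProcessor(PaymentServiceStub())
    assert processor.place_order(100) is True
    print("Integration path exercised with a stubbed payment service.")
```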
Testing used by developers of commercial off-the-shelf (COTS) software to get feedback from potential/existing users before the software product is put on the market: • Beta testing.
Identify three guidelines to the successful conduct of the review process.
• Define clear objectives and scope: ensure that the purpose and scope of the review are well-defined and understood by all participants.
• Involve the right stakeholders: include individuals with the necessary expertise and perspectives to provide valuable feedback.
• Prepare adequately: Participants should review the material beforehand and come prepared with comments and
questions.
3. Test Case Development: creation, verification, rework of test cases & test scripts. Test data is identified, created,
reviewed, and then reworked as well.
• Activities to be done: • Create test cases & automation scripts & test data
• Deliverables: • Test cases/script • Test data
4.Environment Setup: Decides the software and hardware conditions under which a work product is tested
• Prepare hardware and software requirement for the Test Environment
• Setup test Environment and test data list.
• Deliverables • Environment ready with test data set up.
5. Test Execution: the tester carries out the testing based on the test plans and the test cases prepared. Bugs are reported back to the development team for correction, followed by retesting.
6.Test Closure: Taking lessons from the current test cycle to remove the process bottlenecks for future test cycles
• Prepare Test closure report
• Prepare test metrics
Deliverables: • Test Closure report • Test metrics
Software test metrics: estimate the progress, quality, and health of a software testing effort and improve the efficiency and effectiveness of a software testing process. Ex (worked example below):
• Schedule slippage = (actual end date – estimated end date) / (planned end date – planned start date) * 100
• Number of tests run per time = number of tests run / total time
• Fixed Defects Percentage = (defects fixed / defects reported) * 100
• Percentage of test cases executed = (number of test cases executed / total number of test cases written) * 100
• Requirement Creep = (total number of requirements added / number of initial requirements) * 100
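A worked example of these formulas in Python, using made-up illustrative numbers:

```python
from datetime import date

# Schedule slippage: project planned for January, finished 5 days late.
planned_start, planned_end = date(2024, 1, 1), date(2024, 1, 31)
estimated_end, actual_end = date(2024, 1, 31), date(2024, 2, 5)
schedule_slippage = ((actual_end - estimated_end).days /
                     (planned_end - planned_start).days) * 100  # ~16.7%

tests_per_hour = 120 / 40            # 120 tests in 40 hours -> 3.0

fixed_defects_pct = (45 / 50) * 100  # 45 of 50 reported fixed -> 90.0%

pct_cases_executed = (180 / 200) * 100  # 180 of 200 written -> 90.0%

requirement_creep = (12 / 80) * 100  # 12 added to 80 initial -> 15.0%

print(schedule_slippage, tests_per_hour, fixed_defects_pct,
      pct_cases_executed, requirement_creep)
```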
Entry Criteria • set of conditions that should be met before starting software testing.
Ex: 1. Environment available & ready for use. 2. Testing tools ready for use. 3. Testable code available.
Exit Criteria • set of conditions that should be completed to stop software testing.
Ex: planned tests have been run. • Verify whether requirement coverage has been met. • There are NO critical or high-severity defects left unresolved. • Verify that all high-risk areas are completely tested.
• Test Reporting Audience: Tailor test reports based on the project context and the report’s audience.
• Test control describes any guiding or corrective actions taken as a result of information and metrics gathered
and reported.
Test strategy | Test plan
High-level document | Detailed document
Static document, can't be changed | Dynamic document, can be changed
Defined at organization level | Defined at project level
Derived from business requirements | Derived from product description, use cases
Developed by project manager | Prepared by test manager
Defines the overall approach and goals | Contains the plan for all testing activities to be done
Contents: scope and overview, test approach, testing tools, summary | Contents: test scope, resources, test levels and types, testing metrics
Test Scenario: gives the idea of what we have to test; a test scenario is like a high-level test case.
Test Case: test cases are the sets of positive and negative executable steps of a test scenario. Once the test scenarios are prepared, reviewed, and approved, test cases will be prepared based on these scenarios.
Use cases are very useful for designing acceptance tests with customer participation.
Defect report: after uncovering a defect (bug), testers generate a formal defect report to state the problem as clearly as possible so the defect can be found and fixed easily.
Test execution schedule: defines the order in which the test cases in test suites are to be run.
Requirements Traceability Matrix (RTM) links requirements to test cases during the validation process.
Importance of Traceability: analyzing the impact of changes, making testing auditable, meeting IT governance criteria, improving the understandability of test progress reports.
Test Closure Report: created once the testing phase is successfully completed by meeting the exit criteria; contains a detailed analysis of the bugs removed and errors found, and gives a summary of all the tests conducted.
6. Test strategy type where tests are designed and implemented and may immediately be executed in response to knowledge gained from prior test results (rather than being pre-planned):
a. Process-compliant   • b. Reactive   c. Model-based   d. Methodical
7. To be testable, acceptance criteria should address all the following topics, except:
a. Business rules
b. Quality characteristics
c. External interfaces
• d. Skills of scrum team
8. Which requirements don't have test cases? Complex requirements.
9. Which of the following scenarios has high priority and low severity?
• A. Spelling mistake of a company name on the home page
B. On a bank website, an error message pops up when a customer clicks on the transfer money button.
C. FAQ page takes a long time to load
D. Crash in some functionality which is going to be delivered.
Exercise: Specify Scenario Severity and Priority
• Submit button not working on login page; customers can't log in. Priority: High, Severity: High
• Crash in functionality which is going to be delivered after a couple of releases. Priority: Low, Severity: High
• Spelling mistake of a company name on the homepage. Priority: High, Severity: Low
• FAQ page takes a long time to load. Priority: Low, Severity: Low
• Company logo or tagline issues. Priority: High, Severity: Low
• Error message pops up when a customer clicks on the transfer money button. Priority: High, Severity: High
• Font family, font size, color, or spelling issue in the application or reports. Priority: Low, Severity: Low
Identify testing life cycle phases and deliverable from each phase.
4. State Transition Testing
• A system may exhibit a different response depending on current conditions or previous history (its state).
• Allows the tester to view the software in terms of its states, transitions between states, the inputs or events that trigger state changes, and the resulting actions.
• A state table shows the relationship between states and inputs and can highlight possible transitions that are invalid (sketch below).
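A minimal Python sketch of state transition testing for a hypothetical ATM card session (states, events, and names are illustrative): the state table drives tests of valid transitions and flags invalid ones.

```python
# State table: (current state, event) -> next state.
# Any (state, event) pair absent from the table is an invalid transition.
STATE_TABLE = {
    ("idle", "insert_card"): "awaiting_pin",
    ("awaiting_pin", "correct_pin"): "authenticated",
    ("awaiting_pin", "wrong_pin"): "awaiting_pin",
    ("authenticated", "eject_card"): "idle",
}

def transition(state, event):
    try:
        return STATE_TABLE[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event!r} in state {state!r}")

# Test cases for valid transitions
assert transition("idle", "insert_card") == "awaiting_pin"
assert transition("awaiting_pin", "correct_pin") == "authenticated"

# Test case for an invalid transition highlighted by the state table
try:
    transition("idle", "correct_pin")
    raise AssertionError("expected an invalid transition")
except ValueError:
    pass  # expected: the state table flags this pair as invalid
```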
Use Case Testing
• Test cases check how the system behaves in different situations and cover basic, exceptional, and error scenarios. Use cases involve collaboration between the system under test and actors (like users).
• Each use case has:
o Preconditions: Conditions that must be met for the use case to work.
o Postconditions: Observable results and the final system state after completing the use case.
• Designing test cases from use cases may be combined with other test design techniques.
• Use cases are valuable for designing acceptance tests with customer participation.
Benefits | Drawbacks
Independent testers are unbiased & see other defects | Isolation from the development team
An independent tester can verify assumptions made by people during specification & implementation of the system | Developers lose a sense of responsibility for quality
Configuration Management: its purpose is to establish & maintain the INTEGRITY of the work products of the system through the whole life cycle; it helps to uniquely identify the tested item during test planning.
Risk-based Testing: risk can be defined as the chance of an event or threat occurring and resulting in an undesirable problem. The level of risk is determined by: likelihood and impact.
Used to focus the effort required during testing; used to decide where/when to start testing and to identify areas that need more attention.
** Resulting product risk information is used to guide test activities. Elaborate:
Determine the test techniques to be employed. • Determine the extent of testing to be carried out. • Prioritize
testing to find the critical defects as early as possible.
Risk Types
1. Project Risks: situations that have a negative effect on a project's ability to achieve its objectives. Examples (important):
• Organizational factors: personnel and political issues; skill, training, and staff shortages; failure by the team to follow up on information; problems with testers communicating their needs.
• Technical issues: test environment not ready on time; low quality of the design, code, or test data; problems in defining the right requirements.
• Supplier issues: failure of a third party to deliver a necessary product or service; contractual issues.
2. Product Risks: the work product may fail to satisfy the legitimate needs of its users. Ex:
• System architecture may not support some non-functional requirement(s)
• A particular computation may be performed incorrectly
• Response-times may be inadequate for a high-performance transaction processing system
• User experience (UX) feedback might not meet product expectations
Bug Life Cycle (Defect Age = the time gap between date of detection & date of closure)
New: when a defect is logged and posted for the first time.
Assigned: after the tester has posted the bug, the lead approves that the bug is genuine and assigns it to the development team.
Open: the developer is working on the defect fix; if the defect isn't appropriate, they change its state to Duplicate or Deferred.
Fixed: when the developer makes the necessary code changes and verifies them, the bug is passed to the testing team.
Pending retest: the retesting is pending on the tester's end.
Retest: the tester retests the changed code to check whether the defect got fixed or not.
Verified: the tester tests the bug again after it is fixed; if the bug no longer exists, the status changes to "Verified".
Reopen: if the bug still exists even after being fixed by the developer.
Closed: if the tester feels that the bug no longer exists in the software.
Duplicate: if the bug is reported twice.
Rejected: if the developer feels that the bug is not genuine, they reject it.
Deferred: expected to be fixed in upcoming releases; the bug's priority is low.
Not a bug: if there is no change in the functionality of the application.
Test execution tools: execute test objects using automated test scripts; this requires significant effort. Approaches (sketch after the definitions below):
• Data-driven testing approach • Keyword-driven testing approach • Model-based testing (MBT) tools
Data-driven testing: separates the test inputs and expected results from a more generic test script.
Keyword-driven testing: every testing action (like opening or closing the browser, mouse clicks, keystrokes) is described by a keyword such as openbrowser or click.
Model-based testing: uses a model of the system under test (e.g., a UML diagram), built by the system designer, to generate test cases.
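A minimal sketch of the data-driven approach in Python's unittest; the discount function and the data rows are illustrative. The inputs and expected results sit in a table, separate from the single generic test script:

```python
import unittest

def discount(order_total):
    # Illustrative function under test: 10% off orders of 100 or more.
    return 0.10 if order_total >= 100 else 0.0

# Test data kept separate from the test logic: (input, expected) rows.
TEST_DATA = [
    (50, 0.0),
    (99, 0.0),
    (100, 0.10),
    (250, 0.10),
]

class DiscountDataDrivenTest(unittest.TestCase):
    def test_discount_table(self):
        # One generic script iterates over all data rows.
        for total, expected in TEST_DATA:
            with self.subTest(total=total):
                self.assertEqual(discount(total), expected)

if __name__ == "__main__":
    unittest.main()
```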
After completing the tool selection and a successful proof-of-concept evaluation, introducing the selected tool into an
organization generally starts with a pilot project. Why?
Gain in-depth knowledge about tool strengths and weaknesses, assess benefits, standardize usage, and minimize risks
before full-scale adoption.
II. The way in which independent testing is implemented varies depending on the software development lifecycle model. Mention one advantage and one drawback of using independent testers:
Advantage: independent testers are unbiased & see other defects. Drawback: isolation from the development team.
Black-box testing technique useful for testing the implementation of system requirements that specify how different combinations of conditions result in different outcomes: Decision Table Testing (sketch below).
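A minimal sketch of decision table testing for a hypothetical loan-approval rule (function and conditions are illustrative): each row is one rule, a combination of conditions plus the expected outcome, giving at least one test case per rule.

```python
def loan_approved(good_credit, has_income):
    # Illustrative business rule under test
    return good_credit and has_income

# Decision table: one rule per row -> (good_credit, has_income, expected).
DECISION_TABLE = [
    (True,  True,  True),
    (True,  False, False),
    (False, True,  False),
    (False, False, False),
]

# At least one test case per decision rule in the table.
for credit, income, expected in DECISION_TABLE:
    assert loan_approved(credit, income) == expected
print("All decision rules covered.")
```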
Testing technique where informal (not pre-defined) tests are designed, executed, logged, and evaluated dynamically during test execution; the test results are used to create tests for the areas that may need more testing: Exploratory Testing.
This type of testing is used to focus the effort required during testing; it is used to decide where/when to start testing and to identify areas that need more attention: Risk-Based Testing.
Black-box testing technique is used only when data is numeric or sequential. Boundary Value Analysis (BVA)
Tool used to establish and maintain integrity of work products (components, data and documentation) of the
system through the whole life cycle. Configuration Management Tool
How to measure decision testing coverage? Percentage = number of decision outcomes executed divided by the total number of decision outcomes (e.g., 6 of 8 outcomes executed = 75% decision coverage).
When is exploratory testing most appropriate? When there is limited documentation or time pressure
What is the common minimum coverage for decision table testing? At least one test case per decision rule in the table.
To achieve 100% coverage in equivalence partitioning, test cases must cover all identified partitions by using ----. Complete: one value from each partition (sketch below, combined with BVA).
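A minimal sketch combining equivalence partitioning and boundary value analysis for a hypothetical "valid age is 18–65" rule (the function and the chosen values are illustrative):

```python
def is_eligible(age):
    # Illustrative rule under test: valid ages are 18..65 inclusive.
    return 18 <= age <= 65

# Equivalence partitioning: one value from each partition
# (below range, inside range, above range).
partition_values = [10, 40, 70]

# Boundary value analysis: values on and around each boundary;
# only meaningful because age is numeric/sequential data.
boundary_values = [17, 18, 19, 64, 65, 66]

for age in partition_values + boundary_values:
    expected = 18 <= age <= 65
    assert is_eligible(age) == expected
print("All partitions and boundaries exercised.")
```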
Statement and decision testing are mostly used at the component test level. Complete.
Risk level of an event can be determined based on ---- and ----. Complete. Likelihood and impact.
Contrast benefits and drawbacks of hiring independent testers.
• Benefits: Unbiased perspective, specialized skills.
• Drawbacks: Potential lack of domain knowledge, higher costs.
Identify three potential risks of using tools to support testing.
• Expectations of the tool may be unrealistic. • The time, cost, and effort for the initial introduction of a tool may be underestimated. • The tool may be relied on too much.
Black-box testing technique very useful for designing acceptance tests with customer/user participation: Use Case Testing.
Replace with Key Terms
1. The time gap between date of detection & date of closure. Defect Age
Early and Frequent Feedback: Agile projects have short iterations, enabling the project team to receive early and continuous feedback on product quality.
Benefits of early and frequent feedback:
• Avoiding requirements misunderstandings.
• Discovering, isolating, and resolving quality problems early.
1- Agile Approaches
1. Scrum: sprint duration is 2-4 weeks. Team performance is measured by velocity: the amount of backlog items the team can finish during one sprint. The measurement unit for the effort to complete a user story is story points.
2. Extreme Programming (XP):
Simple Design and Refactoring: start with a simple design; code is frequently refactored so that it stays maintainable. Simple designs make the 'definition of done' easier. • XP teams conduct a small experiment called a spike.
Minimum Viable Product (MVP): a cross-functional team in XP releases a Minimum Viable Product (MVP) frequently. Advantages:
• Helps to break down complex modules into small chunks of code.
• Helps the development team, with the on-site customer, to demonstrate the product & focus only on the least amount of highest-priority work.
System Metaphor: a naming-convention practice in design & code to create a shared understanding between teams.
Pair Programming: two programmers operating on the same code & unit test cases on the same system. One plays the pilot role and focuses on clean code that runs. The second plays the navigator role, focuses on the big picture, and reviews the code.
Collective Code Ownership: the XP team always takes collective ownership of the code. • Success or failure is a collective effort and there is no blame game. • There is no one key player, so if there is a bug or issue, any developer can fix it.
Test-driven Development: tests are written before the code, and 'passing' the tests is the critical driver of development. • You develop code incrementally, along with a test for each increment (minimal sketch below).
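A minimal TDD sketch in Python (FizzBuzz-style and purely illustrative): the test for the next increment is written first and fails, then just enough code is added to make it pass.

```python
import unittest

def fizzbuzz(n):
    # Increment 2: added only after test_multiples_of_three failed.
    if n % 3 == 0:
        return "Fizz"
    # Increment 1: the simplest code that made test_plain_number pass.
    return str(n)

class FizzBuzzTest(unittest.TestCase):
    def test_plain_number(self):          # written before increment 1
        self.assertEqual(fizzbuzz(1), "1")

    def test_multiples_of_three(self):    # written before increment 2
        self.assertEqual(fizzbuzz(9), "Fizz")

if __name__ == "__main__":
    unittest.main()
```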
Slack Time • Like TDD, continuous integration & refactoring of code improve quality. The team is not doing actual development in slack time; it acts as a buffer to deal with uncertainties and issues. Teams use slack time to pay down technical debt by refactoring code.
3. Kanban: visualize and optimize the flow of work within a value-added chain. Used when work arrives in an unpredictable fashion & when you want to deploy work as soon as it is ready.
• Kanban Board: the value chain to be managed is visualized by a Kanban board. Each column shows a station (a set of related activities). Tasks to be processed are symbolized by tickets moving from left to right across the board.
• Work-in-Progress (WIP) Limit: the amount of parallel active tasks is strictly limited.
• Kanban lead time: the time between a request being made and the task being released. (important)
• Kanban cycle time: measures actual work-in-progress time; tracks how long a task stays in the different stages. (important)
3- Retrospectives: a meeting held at the end of each iteration to discuss what was successful and what could improve regarding process, people, organization, relationships, and tools. All team members provide input on both testing and non-testing activities.
4- Continuous Integration (CI): delivery of a product increment requires reliable, working, integrated software at the end of every sprint.
Continuous integration addresses this challenge by merging all changes made & integrating all changed components regularly, at least once a day. In XP, developers work on local versions of the code, then integrate their changes every few hours or daily. After every code compilation and build, the changes are integrated; if the tests fail, they are fixed.
Continuous Integration Automated Activities (important for the exam: mention 6 automated activities in order; a pipeline sketch follows the list)
1. Static code analysis: execute static code analysis and report results
2. Compile: compile and link the code, generate executable files
3. Unit test: execute the unit tests
4. Deploy: install the build into a test environment
5. Integration test: execute the integration tests and report results
6. Report: post the status of all these activities
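A sketch of these six activities as one ordered pipeline script; the tool commands (flake8, pytest, the deploy script name) are illustrative stand-ins for whatever the team's CI server actually invokes:

```python
import subprocess
import sys

# CI activities 1-5, in order; each entry is (name, command).
PIPELINE = [
    ("static code analysis", ["flake8", "src/"]),
    ("compile/build",        ["python", "-m", "compileall", "src/"]),
    ("unit tests",           ["pytest", "tests/unit"]),
    ("deploy to test env",   ["bash", "deploy_to_test.sh"]),
    ("integration tests",    ["pytest", "tests/integration"]),
]

def run_pipeline():
    for name, cmd in PIPELINE:
        if subprocess.run(cmd).returncode != 0:
            print(f"FAILED at: {name}")       # 6. report: post the status
            return False
    print("PASSED: all CI activities green")  # 6. report: post the status
    return True

if __name__ == "__main__":
    sys.exit(0 if run_pipeline() else 1)
```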
Continuous Integration Challenges • CI tools must be introduced and maintained • the CI process must be defined and established • test automation requires additional resources and can be complex to establish
1. The Extreme Programming team is not doing actual development during this time, but it acts as a buffer to deal with uncertainties and issues: • Slack   Cycle   Sprint   Lead
2. The 3Cs of writing user stories include all of the following, except:
Card   Conversation   Confirmation   • Check
3. Methodology used in situations where work arrives in an unpredictable fashion or when you want to deploy work as soon as it is ready, rather than waiting for other work items:
scrum   • kanban   extreme programming   waterfall
Test Activities: during an iteration, a user story will progress sequentially through the following test activities:
➢ Unit testing, typically done by the developer.
➢ Feature acceptance testing, sometimes broken into two activities:
➢ Feature verification testing, often automated, may be done by developers or testers, and involves testing against the user story's acceptance criteria.
➢ Feature validation testing, usually manual, to determine whether the feature is fit for use and to receive real feedback from the business stakeholders.
➢ A parallel process of regression testing occurs throughout the iteration.
➢ There may be a system test level that involves executing functional and non-functional tests.
Documentation in Agile Projects: it is common practice to avoid producing vast amounts of documentation and to focus more on having working software with automated tests; teams balance increasing efficiency by reducing documentation against providing sufficient documentation to support business, testing, and development.
Test Status and Progress: testers in Agile teams utilize various methods to record test progress and status:
• Delivered to the team via media such as wiki dashboards & dashboard-style emails, or verbally during stand-up meetings.
• Agile teams may use tools that automatically generate status reports based on test results and task progress.
• This method of communication also gathers metrics from the testing process.
• Teams may use burndown charts to track progress across the entire release and within each iteration.
• A burndown chart represents the amount of work left to be done against the time allocated to the release or iteration.
• To provide a visual representation of the team's status, capture story cards and development & test tasks on a task board.
2. Behavior-Driven Development (BDD): an extension of TDD; feature-level testing of the expected behaviors of an application as a whole. Teams that apply BDD extract one or more scenarios from each user story and then formulate them as automated tests. A scenario represents a single behavior under specific conditions.
• BDD creates a functional test (a higher-level test) for a requirement that fails because the feature doesn't exist yet.
GWT Formula/Gherkin: BDD tests are written in a readable, business-oriented language like Given-When-Then to ensure that tests are aligned with business requirements (sketch below).
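A sketch of one Given-When-Then scenario mapped onto a plain Python test; real BDD teams usually bind Gherkin text to step functions through a framework such as Cucumber or behave, and the Account class here is illustrative.

```python
# Scenario: Given an account with balance 100,
#           When 30 is withdrawn,
#           Then the balance is 70.

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        self.balance -= amount

def test_withdrawal_reduces_balance():
    account = Account(balance=100)   # Given
    account.withdraw(30)             # When
    assert account.balance == 70     # Then

if __name__ == "__main__":
    test_withdrawal_reduces_balance()
    print("Scenario passed.")
```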
3. Acceptance Test-Driven Development (ATDD): an extension of the TDD approach. • It facilitates improved collaboration between testers, developers, and business participants. • Plain language is used to write acceptance tests based on a shared understanding of the user story requirements. • ATDD considers the user acceptance criteria as the foundation for development.
TDD | BDD | ATDD
Definition: development technique focused on individual units | development technique focused on expected behavior | focused on meeting the needs of the user
Focus: unit tests | regression tests | writing acceptance tests
Understanding tests: tests written by and for developers | tests written for anyone to understand | tests written for anyone to understand
Test Pyramid: a model showing that different tests may have different granularity.
• Different goals are supported by different levels of test automation.
• There is a large number of tests at the lower levels; as development moves to the upper levels, the number of tests decreases.
• Usually, unit- and integration-level tests are automated and are created using API-based tools.
• At the system and acceptance levels, the automated tests are created using GUI-based tools.
Automating API Tests: API tests ensure data flow and integration points are working as expected (illustrative sketch below).
• Automating at this layer can also be a lot quicker than having to log into the UI and populate the information on-screen to trigger the API.
• Automating APIs directly will also help execute non-functional tests.
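A minimal sketch of an automated API test using Python's requests library; the endpoint URL and payload shape are hypothetical, not a real service.

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test

def test_create_user_returns_201():
    resp = requests.post(
        f"{BASE_URL}/users",
        json={"name": "Aya", "email": "aya@example.com"},
        timeout=5,
    )
    assert resp.status_code == 201                    # integration point works
    assert resp.json()["email"] == "aya@example.com"  # data-flow check
```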
Sprint zero is the first iteration of the project where many preparation activities take place.
Estimating Testing Effort Based on Content and Risk: the team estimates testing effort using techniques like planning poker and T-shirt sizing. Risk analysis helps prioritize items in the backlog, but sometimes not all items can be completed in a single iteration.
Defect intensity: how many defects are found per day or per transaction.
Defect density: the number of defects found compared to the number of user stories.
Tools in Agile Projects: some Agile teams use an all-inclusive tool (Application Lifecycle Management, ALM) that provides features relevant to Agile development:
• wikis • instant messaging • desktop sharing
Wikis: allow teams to build & share an online knowledge base on tools & techniques for development & testing activities:
• Tools and techniques for developing and testing found to be useful by other members of the team
• Metrics, charts, and dashboards on product status, which is useful when the wiki is integrated with other tools
• Conversations between team members
Desktop Sharing and Capturing Tools: used for distributed teams, product demonstrations, code reviews, and even pairing. • Capturing product demonstrations at the end of each iteration, which can be posted to the team's wiki.
Automated test execution tools: Specific tools are available to support test-first approaches, such as TDD, BDD
Exploratory test tools: Tools capture and log activities performed on an application during an exploratory test session.
4. Testing technique where stakeholders write acceptance tests in plain language based on a shared understanding of the user story requirements: • c. ATDD
Allow teams to build up an online knowledge base of tools and techniques for development and testing activities:
a. Burn-down   b. Pareto   c. Fishbone   • d. Wiki
8. Definition of "Done" in unit testing can be any of the following, except:
a. 100% decision coverage
b. No known unacceptable technical debt remaining in the design and code
• c. All user personas covered
d. Static analysis performed on the code
Black box testing technique for designing acceptance tests with user participation. Behavior-Driven Development (BDD)
Stored Procedure Testing
• Whether the manual execution of the stored procedure provides the end user with the required result?
• Whether the manual execution of the stored procedure ensures the table fields are being updated as required?
• Whether the execution of the stored procedures enables the implicit invoking of the required triggers?
Trigger Testing
• Validation of the required Update/Insert/Delete trigger functionality in the realm of the application under test.
• Whether the required coding conventions have been followed during the coding phase of the triggers?
• Whether the triggers update the data correctly once they have been executed?
Database Server Validation
• Test whether the transactions & operations performed by end users related to the database work as expected.
• Whether the field is mandatory while allowing NULL values on that field?
• Whether all similar fields have the same names across tables?
Checking data integrity and consistency
• Whether the data stored in the tables is correct and as per the business requirements?
• Whether there is any unnecessary data present in the application under test?
• Whether transactions are performed according to business requirements and whether the results are correct?
• Whether the data has been properly committed if the transaction has been successfully executed?
To check tables, use the USE keyword to select (check out) the database and the SHOW keyword (e.g., SHOW TABLES) to list the tables in the respective database.
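A minimal sketch of the trigger-testing checklist above, run against an in-memory SQLite database (table and trigger names are illustrative): it verifies that an UPDATE fires the audit trigger and that the trigger updated the data correctly.

```python
import sqlite3

# In-memory test database with one table, an audit table, and a trigger.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL);
    CREATE TABLE audit_log (account_id INTEGER, old_balance REAL,
                            new_balance REAL);
    CREATE TRIGGER trg_balance_audit AFTER UPDATE OF balance ON accounts
    BEGIN
        INSERT INTO audit_log VALUES (OLD.id, OLD.balance, NEW.balance);
    END;
    INSERT INTO accounts VALUES (1, 100.0);
""")

# Exercise the trigger, then check that it logged the change correctly.
conn.execute("UPDATE accounts SET balance = 150.0 WHERE id = 1")
assert conn.execute("SELECT * FROM audit_log").fetchone() == (1, 100.0, 150.0)
print("Trigger fired and recorded the expected audit row.")
```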
Minimum system equipment requirement: the minimum system configuration that will allow the system to meet the expectations of stakeholders.
Challenges
• Manual Complexity: manual database testing can become intricate due to data volume, multiple relational databases, and data complexity, leading to testing challenges.
• Automation Costs: Implementing automation tools can raise project costs, impacting budget considerations.
• Deep Expertise Needed: testers must possess in-depth knowledge of the database.
How can Automation Help? 1. It speeds up the testing process, reduces errors, & increases test coverage. 2. It performs repetitive tests, freeing up time for testers to focus on complex testing scenarios. 3. Automation ensures consistency in testing.