Chapters 1–3: Short Notes

The document provides a comprehensive overview of Software Quality Assurance (SQA), defining its key concepts, challenges, and factors affecting software quality. It outlines the components of SQA throughout the software project life cycle, including pre-project, development, and management aspects, along with methodologies for reviews and testing. Additionally, it emphasizes the importance of quality assurance activities, external participant management, and the use of CASE tools to enhance software quality.

Chapter 1: Introduction

What is Software Quality/SQA?

 Definitions:
o IEEE: Degree to which a system meets specified requirements and user
expectations.
o Crosby (1979): Quality means conformance to requirements.
o Juran (1988): Quality consists of product features meeting customer needs and
freedom from deficiencies.
o Pressman: Conformance to functional requirements, development standards,
and professional practices.
Software Quality Challenges:
 Characteristics of SQA environment:
o Contracted work, customer-supplier relationships, teamwork, coordination with
other teams, interfaces with other systems, long-term maintenance, and team
changes.
Software Quality Factors:
1. Product Operation Factors:
o Correctness: Accuracy, completeness, up-to-dateness, availability.
o Reliability: Failure prevention.
o Efficiency: Hardware resource optimization.
o Integrity: Security requirements.
o Usability: Training and operational ease.
2. Product Revision Factors:
o Maintainability: Efforts to identify and correct failures.
o Flexibility: Adaptive maintenance capabilities.
o Testability: Testing ease and effectiveness.
3. Product Transition Factors:
o Portability: Adaptation to different environments.
o Reusability: Use of modules in new projects.
o Interoperability: Interfaces with other systems.
SQA Tasks:
1. Establish organizational procedures and standards for high-quality software.
2. Plan and adapt quality procedures for specific projects.
3. Control processes to ensure adherence to standards.
4. Collect and analyze metrics to predict and control software quality.

Chapter 2: Components of Software Quality Assurance System Overview

Pre-Project SQA Components:

1. Contract Review:
o Objectives: Clarify customer requirements, examine alternatives, formalize
relationships, identify risks, estimate resources, assess company and customer
capacity, define partner roles, and protect proprietary rights.
o Stages:
 Proposal Draft Review: Review customer requirements, risks, and
formalities before submitting the proposal.
 Contract Draft Review: Ensure all understandings are documented, no
unclarified issues remain, and no unauthorized changes are introduced.
o Implementation: Depends on project size, complexity, and staff experience.
o Challenges: Time pressure, need for expertise, logistical issues.
o Recommendations: Schedule reviews, appoint a team leader, and distribute
tasks.
2. Development and Quality Plans:
o Development Plan Elements:
 Deliverables, interfaces, methodology, mapping of phases, milestones,
staff organization, risks, and cost estimation.
o Quality Plan Elements:
 Quality goals, planned reviews, software tests, acceptance tests, and
configuration management.

Software Project Life Cycle SQA Components:

 Verification, Validation, and Qualification: Ensure software meets requirements.
 Defect Removal Model: Analyze defect removal effectiveness and cost.
 Reviews: Conduct design reviews, peer reviews, and expert opinions.
 Software Testing: Define objectives, strategies, classifications, and implementation
methods.
 Maintenance: Include contributions from external participants.

Infrastructure SQA Components:

 Configuration Management: Manage tasks, tools, audits, change control, and release of
software versions.

Management SQA Components:

 Software Quality Metrics: Define objectives, classifications, and implementation methods.
 Software Quality Cost: Apply classic and extended cost models.

SQA Standards:

 Quality Management Standards: Examples include ISO standards and Capability Maturity
Model (CMM).
 Project Process Standards: Examples include IEEE/EIA standards.

Organizing for SQA:

 Human Components: Focus on teamwork, leadership, and coordination among team members.

Chapter 3: Software Quality Assurance (Detailed Explanation)

SQA Components in the Project Life Cycle

1. Project Life Cycle Stages:
o The project life cycle consists of two key stages:
 Development Life Cycle Stage: This is where the software is designed,
built, and tested. It involves activities like requirements gathering, design
specifications, coding, and unit testing.
 Operation-Maintenance Stage: After deployment, the software enters a
phase where it requires ongoing support, updates, and bug fixes. This stage
ensures the software remains functional and relevant over time.
2. Software Development Methodologies:
o Software Development Life Cycle (SDLC):
 A systematic process that follows a linear and sequential approach,
starting with defining requirements and ending with deployment and
maintenance.
 Waterfall Model: A classic representation of SDLC that emphasizes a step-
by-step approach. Each phase must be completed before moving to the
next, making it easy to manage but potentially inflexible to changes.
o Prototyping Model:
 Involves creating prototypes to gather user feedback early in the
development process. This iterative approach allows for adjustments
based on user input, enhancing user satisfaction.
o Spiral Model:
 Combines iterative development with risk assessment. Each iteration
includes planning, risk analysis, development, and customer evaluation,
allowing for adjustments based on ongoing feedback and risk
management.
o Object-Oriented Model:
 Focuses on the reuse of software components, which can significantly
improve development efficiency and quality. Object-oriented
programming allows developers to integrate existing modules into new
applications, reducing redundancy and potential defects.
3. Quality Assurance Activities:
o Quality assurance planners must:
 Identify necessary QA activities tailored to the project’s needs.
 Establish timing and types of activities to ensure thorough coverage.
 Assign responsibilities to team members while considering the
involvement of external consultants for unbiased evaluations.
o Factors influencing QA intensity include:
 Project Magnitude: Larger projects typically require more comprehensive
QA.
 Technical Complexity: More complex systems demand rigorous testing to
ensure all aspects function correctly.
 Team Experience: A well-qualified team can streamline QA processes and
improve overall effectiveness.
4. Verification, Validation, and Qualification:
o Verification: This process ensures that the outputs of a development phase meet
the specified conditions. It answers the question, "Are we building the product
right?"
o Validation: Ensures that the final product meets user requirements and
expectations. It addresses, "Are we building the right product?"
o Qualification: Determines if the system is suitable for operational use, aligning
with industry standards and regulations.
5. Reviews

As defined by IEEE (1990), a review process is:

 "A process or meeting during which a work product, or set of work products, is presented
to project personnel, managers, users, customers, or other interested parties for comment
or approval."
 Reviews acquire special importance in the SQA process because they provide early
detection and prevent the passing of design and analysis errors “downstream,” to stages
where error detection and correction are much more intricate, cumbersome, and therefore
costly.

Several Methodologies for Reviewing Documents:

 Formal design reviews
 Peer reviews (inspections and walkthroughs)
 Expert opinions

Types of Review

 Formal Design Reviews (DRs):
o The only review that is necessary for the approval of the design product.
o May be conducted at any development milestone requiring completion of an
analysis or design document.

Common Formal Design Reviews Include:

 DPR – Development Plan Review
 SRSR – Software Requirement Specification Review
 PDR – Preliminary Design Review
 DDR – Detailed Design Review
 DBDR – Data Base Design Review
 TPR – Test Plan Review
 STPR – Software Test Procedure Review
 VDR – Version Description Review
 OMR – Operator Manual Review
 SMR – Support Manual Review
 TRR – Test Readiness Review
 PRR – Product Release Review
 IPR – Installation Plan Review

Participants in a DR

 All DRs are conducted by a review leader and a review team. The choice of appropriate
participants is of special importance because of their power to approve or disapprove a
design product.

Characteristics of the Review Leader:

 Knowledge and experience in the development of projects of the type reviewed.
 Seniority at a level similar to, if not higher than, that of the project leader.
 A good relationship with the project leader and their team.
 A position external to the project team.

Preparations for a DR

Preparations for a DR session are to be completed by three main participants:

 Review Leader:
o Appoints team members, schedules review sessions, and distributes design
documents among team members.
 Review Team:
o Expected to review the design document and list their comments prior to the review
session. Checklists help ensure completeness.
 Development Team:
o Prepares a short presentation of the design document, focusing on main
professional issues awaiting approval.

The DR Session Agenda Includes:

1. A short presentation of the design document.
2. Comments made by members of the review team.
3. Verification and validation of comments to determine required actions.
4. Decisions about the design product, which determine the project’s progress.

Possible Decision Outcomes:

 Full approval: Continuation to the next phase with minor corrections.
 Partial approval: Continuation allowed with major action items required.
 Denial of approval: Requires a repeat of the DR due to significant defects.

Post-Review Activities

Following a review, it is essential to issue a report summarizing the discussions, decisions, and action items. This report helps track corrections and ensures accountability.

Pressman’s “Golden Guidelines” for a Successful Design Review

 Develop checklists for each type of design document.
 Train senior professionals to handle major technical and review process issues.
 Periodically analyze past DR effectiveness to improve methodology.
 Schedule DRs as part of the project activity plan.

Peer Reviews

The major difference between formal design reviews and peer review methods lies in their
participants and authority. Peer reviews involve project members at the same level, focusing on
detecting errors and deviations from standards.

Peer Review Methods:

 Inspections
 Walkthroughs

Inspection Comprehensive Infrastructure:

 Development of inspection checklists for each type of design document and coding
language.
 Training of professionals in inspection processes.
 Periodic analysis of past inspections for improvement.

Basic Activities of Peer Review Include:

 Participants of peer reviews
 Requisite preparations for peer reviews
 The peer review session
 Post-peer review activities
 Peer review efficiency and coverage

Expert Opinions

Expert opinions, prepared by outside experts, support quality evaluation by introducing additional
capabilities to the internal review staff. External experts transmit their expertise by preparing
judgments about documents or participating in internal review teams.

6. Software Testing:
o Objectives: The primary goals are to identify defects, ensure the software meets
quality standards, and compile records for future reference to prevent similar
issues.
o Strategies:
 Big Bang Testing: Tests the complete system at once, which can be
inefficient for larger systems.
 Incremental Testing: Involves testing individual components as they are
developed, allowing for earlier detection of defects and smoother
integration.
o Classifications:
 Black Box Testing: Focuses on outputs in response to various inputs, ignoring internal workings. This method is effective for functional testing.
 White Box Testing: Examines internal logic and structure, ensuring that all pathways are tested. It helps identify potential security vulnerabilities and logical errors. (A minimal black-box test sketch follows below.)
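
As a hedged illustration of the black-box view, the short sketch below checks only inputs against expected outputs. The function calculate_discount, its 10% discount rule, and the 100-unit threshold are assumptions introduced for this example; they are not taken from the chapter.

```python
# Minimal black-box style test sketch (illustrative only).
# calculate_discount is a hypothetical function assumed to grant a 10%
# discount on order totals of 100 or more.
import unittest


def calculate_discount(order_total: float) -> float:
    """Hypothetical function under test."""
    return order_total * 0.10 if order_total >= 100 else 0.0


class BlackBoxDiscountTest(unittest.TestCase):
    # Black-box view: only inputs and expected outputs are checked;
    # the internal structure of calculate_discount is never inspected.
    def test_discount_applied_at_threshold(self):
        self.assertAlmostEqual(calculate_discount(100), 10.0)

    def test_no_discount_below_threshold(self):
        self.assertAlmostEqual(calculate_discount(99.99), 0.0)


if __name__ == "__main__":
    unittest.main()
```

A white-box test of the same function would go further and require that both branches of the conditional be exercised, for example by tracking branch coverage.
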
7. Testing Implementation:

The main issues in testing implementation involve the effectiveness and efficiency of tests. These
factors are crucial for ensuring that the software meets quality standards while optimizing
resource usage. The implementation process can be broken down into several key components:

1. Planning

 Objective: Establish a clear testing strategy that aligns with project goals.
 Activities:
o Define the scope of testing, including what will be tested and what will not.
o Identify the types of tests to be performed (e.g., unit, integration, system, and
acceptance testing).
o Determine the resources needed, including personnel, tools, and environments.
o Establish a timeline for testing activities to ensure they fit within the overall
project schedule.

2. Design

 Objective: Create detailed test cases and scenarios based on requirements and
specifications.
 Activities:
o Develop test case documentation that outlines the inputs, actions, and expected
outcomes for each test.
o Use techniques such as equivalence partitioning and boundary value analysis to ensure comprehensive coverage (illustrated in the sketch at the end of this subsection).
o Design test data that reflects real-world use cases to enhance the validity of test
results.
o Create a test environment that simulates the production environment as closely
as possible to ensure accurate testing conditions.
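
As a hedged illustration of equivalence partitioning and boundary value analysis, the sketch below derives test cases for a hypothetical age field assumed to accept values from 18 to 65 inclusive. The validator is_valid_age and the chosen limits are assumptions made for this example, not requirements stated in the chapter.

```python
# Illustrative test-case design sketch for a hypothetical input field
# that accepts ages 18-65 inclusive (assumption for demonstration).
def is_valid_age(age: int) -> bool:
    """Hypothetical validator under test."""
    return 18 <= age <= 65


# Equivalence partitioning: one representative value per partition.
# Boundary value analysis: values on and immediately beside each limit.
test_cases = [
    (10, False),  # representative of the "below range" partition
    (40, True),   # representative of the "valid range" partition
    (80, False),  # representative of the "above range" partition
    (17, False),  # boundary: just below the lower limit
    (18, True),   # boundary: the lower limit itself
    (65, True),   # boundary: the upper limit itself
    (66, False),  # boundary: just above the upper limit
]

for value, expected in test_cases:
    actual = is_valid_age(value)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: is_valid_age({value}) -> {actual}, expected {expected}")
```

Each partition contributes one representative value, while boundary value analysis adds the values sitting exactly on and immediately beside each limit, where defects tend to concentrate.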

3. Execution

 Objective: Carry out the tests as planned and document the results.
 Activities:
o Execute test cases systematically, following the defined procedures.
o Record the outcomes of each test, noting any discrepancies between expected
and actual results.
o Report defects promptly to the development team for resolution.
o Conduct regression testing to verify that fixes do not introduce new issues.

 Effectiveness: Refers to the degree to which the tests identify defects. High effectiveness means that most defects are found and reported.
 Efficiency: Involves conducting tests in a manner that maximizes resource utilization, such as time and personnel, while minimizing costs. (An illustrative calculation of both measures follows below.)

o Steps involve determining the appropriate testing methodology, planning tests, and designing detailed test cases to ensure comprehensive coverage.
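
One common way to put rough numbers on test effectiveness and efficiency is sketched below. The formulas and figures are illustrative assumptions, not definitions or data taken from the chapter.

```python
# Illustrative sketch of simple test effectiveness and efficiency measures.
# All figures below are invented for demonstration purposes.

defects_found_in_testing = 45     # defects detected by the test team
defects_found_after_release = 5   # defects that escaped testing
test_hours_invested = 120         # total testing effort, in hours

# Effectiveness: share of all known defects that testing actually caught.
effectiveness = defects_found_in_testing / (
    defects_found_in_testing + defects_found_after_release
)

# Efficiency: defects detected per hour of testing effort.
efficiency = defects_found_in_testing / test_hours_invested

print(f"Defect detection effectiveness: {effectiveness:.0%}")    # 90%
print(f"Testing efficiency: {efficiency:.2f} defects per hour")  # 0.38
```
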
8. Maintenance:
o Maintenance is crucial for keeping the software functional and relevant,
categorized into:
 Corrective Maintenance: Fixes bugs and provides user support.
 Adaptive Maintenance: Adjusts the software to meet new requirements
or changes in the environment.
 Functionality Improvement Maintenance: Enhances existing features and
adds new functionality to improve performance.
o QA activities in maintenance ensure compliance with both functional and
managerial requirements, aiming to increase efficiency and reduce costs.
9. External Participants:
o Involvement of external participants (e.g., subcontractors and suppliers) can
introduce risks such as:
 Delays in project timelines.
 Quality issues from external contributions.
 Loss of control over project components.
o Effective management, including participation in design reviews and testing, can
help mitigate these risks.
10. CASE Tools:
o Computer-Aided Software Engineering (CASE) tools enhance quality by
automating various aspects of the software development process. They support:
 Identification of discrepancies in design and compliance.
 Ensuring consistent documentation and coding practices.
 Facilitating efficient updates and maintenance of software systems.
