SE R23 Unit 4
Coding is undertaken once the design phase is complete and the design documents have been
successfully reviewed.
In the coding phase, every module specified in the design document is coded and unit tested.
During unit testing, each module is tested in isolation from the other modules: each module is tested independently as and when its coding is complete.
After all the modules of a system have been coded and unit tested, the integration and system testing phase is undertaken.
Over the years, the general perception of testing as monkeys typing in random data and trying to crash the system has changed. Now testers are looked upon as masters of specialised concepts, techniques, and tools.
CODING: The objective of the coding phase is to transform the design of a system into code in a high-
level language, and then to unit test this code.
Normally, good software development organizations require their programmers to adhere to some well-defined and standard style of coding, which is called their coding standard.
It is mandatory for the programmers to follow the coding standards.
Compliance of their code with the coding standards is verified during code inspection. Any code that does not conform to the coding standards is rejected during code review, and the code is reworked by the concerned programmer.
Good software development organizations usually develop their own coding standards and
guidelines depending on what suits their organization best and based on the specific types of
software they develop.
Coding Standards
1. Rules for limiting the use of global: These rules list what types of data can be declared global and
what cannot, with a view to limit the data that needs to be defined with global scope.
2. Standard headers for different modules: The header of different modules should have standard
format and information for ease of understanding and maintenance.
The following is an example of header format that is being used in some companies:
a. Name of the module.
b. Date on which the module was created.
c. Author’s name.
d. Modification history.
e. Synopsis of the module.
f. Different functions supported in the module, along with their input/output parameters.
g. Global variables accessed/modified by the module.
3. Naming conventions for global variables, local variables, and constant identifiers: A popular
naming convention is that variables are named using mixed case lettering. Global variable names
would always start with a capital letter (e.g., GlobalData) and local variable names start with small
letters (e.g., localData). Constant names should be formed using capital letters only (e.g.,
CONSTDATA).
4. Conventions regarding error return values and exception handling mechanisms: The way error conditions are reported by different functions in a program should be standard within an organization. For example, on encountering an error condition, all functions should consistently return either a 0 or a 1, independent of which programmer has written the code. This facilitates reuse and debugging. A sketch illustrating standards 2-4 follows this list.
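As an illustration, the sketch below shows how standards 2-4 above might be applied in one module. It is only a sketch under assumed conventions: the module name, dates, author, and tax-rate constant are invented for the example, and an organization may equally well standardize on error codes instead of a common exception class.

"""
Name of the module : stock_report        (all names here are made up for illustration)
Creation date      : 2024-01-15
Author             : A. Programmer
Modification history:
    2024-02-02     Added report footer.
Synopsis           : Builds a one-line daily stock report.
Functions          : BuildReport(items) -> str
Global variables accessed/modified : GlobalReportTitle (read only)
"""

CONSTTAXRATE = 1.18                    # constant identifier: capital letters only
GlobalReportTitle = "Daily stock"      # global variable: starts with a capital letter


class ReportError(Exception):
    """Single, organization-wide way of reporting error conditions (standard 4)."""


def BuildReport(items):
    localTotal = 0.0                   # local variable: starts with a small letter
    for name, price in items:
        if price < 0:
            raise ReportError(f"negative price for {name!r}")
        localTotal += price * CONSTTAXRATE
    return f"{GlobalReportTitle}: total = {localTotal:.2f}"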
Coding Guidelines
1. Do not use a coding style that is too clever or too difficult to understand: Code should be easy to
understand. Many inexperienced engineers actually take pride in writing cryptic and incomprehensible
code. Clever coding can obscure meaning of the code and reduce code understandability; thereby
making maintenance and debugging difficult and expensive.
2. Avoid obscure side effects: The side effects of a function call include modifications to the parameters
passed by reference, modification of global variables, and I/O operations. An obscure side effect is one
that is not obvious from a casual examination of the code.
3. Do not use an identifier for multiple purposes: Programmers often use the same identifier to denote several temporary entities (see the sketch after this list).
4. Each variable should be given a descriptive name indicating its purpose.
5. Use of variables for multiple purposes usually makes future enhancements more difficult.
6. Code should be well-documented: As a rule of thumb, there should be at least one comment line on
the average for every three source lines of code.
7. Length of any function should not exceed 10 source lines: A lengthy function is usually very
difficult to understand as it probably has a large number of variables and carries out many different
types of computations. For the same reason, lengthy functions are likely to have disproportionately
larger number of bugs.
8. Do not use GO TO statements: Use of GO TO statements makes a program unstructured. This makes the program very difficult to understand, debug, and maintain.
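The following sketch (with invented function names and data) contrasts a fragment that violates guidelines 3-5 with an equivalent fragment that follows them:

# Poor style: the identifier 'temp' is reused for two unrelated purposes.
def summarize(values):
    temp = sum(values)            # 'temp' first holds the total ...
    temp = temp / len(values)     # ... and then silently becomes the average
    return temp

# Better style: one descriptive name per purpose, plus a brief comment.
def summarize_clearly(values):
    total = sum(values)                 # running total of all readings
    average = total / len(values)       # arithmetic mean of the readings
    return average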
CODE REVIEW
• Review is a very effective technique to remove defects from source code. In fact, review has been
acknowledged to be more cost-effective in removing defects as compared to testing.
• Testing is an effective defect removal mechanism. However, testing is applicable to only executable
code.
• The reason behind why code review is a much more cost-effective strategy to eliminate errors from
code compared to testing is that reviews directly detect errors. On the other hand, testing only helps
detect failures and significant effort is needed to locate the error during debugging.
• Normally, the following two types of reviews are carried out on the code of a module:
o Code walkthrough.
o Code inspection.
Code walkthrough:
1. The main objective of code walkthrough is to discover the algorithmic and logical errors in the
code.
2. Code walkthrough is an informal code analysis technique.
3. In this technique, a module is taken up for review after the module has been coded, successfully
compiled, and all syntax errors have been eliminated.
4. A few members of the development team are given the code a couple of days before the
walkthrough meeting.
5. Each member selects some test cases and simulates execution of the code by hand (i.e., traces the
execution through different statements and functions of the code).
6. The members note down their findings of their walkthrough and discuss those in a walkthrough
meeting where the coder of the module is present.
Code Inspection:
1. The principal aim of code inspection is to check for the presence of some common types of errors
that usually creep into code due to programmer mistakes and oversights and to check whether
coding standards have been adhered to.
2. The programmer usually receives feedback on programming style, choice of algorithm, and
programming techniques.
Following is a list of some classical programming errors which can be checked during code inspection:
Use of uninitialized variables.
Jumps into loops.
Non-terminating loops.
Incompatible assignments.
Array indices out of bounds.
Improper storage allocation and deallocation.
Use of incorrect logical operators or incorrect precedence among operators.
Dangling reference caused when the referenced memory has not been allocated.
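Two of these classical errors are illustrated below in small, deliberately buggy Python fragments (the function names and data are invented for the example):

def total_price(prices):
    # Use of an uninitialized variable: 'total' is read before it is ever
    # assigned, so calling this function raises UnboundLocalError.
    for p in prices:
        total = total + p
    return total

def last_item(items):
    # Array index out of bounds: valid indices run from 0 to len(items) - 1,
    # so this raises IndexError.
    return items[len(items)]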
SOFTWARE DOCUMENTATION
When software is developed, in addition to the executable files and the source code, several kinds of
documents such as users’ manual, software requirements specification (SRS) document, design document,
test document, installation manual, etc., are developed as part of the software engineering process. All these documents are considered a vital part of any good software development practice, since good documents enhance the understandability and maintainability of the software.
Different types of software documents can broadly be classified into the following:
Internal documentation:
1. These are provided in the source code itself.
2. Internal documentation can be provided in the code in several forms.
3. The important types of internal documentation are the following:
a. Comments embedded in the source code.
b. Use of meaningful variable names.
c. Module and function headers.
d. Code indentation.
e. Code structuring (i.e., code decomposed into modules and functions).
f. Use of enumerated types.
g. Use of constant identifiers.
h. Use of user-defined data types
External documentation: These are the supporting documents such as the SRS document, installation document, user manual, design document, and test document.
Gunning’s fog index:
Gunning’s fog index (developed by Robert Gunning in 1952) is a metric that has been designed to measure the readability of a document. The computed metric value (fog index) of a document indicates the number of years of formal education that a person should have in order to be able to comfortably understand that document.
Example 10.1 Consider the following sentence: “The Gunning’s fog index is based on the premise that use of short sentences and simple words makes a document easy to understand.” Calculate its fog index. The fog index of this sentence is worked out in the sketch below.
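A sketch of the calculation follows. The standard Gunning formula is: fog index = 0.4 x (average number of words per sentence + percentage of words with three or more syllables). The example sentence has 23 words in 1 sentence; the complex-word count used below (three words: "sentences", "document", "understand") is an assumption that depends on how syllables are counted.

def fog_index(num_words, num_sentences, num_complex_words):
    """Gunning's fog index: 0.4 * (avg sentence length + % of 3+-syllable words)."""
    return 0.4 * (num_words / num_sentences + 100.0 * num_complex_words / num_words)

# Example 10.1: 23 words, 1 sentence, 3 words of three or more syllables (assumed).
print(round(fog_index(23, 1, 3), 1))    # ~14.4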
If a users’ manual is to be designed for use by factory workers whose educational qualification is class 8,
then the document should be written such that the Gunning’s fog index of the document does not exceed 8.
TESTING
Definition: Software testing is the process of identifying the correctness of software by considering all its attributes (reliability, scalability, portability, reusability, usability) and evaluating the execution of software components to find software bugs, errors, or defects.
Software Testing Principles
Software testing is the procedure of executing the software or application in order to identify defects or bugs. For testing an application or software, we need to follow some principles to make our product defect free, and these principles also help the test engineers to test the software with less effort and time. The seven essential principles of software testing are: testing shows the presence of defects, exhaustive testing is not possible, early testing, defect clustering, the pesticide paradox, testing is context dependent, and absence of errors is a fallacy.
Manual testing
The process of checking the functionality of an application as per the customer's needs, without taking the help of any automation tool, is known as manual testing. While performing manual testing on an application, we do not need specific knowledge of any testing tool; rather, we need a proper understanding of the product so that we can easily prepare the test documents.
Automation testing
Automation testing is the process of converting manual test cases into test scripts with the help of automation tools or a programming language. With the help of automation testing, we can enhance the speed of test execution, because no human effort is required during execution: we only need to write the test scripts and execute them, as in the sketch below.
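As a minimal sketch of such a test script (the function under test and the expected values are invented for the example, and pytest is assumed as the automation tool), a manual test case such as "add items to the cart and verify the total" becomes:

# test_cart.py -- run with:  pytest test_cart.py
def cart_total(prices, tax_rate=0.0):
    """Stand-in for the application code under test."""
    return round(sum(prices) * (1 + tax_rate), 2)

def test_total_without_tax():
    assert cart_total([10.0, 5.5]) == 15.5

def test_total_with_tax():
    assert cart_total([100.0], tax_rate=0.18) == 118.0

def test_empty_cart():
    assert cart_total([]) == 0.0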
UNIT TESTING
Unit testing is carried out after a module has been coded and reviewed: the individual functions (units) of the module are tested independently, in isolation from the rest of the system.
WHY UNIT TESTING
When a unit is tested in isolation, any error that shows up can be attributed to that unit alone, so errors are much easier to localize and fix, and the overall debugging effort is reduced compared to testing the module only after integration.
DEBUGGING
Debugging in Software Engineering is the process of identifying and resolving errors or bugs in a
software system. It’s a critical aspect of software development, ensuring quality, performance, and user
satisfaction. Despite being time-consuming, effective debugging is essential for reliable and
competitive software products.
Process of Debugging
Debugging is a crucial skill in programming. A typical debugging cycle proceeds step by step: reproduce the failure, locate the portion of the code responsible for it, analyze and identify the root cause, fix the defect, and then retest to confirm the fix and ensure that no new problems have been introduced.
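For example, in Python the built-in pdb debugger can be used to pause a program and inspect its state while locating a fault; the sketch below (with an invented, deliberately fragile function) shows the idea:

def average(values):
    total = 0
    for v in values:
        total += v
    breakpoint()                      # pause here and open the pdb prompt
    return total / len(values)        # fails with ZeroDivisionError for []

# At the (Pdb) prompt:  p values   prints a variable,
#                       n          steps to the next line,
#                       c          continues execution.
average([])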
A program analysis tool usually is an automated tool that takes either the source code or the executable
code of a program as input and produces reports regarding several important characteristics of the program,
such as its size, complexity, adequacy of commenting, adherence to programming standards, adequacy of
testing, etc. We can classify various program analysis tools into the following two broad categories:
1. Static analysis tools
2. Dynamic analysis tools
Static Analysis Tools:
Static program analysis tools assess and compute various characteristics of a program without executing it.
Typically, static analysis tools analyse the source code to compute certain metrics characterising the source
code (such as size, cyclomatic complexity, etc.) and also report certain analytical conclusions. These also
check the conformance of the code with the prescribed coding standards. In this context, it displays the
following analysis results:
To what extent the coding standards have been adhered to?
Whether certain programming errors such as uninitialised variables, mismatch between actual and
formal parameters, variables that are declared but never used, etc., exist?
A list of all such errors is displayed.
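As a tiny sketch of one such check ("variables that are declared but never used"), the fragment below uses Python's standard ast module to flag names that are assigned but never read; real static analysis tools perform many more checks than this:

import ast

def assigned_but_never_used(source):
    """Return the set of names that are assigned but never read in the source."""
    tree = ast.parse(source)
    assigned, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                used.add(node.id)
    return assigned - used

sample = "x = 1\ny = 2\nprint(x)\n"
print(assigned_but_never_used(sample))    # {'y'}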
SYSTEM TESTING
System tests are designed to validate a fully developed system to assure that it meets its requirements. The
test cases are therefore designed solely based on the SRS document.
There are essentially three main kinds of system testing depending on who carries out testing:
1. Alpha Testing: Alpha testing refers to the system testing carried out by the test team within the
developing organisation.
2. Beta Testing: Beta testing is the system testing performed by a select group of friendly customers.
3. Acceptance Testing: Acceptance testing is the system testing performed by the customer to determine
whether to accept the delivery of the system.
The system test cases can be classified into functionality and performance test cases. The functionality
tests are designed to check whether the software satisfies the functional requirements as documented in the
SRS document. The performance tests, on the other hand, test the conformance of the system with the non-
functional requirements of the system.
Smoke Testing:
Before a fully integrated system is accepted for system testing, smoke testing is performed. Smoke testing
is done to check whether at least the main functionalities of the software are working properly. Unless the
software is stable and at least the main functionalities are working satisfactorily, system testing is not
undertaken.
For smoke testing, a few test cases are designed to check whether the basic functionalities are working. For
example, for a library automation system, the smoke tests may check whether books can be created and
deleted, whether member records can be created and deleted, and whether books can be loaned and
returned.
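A minimal smoke-test sketch for such a library system is shown below; the Library class here is an invented stand-in (so the example is self-contained), and pytest is assumed as the test runner:

class Library:                               # invented stand-in for the real system
    def __init__(self):
        self.books, self.members, self.loans = set(), set(), {}
    def add_book(self, title):    self.books.add(title)
    def delete_book(self, title): self.books.discard(title)
    def add_member(self, name):   self.members.add(name)
    def issue_book(self, title, name):
        if title in self.books and name in self.members:
            self.loans[title] = name
    def return_book(self, title): self.loans.pop(title, None)


def test_smoke_create_and_delete_book():
    lib = Library()
    lib.add_book("SE Notes")
    assert "SE Notes" in lib.books
    lib.delete_book("SE Notes")
    assert "SE Notes" not in lib.books

def test_smoke_issue_and_return_book():
    lib = Library()
    lib.add_book("SE Notes")
    lib.add_member("Asha")
    lib.issue_book("SE Notes", "Asha")
    assert lib.loans["SE Notes"] == "Asha"
    lib.return_book("SE Notes")
    assert "SE Notes" not in lib.loans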
Scenario coverage: Each use case typically consists of a mainline scenario and several alternate
scenarios. For each use case, the mainline and all alternate sequences are tested to check if any
errors show up.
Testing derived classes: All derived classes of the base class have to be instantiated and tested. In addition to testing the new methods defined in the derived class, the inherited methods must be retested.
Association testing: All association relations are tested.
Aggregation testing: Various aggregate objects are created and tested.
SOFTWARE RELIABILITY
In software engineering, software reliability is the probability that a software system will function
without failure for a specified period of time and under specified conditions. It is a fundamental quality
attribute of software systems that ensures they are dependable, robust, and able to perform their intended
functions accurately and efficiently.
Importance of software reliability:
1. Customer Satisfaction: Software that is reliable meets user expectations by providing consistent
performance, which builds trust with customers.
2. Cost Efficiency: Systems that are unreliable may require more frequent updates, maintenance, or
repairs, which can be costly. Ensuring high reliability upfront saves money in the long run.
3. Safety and Risk Management: In critical systems (e.g., medical, aerospace, automotive),
reliability is directly tied to safety. A failure in such systems can have catastrophic consequences,
so ensuring reliability is essential to reduce risk.
4. Business Continuity: Software that is highly reliable ensures that business operations run
smoothly, minimizing downtime and disruptions.
5. Regulatory Compliance: In certain industries, software reliability is mandated by law (e.g., in
healthcare or financial systems), where reliability must meet specific standards to ensure safety and
accountability.
Challenges in achieving software reliability:
1. Complexity of Modern Systems: As systems become more complex, ensuring their reliability
becomes harder. Interactions between components, distributed systems, and third-party integrations
can introduce new failure points.
2. Unpredictability: Some software failures are not predictable, making it challenging to design
systems that can handle all failure scenarios effectively.
3. Limited Resources: Achieving high reliability often requires extensive testing, monitoring, and
debugging, which can be resource-intensive, especially in large-scale systems.
Statistical testing is critical for taking informed decisions regarding the development and deployment of the software. It helps the testers to detect trends in usage, patterns, and discrepancies in the input data sets. In this way, there is an overall improvement in software quality, optimization of resource allocation, and data-driven testing.
Statistical testing helps to measure the productivity of the complete development process, determines the areas of improvement, and analyzes the extent of modifications needed to develop quality, robust software.
Statistical testing is a critical test method used to analyze the quality of software using the
information obtained from the various testing methodologies.
The statistical testing uses the statistical procedures in the test data to conclude on the
characteristics, quality, and robustness of the software.
The statistical testing detects bugs, and weaknesses in the software by evaluating the information
gathered at the time of testing.
The statistical testing measures the efficiency of the testing process, and helps to make informed
decisions on whether the software is ready to be deployed to the production environment.
The statistical testing guides how to share the test results with the project stakeholders, by
highlighting the defects identified, and confirming whether the software meets the user
requirements.
The statistical testing gauges the performance of the software under various circumstances and
assists in taking up the optimal test strategies for obtaining a good test coverage.
Statistical testing compares the various versions of the software, analyzes the outcomes of the changes, and finally assesses the robustness of the software.
With the help of statistical testing, the testers evaluate the reliability, and performance of the
software by going through the test data, and detecting the areas of improvements.
By using the statistical testing methodologies, the testers can assess the software more efficiently.
By taking help of the statistical testing techniques, the testers can generate detailed test plans,
execute test cases, and analyze the test results correctly.
With the use of the statistical testing tools, the testers evaluate the test data, determine patterns, and
come up with data driven decisions at the time of testing.
Training on statistical testing techniques helps to provide a good understanding of the statistical terminology and methodologies, which helps throughout software development.
The testers develop the expertise to experiment, gather information, conduct hypothesis testing, and
come up with critical inferences which can be adopted to improve the software quality.
By using the statistical testing methodologies, the testers can assess the performance, reliability,
and robustness of the software.
Advantages of Software Statistical Testing
Statistical testing verifies the software against the way it is actually expected to be used, which ultimately helps to improve the quality of the software.
The estimation of the reliability of the software using statistical testing is far more accurate than with other procedures, namely ROCOF, POFOD, etc.
Disadvantages of Software Statistical Testing
Statistical testing is a complex activity, and there is no easy and repeatable way of describing the profiles.
The generation of statistical test cases is an inconvenient process, as the total number of test cases needed to verify the software is statistically large.
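As a small sketch of how usage-based statistical testing can feed a reliability estimate, the fragment below samples operations according to an assumed operational profile and estimates the probability of failure on demand (POFOD); all operation names, frequencies, and the simulated failure rate are invented for illustration:

import random

# Assumed operational profile: relative frequency of each operation in the field.
OPERATIONAL_PROFILE = {"issue_book": 0.55, "return_book": 0.35, "search_catalog": 0.10}

def run_operation(name):
    """Stand-in for invoking the real system; True means the demand succeeded."""
    return random.random() > 0.001        # pretend roughly 0.1% of demands fail

def estimate_pofod(num_demands=10_000):
    ops, weights = zip(*OPERATIONAL_PROFILE.items())
    failures = sum(
        1
        for _ in range(num_demands)
        if not run_operation(random.choices(ops, weights=weights)[0])
    )
    return failures / num_demands          # probability of failure on demand

print(f"Estimated POFOD: {estimate_pofod():.4f}")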
Software Quality
Software product quality is defined in terms of fitness of purpose. That is, a quality product does precisely what the users want it to do. For software products, fitness of purpose is generally explained in terms of satisfaction of the requirements laid down in the SRS document. Although "fitness of purpose" is a satisfactory interpretation of quality for many products such as a car, a table fan, a grinding machine, etc., for software products "fitness of purpose" is not a wholly satisfactory definition of quality.
Example: Consider a functionally correct software product, i.e., one that performs all the tasks specified in the SRS document but has an almost unusable user interface. Even though it may be functionally correct, we cannot consider it to be a quality product.
The modern view of quality associates several quality attributes with a software product, such as the following:
Portability: A software product is said to be portable if it can easily be made to work in various operating system environments, on different machines, and with other software products, etc.
Usability: A software product has better usability if various categories of users can easily invoke the
functions of the product.
Reusability: A software product has excellent reusability if different modules of the product can quickly be
reused to develop new products.
Correctness: A software product is correct if various requirements as specified in the SRS document have
been correctly implemented.
Maintainability: A software product is maintainable if bugs can be easily corrected as and when they
show up, new tasks can be easily added to the product, and the functionalities of the product can be easily
modified, etc.
Software Quality Management System
A quality management system is the principal methodology used by organizations to ensure that the products they develop have the desired quality.
Managerial Structure and Individual Responsibilities: A quality system is the responsibility of the organization as a whole. However, every organization has a separate quality department to perform the various quality system activities. The quality system of an organization should have the support of the top management. Without support for the quality system at a high level in the company, few members of staff will take the quality system seriously.
Quality System Activities: The quality system activities encompass the following:
Auditing of projects
Production of documents for the top management summarizing the effectiveness of the quality system in
the organization.
Quality control focuses not only on detecting defective products and removing them, but also on determining the causes behind the defects. Thus, quality control aims at correcting the causes of defects and not just rejecting the defective products. The next breakthrough in quality methods was the development of quality assurance methods.
The basic premise of modern quality assurance is that if an organization's processes are proper and are followed rigorously, then the products are bound to be of good quality. The modern quality assurance paradigm includes guidance for recognizing, defining, analyzing, and improving the production process.
Total quality management (TQM) advocates that the process followed by an organization must be continuously improved through process measurements. TQM goes a step further than quality assurance and aims at continuous process improvement. TQM goes beyond documenting processes to optimizing them through redesign. A term related to TQM is Business Process Reengineering (BPR).
BPR aims at reengineering the way business is carried out in an organization. From the above discussion, it can be stated that over the years, the quality paradigm has shifted from product assurance to process assurance, as shown in the figure.
The ISO 9000 series of standards is based on the premise that if a proper process is followed for production, then good quality products are bound to follow automatically. The types of industries to which the various ISO standards apply are as follows.
1. ISO 9001: This standard applies to the organizations engaged in design, development, production,
and servicing of goods. This is the standard that applies to most software development
organizations.
2. ISO 9002: This standard applies to those organizations which do not design products but are only involved in production. Examples of industries in this category include steel and car manufacturing industries that buy the product and plant designs from external sources and are engaged only in manufacturing those products. Therefore, ISO 9002 does not apply to software development organizations.
3. ISO 9003: This standard applies to organizations that are involved only in the installation and
testing of the products. For example, Gas companies.
ISO 9000 certification procedure:
1. Application: Once an organization decides to go for ISO certification, it applies to the registrar for registration.
2. Pre-Assessment: During this stage, the registrar makes a rough assessment of the organization.
3. Document review and Adequacy of Audit: During this stage, the registrar reviews the documents submitted by the organization and suggests improvements.
4. Compliance Audit: During this stage, the registrar checks whether the organization has complied with the suggestions made by it during the review or not.
5. Registration: The Registrar awards the ISO certification after the successful completion of all the
phases.
6. Continued Inspection: The registrar continues to monitor the organization from time to time.
Capability Maturity Model (CMM):
The model defines a five-level evolutionary path of increasingly organized and consistently more mature processes.
CMM was developed and is promoted by the Software Engineering Institute (SEI), a research and development center sponsored by the U.S. Department of Defense (DOD).
Capability Maturity Model is used as a benchmark to measure the maturity of an organization's software
process.
Methods of SEI CMM
Capability Evaluation: Capability evaluation provides a way to assess the software process capability of an organization. The results of capability evaluation indicate the likely contractor performance if the contractor is awarded the work. Therefore, the results of the software process capability assessment can be used to select a contractor.
Software Process Assessment: Software process assessment is used by an organization to improve its
process capability. Thus, this type of evaluation is for purely internal use.
SEI CMM categorized software development industries into the following five maturity levels. The various
levels of SEI CMM have been designed so that it is easy for an organization to build its quality system
starting from scratch slowly.
Level 1: Initial
Ad hoc activities characterize a software development organization at this level. Very few or no processes are defined and followed. Since software production processes are not defined, different engineers follow their own processes and, as a result, development efforts become chaotic. Therefore, it is also called the chaotic level.
Level 2: Repeatable
At this level, the fundamental project management practices like tracking cost and schedule are established.
Size and cost estimation methods, like function point analysis, COCOMO, etc. are used.
Level 3: Defined
At this level, the processes for both management and development activities are defined and documented. There is a common organization-wide understanding of activities, roles, and responsibilities. Though the processes are defined, the process and product qualities are not measured. ISO 9000 aims at achieving this level.
Level 4: Managed
At this level, the focus is on software metrics. Two kinds of metrics are collected.
Product metrics measure the features of the product being developed, such as its size, reliability, time
complexity, understandability, etc.
Process metrics reflect the effectiveness of the process being used, such as average defect correction time, productivity, the average number of defects found per hour of inspection, the average number of failures detected during testing per LOC, etc. The software process and product quality are measured, and quantitative quality requirements for the product are met. Various tools like Pareto charts, fishbone diagrams, etc., are used to measure the product and process quality. The process metrics are used to check whether a project performed satisfactorily. Thus, the outcome of process measurements is used to evaluate project performance rather than to improve the process.
Level 5: Optimizing
At this phase, process and product metrics are collected. Process and product measurement data are
evaluated for continuous process improvement.
SEI CMM provides a series of key areas on which to focus to take an organization from one level of
maturity to the next. Thus, it provides a method for gradual quality improvement over various stages. Each
step has been carefully designed such that one step enhances the capability already built up.
Six Sigma
Six Sigma is the process of improving the quality of the output by identifying and eliminating the causes of defects and reducing variability in manufacturing and business processes. The maturity of a manufacturing process can be described by a sigma rating indicating the percentage of defect-free products it creates. A Six Sigma process is one in which 99.99966% of all the opportunities to produce some feature of a component are statistically expected to be free of defects (3.4 defective features per million opportunities).
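The 3.4 defects-per-million figure can be reproduced with a short calculation; the sketch below uses the conventional 1.5-sigma long-term shift, which is an assumption of the standard Six Sigma tables rather than anything specific to this text:

from statistics import NormalDist

def sigma_level(dpmo):
    """Convert defects per million opportunities into a (1.5-sigma shifted) sigma level."""
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

print(round(sigma_level(3.4), 2))       # ~6.0  -> a "Six Sigma" process
print(round(sigma_level(6_210), 2))     # ~4.0  -> a four-sigma process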
By using the same raw material, machinery, and workforce, a Japanese firm took over Quasar television production and, within a few months, produced Quasar TV sets with far fewer defects. This was achieved by improving the management techniques.
Six Sigma was adopted by Bob Galvin, the CEO of Motorola, in 1986 and was registered as a Motorola trademark on December 28, 1993; Motorola subsequently became a quality leader.
1. Statistical Quality Control: Six Sigma is derived from the Greek Letter σ (Sigma) from the Greek
alphabet, which is used to denote Standard Deviation in statistics. Standard Deviation is used to
measure variance, which is an essential tool for measuring non-conformance as far as the quality of
output is concerned.
2. Methodical Approach: Six Sigma is not merely a quality improvement strategy in theory; it features a well-defined systematic approach of application in DMAIC and DMADV, which can be used to improve the quality of production. DMAIC is an acronym for Define-Measure-Analyze-Improve-Control. The alternative method DMADV stands for Define-Measure-Analyze-Design-Verify.
3. Fact and Data-Based Approach: The statistical and methodical aspects of Six Sigma show the scientific basis of the technique. This accentuates an essential element of Six Sigma: it is fact- and data-based.
4. Project and Objective-Based Focus: The Six Sigma process is implemented for an organization's projects, tailored to their specifications and requirements. The process is flexed to suit the requirements and conditions in which the projects are operating to get the best results.
5. Customer Focus: The customer focus is fundamental to the Six Sigma approach. The quality
improvement and control standards are based on specific customer requirements.
6. Teamwork Approach to Quality Management: The Six Sigma process requires organizations to get organized when it comes to controlling and improving quality. Six Sigma involves a lot of training depending on the role of an individual in the quality management team.
Six Sigma methodologies:
1. DMAIC
2. DMADV
DMAIC
It specifies a data-driven quality strategy for improving processes. This methodology is used to enhance an
existing business process.
1. Define: It covers the process mapping and flow-charting, project charter development, problem-
solving tools, and so-called 7-M tools.
2. Measure: It includes the principles of measurement, continuous and discrete data, and scales of
measurement, an overview of the principle of variations and repeatability and reproducibility (RR)
studies for continuous and discrete data.
3. Analyze: It covers establishing a process baseline, how to determine process improvement goals,
knowledge discovery, including descriptive and exploratory data analysis and data mining tools,
the basic principle of Statistical Process Control (SPC), specialized control charts, process
capability analysis, correlation and regression analysis, analysis of categorical data, and non-
parametric statistical methods.
4. Improve: It covers project management, risk assessment, process simulation, and design of
experiments (DOE), robust design concepts, and process optimization.
5. Control: It covers process control planning, using SPC for operational control and PRE-Control.
DMADV
It specifies a data-driven quality strategy for designing products and processes. This method is used to create new product or process designs in such a way that they result in more predictable, mature, and defect-free performance.