Software Engineering (SE)
Software Engineering is a branch of computer science that deals with the systematic development, operation, maintenance, and retirement of
software. It applies engineering principles to software development to create reliable and efficient software.
Key Goals:
High-quality software
Delivered on time
Within budget
2. Software Components
These are the building blocks of a software system. Major components include:
1. User Interface (UI) – How users interact with the software.
2. Business Logic – Core functionality and rules of the system.
3. Data Access Layer – Mechanism for accessing and managing data.
4. Database – Stores persistent data.
5. Middleware – Allows communication between components.
Optional:
Web Services/APIs
Security Modules
3. Software Crisis
This refers to the problems faced during the early days of software development (1960s–70s), many of which are still relevant today.
Major Issues:
Unmanageable complexity
Difficulty in maintenance
Causes:
Unclear requirements
Inadequate testing
Main Phases:
1. Requirements Gathering
2. System Design
3. Implementation (Coding)
4. Testing
5. Deployment
6. Maintenance
Common Models:
Waterfall Model
Agile Development
Iterative Model
Spiral Model
V-Model
Similarities:
Use of structured processes
Emphasis on quality
Software Quality Attributes
These are non-functional requirements that determine how well the software performs.
Key Attributes:
1. Correctness – Meets the specified requirements.
2. Reliability – Performs consistently over time.
3. Efficiency – Uses resources optimally (CPU, memory).
4. Usability – Easy to understand and operate.
5. Maintainability – Easy to fix and upgrade.
6. Portability – Works across different environments.
7. Scalability – Performs well under increased load.
8. Security – Resists unauthorized access and attacks.
9. Reusability – Components can be reused in other projects.
Software Development Life Cycle (SDLC)
The SDLC is a structured framework that describes the phases involved in the development of software from initial feasibility study to
maintenance. Each model within SDLC provides a different approach to software development.
1. Waterfall Model
Definition:
The Waterfall Model is a linear and sequential approach where each phase must be completed before the next one begins.
Phases:
1. Requirement Analysis
2. System Design
3. Implementation
4. Integration and Testing
5. Deployment
6. Maintenance
Advantages:
Well-structured documentation.
2. Prototype Model
Definition:
This model involves building a working prototype of the system early in the process to help understand and refine requirements.
Steps:
1. Requirements gathering
2. Quick design
3. Build prototype
4. User evaluation
5. Refinement
6. Final system development
Advantages:
Early feedback from users helps clarify unclear requirements.
Disadvantages:
Can be time-consuming.
3. Spiral Model
Definition:
The Spiral Model, proposed by Barry Boehm, combines the features of the Waterfall and Prototype models with an emphasis on risk analysis.
Phases (repeated in each loop):
1. Determine objectives, alternatives, and constraints
2. Identify and resolve risks
3. Develop and verify deliverables
4. Plan the next iteration
Advantages:
Strong focus on risk management.
4. Evolutionary Development Model
Definition:
This model focuses on developing the system in small pieces (increments) with frequent user interaction. Each release is a working version.
Types:
Advantages:
Disadvantages:
5. Iterative Enhancement Model
Definition:
In this model, the system is developed in repeated cycles (iterations). Each iteration includes design, development, and testing.
Process:
1. Initial planning
2. Design and develop a small module
3. Test and evaluate
4. Improve in next iteration
Advantages:
Continuous improvement.
Disadvantages:
Summary Table
Model | Best For | Key Feature | Main Drawback
Waterfall | Simple, well-defined projects | Linear, sequential phases | Inflexible to changes
Prototype | Projects with unclear requirements | Early feedback from users | May lead to poor design
Spiral | Large, high-risk projects | Risk analysis + iterative design | Costly and complex
Evolutionary Development | Projects needing early versions | Frequent updates, user input | Integration challenges
Iterative Enhancement | Complex systems with evolving features | Repeated improvement | Scope creep risk
UNIT 02
Contents of an SRS:
1. Introduction – Purpose, Scope, Definitions and acronyms
2. Overall Description – Product perspective, Product functions, User characteristics
3. Functional Requirements – What the system should do (e.g., login, register, generate reports)
4. Non-functional Requirements
5. External Interfaces
6. Constraints
Phases of RE:
a. Elicitation (Gathering Requirements)
Techniques:
Challenges:
Users may not know what they want
Communication barriers
Unclear goals
b. Analysis
Goal: Organize and model the gathered information, remove conflicts and ambiguities.
Activities:
Tools:
c. Documentation
Goal: Formally write down all functional and non-functional requirements in the SRS document.
Tips:
Use clear, unambiguous language
d. Review
Goal: Ensure the documented requirements are complete, correct, and understandable.
Activities:
Peer reviews
e. Requirements Management
Tasks:
Re-prioritization
Why it's needed:
Requirements evolve during the project and must be tracked, traced, and kept up to date.
3. Feasibility Study
Definition:
A feasibility study evaluates a project’s potential for success before it is developed.
Types:
1. Technical Feasibility – Is the technology available and practical?
2. Economic Feasibility – Is the project cost-effective?
3. Operational Feasibility – Will the system work in the intended environment?
4. Legal Feasibility – Are there any legal issues (licensing, data privacy)?
5. Schedule Feasibility – Can it be done in the given time?
4. Information Modelling
Definition:
Information modeling is about representing system data, structure, and flow visually to help understand the system requirements.
Used for:
Clarifying requirements
Identifying entities, attributes, relationships
Data Flow Diagram (DFD)
Components:
1. Process – A function or activity (circle)
2. Data Flow – Movement of data (arrow)
3. Data Store – Where data is stored (open-ended rectangle)
4. External Entity – Source or destination of data (rectangle)
Levels:
Entity-Relationship (ER) Diagram
Purpose:
Models the system's data as entities and their relationships, mainly for database design.
Components:
1. Entity – Object or concept (rectangle)
2. Attribute – Property of an entity (oval)
3. Relationship – Association between entities (diamond)
4. Primary Key – Unique identifier for an entity
Used For:
Database design
Structure:
Example:
Benefits:
Reduces ambiguity
IEEE 830 SRS Outline:
1. Introduction – Purpose, Scope, Definitions
2. Overall Description – User needs, Assumptions and dependencies
3. Functional Requirements
4. Non-functional Requirements
5. External Interfaces
Traceability – Can trace each requirement to its source and through development stages.
IEEE 830 was superseded by IEEE 29148, which is part of ISO/IEC/IEEE 29148:2018 (modern integrated standard for requirements).
Verification:
Ensures the product is built correctly (i.e., follows the design and specification).
Validation:
Ensures the right product is built (i.e., the software meets the user's actual needs).
b. SQA Plans
An SQA Plan outlines the process, tools, and responsibilities for achieving quality goals.
Key Elements:
1. Purpose and scope
2. Standards and procedures
3. SQA tasks and responsibilities
4. Review and audit procedures
5. Test plans and strategies
6. Problem reporting and corrective action
7. Tools, methods, and techniques
8. SQA reporting and records
These frameworks provide structured methods and standards to ensure software quality:
Popular Frameworks:
CMM/CMMI
Six Sigma
They guide how organizations plan, monitor, and improve their quality processes.
ISO 9000 is a family of quality management standards that apply across industries, including software.
Key Concepts:
In Software:
Benefits:
International recognition
Purpose:
CMM Levels:
Level | Maturity Stage | Description
1 | Initial | Ad hoc, chaotic processes
2 | Repeatable | Basic project management
3 | Defined | Standardized software processes
4 | Managed | Metrics-driven process control
5 | Optimizing | Continuous process improvement
Comparison: ISO 9000 vs SEI-CMM
Feature | ISO 9000 | SEI-CMM
Scope | All industries | Software industry
Focus | Quality assurance | Process maturity
Certification Level | Organization | Process capability level
Nature | Compliance to standards | Capability and improvement
UNIT 03
Software design is the process of transforming user requirements (from the SRS) into a suitable form, which helps in implementation and
coding. It defines how the system will be built.
Key Objectives:
Satisfy requirements
Improve maintainability
Minimize complexity
Types of Design:
Architectural design defines the overall structure of the system, the main components (modules), and how they interact.
Focus:
Artifacts Produced:
Block diagrams
Subsystems
Component interfaces
Examples:
Client-server architecture
Microservices architecture
Low-Level Design (LLD) describes the internal logic of each module or component defined in the high-level design.
Focus:
Internal workflows
Data structures
Algorithms
Function definitions
4. Modularization
Definition:
Modularization is the process of dividing a system into independent modules to reduce complexity and enhance reusability.
Benefits:
Encourages reuse
Principles:
5. Design Structure Charts (DSCs)
Definition:
DSCs are hierarchical diagrams that show the modules in a system and their calling relationships.
Components:
Rectangles: Modules
6. Pseudo Code
Definition:
Pseudo code is an informal high-level description of an algorithm that uses the structural conventions of programming without adhering to
syntax rules.
Purpose:
Helps plan logic before coding
Example:
Function Factorial(n)
if n == 0 then
return 1
else
return n * Factorial(n-1)
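The pseudo code above maps almost directly to a real language; here is a minimal Python rendering of the same logic:

```python
def factorial(n):
    """Recursive factorial, mirroring the pseudo code above."""
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)

print(factorial(5))  # → 120
```

This shows why pseudo code is useful: the logic was fully planned before any syntax concerns arose.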
7. Flow Charts
Definition:
A flowchart is a graphical representation of a step-by-step process, showing decisions, loops, and actions.
Symbols Used:
Symbol | Meaning
Oval | Start/End
Rectangle | Process Step
Diamond | Decision
Arrow | Flow direction
Benefits:
Cohesion (Intra-module):
Definition:
How closely related and focused the responsibilities of a single module are.
Types (best to worst):
1. Functional – All elements contribute to a single, well-defined task (Best)
2. Sequential – Output of one part is input to another
3. Communicational – Operate on the same data
4. Procedural – Perform steps in a specific order
5. Temporal – Tasks done at the same time
6. Logical – Grouped by category, selected by a control flag
7. Coincidental – Random grouping (Worst)
Coupling (Inter-module):
Definition:
The degree of interdependence between modules.
Types (worst to best):
1. Content Coupling – One module modifies the internals of another (Worst)
2. Common Coupling – Modules share global data
3. Control Coupling – One module controls the flow of another
4. Stamp Coupling – Passes a whole data structure when only part is needed
5. Data Coupling – Only necessary data is passed (Best)
Goal in Design:
High cohesion
Low coupling
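To make the goal concrete, here is a small Python sketch (the function names and tax example are illustrative, not from the notes) contrasting data coupling with common coupling:

```python
# Data coupling (preferred): the module receives exactly the values it needs.
def compute_tax(amount, rate):
    return amount * rate

# Common coupling (discouraged): modules read shared global state, so a
# change to TAX_RATE silently affects every module that depends on it.
TAX_RATE = 0.25
def compute_tax_shared(amount):
    return amount * TAX_RATE

print(compute_tax(100, 0.25))  # → 25.0
```

The first function can be tested and reused in isolation; the second cannot be understood without knowing who else mutates the global.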
1. Function-Oriented Design
Definition:
Function-Oriented Design is a strategy where software is decomposed based on functions or procedures. It focuses on what the system does
(i.e., behavior or transformations of data).
Key Features:
Process:
1. Start with high-level functions (e.g., “process transaction”)
2. Decompose into sub-functions (e.g., “validate input”, “calculate bill”)
3. Continue until low-level, implementable functions are reached
Advantages:
Disadvantages:
Poor encapsulation
2. Object-Oriented Design
Definition:
Object-Oriented Design focuses on modeling software based on real-world entities (objects), which have attributes (data) and methods (behavior).
Key Concepts:
Design Process:
1. Identify objects from requirements
2. Group similar objects into classes
3. Define relationships (inheritance, association)
4. Design interactions using UML (Unified Modeling Language)
Advantages:
3. Top-Down Design
Definition:
Top-Down Design is a stepwise refinement approach where the system is broken down from the highest level into smaller components.
Process:
1. Start with the main system goal or module
2. Decompose it into subsystems or sub-functions
3. Repeat until each module can be implemented directly
Advantages:
Disadvantages:
Lower-level issues may be overlooked initially
4. Bottom-Up Design
Definition:
Bottom-Up Design starts from basic, reusable components and builds up to create larger systems.
Process:
1. Identify reusable modules (e.g., authentication, database access)
2. Integrate them to form higher-level functionalities
3. Assemble the complete system from these blocks
Advantages:
Promotes reuse
Disadvantages:
1. Software Measurement
Definition:
It is the process of quantifying properties of software or its specifications to assess productivity, quality, cost, and performance.
Software Metrics:
Quantitative measures used to estimate various aspects of software development and maintenance such as:
Size
Complexity
Effort
Reliability
Maintainability
2. Size-Oriented Metrics
These metrics are based on the size of the software — usually measured in Lines of Code (LOC) or thousands of lines (KLOC).
Halstead's Software Science:
Basic Terms:
n1 = number of distinct operators; n2 = number of distinct operands
N1 = total occurrences of operators; N2 = total occurrences of operands
Program Vocabulary: n = n1 + n2
Program Length: N = N1 + N2
Volume: V = N * log2(n)
Difficulty: D = (n1 / 2) * (N2 / n2)
Effort: E = D * V
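These formulas can be applied directly; the counts below are hypothetical, chosen only to show the arithmetic:

```python
import math

# Hypothetical counts for a tiny program (illustrative, not from the notes):
n1, n2 = 4, 3    # distinct operators, distinct operands
N1, N2 = 7, 5    # total operator occurrences, total operand occurrences

n = n1 + n2               # program vocabulary: 7
N = N1 + N2               # program length: 12
V = N * math.log2(n)      # volume ≈ 33.69
D = (n1 / 2) * (N2 / n2)  # difficulty ≈ 3.33
E = D * V                 # effort ≈ 112.3
```

Note that all of these are derived purely from lexical counts of the source, with no reference to what the program actually does.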
Usefulness:
Function Point (FP) Metrics
Steps:
1. Count each component type and classify it as low, average, or high complexity:
Components Counted:
Component | Weight (Low/Avg/High)
External Inputs (EI) | 3 / 4 / 6
External Outputs (EO) | 4 / 5 / 7
External Inquiries (EQ) | 3 / 4 / 6
Internal Logical Files (ILF) | 7 / 10 / 15
External Interface Files (EIF) | 5 / 7 / 10
2. Compute Unadjusted Function Points (UFP) = sum of all (count × weight)
3. Calculate the Technical Complexity Factor (TCF) from 14 general system characteristics (like performance, usability, etc.)
4. Final FP = UFP × TCF
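A sketch of the calculation, assuming average weights and an illustrative GSC total (the 0.65 + 0.01 × ΣGSC form of the TCF is the standard published one, not stated in the notes):

```python
# Average weights from the component table.
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

# Hypothetical component counts for a small system.
counts = {"EI": 10, "EO": 5, "EQ": 3, "ILF": 2, "EIF": 1}

ufp = sum(counts[k] * WEIGHTS[k] for k in counts)  # unadjusted FP
gsc_total = 35                                     # sum of the 14 characteristics (each rated 0-5)
tcf = 0.65 + 0.01 * gsc_total                      # technical complexity factor
fp = ufp * tcf
print(fp)  # → 104.0
```

Because FP counts interfaces and data rather than code, the result is the same whatever language the system is later written in.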
Advantages:
Independent of programming language
3. Cyclomatic Complexity
Introduced by: Thomas McCabe
It measures the logical complexity of a program based on the control flow graph (CFG).
A graphical representation of all paths that might be traversed through a program during its execution.
Nodes: Represent program statements or blocks
Formula:
1. V(G) = E - N + 2, where:
E = number of edges in the control flow graph
N = number of nodes
2. Or directly:
V(G) = number of decision points + 1
(e.g., if, while, for, case)
Interpretation:
Cyclomatic Complexity | Meaning
1–10 | Simple, easy to test
11–20 | Moderate complexity
21–50 | Complex, needs review
>50 | Very complex, high risk
Example:
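An illustrative sketch (the function below is hypothetical): counting its decision points gives V(G) = 4, meaning at least four independent paths need test cases.

```python
def classify(score):
    if score < 0:         # decision 1
        return "invalid"
    while score > 100:    # decision 2
        score -= 100
    if score >= 50:       # decision 3
        return "pass"
    return "fail"

# V(G) = decision points + 1 = 3 + 1 = 4
# The same value follows from the CFG via V(G) = E - N + 2.
```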
1. Testing Objectives
Software testing aims to:
2. Unit Testing
Definition:
Unit testing involves testing individual units or components (e.g., functions, classes, or methods) in isolation.
Goal:
Ensure each unit performs as expected and handles all edge cases.
Tools Used:
JUnit (Java)
NUnit (.NET)
PyTest (Python)
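A minimal pytest-style sketch (the `add` function is invented purely for illustration):

```python
# pytest discovers functions named test_*; run with: pytest this_file.py
def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0  # edge case around zero
```

Each test exercises one unit in isolation, so a failure points directly at the component that broke.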
Advantages:
3. Integration Testing
Definition:
Combines and tests interacting modules to ensure they work together properly.
Goal:
Types:
Top-down integration: Test from the main module downward using stubs.
Bottom-up integration: Test from the lowest modules upward using drivers.
4. Acceptance Testing
Definition:
Conducted to validate the software against business requirements and ensure it’s ready for delivery to the client.
Types:
Alpha testing: Done in-house by potential users
Beta testing: Done by end users in a real-world environment
Goal:
5. Regression Testing
Definition:
Done after changes (bug fixes, enhancements) to ensure new changes don’t affect existing functionality.
Why Important?
Performed When?
Before releases
Tools:
Selenium
QTP/UFT
TestNG
6. Functionality Testing
Includes:
Input validation
Output correctness
UI behavior
Examples:
Tools:
7. Performance Testing
Checks how the system performs in terms of speed, responsiveness, stability, and scalability under expected or peak loads.
Types:
Response time
Throughput
CPU/memory usage
Tools:
Apache JMeter
LoadRunner
NeoLoad
Summary Table
Type of Testing | Performed By | Purpose | Level
Unit Testing | Developer | Test individual components | Code-level
Integration Testing | Developer/Tester | Test interactions between modules | Module-level
Acceptance Testing | Client/User | Validate software meets requirements | System-level
Regression Testing | QA/Test Automation | Verify new changes don’t break code | All levels
Functionality Testing | QA/Testers | Test functional correctness | System/module
Performance Testing | Performance Testers | Test non-functional performance | System-level
Top-Down Integration Testing
Definition:
Integration starts from the top-level modules (main control modules) and proceeds downward.
How It Works:
Use test stubs to simulate lower modules that are not yet integrated.
Each level is tested one at a time, replacing missing modules with stubs.
Example:
If you have modules A, B, and C (A calls B, and B calls C):
Test A first, using stubs in place of B (and C).
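A tiny Python sketch of the idea (module names follow the A/B example above; the stub's canned value is arbitrary):

```python
def B_stub(x):
    """Stub standing in for the real B: returns a fixed, canned result."""
    return 10

def A(x, b=B_stub):
    """Top-level module; its dependency is passed in so a stub can replace B."""
    return b(x) * 2

print(A(5))  # → 20, verifying A's own logic before B exists
```

When the real B is ready, it is passed in place of the stub and the same test is re-run.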
Advantages:
Disadvantages:
Bottom-Up Integration Testing
Definition:
Integration begins with low-level modules, progressing upward to higher modules.
How It Works:
Example:
Test C first, using a test driver that simulates B or A.
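A sketch of a test driver (illustrative names), exercising the lowest module C before B or A exist:

```python
def C(x):
    """Low-level module under test."""
    return x + 1

def driver():
    """Test driver: plays the role of the missing caller (B or A)."""
    results = [C(0), C(41)]
    assert results == [1, 42]
    return results

driver()
```

The driver is throwaway code: once B is integrated, B itself becomes the caller and the driver is discarded.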
Advantages:
Disadvantages:
High-level logic and flow remain untested until later.
Example:
White Box Testing
Key Techniques:
Path Coverage – All possible paths through the program are tested.
Done by developers
Tools:
JUnit (Java)
Black Box Testing
Techniques:
Equivalence Partitioning
Characteristics:
Tester only knows what the software should do, not how it works.
Done by QA/testers
Example:
Steps:
1. Analyze Requirements
2. Identify Inputs (valid & invalid)
3. Define Expected Outputs
4. Organize Tests (positive, negative, boundary cases)
5. Use tools or write scripts to generate test data (for large-scale testing)
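The steps above can be sketched for a hypothetical "valid age is 18–60" rule, mixing positive, negative, and boundary cases:

```python
def is_valid_age(age):
    # Hypothetical requirement: ages 18 through 60 inclusive are valid.
    return 18 <= age <= 60

# Test data suite: (input, expected output) pairs covering
# boundaries (17/18, 60/61) and a typical valid value (40).
cases = [(17, False), (18, True), (40, True), (60, True), (61, False)]
for age, expected in cases:
    assert is_valid_age(age) == expected
```

The boundary pairs matter most: off-by-one errors in range checks are caught only by values on either side of each limit.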
Alpha Testing
Example: A gaming company internally tests a game with employees before external release.
Beta Testing
Characteristics:
Real-world environment
Example: A mobile app released in beta on Play Store to gather user feedback.
Summary Table
Testing Type | What It Does | Who Performs It
Top-Down Testing | Integrates top modules first | Developers/Testers
Bottom-Up Testing | Integrates bottom modules first | Developers/Testers
Test Drivers | Simulate calling modules | Developers
Test Stubs | Simulate called modules | Developers
White Box Testing | Internal logic testing | Developers
Black Box Testing | Requirement-based testing | Testers/QA
Test Data Suite | Inputs to test software | Testers
Alpha Testing | In-house pre-release testing | Internal Users
Beta Testing | Real-world external testing | End Users/Clients
2. Formal Technical Review
Definition:
A formal peer review where a team of reviewers assesses the software artifacts (code, design, requirements) against predefined criteria. The focus is on identifying defects, issues with design or functionality, and improving the overall quality of the project.
Steps:
1. Preparation: The author (developer/creator) provides the artifact (e.g., code) to reviewers beforehand.
2. Review Meeting: Reviewers meet to discuss issues and concerns.
3. Defect Identification: Reviewers check for errors, missing requirements, or inconsistencies.
4. Defect Resolution: After the review, the author addresses the issues identified by the team.
5. Documentation: A report is prepared listing issues and decisions made during the review.
Key Characteristics:
Advantages:
3. Walkthrough
Definition:
A walkthrough is an informal review process where the author of the artifact presents it to a group, explaining the logic, design, or code. The
goal is to get feedback and identify potential issues.
Steps:
1. Preparation: The author prepares the document or code to be walked through.
2. Presentation: The author walks through the material with the team, explaining the approach, logic, and design.
3. Feedback: Reviewers ask questions and provide suggestions for improvements or point out issues.
4. Resolution: The author incorporates feedback and resolves any issues raised.
Key Characteristics:
Focus on Understanding: Helps the author and team understand the material better
Advantages:
4. Code Inspection
Definition:
A code inspection is a formal and systematic process where a team of reviewers examines the source code to identify defects, violations of
coding standards, and opportunities for improvement. It is a detailed and structured activity.
Steps:
1. Planning: The team identifies the code to be inspected and prepares an inspection checklist.
2. Pre-Inspection: The author distributes the code and related documentation (e.g., design, specifications) to the review team.
3. Inspection Meeting: Reviewers check the code against predefined criteria (e.g., correctness, adherence to standards, potential bugs).
4. Defect Identification: The reviewers identify any issues in the code and suggest improvements.
5. Post-Inspection: The issues are addressed by the author, and a report is created summarizing the findings.
Key Characteristics:
Defect Prevention: The goal is to identify potential defects early, before execution
Advantages:
5. Compliance with Design and Coding Standards
Ensuring the software adheres to established design principles and coding standards to maintain consistency, readability, maintainability, and quality.
Design Standards:
Consistency: Consistent naming conventions, data structures, and methods across the design.
Coding Standards:
Coding standards define how code should be written to ensure readability and maintainability. These include:
Indentation and Formatting: Following consistent indentation and line breaks for readability.
How It Works:
Regular code reviews and inspections help ensure compliance with the standards.
Automated tools can be used to check coding standards (e.g., linters, static code analysis tools).
Advantages:
Conclusion
Benefits of Static Testing:
UNIT 05
1. Software Maintenance
Key Characteristics:
Dynamic Nature: Software is rarely perfect at the time of release, and it must adapt to changing needs, environments, and technologies.
Continuous Modifications: Software often undergoes regular updates, additions, and modifications to meet user needs or correct errors.
Lifecycle: Software lifecycle includes initial development, deployment, and continuous maintenance, making it an ongoing process.
Software development teams must plan for future modifications and support.
Proper architecture and design are essential for scalability, maintainability, and ease of evolution.
2. Need for Software Maintenance
1. Bug Fixing: Correcting defects discovered after the software is released.
2. Performance Improvements: As user needs grow, software may need optimizations for speed, scalability, or other performance factors.
3. Adaptation to Changes: Software may need to adapt to changes in hardware, operating systems, or other external environments.
4. User Requests: New features or enhancements requested by users or clients.
5. Security: Continuous updates are necessary to patch security vulnerabilities.
3. Categories of Maintenance
Software maintenance can be classified into several categories based on the nature of the changes being made. The main categories are:
a. Preventive Maintenance
Definition:
Maintenance activities performed to prevent potential problems or failures before they occur, ensuring the system continues to operate smoothly.
Key Characteristics:
Involves activities like code refactoring, updating dependencies, and enhancing performance.
Examples:
Regularly updating software libraries and components to keep them compatible with newer technologies.
Code refactoring to improve readability and simplify the codebase for easier future modifications.
b. Corrective Maintenance
Definition:
Fixing defects and errors discovered after the software has been delivered.
Key Characteristics:
It addresses functional issues or bugs that affect the performance or operation of the software.
Corrective maintenance does not usually introduce new features but fixes existing problems.
Examples:
c. Perfective Maintenance
Definition:
This type of maintenance focuses on enhancing the software's performance, improving features, or refining existing functions according to new user requirements.
Key Characteristics:
Focuses on improving usability, performance, or adding new features to keep the software competitive.
Examples:
Adding a new feature that improves user interaction with the software.
Enhancing software performance by optimizing algorithms or database queries.
4. Cost of Maintenance
Definition:
The cost of maintaining software includes all expenses related to corrective, preventive, and perfective maintenance activities over the lifetime
of the software.
Factors Affecting Maintenance Cost:
1. Age of the Software: Older systems are generally harder and more expensive to maintain.
2. Size of the System: Larger systems typically have higher maintenance costs.
3. Frequency of Changes: The more frequently the software is updated, the higher the ongoing cost.
4. Quality of the Original Code: Well-structured and documented code is easier to maintain and update.
5. Environment Changes: If the software has to be adapted to new operating systems, hardware, or frameworks, it adds to the cost.
Statistics:
Maintenance can take up a significant portion of the software lifecycle costs. For example, research suggests that maintenance costs can
account for 60-80% of total lifecycle costs.
5. Software Re-Engineering
Definition:
Software re-engineering involves restructuring or rewriting parts of existing software to improve its performance, quality, or maintainability
while retaining its functionality.
Key Activities:
1. Reverse Engineering: Analyzing the current system to understand its components and design.
2. Restructuring: Improving the structure or design of the software, without changing its functionality.
3. Re-documentation: Updating or creating documentation for the system to aid future development or maintenance.
4. Migration: Moving software from one platform or environment to another (e.g., legacy system to modern infrastructure).
Benefits:
Example:
Rewriting a legacy system from COBOL to Java or modern web technologies.
6. Reverse Engineering
Definition:
Reverse engineering involves analyzing the software to extract knowledge or design information from the existing code, typically for the
purpose of reengineering or understanding legacy systems.
Key Activities:
1. Code Analysis: Examining the source code to understand its structure, logic, and dependencies.
2. Extracting Design Information: Creating models or documentation that were not originally available.
3. Understanding Legacy Systems: Reverse engineering is often used when no original design documentation is available, helping to rebuild lost knowledge.
Benefits:
Helps to understand and maintain legacy systems when documentation is outdated or unavailable.
Example:
Analyzing a legacy banking system built decades ago to extract business rules and system architecture, so it can be migrated to a modern
platform.
1. Software Configuration Management (SCM)
Key Activities:
1. Configuration Identification: Identifying and uniquely labeling the items (code, documents, data) that make up the software configuration.
2. Version Control: Managing different versions of software components, ensuring that changes can be tracked and previous versions can be restored if needed.
3. Change Control: Managing changes to the software and ensuring that only authorized changes are made.
4. Configuration Status Accounting: Keeping records of all configurations and changes made to the software system, along with their statuses (approved, pending, etc.).
5. Configuration Audits: Ensuring that the software configuration is consistent with the specifications and requirements.
2. Change Control Process
Definition:
The Change Control Process is a systematic approach to managing changes in the software system, ensuring that no unauthorized changes
are made, and that changes are tracked and documented.
Steps:
1. Change Request: A formal request describing the proposed change is submitted and logged.
2. Impact Analysis: Analyzing the impact of the proposed change on the project, including its effect on schedule, cost, and other components.
3. Approval/Rejection: Based on the impact analysis, the change request is either approved or rejected by the change control board or project management team.
4. Implementation: Once approved, the change is implemented into the system.
5. Testing: The modified software is tested to ensure that the change has been successfully integrated and that no new issues have been introduced.
6. Documentation and Tracking: All changes are documented, and their status is tracked to ensure proper communication and accountability.
3. Version Control Systems
Key Features:
1. Tracking Changes: Keeps track of all modifications made to the code, including additions, deletions, and updates.
2. Branching: Allows developers to work on different features or bug fixes independently by creating branches, which can later be merged.
3. Merging: Combines changes from different branches or versions into a single version of the code.
4. Reverting: Enables reverting to previous versions if a new version causes issues or defects.
5. Collaboration: Facilitates teamwork by allowing multiple developers to work on the same codebase without conflict.
4. CASE (Computer-Aided Software Engineering) Tools
Types:
1. Upper CASE: Focuses on early stages like planning, analysis, and design.
2. Lower CASE: Focuses on later stages like coding, testing, and maintenance.
Benefits:
Software Project Estimation
Cost Estimation:
Involves predicting the overall cost of the software project, including development, testing, and maintenance. It can be calculated using:
Expert Judgment
Effort Estimation:
Determining the amount of developer work required to complete the project. It is often measured in person-months or person-hours.
Schedule/Duration Estimation:
Predicting the timeline for project completion. The schedule takes into account the complexity of tasks, the number of resources, and the work
dependencies.
COCOMO (Constructive Cost Model)
COCOMO Levels:
1. Basic COCOMO: Provides a rough estimate based on the size of the software (e.g., LOC – Lines of Code).
2. Intermediate COCOMO: Incorporates additional factors such as software reliability, personnel experience, and hardware constraints.
3. Detailed COCOMO: Involves more detailed factors and provides a more accurate estimate based on various project attributes.
COCOMO Formula (Basic):
Effort = a × (KLOC)^b person-months
where a and b are constants determined by the project type (organic, semi-detached, embedded).
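A sketch of the Basic COCOMO effort calculation. The constants below are the commonly published values for the three project types; they are an assumption here, since the notes do not list them:

```python
# Commonly published Basic COCOMO constants: effort = a * KLOC**b
COEFFS = {
    "organic": (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded": (3.6, 1.20),
}

def basic_cocomo_effort(kloc, mode="organic"):
    """Estimated effort in person-months for a project of `kloc` thousand lines."""
    a, b = COEFFS[mode]
    return a * kloc ** b

print(round(basic_cocomo_effort(32, "organic"), 1))  # ≈ 91.3 person-months
```

Because b > 1 in every mode, effort grows faster than linearly with size, which is why large projects are disproportionately expensive.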
2. Heuristic Methods: Used when it is difficult to find an exact solution. These methods rely on practical experience or rules of thumb.
3. Critical Path Method (CPM): A project management technique used to determine the longest sequence of dependent tasks and allocate resources accordingly.
4. PERT (Program Evaluation and Review Technique): Used to schedule and allocate resources by considering the uncertainty in task durations.
Risk Management Activities:
1. Risk Identification: Identifying potential risks that could affect the project's schedule, cost, or quality.
2. Risk Assessment: Analyzing the likelihood and impact of each identified risk. Risks are categorized based on their severity and probability.
3. Risk Mitigation: Developing strategies to avoid or reduce the impact of risks (e.g., using reliable technologies, training staff).
4. Risk Monitoring: Continuously monitoring risks throughout the project lifecycle and updating mitigation strategies as necessary.