
Software Engineering

Monday, September 16, 2024 10:50 AM

What is Software Engineering?


Software Engineering is the process of designing, developing, testing, and maintaining software. It is a systematic and
disciplined approach to software development that aims to create high-quality, reliable, and maintainable software.

Dual Role of Software


Software plays a dual role in the industry: the first is as a product, and the other is as a vehicle for delivering a product. We will discuss both of them.

1. As a Product

• It delivers computing potential across networks of hardware.
• It enables the hardware to deliver the expected functionality.
• It acts as an information transformer because it produces, manages, acquires, modifies, displays, or transmits information.

2. As a Vehicle for Delivering a Product

• It provides system functionality (e.g., a payroll system).
• It controls other software (e.g., an operating system).
• It helps build other software (e.g., software tools).

Classification of Software on the basis of copyright:


- **Proprietary Software**: Licensed and owned by a company or individual. Users must purchase a license to use it.
- **Freeware**: Software available for free but with restricted usage rights (no modification or redistribution).
- **Open Source Software**: Software where the source code is open and available for modification and redistribution.
- **Shareware**: Software that is distributed for free initially but may require payment for full functionality or continued use.

https://www.geeksforgeeks.org/characteristics-of-a-good-software-engineer/

V model
Monday, September 16, 2024 12:02 PM

Verification Phases:

• Requirement Analysis: This phase contains detailed communication with the customer to understand their requirements and
expectations. This stage is known as Requirement Gathering.
• System Design: This phase contains the system design and the complete hardware and communication setup for developing the
product.
• Architectural Design: The system design is broken down further into modules taking up different functionalities. The data transfer and communication between the internal modules and with the outside world (other systems) are clearly defined.
• Module Design: In this phase, the system breaks down into small modules. The detailed design of modules is specified, also known as
Low-Level Design (LLD).
• Coding Phase: The actual coding of the system modules designed in the design phases is taken up in this phase. The most suitable programming language is chosen based on the system and architectural requirements. Coding is performed according to coding guidelines and standards, and the code goes through numerous code reviews and is optimized for performance before the final build is checked into the repository.

Validation Phases:

• Unit Testing: Unit test plans are developed during the module design phase. Unit testing is testing at the code level and helps eliminate bugs at an early stage, though not all defects can be uncovered by unit testing (a minimal example follows this list).
• Integration Testing: After unit testing is complete, integration testing is performed: the modules are integrated and the system is tested. Integration test plans are prepared during the architectural design phase. This testing verifies the communication of the modules among themselves.
• System Testing: System testing tests the complete application with its functionality, interdependency, and communication. It tests the
functional and non-functional requirements of the developed application.
• Acceptance Testing: Acceptance testing is associated with the business requirement analysis phase and involves testing the product in the user environment. Acceptance tests uncover compatibility issues with the other systems available in the user environment, and they also discover non-functional issues such as load and performance defects in the actual user environment.
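To make the unit-testing idea concrete, here is a minimal sketch using Python's built-in unittest module; the apply_discount function is a hypothetical module under test, not something from these notes.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical module-level function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    # Unit tests exercise a single module in isolation, catching
    # defects at the code level before integration.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Per the V model, test cases like these would be planned during module (low-level) design and executed once coding is complete.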

CMM
Monday, September 16, 2024 12:41 PM

What is the Capability Maturity Model (CMM)?


The Capability Maturity Model (CMM) was developed by the Software Engineering Institute (SEI) at Carnegie Mellon University in 1987. It is not a software process model; it is a framework used to analyze the approach and techniques followed by an organization to develop software products, and it also provides guidelines to further enhance the maturity of the process used to develop those products.
It is based on feedback and development practices adopted by the most successful organizations worldwide. This model describes a strategy for software process improvement that should be followed by moving through 5 different levels.
Each level of maturity shows a process capability level. All the levels except level 1 are further described by Key Process Areas
(KPA).
Key Process Areas (KPA)
Each of these KPAs (Key Process Areas) defines the basic requirements that should be met by a software process in order to satisfy the KPA and achieve that level of maturity.

Level-1: Initial

• No KPAs defined.
• Processes followed are ad hoc and immature and are not well defined.
• Unstable environment for software development.
• No basis for predicting product quality, time for completion, etc.

Level-2: Repeatable

• Focuses on establishing basic project management policies.
• Size and cost estimation methods, like function point analysis, COCOMO, etc., are used.
• Experience with earlier projects is used for managing new projects of a similar nature.

• Project Planning- It includes defining resources required, goals, constraints, etc. for the project. It presents a detailed plan to
be followed systematically for the successful completion of good-quality software.
• Configuration Management- The focus is on maintaining the integrity of the software product, including all its components, for the entire lifecycle.
• Requirements Management- It includes the management of customer reviews and feedback, which result in changes to the requirement set, and the accommodation of those modified requirements.
• Subcontract Management- It focuses on the effective management of qualified software contractors i.e. it manages the parts
of the software developed by third parties.
• Software Quality Assurance- It guarantees a good quality software product by following certain rules and quality standard
guidelines while developing.

Level-3: Defined

• At this level, documentation of the standard guidelines and procedures takes place.
• It is a well-defined integrated set of project-specific software engineering and management processes.
• Peer Reviews: In this method, defects are removed by using several review methods like walkthroughs, inspections, buddy
checks, etc.
• Intergroup Coordination: It consists of planned interactions between different development teams to ensure efficient and
proper fulfillment of customer needs.
• Organization Process Definition: Its key focus is on the development and maintenance of standard development processes.
• Organization Process Focus: It includes activities and practices that should be followed to improve the process capabilities of
an organization.
• Training Programs: It focuses on the enhancement of knowledge and skills of the team members including the developers and
ensuring an increase in work efficiency.

Level-4: Managed

• At this stage, quantitative quality goals are set for both software products and software processes.
• The measurements made help the organization to predict the product and process quality within some limits defined
quantitatively.
• Software Quality Management: It includes the establishment of plans and strategies to develop quantitative analysis and
understanding of the product’s quality.
• Quantitative Management: It focuses on controlling the project performance quantitatively.

Level-5: Optimizing

• This is the highest level of process maturity in CMM and focuses on continuous process improvement in the organization using
quantitative feedback.
• Use of new tools, techniques, and evaluation of software processes is done to prevent the recurrence of known defects.
• Process Change Management: Its focus is on the continuous improvement of the organization’s software processes to improve
productivity, quality, and cycle time for the software product.
• Technology Change Management: It consists of the identification and use of new technologies to improve product quality and
decrease product development time.
• Defect Prevention: It focuses on identifying the causes of defects and preventing them from recurring in future projects by improving project-defined processes.

Agile
Monday, September 16, 2024 2:50 PM

The Agile model combines iterative and incremental approaches to focus on adaptability and customer satisfaction through rapid delivery of working software.
Projects are broken into smaller builds provided in short iterations (typically 1 -4 weeks).
Each iteration involves a full development cycle: planning, design, coding, testing, and feedback, ensuring a working product is
delivered to the client frequently.

Phases of the Agile Model

The agile model is a combination of iterative and incremental process models. The steps involved in the agile SDLC model are:
• Requirement gathering
• Design the Requirements
• Construction / Iteration
• Testing / Quality Assurance
• Deployment
• Feedback

1. Requirement Gathering:- In this step, the development team gathers the requirements through interaction with the customer. The team should also estimate the time and effort needed to build the project; based on this information, the technical and economic feasibility can be evaluated.

2. Design the Requirements:- In this step, the development team uses user-flow diagrams or high-level UML diagrams to show the working of the new features and how they will be applied to the software. Wireframing and designing user interfaces are done in this phase.

3. Construction / Iteration:- In this step, development team members start working on their project, which aims to deploy a working
product.

4. Testing / Quality Assurance:- In this phase, the Quality Assurance team examines the product's performance and looks for bugs.

5. Deployment:- In this step, the development team will deploy the working project to end users.

6. Feedback:- This is the last step of the Agile Model. In this, the team receives feedback about the product and works on correcting bugs
based on feedback provided by the customer.

The Agile SDLC model is a combination of iterative and incremental process models, with a focus on process adaptability and customer satisfaction through rapid delivery of a working software product.

Agile methods break the product into small incremental builds, and these builds are provided in iterations. The project scope and requirements are laid down at the beginning of the development process, and plans regarding the number of iterations, and the duration and scope of each iteration, are clearly defined in advance.

Each iteration is considered a short time "frame" in the Agile process model, typically lasting from one to four weeks. The division of the entire project into smaller parts helps to minimize project risk and to reduce the overall project delivery time.

Each iteration involves a team working through a full software development life cycle, including planning, requirements analysis, design, coding, and testing, before a working product is demonstrated to the client.

Requirements Elicitation
Monday, September 16, 2024 7:41 PM

Requirements elicitation (also known as requirements capture and requirements acquisition) is the process of collecting information about software requirements from different individuals, such as users and other stakeholders.

Requirements elicitation is perhaps the most difficult, most error-prone, and most communication-intensive activity in software development.

Some popular techniques for requirements elicitation include: brainstorming, interviews, the use case approach, observation, prototyping, and workshops.

Phases of Requirement Elicitation

Feasibility Study
Monday, September 16, 2024 7:40 PM

A feasibility study is an analysis conducted to determine whether a proposed project or solution is practical and
achievable. It assesses various aspects of the project to ensure it can be successfully completed and will meet its intended
goals. The primary purpose of a feasibility study is to evaluate the potential success and risks associated with the project
before significant resources are invested.

1. Technical Feasibility
• Definition: Technical feasibility evaluates whether the technology and technical resources required for a project are
available and can be effectively utilized. It assesses if the technical approach and tools are sufficient to achieve the
project’s goals.
• Example: Imagine a company wants to develop a new virtual reality (VR) training system. Technical feasibility would
examine if the required VR hardware and software are available, if the development team has the expertise to create
the system, and if any technical challenges could hinder the project.
2. Operational Feasibility
• Definition: Operational feasibility assesses whether the proposed solution can be successfully integrated into the
organization’s existing operations and workflows. It evaluates how the project will fit into daily activities and whether it
can be managed and used effectively by the organization.
• Example: A hospital is planning to implement a new electronic health record (EHR) system. Operational feasibility
would look at whether the EHR system can be integrated with current hospital processes, if staff will need training to
use it, and how the new system will affect daily operations.
3. Economic Feasibility
• Definition: Economic feasibility determines whether the project is financially viable (able to succeed) by comparing its
costs to the expected benefits. It assesses whether the project provides a good return on investment (ROI) and if the
financial benefits outweigh the costs.
• Example: A retail chain is considering opening a new store. Economic feasibility would involve calculating the costs of building and operating the store, comparing them to the projected increase in sales and profits, and deciding if the investment is worthwhile (a worked ROI example appears at the end of this section).
4. Legal Feasibility
• Definition: Legal feasibility examines whether the project complies with all relevant laws, regulations, and legal
requirements. It ensures that the project adheres to legal standards to avoid potential legal issues and penalties.
• Example: A tech company is developing a new app that collects personal data from users. Legal feasibility would check
if the app complies with data protection regulations such as GDPR (General Data Protection Regulation) or CCPA
(California Consumer Privacy Act) to ensure it doesn’t violate privacy laws.
5. Schedule Feasibility
• Definition: Schedule feasibility assesses whether the project can be completed within the required timeframe. It
evaluates if the project timeline is realistic and if all tasks can be finished by the planned deadlines.
• Example: A software company is working on a new application that needs to be launched by the end of the year.
Schedule feasibility would involve checking if the development, testing, and deployment phases can be completed
within the given timeframe and addressing any potential delays.
6. Cultural Feasibility
• Definition: Cultural feasibility evaluates whether the project aligns with the cultural norms, values, and expectations of
the target audience or stakeholders. It ensures that the project is culturally sensitive and acceptable.
• Example: A global company wants to launch a marketing campaign in a new international market. Cultural feasibility
would involve researching local customs and values to ensure that the campaign is respectful and resonates well with
the local audience.
7. Environmental Feasibility
• Definition: Environmental feasibility assesses the potential environmental impact of the project and checks if it
complies with environmental regulations. It aims to ensure that the project minimizes harm to the environment and adheres to sustainability practices.
• Example: A construction company is planning to build a new residential complex. Environmental feasibility would
evaluate the potential impact on local ecosystems, such as wildlife and water sources, and ensure that the construction
meets environmental standards and regulations to reduce its ecological footprint.
These types of feasibility studies help in thoroughly evaluating different aspects of a project, ensuring that it is practical,
legal, financially sound, and aligned with the organization’s goals and values.
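As a worked illustration of the economic feasibility check above (the figures are hypothetical): if the new store costs $2.0M to build and operate in its first year and is projected to add $2.5M in profit over that period, the return on investment is ROI = (2.5 − 2.0) / 2.0 = 25%, which would support a judgment that the project is economically feasible.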

Nonfunctional metrics
Monday, September 16, 2024 10:10 PM

Performance:
• Response Time, Throughput, Latency, Capacity.

Reliability:
• Uptime Percentage, Mean Time Between Failures (MTBF), Mean Time to Repair (MTTR), Fault Tolerance.

Usability:
• Time to Learn, User Error Rate, User Satisfaction, Task Completion Rate.

Security:
• Number of Security Incidents, Time to Detect Threats, Vulnerability Density, Encryption
Strength.

Portability:
• Supported Platforms, Installation Time, Adaptation Effort.

Energy Efficiency:
• Power Consumption, CPU/Memory Utilization, Energy Cost per Transaction.
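A small sketch showing how two of these metrics can be computed in practice: steady-state availability from MTBF and MTTR (a standard reliability formula), and a crude response-time measurement. The figures used are illustrative, not from the notes.

```python
import time

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def response_time(fn, *args) -> float:
    """Wall-clock time of a single call, in seconds."""
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

# Illustrative figures: 500 hours between failures, 2 hours to repair.
print(f"Uptime percentage: {availability(500, 2):.2%}")        # -> 99.60%
print(f"Response time: {response_time(sum, range(10**6)):.4f} s")
```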

Black box view
Monday, September 16, 2024 10:42 PM

The black-box view of a good Software Requirements Specification (SRS) focuses on describing what the system should do, rather than how it should do it. This means the SRS should specify the system's functionality from the user's perspective, without going into the technical implementation details. The black-box view highlights the system's inputs, outputs, and behavior, making it easier for stakeholders to understand the system's capabilities.

Guidelines
Wednesday, September 18, 2024 8:27 AM

**Guidelines for expressing requirements**:

• Use clear, simple, and unambiguous language.
• Avoid technical jargon unless necessary.
• Ensure each requirement is testable and measurable (e.g., "the system shall respond to search queries within 2 seconds" rather than "the system should be fast").
• Ensure the requirements are complete, consistent, and relevant to the project scope.

Interview
Wednesday, September 18, 2024 8:43 AM

LOC
Monday, September 16, 2024 12:50 PM

Source lines of code (SLOC), also known as lines of code (LOC), is a software metric used to measure the size of a computer program
by counting the number of lines in the program's source code. SLOC is typically used to predict the amount of effort that will be
required to develop a program, as well as to estimate programming productivity or maintainability once the software is produced.

It helps track codebase growth and is easy to compute and understand.

In size-oriented metrics, LOC is considered to be the normalization value. It is an older method, developed when FORTRAN and COBOL programming were very popular.

Productivity is defined as KLOC / EFFORT, where effort is measured in person-months. Because productivity depends on KLOC, assembly-language code yields higher measured productivity: the more expressive the programming language, the lower the measured productivity.
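For example (hypothetical figures): a project delivering 16.5 KLOC with 12 person-months of effort has a productivity of 16.5 / 12 ≈ 1.4 KLOC per person-month, while the same functionality written in a more expressive language might take only 4 KLOC, giving an apparently lower 4 / 12 ≈ 0.33 KLOC per person-month.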

Types of LOC:
• Physical LOC (PLOC): The total number of lines in the file, including comments and blank lines.
• Logical LOC (LLOC): Counts only lines that contain executable statements, excluding comments and empty lines (see the sketch below).
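The PLOC/LLOC distinction can be illustrated with a rough counter. This is a sketch for Python-style source only; real tools also handle multi-line strings, statement separators, and similar cases.

```python
def count_loc(source: str) -> tuple[int, int]:
    """Return (physical, logical) LOC for Python-style source."""
    lines = source.splitlines()
    physical = len(lines)          # every line, incl. blanks and comments
    logical = sum(
        1 for line in lines
        if line.strip() and not line.strip().startswith("#")
    )                              # lines carrying executable statements
    return physical, logical

sample = "# compute total\nx = 1\n\ny = x + 1  # executable\n"
print(count_loc(sample))  # -> (4, 2): 4 physical lines, 2 logical
```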

Advantages of LOC :
1. Simplicity: LOC is easy to understand and quick to calculate, making it a straightforward metric.
2. Early Estimation: It provides a rough estimate of project size and development effort in early stages.
3. Productivity Indicator: LOC can track the amount of code written and measure development progress.
4. Comparison Across Projects: LOC can be used to compare projects, but only ones written in the same language and following the same coding standards, since it does not reflect code complexity.
5. Useful in Legacy Systems: Helps gauge the scale of existing, older systems.

Disadvantages of LOC :
1. No Reflection on Code Quality: It doesn’t account for code quality, efficiency, or maintainability.
2. Language Dependency: LOC counts vary significantly between programming languages for the same functionality.
3. Encourages Writing More Code: Focus on LOC can lead to writing unnecessary or redundant code.
4. Ignores Complexity: LOC doesn’t measure the complexity of algorithms or logic within the code.
5. No Correlation with Functionality: A higher LOC doesn’t necessarily mean more features or functionality.
6. Maintainability Issues: A larger LOC doesn't necessarily mean the code is harder to maintain; well-organized code can be large yet maintainable, so LOC is a poor indicator of maintainability.
7. Not Suitable for Agile: LOC doesn’t align with Agile’s focus on quality, flexibility, and iterative development.

4Ps
Monday, September 16, 2024 6:20 PM

Four P’s of Management Spectrum:


- People: Managing the team responsible for development.
- Product: Managing the software product being developed.
- Process: Managing the development process followed.
- Project: Managing the overall planning, execution, and delivery of the project.

For detail, see: https://www.geeksforgeeks.org/4-ps-in-software-project-planning/

Software project estimation
Monday, September 16, 2024 11:02 PM

COCOMO II
Wednesday, September 18, 2024 2:02 PM
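As a reference sketch, the COCOMO II post-architecture model estimates effort as follows (the calibration constants shown are the published COCOMO II.2000 values; the worked figures are illustrative):

$$\text{Effort (PM)} = A \times \text{Size}^{E} \times \prod_{i=1}^{17} EM_i, \qquad E = B + 0.01 \sum_{j=1}^{5} SF_j$$

where Size is in KLOC, the $EM_i$ are the effort multipliers (cost drivers), the $SF_j$ are the five scale factors, and $A \approx 2.94$, $B \approx 0.91$. With all drivers nominal ($\prod EM_i = 1$ and $\sum SF_j \approx 18.97$, so $E \approx 1.10$), a 10 KLOC project needs roughly $2.94 \times 10^{1.10} \approx 37$ person-months.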

Funct Indep
Wednesday, October 16, 2024 10:26 PM

Functional independence in software design refers to the principle of designing software modules in such a way that each module performs a single, well-
defined task with minimal interaction with other modules. Achieving functional independence is crucial for creating maintainable, scalable, and robust software
systems. The two key concepts that contribute to functional independence are cohesion and coupling.
Cohesion:
• Definition: Cohesion measures how closely related the functions within a single module are. It refers to the degree to which the elements of a module work together to accomplish a single, well-defined task.
• Example: In a module for processing customer orders, all functions related to order management (such as adding items to an order, calculating the total, and applying discounts) should belong to that module.

Coupling:
• Definition: Coupling measures the degree of interdependence between different modules. It refers to how much one module depends on the internal
workings of another module.
• Example: A customer order module should not directly access the database module's internal data structure; instead, it should use well-defined interfaces to interact with the database.
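A minimal sketch of these two ideas in code (names like OrderService and OrderRepository are hypothetical): the order-management functions are grouped in one cohesive module, and the only dependency on storage is through a small, well-defined interface, keeping coupling low.

```python
from abc import ABC, abstractmethod

class OrderRepository(ABC):
    """Well-defined interface: OrderService depends on this contract,
    never on a database module's internal data structures."""
    @abstractmethod
    def save(self, order_id: str, total: float) -> None: ...

class InMemoryRepository(OrderRepository):
    """One possible implementation; could be swapped for a real DB."""
    def __init__(self) -> None:
        self.rows: dict[str, float] = {}

    def save(self, order_id: str, total: float) -> None:
        self.rows[order_id] = total

class OrderService:
    """Cohesive module: every method relates to order management."""
    def __init__(self, repo: OrderRepository) -> None:
        self.repo = repo                 # loose coupling via the interface
        self.items: list[float] = []

    def add_item(self, price: float) -> None:
        self.items.append(price)

    def total(self, discount_percent: float = 0.0) -> float:
        return sum(self.items) * (1 - discount_percent / 100)

    def checkout(self, order_id: str) -> None:
        self.repo.save(order_id, self.total())

svc = OrderService(InMemoryRepository())
svc.add_item(40.0)
svc.add_item(60.0)
print(svc.total(discount_percent=10))    # -> 90.0
```

Swapping InMemoryRepository for a real database class requires no change to OrderService, which is exactly what low coupling buys.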

Advantages of Explicit Design and Documentation of Software Architecture
Wednesday, October 16, 2024 10:42 PM


1. Clarity and Communication: Provides a clear understanding of the system’s architecture, making it easier for team members and stakeholders to communicate effectively
about the design.
2. Improved Decision-Making: Facilitates informed decision-making by providing detailed insights into the architecture, allowing teams to assess trade-offs and make
better design choices.
3. Easier Maintenance: Well-documented architecture makes it easier to maintain and update the system over time, as developers can quickly understand how
components interact and where changes may impact other parts of the system.
4. Facilitates Onboarding: New team members can ramp up more quickly by referring to explicit design documents, reducing the learning curve associated with
understanding the system.
5. Risk Management: Helps in identifying potential risks and architectural challenges early in the development process, allowing teams to address them proactively.

RMMM
Wednesday, October 16, 2024 7:49 PM

A Risk Mitigation, Monitoring, and Management (RMMM) plan is a critical part of the project risk management
process, aimed at identifying, assessing, and addressing risks that could negatively impact the success of a project.
The plan is typically part of software development or project management methodologies like SDLC, Agile, or
Waterfall.
The RMMM plan documents all work performed as part of risk analysis and is used by the project manager as part
of the overall project plan.

Risk Mitigation :
It is an activity used to avoid problems (Risk Avoidance).
The steps for mitigating risks are as follows.
1. Finding out the risk.
2. Removing causes that are the reason for risk creation.
3. Controlling the corresponding documents from time to time.
4. Conducting timely reviews to speed up the work.

Risk Monitoring :
It is an activity used for project tracking.
Its primary objectives are as follows.
1. To check whether predicted risks occur or not.
2. To ensure that the steps defined to avoid or reduce risks are followed correctly.
3. To collect data for future risk analysis.
4. To identify which problems are caused by which risks throughout the project.

Risk Management and Planning :

It assumes that the mitigation activity has failed and the risk has become a reality. This task is done by the project manager when a risk becomes a reality and causes severe problems. If the project manager uses risk mitigation strategies effectively, the risks become easier to manage. The plan records the response that will be taken for each risk by the manager. The main output of risk management planning is the risk register, which describes and focuses on the predicted threats to a software project.

SCM
Wednesday, October 16, 2024 9:05 PM

Software Configuration Management

When we develop software, the product (software) undergoes many changes during its maintenance phase; we need to handle these changes effectively.
Several individuals work together to achieve these common goals. These individuals produce several work products (SC items), e.g., intermediate versions of modules, test data used during debugging, or parts of the final product.
The elements that comprise all information produced as part of the software process are collectively called a software configuration.
As software development progresses, the number of software configuration items (SCIs) grows rapidly. These are handled and controlled by SCM; this is where we require software configuration management.
A configuration of the product refers not only to the product's constituents but also to a particular version of each component.
Therefore, SCM is the discipline which:
• Identifies change.
• Monitors and controls change.
• Ensures the proper implementation of changes made to an item.
• Audits and reports on the changes made.
Configuration Management (CM) is a technique for identifying, organizing, and controlling modifications to software being built by a programming team.
The objective is to maximize productivity by minimizing mistakes (errors).

Software Configuration Management (SCM)


Version Control: Version control is vital in SCM for tracking changes to software code. Using tools like Git, teams can manage different versions of their codebase, allowing multiple developers to collaborate seamlessly. This ensures that every modification is recorded, enabling easy reversion if needed.
Configuration Identification: This process involves systematically identifying and documenting all configuration items (CIs) in a
project, including source code, documents, and requirements.

Change Control: Managing modifications to configuration items is critical. Change control involves reviewing, testing, and
approving changes to prevent unauthorized or unintended alterations, ensuring the stability and reliability of the software.
Configuration Status Accounting: This aspect involves recording and reporting the status of configuration items and their
changes. It provides visibility into the history and status of all changes, aiding in project tracking and audits.
Configuration Auditing: Regular audits verify that the software conforms to its requirements and that SCM processes are
followed correctly. This helps maintain the integrity and quality of the software by detecting any discrepancies.

PRO REACT
Wednesday, October 16, 2024 9:39 PM

Proactive Risk Management


Proactive risk management involves identifying, assessing, and addressing risks before they occur. The goal is to prevent risks from materializing or to reduce their impact if they
do.
Key Elements:
1. Risk Identification: Risks are identified in advance through techniques like brainstorming, SWOT analysis, expert interviews, and historical data analysis.
2. Risk Assessment: Each risk is evaluated based on its likelihood of occurrence and potential impact.
3. Risk Mitigation: Strategies are developed to either eliminate or minimize the chances of risks occurring (e.g., safety protocols, redundant systems, early training).
4. Contingency Planning: Preparing fallback plans for risks that cannot be fully prevented.
5. Risk Monitoring: Risk indicators are regularly monitored to detect early signs of risks developing.
Benefits:
• Prevention: Proactive measures can prevent many risks from ever becoming real issues, reducing the overall impact on the project or organization.
• Cost-Effective: Preventing a problem is often much cheaper and less disruptive than fixing it after it occurs.
• Preparedness: Having contingency plans in place helps organizations react quickly and efficiently when a risk does occur.
• Increased Confidence: Proactive risk management builds trust among stakeholders and team members because it demonstrates foresight and preparation.
Example:
A software development company identifies the risk of feature creep during project planning. They proactively set clear scope boundaries, involve stakeholders early in the
decision-making process, and enforce change management protocols to prevent uncontrolled scope expansion.

Reactive Risk Management


Reactive risk management involves dealing with risks after they have occurred. It focuses on minimizing damage and recovering from the event once the risk has materialized.
Key Elements:
1. Crisis Management: When a risk occurs, reactive strategies are deployed to contain the damage.
2. Problem-Solving: Immediate solutions are found to mitigate the impact of the risk (e.g., fixing bugs after a system crash).
3. Damage Control: Efforts are focused on reducing negative consequences such as financial losses, reputation damage, or operational downtime.
4. Recovery: Once the immediate issue is contained, the organization works to restore normal operations.
5. Post-Mortem Analysis: After resolving the risk, the team analyzes the event to learn from it and prevent similar occurrences in the future.
Benefits:
• Real-Time Response: It provides a framework for handling risks that were unforeseen or unavoidable.
• Learning Opportunity: Each reactive management process offers an opportunity to improve processes, strengthen systems, and enhance future risk planning.
• Flexibility: Reactive risk management is adaptable and can be quickly deployed when the situation arises.
Example:
A critical server goes down during peak business hours in an e-commerce company. The IT team quickly diagnoses the issue, restores the server, and ensures that future
occurrences are less likely by adding redundancy and monitoring tools afterward.

Comparison: Proactive vs. Reactive Risk Management


| Aspect | Proactive Risk Management | Reactive Risk Management |
| --- | --- | --- |
| Timing | Before risks occur (prevention-oriented) | After risks occur (response-oriented) |
| Approach | Anticipatory, preventive, and focused on risk avoidance | Crisis management, problem-solving, and damage control |
| Focus | Prevention, risk avoidance, and mitigation | Minimization of impact, containment, and recovery |
| Cost | Often lower overall cost, as risks are avoided or mitigated | Can be costly, especially if the risk has severe consequences |
| Preparation | Involves preparing strategies and contingencies in advance | Response is developed in real time, with potential delays |
| Impact on Operations | Helps maintain business continuity with minimal disruptions | Disruptions may occur, with potential downtime and resource reallocation |
| Flexibility | Less flexible but provides clear guidance on risk handling | Highly flexible and adaptable to real-time situations |
| Confidence | Builds stakeholder confidence through structured preparation | May reduce confidence if risks cause major setbacks |
| Learning Opportunity | Focused on continuous improvement and learning before events | Post-mortem analysis allows learning from past incidents |

System testing
Thursday, October 17, 2024 12:05 AM

System Testing
System Testing is a critical phase in the software development lifecycle where the complete and integrated software is tested as a whole. The primary goal is to verify
that the system meets the specified requirements and to identify any defects or issues that need to be addressed before the software is deployed.

Activities Under System Testing


Test Planning:

Involves creating a comprehensive test plan that outlines the scope, objectives, resources, and schedule for system testing. This plan also defines the testing strategy,
tools, and techniques to be used.

Test Case Design:

Developing detailed test cases based on the system requirements and design documents. Each test case includes the input data, execution steps, and expected
outcomes to ensure thorough coverage of all functionality.
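As an illustration, a designed test case (input data plus expected outcome) maps naturally onto parametrized tests. This sketch uses pytest; the is_valid_login function and its credentials are hypothetical.

```python
import pytest

def is_valid_login(username: str, password: str) -> bool:
    """Hypothetical system function under test."""
    return username == "admin" and password == "s3cret"

# Each tuple is one designed test case: input data + expected outcome.
@pytest.mark.parametrize("username, password, expected", [
    ("admin", "s3cret", True),    # valid credentials
    ("admin", "wrong",  False),   # wrong password
    ("",      "",       False),   # empty input
])
def test_login(username, password, expected):
    assert is_valid_login(username, password) == expected
```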

Test Environment Setup:

Preparing the hardware, software, network configurations, and other necessary resources to create a testing environment that mirrors the production environment as
closely as possible.

Test Execution:

Running the test cases on the integrated system. This involves executing both functional and non-functional tests to ensure the system performs as expected under
various conditions.

Defect Reporting and Tracking:

Identifying, documenting, and tracking any defects or issues that arise during testing. This includes categorizing the defects, prioritizing them, and assigning them to the
appropriate team members for resolution.

Regression Testing:

Re-executing test cases to ensure that recent changes or bug fixes have not introduced new defects or affected existing functionality. This helps maintain the stability
and reliability of the system.

Performance Testing:

Assessing the system's performance under various conditions, such as load, stress, and scalability testing. This ensures the system can handle expected user loads and
performs efficiently.

Security Testing:

Evaluating the system's security measures to identify vulnerabilities and ensure the system is protected against potential threats. This includes penetration testing,
vulnerability scanning, and security audits.

Usability Testing:

Verifying that the system is user-friendly and meets the usability requirements. This involves assessing the user interface, navigation, and overall user experience.

Acceptance Testing:

Conducting tests to verify that the system meets the acceptance criteria defined by stakeholders. This phase often includes user acceptance testing (UAT), where end-users validate the system's functionality and usability.

BPR
Thursday, October 17, 2024 12:34 PM

Business Process Reengineering (BPR) is a strategy in organizational management where companies radically redesign their core business processes to achieve substantial improvements in productivity, efficiency, and quality. It involves analyzing and rethinking the way work is done within an organization to better support the company's mission and reduce costs.
