
UNIT I SOA AND MICROSERVICE ARCHITECTURE BASICS 9
SOA and MSA Basics – Evolution of SOA & MSA – Drivers for SOA – Dimensions, Standards and Guidelines for SOA – Emergence of MSA – Enterprise-wide SOA – Strawman and SOA Reference Architecture – OOAD Process & SOAD Process – Service Oriented Application – Composite Application Programming Model

Service-Oriented Architecture

• Service-oriented architecture (SOA) is a method of software development that uses software components called services to create business applications.

• Each service provides a business capability, and services can also communicate with each other across platforms and languages.

• Developers use SOA to reuse services in different systems or combine several independent services to perform complex tasks.

The different characteristics of SOA are as follows:

• Provides interoperability between the services.

• Provides methods for service encapsulation, service discovery, service composition, service reusability and service integration.

• Service composition is the process of creating a new service by combining two or more existing services to produce the desired service.

• Service discovery is the process of automatically detecting devices and services on a network.

• Service encapsulation is often used to hide the internal representation, or state, of a service from the outside.

• Service reuse is the process of reusing existing services when composing new services.

• Service integration and management (SIAM) is an approach to managing multiple suppliers of services (business services as well as information technology services) and integrating them to provide a single business-facing IT organization.

• Facilitates QoS (Quality of Service) through service contracts based on Service Level Agreements (SLAs). A service-level agreement (SLA) is an agreement between a service provider and a customer; particular aspects of the service – quality, availability, responsibilities – are agreed between the service provider and the service user.

• Provides loosely coupled services. If two services are loosely coupled, a change to one service rarely requires a change to the other service. However, if two services are tightly coupled, a change to one service often requires a change to the other.

• Provides location transparency with better scalability and availability.

• Eases maintenance and reduces the cost of application development and deployment.

• Location transparency is the ability to access objects without knowledge of their location. In computer networks, location transparency means using names to identify network resources rather than their actual location.

• Scalability in cloud computing refers to the ability to increase or decrease IT resources as needed to meet changing demand.

Service-Oriented Architecture

• There are two major roles within Service-oriented Architecture:

• Service provider: The service provider is the maintainer of the service and the
organization that makes available one or more services for others to use. To advertise
services, the provider can publish them in a registry, together with a service contract
that specifies the nature of the service, how to use it, the requirements for the service,
and the fees charged.

• Service consumer: The service consumer can locate the service metadata in the
registry and develop the required client components to bind and use the service.
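
To make the provider and consumer roles concrete, here is a toy, in-memory sketch in Python (illustrative only; real SOA registries such as UDDI or Consul are networked services, and all names below are hypothetical):

    # Toy service registry: the provider publishes, the consumer discovers and binds.
    registry = {}  # service name -> metadata advertised by the provider

    def publish(name, endpoint, contract):
        """Provider role: advertise a service together with its contract."""
        registry[name] = {"endpoint": endpoint, "contract": contract}

    def discover(name):
        """Consumer role: look up service metadata before binding to it."""
        return registry.get(name)

    publish("billing", "https://services.example/billing", "create_invoice(order_id)")
    service = discover("billing")
    print(service["endpoint"])  # the consumer now knows where to bind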

Service-Oriented Architecture
Advantages of SOA:

• Service reusability: In SOA, applications are built from existing services. Thus, services can be reused to make many applications.

• Easy maintenance: As services are independent of each other, they can be updated and modified easily without affecting other services.

• Platform independence: SOA allows making a complex application by combining services picked from different sources, independent of the platform.

• Availability: SOA services are easily available to anyone on request.

• Reliability: SOA applications are more reliable because it is easier to debug small services than large codebases.

• Scalability: Services can run on different servers within an environment; this increases scalability.

Disadvantages of SOA:

• High investment: A huge initial investment is required for SOA.

• Complex service management: When services interact, they exchange messages to perform tasks. The number of messages may run into millions, and it becomes a cumbersome task to handle such a large number of messages.

Object-Oriented Analysis and Design (OOAD)

Object-Oriented Analysis and Design (OOAD) is a systematic approach for developing software systems that emphasizes the use of objects and classes to model real-world entities and their interactions. The OOAD process typically consists of several phases, each with its own set of activities and deliverables. Here's a detailed explanation of each phase:

Object-Oriented Analysis and Design (OOAD)

• Requirements Gathering:

• In this initial phase, the focus is on understanding and documenting the requirements of the system to be developed.

• Activities include interviewing stakeholders, studying existing documentation, conducting surveys, and analyzing similar systems.

• The main deliverable of this phase is the Requirements Specification document, which outlines the functional and non-functional requirements of the system.

Object-Oriented Analysis and Design (OOAD)

• Analysis:

• During analysis, the emphasis is on understanding the problem domain and defining
the system's conceptual model.

• This phase involves identifying the main entities (objects) in the system and their
relationships, as well as defining the behavior and attributes of these entities.

• Techniques such as use case analysis are often used.

• The main deliverables of this phase include the Use Case Model, Class Diagrams, and
Interaction Diagrams (such as Sequence Diagrams or Communication Diagrams).

Object-Oriented Analysis and Design (OOAD)

• Design:

• In the design phase, the focus shifts towards designing the architecture and detailed
design of the system based on the analysis.

• This involves defining the structure of the system, including subsystems, modules,
and their interactions.

• Design decisions regarding patterns, frameworks, and technologies to be used are made during this phase.

• The main deliverables of this phase include the Architectural Design document,
Component Diagrams, and Deployment Diagrams.

• Implementation:

• The implementation phase involves translating the design into executable code.

• Object-oriented programming languages such as Java, C++, or Python are typically used to write the code.

• The code is organized into classes and modules based on the design, and best
practices such as encapsulation, inheritance, and polymorphism are followed.

• Unit testing is also an integral part of this phase, ensuring that individual components
of the system behave as expected.
Testing:

• Testing is performed to validate the functionality and quality of the implemented system.

• Different types of testing, including unit testing, integration testing, system testing,
and acceptance testing, are carried out.

• Test cases are designed to verify that the system meets its requirements and behaves
correctly under various conditions.

• Defects discovered during testing are reported, fixed, and retested until the system
meets the desired quality standards.


• Deployment:

• Once the system has been thoroughly tested and approved, it is deployed to the
production environment.

• This involves installing the software on the target hardware and configuring it for use
by end-users.

• Deployment may also involve data migration, user training, and ongoing support
activities.

• Maintenance:

• After deployment, the system enters the maintenance phase, where it is actively used
and supported.

• Maintenance activities include bug fixing, performance optimization, adding new features, and addressing changes in the external environment.
• It's important to continuously monitor and update the system to ensure its reliability,
security, and relevance over time.

• Throughout the entire OOAD process, communication and collaboration among stakeholders, including developers, designers, testers, and users, are essential for success. Additionally, iterative and incremental approaches, such as Agile methodologies, are often employed to manage complexity and adapt to changing requirements effectively.

• Unit testing is the process of testing individual units or components of a software application in isolation.

• The main goal of unit testing is to validate that each unit of the software performs as designed.

• It typically involves testing functions, methods, or classes with various inputs to ensure they produce the expected outputs (see the sketch after this list).

• Integration testing is the process of testing the interactions between different units or
components of a software system.

• The main objective of integration testing is to verify that the integrated components
work together as expected and to detect any interface defects or communication
issues.
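
As a small illustration of unit testing (a hedged sketch, not from the notes; the function and test names are invented for the example), the standard-library unittest module can exercise a single function in isolation with typical and invalid inputs:

    import unittest

    def apply_discount(price, percent):
        """The unit under test: return price reduced by percent."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(200.0, 10), 180.0)

        def test_invalid_percent_is_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(200.0, 150)

    if __name__ == "__main__":
        unittest.main()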

System Testing:

• System testing is the process of testing a complete and integrated software system as a
whole.

• The primary objective of system testing is to validate that the entire system meets its
specified requirements and performs as expected in its intended environment.

• System tests are typically black-box tests, focusing on the system's external behavior
and user interactions rather than its internal implementation details.

• System testing may involve functional testing, usability testing, reliability testing,
performance testing, security testing, and other types of tests depending on the
system's requirements.

• System testing helps assess the overall quality and readiness of the software for
deployment, providing confidence that it meets the needs of its stakeholders.

UNIT II MICROSERVICE BASED APPLICATIONS 9
Implementing Microservices with Python – Microservice Discovery Framework – Coding, Testing & Documenting Microservices – Interacting with Other Services – Monitoring and Securing the Services – Containerized Services – Deploying on Cloud.
Microservices architecture is an approach to software development where a large application is broken down into smaller, independent services. Each service focuses on a specific business capability and can be developed, deployed, and scaled independently. This promotes flexibility, scalability, and ease of maintenance.

What are microservices?

• Microservices are small, independent, and loosely coupled. A single small team of developers can write and maintain a service.

• Each service is a separate codebase, which can be managed by a small development team.

• Services can be deployed independently. A team can update an existing service without rebuilding and redeploying the entire application.

What are microservices?

• Services communicate with each other by using well-defined APIs. Internal implementation details of each service are hidden from other services.

• Supports polyglot programming. For example, services don't need to share the same technology stack, libraries, or frameworks.

• Management/orchestration. This component is responsible for placing services on
nodes, identifying failures, rebalancing services across nodes

• API Gateway. The API gateway is the entry point for clients. Instead of calling
services directly, clients call the API gateway, which forwards the call to the
appropriate services on the back end.

• DevOps is a software development approach that combines development (Dev) and operations (Ops) teams to improve collaboration and efficiency throughout the software development lifecycle. It focuses on automating processes, continuous integration, and continuous delivery to deliver high-quality software faster and more reliably.

• These microservices communicate with each other through well-defined APIs, often using lightweight protocols like HTTP or message queues. Unlike monolithic architectures, where a single codebase handles all functionalities, microservices allow for distributed development and deployment.

Key characteristics include:

• Decentralized Data Management: Each microservice manages its own database, ensuring data independence and avoiding a single point of failure.

• Autonomous Deployment: Services can be deployed independently, facilitating continuous integration and delivery.

• Resilience and Fault Isolation: If one service fails, it doesn't necessarily affect the entire system, promoting fault tolerance and resilience.

• Scalability: Individual services can be scaled independently based on their specific needs.

Microservices offer several advantages:

• Scalability: Each microservice can be scaled independently based on demand, allowing for more efficient resource utilization.

• Flexibility: Microservices enable teams to work on different services simultaneously, allowing for faster development and deployment cycles.

• Fault Isolation: If one microservice fails, it doesn't necessarily bring down the entire system, as other services can continue to function independently.

• Technology Diversity: Different services can be built using different technologies, allowing teams to choose the best tool for each specific task.

• Ease of Maintenance: Since each service is smaller and focused on a specific function, it's easier to maintain and update them without affecting the entire system.

• Autonomy: Teams can work on and deploy microservices independently, promoting autonomy and reducing dependencies between teams.

• Resilience: Microservices architectures are inherently more resilient to failures because failures are isolated to individual services rather than affecting the entire system.

UNIT III DEVOPS 9
DevOps: Motivation – Cloud as a platform – Operations – Deployment Pipeline: Overall Architecture – Building and Testing – Deployment – Case study: Migrating to Microservices.

DevOps
DevOps is a set of practices, methodologies, and cultural philosophies that aims to improve
collaboration and communication between software development (Dev) and information
technology operations (Ops) teams. The goal is to shorten the software development life
cycle while delivering features, fixes, and updates frequently, reliably, and more efficiently.
Here's a detailed breakdown of key components and concepts within DevOps:
1. Culture: DevOps emphasizes a cultural shift towards collaboration, communication, and
shared responsibility among development, operations, and other stakeholders involved in the
software delivery process. This culture encourages breaking down silos between teams,
fostering trust, and promoting continuous learning and improvement.
2. Automation: Automation is fundamental in DevOps practices to streamline processes,
reduce manual errors, and accelerate delivery. Automation tools are used for various tasks
such as code compilation, testing, deployment, infrastructure provisioning, and monitoring.
3. Continuous Integration (CI): CI is a development practice where developers integrate
code into a shared repository frequently (often multiple times a day). Each integration
triggers automated builds and tests to detect and address integration errors early in the
development cycle (see the sketch after this list).
4. Continuous Delivery (CD): CD extends CI by automating the deployment process to
ensure that software can be reliably released at any time. It involves deploying code changes
to production or staging environments automatically or with minimal manual intervention,
typically after passing automated tests.
5. Infrastructure as Code (IaC): IaC is the practice of managing and provisioning
infrastructure (e.g., servers, networks, and storage) using machine-readable configuration
files or scripts, rather than manual processes. This approach enables consistent, repeatable,
and scalable infrastructure deployments and facilitates versioning and collaboration.
6. Monitoring and Logging: DevOps emphasizes the importance of monitoring application
performance, infrastructure health, and user experience in real-time. Monitoring tools track
metrics, logs, and events to identify issues, detect anomalies, and optimize system
performance. Continuous monitoring enables proactive problem detection and resolution.
7. Microservices and Containers: DevOps often leverages microservices architecture and
containerization technologies like Docker and Kubernetes. Microservices break down
applications into smaller, loosely coupled services, enabling easier management, scalability,
and deployment. Containers provide lightweight, portable, and isolated runtime environments
for applications, enhancing consistency and efficiency across development, testing, and
production environments.
8. Collaboration Tools: DevOps teams use various collaboration tools to facilitate
communication, coordination, and knowledge sharing. These tools include version control
systems (e.g., Git), issue tracking systems (e.g., Jira), communication platforms (e.g., Slack),
and collaboration platforms (e.g., Confluence).
9. Security: DevOps integrates security practices throughout the software development life
cycle (DevSecOps). Security measures such as code analysis, vulnerability scanning, access
control, and compliance checks are automated and integrated into CI/CD pipelines to detect
and mitigate security risks early in the development process.
10. Feedback Loop: DevOps emphasizes the importance of feedback loops to gather insights
from users, stakeholders, and operational metrics. Feedback drives continuous improvement

by identifying areas for optimization, feature enhancements, and bug fixes, ensuring that
development efforts align with business objectives and user needs.
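
To illustrate the CI idea from point 3, here is a minimal sketch, not a production pipeline: it assumes pytest and the build package are installed, and in practice a CI server (e.g., Jenkins or GitHub Actions) would run a script like this on every push to the shared repository.

    import subprocess
    import sys

    # Each stage must succeed before the next runs; a failure blocks the merge.
    STAGES = [
        ["python", "-m", "pytest", "tests/"],  # run the automated test suite
        ["python", "-m", "build"],             # package the application
    ]

    for cmd in STAGES:
        print("running:", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            sys.exit(1)  # fail fast so integration errors surface early
    print("pipeline passed")
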
The DevOps lifecycle
Because of the continuous nature of DevOps, practitioners use the infinity loop to show how
the phases of the DevOps lifecycle relate to each other. Despite appearing to flow
sequentially, the loop symbolizes the need for constant collaboration and iterative
improvement throughout the entire lifecycle.


Discover
Building software is a team sport. In preparation for the upcoming sprint, teams must
workshop to explore, organize, and prioritize ideas. Ideas must align to strategic goals and
deliver customer impact. Agile can help guide DevOps teams.
Plan
DevOps teams should adopt agile practices to improve speed and quality. Agile is an iterative
approach to project management and software development that helps teams break work into
smaller pieces to deliver incremental value.
Build
Git is a free and open source version control system. It offers excellent support for branching, merging, and rewriting repository history, which has led to many innovative and powerful workflows and tools for the development build process.

Interacting with Other Services

Interacting with other services in a microservices architecture typically involves communication between different microservices to fulfill a larger task or process. Here are some common approaches:
1. HTTP RESTful APIs: Microservices can communicate with each other over HTTP using
RESTful APIs. Each microservice exposes a set of HTTP endpoints that other microservices
can call to perform specific actions or retrieve data (a minimal sketch follows this list).
2. Message Brokers: Message brokers like RabbitMQ or Apache Kafka can be used for
asynchronous communication between microservices. Microservices can publish messages to
a broker, and other microservices can consume these messages to perform tasks or respond to
events.
3. Service Mesh: Service mesh technologies like Istio or Linkerd provide a dedicated
infrastructure layer for handling service-to-service communication. They offer features like
load balancing, service discovery, encryption, and traffic management.
4. Service Discovery: In a dynamic microservices environment where services can be
deployed and scaled independently, service discovery mechanisms are crucial. Service
discovery tools like Consul or Eureka facilitate this by maintaining a registry of available
services and their locations.
5. GraphQL: GraphQL can be used to provide a unified API gateway for microservices. It
allows clients to query only the data they need and aggregates data from multiple
microservices behind the scenes.

6. RPC (Remote Procedure Call): RPC frameworks like gRPC provide a more efficient
alternative to HTTP-based communication. They allow services to call remote procedures as
if they were local functions, abstracting away network communication details. gRPC uses
Protocol Buffers (protobuf) for serialization and provides features like streaming and
bidirectional communication.
7. Event-Driven Communication: In an event-driven architecture, microservices
communicate by publishing and subscribing to events. When something significant happens
within a service (e.g., a new order is placed), it publishes an event to a message broker. Other
services interested in that event can then subscribe to it and react accordingly. This approach
promotes loose coupling and scalability.
8. API Gateway: An API gateway sits between clients and the microservices backend,
providing a single entry point for all external requests. It can handle tasks like authentication,
and request routing, as well as aggregating and forwarding requests to the appropriate
microservices. This helps simplify client communication and can improve security and
performance.
9. Circuit Breaker Pattern: Implementing a circuit breaker pattern helps in handling failures
gracefully when interacting with other services. It monitors the health of remote services and
prevents cascading failures by failing fast when a service is unavailable (see the sketch after this list).
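
Below is a minimal sketch of approach 1 (HTTP RESTful APIs). It assumes Flask and requests are installed; the service name, port, and data are hypothetical.

    # order_service.py - one microservice exposing an HTTP endpoint (Flask).
    from flask import Flask, jsonify

    app = Flask(__name__)
    ORDERS = {1: {"id": 1, "status": "shipped"}}  # toy in-memory data

    @app.route("/orders/<int:order_id>")
    def get_order(order_id):
        order = ORDERS.get(order_id)
        if order is None:
            return jsonify(error="not found"), 404
        return jsonify(order)

    if __name__ == "__main__":
        app.run(port=5001)

Another service can then call this endpoint over HTTP:

    # client side (e.g., inside a shipping service), using requests.
    import requests

    resp = requests.get("http://localhost:5001/orders/1", timeout=2)
    resp.raise_for_status()  # surface HTTP errors instead of ignoring them
    print(resp.json()["status"])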
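
And here is a bare-bones version of the circuit breaker pattern from point 9 (an illustrative sketch; production systems usually rely on a tested library, and the thresholds below are arbitrary assumptions):

    import time

    class CircuitBreaker:
        def __init__(self, max_failures=3, reset_after=30.0):
            self.max_failures = max_failures  # failures before the circuit opens
            self.reset_after = reset_after    # seconds before a retry is allowed
            self.failures = 0
            self.opened_at = None

        def call(self, func, *args, **kwargs):
            if self.opened_at is not None:
                if time.time() - self.opened_at < self.reset_after:
                    raise RuntimeError("circuit open: failing fast")
                self.opened_at = None  # half-open: let one trial call through
            try:
                result = func(*args, **kwargs)
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.time()  # trip the breaker
                raise
            self.failures = 0  # a success closes the circuit again
            return result
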
Monitoring and Securing the Services
Monitoring and securing services in a microservices architecture are critical components for
ensuring the reliability, performance, and security of the system.
Monitoring:

1. Service Health Monitoring:
• Monitor the health of each microservice by collecting and analyzing various metrics such as CPU usage, memory consumption, response times, and error rates.
• Use tools like Prometheus, Graphite, or Datadog to collect metrics from each service and visualize them on dashboards for real-time monitoring.
• Implement health checks within each service to report its status (e.g., "up" or "down") to an external monitoring system (see the sketch after this list).

2. Logs Aggregation and Analysis:
• Collect logs generated by each microservice in a centralized location for easier analysis and troubleshooting.
• Use log aggregation tools like the ELK Stack (Elasticsearch, Logstash, Kibana), Splunk, or Fluentd to collect, parse, and store logs.
• Analyze logs to identify errors, anomalies, or performance issues, and use this information to debug and optimize the system.

3. Distributed Tracing:
• Implement distributed tracing to track the flow of requests as they propagate through multiple microservices.
• Use tools like Jaeger, Zipkin, or OpenTelemetry to instrument applications and collect trace data.
• Analyze traces to understand dependencies between microservices, identify performance bottlenecks, and optimize request latency.

4. Alerting:
• Set up alerting rules based on predefined thresholds or conditions for critical metrics such as high error rates, latency spikes, or service unavailability.
• Use alerting tools like Prometheus Alertmanager, PagerDuty, or OpsGenie to send notifications via email, SMS, or integrations with collaboration platforms like Slack or Microsoft Teams.
• Define escalation policies to ensure timely response and resolution of issues identified by alerts.
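
A minimal health-check endpoint might look like the following sketch (Flask assumed installed; a real check would also probe dependencies such as the database or message queue):

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/health")
    def health():
        # An external monitor polls this URL and alerts when it stops saying "up".
        return jsonify(status="up"), 200

    if __name__ == "__main__":
        app.run(port=8000)
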
Securing Services:

1. Authentication and Authorization:
• Implement authentication mechanisms such as OAuth, JWT, or API keys to ensure that only authorized users and services can access microservices (see the sketch after this list).
• Use role-based access control (RBAC) to enforce fine-grained access permissions based on user roles or service identities.

2. Transport Layer Security (TLS):
• Encrypt communication between microservices using TLS to prevent eavesdropping and data tampering.
• Utilize mutual TLS (mTLS) to authenticate both client and server, ensuring secure communication between microservices.

3. Input Validation and Sanitization:
• Validate and sanitize input data to prevent common security vulnerabilities such as injection attacks (e.g., SQL injection, XSS) and ensure data integrity.

4. Secrets Management:
• Store sensitive information such as database credentials, API keys, and cryptographic keys securely using a centralized secrets management solution (e.g., HashiCorp Vault, AWS Secrets Manager).
• Limit access to secrets based on the principle of least privilege and rotate them regularly to mitigate the risk of exposure.

5. Container Security:
• Harden container images by following best practices such as minimizing the attack surface, regularly patching dependencies, and running containers with least privilege.
• Utilize container security tools (e.g., Docker Bench, Clair, Twistlock) to scan images for vulnerabilities and enforce security policies at runtime.

6. API Gateway Security:
• Secure APIs exposed by microservices using an API gateway, which can enforce authentication, rate limiting, and access control policies.
• Implement measures such as input validation, content type validation, and request validation to prevent common API security threats.

7. Runtime Protection:
• Deploy runtime protection mechanisms such as runtime application self-protection (RASP) or web application firewalls (WAFs) to detect and mitigate runtime threats.
• Monitor runtime behavior for anomalies and enforce runtime security policies to prevent unauthorized access and data breaches.

8. Continuous Security Testing:
• Integrate security testing into the CI/CD pipeline to identify and remediate security vulnerabilities early in the development lifecycle.
• Conduct regular security assessments, penetration testing, and vulnerability scanning to identify and address security weaknesses in microservices and their dependencies.
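
As a sketch of JWT-based authentication from point 1 (assuming the PyJWT package; the secret and claims below are placeholders, and in practice the signing key would come from a secrets manager):

    import time
    import jwt  # PyJWT

    SECRET = "change-me"  # placeholder only; never hard-code real secrets

    def issue_token(user_id, role):
        """Issue a signed token with a one-hour expiry."""
        payload = {"sub": user_id, "role": role, "exp": int(time.time()) + 3600}
        return jwt.encode(payload, SECRET, algorithm="HS256")

    def verify_token(token):
        """Return the claims if the signature and expiry are valid, else None."""
        try:
            return jwt.decode(token, SECRET, algorithms=["HS256"])
        except jwt.InvalidTokenError:
            return None

    claims = verify_token(issue_token("user-42", "admin"))
    print(claims["role"])  # -> admin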

By implementing these security measures, you can mitigate security risks and ensure the
confidentiality, integrity, and availability of your microservices architecture.
Containerized Services
Containerized services are a method of packaging, deploying, and managing software
applications and their dependencies within isolated execution environments called containers.
These containers encapsulate everything needed to run an application, including the code,
runtime, system tools, libraries, and settings, ensuring consistent behavior across different
environments. Here's a detailed explanation of containerized services:
1. Containerization Technology:

• Docker: Docker is the most popular containerization platform, allowing developers to create, deploy, and manage containers easily. It provides tools for building, sharing, and running containers on various platforms, making it a standard choice for containerized services.

• Container Runtimes: Docker originally introduced its own container runtime, but alternatives like containerd and CRI-O have gained popularity. These runtimes manage the lifecycle of containers, handling tasks such as container creation, execution, and destruction.

2. Key Concepts:

• Containers: Containers are lightweight, portable, and self-sufficient units that package application code and dependencies. They run as isolated processes on a host operating system, sharing the kernel with other containers but having their own filesystem, network, and process space.

• Images: Container images are read-only templates used to create containers. They contain the application code, runtime, libraries, and other dependencies needed to run the application. Images are built from Dockerfiles or other specifications and can be stored in registries like Docker Hub or private repositories.
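
These concepts can be exercised from Python with the Docker SDK (the docker package), as in this sketch; it assumes a local Docker daemon is running, and the image and port values are illustrative.

    import docker

    client = docker.from_env()  # talk to the local Docker daemon

    # Run a public image as a detached container, mapping host port 8080
    # to container port 80.
    container = client.containers.run(
        "nginx:latest",
        detach=True,
        ports={"80/tcp": 8080},
    )
    print(container.short_id, container.status)
    container.stop()  # clean up the demo container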

3. Advantages:

• Consistency: Containerized services ensure consistency across different environments, including development, testing, and production. Developers can package applications along with their dependencies, ensuring that they run the same way everywhere.

• Isolation: Containers provide process isolation, meaning that each container runs in its own isolated environment. This isolation enhances security by preventing applications from interfering with each other and reduces the impact of software conflicts.

• Portability: Containers are portable across different infrastructure environments, including on-premises data centers, public clouds, and hybrid environments. This portability allows for seamless deployment and migration of applications between environments.

• Resource Efficiency: Containers share the host operating system's kernel, resulting in lower overhead compared to traditional virtual machines (VMs). This efficiency enables higher resource utilization and allows for running more containers on the same hardware.

4. Use Cases:

• Microservices Architecture: Containers are well-suited for building and deploying microservices-based applications. Each microservice can run in its own container, enabling independent development, scaling, and deployment of services.

• Continuous Integration/Continuous Deployment (CI/CD): Containers streamline the CI/CD process by providing a consistent environment for building, testing, and deploying applications. Container orchestration platforms like Kubernetes automate the deployment and management of containers in CI/CD pipelines.

• DevOps Practices: Containers facilitate DevOps practices by enabling collaboration between development and operations teams. Developers can package applications into containers, while operations teams can deploy and manage them using container orchestration tools.

5. Container Orchestration:

• Kubernetes: Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It provides features like service discovery, load balancing, auto-scaling, and self-healing, making it a powerful tool for managing containerized services at scale.

• Docker Swarm: Docker Swarm is another container orchestration platform, provided by Docker. It enables the deployment and management of Docker containers in a clustered environment, offering features like service discovery, load balancing, and rolling updates.

6. Security Considerations:

• Image Security: Ensure that container images are scanned for vulnerabilities before deployment. Use container security tools to identify and remediate security issues in container images and their dependencies.

• Runtime Security: Implement runtime protection mechanisms to monitor container behavior for suspicious activity and enforce security policies. Runtime security tools can detect and prevent threats such as container escapes, privilege escalation, and malicious activity.

• Network Security: Configure network policies and implement network segmentation to restrict communication between containers and external networks. Use encryption and authentication mechanisms to secure communication between containers and external services.

In summary, containerized services offer numerous benefits, including consistency, isolation, portability, resource efficiency, and support for modern software development practices like microservices and DevOps. However, they also require careful consideration of security implications and best practices to ensure a secure deployment environment.

Deploying on Cloud

Deploying applications on the cloud involves transferring an application's computing and storage resources from an on-premises environment to a cloud infrastructure provider such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP). Here's a detailed explanation of deploying on the cloud:

1. Selecting a Cloud Provider:

• Assess Requirements: Evaluate the application's requirements, including scalability, availability, security, and compliance. Consider factors such as geographic location, pricing models, and service offerings when selecting a cloud provider.

• Compare Providers: Compare the features, pricing, and support offered by different cloud providers. Consider factors such as compute instances, storage options, networking capabilities, and managed services.

2. Preparing the Application for Cloud Deployment:

• Containerization: Containerize the application using technologies like Docker or Kubernetes. Containerization provides consistency, portability, and scalability, making it easier to deploy and manage applications on the cloud.

• Cloud-Native Development: Adopt cloud-native development practices, such as using serverless architectures, microservices, and managed services. These approaches leverage the scalability and flexibility of cloud platforms to optimize application performance and reduce operational overhead.

• Infrastructure as Code (IaC): Define the infrastructure using code (e.g., Terraform, AWS CloudFormation) to automate the provisioning and configuration of cloud resources. IaC enables reproducible and consistent deployments, simplifying infrastructure management and reducing the risk of configuration drift (a minimal sketch follows).
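
Here is a minimal IaC sketch using boto3 to create a CloudFormation stack (assumptions, not from the notes: boto3 installed, AWS credentials configured; the stack name, region, and trivial S3-bucket template are hypothetical):

    import boto3

    TEMPLATE = """
    AWSTemplateFormatVersion: '2010-09-09'
    Resources:
      AppBucket:
        Type: AWS::S3::Bucket
    """

    cfn = boto3.client("cloudformation", region_name="us-east-1")
    cfn.create_stack(StackName="demo-app-stack", TemplateBody=TEMPLATE)

    # Block until the stack is fully created (the waiter raises on failure).
    cfn.get_waiter("stack_create_complete").wait(StackName="demo-app-stack")
    print("stack created")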

3. Deploying the Application:

• Compute Resources: Provision compute resources (e.g., virtual machines, containers, serverless functions) to run the application. Select the appropriate instance types, sizes, and configurations based on the workload requirements and performance objectives.

• Storage: Configure storage solutions (e.g., object storage, block storage, databases) to store and access application data. Choose storage options that meet your performance, durability, and scalability needs while optimizing cost and efficiency.

• Networking: Set up networking components (e.g., virtual networks, load balancers, security groups) to connect the application components and manage traffic flow. Configure network security policies, encryption, and monitoring to protect the application from unauthorized access and attacks.

4. Monitoring and Management:

• Monitoring: Implement monitoring and logging solutions (e.g., Amazon CloudWatch, Azure Monitor, Google Cloud Monitoring) to track the performance, availability, and health of your application. Monitor key metrics, set up alerts, and analyze logs to identify and troubleshoot issues proactively.

• Automation: Use automation tools and workflows (e.g., AWS Lambda, Azure Functions, Google Cloud Functions) to automate repetitive tasks, such as deployment, scaling, and resource management. Automation improves operational efficiency, reduces manual errors, and enables rapid response to changing conditions.

• Security: Implement security best practices (e.g., identity and access management, encryption, compliance controls) to protect the application and data from security threats. Leverage cloud-native security services and features to strengthen the defense posture and ensure regulatory compliance.

5. Scaling and Optimization:

• Scalability: Configure auto-scaling policies to dynamically adjust compute resources based on workload demand. Scale your application horizontally (adding more instances) or vertically (increasing instance size) to handle fluctuations in traffic and workload requirements (see the sketch after this section).

• Cost Optimization: Optimize your cloud resources to minimize costs while meeting performance and availability objectives. Monitor resource usage, analyze cost trends, and implement cost-saving strategies such as reserved instances, spot instances, and resource tagging.

• Performance Optimization: Fine-tune the application and infrastructure to improve performance, latency, and responsiveness. Implement caching, content delivery networks (CDNs), and performance testing to optimize application performance and user experience.
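
As a sketch of adjusting horizontal scale programmatically (assuming boto3 and an existing EC2 Auto Scaling group; the group name and region are hypothetical):

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Scale out to four instances to absorb a traffic spike; the Auto Scaling
    # group launches or terminates instances to match the desired capacity.
    autoscaling.set_desired_capacity(
        AutoScalingGroupName="web-asg",
        DesiredCapacity=4,
        HonorCooldown=True,
    )
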
6. Continuous Deployment and Iteration:

• Continuous Integration/Continuous Deployment (CI/CD): Implement CI/CD pipelines to automate the build, test, and deployment of the application. Streamline the release process, accelerate time-to-market, and iterate quickly based on user feedback and business requirements.

• Feedback Loop: Collect feedback from users, stakeholders, and monitoring systems to identify areas for improvement and prioritize feature development. Use agile methodologies and iterative development cycles to continuously enhance the application and adapt to changing needs.

Deploying on the cloud offers numerous benefits, including scalability, flexibility, reliability, and cost-effectiveness. By following best practices and leveraging cloud-native technologies and services, organizations can optimize their deployment processes and unlock the full potential of cloud computing for their applications.

CW8021 - CLOUD, MICROSERVICES AND APPLICATIONS

UNIT IV CLOUD AND DEVOPS

Origin of DevOps - The developers versus operations dilemma - Key characteristics of a DevOps culture - Deploying a Web Application - Creating and configuring an account - Creating a web server - Managing infrastructure with CloudFormation - Adding a configuration management system

Contents:
1. Introduction to cloud and DevOps
2. Origin of DevOps
3. The developers versus operations dilemma
4. Key characteristics of a DevOps culture
5. Deploying a Web Application
6. Creating and configuring an account
7. Creating a web server
8. Managing infrastructure with CloudFormation
9. Adding a configuration management system


INTRODUCTION
A. DEVOPS

DevOps combines two disciplines: DEV (DEVELOPMENT) + OPS (OPERATIONS) = DEVOPS. DevOps is a set of practices which combines software development with IT operations, resulting in a better and faster software development cycle with high software quality, and enabling Agile development. In short, DevOps is agile development + agile operations.

B. CLOUD AND DEVOPS

What Is Cloud DevOps?

DevOps is a software development approach that combines cultural principles, tools, and
practices to increase the speed and efficiency of an organization’s application delivery
pipeline. It allows development and operations (DevOps) teams to deliver software and
services quickly, enabling frequent updates and supporting the rapid evolution of products.

Cloud computing is the on-demand availability of computer system resources, especially data storage (cloud storage) and computing power, without direct active management by the user.

Cloud computing is a powerful technology that helps organizations implement DevOps strategies.

There are three important ways DevOps and cloud work together:

• DevOps leverages the cloud—DevOps organizations manage and automate infrastructure using cloud computing technology, enabling agile work processes.
• CloudSecOps (cloud security operations)—an organizational pattern that moves processes to the cloud while tightly integrating security into the entire development lifecycle. It's like DevSecOps, only in the cloud.
• DevOps as a Service—delivering integrated continuous integration and continuous delivery (CI/CD) pipelines via the cloud, in a software as a service (SaaS) model, to make DevOps easier to adopt and manage in an organization.

Downloaded by arun joseph (arunjosemary@gmail.com)


lOMoARcPSD|9189951

Page |3

1. DevOps Leverages the Cloud

• DevOps processes can be very agile when implemented correctly, but they can easily
grind to a halt when facing the limitations of an on-premise environment. For example,
if an organization needs to procure and install new hardware in order to start a new
software project or scale up a production application, it causes needless delays and
complexity for DevOps teams.
• Cloud infrastructure offers an important boost for DevOps and facilitates scalability.
The cloud minimizes latency and enables centralized management via a unified
platform for deploying, testing, integrating, and releasing applications.
• A cloud platform allows DevOps teams to adapt to changing requirements and
collaborate across distributed enterprise environments.
• Cloud DevOps solutions are often more cost-effective.
• Cloud-based DevOps services help minimize human error and streamline repeatable tasks.

2. What Is CloudSecOps (Cloud, Security, and Operations)?

• SecOps is a merging of security and IT operations in a unified process. SecOps involves a team that combines skilled software engineers and security analysts who can assess and monitor risk and protect corporate assets. The SecOps team typically operates from an organization's security operations center (SOC).
• SecOps is a growing movement within the broader world of DevSecOps, which integrates security with development and operations processes and focuses on securing an organization's underlying development infrastructure.
• Cloud security operations (CloudSecOps) is an evolution of the SecOps function that aims to identify, respond to, and recover systems from attacks targeting an organization's cloud assets. Security operations must reactively respond to attacks that the security tools detect, while proactively seeking out other attacks that ordinary detection methods may have missed.

CloudSecOps teams have several roles and functions:

• Incident management—identifies security incidents, responds to them, and coordinates the response with communication, legal, and other teams. In a cloud environment, incident management moves faster and involves many more moving parts than in an on-premise data center.

• Prioritizing events—this requires calculating risk scores for cloud systems, accounts,
and devices, and identifying the sensitivity of cloud applications and data.

• Using security technology—traditional SOC tools include security information and event management (SIEM) solutions and other reactive systems. SOC teams are shifting from static log analysis using conventional tools to advanced analytics driven by new solutions such as extended detection and response (XDR). These solutions leverage behavioral analysis, machine learning, and threat intelligence capabilities to identify and respond to abnormal behavior.


• Threat hunting—a proactive effort to discover advanced security threats, usually triggered by a hypothetical threat scenario. Threat hunting involves tools that filter out the noise from security monitoring solutions, enabling advanced data investigation.

• Metrics and objectives—the role of a SecOps team requires keeping track of key
performance indicators like mean time to detect, acknowledge, and remediate
(MTTD, MTTA, and MTTR, respectively).

3. What Is DevOps as a Service?

• DevOps as a Service is a set of cloud-based tools that enable collaboration between an organization's development and operations teams. The DevOps as a Service provider offers a toolset that covers all relevant aspects of the DevOps process and provides them as a unified platform.
• DevOps as a Service is the opposite of a “best of breed” toolchain, where teams
select the tools they like best for each purpose. It can make DevOps easier to
implement for organizations new to agile processes because it does not require
learning and integrating multiple-point solutions.
• A DevOps as a Service platform enables tracking and management of every action taken in the software delivery process. It enables organizations to set up continuous integration / continuous delivery (CI/CD) systems and pipelines to increase development velocity and provide continuous feedback to developers.
• This platform approach hides the complexity of managing data and information
flows across a complex DevOps toolchain. Individuals and teams involved in the
DevOps process can access any relevant technology without having to find, adopt,
and learn multiple tools. For example, a DevOps as a Service solution provides
access to source code management (SCM), build servers, deployment
management, and application performance management (APM) in one interface,
with centralized auditing and reporting.

Cloud DevOps as a Service Tools and Solutions

Here are DevOps as a Service offerings provided by the world’s leading cloud providers.
Each of them provides an end-to-end environment for DevOps teams, which eliminates
the need to download, learn, and integrate multiple point solutions.

Examples
1.AWS DevOps
2.Azure DevOps
3.Google Cloud DevOps

1. AWS DevOps
Amazon Web Services (AWS) provides services and tools dedicated to supporting DevOps implementations, including:

AWS CodeCommit
AWS CodeCommit is a managed source control service for hosting private Git repositories.
There is no need to provision or scale the infrastructure or install, configure, or operate
software—the service handles these tasks for you.


AWS CodeBuild
AWS CodeBuild is a fully-managed service for continuous integration (CI) in the cloud. The
service can compile your source code, run tests, and create deployment-ready software
packages. The service handles the infrastructure, so there is no need to provision, scale, or
manage the build servers. It scales continuously and can process several builds concurrently.

AWS CodeArtifact
AWS CodeArtifact is a fully-managed service that lets you centrally manage artifact
repositories. It lets you publish, share, and store software packages securely. It provides pay-
as-you-go scalability that enables you to flexibly scale the repository to satisfy requirements.
The service handles the infrastructure, so there is no need to manage software or servers.

AWS CodeDeploy
AWS CodeDeploy is a fully-managed service for automating software deployments. It supports
deployment to various environments, including on-premises servers, AWS Lambda, Amazon
Elastic Compute Cloud (Amazon EC2), and AWS Fargate.

AWS CodePipeline
AWS CodePipeline is a cloud service for continuous delivery (CD). It provides functionality for modeling, visualizing, and automating software delivery steps. You can employ CodePipeline to model the entire release process, including code builds, deployment to pre-production environments, application testing, and releasing into a production environment.


2. Azure DevOps
Microsoft Azure provides cloud-based services and tools that support the
modern DevOps team. Here are notable services that help DevOps teams plan, build, and
deploy applications:

Azure Repos
Azure Repos provides version control tools to help you manage code. It offers the following
version control types:

• Git—a popular open source distributed version control. Azure Repos lets you use Git
with various tools and operating systems, including Windows, Mac, Visual Studio,
Visual Studio Code, and Git partner services and tools.

• Team Foundation Version Control (TFVC)—Azure’s centralized version control


system lets you store code components in one repository.

Azure Pipelines
Azure Pipelines is a cloud service that builds and tests code projects automatically. It utilizes
continuous integration (CI) and continuous delivery (CD) when testing, building, and shipping
your code to the environment of your choice. Pipelines support numerous programming
languages and project types.


Azure Boards
Azure Boards is a cloud service that provides interactive and customizable tools for managing
software projects. It offers various capabilities, such as calendar views, native support for
Scrum, Kanban, and Agile processes, integrated reporting, and configurable dashboards. You
can leverage these features to scale as your project grows.

Azure Test Plans


Azure Test Plans is a browser-based test management solution that provides tools for driving
quality and collaboration across the development lifecycle. It includes capabilities for various
types of testing, including planned manual testing, exploratory testing, user acceptance
testing, and feedback reviews.

Azure Artifacts
Azure Artifacts provides a cloud-based, centralized location for managing packages and
sharing code. It enables you to publish packages and share them publicly or privately with
your team or the entire organization. The service lets you consume packages from various
feeds and public registries, including npmjs.com and NuGet.org. It also supports a range of
package types, including npm, NuGet, Python, Universal Packages, and Maven.


3. Google Cloud DevOps

Google Cloud provides various services and tools that support DevOps implementations, including:

Cloud Build

The Cloud Build service executes builds on Google Cloud’s infrastructure. It imports source
code from a location of your choice, such as GitHub, Bitbucket, Cloud Source Repositories,
or Cloud Storage, and uses your specifications to execute the build. It can produce various
artifacts, including Java archives and Docker containers.

Artifact Registry

Artifact Registry is a cloud-based service for centrally managing artifacts and dependencies. The service is fully integrated with Google Cloud tools and runtimes and supports native artifact protocols. It provides simple integration with existing CI/CD tools so you can set up automated pipelines.

Cloud Monitoring
Cloud Monitoring is a service that collects events, metadata, and metrics from various
sources, including Google Cloud, AWS, application instrumentation, and hosted uptime
probes.

Cloud Deploy
Google Cloud Deploy is a managed cloud service for automating application delivery. It uses a defined promotion sequence when delivering applications to target environments. You can deploy an updated application by creating a release, and the delivery pipeline will then manage the entire lifecycle of this release.


Services and reference architecture of GCP

• GitHub Code Repo
• Cloud Build (a container-based CI/CD tool)
• Container Registry
• Google Kubernetes Engine (GKE)
• Cloud Load Balancing (used as an Ingress Controller for GKE)
• Cloud Uptime Checks (for synthetic application monitoring)
• Cloud Monitoring
• Cloud Functions
• Pub/Sub


History of DevOps

Now, let us delve into the interesting history of DevOps. Patrick Debois is often referred
to as the father of DevOps.

1. 2007: It all started in 2007 when he was working on a large data center migration where he was responsible for testing. He experienced many frustrations during the course of this project, caused by continuously switching back and forth between the development side of the problem and the bevy of operations that waited on the other side. He realized that a large chunk of the effort and time was spent (or rather wasted) in navigating the project from development to operations. However, it wasn't possible to bridge the significantly wide gap between the two worlds.
2. 2008: At an Agile conference held in Toronto, Canada in 2008, a man named Andrew Shafer attempted to arrange a meetup session called "Agile Infrastructure." Patrick was quite excited to finally come across a like-minded person. They later went on to form a discussion group for people who wanted to post ideas that would help bring a relevant solution to the wide gap between development and operations.
3. 2009: In June 2009, Paul Hammond and John Allspaw delivered a talk entitled "10+ Deploys a Day: Dev and Ops Cooperation at Flickr." Patrick ended up watching the streaming video of that presentation. Its views resonated strongly with him, making him realize that this was exactly the solution he had been looking for. Motivated by this lecture, he arranged a gathering of system administrators and developers to sit together and discuss the most ideal ways to begin bridging the gap between these two heterogeneous fields. This event was named DevOpsDays, and was held during the final week of October 2009.
4. 2010: This was all that was needed for smaller tech enterprises to begin adopting DevOps practices and the tools built to help newly forming teams. By this time, DevOps had acquired a grassroots following whose members began extensively pushing their respective ideas.
5. 2011: In March 2011, Cameron Haight of Gartner made positive predictions about the growth of DevOps. With this positive outlook, many other members and users came along and began implementing DevOps with their own ideas. Soon enough, enterprises, regardless of their scale, started adopting DevOps.
6. 2015: DevOps was incorporated into SAFe. SAFe is rapidly gaining traction in the enterprise arena, where DevOps is adopted and scaled across organizations.


7. 2016: DevOps became the new norm for high-performing companies: "Clearly, what was state-of-the-art three years ago is just not good enough for today's business environment."
8. 2018: The State of DevOps report defined a five-stage approach. From level 0 to 5, a descriptive, pragmatic approach was introduced to guide teams and mature DevOps initiatives, in a report sponsored by Deloitte.
9. 2019: Enterprises embedded more IT functions in their teams next to 'Dev' and 'Ops': "organizations are embedding security (DevSecOps), privacy, policy, data (DataOps) and controls into their DevOps culture and processes."


The Developers versus Operations dilemma in Cloud DevOps


Developer’s dilemma
Developer dilemma No. 1: When to say when on feature requests

Developer dilemma No. 2: How much documentation is enough?

Developer dilemma No. 3: To the cloud, or not to the cloud?

Developer dilemma No. 4: Maintain old code, or bring in the new?

Developer dilemma No. 5: SQL vs. NoSQL

Developer dilemma No. 6: Go native, or target the mobile Web?

Developer dilemma No. 7: How much control should users really get?

Developer dilemma No. 1: When to say when on feature requests

Saying 'yes' to feature requests vs. saying 'no' to them.

Everyone wants feature-rich code, but no one wants to pay the cost of managing all of it.
Anyone who's tried to build something as simple as a four-button remote control app knows
how many zillions of designer years it takes to create something that simple.

Developer dilemma No. 2: How much documentation is enough?

Focus on producing good documentation without burying readers in detail; too little
documentation typically means a lot of guessing and wasted time. Good documentation means:

• All relevant information must be recorded.


• All paper records must be legible, signed and dated.
• Records must be accurate and kept up to date.
• Records must be written in plain English.

Developer dilemma No. 3: To the cloud, or not to the cloud?


The more you outsource, the more you lose control and spin your wheels trying to recapture
it. The less you outsource to the cloud, the more you spin your wheels keeping everything
running.

Developer dilemma No. 4: Maintain old code, or bring in the new?

There is no easy answer to this dilemma. The old code still works. We like it. It's
just that it's not compatible with the new version of the operating system or a new
multicore chip.

The new code costs money. We can usually fix a number of glaring problems with
the old code, but who knows what new problems might appear.

Developer dilemma No. 5: SQL vs. NoSQL

As for speed, NoSQL is generally faster than SQL. On the other hand, a NoSQL database may
not fully support ACID transactions, which may result in data inconsistency.

Developer dilemma No. 6: Go native, or target the mobile Web?

Native apps are installed on a device. They are built using an operating system's SDKs and
have access to different resources on a device: camera, GPS, phone, device storage, etc.
Mobile web apps are websites optimized for mobile browsers. Their functionality resides
entirely on a server, and they are delivered over an internet browser, so users don't need
to install them on their devices.

Developer dilemma No. 7: How much control should users really get?

Software users want all of the freedom they can get, but they expect you, the developer, to
rescue them from harm when issues occur. They want all of the advantages of control while
still being able to slip through some backdoor whenever a problem occurs.

Operator’s dilemma
Operator dilemma No. 1: Speed vs. Quality
Operator dilemma No. 2: Autonomy vs. Collaboration
Operator dilemma No. 3: Build vs. Buy
Operator dilemma No. 4: Control vs. Flexibility

Operator dilemma No. 1: Speed vs. Quality


Speed can help you build an advantage over slower competitors. Quality ensures you're giving
users the best experience they can get with your software.
Operator dilemma No. 2: Autonomy vs. Collaboration

Balancing autonomy and collaboration does not mean leaving your team alone to solve
problems, but rather providing the right amount of guidance and support.

Operator dilemma No. 3: Build vs. Buy

To build or buy software is a decision that haunts the DevOps team, but through
comparison and situational awareness the right answer can be found.
Operator dilemma No. 4: Control vs. Flexibility

Control is a manager's duty to regulate and guide the activities of an organization. The
flexibility of an organization is its ability to adapt to its internal and external environment.

DEVOPS CULTURE
DevOps – a series of practices that automates the processes between software development
and IT teams – establishes an agile culture that enables you to build, test, and release
software quicker and more reliably. DevOps is the way organizations extract the value of the
agile transformations they have started, by integrating software development and operations
and automating their processes from end to end.

A fundamental starting point is the attitude:


• Maintain transparency in work
• Develop a culture of trust for one another
• Embrace non-conflicting goals
• Accept failures instead of laying blame on others
• Develop a sense of shared responsibility instead of the 'not my job' attitude.

From a process and organizational perspective, you have to:


• Facilitate true autonomy among individuals and DevOps team members

• Enable cross-functional collaboration

• Minimize processes that cause waste and blockage

• Maintain continuous flows across the SDLC pipeline, including integration, testing,
deployment and funding, among others.

5 PROVEN STEPS TO EMBRACE DEVOPS CULTURE

STEP#1: Start with a top-down approach to DevOps and follow it up with a bottom-up approach
A cultural change has to happen across the entire organization. Initially, begin it from the top
and gradually take it towards the bottom. A cultural change doesn't happen without top-down
motivation and coordination. DevOps culture needs acceptance at the executive level and
immediate sponsorship, and the right leadership to modify the software development lifecycle
and promote automation over manual processes. To achieve a successful DevOps culture,
everyone in your organization – from a junior developer to the CIO – should support the
organizational change.

STEP#2: Automate Every Process in Your Organization


To aim for continuous improvement with high cycle rates and the ability to instantly respond
to customer feedback, organizations should automate their processes. While it’s a fact that
automating processes will initially consume more time than performing such tasks manually,
making this change will save you time, energy, and resources in the long run.

STEP#3: Embrace Agile Approach to Software Development


Your software teams must not only practice an agile approach to software development, but
also practice continuous integration systematically. This implies designing a software
development flow that pushes code at least once a day, and often far more frequently.

STEP#4: Encourage Learning Via Extensive Experimentation


Implementing a successful DevOps practice comprises the ability to experiment – to learn,
fail fast, and repeat – which is critical. High performers adopt tested practices for
collaboration, testing and swift experimentation.

STEP#5: Measure & Reward Using the Right Metrics


When people are measured and rewarded for the right things, the culture changes.

Research shows that the highest-performing organizations adopting DevOps do significantly
better on all the following metrics (a small computational sketch follows the list):

• Deployment frequency: The frequency with which the organization deploys code releases
or updates to end-users is very high.

• Lead time for changes: The time taken to go from code committed to code successfully
running in production is very short.

• Time to restore service: The time taken to restore service when a service incident or
defect occurs is very short, with no unexpected disruption or service interruption.

• Production failure rate: The frequency with which the software fails in production
during a particular period of time is very low.

• Mean time to recover: The time taken by an application in production to recover from a
failure is negligible.

• Average lead time: The time taken for a new requirement to be developed, tested,
delivered, and deployed into production is very short.
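As an added illustration (not part of the original notes), the short Python sketch below computes two of these metrics, deployment frequency and mean time to recover, from hypothetical deployment and incident records; all timestamps and names are invented for the example.

# Minimal sketch: computing two DevOps metrics from hypothetical records.
from datetime import datetime, timedelta

# Hypothetical data: deployment timestamps and (incident_start, incident_end) pairs
deployments = [datetime(2023, 9, d) for d in (1, 2, 2, 4, 5, 5, 6)]
incidents = [
    (datetime(2023, 9, 3, 10, 0), datetime(2023, 9, 3, 10, 45)),
    (datetime(2023, 9, 5, 14, 0), datetime(2023, 9, 5, 14, 20)),
]

# Deployment frequency: deployments per day over the observed period
period_days = (max(deployments) - min(deployments)).days or 1
deploy_frequency = len(deployments) / period_days

# Mean time to recover: average duration of the incidents
recovery_times = [end - start for start, end in incidents]
mttr = sum(recovery_times, timedelta()) / len(incidents)

print("Deployment frequency: %.1f per day" % deploy_frequency)
print("MTTR:", mttr)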

DEPLOYING A WEB APPLICATION

• Step 1: Idea Generation and Requirement Analysis
• Step 2: Planning and Designing
• Step 3: Selecting the Right Technologies
• Step 4: Development (front end, back end, database)
• Step 5: Testing
• Step 6: Deployment
• Step 7: Maintenance and Updates

Step 1: Idea Generation and Requirement Analysis


Everything starts with an idea. Consider a problem you'd like your web application to solve or
the functionality it should have. It could be a personal project, a tool for a business, or an
application for the community. Once the idea is clear, perform a requirement analysis. This
involves defining the functionality your web application should provide, the target user base,
and the technologies required. You should also outline any potential challenges and
constraints in this step.

Step 2: Planning and Designing

Planning involves defining the application’s structure and components. At this stage, you'll consider the
database structure, backend logic, and the UI/UX. Sketch or use a tool to create wireframes
of your application to visualize its functionality. Designing is also an essential part of this stage.
It involves choosing color schemes, typography, and layout to create an aesthetically pleasing
and user-friendly interface.

Step 3: Selecting the Right Technologies


Now that you have a clear understanding of what your web application requires, you can
select the right technologies. You'll need to choose a frontend framework or library (like React,
Angular, or Vue.js), a backend language (like Python, Java, or Node.js), and a database
system (like PostgreSQL, MongoDB, or MySQL). This choice depends on your application's
requirements and your familiarity with these technologies.

Step 4: Development
This is where you bring your application to life. Using the technology stack you've chosen,
start coding your application. In this step, you usually begin with the backend, setting up your
server and database and defining the API endpoints. Once your backend logic is complete,
you'll move to the frontend, creating pages and components using the chosen frontend
framework.

Step 5: Testing
Testing is vital to ensure the quality of your application. There are several types of tests you
can perform: Unit tests check if individual components of your app work as intended.
Integration tests check if different parts of your app work together as expected. End-to-End
tests simulate real user scenarios to check if the application behaves correctly. Bugs and errors
are an inevitable part of development. When you encounter them, debug your code and make
necessary adjustments.

Step 6: Deployment

• Deployment is the process of making your web application accessible to


users. There are several web hosting platforms available like Heroku,
Netlify, AWS, Azure or Google Cloud.
• First, you'll need to purchase a domain name.
• Then, depending on your chosen hosting service, you'll follow their specific
deployment steps.
• Remember to also set up a Continuous Integration/Continuous Deployment
(CI/CD) pipeline for smoother updates and iterations on your web
application.

Step 7: Maintenance and Updates

Once deployed, your work isn't done. Regular maintenance is crucial to ensure the app's smooth
operation. You should also listen to user feedback and continuously improve your application,
adding new features, fixing bugs, and refining the UI/UX.

Example: AWS deployment steps

CREATING AND CONFIGURING AN ACCOUNT


REGISTER FOR A FREE MICROSOFT AZURE DEVOPS ACCOUNT AND CONFIGURE IT

A. Open My Azure DevOps Organizations (https://aex.dev.azure.com/) in your web browser.

B. Now, enter the credentials of the Microsoft account that you used to create the Azure
cloud account.

Creating an Azure DevOps account is a straightforward process that involves:

1. Signing up for Azure,
2. Creating an organization,
3. Creating a project, and
4. Inviting team members.

With Azure DevOps, you can streamline your software development process and improve
collaboration among your team members.

1. Select the sign-up link for Azure DevOps.

2. Enter your email address, phone number, or Skype ID for your Microsoft account.

3. Enter your password and select Sign in.

4. To get started with Azure DevOps, select Continue.

An organization is created. You can rename and delete your organization, or change the
organization location.

5. Create a new project and invite team members.

CREATING A WEB SERVER

1. JAVA BASED HTTP WEB-SERVER

An HTTP server (Java based) is bound to an IP address and port number and listens for
incoming requests and returns responses to clients. A simple HTTP server is flexible enough
to be added or easily embedded into complex projects for rendering HTML elements, serving
as a backend server, or even being deployed on client-side devices.

Figure: Structure of the HTTP server implementation

A simple HTTP server can be added to a Java program to support cloud-based applications
using four steps:

1. Construct an HTTP server object


2. Attach one or more HTTP Request handler objects to the HTTP server
object
3. Implement HTTP handler to process GET/POST requests and generate
responses
4. Start the HTTP server

1. Create an HTTP Server

The HttpServer class provides a simple high-level HTTP server API, which can be used to
build embedded HTTP servers.

import java.net.InetSocketAddress;
import com.sun.net.httpserver.HttpServer;

int port = 9000;
// Bind the server to the port; the second argument is the socket backlog (0 = system default)
HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
System.out.println("server started at " + port);
// Map URL paths to their request handlers
server.createContext("/", new RootHandler());
server.createContext("/echoHeader", new EchoHeaderHandler());
server.createContext("/echoGet", new EchoGetHandler());
server.createContext("/echoPost", new EchoPostHandler());
server.setExecutor(null); // use the default executor
server.start();

2. Create an HTTP Request Handler

HTTP handlers are associated with the HTTP server in order to process client requests.

import java.io.IOException;
import java.io.OutputStream;
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;

public class RootHandler implements HttpHandler {

    @Override
    public void handle(HttpExchange he) throws IOException {
        // Send a simple HTML message confirming the server is running
        // (the port is shown as a literal; the original referenced a 'port'
        // variable that is not in scope inside this handler)
        String response = "<h1>Server start success if you see this message</h1>"
                + "<h1>Port: 9000</h1>";
        he.sendResponseHeaders(200, response.length());
        OutputStream os = he.getResponseBody();
        os.write(response.getBytes());
        os.close();
    }
}

3. Process GET and POST Requests

There are two common methods for a request-response between a client and server
through the HTTP protocol:

• GET - Requests data from specified resources


• POST - Submits data to be processed to identified resources

Here, you create two handlers to process GET/POST methods respectively.

3.1 DECLARE ECHOGET HANDLER TO PROCESS GET REQUEST:

import java.io.IOException;
import java.io.OutputStream;
import java.net.URI;
import java.util.HashMap;
import java.util.Map;
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;

public class EchoGetHandler implements HttpHandler {

    @Override
    public void handle(HttpExchange he) throws IOException {
        // parse the query string of the GET request into key/value pairs
        Map<String, Object> parameters = new HashMap<String, Object>();
        URI requestedUri = he.getRequestURI();
        String query = requestedUri.getRawQuery();
        parseQuery(query, parameters);
        // echo each parameter back in the response body
        String response = "";
        for (String key : parameters.keySet())
            response += key + " = " + parameters.get(key) + "\n";
        he.sendResponseHeaders(200, response.length());
        OutputStream os = he.getResponseBody();
        os.write(response.getBytes());
        os.close(); // fixed: the original closed an undefined variable 's'
    }

    // Minimal helper (assumed implementation): splits "a=1&b=2" style query strings
    static void parseQuery(String query, Map<String, Object> parameters) {
        if (query == null) return;
        for (String pair : query.split("&")) {
            String[] kv = pair.split("=", 2);
            parameters.put(kv[0], kv.length > 1 ? kv[1] : "");
        }
    }
}

3.2 DECLARE ECHOPOST HANDLER TO PROCESS POST REQUEST:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.util.HashMap;
import java.util.Map;
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;

public class EchoPostHandler implements HttpHandler {

    @Override
    public void handle(HttpExchange he) throws IOException {
        // read the POST body and parse it as a form-encoded query string
        Map<String, Object> parameters = new HashMap<String, Object>();
        InputStreamReader isr = new InputStreamReader(he.getRequestBody(), "utf-8");
        BufferedReader br = new BufferedReader(isr);
        String query = br.readLine();
        EchoGetHandler.parseQuery(query, parameters); // reuse the shared helper
        // echo each parameter back in the response body
        String response = "";
        for (String key : parameters.keySet())
            response += key + " = " + parameters.get(key) + "\n";
        he.sendResponseHeaders(200, response.length());
        OutputStream os = he.getResponseBody();
        os.write(response.getBytes());
        os.close();
    }
}

4. TEST THE HTTP SERVER

Requesting the root path / displays the server status, processed by RootHandler (for
example, open http://localhost:9000/ in a browser or run curl http://localhost:9000/).

2. CREATING A WEB SERVER -- Creating a Node.js Web Server

Node.js is an open source JavaScript runtime environment that lets developers run
JavaScript code on the server.

1. Server.js
2. HTTP Request in Server.js
3. HTTP JSON Response in Server.js

The http.createServer() method includes request and response parameters which are supplied
by Node.js. The request object can be used to get information about the current HTTP request,
e.g., the URL, request headers, and data. The response object can be used to send a response
for the current HTTP request.

Server.js

var http = require('http'); // Import Node.js core module

var server = http.createServer(function (req, res) {
    // handle incoming requests here..
});

server.listen(5000); // listen for any incoming requests
console.log('Node.js web server at port 5000 is running..');

Handle HTTP Request

The http.createServer() method includes request and response parameters which are
supplied by Node.js.

Server.js

var http = require('http');


var server = http.createServer(function (req, res) {
if (req.url == '/') { //check the URL of the current request
// set response header
res.writeHead(200, { 'Content-Type': 'text/html' });
// set response content
res.write('<html><body><p>This is home Page.</p></body></html>');
res.end();
}
else
res.end('Invalid Request!');
});
server.listen(5000); //listen for any incoming requests
console.log('Node.js web server at port 5000 is running..')

Sending JSON Response

The following sample code demonstrates how to serve a JSON response from the Node.js
web server.

var http = require('http');


var server = http.createServer(function (req, res) {
if (req.url == '/data') { //check the URL of the current request
res.writeHead(200, { 'Content-Type': 'application/json' });
res.write(JSON.stringify({ message: "Hello World"}));
res.end();
}
});
server.listen(5000);
console.log('Node.js web server at port 5000 is running..')

To Test

C:\>node server.js
node.js web server at port 5000 is running…

curl -i http://localhost:5000
HTTP/1.1 200 OK
Content-Type: text/html
Date: Tue, 8 Sep 2023 03:05:08 GMT
Connection: keep-alive
<html><body><p>This is home Page.</p></body></html>

3. Creating a Web server using Python

Using Python you can create a custom web server with unique functionality.

The web server in this example can be accessed on your local network. It can be reached via
localhost or another network host, and you could even serve it across locations with the help
of a VPN.

Web server

• To create a custom web server, we need to use the HTTP protocol.
• By design, the HTTP protocol has a GET request which returns a file on the server. If
  the file is found, the server returns status code 200.
• The server will start at port 8080 and accept default web browser requests.

# Python 3 server example
from http.server import BaseHTTPRequestHandler, HTTPServer

hostName = "localhost"
serverPort = 8080

class MyServer(BaseHTTPRequestHandler):
    def do_GET(self):
        # send the status line and headers, then write the HTML body
        self.send_response(200)
        self.send_header("Content-type", "text/html")
        self.end_headers()
        self.wfile.write(bytes("<html><head><title>https://pythonbasics.org</title></head>", "utf-8"))
        self.wfile.write(bytes("<p>Request: %s</p>" % self.path, "utf-8"))
        self.wfile.write(bytes("<body>", "utf-8"))
        self.wfile.write(bytes("<p>This is an example web server.</p>", "utf-8"))
        self.wfile.write(bytes("</body></html>", "utf-8"))

if __name__ == "__main__":
    webServer = HTTPServer((hostName, serverPort), MyServer)
    print("Server started http://%s:%s" % (hostName, serverPort))
    try:
        webServer.serve_forever()
    except KeyboardInterrupt:
        pass
    webServer.server_close()
    print("Server stopped.")

To start and test a web server, run the command below:

python3 -m http.server 8080

• That will open a web server on port 8080. (This command starts Python's built-in file
  server; to run the custom server above, execute the script itself with python3.)
• Browser opens -> http://127.0.0.1:8080/
• The web server is also accessible over the network using your 192.168.-.- address.

If you open a URL like http://127.0.0.1:8080/example, the method do_GET() is called. The
server sends the webpage in this method.

The variable self.path returns the URL the web browser requested. In this case it would be
/example.

MANAGING INFRASTRUCTURE WITH CLOUD FORMATION

CloudFormation is an AWS tool that supports the cloud DevOps culture.

❖ AWS CloudFormation is a service that allows users to model and provision their
entire cloud infrastructure using simple configuration files.
❖ It is an Infrastructure as Code (IaC) tool which makes it easier for developers
and system administrators to manage their AWS resources by creating,
updating, and deleting their infrastructure in a more automated way.
❖ CloudFormation enables organizations to use a single language to model and
provision their entire infrastructure across multiple regions and accounts.

This automation helps:

• Reduce manual errors,

• Improve resource utilization,

• Increase system reliability, and

• Reduce the time it takes to provision new environments.

❖ Not only can AWS CloudFormation help with the deployment of resources, but it also
helps with monitoring these resources over time. By managing related resources
together as a single unit called a "stack," it provides visibility into what has been
deployed, when changes are made, and where the deployments are running.

❖ CloudFormation provides a robust set of features, such as stack policies that can
help increase security by controlling who can change specific resources in your stack.

CloudFormation Template Terms and Concepts

TERMS AND DESCRIPTIONS

TEMPLATE: A CloudFormation template is simply a text file, formatted in a specific way,
that defines how AWS resources should be configured and deployed.

STACK: A stack is a term AWS uses to refer to a collection of multiple AWS resources --
such as EC2 virtual machines, S3 storage, and IAM access controls -- that you can manage
together using a single template.

FORMATTING: CloudFormation supports templates that are formatted using either JSON or YAML.

FUNCTIONS: CloudFormation functions allow you to retrieve data (get data) from deployed
resources.

CloudFormation Template and Stack

A stack is a collection of AWS resources that you can manage as a single unit.

A CloudFormation template, in JSON or YAML format, describes the resources you want and
their settings.

Setting up AWS CloudFormation

• Step 1 - Code your infrastructure from scratch with the help of the CloudFormation
template language, in either YAML or JSON format, or start from one of the many available
sample templates.

• Step 2 - Check your template code locally or upload your template code into an S3 bucket.

• Step 3 - Use AWS CloudFormation from the browser console, command line tools, or APIs to
create a stack based on your template code.

• Step 4 - After this, AWS CloudFormation provisions and configures the stack and resources
you specified in your template (a scripted example follows).
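As an added illustration (not from the original notes), the sketch below shows Step 3 performed with Python's boto3 library instead of the browser console; it assumes boto3 is installed and AWS credentials are configured, and the stack name and inline YAML template (declaring a single S3 bucket) are purely illustrative.

# Minimal sketch: creating a CloudFormation stack from an inline template.
import boto3

template = """
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example stack
Resources:
  ExampleBucket:
    Type: AWS::S3::Bucket
"""

cf = boto3.client("cloudformation")  # uses your configured AWS credentials
cf.create_stack(StackName="example-stack", TemplateBody=template)
print("Stack creation started; monitor progress in the CloudFormation console")

CloudFormation then provisions the declared resources and reports stack events as the deployment proceeds.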

Applications- Companies Using CloudFormation Tool

• Shopify makes use of CloudFormation to create easy-to-use templates. Shopify uses these
to deploy complex infrastructures with just a few clicks.

• Netflix is a major streaming media provider which leverages CloudFormation for its
platform development and deployment.

• Airbnb is another company that utilizes CloudFormation. Airbnb uses it to manage its
massive infrastructure – for spinning up multiple layers of Amazon EC2 instances,
setting up auto-scaling policies, allocating security groups, installing monitoring tools,
and much more. This way, Airbnb can automate its deployments and keep track of
changes in its infrastructures in an efficient manner.

• Capital One employs CloudFormation in its DevOps environment. It uses CloudFormation
scripts to orchestrate deployments across multiple AWS accounts while ensuring a stable
architecture across all environments (dev/test/production).

• Expedia Inc., one of the world’s leading online travel booking companies, also utilizes
CloudFormation. Expedia creates a custom template library for its cloud environments
using CloudFormation, which allows them to quickly deploy.

CONFIGURATION MANAGEMENT IN DEVOPS

• A CM system enables easy tracking of configuration changes, resulting in configuration
transparency and auditability.
• A CMS also offers automatic infrastructure provisioning and scaling, making it simple to
add more servers or apps to the environment.
• CM also ensures that all servers, applications, and systems are configured dependably
and appropriately, enhancing the environment’s overall performance and reliability.
• Configuration management enables quick recovery from calamities by allowing simple
rollbacks to earlier configurations.

Baseline in DevOps

A Baseline is a snapshot of selected work items at a point in time. So, no matter what
changes we have made to the baselined work items, the saved snapshot won't change.

Even if we have merged the baselines, the changes are done against the latest versions of
the work items, not to the baselines themselves.

Fig – Baselining in DevOps

Version Control in DevOps

Version control systems are software that help track changes made to code over time. As a
developer edits code, the version control system takes a snapshot of the files. It then saves
that snapshot permanently so it can be recalled later if needed.

Fig – Version Control in DevOps (Git)

• Version control – for this, you will need to use a certain version control system
(like Git). Make sure to use external keys to encrypt secret data and add data files to
a single repository created in your preferred version control solution for thorough
management.

Elements of Configuration Management in DevOps

• Configuration identification

• Configuration control

• Configuration Accounting

• Configuration Audit

Configuration Identification—This is the process of identifying all of the components
(source code, design documents, test cases, etc.) of a project and ensuring that these
components can be found quickly throughout the project life cycle.

Configuration Change Control—This important activity coordinates access to project
components among team members so that data do not “fall through the cracks,” become lost,
or have unauthorized changes made to them.

Configuration Status Accounting—The goal of configuration status accounting is to record
why, when, and by whom a particular change is made to the source code of a project.

Configuration Auditing—Configuration auditing is a process that confirms that a software
project is on track and that the developers are building what is actually required.

Configuration Repositories used in DevOps

• SOURCE CODE REPOSITORY — Used primarily during the development phase.
• ARTIFACT REPOSITORY — Used during the development and operations phases.
• CONFIGURATION MANAGEMENT DATABASE — Used during the development and operations phases.
• SNAPSHOT REPOSITORY — Holds snapshots of resource configurations.
• RELEASE REPOSITORY — Holds releases.

Top 5 configuration management tools

1. Ansible
2. Puppet
3. Chef
4. CFEngine
5. Saltstack

Configuration as Code:

Configuration as code means defining all the configuration of servers and other resources
as code or scripts and checking them into version control (a small sketch follows the list
below).

‘Configuration as Code’ in DevOps practice:

• Configurations are Version controlled


• Supports Automated and standardized Configuration
• Removes dependency
• Error-free infra setups
• Correcting configuration files
• Treating infrastructure as a flexible resource
• Automated scaling of infrastructure
• Maintaining consistency in the setups
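As an added sketch (not from the original notes), the script below illustrates the idea in Python: server settings live in a declarative structure that would be checked into version control, and the script renders them into a configuration file, keeping setups consistent and repeatable. All names and values are invented for the example.

# Minimal configuration-as-code sketch: declarative settings rendered to a file.
settings = {
    "server_name": "app.example.com",  # illustrative values only
    "listen_port": 8080,
    "worker_count": 4,
}

def render(cfg):
    # Render the declarative settings as simple key=value lines
    return "\n".join("%s=%s" % (key, value) for key, value in cfg.items())

with open("server.conf", "w") as f:
    f.write(render(settings))
print("Wrote server.conf from version-controlled settings")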

UNIT V WORKING WITH APIs 9 Working with Third Party APIs: Overview of interconnectivity
in cloud ecosystems. Working with Twitter API, Flickr API, Google Maps API. Advanced use of
JSON and REST.
What is a Third-party API?

A third-party API is a program that lets you connect functionality from different apps. It is
typically provided by large corporations, but it does not have to be. Such an API allows you
to access third-party data and software functions in your application or website.

One example is Uber’s integration of Google Maps functionality to track Uber rides. Uber
saves the time of building map functionality by using a third-party API.

How Does a 3rd Party API Work?

To comprehend how a third-party API functions, we must first distinguish it from a
first-party API. A first-party API is intended to be used internally, whereas a third-party
API is a tool that lets you connect your app to the services of different companies.

An API is a medium for communication between two applications. It creates shared functions
and allows seamless, controlled data sharing. In the case of third parties, the app owner
builds the app’s functionality behind an API in order to let other apps connect to their app
features. To accomplish this, the integration code is published along with documentation on
its implementation.
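To make this concrete, here is an added sketch (not from the original text) of one application consuming another's published API over HTTP. The URL and API key are purely hypothetical, and Python's requests package is assumed to be installed.

# Minimal sketch: consuming a hypothetical third-party API over HTTP.
import requests

API_KEY = "YOUR_API_KEY"  # issued by the (hypothetical) third-party provider
url = "https://api.example-provider.com/v1/rides"  # hypothetical endpoint

# The provider authenticates the calling app via a key sent in a request header
response = requests.get(url, headers={"Authorization": "Bearer " + API_KEY})
response.raise_for_status()
print(response.json())  # controlled data shared back to the calling app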

Key Benefits of Using a Third-party API

Efficiency

Previously, every new app creator had to build every aspect of the app's functionality from
scratch. Now software developers can make use of third-party APIs to gain access to features
that would otherwise require significant time and effort to create. Thus, a third-party API
integration can help reduce costs, time, and effort.
Avoid Data-Duplication

Google Sign-in enables several apps to make use of authentication credentials from OAuth to
control user profiles. Without such an authorization API to integrate with, users of
third-party apps must create a brand-new profile for each sign-up. Furthermore, every
business must then manage multiple databases derived from the same source.

Less Maintenance

An API from a third party is simpler to manage. After all, it’s operated, controlled, and
managed by the business that developed it. For third-party APIs, it’s a simple plug-and-play
method. If you use third-party APIs from a reputable company, you won’t face many issues,
as maintenance and updates are handled seamlessly.

What is a third-party API integration?

It’s as simple as connecting an API provided by an outside service to your own application.
The API is continuously maintained by the service provider, while the integration is made
through specific developer keys issued for the purpose. This process requires the knowledge
of a proficient mobile developer (for third-party API integration on Android as well as iOS)
or a professional API integration specialist.

Why You May Need a 3rd Party API Integration?

For many new businesses that do not have the funds to develop their own complicated
functions, APIs can prove useful. APIs from reputable and established firms can open your
company to a universe of possibilities that might otherwise not be available. In the case of
a map, for instance, building mapping for an app would require lots of complicated
background work. In this instance, a startup would be better off using a well-established
third-party API to get access to a wealth of information.

How to Choose the Right 3rd Party API for Your Project?

- Documentation
Each software item is supported by some kind of documentation that developers can use to
implement the software in their own code. Before you choose a third-party API, make sure it
comes with detailed documentation that contains specific details.

- Features
Developers rely on APIs for efficiency. You shouldn’t have to work with two different APIs
when just one can do all the tasks you require. A good third-party API should provide robust
and specific features that will help you reach your goal efficiently.

- Support
Third-party APIs are maintained by the service provider, not you, the customer. This means
you should use tools with top-quality support from the provider. How responsive is the
service? How frequent are updates? What’s the maintenance plan? These questions require
answers.

- Reliability
This information is gathered from others who have used the API. A system that’s frequently
unstable on other platforms might not work well on yours either. Remember that sudden errors
can cause serious problems for the quality of service you provide to your customers.

- Security
When you make use of a third-party API, you will be sharing information with the provider.
Therefore, if the service doesn’t provide high-level security or encryption for data, then
your information is not secure with that particular product.


What is a third-party API?


In general, API stands for Application Programming Interface. It is a vital part of modern
sophisticated app platforms. It works like a medium between two applications, allowing two
or more applications to exchange permitted data with each other.

In simple words, a third-party API works as a door between two platforms. An application
must meet certain security criteria to pass data through the door. We are exposed to a wide
range of applications in our daily lives, and APIs enable them to connect by allowing one
application to communicate with another.

While using an API you can access third-party data or functionality in your application. It
can save you the time and high cost of building the functionality on your own.

A common question might arise in your mind: how does a third-party API work? A third-party
API is developed by someone else, but you can use it in your system. For this reason it is
also called an external API.

For example, imagine you are trying to develop a taxi service app that requires Google Maps
to track drivers. It is not possible for you to build a platform like Google Maps completely
by yourself. In this case, you can use the Google Maps API in your application to track your
vehicles. Thus, you are pulling data from Google Maps into your system using an API.

A third-party API is hosted on a server owned by a third-party service. Developers can use
it by linking to a JavaScript library or by making an HTTP request to a predefined URL
pattern. When the connection is established, the user’s request is sent from the app to the
server via the API and vice versa. It is two-way communication.

Types of APIs
Based on their functionality, we can categorize APIs into different types. The most common
types of APIs are –

 Public API (free)


 Partner API
 Internal API

Let’s learn about these APIs in depth below.


Public API
Public APIs are free to use and any business or developer can access public API. Typically,
public APIs have a simpler authentication and authorization process. Google’s public APIs
are the most commonly used APIs by developers to access its many products such as
Adsense, Google Maps, and AMP. The CMS platform WordPress also has lots of open APIs
for expanding WordPress’s capabilities.

Before integrating an API, always read the API documentation. Even free APIs must have good
API documentation that is constantly updated. You will receive a free API key to maintain
security; it helps prevent unauthorized interception of data by external actors.

Partner API
Partner API is only available to authorized subscribers, mostly used in business-to-business
processes. For example, if you want to connect to an external CRM, your vendor will provide
you with an API to access the internal data system. A payment gateway API works in the same
way. Partner APIs typically include a license agreement as well as enhanced authentication,
authorization, and security mechanisms.

Internal API
An internal API is used only within the business organization. Sometimes we call it a private
API. In most cases, businesses develop it in-house for their internal use. For example, the
HR department can share attendance data automatically with the payroll system using an
internal API. Large companies develop internal APIs to speed up their business processes.

A third-party API has tonnes of applications. Social login, online payments, customer data
management, webhook implementation and many more – all can be done with an external API.
Let’s see some common uses of external APIs.

Payments
Now all websites that are selling a product or service accept online payments. All of them use
a third-party API to collect payments from customers. Along with regular payments, a
payment gateway processes recurring payments, refunds, currency management and so on.
Major players like Stripe, PayPal, Square, Mollie, all of them provide external APIs for
merchant websites.

Chat
A very familiar use of third-party API is the chat feature in websites or applications. It
provides users real-time online assistance through a chat platform. A popular example of chat
API is Messenger integration with various websites. This chat plugin helps integrate
Messenger directly with your website. Your customers can interact with your website using a
personalized profile.

Access and Authorization


You can use a social login API to provide authorized access to your customers. It helps users
quickly create accounts on your website with their social accounts. Besides, if you are
running multiple websites, your customers can use a single sign-in credential to access all
platforms using a third-party API.
Geolocation
A wide range of services makes use of geolocation. Banking apps use it to show
nearby ATMs and bank branches, food delivery apps track deliveries, real estate
apps plot routes, and so on. Widely known geolocation integrations include
Google Maps API and Google Directions API.
OVERVIEW OF INTERCONNECTIVITY IN CLOUD ECOSYSTEMS.

Interconnectivity in cloud ecosystems is the backbone of modern IT


infrastructure, enabling seamless communication and integration between
various cloud services, platforms, and infrastructures. This interconnectedness
is vital for organizations looking to leverage the benefits of cloud computing,
such as scalability, flexibility, and cost-effectiveness. Let's delve into the details
of how interconnectivity works and its key components:

1. Interoperability : Interoperability refers to the ability of different systems or


components to communicate and exchange data effectively. In the context of cloud
ecosystems, interoperability ensures that disparate cloud services, applications, and platforms
can work together seamlessly. This often involves adhering to industry standards and
implementing compatible protocols to facilitate communication between different
components.
2. Application Programming Interfaces (APIs): APIs are the building blocks of
interconnectivity in cloud ecosystems. They provide standardized interfaces that allow
developers to interact with and integrate various cloud services and platforms. APIs enable
functionalities such as data retrieval, storage management, authentication, and much more.
By leveraging APIs, developers can create custom integrations and automate workflows
across different cloud environments.
3. Networking Infrastructure: Networking infrastructure is fundamental to interconnectivity
in cloud ecosystems. Cloud providers offer a range of networking services that enable secure
communication between different components within the cloud environment. These services
include Virtual Private Clouds (VPCs), which allow users to create isolated networks within
the cloud, and Load Balancers, which distribute incoming traffic across multiple servers to
ensure high availability and reliability.
4. Integration Platforms and Middleware: Integration platforms and middleware solutions
play a crucial role in facilitating the seamless integration of disparate systems and
applications within the cloud ecosystem. These platforms provide tools and frameworks for
connecting various data sources, applications, and services, enabling data flows and process
orchestration across different cloud environments. Integration platforms often include
features such
as data transformation, routing, and protocol mediation to ensure compatibility between
different systems.
5. Hybrid and Multi-cloud Connectivity : Many organizations operate in hybrid
or multi-cloud environments, where they use a combination of on-premises infrastructure and
multiple cloud services. Interconnectivity solutions enable seamless communication and data
exchange between these diverse environments, allowing organizations to leverage the
benefits of both on- premises and cloud resources. This often involves implementing hybrid
cloud architectures, VPN connections, and inter-cloud networking solutions to establish
secure communication channels between different environments.
6. Security and Compliance: Interconnectivity in cloud ecosystems must prioritize security
and compliance requirements to protect sensitive data and ensure regulatory compliance. This
includes implementing robust authentication and access control mechanisms, encrypting data
in transit and at rest, and adhering to regulatory standards and industry best practices. Security
considerations are paramount when designing interconnectivity solutions to mitigate risks
such as data breaches, unauthorized access, and compliance violations.
7. Service Orchestration: Service orchestration tools automate the provisioning, deployment,
and management of cloud resources and services, enabling organizations to streamline
workflows and optimize resource utilization. These tools facilitate dynamic scaling, load
balancing, and fault tolerance, allowing
organizations to respond rapidly to changing business needs and demands.

Cloud ecosystems are thriving environments where businesses leverage various cloud
services and applications. But for these ecosystems to function seamlessly, they require a
strong foundation – interconnectivity.

In simpler terms, interconnectivity refers to the ability for different parts of a cloud
ecosystem to connect and exchange data securely and efficiently. This encompasses
connections between:

 Multiple cloud providers: Businesses are increasingly adopting multi-cloud strategies, using
services from various providers like AWS, Microsoft Azure, or Google Cloud Platform.
Interconnection enables smooth data flow between these clouds.
 Cloud and on-premises infrastructure: Many businesses maintain a hybrid cloud
environment, with some data and applications on-premises and others in the cloud.
Interconnection allows seamless communication between these environments.
 Different services within a cloud: Even within a single cloud provider, applications and
services may reside in separate virtual networks. Interconnection facilitates communication
between these internal components.
Here's a deeper dive into the importance of interconnectivity in cloud
ecosystems:

 Unlocking the Power of Multi-Cloud: Interconnectivity allows businesses to leverage
best-of-breed services from different cloud providers without worrying about data silos or
slow transfers.
 Streamlining Hybrid Cloud Operations: By enabling seamless communication between
on-premises and cloud environments, interconnection simplifies data management and
application deployment in hybrid cloud setups.
 Enhanced Performance and Scalability: Interconnection solutions often offer
high-bandwidth, low-latency connections, ensuring faster data transfer and improved
application performance. This becomes crucial for real-time applications and large data
processing tasks.
 Security and Compliance: Secure interconnection options with robust access controls
ensure data privacy and compliance with relevant regulations across different cloud
environments.

There are various ways to achieve interconnectivity in cloud ecosystems:

 Dedicated Interconnects: These provide a direct physical connection between your
on-premises network and the cloud provider's network, offering high security and performance.
 Partner Interconnects: This option leverages a network service provider to establish a
connection between your network and the cloud.
 Cross-Cloud Interconnects: This facilitates direct connections between your resources in
different cloud providers, enabling data exchange without going through the public internet.

Interconnection is the Technology Powering Digital Ecosystems

Interconnection is the mechanism tying together the ecosystem of entities each business
exchanges data with as part of operations, using what we are calling a “digital native supply
chain.” Historically, enterprises, related service providers and cloud providers (and
consumers, however you define them) shared data through point-to-point connections within
carrier-neutral data centers. As multicloud grew, so did digital ecosystems, which are able to
exchange data and use hosted services as needed, at scale and from any location. Simplifying
data exchange between clouds would be a good step toward holistic interconnection.

Data centers remain the hub for interconnection, which happens through cross connects and
virtual connections. The advantages for ecosystems vary, including:
 Security and resilience that only private networks can offer
 Guaranteed performance with latency aligned to need – critical to user experience
 Better decisions, enabled by reducing or eliminating data silos
 Cost control as a result of direct cloud connections and the agility to turn up or down services
faster and ever-more selective broadband use (as we should see with edge clouds)
 New revenue opportunities and expanded market reach

 What do you mean by data silo?


 A data silo is a collection of information isolated from the rest of an organization and
inaccessible to other parts of the company hierarchy. Data silos create expensive and
time-consuming problems for businesses, but they are relatively simple to resolve.
Working with Twitter API

Introduction

Twitter is a platform that is used by people across the world to exchange thoughts, ideas and
information with one another, using Tweets. Each Tweet consists of up to 280 characters and
may include media such as links, images and videos. The Twitter API provides a
programmatic way to retrieve Twitter data.

Professors at various schools around the world use the Twitter API in their classes to teach
students:

 data science, text mining, machine learning etc.


 how to write code
 how to work with APIs
 how to work with real-world social media data

Working with the Twitter API opens up a plethora of possibilities for


developers and businesses to interact with Twitter's vast amount of data, create
innovative applications, and automate tasks related to managing Twitter
accounts, analyzing trends, sentiment analysis, and more. Here's a detailed
guide on how to work with the Twitter API:

1. Understanding Twitter API Basics :


 Twitter offers a range of APIs catering to different needs, including the REST API,
Streaming API, and the Ads API.
 The REST API allows you to interact with Twitter's data through HTTP requests. It's
suitable for tasks like reading timelines, posting tweets, searching tweets, etc.
 The Streaming API provides real-time access to a subset of Twitter data. It's useful for
monitoring tweets as they're posted in real-time, tracking keywords, or streaming user
timelines.
 The Ads API is specific to Twitter's advertising platform and allows programmatic
management of advertising campaigns.

2. Creating a Twitter Developer Account :


 To access Twitter's APIs, you need to create a developer account on the
Twitter Developer Platform.
 Once registered, create a Twitter App in the Developer Dashboard. This app will
generate the necessary API keys and tokens required to authenticate your requests.
3. Authentication:
 Twitter API uses OAuth for authentication (OAuth 1.0a user context or OAuth 2.0
bearer tokens, depending on the endpoint). You'll need to include your API key, API
secret key, Access token, and Access token secret (or a bearer token) in your API requests.
 These keys and tokens are provided when you create your Twitter App.
4. Making API Requests:
 For the REST API, you'll typically use HTTP methods like GET, POST, PUT, and
DELETE to interact with Twitter's resources.
 Construct API requests using endpoints provided in Twitter's API documentation. For
example, to fetch tweets from a user's timeline, you
would use the /statuses/user_timeline endpoint.
 Include required parameters in your requests, such as query parameters for search
endpoints or tweet IDs for specific tweet interactions.
 Handle responses from Twitter API, which usually come in JSON format (see the request sketch after this list).
5. Rate Limits:
 Twitter enforces rate limits to prevent abuse and ensure fair usage of its APIs. Each
endpoint has specific rate limits.
 Monitor your API usage and handle rate limit errors gracefully in your code. Twitter
provides rate limit status headers in API responses, which you can use to track your
usage.
6. Streaming API:
 Use the Streaming API for real-time monitoring of tweets.
 You can filter streams based on keywords, locations, user IDs, etc.
 Implement a streaming client that maintains a persistent connection to Twitter's
servers to receive real-time updates.
7. Data Analysis and Visualization:
 Once you've fetched Twitter data, you can perform various analyses like sentiment
analysis, trend detection, user profiling, etc.
 Tools like Python's pandas, matplotlib, and libraries like NLTK or spaCy can be
helpful for data processing and analysis.
 Visualize insights using libraries like matplotlib, seaborn, or web-based frameworks
like D3.js.
8. Building Applications:
 Twitter API can be used to build a wide range of applications, including social media
management tools, sentiment analysis dashboards, chatbots, recommendation engines,
etc.
 Ensure compliance with Twitter's Developer Agreement and Policy while building and
deploying your applications.
9. Handling Errors and Exceptions:
 Implement error handling mechanisms in your code to deal with various issues like
rate limit exceeded errors, authentication failures, network timeouts, etc.
 Retry failed requests with exponential backoff strategies to avoid hitting rate limits
excessively.

10. Compliance and Regulations :


 Be aware of Twitter's API terms of use and compliance requirements. Ensure your
application follows Twitter's rules regarding data usage, privacy, and user consent.
 Regularly check for updates to Twitter's API documentation and adapt your application
accordingly.
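As an added sketch (not part of the original notes), the example below ties steps 3-5 together using Python's requests package against the Twitter API v2 recent-search endpoint; the bearer token and query are placeholders, and endpoint availability depends on your Twitter (X) developer plan.

# Minimal sketch: searching recent tweets via the Twitter API v2.
import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # from your Twitter developer app (placeholder)
url = "https://api.twitter.com/2/tweets/search/recent"
params = {"query": "devops", "max_results": 10}
headers = {"Authorization": "Bearer " + BEARER_TOKEN}

response = requests.get(url, headers=headers, params=params)
if response.status_code == 429:
    # Rate limit exceeded (see step 5): back off and retry later
    print("Rate limit exceeded; retry after a pause")
else:
    response.raise_for_status()
    for tweet in response.json().get("data", []):
        print(tweet["id"], tweet["text"])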
Working with the Flickr API

Working with the Flickr API allows developers to integrate Flickr's vast
collection of photos and related data into their applications. Whether you're
building a photo-sharing platform, creating a gallery website, or developing a
tool for visual analysis, accessing Flickr's API can provide access to millions of
images and rich metadata. Here's a detailed guide on working with the Flickr
API:

1. Understanding Flickr API Basics :


 Flickr offers a RESTful API that provides access to various resources such as photos,
albums, users, groups, and more.
 The API allows developers to search for photos, retrieve detailed information about
specific photos, interact with users and their content, and perform other tasks related
to managing and accessing Flickr data.
2. Creating a Flickr Developer Account:
 To access the Flickr API, you need to sign up for a developer account on the Flickr
Developer Portal.
 Once registered, create an application to obtain an API key and secret, which are
required for authentication and accessing the API endpoints.
3. Authentication :
 Flickr API uses OAuth 1.0a for authentication, which involves obtaining a request
token, exchanging it for an access token, and signing API requests with the access
token secret.
 Include your API key and secret in API requests, along with the access token and
token secret obtained during the authentication process.
4. Making API Requests :
 Construct API requests using RESTful endpoints provided in Flickr's API
documentation.
 Endpoints allow you to perform actions such as searching for photos, retrieving photo
details, accessing user information, adding comments, favoriting photos, and more.
 Include required parameters in your requests, such as search queries, photo IDs, user
IDs, etc.
 Handle responses from the API, which are typically in XML or JSON format (a request sketch follows this list).
5. Rate Limits:
 Flickr enforces rate limits to prevent abuse and ensure fair usage of its API.
 Monitor your API usage and handle rate limit errors gracefully in your code.
 Flickr provides rate limit information in API responses, which you can use to track
your usage.
6. Data Analysis and Visualization:
 Once you've fetched data from Flickr, you can perform various analyses and
visualizations, such as clustering similar images, detecting objects or scenes,
generating photo galleries, and more.
 Tools like Python's pandas, scikit-learn, matplotlib, and libraries like OpenCV can be
useful for data processing, analysis, and computer vision tasks.
 Visualize insights using libraries like matplotlib, seaborn, or web-based frameworks
like D3.js.
7. Building Applications:
 The Flickr API can be used to build a wide range of applications, including photo
management tools, image search engines, recommendation systems, and more.
 Ensure compliance with Flickr's API terms of service and usage guidelines while
building and deploying your applications.
8. Handling Errors and Exceptions:
 Implement error handling mechanisms in your code to deal with various issues such
as rate limit exceeded errors, authentication failures, network timeouts, etc.
 Retry failed requests with appropriate strategies to minimize disruptions to your
application's functionality.
9. Compliance and Regulations:
 Be aware of Flickr's API terms of service and usage guidelines. Ensure your
application follows Flickr's rules regarding data usage, privacy, and user consent.
 Regularly check for updates to Flickr's API documentation and adapt your
application accordingly.
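Below is an added sketch (not from the original notes) of a simple read-only call to Flickr's REST endpoint using the flickr.photos.search method; the API key is a placeholder and Python's requests package is assumed. Write operations would additionally require the OAuth flow described in step 3.

# Minimal sketch: searching public photos via the Flickr REST API.
import requests

API_KEY = "YOUR_API_KEY"  # from your Flickr app (placeholder)
url = "https://api.flickr.com/services/rest/"
params = {
    "method": "flickr.photos.search",
    "api_key": API_KEY,
    "text": "sunset",
    "per_page": 5,
    "format": "json",
    "nojsoncallback": 1,  # plain JSON instead of a JSONP wrapper
}

response = requests.get(url, params=params)
response.raise_for_status()
for photo in response.json()["photos"]["photo"]:
    print(photo["id"], photo["title"])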

Working with the Flickr API can provide access to a wealth of visual content
and metadata, enabling developers to create innovative applications that
leverage the power of images and photography. By mastering the Flickr API,
you can build applications that enhance photo sharing, discovery, and analysis,
enriching the user experience and unlocking new possibilities in visual
computing.

Working with the Google Maps API

Working with the Google Maps API offers developers powerful tools for
integrating interactive maps, location-based services, and geographic data into
their applications. Whether you're building a website, mobile app, or any other
type of software that requires mapping functionality, the Google Maps API
provides extensive features and customization options. Here's a detailed guide
on working with the Google Maps API:

1. Understanding Google Maps API Basics :


 Google Maps API is a collection of APIs provided by Google that allows
developers to embed maps, geocode addresses, calculate routes, and
perform various other mapping-related tasks.
 The main APIs include Google Maps JavaScript API, Google Maps
Android SDK, Google Maps iOS SDK, Google Places API, and Google
Maps Geocoding API.
2. Getting Started :
 To start using the Google Maps API, developers need to sign up for a
Google Cloud Platform (GCP) account and enable the required APIs in
the Google Cloud Console.
 Obtain an API key, which is necessary for authenticating requests to the
Google Maps API.
3. Authentication:
 Google Maps API uses API keys for authentication and access control.
 Include your API key in API requests to authenticate your application. You can
restrict usage by specifying referrer URLs, IP addresses, or usage quotas.
4. Choosing the Right API:
 Select the appropriate Google Maps API based on your application's platform and
requirements. For web-based applications, the Google Maps JavaScript API is
commonly used, while mobile apps may use the Android or iOS SDKs.
 The Google Places API is suitable for accessing information about places, such as
restaurants, hotels, and landmarks, while the Geocoding API is used for converting
addresses into geographic coordinates and vice versa.
5. Embedding Maps:
 Use the Google Maps JavaScript API to embed interactive maps into webpages.
 Customize map styles, markers, overlays, and controls to match the design and
functionality of your application.
 Implement event listeners to handle user interactions with the map, such as clicking
on markers or dragging the map, as sketched below.
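A minimal sketch follows, assuming the Maps JavaScript API has already been loaded
through a script tag carrying your API key (with callback=initMap) and that the page
contains a div with id "map" sized via CSS; the coordinates are placeholders.

// Minimal embed sketch; assumes the Maps JavaScript API is loaded with
// callback=initMap and the page has a <div id="map"> sized via CSS.
function initMap() {
   // Placeholder coordinates (Chennai), for illustration only.
   const center = { lat: 13.0827, lng: 80.2707 };
   const map = new google.maps.Map(document.getElementById("map"), {
      center: center,
      zoom: 12,
   });
   const marker = new google.maps.Marker({ position: center, map: map, title: "Demo marker" });
   // Event listener for a user interaction with the marker.
   marker.addListener("click", function () {
      console.log("Marker clicked");
   });
}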
6. Geocoding and Reverse Geocoding:
 Use the Geocoding API to convert addresses into geographic coordinates (latitude
and longitude) and vice versa.
 Perform geocoding requests to retrieve location data based on addresses, or reverse
geocoding requests to obtain addresses based on coordinates (a request sketch follows).
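The sketch below issues a forward geocoding request against the Geocoding API web
service; YOUR_API_KEY and the sample address are placeholders.

// Forward geocoding sketch against the Geocoding API web service.
const key = "YOUR_API_KEY"; // placeholder
const address = encodeURIComponent("1600 Amphitheatre Parkway, Mountain View, CA");

fetch(`https://maps.googleapis.com/maps/api/geocode/json?address=${address}&key=${key}`)
   .then((response) => response.json())
   .then((data) => {
      if (data.status === "OK") {
         // The first result's geometry carries the coordinates.
         const location = data.results[0].geometry.location;
         console.log("Latitude:", location.lat, "Longitude:", location.lng);
      }
   });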
7. Routing and Directions:
 Utilize the Directions API to calculate routes between locations and obtain step-by-
step directions.
 Customize route options, such as transportation mode (driving, walking, cycling),
waypoints, and optimization parameters; a sketch follows below.
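The sketch below uses the DirectionsService from the Maps JavaScript API (assumed to
be loaded already); the origin and destination strings are placeholders.

// Route calculation sketch; assumes the Maps JavaScript API is loaded.
const directionsService = new google.maps.DirectionsService();
directionsService.route(
   {
      origin: "Chennai",         // placeholder
      destination: "Bengaluru",  // placeholder
      travelMode: google.maps.TravelMode.DRIVING,
   },
   function (result, status) {
      if (status === "OK") {
         // First route, first leg: overall distance and duration as text.
         const leg = result.routes[0].legs[0];
         console.log(leg.distance.text, leg.duration.text);
      }
   }
);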
8. Search and Places:
 Use the Places API to search for places based on various criteria, such as keyword,
type, or proximity to a specific location (see the sketch below).
 Retrieve detailed information about places, including name, address, phone number,
website, opening hours, and user ratings.
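As a sketch, here is a Nearby Search call to the Places API web service. It is best issued
server-side (for example, from Node.js 18+, where fetch is built in), since the web service
is not intended for direct browser use; the key and coordinates are placeholders.

// Nearby Search sketch against the Places API web service.
const key = "YOUR_API_KEY"; // placeholder
const url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json" +
            "?location=13.0827,80.2707&radius=1500&type=restaurant&key=" + key;

fetch(url)
   .then((response) => response.json())
   .then((data) => {
      // Each result carries a name, an approximate address, and a rating.
      for (const place of data.results) {
         console.log(place.name, "-", place.vicinity, "rating:", place.rating);
      }
   });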
9. Integration with Mobile Apps:
 Integrate Google Maps functionality into native mobile apps using the Google
Maps SDK for Android and iOS.
 Display maps, markers, and overlays, and implement location-based features like
geolocation, routing, and search.
10. Handling Errors and Exceptions:
 Implement error handling mechanisms to manage various issues, including authentication
errors, quota exceeded errors, network timeouts, and invalid API requests.
 Handle errors gracefully and provide informative error messages to users.
11.Compliance and Regulations:
 Adhere to Google's terms of service and usage policies when using the Google
Maps API.
 Ensure compliance with data privacy regulations, especially when collecting
and processing location data from users.

By mastering the Google Maps API, developers can create immersive mapping
experiences that enhance the functionality and usability of their applications,
providing users with valuable location-based information and services. Whether
it's for navigation, local search, or visualizing geographic data, the Google Maps
API offers the tools and resources needed to build innovative mapping
solutions.

Advanced use of JSON and REST

Advanced use of JSON (JavaScript Object Notation) and REST
(Representational State Transfer) involves leveraging their capabilities to design
efficient and scalable web APIs for data exchange and interaction between
client and server applications. Here's a detailed explanation of advanced
techniques and best practices for using JSON and REST:

1. JSON (JavaScript Object Notation):


 JSON is a lightweight data interchange format that is easy for humans to read and
write and easy for machines to parse and generate.
 Advanced use of JSON involves optimizing its structure for efficiency, flexibility,
and readability in API payloads.
 Use nested objects and arrays to represent complex data structures, such as
hierarchical relationships or collections of related entities.
 Employ data types effectively, such as strings, numbers, booleans, arrays, and objects,
to accurately represent the semantics of the data being exchanged.
 Utilize JSON Schema to define and validate the structure of JSON data, ensuring
consistency and interoperability between API clients and servers.
 Implement techniques like JSON Patch or JSON Merge Patch for partial
updates to resources, reducing the amount of data transferred over the
network (a JSON Patch sketch follows this list).
 Consider compression techniques such as Gzip or Brotli to reduce the
size of JSON payloads, especially for large datasets, to improve network
performance and latency.
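As a sketch of the JSON Patch idea mentioned above, the snippet below applies an
RFC 6902 patch document by hand, for top-level paths only; real applications would
typically use a library such as fast-json-patch rather than this simplified applier.

// Hand-rolled illustration of an RFC 6902 JSON Patch document.
const user = { name: "John", age: 30, city: "New York" };
const patch = [
   { op: "replace", path: "/age", value: 31 },
   { op: "add", path: "/email", value: "john@example.com" },
   { op: "remove", path: "/city" },
];

for (const { op, path, value } of patch) {
   const key = path.slice(1); // strip the leading "/" (top-level paths only)
   if (op === "add" || op === "replace") user[key] = value;
   else if (op === "remove") delete user[key];
}

console.log(user); // { name: "John", age: 31, email: "john@example.com" }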

JSON stands for JavaScript Object Notation.

JSON is a text format for storing and transporting data. JSON is
"self-describing" and easy to understand.

JSON Example

This example is a JSON string:

'{"name":"John", "age":30, "car":null}'

What is JSON?

 JSON stands for JavaScript Object Notation


 JSON is a lightweight data-interchange format
 JSON is plain text written in JavaScript object notation
 JSON is used to send data between computers
 JSON is language independent

Why Use JSON?

The JSON format is syntactically similar to the code for creating JavaScript objects. Because
of this, a JavaScript program can easily convert JSON data into JavaScript objects.

Since the format is text only, JSON data can easily be sent between computers, and used by
any programming language.

JavaScript has a built in function for converting JSON strings into JavaScript objects:

JSON.parse()

JavaScript also has a built in function for converting an object into a JSON string:

JSON.stringify()
Valid Data Types

In JSON, values must be one of the following data types:

 a string
 a number
 an object (JSON object)
 an array
 a boolean
 null

JSON values cannot be one of the following data types (the sketch after this list shows how JSON.stringify treats them):

 a function
 a date
 undefined
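A quick demonstration of these restrictions, runnable in Node.js or a browser console:
JSON.stringify drops functions and undefined values, and serializes Date objects as ISO
strings rather than keeping them as dates.

// Functions and undefined are dropped; Dates become ISO strings.
const record = {
   name: "John",
   born: new Date("1990-01-01"),
   greet: function () { return "hi"; },
   nickname: undefined,
};
console.log(JSON.stringify(record));
// {"name":"John","born":"1990-01-01T00:00:00.000Z"}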

JSON Strings

Strings in JSON must be written in double quotes.

Example
{"name":"John"}

JSON Numbers

Numbers in JSON must be an integer or a floating point.

Example
{"age":30}

JSON Objects

Values in JSON can be objects.


Example
{
"employee":{"name":"John", "age":30, "city":"New York"}
}

JSON Array Literals

This is a JSON string:

'["Ford", "BMW", "Fiat"]'

Inside the JSON string there is a JSON array literal:

["Ford", "BMW", "Fiat"]

Arrays in JSON are almost the same as arrays in JavaScript.

JSON Server

A common use of JSON is to exchange data to/from a web server. When
receiving data from a web server, the data is always a string.

Parse the data with JSON.parse(), and the data becomes a JavaScript object.

Sending Data

If you have data stored in a JavaScript object, you can convert the object into JSON, and
send it to a server:

Example
const myObj = {name: "John", age: 31, city: "New York"};
const myJSON = JSON.stringify(myObj);
window.location = "demo_json.php?x=" + myJSON;

Receiving Data

If you receive data in JSON format, you can easily convert it into a JavaScript object:
Example
const myJSON = '{"name":"John", "age":31, "city":"New York"}';
const myObj = JSON.parse(myJSON);
document.getElementById("demo").innerHTML = myObj.name;

JSON PHP

A common use of JSON is to read data from a web server, and display the data in a web
page.

This chapter will teach you how to exchange JSON data between the client and a PHP server.

The PHP File

PHP has some built-in functions to handle JSON.

Objects in PHP can be converted into JSON by using the PHP
function json_encode():

PHP file

<?php
$myObj = new stdClass();   // create the object explicitly before setting properties
$myObj->name = "John";
$myObj->age = 30;
$myObj->city = "New York";

$myJSON = json_encode($myObj);

echo $myJSON;
?>
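A minimal sketch of consuming this PHP output from a web page follows; the file name
demo_json.php is a placeholder for wherever the script above is hosted.

// Fetch the JSON produced by the PHP script and display one field.
const xmlhttp = new XMLHttpRequest();
xmlhttp.onload = function () {
   const myObj = JSON.parse(this.responseText);
   document.getElementById("demo").innerHTML = myObj.name;
};
xmlhttp.open("GET", "demo_json.php");
xmlhttp.send();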

JSON HTML

JSON can very easily be translated into JavaScript, and JavaScript can be
used to make HTML in your web pages.


HTML Table

Make an HTML table with data received as JSON:

Example
const dbParam = JSON.stringify({table: "customers", limit: 20});
const xmlhttp = new XMLHttpRequest();
xmlhttp.onload = function() {
   const myObj = JSON.parse(this.responseText);
   let text = "<table border='1'>";
   for (let x in myObj) {
      text += "<tr><td>" + myObj[x].name + "</td></tr>";
   }
   text += "</table>";
   document.getElementById("demo").innerHTML = text;
}
xmlhttp.open("POST", "json_demo_html_table.php");
xmlhttp.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
xmlhttp.send("x=" + dbParam);

(Output: an HTML table listing the customer names returned by the server, one row per record.)
2. REST (Representational State Transfer):
 REST is an architectural style for designing networked applications based
on the principles of statelessness, uniform interface, resource
identification, and manipulation through representations.
 Advanced use of REST involves adhering to RESTful principles and best practices to
create scalable, maintainable, and interoperable APIs.
 Design resource-oriented APIs that represent domain entities as resources with unique
URIs (Uniform Resource Identifiers).
 Use HTTP methods (GET, POST, PUT, DELETE) to perform CRUD (Create, Read,
Update, Delete) operations on resources, following the semantics of each method.
 Implement HATEOAS (Hypermedia as the Engine of Application State) to include
hypermedia links in API responses, enabling clients to navigate the API dynamically
without prior knowledge of URIs.
 Support content negotiation by providing multiple representations (e.g., JSON, XML)
of resources based on client preferences specified in the Accept header of HTTP
requests.
 Utilize caching mechanisms such as ETags and cache-control headers to improve API
performance and reduce server load by enabling caching of resource representations.
 Implement pagination, filtering, sorting, and search capabilities to manage large
collections of resources efficiently and provide a better user experience (see the sketch after this list).
 Use HTTP status codes to convey the outcome of API requests accurately, including
success, client errors, server errors, and redirections.
 Ensure security by implementing authentication, authorization, and encryption
mechanisms to protect sensitive data and prevent unauthorized access or manipulation
of resources.
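As a small illustration of the pagination point above, here is a self-contained Express
sketch; the in-memory users array is a stand-in for a real data store, and the route and
query parameter names are assumptions for this example.

var express = require('express');
var app = express();

// In-memory stand-in for a real data store (illustrative only).
var users = Array.from({ length: 42 }, (_, i) => ({ id: i + 1, name: "user" + (i + 1) }));

app.get('/users', function (req, res) {
   // Read ?page= and ?limit= from the query string, with defaults.
   const page = parseInt(req.query.page, 10) || 1;
   const limit = parseInt(req.query.limit, 10) || 10;
   const start = (page - 1) * limit;
   res.json({
      page: page,
      limit: limit,
      total: users.length,
      data: users.slice(start, start + limit),
   });
});

app.listen(5000);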

REST is the abbreviation of Representational State Transfer

REST is the abbreviation of Representational State Transfer, a phrase coined in the year 2000
by Roy Fielding. It is a structural design approach for crafting loosely coupled
applications using HTTP, often applied in the development of web services. REST web
services do not impose any rules concerning how the design needs to be applied in practice
at a low level; REST only holds the high-level design guiding principles and leaves it to the
developer to think about the implementation.

In this architecture, a REST server provides connectivity to resources, which helps with client
access as well as updating resources. In this, the resources are recognized by URIs /
global IDs. A REST API produces a variety of output formats to represent a resource, such as
JSON (the most popular of them all), plain text, and XML. REST architecture-oriented web
services are termed RESTful web services.

RESTful Methods

The REST architecture makes use of four commonly used HTTP methods. These are:

Method    Description

GET       This method helps in offering read-only access for the resources.

POST      This method is implemented for creating a new resource.

DELETE    This method is implemented for removing a resource.

PUT       This method is implemented for updating an existing resource or creating a fresh one.

A Node.js application using ExpressJS is ideally suited for building REST APIs. In this
chapter, we shall explain what a REST (also called RESTful) API is, and build a Node.js
based, Express.js REST application. We shall also use REST clients to test our REST API.

API is an acronym for Application Programming Interface. The word interface generally
refers to a common meeting ground between two isolated and independent environments. A
programming interface is an interface between two software applications. The term REST
API or RESTful API is used for a web application that exposes its resources to other
web/mobile applications through the Internet, by defining one or more endpoints which the
client apps can visit to perform read/write operations on the host's resources.

REST architecture has become the de facto standard for building APIs, preferred by
developers over other technologies such as RPC (Remote Procedure Call) and
SOAP (Simple Object Access Protocol).

What is REST architecture?

REST stands for REpresentational State Transfer. REST is a well-known software
architectural style. It defines how the architecture of a web application should behave. It is a
resource-based architecture where everything that the REST server hosts (a file, an image, or
a row in a table of a database) is a resource, having many representations. REST was first
introduced by Roy Fielding in 2000.
REST recommends certain architectural constraints.

 Uniform interface
 Statelessness
 Client-server
 Cacheability
 Layered system
 Code on demand

These are the advantages of REST constraints −

 Scalability
 Simplicity
 Modifiability
 Reliability
 Portability
 Visibility

A REST server provides access to resources, and a REST client accesses and modifies the
resources using the HTTP protocol. Here each resource is identified by URIs / global IDs. REST
uses various representations to represent a resource, like text, JSON and XML, but JSON is the
most popular one.

HTTP methods

The following four HTTP methods are commonly used in REST-based architecture.

POST Method

The POST verb in the HTTP request indicates that a new resource is to be created on the
server. It corresponds to the CREATE operation in the CRUD (CREATE, RETRIEVE,
UPDATE and DELETE) term. To create a new resource, you need certain data, which is
included in the request body.

Examples of POST request −

HTTP POST http://example.com/users


HTTP POST http://example.com/users/123

GET Method

The purpose of the GET operation is to retrieve an existing resource on the server and return
its XML/JSON representation as the response. It corresponds to the READ part in the CRUD
term.
Examples of a GET request −

HTTP GET http://example.com/users


HTTP GET http://example.com/users/123

PUT Method

The client uses the HTTP PUT method to update an existing resource, corresponding to the
UPDATE part in CRUD. The data required for the update is included in the request body.

Examples of a PUT request −

HTTP PUT http://example.com/users/123


HTTP PUT http://example.com/users/123/name/Ravi

DELETE Method

The DELETE method (as the name suggests) is used to delete one or more resources on the
server. On successful execution, an HTTP response code 200 (OK) is sent.

Examples of a DELETE request −

HTTP DELETE http://example.com/users/123


HTTP DELETE http://example.com/users/123/name/Ravi

RESTful Web Services

Web services based on the REST architecture are known as RESTful web services. These
web services use HTTP methods to implement the concept of REST architecture. A RESTful
web service usually defines a URI (Uniform Resource Identifier) for the service, provides
resource representations such as JSON, and supports a set of HTTP methods.

Creating a RESTful API for a Library

Consider that we have a JSON-based database of users, having the following users in a file
users.json:

{
"user1" : {
"name" : "mahesh",
"password" : "password1",
"profession" : "teacher",
"id": 1
},

"user2" : {
"name" : "suresh",
"password" : "password2",
"profession" : "librarian",
"id": 2
},

"user3" : {
"name" : "ramesh",
"password" : "password3",
"profession" : "clerk",
"id": 3
}
}

Our API will expose the following endpoints for the clients to perform CRUD operations on
the users.json file, which is the collection of resources on the server.

Sr.No.   URI    HTTP Method   POST body     Result

1        /      GET           empty         Show list of all the users.

2        /      POST          JSON String   Add details of new user.

3        /:id   DELETE        JSON String   Delete an existing user.

4        /:id   GET           empty         Show details of a user.

5        /:id   PUT           JSON String   Update an existing user.


List Users

Let's implement the first route in our RESTful API to list all users, using the following
code in an index.js file:

var express = require('express');
var app = express();
var fs = require("fs");

app.get('/', function (req, res) {
   fs.readFile( __dirname + "/" + "users.json", 'utf8', function (err, data) {
      res.end( data );
   });
})

var server = app.listen(5000, function () {
   console.log("Express App running at http://127.0.0.1:5000/");
})

To test this endpoint, you can use a REST client such as Postman or Insomnia. In this
chapter, we shall use the Insomnia client.

Run index.js from the command prompt, and launch the Insomnia client. Choose the GET
method and enter the http://localhost:5000/ URL. The list of all users from users.json will be
displayed in the Response panel on the right.
You can also use the cURL command line tool for sending HTTP requests. Open another
terminal and issue a GET request for the above URL.

C:\Users\mlath>curl http://localhost:5000/
{
"user1" : {
"name" : "mahesh",
"password" : "password1",
"profession" : "teacher",
"id": 1
},

"user2" : {
"name" : "suresh",
"password" : "password2",
"profession" : "librarian",
"id": 2
},

"user3" : {
"name" : "ramesh",
"password" : "password3",
"profession" : "clerk",
"id": 3
}
}

Show Detail

Now we will implement an API endpoint /:id which will be called using a user ID, and it will
display the details of the corresponding user.

Add the following method in index.js file −

app.get('/:id', function (req, res) {
   fs.readFile( __dirname + "/" + "users.json", 'utf8', function (err, data) {
      var users = JSON.parse( data );
      var user = users["user" + req.params.id];
      res.end( JSON.stringify(user));
   });
})

In the Insomnia interface, enter http://localhost:5000/2 and send the request.

You may also use the cURL command as follows to display the details of user2:

C:\Users\mlath>curl http://localhost:5000/2
{"name":"suresh","password":"password2","profession":"librarian","id":2}

Add User

The following API will show you how to add a new user to the list. The details of the new
user are sent in the request body. As explained earlier, you must have installed the
body-parser package in your application folder.

var bodyParser = require('body-parser');

app.use( bodyParser.json() );
app.use(bodyParser.urlencoded({ extended: true }));

app.post('/', function (req, res) {
   fs.readFile( __dirname + "/" + "users.json", 'utf8', function (err, data) {
      var users = JSON.parse( data );
      var user = req.body.user4;   // the request body carries a "user4" object
      users["user" + user.id] = user;
      res.end( JSON.stringify(users));
   });
})

To send a POST request through Insomnia, set the Body tab to JSON, and enter the user
data in JSON format as shown below.
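Since the handler above reads req.body.user4, the JSON body sent from Insomnia would
look like this (the values match the output shown next):

{
   "user4" : {
      "name" : "mohit",
      "password" : "password4",
      "profession" : "teacher",
      "id": 4
   }
}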

You will get JSON data of four users (three read from the file, and one added):

{
"user1": {
"name": "mahesh",
"password": "password1",
"profession": "teacher",
"id": 1
},
"user2": {
"name": "suresh",
"password": "password2",
"profession": "librarian",
"id": 2
},
"user3": {
"name": "ramesh",
"password": "password3",
"profession": "clerk",
"id": 3
},
"user4": {
"name": "mohit",
"password": "password4",
"profession": "teacher",
"id": 4
}
}

Delete user

The following function reads the ID parameter from the URL, locates the user in the list
obtained by reading the users.json file, and deletes the corresponding user.

app.delete('/:id', function (req, res) {
   fs.readFile( __dirname + "/" + "users.json", 'utf8', function (err, data) {
      data = JSON.parse( data );
      var id = "user" + req.params.id;
      delete data[id];
      res.end( JSON.stringify(data));
   });
})

Choose the DELETE request in Insomnia, enter http://localhost:5000/3 and send the request. The
user with ID=3 will be deleted, and the remaining users are listed in the response panel.
Output
{
"user1": {
"name": "mahesh",
"password": "password1",
"profession": "teacher",
"id": 1
},
"user2": {
"name": "suresh",
"password": "password2",
"profession": "librarian",
"id": 2
}
}

Update user

The PUT method modifies an existing resource on the server. The following app.put()
method reads the ID of the user to be updated from the URL, and the new data from the
JSON body.

app.put("/:id", function(req, res) {


fs.readFile( dirname + "/" + "users.json", 'utf8', function (err, data) {

var users = JSON.parse( data );


var id = "user"+req.params.id;
users[id]=req.body;
res.end( JSON.stringify(users));
})
})

In Insomnia, set the PUT method for the http://localhost:5000/2 URL.

The response shows the updated details of the user with ID=2:

{
"user1": {
"name": "mahesh",
"password": "password1",
"profession": "teacher",
"id": 1
},
"user2": {
"name": "suresh",
"password": "password2",
"profession": "Cashier",
"id": 2
},
"user3": {
"name": "ramesh",
"password": "password3",
"profession": "clerk",
"id": 3
}
}

Here is the complete code for the Node.js RESTful API −

var express = require('express');
var app = express();
var fs = require("fs");
var bodyParser = require('body-parser');

app.use( bodyParser.json() );
app.use(bodyParser.urlencoded({ extended: true }));

app.get('/', function (req, res) {
   fs.readFile( __dirname + "/" + "users.json", 'utf8', function (err, data) {
      res.end( data );
   });
})

app.get('/:id', function (req, res) {
   fs.readFile( __dirname + "/" + "users.json", 'utf8', function (err, data) {
      var users = JSON.parse( data );
      var user = users["user" + req.params.id];
      res.end( JSON.stringify(user));
   });
})

app.post('/', function (req, res) {
   fs.readFile( __dirname + "/" + "users.json", 'utf8', function (err, data) {
      var users = JSON.parse( data );
      var user = req.body.user4;
      users["user" + user.id] = user;
      res.end( JSON.stringify(users));
   });
})

app.delete('/:id', function (req, res) {
   fs.readFile( __dirname + "/" + "users.json", 'utf8', function (err, data) {
      data = JSON.parse( data );
      var id = "user" + req.params.id;
      delete data[id];
      res.end( JSON.stringify(data));
   });
})

app.put("/:id", function (req, res) {
   fs.readFile( __dirname + "/" + "users.json", 'utf8', function (err, data) {
      var users = JSON.parse( data );
      var id = "user" + req.params.id;
      users[id] = req.body;
      res.end( JSON.stringify(users));
   })
})

var server = app.listen(5000, function () {
   console.log("Express App running at http://127.0.0.1:5000/");
})

By mastering advanced techniques of JSON and REST, developers can design
robust and scalable web APIs that provide efficient data exchange and
interaction between client and server applications, enabling seamless integration
and interoperability across diverse systems and platforms.
