Basics – Evolution of SOA & MSA – Drivers for SOA – Dimensions, Standards and Guidelines for SOA – Emergence of MSA – Enterprise-wide SOA – Strawman and SOA Reference Architecture – OOAD Process & SOAD Process – Service Oriented Application – Composite Application Programming Model
Service-Oriented Architecture
• Each service provides a business capability, and services can also communicate with
each other across platforms and languages.
• Provides methods for service encapsulation, service discovery, service composition, service reusability, and service integration.
Service Encapsulation is often used to hide the internal representation, or state, of an object
from the outside.
Service Reuse: The process of reusing services when composing new services.
• Facilitates QoS (Quality of Services) through service contract based on Service Level
Agreement (SLA).
• A service-level agreement (SLA) is an agreement between a service provider and a customer. Particular aspects of the service – quality, availability, responsibilities – are agreed between the service provider and the service user.
If two services are loosely coupled, then a change to one service rarely requires a change to
the other service. However, if two services are tightly coupled, then a change to one service
often requires a change to the other service.
• Location transparency is the ability to access objects without the knowledge of their
location.
• Service provider: The service provider is the maintainer of the service and the
organization that makes available one or more services for others to use. To advertise
services, the provider can publish them in a registry, together with a service contract
that specifies the nature of the service, how to use it, the requirements for the service,
and the fees charged.
• Service consumer: The service consumer can locate the service metadata in the
registry and develop the required client components to bind and use the service.
Advantages of SOA:
• Service reusability: In SOA, applications are made from existing services. Thus,
services can be reused to make many applications.
• Easy maintenance: As services are independent of each other they can be updated
and modified easily without affecting other services.
• Reliability: SOA applications are more reliable because it is easier to debug small, independent services than one huge codebase.
Object-Oriented Analysis and Design (OOAD)
• Requirements Gathering:
• In this initial phase, the focus is on understanding and documenting the requirements of the system to be developed.
• The main deliverable of this phase is the Requirements Specification document, which outlines the functional and non-functional requirements of the system.
• Analysis:
• During analysis, the emphasis is on understanding the problem domain and defining
the system's conceptual model.
• This phase involves identifying the main entities (objects) in the system and their
relationships, as well as defining the behavior and attributes of these entities.
• The main deliverables of this phase include the Use Case Model, Class Diagrams, and
Interaction Diagrams (such as Sequence Diagrams or Communication Diagrams).
• Design:
• In the design phase, the focus shifts towards designing the architecture and detailed
design of the system based on the analysis.
• This involves defining the structure of the system, including subsystems, modules,
and their interactions.
• The main deliverables of this phase include the Architectural Design document,
Component Diagrams, and Deployment Diagrams.
• Implementation:
• The implementation phase involves translating the design into executable code.
• The code is organized into classes and modules based on the design, and best practices such as encapsulation, inheritance, and polymorphism are followed (see the sketch after this list).
• Unit testing is also an integral part of this phase, ensuring that individual components
of the system behave as expected.
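To make those three terms concrete, here is a minimal JavaScript sketch (the Account classes are hypothetical, not part of the original notes) showing encapsulation, inheritance, and polymorphism:

// Encapsulation: #balance is private and reachable only through methods.
class Account {
  #balance = 0;
  deposit(amount) { this.#balance += amount; }
  getBalance() { return this.#balance; }
  describe() { return `Account balance: ${this.getBalance()}`; }
}

// Inheritance: SavingsAccount reuses Account and extends it.
class SavingsAccount extends Account {
  addInterest(rate) { this.deposit(this.getBalance() * rate); }
  // Polymorphism: the overriding method is selected at runtime.
  describe() { return `Savings balance: ${this.getBalance()}`; }
}

const accounts = [new Account(), new SavingsAccount()];
accounts.forEach(a => { a.deposit(100); console.log(a.describe()); });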
Testing:
• Different types of testing, including unit testing, integration testing, system testing,
and acceptance testing, are carried out.
• Test cases are designed to verify that the system meets its requirements and behaves
correctly under various conditions.
• Defects discovered during testing are reported, fixed, and retested until the system
meets the desired quality standards.
• Deployment:
• Once the system has been thoroughly tested and approved, it is deployed to the
production environment.
• This involves installing the software on the target hardware and configuring it for use
by end-users.
• Deployment may also involve data migration, user training, and ongoing support
activities.
• Maintenance:
• After deployment, the system enters the maintenance phase, where it is actively used
and supported.
Unit Testing:
• The main goal of unit testing is to validate that each unit of the software performs as designed (a minimal sketch of unit and integration tests follows below).
Integration Testing:
• Integration testing is the process of testing the interactions between different units or components of a software system.
• The main objective of integration testing is to verify that the integrated components work together as expected and to detect any interface defects or communication issues.
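As an illustration, here is a minimal sketch of a unit test and an integration test using only Node.js's built-in assert module (the discount functions are hypothetical; real projects would typically use a framework such as Jest or JUnit):

const assert = require('node:assert');

// Unit under test: a pure function.
function applyDiscount(price, percent) {
  return price - price * (percent / 100);
}

// Unit test: validates one component in isolation.
assert.strictEqual(applyDiscount(200, 10), 180);

// Integration test: validates that two components work together.
function formatInvoice(price, percent) {
  return 'Total due: ' + applyDiscount(price, percent).toFixed(2);
}
assert.strictEqual(formatInvoice(200, 10), 'Total due: 180.00');

console.log('All tests passed');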
System Testing:
• System testing is the process of testing a complete and integrated software system as a
whole.
• The primary objective of system testing is to validate that the entire system meets its
specified requirements and performs as expected in its intended environment.
• System tests are typically black-box tests, focusing on the system's external behavior
and user interactions rather than its internal implementation details.
• System testing may involve functional testing, usability testing, reliability testing,
performance testing, security testing, and other types of tests depending on the
system's requirements.
• System testing helps assess the overall quality and readiness of the software for
deployment, providing confidence that it meets the needs of its stakeholders.
• Supports polyglot programming. For example, services don't need to share the same technology stack, libraries, or frameworks.
• Management/orchestration. This component is responsible for placing services on nodes, identifying failures, and rebalancing services across nodes.
• API Gateway. The API gateway is the entry point for clients. Instead of calling
services directly, clients call the API gateway, which forwards the call to the
appropriate services on the back end.
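A minimal sketch of this forwarding behaviour, using only Node's built-in http module (the routing table and port numbers are hypothetical):

const http = require('http');

// Hypothetical routing table: path prefix -> back-end service port.
const routes = { '/orders': 8081, '/users': 8082 };

http.createServer((clientReq, clientRes) => {
  const prefix = Object.keys(routes).find(p => clientReq.url.startsWith(p));
  if (!prefix) { clientRes.writeHead(404); return clientRes.end('Unknown route'); }

  // Forward the call to the appropriate back-end service.
  const proxyReq = http.request(
    { host: 'localhost', port: routes[prefix], path: clientReq.url,
      method: clientReq.method, headers: clientReq.headers },
    proxyRes => {
      clientRes.writeHead(proxyRes.statusCode, proxyRes.headers);
      proxyRes.pipe(clientRes); // stream the back-end response to the client
    });
  clientReq.pipe(proxyReq); // stream the client request body to the back end
}).listen(8080); // single entry point for all clients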
• These microservices communicate with each other through well-defined APIs, often using lightweight protocols like HTTP or message queues. Unlike monolithic architectures, where a single codebase handles all functionalities, microservices allow for distributed development and deployment.
• Decentralized Data Management: Each micro service manages its own database,
ensuring data independence and avoiding a single point of failure.
• Resilience and Fault Isolation: If one service fails, it doesn't necessarily affect the
entire system, promoting fault tolerance and resilience.
• Fault Isolation: If one microservice fails, it doesn't necessarily bring down the entire system, as other services can continue to function independently.
DevOps
DevOps is a set of practices, methodologies, and cultural philosophies that aims to improve
collaboration and communication between software development (Dev) and information
technology operations (Ops) teams. The goal is to shorten the software development life
cycle while delivering features, fixes, and updates frequently, reliably, and more efficiently.
Here's a detailed breakdown of key components and concepts within DevOps:
1. Culture: DevOps emphasizes a cultural shift towards collaboration, communication, and
shared responsibility among development, operations, and other stakeholders involved in the
software delivery process. This culture encourages breaking down silos between teams,
fostering trust, and promoting continuous learning and improvement.
2. Automation: Automation is fundamental in DevOps practices to streamline processes,
reduce manual errors, and accelerate delivery. Automation tools are used for various tasks
such as code compilation, testing, deployment, infrastructure provisioning, and monitoring.
3. Continuous Integration (CI): CI is a development practice where developers integrate
code into a shared repository frequently (often multiple times a day). Each integration
triggers automated builds and tests to detect and address integration errors early in the
development cycle.
4. Continuous Delivery (CD): CD extends CI by automating the deployment process to
ensure that software can be reliably released at any time. It involves deploying code changes
to production or staging environments automatically or with minimal manual intervention,
typically after passing automated tests.
5. Infrastructure as Code (IaC): IaC is the practice of managing and provisioning
infrastructure (e.g., servers, networks, and storage) using machine-readable configuration
files or scripts, rather than manual processes. This approach enables consistent, repeatable,
and scalable infrastructure deployments and facilitates versioning and collaboration.
6. Monitoring and Logging: DevOps emphasizes the importance of monitoring application
performance, infrastructure health, and user experience in real-time. Monitoring tools track
metrics, logs, and events to identify issues, detect anomalies, and optimize system
performance. Continuous monitoring enables proactive problem detection and resolution.
7. Microservices and Containers: DevOps often leverages microservices architecture and
containerization technologies like Docker and Kubernetes. Microservices break down
applications into smaller, loosely coupled services, enabling easier management, scalability,
and deployment. Containers provide lightweight, portable, and isolated runtime environments
for applications, enhancing consistency and efficiency across development, testing, and
production environments.
8. Collaboration Tools: DevOps teams use various collaboration tools to facilitate
communication, coordination, and knowledge sharing. These tools include version control
systems (e.g., Git), issue tracking systems (e.g., Jira), communication platforms (e.g., Slack),
and collaboration platforms (e.g., Confluence).
9. Security: DevOps integrates security practices throughout the software development life
cycle (DevSecOps). Security measures such as code analysis, vulnerability scanning, access
control, and compliance checks are automated and integrated into CI/CD pipelines to detect
and mitigate security risks early in the development process.
10. Feedback Loop: DevOps emphasizes the importance of feedback loops to gather insights
from users, stakeholders, and operational metrics. Feedback drives continuous improvement
by identifying areas for optimization, feature enhancements, and bug fixes, ensuring that
development efforts align with business objectives and user needs.
The DevOps lifecycle
Because of the continuous nature of DevOps, practitioners use the infinity loop to show how
the phases of the DevOps lifecycle relate to each other. Despite appearing to flow
sequentially, the loop symbolizes the need for constant collaboration and iterative
improvement throughout the entire lifecycle.
Discover
Building software is a team sport. In preparation for the upcoming sprint, teams must
workshop to explore, organize, and prioritize ideas. Ideas must align to strategic goals and
deliver customer impact. Agile can help guide DevOps teams.
Plan
DevOps teams should adopt agile practices to improve speed and quality. Agile is an iterative
approach to project management and software development that helps teams break work into
smaller pieces to deliver incremental value.
Build
Git is a free and open source version control system. It offers excellent support for branching, merging, and rewriting repository history, which has led to many innovative and powerful workflows and tools for the development build process.
6. RPC (Remote Procedure Call): RPC frameworks like gRPC provide a more efficient
alternative to HTTP-based communication. They allow services to call remote procedures as
if they were local functions, abstracting away network communication details. gRPC uses
Protocol Buffers (protobuf) for serialization and provides features like streaming and
bidirectional communication.
7. Event-Driven Communication: In an event-driven architecture, microservices
communicate by publishing and subscribing to events. When something significant happens
within a service (e.g., a new order is placed), it publishes an event to a message broker. Other
services interested in that event can then subscribe to it and react accordingly. This approach
promotes loose coupling and scalability.
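A minimal in-process sketch of the publish/subscribe pattern using Node's built-in EventEmitter (in production the broker role is played by systems such as Kafka or RabbitMQ; the event name and subscribers are hypothetical):

const EventEmitter = require('node:events');
const broker = new EventEmitter(); // stand-in for a real message broker

// Two independent subscribers react to the same event.
broker.on('order.placed', order => console.log('Billing: invoice for order', order.id));
broker.on('order.placed', order => console.log('Shipping: schedule for order', order.id));

// The order service publishes an event instead of calling the other services.
broker.emit('order.placed', { id: 42, item: 'book' });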
8. API Gateway: An API gateway sits between clients and the microservices backend,
providing a single entry point for all external requests. It can handle tasks like authentication,
and request routing, as well as aggregating and forwarding requests to the appropriate
microservices. This helps simplify client communication and can improve security and
performance.
9. Circuit Breaker Pattern: Implementing a circuit breaker pattern helps in handling failures
gracefully when interacting with other services. It monitors the health of remote services and
prevents cascading failures by failing fast when a service is unavailable.
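A minimal sketch of the pattern (the thresholds and the wrapped request function are hypothetical; libraries such as opossum for Node or resilience4j for Java provide production-grade versions):

// Wraps a remote call and fails fast while the remote service looks unhealthy.
class CircuitBreaker {
  constructor(request, { maxFailures = 3, resetMs = 10000 } = {}) {
    this.request = request;        // the remote call being protected
    this.maxFailures = maxFailures;
    this.resetMs = resetMs;        // how long the circuit stays open
    this.failures = 0;
    this.openedAt = 0;
  }
  async call(...args) {
    const open = this.failures >= this.maxFailures;
    if (open && Date.now() - this.openedAt < this.resetMs) {
      throw new Error('Circuit open: failing fast'); // no network call made
    }
    try {
      const result = await this.request(...args);
      this.failures = 0;           // success closes the circuit again
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.maxFailures) this.openedAt = Date.now();
      throw err;
    }
  }
}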
Monitoring and Securing the Services
Monitoring and securing services in a microservices architecture are critical components for
ensuring the reliability, performance, and security of the system.
Monitoring:
1. Service Health Monitoring:
Monitor the health of each microservice by collecting and analyzing various metrics such
as CPU usage, memory consumption, response times, and error rates.
Use tools like Prometheus, Graphite, or Datadog to collect metrics from each service and
visualize them on dashboards for real-time monitoring.
Implement health checks within each service to report its status (e.g., "up" or "down") to
an external monitoring system.
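A minimal sketch of such a health check endpoint in Node (the port and response shape are hypothetical; Prometheus or another monitor would poll it):

const http = require('http');

http.createServer((req, res) => {
  if (req.url === '/health') {
    // Report liveness plus a simple metric for the monitoring system.
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ status: 'up', uptimeSeconds: process.uptime() }));
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(3000);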
2. Logs Aggregation and Analysis:
Collect logs generated by each microservice in a centralized location for easier analysis
and troubleshooting.
Use log aggregation tools like ELK Stack (Elasticsearch, Logstash, Kibana), Splunk, or
Fluentd to collect, parse, and store logs.
Analyze logs to identify errors, anomalies, or performance issues, and use this information
to debug and optimize the system.
3. Distributed Tracing:
Implement distributed tracing to track the flow of requests as they propagate through
multiple microservices.
Use tools like Jaeger, Zipkin, or OpenTelemetry to instrument applications and collect
trace data.
Analyze traces to understand dependencies between microservices, identify performance
bottlenecks, and optimize request latency.
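For illustration, a sketch of manual instrumentation with the OpenTelemetry JavaScript API (this assumes an OpenTelemetry SDK and exporter are configured elsewhere; the service and span names are hypothetical):

const { trace } = require('@opentelemetry/api');

const tracer = trace.getTracer('order-service');

async function placeOrder(order) {
  // Each service hop records a span; the collected spans form the trace.
  return tracer.startActiveSpan('placeOrder', async span => {
    span.setAttribute('order.id', order.id);
    // ... call the inventory and payment services here ...
    span.end(); // the span must be ended explicitly
  });
}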
4. Alerting:
Set up alerting rules based on predefined thresholds or conditions for critical metrics such
as high error rates, latency spikes, or service unavailability.
Use alerting tools like Prometheus Alertmanager, PagerDuty, or OpsGenie to send
notifications via email, SMS, or integrations with collaboration platforms like Slack or
Microsoft Teams.
Define escalation policies to ensure timely response and resolution of issues identified by alerts.
Securing Services:
1. Authentication and Authorization:
Implement authentication mechanisms such as OAuth, JWT, or API keys to ensure that
only authorized users and services can access microservices.
Use role-based access control (RBAC) to enforce fine-grained access permissions based on
user roles or service identities.
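A brief sketch of token-based authentication with the widely used jsonwebtoken package (the secret handling and role names are hypothetical):

const jwt = require('jsonwebtoken');

const SECRET = process.env.JWT_SECRET; // never hard-code secrets

// Issue a token carrying the caller's identity and role for RBAC checks.
const token = jwt.sign({ sub: 'user-42', role: 'admin' }, SECRET, { expiresIn: '1h' });

// Verify the token on every request before granting access.
function authorize(incomingToken, requiredRole) {
  const claims = jwt.verify(incomingToken, SECRET); // throws if invalid or expired
  return claims.role === requiredRole;
}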
2. Transport Layer Security (TLS):
Encrypt communication between microservices using TLS to prevent eavesdropping and
data tampering.
Utilize mutual TLS (mTLS) to authenticate both client and server, ensuring secure
communication between microservices.
3. Input Validation and Sanitization:
Validate and sanitize input data to prevent common security vulnerabilities such as
injection attacks (e.g., SQL injection, XSS) and ensure data integrity.
4. Secrets Management:
Store sensitive information such as database credentials, API keys, and cryptographic keys
securely using a centralized secrets management solution (e.g., HashiCorp Vault, AWS
Secrets Manager).
Limit access to secrets based on the principle of least privilege and rotate them regularly to
mitigate the risk of exposure.
5. Container Security:
Harden container images by following best practices such as minimizing the attack surface,
regularly patching dependencies, and running containers with least privilege.
Utilize container security tools (e.g., Docker Bench, Clair, Twistlock) to scan images for
vulnerabilities and enforce security policies at runtime.
6. API Gateway Security:
Secure APIs exposed by microservices using an API gateway, which can enforce
authentication, rate limiting, and access control policies.
Implement measures such as input validation, content type validation, and request
validation to prevent common API security threats.
7. Runtime Protection:
Deploy runtime protection mechanisms such as runtime application self-protection (RASP)
or web application firewalls (WAFs) to detect and mitigate runtime threats.
Monitor runtime behavior for anomalies and enforce runtime security policies to prevent
unauthorized access and data breaches.
8. Continuous Security Testing:
Integrate security testing into the CI/CD pipeline to identify and remediate security
vulnerabilities early in the development lifecycle.
Conduct regular security assessments, penetration testing, and vulnerability scanning to
identify and address security weaknesses in microservices and their dependencies.
By implementing these security measures, you can mitigate security risks and ensure the
confidentiality, integrity, and availability of your microservices architecture.
Containerized Services
Containerized services are a method of packaging, deploying, and managing software
applications and their dependencies within isolated execution environments called containers.
These containers encapsulate everything needed to run an application, including the code,
runtime, system tools, libraries, and settings, ensuring consistent behavior across different
environments. Here's a detailed explanation of containerized services:
1. Containerization Technology:
Container Runtimes: Docker originally introduced its own container runtime, but
alternatives like containerd and cri-o have gained popularity. These runtimes manage the
lifecycle of containers, handling tasks such as container creation, execution, and destruction.
2. Key Concepts:
Containers: Containers are lightweight, portable, and self-sufficient units that package
application code and dependencies. They run as isolated processes on a host operating
system, sharing the kernel with other containers but having their own filesystem, network,
and process space.
Images: Container images are read-only templates used to create containers. They contain
the application code, runtime, libraries, and other dependencies needed to run the application.
Images are built from Dockerfiles or other specifications and can be stored in registries like
Docker Hub or private repositories.
3. Advantages:
Isolation: Containers provide process isolation, meaning that each container runs in its own
isolated environment. This isolation enhances security by preventing applications from
interfering with each other and reduces the impact of software conflicts.
Portability: Containers are portable across different infrastructure environments, including
on-premises data centers, public clouds, and hybrid environments. This portability allows for
seamless deployment and migration of applications between environments.
Resource Efficiency: Containers share the host operating system's kernel, resulting in
lower overhead compared to traditional virtual machines (VMs). This efficiency enables
higher resource utilization and allows for running more containers on the same hardware.
4. Use Cases:
5. Container Orchestration:
6. Security Considerations:
Image Security: Ensure that container images are scanned for vulnerabilities before
deployment. Use container security tools to identify and remediate security issues in
container images and their dependencies.
Deploying on Cloud
Comparing Providers: Compare the features, pricing, and support offered by different
cloud providers. Consider factors such as compute instances, storage options, networking
capabilities, and managed services.
Infrastructure as Code (IaC): Define the infrastructure using code (e.g., Terraform, AWS
CloudFormation) to automate the provisioning and configuration of cloud resources. IaC
enables reproducible and consistent deployments, simplifying infrastructure management and
reducing the risk of configuration drift.
Automation: Use automation tools and workflows (e.g., AWS Lambda, Azure Functions,
Google Cloud Functions) to automate repetitive tasks, such as deployment, scaling, and
resource management. Automation improves operational efficiency, reduces manual errors,
and enables rapid response to changing conditions.
Security: Implement security best practices (e.g., identity and access management,
encryption, compliance controls) to protect the application and data from security threats.
Leverage cloud-native security services and features to strengthen the defense posture and
ensure regulatory compliance.
Cost Optimization: Optimize your cloud resources to minimize costs while meeting
performance and availability objectives. Monitor resource usage, analyze cost trends, and
implement cost-saving strategies such as reserved instances, spot instances, and resource
tagging.
Feedback Loop: Collect feedback from users, stakeholders, and monitoring systems to
identify areas for improvement and prioritize feature development. Use agile methodologies
and iterative development cycles to continuously enhance the application and adapt to
changing needs.
Deploying on the cloud offers numerous benefits, including scalability, flexibility, reliability, and cost-effectiveness. By following best practices and leveraging cloud-native technologies and services, organizations can optimize their deployment processes and unlock the full potential of cloud computing for their applications.
Cloud U4 Whitenotes

Contents
1. Introduction to cloud and DevOps
2. Origin of DevOps
3. The developers versus operations dilemma
4. Key characteristics of a DevOps culture
5. Deploying a Web Application
6. Creating and configuring an account
7. Creating a web server
8. Managing infrastructure with CloudFormation
9. Adding a configuration management system
INTRODUCTION
A. DEVOPS
DevOps is a software development approach that combines cultural principles, tools, and
practices to increase the speed and efficiency of an organization’s application delivery
pipeline. It allows development and operations (DevOps) teams to deliver software and
services quickly, enabling frequent updates and supporting the rapid evolution of products.
There are several important ways DevOps and cloud work together:
• DevOps processes can be very agile when implemented correctly, but they can easily
grind to a halt when facing the limitations of an on-premise environment. For example,
if an organization needs to procure and install new hardware in order to start a new
software project or scale up a production application, it causes needless delays and
complexity for DevOps teams.
• Cloud infrastructure offers an important boost for DevOps and facilitates scalability.
The cloud minimizes latency and enables centralized management via a unified
platform for deploying, testing, integrating, and releasing applications.
• A cloud platform allows DevOps teams to adapt to changing requirements and
collaborate across distributed enterprise environments.
• Cloud DevOps solutions are often more cost-effective.
• Cloud-based DevOps services help minimize human error and streamline repeatable tasks.
• Prioritizing events—this requires calculating risk scores for cloud systems, accounts,
and devices, and identifying the sensitivity of cloud applications and data.
• Metrics and objectives—the role of a SecOps team requires keeping track of key
performance indicators like mean time to detect, acknowledge, and remediate
(MTTD, MTTA, and MTTR, respectively).
Here are DevOps as a Service offerings provided by the world’s leading cloud providers.
Each of them provides an end-to-end environment for DevOps teams, which eliminates
the need to download, learn, and integrate multiple point solutions.
Examples:
1. AWS DevOps
2. Azure DevOps
3. Google Cloud DevOps
1. AWS DevOps
Amazon Web Services (AWS) provides services and tools dedicated to supporting DevOps
implementations, including:
AWS CodeCommit
AWS CodeCommit is a managed source control service for hosting private Git repositories.
There is no need to provision or scale the infrastructure or install, configure, or operate
software—the service handles these tasks for you.
AWS CodeBuild
AWS CodeBuild is a fully-managed service for continuous integration (CI) in the cloud. The
service can compile your source code, run tests, and create deployment-ready software
packages. The service handles the infrastructure, so there is no need to provision, scale, or
manage the build servers. It scales continuously and can process several builds concurrently.
AWS CodeArtifact
AWS CodeArtifact is a fully-managed service that lets you centrally manage artifact repositories. It lets you publish, share, and store software packages securely. It provides pay-as-you-go scalability that enables you to flexibly scale the repository to satisfy requirements. The service handles the infrastructure, so there is no need to manage software or servers.
AWS CodeDeploy
AWS CodeDeploy is a fully-managed service for automating software deployments. It supports
deployment to various environments, including on-premises servers, AWS Lambda, Amazon
Elastic Compute Cloud (Amazon EC2), and AWS Fargate.
AWS CodePipeline
AWS CodePipeline is a cloud service for continuous delivery (CD). It provides functionality for
modeling, visualizing, and automating software delivery steps. You can employ CodePipeline
to model the entire release process, including code builds, deployment to pre-production
environments, application testing, and releasing into a production environment
2. Azure DevOps
Microsoft Azure provides cloud-based services and tools that support the
modern DevOps team. Here are notable services that help DevOps teams plan, build, and
deploy applications:
Azure Repos
Azure Repos provides version control tools to help you manage code. It offers the following
version control types:
• Git—a popular open source distributed version control. Azure Repos lets you use Git
with various tools and operating systems, including Windows, Mac, Visual Studio,
Visual Studio Code, and Git partner services and tools.
Azure Pipelines
Azure Pipelines is a cloud service that builds and tests code projects automatically. It utilizes
continuous integration (CI) and continuous delivery (CD) when testing, building, and shipping
your code to the environment of your choice. Pipelines support numerous programming
languages and project types.
Azure Boards
Azure Boards is a cloud service that provides interactive and customizable tools for managing
software projects. It offers various capabilities, such as calendar views, native support for
Scrum, Kanban, and Agile processes, integrated reporting, and configurable dashboards. You
can leverage these features to scale as your project grows.
Azure Artifacts
Azure Artifacts provides a cloud-based, centralized location for managing packages and
sharing code. It enables you to publish packages and share them publicly or privately with
your team or the entire organization. The service lets you consume packages from various
feeds and public registries, including npmjs.com and NuGet.org. It also supports a range of
package types, including npm, NuGet, Python, Universal Packages, and Maven.
3. Google Cloud DevOps
Google Cloud provides cloud-based services and tools that support DevOps teams, including:
Cloud Build
The Cloud Build service executes builds on Google Cloud’s infrastructure. It imports source
code from a location of your choice, such as GitHub, Bitbucket, Cloud Source Repositories,
or Cloud Storage, and uses your specifications to execute the build. It can produce various
artifacts, including Java archives and Docker containers.
Artifact Registry
Artifact Registry is a cloud-based service for centrally managing artifacts and dependencies.
The service is fully integrated with Google Cloud tools and runtimes and supports native
artifact protocols. It provides simple integration with existing CI/CD tools so you can set up
automated pipelines
Cloud Monitoring
Cloud Monitoring is a service that collects events, metadata, and metrics from various
sources, including Google Cloud, AWS, application instrumentation, and hosted uptime
probes.
Cloud Deploy
Google Cloud Deploy is a managed cloud service for automating application delivery. It uses
a defined promotion sequence when delivering applications to target environments. You can
deploy an updated application by creating a release, and the delivery pipeline will then
manage the entire lifecycle of this release
History of DevOps
Now, let us delve into the interesting history of DevOps. Patrick Debois is often referred
to as the father of DevOps.
1. 2007: It all started in 2007, when he was working on a large data center migration for which he was responsible for testing. He experienced many frustrations during the course of this project, caused by the continuous switching back and forth between the development side of the problem and the bevy of operations that waited on the other side. He realized that a large chunk of the effort and time was spent (or rather wasted) in navigating the project from development to operations, yet it didn't seem possible to bridge the significantly wide gap between the two worlds.
2. 2008: At an Agile conference in Toronto, Canada, in 2008, Andrew Shafer arranged a meetup session called "Agile Infrastructure." Patrick was quite excited to finally come across a like-minded person. They later went on to form a discussion group for people who wanted to post ideas that would help bring a relevant solution to the wide gap between development and operations.
3. 2009: In June 2009, Paul Hammond and John Allspaw delivered a lecture entitled "10+ Deploys a Day: Dev and Ops Cooperation at Flickr." Patrick ended up watching the streaming video of that presentation. Its views resonated with him, making him realize that this was exactly the solution he had been looking for. Motivated by this lecture, he arranged a gathering of system administrators and developers to sit together and discuss the most ideal ways to begin bridging the gap between these two heterogeneous fields. This event was named DevOpsDays and was held during the final week of October 2009.
4. 2010: This was all that was needed for smaller tech enterprises to begin amalgamating DevOps practices and building tools to help newly forming teams. By this time, DevOps had acquired a grassroots following whose members began extensively pushing their respective ideas.
5. 2011: In March 2011, Cameron Haight of Gartner published a positive outlook for DevOps adoption. Encouraged by this, many more members and users began implementing DevOps with their own ideas. Soon enough, enterprises, regardless of how small or big they were, started adopting DevOps.
6. 2015: DevOps was incorporated into SAFe. SAFe was rapidly gaining traction in the enterprise arena, where DevOps is adopted and scaled across organizations.
7. 2016: DevOps became the new norm for high-performing companies: "Clearly, what was state-of-the-art three years ago is just not good enough for today's business environment."
8. 2018: The State of DevOps report defined a five-stage approach. From level 0 to 5, a descriptive, pragmatic approach was introduced to guide teams and mature DevOps initiatives, in a report sponsored by Deloitte.
9. 2019: Enterprises embedded more IT functions in their teams next to 'Dev' and 'Ops': "organizations are embedding security (DevSecOps), privacy, policy, data (DataOps) and controls into their DevOps culture and processes."
Developer dilemma: Feature-rich code vs. the cost of managing it
Everyone wants feature-rich code, but no one wants to pay the cost of managing all of it. Anyone who's tried to build something as simple as a four-button remote control app knows how many zillions of designer-years it takes to create something that simple. Focusing on good documentation, rather than guessing at every detail, avoids a lot of wasted time.
Developer dilemma: Old code vs. new code
There is no easy answer to this dilemma. The old code still works. We like it. It's just that it's not compatible with the new version of the operating system or a new multicore chip. The new code costs money. We can usually fix a number of glaring problems with the old code, but who knows what new problems might appear.
Developer dilemma: SQL vs. NoSQL
As for speed, NoSQL is generally faster than SQL. On the other hand, a NoSQL database may not fully support ACID transactions, which may result in data inconsistency.
Developer dilemma: Native apps vs. web apps
Native apps are installed on a device. They are built using an operating system's SDKs and have access to different resources on a device: camera, GPS, phone, device storage, etc. Mobile web apps are websites optimized for mobile browsers. Their functionality resides entirely on a server, and they are delivered over an internet browser, so users don't need to install them on their devices.
Developer dilemma No. 7: How much control should users really get?
Software users want all of the freedom they can get, but they expect you, the developer, to rescue them from harm when an issue occurs. They want all of the advantages, plus the ability to slip through some backdoor whenever a problem occurs.
Operator’s dilemma
Operator dilemma No. 1: Speed vs. Quality
Operator dilemma No. 2: Autonomy vs. Collaboration
Operator dilemma No. 3: Build vs. Buy
Operator dilemma No. 4: Control vs. Flexibility
Balancing autonomy and collaboration does not mean leaving your team alone to solve
problems, but rather providing the right amount of guidance and support.
To build or buy software is a decision that haunts the DevOps team, but through comparison and situational awareness the right answer can be found.
Operator dilemma No. 4: Control vs. Flexibility
Control is a manager's duty to regulate and guide the activities of an organization. The flexibility of an organization is its ability to adapt to its internal and external environment.
DEVOPS CULTURE
DevOps – a series of practices that automates the processes between software development and IT teams – accomplishes an agile culture that enables you to build, test, and release software more quickly and reliably. DevOps is the way organizations extract the value of the agile transformations they have started, by integrating software development and operations and automating their processes from end to end.
• Maintain continuous flows across the SDLC pipeline, including integration, testing,
deployment and funding, among others.
STEP #1: Start with a top-down approach to DevOps and follow it up with a bottom-up approach
A cultural change has to happen in the entire organization. Initially, begin it from the top and gradually take it towards the bottom. A cultural change doesn't happen without top-down motivation and coordination. DevOps culture needs acceptance at the executive level and immediate sponsorship, and the right leadership to modify the software development lifecycle and promote automation over manual processes. To achieve a successful DevOps culture, everyone in your organization – from a junior developer to the CIO – should support the organizational change.
Key metrics for measuring DevOps success include:
• Lead time for changes: The time taken to go from code committed to code successfully running in production.
• Time to restore service: The time taken to restore service when a service incident or defect occurs, without any unexpected disruption or service interruption.
• Production failure rate: The frequency with which the software fails in production during a particular period of time.
• Average lead time: The time taken for a new requirement to be developed, tested, delivered, and deployed into production.
Step 4: Development
This is where you bring your application to life. Using the technology stack you've chosen,
start coding your application. In this step, you usually begin with the backend, setting up your
server and database and defining the API endpoints. Once your backend logic is complete,
you'll move to the frontend, creating pages and components using the chosen frontend
framework.
Step 5: Testing
Testing is vital to ensure the quality of your application. There are several types of tests you
can perform: Unit tests check if individual components of your app work as intended.
Integration tests check if different parts of your app work together as expected. End-to-End
tests simulate real user scenarios to check if the application behaves correctly. Bugs and errors
are an inevitable part of development. When you encounter them, debug your code and make
necessary adjustments.
Step 6: Deployment
Step 7: Maintenance and Updates Once deployed, your work isn't done. Regular
maintenance is crucial to ensure the app's smooth operation. You should also listen to user
feedback and continuously improve your application, adding new features, fixing bugs, and
refining the UI/UX.
Example: AWS deployment steps.
Sign in with the credentials of the Microsoft account that you used to create the Azure cloud account: enter your email address, phone number, or Skype ID.
With Azure DevOps, you can streamline your software development process and improve collaboration among your team members.
An organization is created. You can rename and delete your organization, or change the
organization location.
An HTTP server (Java-based) is bound to an IP address and port number; it listens for incoming requests and returns responses to clients. A simple HTTP server is flexible enough to be added or embedded into complex projects for rendering HTML elements, serving as a backend server, or even being deployed on client-side devices.
A simple HTTP server can be added to a Java program to support cloud-based applications in four steps: create an HttpServer instance, create one or more HTTP handlers, associate the handlers with the server, and start the server.
The HttpServer class provides a simple high-level HTTP server API, which can be used to build embedded HTTP servers.
HTTP handlers are associated with the HTTP server in order to process client requests.
There are two common methods for a request-response between a client and a server through the HTTP protocol: GET and POST.
Node.js is an open source JavaScript runtime environment that lets developers run JavaScript code on the server.
1. Server.js
2. HTTP Request in Server.js
3. HTTP JSON Response in Server.js
The http.createServer() method includes request and response parameters which are supplied by Node.js. The request object can be used to get information about the current HTTP request, e.g., the URL, request headers, and data. The response object can be used to send a response to the current HTTP request.
Server.js
The following sample code demonstrates how to serve a JSON response from the Node.js web server.
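The Server.js listing itself is not reproduced in these notes; here is a minimal sketch consistent with the test transcript below (the /json route is a hypothetical addition to illustrate the JSON response):

var http = require('http');

http.createServer(function (req, res) {
  if (req.url === '/json') {
    // JSON response: serialize a JavaScript object and set the content type.
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ message: 'Hello from Node.js', port: 5000 }));
  } else {
    // Plain-text home page, matching the curl output shown in the test below.
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('This is home page');
  }
}).listen(5000);

console.log('node.js web server at port 5000 is running…');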
To test:
C:\>node server.js
node.js web server at port 5000 is running…

curl -i http://localhost:5000
HTTP/1.1 200 OK
Content-Type: text/plain
Date: Tue, 8 Sep 2023 03:05:08 GMT
Connection: keep-alive

This is home page
Using Python you can create a custom web server which has unique functionality. The web server in this example can be accessed on your local network; this can either be localhost or another network host. You could even serve it across locations with the help of a VPN.
Web server
• To create a custom web server, we need to use the HTTP protocol.
• By design, the HTTP protocol has a GET request type, which returns a file on the server. If the file is found, the server returns status code 200.
• The server will start at port 8080 and accept default web browser requests.
from http.server import BaseHTTPRequestHandler, HTTPServer

hostName = "localhost"
serverPort = 8080

class MyServer(BaseHTTPRequestHandler):
    def do_GET(self):
        # Called for every GET request; send the status line and headers first.
        self.send_response(200)
        self.send_header("Content-type", "text/html")
        self.end_headers()
        self.wfile.write(bytes("<html><head><title>https://pythonbasics.org</title></head>", "utf-8"))
        self.wfile.write(bytes("<p>Request: %s</p>" % self.path, "utf-8"))
        self.wfile.write(bytes("<body>", "utf-8"))
        self.wfile.write(bytes("<p>This is an example web server.</p>", "utf-8"))
        self.wfile.write(bytes("</body></html>", "utf-8"))

if __name__ == "__main__":
    webServer = HTTPServer((hostName, serverPort), MyServer)
    print("Server started http://%s:%s" % (hostName, serverPort))
    try:
        webServer.serve_forever()
    except KeyboardInterrupt:
        pass
    webServer.server_close()
    print("Server stopped.")
To serve static files from the current directory without any custom code, Python's built-in server can also be started directly: python3 -m http.server
If you open a URL like http://127.0.0.1/example, the method do_GET() is called; the server sends the web page from within this method. The variable self.path holds the path the web browser requested, in this case /example.
❖ AWS CloudFormation is a service that allows users to model and provision their
entire cloud infrastructure using simple configuration files.
❖ It is an Infrastructure as Code (IaC) tool which makes it easier for developers
and system administrators to manage their AWS resources by creating,
updating, and deleting their infrastructure in a more automated way.
❖ CloudFormation enables organizations to use a single language to model and
provision their entire infrastructure across multiple regions and accounts.
❖ Not only can AWS CloudFormation help with the deployment of resources, but it also helps with monitoring these resources over time. By managing related resources together as a single unit called a "stack," it provides visibility into what has been deployed, when changes are made, and where the deployments are running.
❖ CloudFormation provides a robust set of features, such as stack policies that can
help increase security by controlling who can change specific resources in your stack.
Key term: A stack is a collection of AWS resources that you can manage as a single unit.
How CloudFormation works:
• Step 1 - Write your template code, describing the AWS resources you need, in JSON or YAML format.
• Step 2 - Check your template code locally or upload your template code into the
S3 bucket.
• Step 3 - Use AWS CloudFormation from the browser console; then, use command
line tools or APIs to create a stack based on your template code.
• Step 4 - After this, AWS CloudFormation provisions and configures the stack and resources you specified in your template.
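As a brief illustration of steps 1 and 3 from code, here is a sketch using the AWS SDK for JavaScript v3 (the stack name, region, and one-bucket template are hypothetical):

const { CloudFormationClient, CreateStackCommand } =
  require('@aws-sdk/client-cloudformation');

// Step 1: a tiny template declaring a single S3 bucket.
const template = JSON.stringify({
  Resources: { NotesBucket: { Type: 'AWS::S3::Bucket' } }
});

// Step 3: ask CloudFormation to create a stack from the template;
// the service then provisions the declared resources as one unit.
const client = new CloudFormationClient({ region: 'us-east-1' });
client
  .send(new CreateStackCommand({ StackName: 'demo-stack', TemplateBody: template }))
  .then(out => console.log('Stack creation started:', out.StackId))
  .catch(console.error);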
• Netflix is a major streaming media provider which leverages CloudFormation for its
platform development and deployment.
• Airbnb is another company that utilizes CloudFormation. Airbnb uses it to manage its
massive infrastructure – for spinning up multiple layers of Amazon EC2 instances,
setting up auto-scaling policies, allocating security groups, installing monitoring tools,
and much more. This way, Airbnb can automate its deployments and keep track of
changes in its infrastructures in an efficient manner.
• Expedia Inc., one of the world's leading online travel booking companies, also utilizes CloudFormation. Expedia creates a custom template library for its cloud environments using CloudFormation, which allows it to deploy quickly.
Baseline in DevOps
A Baseline is a snapshot of selected work items at a point in time. So, no matter what
changes we have made to the baselined work items, the saved snapshot won't change.
Even if we have merged the baselines, the changes are done against the latest versions of
the work items, not to the baselines themselves.
Version control systems are software tools that help track changes made to code over time. As a developer edits code, the version control system takes a snapshot of the files. It then saves that snapshot permanently so it can be recalled later if needed.
• Version control – for this, you will need to use a certain version control system
(like Git). Make sure to use external keys to encrypt secret data and add data files to
a single repository created in your preferred version control solution for thorough
management.
Elements of Configuration Management
• Configuration identification
• Configuration control
• Configuration Accounting
• Configuration Audit
Popular configuration management tools include:
1. Ansible
2. Puppet
3. Chef
4. CFEngine
5. Saltstack
Configuration as Code
Configuration as code means defining all the configurations of the servers or any other resources as code or scripts and checking them into version control.
A third-party API refers to a program that allows you to connect different functionalities from different apps. It is typically provided by large corporations, but it does not have to be. This kind of API allows you to access third-party data and software functions in your application or website.
One example is Uber's integration of Google Maps functionality to track Uber rides. Uber saves the time of building map functionality itself by using a third-party API.
To comprehend how a third-party API functions, we must first distinguish it from a first-party API. A first-party API is intended to be used internally, whereas a third-party API is a tool that lets you connect your app to the services of other companies.
An API is a medium for communication between two applications. It creates shared functions and allows seamless, controlled data sharing. In the third-party case, the app owner exposes the app's functionality through an API so that other apps can connect to its features. To accomplish this, the integration code is published along with documentation about its implementation.
Efficiency
Previously, every new app creator had to create every aspect of an app's functionality from scratch. Now software developers can make use of third-party APIs to gain access to features that would otherwise require significant time or effort to create. Thus, a third-party API integration can help reduce cost, time, and effort.
Avoid Data Duplication
Google Sign-In enables several apps to use authentication credentials from OAuth to control user profiles. Without such an authorization API integrating with third-party apps, users must create a brand-new profile for each sign-up, and every business must manage multiple databases derived from the same source.
Less Maintenance
An API from a third party is simpler to manage. After all, it is operated, controlled, and managed by the business that developed it. For third-party APIs, it's a simple plug-and-play method. If you use APIs from a reputable company, you won't face many issues, as maintenance and updates are handled for you.
It's as simple as connecting an API provided by an outside service to your own application. It is continuously maintained by the service provider, while the integration is made through specific developer keys for this purpose. This process requires the knowledge of a proficient mobile developer (for third-party API integration on Android as well as iOS) or a professional API integration specialist.
For many new businesses that do not have the funds to develop their own complicated functions, APIs can prove useful. APIs from reputable and established firms can open your company to a universe of possibilities that might otherwise not be available to it. In the case of maps, for instance, building mapping for an app requires lots of complicated background work and an extremely complex development process. In this instance, a startup is better off using a well-established third-party API to get access to a wealth of information.
How to Choose the Right 3rd Party API for Your Project?
- Documentation
Each software item is supported by some kind of documentation that developers can use to implement the software in their own code. Before you choose a third-party API, make sure it comes with detailed documentation that contains specific details.
- Features
Developers rely on APIs for efficiency. You shouldn't have to work with two different APIs when one of them can do all the tasks you require. A good third-party API should provide robust, specific features that help you reach your goal efficiently.
- Support
Third-party APIs are maintained by the service provider, not you, the customer. This means that you should use tools with top support from the provider. How responsive is the service? What is the frequency of updates? What's the maintenance plan? These questions require answers.
- Reliability
Gather this information from others who have used the API. A system that is frequently unstable on other platforms might not work well on yours either. Remember that sudden errors can cause serious problems for the quality of service you provide to your customers.
- Security
When you make use of APIs offered by a third party, you share information with the provider. Therefore, if the service doesn't provide high-level security or encryption for data, then your information is not secure with that particular product.
While using an API you can access third-party data or functionality in your application. It can help you save the time and high cost of building the functionality on your own.
A common question that might arise in your mind is: how does a third-party API work? A third-party API is developed by someone else, but you can use it in your system, so it is also called an external API.
For example, imagine you are trying to develop a taxi service app that requires Google Maps to track drivers. It is not possible for you to build a platform like Google Maps completely by yourself. In this case, you can use the Google Maps API in your application to track your vehicles. Thus, you are pulling data from Google Maps into your system using an API.
Types of APIs
Based on their functionality, we can categorize APIs into different types. The most common types of APIs are open (public) APIs, partner APIs, and internal APIs.
Open API
An open (or public) API is available to outside developers with minimal restrictions. Before integrating an API, always read the API documentation. Even free APIs must have good API documentation that is constantly updated. You will receive a free API key to maintain security; it prevents unauthorized interception of data by external actors.
Partner API
Partner APIs are only available to authorized subscribers and are mostly used in business-to-business processes. For example, if you want to connect to an external CRM, your vendor will provide you with an API to access their internal data system. A payment gateway API works in the same way. Partner APIs typically include a license agreement as well as enhanced authentication, authorization, and security mechanisms.
Internal API
An internal API is used only within the business organization; sometimes we call it a private API. In most cases, businesses develop it in-house for their internal use. For example, the HR department can share attendance data automatically with the payroll system using an internal API. Large companies develop internal APIs to speed up their business processes.
A third-party API has tonnes of applications: social login, online payment, customer data management, webhook implementation, and many more can all be built with an external API. Let's see some common uses of external APIs.
Payments
Nowadays, all websites that sell a product or service accept online payments, and all of them use a third-party API to collect payments from customers. Along with regular payments, a payment gateway processes recurring payments, refunds, currency management, and so on. Major players like Stripe, PayPal, Square, and Mollie all provide external APIs for merchant websites.
Chat
A very familiar use of a third-party API is the chat feature in websites or applications. It provides users real-time online assistance through a chat platform. A popular example of a chat API is Messenger integration with various websites. This chat plugin helps integrate Messenger directly with your website, and your customers can interact with your website using a personalized profile.
Cloud ecosystems are thriving environments where businesses leverage various cloud
services and applications. But for these ecosystems to function seamlessly, they require a
strong foundation – interconnectivity.
In simpler terms, interconnectivity refers to the ability for different parts of a cloud
ecosystem to connect and exchange data securely and efficiently. This encompasses
connections between:
Multiple cloud providers: Businesses are increasingly adopting multi-cloud strategies, using
services from various providers like AWS, Microsoft Azure, or Google Cloud Platform.
Interconnection enables smooth data flow between these clouds.
Cloud and on-premises infrastructure: Many businesses maintain a hybrid cloud
environment, with some data and applications on-premises and others in the cloud.
Interconnection allows seamless communication between these environments.
Different services within a cloud: Even within a single cloud provider, applications and
services may reside in separate virtual networks. Interconnection facilitates communication
between these internal components.
Here's a deeper dive into the importance of interconnectivity in cloud
ecosystems:
Dedicated Interconnects: These provide a direct physical connection between your on-
premises network and the cloud provider's network, offering high security and performance.
Partner Interconnects: This option leverages a network service provider to establish a
connection between your network and the cloud.
Cross-Cloud Interconnects: This facilitates direct connections between your resources in
different cloud providers, enabling data exchange without going through the public internet.
Interconnection is the mechanism tying together the ecosystem of entities each business exchanges data with as part of operations, using what we are calling a "digital native supply chain." Historically, enterprises, related service providers and cloud providers (and consumers, however you define them) shared data through point-to-point connections within carrier-neutral data centers. As multicloud grew, so did digital ecosystems, which are able to exchange data and use hosted services as needed, at scale and from any location. Simplifying data exchange between clouds would be a good step toward holistic interconnection.
Data centers remain the hub for interconnection, which happens through cross connects and virtual connections. The advantages for ecosystems vary, including:
• Security and resilience that only private networks can offer
• Guaranteed performance, with latency aligned to need – critical to user experience
• Better decisions, enabled by reducing or eliminating data silos
• Cost control, as a result of direct cloud connections, the agility to turn services up or down faster, and ever-more selective broadband use (as we should see with edge clouds)
• New revenue opportunities and expanded market reach
Introduction
Twitter is a platform used by people across the world to exchange thoughts, ideas and information with one another, using Tweets. Each Tweet consists of up to 280 characters and may include media such as links, images and videos. The Twitter API provides a programmatic way to retrieve Twitter data. Professors at various schools around the world use the Twitter API in their classes to teach students.
Working with the Flickr API allows developers to integrate Flickr's vast
collection of photos and related data into their applications. Whether you're
building a photo-sharing platform, creating a gallery website, or developing a
tool for visual analysis, accessing Flickr's API can provide access to millions of
images and rich metadata. Here's a detailed guide on working with the Flickr
API:
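As an illustration, here is a minimal sketch of calling Flickr's photo-search method
(flickr.photos.search) over its REST endpoint, again assuming Node.js 18+ for the built-in
fetch; the API key is a placeholder obtained from Flickr's developer site:
const API_KEY = "YOUR_FLICKR_API_KEY"; // placeholder credential

async function searchPhotos(text) {
  const url = "https://www.flickr.com/services/rest/" +
    "?method=flickr.photos.search" +
    "&api_key=" + API_KEY +
    "&text=" + encodeURIComponent(text) +
    "&format=json&nojsoncallback=1"; // ask for plain JSON rather than JSONP
  const response = await fetch(url);
  const data = await response.json();
  return data.photos.photo; // array of photo records (id, title, owner, ...)
}

searchPhotos("sunset").then(photos => console.log(photos.length + " photos found"));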
Working with the Flickr API can provide access to a wealth of visual content
and metadata, enabling developers to create innovative applications that
leverage the power of images and photography. By mastering the Flickr API,
you can build applications that enhance photo sharing, discovery, and analysis,
enriching the user experience and unlocking new possibilities in visual
computing.
Working with the Google Maps API offers developers powerful tools for
integrating interactive maps, location-based services, and geographic data into
their applications. Whether you're building a website, mobile app, or any other
type of software that requires mapping functionality, the Google Maps API
provides extensive features and customization options. Here's a detailed guide
on working with the Google Maps API:
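As an illustration, here is a minimal sketch of embedding an interactive map with the
Maps JavaScript API; YOUR_API_KEY is a placeholder, and the coordinates are an
arbitrary example:
<div id="map" style="height: 400px;"></div>
<script>
  function initMap() {
    // Create a map centered on example coordinates and drop a marker
    const map = new google.maps.Map(document.getElementById("map"), {
      center: { lat: 13.0827, lng: 80.2707 },
      zoom: 12
    });
    new google.maps.Marker({ position: map.getCenter(), map: map });
  }
</script>
<script async
  src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&callback=initMap">
</script>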
By mastering the Google Maps API, developers can create immersive mapping
experiences that enhance the functionality and usability of their applications,
providing users with valuable location-based information and services. Whether
it's for navigation, local search, or visualizing geographic data, the Google Maps
API offers the tools and resources needed to build innovative mapping
solutions.
What is JSON?
JSON (JavaScript Object Notation) is a lightweight, text-based format for storing and
exchanging data.
Why Use JSON?
The JSON format is syntactically similar to the code for creating JavaScript objects. Because
of this, a JavaScript program can easily convert JSON data into JavaScript objects.
Since the format is text only, JSON data can easily be sent between computers, and used by
any programming language.
JavaScript has a built-in function for converting JSON strings into JavaScript objects:
JSON.parse()
JavaScript also has a built-in function for converting an object into a JSON string:
JSON.stringify()
Valid Data Types
In JSON, values must be one of the following data types:
a string
a number
an object (JSON object)
an array
a boolean
null
JSON values cannot be a function, a date, or undefined.
JSON Strings
Example
{"name":"John"}
JSON Numbers
Example
{"age":30}
JSON Objects
This is a JSON string:
'{"name":"John", "age":30, "city":"New York"}'
Parse the data with JSON.parse(), and the data becomes a JavaScript object.
Sending Data
If you have data stored in a JavaScript object, you can convert the object into JSON, and
send it to a server:
Example
const myObj = {name: "John", age: 31, city: "New York"};
const myJSON = JSON.stringify(myObj);
window.location = "demo_json.php?x=" + myJSON;
Receiving Data
If you receive data in JSON format, you can easily convert it into a JavaScript object:
Example
const myJSON = '{"name":"John", "age":31, "city":"New York"}';
const myObj = JSON.parse(myJSON);
document.getElementById("demo").innerHTML = myObj.name;
JSON PHP
A common use of JSON is to read data from a web server, and display the data in a web
page.
This chapter will teach you how to exchange JSON data between the client and a PHP server.
PHP file
<?php
$myObj = new stdClass(); // explicitly create an empty object
$myObj->name = "John";
$myObj->age = 30;
$myObj->city = "New York";
$myJSON = json_encode($myObj);
echo $myJSON;
?>
JSON HTML
JSON received from a server can be converted into HTML with JavaScript. The following
example requests a table of customers from the server and renders each name as a table row:
Example
const dbParam = JSON.stringify({table: "customers", limit: 20});
const xmlhttp = new XMLHttpRequest();
xmlhttp.onload = function() {
  const myObj = JSON.parse(this.responseText);
  let text = "<table border='1'>";
  for (let x in myObj) {
    text += "<tr><td>" + myObj[x].name + "</td></tr>";
  }
  text += "</table>";
  document.getElementById("demo").innerHTML = text;
};
xmlhttp.open("POST", "json_demo_html_table.php");
xmlhttp.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
xmlhttp.send("x=" + dbParam);
2. REST (Representational State Transfer):
REST is an architectural style for designing networked applications based
on the principles of statelessness, uniform interface, resource
identification, and manipulation through representations.
Advanced use of REST involves adhering to RESTful principles and best practices to
create scalable, maintainable, and interoperable APIs.
Design resource-oriented APIs that represent domain entities as resources with unique
URIs (Uniform Resource Identifiers).
Use HTTP methods (GET, POST, PUT, DELETE) to perform CRUD (Create, Read,
Update, Delete) operations on resources, following the semantics of each method.
Implement HATEOAS (Hypermedia as the Engine of Application State) to include
hypermedia links in API responses, enabling clients to navigate the API dynamically
without prior knowledge of URIs.
Support content negotiation by providing multiple representations (e.g., JSON, XML)
of resources based on client preferences specified in the Accept header of HTTP
requests.
Utilize caching mechanisms such as ETags and cache-control headers to improve API
performance and reduce server load by enabling caching of resource representations.
Implement pagination, filtering, sorting, and search capabilities to manage large
collections of resources efficiently and provide a better user experience.
Use HTTP status codes to convey the outcome of API requests accurately, including
success, client errors, server errors, and redirections.
Ensure security by implementing authentication, authorization, and encryption
mechanisms to protect sensitive data and prevent unauthorized access or manipulation
of resources.
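To make a few of these practices concrete (resource-oriented URIs, accurate status codes,
pagination), here is a small Express sketch; the /books resource and its sample data are
hypothetical illustrations, not part of the original notes:
const express = require("express");
const app = express();
app.use(express.json());

let books = [
  { id: "1", title: "Book One" },
  { id: "2", title: "Book Two" }
];

// GET /books?page=1&limit=10 - a paginated collection resource
app.get("/books", (req, res) => {
  const page = parseInt(req.query.page) || 1;
  const limit = parseInt(req.query.limit) || 10;
  const start = (page - 1) * limit;
  res.status(200).json(books.slice(start, start + limit));
});

// GET /books/:id - a single resource identified by its URI
app.get("/books/:id", (req, res) => {
  const book = books.find(b => b.id === req.params.id);
  if (!book) return res.status(404).json({ error: "Not found" }); // client error
  res.status(200).json(book);
});

// POST /books - create a resource and respond 201 with its new URI
app.post("/books", (req, res) => {
  const book = { id: String(books.length + 1), title: req.body.title };
  books.push(book);
  res.status(201).location("/books/" + book.id).json(book);
});

app.listen(3000);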
REST is the abbreviation of Representational State Transfer, a phrase coined in the year 2000
by Roy Fielding. It is an architectural design approach for crafting loosely coupled
applications using HTTP, often employed in the development of web services. REST does not
impose rules about how it must be applied at a low level; it only provides high-level design
guiding principles and leaves it to the developer to decide on the implementation.
In this architecture, a REST server provides connectivity to resources, which the client can
access and update. The resources are identified by URIs / global IDs. A REST API can
produce a variety of output formats to represent a resource, such as JSON (the most popular
of them all), text, and XML. Web services oriented around the REST architecture are termed
RESTful web services.
RESTful Methods
The REST architecture makes use of four commonly used HTTP methods. These are:
Method Description
GET This method offers read-only access to a resource.
POST This method creates a new resource.
PUT This method updates an existing resource or creates a fresh one.
DELETE This method removes a resource.
A Node.js application using Express.js is ideally suited for building REST APIs. In this
chapter, we shall explain what a REST (also called RESTful) API is, and build a Node.js
based, Express.js REST application. We shall also use REST clients to test our REST API.
API is an acronym for Application Programming Interface. The word interface generally
refers to a common meeting ground between two isolated and independent environments. A
programming interface is an interface between two software applications. The term REST
API or RESTful API is used for a web application that exposes its resources to other
web/mobile applications through the Internet, by defining one or more endpoints which the
client apps can visit to perform read/write operations on the host's resources.
REST architecture has become the de facto standard for building APIs, preferred by
developers over other technologies such as RPC (Remote Procedure Call) and SOAP
(Simple Object Access Protocol).
REST stands for REpresentational State Transfer. REST is a well known software
architectural style. It defines how the architecture of a web application should behave. It is a
resource based architecture where everything that the REST server hosts, (a file, an image, or
a row in a table of a database), is a resource, having many representations. REST was first
introduced by Roy Fielding in 2000.
REST recommends certain architectural constraints:
Uniform interface
Statelessness
Client-server
Cacheability
Layered system
Code on demand
Following these constraints gives a system desirable properties such as:
Scalability
Simplicity
Modifiability
Reliability
Portability
Visibility
A REST server provides access to resources, and a REST client accesses and modifies the
resources using the HTTP protocol. Each resource is identified by URIs / global IDs. REST
uses various representations of a resource, such as text, JSON and XML, with JSON being
the most popular.
HTTP methods
The following four HTTP methods are commonly used in REST-based architecture.
POST Method
The POST verb in the HTTP request indicates that a new resource is to be created on the
server. It corresponds to the CREATE operation in the CRUD (CREATE, RETRIEVE,
UPDATE and DELETE) acronym. The data needed to create the new resource is included
in the body of the request.
GET Method
The purpose of the GET operation is to retrieve an existing resource on the server and return
its XML/JSON representation as the response. It corresponds to the READ part in the CRUD
term.
Examples of a GET request −
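For illustration (the host and paths below are hypothetical placeholders):
HTTP GET http://www.appdomain.com/users
HTTP GET http://www.appdomain.com/users?size=20&page=5
HTTP GET http://www.appdomain.com/users/123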
PUT Method
The client uses the HTTP PUT method to update an existing resource, corresponding to the
UPDATE part in CRUD. The data required for the update is included in the request body.
DELETE Method
The DELETE method (as the name suggests) is used to delete one or more resources on the
server. On successful execution, an HTTP response code 200 (OK) is sent.
Web services based on the REST architecture are known as RESTful web services. These
web services use HTTP methods to implement the concept of REST architecture. A RESTful
web service usually defines a URI (Uniform Resource Identifier) for the service, provides
resource representations such as JSON, and supports a set of HTTP methods.
Consider a JSON-based database of users, with the following users stored in a file
users.json:
{
"user1" : {
"name" : "mahesh",
"password" : "password1",
"profession" : "teacher",
"id": 1
},
"user2" : {
"name" : "suresh",
"password" : "password2",
"profession" : "librarian",
"id": 2
},
"user3" : {
"name" : "ramesh",
"password" : "password3",
"profession" : "clerk",
"id": 3
}
}
Our API will expose the following endpoints for the clients to perform CRUD operations on
the users.json file, which is the collection of resources on the server.
Let's implement the first route in our RESTful API to list all users, using the following
code in an index.js file.
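A minimal sketch of index.js with this first route is shown below, assuming the express and
body-parser packages are installed and users.json sits in the same folder (the complete
listing appears at the end of this section):
var express = require("express");
var bodyParser = require("body-parser");
var fs = require("fs");
var app = express();

app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));

// GET / - return the full contents of users.json
app.get("/", function (req, res) {
   fs.readFile(__dirname + "/users.json", "utf8", function (err, data) {
      res.end(data);
   });
});

app.listen(5000, function () {
   console.log("Server listening at http://localhost:5000");
});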
To test this endpoint, you can use a REST client such as Postman or Insomnia. In this
chapter, we shall use the Insomnia client.
Run index.js from the command prompt, and launch the Insomnia client. Choose the GET
method and enter the http://localhost:5000/ URL. The list of all users from users.json will be
displayed in the Response panel on the right.
You can also use the cURL command line tool for sending HTTP requests. Open another
terminal and issue a GET request for the above URL.
C:\Users\mlath>curl http://localhost:5000/
{
"user1" : {
"name" : "mahesh",
"password" : "password1",
"profession" : "teacher",
"id": 1
},
"user2" : {
"name" : "suresh",
"password" : "password2",
"profession" : "librarian",
"id": 2
},
"user3" : {
"name" : "ramesh",
"password" : "password3",
"profession" : "clerk",
"id": 3
}
}
Show Detail
Now we will implement an API endpoint /:id, which will be called using a user ID and will
display the detail of the corresponding user.
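A sketch of this route, added to the same index.js (same setup as the first listing):
// GET /:id - look up "user<id>" in users.json and return its detail
app.get("/:id", function (req, res) {
   fs.readFile(__dirname + "/users.json", "utf8", function (err, data) {
      var users = JSON.parse(data);
      var user = users["user" + req.params.id];
      res.end(JSON.stringify(user));
   });
});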
You may also use the cURL command as follows to display the details of user2 −
C:\Users\mlath>curl http://localhost:5000/2
{"name":"suresh","password":"password2","profession":"librarian","id":2}
Add User
The following route shows how to add a new user to the list; the details of the new user can
be seen in the output below. As explained earlier, you must have installed the body-parser
package in your application folder.
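A sketch of the POST route (deriving the new user's key by counting the existing entries is
an assumption made here for illustration):
// POST / - add the user sent in the JSON request body
app.post("/", function (req, res) {
   fs.readFile(__dirname + "/users.json", "utf8", function (err, data) {
      var users = JSON.parse(data);
      var id = "user" + (Object.keys(users).length + 1); // e.g. "user4"
      users[id] = req.body;
      res.end(JSON.stringify(users));
   });
});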
To send a POST request through Insomnia, set the BODY tab to JSON, and enter the user
data in JSON format as shown.
You will get JSON data of four users (three read from the file, and one added):
{
"user1": {
"name": "mahesh",
"password": "password1",
"profession": "teacher",
"id": 1
},
"user2": {
"name": "suresh",
"password": "password2",
"profession": "librarian",
"id": 2
},
"user3": {
"name": "ramesh",
"password": "password3",
"profession": "clerk",
"id": 3
},
"user4": {
"name": "mohit",
"password": "password4",
"profession": "teacher",
"id": 4
}
}
Delete user
The following route reads the ID parameter from the URL, locates the user in the list
obtained by reading the users.json file, and deletes the corresponding user.
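A sketch of the DELETE route (same setup as before):
// DELETE /:id - remove the matching user and return the remaining ones
app.delete("/:id", function (req, res) {
   fs.readFile(__dirname + "/users.json", "utf8", function (err, data) {
      var users = JSON.parse(data);
      delete users["user" + req.params.id];
      res.end(JSON.stringify(users));
   });
});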
Choose the DELETE request in Insomnia, enter http://localhost:5000/3 and send the request.
The user with ID=3 will be deleted, and the remaining users are listed in the response panel.
Output
{
"user1": {
"name": "mahesh",
"password": "password1",
"profession": "teacher",
"id": 1
},
"user2": {
"name": "suresh",
"password": "password2",
"profession": "librarian",
"id": 2
}
}
Update user
The PUT method modifies an existing resource on the server. The following app.put()
method reads the ID of the user to be updated from the URL, and the new data from the
JSON body.
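A sketch of the PUT route (same setup as before):
// PUT /:id - replace the matching user with the JSON request body
app.put("/:id", function (req, res) {
   fs.readFile(__dirname + "/users.json", "utf8", function (err, data) {
      var users = JSON.parse(data);
      users["user" + req.params.id] = req.body;
      res.end(JSON.stringify(users));
   });
});
For example, sending a PUT request to http://localhost:5000/2 with the profession changed
to "Cashier" returns −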
{
"user1": {
"name": "mahesh",
"password": "password1",
"profession": "teacher",
"id": 1
},
"user2": {
"name": "suresh",
"password": "password2",
"profession": "Cashier",
"id": 2
},
"user3": {
"name": "ramesh",
"password": "password3",
"profession": "clerk",
"id": 3
}
}
Putting it all together, the complete index.js for this RESTful service is −
var express = require("express");
var bodyParser = require("body-parser");
var fs = require("fs");
var app = express();

app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));

// GET / - list all users
app.get("/", function (req, res) {
   fs.readFile(__dirname + "/users.json", "utf8", function (err, data) {
      res.end(data);
   });
});

// GET /:id - show the detail of one user
app.get("/:id", function (req, res) {
   fs.readFile(__dirname + "/users.json", "utf8", function (err, data) {
      var users = JSON.parse(data);
      var user = users["user" + req.params.id];
      res.end(JSON.stringify(user));
   });
});

// POST / - add a new user from the JSON request body
app.post("/", function (req, res) {
   fs.readFile(__dirname + "/users.json", "utf8", function (err, data) {
      var users = JSON.parse(data);
      var id = "user" + (Object.keys(users).length + 1);
      users[id] = req.body;
      res.end(JSON.stringify(users));
   });
});

// DELETE /:id - delete the matching user
app.delete("/:id", function (req, res) {
   fs.readFile(__dirname + "/users.json", "utf8", function (err, data) {
      var users = JSON.parse(data);
      delete users["user" + req.params.id];
      res.end(JSON.stringify(users));
   });
});

// PUT /:id - update the matching user with the JSON request body
app.put("/:id", function (req, res) {
   fs.readFile(__dirname + "/users.json", "utf8", function (err, data) {
      var users = JSON.parse(data);
      users["user" + req.params.id] = req.body;
      res.end(JSON.stringify(users));
   });
});

app.listen(5000, function () {
   console.log("Server listening at http://localhost:5000");
});