
SCHOOL OF COMPUTING

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

SCSA1614 DEVOPS
UNIT 1

INTRODUCTION TO DevOps

Introduction to DevOps - DevOps vs Agile - DevOps Principles and Life Cycle - Introduction
to CI / CD & DevOps Tools-Version Control - Build Automation - Configuration Management-
Containerization-- Continuous Deployment - Continuous Integration - Continuous Testing -
Continuous Monitoring.

Introduction to DevOps:
DevOps is a blend of the words Development and Operations. It refers to a collaborative
approach that enables an organization's application development team and IT operations
team to work together seamlessly, with better communication.

What is DevOps?
DevOps is the combination of cultural philosophies, practices, and tools that increases an
organization's ability to deliver applications and services at high velocity: evolving and
improving products at a faster pace than organizations using traditional software development
and infrastructure management processes (Figure 1).

Fig 1 DevOps Architecture

DevOps vs Agile:
DevOps promotes a fully automated continuous integration and deployment pipeline to enable
frequent releases, while Agile provides the ability to rapidly adapt to changing requirements
and better collaboration between smaller teams.
Difference Between Agile and DevOps

What is it?
Agile: An iterative approach that focuses on collaboration, customer feedback, and small, rapid releases.
DevOps: A practice of bringing development and operations teams together.

Purpose
Agile: Helps to manage complex projects.
DevOps: Its central concept is to manage end-to-end engineering processes.

Task
Agile: The process focuses on constant changes.
DevOps: Focuses on constant testing and delivery.

Implementation
Agile: Can be implemented within a range of tactical frameworks such as Scrum, SAFe, and sprints.
DevOps: The primary goal is collaboration, so it has no commonly accepted framework.

Team skill set
Agile: Emphasizes training all team members to have a wide variety of similar and equal skills.
DevOps: Divides and spreads the skill set between the development and operations teams.

Team size
Agile: A small team is at its core; the smaller the team, the faster it can move.
DevOps: Relatively larger team size, as it involves all the stakeholders.

Duration
Agile: Development is managed in units of "sprints", each typically shorter than a month.
DevOps: Strives for deadlines and benchmarks with major releases; the ideal goal is to deliver code to production daily or every few hours.

Feedback
Agile: Feedback is given by the customer.
DevOps: Feedback comes from the internal team.

Target areas
Agile: Software development.
DevOps: End-to-end business solutions and fast delivery.

Shift-left principles
Agile: Leverages shift-left.
DevOps: Leverages both shift-left and shift-right.

Emphasis
Agile: Emphasizes a methodology for developing software; once the software is developed and released, the Agile team does not track what happens to it.
DevOps: Is all about taking software that is ready for release and deploying it in a reliable and secure manner.

Cross-functionality
Agile: Any team member should be able to do what is required for the progress of the project; when each member can perform every job, it increases understanding and bonding between them.
DevOps: Development teams and operations teams are separate, so communication is quite complex.

Communication
Agile: Scrum is the most common method of implementing Agile, and a daily scrum meeting is carried out.
DevOps: Communication involves specs and design documents; it is essential for the operations team to fully understand the software release and its hardware/network implications to run the deployment process adequately.

Documentation
Agile: Gives priority to the working system over complete documentation, which is ideal when you are flexible and responsive but can hurt when you are handing things over to another team for deployment.
DevOps: Process documentation is foremost because the software is sent to the operations team for deployment. Automation minimizes the impact of insufficient documentation, but for complex software it is difficult to transfer all the required knowledge.

Automation
Agile: Does not emphasize automation, though it helps.
DevOps: Automation is the primary goal; it works on the principle of maximizing efficiency when deploying software.

Goal
Agile: Addresses the gap between customer needs and the development and testing teams.
DevOps: Addresses the gap between development plus testing and operations.

Focus
Agile: Focuses on functional and non-functional readiness.
DevOps: Focuses more on operational and business readiness.

Importance
Agile: Developing software is inherent to Agile.
DevOps: Developing, testing, and implementation are all equally important.

Speed vs. risk
Agile: Teams support rapid change and a robust application structure.
DevOps: Teams must make sure that changes made to the architecture never introduce risk to the entire project.

Quality
Agile: Produces better application suites with the desired requirements and can easily adapt to changes made during the project life.
DevOps: Along with automation and early bug removal, contributes to better quality; developers need to follow coding and architectural best practices to maintain quality standards.

Tools used
Agile: JIRA, Bugzilla, and Kanboard are popular Agile tools.
DevOps: Puppet, Chef, TeamCity, OpenStack, and AWS are popular DevOps tools.

Challenges
Agile: Needs teams to be highly productive, which is difficult to sustain.
DevOps: Needs development, testing, and production environments to streamline work.

Advantage
Agile: Offers a shorter development cycle and improved defect detection.
DevOps: Supports Agile's release cycle.
DevOps Principles and Life Cycle
DevOps Principles
 Customer-Centric Action
 Create with the End in Mind
 End-To-End Responsibility
 Cross-Functional Autonomous Teams
 Continuous Improvement
 Automate Everything You Can

Customer-Centric Action:

Customer-centric (also known as client-centric) is a business strategy that's based on putting
your customer first and at the core of your business in order to provide a positive experience
and build long-term relationships.

Create with the End in Mind:

Begin with the End in Mind means to begin each day, task, or project with a clear vision of your
desired direction and destination, and then continue by flexing your proactive muscles to make
things happen.

End-To-End Responsibility:

End-to-end responsibility means that the team holds itself accountable for the quality and
quantity of services it provides to its customers.

Cross-Functional Autonomous Teams:

Team members must be empowered and supported in having the ability to take on many roles
within the team to ensure that value is delivered to the customer with minimal bottlenecks and
roadblocks. Rapid delivery to the customer can be achieved.

Work is to be managed as a product rather than a project so that teams are autonomous in the
decision making tasks required to successfully deliver and maintain their product. Work is never
done; it continues to get better and better until end of life.

Continuous Improvement:

Continuous improvement, sometimes called continual improvement, is the ongoing
improvement of products, services, or processes through incremental and breakthrough
improvements. These efforts can seek "incremental" improvement over time or "breakthrough"
improvement all at once.

Automate Everything You Can:

What does it mean to "automate everything"? It means you have recognized a need to do things
better, cheaper, faster, with more quality, security, accuracy and more importantly, to free up
people from mundane tasks that are demotivating, laborious, costly and potentially high risk to
your employees, business and customers.

DevOps Life cycle

DevOps characterizes an agile connection between development and operations. It is a cycle
practiced by the development team and operations specialists together, from the first to the
last phase of the product.

Fig 2: DevOps Life cycle

The DevOps lifecycle (Figure 2) incorporates seven stages, and how they integrate is shown in
Figure 4, as given below:

1) Continuous Development

The program /software is planned and coded during this phase. During the planning stage, the
project's vision is decided, and the programmers start working on the application's code.
Planning doesn't require any DevOps tools, but code maintenance does and there are several
tools available for maintaining the code as shown in the Figure 3.

Plan: In this stage, teams identify the business requirement and collect end-user feedback. They
create a project roadmap to maximize the business value and deliver the desired product during
this stage.

Code: The code development takes place at this stage. The development teams use some tools
and plugins like Git to streamline the development process, which helps them avoid security
flaws and lousy coding practices.
Fig 3: Continuous Development

2) Continuous Integration

This phase is the central point of the entire DevOps lifecycle. Developers must commit changes
to the source code frequently as part of this software development practice; this might
happen every day or every week. Every commit is then built, which makes it possible to identify
any issues early on. Building the code involves not just compilation but also code review,
packaging, unit testing, and integration testing.

Code that adds new functionality is constantly merged with the code that already exists.
Software is therefore always being developed. To reflect changes to the end users, the updated
code must be seamlessly and continually connected with the systems.
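The commit-build-test loop described above can be sketched in a few lines of Python. This is a toy illustration, not a real CI server's API; the commit records and the test function are made up:

```python
# Toy continuous-integration loop: every commit triggers a build and a test
# run, so problems surface early. All names and data are illustrative.
def build(commit):
    # A real CI server would compile and package here; we just flag commits
    # whose code is marked as broken.
    return {"commit": commit["id"], "ok": "syntax error" not in commit["code"]}

def run_tests(artifact, tests):
    # Run every unit test against the built artifact.
    return all(test(artifact) for test in tests)

def ci_pipeline(commits, tests):
    """Build and test each commit; report the first failing one."""
    for commit in commits:
        artifact = build(commit)
        if not artifact["ok"] or not run_tests(artifact, tests):
            return f"commit {commit['id']} failed"
    return "all commits passed"

commits = [
    {"id": "a1", "code": "def add(a, b): return a + b"},
    {"id": "b2", "code": "def add(a, b): return a - b  # syntax error"},
]
tests = [lambda artifact: artifact["ok"]]
print(ci_pipeline(commits, tests))  # commit b2 failed
```

Because every commit is built and tested, the broken commit is identified immediately rather than weeks later during a big-bang integration.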

3) Continuous Testing

In this phase, the developed software is continuously tested for bugs. For constant testing,
automated testing tools such as TestNG, JUnit, and Selenium are used. These tools allow QA
engineers to test multiple code bases thoroughly, in parallel, to ensure that there is no flaw in
the functionality. In this phase, Docker containers can be used to simulate the test environment.

Build: In this stage, once developers finish their tasks, they commit the code to the shared code
repository, and it is built using tools like Maven and Gradle.

Test: Once the build is ready, it is deployed to the test environment first to perform several
types of testing like user acceptance test, security test, integration testing, performance testing,
etc., using tools like JUnit, Selenium, etc., to ensure software quality.

4) Continuous Deployment

In this phase, the code is deployed to the production servers. Also, it is essential to ensure that
the code is correctly used on all the servers. The new code is deployed continuously, and
configuration management tools play an essential role in executing tasks frequently and quickly.
Here are some popular tools which are used in this phase, such as Chef, Puppet, Ansible,
and SaltStack.

Release: The build is ready to deploy on the production environment at this phase. Once the
build passes all tests, the operations team schedules the releases or deploys multiple releases to
production, depending on the organizational needs.

Deploy: In this stage, Infrastructure-as-Code helps build the production environment and then
releases the build with the help of different tools.

Operate: The release is live now to use by customers. The operations team at this stage takes
care of server configuring and provisioning using tools like Chef.

5) Continuous Monitoring

Monitoring is a phase that involves all the operational factors of the entire DevOps process,
where important information about the use of the software is recorded and carefully processed
to find out trends and identify problem areas. Usually, the monitoring is integrated within the
operational capabilities of the software application.

Monitor: In this stage, the DevOps pipeline is monitored based on data collected from
customer behavior, application performance, etc. Monitoring the entire environment helps
teams find the bottlenecks impacting the development and operations teams’ productivity.

Fig 4 DevOps Life Cycle Stages


Introduction to CI / CD & DevOps Tools
Introduction to CI / CD

Continuous Integration and Delivery (CI/CD), shown in Figure 5, is a set of practices that
automate the process of software development. In an organization, teams need to synchronize
their work without breaking the code, and we often refer to this as the CI/CD pipeline.

CI/CD is one of the best practices to integrate workflow between development teams and IT
operations. It serves as an agile approach that focuses on meeting business requirements, quality
code, and security while the implementation and deployment process is automated.

Fig 5: CI/CD (Continuous Integration/Continuous Delivery) Pipeline

CI/CD principles

Continuous Delivery practices take CI further by describing principles for successful
production deployments:

Architect the system in a way that supports iterative releases. Avoid tight coupling between
components. Implement metrics that help detect issues in real-time.

Practice test-driven development to always keep the code in a deployable state. Maintain a
comprehensive and healthy automated test suite. Build in monitoring, logging, and
fault-tolerance by design.

Work in small iterations. For example, if you develop in feature branches, they should live no
longer than a day. When you need more time to develop new features, use feature flags.
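A feature flag is simply a runtime switch around unfinished code, so a half-done feature can merge daily and ship dark. A minimal sketch; the flag name and flows are illustrative:

```python
# Minimal feature-flag sketch: unfinished work ships behind a flag that is
# off in production, so feature branches can merge daily.
FLAGS = {"new_checkout": False}  # toggled per environment, not per branch

def checkout(cart_total, flags=FLAGS):
    if flags.get("new_checkout"):
        return f"new flow: total={cart_total}"   # in-progress code path
    return f"legacy flow: total={cart_total}"    # stable code path

print(checkout(42))                          # legacy flow: total=42
print(checkout(42, {"new_checkout": True}))  # new flow: total=42
```

Flipping the flag in one environment lets the team exercise the new path in staging while production keeps running the stable one, with no long-lived branch.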

Developers can push the code into production-like staging environments. This ensures that the
new version of the software will work when it gets in the hands of users.

Anyone can deploy any version of the software to any environment on demand, at a push of a
button. If you need to consult a wiki on how to deploy, it’s game over.

If you build it, you run it. Autonomous engineering teams should be responsible for the quality
and stability of the software they build. This breaks down the silos between traditional
developers and operations groups, as they work together to achieve high-level goals.

Continuous Integration (CI) is a programming practice requiring developers to incorporate
code changes into a shared repository. Changes committed to the repository are verified by an
automated build, ensuring that bugs are spotted early, before deployment.

What is the “CD” in CI/CD?

The "CD" in CI/CD refers to continuous delivery and/or continuous deployment, which are
related concepts that sometimes get used interchangeably. Both are about automating further
stages of the pipeline, but they’re sometimes used separately to illustrate just how much
automation is happening. The choice between continuous delivery and continuous deployment
depends on the risk tolerance and specific needs of the development teams and operations
teams.

Common CI/CD tools

CI/CD tools can help a team automate their development, deployment, and testing. Some tools
specifically handle the integration (CI) side, some manage development and deployment (CD),
while others specialize in continuous testing or related functions.

Tekton Pipelines is a CI/CD framework for Kubernetes platforms that provides a standard
cloud-native CI/CD experience with containers. Beyond Tekton Pipelines, other open-source
CI/CD tools you may wish to investigate include:
 Jenkins, designed to handle anything from a simple CI server to a complete CD hub.
 Spinnaker, a CD platform built for multicloud environments.
 GoCD, a CI/CD server with an emphasis on modeling and visualization.
 Concourse, "an open-source continuous thing-doer."
 Screwdriver, a build platform designed for CD.

The major public cloud providers all offer CI/CD solutions, along
with GitLab, CircleCI, Travis CI, Atlassian Bamboo, and many others. There are many
different ways you can implement CI/CD based on your preferred application development
strategy and cloud provider.
DevOps Tools
DevOps is a practice that involves a cultural change, new management principles, and
technology tools that help to implement best practices, as shown in Figure 6.

When it comes to a DevOps toolchain, organizations should look for tools that improve
collaboration, reduce context-switching, introduce automation, and leverage observability and
monitoring to ship better software, faster.

There are two primary approaches to a DevOps toolchain: an all-in-one or open toolchain.
An all-in-one DevOps solution provides a complete solution that usually doesn’t integrate with
other third-party tools. An open toolchain can be customized for a team’s needs with different
tools.

 Container Management tools - Docker, Kubernetes
 Application Performance Monitoring tools - Dynatrace, Prometheus
 Deployment & Server Monitoring tools - Splunk, Datadog
 Configuration Management tools - Chef, Puppet
 CI / Deployment Automation tools - Jenkins, IBM UrbanCode
 Test Automation tools - Test.ai, Selenium
 Artifact Management tools - Sonatype NEXUS, CloudRepo
 Codeless Test Automation tools - AccelQ, Testim.io

Fig 6: DevOps Tools Practices based on DevOps lifecycle.


Version Control
Version control, also known as source control, is the practice of tracking and managing changes
to software code. Version control systems are software tools that help software teams manage
changes to source code over time. Version control works as shown in Figure 7, allowing you
to manage changes to files over time and store those modifications in a database. Some popular
version control tools are GitHub, Subversion, and Mercurial.

Fig 7: Version Control Workflow.

This includes version control software, version control systems, or version control tools.
Version control is a component of software configuration management. It's sometimes referred
to as VCS programming.

How Do Version Control Systems Work?

Version control systems allow multiple developers, designers, and team members to work
together on the same project. Also known as VCS, these systems are critical to ensure everyone
has access to the latest code. As development gets more complex, there's a bigger need to
manage multiple versions of entire products.

Types of Version Control Systems:


 Local Version Control Systems
 Centralized Version Control Systems
 Distributed Version Control Systems

Local Version Control Systems: This is one of the simplest forms: a local database keeps all
the changes to files under revision control. RCS is one of the most common VCS tools of this
type. It keeps patch sets (differences between files) in a special format on disk; by adding up
all the patches, it can re-create what any file looked like at any point in time.
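The patch-set idea can be sketched in a few lines of Python. This is a toy illustration, not how RCS is actually implemented: the first revision is stored in full, each later revision is stored only as its line-level edits, and any revision is re-created by replaying the patches.

```python
import difflib

class LocalVCS:
    """Toy RCS-style store: full first revision, then one patch set per commit."""

    def __init__(self, initial_lines):
        self.base = list(initial_lines)
        self.patches = []  # one patch set per later revision

    def commit(self, new_lines):
        prev = self.checkout(len(self.patches))
        sm = difflib.SequenceMatcher(a=prev, b=new_lines)
        # Keep only the edits (the "patch set"), not the whole file.
        patch = [(i1, i2, new_lines[j1:j2])
                 for tag, i1, i2, j1, j2 in sm.get_opcodes() if tag != "equal"]
        self.patches.append(patch)

    def checkout(self, revision):
        """Re-create the file as of `revision` by adding up the patches."""
        lines = list(self.base)
        for patch in self.patches[:revision]:
            # Apply edits right-to-left so earlier indices stay valid.
            for i1, i2, new in sorted(patch, reverse=True):
                lines[i1:i2] = new
        return lines

repo = LocalVCS(["a", "b", "c"])
repo.commit(["a", "B", "c", "d"])   # revision 1
repo.commit(["a", "B", "d"])        # revision 2
print(repo.checkout(0))             # ['a', 'b', 'c']
print(repo.checkout(2))             # ['a', 'B', 'd']
```

Storing patches rather than full copies is what keeps the on-disk history small while still letting any historical revision be reconstructed.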

Centralized Version Control Systems: Centralized version control systems contain just
one repository globally, and every user needs to commit to reflect their changes in the
repository. Others can then see the changes by updating.

Two things are required to make your changes visible to others which are:
 You commit
 They update

The benefit of a CVCS (Centralized Version Control System) is that it enables collaboration
amongst developers and provides insight, to a certain extent, into what everyone else is doing
on the project. It also allows administrators fine-grained control over who can do what.

It has some downsides as well, which led to the development of DVCS. The most obvious is
the single point of failure that the centralized repository represents: if it goes down,
collaboration and saving versioned changes are not possible during that period. And if the
hard disk of the central database becomes corrupted and proper backups haven't been kept,
you lose absolutely everything.

Fig 8: Centralized Version Control vs Distributed Version Control System


Distributed Version Control Systems: Distributed version control systems contain
multiple repositories. Each user has their own repository and working copy. Just committing
your changes will not give others access to them, because a commit only records those changes
in your local repository; you need to push them in order to make them visible on the central
repository. Similarly, when you update, you do not get others' changes unless you have first
pulled those changes into your repository.

To make your changes visible to others, four things are required:


 You commit
 You push
 They pull
 They update
The most popular distributed version control systems are Git and Mercurial. They help us
overcome the problem of a single point of failure.
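The four DVCS steps above can be modelled as a toy Python sketch. The `Developer` class, the `central` list, and the commit messages are illustrative names, not a real DVCS API; repositories are just lists of commits here.

```python
# Toy model of the DVCS workflow: commit goes to your local repository,
# push publishes it centrally, and others see it only after pull + update.
central = []  # the central repository everyone pushes to

class Developer:
    def __init__(self):
        self.local_repo = []    # full history, kept locally
        self.working_copy = []  # the files this developer actually sees

    def commit(self, change):
        self.local_repo.append(change)              # step 1: you commit

    def push(self):
        for c in self.local_repo:
            if c not in central:
                central.append(c)                   # step 2: you push

    def pull(self):
        for c in central:
            if c not in self.local_repo:
                self.local_repo.append(c)           # step 3: they pull

    def update(self):
        self.working_copy = list(self.local_repo)   # step 4: they update

alice, bob = Developer(), Developer()
alice.commit("fix login bug")
print(bob.working_copy)   # [] - a commit alone is not visible to others
alice.push()
bob.pull()
bob.update()
print(bob.working_copy)   # ['fix login bug']
```

Note that Bob sees nothing after Alice's commit alone; all four steps are needed, which is exactly the point of the list above.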
Features of Version Control Software

Each version control option comes with different features.

1. Concurrent Development

Projects are getting more complex. There's a growing need to manage multiple versions of
code, files, and entire products. Concurrent development means multiple developers and
designers can work on the same set of files without worrying that they are duplicating effort
or overwriting other team members' work.

Let's say you're managing an IoT deployment for high-end internet-connected security
cameras. Over the product lifecycle, you may use ten different types of cameras, each with a
different chip. As a result, each will have different software.

Using the right version control software means you can maintain multiple versions of your code
to manage the specific functionality of each camera's chip and operating system.

So, when you need to deploy a critical security patch to prevent bad guys from hijacking those
cameras, it’s easy. You’ll instantly see which code is impacted, make the changes, and deploy
a fix.

2. Automation

Higher quality and increased productivity are top priorities for today's development teams,
and your team can reach those goals by automating tasks such as testing and deployment. In
software development, Continuous Integration (CI) with automated builds and code reviews
is standard operating procedure. For hardware development, such as semiconductors,
automation can include testing with field-programmable gate arrays (FPGAs) and integration
with simulation, verification, and synthesis systems.
Such FPGAs or verification and testing systems must themselves be controlled and
versioned, along with potentially very large data files. It is vital that even small changes are
tracked and managed. This means your version control software is the center of the IP universe.
The right one can handle millions of automated transactions per day with millions of files.

3. Team Collaboration

Companies operate where the talent lives. This means you might have design and
development centers in Minneapolis, Seattle, Toronto, and Shanghai. And when deadlines loom,
you might need to add engineers in Paris and possibly even in Taipei.
You need a way to provide global access to your team members at all your facilities. Having a
single source of truth –– with appropriately secured identity and access management –– is
critical for your success.
With the right VCS, each team member is working on the latest version. And that makes it easier
to collaborate.

4. Tracked Changes — Who, What, When, Why

Every development team needs visibility into changes. Tracking who, what, when, and why changes are
made is important. Version control software captures this detailed information and maintains this history
forever. So, everyone gets access to who is working on what — and the changes that are made.
This is especially important if you have governance, risk, and compliance (GRC) or regulatory needs.
Audit log history is especially key in the automotive, aerospace, medical device, and semiconductor
industries.

5. High Availability/Disaster Recovery

The most expensive asset you have is your product development team. You can't have them
idle because they've lost access to the code. With the right version control software, you can
have a replica of your enterprise repository, your single source of truth, operating in another
location. If something happens, you can immediately switch over to the replica for
uninterrupted availability.

Build Automation
Build automation (Figure 9) is the process of scripting and automating the retrieval of software
code from a repository, compiling it into a binary artifact, executing automated functional tests,
and publishing it to a shared, centralized repository.

Key Metrics

 Number of Features / User Stories per Build - Indicates the number of changes
being implemented and maps to business value being created.
 Average Build Time - Indicates the average time to perform a build.
 Percentage of Failed Builds - impacts the overall team output due to rework.
 Change Implementation Lead Time - affects the number of releases per a given
period and overall product roadmap planning.
 Frequency of Builds - indicates the overall output and activity of the project.
Fig 9: Build Automation
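Two of the key metrics above can be computed directly from a build log. A minimal sketch, with made-up build records; the field names are illustrative, not any particular CI server's schema:

```python
# Compute average build time and percentage of failed builds
# from a list of build records (illustrative data).
builds = [
    {"duration_s": 120, "passed": True},
    {"duration_s": 150, "passed": False},
    {"duration_s": 130, "passed": True},
    {"duration_s": 200, "passed": False},
]

avg_build_time = sum(b["duration_s"] for b in builds) / len(builds)
failed_pct = 100 * sum(1 for b in builds if not b["passed"]) / len(builds)

print(f"average build time: {avg_build_time}s")  # average build time: 150.0s
print(f"failed builds: {failed_pct}%")           # failed builds: 50.0%
```

Tracking these numbers over time is what turns them into metrics: a rising failure percentage signals rework eating into team output.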

The Role of Continuous Integration in the Automated Build Process

Build automation enables Continuous Integration (CI): CI uses build automation to verify
check-ins and enables teams to detect issues early. Because of its relationship with CI, build
automation also makes Continuous Testing (CT) and Continuous Delivery (CD) possible.

Benefits of Build Automation

There are five main benefits of build automation.


1. Increases Productivity

Build automation ensures fast feedback. This means your developers increase
productivity. They’ll spend less time dealing with tools and processes — and more time
delivering value.

2. Accelerates Delivery

Build automation helps you accelerate delivery. That’s because it eliminates redundant
tasks and ensures you find issues faster, so you can release faster.

3. Improves Quality

Build automation helps your team move faster. That means you’ll be able to find issues
faster and resolve them to improve the overall quality of your product — and avoid bad
builds.

4. Maintains a Complete History

Build automation maintains a complete history of files and changes. That means you’ll
be able to track issues back to their source.
5. Saves Time and Money

Build automation saves time and money. That’s because build automation sets you up
for CI/CD, increases productivity, accelerates delivery, and improves quality.

How to Automate the Build Process

 Write the code.
 Commit code to a shared, centralized repository, such as Perforce Helix Core.
 Scan the code using tools such as static analysis.
 Start a code review.
 Compile code and files.
 Run automated testing.
 Notify contributors to resolve issues.

Automated Build Tools

These tools help you automate the process of building, testing, and deploying code. Example:
Jenkins. Jenkins is a popular build runner. Many teams automate their CI/CD pipeline with
Jenkins.

Configuration Management
Configuration management is a system engineering process for establishing
consistency of a product’s attributes throughout its life. In the technology world, configuration
management is an IT management process that tracks individual configuration items of an IT
system.

Fig 10: Configuration Management

Components: Configuration Management in DevOps

Configuration management takes on the primary responsibility for three broad categories
required for DevOps transformation:

 Identification
 Control
 Audit
Identification:

The process of finding and cataloging system-wide configuration needs.


Control:

During configuration control, we see the importance of change management at work. It's
highly likely that configuration needs will change over time, and configuration control allows
this to happen in a controlled way so as not to destabilize integrations and existing
infrastructure.

Audit:

Like most audit processes, a configuration audit is a review of the existing systems to ensure
that they stand up to compliance regulations and validations.
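A configuration audit can be sketched as a comparison of each item's live configuration against an approved baseline, reporting any drift. The item names and settings below are illustrative:

```python
# Toy configuration audit: report every live setting that has drifted
# from the approved baseline (all names and values are made up).
baseline = {
    "web-01": {"port": 443, "tls": "1.3"},
    "db-01":  {"port": 5432, "backups": "daily"},
}
live = {
    "web-01": {"port": 443, "tls": "1.2"},   # drifted setting
    "db-01":  {"port": 5432, "backups": "daily"},
}

def audit(baseline, live):
    findings = []
    for item, expected in baseline.items():
        for key, value in expected.items():
            actual = live.get(item, {}).get(key)
            if actual != value:
                findings.append(f"{item}.{key}: expected {value}, found {actual}")
    return findings

print(audit(baseline, live))  # ['web-01.tls: expected 1.3, found 1.2']
```

Real tools such as Chef or Puppet do essentially this continuously, and can additionally remediate the drift they find.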

There are three primary components that go into the comprehensive configuration management
required for DevOps (Figure 11):

 Artifact repository
 Source code repository
 Configuration management data architecture

Fig 11 Comprehensive Configuration Management

Artifact Repository

An artifact repository is meant to store machine files. This can include binaries, test
data, and libraries. Effectively, it’s a database for files that people don’t generally use. In
DevOps, artifacts, like binaries, are a natural result of continuous integration. DevOps
developers are always pushing out builds which, in turn, create artifact files that need to be
stored, but not necessarily accessed.

The artifact repo comes with two logical partitions, for storing and managing binaries:

1. Snapshot
2. Release
Fig 12 Snapshot of Configuration Management
Source Code Repository

Conversely, the source code repository is a database of source code which developers use.
This database serves as a container for all the working code. Source code aside, it stores several
useful components including various scripts and configuration files.

While some developers store binaries in this same repository, that’s not a best practice. In
DevOps, due to the sheer number of builds and off-shoot binaries, it’s recommended that an
artifact repository is developed for the purpose of storing binaries and other artifacts. It’s not
hard to determine what goes into the source code repository. A quick litmus test is to ask
yourself, “are the files human-readable?”

If yes, there’s a good chance they belong in the source code repository as opposed to anywhere
else. There are two types of source code repositories: centralized version control system
(CVCS) and distributed version control system (DVCS).

In a CVCS, the source code lives in a centralized place, where it can be retrieved and stored.
In a DVCS, however, the code exists across the multiple machines used in the development
process, which is faster and more reliable. Most often, DVCS is the chosen source code
repository of today's DevOps professionals.

Configuration Management Data Architecture

The idea of having a data architecture dedicated to configuration management is a principle
of the ITIL service management framework. A configuration management database (CMDB)
is a relational database that spans multiple systems and applications related to configuration
management, including services, servers, applications, and databases, to name a few.

A CMDB is helpful for change management, as it allows users to audit the relationships
between integrated systems before configuration changes are made. It's also a useful tool for
provisioning, as you can glean all identifying information for objects like servers. A CMDB is
an essential tool when it comes to incident management, too, as it helps teams escalate issues
to resolution.
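The incident-management value of a CMDB comes from its recorded relationships. A minimal sketch: given "depends on" links between configuration items, find every service transitively affected by a failing component. The item names are illustrative:

```python
# Toy CMDB impact analysis: cmdb maps each configuration item (CI) to the
# CIs it depends on. impacted_by() finds everything that (transitively)
# depends on a failing component. All names are made up.
cmdb = {
    "service-1": ["app-a"],
    "app-a": ["db-x", "server-1"],
    "db-x": ["server-2"],
}

def impacted_by(component, cmdb):
    """Return every CI that directly or transitively depends on `component`."""
    impacted = set()
    changed = True
    while changed:  # keep sweeping until no new dependents are found
        changed = False
        for ci, deps in cmdb.items():
            if ci not in impacted and (component in deps or impacted & set(deps)):
                impacted.add(ci)
                changed = True
    return impacted

print(impacted_by("server-2", cmdb))  # db-x, app-a, and service-1 are affected
```

This is why an incident on one server can be escalated quickly: the CMDB already knows which customer-facing services sit above it.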
Outcomes of Properly Managed Configurations

When a system is properly configured and managed, you can expect certain outcomes. Among
these outcomes are delivering infrastructure-as-a-code and configuration-as-a-code.

Below, we will look at the role each outcome has in configuration management:

Infrastructure-as-a-Code

Infrastructure-as-a-code (IaaC), in simplest terms, is code or script that automates the
creation of the environment necessary for development, without manually completing all the
steps required to build that environment. When we use the word ‘environment’ in this way,
we are referring to the setup of all computing resources required to create the infrastructure
to perform DevOps actions. These could include servers, networks, configurations, and other
resources.
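As a loose illustration of the idea (not any particular tool’s API; the resource names and fields are invented), the sketch below treats the environment as data: a declarative description is fed to a provisioning function, so rerunning the same script always yields the same environment instead of a sequence of manual steps.

```python
# Hypothetical infrastructure-as-code sketch: the desired environment
# is declared as data, and provisioning is a repeatable function of it.
desired_environment = {
    "servers":  [{"name": "web-1", "size": "small"},
                 {"name": "db-1",  "size": "large"}],
    "networks": [{"name": "app-net", "cidr": "10.0.0.0/24"}],
}

def provision(environment, existing=None):
    """Create only what is missing (idempotent); return names created."""
    existing = set(existing or [])
    created = []
    for kind in ("servers", "networks"):
        for resource in environment[kind]:
            if resource["name"] not in existing:
                created.append(resource["name"])  # a real tool would call a cloud API here
    return created

# Running the same declaration twice creates nothing the second time.
first = provision(desired_environment)
second = provision(desired_environment, existing=first)
```

The idempotence shown in the last two lines is the property real IaaC tools rely on: the script describes the end state, not the steps.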

Configuration-as-a-Code

As the name suggests, configuration-as-a-code (CaaC) is code or script that standardizes
configurations within a given resource, such as a server or network. These configurations are
applied during the deployment phase to ensure the configuration of the infrastructure makes
sense for the application.
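A configuration-as-code sketch in the same spirit (the setting names and values are hypothetical): the desired configuration of a server is stored as data, and a diff step reports what must change to bring the real server in line. The same check is also how configuration drift is detected.

```python
# Desired server configuration, expressed as data (invented settings).
desired = {"ntp": "enabled", "ssh_root_login": "disabled", "max_open_files": 65536}

def diff_config(actual, desired):
    """Return the settings whose values must change to match the desired state."""
    return {k: v for k, v in desired.items() if actual.get(k) != v}

# A server whose configuration has drifted from the declared state.
actual = {"ntp": "enabled", "ssh_root_login": "enabled"}
changes = diff_config(actual, desired)
```

Here `changes` lists exactly the two settings to enforce; applying them and rerunning the diff would yield an empty result, confirming the server matches its declared configuration.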

Benefits of IaaC and CaaC

Those already practicing continuous integration will be familiar with the benefits of IaaC and
CaaC, but for those new to DevOps, you’ll want to know what to expect. These are some of the
benefits of the two key outcomes defined in this section:

 Automation of the infrastructure environment provides standardization
 Setups are free of human error
 Collaboration is enhanced between operations and development
 Keeps configurations from drifting
 Makes infrastructure more flexible and ready to scale
 Each step is consistent across all resources
 Version control is a given
With these benefits applied to an organization, efficiency and greater agility are a natural
result. DevOps is practically synonymous with configuration management, IaaC, and CaaC.

Configuration Management Database

The Configuration Management Database (CMDB) comes from the Information Technology
Infrastructure Library (ITIL) service management framework. The CMDB is a repository of
various infrastructure devices, applications, databases, and services. In Figure 13, these
services, applications, databases, and servers are represented by the different colored boxes. In the
illustration, Service 1 depends on Application A, as seen by the arrow relationship. Application A
leverages Database 1. Both Application A and Database 1 reside on Server A.
Fig 13 Snapshot of Configuration Management

CMDB for Change Management

The CMDB is particularly useful when you are trying to make a change to any of the
applications, databases, or servers.

Let’s say you want to make a change to Application B. To make the change, you must first
do an impact assessment, and the CMDB helps in performing impact assessments. In the illustration,
suppose changes are made to Application B. The impact assessment will show that any
changes to Application B will impact Application C, as the data flows through it.

Today, software development seldom happens in isolation. The software to be developed is
either an improvement over existing code or is getting plugged into an enterprise network of
applications. Therefore, it is critical that the impacts are assessed thoroughly, and the CMDB
is a significant help in this area.

CMDB for Provisioning Environments

Another application of the CMDB in DevOps is in environment provisioning. Today we can spin
up environments on the go with tools like Ansible in our scripts. When you key in the exact
configuration of the production server, the environment provisioning tools create a prod-like
server in a snap.

But how is it that you are going to obtain the complete configuration of a server? The most
straightforward way is to refer to a CMDB.

Let’s say Server C is a production server that needs to be replicated. In the CMDB, the Server
C entry will provide the complete configuration of the server, which is an immense help in
writing provisioning scripts such as Playbooks (compatible with Ansible).
CMDB for Incident Management

The CMDB also has other benefits such as supporting the incident management teams during
incident resolutions. The CMDB readily provides the architecture of applications and
infrastructure, which is used to troubleshoot and identify the cause of the issue.

Configuration Management Tools


It is common for configuration management tools (Figure 14) to include automation as well.
Popular tools are:
 Red Hat Ansible
 Chef
 Puppet

Fig 14 Configuration Management Tools

Containerization
Containerization is the process of packaging software code, its required dependencies,
configurations, and other details so that it can be easily deployed in the same or another
computing environment. In simpler terms, containerization is the encapsulation of an
application and its required environment.

Containerization in DevOps
First appearing as “cgroups” within the Linux Kernel in 2008, containers exploded in
popularity with the introduction of Docker Engine in 2013. Because of that, “containerizing”
and “Dockerizing” are often used interchangeably.

On the surface, containerizing software is a relatively straightforward process: a
“container file” with the software’s information is converted to a lightweight container image,
which becomes a real container at runtime through a runtime engine. Any image or engine
built to Open Container Initiative standards will work with any other image or engine built
using the same standards. One way to view containers is as another step in the cloud
technology journey.

Containers don’t come bundled with their own guest operating systems. Instead,
they use software called a runtime engine to share the host operating system of whatever
machine they’re running on. That makes for greater server efficiency and faster start-up times:
most container images are tens of MB in size, while a VM generally needs between four and
eight GB to run well. That unparalleled portability has made containers the secret weapon
of cloud-native tech companies and, increasingly, their larger legacy counterparts.
List of Containerization DevOps Tools

Marathon:
Marathon, an Apache Mesos framework designed solely to manage containers, can
make your life pretty easy. In comparison to other prevailing orchestration solutions such
as Kubernetes and Docker Swarm, Marathon will ensure that you are able to scale
your container infrastructure by automating most of the management and
monitoring tasks.

Over the years, Mesos Marathon has evolved into a very sophisticated and feature-rich
tool, so much so that it is difficult to do justice to everything Apache Mesos Marathon
offers in a short overview.

Following are some of the advantages of using Marathon, let us now take a look at each and
every one of them:

Advantages of Marathon:
Apache Mesos Marathon ensures ultra-high availability. Marathon lets you run multiple
schedulers at the same point in time, so if one goes down, the system still keeps ticking.
Docker Swarm and Kubernetes also promise high availability, but Marathon takes this to
the next level.
Marathon has multiple CLI clients that can be separately installed along with Marathon. These
CLI clients give you loads of options for managing or scripting the tool in a varied number of
complex ways.
It is very easy to run locally for development purposes, unlike Kubernetes.
Application health checks provide you with all the detailed information you would require
about your instance, such as performance monitoring.

Fleet:
CoreOS Container Linux is said to top the charts in the space of container operating
systems, which are by default designed to be managed and run at huge scale with
minimal operational overhead.
Applications on Container Linux run inside containers, and it provides a developer-
friendly set of tools for software deployments. Container Linux runs on nearly all
possible platforms, be it physical, virtual, or public or private cloud.
CoreOS also provides the fleet functionality, based on the fleet cluster manager daemon,
which controls CoreOS’s separate systemd instances at the cluster level.
Following are some of the advantages of using Fleet, let us now take a look at each and every
one of them:
Advantages of Fleet:
CoreOS claims that configuration values are distributed within the cluster for applications
to read, and these values can be changed programmatically; smart applications can
reconfigure themselves automatically. On this basis, you never have to run Chef on all
the machines just to change a single configuration value.
CoreOS provides you with the highest possible availability at a relatively lower price.
CoreOS lets you maintain different versions of software on machines, and upgrading these
machines is done without any downtime at all.
CoreOS goes a step further than Docker by replicating the cluster and network settings
between the development and production environments as well, whereas Docker just ensures
that these environments are similar, not to the level that CoreOS does.
Developer machines can be brought up and running within seconds, as there is no need to
install all the required software from scratch one piece after the other.
The cost of replicating software like Heroku can be drastically brought down.

Swarm:
Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single,
virtual Docker host. A swarm contains multiple Docker hosts running in Swarm mode:
managers, which handle membership and orchestration, and workers, which run the Swarm
services. A Swarm cluster consists of Docker Engine deployed on multiple nodes.
Manager nodes perform orchestration and cluster management, while worker nodes receive
and execute tasks from the manager nodes.
Following are some of the advantages of using Swarm, let us now take a look at each and
every one of them:

Advantages of Swarm:
Starting with Docker Engine 1.12, Docker Swarm is available alongside Docker itself: if
you use a recent Docker release, then the Swarm setup is already done for you.
Docker Swarm is easily integrated with Docker; it hooks directly into the Docker API and
is therefore compatible with all of Docker’s tools.
Docker Hub:
The Docker Hub can be very easily defined as a Cloud repository in which Docker users and
partners create, test, store, and also distribute Docker container images. Through the use of
Docker Hub, a user can very easily access public, open-source image repositories and at the
same time – use the same space to create their own private repositories as well.
Following are some of the advantages of using Docker Hub, let us now take a look at each
and every one of them:
Advantages of Docker Hub:
Forms the central repository for all the public and private images created by users.
Provides central access to all the available public Docker images.
Users can safely create their own private Docker images and save them under the same
central repository, the Docker Hub.

Packer:
Packer is free and open-source software that finds its usage to create identical machine images
or containers for various platforms from a singly available source configuration. Having pre-
baked machine images is very advantageous because to create them scratch is a very tedious
task.
There were not as many tools earlier that could perform this task, and even if there exists such
software or a tool – then there would have been a huge learning curve that gets associated with
it.
As a result of that, earlier to Packer – the creation of machine images was always a threat to
the agility of the operations team and also weren’t used despite their massive benefits. Packer
with its invent has been able to replace these for quite a long, as Packer is very easy to use and
also automates the process of creation of any kind of machine image.
Packer encourages modern configuration machines using frameworks like Chef or Puppet to
install and also to configure this software that you are planning to Packer-made images. To be
very precise, Packer brings the concept of pre-baked images into the modern age, therefore,
encouraging untapped potential and also encourage newer and newer opportunities.
Having said all this, let us take a look into the advantages that it has in store for us:
Following are some of the advantages of using Packer, let us now take a look at each and every
one of them:

Advantages of Packer:
Packer ensures that the infrastructure deployment process happens at a super-fast pace.
Packer ensures multi-provider portability, meaning it creates identical images for the
various platforms it supports. Using this, a production setup can run on AWS, staging/QA
might run on something like OpenStack, and development on desktop virtualization
solutions. Packer enforces improved stability, as it installs and configures all the
software at the time the image is built.
A machine built by Packer can very quickly be launched and smoke tested to verify that
things are all good and appear to be working.
Kubernetes:
Kubernetes was built by Google based on its experience of running Containers in various
Production environments. With a combination of great software engineers working on the
project plus the fact that Google was behind the evolvement of Kubernetes, it is one of the best-
suited tools that run some of the largest software services by scale.
This combination ensured that this rock-solid platform can take on any scaling needs of an
organization head-on. Kubernetes is an open-source system for deploying, scaling, and
managing containerized applications. Kubernetes brings both software design and software
operations together as one single discipline by design.
Kubernetes enables the deployment of cloud-native applications anywhere and manages
these deployments exactly the way you like, from anywhere. With containers, it is very
easy to ramp up application instances to match spikes in demand whenever they are
observed.
Because these containers obtain their resources from the host OS, they are considered much
lighter weight than traditional virtual machines. This also ensures that the underlying
server infrastructure is used highly efficiently.
Following are some of the advantages of using Kubernetes, let us now take a look at each and
every one of them:
Advantages of Kubernetes
Kubernetes provides high scalability and easier container management, and at the same time
helps reduce delays in communication.
Building micro-services and adding replicas based on need is a super easy task with
Kubernetes. Even if the project later demands many more of them, or changes, not much
extra effort is needed.
Kubernetes balances the load on all participating nodes via the load balancer and keeps
the master from being overloaded with all the tasks at once.

Nomad
Nomad is a cluster manager and scheduler designed for micro-services and batch
workloads. It is distributed, highly available, and scales to thousands of nodes in
clusters that can span multiple data centers and regions.
It does provide a common workflow that helps deploy applications across the infrastructure.
Developers or any other individuals for that case can provide declarative job specifications to
define the way or manner that the applications must be deployed and resources must be
allocated.
Nomad accepts requests to execute such jobs and finds the resources needed to run them.
The scheduling algorithm used by Nomad ensures that all required constraints are
satisfied, and it packs applications onto hosts to help optimize resource utilization.
It additionally supports virtualized, containerized, and also standalone applications that run on
major operating systems. Nomad is also finding its application in the production environments
as well.
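Bin packing as described above can be sketched with a simple first-fit heuristic. This is a toy illustration only (Nomad’s real scheduler is far more sophisticated, and the job names and sizes here are invented): each job is placed on the first node with enough free capacity, so fewer nodes run fuller.

```python
def first_fit(jobs, node_capacity):
    """Place each (name, size) job on the first node with room; return the nodes."""
    nodes = []  # each node is {"free": remaining capacity, "jobs": [names...]}
    for name, size in jobs:
        for node in nodes:
            if node["free"] >= size:
                node["free"] -= size
                node["jobs"].append(name)
                break
        else:  # no existing node fits: start a new one
            nodes.append({"free": node_capacity - size, "jobs": [name]})
    return nodes

# Four hypothetical jobs packed onto nodes of capacity 8.
placement = first_fit([("api", 4), ("worker", 3), ("cron", 2), ("cache", 6)],
                      node_capacity=8)
```

With these sizes, the four jobs fit on two nodes instead of four, which is the density (and cost) benefit the text attributes to bin packing.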
Following are some of the advantages of using Nomad, let us now take a look at each and every
one of them:

Advantages of Nomad:
Nomad uses bin-packing to optimize application placement onto servers to maximize resource
utilization, increase density, and help reduce costs.
In addition to providing its support to Linux, Windows, and Mac environments it extends its
support towards containerized, virtualized, and standalone applications as well.
Simplified operations via Nomad make it safe to handle job upgrades; Nomad automatically
handles machine failures and provides a single workflow for application deployments.
Nomad can span many public and private clouds, treating all infrastructure as a pool of
expendable resources.
Nomad is a single binary that schedules applications and services on Linux, Windows, and
Mac. It is an open-source scheduler that uses a declarative job file for scheduling virtualized,
containerized, and standalone applications.

OpenVZ:
OpenVZ can be described as a container-based virtualization solution for Linux
environments. It works by creating multiple secure and isolated Linux servers, termed
Virtual Private Servers (VPS), on a single physical machine. Each of these containers
(VPS) executes instructions as if it were running on a standalone server.

The only way OpenVZ containers differ from traditional virtual machines is that they run
on the same OS kernel as the host itself, while still allowing multiple Linux variants in
individual containers; because of this, running these containers incurs very little
overhead. At the same time, OpenVZ provides greater efficiency and manageability than
traditional virtualization technologies.

Following are some of the advantages of using OpenVZ, let us now take a look at each and
every one of them:

Advantages of OpenVZ:
Since OpenVZ uses a single Linux kernel implementation, it scales extremely well: up to
thousands of CPUs and terabytes of RAM.

Very low virtualization overhead, again because of the single Linux kernel implementation.

Live migration of Virtual Private Servers (VPS) from one physical host to another without
even shutting them down during the process.

Resource management is done in a very efficient manner with OpenVZ and alongside that
resource isolation, performance and security are its other core attributes.
IPsec is very much supported inside these containers since the Kernel version v2.6.32.
Container hardware remains independent as OpenVZ restricts container access to physical
devices.

Solaris Containers:
The very first thing that might strike you is the name of the tool itself, as Solaris and
containers may seem like two words from two different extremes, but let me clarify that
they very much go together.
Over the past few years, the discussion around containers has usually centered on Docker,
CoreOS, and LXD on Linux (and to some extent on Windows and Mac OS X too), but Solaris
(Oracle’s UNIX-like OS) has had containers for quite a long time now. Despite the
confusion the name creates, Solaris Containers are hardly identical to those of Docker or
CoreOS.
They do similar things, virtualizing software inside isolated environments while avoiding
the overhead of a hypervisor or a VMware instance. Though the world might be considering
Docker and the like for their Linux environments, Solaris Containers are also interesting
enough to learn about.
There is a plan to bring Docker to Solaris Containers, as confirmed by Oracle, which means
that Solaris Containers may be seen more in the mainstream containers and DevOps space.
Following are some of the advantages of using Solaris Containers, let us now take a look at
each and every one of them:

Advantages of Solaris Containers:


Configuration is pretty easy, as long as you are able to point and click your way through
the Enterprise Manager Ops Center to manage the Solaris Containers.
Virtual resources are managed well and easily with Solaris Containers as compared to
Docker and CoreOS.

CloudSlang:

CloudSlang, an open-source software tool used in the orchestration space, is one of the
cutting-edge technologies available for organizations with DevOps implementations. It can
orchestrate almost anything you can imagine, in an agentless manner. An individual can
re-use a ready-made workflow or design a custom workflow altogether, which can then be
reusable, shareable, and very easy to understand as well.

Following are some of the advantages of using CloudSlang, let us now take a look at each
and every one of them:
Advantages of CloudSlang:
One of the biggest advantages of using CloudSlang is that it is an Open source tool that is
available for orchestrating cutting-edge technologies.
Use, Re-use, or Customize the readymade YAML-based workflows. These workflows are
further powerful, shareable amongst members, and are extremely easy to understand by others.
The content that is available with CloudSlang is easy to understand, as it uses a
YAML-based DSL, and the tool ships with readymade workflows.

Continuous Testing:
Continuous Testing (Figure 15) in DevOps is a software testing practice that involves
testing the software at every stage of the software development life cycle. The goal of
continuous testing is to evaluate the quality of the software at every step of the
continuous delivery process by testing early and testing often.

Fig 15 Continuous Testing
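“Testing early and often” concretely means each pipeline stage runs automated checks before the next stage starts, and the first failure stops the pipeline. A toy sketch (the stage names and checks are invented for illustration):

```python
def run_pipeline(stages):
    """Run (name, check) pairs in order; stop at the first failing stage."""
    passed = []
    for name, check in stages:
        if not check():
            return passed, name  # a failing stage aborts the pipeline
        passed.append(name)
    return passed, None  # all stages passed

# Hypothetical checks standing in for real test suites at each stage.
stages = [
    ("unit tests",        lambda: 1 + 1 == 2),
    ("integration tests", lambda: "db" in {"db": "up"}),
    ("smoke tests",       lambda: True),
]
passed, failed_stage = run_pipeline(stages)
```

Because every stage gates the next, a defect caught by the unit tests never consumes integration or smoke-test time, which is the economic argument for testing early.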

Continuous Monitoring
Continuous Monitoring comes in at the end of the DevOps pipeline. Once the software is
released into production, Continuous Monitoring will notify dev and QA teams in the event of
specific issues arising in the prod environment. It provides feedback on what is going wrong,
which allows the relevant people to work on necessary fixes as soon as possible.
Continuous Monitoring basically assists IT organizations, DevOps teams in particular, with
procuring real-time data from public and hybrid environments. This is especially helpful
for implementing and fortifying various security measures: incident response, threat
assessment, computer and database forensics, and root cause analysis. It also helps
provide general feedback on the overall health of the IT setup, including offsite networks
and deployed software.
Fig 16 Continuous Monitoring

Goals of Continuous Monitoring in DevOps

 Enhance transparency and visibility of IT and network operations, especially
those that can trigger a security breach, and resolve them with a well-timed
alert system.
 Help monitor software operation, especially performance issues; identify the
cause of the error; and apply appropriate solutions before significant damage
to uptime and revenue occurs.
 Help track user behaviour, especially right after an update to a particular
site or app has been pushed to prod. This monitors whether the update has a
positive, negative, or neutral effect on user experience.
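At its simplest, the alerting side of continuous monitoring is a loop that compares live metrics against thresholds and raises an alert on each breach. A toy sketch (the metric names and threshold values are invented, not from any real monitoring product):

```python
# Hypothetical alerting thresholds for two production metrics.
thresholds = {"error_rate": 0.05, "p95_latency_ms": 500}

def evaluate(metrics):
    """Return one alert message per metric that breaches its threshold."""
    return [f"ALERT: {name}={value} exceeds {thresholds[name]}"
            for name, value in metrics.items()
            if name in thresholds and value > thresholds[name]]

# One live sample: error rate has breached, latency is healthy.
alerts = evaluate({"error_rate": 0.12, "p95_latency_ms": 310})
```

In a real setup this check would run continuously against streamed metrics and the alerts would notify the dev and QA teams, as described above; the sketch shows only the comparison step.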
