DevOps

The document outlines a syllabus for a DevOps course, covering key topics such as DevOps principles, version control with Git, configuration management using Chef, containerization with Docker, and build tools like Maven. Each section includes specific objectives, essential concepts, and practical applications relevant to the DevOps field. The course aims to equip learners with the necessary skills and knowledge to implement DevOps practices effectively in an organizational context.


Syllabus

1. Introduction to DevOps [Weightage 10, Hrs. 4]
1.1 Define DevOps
1.2 What is DevOps
1.3 SDLC models, Lean, ITIL, Agile
1.4 Why DevOps?
1.5 History of DevOps
1.6 DevOps Stakeholders
1.7 DevOps Goals
1.8 Important Terminology
1.9 DevOps Perspective
1.10 DevOps and Agile
1.11 DevOps Tools
1.12 Configuration Management
1.13 Continuous Integration and Deployment
1.14 Linux OS Introduction
1.15 Importance of Linux in DevOps
1.16 Linux Basic Command Utilities
1.17 Linux Administration
1.18 Environment Variables
1.19 Networking
1.20 Linux Server Installation
1.21 RPM and YUM Installation
2. Version Control - GIT [Weightage 15, Hrs. 3]
2.1 Introduction to GIT
2.2 What is Git
2.3 About Version Control System and Types
2.4 Difference between CVCS and DVCS
2.5 A Short History of GIT
2.6 GIT Basics
2.7 GIT Command Line
2.8 Installing Git
2.9 Installing on Linux
2.10 Installing on Windows
2.11 Initial Setup
2.12 Git Essentials
2.13 Creating Repository
2.14 Cloning, Check-in and Committing
2.15 Fetch, Pull and Remote
2.16 Branching
2.17 Creating the Branches, Switching the Branches
2.18 Merging the Branches
3. Chef for Configuration Management [Weightage 25, Hrs. 13]
3.1 Overview of Chef: Common Chef Terminology (Server, Workstation, Client, Repository etc.), Servers and Nodes, Chef Configuration Concepts
3.2 Workstation Setup: How to configure Knife, Execute some commands to test connection between Knife and Workstation
3.3 Organization Setup: Create organization, Add yourself and Node to Organization
3.4 Test Node Setup: Create a Server and add to Organization, Check Node using Knife
3.5 Node Objects and Search: How to Add Run list to Node, Check node Details
3.6 Environments: How to create Environments, Add servers to environments
3.7 Roles: Create roles, Add Roles to Organization
3.8 Attributes: Understanding of Attributes, Creating Custom Attributes, Defining in Cookbooks
3.9 Data Bags: Understanding the Data Bags, Creating and Managing the Data Bags, Creating the Data Bags using CLI and Chef Console, Sample Data Bags for Creating Users
4. Docker - Containers [Weightage 30, Hrs. 15]
4.1 Introduction: What is a Docker? Use case of Docker, Platforms for Docker, Docker vs Virtualization
4.2 Architecture: Docker Architecture, Understanding the Docker Components
4.3 Installation: Installing Docker on Linux, Understanding Installation of Docker on Windows, Some Docker Commands, Provisioning
4.4 Docker Hub: Downloading Docker Images, Uploading the Images in Docker Registry and AWS ECS, Understanding the Containers, Running Commands in Container, Running Multiple Containers
4.5 Custom Images: Creating a Custom Image, Running a Container from the Custom Image, Publishing the Custom Image
4.6 Docker Networking: Accessing Containers, Linking Containers, Exposing Container Ports, Container Routing
5. Build Tool - Maven [Weightage 20, Hrs. 10]
5.1 Maven Installation
5.2 Maven Build Requirements
5.3 Maven POM Builds (pom.xml)
5.4 Maven Build Life Cycle
5.5 Maven Local Repository (.m2)
5.6 Maven Global Repository
5.7 Group ID, Artifact ID, Snapshot
5.8 Maven Dependencies
5.9 Maven Plugins
1. Introduction to DevOps
Objectives...
After learning this chapter you will be able to:
• Understand the essential characteristics of DevOps, including building a culture of shared responsibility, transparency, and embracing failure.
• Study concepts like the importance of Continuous Integration and Continuous Delivery, Infrastructure as Code, Test Driven Development, and Behaviour Driven Development.
• Study essential DevOps concepts and their use.
• Understand the organizational impact of DevOps tools, and their use in the Linux operating system.

1.1 INTRODUCTION
Many times we hear from software developers that they "do DevOps" or use DevOps tools, meaning that they are combining two words: development and operations. DevOps culture is different from traditional corporate culture; it typically requires a change in mindset, processes and tools. This is linked with Continuous Integration (CI) and Continuous Delivery (CD) practices, together with IaC (Infrastructure as Code).
The term DevOps (Development and Operations) is a collection of tools and technologies combined to carry out various business processes. The purpose is to bridge the gap between the development department and the operations department, which are two of the most important departments in any IT organization.
1.1.1 Define DevOps
Definition: DevOps (a combination of the words "development" and "operations") is the combination of practices and tools designed to increase an organization's ability to deliver applications and services faster than traditional software development processes.
DevOps : MCA [Management - Sem. IV]

1.2 WHAT IS DevOps?
The contraction of "Dev" and "Ops" refers to replacing siloed Development and Operations. The idea is to create multidisciplinary teams that now work together with shared and efficient practices and tools. Essential DevOps practices include agile planning, continuous integration, continuous delivery, and monitoring of applications. DevOps is a constant journey.
The DevOps philosophy focuses on breaking away from traditionally siloed software development to adopt a more collaborative approach. Under the DevOps model, development and operations teams work together throughout the project lifecycle, from development to deployment.
Fig. 1.1: DevOps Model (Development, Operations and Quality Assurance (QA) combined in DevOps)
1.3 SDLC MODELS, LEAN, ITIL, AGILE
The focus of the IT industry has changed to a continuous integration and deployment approach. There is a constant need for upgrading and integrating the solution with the existing software to meet market demands. Owing to a continuous delivery approach, the communication between the stakeholders involved in developing a software solution and the end-users has increased more than ever, due to which there is a constant need for feedback and its implementation.
SDLC:
Software Development Life Cycle (SDLC) is a traditional methodology for software development which contains a waterfall model. SDLC began with the Waterfall model but is gradually shifting to other models and is now approaching the Agile, DevOps and Lean methodologies.
The rigid nature of the development process in SDLC makes it impossible to revert to the previous stage of development, and the model has become outdated as a result of the ever-evolving nature of software development, where implementation must be continually updated in accordance with user feedback in order to meet the evolving requirements of the IT sector.
Let's see some of the modern software development models that are widely used in the industry today.
1. Lean Model:
The concept of lean software development derived its presence in the IT industry from Toyota manufacturing. The Toyota production system was the first to introduce the lean development process, in the mid-20th century, to improve car production and reduce wastage of time and resources. The Lean Model was then followed by many manufacturing sectors across various industries. This approach was first executed in software development in 2003.
Why is the Lean model used in software?
There are various reasons for the popularity of Lean methodology in the IT industry, such as:
o It allows frequent product changes and software releases.
o Shorter development lifecycle.
o Continuous exchange of preliminary development steps.
o Simultaneous improvement in development quality, time and so on.
These are some of the notable factors that make the lean development model essential for organizations that want to keep up with the current pace of software development.
2. Agile Model:
Agile is a continuous integration and deployment approach which is iterative in nature. It develops its principles from Lean methodology. Some of the agile approaches can be attributed to the following concepts:
o Frequent analysis and implementation of changes.
o Team-oriented leadership, or specifically ownership of tasks by each member of the team.
o It is usually very self-organized and responsible for its deliverables.
o Agile perfectly supports organizational and customer expectations.
Agile Process: Following are the various processes that are used to implement Agile:
(a) Scrum: This process of agile development focuses on a team-driven development environment. The team is usually composed of 7 to 9 members, with major roles and responsibilities classified as Scrum Master, Product Owner, and Scrum Team. The three roles can be explained further as follows:
o Scrum Master: This member is responsible for organizing the team and eliminating communication gaps or any other gap about the task being delivered.
o Product Owner: This member is responsible for creating a product backlog, prioritizing the backlog, and ensuring that the designated tasks are completed by the end of each iteration.
o Scrum Team: The team is responsible for completing the allocated tasks within a sprint, by a self-organized and collaborative approach.
Some of the Scrum Practices are listed as follows:
o Sprint Planning: In this type of planning, the team discusses the product backlog, the initial plan of action, and the tasks to be completed during the sprint.
o Daily Scrum Meets: Daily scrum meetings refer to the daily morning meetings, usually time-boxed to 15 minutes, where they discuss their plan of action for the day.
o Sprint Review Meeting: This type of review meeting refers to a meeting where completion of the planned course of action is discussed and monitored to determine the future course of actions needed to accomplish any bottlenecks.
o Sprint Retrospective Meets: This is the last phase and the last scrum meeting. In this phase, the overall development strategies are discussed regarding the scope of solution implementation, identification of bottlenecks, the success of planned courses, and any other scope of improvement which could be adopted in future projects.
(b) Crystal Methodologies: This methodology includes the interaction between people, more than processes and tools. This method also includes the approaches accepted by the team according to the scope of a project.
(c) Dynamic Software Development Method (DSDM): This methodology is also referred to as a Rapid Action Development Model. In this, the users are involved actively in the development of the project. Teams are empowered with decision making capabilities.
(d) Feature-driven Development (FDD): This is a feature-driven methodology, where each phase involves completing a small feature within the given time. It consists of design walk-throughs, code inspection, and so on.
(e) Lean Software Development: Lean software development includes 'just-in-time' production techniques. This methodology targets eliminating waste and reducing cost, in that way increasing the efficiency of the entire software development process.
(f) Extreme Programming (XP): Extreme programming methodology is very useful in situations where there are frequent release cycles, shorter development phases, and uncertainties relating to the functionality to be developed.
3. ITIL (Information Technology Infrastructure Library):
ITIL provides the framework and organized processes, while Lean reminds team members to reduce waste (for IT, this is in the form of time and non-utilized talent) and Agile helps team members to work more quickly and adapt to change.
For example, think of an IT service desk handling ticket pileups due to Covid. They likely are using incident management through ITIL V3/2011 or ITIL 4, which keeps teams working through tickets in the same way. But when they add in Lean, it might look like the team is reducing downtime by re-prioritizing tickets. Add in a bit of Agile, and the tickets might be routed to people outside of the IT service desk who are equipped to handle specific types of issues, or it might look like a group tackling problems and implementing feedback loops.
Individually, these methodologies work to update support and service delivery, but combined they can boost it to the next level. It should be noted that you don't necessarily need all three to be successful, but you can find success within any combination. Together, Lean, ITIL, and Agile offer:
o Faster resolution of issues.
o Improved productivity for agents.
o A better overall customer experience.
o Reduction in wasted time and, ultimately, money.
1.4 WHY DevOps?
DevOps is used to increase an organization's speed at the time of delivering applications and services. Many companies have successfully implemented DevOps to enhance their user experience, including Amazon, Netflix, etc.
Facebook's mobile app is effectively updated every two weeks, which tells users you can have what you want, and you can have it. It is the DevOps philosophy that helps Facebook ensure that apps are not outdated and that users get the best experience on Facebook. Facebook achieves this through a true code ownership model that makes its developers responsible, which includes testing and supporting through production and delivery, for each kernel of code. Facebook has developed a DevOps culture and has successfully enhanced its development lifecycle.
Industries have started to prepare for digital transformation by shifting their delivery timelines to weeks and months instead of years, while maintaining high quality as a result. DevOps is the solution for all this.
DevOps is different from traditional IT because traditional IT has thousands of lines of code created by different teams with different standards, but DevOps code is created by one team with intimate knowledge of the product. Traditional IT is complex to understand, while DevOps is easily understandable.
DevOps Lifecycle:
• DevOps Lifecycle is the methodology where professional development teams come together to bring products to market more efficiently and quickly. The DevOps lifecycle consists of various phases such as Plan, Code, Build, Test, Release, Deploy, Operate, and Monitor.
Fig. 1.2: DevOps Lifecycle (Plan → Code → Build → Test → Release → Deploy → Operate → Monitor)
o Plan: The first step of planning is determining the commercial needs and gathering the opinions of end-users by professionals.
o Code: The code is developed, and in order to simplify the design, the team of developers uses tools and extensions that take care of security problems.
o Build: After the coding part, programmers use various tools for the submission of the code to the common code source.
o Test: This level is very important, as software integrity should be assured at this level. At this phase, various types of tests such as User Acceptability Testing, Safety Testing, Speed Testing and many more will be done.
o Release: At this level, everything is ready to be deployed in the operational environment.
o Deploy: In this level, Infrastructure-as-Code assists in creating the operational infrastructure and subsequently publishes the build using various DevOps lifecycle tools.
o Operate: At this level, the available version is ready for users to use. Here, the department looks after the server configuration and deployment.
o Monitor: The observation is done at this level. It depends on the data which is gathered from consumer behaviour, the efficiency of applications, and various other sources.
Best practices to follow when using DevOps:
o Implement an automated dashboard which gives run-time information about the development of product stages, as it keeps the entire team together, which is good for collaboration and communication.
o Allow DevOps to be a cultural change within the organization.
o Be patient with the developers when using DevOps.
o Maintain a centralized unit for storage.
o Build a flexible infrastructure, as it can be accessed at any time, anywhere, with the help of the internet.
Advantages:
1. Faster Delivery: It enables organizations to release new products and updates faster and more frequently, which can lead to a competitive advantage.
2. Improved Collaboration: DevOps promotes collaboration between development and operations teams, resulting in better communication, increased efficiency, and reduced friction.
3. Improved Quality: DevOps emphasizes automated testing and continuous integration, which helps to catch bugs early in the development process and improve the overall quality of software.
4. Increased Automation: DevOps enables organizations to automate many manual processes, freeing up time for more strategic work and reducing the risk of human error.
5. Better Scalability: DevOps enables organizations to quickly and efficiently scale their infrastructure to meet changing demands, improving the ability to respond to business needs.
6. Increased Customer Satisfaction: DevOps helps organizations to deliver new features and updates more quickly. This can result in increased customer satisfaction and loyalty.
7. Improved Security: DevOps promotes security best practices, such as continuous testing and monitoring, which can help to reduce the risk of security breaches and improve the overall security of an organization's systems.
8. Better Resource Utilization: DevOps enables organizations to improve their use of resources, including hardware, software, and personnel, which can result in cost savings and improved efficiency.
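The staged lifecycle described above can be sketched as a simple gated pipeline: each phase must succeed before the next one runs, which mirrors how a CI/CD toolchain halts on a failed build or test. This is only an illustrative sketch; the stage functions below are hypothetical placeholders, not the API of any real tool.

```python
# Minimal sketch of a gated DevOps pipeline: stages run in order,
# and a failure stops the pipeline (as a CI/CD server would).
# All stage functions are illustrative placeholders.

def plan():    return True   # gather requirements and end-user feedback
def build():   return True   # submit/compile code into a build
def test():    return True   # UAT, safety and speed tests
def release(): return True   # hand the build to the operational environment
def deploy():  return True   # Infrastructure-as-Code provisions and publishes

STAGES = [("plan", plan), ("build", build), ("test", test),
          ("release", release), ("deploy", deploy)]

def run_pipeline():
    """Run stages in order; stop at the first failure."""
    for name, stage in STAGES:
        if not stage():
            print(f"pipeline stopped at stage: {name}")
            return False
        print(f"stage ok: {name}")
    return True

if __name__ == "__main__":
    run_pipeline()
```

The point of the gate is that a red test stage prevents release and deploy from ever running, which is how DevOps catches bugs early in the lifecycle.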

Disadvantages:
1. High Initial Investment: Implementing DevOps can be a complex process, requiring significant investment in technology, infrastructure and personnel.
2. Skills Shortage: Finding qualified DevOps professionals can be a challenge, and organizations may need to invest in training and development programs to build the necessary skills within their teams.
3. Resistance to Change: Some employees may resist the cultural and organizational changes required for successful DevOps adoption. This can result in resistance to collaboration and reduced efficiency.
4. Lack of Standardization: DevOps is still a relatively new field, and there is a lack of standardization in terms of methodologies, tools, and processes. This can make it difficult for organizations to determine the best approach for their specific needs.
5. Increased Complexity: DevOps can increase the complexity of software delivery, requiring organizations to manage a larger number of moving parts and integrate multiple systems and tools.
6. Dependency on Technology: DevOps relies heavily on technology, and organizations may need to invest in a variety of tools and platforms to support the DevOps process.
7. Need for Continuous Improvement: As new technologies and best practices emerge, DevOps requires constant improvement and adaptation. Organizations must be prepared to continuously adapt and evolve their DevOps practices to remain competitive.
1.5 HISTORY OF DevOps
The DevOps term was introduced in 2007-2009 by Patrick Debois, Gene Kim, and John Willis. It represents the combination of Development (Dev) and Operations (Ops), and it is a movement that promotes bringing developers and operations together within teams, in order to deliver added business value to users more quickly and hence be more competitive in the market.
It is a set of practices that reduces the obstacles between developers, who want to update code and go for faster delivery, and Operations, who must have assurance of the stability and quality of production systems. It is an extension of the agile process which reduces delivery time, as delay in delivery is due to the non-inclusion of Ops. There should be communication between Dev and Ops processes, which allows better follow-up and end-to-end delivery of products.
1.6 DevOps STAKEHOLDERS
DevOps stakeholders are not limited to software developers and IT operations professionals. DevOps stakeholders include people from other parties, within the organization as well as from other organizations, along with the people involved in designing and developing software products and services, and those delivering and managing them. All these people work together to improve the flow of value across the end-to-end IT value chain. Many of the Dev stakeholders who are involved in designing and developing products and services are upstream from the developers themselves and so influence the work that they do.
Dev includes all people involved in developing software products and services, including but not exclusive to:
o Architects, Business Representatives, Customers, Product Owners, Project Managers, Quality Assurance (QA), Testers and Analysts, Suppliers.
Ops includes all people involved in delivering and managing software products and services, including but not exclusive to:
o Information Security Professionals, Systems Engineers, System Administrators, IT Operations Engineers, Release Engineers, Database Administrators (DBAs), Network Engineers, Support Professionals, Third Party Vendors and Suppliers.
1.7 DevOps GOALS
"The ultimate goal of DevOps in any organization is to help optimize the flow of business value from the conception of the business idea to the end-product at the hands of the user; all the while encouraging greater collaboration between the stakeholders involved."
1. Ensures effective collaboration between teams: Effective collaboration in any process depends on shared ownership. During the development process, all people involved should note the fact that everyone is equally responsible for the entire development process. Whether it is development, testing, or deployment, each team member should be involved. They should understand that they have an equal stake in the final outcome. In the DevOps paradigm, the passing of work from one team to another is completely defined and broken down. This accelerates the entire process of development, since collaboration between all the teams involved is streamlined.
2. Creates scalable infrastructure platforms: DevOps is used to create a supportable infrastructure for applications that makes them highly scalable. To fit the modern world demands of any business, scalable apps have become an absolute necessity. The process of scaling should be reliable and fully automated. As a result, the app will have the ability to adapt to any situation when a marketing effort goes viral. With the app being scalable, it can adjust itself to large traffic volumes and provide a perfect user experience.
3. Builds on-demand release capabilities: Every company must give importance to keeping their software in a 'releasable' state. Continuous delivery of software will allow the product to add new features and go live at any time. DevOps aims to automate the process of release management, because automated release management is predictable, fast and very consistent. Moreover, through automation, companies can release versions as per their requirements. Automated release management also brings complete and thorough audit trails, as these are essential for compliance purposes.
4. Provides faster feedback: Automating repetitive tasks such as testing and reporting will always increase the speed of feedback. Since the development team will know what has to change, it can roll out the updated version faster. In addition, the team can better understand the impact of the changes that it has made in the software lifecycle. A real understanding of changes will assist team members in working efficiently in collaboration. With a fast feedback mechanism, the operations team and developers can make better decisions collectively and improve the app's performance.
1.8 IMPORTANT TERMINOLOGY
Table 1.1: Terminology
Agent: A program residing on a particular physical server to execute multiple processes on that very server.
Agile Software Development: A philosophy and methodology for software development, with an importance on user feedback, the quality of software, and the capability of fast response to new product requirements and other changes.
Application Release Automation (ARA): The deployment of software releases to several different environments and their configurations, but with minimal human involvement.
Behavior-Driven Development (BDD): An Agile software development methodology that encourages collaboration and teamwork between software developers, Quality Assurance, and business participants in any given software project.
Build Agent: An agent used in continuous integration that can be installed locally or remotely, depending on the server. The agent sends and receives messages relating to the creation of software builds.
Canary Release: Referencing how canaries were brought into coal mines to test the air. This is a go-live strategy where a new application version gets released to a small batch of production servers, then monitored closely to determine if it runs as it's supposed to. If the version proves to be stable, it's rolled out to all of the production environment.
Capacity Test: This test determines the user capacity that a computer, server, or application can support right before failing.
Commit: A means of recording changes to a repository, then adding a log message outlining the same changes.
Configuration Drift: When uncommitted hotfixes and manual changes are applied to software and hardware configurations, the latter becomes inconsistent with the master version. This is often a common reason for technical debt.
Configuration Management: An engineering process for creating consistent system settings, including physical attributes, performance, and function, as well as keeping them that way. This is meant to keep a system associated with its initial design, requirements, and operational information.
Containerization: An operating system (OS) level method of virtualization employed for the deployment and running of distributed applications without having to launch an entire virtual machine for every use.
Containers: A software package in a standardized unit that includes everything needed to run the software, including code and dependencies. Containers enable an application to run in a fast and reliable manner when it's moved from one computing environment to another.
Continuous Delivery: An approach to software engineering in which integration, automated testing, and automated deployment capabilities continuously allow new software to be repeatedly developed and deployed swiftly, with a high degree of reliability, and with little human intervention.
Continuous Deployment: A development practice for software releases in which every code commit that passes through automated testing is sent to the production environment, which results in a large number of daily production deployments. It accomplishes the same tasks as Continuous Delivery does, but the former is fully automated, completely removing the human element.
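The canary strategy described above can be illustrated with a toy rollout function: the new version goes to a small batch of servers first, and the full rollout happens only if that canary batch stays healthy. Server names, the fraction, and the health check are all illustrative assumptions, not a real deployment tool's API.

```python
# Toy sketch of a canary release: deploy a new version to a small batch
# of servers, monitor them, and only then roll out everywhere.
# Names, fractions, and the health check are purely illustrative.

def canary_rollout(servers, new_version, canary_fraction=0.1,
                   healthy=lambda server: True):
    """Return a dict mapping server -> version after a canary-style rollout."""
    count = max(1, int(len(servers) * canary_fraction))
    canary, rest = servers[:count], servers[count:]
    deployed = {s: new_version for s in canary}          # stage 1: canary batch
    if all(healthy(s) for s in canary):                  # monitor the canaries
        deployed.update({s: new_version for s in rest})  # stage 2: full rollout
    else:
        deployed.update({s: "stable" for s in rest})     # hold back on failure
    return deployed

servers = [f"web-{i}" for i in range(10)]
result = canary_rollout(servers, "v2.0")
```

If the health check fails for the canary batch, only that small batch ever saw the new version, which is exactly the blast-radius reduction the strategy is meant to provide.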
DevOps: MCA wheredevelopers merge MCA (Management - Sem. IV]
DevOps:
development practice all of 1.13 Introductlon to DevOps
software a shared repository often,
Continuous
A
code into ideally
working copies of Gemba This term means "the real place" in Japanese, and in the context of
Integration their
several times a day. the business world, it often means "where value is created."
finding and fixing any software
process
systemic that involves Issue Tracking A process where programmers and quality assurance experts can
during every part
of
Continuous
A
the software track the flow of both defects and new features, starting at
malfunctions and bugs a part of both the continuous
Quality considered identification and concluding with resolution.
development cycle. It is
delivery processes. Lead Time
Continuous Testing: The process of running automated tests on code builds across all environments in the delivery pipeline, as part of continuous integration and continuous delivery, to obtain immediate feedback on quality.

Dark Launch: A development strategy in which a new version of the software, one that implements new features, is released to your team or organization's users, but is either not visibly activated or only partially so. This process is similar to a Canary Release.

Deployment: The bringing together of all the processes necessary to make hardware or a software program available for use, which includes all installation, configuration, testing, and moving that program to its home environment.

Deployment Pipeline: An automated multi-step process that takes software from version control to making it available to an organization's users.

DevOps: A fusion of the words "development" and "operations". It describes a design philosophy where development and operations teams collaborate on software development and deployment. The goal of this new process is to increase software production agility while achieving business goals.

DevSecOps: The process of bringing security into the DevOps methodology and giving it a significant role.

Event-Driven Architecture: A software architecture pattern where the system both produces messages or events, and is built to react to, consume, and detect other events.

Exploratory Testing: A testing process where human testers are given free rein to explore areas that may potentially have issues that automated tests couldn't detect.

Fail Fast: A design strategy characterized by a rapid turnaround, where an attempt fails, is reported on time, changes are made, feedback is quickly returned, and a new attempt is made.

Lead Time: In the world of manufacturing, this is the time involved in moving a work in progress (WIP) to a finished state. In the world of DevOps, the context changes to moving code changes to production.

Mean Time Between Failures (MTBF): A calculation of the average amount of time that elapses between system failures. It measures the reliability of a given system or its components.

Mean Time to Recovery (MTTR): The average amount of time needed for a system or component to recover from failure and be returned to production status.

Microservices: A pattern of architectural design wherein complex applications are composed of a suite of smaller, modular services or components that communicate with each other using language-agnostic APIs.

Production: The last stage in the software deployment pipeline, where the target audience will finally use the application.

Regression Testing: The testing of a software application to confirm that any recent changes made to it haven't adversely affected any features that were already in place.

Release Orchestration: Using tools such as XL Release to manage software releases, taking them from the development stage to the actual software release. This includes the definition, automation, security, monitoring, and control of the manual and automated tasks involved.

Rollback: Returning a database or program to a previous state, either manually or automatically.

Source Control: Also called Revision Control or Version Control. A process for storing, tracking, and managing changes to code, documents, websites, and other pieces of information. This is usually achieved by generating branches off of the software's stable master version, then merging the stable feature branches back into the latter.

Staging Environment: An almost exact copy of a production environment used for software testing. It is used to test the newest software iteration before it goes live, in an environment that mimics live production as closely as possible.
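The MTBF and MTTR definitions above are simple averages over an incident history. As an illustration (the incident timestamps below are hypothetical, invented for this sketch), they can be computed like this:

```python
# Hypothetical incident log: (failure_time, recovery_time) pairs, in hours
# measured from a common starting point. The numbers are illustrative only.
incidents = [(100.0, 100.5), (250.0, 251.5), (400.0, 401.0)]

def mtbf(incidents):
    """Mean Time Between Failures: average gap between successive failure starts."""
    starts = [start for start, _ in incidents]
    gaps = [b - a for a, b in zip(starts, starts[1:])]
    return sum(gaps) / len(gaps)

def mttr(incidents):
    """Mean Time to Recovery: average time from failure until service is restored."""
    downtimes = [recovered - failed for failed, recovered in incidents]
    return sum(downtimes) / len(downtimes)

print(mtbf(incidents))  # 150.0 hours between failures on average
print(mttr(incidents))  # 1.0 hour of downtime per failure on average
```

A higher MTBF indicates a more reliable system; a lower MTTR indicates a faster-recovering one.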
DevOps : MCA [Management - Sem. IV]        1.14        Introduction to DevOps
Technical Debt: The extra development work that results when easily implemented code is used in the short run, rather than the best overall solution. In other words, the cost of an application short-cut.

Test Automation: Using specialized software (apart from the software being tested) to control the execution of tests and compare actual outcomes against predicted outcomes.

Unit Testing: A testing strategy in which the smallest unit of testable code is separated from the rest of the software, and tests are run on it to see whether it functions as it is supposed to.

1.9 DevOps PERSPECTIVE
DevOps is the group of cultural philosophies, practices, and tools that increases an organization's ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes.
Business Perspective:
Every business person should know what DevOps is all about. It is a group of concepts that automates the processes between IT teams and software development. It provides the ability to solve critical issues quickly, fast software releases, increased trust and faith, and better management of unplanned work.
In simple words, it is collaboration between development and operations which focuses on tighter integration.
DevOps can be beneficial for any business:
o Through DevOps, developers get the opportunity to work more efficiently with stakeholders and operations.
o Continuous Release and smarter Deployment of new systems and applications.
o Continuous Integration and Continuous Delivery.
o Production cycles are often coordinated with IT mechanisms to make them more effective and streamlined.
o Faster time-to-market and delivery times that improve Return on Investment.
o Process Automation and Security Maintenance.
DevOps Perspective of Infrastructure and Environment:
Theoretically, DevOps is very simple, but practically it spans both Software Development and Information Technology Operations. Implementation of DevOps consists of the following steps:
1. Creation of DevOps Infrastructure:
The first step of implementation is to create the DevOps infrastructure on which the application will run. Doing so is not an easy task. There should not be a lack of co-operation between the development and the operations teams. Both work in two different groups and silos: developers want to deliver changes as soon as possible, while the operations team, on the other hand, aims for stability. Developers and IT professionals must bring all changes together with stability and ensure they work towards the common goal of the stakeholders, i.e. releasing valuable software as soon as possible with minimum risk involved. There should be a continuous delivery pipeline so that both the development and the operations teams can work together without any kind of confusion.
Following are some of the small changes that can be made to achieve this goal:
o Developers should audit even the smallest change made to the deployment environment, so that if anything goes wrong, the problem can be traced easily.
o Strong monitoring systems should be set up to alert the development and operations teams on time if any abnormal event occurs. This will minimize the downtime if anything goes wrong. For example, an application logs a WARNING every time a connection is unexpectedly closed or timed out, and INFO or DEBUG every time a connection is closed normally.
o The operations team can test the failure scenario, so that the same thing can be prevented from happening again in the future.
o Involvement of the operations team in the organizational IT service continuity plan right from the start.
For creating the DevOps infrastructure, technology with which the operations team is well-familiar should be used, so that they can easily own and manage the environment.
2. Modeling and Managing the DevOps Infrastructure:
Management is the main concern after creating the DevOps infrastructure. Many times companies do not have complete control over the selection of infrastructure, but people can fully automate the build, integration, testing, and deployment process. At this phase, some questions may come to mind, such as:
o What provisions will be made for the DevOps infrastructure?
o How to deploy and configure the various bits of software that form our infrastructure?
o How to manage our infrastructure once provisioning and configuration are done?
Everything needed to create and maintain the infrastructure, such as operating system install definitions, configuration for data center automation tools like Puppet, general infrastructure configurations like DNS files and SMTP settings, and the scripts for managing the infrastructure, will be kept under version control.
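The logging convention mentioned in the monitoring bullet above (WARNING for abnormal connection closes, INFO or DEBUG for normal ones) can be sketched with Python's standard logging module. The handler function and event names here are invented for illustration:

```python
import logging

# Basic setup so that all severities are visible during the demonstration.
logging.basicConfig(format="%(levelname)s %(name)s: %(message)s", level=logging.DEBUG)
log = logging.getLogger("app.connections")

def on_connection_closed(conn_id, unexpected=False, timed_out=False):
    """Record a connection-close event; returns the severity used, for checking."""
    if unexpected or timed_out:
        # Abnormal events go out at WARNING so a monitoring system can alert on them.
        log.warning("connection %s closed unexpectedly (timed_out=%s)", conn_id, timed_out)
        return "WARNING"
    # Routine closes are recorded at INFO for later auditing, not alerting.
    log.info("connection %s closed normally", conn_id)
    return "INFO"

on_connection_closed("c-1")                  # normal close -> INFO
on_connection_closed("c-2", timed_out=True)  # timeout -> WARNING
```

A monitoring system can then alert on the WARNING stream while leaving the INFO/DEBUG stream for audits.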
All changes in these infrastructural files will provide inputs to the deployment pipeline, whose job in the case of infrastructural changes is to:
o Verify that the infrastructural changes will run on all the applications before they are pushed into the production environment.
o Push the changes to the testing environment and the production environment managed by the operations team.
o Perform tests for ensuring the successful deployment of the new infrastructure on the application.
Management of the DevOps Infrastructure Environment:
We will need the following things for managing the DevOps infrastructure environment:
For controlling access to the DevOps Infrastructure:
o Controlling access so that no one can make changes without approval.
o Defining an automated process to make changes to the infrastructure.
o Monitoring the infrastructure to detect and fix issues on time.
For making changes to the DevOps Infrastructure:
o Even the smallest change, no matter whether it is about updating the firewall or deploying a new version of the software, should go through the same change management process.
o The DevOps infrastructure modification process should be managed through a single ticketing system everyone can log into.
o Changes should be logged so that they can be easily audited.
o Everyone should be able to view the history of changes in every environment.
o Anyone who needs to test changes should do so in a production-like testing environment before pushing them live.
o Apply all the DevOps infrastructure changes to version control first, and then apply them through the automated process.
o Run tests to verify whether the changes we made have worked or not.
3. Managing the Server Provisioning and Configuration:
Server provisioning and server configuration management are often overlooked in small and medium-sized organizations.
(a) Server Provisioning: In server provisioning, an appropriate set of resources, like systems and data, and software are taken to build a server and make it ready for network operation. Typical tasks during server provisioning are selecting a server from a pool of available servers, loading the appropriate software, customizing and configuring the system, changing a boot image for the server, and finally changing its parameters.
(b) Virtualization: Virtualization is the fundamental support of the cloud which enables thousands of hosts to virtually access servers over the internet. A virtual machine competes with a physical machine. The following are the benefits of virtualization:
o Fast response to the changing environment.
o Consolidation.
o Hardware standardization.
o Baselines can be easily maintained.
(c) Ongoing Server Management: After installing the operating system, the company needs to ensure full control over the configuration. It should not change in an uncontrolled manner. Nobody should be able to log into the deployment environment except the operations team, and no change should be made without an automated system. We also need to apply OS service packs, upgrade and install new software, change necessary settings, and perform deployments.
(d) Parallel Testing with Virtual Environments: We need to run parallel tests in the deployment pipeline to see if everything is running smoothly in the production environment, or if there are any issues.
4. Managing Data:
A set of problems in data management and organization may be faced while implementing the DevOps infrastructure. These are:
o There is a large volume of data involved, which makes it impossible to keep track of each piece of data involved in software development.
o The lifecycle of application data is different from other parts of the system.
One way to avoid these problems and effectively manage data is to delete the previous or old version and replace it with a new copy. However, doing so is not possible in real-time scenarios; every single bit of data is important. There can be scenarios when we might need to roll back to a previous state due to some issues. In that case, we will still need the older versions of the data. So, we will need some advanced approaches for data management, such as:
(a) Database Scripting: One great way to manage data in a DevOps infrastructure is to capture all database initialization and migration steps as scripts and check them into version control. Then, we can use these scripts to manage every database used in the delivery process. However, we need to make sure all the database scripts are managed effectively, so that there is no issue while retrieving data from the databases.
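The Database Scripting strategy in 4(a) above, capturing initialization and migration steps as ordered, version-controlled scripts, can be sketched as follows. This is a minimal illustration using SQLite; the inline SQL strings stand in for .sql files that would really be checked into version control, and the table and script names are invented:

```python
import sqlite3

# In practice each entry would be a .sql file checked into version control;
# inline strings stand in for them here. Names and SQL are illustrative only.
MIGRATIONS = [
    ("001_create_users", "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"),
    ("002_add_email", "ALTER TABLE users ADD COLUMN email TEXT"),
]

def migrate(conn):
    """Apply any not-yet-applied scripts, in order, recording each one."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_log (name TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_log")}
    for name, sql in MIGRATIONS:
        if name not in applied:
            conn.execute(sql)
            conn.execute("INSERT INTO schema_log (name) VALUES (?)", (name,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # safe to re-run: already-applied scripts are skipped
```

Because every environment runs the same ordered scripts, development, testing, and production databases stay structurally identical.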
(b) Deploying a Database Afresh: The most challenging, yet crucial, part of managing the DevOps infrastructure is to reproduce an environment after an issue occurs. We need to make sure the application works as it should before any problem arises. This is where the deployment of a database takes place. The following happens when we deploy a database again:
o The old version of the data is erased.
o The new database structure, instances, and schemas are created.
o Finally, the data is loaded into the database.
5. Incremental Change:
Incremental change is another effective technique to manage DevOps infrastructure data. It ensures an application keeps working even while we are making changes to it, which is an important pre-requisite of continuous integration (CI). On the other hand, continuous delivery demands the successful deployment of every software release, including the changes to the database, into production. This means we must update the entire operational database while retaining the valuable data held in it. So, we need an efficient rollback strategy, so that we can easily take back control of things if anything goes wrong.
For this, we must follow the following data migration strategies:
(a) Database Versioning: This is one of the most efficient mechanisms for data migration in an automated fashion. All we need is to create a table in the database which contains its version number. Now, every time we make a change to the database, we will have to create two scripts:
o A roll-forward script that takes the database from version x to version x+1.
o A roll-backward script that takes the database from version x+1 to version x.
Another thing we will need is an application configuration setting which specifies the version of the database with which the application is designed to work. Then, during the deployment, we can use a tool which looks at the current version of the database and the database version required by the application version being deployed. The tool will then use the roll-forward or roll-backward scripts to align the application and the database versions correctly.
(b) Managing Orchestrated Changes: This is another common practice for data migration. We are not in favor of it, because it would be better if applications could communicate directly, not through the database. Still, many companies are following this practice and integrating all applications through a single database. Be cautious when doing the same, because even a small change to the database can have a negative impact on how other applications work. We should test such changes in an orchestrated environment before implementing them in the production environment.
(c) Rolling Back the Databases: With the help of roll-forward and roll-backward scripts, it is easy for an application at deployment time to migrate the existing database to its correct version without losing any data.
Another effective data migration strategy is to perform the database migration process independently from the application deployment process. This will also make sure data migration is done without data loss or any change in the application behaviour.
6. Configuration Management:
Configuration management is another crucial step in DevOps infrastructure management, in which we ensure that all the files and software we are expecting on the machine are available, configured correctly, and working as intended.
Managing configuration manually is simple for a single machine. However, when we are handling five or ten servers with which 100-200 computers are connected, configuration management becomes a nightmare. That's why we need a better way to manage things:
(a) Version Control: Version control is responsible for recording changes to a file or a set of files over time, so that we can easily recall specific versions later on. It is a good practice because if we know the previous versions of files, we can easily roll back to earlier versions of the project. Version control can also help us recover in case we make mistakes and break things.
Best practices for Version Control:
o Use version control for everything (source code, tests, database scripts, build and deployment scripts, documentation, libraries, and configuration files).
o Check in regularly to see if all the versions are working properly.
o Use detailed, multi-paragraph commit messages during check-in. This can save hours of debugging in case any error occurs later.
(b) Managing Components and Dependencies:
1. Managing External Libraries: Since external libraries come in binary form, managing them can be a difficult task. Here are two ways we can get this done:
o Check the external libraries into version control.
o Declare the external libraries and use a tool like Maven or Ivy to download them from the Internet repositories to our artifact repository.
2. Managing Components: The best way is to divide the application into smaller components. This will limit the scope of changes to the application, reduce regression bugs, encourage reuse, and enable a much more efficient development process on large projects.
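The Database Versioning strategy in 5(a) above can be sketched as a small alignment tool: a stored version number plus paired roll-forward/roll-backward scripts. The script bodies below are placeholders, not real SQL:

```python
# Placeholder scripts: forward[x] moves the schema from version x to x+1,
# backward[x] moves it from version x back to x-1. Contents are illustrative.
forward = {1: "-- SQL: v1 -> v2", 2: "-- SQL: v2 -> v3"}
backward = {2: "-- SQL: v2 -> v1", 3: "-- SQL: v3 -> v2"}

def scripts_to_align(current, required):
    """Return the ordered scripts that take the database from its current
    version to the version required by the application being deployed."""
    steps = []
    while current < required:       # roll forward one version at a time
        steps.append(forward[current])
        current += 1
    while current > required:       # roll backward one version at a time
        steps.append(backward[current])
        current -= 1
    return steps

print(scripts_to_align(1, 3))  # ['-- SQL: v1 -> v2', '-- SQL: v2 -> v3']
print(scripts_to_align(3, 1))  # ['-- SQL: v3 -> v2', '-- SQL: v2 -> v1']
```

A deployment tool would read the current version from the database's version table, compare it with the version the application declares, and execute the returned scripts in order.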
(c) Managing Software Configuration: Software configuration should be managed carefully. We should subject it to proper management and testing, and consider a few important software configuration principles, such as:
o Keep all the available application configuration options in the same repository as its source code.
o Manage the values of configurations separately.
o Perform configurations using an automated process, with the help of values taken from the configuration repository.
o Use clear naming conventions to avoid confusion.
o Do not repeat any information.
o Keep the configuration information as simple as possible.
o Do not over-engineer or over-optimize the configuration system.
o Run all necessary configuration tests and keep a record of each.
That is how we establish a DevOps infrastructure management and software deployment environment. The process requires a lot of patience and guidance, because there are many chances that things could go wrong.

1.10 DevOps AND AGILE
Agility is the key. Additionally, through DevOps, scalability can be achieved quickly and easily even for big organizations, with a stable and reliable operating environment pushing one's business to stay ahead of the competition.
o Agile helps in bridging the gap between the Business and Development teams, and DevOps helps in doing this for the Development and Operations teams.
o Agile refers to an iterative approach which focuses on collaboration, customer feedback, and small, rapid releases; DevOps is the practice of bringing development and operations teams together. DevOps' central concept is to manage end-to-end engineering processes (Concept to Cash).
Difference between DevOps and Agile:
DevOps and Agile are two software development strategies having similar aims for product development and delivery, end to end.

Table 1.2: Key differences between DevOps and Agile

Definition:
  DevOps: DevOps is a practice of bringing development and operations teams together.
  Agile: Agile refers to the continuous iterative approach, which focuses on collaboration, customer feedback, and small, rapid releases.
Purpose:
  DevOps: The DevOps purpose is to manage end-to-end engineering processes.
  Agile: The Agile purpose is to manage complex projects.
Task:
  DevOps: It focuses on constant testing and delivery.
  Agile: It focuses on constant changes.
Team Size:
  DevOps: It has a large team size, as it involves all the stakeholders.
  Agile: It has a small team size. The smaller the team, the fewer people work on it, so that they can move faster.
Team Skillset:
  DevOps: DevOps divides and spreads the skill set between the development and the operations teams.
  Agile: Agile development emphasizes training all team members to have a wide variety of similar and equal skills.
Implementation:
  DevOps: DevOps is focused on collaboration, so it does not have any commonly accepted framework.
  Agile: Agile can be implemented within a range of tactical frameworks such as SAFe, Scrum, and Sprint.
Duration:
  DevOps: The ideal goal is to deliver the code to production daily, or every few hours.
  Agile: Agile development is managed in units of sprints, each much less than a month long.
Target Areas:
  DevOps: End-to-end business solution and fast delivery.
  Agile: Software development.
Feedback:
  DevOps: Feedback comes from the internal team.
  Agile: In Agile, feedback comes from the customer.
Shift-left Principle:
  DevOps: It supports both variations, left and right.
  Agile: It supports only shift left.
Focus:
  DevOps: DevOps focuses on operational and business readiness.
  Agile: Agile focuses on functional and non-functional readiness.
Importance:
  DevOps: In DevOps, developing, testing, and implementation are all equally important.
  Agile: Developing software is inherent to Agile.
Quality:
  DevOps: DevOps contributes to creating better quality with automation and early bug removal. Developers need to follow coding and architectural best practices to maintain quality standards.
  Agile: Agile produces better application suites with the desired requirements. It can quickly adapt according to changes made on time during the project life.
Tools:
  DevOps: Puppet, Chef, AWS, Ansible, TeamCity, and OpenStack are popular DevOps tools.
  Agile: Bugzilla, Kanboard, and JIRA are some popular Agile tools.
Automation:
  DevOps: Automation is the primary goal of DevOps. It works on the principle of maximizing efficiency when deploying software.
  Agile: Agile does not emphasize automation.
Communication:
  DevOps: DevOps communication involves specs and design documents. It is essential for the operational team to fully understand the software release and its network implications for running the deployment process.
  Agile: Scrum is the most common method of implementing Agile software development. Scrum meetings are carried out daily.
Documentation:
  DevOps: In DevOps, process documentation is prime, because it will be sent with the software to the operational team for deployment. Automation minimizes the impact of insufficient documentation. However, in the development of sophisticated software, it is difficult to transfer all the required knowledge.
  Agile: The Agile method gives priority to the working system over complete documentation. It is ideal when you are flexible and responsive. However, it can hurt when you are trying to turn things over to another team for deployment.

1.11 DevOps TOOLS
Puppet, Chef, Ansible, and SaltStack are some of the most popular DevOps tools.

Fig. 1.3: DevOps Tools (logos of popular tools such as JUnit, Maven, Gradle, Sensu, Chef, Splunk, SaltStack, JIRA, Ansible, Eclipse, and Bamboo)

1. Puppet:
Puppet is the most widely used DevOps tool. It allows the delivery and release of technology changes quickly and frequently. It has features of versioning, automated testing, and continuous delivery. It enables managing the entire infrastructure as code without expanding the size of the team.
Features:
o Real-time, context-aware reporting.
o Model and manage the entire environment.
o Define and continually enforce infrastructure.
o Desired-state conflict detection and remediation.
o It inspects and reports on packages running across the infrastructure.
o It eliminates manual work in the software delivery process.
o It helps the developer to deliver great software quickly.
2. Ansible:
Ansible is a leading DevOps tool. Ansible is an open-source IT engine that automates application deployment, cloud provisioning, intra-service orchestration, and other IT tools. It makes it easier for DevOps teams to scale automation and speed up productivity.
Ansible is easy to deploy because it does not use any agents or custom security infrastructure on the client side, working instead by pushing modules to the clients. These modules are executed locally on the client side, and the output is pushed back to the Ansible server.
Features:
o It is an easy-to-use, open-source tool to deploy applications.
o It helps in avoiding complexity in the software development process.
o It eliminates repetitive tasks.
o It manages complex deployments and speeds up the development process.
3. Docker:
Docker is a high-end DevOps tool that allows building, shipping, and running distributed applications on multiple systems. It also helps to assemble apps quickly from their components, and it is typically suitable for container management.
Features:
o It configures the system more comfortably and faster.
o It increases productivity.
o It provides containers that are used to run the application in an isolated environment.
o It routes incoming requests for published ports on available nodes to an active container. This feature enables the connection even if there is no task running on the node.
o It allows saving secrets in the group itself.
4. Nagios:
Nagios is one of the most useful tools for DevOps. It can determine errors and rectify them with the help of network, infrastructure, server, and log monitoring systems.
Features:
o It provides complete monitoring of desktop and server operating systems.
o The network analyzer helps to identify bottlenecks and optimize bandwidth utilization.
o It helps to monitor components such as services, applications, OS, and network protocols.
o It also provides complete monitoring of Java Management Extensions.
5. Chef:
Chef is a useful tool for achieving scale, speed, and consistency. Chef is a cloud-based system and open-source technology. This technology uses Ruby encoding to develop essential building blocks such as recipes and cookbooks. Chef is used in infrastructure automation and helps in reducing manual and repetitive tasks for infrastructure management.
Chef has its own conventions for different building blocks, which are required to manage and automate infrastructure.
Features:
o It maintains high availability.
o It can manage multiple cloud environments.
o It uses the popular Ruby language to create a domain-specific language.
o Chef does not make any assumptions about the current status of a node. It uses its own mechanism to get the current state of the machine.
6. Jenkins:
Jenkins is a DevOps tool for monitoring the execution of repeated tasks. Jenkins is software that allows continuous integration. Jenkins is installed on a server where the central build takes place. It helps to integrate project changes more efficiently by finding issues quickly.
Features:
o Jenkins increases the scale of automation.
o It can be easily set up and configured via a web interface.
o It can distribute tasks across multiple machines, thereby increasing concurrency.
o It supports continuous integration and continuous delivery.
o It offers 400 plugins to support the building and testing of virtually any project.
o It requires little maintenance and has a built-in GUI tool for easy updates.
7. Git:
Git is an open-source distributed version control system that is freely available for everyone. It is designed to handle minor to major projects with speed and efficiency. It was developed to coordinate the work among programmers. Version control allows you to track and work together with your team members in the same workspace. It is used as a critical distributed version control tool in DevOps.
Features:
o It is a free, open-source tool.
o It allows distributed development.
o It supports pull requests.
o It enables a faster release cycle.
o Git is very scalable.
o It is very secure and completes tasks very fast.
8. Splunk:
Splunk is a tool that makes machine data usable, accessible, and valuable to everyone. It delivers operational intelligence to DevOps teams. It helps companies to be more secure, productive, and competitive.
Features:
o It has a next-generation monitoring and analytics solution.
o It delivers a single, unified view of different IT services.
o It extends the Splunk platform with purpose-built solutions for security.
o Data-driven analytics with actionable insights.
9. Stackify:
Stackify is a lightweight DevOps tool. It shows real-time error queries, logs, and more directly in the workstation. It is an ideal solution for intelligent orchestration for the software-defined data center. Stackify Retrace allows teams to quickly identify and resolve issues, ensuring that the application is always available and performing as expected.
Features:
o It eliminates messy configuration or data changes.
o It can trace details of all types of web requests.
o It allows us to find and fix bugs before production.
o It provides secure access and configures image caches.
access control.
It secures multi-tenancy
with granular role-based manage images,
1. Delivering Infrastructure as Code:
a private registry to store and Infrastructure as code (laC) in DevOps configuration management means
o Flexible image management with managing
and provisioning infrastructure through code rather than manually configuring it
10. Selenium: provides
framework for web applications. It a
through web interface or commnand-line interface. This approach allows for
Selenium is a portable software testing infrastructure to be version controlled, tested, and automated, just like application
tests.
an easy interface for developing automated code.
Features: This method makes it possible to manage infrastructure the same way you would an
It is afree open source tool. application and makes it easier to deploy things consistently, go back to earlier
as Android and i0S.
It supports multiplatform for testing, such versions if needed, and keep track of what's happening with the infrastructure in a
a keyword-driven framework for a WebDriver. transparent and traceable way. Sorme popular tools used for laC incude Ansible,
It is easy to build
It creates robust browser-based
regression automation suites and tests. CHEE, Puppet, and Terraform.
Example: Assume you vrish to build a web application on a cloud provider such as
MANAGEMENT
112 CONFIGURATION AWS. Traditionally, you would have to establish a virtual server manually, install the
operating system, configure the network settings, and install the application-specific
DevOps configuration is the evolution and
automation of the systems administration
management and deployment. DevOps software.
role, bringing automation to infrastructure With Infrastructure as Code, you would write a script that outlines the infrastructure
responsibility under the umbrella of
Configuration management is a systems engineering process for establishing and maintaining consistency of a product's attributes throughout its life.
In the technology world, configuration management is an IT management process that tracks individual configuration items of an IT system.
Software configuration management is a systems engineering process that tracks and monitors changes to software systems configuration metadata.
Configuration management is a key part of a DevOps lifecycle. DevOps configuration is the evolution and automation of the systems administration role, bringing automation to infrastructure management and deployment.
Configuration management is important because it enables the ability to scale infrastructure and software systems without having to correspondingly scale the administrative staff needed to manage those systems. This can make it possible to scale where it previously wasn't feasible to do so.
Configuration management in DevOps can help developers make changes to their systems quickly and efficiently while ensuring nothing breaks. It can also help keep track of any updates and changes along with the incident that triggered the change.
In a properly managed DevOps configuration management process, there are two prominent outcomes:
o Infrastructure as code.
o Configuration as code.
1. Delivering Infrastructure as Code: Infrastructure as code brings to configuration management the system administration required to execute your application. You write a script that states what kind of server you need, how much memory it should have, and what software needs to be installed. After you've built the script, you can use a tool like Terraform to create the server, configure it, and install the required software with a single command. This way, you can consistently create a reproducible infrastructure and easily manage it in the future.
2. Delivering Configuration as Code: Delivering configuration as code (CaC) is a way to manage and set up your system and apps using code instead of manual adjustments. This makes it possible to keep track of changes, test them, and automate the process just like your app's code. This method lets you manage settings the same way as the app, making it easier to deploy or roll back changes quickly, see what settings are being used, and ensure everything is running correctly.
Example: Using a tool like Ansible to configure a web server. Instead of manually setting up the web server, you can use an Ansible script presenting the configuration as code. This script will automatically configure the web server to run the correct software version, set up the number of worker processes, and specify its root directory, essentially creating a recipe for setting up the web server that can be saved, reviewed, and tested before running.
1.13 CONTINUOUS INTEGRATION AND DEPLOYMENT
In this method you continuously build, test, and deploy iterative code changes. This is an iterative process: new code can be developed based on correct previous versions, so new code can be designed and developed with less human intervention.
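The configuration-as-code idea above can be sketched in plain shell: declare the desired state and converge to it, so the script is safe to run repeatedly. The paths and settings below are hypothetical, chosen only for illustration; tools like Ansible provide this convergence behaviour at scale.

```shell
#!/bin/sh
# Minimal configuration-as-code sketch (illustrative; paths are hypothetical).
# Desired state: a config directory and a settings file with known contents.
CONF_DIR=/tmp/demo-app
CONF_FILE=$CONF_DIR/settings.conf

# Ensure the directory exists (mkdir -p is idempotent: no error if present).
mkdir -p "$CONF_DIR"

# Write the desired configuration only if it differs from what is on disk,
# so re-running the script changes nothing once the state is correct.
DESIRED="workers=4
root_dir=/var/www/html"
if [ "$(cat "$CONF_FILE" 2>/dev/null)" != "$DESIRED" ]; then
    printf '%s\n' "$DESIRED" > "$CONF_FILE"
    echo "configured"
else
    echo "already up to date"
fi
```

The first run creates the file and prints "configured"; every later run compares the on-disk state with the desired state and prints "already up to date" without touching anything, which is exactly the repeatability that configuration management tools are built around.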
DevOps : MCA [Management - Sem. IV]    Introduction to DevOps
There are three primary approaches to this method:
1. Continuous Integration: Consider an application that has its code stored in a Git repository in GitLab. Developers push code changes every day, multiple times a day. For every push to the repository, you can create a set of scripts to build and test your application automatically. These scripts help decrease the chances that you introduce errors in your application. This practice is known as Continuous Integration. Each change submitted to an application, even to development branches, is built and tested automatically and continuously. These tests ensure the changes pass all tests, guidelines, and code compliance standards established for your application. GitLab itself is an example of a project that uses Continuous Integration as a software development method: for every push to the project, a set of checks run against the code.
Fig. 1.4: Continuous Integration Method (cycle: Plan, Code, Build, Test, Release, Deploy, Operate, Monitor)
2. Continuous Delivery: Continuous Delivery is a step beyond Continuous Integration. Not only is your application built and tested each time a code change is pushed to the codebase, the application is also deployed continuously. However, with continuous delivery, you trigger the deployments manually. Continuous Delivery checks the code automatically, but it requires human intervention to manually and strategically trigger the deployment of the changes.
3. Continuous Deployment: Continuous Deployment is another step beyond Continuous Integration, similar to Continuous Delivery. The difference is that instead of deploying your application manually, you set it to be deployed automatically. Human intervention is not required.
1.14 LINUX OS INTRODUCTION
Linux is a community of open-source Unix-like operating systems that are based on the Linux Kernel. It was initially released by Linus Torvalds on September 17, 1991. It is a free and open-source operating system, and the source code can be modified and distributed to anyone, commercially or non-commercially, under the GNU General Public License.
Initially, Linux was created for personal computers and gradually came to be used in other machines like servers, mainframe computers, supercomputers, etc. Nowadays, Linux is also used in embedded systems like routers, automation controls, televisions, digital video recorders, video game consoles, smartwatches, etc. The biggest success of Linux is Android (operating system): it is based on the Linux kernel and runs on smartphones and tablets. Due to Android, Linux has the largest installed base of all general-purpose operating systems. Linux is generally packaged in a Linux distribution.
Architecture of Linux:
Fig. 1.5: Architecture of Linux (layers: Applications, Shell, Kernel, Hardware, supported by system libraries and utilities)
Components of Architecture:
1. Kernel: The kernel is the core of the Linux-based operating system. It virtualizes the common hardware resources of the computer to provide each process with its virtual resources. This makes the process seem as if it is the sole process running on the machine. The kernel is also responsible for preventing and mitigating conflicts between different processes. Different types of the kernel are:
Monolithic Kernel
Hybrid Kernels
Exo Kernels
Microkernels
2. System Library: The special types of functions that are used to implement the functionality of the operating system.
3. Shell: It is an interface to the kernel which hides the complexity of the kernel's functions from the users. It takes commands from the user and executes the kernel's functions.
4. Hardware Layer: This layer consists of all peripheral devices like RAM/HDD/CPU etc.
5. System Utility: It provides the functionalities of an operating system to the user.
1.15 IMPORTANCE OF LINUX IN DEVOPS
Linux and DevOps share similar core philosophies and perspectives, as both focus on customization and scalability. Linux has a customization aspect that matches the thinking in DevOps: it allows design and security applications specific to a particular development environment or development goal to be created. There is much more freedom over how the operating system functions compared to Windows.
Most software delivery pipelines use Linux-based servers. If the DevOps team is using a Linux-based operating system, they can do all testing in-house and with extreme ease.
Another basic but important point in favour of Linux is that the Linux kernel can address huge amounts of memory and Linux-based systems are highly scalable. If the hard drive or other hardware requirements change during development, these requirements can be added without losing processing power in Linux. The same cannot always be said of the Windows operating system.
Linux is important in DevOps because it is a powerful and flexible operating system that provides a wide range of tools and utilities that are useful for building and managing software applications. DevOps teams use Linux for tasks such as managing servers, configuring network settings, deploying and monitoring applications, and automating tasks through scripting.
Best Linux distributions for DevOps:
1. Ubuntu: Ubuntu is the most widely used Linux distribution in the world. There is a wide array of free, open-source software tools available to Linux-based DevOps practitioners. Those that are not free will be only a fraction of the price of similar software available on the Windows platform. Because of Ubuntu's large distribution, it has very strong community support and also offers the option of commercial support should one desire it.
2. CentOS: CentOS is a notable mention simply because of the way it works with Red Hat Enterprise Linux (RHEL). RHEL is a popular Linux distribution that is widely used for applications such as microservers, cloud computing, application development, storage solutions, and many more.
3. Fedora: Fedora is another option for RHEL-centred developers. It differs from CentOS in two very important ways. For starters, Fedora is not an RHEL clone like CentOS. It is officially adopted by the RHEL team, since Red Hat uses Fedora as a sort of proving or testing ground for upcoming RHEL technologies. Because of this, Fedora is fully integrated with RHEL.
4. Cloud Linux OS: Built off the Linux distribution, CloudLinux is an operating system designed specifically for cloud computing and shared hosting providers. For those who do not know, shared hosting refers to a type of web hosting where a single server is shared by several websites. CloudLinux currently powers somewhere in the range of 20 million web pages. Because it is based on CentOS, which in turn was heavily based on RHEL, one can feel confident in its scalability and customization capabilities.
5. Debian: Debian is a Linux distribution for servers. Debian is different from Ubuntu in what it prioritizes. For Debian, stability is more important than innovation, and because of this it lags behind Ubuntu when it comes to the integration of new software packages and libraries. With this in mind, it offers an enterprise solution for those who want to focus on overall stability first and foremost.
1.16 LINUX BASIC COMMAND UTILITIES
Table 1.3
Sr. No. | Description | Syntax
1. | ls: List files and directories in a directory. | ls [options] [directory]
2. | cd: Change the current working directory. | cd [directory]
3. | pwd: Print the current working directory. | pwd
4. | mkdir: Create a new directory. | mkdir [options] directory
5. | rmdir: Remove an empty directory. | rmdir [options] directory
6. | cp: Copy files or directories from one location to another. | cp [options] source_file destination_file
7. | mv: Move or rename files or directories. | mv [options] source_file destination_file
8. | cat: Display the contents of a file. | cat [options] file
9. | less: Display the contents of a file one page at a time. | less [options] file
10. | head: Display the first few lines of a file. | head [options] file
11. | tail: Display the last few lines of a file. | tail [options] file
12. | grep: Search for a pattern in a file or output. | grep [options] pattern file
13. | find: Search for files or directories. | find [path] [expression]
14. | tar: Archive files and directories into a tarball. | tar [options] archive_name file(s)_to_archive
(contd. ...)
15. | gzip: Compress files. | gzip [options] file
16. | gunzip: Uncompress files. | gunzip [options] file.gz
17. | ps: Display information about running processes. | ps [options]
18. | top: Display system resource usage and process information. | top
19. | ssh: Connect to a remote system using SSH. | ssh [user@]hostname
1.17 LINUX ADMINISTRATION
The job of a Linux systems administrator is to manage the operations of a computer system, like maintaining, enhancing, creating user accounts/reports, and taking backups using Linux tools and command-line interface tools. Most computing devices are powered by Linux because of its high stability, high security, and open-source environment.
Linux DevOps is the practice of using Linux-based systems and tools to build, deploy, and manage applications in a Continuous Integration and Continuous Deployment environment. This approach allows for the automation of many common tasks associated with software development, including source control, build automation, infrastructure orchestration, deployment, monitoring, and logging.
A Linux DevOps Manager should:
1. Learn DevOps Tools: A Linux DevOps manager should learn DevOps tools like Docker, Ansible, Jenkins, Kubernetes, etc. These tools should be learned to become a DevOps Manager on the Linux operating system. To learn Linux DevOps tools, one should know the Linux command line and its various commands. This includes learning how to navigate the file system, how to create, remove and manage files, and how to install and configure software. It is also important to learn scripting languages such as Python, Bash, and Ruby, as these are commonly used in DevOps automation. Once a person is comfortable with the Linux command line and scripting languages, they can move on to learning about DevOps tools such as Ansible, Puppet, Chef, and Jenkins.
2. Get Familiar with Infrastructure as Code (IaC): IaC is the process of managing and provisioning infrastructure through machine-readable definition files, instead of manual processes. IaC is a key part of DevOps, as it enables organizations to manage their infrastructure in the same way that they manage software, which is through version control and automation. To learn IaC, it is important to gain an understanding of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS).
3. Automate Processes: DevOps practitioners should be comfortable with automating processes. This means scripting out manual processes, setting up Continuous Integration/Continuous Delivery (CI/CD) pipelines, and streamlining task execution. For this purpose, knowledge of scripting languages is important.
4. Monitor and Optimize: Monitoring and optimizing systems and processes are key components of the DevOps role. Monitoring helps identify problems, while optimization helps ensure processes run efficiently. Systems and process monitoring are essential aspects of the Linux DevOps role. DevOps engineers must be able to analyze system and process performance metrics, detect any potential issues, and take proactive steps to prevent outages or other negative impacts on system performance. They must also be able to identify opportunities for improvement and develop strategies for optimizing system performance.
5. Collaborate: DevOps is all about collaboration between teams, and practitioners should be comfortable communicating with stakeholders and other teams. Collaboration is essential when transitioning to a Linux DevOps role. Working closely with other team members can help build the skills and knowledge needed for a successful transition.
1.18 ENVIRONMENT VARIABLES
Environment variables are pairs of keys and values that can be used to customize the build process and store sensitive data such as access details to deployment servers.
Linux is a multi-user operating system. Multi-user means that each user has their own dedicated operating environment after logging in to the system, and this environment is defined by a set of variables, which are called environment variables. Users can modify their own environment variables to meet the requirements of the environment.
1. Use command env or printenv to display currently defined environment variables, for example:
$ env (or printenv)
XDG_SESSION_ID=3092
HOSTNAME=XXXX
NVM_CD_FLAGS=
TERM=xterm-256color
SHELL=/bin/bash
HISTSIZE=1000
SSH_CLIENT=XXXX 49967 22
SSH_TTY=/dev/pts/0
NVM_DIR=/mnt/efs/data/home/txu/.nvm
USER=txu
MAIL=/var/spool/mail/txu
LC_CTYPE=en_US.UTF-8
LESSOPEN=||/usr/bin/lesspipe.sh %s
2. echo command: To display a specific ENV variable.
$ echo $HOME
/mnt/data/home/abc
Set/Unset New Env Variable:
3. export command: To set a new environment variable.
$ echo $VERSION
$ export VERSION=1.0.0
$ echo $VERSION
1.0.0
4. unset command: To delete/remove an existing environment variable.
$ echo $VERSION
1.0.0
$ unset VERSION
$ echo $VERSION
You can set/unset multiple variables as well.
$ export VERSION=1.0.0 VERSION2=2.0.0
$ unset VERSION VERSION2
Set Persistent ENV Variables:
1. For All Users: To make ENV variables persistent for all users, you can leverage the /etc/profile file. This file is used to set system-wide environment variables for users' shells. The variables are sometimes the same ones that are in .bash_profile; however, this file is used to set an initial PATH or PS1 for all shell users of the system. For example:
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
After the addition is complete, the new environment variable will not take effect immediately; you need to run source /etc/profile for it to take effect immediately, otherwise it will only take effect the next time you log in as that user.
2. For Single User: To set specific ENV variables for a single user, you can modify the .bash_profile file in the user home directory, which is a hidden file that can be viewed by ll -a:
$ cd
$ ll -a .bash_profile
-rw-r--r-- 1 tony tony 193 Sep 22 2021 .bash_profile
Common ENV Variables:
1. PATH: The paths, separated by colons, are a list of directories where executable programs can be found.
$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
To add a directory path to PATH, you can write:
$ pwd
/root/docker/httpd
$ export PATH=$PATH:$PWD
$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/root/docker/httpd
2. HOME: The user's main working directory, the default directory when the user logs in to the Linux system.
$ whoami
tony
$ echo $HOME
/home/modern
3. HISTSIZE: Holds the number of historical commands to save. The commands we input are saved by the system, and this environment variable records the number of commands to be kept. Generally 1000.
$ echo $HISTSIZE
1000
$ HISTSIZE=1001
$ echo $HISTSIZE
1001
4. LOGNAME: Current user login name.
$ echo $LOGNAME
modern
5. HOSTNAME: Host name.
$ echo $HOSTNAME
cloud-dev.modern.com
6. SHELL: The type of shell used by the current user.
$ echo $SHELL
/bin/bash
1.19 NETWORKING
Linux Networking and Troubleshooting Commands:
1. hostname: The hostname command is used to view the hostname of the machine and to set the hostname.
Example: sudo hostname modern.com
If you set the hostname using the "hostname" command, when you restart the machine, the hostname will change back to the name specified in the hostname file (e.g. /etc/hostname).
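Reading the hostname is non-destructive and needs no sudo. The sketch below also falls back to uname -n (the kernel's node name) in case a standalone hostname binary is not installed; that fallback is an assumption about minimal systems, not something the text requires.

```shell
# Print the current hostname; uname -n reports the kernel's node name
# and works even where the hostname utility is missing.
name=$(hostname 2>/dev/null || uname -n)
echo "current hostname: $name"
```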
So if you want to change the hostname permanently, you can use the /etc/hostname file or the relevant hostname file present on the server.
o For Ubuntu machines, you can change it in the /etc/hostname file.
o For RHEL, CentOS and Fedora you can change it in the /etc/sysconfig/network file.
2. host: The host command is for the reverse lookup of an IP or a DNS name.
If you want to find the DNS name attached to an IP, you can use the host command as follows:
host 8.8.8.8
You can also do the reverse to find the IP address associated with the domain name. For example,
host devopscube.com
3. ping: The ping networking utility is used to check if the remote server is reachable or not. It is primarily used for checking the connectivity and troubleshooting the network. It provides the following details:
(i) Bytes sent and received
(ii) Packets sent, received, and lost
(iii) Approximate round-trip time (in milliseconds)
Syntax: ping <IP or DNS>
Example: ping devopscube.com
To ping an IP address: ping 8.8.8.8
If you want to limit the ping output without using ctrl + c, then you can use the "-c" flag with a number as shown below:
ping -c 1 devopscube.com
4. curl: The curl utility is primarily used to transfer data from or to a server. However, you can use it for network troubleshooting as well. For network troubleshooting, curl supports protocols such as DICT, FILE, FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, MQTT, POP3, POP3S, RTMP, RTMPS, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET and TFTP.
For example, curl can check connectivity on port 22 using telnet:
curl -v telnet://192.168.33.10:22
To check FTP connectivity using curl:
curl ftp://ftptest.net
You can troubleshoot web server connectivity as well:
curl http://devopscube.com -I
5. wget: The wget command is primarily used to fetch web pages. You can use wget to troubleshoot network issues as well.
For example, you can troubleshoot proxy server connections using wget:
wget -e use_proxy=yes http_proxy=proxy_host:port http://externalsite.com
You can check if a website is up by fetching the file:
wget www.google.com
6. ip (ifconfig): The ip command is used to display and manipulate routes and network interfaces. It is the newer version of ifconfig. ifconfig works on all systems, but it is better to use the ip command instead of ifconfig.
Display network devices and configuration:
ip addr
This command can be used with pipes and grep to get more granular output, like the IP address of the eth0 interface:
ip a | grep eth0 | grep "inet" | awk -F" " '{print $2}'
Get details of a specific interface:
ip a show eth0
Routing tables can be listed with the following commands:
ip route
ip route list
7. arp: ARP (Address Resolution Protocol) shows the cache table of local network IP addresses and MAC addresses that the system has interacted with:
arp
8. ss (netstat): The ss command is a replacement for netstat. You can still use the netstat command on all systems, but using ss you can get more information than the netstat command. The ss command is fast because it gets all the information from the kernel userspace.
(a) Listing all connections: The "ss" command will list all the TCP, UDP, and Unix socket connections on your machine.
(b) Filtering out TCP, UDP and UNIX sockets: If you want to filter out TCP, UDP or UNIX socket details, use the "-t", "-u" and "-x" flags with the "ss" command. It will show all the established connections to the specific ports. If you want to list both connected and listening ports, use "-a" with the specific flag as shown below:
ss -ta
ss -ua
ss -xa
(c) List all listening ports: To list all the listening ports, use the "-l" flag with the ss command. To list specific TCP, UDP or UNIX sockets, use the "-t", "-u" and "-x" flags with "-l" as shown below.
redhat@devopscube:~$ ss -lt
State   Recv-Q  Send-Q  Local Address:Port   Peer Address:Port
LISTEN  0       128     *:ssh                *:*
LISTEN  0       50      :::http-alt          :::*
LISTEN  0       50      :::55857             :::*
LISTEN  0       128     :::ssh               :::*
LISTEN  0       50      :::53285             :::*
redhat@devopscube:~$
(d) List all established: To list all the established ports, use the state established flag as shown below:
ss -t -r state established
To list all sockets in listening state:
ss -t -r state listening
9. traceroute: If you do not have the traceroute utility on your system or server, you can install it from the native repository. This is a network troubleshooting utility. Using traceroute, you can find the number of hops required for a particular packet to reach the destination.
traceroute google.com
10. mtr: The mtr utility is a network diagnostic tool to troubleshoot network bottlenecks. It combines the functionality of both ping and traceroute.
Example: The following command shows the traceroute output in real time.
mtr google.com
11. dig: If you have any task related to DNS lookup, you can use the "dig" command to query the DNS name servers and get all DNS records.
Example: The following command returns all the DNS records and TTL information of twitter.com:
dig twitter.com ANY
12. nc (netcat): The nc (netcat) command is known as the Swiss army knife of networking commands.
Using nc, you can check the connectivity of a service running on a specific port.
Example: To check if the ssh port is open, use the following command:
nc -v -n 192.168.33.10 22
13. telnet: The telnet command is used to troubleshoot TCP connections on a port.
Example: To check port connectivity using telnet, use the following command:
telnet 10.4.5.5 22
14. route: This command is used to get the details of the route table for your system and to manipulate it.
Examples:
For listing all routes: Execute the "route" command without any arguments to list all the existing routes on your system or server.
redhat@devopscube:~$ route
Kernel IP routing table
Destination   Gateway          Genmask          Flags  Metric  Ref  Use  Iface
default       ip-172-31-16-1.  0.0.0.0          UG     0       0    0    eth0
172.17.0.0    *                255.255.0.0      U      0       0    0    docker0
172.31.16.0   *                255.255.240.0    U      0       0    0    eth0
ubuntu@devopscube:~$
If you want to get the full output in numerical form without any hostname, you can use the "-n" flag with the route command.
redhat@devopscube:~$ route -n
Kernel IP routing table
Destination   Gateway       Genmask          Flags  Metric  Ref  Use  Iface
0.0.0.0       172.31.16.1   0.0.0.0          UG     0       0    0    eth0
172.17.0.0    0.0.0.0       255.255.0.0      U      0       0    0    docker0
172.31.16.0   0.0.0.0       255.255.240.0    U      0       0    0    eth0
redhat@devopscube:~$
15. tcpdump: The tcpdump command is primarily used for troubleshooting network traffic.
[Note: Analyzing the output of the tcpdump command requires some learning, so explaining it is out of the scope of this section.]
The tcpdump command works with the network interfaces of the system, so you need administrative privileges to execute the command.
List all network interfaces: Use the following command to list all the interfaces.
sudo tcpdump --list-interfaces
Capture packets on a specific interface: To get the dump of packets on a specific interface, you can use the following command.
Note: press ctrl + c to stop capturing the packets.
sudo tcpdump -i eth0
To limit the packet capturing, you can use the "-c" flag with a number.
For example, sudo tcpdump -i eth0 -c 10
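Tables like the route -n output above are plain text, so they can be post-processed with the utilities from Section 1.16. For example, extracting the gateway of the default route from a saved copy of the table (the text below is a sample pasted from the output above, not a live query):

```shell
# Sample "route -n" style table saved as text (from the example above).
table="Destination   Gateway       Genmask        Flags
0.0.0.0       172.31.16.1   0.0.0.0        UG
172.17.0.0    0.0.0.0       255.255.0.0    U"

# The default route has destination 0.0.0.0; field 2 is its gateway.
printf '%s\n' "$table" | awk '$1 == "0.0.0.0" {print $2}'
# → 172.31.16.1
```

The same one-liner works on live output by replacing the saved text with `route -n` in a pipe.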
Capture packets on all interfaces: To capture packets on all the interfaces, use the "any" flag as shown below:
sudo tcpdump -i any
16. lsof: lsof is a command that you would use in day-to-day Linux troubleshooting. This command is equally important for anyone working with Linux systems.
To list all open files, execute the lsof command:
lsof
One of the common errors faced by developers and DevOps engineers is "Bind failed error: Address already in use". You can find the process ID associated with a port using the following command; then you can kill the process to free up the port.
lsof -i :8080
1.20 LINUX SERVER INSTALLATION
Installation of Ubuntu Linux Server 16.04 LTS (Long Term Support):
Requirements:
For this installation, you will need a copy of the Ubuntu Linux Server 16.04 installation media. You can obtain the latest version as a DVD image (ISO) which can be used for this installation at www.ubuntu.com/download/server. Be sure to select the Ubuntu Linux Server edition, as the desktop version uses a graphical installer. We will be installing the x86_64 DVD ISO image on a virtual machine.
Installer Keyboard Notes:
The Ubuntu text installer utilizes keyboard keys for menu selections. The following is a list of the primary keys you will use:
o Tab key or Arrow keys: Navigate from one selection to another
o Space Bar: Toggle selections on or off
o Enter key: Accept the current selections and proceed to the next step (some keyboards may have a Return key rather than an Enter key)
Installation:
The following steps will guide you through a basic installation of Ubuntu Linux Server 16.04. The installation process will take some time to complete and some steps will take longer than others.
[Important: Kindly take a backup before installation, as this installation may erase the data]
1. To begin the installation, insert the installation media into your computer and set the computer to boot from it. When the computer has booted from the media you will see the following language selection screen appear.
Fig. 1.6 (i): Language Selection Screen
Select the language you would like to use and press Enter. By default it is English.
2. Next you will be asked to select an action.
Fig. 1.6 (ii): Select an Action
Since we are installing Linux, we will choose the default Install Ubuntu Server by pressing the Enter key.
3. Now that we have begun the installation process, the installer will ask for the language that you would like the system to use during installation and operation. The selected language will become the default for the installed system.
Fig. 1.6 (iii): Select a language for the System
We will be using the default, English.
4. Once the system language has been selected you will be asked to select the location the system will use. This setting is used for configuring the locality of several system services.
Fig. 1.6 (iv): Select Location
We will accept the default, United States, by pressing the Enter key.
5. The installer will now ask whether or not it should try to detect your keyboard layout. You can select <Yes> to allow the system to detect the keyboard layout. If it is successful you will automatically skip to step 8.
If the detection is not successful you will need to complete the manual selection process in steps 6 and 7 as if you had selected <No>.
Here, we will select <No> to allow us to manually select our keyboard layout. Pressing the Tab key will allow you to move between selections.
Fig. 1.6 (v): Detect the Keyboard
6. The first step in selecting your keyboard's layout is to choose the Country of Origin for the keyboard.
Fig. 1.6 (vi): Select country for the keyboard
We will choose English (US), which is the default.
7. Next you will be asked to select the specific layout within the Country of Origin of your keyboard.
Fig. 1.6 (vii): Select Specific Keyboard Layout
We will again choose English (US), which is the default.
8. Your system will now try to automatically configure your network options. If it is unable to do this you will be presented with the following failure message.
Fig. 1.6 (viii): Configure the network automatically
To continue to the network configuration step press the Enter key.
9. To configure your network settings you will be presented with four options:
o Retry network autoconfiguration
o Retry network autoconfiguration with a DHCP hostname
o Configure network manually
o Do not configure the network at this time
The first two options allow you to retry the auto-configuration process. These options are useful if you can correct the reason your network was not able to be configured automatically and you wish to retry the automatic configuration.
You can also choose to skip configuring the network by selecting Do not configure the network at this time. If you choose to skip configuring the network, you will need to manually configure your network settings after installation completes, before your system will be able to communicate with other servers on your network.
For our example, we will select Configure network manually.
Fig. 1.6 (ix): Configure Network manually
After you have made your selection, press the Tab key until you reach it and then press the Enter key to continue.
10. Now that we have selected Configure network manually, we will be asked to enter the Internet Protocol (IP) address for our system. If you do not know your IP address, please consult your network administrator for the information.
In this example, we will use the IP address 1.2.3.4.
Fig. 1.6 (x): IP address
In the provided field, enter the IP address of 1.2.3.4. When done, press the Tab key until you get to <Continue> and then press the Enter key.
11. You will next be prompted to enter the network mask for your network. Again, if you do not know your network mask, please consult your network administrator for the information.
In our example, we will use the default 255.255.255.0 as our network mask.
Fig. 1.6 (xi): Network Mask
Since we are accepting the default network mask of 255.255.255.0, simply press the Tab key until you get to <Continue> and then press the Enter key.
DevOps : MCA [Management - Sem. IV] | Introduction to DevOps
12. In this step, we will enter the IP address of the network gateway for our network. Your network administrator can provide this information to you if you do not know what it is. In our example, we will use the address 1.2.3.1 as our gateway address. The gateway is an IP address (four numbers separated by periods) that indicates the gateway router, also known as the default router. All traffic that goes outside your LAN (for instance, to the Internet) is sent through this router. In rare circumstances, you may have no router; in that case, you can leave this field blank. If you don't know the proper answer to this question, consult your network administrator.

Fig. 1.6 (xii): IP address of the network gateway

In the provided field, enter the IP address of 1.2.3.1 for the gateway address. When done, press the Tab key until you get to <Continue> and then press the Enter key.
13. In this step, we will enter the IP address of the primary name server for our network. Your network administrator can provide this information to you if you do not know what it is. Here, we will use the address 8.8.8.8 as our name server address. The name servers are used to look up host names on the network. You may enter the IP addresses (not host names) of up to 3 name servers, separated by spaces; do not use commas. The first name server in the list will be the first to be queried. If you don't want to use any name server, just leave this field blank.

Fig. 1.6 (xiii): Name server addresses

In the field provided, enter the IP address of 8.8.8.8 for the name server address. When done, press the Tab key until you get to <Continue> and then press the Enter key.
14. Next you will be asked to enter the name that this host will be known as. This name can be a single word (no spaces) and should not contain special characters such as "%". It is common, however, for system administrators to utilize a dash "-" in their host names (such as web-server-1).

Fig. 1.6 (xiv): Hostname

When selecting your host's name, it is important to select a meaningful name to prevent confusion with other hosts on your network. Since the host we are building will not appear on a network, we will leave the host name as its default value of ubuntu. Press the Tab key until you get to <Continue> and then press the Enter key.
15. In this step, you will be asked to enter the full name of the primary user of the system. (Note: On Ubuntu systems, this user is NOT the root superuser but will have administrative capabilities.) A user account will be created for you to use instead of the root account for non-administrative activities. This information will be used, for instance, as the default origin for emails sent by this user, as well as by any program which displays the user's real name. Your full name is a reasonable choice.

Fig. 1.6 (xv): Real name for the new user

On our system we will set this user name to TechOnTheNet, but you may choose to use your full name. Press the Tab key until you get to <Continue> and then press the Enter key.
16. Next, we will enter the user name that we will use to log in. This name should be lowercase and not include spaces or non-alphanumeric characters (characters that are not numbers or alphabetic characters). The username should start with a lower-case letter, which can be followed by any combination of numbers and more lower-case letters; your first name is a reasonable choice.

Fig. 1.6 (xvi): Username for the new account

In our example, we will use techonthenet for our user account. Press the Tab key until you get to <Continue> and then press the Enter key.
17. On this screen, we will need to enter the password we would like to use for the techonthenet user account. It is important to choose a strong password that cannot be easily guessed and that you'll remember! (Typically strong passwords are more than 8 characters long, contain upper/lower case characters and have numbers or special characters such as a "$".) A good password will contain a mixture of letters, numbers and punctuation and should be changed at regular intervals.

Fig. 1.6 (xvii): Set up a password

After entering the password, press the Tab key until you get to <Continue> and then press the Enter key.
18. In this step, you will be asked to re-enter the password you used in the previous step. This is to ensure that the passwords match.

Fig. 1.6 (xviii): Re-enter password

After re-entering the password, press the Tab key until you get to <Continue> and then press the Enter key.
19. If you entered a weak password, the installer will prompt you to confirm that you want to proceed or re-enter a stronger password. (A password that consists of fewer than eight characters is considered too weak.)

Fig. 1.6 (xix): Confirmation for the password

Since our machine will only be used for this tutorial, we will accept the risk and continue. If you choose <No> you will have to repeat the password selection (Steps 17 and 18). If you choose <Yes> you will proceed to the next step.
20. Ubuntu allows you to encrypt your home directories for security. This is useful for situations where users require security on items they keep in their home directories: any files stored there remain private even if your computer is stolen. The system will seamlessly mount your encrypted home directory each time you log in and automatically unmount it when you log out of all active sessions. Here, we will select <No> and not encrypt the home directories.

Fig. 1.6 (xx): Confirmation to encrypt the home directories

21. In this step, you will configure the clock and choose the time zone your computer will use. The system time services will use this setting to display the correct local time.
If the desired time zone is not listed, go back to the step "Choose language" and select a country that uses the desired time zone (the country in which you live or are located).

Fig. 1.6 (xxi): Configure the clock

On our system, we will select the Pacific Time zone.
22. Ubuntu will now ask you to configure your hard disk partitions. There are several choices available:
o Guided - use entire disk: Creates a non-LVM (Logical Volume Manager) partition layout which follows a more traditional UNIX layout. This scheme creates fixed partitions which cannot be easily changed without re-installation or advanced knowledge of Linux.
o Guided - use entire disk and set up LVM: Allocates a small boot partition and places the remaining available disk space into a logical volume in which the other partitions will be created. LVM allows additional flexibility in how the logical volume will be laid out or changed in the future.
o Guided - use entire disk and set up encrypted LVM: Creates a similar partition layout as the previous option but encrypts the logical volume with a password.
o Manual: Allows manual configuration of disk partitions. This is an advanced mode typically used by experienced UNIX administrators and allows full control over the layout of the partitions.
Here, we will choose the second option, which is Guided - use entire disk and set up LVM.

Fig. 1.6 (xxii): Select disk partition method

23. Next we will need to select the hard disk that we will apply the hard drive partitions to. Since our computer contains only one hard drive, we can accept the defaults and continue. If your system has more than one hard disk, you will need to select the appropriate hard drive your system is set to boot from. Note that all data on the disk you select will be erased, but not before you have confirmed that you really want to make the changes.

Fig. 1.6 (xxiii): Select disk to partition

24. The installer will confirm that you are prepared to write the partition layout to the hard disk you have selected. Before the Logical Volume Manager can be configured, the current partitioning scheme has to be written to disk, and these changes cannot be undone. After the Logical Volume Manager is configured, no additional changes to the partitioning of disks containing physical volumes are allowed during the installation, so decide whether you are satisfied with the current partitioning scheme before continuing.

Fig. 1.6 (xxiv): Save changes to disks and configure LVM

If you are ready to apply your selected partition layout to the hard disk, tab to <Yes> and then press the Enter key.
25. In this step, you will be entering the amount of hard disk space you would like the installer to use for Ubuntu Linux Server. The installer will by default fill in the total amount of space available on your hard disk, but you can lower this value to leave some space available for other purposes. Values are typically entered in MB (megabytes), GB (gigabytes) or TB (terabytes).
You may use the whole volume group for guided partitioning, or only part of it. If you use only part of it, or if you add more disks later, you will be able to grow the logical volumes later using the LVM tools, offering more flexibility. Hint: "max" can be used as a shortcut to specify the maximum size, or you can enter a percentage (e.g. "20%") to use that percentage of the maximum size.

Fig. 1.6 (xxv): Amount of volume group to use for guided partitioning

We will leave the value unchanged, which will tell the installer to utilize all available hard disk space. Press the Tab key until you get to <Continue> and then press the Enter key.
26. The installer will confirm that you are ready to write the partition information to the hard disk. If you continue, the changes listed on the screen will be written to the disks (for example, the logical volumes root and swap_1 in volume group ubuntu-vg, formatted as ext4 and swap, and partition #1 of the disk formatted as ext2); otherwise, you will be able to make further changes manually.

Fig. 1.6 (xxvi): Save changes to disks

Since these settings are correct and we are ready to write the partition information, we will select <Yes> to continue.
27. If you utilize a HTTP proxy on your network, you can enter the proxy information on this screen. The proxy information should be given in the standard form of "http://[[user][:pass]@]host[:port]/"; otherwise, leave the field blank.

Fig. 1.6 (xxvii): HTTP proxy information

We will not provide any HTTP proxy information, so we press the Tab key until we get to <Continue> and then press the Enter key.
28. Next, the installer will ask if you would like to configure automatic system updates. Applying updates on a frequent basis is an important part of keeping your system secure. The options available are:
o No automatic updates: No updates will be applied automatically by the system. System updates will require manual intervention by a system administrator using package management tools.
o Install security updates automatically: Updates which resolve security issues will be applied automatically once per day.
o Manage system with Landscape: System updates will be managed externally by Canonical's Landscape application suite.

Fig. 1.6 (xxviii): Options to manage upgrades on the system

It is always a good idea to ensure security updates are applied in a timely manner, so we will select the Install security updates automatically option. (TIP: If you are concerned that an automatic update might break other software or services the host offers, you may want to select No automatic updates and apply the updates manually when required.)
29. At this step, we will be asked to select any additional software or services we would like to install on the host. By default the standard system utilities option is selected. This option contains many of the system utilities that we will need to manage our system, so we will leave it selected.

Fig. 1.6 (xxix): Select the additional software/services to install on the host
Since we would like to be able to log into the host using secure shell (SSH) from another host on the network, we will also select OpenSSH server.
You can select a menu option by pressing the Space Bar. Moving between menu items can be accomplished by using your keyboard's Arrow keys. When you are finished selecting the software, select <Continue>.
30. The system will now ask to install the GRUB boot loader onto the master boot record of your hard disk. GRUB is used during the boot up process to enable Ubuntu Linux Server to load. If this new installation is the only operating system on the computer, it is safe to install the GRUB boot loader to the master boot record of the first hard drive. (Warning: If the installer failed to detect another operating system that is present on your computer, modifying the master boot record will make that operating system temporarily unbootable, though GRUB can be manually configured later to boot it.)

Fig. 1.6 (xxx): Install the GRUB boot loader

We will select <Yes> to install the GRUB boot loader.
31. Now the installation is complete. The installer will prompt you to reboot the computer. Make sure to remove the installation media (CD-ROM, floppies), so that you boot into the new system rather than restarting the installation.

Fig. 1.6 (xxxi): Finish the installation

Select <Continue> to reboot into Ubuntu Linux Server.
32. If all goes well, in a few minutes you will see a login prompt similar to the following screenshot:

    Ubuntu 16.04 LTS ubuntu tty1
    ubuntu login:

Fig. 1.6 (xxxii): Log-in screen

You have successfully installed Ubuntu Server Linux. Now you can log in using the user name and password you configured during Steps 16 and 17.

1.21 RPM AND YUM INSTALLATION
YUM (Yellowdog Updater Modified) is an open-source and free command-line package management utility for systems executing the Linux OS with the help of the RPM package manager. Many other tools offer a GUI to YUM functionality because YUM contains a command-line interface.
YUM permits automatic updates and package dependency management on RPM-based distros. YUM implements software repositories (sets of packages) that can be used locally or over a network connection, similar to the Advanced Package Tool (APT) from Debian.
Installing YUM in Ubuntu:
Step 1: Update the System
We need to execute the update command for getting the latest package information and updating the package repositories:
$ sudo apt update

    $ sudo apt update
    [sudo] password for krishu:
    Hit:1 http://ppa.launchpad.net/gezakovacs/ppa/ubuntu focal InRelease
    Hit:2 http://in.archive.ubuntu.com/ubuntu focal InRelease
    Hit:3 http://in.archive.ubuntu.com/ubuntu focal-updates InRelease
    Hit:4 http://in.archive.ubuntu.com/ubuntu focal-backports InRelease
    Get:5 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB]
    Get:6 http://security.ubuntu.com/ubuntu focal-security/main i386 Packages [441 kB]
    ...

Fig. 1.7 (a)
Introduction to DevOps
DevOps : MCA [Management - Sem. IV] 1.56 -
DevOps : MCA(Management Sem, IV
1.57 Introduction to DevOps
: 3.
Step 2 Install YUM Which of the following is not a feature of continuous delivery?
We need to execute the install command for quickly installing the
packages and thei
(a) Automate Everything (b) Continuous Improvement
dependencies:
$ sudo apt-get install
yum (c) Bug fixes and experiments (d) Gathering Requirement

ubuntu@ubuntu-Virtual Box:-$ sudo apt-get install yum 4. What is the use of Git?
(a) Version Control System tool
[sudo] password for ubuntu: (b) Continuous Integration tool
Reading package lists... Done (c) Containerization tool (4) Continuous Monitoring tool

Building dependency tree .. 5. What type of mindset is the core of a DevOps culture?
Reading state infornation. Done Service Mindset
(a) (b) Skill Mindset
People Mindset
(c) (d) Process Mindset
Fig. 1.7 (b)
Summary
o This chapter gives an idea about DevOps and its operations.
o DevOps (a combination of the two words "development" and "operations") is the combination of practices and tools designed to increase an organization's ability to deliver applications and services faster than traditional software development processes.
o DevOps is used to increase an organization's speed at the time of delivering applications and services. Many companies have successfully implemented DevOps to enhance their user experience, including Amazon, Netflix, etc.
o There are various terminologies used in DevOps such as Container, Commit, Agent, Deployment etc.
o DevOps is the grouping of cultural philosophies, practices, and tools that increases an organization's ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes.
o There are various tools like Puppet and Ansible used for the management of DevOps.
o DevOps configuration is the evolution and automation of the systems administration role, bringing automation to infrastructure management and deployment.
o This chapter also shows how to install the Linux OS and use its basic commands.
Check Your Understanding
1. Identify the method that least impacts the establishment of the DevOps methodology.
(a) Waterfall Software Delivery (b) Lean Manufacturing
(c) Continuous Software Delivery (d) Agile Software Delivery
2. What is DevOps?
(a) A small team of people that own everything related to a particular service
(b) Developers performing operations
(c) Developers and Operations team members working together
(d) None of the above
3. Which of the following is not a feature of continuous delivery?
(a) Automate Everything (b) Continuous Improvement
(c) Bug fixes and experiments (d) Gathering Requirement
4. What is the use of Git?
(a) Version Control System tool (b) Continuous Integration tool
(c) Containerization tool (d) Continuous Monitoring tool
5. What type of mindset is the core of a DevOps culture?
(a) Service Mindset (b) Skill Mindset
(c) People Mindset (d) Process Mindset
6. Which statement does NOT define DevOps?
(a) DevOps is a framework and job title that focuses on structured processes to organize flow between the Development and Operations teams.
(b) DevOps is a movement or practice that emphasizes collaboration and communication of both software developers and other Information Technology (IT) professionals.
(c) DevOps is about experiences, ideas, and culture.
(d) DevOps is an activity of optimizing the development-to-operations value stream by creating an increasingly smooth, fast flow of application changes from development into operations.
7. What is the difference between Continuous Delivery and Continuous Deployment?
(a) Continuous delivery is a manual task, while continuous deployment is an automated task.
(b) Continuous delivery has a manual release to production decision, while continuous deployment has releases automatically pushed to production.
(c) Continuous delivery includes all steps of the software development life cycle; continuous deployment may skip a few steps such as validation and testing.
(d) Continuous delivery means complete delivery of the application to the customer; continuous deployment includes only deployment of the application in the customer environment.
8. ____ creates an extra merge commit every time you need to incorporate changes.
(a) Git Merge (b) Git Push
(c) Git Fork (d) Git Fetch
9. How will you find all the changes that were made in a certain commit?
(a) git diff-tree -a <commit id> (b) git diff-tree -d <commit id>
(c) git diff-tree -r <commit id> (d) git diff-tree -s <commit id>
Answers
1. (a) 2. (a) 3. (d) 4. (a) 5. (a) 6. (d) 7. (b) 8. (a) 9. (c)
Practice Questions
Q.I Answer the following questions in short.
1. What is DevOps?
2. How is the Lean model better than SDLC?
3. State various processes used to implement Agile.
4. What is ITIL?
5. What are the various phases of the DevOps lifecycle?
6. Who are DevOps stakeholders?
7. State any two goals of DevOps.
8. What is BDD?
9. What is Continuous Delivery?
10. What is MTBF?
11. What is MTTR?
12. What is Database Versioning?
13. State the best practices of version control for DevOps.
14. What is a Kernel?
15. What are environment variables?
16. What is YUM?
Q.II Answer the following questions.
1. What are the various processes used to implement Agile?
2. Explain the various phases of the DevOps life cycle.
3. What are the best practices to follow when using DevOps?
4. What are the various advantages and disadvantages of DevOps?
5. What are the various goals of DevOps?
6. Explain the DevOps perspective of Infrastructure and Environment.
7. What are the key differences between DevOps and Agile?
8. State some features of any one DevOps tool you like.
9. What is Configuration Management?
10. Write a short note on Configuration Management.
11. What are the three approaches of continuous integration and deployment?
12. Explain the architecture of the Linux OS.
13. Explain any four basic commands in Linux.
14. What are the roles and responsibilities of a Linux DevOps Manager?
15. Explain any four Linux networking and troubleshooting commands.
Q.III Write short notes on:
1. Continuous Integration and Continuous Delivery (CI/CD).
2. DevOps Life Cycle.
3. Architecture of Linux.
4. Linux Networking Commands.
5. Environment Variables.

2
Version Control - GIT

Objectives...
After learning this chapter you will be able to:
o Understand the concept of version control systems, GIT and its types.
o Understand the basics of GIT and its installation.
o Learn various commands of GIT.
o Understand repository creation using GIT with the branching concept.

2.1 INTRODUCTION TO GIT
Version Control Systems (VCS) are used by all software developers during all phases of project development. These version control systems help them to manage software code and to keep the history of all versions of all software code, projects, and objects. In software engineering all the software developers must communicate with each other to get a better outcome. Version control systems support versioning in a collaborative framework so that all software engineers work efficiently. Without the use of a VCS, collaboration becomes challenging. The aim of this chapter is to give an idea about version control systems with Git: its history, installation and branching concept.
Before version control systems, software developers did not have an efficient way to collaborate on their code. Software developers had a hectic time while trying to work on the same code at the same time. They improvised by mailing each other code for sharing, they stored their code on USB sticks and physical floppy disks as backups, and they made sure to work in small teams and work on different parts of a system. This was manageable for small projects, but people needed large systems that could suit their needs. These challenges led to the need for a version control system where developers could effectively collaborate on code and keep backups of various versions of a project.
But before moving into GIT, we should have a basic understanding of Version Control Systems (VCS) and Git.
DevOps : MCA [Management - Sem. IV] | Version Control - GIT
2.2 WHAT IS GIT?
o Git is a free and open-source Distributed Version Control System (DVCS) designed to handle everything from small to very large projects with speed and efficiency (Global Information Tracker).
o GIT relies on the basis of distributed development of software, where more than one developer may have access to the source code of a specific application and can modify changes to it that may be seen by other developers.
o GIT has the functionality, performance, security, and flexibility that most teams and individual developers need.
o GIT was initially designed and developed by Linus Torvalds for Linux kernel development in 2005.
o Every GIT working directory is a full-fledged repository with complete history and full version tracking capabilities, independent of network access or a central server.
o GIT allows a team of people to work together, all using the same files. And it helps the team cope with the confusion that tends to happen when multiple people are editing the same files.
2.3 ABOUT VERSION CONTROL SYSTEMS AND TYPES
Version control is a system that records changes to a file or set of files over time so that you can recall specific versions later. For the examples in this chapter, you will use software source code as the files being version controlled, though in reality you can do this with nearly any type of file on a computer.
If you are a graphic or web designer and want to keep every version of an image or layout (which you would most certainly want to), a Version Control System (VCS) is a very wise thing to use. It allows you to revert selected files back to a previous state, revert the entire project back to a previous state, compare changes over time, see who last modified something that might be causing a problem, who introduced an issue and when, and more. Using a VCS also generally means that if you screw things up or lose files, you can easily recover. In addition, you get all this for very little overhead.
Version control systems are a type of software tool that helps in recording changes made to files by keeping track of modifications made to the code. When we develop a software product, most of the time we collaborate as a group. Groups of software developers might be located at different locations, and each one of them contributes some specific kind of functionality or feature. To contribute to the product, they make modifications to the source code. The version control system helps developers to efficiently communicate and manage (track) all the changes that have been made to the source code, along with information like who made a change and what change has been made.
Local Version Control Systems:
Many people's version-control method of choice is to copy files into another directory (perhaps a time-stamped directory, if they're clever). This approach is very common because it is so simple, but it is also incredibly error prone. It is easy to forget which directory you're in and accidentally write to the wrong file or copy over files you don't mean to.
To deal with this issue, programmers long ago developed local VCSs that had a simple database that kept all the changes to files under revision control.

Fig. 2.1: Local Version Control Systems

One of the most popular VCS tools was a system called RCS (Revision Control System), which manages multiple revisions of files and is still distributed with many computers today. RCS works by keeping patch sets (that is, the differences between files) in a special format on disk; it can then re-create what any file looked like at any point in time by adding up all the patches.
Version Control Systems can be divided into two categories:
1. Centralized Version Control System (CVCS)
2. Distributed Version Control System (DVCS)
1. Centralized Version Control System (CVCS):
The major issue with RCS is that people need to collaborate with developers on other systems. To deal with this problem, Centralized Version Control Systems (CVCSs) were developed. These systems (such as CVS, Subversion, and Perforce) have a single server that contains all the versioned files, and a number of clients that check out files from that central place. For many years, this has been the standard for version control.
DevOps : MCA [Management - Sem. IV]                Version Control - GIT
Centralized version control systems contain just one repository, and each user gets their own working copy. You need to commit to reflect your changes in the repository. It is possible for others to see your changes by updating. In CVCS, two things are required to make your changes visible to others: commit and update.
Even though it seems pretty good to maintain a single repository, it has some major drawbacks:
• It is not locally available: you always need to be connected to a network to perform any action.
• High probability of losing data: since everything is centralized, if the central server crashes, the entire data of the project may be lost.
[Figure: a central Server Computer holds the version database (Versions 1-3); Computer A and Computer B each check files out from it]
Fig. 2.2: Centralized Version Control System
[Figure: Computer A and Computer B each hold a complete version database (Versions 1-3) of their own]
Fig. 2.3: Distributed Version Control System
2. Distributed Version Control System (DVCS):
Distributed version control systems contain multiple repositories. Each developer has their own repository and working copy. That means even if you commit some changes, other developers have no access to them, because the commit reflects those changes only in your local repository; you need to push them in order to make them visible on the central repository. Similarly, when you update, you do not get others' changes unless you have first pulled those changes into your repository. In DVCS, there are four main operations: commit, push, pull, and update.
If you use DVCS, you can have the following advantages over CVCS:
• All operations (except push and pull) are very fast because they only need to access the hard drive, not a remote server. Hence, you do not always need an internet connection.
• Committing new change-sets can be done locally without manipulating the data on the main repository.
• Since every contributor has a full copy of the project repository, they can share changes with one another whenever they want.
• If the central server crashes at any point of time, the lost data can be easily recovered from any one of the contributors' local repositories.
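These advantages can be seen in a quick sketch that runs entirely against the local disk: no server or remote is configured at any point (the directory and file names below are illustrative).

```shell
# Sketch: in a DVCS, committing and viewing history are purely local operations.
set -e
repo=$(mktemp -d)                        # throwaway directory standing in for a project
cd "$repo"
git init -q                              # the complete repository lives in ./.git
git config user.name "Demo User"         # identity used only for commit metadata
git config user.email "demo@example.com"
echo "hello" > notes.txt
git add notes.txt
git commit -q -m "first commit"          # recorded in the local repository only
git log --oneline                        # the full history is available offline
git remote -v                            # prints nothing: no central server involved
```

Because the entire history lives in .git, commands like git log and git commit keep working with no network; only push and pull ever contact a remote.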
2.4 DIFFERENCE BETWEEN CVCS AND DVCS
Following are the important differences between Centralized Version Control and Distributed Version Control.
Table 2.1
Sr. No. | Key            | Centralized Version Control | Distributed Version Control
1       | Working        | In CVCS, a client needs to get a local copy of the source from the server, make the changes, and commit those changes to the central source on the server. | In DVCS, each client can have a local branch as well, with a complete history on it. The client needs to push the changes to the branch, which will then be pushed to the server repository.
2       | Learning Curve | CVCS systems are easy to learn and set up. | DVCS systems are difficult for beginners. Multiple commands need to be remembered.
3       | Branches       | Working on branches is difficult in CVCS. The developer often faces merge conflicts. | Working on branches is easier in DVCS. The developer faces fewer conflicts.
4       | Offline Access | CVCS systems do not provide offline access. | DVCS systems are workable offline, as a client copies the entire repository to their local machine.
5       | Speed          | CVCS is slower, as every command needs to communicate with the server. | DVCS is faster, as the user mostly deals with a local copy without hitting the server every time.
6       | Backup         | If the CVCS server is down, developers cannot work. | If the DVCS server is down, developers can work using their local copies.

2.5 A SHORT HISTORY OF GIT
The Linux kernel is an open-source software project of large scope. During the early years of the Linux kernel maintenance (1991-2002), changes to the software were passed around as patches and archived files. In 2002, the Linux kernel project began using a proprietary DVCS called BitKeeper.
In 2005, the relationship between the community that developed the Linux kernel and the commercial company that developed BitKeeper broke down, and the tool's free-of-charge status was revoked. This prompted the Linux development community (and in particular Linus Torvalds, the creator of Linux) to develop their own tool based on some of the lessons they learned while using BitKeeper. Some of the goals of the new system were as follows:
• Speed
• Simple design
• Strong support for non-linear development (thousands of parallel branches)
• Fully distributed
• Able to handle large projects like the Linux kernel efficiently (speed and data size)
Since 2005, Git has evolved to be easy to use while retaining these initial qualities. It is very useful for large projects, and its incredible branching system supports non-linear development.
In 2005, needing a new version control system for Linux kernel development, Linus Torvalds went offline for a week, wrote a revolutionary new system from scratch, and called it Git. Fifteen years later, the platform is the undisputed leader in a crowded field.
Worldwide, many start-ups, collectives and multinationals, including Google and Microsoft, use Git to maintain the source code of their software projects. Some host their own Git projects; others use Git via commercial hosting companies such as GitHub (founded in 2007), Bitbucket (founded in 2010) and GitLab (founded in 2011). The largest of these, GitHub, has 40 million registered developers and was acquired by Microsoft for a whopping $7.5 billion in 2018.
Git (and its competitors) is sometimes categorized as a version control system (VCS), sometimes as a source code management system (SCM), and sometimes as a revision control system (RCS). The best indication of Git's market dominance is a survey of developers by Stack Overflow.
Traditionally, version control was client-server: the code sits in a single repository, or repo, on a central server. Concurrent Versions System (CVS), Subversion and Team Foundation Version Control (TFVC) are all examples of client-server systems.
A client-server VCS works fine in a corporate environment, where development is undertaken by an in-house development team with good network connections. It does not work so well if you have a collaboration involving hundreds or thousands of developers working voluntarily, independently and remotely, all eager to try out new things with the code, which is typical of Open Source Software (OSS) projects such as Linux.
2.6 GIT BASICS
Git Life Cycle:
Following are the stages of the Git life cycle:
1. Local working directory: The first stage of a Git project life cycle is the local working directory where our project resides, which may or may not be tracked.
2. Initialization: To initialize a repository, we give the command git init. With this command, we make Git aware of the project files in our repository.
3. Staging Area: Now that our source code files, data files, and configuration files are being tracked by Git, we add the files that we want to commit to the staging area with the git add command. This process can also be called indexing. The index consists of the files added to the staging area.
4. Commit: Finally, we commit our files using the git commit -m "our message" command.
[Figure: the cycle Working Directory -> (git init) Initialization -> (git add) Staging Area -> (git commit) Local Repository -> (git push) github]
Fig. 2.4: git Life Cycle
Git is a Distributed Version Control System.
Git helps you keep track of the changes you make to your code. It is basically the history tab for your code editor. If at any point while coding you hit a fatal error and do not know what's causing it, you can always revert back to a stable state, so it is very helpful for debugging. Or you can simply see what changes you made to your code over time.
[Figure: three versions of the same script side by side -- #initialFile appends 4 to list1 and prints it, #addedALine also appends 5, #makingChanges pops an element instead]
Fig. 2.5: A simple example of Version History of a File
In the above example, all three cards represent different versions of the same file. We can select which version of the file we want to use at any point of time, so we can jump to and from any version of the file in the Git time continuum.
Git also helps you synchronize code between multiple people. Imagine you and your friend are collaborating on a project, both working on the same project files. Git takes the changes you and your friend made independently and merges them into a single "master" repository. So, by using Git you can ensure you are both working on the most recent version of the repository, and you don't have to worry about mailing your files to each other and working with numerous copies of the original file.
Git Workflow:
Before we start working with Git commands, it is necessary that you understand what it represents.
What is a Repository?
A repository, a.k.a. repo, is nothing but a collection of source code.
There are four fundamental elements in the Git workflow: Working Directory, Staging Area, Local Repository and Remote Repository.
[Figure: Working Directory -> (Git Add) Staging Area -> (Git Commit) Local Repo (HEAD) -> (Git Push) Remote Repo (MASTER); Git Fetch brings remote commits down, Git Merge integrates them, and Git Pull combines both]
Fig. 2.6: Diagram of a simple Git Workflow
If you consider a file in your Working Directory, it can be in three possible states:
1. It can be staged: which means the file with the updated changes is marked to be committed to the local repository but is not yet committed.
2. It can be modified: which means the file with the updated changes is not yet stored in the local repository.
3. It can be committed: which means that the changes you made to your file are safely stored in the local repository.
git Commands:
git add is a command used to add a file that is in the working directory to the staging area.
git commit is a command used to add all files that are staged to the local repository.
git push is a command used to add all committed files in the local repository to the remote repository. So in the remote repository, all files and changes will be visible to anyone with access to the remote repository.
git fetch is a command used to get files from the remote repository to the local repository, but not into the working directory.
git merge is a command used to get the files from the local repository into the working directory.
git pull is a command used to get files from the remote repository directly into the working directory. It is equivalent to a git fetch followed by a git merge.
Process to place a file in Git:
1. Make a gitHub account.
Create a new repository on gitHub. (You can learn more about this in the link Create a repo - gitHub Docs.) First, let's create your user account.
[Figure: the gitHub sign-up page]
Fig. 2.7
2. Make sure you have Git installed on your machine.
If you are on a Mac, fire up the terminal and enter the following command:
$ git --version
This will prompt open an installer if you don't already have Git, so set it up using the installer. If you have Git already, it will just show you which version of Git you have installed.
If you are running Linux (deb), enter the following in the terminal:
$ sudo apt install git-all
If you are on Windows:
$ get a mac
3. Tell Git who you are.
Introduce yourself. Mention your Git username and email address, since every Git commit will use this information to identify you as the author.
$ git config --global user.name "YOUR_USERNAME"
$ git config --global user.email "im_satoshi@musk.com"
$ git config --global --list # To check the info you just provided
4. Generate/check your machine for existing SSH keys.
Using the SSH protocol, you can connect and authenticate to remote servers and services. With SSH keys, you can connect to gitHub without supplying your username or password at each visit.
If you did set up SSH, in every Git command that has a link you replace it:
Instead of: https://github.com/username/reponame
You use: git@github.com:username/reponame.git
[Note: You can use both ways alternatively.]
5. Let us use Git.
Now, locate the folder you want to place under Git in your terminal:
$ cd Desktop/folder_name
Initialize Git. To place the folder under Git, enter:
$ touch README.md # to create a README file for the repository
$ git init # initiates an empty git repository
Now edit the README.md file to provide information about the repository.
6. Add files to the staging area for commit.
Now add the files to the Git repository for commit:
$ git add . # adds all the files in the local repository and stages them for commit
OR if you want to add a specific file, then use the following command:
$ git add README.md # To add a specific file
Before we commit, let us see what files are staged:
$ git status # Lists all new or modified files to be committed
Commit changes you made to your Git repo:
Now commit the files you added to your Git repo:
$ git commit -m "First commit"
# The message in the "" is given so that other users can read the message and see what changes you made.
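Steps 5 and 6 above can be run end-to-end as one sketch. The gitHub-side steps are skipped so everything stays local; the identity values and messages are placeholders:

```shell
# Sketch: initialize a repository, stage a file, inspect the stage, and commit.
set -e
cd "$(mktemp -d)"                          # stand-in for Desktop/folder_name
touch README.md                            # create a README file for the repository
git init -q                                # initiates an empty git repository
git config user.name "YOUR_USERNAME"       # per-repo identity (the text uses --global)
git config user.email "you@example.com"
echo "My project" > README.md
git add README.md                          # stage the file for commit
git status --short                         # prints "A  README.md": staged, not committed
git commit -q -m "First commit"            # record it in the local repository
git log --oneline                          # history now shows "First commit"
```

The config commands here use the repository-local scope so the sketch does not touch your global ~/.gitconfig; on a real machine you would normally set the identity once with --global, as shown in step 3.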
Uncommit changes you just made to your Git repo:
Now suppose you just made some error in your code or placed an unwanted file inside the repository. You can unstage the files you just added using:
$ git reset HEAD~1
# Remove the most recent commit
# Commit again!
Add a remote origin and push:
Now each time you make changes in your files and save them, they won't be automatically updated on gitHub. All the changes we made in the file are updated in the local repository. Now to update the changes to the master:
$ git remote add origin remote_repository_URL
# set the new remote
The git remote command lets you create, view, and delete connections to other repositories.
$ git remote -v
# List the remote connections you have to other repositories.
The git remote -v command lists the URLs of the remote connections you have to other repositories.
$ git push -u origin master # pushes changes to origin
Now the git push command pushes the changes in your local repository up to the remote repository you specified as the origin.
See the changes you made to your file:
Once you start making changes on your files and you save them, the file won't match the last version that was committed to Git. To see the changes you just made:
$ git diff # To show the file changes not yet staged
Revert back to the last committed version in the Git repo:
Now you can choose to revert back to the last committed version by entering:
$ git checkout .
OR for a specific file:
$ git checkout <filename>
View commit history:
You can use the git log command to see the history of commits you made to your files:
$ git log
Each time you make changes that you want to be reflected on gitHub, the following is the most common flow of commands:
$ git add .
$ git status # Lists all new or modified files to be committed
$ git commit -m "Second commit"
$ git push -u origin master
Cloning a Git repo:
How to download and work on other repositories on gitHub? Locate the directory you want to clone the repo into. Copy the link of the repository you want and enter the following:
$ git clone remote_repository_URL
Pushing changes to the Git repo:
Now you can work on the files you want and commit the changes locally. If you want to push changes to that repository, you either have to be added as a collaborator for the repository or you have to create something known as a pull request.
Creating a pull request:
Create a pull request to propose and collaborate on changes to a repository. These changes are proposed in a branch, which ensures that the default branch only contains finished and approved work.
1. On gitHub.com, navigate to the main page of the repository.
2. In the "Branch" menu, choose the branch that contains your commits. Kindly refer to Fig. 2.8 below.
[Figure: the gitHub branch drop-down ("Switch branches/tags") listing branches such as my-patch-1, with a "Create branch ... from 'main'" option and a "View all branches" link]
Fig. 2.8
3. Above the list of files, in the yellow banner ("octo-repo had recent pushes less than a minute ago"), click Compare & pull request to create a pull request for the associated branch.
4. Use the base branch drop-down menu to select the branch you'd like to merge your changes into, and then use the compare branch drop-down menu to choose the topic branch you made your changes in.
5. Type a title and description for your pull request.
6. To create a pull request that is ready for review, click Create Pull Request. To create a draft pull request, use the drop-down and select Create Draft Pull Request, then click Draft Pull Request.
Collaborating:
So imagine you and your friend are collaborating on a project, both working on the same project files. Each time you make some changes and push them into the master repo, your friend has to pull the changes that you pushed into the Git repo. Meaning, to make sure you're working on the latest version of the Git repo each time you start working, a git pull command is the way to go.

2.7 GIT COMMAND LINE
To add a new file from the command line:
1. Open a terminal (or shell) window.
2. Use the "change directory" (cd) command to go to your GitLab project's folder. Run the cd DESTINATION command, changing DESTINATION to the location of your folder.
3. Choose a Git branch to work in. You can either:
• Create a new branch to add your file into. Don't submit changes directly to the default branch of your repository unless your project is very small and you're the only person working on it.
• Switch to an existing branch.
4. Copy the file into the appropriate directory in your project. Use your standard tool for copying files, such as Finder in macOS, or File Explorer in Windows.
5. In your terminal window, confirm that your file is present in the directory:
• Windows: Use the dir command.
• All other operating systems: Use the ls command. You should see the name of the file in the list shown.
6. Check the status of your file with the git status command. Your file's name should be red. Files listed in red are in your file system, but Git isn't tracking them yet.
7. Tell Git to track this file with the git add FILENAME command, replacing FILENAME with the name of your file.
8. Check the status of your file again with the git status command. Your file's name should be green. Files listed in green are tracked locally by Git, but still need to be committed and pushed.
9. Commit (save) your file to your local copy of your project's Git repository:
git commit -m "DESCRIBE COMMIT IN A FEW WORDS"
10. Push (send) your changes from your copy of the repository up to gitLab. In this command, origin refers to the copy of the repository stored at gitLab. Replace BRANCHNAME with the name of your branch:
git push origin BRANCHNAME
11. Git prepares, compresses, and sends the data. Lines from the remote repository (here, gitLab) are prefixed with remote: like this:
Enumerating objects: 9, done.
Counting objects: 100% (9/9), done.
Delta compression using up to 10 threads
Compressing objects: 100% (5/5), done.
Writing objects: 100% (5/5), 1.84 KiB | 1.84 MiB/s, done.
Total 5 (delta 3), reused 0 (delta 0), pack-reused 0
remote:
remote: To create a merge request for BRANCHNAME, visit:
remote: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/new?merge_request%5Bsource_branch%5D=BRANCHNAME
To https://gitlab.com/gitlab-org/gitlab.git
* [new branch] BRANCHNAME -> BRANCHNAME
branch 'BRANCHNAME' set up to track 'origin/BRANCHNAME'.
Your file is now copied from your local copy of the repository up to the remote repository at gitLab. To create a merge request, copy the link sent back from the remote repository and paste it into a browser window.

2.8 INSTALLING GIT
Git has a very light footprint for its installation. For most platforms, you can simply copy the binaries to a folder that is on the executable search $PATH.
Git is primarily written in C, which means there is a unique distribution for each supported platform. The Git installers can be found on a subpage of the official Git site (https://git-scm.com/downloads). There are several installers available there for those who don't want to go through the hassle of doing the installation manually.

2.9 INSTALLING ON LINUX
Download for Linux and Unix:
It is easiest to install Git on Linux using the preferred package manager of your Linux distribution.
Debian/Ubuntu:
For the latest stable version for your release of Debian/Ubuntu:
$ apt-get install git
Ubuntu:
$ add-apt-repository ppa:git-core/ppa
# apt update; apt install git
Fedora:
$ yum install git (up to Fedora 21)
$ dnf install git (Fedora 22 and later)

2.10 INSTALLING ON WINDOWS
1. Browse to the official Git website: https://git-scm.com/downloads
2. Click the download link for Windows and allow the download to complete.
3. Browse to the download location (or use the download shortcut in your browser). Double-click the file to extract and launch the installer.
[Figure: the git-scm.com downloads page offering the "Download for Windows" installer]
Fig. 2.9

2.11 INITIAL SETUP
How to launch Git in Windows?
Git has two modes of use:
1. A bash scripting shell (or command line)
2. Graphical User Interface (GUI)
Launch Git Bash Shell:
To launch Git Bash, open the Windows Start menu, type git bash and press Enter.
Download for macOS:
There are several options for installing Git on macOS. Note that any non-source distributions are provided by third parties, and may not be up to date with the latest source release.
Homebrew:
Install Homebrew if you don't already have it, then:
$ brew install git
Obtaining a Source Release:
If you prefer to build Git from source, or if you want the latest version of Git, visit Git's master repository. As of this writing, the master repository for Git sources is http://git.kernel.org in the pub/software/scm directory. You can find a list of all the available versions at http://kernel.org/pub/software/scm/git.
To begin the build, download the source code for version 1.6.0 (or later) and unpack it:
$ wget http://kernel.org/pub/software/scm/git/git-1.6.0.tar.gz
$ tar xzf git-1.6.0.tar.gz
$ cd git-1.6.0
Verify Git Installation:
Type the below command on the terminal:
$ git --version
git version 1.8.3.2

2.12 GIT ESSENTIALS
Working with Git Commands
The Git Command Line:
Git is simple to use. Just type git. Without any arguments, Git lists its options and the most common subcommands.
[Figure: terminal output of running git with no arguments, listing the common Git commands]
Fig. 2.10: Git Commands
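Both the verification step and the bare invocation described above can be checked quickly; the exact version number printed will vary from machine to machine:

```shell
# Sketch: confirm Git is on $PATH and view the usage/subcommand listing.
git --version          # prints something like "git version 2.x.y"
git help | head -n 3   # the same usage text a bare "git" shows, but with exit status 0
```

Note that running git with no arguments exits with a non-zero status after printing the listing, so git help is the friendlier form to use inside scripts.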
Set User Credentials:
Once you have selected a suitable distribution of Git for your platform, you'll need to identify yourself with a username and email address to Git.
In a separation of concerns most satisfying to the purist, Git does not directly support repository authentication or authorization. It delegates this in a very functional way to the protocol (commonly SSH) or operating system (file system permissions) hosting or serving up the repository. Thus, the user information provided during your first Git setup on a given machine is purely for "credit" of your code contributions.
With the binaries on your $PATH, issue the following commands just once per new machine on which you'll be using Git. Replace the username and email address with your preferred credentials.
git Configuration:
$ git config --global user.name "rajesh kale"
$ git config --global user.email "rajeshkale@gmail.com"
These commands store your preferences in a file named .gitconfig inside your home directory (~ on UNIX and Mac, and %USERPROFILE% on Windows).

2.13 CREATING REPOSITORY
To create a new local Git repo in your current directory:
$ mkdir devops
$ cd devops
$ git init
This will create a .git directory in your current directory. Then you can commit files in that directory into the repo.
$ echo "some line" > cicd.yaml
Tracking changes:
To check the current status of a project's local directories and files (modified, new, deleted, or untracked), invoke the status command:
$ git status
Commit the changes:
Issue the git commit command with a commit message, as shown on the next line. The -m indicates that a commit message follows:
$ git commit -m "initial commit"
Staging Area:
[Figure: git add moves FILE1.txt and FILE2.txt from the working directory into the staging area, and git commit moves them from the staging area into the .git repository]
Fig. 2.11: Workflow of git
The workflow described for Git can be broken into three locations:
• On the left (where FILE1.txt and FILE2.txt are), we have the working directory. This is where you make changes, and it is outside of the .git directory.
• The second place is the staging area. git add moves content from the working directory to the staging area.
• Then git commit moves it from the staging area to the permanent repository.
See what is modified:
We can see the difference using the below command:
$ git diff
Example:
[Figure: sample git diff output for cicd.yaml, showing the added line prefixed with +]
Fig. 2.12
If you want to see the difference side by side, then:
$ git config --global diff.tool vimdiff
Ignoring Things:
We can also tell Git which files we want it to ignore. This is useful for temporary and generated files. It can also be useful for very large files such as datasets.
The way that we do this is by creating a .gitignore file. This file is just a list of files to ignore. It can also recognize wildcard characters. For example, say that we want to ignore our datasets, which are saved as .csv files, and we also have some temporary files that end in .tmp; then our .gitignore file is as follows:
*.csv
*.tmp
Viewing your commits:
The full list of changes since the beginning of time, even when disconnected from all networks:
$ git log
$ git log --since=yesterday
$ git log --since=2weeks
Stashing:
Git offers a useful feature for those times when your changes are in an incomplete state, you aren't ready to commit them, and you need to temporarily return to the last committed state (e.g. a fresh checkout). This feature is named stash and pushes all your uncommitted changes onto a stack.
$ git stash
$ git stash --include-untracked
When you are ready to write the stashed changes back into the working copies of the files, simply pop them back off the stack.
$ git stash pop
Undoing Things:
One of the greatest aspects about Git is that you can undo almost anything. In the end, this means you actually can't mess up: Git always provides a safety net for you.
Fixing the Last Commit:
No matter how carefully you craft your commits, sooner or later you'll forget to add a change or mistype the commit's message. That's when the --amend flag of the git commit command comes in handy: it allows you to change the very last commit really easily.
$ git commit --amend -m "This is the correct message"
Undoing Local Changes:
If you need to discard all current changes in your working copy and want to restore the last committed version of your complete project, the git reset command is useful:
$ git reset --hard HEAD
This tells Git to replace the files in your working copy with the "HEAD" revision (which is the last committed version), discarding all local changes.

2.14 CLONING CHECKING AND COMMITTING
Example Workflow:
To create a file and commit it to the local repository:
[Figure: terminal session creating a file and committing it to the local repository]
Fig. 2.13
To clone a remote repo to your current directory:
$ git clone url localDirectoryName
This will create a copy of the files from the repo in the given local directory, containing working files and a .git directory.

2.15 FETCH PULL AND REMOTE
When comparing git pull vs git fetch, git fetch is a safer alternative because it pulls in all the commits from your remote but doesn't make any changes to your local files. On the other hand, git pull is faster, as you're performing multiple actions in one.

2.15.1 git FETCH
The git fetch command downloads commits, files, and refs from a remote repository into your local repo. Fetching is what you do when you want to see what everybody else has been working on. It is similar to svn update in that it lets you see how the central history has progressed, but it doesn't force you to actually merge the changes into your repository. Git isolates fetched content from existing local content; it has absolutely no effect on local development work. Fetched content has to be explicitly checked out using the git checkout command. This makes fetching a safe way to review commits before integrating them with your local repository.
When downloading content from a remote repo, git pull and git fetch commande whic output displays
are available to accomplish the task. You can consider git fetch the 'safe' version nt the local branches we
tom prefixed with origin/. Additionally, had previously examined
we see the remote but now displays
the two commands. git fetch will download the remote content but not update your
local repo's working state, leaving your current work intact. git pull is the more
aggressive alternative; it will download the remote content for the active local branch
and immediately execute git merge to create a merge commit for the new remote
content. If you have pending changes in progress, this will cause conflicts and kick off
the merge conflict resolution flow.

How git fetch works with remote branches?
To better understand how git fetch works, let us discuss how Git organizes and
stores commits. Behind the scenes, in the repository's ./.git/objects directory, Git
stores all commits, local and remote. Git keeps remote and local branch commits
distinctly separate through the use of branch refs. The refs for local branches are
stored in ./.git/refs/heads/. Executing the git branch command will output a
list of the local branch refs.
The following is an example of git branch output with some demo branch names:
git branch
main
feature1
debug2
Examining the contents of the ./.git/refs/heads/ directory would reveal similar output:
ls ./.git/refs/heads/
main
feature1
debug2
Remote branches are just like local branches, except they map to commits from
somebody else's repository. Remote branches are prefixed by the remote they belong
to so that you don't mix them up with local branches. Like local branches, Git also has
refs for remote branches. Remote branch refs live in the ./.git/refs/remotes/
directory. The next example code snippet shows the branches you might see after
fetching a remote repo conveniently named remote-repo:
git branch -r
# origin/main
# origin/feature1
# origin/debug2
# remote-repo/main
# remote-repo/other-feature
This output lists the branches fetched from remote-repo. You can check out a remote
branch just like a local one, but this puts you in a detached HEAD state (just like
checking out an old commit). You can think of them as read-only branches. To view
your remote branches, simply pass the -r flag to the git branch command. You can
inspect remote branches with the usual git checkout and git log commands. If you
approve the changes a remote branch contains, you can merge it into a local branch
with a normal git merge. So, unlike SVN, synchronizing your local repository with a
remote repository is actually a two-step process: fetch, then merge. The git pull
command is a convenient shortcut for this process.
Git fetch commands and options:
git fetch <remote>
Fetch all of the branches from the repository. This also downloads all of the required
commits and files from the other repository.
git fetch <remote> <branch>
Same as the above command, but only fetch the specified branch.
git fetch --all
A power move which fetches all registered remotes and their branches.
git fetch --dry-run
The --dry-run option will perform a demo run of the command. It will output
examples of actions it will take during the fetch but not apply them.
Examples of git fetch:
Git fetch a remote branch:
The following example will demonstrate how to fetch a remote branch and update
your local working state to the remote contents. In this example, let us assume there
is a central origin repo from which the local repository has been cloned using the
git clone command. Let us also assume an additional remote repository named
coworkers_repo that contains a feature branch which we will configure and fetch.
With these assumptions set, let us continue the example.
Firstly, we will need to configure the remote repo using the git remote command:
git remote add coworkers_repo git@bitbucket.org:coworker/coworkers_repo.git
Here we have created a reference to the coworker's repo using the repo URL. We will
now pass that remote name to git fetch to download the contents:
git fetch coworkers_repo coworkers/feature_branch
Fetching coworkers/feature_branch
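The fetch workflow above can be tried end-to-end without a hosted server. The sketch below uses throwaway local repositories as stand-ins for real remotes; the repository and branch names (coworkers_repo, feature_branch) simply follow the example in the text.

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# The coworker's repository, containing a feature branch:
git init -q coworkers_repo
(cd coworkers_repo \
  && git config user.email demo@example.com && git config user.name demo \
  && git commit -q --allow-empty -m "base" \
  && git checkout -q -b feature_branch \
  && git commit -q --allow-empty -m "feature work")

# Our repository: register the coworker's repo as a remote, then fetch.
git init -q local_repo
cd local_repo
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "local base"
git remote add coworkers_repo "$tmp/coworkers_repo"
git fetch -q coworkers_repo feature_branch

# The fetched branch appears as a read-only remote-tracking branch:
git branch -r
```

Note that the fetch only creates the remote-tracking ref; the local working state is untouched until you check the branch out or merge it.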
DevOps : MCA [Management - Sem. IV] 2.24 Version Control - GIT
We now locally have the contents of coworkers/feature_branch. We will need to
integrate this into our local working copy. We begin this process by using the git
checkout command to check out the newly downloaded remote branch:
git checkout coworkers/feature_branch
[Note: checking out 'coworkers/feature_branch'.]
You are in 'detached HEAD' state. You can look around, make experimental changes
and commit them, and you can discard any commits you make in this state without
impacting any branches by performing another checkout.
If you want to create a new branch to retain commits you create, you may do so (now
or later) by using -b with the checkout command again.
Example: git checkout -b <new-branch-name>
The output from this checkout operation indicates that we are in a detached HEAD
state. This is expected and means that our HEAD ref is pointing to a ref that is not in
sequence with our local history. Since HEAD is pointed at the
coworkers/feature_branch ref, we can create a new local branch from that ref. The
'detached HEAD' output shows us how to do this using the git checkout command:
git checkout -b local_feature_branch
Here we have created a new local branch named local_feature_branch. This
updates HEAD to point at the latest remote content and we can continue development
on it from this point.
Synchronize origin with git fetch:
The following example walks through the typical workflow for synchronizing your
local repository with the central repository's main branch.
git fetch origin
This will display the branches that were downloaded:
a1e8fb5..45e65a4 main -> origin/main
a1e8fb5..9e8ab1c develop -> origin/develop
* [new branch] some-feature -> origin/some-feature
The commits from these new remote branches are shown as squares instead of circles
in the diagram below. As you can see, git fetch gives you access to the entire branch
structure of another repository.
Fig. 2.14: Synchronize origin with git fetch
To see which commits have been added to the upstream main, you can run a git log
using origin/main as a filter:
git log --oneline main..origin/main
To approve the changes and merge them into your local main branch, use the
following commands:
git checkout main
git log origin/main
Then we can use git merge origin/main:
git merge origin/main
The origin/main and main branches now point to the same commit, and you are
synchronized with the upstream developments.

2.15.2 git PULL
The git pull command is used to fetch and download content from a remote
repository and immediately update the local repository to match that content.
Merging remote upstream changes into your local repository is a common task in Git
based collaboration work flows.
The git pull command is actually a combination of two other commands, git fetch
followed by git merge. In the first stage of operation git pull will execute a git
fetch scoped to the local branch that HEAD is pointed at. Once the content is
downloaded, git pull will enter a merge workflow. A new merge commit will be
created and HEAD updated to point at the new commit.
How does the git pull work?
The git pull command first runs git fetch which downloads content from the
specified remote repository. Then a git merge is executed to merge the new remote
content refs and heads into a local merge commit. Let us consider the following
example to better demonstrate the pull and merging process. Assume we have a
repository with a main branch and a remote origin.
Fig. 2.15: A repository with a Main branch and a remote origin
In this situation, git pull will download all the changes from the point where the
local and remote main branches separated. In this example, that point is commit E.
The git pull command will fetch the diverged remote commits, which are A-B-C. The
pull process will then create a new local merge commit containing the content of the
new diverged remote commits.
Fig. 2.16: Diverged local and remote histories before and after the pull
In the above diagram, we can see the new commit H. This commit is a new merge
commit that contains the contents of the remote A-B-C commits and has a combined
log message. This example is one of a few git pull merging strategies. A --rebase
option can be passed to git pull to use a rebase merging strategy instead of a merge
commit. The next example will show how a rebase pull works. Assume that we are at
the starting point of our first diagram, and we have executed git pull --rebase.
Fig. 2.17: A rebase pull
In this diagram, we can now see that a rebase pull does not create the new H commit.
Instead, the rebase has copied the remote commits A-B-C and rewritten the local
commits E-F-G to appear after them in the local origin/main commit history.
Common Options:
git pull <remote>
This command fetches the specified remote's copy of the current branch and
immediately merges it into the local copy. This is the same as git fetch <remote>
followed by git merge origin/<current-branch>.
git pull --no-commit <remote>
This command is similar to the default invocation; it fetches the remote content but
does not create a new merge commit.
git pull --rebase <remote>
Same as the previous pull, but instead of using git merge to integrate the remote
branch with the local one, use git rebase.
git pull --verbose
This command gives verbose output during a pull which displays the content being
downloaded and the merge details.
git pull discussion:
You can think of git pull as Git's version of svn update. It is an easy way to
synchronize your local repository with upstream changes. The following diagram
explains each step of the pulling process.
Fig. 2.18: Pulling Process
You start out thinking your repository is synchronized, but then git fetch reveals
that origin's version of main has progressed since you last checked it. Then git merge
immediately integrates the remote main into the local one.
Git pull and syncing:
git pull is one of many commands that claim the responsibility of 'syncing' remote
content. The git remote command is used to specify what remote endpoints the
syncing commands will operate on. The git push command is used to upload content
to a remote repository.
The git fetch command can be confused with the git pull command. These two
commands are both used to download remote content. An important safety distinction
can be made between git pull and git fetch. git fetch can be considered the "safe"
option while git pull can be considered unsafe. git fetch will download the remote
content and not alter the state of the local repository. Alternatively, git pull will
download remote content and immediately attempt to change the local state to match
that content. This may unintentionally cause the local repository to get into a conflicted
state.
Pulling via Rebase:
The --rebase option can be used to ensure a linear history by preventing
unnecessary merge commits. Many developers prefer rebasing over merging. In this
sense, using git pull with the --rebase flag is even more like svn update than a
plain git pull.
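The fetch-plus-merge behavior described earlier in this section can be observed end-to-end. This sketch uses a local directory as a stand-in for a hosted origin; all repository names, file names and commit messages are illustrative, and --no-rebase is passed explicitly so the default merge strategy is used regardless of local configuration.

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# A stand-in "central" repository with one commit:
git init -q origin_repo
(cd origin_repo \
  && git config user.email demo@example.com && git config user.name demo \
  && echo base > file.txt && git add file.txt && git commit -q -m "base")

git clone -q origin_repo local_repo

# Upstream moves on after our clone:
(cd origin_repo && echo remote >> file.txt && git commit -qam "remote work")

cd local_repo
git config user.email demo@example.com
git config user.name demo
echo local > local.txt; git add local.txt; git commit -q -m "local work"

# Default pull = git fetch + git merge: the diverged histories are joined
# by a new merge commit.
git pull -q --no-rebase origin
git log --oneline --merges   # shows the merge commit created by the pull
```

Re-running the same setup with `git pull --rebase origin` instead would replay "local work" on top of "remote work", leaving a linear history with no merge commit.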
In fact, pulling with --rebase is such a common workflow that there is a dedicated
configuration option for it:
git config --global branch.autosetuprebase always
After running that command, all git pull commands will integrate via git rebase
instead of git merge.
Examples of Git Pull:
The following examples demonstrate how to use the git pull command in common
situations:
(a) Default Behavior:
git pull
Executing the default invocation of git pull will be equivalent to git fetch
origin HEAD and git merge HEAD, where HEAD is a ref pointing to the current branch.
(b) Git pull on remotes:
git checkout new_feature
git pull <remote repo>
This example first performs a checkout and switches to the new_feature branch.
Following that, git pull is executed with <remote repo> being passed. This will
implicitly pull down the new_feature branch from <remote repo>. Once the
download is complete it will initiate a git merge.
(c) Git pull rebase instead of merge:
The following example demonstrates how to synchronize with the central
repository's main branch using a rebase:
git checkout main
git pull --rebase origin
This simply moves your local changes onto the top of what everybody else has
already contributed.

2.16 BRANCHING
A branch is essentially a unique set of code changes with a unique name. Each
repository can have one or more branches. The main branch is where all the changes
eventually get merged into, and it is known as the master (main) branch. Let's see how
useful Git branches are with an example.
Assume that you need to work on a new feature for an application. To start work,
you have to first create a unique branch.
Now, while working you get a request to make a quick change that needs to go live
on the application today. But the thing is, you haven't finished your new feature.
So, what you can do is switch back to the master branch, make the change, and
push it live.
Then you can switch back to your new feature branch and finish your work. When
you're done, you merge the new feature branch into the master branch, and both
the new feature and that quick change are kept.
When you merge two branches you can sometimes get a conflict. For example, you
and another developer unknowingly both work on the same part of a file. The other
developer pushes their changes to the remote repository. When you then pull them to
your local repository, you'll get a merge conflict. But Git has a way to handle these
conflicts, so you can see both sets of changes and decide which you want to keep.
Fig. 2.19: Git Branching (a small feature and a large feature branching off the
master branch)

2.17 CREATING THE BRANCHES, SWITCHING THE BRANCHES,
MERGING
Git uses branching heavily to switch between multiple tasks.
Create and Switch Branch:
1. Create a new local branch:
$ git branch name
2. List all local branches:
$ git branch
3. Switch to a given local branch:
$ git checkout branchname
4. Merge changes from a branch into the local master:
$ git checkout master
$ git merge branchname
Integration with remote Repo:
Push your local changes to the remote repo, and pull from the remote repo to get the
most recent changes.
To fetch the most recent updates from the remote repo into your local repo, and put
them into your working directory:
$ git pull origin master
To put your changes from your local repo in the remote repo:
$ git push origin master
Git Configuration File:
Git's configuration files are all simple text files.
.git/config    Repository-specific configuration settings.
~/.gitconfig   User-specific configuration settings.
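The create/switch/merge steps above can be run end-to-end in a throwaway repository. The branch name new_feature and the file name are illustrative, and `git checkout -` is used to return to whatever the initial branch is called, since its default name varies between Git versions.

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "initial"

git branch new_feature        # 1. create a local branch
git branch                    # 2. list branches; * marks the active one
git checkout -q new_feature   # 3. switch to the new branch
echo work > feature.txt
git add feature.txt
git commit -q -m "add feature"

git checkout -q -             # back to the original branch
git merge -q new_feature      # 4. merge the feature branch in
git log --oneline             # the feature commit is now on the main line
```

Because no other commits were made on the original branch, this particular merge is a fast-forward; with diverging work on both branches, the same `git merge` would create a merge commit instead.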
2.18 THE BRANCHES
In Git, branches are a part of your everyday development process. A branch is a
version of the repository that diverges from the main working project. It is a feature
available in most modern version control systems.
A Git project can have more than one branch. These branches are a pointer to a
snapshot of your changes. When you want to add a new feature or fix a bug, you
spawn a new branch to encapsulate your changes. This makes it harder for unstable
code to get merged into the main code base, and it also lets you clean up your future
history before merging with the main branch. Refer to the following image for a clear
idea of branching.
Fig. 2.20: Branching (two branches diverging from the master branch)
This makes it harder for unstable code to get merged into the main code base, and it
gives you the chance to clean up your future's history before merging it into the main
branch.
Fig. 2.21: A repository with two isolated lines of development (a little feature and a
big feature)
The above diagram visualizes a repository with two isolated lines of development, one
for a little feature, and one for a longer-running feature. By developing them in
branches, it's not only possible to work on both of them in parallel, but it also keeps
the main branch free from questionable code.
The implementation behind Git branches is much more lightweight than other
version control system models. Instead of copying files from directory to directory,
Git stores a branch as a reference to a commit. In this sense, a branch represents the
tip of a series of commits; it is not a container for commits. The history for a branch
is extrapolated through the commit relationships.
Git branches are not like SVN branches. Git branches are an integral part of your
everyday workflow, whereas SVN branches are only used to capture the occasional
large-scale development effort.
Creating Branches:
It is important to understand that branches are just pointers to commits. When you
create a branch, all Git needs to do is create a new pointer; it does not change the
repository in any other way. If you start with a repository that looks like this:
Fig. 2.22: A repository before branching
Then, you create a branch using the following command:
git branch crazy-experiment
The repository history remains unchanged. All you get is a new pointer to the current
commit:
Fig. 2.23: The crazy-experiment branch pointing at the current commit
Note that this only creates the new branch. To start adding commits to it, you need to
select it with git checkout, and then use the standard git add and git commit
commands.
Creating Remote Branches:
The git branch command also works on remote branches. In order to operate on
remote branches, a remote repo must first be configured and added to the local repo
config.
git remote add new-remote-repo https://bitbucket.com/user/repo.git
# Add remote repo to local repo config
git push <new-remote-repo> crazy-experiment
# pushes the crazy-experiment branch to new-remote-repo
This command will push a copy of the local branch crazy-experiment to the remote
repo <remote>.
Deleting Branches:
Once you have finished working on a branch and have merged it into the main code
base, you're free to delete the branch without losing any history:
git branch -d crazy-experiment
However, if the branch has not been merged, the above command will output an error
message:
error: The branch 'crazy-experiment' is not fully merged. If you are sure
you want to delete it, run 'git branch -D crazy-experiment'.
This protects you from losing access to that entire line of development. If you really
want to delete the branch (e.g. it's a failed experiment), you can use the capital -D
flag:
git branch -D crazy-experiment
This deletes the branch regardless of its status and without warnings, so use it
judiciously.
The previous commands delete a local copy of a branch. The branch may still exist
in remote repos. To delete a remote branch, execute the following command:
git push origin --delete crazy-experiment
OR
git push origin :crazy-experiment
This will push a delete signal to the remote origin repository that triggers a delete of
the remote crazy-experiment branch.
Some More Commands and Quick Review:
INSTALLATION and GUIS:
With platform-specific installers for Git, GitHub also provides the ease of staying up-
to-date with the latest releases of the command line tool while providing a graphical
user interface for day-to-day interaction, review, and repository synchronization.
GitHub for Windows: https://windows.github.com
GitHub for Mac: https://mac.github.com
For Linux and Solaris platforms, the latest release is available on the official Git
website.
Git for All Platforms: http://git-scm.com
SETUP:
Configuring user information used across all local repositories:
• git config --global user.name "[firstname lastname]"
Set a name that is identifiable for credit when reviewing version history.
• git config --global user.email "[valid-email]"
Set an email address that will be associated with each history marker.
• git config --global color.ui auto
Set automatic command line coloring for Git for easy reviewing.
SETUP and INIT:
Initializing and cloning repositories:
• git init
Initialize an existing directory as a Git repository.
• git clone [url]
Retrieve an entire repository from a hosted location via URL.
STAGE and SNAPSHOT:
Working with snapshots and the Git staging area:
• git status
Show modified files in working directory, staged for your next commit.
• git add [file]
Add a file as it looks now to your next commit (stage).
• git reset [file]
Unstage a file while retaining the changes in working directory.
• git diff
Diff of what is changed but not staged.
• git diff --staged
Diff of what is staged but not yet committed.
• git commit -m "[descriptive message]"
Commit your staged content as a new commit snapshot.
BRANCH and MERGE:
Isolating work in branches, changing context, and integrating changes:
• git branch
List your branches. A * will appear next to the currently active branch.
• git branch [branch-name]
Create a new branch at the current commit.
• git checkout
Switch to another branch and check it out into your working directory.
• git merge [branch]
Merge the specified branch's history into the current one.
• git log
Show all commits in the current branch's history.
INSPECT and COMPARE:
Examining logs, diffs and object information:
• git log
Show the commit history for the currently active branch.
• git log branchB..branchA
Show the commits on branchA that are not on branchB.
• git log --follow [file]
Show the commits that changed file, even across renames.
• git diff branchB...branchA
Show the difference of what is in branchA that is not in branchB.
• git show [SHA]
Show any object of Git in the human-readable format.
TRACKING PATH CHANGES:
Versioning file removes and path changes:
• git rm [file]
Delete the file from project and stage the removal for commit.
• git mv [existing-path] [new-path]
Change an existing file path and stage the move.
• git log --stat -M
Show all commit logs with indication of any paths that moved.
REWRITE HISTORY:
Rewriting branches, updating commits and clearing history:
• git rebase [branch]
Apply any commits of the current branch ahead of the specified one.
• git reset --hard [commit]
Clear staging area, rewrite working tree from specified commit.
TEMPORARY COMMITS:
Temporarily store modified, tracked files in order to change branches:
• git stash
Save modified and staged changes.
• git stash list
List stack-order of stashed file changes.
• git stash pop
Write working from top of stash stack.
• git stash drop
Discard the changes from top of stash stack.
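The stash commands above fit together into one short workflow: park uncommitted work, get a clean working tree (so you can safely switch branches), then restore the change. This sketch runs in a throwaway repository; the file name and contents are illustrative.

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
echo v1 > notes.txt
git add notes.txt
git commit -q -m "initial"

echo v2 > notes.txt     # work in progress, not committed
git stash               # save it; the working tree is clean again
cat notes.txt           # prints v1 - safe to switch branches now
git stash pop           # restore the stashed change (and drop it from the stack)
cat notes.txt           # prints v2 again
```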
IGNORING PATTERNS:
Preventing unintentional staging or committing of files:
logs/
*.notes
pattern*/
Save a file with desired patterns as .gitignore with either direct string matches or
wildcard globs.
• git config --global core.excludesfile [file]
System-wide ignore pattern for all local repositories.
SHARE and UPDATE:
Retrieving updates from another repository and updating local repos:
• git remote add [alias] [url]
Add a git URL as an alias.
• git fetch [alias]
Fetch down all the branches from that Git remote.
• git merge [alias]/[branch]
Merge a remote branch into your current branch to bring it up to date.
• git push [alias] [branch]
Transmit local branch commits to the remote repository branch.
• git pull
Fetch and merge any commits from the tracking remote branch.

Summary
• Version control systems are used in all phases of software development.
• Git (Global Information Tracker) is a distributed version control system made for
this purpose.
• There are various types of version control systems: Local, CVCS and DVCS.
• Git also helps you synchronize code between multiple people.
• There are various commands used for Git, starting from repository creation till
branching.
• Git branches are a part of your everyday development process. Git branches are
effectively a pointer to a snapshot of your changes.

Check Your Understanding
1. Which command is used to show a limited number of commits?
(a) git fetch (b) git log -n
(c) git config (d) git status
2. Which command defines the author email to be used for all commits by the
current user?
(a) git clean -f (b) git config --global user.email
(c) git merge --no-ff (d) git email-amend
3. ______ removes untracked files from your working directory.
(a) git commit (b) git clean -f
(c) git clean (d) git reset
4. Which of the following commands is used to compare two specific branches?
(a) git diff (b) git merge
(c) git blame -L (d) git push --tags
5. Git belongs to the ______ generation of version control tools.
(a) 1st (b) 4th
(c) 2nd (d) 3rd
6. What language is used in Git?
(a) C (b) HTML
(c) PHP (d) C++
7. How to create a copy of a lab under your own GitHub account so that you can
solve the lab?
(a) git clone (b) git fork
(c) git pull-request (d) Forking it via the GitHub interface
8. Where is a branch stored inside a Git repository?
(a) Inside the .git/refs directory
(b) Inside either the .git/refs directory or the .git/packed-refs file
(c) Inside the .git/packed-refs file
(d) Inside either the .git/branches file or the .git/packed-refs file
9. A ______ keeps track of the contributions of the developers working as a team on
the projects.
(a) CVS (b) DVF
(c) VCS (d) LFS
10. The files that can be committed are always present in the Git ______
(a) working directory (b) staging area
(c) unstaged area (d) Anywhere

Answers
1. (b) 2. (b) 3. (c) 4. (a) 5. (d) 6. (c) 7. (d) 8. (b) 9. (c) 10. (b)

Practice Questions
Q.I Answer the following questions in short.
1. What is Git?
2. State the types of version control systems.
3. What are the three possible states of a file in the working directory in the Git
workflow?
4. What is the meaning of the commands push and commit?
5. Which command is used to check whether Git is installed or not in the operating
system?
6. How to clone a remote repo to your current directory?
7. How to create and switch a local branch in Git?
8. Explain any two commands for tracking changes in Git.
Q.II Answer the following questions.
1. What are the various types of version control systems?
2. Differentiate between centralized version control systems and distributed
version control systems.
3. Explain in brief the Git life cycle.
4. Explain in brief the Git workflow.
5. State the various steps of how Git is used on the command line.
6. What are the steps to create a repository in Git?
7. Explain the git fetch command with its various options.
8. How does git pull work?
Q.III Write short notes on:
1. Local Version Control Systems
2. Staging Area
3. git fetch
4. git pull
5. Branching in Git
Chef for Configuration Management

Objectives...
After learning this chapter you will be able to:
• Understand the creation of server, workstation, client and repository for Chef
configuration management.
• Understand the actual working and handling of various Chef commands for nodes
and data bags creation.

3.1 OVERVIEW OF CHEF; COMMON CHEF TERMINOLOGY (SERVER,
WORKSTATION, CLIENT, REPOSITORY etc.), SERVERS AND
NODES, CHEF CONFIGURATION CONCEPTS
Chef is a declarative configuration management and automation platform used to
translate infrastructure into code. This enables a development and deployment
process with better testing, efficient and predictable deployments, centralized
versioning, and reproducible environments across all servers.
Chef has a client-server architecture and it supports multiple platforms like Windows,
Ubuntu, CentOS, Solaris etc. It can also be integrated with cloud platforms like AWS,
Google Cloud Platform, OpenStack etc.
Before getting into Chef deeply, let us understand Configuration Management.
For example, a system engineer in an organization wants to deploy or update
software or an operating system on more than hundreds of systems in the
organization in one day. He can do it manually, but manual work invites errors: some
software may crash while the update is in progress, and sometimes it is impossible to
revert back to the previous version. Such issues can be solved with Configuration
Management.
Configuration Management keeps all software and hardware-related information of
an organization and also repairs, deploys and updates the whole application. With the
help of Configuration Management, such tasks can be handled by multiple system
administrators and developers who manage many servers and their applications.
Chef is an important tool for configuration management in DevOps.
Need of Chef:
If you want to move to a new office and get the same hardware and software setup at
the new office, system management will do all the work, but runtime errors may
happen, so the Chef tool can be used. Chef transfers infrastructure into code, as
shown in the following Fig. 3.1.
Fig. 3.1: Need of Chef (Infrastructure Code, BVT, Continuous Integration,
Continuous Delivery, Staging, Continuous Deployment, Production)
Benefits of Chef:
1. Speed up Software Delivery: When your infrastructure is automated, all the
software requirements like testing, creating new environments for software
deployments etc. become faster.
2. Risk Management: Chef lowers risk and improves compliance at all stages of
deployment. It reduces the conflicts between the development and production
environments.
3. Increased Service Resiliency: By making the infrastructure automated, it
monitors for bugs and errors before they occur. It can also recover from errors
more quickly.
4. Cloud Acceptance: Chef can be easily adapted to a cloud environment, and the
servers and infrastructure can be easily configured, installed and managed
automatically by Chef.
5. Managing Data Centers and Cloud Environments: Chef can run on different
platforms. Under Chef you can manage all cloud and on-premise platforms,
including servers.
6. Streamlined IT Operations and Workflow: Chef provides a pipeline for
continuous deployment, starting from building to testing and all the way through
delivery, monitoring, and troubleshooting.
Common Chef Terminology:
1. Chef Server: It contains all data related to cookbooks, recipes and metadata,
which describe each node registered with the Chef-client. The server gives
configuration details to node. Server verifies all changes and does the pairine A. Nodes: Nodes ate manaed byy
workstation with nodes through the use of authorization keys, and then actus! Chef and each nade is configured
Chef Client on it. Chef-Nodes ars a by installing
communication will start between workstation and nodes. s machine such as physical. virtual
Chef-Client: Chief-client is for cloud ete
registering and authenticating node.
2. Chef Werkstation: The workstation interacts with Chef-servet and Chef-nodes,
is also used to create Cookbooks. Workstation is a place where allthe interaction
t node objects and for configuration building
of the nodes.Chief-client runs locally on every
to
node configure the node.
takes place where Coolkbooks are created, tested and deployed, and in workstation. 6. Ohai: This is sed for determining the sytem
codes are tested tWorkstation is also used for defining roles and environments Chef Client. It collects tate at beginning of Chef run in
ali the system con figuratlon data.
based on the evelopment and production environment. Some components of wORKSTATION SETUP:HOW TO CoNFIGURE
woristation are: KNIFE? EXECUTE
o
Development Kit: It contaíns all the packages tequire for using Chef. 32 soME COMMANDS TO TEST CONNECTION BETWEEN KNIFE AND
Chef Command Line Tool: In this place cookbooks are created, tested and WORKSTATION
deployed and through this policies are uploaded to Chef Server. . For setup, we hare to first configure Ruby nvironment. Ruby
o Knife: This Command Lint tool is used for interacting with Chef Nodes. is primarily used for
developing Chef Policy.
Test Kitchen: Thís is for validating Chef Code. Steps:
o Chef-Repo: This is a repository in which cookbooks are created, tested and maintained through the Chef Command Line tool.
3. Cookbooks:
o Cookbooks are created using the Ruby language, and Domain Specific Languages are used for specific resources.
Consider the following figure:
Fig. 3.2: Cookbook (a cookbook containing recipes)
A Cookbook contains recipes which specify the resources to be used and the order in which they are to be used. The Cookbook contains all the details regarding the work, and it changes the configuration of the Chef-Node.
Main components of a Cookbook:
o Attributes: These are used for overriding default settings on a node.
o Files: These are used for transferring files from a sub-directory to a specific path in the Chef-client.
o Libraries: These are written in Ruby code and used for configuring custom resources and recipes.
o Metadata: This contains information for deploying the cookbooks to each node.
o Recipes: These are configuration elements that are stored in a cookbook. Recipes can also be included in other recipes and executed based on the run-list. Recipes are created using the Ruby language.
1. Determine your default shell by running:
echo $SHELL
This command will give the path to your default shell, such as /bin/zsh for the Zsh shell.
2. Add the Workstation initialization content to the appropriate shell rc file.
For Bash shells run:
echo 'eval "$(chef shell-init bash)"' >> ~/.bashrc
For Zsh shells run:
echo 'eval "$(chef shell-init zsh)"' >> ~/.zshrc
For Fish shells run:
echo 'eval (chef shell-init fish)' >> ~/.config/fish/config.fish
3. Open a new shell window and run:
which ruby
The command should return /opt/chef-workstation/embedded/bin/ruby.

3.2.1 How to configure Knife?
Knife is a Command Line tool that provides an interface between a local chef-repo and the Chef Infra Server. The knife command line tool must be configured to communicate with the Chef Infra Server as well as any other infrastructure within the organization.
To configure knife to communicate with the Chef Infra Server for the first time, run knife configure to create a Chef Infra credentials file at ~/.chef/credentials.
Previous Chef Infra setups recommended setting up knife with a config.rb file. Configuring knife with config.rb is still valid, but only for working with a single Chef Infra Server with a single Chef Infra Server organization.
mkdir ~/.chef
touch ~/.chef/config.rb
On Windows (PowerShell):
New-Item -Path "C:\" -Name ".chef" -ItemType "directory"
New-Item -ItemType "file" -Path "C:\.chef\config.rb"
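The cookbook components described above sit in a conventional directory layout on disk. The following plain-Ruby sketch builds such a skeleton in a temporary directory purely for illustration; the cookbook name `webserver` is made up, and in practice the `chef generate cookbook` command scaffolds a comparable tree for you.

```ruby
require 'fileutils'
require 'tmpdir'

# Illustrative skeleton of a cookbook named "webserver" (hypothetical name).
# Real projects would use `chef generate cookbook webserver` instead.
root = Dir.mktmpdir
cb   = File.join(root, 'cookbooks', 'webserver')

%w[attributes files/default libraries recipes templates/default].each do |sub|
  FileUtils.mkdir_p(File.join(cb, sub))
end

FileUtils.touch(File.join(cb, 'metadata.rb'))               # name, version, dependencies
FileUtils.touch(File.join(cb, 'attributes', 'default.rb'))  # default attribute settings
FileUtils.touch(File.join(cb, 'recipes', 'default.rb'))     # resources to configure

puts Dir.glob(File.join(cb, '**', '*')).sort
```

Each sub-directory maps onto one of the cookbook components listed above (attributes, files, libraries, recipes, templates), with metadata.rb describing the cookbook itself.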
DevOps : MCA [Management - Sem. IV] Chef for Configuration Management
The config.rb configuration can include arbitrary Ruby code to extend configuration beyond static values. This can be used to load environment variables from the workstation. This makes it possible to write a single config.rb file that can be used by all users within your organization. This single file can also be checked into the chef-repo, allowing users to load different config.rb files based on which chef-repo they execute the commands from. This can be especially useful when each chef-repo points to a different Chef server or organization.
Example config.rb:
current_dir = File.dirname(__FILE__)
user = ENV['CHEF_USER'] || ENV['USER']
node_name user
client_key "#{ENV['HOME']}/chef-repo/.chef/#{user}.pem"
chef_server_url "https://api.opscode.com/organizations/#{ENV['ORGNAME']}"
syntax_check_cache_path "#{ENV['HOME']}/chef-repo/.chef/syntax_check_cache"
cookbook_path ["#{current_dir}/../cookbooks"]
cookbook_copyright "Your Company, Inc."
cookbook_license "Apache-2.0"
cookbook_email "cookbooks@yourcompany.com"
# Amazon AWS
knife[:aws_access_key_id] = ENV['AWS_ACCESS_KEY_ID']
knife[:aws_secret_access_key] = ENV['AWS_SECRET_ACCESS_KEY']

3.3 ORGANIZATION SETUP: CREATE ORGANIZATION; ADD YOURSELF AND NODE TO ORGANIZATION

3.3.1 Setting up Chef Repository
When a user sets up Chef for the first time in their organization, they will need a Chef Infra repository for saving cookbooks and other work.
The chef-repo is a directory on the user's workstation that stores everything needed to define the infrastructure with Chef Infra:
o Cookbooks (including recipes, attributes, custom resources, libraries, and templates)
o Data bags
o Policyfiles
The chef-repo directory should be coordinated with a version control system, such as git. All of the data in the chef-repo should be treated like source code.
The chef and knife commands are used to upload data to the Chef Infra Server from the chef-repo directory. Once uploaded, Chef Infra Client uses that data to manage the nodes registered with the Chef Infra Server and to ensure that it applies the right cookbooks, policyfiles, and settings to the right nodes in the right order.
Use the chef generate repo command to create the Chef Infra repository. For example, to create a repository called chef-repo:
chef generate repo chef-repo
RBAC:
Chef infrastructure uses RBAC (Role-Based Access Control). This is used to restrict access to objects: nodes, environments, roles, data bags, cookbooks, and so on, which helps to maintain the authorization policy.
Table 3.1: Features of RBAC
Organization: The organization is the top-level entity for role-based access control in the Chef Infra Server. Each organization contains the default groups (admins, clients, and users, plus billing_admins for the hosted Chef Infra Server), at least one user, and at least one node (on which the Chef Infra Client is installed). The Chef Infra Server supports multiple organizations. This includes a single default organization that is defined during setup. Additional organizations can be created after the initial setup and configuration of the Chef Infra Server.
Group: A group is used to define access to object types and objects in the Chef Infra Server and also to assign permissions that determine what types of tasks are available to members of that group who are authorized to perform them. Groups are configured by organization. Individual users who are members of a group will inherit the permissions assigned to the group. The Chef Infra Server includes the following default groups: admins, clients, and users. For users of the hosted Chef Infra Server, an additional default group is provided: billing_admins.
User: A user is any non-administrator human being who will manage data that is uploaded to the Chef Infra Server from a workstation or who will log on to the Chef management console web interface. The Chef Infra Server includes a single default user that is defined during setup and is automatically assigned to the admins group.
Client: A Chef Client is an actor that has permission to access the Chef Infra Server. A client is most often a node (on which the Chef Infra Client runs), but is also a workstation (on which knife runs), or some other machine that is configured to use the Chef Infra Server API. Each request to the Chef Infra Server that is made by a client uses a private key for authentication that must be authorized by the public key on the Chef Infra Server.
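The config.rb example above leans on two ordinary Ruby idioms: a lookup with a `||` fallback, and string interpolation to build paths. A minimal plain-Ruby sketch of the same pattern, using a local hash in place of the real ENV so it runs anywhere (all values here are illustrative, not a real workstation setup):

```ruby
# `env` stands in for ENV; CHEF_USER is deliberately unset to show the fallback.
env = { 'CHEF_USER' => nil, 'USER' => 'alice', 'HOME' => '/home/alice' }

user = env['CHEF_USER'] || env['USER']   # falls back to USER when CHEF_USER is nil
client_key = "#{env['HOME']}/chef-repo/.chef/#{user}.pem"

puts user         # alice
puts client_key   # /home/alice/chef-repo/.chef/alice.pem
```

Because config.rb is evaluated as Ruby, exactly this kind of logic lets one shared file adapt itself to whichever user and chef-repo it is loaded from.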
3.3.2 Create Organization
The org-create subcommand is used to create an organization. (When creating an organization with this command, the validation key for the organization is returned to STDOUT.)
Use the org-create, org-delete, org-list, org-show, org-user-add and org-user-remove commands to manage organizations.
Syntax:
chef-server-ctl org-create ORG_NAME "ORG_FULL_NAME" (options)
where,
o The name must begin with a lower-case letter or digit, may only contain lower-case letters, digits, hyphens, and underscores, and must be between 1 and 255 characters. For example, chef.
o The full name must begin with a non-white space character and must be between 1 and 1023 characters. For example, "ABC Software, Inc."
Options:
This subcommand has the following options:
-a USER_NAME, --association_user USER_NAME
Associate a user with an organization and add them to the admins and billing_admins security groups.
-f FILE_NAME, --filename FILE_NAME
Write the ORGANIZATION-validator.pem to FILE_NAME instead of printing it to STDOUT.
Examples:
chef-server-ctl org-create abc "ABC Software, Inc."
chef-server-ctl org-create staging Staging -a chef-admin
chef-server-ctl org-create dev Development -f /tmp/id-dev.key
chef-server-ctl org-create dev Development --association_user grantmc

3.4 TEST NODE SETUP: CREATE A SERVER AND ADD TO ORGANIZATION, CHECK NODE DETAILS USING KNIFE

3.4.1 Create a Server and Add to Organization
Following are the steps to be followed for creating a server and adding it to an organization:
1. Choose a Server: You need to select a server to host the Chef components. The most common choices are your own infrastructure (physical or virtual servers) or cloud platforms such as Azure, AWS, or Google Cloud Platform.
2. Install Chef Server:
(a) Download the Chef Server Package: Visit the Chef downloads page and obtain the Chef Server package compatible with your operating system.
(b) Install the Package: Follow the installation instructions provided by Chef to install the Chef Server on your selected server.
3. Configure Chef Server:
(a) Initial Configuration: During installation, you might need to provide initial configuration details such as the server name, organization name, and administrator user.
(b) SSL Configuration: Configure SSL certificates for secure communication with the Chef Server.
4. Set up Organizations: Chef Server uses the concept of organizations to manage different groups within your configuration management environment. Here is how you can set up organizations:
(a) Access the Web UI: Chef Server typically comes with a web-based user interface (usually accessible at https://<chef-server-hostname>).
(b) Create an Organization: From the web UI, you can create an organization. Each organization will have its own separate configuration data, policies, and users.
5. Configure Workstations:
(a) Install Chef Workstation: Install Chef Workstation on your development machine. Chef Workstation provides tools for developing and testing Chef code.
(b) Create a Chef Repository: Create a directory to store your Chef code, recipes, and configurations. This will be your Chef repository.
6. Manage Nodes:
(a) Bootstrap Nodes: Nodes are the servers you want to manage using Chef. You need to bootstrap them to establish a connection with the Chef Server.
(b) Apply Configurations: Write Chef Cookbooks and Recipes that define how your infrastructure should be configured. Upload these to the Chef Server.
(c) Assign Nodes to Organizations: Assign nodes to specific organizations on the Chef Server.
7. Manage Cookbooks:
(a) Upload Cookbooks: To upload your Cookbooks to the Chef Server, use the knife command-line tool from the Chef Workstation.
(b) Associate Cookbooks: Associate Cookbooks with specific nodes or roles.
8. Manage Roles and Policies:
(a) Create Roles: Define roles that describe the desired state of a node.
(b) Create Policies: Policies are versioned sets of cookbooks and attributes used to define the desired configuration for nodes.
9. Apply Configurations: Use Chef's configuration management capabilities to apply the desired configurations to nodes based on the recipes, cookbooks, roles, and policies you have defined.
Remember that this is a high-level overview, and the actual steps might involve more details and considerations based on your specific setup and requirements. Always refer to the latest Chef documentation for the most accurate and up-to-date instructions.
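The organization short-name rules stated above can be captured in a single regular expression. The following check is illustrative only; it is not chef-server-ctl's actual validation code:

```ruby
# Sketch of the org short-name rules: starts with a lower-case letter or
# digit; then lower-case letters, digits, hyphens, underscores; 1-255 chars.
def valid_org_name?(name)
  name.match?(/\A[a-z0-9][a-z0-9_-]{0,254}\z/)
end

puts valid_org_name?('chef')     # true
puts valid_org_name?('Staging')  # false: upper-case letter not allowed
puts valid_org_name?('')         # false: must be at least 1 character
```

Running a quick check like this before calling chef-server-ctl avoids a failed org-create on an invalid name.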

3.4.2 Check Node Details using Knife
Use the knife node subcommand to manage the nodes that exist on a Chef Infra Server.
The following examples show how to use this knife subcommand:
1. List All Nodes: To list all nodes associated with your Chef Server:
knife node list
2. Show Node Details: To view detailed information about a specific node:
knife node show NODE_NAME
Replace NODE_NAME with the name of the node you want to view.
3. Search Nodes: You can use the knife search subcommand to search for nodes based on specific criteria. For example, to search for nodes with a specific role:
knife search node 'roles:ROLE_NAME'
Replace ROLE_NAME with the name of the role you are searching for.
4. Filter Nodes: You can filter the nodes displayed based on specific criteria using the knife node list command with filters. For example, to list nodes with a specific environment:
knife node list 'chef_environment:ENVIRONMENT_NAME'
Replace ENVIRONMENT_NAME with the name of the environment.
5. Node Attribute Values: Using the knife exec command, you can query and display specific attribute values of nodes. For example, to display the value of a specific attribute for all nodes:
knife exec -E "nodes.all { |n| puts n['attribute_name'] }"
Replace attribute_name with the attribute you want to display.
Remember to replace placeholders like NODE_NAME, ROLE_NAME, and ENVIRONMENT_NAME with your actual node, role, and environment names.
The knife command can be quite versatile. This command allows you to perform a wide range of actions related to Chef Server management. To explore more options and subcommands, you can refer to the official Chef documentation or use the --help flag with any knife subcommand to get detailed usage information:
knife SUBCOMMAND --help
For example,
knife node show --help

3.5 NODE OBJECTS AND SEARCH
In Chef, nodes are representations of individual servers or systems that are managed using the Chef Configuration Management tool. Nodes store information about the state of a server, its attributes, run lists, environment, and other relevant details. You interact with nodes to define how they should be configured and maintained.
Here is a brief overview of node objects and how node search works in Chef.
Node Objects:
A Node Object in Chef is a JSON document that contains various pieces of information about a server, including:
o Node Name: A unique identifier for the node.
o Environment: The environment the node belongs to (e.g., development, production).
o Run List: A list of recipes and roles to be applied to the node.
o Attributes: Custom data that describes the node's desired configuration.
o Automatic Attributes: Information collected by the Chef client, like platform details, IP addresses, etc.
o Policy: Information about the policy applied to the node.
o Ohai Data: System information collected by the Ohai tool.
o Tags: User-defined tags to help organize nodes.
Node Search:
Node search in Chef is a powerful feature that allows you to find nodes based on specific criteria. You can use node search to identify nodes with certain attributes, roles, or any other data stored in the node object. The search results can then be used to manage and manipulate nodes more efficiently.
The knife search command is used to perform node searches. Here is how you can use it:
o To search for nodes with a specific role:
knife search node "roles:my_role"
o To search for nodes in a specific environment:
knife search node "chef_environment:my_environment"
o To search for nodes with a specific attribute:
knife search node "my_attribute:attribute_value"
o To search for nodes with a combination of criteria:
knife search node "role:webserver AND chef_environment:production"
You can also use regular expressions, range queries, and more advanced search techniques to narrow down your results.
Node search can be particularly helpful when managing a large infrastructure with many nodes, as it allows you to dynamically identify nodes and apply configurations based on their attributes and roles.

3.6 ENVIRONMENTS: HOW TO CREATE ENVIRONMENTS, ADD SERVERS TO ENVIRONMENTS?
It is always a good idea to have a separate environment for development, testing, and production. Chef enables grouping nodes into separate environments to support an ordered development flow.
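To see what a compound query such as role:webserver AND chef_environment:production actually selects, here is a plain-Ruby simulation over hash stand-ins for node objects. It models only the filtering idea, not the Chef Infra Server's search index, and the node data is invented:

```ruby
# Simulated node objects (illustrative data, not real Chef nodes).
nodes = [
  { 'name' => 'web1', 'roles' => ['webserver'], 'chef_environment' => 'production' },
  { 'name' => 'web2', 'roles' => ['webserver'], 'chef_environment' => 'staging'    },
  { 'name' => 'db1',  'roles' => ['database'],  'chef_environment' => 'production' },
]

# Equivalent of the query "role:webserver AND chef_environment:production".
matches = nodes.select do |n|
  n['roles'].include?('webserver') && n['chef_environment'] == 'production'
end

puts matches.map { |n| n['name'] }  # only web1 satisfies both criteria
```

Inside a real recipe, the `search(:node, 'role:webserver AND chef_environment:production')` helper performs this kind of filtering server-side and returns matching node objects.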
For example, one environment may be called "testing" and another may be called "production". Since you don't want any code that is still in testing on your production machines, each machine can only be in one environment. You can then have one configuration for machines in your testing environment, and a completely different configuration for computers in production.
Additional environments can be created to reflect each organization's patterns and workflow; for example, creating Production, Staging, Testing, and Development environments. Generally, an environment is also associated with one (or more) cookbook versions.
Default Environment:
By default, an environment called "_default" is created. Each node will be placed into this environment unless another environment is specified. Environments can be created to tag a server as part of a process group.
Create Environments:
An environment can be created in four different ways:
o Create a Ruby file in the environments sub-directory of the chef-repo and then push it to the Chef Infra Server.
o Create a JSON file directly in the chef-repo and then push it to the Chef Infra Server.
o Using knife.
o Using the Chef Infra Server REST API.
Once an environment exists on the Chef Infra Server, a node can be associated with that environment using the chef_environment method.

3.7 ROLES: CREATE ROLES, ADD ROLES TO ORGANIZATION
A role defines specific patterns and processes across nodes in an organization as belonging to a single job function.
Each role has zero or more attributes. Each node has zero or more roles.
When a role runs on a node, its configuration details are compared to the attributes of that role. Then, the contents of the run-list of that role are applied to the node's configuration details.
When a Chef Infra Client runs, it combines its own attributes with the run-lists contained in each assigned role.
Role data is stored in two formats: as a Ruby file that contains a domain-specific language, or as JSON data.
How to use Roles in Chef?
1. Create a Role and add the cookbooks into it.
2. Assign the role to each node, or bootstrap new nodes using roles.
3. Then run the list.

3.7.1 Create Roles
Method 1: In Chef Server directly.
knife role create client1
Add the run list, e.g. "recipe[nginx]", under "run_list". Save and exit. The role will be created in Chef Server.
Example:
name "web_servers"
description "This role contains nodes, which act as web servers"
run_list "recipe[webserver]"
default_attributes 'ntp' => {
  'ntpdate' => {
    'disable' => true
  }
}
Let us download the role from the Chef server so we have it locally in a Chef repository.
> knife role show client1 -d -Fjson > roles/client1.json
Now, let us bootstrap the node using knife with roles.
> knife bootstrap --run-list "role[webserver]" --sudo hostname
Edit the roles in Chef Server using the following command.
> knife role edit client1
Method 2: In the local repo under the chef-repo folder.
> vi webserver.rb
Example:
name "web_servers"
description "This role contains nodes, which act as web servers"
run_list "recipe[webserver]"
default_attributes 'ntp' => {
  'ntpdate' => {
    'disable' => true
  }
}
Then upload it to the Chef server using the following commands:
$ knife role from file path/to/role/file
$ knife role from file web_servers.rb
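Whether a role is written in the Ruby DSL or in JSON, the Chef Infra Server stores it as a JSON document. The following plain-Ruby sketch builds that document by hand, with values taken from the web_servers example above; it is an illustration of the stored shape, not how Chef itself serializes roles:

```ruby
require 'json'

# The web_servers role expressed as the JSON document the server stores.
role = {
  'name'               => 'web_servers',
  'description'        => 'This role contains nodes, which act as web servers',
  'run_list'           => ['recipe[webserver]'],
  'default_attributes' => { 'ntp' => { 'ntpdate' => { 'disable' => true } } },
}

puts JSON.pretty_generate(role)
```

A file with this content could be uploaded with `knife role from file`, exactly like the Ruby DSL version.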
Assigning Roles to Nodes:
$ knife node list
$ knife node edit node_name
OR
# Assign the role to a node called server:
$ knife node run_list add server 'role[web_servers]'
Editing the node will bring up the node's definition file, which will allow us to add a role to its run_list:
{ "name": "client1", "chef_environment": "_default", "normal": { "tags": [] }, "run_list": [ "recipe[nginx]" ] }
For instance, we can replace our recipe with our role in this file:
{ "name": "client1", "chef_environment": "_default", "normal": { "tags": [] }, "run_list": [ "role[web_server]" ] }
Method 3: Using the Chef Automate UI.
Step 1: Create a Role
Fig. 3.3 (a): Creating a role in the Chef Automate UI
Step 2: Add a List of Cookbooks
Fig. 3.3 (b): Adding cookbooks to the role
Step 3: Edit a Node and Roles
Fig. 3.3 (c): Editing a node's run list in the UI
Step 4: Run the knife command from the workstation.
$ knife ssh "role:webserver" "sudo chef-client"

3.8 ATTRIBUTES: UNDERSTANDING OF ATTRIBUTES, CREATING CUSTOM ATTRIBUTES, DEFINING IN COOKBOOKS
Attributes are properties that can be assigned to Cookbooks, Recipes, and Nodes. Chef attributes are an important part of defining cookbooks and recipes. Attributes allow you to specify certain details about a particular recipe, such as which platform it is meant for or which cookbook it depends on.
Attributes can also be used to override the defaults set by a cookbook author.
As we know, Chef Attributes are key-value pairs associated with node or role definitions, and they store data about a node.

3.8.1 Understanding Chef Attributes
Attributes can be defined in several different ways, and they can be used to specify a wide range of different settings. For example, attributes can be used to set the hostname of a node or to specify which application should be installed on a node.
Attributes can be used to override default settings for a cookbook or recipe. For instance, if a Cookbook contains a default setting for the hostname attribute, that setting can be overridden by specifying a different hostname in the attributes file.
Types of Chef Attributes:
As a Chef user, you will need to be aware of the different attribute types that can be used to configure a node run. There are six attribute types that can be assigned to a Chef cookbook: default, automatic, normal, force_default, override, force_override. Each type has its own purpose and use case.
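When the same attribute is set at more than one precedence level, Chef resolves the conflict in a fixed order: default (lowest), then force_default, normal, override, force_override, and automatic (highest). The following plain-Ruby sketch models only that flat resolution; Chef's real implementation additionally deep-merges nested attribute hashes, which this ignores:

```ruby
# Simplified model of attribute precedence, lowest to highest.
PRECEDENCE = [:default, :force_default, :normal,
              :override, :force_override, :automatic].freeze

# Return the value from the highest-precedence level that is set.
def effective_value(levels)
  PRECEDENCE.reverse_each { |lvl| return levels[lvl] if levels.key?(lvl) }
  nil
end

puts effective_value({ default: '/var/www', override: '/srv/www' })  # /srv/www
puts effective_value({ default: '/var/www' })                        # /var/www
```

The same idea explains why an override set in a role or recipe wins over a cookbook's default, and why automatic (Ohai-collected) values cannot be overridden at all.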

1. Default: A default attribute is an attribute that does not have a value set on the node. If a default attribute is not set in the default attribute file, the Chef-client will use a nil value for the attribute. You can override the default attributes just like any other attribute, and they can also be set in the default attribute file.
2. Automatic: An automatic attribute is set by the Chef-client node itself during the Chef-client run. These attributes are typically set based on information gathered from the node, such as the operating system type or platform. Automatic attributes can be overridden like any other attribute, but they cannot be set in the default attribute file.
3. Normal: This is the most common type of attribute and is typically used when you want to set a specific value for an attribute on a node. The value for a normal attribute can be set in the default attribute file, or it can be overridden on a per-node basis.
4. Force_default: The value for this attribute is always taken from the default attribute file. If the force_default attribute is set on a node, any other values set for that node are ignored. This can be useful if you want to ensure that all nodes in your environment have the same value for an attribute.
5. Override: An override attribute will take precedence over any other values that have been set for an attribute, including the default value. This type of attribute is often used when you need to quickly change the value of an attribute on a node without having to edit the default attribute file.
6. Force_override: A force_override attribute overrides any other attribute values, whether they are default values or override values. This type of attribute should be used sparingly, as it can make it difficult to track down the source of an attribute value.

3.8.2 Creating Custom Attributes
Users can create custom attributes for servers, device groups, customers, facilities, OS Build Plans, and software policies. Custom attribute values are string values.
To add, delete, or modify the value of a custom attribute for a server:
1. In the SA Client navigation pane, select the Devices tab.
2. Select the All Managed Servers node.
3. Select a server.
4. To view the custom attributes defined for the server, select Custom Attributes from the View drop-down selector. This displays all the custom attributes defined for the server.
5. Select Actions, or right click the server and select Open. This displays information about the server.
6. Select the Information tab in the navigation pane.
7. Select Custom Attributes in the navigation pane. This displays all the custom attributes defined for the server.
8. To add a new custom attribute, select the "+" icon and enter the name of the custom attribute.
9. To delete a custom attribute, select the custom attribute and select the "-" icon.
10. To change the value of a custom attribute, double click the value column in the appropriate row and enter the new value.
11. Select File > Revert to discard all your changes.
12. Select File > Save to save your changes.

3.8.3 Defining Attributes in Cookbooks
In order to create an attribute file, you will first need to create a new file with the ".rb" extension. You can do this using any text editor. Once you have created the file, you will need to add the following code:
default["cookbook_name"]["attribute_name"] = "attribute_value"
Replace "cookbook_name" with the name of the cookbook that contains the recipe you wish to override, and "attribute_name" with the variable name you wish to override. The "attribute_value" will be used to set the value of the variable. Once you have added the desired code to the file, save it and then upload it to your Chef server.

3.9 DATA BAGS: UNDERSTANDING THE DATA BAGS, CREATING AND MANAGING THE DATA BAGS, CREATING THE DATA BAGS USING CLI AND CHEF CONSOLE, SAMPLE DATA BAGS FOR CREATING USERS

3.9.1 Understanding the Data Bags
Data bags are a way to store and manage global data that can be used across nodes. They are essentially JSON data containers (optionally encrypted) used to store sensitive information, configuration settings, or any data that needs to be shared across nodes but shouldn't be exposed in plain text.
Data bags are commonly used to store items such as database connection strings, API keys, passwords, and other configuration data.
Fig. 3.4: Data bags contain shared, global data held on the Chef Server and accessible to nodes
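Concretely, a data bag item is just a JSON object whose id field names the item within its bag. An illustrative round-trip in plain Ruby (the bag fields below are invented for the example):

```ruby
require 'json'

# A data bag item is a JSON object; "id" names the item within its bag.
item = {
  'id'       => 'app_db',
  'host'     => 'db.example.internal',
  'username' => 'app',
}

raw    = JSON.generate(item)  # the form uploaded to the Chef Infra Server
loaded = JSON.parse(raw)      # roughly the hash-like data a recipe reads back

puts loaded['host']
```

The commands in the next section create and upload exactly this kind of document, and recipes read it back through the data_bag_item helper.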
3.9.2 Creating and Managing the Data Bags
The following commands show how data bags work in Chef:
1. Creating Data Bags using the CLI: To create a new data bag, you use the knife Command Line tool or the Chef APIs. Data bags are typically organized by name, similar to directories in a file system.
knife data bag create BAG_NAME
2. Creating Data Bag Items: Inside a data bag, you store individual items. Each item is a JSON object that contains the data you want to store. For instance, if you are creating a data bag for database connection strings, each item might represent a different database.
knife data bag create BAG_NAME ITEM_NAME
3. Editing Data Bag Items: Once created, you can edit the data bag items using a text editor or directly through the command line using the knife tool.
knife data bag edit BAG_NAME ITEM_NAME
4. Uploading Data Bags: After creating and editing data bags and their items, you upload them to the Chef Server using the knife command.
knife data bag from file BAG_NAME ITEM_NAME.json
5. Accessing Data Bags in Recipes: In Chef Recipes, you can access data bag items and their content. These items can be used to configure resources within your cookbooks.
# Load a data bag item
my_data = data_bag_item('BAG_NAME', 'ITEM_NAME')
# Access attributes within the data bag item
db_host = my_data['database']['host']
db_user = my_data['database']['username']
Data bags are especially useful for separating sensitive data from Cookbooks and configurations, providing better security and separation of concerns. However, it is important to note that data bags are not inherently encrypted. They can be optionally encrypted to enhance security. When encrypted, data bag items can only be decrypted by nodes that have the decryption keys.
Data bags are treated as global variables, like JSON data. They are indexed for searching and accessed during the search process. We can access the JSON data from Chef. For example, a data bag can store global variables such as an app's source URL, the instance's hostname, and the associated stack's VPC identifier.

3.9.3 Creating the Data Bags using Chef Console
1. Log in to the Chef Web console.
2. Navigate to "Policy" and then "Data Bags".
3. Click the "Create New Data Bag" button.
4. Enter the name of the data bag and click "Create Data Bag".
5. Inside the created data bag, click the "Create New Item" button.
6. Enter the item's name and provide the necessary data in JSON format.
7. Click "Create Item" to save the data bag item.

3.9.4 Sample Data Bags for Creating Users
The following example shows how you can create data bags to manage user accounts using Chef, including their usernames, UIDs, and SSH keys.
1. Create the Data Bag: Assuming you have Chef Workstation set up, here is how you can create a data bag for user accounts using the knife command-line tool:
# Create the data bag
knife data bag create users
2. Create Data Bag Items: For each user, you will create a data bag item containing their information. Here is how you can create data bag items for two users, Alice and Bob:
# Create Alice's data bag item
knife data bag create users alice
knife data bag from file users alice.json
# Create Bob's data bag item
knife data bag create users bob
knife data bag from file users bob.json
3. Populate the Data Bag Item JSON Files: Here is an example of the JSON files for the alice.json and bob.json data bag items. These files contain information about the users, including their usernames, UIDs, and SSH keys.
alice.json:
{
  "id": "alice",
  "username": "alice",
  "uid": "1001",
  "ssh_keys": [
    "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQ...",
    "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQ..."
  ]
}
bob.json:
{
  "id": "bob",
  "username": "bob",
  "uid": "1002",
  "ssh_keys": [
    "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQ...",
    "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQ..."
  ]
}
In these JSON files, the ssh_keys field contains the SSH public keys for the users. You can replace the example keys with the actual public keys you want to use.

Using Data Bag Items in Recipes: You can use the data bag items in yóur Chef It helps IT teams define and maintain the desired state of servers, applications,
recipes to create user accounts and set up their SSH keys. Here is a simplifed and systems, ensuring that they are configured correctly and consistently over
example of how you might apply the data bag items: time.
# In a recipe Chef uses code-based "recipes" and "cookbooks" to define how resources should be
Users = data_bag('users') configured and it can handle tasks such as installing software, managing user
Users.each do user_id| accounts, and configuring network settings. This automation helps streamline the
user = data bag_ item('users ', user_id) process óf managing complex IT environments, reducing manual errors and
username =
user['username'] enhancing efficiency.
=
uid user['uid'] Check Your Understanding
ssh_keys = user['ssh keys')
1. What is Chef?
user username do
(a) A software recipe book (b) A configurationmanagement tool
uiduid
(c) A programming language (d) A cloud computing platform
home "/home/#{username)"
2. Which component of Chef is responsible for storing and managing configuration
shell "/bin/bash"
manage_home true data?
(a) Chef Workstation (b) Chef Node
end
(c) Chef Server (d) Chef Client
directory "/home/#{username}/.ssh" do
a
Owner username 3. What is the primary purpose of Chef recipe?
group username (a) To define server hardware specifications.
mode 0700' (b) To install software packages on nodes.
end (c) To manage user authentication.
file "/home/#{username}/. ssh/authorized _keys" do (d) To create virtual machine instances.
contentssh_keys.join("\n") 4. In Chef,what is a "Cokbook"?
username resources.
Owner
(a) A collection of recipes, templates, and
group username SSH keys.
.(b) A directory for storing
mode. '0600* Server.
(c) A configuration file for the Chef
end
(a) Atool for managing databases. Server?
end
runs on nodes and interacts with the Chef
The above example shows how you can create user accounts and set up their SSH keys 5 Which Chef component
(b) Chef Server
based on the data bag items you created. (a) Chef Workstation
(a) Chef Client
Summary (c) Chef Node
"bootstrapping" refer to in the context ofChef?
6. What does
In the context of Configuration Management in IT and software development, a over an open flame.
"chef" refers to a popular open-source tool called Chef. (a) Grilling recipes
Chef Server.
Chef is used to automate the deployment, management, and configuration of (b)
new server and connecting it to the
Initializing a
software and infrastructure in a consistent and scalable manner. (c) Deploying virtual machines.
(d)Encrypting sensitive data.
7. What does the term "role" represent in Chef?
(a) A user's job title.
(b) A type of cookbook.
(c) A specific attribute of a node.
(d) A way to define a server's function and configuration.
8. How does Chef use the concept of "idempotence" in its recipes?
(a) To create complex data structures.
(b) To ensure that resources are only configured if necessary.
(c) To manage database schemas.
(d) To handle user authentication.
9. What is the purpose of Chef's "attributes"?
(a) To store user credentials.
(b) To define the physical location of servers.
(c) To configure the behavior of recipes and cookbooks.
(d) To manage virtual machine instances.
10. What is a "data bag" in Chef?
(a) A container for storing encrypted data.
(b) A type of cookbook.
(c) A configuration file for Chef Server.
(d) A tool for managing databases.

Answers
1. (b) 2. (c) 3. (b) 4. (a) 5. (d) 6. (b) 7. (d) 8. (b) 9. (c) 10. (a)

Practice Questions
Q.I Answer the following questions in short.
1. What is Chef?
2. What is a recipe in Chef?
3. What is bootstrapping in Chef?
4. What is an attribute in Chef?
5. What is a role in Chef?
6. How does Chef ensure idempotence?
7. What are data bags in Chef?
8. How does Chef handle dependencies between cookbooks?
9. What is the role of Chef Server in the configuration process?
Q.II Answer the following questions.
1. What is Chef and how does it work?
2. What are cookbooks and recipes in Chef?
3. How does bootstrapping work in Chef?
4. Explain the role of Chef Server in configuration management.
5. How does the Chef Client work on nodes?
6. What are roles and how do they function in Chef?
7. Explain the use of attributes in Chef.
8. How does a data bag work in Chef and when should it be used?
9. Explain the process of searching for nodes in Chef.
10. What is the difference between a cookbook and a recipe in Chef?
11. How does Chef handle idempotence?
12. How do you manage dependencies between cookbooks in Chef?
13. Explain the concept of Environments in Chef.
14. How does Chef ensure security in managing sensitive data?
15. How do you integrate Chef with Version Control Systems?
Q.III Write short notes on:
1. Need of Chef
2. Benefits of Chef
3. Cookbooks
4. Data bags
5. Organization set up
6. knife command
4
Docker - Containers

Objectives...
After learning this chapter you will be able to:
o  Understand how to build, test, and deploy applications quickly with Docker.
o  Understand how to create Docker images.
o  Understand Docker networking.
o  Learn how to use volumes for persistent storage using Docker.
o  Understand how to tag images.
o  Understand the working of Docker Hub.

4.1 INTRODUCTION
When we are looking for a containerization solution that provides maximum compatibility in each environment with little or no configuration changes, Docker is a good solution: it enables us to create a snapshot of our application and all its dependencies, and then deploy this same snapshot in development, testing, and production. In this chapter, we are going to learn Docker basics along with networking concepts.

4.1.1 What is Docker?
Fig. 4.1: Concept of Docker (Build - Ship - Run)

Docker is an open-source containerization platform that allows developers to build, ship, and run applications and services in containers.
Containers are lightweight, portable, and self-sufficient units of software that contain all the necessary dependencies and configuration needed to run an application.
Docker has become very popular in recent years, with many companies and organizations adopting it as a standard platform for application development and deployment.
In this chapter, we will provide an introduction to Docker and cover some of the key concepts and terminology used in the Docker ecosystem.
Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package.
With Docker, developers can create containers that include all the necessary dependencies, libraries, and configuration needed to run an application, and then easily deploy and scale those containers across different environments, from development to production.

Fig. 4.2: Container (containers 1-3 each hold an app with its bins/libs and all run on the Docker Engine, which sits on the host OS and the underlying infrastructure)

4.1.2 How does Docker Work?
Docker is built on a client-server architecture, with the Docker daemon (server) managing the containers, images, and networks, and the Docker client providing a command-line interface for interacting with the daemon.
Docker containers are created from Docker images, which are essentially snapshots of a container's file system and configuration. Docker images are stored in Docker registries, which can be public (like Docker Hub) or private (hosted internally by an organization).
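To make the image concept concrete, the following is a minimal, hypothetical Dockerfile of the kind used to build such an image. The base image, file names, port, and command here are illustrative assumptions, not examples taken from this chapter:

```dockerfile
# Hypothetical sketch: an image bundles the application together with
# its dependencies and configuration (all names below are illustrative).
FROM python:3.11-slim

# Copy the application and its dependency list into the image's file system.
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .

# Configuration baked into the image travels with it to every environment.
ENV APP_ENV=production
EXPOSE 8000

# The process a container runs when started from this image.
CMD ["python", "app.py"]
```

Building such a file (for example with docker build) produces an image whose snapshot of dependencies and configuration can be pushed to a registry and then run unchanged in development, testing, and production.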
When a container is created from an image, Docker creates a read-write layer on top of the image's file system, allowing the container to modify and store data without affecting the underlying image. Containers can be managed and scaled using Docker's CLI commands, or through container orchestration tools like Kubernetes.

The Docker Platform:
Docker provides the ability to package and run an application in a loosely isolated environment called a container. The isolation and security allow you to run many containers simultaneously on a given host. Containers are lightweight and contain everything needed to run the application, so you do not need to rely on what is currently installed on the host. You can easily share containers while you work, and be sure that everyone you share with gets the same container that works in the same way.
Docker provides tooling and a platform to manage the lifecycle of your containers:
o  Develop your application and its supporting components using containers.
o  The container becomes the unit for distributing and testing your application.
o  When you are ready, deploy your application into the production environment, as a container or an orchestrated service. This works the same whether your production environment is a local data center, a cloud provider, or a hybrid of the two.

4.1.3 Use Case of Docker
Docker is a popular containerization technology that has found a variety of use cases in the real world. Here are some examples:
1.  Application Development: Developers can use Docker to create a consistent environment for building and testing applications. This ensures that the application behaves the same way in development, testing, and production environments.
2.  Continuous Integration and Continuous Deployment (CI/CD): Docker can be used to create containers that are pre-configured with all the necessary dependencies to run an application. This makes it easier to deploy applications quickly and easily using tools like Jenkins or Travis.
3.  Microservices: Docker makes it easier to develop, deploy, and manage microservices. With Docker, each microservice can be packaged into a container, making it easier to deploy and scale independently.
4.  Cloud Migration: Docker containers can be easily moved from one cloud provider to another. This makes it easier for organizations to migrate their applications to a new cloud provider without having to modify the code.
5.  Testing and Quality Assurance: Docker can be used to create test environments that are identical to production environments. This makes it easier to test applications and ensure that they are working as expected.
6.  Big Data: Docker can be used to create containers for big data applications like Hadoop or Spark. This makes it easier to deploy and manage these applications, as well as to scale them up or down as needed.
7.  DevOps: Docker is a key tool for DevOps. It allows teams to easily build, test, and deploy applications in a consistent and repeatable way.
8.  High Availability: Docker can be used to create redundant and highly available environments. By running multiple containers of an application, organizations can ensure that the application remains available even if one of the containers fails.
9.  Security: Docker provides a secure environment for running applications. By using containers, organizations can ensure that their applications are isolated from the host system and from other containers running on the same host.
10. IoT: Docker can be used to create containers for IoT devices. This makes it easier to manage and update these devices, as well as to deploy new applications to them.

4.1.4 Docker Vs Virtualization
Fig. 4.3: Docker and Virtual Machine Architectures (containers share the host OS through the Docker Engine, while each VM runs its app on a full guest OS on top of a hypervisor)

Table 4.1: Difference between Docker and Virtual Machines
Architecture:
    Docker: Docker uses a containerization approach, where each container shares the host operating system kernel.
    VMs: Virtualization creates a complete virtual machine (VM) with its own operating system.
Boot-time:
    Docker: Boots in a few seconds.
    VMs: It takes a few minutes for VMs to boot.
Runs on:
    Docker: Dockers make use of the execution engine.
    VMs: The VMs make use of the hypervisor.
(Contd.)
Table 4.1 (Contd.):
Memory Efficiency:
    Docker: No space is needed to virtualize, hence less memory is used.
    VMs: The entire OS must be loaded before starting, so it is less efficient.
Isolation:
    Docker: Docker containers share the host kernel but provide process-level isolation.
    VMs: Virtual machines provide complete isolation between the host and the guest operating systems.
Deployment:
    Docker: Deploying is easy, as a single containerized image can be used across all platforms.
    VMs: Deployment is comparatively lengthy, as separate instances are responsible for execution.
Usage:
    Docker: Docker has a complex usage mechanism, consisting of both third-party and Docker-managed tools.
    VMs: Tools are easy to use and simpler to work with.
Portability:
    Docker: Docker containers can be easily moved across different environments, including public and private clouds.
    VMs: Virtual machines require more effort to move due to their larger size and dependency on specific hardware.
Footprint:
    Docker: Docker containers have smaller footprints compared to virtual machines.
    VMs: VMs require a complete guest operating system.

4.2 ARCHITECTURE
4.2.1 Docker Architecture
Docker utilizes the client-server architecture and a remote API to manage and create Docker containers and images.
Docker containers are created from Docker images. The relationship between containers and images is analogous to the relationship between objects and classes in object-oriented programming, where the image describes the container, and the container is a running instance of the image.
The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers.
The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon.
The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface.
Another Docker client is Docker Compose, which lets you work with applications consisting of a set of containers.

Fig. 4.4: Architecture of Docker (the Docker client sends docker build, pull, and run commands to the Docker daemon on the Docker host, which manages images and containers and exchanges images with a registry)

1.  The Docker Daemon:
The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.
2.  The Docker Client:
The Docker client (docker) is the primary way that many Docker users interact with Docker. When you use commands such as docker run, the client sends these commands to dockerd, which carries them out. The docker command uses the Docker API. The Docker client can communicate with more than one daemon.
3.  Docker Desktop:
Docker Desktop is an easy-to-install application for Mac, Windows or Linux environments that enables you to build and share containerized applications and microservices. Docker Desktop includes the Docker daemon (dockerd), the Docker client (docker), Docker Compose, Docker Content Trust, Kubernetes, and Credential Helper.
4.  Docker Registry:
A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can even run your own private registry.
When you use the docker pull or docker run commands, the required images are pulled from your configured registry. When you use the docker push command, your image is pushed to your configured registry.
5.  Docker Objects:
When you use Docker, you are creating and using images, containers, networks, volumes, plugins, and other objects. This section is a brief overview of some of those objects.
(a) Images:
An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization. For example, you may build an image which is based on the Ubuntu image, but installs the Apache web server and your application, as well as the configuration details needed to make your application run.
You might create your own images or you might only use those created by others and published in a registry. To build your own image, you create a Dockerfile with a simple syntax for defining the steps needed to create the image and run it. Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image, only those layers which have changed are rebuilt. This is part of what makes images so lightweight, small, and fast, when compared to other virtualization technologies.
(b) Containers:
A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.
By default, a container is relatively well isolated from other containers and its host machine. You can control how isolated a container's network, storage, or other underlying subsystems are from other containers or from the host machine.
A container is defined by its image as well as any configuration options you provide to it when you create or start it. When a container is removed, any changes to its state that are not stored in persistent storage disappear.

4.2.2 Understanding the Docker Components
Containers:
A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings.
Docker Image:
These are the basis of containers. An image is an ordered collection of root filesystem changes and the corresponding execution parameters for use within a container runtime.
An image typically consists of a union of layered file systems stacked on top of each other. An image does not have state and it never changes.
Docker Hub:
This is a centralized resource for working with Docker and its components. It provides:
o  Docker image hosting.
o  User authentication.
o  Automated image builds and workflow tools such as build triggers and webhooks.
o  Integration with GitHub and Bitbucket.
Docker Registry:
A registry is a hosted service containing repositories of images which responds to the registry API. The default registry can be accessed using the browser at Docker Hub or using the docker search command.
Repository:
A repository is a set of Docker images. A repository can be shared by pushing it to a registry server. The different images in the repository can be labeled using tags.

4.3 INSTALLATION
For Mac and Windows, there are a few different options for installing the Community Edition. The modern way to install Docker is to use Docker for Mac (https://www.docker.com/docker-mac) and Docker for Windows (https://www.docker.com/docker-windows) respectively. The installation includes the Docker platform, command-line, and Compose tools.
[Note: Docker for Windows requires Windows 10 Professional or Enterprise 64-bit.]

4.3.1 Installation of Docker on Linux
For Linux, each distribution has a unique way of installing Docker, so it is recommended you visit https://docs.docker.com/engine/installation/ for specific installation instructions.
Another way to install Docker on Ubuntu is to use the default Ubuntu repository. Open the terminal on Ubuntu and follow the below steps:
Step 1 : Check if the system is up-to-date using the following command:
    $ sudo apt-get update
Step 2 : Install Docker using the following command:
    $ sudo apt install docker.io
Step 3 : Install all the dependency packages using the following command:
    $ sudo snap install docker
Step 4 : Check the installation. Check whether Docker was properly installed by running a command or checking the daemon status. To see the Docker daemon status, run:
    $ sudo systemctl status docker
Before testing Docker, check the version installed using the following command:
    $ docker --version

4.3.2 Installation of Docker on Windows
Follow the below steps to install Docker on Windows:
Step 1 : To download the Docker installer, go to the website https://docs.docker.com/docker-for-windows/install/.
[Note: A 64-bit processor and 4 GB system RAM are the hardware requirements to successfully run Docker on Windows 10.]
Step 2 : Now double-click on the Docker Desktop Installer.exe to run the installer.
[Note: Suppose the installer (Docker Desktop Installer.exe) is not downloaded; you can get it from Docker Hub and run it whenever required.]
Step 3 : Once you start the installation process, always enable the Hyper-V Windows feature on the Configuration page.
Step 4 : Follow the installation process to allow the installer and wait till the process is done.
Step 5 : After completion of the installation process, click on the Close and restart button.

4.3.3 Some Docker Commands
Basic commands of Docker are:
1.  Docker Version: This command shows the Docker version information.
    $ docker version
Fig. 4.5 (a): Sample output of docker version, listing version details for the client and for the server components (engine, containerd, runc, docker-init).
2.  Docker Info: This command displays system-wide information regarding the Docker installation.
Fig. 4.5 (b): Sample output of docker info.
3.  Docker Login: Log in to a Docker registry.
    $ docker login [OPTIONS] [SERVER]
Log in to the Docker Hub registry by typing:
    $ docker login
Fig. 4.5 (c): A docker login session. The client asks for your Docker ID and password and reports "Login Succeeded"; for better security, Docker recommends logging in with a limited-privilege personal access token (see https://docs.docker.com/go/access-tokens/).
4.  Docker Logout: Log out from a Docker registry.
    $ docker logout
5.  Docker Search: Search Docker Hub for images.
    $ docker search [OPTIONS] TERM
Search images by name: The below example displays images with a name containing 'ubuntu':
    $ docker search ubuntu
6.  Docker Pull: Pull an image or a repository from a registry.
    $ docker pull [OPTIONS] NAME[:TAG]
Pull an image from Docker Hub: To download a particular image, or a set of images (i.e., a repository), use docker pull. If no tag is provided, Docker Engine uses the :latest tag as a default. This command pulls the ubuntu:latest image:
    $ docker pull ubuntu
Fig. 4.5 (e): Sample output of docker pull ubuntu, ending with "Downloaded newer image for ubuntu:latest".
7.  Docker Push: Push an image or a repository to a registry.
    $ docker push [OPTIONS] NAME[:TAG]
Push an image to Docker Hub: To share your Docker images, you can push them to the Docker Hub registry or to a self-hosted one.
    $ docker push sheetalbhalgat/ubuntu
Fig. 4.5 (f): Sample output of docker push, showing the layers mounted from library/ubuntu and the resulting digest.
Fig. 4.5 (g): The pushed image shown in the sheetalbhalgat/ubuntu repository on Docker Hub.
8.  Docker Images: List images.
    $ docker images [OPTIONS] [REPOSITORY[:TAG]]
List the local Docker images:
    $ docker images
Fig. 4.5 (d): Sample output listing the ubuntu and sheetalbhalgat/ubuntu images with their tags, image IDs, age, and size.
9.  Docker Tag: Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE.
    $ docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
Tag an image referenced by name and tag: To tag the local image named "ubuntu" with tag "latest" into the "sheetalbhalgat" repository with tag "latest":
    $ docker tag ubuntu:latest sheetalbhalgat/ubuntu:latest

How to do "hello world" in Docker?
    $ docker run -it hello-world
If the installation is working correctly, Docker prints a "Hello from Docker!" message. To generate this message, Docker took the following steps:
1.  The Docker client contacted the Docker daemon.
2.  The Docker daemon pulled the "hello-world" image from Docker Hub.
3.  The Docker daemon created a new container from that image, which runs the executable that produces the output.
4.  The Docker daemon streamed that output to the Docker client, which sent it to your terminal.

4.3.4 Provisioning on Windows using Vagrant and VirtualBox
Provision a Docker host with the following steps:
1.  Install Babun
2.  Install VirtualBox (here we use 5.0.18)
3.  Install Vagrant
4.  Open a new Babun shell window and create a directory for our vagrant box as follows:
        cd ~ && mkdir -p vagrant/trusty64 && cd vagrant/trusty64
    Download the Vagrantfile for Ubuntu (trusty 64-bit):
        vagrant init ubuntu/trusty64
The Vagrantfile contains instructions for how Vagrant should build your virtual machine. Open the Vagrantfile with your text editor of choice and find the following sections:
    # using a specific IP.
    # config.vm.network "private_network", ip: "192.168.33.10"
    ...
    # config.vm.provision "shell", inline: <<-SHELL
    #   sudo apt-get update
    #   sudo apt-get install -y apache2
    # SHELL
Modify it so that Vagrant assigns the IP address "192.168.10.101" and installs Docker for us:
    # using a specific IP.
    config.vm.network "private_network", ip: "192.168.10.101"
    ...
    config.vm.provision "shell", inline: <<-SHELL
      wget -qO- https://get.docker.com/ | sh
    SHELL
After the edits, the file should look like so:
    # -*- mode: ruby -*-
    # vi: set ft=ruby :
    Vagrant.configure(2) do |config|
      config.vm.box = "ubuntu/trusty64"
      config.vm.network "private_network", ip: "192.168.10.101"
      config.vm.provider "virtualbox" do |vb|
        vb.memory = 2048
        vb.cpus = 2
      end
      config.vm.provision "shell", inline: <<-SHELL
        wget -qO- https://get.docker.com/ | sh
      SHELL
    end
(Vagrantfile shown with comments removed; the custom CPU/memory settings are optional.)
Back in the Babun shell window, we can bring the virtual machine online by running:
    vagrant up
Once the Vagrant box is ready, ssh to it:
    vagrant ssh
Now we can start running Docker images using the following command:
    sudo docker run -d -p 80:80 nginx
We can alias the IP address that we set earlier in our hosts file so that it is easier to remember. Open c:\windows\system32\drivers\etc\hosts (make sure to run your editor as administrator) and add the following line:
    192.168.10.101 dockerbox
Save the changes to the hosts file. Open the browser and navigate to http://dockerbox and you should see the nginx welcome page.

4.4 DOCKER HUB
Docker Hub is a cloud-based repository in which Docker users and partners create, test, store and distribute container images.
Through Docker Hub, a user can access public, open source image repositories, as well as use a space to create their own private repositories, automated build functions, webhooks and work groups.
For example, a DevOps professional can download the official PostgreSQL object-relational database management system container image from Docker Hub to use in an application deployed in containers. Or, they can choose a customized RDBMS from their company's private repository. The following figure gives a clear idea of this:
Fig. 4.6: Concept of Docker Hub (a registry/hub stores many static, persisted images; a container engine pulls an image and runs it as a container, while an image built locally from a Dockerfile can be pushed back to the registry)
Features of Docker Hub:
1.  Docker Hub simplifies the process of storing, managing, and sharing images with others.
2.  Docker Hub performs the necessary security checks on the images and provides a complete report on the security issues.
3.  It automates processes such as Continuous Deployment and Continuous Testing by triggering webhooks at the time of pushing a new image into Docker Hub.
4.  It allows us to manage the permissions of users, teams and organizations.
5.  We can integrate Docker Hub with tools such as GitHub and Jenkins to streamline workflows.
Advantages of Docker Hub:
1.  Docker container images are lightweight and can be pushed in a matter of minutes using a command.
2.  It is a safe method and also offers features such as pushing a private image or a public image.
3.  Docker Hub is becoming more and more popular in industries and acts as a link between the development teams and the testing teams.
4.  If you want to share your code, software, or any kind of file for public use, you can just make the images public on Docker Hub.

4.4.1 Downloading Docker Images
To download a particular image, or a set of images (i.e., a repository), use docker image pull (or the docker pull shorthand).
Syntax:
    docker pull [OPTIONS] NAME[:TAG|@DIGEST]
If no tag is provided, Docker Engine uses the :latest tag as a default. This example pulls the debian:latest image:
Example:
    docker image pull debian
    Using default tag: latest
    latest: Pulling from library/debian
    e756f3fdd6a3: Pull complete
    Digest: sha256:3f1d6c17773a45c97bd8f158d655c9709d7b29ed7917ac934e...
    Status: Downloaded newer image for debian:latest
    docker.io/library/debian:latest

4.4.2 Uploading the Images in Docker Registry and AWS ECS
4.4.2.1 Uploading Docker Image in Docker Registry
Below is an example of how to create your own Docker image on your local machine/server (not on the login or calculation node) and upload it to the Poblenou Registry Server.
Fig. 4.7: Uploading Docker Image in Docker Registry (compute cluster nodes running a standalone/Swarm Docker engine read from the private Docker registry, while the builder machine writes to it)
You must know the following things before starting the build process:
About the builder environment:
o  Docker engine and root access on your builder machine.
o  Shell scripting skills.
o  Docker command line knowledge.
About the Registry Server:
o  You need a user to upload your image.
o  Access to the private network of the remote server registry.sb.upf.edu.
o  Uploaded images can't be removed.
o  Uploaded images can be updated.
Docker - Sem. IV]
4.17
Contalners DevOps: MCA [Management 4.18
DevOps : MCA [Management -Sem. IV]
DockerContainers
telnet seed
RUN apt-get install -y
openssh-server nmap sudo Step 5 : Rename the image
change the RESEARCH_GROUP
for the acronym of
RUN mkdir /var/run/sshd belong. the research group you
'root:xxXXXXXxxXXX* chpasswd $ docker tag <IMAGE ID>
RUN echo
's/^#?PermitRootLogin\s+.* registry.sb.upf.edu/<research_group_acronym>/ubuntu:18.04
RUN sed -ri
/PermitRootLogin yes/'/etc/ssh/sshd_ config $ docker tag 401153215cb6
/#UsePAM yes/g' /etc/ssh/sshd confio
registry.sb.upf.edu/mygroup/ubuntu:18.04
's/UsePAM yes
RUN sed -ri INote: Use lowercase in research group_acronym]
RUN mkdir /root/. ssh /* Step 6 : List and check re-tagger image.
RUN apt-get clean &S \rm-rf/var/lib/apt/1ists/* /tmp//var/tmn $ docker images
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D") REPOSITORY TAG IMAGE ID CREATED SIZE
name, surname and email
essential to add "LABEL authors" with your
It is
registry.sb.
address. About a
upf.edu/mygroup/ 18.04 401153215cb6
minute ago 335MB
Step 3: Build the image in a local docker-host ubuntu
a corporate registry
$ docker build. Step 7 :
Provide Credentials to establish session with the
Sending build-context to Docker daemon 2.048kB server.
ubuntu:18.04 $ docker login registry.sb.upf. edu
Step 1/12 : FROM
Username ):jbrown
18.04: Pulling from library/ubuntu Password:
a48c500ed24e: Pull complete Login Succeeded
le1deeeff7el: Pull complete Step 8 : Push the image. edu/aygroup/ubuntu:18.04
e33eca45a200: Pull complete $ docker push
registry.sb.upf.
[registrysb.upf.edu/infoubuntu]
repository
471db38bcfbf: Pull complete The push refers to
eb4aba487617: Pull complete 68d59e996e14: Pushed
Digest: sha256:C8c275751219dadad8fa56b3ac41ca6cb22219ff117 ca b191e5e42292: Pushed
98fe82b42f24elba64e dcb7bc3fO7ca: Pushed
Status: Downloaded newer image for ubuntu: 18.04 ffc69fc3fb3e: Pushed
---) 452a96d81c30
401b721534a1: Pushed
Step 2/12 MAINTAINER John Brown "https://registry. sb.upf.edu"
a1355d87070f: Pushed
---> Running in 2fd5f8e83c17
b337788c3f94: Pushed
Removing intermediate container 2fd5f8e83c17
b984d5ed740c å4d6cb1434aa: Pushed
Step 3/12 : RUN apt-get update 059ad6Obcacf: Pushed
8db5fo72feec: Pushed
-
-> Running in 9c84731ef4ed

:
67885e448177: Pushed
Step 4 List and Check image.
ec75999a0cbl: Pushed
$ docker images
REPOSITORY SIZE
65bdd5Oee76a: Pushed fd71590fObf1256efcofla62e7d
TAG IMAGE ID CREATED
sha256:54567b1dc80c7e463f9cada74da121d
18.04: digest:
<none> <nones 401153215cb6 335MB
25 seconds ago cf89e2 size: 3025
[Managoement- Sem.IM
Docket Containers perOps : MCA 420
DevOps : MCA [Management - Sem V 419
authenticating to multíple Dockr Containers
registries, you must repeat
Step 9 : Check the image in a web browser.
1. Go to https://registry.sb.upf.edu/
2. Login with your cluster credentials.
Fig. 4.8: Docker Registry Frontend (details for repository info/centos, tags latest and 17.06.2ce)
Step 10 : Convert the Docker image to a Singularity image.
Finally, if we have uploaded a Docker image, it will be necessary to convert the file format (from OCI to SIF). You can do it directly from the compute nodes:
$ cd /home/user/
$ singularity pull --docker-login docker://registry.sb.upf.edu/<research group>/<image>

4.4.2.2 Uploading Docker Image in AWS ECR
The Amazon ECR repository must exist before you push the image. Amazon ECR also provides a way to replicate your images to other repositories, across Regions in your private registry and across different accounts, by specifying a replication configuration in your private registry settings.
Step 1 : Authenticate your Docker client to the Amazon ECR registry to which you intend to push your image. Authentication tokens must be obtained for each registry used, and tokens are valid for 12 hours. To authenticate Docker to an Amazon ECR registry, run the aws ecr get-login-password command. When passing the authentication token to the docker login command, use the value AWS for the username and specify the Amazon ECR registry URI you want to authenticate to. If authenticating to multiple registries, you must repeat the command for each registry.
aws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com
Step 2 : If your image repository doesn't exist in the registry you intend to push to yet, create it.
Step 3 : Identify the local image to push. Run the docker images command to list the container images on your system. You can identify an image with the repository:tag value or the image ID in the resulting command output.
docker images
Step 4 : Tag your image with the Amazon ECR registry, repository, and optional image tag name combination to use. The registry format is aws_account_id.dkr.ecr.us-west-2.amazonaws.com. The repository name should match the repository that you created for your image. If you omit the image tag, we assume that the tag is latest.
Example: The following example tags a local image with the ID e9ae3c220b23 as aws_account_id.dkr.ecr.us-west-2.amazonaws.com/my-repository:tag.
docker tag e9ae3c220b23 aws_account_id.dkr.ecr.us-west-2.amazonaws.com/my-repository:tag
Step 5 : Push the image using the docker push command.
docker push aws_account_id.dkr.ecr.us-west-2.amazonaws.com/my-repository:tag
Step 6 (Optional) : Apply any additional tags to your image and push those tags to Amazon ECR by repeating Step 4 and Step 5.

4.4.3 Understanding the Containers
A container is a sandboxed process running on a host machine that is isolated from all the other processes running on that host machine. That isolation leverages kernel namespaces and cgroups, features that have been in Linux for a long time. Docker makes these capabilities approachable and easy to use.
A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.
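Looking back at the AWS ECR procedure, the whole push sequence can be collected into one script. The sketch below only assembles the fully qualified image reference; the aws/docker calls are left as comments because they need real credentials and a Docker daemon, and the account ID, region, and repository name are placeholder values, not ones from the text.

```shell
# Placeholder values -- substitute your own.
ACCOUNT_ID=123456789012
REGION=us-west-2
REPOSITORY=my-repository
TAG=v1

# Registry format: aws_account_id.dkr.ecr.region.amazonaws.com
REGISTRY="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"
IMAGE="${REGISTRY}/${REPOSITORY}:${TAG}"
echo "$IMAGE"

# The actual sequence (Steps 1, 4 and 5; shown for reference only):
# aws ecr get-login-password --region "$REGION" \
#   | docker login --username AWS --password-stdin "$REGISTRY"
# docker tag e9ae3c220b23 "$IMAGE"
# docker push "$IMAGE"
```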
By default, a container is relatively well isolated from other containers and its host machine. You can control how isolated a container's network, storage, or other underlying subsystems are from other containers or from the host machine.
A container is defined by its image, as well as any configuration options you provide to it when you create or start it. When a container is removed, any changes to its state that are not stored in persistent storage disappear.
How does Docker Compose work?
Docker Compose is a YAML file in which we can configure different types of services. Then, with a single command, all containers will be built and fired up. There are three main steps involved in using Compose:
1. Generate a Dockerfile for each project.
2. Set up services in the docker-compose.yml file.
3. Fire up the containers.
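The three steps can be sketched with a minimal docker-compose.yml; the service names and images here are illustrative, not from the text:

```yaml
# Hypothetical two-service stack; each build directory holds its own Dockerfile.
version: "3.7"
services:
  web:
    build: ./web            # Step 1: a Dockerfile generated for this project
    ports:
      - "8000:8000"
  db:
    image: mysql:8          # Step 2: services declared in docker-compose.yml
    environment:
      MYSQL_ROOT_PASSWORD: example
# Step 3: fire up the containers with: docker compose up -d
```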
4.4.4 Running Commands in a Container
How does the docker run command work?
• The following command runs an ubuntu container, attaches interactively to your local command-line session, and runs /bin/bash.
$ docker run -i -t ubuntu /bin/bash
When you run this command, the following happens (assuming you are using the default registry configuration):
1. If you do not have the ubuntu image locally, Docker pulls it from your configured registry, as though you had run docker pull ubuntu manually.
2. Docker creates a new container, as though you had run a docker container create command manually.
3. Docker allocates a read-write filesystem to the container, as its final layer. This allows a running container to create or modify files and directories in its local filesystem.
4. Docker creates a network interface to connect the container to the default network, since you didn't specify any networking options. This includes assigning an IP address to the container. By default, containers can connect to external networks using the host machine's network connection.
5. Docker starts the container and executes /bin/bash. Because the container is running interactively and attached to your terminal (due to the -i and -t flags), you can provide input using the keyboard while Docker logs the output to your terminal.
6. When you run exit to terminate the /bin/bash command, the container stops but is not removed. You can start it again or remove it.

4.4.5 Running Multiple Containers
With Docker Compose, you can configure and start multiple containers with a single yaml file. For example, assume that you are working on a project that uses a MySQL database, Python for AI/ML, Node.js for real-time processing, and .NET for serving APIs. It would be cumbersome to set up such an environment for each team member. You can do this with Compose.

4.5 CUSTOM IMAGES
Dockerfile, Images, and Containers:
Dockerfile, Docker Images, and Docker Containers are three important terms that you need to understand while using Docker.
Fig. 4.9: Dockerfile, Images, and Containers (Dockerfile → build → Docker Image → run → Docker Container)
As you can see in the above diagram, when the Dockerfile is built, it becomes a Docker image, and when we run the Docker image, it finally becomes a Docker container.
(a) Dockerfile: A Dockerfile is a text document that contains all the commands that a user can call on the command line to assemble an image. So, Docker can build images automatically by reading the instructions from a Dockerfile. You can use docker build to create an automated build that executes several command-line instructions in succession.
(b) Docker Image: In layman's terms, a Docker image can be compared to a template that is used to create Docker containers. So, these read-only templates are the building blocks of a Docker container. You can use the docker run command to run a container from an image. Docker images are stored in the Docker Registry. It can be either a user's local repository or a public repository like Docker Hub, which allows multiple users to collaborate in building an application.
(c) Docker Container: A Docker container is a running instance of a Docker image, as it holds the entire package needed to run the application. So, these are basically the ready applications created from Docker images, which is the ultimate utility of Docker. To run your application in a Docker container, a customized Docker image is created. This customized Docker image includes instructions that install specific packages and copy the source code into the Docker container.
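To tie (a), (b) and (c) together, the pipeline of Fig. 4.9 can be written in command form. This is an illustrative sketch: the image name `my-app` and container name `web` are invented, and the commands are only echoed here so that nothing requires a Docker daemon.

```shell
# Dockerfile --(docker build)--> Docker image --(docker run)--> Docker container
BUILD="docker build -t my-app ."       # turns a Dockerfile into an image
RUN="docker run -d --name web my-app"  # turns the image into a running container
echo "$BUILD"
echo "$RUN"
```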
Requirements to create custom images are as follows:
• Webdock cloud Ubuntu instance (18.04 or later)
• You have shell (SSH) access to your server
• Docker installed on the Ubuntu instance
• Clone of a demo node.js project

4.5.1 Creating a Custom Image
Step 1 : Writing a Dockerfile for the custom docker image.
Docker builds the docker image by reading the instructions from a text file. By default, Docker looks for a file named Dockerfile to build the docker image. A Dockerfile consists of instructions that are used to customize the docker image.
We will write a Dockerfile for a node.js application. For this, first go to the root directory of the node.js project.
$ cd node-app
Create a Dockerfile using the following command in the terminal.
$ touch Dockerfile
Open the Dockerfile in your favorite editor.
$ nano Dockerfile
Dockerfile Instructions:
Format of a Dockerfile instruction:
INSTRUCTION arguments
The instructions in a Dockerfile are not case sensitive, but it is a convention to use UPPERCASE letters for instructions. Dockerfile builds the docker image by running the instructions in the order they are specified in the Dockerfile.
1. FROM instruction:
A Dockerfile always starts from a FROM instruction, which specifies which base image will be used to create the custom docker image. For example, if you want to create a custom docker image for a node.js application, then the node base image will be used as follows:
FROM node:14
If you do not specify a version tag, by default it will use the node image with the latest tag. The base docker image will be pulled from DockerHub when not available locally.
2. WORKDIR instruction:
Dockerfile provides the WORKDIR instruction to set the working directory.
WORKDIR /app
The above instruction will set the working directory to /app inside the container. All the remaining instructions will be executed in this directory.
3. ADD or COPY instruction:
The COPY and ADD instructions are used to copy data into the docker container. The COPY instruction can only copy data from the docker host to the docker container, while the ADD instruction can copy data from the docker host and the web as well.
Copy the source code into the docker container using the COPY instruction as follows:
COPY . .
This instruction will copy all the data from the working directory of the docker host to the working directory of the docker container.
4. RUN instruction:
The RUN instruction is used to install new packages or run some shell commands in the base docker image. For example, in order to install npm packages, the RUN instruction will be used as follows:
RUN npm install
The RUN instruction will run the command in the shell of the docker container.
5. EXPOSE instruction:
The EXPOSE instruction is used to expose a port of a container. The port on which the application runs can be exposed using the EXPOSE instruction as follows:
EXPOSE 3000
Now the application running on port 3000 of the docker container will be accessible from the docker host when a container is launched using this docker image.
6. CMD and ENTRYPOINT instructions:
The CMD and ENTRYPOINT instructions are used to execute shell commands inside the docker container when the docker container starts. The ENTRYPOINT instruction is used to provide the shell command that runs when the container starts. The default ENTRYPOINT for docker is /bin/sh -c, while the CMD instruction is used to define the arguments passed to the shell command. In order to run the /bin/sh -c node index.js command inside the docker container at runtime, use the following CMD instruction:
CMD ["node", "index.js"]
The following table contains the important Dockerfile instructions and their explanation.
Table 4.2: Dockerfile Instructions

Argument name   Function
FROM            Sets the base image for subsequent instructions.
MAINTAINER      Sets the author field of the generated images.
RUN             Executes commands in a new layer on top of the current image and commits the results.
CMD             Allowed only once (if many, then only the last one takes effect).
LABEL           Adds metadata to an image.
EXPOSE          Informs Docker that the container listens on the specified network ports at runtime.
ENV             Sets an environment variable.
ADD             Copies new files, directories or remote file URLs into the filesystem of the container.
COPY            Copies new files or directories into the filesystem of the container.
ENTRYPOINT      Allows you to configure a container that will run as an executable.
VOLUME          Creates a mount point and marks it as holding externally mounted volumes from the native host or other containers.
USER            Sets the username or UID to use when running an image.
WORKDIR         Sets the working directory for any RUN, CMD, ENTRYPOINT, COPY, and ADD commands.
ARG             Defines a variable that users can pass at build time to the builder using --build-arg.
ONBUILD         Adds an instruction to be executed later, when the image is used as the base for another build.

Example 1: The final Dockerfile is:
FROM node:14
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 3000
CMD ["node", "index.js"]
Step 2 : Create a docker image from the Dockerfile
After writing the Dockerfile, run the following command in the terminal to build the docker image from the Dockerfile.
$ docker build .
This command will get the Dockerfile from the current working directory and build the docker image. In order to build the docker image from another directory, specify the path of the directory containing the Dockerfile.
$ docker build /home/$USER/
This command will get the Dockerfile from the /home/$USER directory and build the docker image.
To build a docker image from a file named other than Dockerfile, specify the Dockerfile name using the -f option.
$ docker build -f Dockerfile.dev /home/$USER/
The above command will build the docker image by reading instructions from the Dockerfile.dev file in the /home/$USER directory.
Each docker image created using the build command gets a unique ID, and all the docker images with their IDs can be listed using the following command.
$ docker images
Step 3 : Tag the docker image
The docker image built using the commands described in the previous section does not have a name and tag. Docker tags are helpful to push the docker image to a remote docker repository and to specify the docker image version.
Use the following command to tag the docker image.
$ docker tag 21233 node-app:v1
The above command will take a docker image with ID 21233 and add the tag node-app:v1 to it.
Use the following command to list all the docker images with tags.
$ docker images
Step 4 : Push the docker image to DockerHub
In order to push the docker image to DockerHub or some other docker image repository, the docker image must be tagged properly. If you want to push a docker image to a docker hub repository, the docker image must be tagged as follows:
example/node-app:v1
Where, example is the docker hub account id, node-app is the docker hub repository, and v1 is the docker image tag. Use the following command to tag the docker image.
$ docker tag 212233 example/node-app:v1
Before pushing the docker image to docker hub, you must log into the docker hub using the command line. Use the following command to log into docker hub.
$ docker login
It will ask for the docker hub username and password. After authentication, use the following command to push the docker image to docker hub.
$ docker push example/node-app:v1

4.5.2 Running a Container from the Custom Image
Running a Docker image using the docker run command:
• The docker run command first creates a writeable container layer over the specified image, and then starts it using the specified command.
$ docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Example: Assign a name and allocate a pseudo-TTY (--name, -it)
This example runs a container named ubuntu-container using the ubuntu:latest image. The -it means interactive terminal, and --name specifies the name of the container. It instructs Docker to allocate a pseudo-TTY connected to the container's stdin. It creates an interactive bash shell in the container.
$ docker run --name ubuntu-container -it ubuntu
C:\temp>docker run --name ubuntu-container -it ubuntu
root@6afd538c16e3:/#
Fig. 4.10

4.5.3 Publishing the Custom Image
• To publish the custom image, the user will need to create an account on the Docker Hub signup webpage. Here the user will provide a name, password, and email address for his account.
• Once the user has created the account, he can push the image that he has previously created, to make it available for others to use. To do so, the user will need the ID and the TAG of the "my-docker-whale" image.
• Run the "docker images" command again and note the ID and the TAG of his Docker image, e.g. a69f3f5e1a31.
• Now, with the following command, the user has to prepare our Docker image for its journey to the outside world (the accountname part of the command is the user's account name on the Docker Hub profile page):
docker tag a69f3f5e1a31 accountname/my-docker-whale:latest
• Run the "docker images" command and verify the newly tagged image. Next, use the "docker login" command to log into the Docker Hub from the command line. The format for the login command is:
docker login --username=yourhubusername --email=youremail@provider.com
When prompted, enter the user password and press the enter key. Now the user can push the image to the newly created repository:
docker push accountname/my-docker-whale
The above command can take a while to complete depending on the user connection's upload bandwidth, as it uploads something like 180 MB of data (in our example). Once completed, the user can go to his profile on Docker Hub and check out his new image.

4.5.4 Docker Container Commands
(a) Listing Docker Containers:
Docker ps: This command displays the list of containers.
$ docker ps [OPTIONS]
Show both running and stopped containers: The docker ps command only shows running containers by default. To see all containers, use the -a option.
$ docker ps -a
C:\temp>docker ps -a
CONTAINER ID   IMAGE    ...   CREATED   ...
6af6338c16e    ubuntu   ...
Fig. 4.11
(b) Stop Docker Container: Stop one or more running containers. The main process inside the container will receive SIGTERM, and after a grace period, SIGKILL.
$ docker stop [OPTIONS] CONTAINER [CONTAINER...]
Stop a running container by using the following command:
$ docker stop ubuntu-container
(c) Start Docker Container: Start one or more stopped containers.
$ docker start [OPTIONS] CONTAINER [CONTAINER...]
Start a stopped container by using the following command:
$ docker start ubuntu-container
(d) Restart Docker Container: Restart one or more containers.
$ docker restart [OPTIONS] CONTAINER [CONTAINER...]
Restart a Docker container by using the following command:
$ docker restart ubuntu-container
(e) Remove Docker Images:
Docker rmi: Remove one or more images.
$ docker rmi [OPTIONS] IMAGE [IMAGE...]
Remove a Docker image by id: The below example shows how to remove a Docker image by id.
$ docker rmi b4f2cd35d4d4
C:\temp>docker rmi b4f2cd35d4d4
Untagged: sha256:b4f2cd35d4d4...
Untagged: sheetalbhalgat/ubuntu:latest
Deleted: ...
Fig. 4.12
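The custom-image workflow above (write a Dockerfile, build, tag, log in, push) can be collected into one script. This is a sketch, not the book's own listing: the Dockerfile is written non-interactively with a heredoc, and the docker build/tag/push calls are left as comments because they need a Docker daemon and a Docker Hub account ('accountname' is a placeholder).

```shell
# Recreate the node.js Dockerfile from Section 4.5.1 without an editor.
mkdir -p node-app && cd node-app
cat > Dockerfile <<'EOF'
FROM node:14
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 3000
CMD ["node", "index.js"]
EOF
grep -c '^' Dockerfile   # counts the instructions, one per line

# Build, tag, and publish (requires Docker; shown for reference only):
# docker build -t node-app:v1 .
# docker tag node-app:v1 accountname/node-app:v1
# docker login
# docker push accountname/node-app:v1
```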

Flexibility
4.6 DOCKER NETWORKING Cross Platform

4.6.1 Introduction
First we should understand the Workflow of Docker.
Container
calability Decentralized
Staging
Container Docker
server
Container Container

Docker User-Friendly
image
Network
Support
Docker Docker
file hub
Fig. 4.14: Goals of Docker Networking
Project code Container
Docker iv) Decentralized: Docker usesa decentralized network, which enables the capability
container Production to have the applications spread and highly available. In the event that a container
server Container
or a host is suddenly missing from your pol of resources, you can either bring up
Virtual
Container
machine Container an additional resource or pass over to services that are still available.
(v) User-friendly: Docker makes it
easy to automate the deployment of services,
Fig. 4.13: The Workflow of Docker making them easy to use in day-to-day life.
use Docker
As you can see in the above diagram, a developer writes a code that stipulates (vi) Support: Docker offer's out-of-the-box
support. So, the ability
application requirements or the dependencies in an easy to write Docker File and this functionality very easy and straightforward
Enterprise Edition and get all of the
Docker File produces Docker Images. So, whatever dependencies are required for a use.
makes Docker platform very easy to
particular application are present in this image.
Now, Docker Containers are nothing but the runtime instance of Docker Image. Thesé 4.6.1.2 Types of Docker Networks as:
networks such
images are uploaded onto the Docker Hub (Git repository for Docker Images) which There are various kinds of Docker
contains public/private repositories. Bridge Network
From public repositories, you can pull your image as well and can upload own images Host Network
onto the Docker Hub. Then, from Docker Hub, various teams such as Quality Assurance
None Network
or Production teams will pull that image and prepare
their own containers. These MACVLANand IPVLAN Networks
individual containers communicate with each other through a network to perform the
required actions, and this is nothing but Docker Networking. Overlay Network
up and
Default Bridge Network: bridge network
So, you can define Docker Networking as' a communication passage can find a default
you
the isolated containers communicate with each other in various situations to pertora
through which After a fresh Docker installation, network 1s
docker
umng. We can see it by typing $
the required actions.
4.6.1.1 Goals of Docker Networking C:\>docker network
PNAME
ls DRIVER
SCOPE

(i) Flexibility: Docker provides flexibility by enabling any NETHORK ID bridge local
number of applications ou CSec6bea96bd bridge bridge local
various platforms to communicate with each other. 3c92f1df3722 gitops local
t
(ii) Cross-platform: Docker can be easily used f8109f8eSbbb host bridge l0cal
in cross-platform which works acro minikube
local
various servers with the help of Docker Swarm 91818d219a25
none null
Clusters. bb49e3af8606.
(iiü) Scalability: Docker is a
fully distributed network, which enables Fig.4.15 (a)
grow and scale individually applicatios
while ensuring performance.
All Docker installations have this default bridge network. If you run a container, say, Nginx, it will attach by default to the bridge network:
$ docker run -dit --name nginx nginx:latest
By using the "inspect" command, you can check the containers running inside the network:
$ docker network inspect bridge
"ConfigOnly": false,
"Containers": {
    "9c7ectbd3a76254eaba45e1fb9c554dea3695307c6216aeead6c1b49e463b729": {
        "Name": "nginx",
        "EndpointID": "3aaf996898e38929fe344578c474acd7bc48d574cf9ab7e8304e240b1ba37cfe",
        "MacAddress": "02:42:ac:11:00:04",
        "IPv4Address": "172.17.0.4/16",
        "IPv6Address": ""
Fig. 4.15 (b)
Docker uses a software-based bridge network. This network allows containers connected to the same bridge network to communicate, while isolating them from other containers not running in the same bridge network.
Let us see how containers running in the same bridge network can connect to each other. Let us create two containers for testing purposes:
$ docker run -dit --name busybox1 busybox
$ docker run -dit --name busybox2 busybox
These are the IP addresses of our containers:
$ docker inspect busybox1 | jq -r '.[0].NetworkSettings.IPAddress'
172.17.0.2
$ docker inspect busybox2 | jq -r '.[0].NetworkSettings.IPAddress'
172.17.0.3
Let us try to ping a container from another one using one of these IP addresses. For example, ping the container named "busybox1" from "busybox2", using its IP 172.17.0.2:
docker exec -it busybox2 ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.245 ms
So, containers on the same bridge can see each other using their IPs. What will happen if we use a container's name instead of the IP?
docker exec -it busybox2 ping busybox1
ping: bad address 'busybox1'
We can understand that containers running on the same bridge network are able to see each other using their IP addresses. On the other hand, the default bridge network does not support automatic service discovery.
User-defined Bridge Networks:
Using the Docker CLI, it is possible to create other networks. You can create a second bridge network using:
docker network create my_bridge --driver bridge
Now, attach "busybox1" and "busybox2" to the same network:
docker network connect my_bridge busybox1
docker network connect my_bridge busybox2
Retry pinging "busybox1" using its name:
docker exec -it busybox2 ping busybox1
PING busybox1 (172.20.0.2): 56 data bytes
64 bytes from 172.20.0.2: seq=0 ttl=64 time=0.113 ms
We can conclude that only user-defined bridge networks support automatic service discovery. If you need to use service discovery with containers, then don't use the default bridge, create a new one.

4.6.2 Accessing Containers
To access the containers, follow the steps given below:
1. Obtain the container ID by running the following command:
docker ps
You can see the following output:
CONTAINER ID   IMAGE        ...   NAMES
b02459af2b9c   wa-console   ...
Fig. 4.16
2. Access the Docker container by running the following command:
docker exec -it <container id> /bin/bash
Where, container id is the ID of the container obtained with the command explained in the first step, for example b02459af2b9c.

4.6.3 Linking Containers
Docker linking is a system that allows multiple containers to be linked together. This linking system allows connection information to be sent from a source container to a recipient container.
Docker consists of a --link legacy feature that enables two containers to be linked to each other. Once a connection has been established, the connection information can be shared between the two containers.
Docker container linking allows the recipient container to get connection information relating to the source container.
Although Docker introduced a Docker networking feature that enhances communication between containers, container linking is still in use.
It is important to understand container linking, since it is a resourceful alternative to networking.
Container linking is not limited to two containers. It can be applied to as many containers as possible. The linking system can establish a link of multiple containers to enhance communication between them.

4.6.4 Exposing Container Ports
There are two ways to handle ports in Docker: exposing the ports and publishing the ports.
1. Exposing a port means allowing others to know on which port the containerized application is going to be listening on, or accepting connections on. This is for communicating with other containers, not with the outside world.
2. Publishing a port is more like mapping the ports of a container with ports of the host. This way, the container is able to communicate with external systems, the real world, and the internet.
Fig. 4.17: Exposing Container Ports in Docker (inside the container world, a Database exposes 3572 and a Service exposes 4573; a Server publishes ports 80:80 to the outside world)
In the above figure, you can see how the SERVER container's port 80 is mapped to port 80 of the host system. This way, the container is able to communicate with the outside world using the public IP address of the host system. On the other hand, the exposed ports cannot be accessed directly from outside the container world.
Remember the following points:
• Exposed ports are used for internal container communication, within the container world.
• Published ports are used for communicating with systems outside the container world.
It is not necessary to expose a port, because most of the Docker images used in your setup already have a "default port" exposed in their configuration. There are two ways to expose a port:
1. Using the EXPOSE Dockerfile instruction.
2. Using --expose with the docker CLI, or the expose key in docker-compose.
Let us see the details of each method.
Method 1: Via Dockerfile:
You can add a simple instruction in your Dockerfile to let others know at which port your application will be accepting connections on.
About this instruction, you must know the following things:
• EXPOSE does not add additional layers to the resulting docker image. It just adds metadata.
• EXPOSE is a way of documenting your application port. The only effect it has is in terms of readability, or understanding the application.
You can see how expose works with a simple container image that has been built just for this purpose. This image doesn't do anything.
Pull the image:
docker pull debdutdeb/expose-demo:v1
This image exposes a total of four ports. List the image using the following command:
docker image ls --filter=reference=debdutdeb/expose-demo:v1
If you look at the SIZE column, you will see that it is 0 bytes.
docker image ls --filter=reference=debdutdeb/expose-demo:v1
REPOSITORY              TAG   IMAGE ID       CREATED   SIZE
debdutdeb/expose-demo   v1    ad3d8ffa9bfe   N/A       0B
The reason is simple. This image does not have any layers. All the expose instructions added to this image are metadata; there are no actual layers.
You can also get the number of available layers using the following command:
docker image inspect -f '{{len .RootFS.Layers}}' debdutdeb/expose-demo:v1
You should see an output like this:
docker image inspect -f '{{len .RootFS.Layers}}' debdutdeb/expose-demo:v1
0
Method 2: Via CLI or docker-compose:
Sometimes application developers do not want to add an extra EXPOSE instruction in their Dockerfile. In such a case, to make sure other containers (through the Docker API) can detect the port in use easily, you can expose multiple ports post-build, as part of the deployment process. You can either select the imperative method, i.e. the CLI, or the declarative method, i.e. the compose files.
(a) CLI Method:
In this method, while creating a container, all you have to do is use the --expose option (as many times as needed) with the port number, and optionally the protocol with a /.
Example:
docker container run \
--expose 80 \
--expose 90 \
--expose 70/udp \
-d --name port-expose busybox:latest sleep 1d
Here, by default the busybox image doesn't expose any ports.
(b) Compose file Method:
If you are using a compose file, you can add an array expose in the service definition. You can convert the previous deployment to a compose file like so:
version: "3.7"
services:
  PortExpose:
    image: busybox
    command: sleep 1d
    container_name: port-expose
    expose:
      - 80
      - 90
      - 70/udp
Once you have the container running, just like before you can inspect it to know which ports are being exposed. The command looks similar.
docker container inspect -f \
'{{range $exposed, $_ := .NetworkSettings.Ports}}{{printf "%s\n" $exposed}}{{end}}' \
port-expose

4.6.5 Container Routing
There are three steps for container routing:
1. A custom Docker network, named such that Docker adds it to the container first, making it the default route.
2. An iptables rule to mark packets coming out of that Docker network.
3. Policy-based routing on the host to route marked packets through the non-default interface.
Example of code:
# create a new Docker-managed, bridged connection
# 'avpn' because docker chooses the default route alphabetically
DOCKER_SUBNET="172.57.0.0/16"
docker network create --driver bridge --subnet=$DOCKER_SUBNET \
  -o com.docker.network.bridge.name=docker_vpn avpn
# mark packets from the docker_vpn interface during prerouting, to destine
# them for non-default routing decisions
# 0x25 is arbitrary, any hex (int mask) should work
firewall-cmd --permanent --direct --add-rule ipv4 mangle PREROUTING 0 \
  -i docker_vpn ! -d $DOCKER_SUBNET -j MARK --set-mark 0x25
# alternatively, for regular iptables:
#iptables -t mangle -I PREROUTING 0 -i docker_vpn ! -d $DOCKER_SUBNET -j MARK --set-mark 0x25
# create new routing table; 100 is arbitrary, any integer 1-252 should work
echo "100 vpn" >> /etc/iproute2/rt_tables
# configure rules for when to route packets using the new table
ip rule add from all fwmark 0x25 lookup vpn
# setup a different default route on the new routing table
# this route can differ from the normal routing table's default route
ip route add default via 10.17.0.1 dev tun0
# connect the container to the docker_vpn network
docker network connect docker_vpn my_container
Sample Output:
docker container inspect -f '{{range $exposed, $_ := .NetworkSettings.Ports}}{{printf "%s\n" $exposed}}{{end}}' port-expose
70/udp
80/tcp
90/tcp

Summary
A common challenge for DevOps teams is managing an application's development across various cloud and technology stack dependencies and environments. As part of their routine tasks, they must keep the application operational and stable regardless of the underlying platform that it runs on. On the other hand, development teams focus on releasing new features and deploying updates. Unfortunately, these deployments often compromise the application's stability by introducing code with environment-dependent bugs.
To avoid this inefficiency, organizations are increasingly adopting a containerized framework that allows designing a stable framework without adding:
• Complexities
• Security vulnerabilities
• Operational loose ends
Containerization is the process of packaging an application's code with the dependencies, libraries, and configuration files that the application needs to launch and operate efficiently into a standalone executable unit.
Initially, containers didn't gain much prominence, mostly due to usability issues. However, since Docker entered the scene by addressing these challenges, containers have become practically mainstream.

Check Your Understanding
1. Who introduced Docker?
   (a) Linus Torvalds, Mark Zuckerberg and Brendan Eich
   (b) Kamel Founadi, Solomon Hykes, and Sebastien Pahl
   (c) Brendan Eich, Sebastien Pahl and Greg Duffy
   (d) Mike Cagney, Suhail Doshi and Chris Wanstrath
2. Which markup language is used to write Docker configuration files?
   (a) XML (b) YAML (c) DHTML (d) JSON
3. Which programming language is used to write Docker?
   (a) Go (b) .NET (c) C++ (d) C
4. ________ are instances of Docker images that can be run using the Docker run command.
   (a) File (b) Hub (c) Container (d) Cloud
5. With which command can you see all the commands that were run with an image via a container?
   (a) history (b) run (c) -a (d) hist
6. What is the primary advantage of using Docker?
   (a) Improved application security (b) Improved application portability
   (c) Improved application scalability (d) Improved application performance
7. What port does the Docker registry use?
   (a) Port 3000 (b) Port 5000 (c) Port 8000 (d) Port 6000
8. How many private repositories are allowed for an individual on Docker Hub?
   (a) 3 (b) 4 (c) 7 (d) 1
9. Which of the following is not a container-based alternative to Docker?
   (a) Kubernetes (b) CoreOS' rkt (c) Canonical's LXD (d) Windows Server Containers
10. The Docker logo is
   (a) a butler (b) a sailboat (c) an octocat (d) a whale

Answers
1. (b) 2. (b) 3. (a) 4. (c) 5. (a) 6. (c) 7. (b) 8. (d) 9. (a) 10. (d)

Practice Questions
Q.I Answer the following questions in short.
1. What is docker?
2. What is the use of docker?
3. State the various components of docker architecture.
4. What is image in the docker?
5. What are containers?
6. What is the docker hub?
7. What is the docker container?
8. Which command is used to see the docker version?
9. State the command used to search the docker hub for images.
10. Which command is used to pull an image or repository from a registry?
11. What is the docker tag?
12. Which command is used to restart the docker container?

Q.II Answer the following questions.
1. How does docker work?
2. Write down use cases of docker.
3. Differentiate between Docker vs Virtual Machines.
4. Explain docker architecture with diagram.
5. Explain the working of the docker.
6. Explain various commands used in the docker tag.
7. How to create custom images in the docker?
8. Which are various arguments of the docker file instruction?
9. Explain workflow of the docker.
10. What are goals of the docker networking?
11. Explain various commands used in the docker networking.

Q.III Write short notes on:
1. Docker Container Linking.
2. Docker Architecture.
Build Tool - Maven

Objectives...
After learning this chapter you will be able to:
o Understand uses of Maven for simple project setup that follows best practices.
o Understand the installation and uses of Maven.

5.1 INTRODUCTION
• Maven is a popular open-source build tool developed by the Apache Group to build, publish, and deploy several projects at once for better project management.
• Maven is a project management and comprehension tool that provides developers a complete build lifecycle framework.
• Project requirements can be built automatically by Maven. Maven can also be helpful in a collaborative work environment. Developer life becomes easy when Maven is used for project automation; it is thus helpful in the report creation, checks and testing phases.
Functions of Maven:
Maven provides ways to developers to manage the following:
o Builds
o Documentation
o Reporting
o Dependencies
o SCMs
o Releases
o Distribution
o Mailing list
• Maven handles compilation, distribution, documentation, team collaboration and other build-related tasks seamlessly. Maven increases reusability and takes care of most of the build-related tasks.

5.2 MAVEN INSTALLATION
5.2.1 Installing Maven on Windows
To install Maven on Windows, you need to perform the following steps:
1. Download Maven and extract it:
   o To install Maven on Windows, you need to download Apache Maven first. Download the latest Maven software from the website 'Maven - Download Apache Maven'. For example, apache-maven-3.1.1-bin.zip.
   o When you extract this file, you will get the following display:
   [Fig. 5.1 (a): Maven directories and files]
2. Add JAVA_HOME and MAVEN_HOME in environment variables:
   o Right click on MyComputer -> Properties -> Advanced System Settings -> Environment Variables -> click New button.
   o Now add MAVEN_HOME in variable name and the path of Maven in variable value. It must be the home directory of Maven, i.e. the outer directory of bin. For example, E:\apache-maven-3.1.1, as displayed below:
   [Fig. 5.1 (b): System Properties - Environment Variables dialog]
   o Now click on OK button.

3. Add Maven path in the environment variable:
   o If the path is not set, click on the New tab, then set the path of Maven. If it is set, edit the path and append the path of Maven.
   o Here, we have installed JDK and its path is set by default, so we are going to append the path of Maven.
   o The path of Maven should be %maven_home%\bin. For example, E:\apache-maven-3.1.1\bin.
   [Fig. 5.1 (c): Editing the Path environment variable]
4. Verify Maven:
   o To verify whether Maven is installed or not, open the command prompt and write:
     mvn -version
   o Now it will display the version of Maven and JDK, including the Maven home and Java home, as shown below:
   [Fig. 5.1 (d): Output of mvn -version]

5.2.2 Installing Maven on Ubuntu
• On the terminal, we run apt-cache search maven to get all the available Maven packages:
  $ apt-cache search maven
  libxmlbeans-maven-plugin-java-doc : Documentation for Maven XMLBeans Plugin.
  maven : Java software project management and comprehension tool.
  maven-debian-helper : Helper tools for building Debian packages with Maven.
  maven2 : Java software project management and comprehension tool.
• The maven package always comes with the latest Apache Maven. We run the command sudo apt-get install maven to install the latest Maven:
  $ sudo apt-get install maven
• This will take a few minutes to download. Once downloaded, we can run mvn -version to verify our installation.

5.3 MAVEN BUILD REQUIREMENTS
The following steps give an idea of how to check the requirements before installing Maven.
Step 1: Verify the Java installation on your machine.
First of all, open the console and execute a Java command based on the operating system you are working on.
OS / Task / Command:
• Windows - Open Command Console - c:\> java -version
• Linux - Open Command Terminal - $ java -version
• Mac - Open Terminal - machine:~ joseph$ java -version
Let us verify the output for all the operating systems:
OS / Output:
• Windows:
  java 11.0.11 2021-04-20 LTS
  Java(TM) SE Runtime Environment 18.9 (build 11.0.11+9-LTS-194)
  Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.11+9-LTS-194, mixed mode)
• Linux:
  java 11.0.11 2021-04-20 LTS
  Java(TM) SE Runtime Environment 18.9 (build 11.0.11+9-LTS-194)
  Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.11+9-LTS-194, mixed mode)
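Step 1 only requires reading the version number out of the `java -version` banner. A small sketch of that extraction, using the example banner from the table above (the helper name and regex are this sketch's own, not part of any Maven tooling):

```python
import re

BANNER = "java 11.0.11 2021-04-20 LTS"  # example output from the table above

def java_version(banner: str) -> tuple[int, int, int]:
    """Extract (major, minor, patch) from a `java -version` banner line."""
    m = re.search(r"(\d+)\.(\d+)\.(\d+)", banner)
    if not m:
        raise ValueError("no version found in banner")
    return tuple(int(part) for part in m.groups())

print(java_version(BANNER))  # (11, 0, 11)
```

A check like this is handy in provisioning scripts that must fail early when the JDK is older than the build requires.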
Step 2: Set the JAVA environment.
o If you do not have Java installed on your system, then download the Java Software Development Kit (SDK) from http://www.oracle.com. We are assuming Java 11.0.11 as the installed version.
o Set the JAVA_HOME environment variable to point to the base directory location where Java is installed on your machine. For example:
  • Windows - Set the environment variable JAVA_HOME to C:\Program Files\Java\jdk11.0.11
  • Linux - export JAVA_HOME=/usr/local/java-current
  • Mac - export JAVA_HOME=/Library/Java/Home
o Append the Java compiler location to the System Path:
  • Windows - Append the string ;C:\Program Files\Java\jdk11.0.11\bin at the end of the system variable Path.
  • Linux - export PATH=$PATH:$JAVA_HOME/bin/
  • Mac - not required
o Verify the Java installation using the command java -version as explained above.
Step 3: Download the Maven archive.
o Download Maven 3.8.4 from https://maven.apache.org/download.cgi.
  • Windows - apache-maven-3.8.4-bin.zip
  • Linux - apache-maven-3.8.4-bin.tar.gz
  • Mac - apache-maven-3.8.4-bin.tar.gz
Step 4: Extract the Maven archive.
o Extract the archive to the directory where you wish to install Maven 3.8.4. The subdirectory apache-maven-3.8.4 will be created from the archive. Locations (can be different based on your installation):
  • Windows - C:\Program Files\Apache Software Foundation\apache-maven-3.8.4
  • Linux - /usr/local/apache-maven
  • Mac - /usr/local/apache-maven
Step 5: Set the Maven environment variables.
o Add M2_HOME, M2 and MAVEN_OPTS to the environment variables.
  • Windows - set the environment variables using system properties:
    M2_HOME=C:\Program Files\Apache Software Foundation\apache-maven-3.8.4
    M2=%M2_HOME%\bin
    MAVEN_OPTS=-Xms256m -Xmx512m
  • Linux - open a command terminal and set the environment variables:
    export M2_HOME=/usr/local/apache-maven/apache-maven-3.8.4
    export M2=$M2_HOME/bin
    export MAVEN_OPTS=-Xms256m -Xmx512m
Step 6: Add the Maven bin directory location to the System Path.
o Now append the M2 variable to the System Path.
  • Windows - Append the string ;%M2% to the end of the system variable Path.
  • Linux - export PATH=$M2:$PATH
  • Mac - export PATH=$M2:$PATH
Step 7: Verify the Maven installation.
o Now open the console and execute the following mvn command:
  • Windows - Open Command Console - c:\> mvn --version
  • Linux - Open Command Terminal - $ mvn --version
  • Mac - Open Terminal - machine:~ joseph$ mvn --version
o Finally, verify the output of the above commands, which should be as follows:
  • Windows:
    Apache Maven 3.8.4 (9b656c72d54e5bacbed989b64718c159fe39b537)
    Maven home: C:\Program Files\Apache Software Foundation\apache-maven-3.8.4
    Java version: 11.0.11, vendor: Oracle Corporation, runtime: C:\Program Files\Java\jdk11.0.11
    Default locale: en_IN, platform encoding: Cp1252
    OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
  • Linux:
    Apache Maven 3.8.4 (9b656c72d54e5bacbed989b64718c159fe39b537)
    Java version: 11.0.11
    Java home: /usr/local/java-current/jre
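Steps 5 and 6 are plain string manipulation on environment variables. The sketch below only mirrors that manipulation in memory; the paths are the example values from the tables above, not probed from a real system:

```python
# Example starting state (Windows-style), purely illustrative.
env = {
    "M2_HOME": r"C:\Program Files\Apache Software Foundation\apache-maven-3.8.4",
    "Path": r"C:\Windows\system32",
}
# Step 5: M2 = %M2_HOME%\bin
env["M2"] = env["M2_HOME"] + r"\bin"
# Step 6: append ;%M2% to the end of Path
env["Path"] = env["Path"] + ";" + env["M2"]

print(env["Path"])
```

After these two assignments the shell can locate `mvn` because the Maven bin directory is the last entry on the search path.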
5.4 MAVEN POM BUILDS (pom.xml)
• POM stands for Project Object Model. It is the fundamental unit of work in Maven. It is an XML file that resides in the base directory of the project as pom.xml.
• The POM contains information about the project and various configuration details used by Maven to build the project(s).
• POM also contains the goals and plugins. While executing a task or goal, Maven looks for the POM in the current directory. It reads the POM, gets the needed configuration information, and then executes the goal. Some of the configuration that can be specified in the POM are the following:
o project dependencies
o plugins
o goals
o build profiles
o project version
o developers
o mailing list
• Before creating a POM, we should first decide the project group (groupId), its name (artifactId) and its version, as these attributes help in uniquely identifying the project in the repository.
POM Example:
<project xmlns = "http://maven.apache.org/POM/4.0.0"
    xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation = "http://maven.apache.org/POM/4.0.0
    http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.companyname.project-group</groupId>
    <artifactId>project</artifactId>
    <version>1.0</version>
</project>
• It should be noted that there should be a single POM file for each project.
• All POM files require the project element and three mandatory fields: groupId, artifactId, version.
• Projects notation in the repository is groupId:artifactId:version.

Table 5.1: Minimal requirements for a POM
Sr. No. - Node and Description
1. Project root: This is the project root tag. You need to specify the basic schema settings such as the apache schema and w3.org specification.
2. Model version: Model version should be 4.0.0.
3. groupId: This is an Id of the project's group. This is generally unique amongst an organization or a project. For example, a banking group com.company.bank has all bank related projects.
4. artifactId: This is an Id of the project. This is generally the name of the project. For example, consumer-banking. Along with the groupId, the artifactId defines the artifact's location within the repository.
5. version: This is the version of the project. Along with the groupId, it is used within an artifact's repository to separate versions from each other. For example, com.company.bank:consumer-banking:1.0 and com.company.bank:consumer-banking:1.1.

Command: mvn help:effective-pom
Create a pom.xml in any directory on your computer. Use the content of the above mentioned example POM.

5.5 MAVEN BUILD LIFE CYCLE
• A Build Lifecycle is a well-defined sequence of phases, which define the order in which the goals are to be executed. Here a phase represents a stage in the life cycle. As an example, a typical Maven Build Lifecycle consists of the following sequence of phases:

Table 5.2: Phases in Maven Build Life Cycle
Phase - Handles - Description
Prepare Resources - Resource copying - Resource copying can be customized in this phase.
Validate - Validating the information - Validates if the project is correct and if all necessary information is available.
Compile - Compilation - In this phase, source code compilation is done.
Test - Testing - Tests the compiled source code suitable for the testing framework.
Package - Packaging - This phase creates the JAR/WAR package as mentioned in the packaging in POM.xml.
Install - Installation - This phase installs the package in the local/remote maven repository.
Deploy - Deploying - Copies the final package to the remote repository.
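A key property of the sequence in Table 5.2 is that invoking any phase also runs every phase before it. A sketch of that ordering rule, using the seven high-level names from the table lowercased (these labels are the table's simplification, not Maven's full internal phase identifiers):

```python
# High-level phases from Table 5.2, in build order (illustrative labels).
BUILD_PHASES = ["prepare-resources", "validate", "compile", "test",
                "package", "install", "deploy"]

def phases_up_to(target: str) -> list[str]:
    """Running a phase executes it and every phase listed before it."""
    return BUILD_PHASES[:BUILD_PHASES.index(target) + 1]

print(phases_up_to("package"))
```

This is why `mvn install` implies compiling, testing and packaging: they are simply earlier entries in the same ordered list.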
• There are always pre and post phases to register goals, which must run prior to, or after, a particular phase.
• When Maven starts building a project, it steps through a defined sequence of phases and executes goals, which are registered with each phase.
• Maven has the following three standard lifecycles:
1. clean
2. default (or build)
3. site
• A goal represents a specific task which contributes to the building and managing of a project. It may be bound to zero or more build phases. A goal not bound to any build phase could be executed outside of the build lifecycle by direct invocation.
• The order of execution depends on the order in which the goal(s) and the build phase(s) are invoked. For example, consider the command below. The clean and package arguments are build phases, while dependency:copy-dependencies is a goal.
mvn clean dependency:copy-dependencies package
• Here the clean phase will be executed first, followed by the dependency:copy-dependencies goal, and finally the package phase will be executed.
Clean Lifecycle:
• When we execute the mvn post-clean command, Maven invokes the clean lifecycle consisting of the following phases:
o pre-clean
o clean
o post-clean
• Maven's clean goal (clean:clean) is bound to the clean phase in the clean lifecycle. The clean:clean goal deletes the output of a build by deleting the build directory. Thus, when the mvn clean command executes, Maven deletes the build directory.
• We can customize this behavior by mentioning goals in any of the above phases of the clean life cycle.
• In the following example, we will attach the maven-antrun-plugin:run goal to the pre-clean, clean, and post-clean phases. This will allow us to echo text messages displaying the phases of the clean lifecycle.
• You can try running the mvn clean command, which will display pre-clean and clean. Nothing will be executed for the post-clean phase.
Default (or Build) Lifecycle:
This is the primary lifecycle of Maven and is used to build the application. It has the following 21 phases:

Table 5.3: Lifecycle Phases
Sr. No. - Lifecycle Phase and Description
1. validate: Validates whether the project is correct and all necessary information is available to complete the build process.
2. initialize: Initializes build state, for example set properties.
3. generate-sources: Generate any source code to be included in the compilation phase.
4. process-sources: Process the source code. For example, filter any value.
5. generate-resources: Generate resources to be included in the package.
6. process-resources: Copy and process the resources into the destination directory, ready for the packaging phase.
7. compile: Compile the source code of the project.
8. process-classes: Post-process the generated files from compilation. For example, to do bytecode enhancement/optimization on Java classes.
9. generate-test-sources: Generate any test source code to be included in the compilation phase.
10. process-test-sources: Process the test source code. For example, filter any values.
11. test-compile: Compile the test source code into the test destination directory.
12. process-test-classes: Process the generated files from test code file compilation.
13. test: Run tests using a suitable unit testing framework (JUnit is one).
14. prepare-package: Perform any operations necessary to prepare a package before the actual packaging.
15. package: Take the compiled code and package it in its distributable format, such as a JAR, WAR, or EAR file.
16. pre-integration-test: Perform actions required before integration tests are executed. For example, setting up the required environment.

17. integration-test: Process and deploy the package if necessary into an environment where integration tests can be run.
18. post-integration-test: Perform actions required after integration tests have been executed. For example, cleaning up the environment.
19. verify: Run any check-ups to verify the package is valid and meets quality criteria.
20. install: Install the package into the local repository, which can be used as a dependency in other projects locally.
21. deploy: Copies the final package to the remote repository for sharing with other developers and projects.

5.6 MAVEN LOCAL REPOSITORY (.m2)
• In Maven terminology, a repository is a directory where all the project jars, library jars, plugins or any other project-specific artifacts are stored and can be used by Maven easily.
• Maven repositories are of three types. The following diagram will give an idea regarding these three types.
1. Local
2. Central
3. Remote
[Fig. 5.2 (a): Maven Repositories - the local repository on the developer's machine, the organization's internal remote repository, and the central repository on the internet]
1. Local Repository:
• Maven local repository is a folder location on your machine. It gets created when you run any maven command for the first time.
• Maven local repository keeps all your project's dependencies (library jars, plugin jars etc.). When you run a Maven build, Maven automatically downloads all the dependency jars into the local repository. It helps to avoid referencing dependencies stored on a remote machine every time a project is built.
• The local repository by default gets created by Maven in the %USER_HOME% directory. To override the default location, mention another path in the Maven settings.xml file available at the %M2_HOME%\conf directory:
<settings xmlns = "http://maven.apache.org/SETTINGS/1.0.0"
    xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation = "http://maven.apache.org/SETTINGS/1.0.0
    http://maven.apache.org/xsd/settings-1.0.0.xsd">
    <localRepository>C:/MyLocalRepository</localRepository>
</settings>
• When you run a Maven command, Maven will download dependencies to your custom path.
• Maven local repository is located in your local system. It is created by Maven when you run any maven command. By default, the maven local repository is the %USER_HOME%/.m2 directory. For example, C:\Users\SSS IT\.m2.
[Fig. 5.2 (b): Contents of the local repository folder]
Update/Location of Local Repository:
• We can change the location of the maven local repository by changing the settings.xml file. It is located in MAVEN_HOME/conf/settings.xml, for example: E:\apache-maven-3.1.1\conf\settings.xml.
• Let's see the default code of the settings.xml file:
settings.xml
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
    http://maven.apache.org/xsd/settings-1.0.0.xsd">
    <!-- localRepository
     | The path to the local repository maven will use to store artifacts.
     | Default: ${user.home}/.m2/repository
    <localRepository>/path/to/local/repo</localRepository>
    -->
</settings>
• Now change the path to the local repository. After changing the path of the local repository, it will look like this:
settings.xml
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
    http://maven.apache.org/xsd/settings-1.0.0.xsd">
    <localRepository>e:/mavenlocalrepository</localRepository>
</settings>
• Now the path of the local repository is e:/mavenlocalrepository.
2. Central Repository:
• Maven central repository is a repository provided by the Maven community. It contains a large number of commonly used libraries.
• When Maven does not find any dependency in the local repository, it starts searching in the central repository using the URL https://repo1.maven.org/maven2/.
• Main concepts of the Central repository are as follows:
o This repository is managed by the Maven community.
o It is not required to be configured.
o It requires internet access to be searched.
• To browse the content of the central maven repository, the maven community has provided the URL https://search.maven.org/#browse. Using this, a developer can search all the available libraries in the central repository.

5.7 MAVEN GLOBAL REPOSITORY
• Sometimes, Maven does not find a mentioned dependency in the central repository as well. It then stops the build process and outputs an error message to the console. To prevent such a situation, Maven provides the concept of a Remote Repository, which is the developer's own custom repository containing required libraries or other project jars.
• A Maven remote repository is located on the web. Some libraries can be missing from the central repository, such as the JBoss library etc., so we need to define a remote repository in the pom.xml file.
Following is the code to add the jUnit library in the pom.xml file:
pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
    http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.javatpoint.application1</groupId>
    <artifactId>my-application1</artifactId>
    <version>1.0</version>
    <packaging>jar</packaging>
    <name>Maven Quick Start Archetype</name>
    <url>http://maven.apache.org</url>
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.8.2</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>

5.8 GROUP ID, ARTIFACT ID, SNAPSHOT
• The groupId is a parameter indicating the group or individual that created a project, which is often a reversed company domain name.
• The artifactId is the base package name used in the project, and we use the standard archetype.
Example: In order to build a simple Java project, let's run the following command:
mvn archetype:generate \
  -DgroupId=com.baeldung \
  -DartifactId=baeldung \
  -DarchetypeArtifactId=maven-archetype-quickstart \
  -DarchetypeVersion=1.4 \
  -DinteractiveMode=false
• The maven groupId is used to uniquely identify our project from all other projects. The groupId follows the rule of the Java package name, so we can say that it will start with a domain name which was reserved. Maven is not enforcing this rule; there are multiple legacy projects which did not follow the convention and instead use a single-word group id. It is very difficult to get a single-word group ID approved for inclusion in the central repository of maven.
• While using the maven group id, we can create multiple subgroups as per our requirement. The best way to determine the granularity of the group ID is to use the structure of the project. The project configuration is done by using the project object model, which is represented by a file named pom.xml. The pom describes the dependencies which are managed by the project, and it is also used in the plugin configuration for building the software. The pom XML file also defines the relationship between multi-module projects.
Key Points:
• The pom.xml is the maven default XML; all poms inherit from the default or parent. This pom is nothing but the default pom which was inherited by default.
• Maven groupId uses the default pom for executing the relevant goal which was defined in a maven groupId.
Maven GroupId Naming:
At the time of working with maven groupId, the important thing about the class files is that we don't need to pick the name for them; Maven will take their name automatically from a 1:1 mapping from the Java file, so it is very simple for us. For defining the maven groupId naming, we need to follow the below steps:
1. In this step, we are creating the template of the project in the Spring Initializr. The below figure shows the template of the maven groupId naming project as follows:
   Group name - com.groupid
   Artifact - maven_groupid
   Name - maven_groupid
   Packaging - jar
   Java version - 8
   [Fig. 5.3 (a): Spring Initializr project template]
2. After creating the template of the project, we are now extracting the file of the project and opening the same in the tool suite as follows:
   [Fig. 5.3 (b)]
3. In this step, while opening the project in Maven, we are checking the groupId in the pom.xml file as follows:
   [Fig. 5.3 (c)]
4. We are defining the naming conventions in the following example. We are defining the name as maven_groupid as follows:
Code:
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.7.3</version>
    <relativePath/>
</parent>
<groupId>com.groupid</groupId>
<artifactId>maven_groupid</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>maven_groupid</name>
[Fig. 5.3 (d)]
-
Bulld Tool - Maven 5.18
DovOps :MCA [Management Sem. IV] 5.17 Bulld Tool - Mavon
nifferentiate between GroupId and ArtifactId:
Table 5,4
Sr. No. Groupld
Artifactld
jpt b A.a21Tteestncr 1 Groupld is identifying the project
/e.rmsnf Artifactld is the name of the jar
uniquely. a
(ret
without version.
etwettuue'p 2 Groupld has various versions.
tt tartr tdrttt Artifactid does not have any versions.
eke rt r rsltry
)
3. We are defining the Groupld
in the We are defining the Artifactld
in the
wti're6i'
pom.xml file.
pom.xml file.
4. We can choose any name at the
onect i
time We can choose any name at the time
of defining Groupld. of defining Artifactld.
Oe Doece edeg Hety ee
gonut
5. We need to define the name of We
need to define the name of
Fig. 5.3 (d) Groupld in lowercase letters. Artifactld in lowercase letters.
We are changing the name of maven, and the groupid in the following example. 6. Maven is not enforcing the rule by Maven is not
enforcing the rule by
Code: using Groupld. using Artifactld.
<parent> 7. We have defined the Groupld in the We have defined the Artifactid in
<groupId> org.springframework.boot </groupId> the
- plugin section. plugin section.
<artifactId>spring-boot starter-parent</artifactId>
8. The Groupld is nothing but an id of The Artifactid is nothing but an id of
<version>2.7.3</version>
<relativePath/> <!-- lookup parent from repository --) the project. the project.
</parent> 9. The Groupld is nothing but anThe Artifactld is nothing but an
<groupId> com.maven</groupId> element of the pom.xml file. element of pom.xml file.
<artifactId>maven_groupid</artifactId>
10. The Groupld specifies the id of a The Artifactld will specify the if of
<version>0.0.1-SNAPSHOT</version>
project group. the project.
<name>maven_group</name>
<description> Project for maven_groupid</description> SNAPSHOT:
<properties> SNAPSHOT is a special version that indicates a current development copy. Unlike
regular versions, Maven checks for a new SNAPSHOT version in a remote repository
for every build.
every time to
Now data-service team will release SNAPSHOT of its updated code
say data-service: 1.0-SNAPSHOT, replacing an older SNAPSHOT jar.
repository,
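Whether a version is a development copy or a release is decided purely by its suffix. A minimal sketch of that convention (the helper name is this sketch's own):

```python
def is_snapshot(version: str) -> bool:
    """SNAPSHOT versions are re-checked in remote repositories on every build."""
    return version.upper().endswith("-SNAPSHOT")

print(is_snapshot("1.0-SNAPSHOT"), is_snapshot("1.0"))
```

Build tooling uses exactly this distinction: release versions are fetched once and cached forever, while SNAPSHOT versions may be refreshed from the remote repository.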
e-rc.tet 5.9MAVEN DEPENDENCIES
ogrt-oct-tatatKto
eeece Management. Managing
One of the core features of Maven is Dependency
a difficult task once we've to deal with multi-module projects
dependencies is a high degree of
modules/sub-projects). Maven provides
(consisting of hundreds of
control to manage such scenarios.
Transitive Dependencies Discovery: case another
upon other library, say B. In
when a library, say A, depends
It is a case,
to use library B too.
Fig. 5.3 (e) use then that project requires
project C'wants to A,
Maven helps to avoid such requirement to discover all the libraries required. Maven does so by reading the project files (pom.xml) of dependencies, figuring out their dependencies, and so on. We only need to define the direct dependency in each project pom; Maven handles the rest automatically.
With transitive dependencies, the graph of included libraries can quickly grow to a large extent. Cases can arise when there are duplicate libraries. Maven provides a few features to control the extent of transitive dependencies.
Table 5.5: Features to control extent of Transitive Dependencies
Sr. No.  Feature and Description
1.  Dependency Mediation: Determines what version of a dependency is to be used when multiple versions of an artifact are encountered. In the dependency tree, if two dependency versions are at the same depth, then the first declared dependency will be used.
2.  Dependency Management: Directly specify the versions of artifacts to be used when they are encountered in transitive dependencies. For example, Project C can include B as a dependency in its dependency management section and directly control which version of B is to be used when it is ever referenced.
3.  Dependency Scope: Includes dependencies as per the current stage of the build.
4.  Excluded Dependencies: Any transitive dependency can be excluded using the "exclusion" element. As an example, if A depends upon B and B depends upon C, then A can mark C as excluded.
5.  Optional Dependencies: Any transitive dependency can be marked as optional using the "optional" element. For example, A depends upon B and B depends upon C. Now B marked C as optional. Then A will not use C.
Dependency Management:
Usually, we have a set of projects under a common project. In such a case, we can create a common pom having all the common dependencies and then make this pom the parent of the sub-projects' poms. The following example will help you understand the concept.
Fig. 5.4: Dependency Graph (nodes: Root:1.0, Lib1:1.0, Lib2:2.1, Lib3:1.1, App-Core-lib:1.0, App-Data-lib:1.0, App-UI-WAR:1.0)
Following are the elements of the above dependency graph:
o App-UI-WAR depends upon App-Core-lib and App-Data-lib.
o Root is parent of App-Core-lib and App-Data-lib.
o Root defines Lib1, Lib2, Lib3 as dependencies in its dependency section.
When we execute Maven build commands, Maven starts looking for dependency libraries in the following sequence:
Step 1: Search dependency in local repository; if not found, move to Step 2, else perform the further processing.
Step 2: Search dependency in central repository; if not found and remote repository/repositories is/are mentioned, then move to Step 4; else it is downloaded to local repository for future reference.
Step 3: If a remote repository has not been mentioned, Maven simply stops the processing and throws an error (Unable to find dependency).
Step 4: Search dependency in remote repository or repositories; if found, then it is downloaded to local repository for future reference. Otherwise, Maven stops processing and throws an error (Unable to find dependency).
5.10 MAVEN PLUGINS
Maven is actually a plugin execution framework where every task is actually done by plugins. Maven Plugins are generally used to:
create jar file
create war file
compile code files
unit testing of code
create project documentation
create project reports
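As a sketch of how such a plugin is attached to a build, the fragment below configures the Maven compiler plugin inside a pom.xml. The plugin coordinates are the standard Apache ones, but the version and Java levels shown are only examples, not recommendations:

```xml
<!-- Illustrative pom.xml fragment: configuring the compiler plugin.
     Version and Java levels are example values only. -->
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.8.1</version>
            <configuration>
                <source>11</source>   <!-- Java source level (example) -->
                <target>11</target>   <!-- Java bytecode target (example) -->
            </configuration>
        </plugin>
    </plugins>
</build>
```

With such a configuration in place, the compile phase of the default lifecycle uses this plugin to compile the project's sources.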
A plugin generally provides a set of goals, which can be executed using the following syntax:
mvn [plugin-name]:[goal-name]
For example, a Java project can be compiled with the maven-compiler-plugin's compile goal by running the following command:
mvn compiler:compile
Plugin Types:
Maven provides the following two types of Plugins:
Table 5.6: Types of Plugins
Sr. No.  Type and Description
1.  Build Plugins: They execute during the build process and should be configured in the <build/> element of pom.xml.
2.  Reporting Plugins: They execute during the site generation process and they should be configured in the <reporting/> element of the pom.xml.
Table 5.7: Few common Plugins
Sr. No.  Plugin and Description
1.  clean: Cleans up target after the build. Deletes the target directory.
2.  compiler: Compiles Java source files.
3.  surefire: Runs the JUnit unit tests. Creates test reports.
4.  jar: Builds a JAR file from the current project.
5.  war: Builds a WAR file from the current project.
6.  javadoc: Generates Javadoc for the project.
7.  antrun: Runs a set of Ant tasks from any phase of the build.
Summary
Maven is written in Java and is used to build projects written in C#, Scala, Ruby, etc. Based on the Project Object Model (POM), this tool has made the lives of Java developers easier while developing reports, checking builds, and setting up test automation.
Maven emphasizes the simplification and standardization of the building process, taking care of the following:
o Builds
o Documentation
o Dependencies
o Reports
o SCMs
o Distribution
o Releases
o Mailing list
Maven was created to simplify the Jakarta Turbine project building processes. Many of the projects had slightly different ANT files, so Apache developed Maven to handle building multiple projects together, including publishing project information, facilitating team collaboration, deploying projects, and sharing JARs among several projects.
Features of Maven:
o A huge, continuously growing repository of user libraries.
o The ability to set up projects easily, using best practices.
o Dependency management, featuring automatic updating.
o Backwards compatible with previous versions.
o Strong error and integrity reporting.
o Automatic parent versioning.
o Ensures consistent usage across all projects.
o It is extensible, and you can easily write plug-ins using scripting languages or Java.
Check Your Understanding
1. Which of the following command can tell the version of Maven?
(a) mvn --version  (b) maven -version
(c) mvn version  (d) maven --version
2. Which of the following phase in Maven life cycle runs any checks to verify the package is valid and meets quality criteria?
(a) install  (b) verify
(c) integration-test  (d) deploy
3. Which of the following scope specifies that dependency is not required for compilation, but is required during execution?
(a)   (b) compile
(c) runtime  (d) test
4. Which of the following phase in Maven life cycle performs any operations necessary to prepare a package before the actual packaging?
(a) process-test-sources  (b) process-resources
(c) prepare-package  (d) destroy package
5. ______ can manage a project's build, reporting and documentation from a central piece of information.
(a) Maven  (b) PHP
(c) Scala  (d) JAVA
6. In Maven, POM stands for,
(a) Project Object Model  (b) Project Object Method
(c) Process Object Model  (d) Process Object Method
7. Which of the following are the phases of Maven Build Lifecycle?
(a) Validate  (b) Compile
(c) Prepare Resources  (d) All of the above
8. Which of the following is not a type of Maven Repository?
(a) Remote  (b) Local
(c) Central  (d) Dependency
9. Which of the following command removes the target directory with all the build data before starting the build process?
(a) mvn clean  (b) mvn compile
(c) mvn build  (d) mvn site
10. Which one of the following is a naming scheme in which the implicit name for a mock object is the mocked type's name prepended with "mock"?
(a) RetroNamingScheme  (b) JavaReflectionImposteriser
(c) CamelCaseNamingScheme  (d) LastWordNamingScheme
Answers
1. (a)  2. (b)  3. (c)  4. (c)  5. (a)  6. (a)  7. (d)  8. (d)  9. (a)  10. (a)
Practice Questions
Q.I Answer the following questions in short.
1. What is Maven?
2. How to confirm whether Maven is installed or not?
3. What is POM?
4. State the configuration that can be specified in the POM.
5. What is GroupId?
6. What is artifactId?
7. What is Snapshot?
8. What is Super POM?
9. What is Goal?
Q.II Answer the following questions.
1. Explain the phases of the Maven build life cycle.
2. State the minimum requirements of a POM.
3. What features are provided by Maven to control the extent of transitive dependencies?
4. Describe Dependency Management in Maven.
5. Explain the two types of Plugins provided by Maven.
Q.III Write short notes on:
1. Maven Local Repository
2. Central Repository
3. Maven Global Repository
4. Maven Dependencies
Bibliography
Web References:
Chapter 1:
https://www.simplilearn.com
https://www.browserstack.com/guide
https://www.gavstech.com
https://www.atlassian.com
https://www.testsigma.com
https://www.netapp.com/devops-solutions/
https://www.pluralsight.com
https://docs.gitlab.com
https://www.techonthenet.com/linux
https://www.javatpoint.com
https://learn.microsoft.com/en-us/training/modules/introduction-to-devops/
https://www.spiceworks.com/tech/devops/articles/what-is-devops/
https://softobiz.com
Chapter 2:
https://git-scm.com
https://education.github.com
https://www.freecodecamp.org
https://www.atlassian.com/git/tutorials/syncing/git-pull
Chapter 3:
What is Chef? DevOps Tool For Configuration Management (intellipaat.com)
Chef Tutorials: Chef Roles Tutorials and Example (devopsschool.com)
Chapter 4:
https://docs.docker.com
https://docs.aws.amazon.com
Chapter 5:
https://www.tutorialspoint.com/maven