
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

LAB MANUAL

DevOps Lab
24SCL27
TABLE OF CONTENTS

(Each experiment has columns for Date, Marks / 30, and Sign to be filled in.)

PART - A

1. Exploring Git Commands through Collaborative Coding.
2. Implement GitHub Operations using Git.
3. Implement GitLab Operations using Git.
4. Implement BitBucket Operations using Git.
5. Applying CI/CD Principles to Web Development Using Jenkins, Git, and Local HTTP Server.

PART - B

6. Exploring Containerization and Application Deployment with Docker.
7. Applying CI/CD Principles to Web Development Using Jenkins, Git, using Docker Containers.
8. Demonstrate Maven Build Life Cycle.
9. Demonstrating Container Orchestration using Kubernetes.
10. Create the GitHub Account to demonstrate CI/CD pipeline using Cloud Platform.
Experiment No. 1
Title:

Exploring Git Commands through Collaborative Coding.

Objective:

The objective of this experiment is to familiarize participants with essential Git concepts and
commands, enabling them to effectively use Git for version control and collaboration.

Introduction:

Git is a distributed version control system (VCS) that helps developers track changes in their codebase, collaborate with others, and manage different versions of their projects efficiently. It was created by Linus Torvalds in 2005 to address the shortcomings of existing version control systems.

Unlike traditional centralized VCS, where all changes are stored on a central server, Git follows a distributed model. Each developer has a complete copy of the repository on their local machine, including the entire history of the project. This decentralization offers numerous advantages, such as offline work, faster operations, and enhanced collaboration.

Git is a widely used version control system that allows developers to collaborate on projects, track changes, and manage codebase history efficiently. This experiment aims to provide a hands-on introduction to Git and explore various fundamental Git commands. Participants will learn how to set up a Git repository, commit changes, manage branches, and collaborate with others using Git.

Key Concepts:

● Repository: A Git repository is a collection of files, folders, and their historical versions. It
contains all the information about the project's history, branches, and commits.
● Commit: A commit is a snapshot of the changes made to the files in the repository at a
specific point in time. It includes a unique identifier (SHA-1 hash), a message describing the
changes, and a reference to its parent commit(s).

● Branch: A branch is a separate line of development within a repository. It allows developers to work on new features or bug fixes without affecting the main codebase. Branches can be merged back into the main branch when the changes are ready.

● Merge: Merging is the process of combining changes from one branch into another. It
integrates the changes made in a feature branch into the main branch or any other target
branch.

● Pull Request: In Git hosting platforms like GitHub, a pull request is a feature that allows
developers to propose changes from one branch to another. It provides a platform for code
review and collaboration before merging.

● Remote Repository: A remote repository is a copy of the Git repository stored on a server,
enabling collaboration among multiple developers. It can be hosted on platforms like GitHub,
GitLab, or Bitbucket.

Basic Git Commands:

● git init: Initialises a new Git repository in the current directory.

● git clone: Creates a copy of a remote repository on your local machine.

● git add: Stages changes for commit, preparing them to be included in the next commit.

● git commit: Creates a new commit with the staged changes and a descriptive message.

● git status: Shows the current status of the working directory, including tracked and
untracked files.

● git log: Displays a chronological list of commits in the repository, showing their commit
messages, authors, and timestamps.

● git branch: Lists, creates, or deletes branches within the repository.

● git checkout: Switches between branches, commits, or tags. It's used to navigate
through the repository's history.

● git merge: Combines changes from different branches, integrating them into the current
branch.
● git pull: Fetches changes from a remote repository and merges them into the current
branch.

● git push: Sends local commits to a remote repository, updating it with the latest changes.
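The following short session is a minimal sketch of how these commands fit together in practice (the directory name, file name, and commit messages below are just placeholders):

mkdir demo-repo && cd demo-repo
git init                          # create an empty repository
echo "Hello Git" > notes.txt      # create a file to track
git add notes.txt                 # stage the new file
git commit -m "Add notes.txt"     # record the first commit
git checkout -b feature           # create and switch to a feature branch
echo "More notes" >> notes.txt
git commit -am "Update notes.txt" # stage and commit the tracked file
git checkout -                    # switch back to the default branch (master or main)
git merge feature                 # bring the feature changes in
git log --oneline                 # view the history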

Prerequisites:

● Computer with Git installed (https://git-scm.com/downloads)

● Command-line interface (Terminal, Command Prompt, or Git Bash)

Experiment Steps:

Step 1: Setting Up Git Repository

● Open the command-line interface on your computer.

● Navigate to the directory where you want to create your Git repository.

● Run the following commands:

mkdir Experiment1

cd Experiment1

ls -la

git init

This initializes a new Git repository in the current directory.


ls -la

Step 2: Creating and Committing Changes

● Create a new text file named "example.txt" using any text editor.

● Add some content to the "example.txt" file.

touch example.txt

vi example.txt

OR directly:

vi example.txt

ls -la

cat example.txt

● In the command-line interface, run the following commands:

git status

This command shows the status of your working directory, highlighting untracked files.
git add example.txt

This stages the changes of the "example.txt" file for commit.

git status

git commit -m "Added content to example.txt"

This commits the staged changes with a descriptive message.


Step 3: Exploring History

Modify the content of "example.txt"

Run the following commands:

git status

Notice the modified file is shown as "modified".

git diff

This displays the differences between the last commit and the working directory.

git log

This displays a chronological history of commits.
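The output looks something like the following; the commit hash, author, and date shown here are only illustrative and will differ on your machine:

commit 3f2a9c1d8e7b6a5f4c3d2e1f0a9b8c7d6e5f4a3b
Author: Your Name <you@example.com>
Date:   Mon Jan 1 10:00:00 2024 +0530

    Added content to example.txt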


Step 4: Branching and Merging

Check the current branch

git branch

Create a new branch named "feature" and switch to it:

git branch feature

git branch

git checkout feature

git branch

OR shorthand:

git checkout -b feature


● Make changes to the "example.txt" file in the "feature" branch.

vi example.txt

cat example.txt

● Commit the changes in the "feature" branch.

git add example.txt

git commit -m "Modified example.txt by replacing Good Evening by


Good Night"

git status

● Switch back to the "master" branch:

git checkout master

git branch
cat example.txt

● Merge the changes from the "feature" branch into the "master" branch:

git merge feature

cat example.txt

Step 5: Collaborating with Remote Repositories

● Create an account on a Git hosting service like GitHub (https://github.com/).

● Create a new repository on GitHub.

● Link your local repository to the remote repository:

git remote add origin <repository_url>


● Push your local commits to the remote repository:

git push origin master

Troubleshooting Steps:

● Generate an SSH Key:

cd ~/.ssh

ls -la

ssh-keygen -t ed25519 -C "sandy.devops.stuffs@gmail.com"

cd ~/.ssh

ls -la
cat ~/.ssh/id_ed25519.pub

Copy the entire output.

Then:

1. Go to GitHub → Settings → SSH and GPG keys


2. Click "New SSH key"
3. Paste the public key
4. Give it a title like "Ubuntu_DevBox" and click Add SSH key

 Change Your Git Remote from HTTPS to SSH

Check current remotes:

git remote -v

Update the origin URL to SSH:

git remote set-url origin git@github.com:YourUsername/YourRepo.git

Example:

git remote set-url origin git@github.com:SandyDevOpsStuffs/Experiment1.git

Verify:
git remote -v

Test SSH Connection:

ssh -T git@github.com

You should see:

Hi YourUsername! You've successfully authenticated, but GitHub does not provide shell access.

● Push your local commits to the remote repository over SSH:

git push origin master

Go to the GitHub repo and refresh the page to see the file example.txt in the remote repo. Note that it appears under the master branch: we first created the local repo with the default branch master, but GitHub uses main as its default branch. So let us rename the local branch from master to main.

git branch

git branch -m master main

git branch

git fetch origin


git branch --set-upstream-to=origin/main main

git pull --rebase

git push origin main

Now go to GitHub repo and refresh the page to see the file example.txt in the remote repo.
Conclusion:

Through this experiment, participants gained a foundational understanding of Git's essential commands and concepts. They learned how to set up a Git repository, manage changes, explore commit history, create and merge branches, and collaborate with remote repositories. This knowledge equips them with the skills needed to effectively use Git for version control and collaborative software development. To avoid the default-branch mismatch between the local and remote repositories, and the HTTPS/SSH authentication issues we faced above, we can use the steps below:

Step 1: Generate an SSH key pair on local machine as shown above and add the public key to
GitHub.

Step 2: Initialize Git (if not already done)

git init
Step 3: Add and commit your files

git add .

git commit -m "Initial commit"

Step 4: Rename master to main (Git 2.28+ lets you set this by default too)

git branch -m main
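Since Git 2.28 you can also make main the default branch for every new repository you create, which avoids the rename altogether (an optional, one-time configuration):

git config --global init.defaultBranch main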

Step 5: Add your GitHub repo as remote (using SSH if configured)

git remote add origin git@github.com:YourUsername/YourRepo.git

Step 6: Push to GitHub and set upstream

git push -u origin main

Exercise:

1. Explain what version control is and why it is important in software development. Provide
examples of version control systems other than Git.

2. Describe the typical workflow when working with Git, including initializing a repository,
committing changes, and pushing to a remote repository. Use a real-world example to illustrate
the process.

3. Discuss the purpose of branching in Git and how it facilitates collaborative development.
Explain the steps involved in creating a new branch, making changes, and merging it back into
the main branch.

4. What are merge conflicts in Git, and how can they be resolved? Provide a step-by-step guide
on how to handle a merge conflict.

5. Explain the concept of remote repositories in Git and how they enable collaboration among
team members. Describe the differences between cloning a repository and adding a remote.
6. Discuss different branching strategies, such as feature branching and Gitflow. Explain the
advantages and use cases of each strategy.

7. Describe various Git commands and techniques for undoing changes, such as reverting
commits, resetting branches, and discarding uncommitted changes.

8. What are Git hooks, and how can they be used to automate tasks and enforce coding
standards in a Git repository? Provide examples of practical use cases for Git hooks.

9. List and explain five best practices for effective and efficient Git usage in a collaborative
software development environment.

10. Discuss security considerations in Git, including how to protect sensitive information like
passwords and API keys. Explain the concept of Git signing and why it's important.
Experiment No. 2

Title:

Implement GitHub Operations using Git.

Objective:

The objective of this experiment is to guide you through the process of using Git commands to
interact with GitHub, from cloning a repository to collaborating with others through pull
requests.

Introduction:

GitHub is a web-based platform that offers version control and collaboration services for
software development projects. It provides a way for developers to work together, manage
code, track changes, and collaborate on projects efficiently. GitHub is built on top of the Git
version control system, which allows for distributed and decentralised development.

Key Features of GitHub:

 Version Control: GitHub uses Git, a distributed version control system, to track changes
to source code over time. This allows developers to collaborate on projects while
maintaining a history of changes and versions.
 Repositories: A repository (or repo) is a collection of files, folders, and the entire history
of a project. Repositories on GitHub serve as the central place where code and project-
related assets are stored.
 Collaboration: GitHub provides tools for team collaboration. Developers can work
together on the same project, propose changes, review code, and discuss issues within
the context of the project.
 Pull Requests: Pull requests (PRs) are proposals for changes to a repository. They allow
developers to submit their changes for review, discuss the changes, and collaboratively
improve the code before merging it into the main codebase.
 Issues and Projects: GitHub allows users to track and manage project-related issues,
enhancements, and bugs. Projects and boards help organize tasks, track progress, and
manage workflows.
 Forks and Clones: Developers can create copies (forks) of repositories to work on their
own versions of a project. Cloning a repository allows developers to create a local copy
of the project on their machine.
 Branching and Merging: GitHub supports branching, where developers can create
separate lines of development for features or bug fixes. Changes made in branches can
be merged back into the main codebase.
 Actions and Workflows: GitHub Actions enable developers to automate workflows,
such as building, testing, and deploying applications, based on triggers like code pushes
or pull requests.
 GitHub Pages: This feature allows users to publish web content directly from a GitHub
repository, making it easy to create websites and documentation for projects.

Benefits of Using GitHub:

 Collaboration: GitHub facilitates collaborative development by providing tools for code review, commenting, and discussion on changes.
 Version Control: Git's version control features help track changes, revert to previous
versions, and manage different branches of development.
 Open Source: GitHub is widely used for hosting open-source projects, making it easier
for developers to contribute and for users to find and use software.
 Community Engagement: GitHub fosters a community around projects, enabling
interaction between project maintainers and contributors.
 Code Quality: The code review process on GitHub helps maintain code quality and
encourages best practices.
 Documentation: GitHub provides a platform to host project documentation and wikis,
making it easier to document codebases and project processes.
 Continuous Integration/Continuous Deployment (CI/CD): GitHub Actions allows for
automating testing, building, and deploying applications, streamlining the development
process.

Prerequisites:

 Computer with Git installed: https://git-scm.com/downloads


 GitHub account: https://github.com/
 Internet connection

Experiment Steps:

Step 1: Cloning a Repository

 Sign in to your GitHub account.


 Create a public repository Experiment2.
 Click the "Code" button and copy the repository URL (HTTP or SSH).

 Open your terminal or command prompt.


 Navigate to the directory where you want to clone the repository.

ls -la
 Run the following command:

git clone <repository_url>

 Replace <repository_url> with the URL you copied from GitHub.


 This will clone the repository to your local machine.

ls -la
Step 2: Making Changes and Creating a Branch

 Navigate into the cloned repository:

cd Experiment2

ls -la

 Create a new text file named “example.txt” using a text editor.


 Add some content to the “example.txt” file.

vi example.txt

 Save the file and return to the command line.

cat example.txt
 Check the status of the repository:

git status

 Stage the changes for commit:

git add example.txt

git status

 Commit the changes with a descriptive message:

git commit -m "Added content to example.txt"


git status

 Create a new branch named feature:

git branch

git branch feature

git branch

 Switch to the feature branch:

git checkout feature

git branch
Step 3: Pushing Changes to GitHub

 Add the repository URL as a remote named origin

git remote add origin <repository_url>

Note: Prefer the SSH URL over the HTTPS URL, because GitHub no longer supports password authentication over HTTPS.

 Replace <repository_url> with the URL you copied from GitHub.

Note: This throws the error "remote origin already exists" because we already cloned the repo from GitHub, so the origin remote is configured automatically and does not need to be added again.

 Check or verify the current remote URL.

git remote -v

Note: Since the repo was cloned over HTTPS, the origin remote defaults to the HTTPS URL.

 Change remote from HTTPS to SSH:

git remote set-url origin git@github.com:SandyDevOpsStuffs/Experiment2.git

git remote -v
 Test or verify the SSH connection to GitHub.

ssh -T git@github.com

 Push the feature branch to GitHub using SSH:

git push origin feature

 Check your GitHub repository to confirm that the new branch feature is available.
Step 4: Collaborating through Pull Requests

 Create a pull request on GitHub:


o Go to the repository on GitHub.
o Click on "Pull Requests" and then "New Pull Request."

o Choose the base branch (usually main or master) and the compare branch
(feature).

o Review the changes and click "Create Pull Request."


 Review and merge the pull request:
o Add a title and description for the pull request.
o Assign reviewers if needed.
o Once the pull request is approved, merge it into the base branch.
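If the GitHub CLI (gh) happens to be installed and authenticated on your machine, the same pull request can also be opened from the terminal instead of the web UI (a minimal sketch; the title and body text are just examples):

gh pr create --base main --head feature --title "Added content to example.txt" --body "Changes from the feature branch"
gh pr view --web    # open the new pull request in the browser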
Step 5: Resolving Merge Conflicts

To simulate a conflict, repeat the following:

1. Create two branches from main:

git branch

git checkout main

git branch
git checkout -b feature-1

git branch

cat example.txt

echo "Line from feature-1" > example.txt

cat example.txt

git status
git add example.txt

git status

git commit -m "Updated example.txt in feature-1"

git status

git push origin feature-1


git status

 Confirm the feature-1 branch creation in remote repo in GitHub.

While on the main branch, check the file content, which should be as follows:

Welcome to NHCE

Now switch from main to feature-1 and check the file content, which should be as follows:

Line from feature-1

git checkout main

git branch
git checkout -b feature-2

git branch

cat example.txt

Note: Even though we are already on the feature-2 branch, it still shows the file content from the main branch, because we have not modified the file on feature-2 yet.

echo "Line from feature-2" > example.txt

cat example.txt
git status

git add example.txt

git status

git commit -m "Updated example.txt in feature-2"

git status
git push origin feature-2

2. On GitHub, select the feature-1 branch from the branch drop-down. Go to the repo's Settings, add collaborators by entering their email IDs, and ask them to accept the invitation GitHub sends to their inboxes. Then create a pull request (PR) for feature-1, selecting the collaborators as reviewers, and merge it on GitHub.

Note: If you enter the same email ID with which you created your own GitHub account, you will get an error, so another email ID was entered as the reviewer for this demo.
Note: Now you will see the collaborator status as “Pending”.

 Collaborator will receive a mail as below:


 Collaborator should click on “View invitation” in his mail inbox.

 It will take him to GitHub Login page.

Note: It is better to open GitHub for collaborator in another browser.

 After login the collaborator should click on “Accept Invitation”.


 Now you go to your GitHub repo to create a PR. Now you will not see the collaborator
status as “Pending”.
 Now create a PR by adding the collaborator as reviewer.
 Click on “Reviewers” on right side and you will be able to see a collaborator. Select
that collaborator.

 Click on “Create pull request”.


 Now collaborator or reviewer has to review the code and approve the PR.

 Collaborator will see a repo in left panel.


 Reviewer will click on the PR and he can add his reviews.

 Reviewer has to scroll down and click on “Merge pull request”.

 Reviewer has to click on “Confirm merge”.


 Both you and the reviewer will get below same screen on your respective browsers:

 Now you go to your GitHub repo, go to feature-1 branch and see the content of
example.txt. Similarly go to the branch main and see the content of
example.txt.
3. Create a pull request for feature-2 – it will show a conflict.
 Scroll down and click on “Create pull request” and select a reviewer from the right side
panel.
 Scroll down to see the error message of conflict.

4. Resolve conflict locally:

git checkout feature-2

git branch

cat example.txt
git pull origin main # triggers merge conflict

But this will not add conflict markers to example.txt. You are on feature-2 and you ran:

git pull origin main

However, Git did not know how to reconcile the divergent branches, so it aborted the merge. That is why you did not get conflict markers (<<<<<<<) in example.txt.

So run the following command.

git pull --no-rebase origin main

cat example.txt
5. Open example.txt and resolve the conflict:

<<<<<<< HEAD
Line from feature-2
=======
Line from feature-1
>>>>>>> main

Note: When Git cannot automatically merge changes, it marks the conflict in the file as shown
above.

This format means:

Marker Meaning
<<<<<<< HEAD This is your current branch (feature-2)
======= Separator between conflicting changes
>>>>>>> main This is the incoming change from the branch you pulled (main)

Why the Conflict?

Because example.txt was changed in both branches:

 feature-1 added: Line from feature-1


 feature-2 added: Line from feature-2

Git doesn’t know which one to keep, so you decide.

What Should You Do?

You manually edit the file and choose what makes sense:

Option 1: Combine both lines


Line from both feature-1 and feature-2

This is what we usually do in collaborative teams — preserve both changes.

Option 2: Choose one version

Keep only feature-1's version:

Line from feature-1

Or only feature-2's version:

Line from feature-2

Note: I have file content as follows:

<<<<<<< HEAD
Line from feature-2
=======
Line from feature-1
>>>>>>> 7037b0fffb95137d3db043f42114473a03d252cd

Instead of:

<<<<<<< HEAD
Line from feature-2
=======
Line from feature-1
>>>>>>> main

When you saw this in your local file:

<<<<<<< HEAD
Line from feature-2
=======
Line from feature-1
>>>>>>> 7037b0fffb95137d3db043f42114473a03d252cd
That 7037b0f... is a local commit hash (on your machine) that represents the tip of main at
the time Git attempted the merge. It might not match GitHub’s web UI because:

 Your local main and remote main could be slightly different.


 You pulled or merged before pushing some changes.
 GitHub often shows the abbreviated hash of the merge commit (not always the one
being merged into).

Your GitHub Shows:

 main has a commit: 78f544b (from yesterday)


 That commit added: Welcome to NHCE in example.txt
 The parent of that commit: 7176ba2

Here's What's Likely Happening:

 Your local main had the commit 7037b0f... (perhaps a new one made while resolving
feature-1 merge locally).
 GitHub’s main still reflects the last visible pushed commit 78f544b (perhaps from before
or after a rebase or squash merge).

This discrepancy is normal in Git — hashes can differ locally vs. GitHub because of:

 Merge commits
 Rebase operations
 Squash merges
 History rewrites

6. I prefer to replace the content with :

Line from both feature-1 and feature-2

Ensure that you are in feature-2 branch.

git branch
vi example.txt
cat example.txt

7. Finalize the resolution:

git add example.txt

git commit -m "Resolved merge conflict in example.txt"

git push origin feature-2


8. Go back to GitHub and complete the merge of the feature-2 PR. There is nothing for the reviewer to do here, because the merge conflict appeared during PR creation itself, so you resolve it and merge it yourself.

 Click on “Confirm merge”.

 Now go to feature-2 and main and observe the content of example.txt.


Step 6: Syncing Changes

After the pull request is merged, update your local repository:

But before that, let us first check the content of the file in each branch:

git branch

cat example.txt
git checkout feature

cat example.txt

git checkout feature-1

cat example.txt

git checkout main

cat example.txt
Now update your local repository.

git checkout main

git branch

git pull origin main


cat example.txt

Now your main branch is updated with the latest code.

Conclusion:

This experiment provided you with practical experience in performing GitHub operations using
Git commands. You learned how to clone repositories, make changes, create branches, push
changes to GitHub, collaborate through pull requests, and synchronize changes with remote
repositories. You also explored how to resolve merge conflicts, a common challenge in
collaborative development.

Questions:

1. Explain the difference between Git and GitHub.


2. What is a GitHub repository? How is it different from a Git repository?
3. Describe the purpose of a README.md file in a GitHub repository.
4. How do you create a new repository on GitHub? What information is required during
the creation process?
5. Define what a pull request (PR) is on GitHub. How does it facilitate collaboration among
developers?
6. Describe the typical workflow of creating a pull request and having it merged into the
main branch.
7. How can you address and resolve merge conflicts in a pull request?
8. Explain the concept of forking a repository on GitHub. How does it differ from cloning a
repository?
9. What is the purpose of creating a local clone of a repository on your machine? How is it
done using Git commands?
10. Describe the role of GitHub Issues and Projects in managing a software development
project. How can they be used to track tasks, bugs, and enhancements?
Experiment No. 3
Title:

Implement GitLab Operations using Git.

Objective:

The objective of this experiment is to guide you through the process of using Git commands to interact with GitLab, from creating a repository to collaborating with others through merge requests.

Introduction:

GitLab is a web-based platform that offers a complete DevOps lifecycle toolset, including version control, continuous integration/continuous deployment (CI/CD), project management, code review, and collaboration features. It provides a centralized place for software development teams to work together efficiently and manage the entire development process in a single platform.

Key Features of GitLab:

● Version Control: GitLab provides version control capabilities using Git, allowing developers to track changes to source code over time. This enables collaboration, change tracking, and code history maintenance.

● Repositories: Repositories on GitLab are collections of files, code, documentation, and assets related to a project. Each repository can have multiple branches and tags, allowing developers to work on different features simultaneously.

● Continuous Integration/Continuous Deployment (CI/CD): GitLab offers robust CI/CD capabilities. It automates the building, testing, and deployment of code changes, ensuring that software is delivered rapidly and reliably.

● Merge Requests: Merge requests in GitLab are similar to pull requests in other platforms. They enable developers to propose code changes, collaborate, and get code reviewed before merging it into the main codebase.

● Issues and Project Management: GitLab includes tools for managing project tasks, bugs, and enhancements. Issues can be assigned, labeled, and tracked, while project boards help visualize and manage work.

● Container Registry: GitLab includes a container registry that allows users to store and manage Docker images for applications.

● Code Review and Collaboration: Built-in code review tools facilitate collaboration among team members. Inline comments, code discussions, and code snippets are integral parts of this process.

● Wiki and Documentation: GitLab provides a space for creating project wikis and documentation, ensuring that project information is easily accessible and well-documented.

● Security and Compliance: GitLab offers security scanning, code analysis, and compliance features to help identify and address security vulnerabilities and ensure code meets compliance standards.

● GitLab Pages: Similar to GitHub Pages, GitLab Pages lets users publish static websites directly from a GitLab repository.


Benefits of Using GitLab:

● End-to-End DevOps: GitLab offers an integrated platform for the entire software development and delivery process, from code writing to deployment.

● Simplicity: GitLab provides a unified interface for version control, CI/CD, and project management, reducing the need to use multiple tools.

● Customizability: GitLab can be self-hosted on-premises or used through GitLab's cloud service. This flexibility allows organizations to choose the hosting option that best suits their needs.

● Security: GitLab places a strong emphasis on security, with features like role-based access control (RBAC), security scanning, and compliance checks.

● Open Source and Enterprise Versions: GitLab offers both a free, open-source Community Edition and a paid, feature-rich Enterprise Edition, making it suitable for individual developers and large enterprises alike.
Prerequisites:

● Computer with Git installed (https://git-scm.com/downloads)

● GitLab account (https://gitlab.com/)

● Internet connection

Experiment Steps:

Step 1: Creating a Repository

● Sign in to your GitLab account.

● Click the "New" button to create a new project.

● Choose a project name, visibility level (public, private), and other settings.
● Click "Create project."

Step 2: Cloning a Repository

● Open your terminal or command prompt.


● Add your SSH key (if you already have an SSH key pair on your local machine; otherwise run the ssh-keygen command to create one). Then follow the steps below.

ls -la ~/.ssh

cat ~/.ssh/id_ed25519.pub

Go to GitLab → Preferences → SSH Keys


(or open: https://gitlab.com/-/profile/keys)
Paste the key and click Add key.
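To verify that the key was added correctly, test the SSH connection to GitLab; you should see a welcome message containing your username:

ssh -T git@gitlab.com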

● Navigate to the directory where you want to clone the repository.


● Copy the repository URL from GitLab.
● Run the following command:

git clone <repository_url>

● Replace <repository_url> with the URL you copied from GitLab.

● This will clone the repository to your local machine.

ls -la
Step 3: Making Changes and Creating a Branch

● Navigate into the cloned repository:

Syntax: cd <repository_name>

cd sandy.devops.stuffs-Experiment3

ls -la

● Create a new text file named "example.txt" using a text editor.

● Add some content to the "example.txt" file.

● Save the file and return to the command line.

● Check the status of the repository:

git status
● Stage the changes for commit:

git add example.txt

git status

● Commit the changes with a descriptive message:

git commit -m "Added content to example.txt"

git status
● Create a new branch named "feature":

git branch

git branch feature

git branch

● Switch to the "feature" branch:

git checkout feature

git branch

Step 4: Pushing Changes to GitLab

● Add the repository URL as a remote named origin

git remote add origin <repository_url>

● Replace <repository_url> with the URL you copied from GitLab.


git remote add origin git@gitlab.com:sandy.devops.stuffs-group1/sandy.devops.stuffs-Experiment3.git

Note: Since we already cloned the repo, this command fails with "error: remote origin already exists."; the origin remote is already configured.

● Push the "feature" branch to GitLab:

Right now there is only the main branch in GitLab.

git push origin feature

● Check your GitLab repository to confirm that the new branch "feature" is available.
Step 5: Collaborating through Merge Requests

1. Create a merge request on GitLab:

 Go to the repository on GitLab.


 Click on "Merge Requests" and then "New Merge Request."
 Choose the source branch ("feature") and the target branch ("main" or "master").

 Review the changes and click "Submit merge request."

2. Review and merge the merge request:

 Add a title and description for the merge request.


 Assign reviewers if needed.
 Once the merge request is approved, merge it into the target branch.
Step 6: Syncing Changes

● After the merge request is merged, update your local repository:

git branch

git checkout main

git branch

git pull origin main

cat example.txt

git checkout feature

git branch

cat example.txt
Conclusion:

This experiment provided you with practical experience in performing GitLab operations using
Git commands. You learned how to create repositories, clone them to your local machine, make
changes, create branches, push changes to GitLab, collaborate through merge requests, and
synchronize changes with remote repositories. These skills are crucial for effective collaboration
and version control in software development projects using GitLab and Git.

Questions/Exercises:

1. What is GitLab, and how does it differ from other version control platforms?

2. Explain the significance of a GitLab repository. What can a repository contain?

3. What is a merge request in GitLab? How does it facilitate the code review process?

4. Describe the steps involved in creating and submitting a merge request on GitLab.

5. What are GitLab issues, and how are they used in project management?

6. Explain the concept of a GitLab project board and its purpose in organizing tasks.

7. How does GitLab address security concerns in software development? Mention

some security-related features.

8. Describe the role of compliance checks in GitLab and how they contribute to

maintaining software quality.


Experiment No. 4
Title:

Implement BitBucket Operations using Git.

Objective:

The objective of this experiment is to guide you through the process of using Git commands to
interact with Bitbucket, from creating a repository to collaborating with others through pull
requests.

Introduction:

Bitbucket is a web-based platform designed to provide version control, source code management, and collaboration tools for software development projects. It is widely used by teams and individuals to track changes in code, collaborate on projects, and streamline the development process. Bitbucket offers Git and Mercurial as version control systems and provides features to support code collaboration, continuous integration/continuous deployment (CI/CD), and project management.

Key Features of BitBucket:

● Version Control: Bitbucket supports both Git and Mercurial version control systems, allowing
developers to track changes, manage code history, and work collaboratively on projects.

● Repositories: In Bitbucket, a repository is a container for code, documentation, and other project assets. It houses different branches, tags, and commits that represent different versions of the project.

● Collaboration: Bitbucket enables team collaboration through features like pull requests, code
reviews, inline commenting, and team permissions. These tools help streamline the process of
merging code changes.

● Pull Requests: Pull requests in Bitbucket allow developers to propose and review code
changes before they are merged into the main codebase. This process helps ensure code
quality and encourages collaboration.
● Code Review: Bitbucket provides tools for efficient code review, allowing team members to
comment on specific lines of code and discuss changes within the context of the code itself.

● Continuous Integration/Continuous Deployment (CI/CD): Bitbucket integrates with CI/CD pipelines, automating processes such as building, testing, and deploying code changes to various environments.

● Project Management: Bitbucket offers project boards and issue tracking to help manage
tasks, track progress, and plan project milestones effectively.

● Bitbucket Pipelines: This feature allows teams to define and automate CI/CD pipelines directly
within Bitbucket, ensuring code quality and rapid delivery.

● Access Control and Permissions: Bitbucket allows administrators to define user roles,
permissions, and access control settings to ensure the security of repositories and project
assets.

Benefits of Using BitBucket:

 Version Control: Bitbucket's integration with Git and Mercurial provides efficient version
control and code history tracking.

● Collaboration: The platform's collaboration tools, including pull requests and code reviews,
improve code quality and facilitate team interaction.

● CI/CD Integration: Bitbucket's integration with CI/CD pipelines automates testing and
deployment, resulting in faster and more reliable software delivery.

● Project Management: Bitbucket's project management features help teams organize tasks,
track progress, and manage milestones.

● Flexibility: Bitbucket offers both cloud-based and self-hosted options, providing flexibility to
choose the deployment method that suits the organization's needs.

● Integration: Bitbucket integrates with various third-party tools, services, and extensions,
enhancing its functionality and extending its capabilities.

Prerequisites:

● Computer with Git installed (https://git-scm.com/downloads)


● Bitbucket account (https://bitbucket.org/)

● Internet connection

Experiment Steps:

Step 1: Creating a Repository

● Sign in to your Bitbucket account.

● Click the "Create" button to create a new repository.


● Choose a repository name, visibility (public or private), and other settings.

● Click "Create repository."


Step 2: Cloning a Repository

● Open your terminal or command prompt.

● Navigate to the directory where you want to clone the repository.

● Copy the repository URL from Bitbucket.


● Run the following command:

git clone <ssh_repository_url>

● Replace <repository_url> with the URL you copied from Bitbucket.

● This will clone the repository to your local machine.


Step 3: Making Changes and Creating a Branch

● Navigate into the cloned repository:

cd <repository_name>

ls -la

● Create a new text file named "example.txt" using a text editor.

● Add some content to the "example.txt" file.

● Save the file and return to the command line.


● Check the status of the repository:

git status

● Stage the changes for commit:

git add example.txt

git status

● Commit the changes with a descriptive message:

git commit -m "Added content to example.txt"


git status

● Create a new branch named "feature":

git branch

git branch feature

● Switch to the "feature" branch:

git checkout feature

git branch

Step 4: Pushing Changes to Bitbucket

● Add the repository URL as a remote named origin

git remote add origin <ssh_repository_url>

● Replace <repository_url> with the URL you copied from Bitbucket.

NOTE: We get the error "remote origin already exists" because we have already cloned the repo from Bitbucket.

 Add SSH key to Bitbucket.

cd

cd .ssh

ls -la

cat id_ed25519.pub

 Copy the content of id_ed25519.pub and go to https://bitbucket.org/account/settings/ssh-keys/

 Click "Add key"

 Paste your public key (copied from the previous step)


 Give it a meaningful Label (e.g., Ubuntu-Experiment4)
 Save

 Test the connection

ssh -T git@bitbucket.org

● Push the "feature" branch to Bitbucket:

git push origin feature

● Check your Bitbucket repository to confirm that the new branch "feature" is available.
Step 5: Collaborating through Pull Requests

1. Create a pull request on Bitbucket:

 Go to the repository on Bitbucket.


 Click on "Create pull request."

 Choose the source branch ("feature") and the target branch ("main" or "master").
 Review the changes and click "Create pull request."
2. Review and merge the pull request:

 Add a title and description for the pull request.


 Assign reviewers if needed.
 Once the pull request is approved, merge it into the target branch.
Step 6: Syncing Changes

● After the pull request is merged, update your local repository:

git branch

git checkout main


git branch

git pull origin main

 Now go to Bitbucket to see the merged code.


Conclusion:

This experiment provided you with practical experience in performing Bitbucket operations
using Git commands. You learned how to create repositories, clone them to your local machine,
make changes, create branches, push changes to Bitbucket, collaborate through pull requests,
and synchronise changes with remote repositories. These skills are essential for effective
collaboration and version control in software development projects using Bitbucket and Git.

Questions/Exercises:

Q.1 What is Bitbucket, and how does it fit into the DevOps landscape?

Q.2 Explain the concept of branching in Bitbucket and its significance in collaborative
development.

Q.3 What are pull requests in Bitbucket, and how do they facilitate code review and
collaboration?

Q.4 How can you integrate code quality analysis and security scanning tools into Bitbucket's
CI/CD pipelines?

Q.5 What are merge strategies in Bitbucket, and how do they affect the merging process during
pull requests?
Experiment No. 5
Title: Applying CI/CD Principles to Web Development Using Jenkins, Git, and Local HTTP Server.

Objective:

To set up a basic CI/CD pipeline using Jenkins, Git, and a local HTTP server (Apache or Nginx) to
automatically deploy a web application when code is pushed to the repository.

Introduction:

Continuous Integration and Continuous Deployment (CI/CD) is a critical practice in modern software development, allowing teams to automate the building, testing, and deployment of applications. This process ensures that software updates are consistently and reliably delivered to end-users, leading to improved development efficiency and product quality. In this context, this introduction sets the stage for an exploration of how to apply CI/CD principles specifically to web development using Jenkins, Git, and a local HTTP server. We will discuss the key components and concepts involved in this process.

Key Components:

● Jenkins: Jenkins is a widely used open-source automation server that helps automate various
aspects of the software development process. It is known for its flexibility and extensibility and
can be employed to create CI/CD pipelines.
● Git: Git is a distributed version control system used to manage and track changes in source
code. It plays a crucial role in CI/CD by allowing developers to collaborate, track changes, and
trigger automation processes when code changes are pushed to a repository.
● Local HTTP Server: A local HTTP server is used to host and serve web applications during
development. It is where your web application can be tested before being deployed to
production servers.

CI/CD Principles:

● Continuous Integration (CI): CI focuses on automating the process of integrating code changes into a shared repository frequently. It involves building and testing the application each time code is pushed to the repository to identify and address issues early in the development cycle.
● Continuous Deployment (CD): CD is the practice of automatically deploying code changes to
production or staging environments after successful testing. CD aims to minimize manual
intervention and reduce the time between code development and its availability to end-users.

The CI/CD Workflow:

● Code Changes: Developers make changes to the web application's source code locally.
● Git Repository: Developers push their code changes to a Git repository, such as GitHub or
Bitbucket.
● Webhook: A webhook is configured in the Git repository to notify Jenkins whenever changes
are pushed.
● Jenkins Job: Jenkins is set up to listen for webhook triggers. When a trigger occurs, Jenkins
initiates a CI/CD pipeline.
● Build and Test: Jenkins executes a series of predefined steps, which may include building the
application, running tests, and generating artifacts.
● Deployment: If all previous steps are successful, Jenkins deploys the application to a local
HTTP server for testing.
● Verification: The deployed application is tested locally to ensure it functions as expected.
● Optional Staging: For more complex setups, there might be a staging environment where the
application undergoes further testing before reaching production.
● Production Deployment: If the application passes all tests, it can be deployed to the
production server.

Benefits of CI/CD in Web Development:


● Rapid Development: CI/CD streamlines development processes, reducing manual tasks and
allowing developers to focus on writing code.
● Improved Quality: Automated testing helps catch bugs early, ensuring higher code quality.
● Faster Time to Market: Automated deployments reduce the time it takes to release new
features or updates.
● Consistency: The process ensures that code is built, tested, and deployed consistently,
reducing the risk of errors.

Prerequisites:

 Jenkins installed and running


 Apache2 or Nginx installed
 Git installed
 A basic web application (e.g., HTML/CSS/JS or a small React app)
 GitHub account (or GitLab/Bitbucket)
 Linux environment preferred (Ubuntu/Debian-based)

Experiment Steps:
NOTE: Make sure that the port 8080 is opened in your EC2 instance for Jenkins.

Step 1: Set Up the Web Application and Local HTTP Server (Apache2)

 Web Application Setup

mkdir Experiment5 && cd Experiment5

ls -la

vi index.html

Paste the following content, save and exit.

<h1>Welcome to CI/CD with Jenkins</h1>


cat index.html

 Install and Start Apache2 (or Nginx)

sudo apt update

sudo systemctl status apache2

sudo apt install apache2 -y

sudo systemctl status apache2

 Configure Apache Document Root (Optional)

sudo mkdir -p /var/www/html/webdirectory

# Set ownership to jenkins user so it can copy files there during deployment.

sudo chown -R jenkins:www-data /var/www/html/webdirectory

Test:
Visit http://<Public_IP_of_EC2_Instance>/webdirectory in a browser; it should respond, though the directory will be empty until the first deployment copies files into it.
Step 2: Set Up Git Repository

cd Experiment5

ls -la

git init

ls -la

git add .

git status
git commit -m "Initial commit"

git status

Create a new remote repository on GitHub (or Bitbucket/GitLab):


git remote add origin https://github.com/<your-username>/<repo-name>.git

(If you wish to enter a username and personal access token every time you run git push and git pull commands).

OR
git remote add origin git@github.com:SandyDevOpsStuffs/Experiment5.git

(If you wish to use passwordless authentication).

Create an SSH key pair on the local machine:

ssh-keygen

Add public key to GitHub then push the code.

git push -u origin master


NOTE:

 You initialized your local Git repo, which by default created a branch called master.
 But on GitHub, the default branch is usually main (not master).
 When you run git push -u origin master, it pushes a new master branch to
GitHub, which is now available remotely — but it’s not the default branch there.
 Then if you run git push -u origin main, Git gave this error:

error: src refspec main does not match any

Because you don't have a local branch named main, only master.

Fix Option 1: Set master as the default branch on GitHub (Recommended for this
case).

Since your local branch is master, make GitHub use it:

1. Go to your repo on GitHub:


https://github.com/SandyDevOpsStuffs/Experiment5
2. Navigate to:
Settings → Branches → Default branch
3. Click “Change default branch” → select master.
4. Optionally, delete the empty main branch (if it exists):
o Go to the Branches tab.
o Delete main if it’s unused.

Fix Option 2: Rename your local branch to main

If you want to follow GitHub’s default naming:


# Rename master to main
git branch -m master main

# Push the renamed branch and set upstream


git push -u origin main

# Optionally delete remote master if not needed


git push origin --delete master

Step 3: Install Java

sudo apt install fontconfig openjdk-21-jre

java -version

Step 4: Install and Configure Jenkins

 Install Jenkins (Ubuntu example)


sudo wget -O /etc/apt/keyrings/jenkins-keyring.asc
https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key

echo "deb [signed-by=/etc/apt/keyrings/jenkins-keyring.asc]" \


https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
/etc/apt/sources.list.d/jenkins.list > /dev/null

sudo apt update

sudo apt-get install jenkins

sudo systemctl status jenkins

sudo systemctl start jenkins

sudo systemctl enable jenkins

sudo systemctl status jenkins


Visit http://localhost:8080 OR http://<Public_IP_of_EC2_Instance>:8080 and complete setup using the initial password:

sudo cat /var/lib/jenkins/secrets/initialAdminPassword

Paste it here.
Install recommended plugins and create an admin user.
Step 5: Install Required Jenkins Plugins

Install:

 Git Plugin
 GitHub Integration Plugin
 Pipeline Plugin (optional)
 Any required Authentication Plugins
Step 6: Create and Configure Jenkins Job
 Create Freestyle Project

 Open Jenkins Dashboard → New Item → Freestyle Project → Name it: WebApp-
CICD
Scroll down and click OK.
 In Source Code Management, select Git → add your repository URL

 Use credentials if needed.

 Build Triggers

 Select: GitHub hook trigger for GITScm polling


 Build Step

Choose "Execute Shell"

Enter the following:

#!/bin/bash

echo "Deploying latest code..."

sudo rm -rf /var/www/html/webdirectory

sudo mkdir -p /var/www/html/webdirectory

sudo cp -r * /var/www/html/webdirectory/

echo "Deployment Complete!"


Click Save and Apply.

Step 7: Configure Webhook in GitHub

1. Go to your GitHub repo → Settings → Webhooks


2. Click Add webhook

o Payload URL: http://<your-jenkins-ip>:8080/github-webhook/


o Content type: application/json
NOTE: Each time your EC2 instance restarts public IP will be changed. So, if you
have taken a break or your EC2 instance is restarted then do not forget to edit
the Payload URL accordingly.

o Events: Just the push event


o Add webhook
NOTE: Ensure Jenkins is accessible from GitHub (use ngrok or deploy Jenkins on a public IP for
remote tests).
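Before pushing any code, you can quickly check that the webhook endpoint is reachable from outside (any HTTP response, even an error status, confirms that the port is open; replace the IP with your Jenkins host):

curl -I http://<your-jenkins-ip>:8080/github-webhook/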
Step 8: Trigger the Pipeline

 Edit the sudoers file

Run on your EC2 terminal (not inside Jenkins):

sudo visudo

At the end of the file, add this line:

jenkins ALL=(ALL) NOPASSWD: /bin/rm, /bin/mkdir, /bin/cp

Press Ctrl+O and Ctrl+X to save and exit.

This allows the jenkins user to run rm, mkdir, and cp with sudo without prompting for a
password. This is secure because it's limited to only those commands.
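To confirm the rule took effect, list the sudo privileges granted to the jenkins user:

sudo -l -U jenkins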

 Now confirm that there are no builds yet in Jenkins since this is our first build.
 Now edit index.html, which will be copied from the current directory Experiment5 to /var/www/html/webdirectory during deployment.

vi index.html

Replace the old content with the following new content:


<h1>Version 2 - Updated via CI/CD!</h1>

cat index.html

git status

git add index.html

git status

git commit -m "Update index content"

git status
git push origin master

Confirm that the latest code is pushed.


This should trigger your Jenkins job automatically.
Step 9: Verify the CI/CD Pipeline

 Open browser → Visit http://<Public_IP_of_EC2_Instance>/webdirectory/


 You should see the updated content deployed automatically by Jenkins.
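The same check can be done from a terminal; assuming the deployment succeeded, the response should contain the updated heading:

curl http://<Public_IP_of_EC2_Instance>/webdirectory/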

Conclusion

This experiment demonstrates a simple CI/CD pipeline that:

 Pulls code from GitHub via a webhook


 Builds and deploys to a local HTTP server
 Uses Jenkins as the automation server

This approach forms the base for real-world CI/CD practices and can be extended to support
test automation, Docker, cloud servers, and more.
Experiment No. 6
Title:

Exploring Containerization and Application Deployment with Docker.

Objective:

The objective of this experiment is to provide hands-on experience with Docker containerization and application deployment in a Docker container. By the end of this experiment, you will understand the basics of Docker, how to build Docker images, how to create Docker containers, and how to deploy a simple Java SpringBoot application.

Introduction:

Containerization is a technology that has revolutionized the way applications are developed,
deployed, and managed in the modern IT landscape. It provides a standardised and efficient
way to package, distribute, and run software applications and their dependencies in isolated
environments called containers.

Containerization technology has gained immense popularity, with Docker being one of the most
well-known containerization platforms. This introduction explores the fundamental concepts of
containerization, its benefits, and how it differs from traditional approaches to application
deployment.

Key Concepts of Containerization:

● Containers: Containers are lightweight, stand-alone executable packages that include everything needed to run a piece of software, including the code, runtime, system tools, libraries, and settings. Containers ensure that an application runs consistently and reliably across different environments, from a developer's laptop to a production server.

● Images: Container images are the templates for creating containers. They are read-only and
contain all the necessary files and configurations to run an application. Images are typically built
from a set of instructions defined in a Dockerfile.

● Docker: Docker is a popular containerization platform that simplifies the creation, distribution, and management of containers. It provides tools and services for building, running, and orchestrating containers at scale.
● Isolation: Containers provide process and filesystem isolation, ensuring that applications and
their dependencies do not interfere with each other. This isolation enhances security and
allows multiple containers to run on the same host without conflicts.

Benefits of Containerization:

● Consistency: Containers ensure that applications run consistently across different environments, reducing the "it works on my machine" problem.

● Portability: Containers are portable and can be easily moved between different host
machines and cloud providers.

● Resource Efficiency: Containers share the host operating system's kernel, which makes them
lightweight and efficient in terms of resource utilization.

● Scalability: Containers can be quickly scaled up or down to meet changing application demands, making them ideal for microservices architectures.

● Version Control: Container images are versioned, enabling easy rollback to previous
application states if issues arise.

● DevOps and CI/CD: Containerization is a fundamental technology in DevOps and CI/CD pipelines, allowing for automated testing, integration, and deployment.

Containerization vs. Virtualization:

● Containerization differs from traditional virtualization, where a hypervisor virtualizes an entire operating system (VM) to run multiple applications. In contrast, containers share the host OS kernel, making them more lightweight and efficient.

● Containers start faster and use fewer resources than VMs.

● VMs encapsulate an entire OS, while containers package only the application and its
dependencies.

Prerequisites:

A computer with Docker, Java and Maven installed.


Experiment Steps:

 In this experiment let us containerize and deploy a Java application.


 We are going to have the following folder structure.

springboot-docker-app/
├── src/
│   └── main/
│       └── java/
│           └── com/
│               └── example/
│                   └── demo/
│                       └── DemoApplication.java
├── pom.xml
└── Dockerfile

Step 1: Launch and Connect to EC2 Instance

(If already running, skip to Step 2)

1. Launch an Ubuntu t2.large EC2 instance.


2. Allow ports 22, 8080 (used by Spring Boot).
3. SSH into the instance:

ssh -i your-key.pem ubuntu@<EC2-Public-IP>

Step 2: Install Java, Maven, and Docker

sudo apt update

sudo apt install -y openjdk-17-jdk maven docker.io

sudo systemctl status docker


Press Ctrl+C

sudo usermod -aG docker ubuntu

Logout and SSH again to apply group permissions:

exit

ssh -i your-key.pem ubuntu@<EC2-Public-IP>


Step 3: Create Spring Boot App from Scratch

mkdir springboot-docker-app && cd springboot-docker-app

mkdir -p src/main/java/com/example/demo

cd src/main/java/com/example/demo

Create the main class:

vi DemoApplication.java
Paste the following:

package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.*;

@SpringBootApplication
@RestController
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @GetMapping("/")
    public String home() {
        return "Hello from Dockerized Spring Boot App!";
    }
}

Step 4: Create pom.xml

Go back to project root:

cd ~/springboot-docker-app
Create the pom.xml file:

vi pom.xml

Paste the following:

<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.example</groupId>
<artifactId>demo</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>

<name>springboot-docker-app</name>
<description>Simple Spring Boot Docker App</description>

<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>3.2.5</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>

<properties>
<java.version>17</java.version>
</properties>

<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
</dependencies>

<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>

Step 5: Build the JAR File

ls -la

mvn clean package

ls -la

ls -la ./target/

Expected Output:
target/demo-0.0.1-SNAPSHOT.jar

Step 6: Create Dockerfile

vi Dockerfile

Paste the following:

# Base image
FROM openjdk:17-jdk-slim

# Set workdir
WORKDIR /app

# Copy JAR
COPY target/demo-0.0.1-SNAPSHOT.jar app.jar

# Expose port
EXPOSE 8080

# Run JAR
ENTRYPOINT ["java", "-jar", "app.jar"]

Step 7: Build Docker Image

docker build -t springboot-docker-app .


docker images

Step 8: Run Docker Container

docker run -d -p 8080:8080 springboot-docker-app

docker ps
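If the container is not listed as running or the app does not respond, you can inspect its logs (use the container ID or name shown by docker ps):

docker logs <container-id>
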
Step 9: Test the Application

Open in browser:

http://<EC2-Public-IP>:8080

Expected Output:

Hello from Dockerized Spring Boot App!
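
The same check can also be done from the instance terminal itself, for example with curl:

curl http://localhost:8080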

Conclusion:

In this experiment, you explored containerization and application deployment with Docker by
deploying a Java SpringBoot application in a Docker container. You learned how to create a
Dockerfile, build a Docker image, run a Docker container, and access your Java SpringBoot
application from your host machine. Docker's containerization capabilities make it a valuable
tool for packaging and deploying applications consistently across different environments.

Exercise/Questions:

1. Explain the concept of containerization. How does it differ from traditional virtualization
methods?

2. Discuss the key components of a container. What are images and containers in the context of
containerization?

3. What is Docker, and how does it contribute to containerization? Explain the role of Docker in
building, running, and managing containers.

4. Describe the benefits of containerization for application deployment and management.
Provide examples of scenarios where containerization is advantageous.

5. Explain the concept of isolation in containerization. How do containers provide process and
filesystem isolation for applications?
6. Discuss the importance of container orchestration tools such as Kubernetes in managing
containerized applications. What problems do they solve, and how do they work?

7. Compare and contrast containerization platforms like Docker, containerd, and rkt. What are
their respective strengths and weaknesses?

8. Explain the process of creating a Docker image. What is a Dockerfile, and how does it help in
image creation?

9. Discuss the security considerations in containerization. What measures can be taken to
ensure the security of containerized applications?

10. Explore real-world use cases of containerization in software development and deployment.
Provide examples of industries or companies that have benefited from containerization
technologies.
Experiment No. 7

NOTE: This experiment is an extension of Experiment 5.

Title:

Applying CI/CD Principles to Web Development Using Jenkins, Git, using Docker Containers.

Objective:

The objective of this experiment is to set up a CI/CD pipeline for a web application using
Jenkins, Git, Docker containers, and GitHub webhooks. The pipeline will automatically build,
test, and deploy the web application whenever changes are pushed to the Git repository,
without the need for a pipeline script.

Introduction:

Continuous Integration and Continuous Deployment (CI/CD) principles are integral to modern
web development practices, allowing for the automation of code integration, testing, and
deployment. This experiment demonstrates how to implement CI/CD for web development
using Jenkins, Git, Docker containers, and GitHub webhooks without a pipeline script. Instead,
we'll utilize Jenkins' "GitHub hook trigger for GITScm polling" feature.

In the fast-paced world of modern web development, the ability to deliver high-quality
software efficiently and reliably is paramount. Continuous Integration and Continuous
Deployment (CI/CD) are integral principles and practices that have revolutionized the way
software is developed, tested, and deployed. These practices bring automation, consistency,
and speed to the software development lifecycle, enabling development teams to deliver code
changes to production with confidence.

Continuous Integration (CI):

CI is the practice of frequently and automatically integrating code changes from multiple
contributors into a shared repository. The core idea is that developers regularly merge their
code into a central repository, triggering automated builds and tests.
Key aspects of CI include:

● Automation: CI tools, like Jenkins, Travis CI, or CircleCI, automate the building and testing of
code whenever changes are pushed to the repository.

● Frequent Integration: Developers commit and integrate their code changes multiple times a
day, reducing integration conflicts and catching bugs early.

● Testing: Automated tests, including unit tests and integration tests, are run to ensure that
new code changes do not introduce regressions.

● Quick Feedback: CI provides rapid feedback to developers about the quality and correctness
of their code changes.

Continuous Deployment (CD):

CD is the natural extension of CI. It is the practice of automatically and continuously deploying
code changes to production or staging environments after successful integration and testing.

Key aspects of CD include:

● Automation: CD pipelines automate the deployment process, reducing the risk of human
error and ensuring consistent deployments.

● Deployment to Staging: Code changes are deployed first to a staging environment where
further testing and validation occur.

● Deployment to Production: After passing all tests in the staging environment, code changes
are automatically deployed to the production environment, often with zero downtime.

● Rollbacks: In case of issues, CD pipelines provide the ability to rollback to a previous version
quickly.

Benefits of CI/CD in Web Development:

● Rapid Development: CI/CD accelerates development cycles by automating time-consuming
tasks, allowing developers to focus on coding.

● Quality Assurance: Automated testing ensures code quality, reducing the number of bugs
and regressions.

● Consistency: CI/CD ensures that code is built, tested, and deployed consistently, regardless of
the development environment.
● Continuous Feedback: Developers receive immediate feedback on the impact of their
changes, improving collaboration and productivity.

● Reduced Risk: Automated deployments reduce the likelihood of deployment errors and
downtime, enhancing reliability.

● Scalability: CI/CD can scale to accommodate projects of all sizes, from small startups to large
enterprises.

Prerequisites:

● A computer with Docker installed.

● Jenkins installed and configured (https://www.jenkins.io/download/).

● A web application code repository hosted on GitHub.

Experiment Steps:

Step 1: Set Up the Web Application and Git Repository.

● Create a simple web application or use an existing one. Ensure it can be hosted in a Docker
container.

● Initialize a Git repository for your web application and push it to GitHub.

Step 2: Install and Configure Jenkins

● Install Jenkins on your computer or server following the instructions for your operating
system (https://www.jenkins.io/download/).

● Open Jenkins in your web browser (usually at http://localhost:8080) and complete the initial
setup, including setting up an admin user and installing necessary plugins.

● Configure Jenkins to work with Git by setting up Git credentials in the Jenkins Credential
Manager.

 Make sure that the user jenkins exists.


cat /etc/passwd | grep jenkins

 Add jenkins user to Docker group so that we can run docker commands without sudo.
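
A typical command for this (assuming the default jenkins service user created by the package) is:

sudo usermod -aG docker jenkins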

 Check the status of Jenkins service and restart it to make the changes to be affected.

sudo systemctl status jenkins

sudo systemctl restart jenkins

sudo systemctl status jenkins

 Verify that the user Jenkins can run docker commands without sudo now.
sudo su - jenkins

docker ps

Become ubuntu user after verification.

exit

NOTE: Because of restarting the Jenkins service, you may need to re-login in the UI.

 Create a Dockerfile

FROM nginx:alpine
COPY . /usr/share/nginx/html

Step 3: Create a Jenkins Job

● Create a new Jenkins job using the "Freestyle project" type.

● In the job configuration, specify a name for your job and choose "This project is
parameterized."
● Add a "String Parameter" named GIT_REPO_URL and set its default value to your Git
repository URL.
● Scroll down and set Branches to build -> Branch Specifier to the
working Git branch (ex: */master).
● Scroll down to "Triggers" section and select the "GitHub hook trigger for GITScm
polling" option. This enables Jenkins to listen to GitHub webhook triggers.

Step 4: Configure Build Steps

● Scroll down and go to the "Build Steps" section.

● Add build steps to execute Docker commands for building and deploying the containerized
web application. Use the following commands:

# Remove the existing container if it exists

docker rm --force container1

# Build a new Docker image

docker build -t nginx-image1 .

# Run the Docker container

docker run -d -p 8081:80 --name=container1 nginx-image1


● These commands remove the existing container (if any), build a Docker image named
"nginx-image1," and run a Docker container named "container1" on port 8081.

Save and Apply the configuration.

Step 5: Set Up a GitHub Webhook

● In your GitHub repository, navigate to "Settings" and then "Webhooks."

● Create a new webhook and configure it to send a payload to the Jenkins webhook URL.

It is usually http://jenkins-server/github-webhook/

i.e.

http://<Public_IP_of_EC2_Instance>:8080/github-webhook/

Set the content type to "application/json"


Click on Update webhook at the bottom.

Step 6: Trigger the CI/CD Pipeline

 Now before triggering the pipeline, let us see the current images, containers and job builds
in Jenkins.
Right now, there is no image named nginx-image1 and there is no container named
container1. But still, to be on the safer side, our pipeline will delete container1 if it
exists.

In Jenkins, the most recent build at this point is the 5th build.

● Now make some changes to the file index.html and push the changes as well as the
Dockerfile to your GitHub repository. The webhook will trigger the Jenkins job automatically,
executing the build and deployment steps defined in the job configuration.
ls -la

cat index.html

vi index.html

Now make some changes, save and exit.

cat index.html

git status

git add index.html Dockerfile

git status
git commit -m "Updated Build Steps in Jenkins with docker commands and
committed both index.html and Dockerfile"

git status

git push origin master

● Check the code changes in GitHub.


 Monitor the Jenkins job's progress in the Jenkins web interface.
Step 7: Verify the Deployment

docker images

docker ps

Access your web application by opening a web browser and navigating to http://localhost:8081
(or the appropriate URL if hosted elsewhere like http://<Public_IP_of_EC2Instance>:8081).

Hu hoo...! Here we go!! We deployed the application successfully onto the Docker container
and it is running! Awesome!

NOTE:

 CI part
 CD part
 Hard-coding of the repo URL and alternatives to it.
 Freestyle (less structured, no Groovy) vs Pipeline (Groovy, more structured; Declarative
vs Scripted)
Declarative syntax:

pipeline {
agent any
stages {
stage('Build') {
steps {
echo 'Building...'
}
}
}
}

Scripted syntax:

node {
stage('Build') {
echo 'Building...'
}
}
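
For comparison, the same freestyle build steps used in this experiment could be expressed as a Declarative pipeline roughly like the sketch below (the image and container names are the ones used above; the repository URL and branch are placeholders to replace with your own):

pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                git branch: 'master', url: 'https://github.com/<your-account>/<your-repo>.git'
            }
        }
        stage('Build Image') {
            steps {
                sh 'docker build -t nginx-image1 .'
            }
        }
        stage('Deploy Container') {
            steps {
                // Remove the old container if it exists, then start a new one
                sh 'docker rm --force container1 || true'
                sh 'docker run -d -p 8081:80 --name=container1 nginx-image1'
            }
        }
    }
}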

Conclusion:

This experiment demonstrates how to apply CI/CD principles to web development using
Jenkins, Git, Docker containers, and GitHub webhooks. By configuring Jenkins to listen for
GitHub webhook triggers and executing Docker commands in response to code changes, you
can automate the build and deployment of your web application, ensuring a more efficient and
reliable development workflow.

Exercise / Questions :

1. Explain the core principles of Continuous Integration (CI) and Continuous Deployment (CD) in
the context of web development. How do these practices enhance the software development
lifecycle?

2. Discuss the key differences between Continuous Integration and Continuous Deployment.
When might you choose to implement one over the other in a web development project?
3. Describe the role of automation in CI/CD. How do CI/CD pipelines automate code integration,
testing, and deployment processes?

4. Explain the concept of a CI/CD pipeline in web development. What are the typical stages or
steps in a CI/CD pipeline, and why are they important?

5. Discuss the benefits of CI/CD for web development teams. How does CI/CD impact the speed,
quality, and reliability of software delivery?

6. What role do version control systems like Git play in CI/CD workflows for web development?
How does version control contribute to collaboration and automation?

7. Examine the challenges and potential risks associated with implementing CI/CD in web
development. How can these challenges be mitigated?

8. Provide examples of popular CI/CD tools and platforms used in web development. How do
these tools facilitate the implementation of CI/CD principles?

9. Explain the concept of "Infrastructure as Code" (IaC) and its relevance to CI/CD. How can IaC
be used to automate infrastructure provisioning in web development projects?

10. Discuss the cultural and organisational changes that may be necessary when adopting CI/CD
practices in a web development team. How does CI/CD align with DevOps principles and
culture?
Experiment No. 8
Title: Demonstrate Maven Build Life Cycle

Objective:

The objective of this experiment is to understand and demonstrate the complete Maven Build
Lifecycle, including:

 Maven installation and project creation.


 Understanding source code structure.
 Executing lifecycle phases (compile, test, package, install, deploy).
 Configuring and deploying artifacts to Nexus Repository (Maven Artifactory).
 Emulating real-time Java Developer to DevOps Engineer handover.

Introduction:

Maven is a widely-used build automation and project management tool in the Java ecosystem.
It provides a clear and standardized build lifecycle for Java projects, allowing developers to
perform various tasks such as compiling code, running tests, packaging applications, and
deploying artifacts. This experiment aims to demonstrate the Maven build lifecycle and its
different phases.

Key Maven Concepts:

● Project Object Model (POM): The POM is an XML file named pom.xml that defines a project's
configuration, dependencies, plugins, and goals. It serves as the project's blueprint and is at the
core of Maven's functionality.

● Build Lifecycle: Maven follows a predefined sequence of phases and goals organized into
build lifecycles. These lifecycles include clean, validate, compile, test, package, install, and
deploy, among others.

● Plugin: Plugins are extensions that provide specific functionality to Maven. They
enable tasks like compiling code, running tests, packaging artifacts, and deploying
applications.
● Dependency Management: Maven simplifies dependency management by allowing
developers to declare project dependencies in the POM file. Maven downloads these
dependencies from repositories like Maven Central.

● Repository: A repository is a collection of artifacts (compiled libraries, JARs, etc.) that Maven
uses to manage dependencies. Maven Central is a popular public repository, and organizations
often maintain private repositories.

Maven Build Life Cycle:

The Maven build process is organized into a set of build lifecycles, each comprising a sequence
of phases. Here are the key build lifecycles and their associated phases:

Clean Lifecycle:

● clean: Deletes the target directory, removing all build artifacts.

Default Lifecycle:

● validate: Validates the project's structure.

● compile: Compiles the project's source code.

● test: Runs tests using a suitable testing framework.

● package: Packages the compiled code into a distributable format (e.g., JAR, WAR).

● verify: Runs checks on the package to verify its correctness.

● install: Installs the package to the local repository.

● deploy: Copies the final package to a remote repository for sharing.

Site Lifecycle:

● site: Generates project documentation.


Each phase within a lifecycle is executed in sequence, and the build progresses from one phase
to the next. Developers can customize build behavior by configuring plugins and goals in the
POM file.
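
Each of these phases can also be invoked directly from the command line; running a phase executes all earlier phases of the same lifecycle first. For example, from a project's root directory:

mvn validate        # validate the project structure
mvn compile         # compile the main source code
mvn test            # run unit tests
mvn package         # build the JAR/WAR into target/
mvn install         # install the artifact into the local ~/.m2 repository
mvn deploy          # upload the artifact to the configured remote repository
mvn clean install   # run the clean lifecycle, then the default lifecycle up to install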

Prerequisites:

 AWS EC2 (Ubuntu 22.04)


 OpenJDK 11
 Apache Maven
 Nexus Repository Manager 3 (Artifactory)
 Spring Boot Web Application

Experiment Steps:

Step 1: Launch and Prepare EC2 Instance

1. Go to AWS Console → EC2 → Launch Instance


2. Choose:
o AMI: Ubuntu Server 22.04
o Instance Type: t2.medium or t2.large
o Storage: 20 GB
o Key Pair: Use existing or create a new one
o Security Group:
 Allow SSH (22) – for remote access
 Allow HTTP (80) – for web app access
 Allow Custom TCP (8080, 8081) – for Spring Boot and Nexus

3. Launch the instance


4. SSH into EC2:
ssh -i "your-key.pem" ubuntu@<EC2-PUBLIC-IP>

ls -la

Step 2: Install Java and Maven

sudo apt update && sudo apt install -y openjdk-11-jdk maven git

java -version

OR

java --version

mvn -version

OR

mvn --version

OR
mvn -v

Step 3: Install and Configure Nexus Repository

1. Create a user for Nexus:

sudo useradd -m -s /bin/bash nexus

sudo su - nexus

ls -la

2. Download and install Nexus:

wget https://download.sonatype.com/nexus/3/nexus-3.80.0-06-linux-x86_64.tar.gz

ls -la

tar -xvzf nexus-3.80.0-06-linux-x86_64.tar.gz

ls -la

mv nexus-3.80.0-06 nexus #Rename


3. Create systemd service for Nexus:
Exit to root:

exit

Then:

sudo vi /etc/systemd/system/nexus.service

Paste the following:

[Unit]
Description=Nexus Repository
After=network.target

[Service]
Type=forking
LimitNOFILE=65536
User=nexus
Group=nexus
ExecStart=/home/nexus/nexus/bin/nexus start
ExecStop=/home/nexus/nexus/bin/nexus stop
Restart=on-abort

[Install]
WantedBy=multi-user.target
sudo cat /etc/systemd/system/nexus.service

4. Start Nexus:

sudo chown -R nexus:nexus /home/nexus

sudo systemctl daemon-reload

sudo systemctl status nexus


sudo systemctl enable nexus

sudo systemctl status nexus

sudo systemctl start nexus

sudo systemctl status nexus

Press Ctrl+C to exit.

5. Access Nexus:
Visit in browser:

http://<EC2-PUBLIC-IP>:8081
Click on Login:
User name: admin

Get the password by running the below command:

sudo cat /home/nexus/sonatype-work/nexus3/admin.password

Then change the password.


6. Create a Maven Hosted Repository:

Settings -> Repository -> Create and manage repositories -> Create repository
 Recipe: maven2 (hosted)
 Name: sdm-maven-releases
 Version Policy: Release

 Deployment Policy: Allow Redeploy

 Create repository
Step 4: Create Spring Boot Web Application

mvn archetype:generate \
-DgroupId=com.example \
-DartifactId=SpringBootApp \
-DarchetypeArtifactId=maven-archetype-quickstart \
-DinteractiveMode=false

ls -la

cd SpringBootApp

ls -la

rm -rf src/   # Removing src to create a real-time SpringBoot App

ls -la

mkdir -p src/main/java/com/example

ls -la

vi src/main/java/com/example/App.java

Paste this:

package com.example;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.*;

@SpringBootApplication
@RestController
public class App {

    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }

    @GetMapping("/")
    public String hello() {
        return "Hello from Spring Boot!";
    }
}
cat src/main/java/com/example/App.java
Step 5: Replace pom.xml with Spring Boot Configuration

Edit pom.xml:
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">

<modelVersion>4.0.0</modelVersion>
<groupId>com.example</groupId>
<artifactId>SpringBootApp</artifactId>
<version>1.0.0</version>
<packaging>jar</packaging>

<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.7.5</version>
</parent>

<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
</dependencies>

<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>

<distributionManagement>
<repository>
<id>nexus</id>
<name>Nexus Release Repository</name>
<url>http://<EC2-PUBLIC-IP>:8081/repository/sdm-maven-releases/</url>
</repository>
</distributionManagement>
</project>

NOTE: Each time you restart the EC2 instance, the public IP of the EC2 instance will be changed.
So you need to change the IP address in the <url> section inside
<distributionManagement> section of the pom.xml shown above. Otherwise you will
be not able to access the Nexus. Also replace the repo name sdm-maven-releases with
your repo name.
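
One quick way to make that change without opening the editor is sed (a sketch; both IP values below are placeholders to be replaced with the actual old and new public IPs):

sed -i 's/OLD_EC2_PUBLIC_IP/NEW_EC2_PUBLIC_IP/g' pom.xml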
Step 6: Add Nexus Credentials to Maven Settings

vi ~/.m2/settings.xml

Paste:

<settings>
<servers>
<server>
<id>nexus</id>
<username>admin</username>
<password>YourNexusPassword</password>
</server>
</servers>
</settings>

Step 7: Build, Run, and Deploy the App

1. Build the app:

mvn clean package

ls -la

ls -la target/
2. Run the app:

java -jar target/SpringBootApp-1.0.0.jar

3. Access it in Browser:

http://<EC2-PUBLIC-IP>:8080

Output:
Hello from Spring Boot!

Stop the application by pressing Ctrl+C.

Go back to browser and confirm that the application is not accessible.

4. Deploy the artifact to Nexus:

mvn clean package deploy


# It takes some time to upload .jar file to nexus.

NOTE: If you want to be sure what's happening in the background:

mvn clean package deploy -X

It will show you:

 The exact file being uploaded


 Nexus responses (like 200 OK or 403)
 Retry or failure messages

In browser, go to Nexus:

http://<EC2-PUBLIC-IP>:8081

Navigate to → Browse → sdm-maven-releases → com/example/SpringBootApp


You’ll see the .jar published.
Expand 1.0.0 by clicking on + icon or double click on 1.0.0 directly to see .jar file.
Or else you can click on HTML View and navigate to
com/example/SpringBootApp/1.0.0 to see .jar file.
Conclusion:

This experiment demonstrates the Maven build lifecycle by creating a simple Java
project and executing various Maven build phases. Maven simplifies the build
process by providing a standardized way to manage dependencies, compile code,
run tests, and package applications. Understanding these build phases is essential
for Java developers using Maven in their projects.
Exercise/Questions:

1. What is Maven, and why is it commonly used in software development?

2. Explain the purpose of the pom.xml file in a Maven project.

3. How does Maven simplify dependency management in software projects?

4. What are Maven plugins, and how do they enhance the functionality of Maven?

5. List the key phases in the Maven build lifecycle, and briefly describe what each

phase does.

6. What is the primary function of the clean phase in the Maven build lifecycle?

7. In Maven, what does the compile phase do, and when is it typically executed?

8. How does Maven differentiate between the test and verify phases in the build

lifecycle?

9. What is the role of the install phase in the Maven build lifecycle, and why is it
useful?

10. Explain the difference between a local repository and a remote repository in
the context of Maven.
Experiment No. 9
Title:

Demonstrating Container Orchestration using Kubernetes.

Objective:

The objective of this experiment is to introduce students to container orchestration using
Kubernetes and demonstrate how to deploy a containerized web application. By the end of this
experiment, students will have a basic understanding of Kubernetes concepts and how to use
Kubernetes to manage containers.

Introduction:

Container orchestration is a critical component in modern application deployment, allowing
you to manage, scale, and maintain containerized applications efficiently. Kubernetes is a
popular container orchestration platform that automates many tasks associated with
deploying, scaling, and managing containerized applications. This experiment will demonstrate
basic container orchestration using Kubernetes by deploying a simple web application.

Kubernetes, often abbreviated as K8s, is an open-source container orchestration platform
designed to automate the deployment, scaling, and management of containerized applications.
Developed by Google and later donated to the Cloud Native Computing Foundation (CNCF),
Kubernetes has become the de facto standard for container orchestration in modern cloud-
native application development.

Key Concepts in Kubernetes:

● Containerization: Kubernetes relies on containers as the fundamental unit for packaging and
running applications. Containers encapsulate an application and its dependencies, ensuring
consistency across various environments.

● Cluster: A Kubernetes cluster is a set of machines, known as nodes, that collectively run
containerized applications. A cluster typically consists of a master node (for control and
management) and multiple worker nodes (for running containers).

● Nodes: Nodes are individual machines (virtual or physical) that form part of a Kubernetes
cluster. Nodes run containerized workloads and communicate with the master node to manage
and orchestrate containers.
● Pod: A pod is the smallest deployable unit in Kubernetes. It can contain one or more tightly
coupled containers that share the same network and storage namespace. Containers within a
pod are typically used to run closely related processes.

● Deployment: A Deployment is a Kubernetes resource that defines how to create, update, and
scale instances of an application. It ensures that a specified number of replicas are running at all
times.

● Service: A Service is an abstraction that exposes a set of pods as a network service. It provides
a stable IP address and DNS name for accessing the pods, enabling load balancing and
discovery.

● Namespace: Kubernetes supports multiple virtual clusters within the same physical cluster,
called namespaces. Namespaces help isolate resources and provide a scope for organizing and
managing workloads.

Key Features of Kubernetes:

● Automated Scaling: Kubernetes can automatically scale the number of replicas of an
application based on resource usage or defined metrics. This ensures applications can handle
varying workloads efficiently.

● Load Balancing: Services in Kubernetes can distribute traffic among pods, providing high
availability and distributing workloads evenly.

● Self-healing: Kubernetes monitors the health of pods and can automatically restart or replace
failed instances to maintain desired application availability.

● Rolling Updates and Rollbacks: Kubernetes allows for controlled, rolling updates of
applications, ensuring zero-downtime deployments. If issues arise, rollbacks can be performed
with ease.

● Storage Orchestration: Kubernetes provides mechanisms for attaching storage volumes to
containers, enabling data persistence and sharing.

● Configuration Management: Kubernetes supports configuration management through
ConfigMaps and Secrets, making it easy to manage application configurations.

● Extensibility: Kubernetes is highly extensible, with a vast ecosystem of plugins and
extensions, including Helm charts for packaging applications and custom resources for defining
custom objects.
Kubernetes has become a cornerstone of cloud-native application development, enabling
organisations to build, deploy, and scale containerized applications effectively. Its ability to
abstract away infrastructure complexities, ensure application reliability, and provide consistent
scaling makes it a powerful tool for modern software development and operations.

Prerequisites:

● A computer with Kubernetes installed.

● Docker installed.

Experiment Steps:

NOTE: Launch an EC2 instance of type t2.medium or t2.large. Terminate your instance once you
are done with your experiment to avoid unnecessary billing charges.

Step 1: Create a Dockerized Web Application

● Create a simple web application (e.g., a static HTML page) or use an existing one.

mkdir Experiment9 && cd Experiment9

ls -la

 Make sure Docker is installed.

docker --version
 Install it if not installed.
 Create a simple web application.

vi index.html

Paste the following:

<h1>Welcome to Experiment9</h1>

● Create a Dockerfile to package the web application into a Docker container.

Here's an example of Dockerfile for a simple web server:

vi Dockerfile

Paste the following:

# Use an official Nginx base image.

FROM nginx:latest

# Copy index.html to the default Nginx html directory.

COPY index.html /usr/share/nginx/html/index.html

● Build the Docker image:

docker build -t my-web-app .


docker images

Step 2: Setup a K8s cluster using Minikube

curl -LO "https://dl.k8s.io/release/$(curl -L -s


https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

ls –la
sudo install -o root -g root -m 0755 kubectl
/usr/local/bin/kubectl

kubectl version --client

curl -LO https://github.com/kubernetes/minikube/releases/latest/download/minikube-linux-amd64

ls -la

sudo install minikube-linux-amd64 /usr/local/bin/minikube && rm minikube-linux-amd64
minikube start --driver=docker

OR

minikube start --driver=docker --cpus=2 --memory=4096 --wait-timeout=10m0s --force

NOTE: It takes more than 2 minutes sometimes. Make sure that your instance has 20 to 40 GB
of storage by running the command df -h.

minikube status

kubectl get nodes


 Point Docker to Minikube Daemon

This is very important so your built images are inside Minikube:

eval $(minikube docker-env)

NOTE:

This command changes your shell's Docker context to use the Docker daemon inside the
Minikube VM/container, instead of your host EC2 instance.

So, after you run this:

 Docker images are built inside Minikube's Docker.


 They are available to Kubernetes pods without needing to push to Docker Hub or any
registry.
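
If you later want the shell to talk to the host's Docker daemon again, the variables can be unset (assuming a current Minikube version):

eval $(minikube docker-env --unset)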

Now build the Docker image again inside Minikube's Docker:

docker build -t my-web-app:latest .


docker images

Step 3: Deploy the Web Application with Kubernetes

Create a Kubernetes Deployment YAML file (web-app-deployment.yaml) to deploy the


web application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app-deployment
spec:
  replicas: 3                        # Number of pods to create
  selector:
    matchLabels:
      app: my-web-app                # Label to match pods
  template:
    metadata:
      labels:
        app: my-web-app              # Label assigned to pods
    spec:
      containers:
        - name: my-web-app-container
          image: my-web-app:latest   # Docker image to use
          imagePullPolicy: Never
          ports:
            - containerPort: 80      # Port to expose

Explanation of web-app-deployment.yaml:

● apiVersion: Specifies the Kubernetes API version being used (apps/v1 for Deployments).

● kind: Defines the type of resource we're creating (a Deployment in this case).

● metadata: Contains metadata for the Deployment, including its name.

● spec: Defines the desired state of the Deployment.

● replicas: Specifies the desired number of identical pods to run. In this example, we want
three replicas of our web application.

● selector: Specifies how to select which pods are part of this Deployment. Pods with the label
app: my-web-app will be managed by this Deployment.

● template: Defines the pod template for the Deployment.

● metadata: Contains metadata for the pods created by this template.

● labels: Assigns the label app: my-web-app to the pods created by this template.

● spec: Specifies the configuration of the pods.

● containers: Defines the containers to run within the pods. In this case, we have one container
named my-web-app-container using the my-web-app:latest Docker image.

● ports: Specifies the ports to expose within the container. Here, we're exposing port 80.

Step 4: Deploy the Application


Apply the deployment configuration to your Kubernetes cluster:

kubectl apply -f web-app-deployment.yaml

Step 5: Verify the Deployment

Check the status of your pods:

kubectl get pods
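
Optionally, you can scale the Deployment up or down and watch Kubernetes reconcile the number of pods (standard kubectl commands):

kubectl scale deployment my-web-app-deployment --replicas=5
kubectl get pods
kubectl scale deployment my-web-app-deployment --replicas=3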

Step 6: Access the app

Let’s expose your app using a Service. Run this:

kubectl expose deployment my-web-app-deployment --type=NodePort --port=80

kubectl get svc

Then find the URL to access the app:

minikube service my-web-app-deployment --url

Now access the app using curl in terminal itself as well as in browser:
curl http://192.168.49.2:32178

NOTE: No need to add an inbound rule to security group of EC2 instance to open the port 32178
since we are running app in a pod, not on a node directly.
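
For reference, the kubectl expose command above is roughly equivalent to applying a Service manifest like the following sketch (the Service name and selector match the Deployment created earlier; the nodePort is assigned automatically unless specified):

apiVersion: v1
kind: Service
metadata:
  name: my-web-app-deployment
spec:
  type: NodePort
  selector:
    app: my-web-app        # matches the pod label from the Deployment
  ports:
    - port: 80             # Service port
      targetPort: 80       # container port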

When I try to access the app in a browser on my laptop, I am not able to access it.

Troubleshooting steps:
Because 192.168.49.2 is a private IP internal to my EC2 instance, I am unable to access the
app. It is part of the virtual network Minikube creates using Docker.

My laptop cannot directly reach this internal IP.

So I can access the app via EC2 Public IP with Port Forwarding.

I have to use kubectl port-forward to make it accessible via my EC2's public IP:

Let me add an inbound rule to security group of my EC2 instance to open a port 8085 (8080 is
already being used by some other app in my instance).

Let me forward Port 8085 on EC2 to the Pod’s port 80.

kubectl port-forward deployment/my-web-app-deployment 8085:80

Leave this command running in the foreground.

Now access the app in browser using <Public_IP_of_EC2_Instance>:8085

Still I am unable to access the app.


The reason is that kubectl port-forward binds only to localhost (127.0.0.1), not to my
EC2's public IP, so the forwarded port is not accessible from outside my EC2 machine.

I want the EC2 to directly listen on all interfaces (not just localhost), so I have to run:
kubectl port-forward --address 0.0.0.0 deployment/my-web-app-deployment 8085:80

Leave this command running in the foreground.

Now go back to the browser and refresh the page to see the magic!
Conclusion:

In this experiment, you learned how to create a Kubernetes Deployment for container
orchestration. The web-app-deployment.yaml file defines the desired state of the
application, including the number of replicas, labels, and the Docker image to use. Kubernetes
automates the deployment and scaling of the application, making it a powerful tool for
managing containerized workloads.

Exercise/Questions:

1. Explain the core concepts of Kubernetes, including pods, nodes, clusters, and deployments.
How do these concepts work together to manage containerized applications?

2. Discuss the advantages of containerization and how Kubernetes enhances the orchestration
and management of containers in modern application development.

3. What is a Kubernetes Deployment, and how does it ensure high availability and scalability of
applications? Provide an example of deploying a simple application using a Kubernetes
Deployment.

4. Explain the purpose and benefits of Kubernetes Services. How do Kubernetes Services
facilitate load balancing and service discovery within a cluster?

5. Describe how Kubernetes achieves self-healing for applications running in pods. What
mechanisms does it use to detect and recover from pod failures?

6. How does Kubernetes handle rolling updates and rollbacks of applications without causing
downtime? Provide steps to perform a rolling update of a Kubernetes application.

7. Discuss the concept of Kubernetes namespaces and their use cases. How can namespaces be
used to isolate and organize resources within a cluster?
8. Explain the role of Kubernetes ConfigMaps and Secrets in managing application
configurations. Provide examples of when and how to use them.

9. What is the role of storage orchestration in Kubernetes, and how does it enable data
persistence and sharing for containerized applications?

10. Explore the extensibility of Kubernetes. Describe Helm charts and custom resources, and
explain how they can be used to customize and extend Kubernetes functionality.
Experiment No. 10

Title:

Create the GitHub Account to Demonstrate CI/CD Pipeline using AWS (S3 + EC2 + CodePipeline +
CodeDeploy).

Objective:

To demonstrate Continuous Integration and Continuous Deployment (CI/CD) using GitHub as the
source, AWS S3 as an artifact store, and AWS CodePipeline + CodeDeploy to automatically deploy a
web application to an EC2 instance.

Prerequisites:
 AWS Account with necessary permissions.
 One running EC2 instance (Amazon Linux 2 preferred).
 GitHub account and repository.
 AWS CLI installed and configured on EC2 instance.

Experiment Steps:

Step 1: Create IAM Roles

 Create two IAM Roles. One for the service AWS EC2 and another for the service AWS
CodeDeploy.

 Go to the service “IAM”, select “Roles” in the left panel and click on “Create role” on the right
top.
 First let us create a role “Role_EC2CodeDeploy” for the service EC2.

 To do so ensure that “AWS service” is selected.

 From the dropdown of “Use case”, under “Commonly used services”, select the service or use
case “EC2”.
 Click “Next” and “Next”

 Give a name to the role as “Role_EC2CodeDeploy” and click on “Create role”.

 Either during the role creation or after the role creation we need to add permissions by
searching CodeDeploy in the search bar and selecting the policy
"AmazonEC2RoleforAWSCodeDeploy".

 A new role is created. I am going to use this role on an EC2 machine. This role allows the service
AWS EC2 to access another service AWS CodeDeploy.

 Similarly let us create another role “Role_CodeDeploy” for the service CodeDeploy.

 But this time instead of EC2, select the service or use case “CodeDeploy”.
It will take the permission policy “AWSCodeDeployRole” automatically. I am going to use this role on
CodeDeploy.
Give a name “Role_CodeDeploy” to the role and click on “Create role”.

Now both the roles are created.


Step 2: Create EC2 Instance and Attach the Role “Role_EC2CodeDeploy”

We can launch any number of instances based on our requirements. But I will launch only one instance
“Demo_AWSCodeDeploy” in this example.

I launched an Amazon Linux 2023 AMI t2.micro EC2 instance.

Make sure that you have added the ports 22 and 80 as inbound rules.

Under “Advanced details”, from the IAM instance profile dropdown, select the role
“Role_EC2CodeDeploy” which was created recently for the service EC2.
Under “Advanced details” itself scroll down and in the “User data” section paste the following script to
automatically install all the packages or dependencies immediately after launching the EC2 instance.

#!/bin/bash

sudo yum -y update

sudo yum -y install ruby

sudo yum -y install wget

cd /home/ec2-user

wget https://aws-codedeploy-ap-south-1.s3.ap-south-1.amazonaws.com/latest/install

sudo chmod +x ./install

sudo ./install auto


sudo yum install -y python-pip

sudo pip install awscli

Click on “Launch instance”.

The EC2 instance “Demo_AWSCodeDeploy” is launched and running successfully.

If we click on the instance ID and see the details of the instance then we can see the IAM role
“Role_EC2CodeDeploy” attached.
That is all fine. But I want to try each command individually. So I do not use the user data script.
Connect to the instance through Git Bash or any other CLI of your choice.

First let us update the package lists.

sudo yum -y update

Now let us install necessary packages.

sudo yum -y install ruby


sudo yum -y install wget

Let us create a project directory and navigate to it.

mkdir -p /home/ec2-user/Projects/GitHub_CodeDeploy

cd /home/ec2-user/Projects/GitHub_CodeDeploy
ls -la

Now let us download CodeDeploy agent installer.

wget https://aws-codedeploy-ap-south-1.s3.ap-south-1.amazonaws.com/latest/install

ls -la

Let us make the installer executable.

sudo chmod +x ./install

Let us install the CodeDeploy agent in auto mode.

sudo ./install auto
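
Optionally, confirm that the CodeDeploy agent is running (a standard check on Amazon Linux):

sudo service codedeploy-agent status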


Let us install pip

sudo yum install -y python-pip

Let us install awscli

sudo pip install awscli


Let us confirm the installation of awscli

aws --version

Step 3: Create an Application and Deployment Group under CodeDeploy

Go to the service “CodeDeploy”, select “Applications” in the left side bar and click on “Create
application”.

Give a name “WebsiteForDevOpsStuffs” for the application.

Select “EC2” from Compute platform dropdown.

Click on “Create application”.


An application “WebsiteForDevOpsStuffs” is created.

Click on “Create deployment group”.


Give the name “DevOpsStuffsDeploymentGroup” to the deployment group.

Attach the role "Role_CodeDeploy" to the service "AmazonCodeDeploy" by selecting
"Role_CodeDeploy" from the "Service role" dropdown.

Select “In place” as “Deployment type”.

Select “Amazon EC2 instances” as “Environment configuration”.


In the Tag group, select “Name” and name of the newly created EC2 instance as value from dropdowns.
Leave all other settings as their default values.

Disable “Load balancer”.

Click on “Create deployment group”.


Deployment group “DevOpsStuffsDeploymentGroup” was created successfully.
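
NOTE: For CodeDeploy to know what to copy and run, the GitHub repository must contain an appspec.yml file at its root. A minimal sketch is shown below (assumptions: the repo has an index.html at its root, and the hook scripts scripts/install_dependencies.sh and scripts/start_server.sh are hypothetical helpers that install and start a web server such as Apache httpd):

version: 0.0
os: linux
files:
  - source: /index.html
    destination: /var/www/html/      # default Apache web root
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies.sh   # hypothetical script, e.g. installs httpd
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_server.sh           # hypothetical script, e.g. starts httpd
      timeout: 300
      runas: root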

Step 4: Create a Pipeline and Integrate GitHub with CodePipeline

Go to the left panel, expand "CodePipeline" and click on "Pipelines".


Click on “Create pipeline”, give a name “DevOpsStuffsPipeline” to it, and leave the rest of the things
default. By default it stores the artifacts in S3.

Then click “Next”.


Select GitHub (Version 2) from dropdown, paste the connection link if you already have one or click on
“Connect to GitHub” to create one.
Select repository name and branch name from dropdowns.

Select “No filter” for specifying how you want to trigger the pipeline and leave rest as default.
Click on “Skip build stage” since we are not building the code in this project.

Select “AWS CodeDeploy” from the dropdown for “Deploy Provider”.


Click on “Next”.

It will automatically take Region, select the application and the deployment group from dropdowns.

Click on “Next”.
Review all the things and click on “Create pipeline”.
Now deployment will be started using AWS CodeDeploy. If we had specified the number of EC2
instances as 4 during the launch in Step 2, then the application would have been deployed on all
4 EC2 instances now.

First the code will be checked out from GitHub repo.


After some time, the code or application will be deployed on the EC2 instance or server.

You can click on “View details” to see the summary. As the code will be stored in S3 bucket by default,
you can go to the service “AWS S3” and see the bucket.
Step 5: Access the Application

Let us access the application on port 80 as already we have added the inbound rule 80 in the EC2
instance.

Copy the Public DNS of the EC2 instance and paste it in the browser.

There we go!!

Let us go to GitHub repo, make some minor changes in the file index.html by clicking on Edit/Pencil icon
and commit the changes as follows.
Now go back to Amazon CodePipeline and just refresh the page.

Then go back to the browser and just refresh the page.

There we go!!

AWS CodePipeline has automatically detected the code changes in the GitHub repo and
triggered the pipeline.
Conclusion:

 The app is automatically deployed to EC2 whenever changes are pushed to GitHub.
 This demonstrates a CI/CD pipeline integrating GitHub + AWS S3 + CodePipeline +
CodeDeploy + EC2.

Exercise/Questions:
1. What is the primary purpose of Continuous Integration and Continuous Deployment (CI/CD)
in software development, and how does it benefit development teams using GitHub, GCP, and
AWS?
2. Explain the role of GitHub in a CI/CD pipeline. How does GitHub facilitate version control and
collaboration in software development?
3. What are the key services and offerings provided by Google Cloud Platform (GCP) that are
commonly used in CI/CD pipelines, and how do they contribute to the automation and
deployment of applications?
4. Similarly, describe the essential services and tools offered by Amazon Web Services (AWS)
that are typically integrated into a CI/CD workflow.
5. Walk through the basic steps of a CI/CD pipeline from code development to production
deployment, highlighting the responsibilities of each stage.
6. How does Continuous Integration (CI) differ from Continuous Deployment (CD)? Explain how
GitHub Actions or a similar CI tool can be configured to build and test code automatically.
7. In the context of CI/CD, what is a staging environment, and why is it important in the
deployment process? How does it differ from a production environment?
8. What are the primary benefits of using automation for deployment in a CI/CD pipeline, and
how does this automation contribute to consistency and reliability in software releases?
9. Discuss the significance of monitoring, logging, and feedback loops in a CI/CD workflow. How
do these components help in maintaining and improving application quality and performance?
10. In terms of scalability and flexibility, explain how cloud platforms like GCP and AWS enhance
the CI/CD process, especially when dealing with variable workloads and resource demands.
