DevOps Lab Manual
LAB MANUAL
DevOps Lab (24SCL27)
TABLE OF CONTENTS
Experiment No. | Date | Experiment Title | Marks / 30 | Sign
PART - A
Experiment No. 1
Title: Exploring Git Commands through Collaborative Coding.
Objective:
The objective of this experiment is to familiarize participants with essential Git concepts and
commands, enabling them to effectively use Git for version control and collaboration.
Introduction:
Git is a distributed version control system (VCS) that helps developers track changes in their
codebase, collaborate with others, and manage different versions of their projects efficiently.
It was created by Linus Torvalds in 2005 to address the shortcomings of existing version control
systems.
Unlike traditional centralized VCS, where all changes are stored on a central server, Git
follows a distributed model. Each developer has a complete copy of the repository on their
local machine, including the entire history of the project. This decentralization offers
numerous advantages, such as offline work, faster operations, and enhanced collaboration.
Git is a widely used version control system that allows developers to collaborate on projects,
track changes, and manage codebase history efficiently. This experiment aims to provide a
hands-on introduction to Git and explore various fundamental Git commands. Participants
will learn how to set up a Git repository, commit changes, manage branches, and collaborate
with other developers.
Key Concepts:
● Repository: A Git repository is a collection of files, folders, and their historical versions. It
contains all the information about the project's history, branches, and commits.
● Commit: A commit is a snapshot of the changes made to the files in the repository at a
specific point in time. It includes a unique identifier (SHA-1 hash), a message describing the
changes, and a reference to its parent commit(s).
● Merge: Merging is the process of combining changes from one branch into another. It
integrates the changes made in a feature branch into the main branch or any other target
branch.
● Pull Request: In Git hosting platforms like GitHub, a pull request is a feature that allows
developers to propose changes from one branch to another. It provides a platform for code
review and collaboration before merging.
● Remote Repository: A remote repository is a copy of the Git repository stored on a server,
enabling collaboration among multiple developers. It can be hosted on platforms like GitHub,
GitLab, or Bitbucket.
● git add: Stages changes for commit, preparing them to be included in the next commit.
● git commit: Creates a new commit with the staged changes and a descriptive message.
● git status: Shows the current status of the working directory, including tracked and
untracked files.
● git log: Displays a chronological list of commits in the repository, showing their commit
messages, authors, and timestamps.
● git checkout: Switches between branches, commits, or tags. It's used to navigate
through the repository's history.
● git merge: Combines changes from different branches, integrating them into the current
branch.
● git pull: Fetches changes from a remote repository and merges them into the current
branch.
● git push: Sends local commits to a remote repository, updating it with the latest changes.
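Taken together, the commands above form a short everyday workflow. The following self-contained sketch (scratch directory, illustrative file name and commit message) exercises them end to end:

```shell
# Minimal end-to-end sketch of the commands above, run in a scratch repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q                        # create an empty repository
git config user.email "demo@example.com"
git config user.name  "Demo User"
echo "hello" > example.txt
git status --short                 # shows example.txt as untracked (??)
git add example.txt                # stage the change
git commit -qm "Add example.txt"   # record a snapshot with a message
git log --oneline                  # the new commit appears in the history
```

After the commit, `git status` reports a clean working tree, and `git log` shows the commit with its message, author, and timestamp.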
Prerequisites:
Experiment Steps:
● Navigate to the directory where you want to create your Git repository.
mkdir Experiment1
cd Experiment1
ls -la
git init
● Create a new text file named "example.txt" using any text editor.
touch example.txt
vi example.txt
OR directly:
vi example.txt
ls -la
cat example.txt
git status
This command shows the status of your working directory, highlighting untracked files.
git add example.txt
git status
git status
git diff
This displays the differences between the last commit and the working directory.
git log
git branch feature
git checkout feature
git branch
OR shorthand:
git checkout -b feature
vi example.txt
cat example.txt
git status
git branch
cat example.txt
● Merge the changes from the "feature" branch into the "master" branch:
git checkout master
git merge feature
cat example.txt
Troubleshooting Steps:
cd ~/.ssh
ls -la
cd ~/.ssh
ls -la
cat ~/.ssh/id_ed25519.pub
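If no key pair exists yet, one can be generated as follows. This is a non-interactive sketch: the -f and -N flags are used only so the demo runs unattended; in practice, run plain `ssh-keygen -t ed25519 -C "you@example.com"` and accept the default path ~/.ssh/id_ed25519.

```shell
# Generate an Ed25519 key pair non-interactively into a temporary directory.
keydir=$(mktemp -d)
ssh-keygen -q -t ed25519 -C "you@example.com" -f "$keydir/id_ed25519" -N ""
cat "$keydir/id_ed25519.pub"   # this public key is what gets pasted into GitHub
```

The public key printed at the end is added under GitHub → Settings → SSH and GPG keys; the private key never leaves the local machine.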
Then:
git remote -v
Example:
Verify:
git remote -v
ssh -T git@github.com
Go to GitHub repo and refresh the page to see the file example.txt in the remote repo.
This is because we first created a local repo with the default branch master, but GitHub uses
main as its default branch. So let us rename the local branch from master to main.
git branch -m master main
git branch
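The rename can be reproduced in a scratch repository as follows. Note that `git branch -m` with a single argument renames whatever branch is currently checked out, so the sketch works regardless of the initial branch name:

```shell
# Reproduce the master-to-main rename in a scratch repository.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name  "Demo"
git commit -qm "initial" --allow-empty
git branch -m main                # rename the current branch to main
git symbolic-ref --short HEAD     # prints: main
# then: git push -u origin main   # (needs a configured origin remote)
```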
Now go to GitHub repo and refresh the page to see the file example.txt in the remote repo.
Conclusion:
Step 1: Generate an SSH key pair on local machine as shown above and add the public key to
GitHub.
Step 2: Initialize a local repository
git init
Step 3: Add and commit your files
git add .
git commit -m "Initial commit"
Step 4: Rename master to main (Git 2.28+ lets you set this by default too)
Exercise:
1. Explain what version control is and why it is important in software development. Provide
examples of version control systems other than Git.
2. Describe the typical workflow when working with Git, including initializing a repository,
committing changes, and pushing to a remote repository. Use a real-world example to illustrate
the process.
3. Discuss the purpose of branching in Git and how it facilitates collaborative development.
Explain the steps involved in creating a new branch, making changes, and merging it back into
the main branch.
4. What are merge conflicts in Git, and how can they be resolved? Provide a step-by-step guide
on how to handle a merge conflict.
5. Explain the concept of remote repositories in Git and how they enable collaboration among
team members. Describe the differences between cloning a repository and adding a remote.
6. Discuss different branching strategies, such as feature branching and Gitflow. Explain the
advantages and use cases of each strategy.
7. Describe various Git commands and techniques for undoing changes, such as reverting
commits, resetting branches, and discarding uncommitted changes.
8. What are Git hooks, and how can they be used to automate tasks and enforce coding
standards in a Git repository? Provide examples of practical use cases for Git hooks.
9. List and explain five best practices for effective and efficient Git usage in a collaborative
software development environment.
10. Discuss security considerations in Git, including how to protect sensitive information like
passwords and API keys. Explain the concept of Git signing and why it's important.
Experiment No. 2
Title: Performing GitHub Operations using Git Commands.
Objective:
The objective of this experiment is to guide you through the process of using Git commands to
interact with GitHub, from cloning a repository to collaborating with others through pull
requests.
Introduction:
GitHub is a web-based platform that offers version control and collaboration services for
software development projects. It provides a way for developers to work together, manage
code, track changes, and collaborate on projects efficiently. GitHub is built on top of the Git
version control system, which allows for distributed and decentralised development.
Version Control: GitHub uses Git, a distributed version control system, to track changes
to source code over time. This allows developers to collaborate on projects while
maintaining a history of changes and versions.
Repositories: A repository (or repo) is a collection of files, folders, and the entire history
of a project. Repositories on GitHub serve as the central place where code and project-
related assets are stored.
Collaboration: GitHub provides tools for team collaboration. Developers can work
together on the same project, propose changes, review code, and discuss issues within
the context of the project.
Pull Requests: Pull requests (PRs) are proposals for changes to a repository. They allow
developers to submit their changes for review, discuss the changes, and collaboratively
improve the code before merging it into the main codebase.
Issues and Projects: GitHub allows users to track and manage project-related issues,
enhancements, and bugs. Projects and boards help organize tasks, track progress, and
manage workflows.
Forks and Clones: Developers can create copies (forks) of repositories to work on their
own versions of a project. Cloning a repository allows developers to create a local copy
of the project on their machine.
Branching and Merging: GitHub supports branching, where developers can create
separate lines of development for features or bug fixes. Changes made in branches can
be merged back into the main codebase.
Actions and Workflows: GitHub Actions enable developers to automate workflows,
such as building, testing, and deploying applications, based on triggers like code pushes
or pull requests.
GitHub Pages: This feature allows users to publish web content directly from a GitHub
repository, making it easy to create websites and documentation for projects.
Prerequisites:
Experiment Steps:
Step 1: Cloning a Repository
ls -la
git clone <repository_SSH_URL>
Run the following command:
ls -la
Step 2: Making Changes and Creating a Branch
cd Experiment2
ls -la
vi example.txt
cat example.txt
Check the status of the repository:
git status
git status
git branch
git branch
git branch
Step 3: Pushing Changes to GitHub
Note: Preferably use the SSH URL, not the HTTPS URL, because GitHub no longer supports
password authentication over HTTPS.
git remote -v
git remote -v
Test or verify the SSH connection to GitHub.
ssh -T git@github.com
Check your GitHub repository to confirm that the new branch feature is available.
Step 4: Collaborating through Pull Requests
o Choose the base branch (usually main or master) and the compare branch
(feature).
git branch
git branch
git checkout -b feature-1
git branch
cat example.txt
cat example.txt
git status
git add example.txt
git status
git status
While on the main branch, check the file content, which should be as follows:
Welcome to NHCE
Now switch from main to the feature-1 branch and check the file content, which should
be as follows:
git branch
git checkout -b feature-2
git branch
cat example.txt
Note: Even though I am already on the feature-2 branch, it still shows the content of the
file from the main branch, because I have not yet modified the file on feature-2.
cat example.txt
git status
git status
git status
git push origin feature-2
2. Navigate to the branch feature-1 on GitHub by selecting it from the drop-down, go
to the repo's Settings, add collaborators by typing their email IDs, ask them to accept
the invitation GitHub sends to their email, then create a pull request (PR) for
feature-1, select the collaborators as reviewers, and merge it on GitHub.
Note: If you enter the email ID with which you created your GitHub account, you will get
the error below. So I have entered another email ID as the reviewer for this demo.
Note: Now you will see the collaborator status as “Pending”.
Now you go to your GitHub repo, go to feature-1 branch and see the content of
example.txt. Similarly go to the branch main and see the content of
example.txt.
3. Create a pull request for feature-2 – it will show a conflict.
Scroll down and click on “Create pull request” and select a reviewer from the right side
panel.
Scroll down to see the error message of conflict.
git branch
cat example.txt
git pull origin main # triggers merge conflict
But Git didn’t know how to handle divergent branches, so it aborted the merge.
cat example.txt
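The refusal comes from newer Git versions requiring an explicit strategy for reconciling divergent branches. A hedged per-repository fix, choosing a merge rather than a rebase, looks like this (shown in a scratch repository):

```shell
# Tell Git how to reconcile divergent branches on pull.
set -e
repo=$(mktemp -d); cd "$repo"; git init -q
git config pull.rebase false   # merge divergent histories (traditional behaviour)
# Alternatives: git config pull.rebase true  (rebase instead of merge)
#               git config pull.ff only      (fast-forward only)
git config pull.rebase         # prints: false
```

With `pull.rebase false` set, re-running `git pull origin main` attempts the merge and surfaces the conflict described below instead of aborting.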
5. Open example.txt and resolve the conflict:
<<<<<<< HEAD
Line from feature-2
=======
Line from feature-1
>>>>>>> main
Note: When Git cannot automatically merge changes, it marks the conflict in the file as shown
above.
Marker           Meaning
<<<<<<< HEAD     Start of your current branch's version (feature-2)
=======          Separator between the conflicting changes
>>>>>>> main     The incoming change from the branch you pulled (main)
You manually edit the file and choose what makes sense. In the actual file, the closing
marker may show a commit hash rather than a branch name:
<<<<<<< HEAD
Line from feature-2
=======
Line from feature-1
>>>>>>> 7037b0fffb95137d3db043f42114473a03d252cd
Instead of:
<<<<<<< HEAD
Line from feature-2
=======
Line from feature-1
>>>>>>> main
That 7037b0f... is a local commit hash (on your machine) that represents the tip of main at
the time Git attempted the merge. It might not match GitHub’s web UI because:
Your local main had the commit 7037b0f... (perhaps a new one made while resolving
feature-1 merge locally).
GitHub’s main still reflects the last visible pushed commit 78f544b (perhaps from before
or after a rebase or squash merge).
This discrepancy is normal in Git — hashes can differ locally vs. GitHub because of:
Merge commits
Rebase operations
Squash merges
History rewrites
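The whole conflict-and-resolution cycle described above can be reproduced end to end in a scratch repository. Branch and file names follow the text; the base branch name is detected rather than assumed, since it may be master or main depending on local configuration:

```shell
# Reproduce a merge conflict and resolve it by hand.
set -e
repo=$(mktemp -d); cd "$repo"; git init -q
git config user.email "demo@example.com"; git config user.name "Demo"
echo "base" > example.txt
git add example.txt; git commit -qm "base"
base=$(git symbolic-ref --short HEAD)     # master or main, depending on config
git checkout -q -b feature-2
echo "Line from feature-2" > example.txt
git commit -qam "feature-2 edit"
git checkout -q "$base"
echo "Line from feature-1" > example.txt
git commit -qam "edit on base branch"
git checkout -q feature-2
git merge "$base" || true                 # both sides changed the same line: conflict
grep "<<<<<<<" example.txt                # Git has inserted conflict markers
# Resolve by hand: keep both lines, then conclude the merge.
printf 'Line from feature-2\nLine from feature-1\n' > example.txt
git add example.txt
git commit -qm "Merge base into feature-2, keeping both lines"
```

After the final commit, the history contains a merge commit and the working tree is clean, exactly as in the GitHub walkthrough above.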
git branch
vi example.txt
cat example.txt
But first, let us see the content of the file in each branch:
git branch
cat example.txt
git checkout feature
cat example.txt
cat example.txt
cat example.txt
Now update your local repository.
git pull origin main
git branch
Conclusion:
This experiment provided you with practical experience in performing GitHub operations using
Git commands. You learned how to clone repositories, make changes, create branches, push
changes to GitHub, collaborate through pull requests, and synchronize changes with remote
repositories. You also explored how to resolve merge conflicts, a common challenge in
collaborative development.
Questions:
Experiment No. 3
Title: Performing GitLab Operations using Git Commands.
Objective:
The objective of this experiment is to guide you through the process of using Git commands
to interact with GitLab, from creating a repository to collaborating with others through merge
requests.
Introduction:
GitLab is a web-based platform that offers a complete DevOps lifecycle toolset, including
version control, code review, and collaboration features. It provides a centralized place
for software development teams to work together efficiently and manage the entire
development process in a single platform.
● Version Control: GitLab provides version control capabilities using Git, allowing
developers to track changes to source code over time. This enables collaboration
among team members.
● Repositories: A repository is a collection of files, folders, and assets related to a
project. Each repository can have multiple branches and tags.
● Merge Requests: Merge requests in GitLab are similar to pull requests in other
platforms. They enable developers to propose code changes, collaborate, and get
their changes reviewed before merging.
● Issues and Project Management: GitLab includes tools for managing project tasks,
bugs, and enhancements. Issues can be assigned, labeled, and tracked, while
project boards help organize tasks and workflows.
● Container Registry: GitLab includes a container registry that allows users to store
and manage Docker container images.
● Code Review and Collaboration: Built-in code review tools facilitate collaboration
among team members. Inline comments, code discussions, and code snippets are
supported to streamline the review process.
● Wiki and Documentation: GitLab provides a space for creating project wikis and
documentation, helping keep projects well-documented.
● Security and Compliance: GitLab offers security scanning, code analysis, and
compliance features to help identify and address security vulnerabilities and ensure
regulatory compliance.
● GitLab Pages: Similar to GitHub Pages, GitLab Pages lets users publish static
websites directly from a repository.
● End-to-End DevOps: GitLab offers an integrated platform for the entire software
development lifecycle.
● Simplicity: GitLab provides a unified interface for version control, CI/CD, and project
management.
● Flexibility: GitLab can be self-hosted or used as a cloud service. This flexibility
allows organizations to choose the hosting option that best suits their needs.
● Security: GitLab places a strong emphasis on security, with features like role-based
access control and vulnerability scanning.
● Open Source and Enterprise Versions: GitLab offers both a free, open-source
Community Edition and a paid, feature-rich Enterprise Edition, making it suitable for
individuals and large organizations alike.
Prerequisites:
● Internet connection
Experiment Steps:
● Choose a project name, visibility level (public, private), and other settings.
● Click "Create project."
ls -la ~/.ssh
cat ~/.ssh/id_ed25519.pub
ls -la
Step 3: Making Changes and Creating a Branch
Syntax: cd <repository_name>
cd sandy.devops.stuffs-Experiment3
ls -la
git status
● Stage the changes for commit:
git status
git status
● Create a new branch named "feature":
git branch feature
git checkout feature
git branch
Note: Since we already cloned the repo, we get this error: "error: remote origin
already exists."
● Check your GitLab repository to confirm that the new branch "feature" is available.
Step 5: Collaborating through Merge Requests
git branch
git branch
cat example.txt
git branch
cat example.txt
Conclusion:
This experiment provided you with practical experience in performing GitLab operations using
Git commands. You learned how to create repositories, clone them to your local machine, make
changes, create branches, push changes to GitLab, collaborate through merge requests, and
synchronize changes with remote repositories. These skills are crucial for effective collaboration
and version control in software development projects using GitLab and Git.
Questions/Exercises:
1. What is GitLab, and how does it differ from other version control platforms?
3. What is a merge request in GitLab? How does it facilitate the code review process?
4. Describe the steps involved in creating and submitting a merge request on GitLab.
5. What are GitLab issues, and how are they used in project management?
6. Explain the concept of a GitLab project board and its purpose in organizing tasks.
8. Describe the role of compliance checks in GitLab and how they contribute to
secure and well-governed software delivery.
Experiment No. 4
Title: Performing Bitbucket Operations using Git Commands.
Objective:
The objective of this experiment is to guide you through the process of using Git commands to
interact with Bitbucket, from creating a repository to collaborating with others through pull
requests.
Introduction:
Bitbucket is a web-based platform from Atlassian that offers version control and
collaboration services for software development teams.
● Version Control: Bitbucket supports both Git and Mercurial version control systems, allowing
developers to track changes, manage code history, and work collaboratively on projects.
● Collaboration: Bitbucket enables team collaboration through features like pull requests, code
reviews, inline commenting, and team permissions. These tools help streamline the process of
merging code changes.
● Pull Requests: Pull requests in Bitbucket allow developers to propose and review code
changes before they are merged into the main codebase. This process helps ensure code
quality and encourages collaboration.
● Code Review: Bitbucket provides tools for efficient code review, allowing team members to
comment on specific lines of code and discuss changes within the context of the code itself.
● Project Management: Bitbucket offers project boards and issue tracking to help manage
tasks, track progress, and plan project milestones effectively.
● Bitbucket Pipelines: This feature allows teams to define and automate CI/CD pipelines directly
within Bitbucket, ensuring code quality and rapid delivery.
● Access Control and Permissions: Bitbucket allows administrators to define user roles,
permissions, and access control settings to ensure the security of repositories and project
assets.
● Version Control: Bitbucket's integration with Git and Mercurial provides efficient version
control and code history tracking.
● Collaboration: The platform's collaboration tools, including pull requests and code reviews,
improve code quality and facilitate team interaction.
● CI/CD Integration: Bitbucket's integration with CI/CD pipelines automates testing and
deployment, resulting in faster and more reliable software delivery.
● Project Management: Bitbucket's project management features help teams organize tasks,
track progress, and manage milestones.
● Flexibility: Bitbucket offers both cloud-based and self-hosted options, providing flexibility to
choose the deployment method that suits the organization's needs.
● Integration: Bitbucket integrates with various third-party tools, services, and extensions,
enhancing its functionality and extending its capabilities.
Prerequisites:
● Internet connection
Experiment Steps:
cd <repository_name>
ls -la
git status
git status
git branch
git branch
cd
cd .ssh
ls -la
cat id_ed25519.pub
ssh -T git@bitbucket.org
● Check your Bitbucket repository to confirm that the new branch "feature" is available.
Step 5: Collaborating through Pull Requests
Choose the source branch ("feature") and the target branch ("main" or "master").
Review the changes and click "Create pull request."
2. Review and merge the pull request:
git branch
Conclusion:
This experiment provided you with practical experience in performing Bitbucket operations
using Git commands. You learned how to create repositories, clone them to your local machine,
make changes, create branches, push changes to Bitbucket, collaborate through pull requests,
and synchronise changes with remote repositories. These skills are essential for effective
collaboration and version control in software development projects using Bitbucket and Git.
Questions/Exercises:
Q.1 What is Bitbucket, and how does it fit into the DevOps landscape?
Q.2 Explain the concept of branching in Bitbucket and its significance in collaborative
development.
Q.3 What are pull requests in Bitbucket, and how do they facilitate code review and
collaboration?
Q.4 How can you integrate code quality analysis and security scanning tools into Bitbucket's
CI/CD pipelines?
Q.5 What are merge strategies in Bitbucket, and how do they affect the merging process during
pull requests?
Experiment No. 5
Title: Applying CI/CD Principles to Web Development Using Jenkins, Git, and Local HTTP Server.
Objective:
To set up a basic CI/CD pipeline using Jenkins, Git, and a local HTTP server (Apache or Nginx) to
automatically deploy a web application when code is pushed to the repository.
Introduction:
Key Components:
● Jenkins: Jenkins is a widely used open-source automation server that helps automate various
aspects of the software development process. It is known for its flexibility and extensibility and
can be employed to create CI/CD pipelines.
● Git: Git is a distributed version control system used to manage and track changes in source
code. It plays a crucial role in CI/CD by allowing developers to collaborate, track changes, and
trigger automation processes when code changes are pushed to a repository.
● Local HTTP Server: A local HTTP server is used to host and serve web applications during
development. It is where your web application can be tested before being deployed to
production servers.
CI/CD Principles:
● Code Changes: Developers make changes to the web application's source code locally.
● Git Repository: Developers push their code changes to a Git repository, such as GitHub or
Bitbucket.
● Webhook: A webhook is configured in the Git repository to notify Jenkins whenever changes
are pushed.
● Jenkins Job: Jenkins is set up to listen for webhook triggers. When a trigger occurs, Jenkins
initiates a CI/CD pipeline.
● Build and Test: Jenkins executes a series of predefined steps, which may include building the
application, running tests, and generating artifacts.
● Deployment: If all previous steps are successful, Jenkins deploys the application to a local
HTTP server for testing.
● Verification: The deployed application is tested locally to ensure it functions as expected.
● Optional Staging: For more complex setups, there might be a staging environment where the
application undergoes further testing before reaching production.
● Production Deployment: If the application passes all tests, it can be deployed to the
production server.
Prerequisites:
Experiment Steps:
NOTE: Make sure that the port 8080 is opened in your EC2 instance for Jenkins.
Step 1: Set Up the Web Application and Local HTTP Server (Apache2)
ls -la
vi index.html
# Set ownership to jenkins user so it can copy files there during deployment.
Test:
Visit http://<Public_IP_of_EC2_Instance>/webdirectory in a browser — it
should show the current content.
Step 2: Set Up Git Repository
cd Experiment5
ls -la
git init
ls -la
git add .
git status
git commit -m "Initial commit"
git status
git remote add origin https://github.com/SandyDevOpsStuffs/Experiment5.git
(Use the HTTPS URL if you wish to enter a username and personal access token every time
you run the git push and git pull commands.)
OR
git remote add origin git@github.com:SandyDevOpsStuffs/Experiment5.git
ssh-keygen
You initialized your local Git repo, which by default created a branch called master.
But on GitHub, the default branch is usually main (not master).
When you run git push -u origin master, it pushes a new master branch to
GitHub, which is now available remotely — but it’s not the default branch there.
Then, if you run git push -u origin main, Git gives this error:
Because you don't have a local branch named main, only master.
Fix Option 1: Set master as the default branch on GitHub (Recommended for this
case).
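An alternative, local-side fix is to make main the default for newly created repositories. This sketch assumes Git 2.28+ for init.defaultBranch, and Git 2.32+ for the GIT_CONFIG_GLOBAL variable used here purely to sandbox the setting:

```shell
# Make new repositories start on main instead of master (Git 2.28+).
set -e
export GIT_CONFIG_GLOBAL=$(mktemp)       # sandbox the global config for this demo
git config --global init.defaultBranch main
repo=$(mktemp -d); cd "$repo"; git init -q
git symbolic-ref --short HEAD            # prints: main
```

In day-to-day use, drop the GIT_CONFIG_GLOBAL line so the setting lands in your real ~/.gitconfig; an existing repo's branch can still be renamed with `git branch -m master main`.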
java -version
Paste it here.
Install recommended plugins and create an admin user.
Step 4: Install Required Jenkins Plugins
Install:
Git Plugin
GitHub Integration Plugin
Pipeline Plugin (optional)
Any required Authentication Plugins
Step 5: Create and Configure Jenkins Job
Create Freestyle Project
Open Jenkins Dashboard → New Item → Freestyle Project → Name it: WebApp-CICD
Scroll down and click OK.
In Source Code Management, select Git → add your repository URL
Build Triggers
#!/bin/bash
sudo cp -r * /var/www/html/webdirectory/
sudo visudo
This allows the jenkins user to run rm, mkdir, and cp with sudo without prompting for a
password. This is secure because it's limited to only those commands.
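The line added through visudo might look like the following (binary paths are assumed for a typical Ubuntu layout; confirm them with `which rm mkdir cp` on your host):

```
jenkins ALL=(ALL) NOPASSWD: /bin/rm, /bin/mkdir, /bin/cp
```

Keeping the command list explicit, rather than granting NOPASSWD for ALL, limits what the jenkins user can do without a password.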
Now confirm that there are no builds yet in Jenkins since this is our first build.
Now edit index.html, which will be copied from the current directory Experiment5 to
/var/www/html later.
vi index.html
cat index.html
git status
git status
git status
git push origin master
Conclusion
This approach forms the base for real-world CI/CD practices and can be extended to support
test automation, Docker, cloud servers, and more.
Experiment No. 6
Title: Exploring Containerization and Application Deployment with Docker.
Objective:
The objective of this experiment is to explore containerization and deploy a Java Spring
Boot application in a Docker container.
Introduction:
Containerization is a technology that has revolutionized the way applications are developed,
deployed, and managed in the modern IT landscape. It provides a standardised and efficient
way to package, distribute, and run software applications and their dependencies in isolated
environments called containers.
Containerization technology has gained immense popularity, with Docker being one of the most
well-known containerization platforms. This introduction explores the fundamental concepts of
containerization, its benefits, and how it differs from traditional approaches to application
deployment.
● Images: Container images are the templates for creating containers. They are read-only and
contain all the necessary files and configurations to run an application. Images are typically built
from a set of instructions defined in a Dockerfile.
Benefits of Containerization:
● Portability: Containers are portable and can be easily moved between different host
machines and cloud providers.
● Resource Efficiency: Containers share the host operating system's kernel, which makes them
lightweight and efficient in terms of resource utilization.
● Version Control: Container images are versioned, enabling easy rollback to previous
application states if issues arise.
In contrast: Containers share the host OS kernel, making them more lightweight and
efficient.
● VMs encapsulate an entire OS, while containers package only the application and its
dependencies.
Prerequisites:
springboot-docker-app/
├── src/
│ └── main/
│ └── java/
│ └── com/
│ └── example/
│ └── demo/
│ ├── DemoApplication.java
├── pom.xml
├── Dockerfile
exit
mkdir -p src/main/java/com/example/demo
cd src/main/java/com/example/demo
vi DemoApplication.java
Paste the following:
package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.*;

@SpringBootApplication
@RestController
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @GetMapping("/")
    public String home() {
        return "Hello from Dockerized Spring Boot App!";
    }
}
cd ~/springboot-docker-app
Create the pom.xml file:
vi pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.example</groupId>
<artifactId>demo</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>
<name>springboot-docker-app</name>
<description>Simple Spring Boot Docker App</description>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>3.2.5</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<properties>
<java.version>17</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
ls -la
mvn clean package
ls -la
ls -la ./target/
Expected Output:
target/demo-0.0.1-SNAPSHOT.jar
vi Dockerfile
# Base image
FROM openjdk:17-jdk-slim
# Set workdir
WORKDIR /app
# Copy JAR
COPY target/demo-0.0.1-SNAPSHOT.jar app.jar
# Expose port
EXPOSE 8080
# Run JAR
ENTRYPOINT ["java", "-jar", "app.jar"]
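The build-and-run steps between the Dockerfile and `docker ps` are not shown above; a hedged sketch is given below. The image and container names are illustrative, and the script skips itself gracefully when Docker or the project files are absent:

```shell
# Run from the project root (where the Dockerfile and target/ live).
command -v docker >/dev/null 2>&1 || { echo "docker not installed; skipping"; exit 0; }
[ -f Dockerfile ] || { echo "no Dockerfile here; run from the project root"; exit 0; }
docker build -t springboot-docker-app .                   # build the image
docker run -d --name springboot-app -p 8080:8080 springboot-docker-app
docker ps --filter "name=springboot-app"                  # container should be Up
```

Port 8080 inside the container is published to port 8080 on the host, so the application becomes reachable at http://<host>:8080.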
docker ps
Step 9: Test the Application
Open in browser:
http://<EC2-Public-IP>:8080
Expected Output:
Hello from Dockerized Spring Boot App!
Conclusion:
In this experiment, you explored containerization and application deployment with Docker by
deploying a Java SpringBoot application in a Docker container. You learned how to create a
Dockerfile, build a Docker image, run a Docker container, and access your Java SpringBoot
application from your host machine. Docker's containerization capabilities make it a valuable
tool for packaging and deploying applications consistently across different environments.
Exercise/Questions:
1. Explain the concept of containerization. How does it differ from traditional virtualization
methods?
2. Discuss the key components of a container. What are images and containers in the context of
containerization?
3. What is Docker, and how does it contribute to containerization? Explain the role of Docker in
building, running, and managing containers.
5. Explain the concept of isolation in containerization. How do containers provide process and
filesystem isolation for applications?
6. Discuss the importance of container orchestration tools such as Kubernetes in managing
containerized applications. What problems do they solve, and how do they work?
7. Compare and contrast containerization platforms like Docker, containerd, and rkt. What are
their respective strengths and weaknesses?
8. Explain the process of creating a Docker image. What is a Dockerfile, and how does it help in
image creation?
10. Explore real-world use cases of containerization in software development and deployment.
Provide examples of industries or companies that have benefited from containerization
technologies.
Experiment No. 7
Title:
Applying CI/CD Principles to Web Development Using Jenkins, Git, and Docker Containers.
Objective:
The objective of this experiment is to set up a CI/CD pipeline for a web application using
Jenkins, Git, Docker containers, and GitHub webhooks. The pipeline will automatically build,
test, and deploy the web application whenever changes are pushed to the Git repository,
without the need for a pipeline script.
Introduction:
Continuous Integration and Continuous Deployment (CI/CD) principles are integral to modern
web development practices, allowing for the automation of code integration, testing, and
deployment. This experiment demonstrates how to implement CI/CD for web development
using Jenkins, Git, Docker containers, and GitHub webhooks without a pipeline script. Instead,
we'll utilize Jenkins' "GitHub hook trigger for GITScm polling" feature.
In the fast-paced world of modern web development, the ability to deliver high-quality
software efficiently and reliably is paramount. Continuous Integration and Continuous
Deployment (CI/CD) are integral principles and practices that have revolutionized the way
software is developed, tested, and deployed. These practices bring automation, consistency,
and speed to the software development lifecycle, enabling development teams to deliver code
changes to production with confidence.
CI is the practice of frequently and automatically integrating code changes from multiple
contributors into a shared repository. The core idea is that developers regularly merge their
code into a central repository, triggering automated builds and tests.
Key aspects of CI include:
● Automation: CI tools, like Jenkins, Travis CI, or CircleCI, automate the building and testing of
code whenever changes are pushed to the repository.
● Frequent Integration: Developers commit and integrate their code changes multiple times a
day, reducing integration conflicts and catching bugs early.
● Testing: Automated tests, including unit tests and integration tests, are run to ensure that
new code changes do not introduce regressions.
● Quick Feedback: CI provides rapid feedback to developers about the quality and correctness
of their code changes.
CD is the natural extension of CI. It is the practice of automatically and continuously deploying
code changes to production or staging environments after successful integration and testing.
Key aspects of CD include:
● Automation: CD pipelines automate the deployment process, reducing the risk of human
error and ensuring consistent deployments.
● Deployment to Staging: Code changes are deployed first to a staging environment where
further testing and validation occur.
● Deployment to Production: After passing all tests in the staging environment, code changes
are automatically deployed to the production environment, often with zero downtime.
● Rollbacks: In case of issues, CD pipelines provide the ability to rollback to a previous version
quickly.
Benefits of CI/CD include:
● Quality Assurance: Automated testing ensures code quality, reducing the number of bugs
and regressions.
● Consistency: CI/CD ensures that code is built, tested, and deployed consistently, regardless of
the development environment.
● Continuous Feedback: Developers receive immediate feedback on the impact of their
changes, improving collaboration and productivity.
● Reduced Risk: Automated deployments reduce the likelihood of deployment errors and
downtime, enhancing reliability.
● Scalability: CI/CD can scale to accommodate projects of all sizes, from small startups to large
enterprises.
Prerequisites:
● Git installed locally and a GitHub account.
● Jenkins installed and running.
● Docker installed on the Jenkins host.
Experiment Steps:
● Create a simple web application or use an existing one. Ensure it can be hosted in a Docker
container.
● Initialize a Git repository for your web application and push it to GitHub.
● Install Jenkins on your computer or server following the instructions for your operating
system (https://www.jenkins.io/download/).
● Open Jenkins in your web browser (usually at http://localhost:8080) and complete the initial
setup, including setting up an admin user and installing necessary plugins.
● Configure Jenkins to work with Git by setting up Git credentials in the Jenkins Credential
Manager.
Add the jenkins user to the Docker group so that Jenkins can run docker commands without sudo:
sudo usermod -aG docker jenkins
Check the status of the Jenkins service and restart it so that the change takes effect:
sudo systemctl status jenkins
sudo systemctl restart jenkins
Verify that the user jenkins can now run docker commands without sudo.
sudo su - jenkins
docker ps
exit
NOTE: Because of restarting the Jenkins service, you may need to re-login in the UI.
Create a Dockerfile
FROM nginx:alpine
COPY . /usr/share/nginx/html
● In the job configuration, specify a name for your job and choose "This project is
parameterized."
● Add a "String Parameter" named GIT_REPO_URL and set its default value to your Git
repository URL.
● Scroll down and set Branches to build -> Branch Specifier to the
working Git branch (ex: */master).
● Scroll down to "Triggers" section and select the "GitHub hook trigger for GITScm
polling" option. This enables Jenkins to listen to GitHub webhook triggers.
● Add build steps to execute Docker commands for building and deploying the containerized
web application. Use the following commands:
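A sketch of these build-step commands, assuming the image name nginx-image1, the container name container1, and the host port 8081 that appear later in this experiment:

```shell
# Build the image from the Dockerfile in the Jenkins workspace
docker build -t nginx-image1 .

# Remove any previous container with the same name (ignore the error if it does not exist)
docker rm -f container1 || true

# Run the new container, mapping host port 8081 to nginx's port 80
docker run -d --name container1 -p 8081:80 nginx-image1
```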
● Create a new webhook and configure it to send a payload to the Jenkins webhook URL.
It is usually http://jenkins-server/github-webhook/
i.e.
http://<Public_IP_of_EC2_Instance>:8080/github-webhook/
Now, before triggering the pipeline, let us look at the current images, containers, and job builds
in Jenkins.
Right now there is no image named nginx-image1 and no container named container1, but to
be on the safe side, our pipeline will delete container1 if it exists.
● Now make some changes to the file index.html and push the changes as well as the
Dockerfile to your GitHub repository. The webhook will trigger the Jenkins job automatically,
executing the build and deployment steps defined in the job configuration.
ls -la
cat index.html
vi index.html
cat index.html
git status
git add index.html Dockerfile
git status
git commit -m "Updated Build Steps in Jenkins with docker commands and committed both index.html and Dockerfile"
git status
git push origin master
docker images
docker ps
Access your web application by opening a web browser and navigating to http://localhost:8081
(or the appropriate URL if hosted elsewhere like http://<Public_IP_of_EC2Instance>:8081).
Hu hoo... here we go! We deployed the application successfully onto the Docker container, and it
is running. Awesome!
NOTE: Points to discuss:
● The CI part vs. the CD part of this setup.
● Hard-coding of the repo URL, and alternatives to it (e.g., job parameters such as the
GIT_REPO_URL string parameter used above).
● Freestyle jobs (less structured, no Groovy) vs. Pipeline jobs (Groovy-based, more structured;
Declarative vs. Scripted syntax).
Declarative syntax:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building...'
            }
        }
    }
}
Scripted syntax:
node {
    stage('Build') {
        echo 'Building...'
    }
}
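For comparison, the freestyle job used in this experiment could be expressed as a Declarative Pipeline. The following is only a sketch, not the exact job: the repository URL is a placeholder, and the image/container names and port come from the earlier steps:

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // Placeholder URL: substitute your own repository
                git branch: 'master', url: 'https://github.com/<user>/<repo>.git'
            }
        }
        stage('Build') {
            steps {
                // Build the image from the Dockerfile in the workspace
                sh 'docker build -t nginx-image1 .'
            }
        }
        stage('Deploy') {
            steps {
                // Replace the running container with one from the new image
                sh 'docker rm -f container1 || true'
                sh 'docker run -d --name container1 -p 8081:80 nginx-image1'
            }
        }
    }
}
```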
Conclusion:
This experiment demonstrates how to apply CI/CD principles to web development using
Jenkins, Git, Docker containers, and GitHub webhooks. By configuring Jenkins to listen for
GitHub webhook triggers and executing Docker commands in response to code changes, you
can automate the build and deployment of your web application, ensuring a more efficient and
reliable development workflow.
Exercise / Questions :
1. Explain the core principles of Continuous Integration (CI) and Continuous Deployment (CD) in
the context of web development. How do these practices enhance the software development
lifecycle?
2. Discuss the key differences between Continuous Integration and Continuous Deployment.
When might you choose to implement one over the other in a web development project?
3. Describe the role of automation in CI/CD. How do CI/CD pipelines automate code integration,
testing, and deployment processes?
4. Explain the concept of a CI/CD pipeline in web development. What are the typical stages or
steps in a CI/CD pipeline, and why are they important?
5. Discuss the benefits of CI/CD for web development teams. How does CI/CD impact the speed,
quality, and reliability of software delivery?
6. What role do version control systems like Git play in CI/CD workflows for web development?
How does version control contribute to collaboration and automation?
7. Examine the challenges and potential risks associated with implementing CI/CD in web
development. How can these challenges be mitigated?
8. Provide examples of popular CI/CD tools and platforms used in web development. How do
these tools facilitate the implementation of CI/CD principles?
9. Explain the concept of "Infrastructure as Code" (IaC) and its relevance to CI/CD. How can IaC
be used to automate infrastructure provisioning in web development projects?
10. Discuss the cultural and organisational changes that may be necessary when adopting CI/CD
practices in a web development team. How does CI/CD align with DevOps principles and
culture?
Experiment No. 8
Title: Demonstrate Maven Build Life Cycle
Objective:
The objective of this experiment is to understand and demonstrate the complete Maven build
lifecycle, including its clean, default, and site lifecycles and their key phases.
Introduction:
Maven is a widely-used build automation and project management tool in the Java ecosystem.
It provides a clear and standardized build lifecycle for Java projects, allowing developers to
perform various tasks such as compiling code, running tests, packaging applications, and
deploying artifacts. This experiment aims to demonstrate the Maven build lifecycle and its
different phases.
● Project Object Model (POM): The POM is an XML file named pom.xml that defines a project's
configuration, dependencies, plugins, and goals. It serves as the project's blueprint and is at the
core of Maven's functionality.
● Build Lifecycle: Maven follows a predefined sequence of phases and goals organized into
build lifecycles. These lifecycles include clean, validate, compile, test, package, install, and
deploy, among others.
● Plugin: Plugins are extensions that provide specific functionality to Maven. They
enable tasks like compiling code, running tests, packaging artifacts, and deploying
applications.
● Dependency Management: Maven simplifies dependency management by allowing
developers to declare project dependencies in the POM file. Maven downloads these
dependencies from repositories like Maven Central.
● Repository: A repository is a collection of artifacts (compiled libraries, JARs, etc.) that Maven
uses to manage dependencies. Maven Central is a popular public repository, and organizations
often maintain private repositories.
The Maven build process is organized into a set of build lifecycles, each comprising a sequence
of phases. Here are the key build lifecycles and their associated phases:
Clean Lifecycle:
● pre-clean, clean, post-clean: Removes files generated by previous builds (typically the
target/ directory).
Default Lifecycle:
● validate: Validates that the project is correct and all necessary information is available.
● compile: Compiles the project's source code.
● test: Runs unit tests against the compiled code.
● package: Packages the compiled code into a distributable format (e.g., JAR, WAR).
● verify: Runs checks on the results of integration tests to ensure quality criteria are met.
● install: Installs the package into the local repository for use as a dependency in other local
projects.
● deploy: Copies the final package to a remote repository for sharing with other developers.
Site Lifecycle:
● pre-site, site, post-site, site-deploy: Generates and deploys the project's documentation site.
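Each lifecycle is exercised by invoking one of its phases; Maven automatically runs all earlier phases of the same lifecycle first:

```shell
mvn clean     # clean lifecycle: remove the target/ directory
mvn compile   # default lifecycle: validate + compile
mvn test      # ...plus run unit tests
mvn package   # ...plus build the JAR/WAR into target/
mvn install   # ...plus copy the artifact into the local repository (~/.m2)
mvn deploy    # ...plus upload the artifact to a remote repository
mvn site      # site lifecycle: generate the project documentation site
```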
Prerequisites:
● A Linux machine (e.g., an Ubuntu EC2 instance) with sudo access and internet connectivity.
Experiment Steps:
ls -la
sudo apt update && sudo apt install -y openjdk-11-jdk maven git
java -version
OR
java --version
mvn -version
OR
mvn --version
OR
mvn -v
1. Create a nexus user (if it does not already exist) and switch to it:
sudo useradd -m nexus
sudo su - nexus
ls -la
2. Download and install Nexus:
wget https://download.sonatype.com/nexus/3/nexus-3.80.0-06-linux-x86_64.tar.gz
ls -la
ls -la
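Between the download and the exit, the archive needs to be unpacked and given a stable path that matches the ExecStart/ExecStop paths in the systemd unit created below. A sketch, assuming the version from the wget command above (the extracted directory name nexus-3.80.0-06 is an assumption):

```shell
# Unpack the Nexus distribution in the nexus user's home directory
tar -xzf nexus-3.80.0-06-linux-x86_64.tar.gz

# Symlink so that /home/nexus/nexus/bin/nexus resolves regardless of version
ln -s /home/nexus/nexus-3.80.0-06 /home/nexus/nexus
```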
exit
3. Create a systemd service for Nexus:
sudo vi /etc/systemd/system/nexus.service
[Unit]
Description=Nexus Repository
After=network.target
[Service]
Type=forking
LimitNOFILE=65536
User=nexus
Group=nexus
ExecStart=/home/nexus/nexus/bin/nexus start
ExecStop=/home/nexus/nexus/bin/nexus stop
Restart=on-abort
[Install]
WantedBy=multi-user.target
sudo cat /etc/systemd/system/nexus.service
4. Start Nexus:
sudo systemctl daemon-reload
sudo systemctl enable nexus
sudo systemctl start nexus
5. Access Nexus:
Visit in browser:
http://<EC2-PUBLIC-IP>:8081
Click on Login:
User name: admin
Password: the initial admin password (typically stored in
/home/nexus/sonatype-work/nexus3/admin.password)
Settings -> Repository -> Create and manage repositories -> Create repository
Recipe: maven2 (hosted)
Name: sdm-maven-releases
Version Policy: Release
Create repository
Step 4: Create Spring Boot Web Application
mvn archetype:generate \
-DgroupId=com.example \
-DartifactId=SpringBootApp \
-DarchetypeArtifactId=maven-archetype-quickstart \
-DinteractiveMode=false
ls -la
cd SpringBootApp
ls -la
mkdir -p src/main/java/com/example
ls -la
vi src/main/java/com/example/App.java
Paste this:
package com.example;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.*;
@SpringBootApplication
@RestController
public class App {
    public static void main(String[] args) {
        // Bootstraps the embedded server and the Spring application context
        SpringApplication.run(App.class, args);
    }
    @GetMapping("/")
    public String hello() {
        return "Hello from Spring Boot!";
    }
}
cat src/main/java/com/example/App.java
Step 5: Replace pom.xml with Spring Boot Configuration
Edit pom.xml:
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example</groupId>
    <artifactId>SpringBootApp</artifactId>
    <version>1.0.0</version>
    <packaging>jar</packaging>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.5</version>
    </parent>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

    <distributionManagement>
        <repository>
            <id>nexus</id>
            <name>Nexus Release Repository</name>
            <url>http://<EC2-PUBLIC-IP>:8081/repository/sdm-maven-releases/</url>
        </repository>
    </distributionManagement>
</project>
NOTE: Each time you restart the EC2 instance, its public IP changes, so you must update the IP
address in the <url> element inside the <distributionManagement> section of the
pom.xml shown above; otherwise you will not be able to reach Nexus. Also replace the repo
name sdm-maven-releases with your repo name.
Step 6: Add Nexus Credentials to Maven Settings
vi ~/.m2/settings.xml
Paste:
<settings>
    <servers>
        <server>
            <id>nexus</id>
            <username>admin</username>
            <password>YourNexusPassword</password>
        </server>
    </servers>
</settings>
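With the credentials in place, build the application and publish the artifact to Nexus; mvn deploy uses the <distributionManagement> section of the pom.xml and the <server> entry matched by id:

```shell
# Package the Spring Boot application (runs validate through package)
mvn clean package

# Upload the versioned artifact to the sdm-maven-releases repository on Nexus
mvn deploy
```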
ls -la
ls –la target/
2. Run the app (the jar name follows the artifactId and version in pom.xml):
java -jar target/SpringBootApp-1.0.0.jar
3. Access it in Browser:
http://<EC2-PUBLIC-IP>:8080
Output:
Hello from Spring Boot!
In browser, go to Nexus:
http://<EC2-PUBLIC-IP>:8081
This experiment demonstrates the Maven build lifecycle by creating a simple Java
project and executing various Maven build phases. Maven simplifies the build
process by providing a standardized way to manage dependencies, compile code,
run tests, and package applications. Understanding these build phases is essential
for Java developers using Maven in their projects.
Exercise/Questions:
4. What are Maven plugins, and how do they enhance the functionality of Maven?
5. List the key phases in the Maven build lifecycle, and briefly describe what each
phase does.
6. What is the primary function of the clean phase in the Maven build lifecycle?
7. In Maven, what does the compile phase do, and when is it typically executed?
8. How does Maven differentiate between the test and verify phases in the build
lifecycle?
9. What is the role of the install phase in the Maven build lifecycle, and why is it
useful?
10. Explain the difference between a local repository and a remote repository in
the context of Maven.
Experiment No. 9
Title:
Demonstrating Container Orchestration Using Kubernetes (Minikube).
Objective:
The objective of this experiment is to deploy a containerized web application on a Kubernetes
cluster (Minikube) using a Deployment and to access it from outside the cluster.
Introduction:
Kubernetes is an open-source container orchestration platform that automates the deployment,
scaling, and management of containerized applications.
● Containerization: Kubernetes relies on containers as the fundamental unit for packaging and
running applications. Containers encapsulate an application and its dependencies, ensuring
consistency across various environments.
● Cluster: A Kubernetes cluster is a set of machines, known as nodes, that collectively run
containerized applications. A cluster typically consists of a master node (for control and
management) and multiple worker nodes (for running containers).
● Nodes: Nodes are individual machines (virtual or physical) that form part of a Kubernetes
cluster. Nodes run containerized workloads and communicate with the master node to manage
and orchestrate containers.
● Pod: A pod is the smallest deployable unit in Kubernetes. It can contain one or more tightly
coupled containers that share the same network and storage namespace. Containers within a
pod are typically used to run closely related processes.
● Deployment: A Deployment is a Kubernetes resource that defines how to create, update, and
scale instances of an application. It ensures that a specified number of replicas are running at all
times.
● Service: A Service is an abstraction that exposes a set of pods as a network service. It provides
a stable IP address and DNS name for accessing the pods, enabling load balancing and
discovery.
● Namespace: Kubernetes supports multiple virtual clusters within the same physical cluster,
called namespaces. Namespaces help isolate resources and provide a scope for organizing and
managing workloads.
● Load Balancing: Services in Kubernetes can distribute traffic among pods, providing high
availability and distributing workloads evenly.
● Self-healing: Kubernetes monitors the health of pods and can automatically restart or replace
failed instances to maintain desired application availability.
● Rolling Updates and Rollbacks: Kubernetes allows for controlled, rolling updates of
applications, ensuring zero-downtime deployments. If issues arise, rollbacks can be performed
with ease.
Prerequisites:
● Docker installed.
Experiment Steps:
NOTE: Launch an EC2 instance of type t2.medium or t2.large. Terminate your instance once you
are done with the experiment to avoid charges.
● Create a simple web application (e.g., a static HTML page) or use an existing one.
ls -la
docker --version
Install it if not installed.
Create a simple web application.
vi index.html
<h1>Welcome to Experiment9</h1>
vi Dockerfile
FROM nginx:latest
COPY index.html /usr/share/nginx/html/index.html
ls -la
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
ls -la
OR
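The commands elided here download Minikube and start a cluster; a sketch for an x86_64 Linux host, using the standard download URL from the Minikube documentation:

```shell
# Download and install the Minikube binary
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# Start a single-node cluster using the Docker driver
minikube start --driver=docker
```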
NOTE: This sometimes takes more than two minutes. Make sure that your instance has 20 to 40 GB
of storage by running the command df -h.
minikube status
NOTE:
This command changes your shell's Docker context to use the Docker daemon inside the
Minikube VM/container, instead of your host EC2 instance.
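The command the NOTE refers to is minikube docker-env, evaluated in the current shell; building the image afterwards places it inside Minikube's Docker daemon so that imagePullPolicy: Never (used in the Deployment below) can find it:

```shell
# Point the docker CLI at Minikube's internal Docker daemon
eval $(minikube docker-env)

# Build the image used by the Deployment
docker build -t my-web-app:latest .
```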
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app-deployment
spec:
  replicas: 3                  # Number of pods to create
  selector:
    matchLabels:
      app: my-web-app          # Label to match pods
  template:
    metadata:
      labels:
        app: my-web-app        # Label assigned to pods
    spec:
      containers:
      - name: my-web-app-container
        image: my-web-app:latest   # Docker image to use
        imagePullPolicy: Never
        ports:
        - containerPort: 80    # Port to expose
Explanation of web-app-deployment.yaml:
● apiVersion: Specifies the Kubernetes API version being used (apps/v1 for Deployments).
● kind: Defines the type of resource we're creating (a Deployment in this case).
● replicas: Specifies the desired number of identical pods to run. In this example, we want
three replicas of our web application.
● selector: Specifies how to select which pods are part of this Deployment. Pods with the label
app: my-web-app will be managed by this Deployment.
● labels: Assigns the label app: my-web-app to the pods created by this template.
● containers: Defines the containers to run within the pods. In this case, we have one container
named my-web-app-container using the my-web-app:latest Docker image.
● ports: Specifies the ports to expose within the container. Here, we're exposing port 80.
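The Deployment is then applied and exposed as a NodePort Service; note that the NodePort (32178 below) is assigned by Kubernetes, so yours may differ:

```shell
# Create the Deployment and check that the three pods come up
kubectl apply -f web-app-deployment.yaml
kubectl get pods

# Expose the Deployment as a NodePort Service on port 80
kubectl expose deployment my-web-app-deployment --type=NodePort --port=80

# Find the node IP and the assigned NodePort (e.g., 192.168.49.2:32178)
minikube ip
kubectl get svc my-web-app-deployment
```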
Now access the app using curl in the terminal as well as in the browser:
curl http://192.168.49.2:32178
NOTE: There is no need to add an inbound rule to the EC2 instance's security group to open port
32178, since the app is running in a pod, not directly on the node.
When I try to access the app in a browser on my laptop, I am not able to reach it.
Troubleshooting steps:
Because 192.168.49.2 is a private IP internal to my EC2 instance, I am unable to access the
app. It is part of the virtual network Minikube creates using Docker.
So I can access the app via EC2 Public IP with Port Forwarding.
I have to use kubectl port-forward to make it accessible via my EC2's public IP:
Let me add an inbound rule to the security group of my EC2 instance to open port 8085 (8080 is
already being used by another app on my instance).
I want the EC2 to directly listen on all interfaces (not just localhost), so I have to run:
kubectl port-forward --address 0.0.0.0 deployment/my-web-app-deployment 8085:80
Now go back to the browser and refresh the page to see the magic!
Conclusion:
In this experiment, you learned how to create a Kubernetes Deployment for container
orchestration. The web-app-deployment.yaml file defines the desired state of the
application, including the number of replicas, labels, and the Docker image to use. Kubernetes
automates the deployment and scaling of the application, making it a powerful tool for
managing containerized workloads.
Exercise/Questions:
1. Explain the core concepts of Kubernetes, including pods, nodes, clusters, and deployments.
How do these concepts work together to manage containerized applications?
2. Discuss the advantages of containerization and how Kubernetes enhances the orchestration
and management of containers in modern application development.
3. What is a Kubernetes Deployment, and how does it ensure high availability and scalability of
applications? Provide an example of deploying a simple application using a Kubernetes
Deployment.
4. Explain the purpose and benefits of Kubernetes Services. How do Kubernetes Services
facilitate load balancing and service discovery within a cluster?
5. Describe how Kubernetes achieves self-healing for applications running in pods. What
mechanisms does it use to detect and recover from pod failures?
6. How does Kubernetes handle rolling updates and rollbacks of applications without causing
downtime? Provide steps to perform a rolling update of a Kubernetes application.
7. Discuss the concept of Kubernetes namespaces and their use cases. How can namespaces be
used to isolate and organize resources within a cluster?
8. Explain the role of Kubernetes ConfigMaps and Secrets in managing application
configurations. Provide examples of when and how to use them.
9. What is the role of storage orchestration in Kubernetes, and how does it enable data
persistence and sharing for containerized applications?
10. Explore the extensibility of Kubernetes. Describe Helm charts and custom resources, and
explain how they can be used to customize and extend Kubernetes functionality.
Experiment No. 10
Title:
Create the GitHub Account to Demonstrate CI/CD Pipeline using AWS (S3 + EC2 + CodePipeline +
CodeDeploy).
Objective:
To demonstrate Continuous Integration and Continuous Deployment (CI/CD) using GitHub as the
source, AWS S3 as an artifact store, and AWS CodePipeline + CodeDeploy to automatically deploy a
web application to an EC2 instance.
Prerequisites:
AWS Account with necessary permissions.
One running EC2 instance (Amazon Linux 2 preferred).
GitHub account and repository.
AWS CLI installed and configured on EC2 instance.
Experiment Steps:
Create two IAM Roles. One for the service AWS EC2 and another for the service AWS
CodeDeploy.
Go to the service “IAM”, select “Roles” in the left panel and click on “Create role” on the right
top.
First let us create a role “Role_EC2CodeDeploy” for the service EC2.
From the dropdown of “Use case”, under “Commonly used services”, select the service or use
case “EC2”.
Click “Next” and “Next”
Either during or after the role creation, we need to attach the permission policy
“AmazonEC2RoleforAWSCodeDeploy”.
A new role is created. I am going to use this role on an EC2 machine. It allows EC2 instances to
read the deployment artifacts that CodeDeploy stores in S3.
Similarly let us create another role “Role_CodeDeploy” for the service CodeDeploy.
But this time instead of EC2, select the service or use case “CodeDeploy”.
It will take the permission policy “AWSCodeDeployRole” automatically. I am going to use this role on
CodeDeploy.
Give a name “Role_CodeDeploy” to the role and click on “Create role”.
We can launch any number of instances based on our requirements. But I will launch only one instance
“Demo_AWSCodeDeploy” in this example.
Make sure that you have added the ports 22 and 80 as inbound rules.
Under “Advanced details”, from the IAM instance profile dropdown, select the role
“Role_EC2CodeDeploy” which was created recently for the service EC2.
Under “Advanced details” itself scroll down and in the “User data” section paste the following script to
automatically install all the packages or dependencies immediately after launching the EC2 instance.
#!/bin/bash
# Install the AWS CodeDeploy agent (Amazon Linux)
yum install -y ruby wget
cd /home/ec2-user
wget https://aws-codedeploy-ap-south-1.s3.ap-south-1.amazonaws.com/latest/install
chmod +x ./install
./install auto
If we click on the instance ID and see the details of the instance then we can see the IAM role
“Role_EC2CodeDeploy” attached.
That is all fine. But I want to try each command individually. So I do not use the user data script.
Connect to the instance through Git Bash or any other CLI of your choice.
mkdir -p /home/ec2-user/Projects/GitHub_CodeDeploy
cd /home/ec2-user/Projects/GitHub_CodeDeploy
ls -la
wget https://aws-codedeploy-ap-south-1.s3.ap-south-1.amazonaws.com/latest/install
ls -la
chmod +x ./install
sudo ./install auto
aws --version
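For CodeDeploy to know what to deploy, the GitHub repository must contain an appspec.yml at its root. A minimal sketch for copying a static site into Apache's web root (the destination path assumes httpd is installed and serving /var/www/html):

```yaml
version: 0.0
os: linux
files:
  # Copy the whole repository into the Apache web root
  - source: /
    destination: /var/www/html
```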
Go to the service “CodeDeploy”, select “Applications” in the left side bar and click on “Create
application”.
Select “No filter” to specify how you want to trigger the pipeline and leave the rest as default.
Click on “Skip build stage” since we are not building the code in this project.
It will automatically take Region, select the application and the deployment group from dropdowns.
Click on “Next”.
Review all the things and click on “Create pipeline”.
Now the deployment will start using AWS CodeDeploy. Had we specified four EC2 instances at
launch time in Step 2, the application would now have been deployed to all four instances.
You can click on “View details” to see the summary. As the code will be stored in S3 bucket by default,
you can go to the service “AWS S3” and see the bucket.
Step 5: Access the Application
Let us access the application on port 80 as already we have added the inbound rule 80 in the EC2
instance.
Copy the Public DNS of the EC2 instance and paste it in the browser.
There we go!!
Let us go to GitHub repo, make some minor changes in the file index.html by clicking on Edit/Pencil icon
and commit the changes as follows.
Now go back to Amazon CodePipeline and just refresh the page.
There we go!!
AWS CodePipeline has automatically detected the code changes in the GitHub repo and
triggered a new deployment.
Conclusion:
The app is automatically deployed to EC2 whenever changes are pushed to GitHub.
This demonstrates a CI/CD pipeline integrating GitHub + AWS S3 + CodePipeline +
CodeDeploy + EC2.
Exercise/Questions:
1. What is the primary purpose of Continuous Integration and Continuous Deployment (CI/CD)
in software development, and how does it benefit development teams using GitHub, GCP, and
AWS?
2. Explain the role of GitHub in a CI/CD pipeline. How does GitHub facilitate version control and
collaboration in software development?
3. What are the key services and offerings provided by Google Cloud Platform (GCP) that are
commonly used in CI/CD pipelines, and how do they contribute to the automation and
deployment of applications?
4. Similarly, describe the essential services and tools offered by Amazon Web Services (AWS)
that are typically integrated into a CI/CD workflow.
5. Walk through the basic steps of a CI/CD pipeline from code development to production
deployment, highlighting the responsibilities of each stage.
6. How does Continuous Integration (CI) differ from Continuous Deployment (CD)? Explain how
GitHub Actions or a similar CI tool can be configured to build and test code automatically.
7. In the context of CI/CD, what is a staging environment, and why is it important in the
deployment process? How does it differ from a production environment?
8. What are the primary benefits of using automation for deployment in a CI/CD pipeline, and
how does this automation contribute to consistency and reliability in software releases?
9. Discuss the significance of monitoring, logging, and feedback loops in a CI/CD workflow. How
do these components help in maintaining and improving application quality and performance?
10. In terms of scalability and flexibility, explain how cloud platforms like GCP and AWS enhance
the CI/CD process, especially when dealing with variable workloads and resource demands.