Experiment 11:
Integrate Kubernetes and Docker
Kubernetes is an open-source tool that allows you to run and manage your container-
based workloads. Kubernetes (K8s) was developed by Google and was later donated to
the Cloud Native Computing Foundation.
Kubernetes helps you manage hundreds or even thousands of containerized applications
across different deployment environments, be it physical machines, virtual machines,
the cloud, or even hybrid environments!
In simple words, it is a container orchestrator that makes sure each container is
where it is supposed to be and that containers can work together. To put it another
way, it is much like the conductor of an orchestra: a scalable application has many
moving parts, and just as a conductor is responsible for making the performance sound
good, K8s makes sure the services run smoothly the way the app developer intends.
Getting Started with Deploying Docker
Containers to Kubernetes
Once you understand what containers and Kubernetes are, the
next step is to learn how the two work together. This guide provides
an example of containerizing a simple application using Docker and
deploying it on Kubernetes.
What is Docker?
Docker is an open source container platform that uses OS-level
virtualization to package your software in units called containers.
Containers are isolated from each other and are designed to be
easily portable. You can build, run, and distribute applications in
Docker containers on Linux, Windows, macOS, and almost
anywhere else, both on-premises and in the cloud. The Docker
environment also includes a container runtime as well as build and
image management.
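As a quick sanity check of the points above, the standard hello-world image can be used to confirm a working Docker installation (this sketch assumes the Docker daemon is installed and running):

```shell
# Verify the Docker client and daemon versions.
docker version

# Pull and run the official hello-world image as a sanity check.
docker run hello-world

# List local images and all containers, including exited ones.
docker images
docker ps -a
```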
Docker Containers
A Docker container image is a lightweight, standalone, executable
software package that includes everything needed to run an
application: code, runtime, system tools, system libraries and
settings. Docker provides a standard format for packaging and
porting software, much like ISO containers define a standard for
shipping freight. A runtime instance of a Docker image consists of
three parts:
The Docker image
The environment in which the image is executed
A set of instructions for running the image
How to Run a Docker Container in Kubernetes
A containerized application image along with a set of declarative
instructions can be passed to Kubernetes to deploy an application.
The containerized app instance running on the Kubernetes node
derives the container runtime from the Kubernetes node along with
compute, network, and storage resources, if needed.
Here’s what it takes to move a Docker container to a Kubernetes
cluster.
Create a container image from a Dockerfile
Build a corresponding YAML file to define how Kubernetes
deploys the app
Dockerfile to Create a Hello World Container Image
A manifest, called a Dockerfile, describes how the image and its
parts are to run in a container deployed on a host. To make the
relationship between the Dockerfile and the image concrete, here’s
an example of a Dockerfile that creates a “Hello World” app from
scratch:
FROM scratch
COPY hello /
CMD ["/hello"]
When you give this Dockerfile to a local instance of Docker by
using the docker build command, it creates a container image
with the “Hello World” app installed in it.
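The build step described above can be sketched as follows; the image tag helloworld:latest is illustrative, and the sketch assumes a statically linked hello binary sits in the build context next to the Dockerfile:

```shell
# Build the image from the Dockerfile in the current directory
# and tag it (the tag name here is just an example).
docker build -t helloworld:latest .

# Run the resulting container locally to verify it works;
# --rm removes the container once it exits.
docker run --rm helloworld:latest
```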
Creating a Kubernetes Deployment for Hello World
Next you need to define a deployment manifest, commonly done
with a YAML or JSON file, to tell Kubernetes how to run “Hello
World” based on the container image:
# Hello World Deployment YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
spec:
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: boskey/helloworld
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
To deploy the application on a Kubernetes cluster, you can submit
the YAML file using a kubectl command similar to the following:

kubectl apply -f https://yourdomain.ext/application/helloworld.yaml --record
Once that’s done, the hello world container is deployed in a
Kubernetes pod.
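Once the manifest has been applied, a few standard kubectl commands can confirm the state of the deployment (a sketch, using the helloworld names from the manifest above):

```shell
# Check that the Deployment was created and is progressing.
kubectl get deployments

# List the pods created for it, selected by the app label.
kubectl get pods -l app=helloworld

# Inspect events and replica status in detail.
kubectl describe deployment helloworld
```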
Creating a Kubernetes Service
The container is now deployed to Kubernetes, but there is no way to
communicate with it yet. The next step is to expose the deployment
by creating a Service, which establishes that communication.
In Kubernetes, a Service is an abstraction which defines a logical
set of pods and a policy by which to access them. This guide
demonstrates a basic method of providing services to pods.
Application Labels and Services
Labels
A very interesting aspect of Kubernetes is the way it combines
Labels and Services to create tremendous possibilities.
At the heart of Kubernetes is a pod. A pod contains running
instances of one or more containers. When a pod is deployed in
Kubernetes, apart from other specifications, it can be assigned
labels. Ideally, a pod is given labels identifying which part of the
overall application it belongs to. For example, if the pod being
deployed is for the application “frontend” and, within “frontend”,
runs the login code, upon deployment it can be labeled
app=frontend,label=login. Other pods deployed as part of this
tier can be given the same labels.
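The example labels above can be queried directly with kubectl's label selectors; a quick sketch (the label keys/values follow the text's example, and <pod-name> is a placeholder for an actual pod name):

```shell
# List every pod carrying the example app label.
kubectl get pods -l app=frontend

# Labels can also be added to, or changed on, a running pod.
kubectl label pod <pod-name> label=login

# Show each pod together with all of its labels.
kubectl get pods --show-labels
```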
Services
Services enable Kubernetes to route traffic to pods. Pods in
Kubernetes are deployed on an overlay network. Pods across
Kubernetes nodes cannot access each other, nor can any
external/ingress traffic reach pods, unless a Service type resource
is defined. A Service is routed to the correct app using a label: when
a Service is created with the label login, it will send traffic to the
pods that match that label and run the login app.
Services are needed for both East-West communication, when two
pods from different apps need to talk to each other, and for North-
South communication, when external traffic (outside of the
Kubernetes cluster) needs to talk to a pod. Kubernetes has different
Service types to address both scenarios. Some common Service
types are listed below:
Service Type  | Depends on           | What it Does                             | Traffic Type Handled
ClusterIP     | Cluster Network      | Uses the cluster network to map a        | Internal to the cluster
              |                      | pod IP/port                              |
NodePort      | Cluster IP           | Uses a port on a Kubernetes node and     | External
              |                      | creates a mapping of the node port to    |
              |                      | the Cluster IP                           |
LoadBalancer  | Cluster IP/Node Port | Creates an external load balancer that   | External
              |                      | maps to either a Cluster IP or Node Port |
The resources behind a Service in Kubernetes may be
microservices or other HTTP services.
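Each of the Service types in the table can also be created imperatively with kubectl expose; a sketch against the earlier helloworld deployment (the extra Service names are illustrative):

```shell
# ClusterIP (the default): reachable only inside the cluster.
kubectl expose deployment helloworld --port=80 --target-port=80

# NodePort: additionally opens a port on every Kubernetes node.
kubectl expose deployment helloworld --type=NodePort --port=80 --name=helloworld-np

# LoadBalancer: asks the cloud provider for an external load balancer.
kubectl expose deployment helloworld --type=LoadBalancer --port=80 --name=helloworld-lb
```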
Hello World service definition
A corresponding Service definition for the earlier “Hello World”
deployment manifest is shown below. Notice the spec section: with
the selector app: helloworld, the Service will forward traffic
coming to port 80 on the cluster network to pods that match this
label.
apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  selector:
    app: helloworld
  ports:
  - port: 80
    targetPort: 80
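The Service manifest above can be applied and verified like the deployment; a sketch, assuming the manifest was saved as helloworld-service.yaml (the filename is illustrative):

```shell
# Create the Service from the manifest.
kubectl apply -f helloworld-service.yaml

# Confirm the Service and the pod endpoints matched by its selector.
kubectl get service helloworld
kubectl get endpoints helloworld

# Forward a local port to the Service to test it without external access.
kubectl port-forward service/helloworld 8080:80
```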
The Power of Services
Because of label matching, there is no need to know the IP
addresses of pods to load balance traffic. As a result:
Load balancing traffic across multiple pods is simplified.
Updating an app (in a pod) can be as simple as:
o Deploying pods with new version labels (e.g., v1.5)
o Waiting for all deployments to complete
o Updating the corresponding Service's labels to match the new
pods.
Traffic shaping: using Ingress, incoming app traffic can be split
between multiple labels, making it simple to do things like A/B testing.
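The label-switch update described above can be sketched with kubectl; the deployment name, Service name, and version label are all illustrative, following the earlier Hello World example:

```shell
# Deploy the new pods labeled with the new version (e.g. version=v1.5)
# and wait until they are ready.
kubectl rollout status deployment/helloworld-v15

# Then point the existing Service at the new pods by patching its selector.
kubectl patch service helloworld \
  -p '{"spec":{"selector":{"app":"helloworld","version":"v1.5"}}}'
```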
Experiment 12:
Automate the process of running the containerized application
developed in the above exercise using Kubernetes.
Step 1: Create a deployment manifest. Create a YAML file
(e.g., deployment.yaml) and define the deployment
configuration. Replace <container-registry>/<image-
name>:<tag> with the actual path to your Docker image in
the container registry.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-container
        image: <container-registry>/<image-name>:<tag>
        ports:
        - containerPort: 3000
Step 2: Create a service manifest. Create another YAML file
(e.g., service.yaml) and define the service configuration.
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
  type: LoadBalancer
Step 3: Apply the manifests using kubectl. To automate the
process, you can use the kubectl command-line tool to
apply the deployment and service manifests to your
Kubernetes cluster.
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
The above commands will create the deployment and
service in your Kubernetes cluster, and your application
will be up and running with the specified number of
replicas.
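Since the Service type is LoadBalancer, you can also check when the external address becomes available; a sketch (the exact EXTERNAL-IP value depends on your cloud provider):

```shell
# Watch the Service until the cloud provider assigns an external IP.
kubectl get service my-app-service --watch

# Once the EXTERNAL-IP column is populated, the app answers on port 80:
# curl http://<external-ip>/
```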
You can create a shell script to automate these steps
further. For example, create a file named deploy.sh and add
the following content:
#!/bin/bash
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
Make the script executable:
chmod +x deploy.sh
Now, you can simply run ./deploy.sh in your terminal to
deploy your application to the Kubernetes cluster.
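As a sketch of taking the script one step further, deploy.sh can also wait for the rollout to finish before reporting success (the deployment name is taken from the manifest above; the flags are standard kubectl options):

```shell
#!/bin/bash
# deploy.sh - apply the manifests and wait for the rollout to complete.
set -euo pipefail

kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

# Block until the Deployment reports all replicas ready (5-minute timeout).
kubectl rollout status deployment/my-app-deployment --timeout=300s

echo "Deployment complete."
```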
By automating the process, you can easily redeploy your
application whenever there are updates or changes to your
Docker image or Kubernetes manifests, without the need
for manual intervention.