Module 5
Contents
Working with remote repositories
Security and isolation
Troubleshooting
Monitoring and alerting
Controlling running containers
Containers in a business context
WORKING WITH REMOTE REPOSITORIES
Docker Hub repositories allow you to share container images with your team,
customers, or the Docker community at large.
Docker images are pushed to Docker Hub through the docker push command.
A single Docker Hub repository can hold many Docker images (stored as tags).
The power of Docker images is that they’re lightweight and portable—they can
be moved freely between systems.
You can easily create a set of standard images, store them in a repository on
your network, and share them throughout your organization.
Or you could turn to Docker Inc., which has created various mechanisms for
sharing Docker container images both publicly and privately.
The most prominent among these is Docker Hub, the company’s public
exchange for container images.
Many open source projects provide official versions of their Docker images
there, making it a convenient starting point for creating new containers by
building on existing ones, or just obtaining stock versions of containers to
spin up a project quickly.
And you get one private Docker Hub repository of your own for free.
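For example, a minimal sketch of spinning up a project from a stock image (the
official nginx image is used here purely as an illustration):
$ docker pull nginx
$ docker run -d -p 8080:80 nginx
This pulls the image from Docker Hub and runs it in the background, publishing
the container's port 80 on local port 8080.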
Explore Docker Hub
The easiest way to explore Docker Hub is simply to browse it on the web. From the
web interface, you can search for publicly available containers by name, tag, or
description.
From there, everything you need to download, run, and otherwise work with container
images from Docker Hub comes included in the open source version of Docker—
chiefly, the docker pull and docker push commands.
Docker Hub organizations for teams
If you’re using Docker Hub with others, you can create an organization, which
allows a group of people to share specific image repositories.
Organizations can be further subdivided into teams, each with their own sets of
repository privileges. Owners of an organization can create new teams and
repositories, and assign repository read, write, and admin privileges to fellow users.
Docker Hub repositories
Docker Hub repositories can be public or private. Public repositories can be
searched and accessed by anyone, even those without a Docker Hub account.
Private repos are available only to users you specifically grant access to, and
they are not publicly searchable. Note that you can turn a private repo public
and vice versa.
Note also that if you make a private repo public, you’ll need to ensure that
the exposed code is licensed for use by all and sundry.
Docker Hub does not offer any way to perform automatic license analysis on
uploaded images; that’s all on you.
While it is often easiest to search a repository using the web interface, the
Docker command line or shell also allows you to search for images.
Use docker search to run a search, which returns the names and descriptions
of matching images.
Certain repositories are tagged as official repositories.
These provide curated Docker images intended to be the default, go-to versions of
a container for a particular project or application (e.g. Nginx, Ubuntu, MySQL).
Docker takes additional steps to verify the provenance and security of official
images.
If you yourself maintain a project that you want to have tagged as an official
repository on Docker Hub, make a pull request to get the process started.
Note, however, that it is up to Docker to determine whether your project is
worthy of being included.
Docker push and Docker pull
Before you can push and pull container images to and from the Docker Hub, you
must connect to the Docker Hub with the docker login command, where you’ll
submit your Docker Hub username and password.
By default, docker login takes you to Docker Hub, but you can use it to connect to
any compatible repository, including privately hosted ones.
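A minimal sketch (registry.example.com is a hypothetical private registry host):
$ docker login
$ docker login registry.example.com
With no argument, docker login prompts for your Docker Hub credentials; with a
server name, it authenticates against that registry instead.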
Generally, working with Docker Hub from the command line is fairly
straightforward.
Use docker search as described above to find images, docker pull to pull an
image by name, and docker push to store an image by name.
A docker pull pulls images from Docker Hub by default unless you specify a
path to a different registry.
Note that when you push an image, it’s a good idea to tag it beforehand.
Tags are optional, but they help you and your team disambiguate image
versions, features, and other characteristics.
A common way to do this is to automate tagging as part of your image
build process—for instance, by adding version or branch information as
tags to images.
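As a sketch of this pattern, with a hypothetical image name and version tag:
$ docker build -t myorg/myapp:1.4.2 .
$ docker push myorg/myapp:1.4.2
Here the tag is set at build time, so the pushed image is unambiguous about
which version it contains.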
Commands for working with remote repositories
docker search
The docker search command searches Docker Hub for images.
docker search [OPTIONS] TERM
Examples
Search images by name: docker search busybox
Search images by star count: docker search --filter stars=3 busybox
This example displays images with a name containing 'busybox' and at least 3 stars.
Commands for working with remote repositories
docker push
The docker push command pushes an image or a repository to a registry.
docker push [OPTIONS] NAME[:TAG]
Example
Push a new image to a registry: docker image push --all-tags registry-host:5000/myname/myimage
When pushing with the --all-tags option, all tags of the
registry-host:5000/myname/myimage image are pushed.
Commands for working with remote repositories
docker pull
The docker pull command pulls an image or a repository from a registry.
docker pull [OPTIONS] NAME[:TAG|@DIGEST]
Example
Pull an image from Docker Hub: docker pull debian
If no tag is provided, Docker Engine uses the :latest tag as a default. This
command pulls the debian:latest image.
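To pull a specific version rather than :latest, give the tag explicitly, as in
this sketch:
$ docker pull debian:bookworm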
Automated builds on Docker Hub
Container images (hosted on Docker Hub) can be built automatically from their
components hosted in a repository.
With automated builds, any changes to the code in the repo are automatically
reflected in the container; you don’t have to manually push a newly built
image to Docker Hub.
Automated builds work by linking an image to a build context, i.e. a repo
containing a Dockerfile that is hosted on a service like GitHub or Bitbucket.
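As a sketch, the linked repo might contain little more than a minimal
Dockerfile such as the following (the base image and file names are assumptions
for illustration):
FROM python:3.12-slim
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
Whenever the repo changes, Docker Hub rebuilds the image from this Dockerfile.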
Although Docker Hub limits you to one build every five minutes, and there’s
no support yet for Git large files or Windows containers, automated builds are
nevertheless useful for projects updated daily or even hourly.
If you have a paid Docker Hub account, you can take advantage of parallel
builds.
An account eligible for five parallel builds can build containers from up to five
different repositories at once.
Note that each individual repository is allowed only one container build at a time; the
parallelism is across repos rather than across images in a repo.
Another convenience mechanism for developers in Docker Hub is webhooks.
Whenever a certain event takes place involving a repository—an image is rebuilt, or
a new tag is added— Docker Hub can send a POST request to a given endpoint.
You could use webhooks to automatically test or deploy an image whenever it is
rebuilt.
SECURITY AND ISOLATION
To use Docker safely, you need to be aware of the potential security issues
and the major tools and techniques for securing container-based systems.
TROUBLESHOOTING
Docker Desktop includes a diagnostics tool that gathers logs and system
information and can upload them to Docker for support. On Windows this is the
com.docker.diagnose.exe binary described below, run with the gather -upload
option. After the diagnostics have finished, you should have the following
output, containing your diagnostic ID:
Diagnostics Bundle: C:\Users\User\AppData\Local\Temp\CD6CF862-9CBD-4007-9C2F-
5FBE0572BBC2\20180720152545.zip
Diagnostics ID: CD6CF862-9CBD-4007-9C2F-5FBE0572BBC2/20180720152545 (uploaded)
Self-diagnose tool
Docker Desktop contains a self-diagnose tool which helps you to identify some
common problems. Before you run the self-diagnose tool, locate
com.docker.diagnose.exe. This is usually installed in
C:\Program Files\Docker\Docker\resources\com.docker.diagnose.exe.
To run the self-diagnose tool in PowerShell:
PS C:\> & "C:\Program Files\Docker\Docker\resources\com.docker.diagnose.exe" check
The tool runs a suite of checks and displays PASS or FAIL next to each check. If
there are any failures, it highlights the most relevant at the end.
Monitoring and alerting
In a microservice system, you are likely to have dozens, possibly hundreds
or thousands, of running containers.
You are going to want as much help as you can get to monitor the state
of running containers and the system in general.
A good monitoring solution should show at a glance the health of the
system and give advance warning if resources are running low (e.g.,
disk space, CPU, memory).
We also want to be alerted should things start going wrong (e.g., if
requests start taking several seconds or more to process).
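As a built-in starting point, the docker stats subcommand streams live CPU,
memory, and I/O figures for your running containers; dedicated monitoring
systems collect and alert on this kind of data:
$ docker stats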
Controlling running containers
In this section, we introduce a few basic and a few advanced commands to
illustrate how Docker containers can be managed.
The Docker engine enables you to start, stop, and restart a container with a set of
docker subcommands.
Let's begin with the docker stop subcommand, which stops a running container.
When a user issues this command, the Docker engine sends SIGTERM (-15) to the
main process, which is running inside the container. The SIGTERM signal requests
the process to terminate itself gracefully.
Most processes handle this signal and exit gracefully. However, if the process
fails to do so, the Docker engine waits for a grace period.
Controlling running containers
Even after the grace period, if the process has not been terminated, then the Docker
engine will forcefully terminate the process.
The forceful termination is achieved by sending SIGKILL (-9). The SIGKILL
signal cannot be caught or ignored, and so it will result in an abrupt termination of
the process without a proper clean-up.
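The grace period defaults to 10 seconds and can be adjusted with the -t
(--time) option of docker stop, as in this sketch (the container name is
hypothetical):
$ sudo docker stop -t 30 mycontainer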
Now, let's launch our container and experiment with the docker stop subcommand,
as shown here:
$ sudo docker run -i -t ubuntu:14.04 /bin/bash
root@da1c0f7daa2a:/#
Controlling running containers
Having launched the container, let's run the docker stop subcommand on this
container by using the container ID that was taken from the prompt.
$ sudo docker stop da1c0f7daa2a
da1c0f7daa2a
Now, we will notice that the container is being terminated. If you observe a little
more closely, you will also notice the text exit next to the container prompt.
This has happened due to the SIGTERM handling mechanism of the bash shell, as
shown here:
root@da1c0f7daa2a:/# exit
If we take it one step further and run the docker ps subcommand, then we will not
find this container anywhere in the list.
Since our container is in the stopped state, it has been comfortably left out of the
list.
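To list stopped containers as well as running ones, add the -a option:
$ sudo docker ps -a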
Controlling running containers
Next, let's look at the docker start subcommand, which is used for starting one or
more stopped containers.
A container could be moved to the stopped state either by the docker stop
subcommand or by terminating the main process in the container either normally or
abnormally.
On a running container, this subcommand has no effect.
Let's start the previously stopped container by using the docker start subcommand,
as follows:
$ sudo docker start da1c0f7daa2a
da1c0f7daa2a
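By default, docker start leaves the restarted container running in the
background. To reattach to its input and output, the -a (attach) and -i
(interactive) options can be combined, as in this sketch:
$ sudo docker start -a -i da1c0f7daa2a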
Controlling running containers
The restart command is a combination of the stop and the start functionality.
In other words, the restart command will stop a running container by following
the same steps as the docker stop subcommand and then initiate the start
process.
This functionality is provided through the docker restart subcommand.
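For example, reusing the container from the earlier example:
$ sudo docker restart da1c0f7daa2a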
The next important set of container-controlling subcommands are docker pause and
docker unpause.
The docker pause subcommand essentially freezes the execution of all the
processes within that container.
Conversely, the docker unpause subcommand will unfreeze the execution of all the
processes within that container and resume the execution from the point where it
was frozen.
Controlling running containers
Having seen the technical explanation of pause and unpause, let's see a detailed
example for illustrating how this feature works.
We have used two terminal screens for this scenario.
On one terminal, we have launched our container and used an infinite while loop
for displaying the date and time, sleeping for 5 seconds, and then continuing the
loop. We will run the following commands:
$ sudo docker run -i -t ubuntu:14.04 /bin/bash
root@c439077aa80a:/# while true; do date; sleep 5; done
Controlling running containers
Thu Oct 2 03:11:19 UTC 2014
Thu Oct 2 03:11:24 UTC 2014
Thu Oct 2 03:11:29 UTC 2014
Thu Oct 2 03:11:34 UTC 2014
Thu Oct 2 03:11:59 UTC 2014
Thu Oct 2 03:12:04 UTC 2014
Thu Oct 2 03:12:09 UTC 2014
Thu Oct 2 03:12:14 UTC 2014
Thu Oct 2 03:12:19 UTC 2014
Thu Oct 2 03:12:24 UTC 2014
Thu Oct 2 03:12:29 UTC 2014
Thu Oct 2 03:12:34 UTC 2014
Controlling running containers
Our little script has printed the date and time every 5 seconds with an exception at
the following position:
Thu Oct 2 03:11:34 UTC 2014
Thu Oct 2 03:11:59 UTC 2014
Here, there is a gap of 25 seconds instead of the usual 5, because during this
interval we initiated the docker pause subcommand on our container from the
second terminal screen, as shown here:
$ sudo docker pause c439077aa80a
c439077aa80a
Controlling running containers
When we paused our container, we checked its status by running the docker ps
subcommand on the same screen, and it clearly indicated that the container had
been paused, as shown in this command result:
$ sudo docker ps
CONTAINER ID   IMAGE          COMMAND       CREATED          STATUS                   PORTS   NAMES
c439077aa80a   ubuntu:14.04   "/bin/bash"   47 seconds ago   Up 46 seconds (Paused)           ecstatic_torvalds
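To resume the frozen container from the point where it was paused, run docker
unpause against the same container:
$ sudo docker unpause c439077aa80a
c439077aa80a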
Containers in a business context
Velocity
Containers are lightweight and immutable, which means they lend themselves to
increasing the speed of software delivery. Containers package all of an
application's dependencies and configuration within the container image, which
allows the same image to be used across all environments without modification.
This eliminates many inconsistencies and speeds up defect resolution. It also
supports a more agile and DevOps-oriented approach, improving the development,
test, and production cycles of an application. The velocity of application
delivery has an evident benefit to the business, as it supports the ability to
deliver new value and capabilities to customers more quickly at scale.
Containers in a business context
Portability
Software containers are a means for distribution. They allow applications to be run
safely and confidently in multiple places. Containers are packaged and transferred
to servers directly via registries using a simple tag, push, and pull model that’s
easily automated. The Open Container Initiative (OCI) has brought increased
confidence and guarantees around compatibility through its container image and
runtime specifications. Unlike virtual machine images, container images can
easily be migrated between clouds without a lot of rework. Portability and
repeatability are a common struggle for software teams, and containers are a
means to largely overcome that friction.
Containers in a business context
Isolation
The nature of software containers is that they have limited impact on other
applications running on the same node (node = single physical machine). This
increases simplicity, security, and stability for all apps/services on the node. The
lighter footprint that containers have compared to VMs allows you to potentially
run thousands of containers on one node – all in isolation. Application density
increases. From a business standpoint, this enables tremendous flexibility around
infrastructure use and can dramatically reduce overall resource consumption.
Containers in a business context
Availability
Since containers are more lightweight and their contents are designed to be
ephemeral (meaning critical data is stored outside the container and mounted
as a volume), containers can be restarted quickly and seamlessly if your
application allows for this. If a container fails to start properly or stops
responding, a new instance of the container can be scheduled by an
orchestrator, helping to ensure high availability. The ability to ship
containers between different clouds and infrastructure providers can also be a
factor in maintaining constant uptime.
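As a brief sketch of the pattern described above, a named volume keeps critical
data outside the container (the image name and mount path are hypothetical):
$ docker run -d -v appdata:/var/lib/appdata myorg/myapp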
Containers in a business context
Simplicity
The container packaging model aligns well with modern, distributed application
architectures that consist of different microservices. Once past the learning curve,
one should work to decompose existing apps and package them more simply as
immutable images. This pattern is proven to streamline operations via reduction of
onerous tasks such as operating system stabilization, runtime provisioning, and
other configuration. Deployment of an individual application or service is
complete, easily repeatable, and fast.
Containers in a business context
The overall benefits