
A few years ago, I was doing a pentest, and after spending 3 days, I couldn’t find even a single security issue.

As this has never happened before, you can guess my frustration.

Turns out, there were two things I wasn’t doing.

One, I wasn’t following Sun Tzu’s wisdom.

“If you know the enemy and know yourself, you need not fear the result of a hundred battles”

I had knowledge but no wisdom. Wisdom is learning about the target before you attack it.

I spent the next two days learning about the app (as a user) and tried pentesting the app again.

Guess how many security issues I found? A lot!

So before you learn how to attack containers, you need to understand a few things about containers.
1. What is a container (Docker)?
2. Why do we use it?
3. Who uses it, where and when?
4. How can Docker make my life easier?

So let’s dig in.

What is Docker?
If you are into IT or technology in general, you might have heard about Docker.

According to Wikipedia:

Docker is a set of platform-as-a-service (PaaS) products that use OS-level virtualization to deliver software in packages called containers.

Sounds like watching a foreign movie with no subtitles right? Let me explain in simple terms.

Note: We will use the words container and docker interchangeably going forward.
So what exactly is Docker?
Docker is a containerization tool that helps you create, package, and deploy applications.

Simply put, "instead of just shipping your application, you also ship the environment required to run the application".

So fewer moving parts, fewer chances of disasters.


Why should you learn Docker?
So you may ask, why should I learn Docker?

Let’s say you are developing an application in Python, and you have to ensure that it not only runs on your machine but works on the production system as well. The application might work there, or it might not. Why?

Because the production system might have a different version of Python installed, or different versions of the Python libraries/modules.

This is where Docker comes into play.


Container/Docker Advantages
Solves dependency collision.
Docker solves this problem. With Docker, we can containerize the application together with the required environment (OS, libraries) and ship it. Since everything is containerized into a single package, it won’t cause dependency collision issues.
Cross-platform
Since the same container image can run on any Linux, Mac or Windows machine, Docker provides cross-platform compatibility and makes deployment easy.

Write once, run anywhere!

Now you may say, I’m not a developer (I’m into operations), so why should I learn Docker?
Scalability
With Docker, you can easily scale your infrastructure as per your needs. If you are experiencing more load on your servers, you can increase the number of Docker containers. Similarly, if you are seeing less traffic, you can reduce the number of running containers.
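For example, with Docker Compose (a separate tool we have not covered in this lesson), scaling a service up or down is a one-liner. This is only a sketch; it assumes a docker-compose.yml that defines a hypothetical service named web, and that the Compose plugin (or the older docker-compose binary) is installed:

$ docker compose up -d --scale web=5   # run 5 containers of the web service
$ docker compose up -d --scale web=2   # scale back down to 2 when traffic drops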


Proper Resource Utilization
Unlike VMs, there is no need to allocate extra resources (like RAM, CPU, and disk) for a guest OS. Containers use the host kernel’s features (like namespaces, cgroups, etc.) to run.
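Cgroups are also what let you cap how much of the host a single container may consume. A minimal sketch follows; the limits are arbitrary example values and nginx is used only as a convenient public image:

$ docker run -d --memory 256m --cpus 0.5 nginx   # limit this container to 256 MB of RAM and half a CPU core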

Provides good security


Docker provides good security defaults and reduces deployment complexity, which helps in reducing the attack surface. We will discuss more security aspects as we move forward in this course.

Docker has many other advantages, which is why top tech companies are using it or have started using it.
These are some of the companies which use Docker’s container technology:
• Uber
• Visa
• PayPal
• Shopify
• Quora
• Splunk

There are many startups that are slowly adopting Docker because of the above-mentioned benefits.

So if you wish to keep yourself up to date with the latest technologies, it is important for you to learn Docker.

Container vs VMs
But Imran, "virtualization already solves most of these problems, so what benefits does Docker bring to the table that virtualization doesn’t? And why should I move to containers?"

Let’s compare containers with virtual machines to understand the benefits of Docker.

In virtualization, each VM has its own guest OS/kernel. This is an advantage in itself, as the VMs are independent and isolated from the host system. However, this approach comes with the significant drawback of allocating additional memory and storage for each VM.

For example, suppose you need to run 100 VMs simultaneously, each with 1 GB of RAM, 1 CPU core and 20 GB of disk.
1 GB * 100 = 100 GB

1 CPU core * 100 = 100 CPU cores

20 GB disk space * 100 = 2000 GB Disk

This increases the cost dramatically and reduces the performance of the overall system (hello, noisy neighbors).

Since container technology uses the host’s kernel features to run a container, there’s no guest OS involved, which makes containers leaner, faster and more efficient than VMs, as they have fewer layers.

So the saved resources can be better utilized to run more containers.

A picture is worth a thousand words, so let’s look at an image that illustrates these points.

The first half of the image shows that the containers (tenants) share the same kernel (house) as the owner (Host OS).

In the second half of the image, VMs (tenants) do not share the same kernel as the owner (Host OS). Every VM is allocated a dedicated kernel and hardware resources (space), which cannot be used by other tenants (VMs).

Due to the reduced overhead in the container stack, containers boot faster and are much more performant than VMs.

In short, container technology has more advantages than virtualization.

So does this make virtualization technology extinct?

No. Virtualization is here to stay; there are many scenarios where we prefer virtualization over container technology.

To sum it up, we have created a simple comparison table for you.

VMs                        Containers
Heavyweight                Lightweight
Has its own OS             Uses the host OS
Takes minutes to start     Takes a few milliseconds to start
More secure                Less secure*
Uses more resources        Uses fewer resources

* An attacker needs to compromise both the guest OS and the host OS to gain access to other VMs, whereas with Docker, compromising just the host OS is enough.

Docker/Container Disadvantages
People usually talk about Docker’s advantages, but rarely about its disadvantages.

Like every other technology, Docker also has some disadvantages.


Poor GUI Support
Docker does not support GUI applications by default. There are some workarounds and hacks to overcome this issue, but they are hacks, not features.


Poor Windows Support
Docker is a first-class citizen of the Linux OS. Windows support has been improving in recent times, but it is still far behind Linux support.
Lack of mature Security tools
The lack of a guest OS in a container is both a boon and a curse. A boon, as it makes everything faster and more efficient, but also a curse, as exploiting a Docker container might lead to system-wide compromise. Also, security monitoring tools for Docker are not as mature as those for non-container environments.
Lack of bare metal support
Docker needs a host OS to work. It doesn’t run directly on bare-metal servers the way a Type 1 hypervisor does.

Docker Architecture and its components


Let us now understand docker architecture and its components.

Docker’s architecture is pretty simple, and it has two main components:


1. Docker Client (CLI)
2. Docker Server (Daemon)

Docker Client (CLI) – The Docker client talks to the Docker daemon and asks it to do some job on its behalf. For example, a client (CLI) can request details regarding running containers, and the daemon responds with the state of the running containers.


Docker Server (daemon) – The Docker daemon is a background process that manages Docker images, containers, and volumes, as shown in the below image.
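To see this client/daemon split in action, note that the CLI is just one consumer of the daemon’s REST API, which by default is exposed on the Unix socket /var/run/docker.sock. The sketch below assumes curl is installed and that your user has permission to read the socket:

$ docker ps                                                                    # the CLI asks the daemon for running containers
$ curl --unix-socket /var/run/docker.sock http://localhost/containers/json    # roughly the same request, sent to the daemon directly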

Source: https://www.researchgate.net/profile/Yahya_Al-Dhuraibi/publication/308050257/figure/fig1/AS:433709594746881@1480415833510/High-level-overview-of-Docker-architecture.png

Actually, we lied: Docker (> 1.11) is not so simple anymore.

The Docker engine consists of many components (the docker CLI, dockerd, containerd, and runC).

Isn’t it awesome? Besides being technological trivia that will impress your friends at a party, this division lets you swap out a part of the stack for an alternative and removes the dependency on one vendor (bye-bye vendor lock-in). For example, the runC runtime can be replaced with another OCI-compatible runtime (such as Kata Containers or gVisor’s runsc) while still using dockerd and containerd from Docker.
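If you want to peek at this layered engine on a running system, the commands below are a rough sketch; exact output differs across Docker versions and distributions:

$ docker info --format '{{.DefaultRuntime}}'   # on most installations this prints "runc", the default low-level runtime
$ ps -e | grep -E 'dockerd|containerd'         # dockerd and containerd run as separate processes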



What’s in it for me?


Docker’s Role in DevOps
Docker is heavily used in Agile, Continuous Integration and Continuous Delivery (CI/CD), Microservices and

DevOps.

After developers finish testing the application, they package it into a container, and then the Ops team will simply deploy the container.


Docker’s Role in Security Industry
Most organizations (if not all) are moving to container technology to help them achieve speed, scalability, and agility.

Obviously, the security industry needs to secure these environments, so there is an increasing demand for security professionals who understand Docker, DevOps, and related technologies.


Docker is heavily used to deploy security tooling in DevSecOps. We at Practical DevSecOps rely on Docker to deploy various SCA, SAST, DAST, and monitoring tools for our clients.

Docker can also help you get a few hundred to a few thousand shells as part of post-exploitation.

This week’s tasks


Each week, I’m going to share a lesson about Docker and then give you 1-2 hands-on tasks.

If you commit to doing the work, you will get the results you want.

This week we are going to do two tasks.


1. I only covered "Know thy enemy", but knowing yourself is the second most important thing, which I wasn’t doing. The more you set your intention, the more likely you are to get it. Share what inspired you to learn Docker and what you would like to get out of this course by commenting here; you can also interact with your fellow professionals here.
2. Set up the lab environment in your machine.

Lab Setup for the Docker Security Course


Didn’t we say this course is going to be hands-on and practical? Let’s go ahead and set up a lab for this course.

Before proceeding, please ensure hardware and software prerequisites are met.
Software and Hardware requirements
Hardware
• Laptop/system with at least 4 GB of RAM, 15 GB of free hard disk space, and the ability to run a virtual machine.
• Administrator access to install software like VirtualBox, extensions, etc., and to change BIOS settings.
Software
1. VirtualBox, which you can download from here.
2. Docker Security Course OVA file

Extra: If you are new to VirtualBox and want to know more about its installation and usage, you can refer to the following links.
• Linux.
• macOS and Windows.

Note: Please restart your machine after the VirtualBox installation.

Now that we have installed VirtualBox and downloaded the course OVA file, let’s configure the course virtual machine by following the steps below.


Step 1: Open VirtualBox

Step 2: Click on File > Import Appliance

This will open a pop-up.

Step 3: Click on the file browser icon on the right side


Step 4: Select the downloaded OVA file (which you downloaded from the software requirements section above) and click Open

Step 5: Next, click on the Import button.


Step 6: Once the file is imported, select the lab image and click on the Start button.

Step 7: Once the VM finishes booting, log in to the lab using the following credentials.

Username: practical-devsecops

Password: docker

Step 8: Verify the installation of the required software and packages by running the following commands one after the other.

Open a Terminal (aka command prompt) by clicking on Menu → System Tools → LXTerminal

To verify the Docker installation, we can use the following command.

$ docker version
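If docker version prints only the Client section and then complains that it cannot connect to the Docker daemon, the daemon may not be running or your user may lack permission on the Docker socket. The lab VM should already be configured correctly, but as a general note, a common fix on Linux is:

$ sudo usermod -aG docker $USER   # add your user to the docker group
# log out and back in for the group change to take effect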

As a software industry tradition, we will start with a customary “Hello-world” image.

Open up the Terminal and type the following command.

$ docker run hello-world

Once the command finishes, you will see the output as shown below.

That’s it for this week’s tasks.

Reference and Further Reading


• Docker getting started page: https://docs.docker.com/get-started/
• Docker announces modular docker-engine design with dockerd, containerd and runC: https://blog.docker.com/2016/04/docker-engine-1-11-runc/
• History of container technology: https://opensource.com/article/18/1/history-low-level-container-runtimes

Conclusion
Today, you saw how powerful containers are and how you can leverage them. Using container technology, businesses can go to market faster, gain more customers, generate more revenue, provide employment to more people and create wealth for their nations.

We have also configured the lab environment for our future docker security lessons.

Next week, we will learn the technical nitty-gritty of Docker: Docker layers, images, containers, registries and much more.

Please stay tuned for our next lessons.

Looking forward to seeing you next week, have a great week ahead.

Lesson 2: Docker Images, Docker Layers, and Registry

Introduction
In the previous lesson, we learned the advantages and disadvantages of Docker. We also configured the lab environment and looked at a hello-world Docker example.

In this lesson, we are going to dig deeper into Docker, Docker images and their related commands. We will also be doing a couple of tasks where you will be tinkering with Docker rather than just copy-pasting commands.

Before moving on with this lesson, let us first clarify one of the most commonly asked questions: "What’s the difference between Docker images and Docker containers?"

Docker Image vs Docker Container


A Docker image is a template, a.k.a. a blueprint, used to create a running Docker container. Docker uses the information available in the image to create (run) a container.

If you are familiar with programming, you can think of an image as a class and a container as an instance of that class.

An image is a static entity that sits on disk as a tarball (an archive file), whereas a container is a dynamic (running, not static) entity, so you can run multiple containers from the same image.
In short, a container is the running form of an image.

Let’s see both images and containers in action using Docker CLI

The below command lists images present in the local machine.

$ docker images

You can create a container from this image by using the docker run command.

$ docker run hello-world
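To observe the one-image-to-many-containers relationship for yourself, run the image a couple of times and then list all containers, including stopped ones. This sketch reuses the hello-world image from the example above:

$ docker run hello-world
$ docker run hello-world
$ docker ps -a    # two separate containers, both created from the same hello-world image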

In order to perform security assessments of the Docker ecosystem, you need to understand Docker images and

containers in more detail. 

Let’s explore these topics in depth.

Docker Image
We already know that a Docker image is a template used to create (run) a Docker container, but what is it made of? What does it take to create one? And what information is needed to turn an image into a container?
Dockerfile
Docker images are usually created with the help of a Docker-specific file called a Dockerfile. A Dockerfile contains step-by-step instructions to create an image.


As we can see, a Dockerfile is used to create a Docker image, and this image is then used for the creation of a Docker container.

A simple example of a Dockerfile is shown below.

# Dockerfile for creating a nmap alpine docker Image

FROM alpine:latest

RUN apk update

RUN apk add nmap

ENTRYPOINT ["nmap"]

CMD ["localhost"]

Let’s explore what these Dockerfile instructions mean.

FROM: This instruction in the Dockerfile tells the daemon which base image to use while creating our new Docker image. In the above example, we are using a very minimal OS image called alpine (just about 5 MB in size). You can also replace it with Ubuntu, Fedora, Debian or any other OS image.

CMD: The CMD instruction sets the default command and/or parameters used when a Docker container runs. CMD can be overridden from the command line via the docker run command.

ENTRYPOINT: The ENTRYPOINT instruction is used when you would like your container to run the same executable every time. Usually, ENTRYPOINT is used to specify the binary and CMD to provide its default parameters.
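To make the ENTRYPOINT/CMD split concrete, assume the Dockerfile above has already been built and tagged as nmap-alpine (a hypothetical tag; building images is covered later in this lesson). The ENTRYPOINT stays fixed, while the CMD part can be replaced at run time:

$ docker run nmap-alpine                    # runs: nmap localhost (the default CMD)
$ docker run nmap-alpine scanme.nmap.org    # CMD overridden, runs: nmap scanme.nmap.org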

RUN: This instruction tells the Docker daemon to run the given commands as-is while creating the image. A Dockerfile can have multiple RUN commands, and each RUN command creates a new layer in the image.

For example, if you have the following Dockerfile.

# Dockerfile for creating a nmap alpine docker Image

FROM alpine:latest

RUN apk update

RUN apk add nmap

ENTRYPOINT ["nmap"]

CMD ["-h", "localhost"]

The FROM instruction will create one layer, the first RUN command will create another layer, the second RUN command one more, and finally, CMD will create the last layer.
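You can verify this layer-per-instruction behaviour yourself with docker history, which lists the layers of an image, newest first. The image name below is a hypothetical tag for an image built from the Dockerfile above:

$ docker history nmap-alpine   # one row per layer, showing the instruction that created it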

The example below shows the same Dockerfile using an Ubuntu base image.
# Dockerfile for creating a nmap ubuntu docker Image

FROM ubuntu:latest

RUN apt-get update

RUN apt-get install nmap -y

ENTRYPOINT ["nmap"]

CMD ["-h", "localhost"]

COPY: This command copies files from the host machine into the image.

ADD: The ADD command is similar to COPY but provides two more capabilities: it supports URLs, so we can download files directly from the internet, and it automatically extracts local archive files (such as tarballs) into the image.

WORKDIR: This instruction specifies the directory in which subsequent instructions are executed. You can think of it as the cd command in *nix-based operating systems.

EXPOSE: The EXPOSE instruction informs Docker which port(s) the application inside the container listens on; it documents the port rather than publishing it.
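To see several of these instructions working together, here is a small, hypothetical Dockerfile for a Python web application. The file names, port and start command are illustrative assumptions, not part of this course’s lab:

# Hypothetical Dockerfile for a small Python web application
FROM python:3-alpine
# All following instructions run inside /app
WORKDIR /app
# Copy only the dependency list first, then install it (each step creates a layer)
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the rest of the application code
COPY . .
# Document that the app listens on port 8000 (EXPOSE does not publish the port)
EXPOSE 8000
# Default command when the container starts
CMD ["python", "app.py"]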

Enough theory! Let’s build a simple Docker image that runs Nmap against localhost.
Docker Image creation using Dockerfile
Start your VM, open the Leafpad text editor by going to Start Menu → Accessories → Leafpad, and paste the code below.
# Dockerfile for creating a nmap alpine docker Image

FROM alpine:latest

RUN apk update

RUN apk add nmap

ENTRYPOINT ["nmap"]

CMD ["-h", "localhost"]

Save the file (File → Save) with the name Dockerfile with no extension.

Let’s use this Dockerfile to build the Nmap image.

Open the terminal, go to the directory where the Dockerfile exists, and then execute the following command.
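A minimal sketch of the build command, assuming you want to tag the image nmap-alpine (an arbitrary name you are free to change):

$ docker build -t nmap-alpine .   # '.' means: use the Dockerfile in the current directory as the build context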
