LINUX CONTAINER INTERNALS
How they really work
Scott McCarty
Principal Product Manager, Containers
07/17/2018
BASIC INFO
Introduction - Linux Container Internals
Wifi, Labs, Etc
● Wifi
○ SSID: HyattMR
○ Password: linux2019
● Labs
○ https://learn.openshift.com/training/lci-olf
2 Scott McCarty, Twitter: @fatherlinux
AGENDA
Introduction - Linux Container Internals 2.1
Introduction
● We will use a completely hosted solution called Katacoda for this lab:
○ All you need is a web browser and Internet access
○ Instructions, code repositories, and a terminal connected to a real, working virtual machine are provided
○ All code is clickable - just click on it and it will paste into the terminal
○ The environment can be reset at any time by refreshing (very nice)
○ Don't be intimidated by the bash examples; there are a lot of gymnastics to make sure the lab can
be run just by clicking. Feel free to ask me about the bash stuff.
● At Red Hat we encourage networking and we'd like you to spend 2 to 3 minutes introducing
yourselves to the person(s) next to you. Say what company or organization you're from, and what
you're looking to learn from this tutorial.
INTRODUCTION
AGENDA
Introduction - Linux Container Internals
Introduction
Four new tools in your toolbelt
Container Images
The new standard in software packaging
Container Hosts
Container runtimes, engines, daemons
Container Registries
Sharing and collaboration
Container Orchestration
Distributed systems and containers
AGENDA
Advanced - Linux Container Internals
Container Standards
Understanding OCI, CRI, CNI, and more
Container Tools Ecosystem
Podman, Buildah, and Skopeo
Advanced Architecture
Building in resilience
Container History
Context for where we are today
Production Image Builds
Sharing and collaboration between specialists
Intermediate Architecture
Production environments
Production-Ready Containers
What are the building blocks you need to think about?
CONTAINER IMAGES
CONTAINER IMAGE
Open source code/libraries, in a Linux distribution, in a tarball
Even base images are made up of layers:
● Libraries (glibc, libssl)
● Binaries (httpd)
● Packages (rpms)
● Dependency Management (yum)
● Repositories (rhel7)
● Image Layer & Tags (rhel7:7.5-404)
● At scale, across teams of developers
and CI/CD systems, consider all of the
necessary technology
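At bottom, an image really is layers of tarballs. A minimal sketch of the idea using only tar and coreutils (no container engine; the file names are made up for illustration):

```shell
cd "$(mktemp -d)"
# Build two "layers" as tarballs, then apply them in order - conceptually
# what a container engine does when it unpacks an image into a rootfs.
mkdir -p layer1/etc layer2/etc rootfs
echo 'base'  > layer1/etc/os-release
echo 'httpd' > layer2/etc/app.conf
tar -C layer1 -cf layer1.tar .
tar -C layer2 -cf layer2.tar .

# Extract the change sets in order; later layers win on conflicts
tar -C rootfs -xf layer1.tar
tar -C rootfs -xf layer2.tar
ls rootfs/etc   # both files now appear in the merged root filesystem
```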
IT ALL STARTS WITH COMPILING
Statically linking everything into the binary
Starting with the basics:
● Programs rely on libraries
● Especially things like SSL - difficult to
reimplement in, for example, PHP
● Math libraries are also common
● Libraries can be compiled into
binaries - called static linking
● Example: C code + glibc + gcc =
program
LEADS TO DEPENDENCIES
Dynamically linking libraries into the binary
Getting more advanced:
● This is convenient because programs
can now share libraries
● Requires a dynamic linker
● Requires the kernel to understand
where to find this linker at runtime
● Not terribly different from interpreters
(hence the operating system is called
an interpretive layer)
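You can see the dynamic linker at work on any Linux box (a sketch assuming /proc is mounted; the exact ld.so path varies by distribution): every dynamically linked process has the linker mapped into its address space.

```shell
# The kernel reads the ELF interpreter (the dynamic linker) out of the
# binary and loads it first; ld.so then resolves shared libraries at
# runtime. The running shell therefore has ld.so mapped in:
grep -m1 '/ld-' "/proc/$$/maps"
```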
PACKAGING & DEPENDENCIES
RPM and Yum were invented a long time ago
Dependencies need resolvers:
● Humans have to create the
dependency tree when packaging
● Computers have to resolve the
dependency tree at install time
(container image build)
● This is essentially what a Linux
distribution does sans the installer
(container image)
PACKAGING & DEPENDENCIES
Interpreters have to handle the same problems
Dependencies need resolvers:
● Humans have to create the
dependency tree when packaging
● Computers have to resolve the
dependency tree at install time
(container image build)
● Python, Ruby, Node.js, and most
other interpreted languages rely on C
libraries for difficult tasks (ex. SSL)
CONTAINER IMAGE PARTS
Governed by the OCI image specification standard
Lots of payload media types:
● Image Index/Manifest.json - provides an
index of image layers
● Image layers provide change sets -
adds/deletes of files
● Config.json provides command line
options, environment variables, time
created, and much more
● Not actually single images, really
repositories of image layers
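A minimal sketch of what an OCI image layout looks like on disk (a hand-built skeleton for illustration; real manifests also carry digests and sizes for every blob):

```shell
cd "$(mktemp -d)"
# An OCI image layout is just files: a version marker, an index, and
# content-addressed blobs.
mkdir -p image/blobs/sha256
echo '{"imageLayoutVersion": "1.0.0"}' > image/oci-layout
cat > image/index.json <<'EOF'
{
  "schemaVersion": 2,
  "manifests": [
    { "mediaType": "application/vnd.oci.image.manifest.v1+json" }
  ]
}
EOF
cat image/oci-layout
```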
LAYERS ARE CHANGE SETS
Each layer has adds/deletes
Each image layer is a permutation in time:
● Different files can be added, updated
or deleted with each change set
● Still relies on package management
for dependency resolution
● Still relies on dynamic linking at
runtime
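Deletes in a change set are recorded as whiteout files. A sketch of the convention, applying change sets by hand with tar (a real engine does this through the graph driver):

```shell
cd "$(mktemp -d)"
# Layer 1 adds two files; layer 2 "deletes" one of them by shipping a
# whiteout marker (.wh.<name>) in its change set.
mkdir -p l1 l2 rootfs
touch l1/keep.txt l1/remove.txt
touch l2/.wh.remove.txt
tar -C l1 -cf l1.tar . && tar -C l2 -cf l2.tar .

tar -C rootfs -xf l1.tar
tar -C rootfs -xf l2.tar
# Apply the whiteout: the marker means "hide this file from the view"
for wh in rootfs/.wh.*; do rm -f "$wh" "rootfs/${wh##*/.wh.}"; done
ls rootfs   # only keep.txt survives
```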
LAYERS ARE CHANGE SETS
Some layers are given a human readable name
Each image layer is a permutation in time:
● Different files can be added, updated
or deleted with each change set
● Still relies on package management
for dependency resolution
● Still relies on dynamic linking at
runtime
CONTAINER IMAGES & USER OPTIONS
Come with default binaries to start, environment variables, etc
Each image layer is a permutation in time:
● Different files can be added, updated
or deleted with each change set
● Still relies on package management
for dependency resolution
● Still relies on dynamic linking at
runtime
INTER REPOSITORY DEPENDENCIES
Think through this problem as well
You have to build this dependency tree
yourself:
● DRY - Don't repeat yourself. Very
similar to functions in coding
● OpenShift BuildConfigs and
DeploymentConfigs can help
● Letting every development team
embed their own libraries takes you
back to the 90s
CONTAINER IMAGE
Open source code/libraries, in a Linux distribution, in a tarball
Even base images are made up of layers:
● Libraries (glibc, libssl)
● Binaries (httpd)
● Packages (rpms)
● Dependency Management (yum)
● Repositories (rhel7)
● Image Layer & Tags (rhel7:7.5-404)
● At scale, across teams of developers
and CI/CD systems, consider all of the
necessary technology
CONTAINER REGISTRIES
REGISTRY SERVERS
Better than virtual appliance marketplaces :-)
Defines a standard way to:
● Find images
● Run images
● Build new images
● Share images
● Pull images
● Introspect images
● Shell into running container
● Etc, etc, etc
CONTAINER REGISTRY & STORAGE
Mapping image layers
Covering push, pull, and registry:
● REST API (blobs, manifests, tags)
● Image Scanning (clair)
● CVE Tracking (errata)
● Scoring (Container Health Index)
● Graph Drivers (overlay2, dm)
● Responsible for maintaining chain of
custody for secure images from
registry to container host
START WITH QUALITY REPOSITORIES
Repositories depend on good packages
Determining the quality of a repository
requires metadata:
● Errata is simple to explain, hard to
build
○ Security Fixes
○ Bug Fixes
○ Enhancements
● Per container image layer (tag); often
maps to multiple packages
SCORING REPOSITORIES
Images age like cheese, not like wine
Based on severity and age of Security
Errata:
● Trust is temporal
● Even good images go bad over time
because the world changes around
you
SCORING REPOSITORIES
Container Health Index
Based on severity and age of Security
Errata:
● Trust is temporal
● Images must constantly be rebuilt to
maintain score of “A”
PUSH, PULL & SIGNING
Signing and verification before/after transit
Registry has all of the image layers and can
have the signatures as well:
● Download trusted thing
● Download from trusted source
● Neither is sufficient by itself
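The "trusted thing" half is content addressing: every blob is pinned by digest in the manifest and can be re-verified after transit. A sketch with coreutils (the tarball is a stand-in for an image layer):

```shell
cd "$(mktemp -d)"
# The manifest pins each layer by sha256 digest; after download, the
# client recomputes the digest and refuses the blob on any mismatch.
echo 'pretend this is an image layer' > layer.tar
expected=$(sha256sum layer.tar | awk '{print $1}')  # digest from the manifest

cp layer.tar downloaded.tar                         # "pull" the blob
actual=$(sha256sum downloaded.tar | awk '{print $1}')
[ "$actual" = "$expected" ] && echo "digest verified"
```

Signature verification ("trusted source") layers on top of this, attesting to who published the pinned digest.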
PUSH, PULL & SIGNING
Mapping image layers
GRAPH DRIVERS
Mapping layers uses file system technology
Local cache maps each layer to volume or
filesystem layer:
● Overlay2 file system and container
engine driver
● Device Mapper volumes and
container engine driver
PUSH, PULL & SIGNING
Mapping image layers
CONTAINER REGISTRY & STORAGE
Mapping image layers
Covering push, pull, and registry:
● REST API (blobs, manifests, tags)
● Image Scanning (clair)
● CVE Tracking (errata)
● Scoring (Container Health Index)
● Graph Drivers (overlay2, dm)
● Responsible for maintaining chain of
custody for secure images from
registry to container host
CONTAINER HOSTS
CONTAINER HOST BASICS
Container Engine, Runtime, and Kernel
CONTAINERS DON’T RUN ON DOCKER
The Internet is WRONG :-)
Important corrections
● Containers do not run ON docker.
Containers are processes - they run
on the Linux kernel. Containers are
Linux processes (or Windows).
● The docker daemon is one of the
many user space tools/libraries that
talks to the kernel to set up
containers
PROCESSES VS. CONTAINERS
Actually, there is no processes vs. containers in the kernel
User space and kernel work together
● There is only one process ID structure
in the kernel
● There are multiple human and
technical definitions for containers
● Container engines are one technical
implementation which provides both
a methodology and a definition for
containers
THE CONTAINER ENGINE IS BORN
This was a new concept introduced with Docker Engine and CLI
Think of the Docker Engine as a giant proof
of concept - and it worked!
● Container images
● Registry Servers
● Ecosystem of pre-built images
● Container engine
● Container runtime (often confused)
● Container image builds
● API
● CLI
● A LOT of moving pieces
DIFFERENT ENGINES
All of these container engines are OCI compliant
Podman CRI-O Docker
CONTAINER ENGINE VS. CONTAINER HOST
In reality the whole container host is the engine - like a Swiss watch
VS.
CONTAINER HOST
Released, patched, tested together
Tightly coupled communication through the
kernel - all or nothing feature support:
● Operating System (kernel)
● Container Runtime (runc)
● Container Engine (Docker)
● Orchestration Node (Kubelet)
● Whole stack is responsible for
running containers
CONTAINER ENGINE
Defining a container
KERNEL
Creating regular Linux processes
Normal processes are created, destroyed,
and managed with system calls:
● fork() - think Apache
● exec() - think ps
● exit()
● kill()
● open()
● close()
● system()
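These calls are easy to observe from the shell itself (a sketch; `$$` expands to a process's own PID):

```shell
# The shell fork()s a child and exec()s a new program into it; the child
# gets its own entry in the kernel's single process table.
parent=$$
child=$(sh -c 'echo $$')   # fork + exec: a brand new PID
echo "parent PID: $parent"
echo "child PID:  $child"
[ "$parent" != "$child" ] && echo "two distinct processes"
```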
KERNEL
Creating “containerized” Linux processes
What is a container anyway?
● No kernel definition for what a
container is - only processes
● clone() - the closest we have
● Creates namespaces for kernel
resources
○ Mount, UTS, IPC, PID, Network,
User
● Essentially, virtualized data structures
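You can see a process's namespaces directly in /proc (a Linux-only sketch; the symlink targets identify which namespace instance the process belongs to):

```shell
# Every process holds a set of namespace handles; a "containerized"
# process simply points at different instances than the rest of the
# system.
ls /proc/self/ns
```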
KERNEL
Namespaces are all you get with the clone() syscall
KERNEL
Even namespaced resources use the same subsystem code
CONTAINER RUNTIME
Standardizing the way user space communicates with the kernel
Expects some things from the user:
● OCI Manifest - json file which contains
a familiar set of directives - read only,
seccomp rules, privileged, volumes,
etc
● Filesystem - just a plain old directory
which has the extracted contents of a
container image
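A heavily trimmed sketch of the config.json an OCI runtime expects (an illustrative subset only; `runc spec` generates a complete one):

```json
{
  "ociVersion": "1.0.1",
  "process": {
    "terminal": false,
    "user": { "uid": 0, "gid": 0 },
    "args": [ "/bin/sh" ],
    "env": [ "PATH=/usr/sbin:/usr/bin:/sbin:/bin" ],
    "cwd": "/"
  },
  "root": {
    "path": "rootfs",
    "readonly": true
  }
}
```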
CONTAINER RUNTIME
Adds in cgroups, SELinux, sVirt, and SECCOMP
CONTAINER RUNTIME
But, there were others before runc, what’s the deal?
There is a rich history of standardization
attempts in Linux:
● libvirt
● LXC
● systemd-nspawn
● libcontainer (eventually became
runc)
CONTAINER ENGINE
Provides an API and prepares data & metadata for runc
Three major jobs:
● Provide an API for users and robots
● Pull images, decompose them, and
prepare storage
● Prepare configuration and pass it to
runc
PROVIDE AN API
Regular processes, daemons, and containers all run side by side
In action:
● Number of daemons & programs
working together
○ dockerd
○ containerd
○ runc
PULL IMAGES
Mapping image layers
Pulling, caching and running containers:
● Most container engines use graph
drivers which rely on kernel drivers
(overlay, device mapper, etc)
● There is work going on to do this in
user space, but there are typically
performance trade-offs
PREPARE STORAGE
Copy on write and bind mounts
Understanding implications of bind mounts:
● Copy on write layers can be slow
when writing lots of small files
● Bind mounted data can reside on any
VFS mount (NFS, XFS, etc)
PREPARE CONFIGURATION
Combination of image, user, and engine defaults
Three major inputs:
● User inputs can override defaults in
image and engine
● Image inputs can override engine
defaults
● Engine provides sane defaults so that
things work out of the box
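The precedence rule can be sketched in a few lines of shell (the variable names are hypothetical; real engines merge JSON configs, not environment variables):

```shell
# Engine default < image default < user option: the most specific
# non-empty source wins, falling back to the engine default.
engine_cmd="/bin/sh"             # engine's built-in fallback
image_cmd="httpd -DFOREGROUND"   # from the image's config metadata
user_cmd=""                      # e.g. a command given on the CLI; empty = unset

effective=${user_cmd:-${image_cmd:-$engine_cmd}}
echo "will run: $effective"      # prints: will run: httpd -DFOREGROUND
```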
PREPARE CONFIGURATION + CNI
Regular processes, daemons, and containers all run side by side
In action:
● Takes user specified options
● Pulls image, expands, and parses
metadata
● Creates and prepares CNI json blob
● Hands CNI blob and environment
variables to one or more plugins
(bridge, portmapper, etc)
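A sketch of the kind of CNI JSON blob the engine hands to a plugin (a minimal bridge plus host-local IPAM configuration; the network name and subnet are made up):

```json
{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.88.0.0/16",
    "routes": [ { "dst": "0.0.0.0/0" } ]
  }
}
```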
ENGINE, RUNTIME, KERNEL, AND MORE
All of these must be versioned together and tested together to prevent regressions
BONUS INFORMATION
Other related technology
Containers With Advanced Isolation
Kata Containers, gVisor, and KubeVirt (because deep down inside you want to know)
● Kata Containers integrate at the container runtime
layer
● gVisor integrates at the container runtime layer
● KubeVirt is not advanced container isolation;
it is an add-on to Kubernetes which extends it
to schedule VM workloads side by side with
container workloads
Kata Containers
Containers in VMs
You still need connections to the
outside world:
● Shim offers reaping of processes/VMs similar to
normal containers
● Proxy allows serial access into container in VM
● 9pfs is the communication channel for storage
gVisor
Anybody remember user mode Linux?
gVisor is:
● Written in golang
● Runs in userspace
● Reimplements syscalls
● Reimplements hardware
● Uses 9p for storage
Concerns
● Storage performance
● Limited syscall implementation
KubeVirt
Extension of Kubernetes for running VMs
KubeVirt is:
● Custom resource in Kubernetes
● Defined/actual state VMs
● Good for VM migrations
● Uses persistent volumes for VM disk
KubeVirt is not:
● Stronger isolation for containers
● Part of the Container Engine
● A replacement Container Runtime
● Based on container images
CONTAINER ORCHESTRATION
KUBERNETES & OPENSHIFT
It’s a 10 ton dump truck that handles pretty well at 200 MPH
Two major jobs:
● Scheduling - distributed systems computing.
Resolving where to put containers in the
cluster and allowing users to connect to
them
● Provide an API - can be consumed by users
or robots. Defines a model state for the
application. Completely new way of thinking.
RED HAT AND CONTAINERS - CONFIDENTIAL - NDA REQUIRED
SCHEDULING CONTAINERS
Defining the desired state
● Requires thinking in a completely new way - distributed systems
● Fault tolerance must be designed into the system
MODELING THE APPLICATION
Defining the desired state
Modeling the application using defined state, actual
state. Resolving discrepancies:
● The end user defines the desired state
● The system continuously resolves
discrepancies by taking action
● Automation can also modify the desired
state - Inception
ADVANCED MODELING
Many other resources can be defined
ADVANCED MODELING
Humans interact with these resources through defined state
ADVANCED MODELING
These resources are virtual, but map to real world infrastructure
ADVANCED MODULES
AGENDA
Advanced - Linux Container Internals
Container Standards
Understanding OCI, CRI, CNI, and more
Container Tools Ecosystem
Podman, Buildah, and Skopeo
Advanced Architecture
Building in resilience
Container History
Context for where we are today
Production Image Builds
Sharing and collaboration between specialists
Intermediate Architecture
Production environments
CONTAINER STANDARDS
THE PROBLEM
With no standard, there is
no way to automate. Each
box is a different size, has
different specifications.
No ecosystem of tools
can form.
Image: Boxes manually loaded on
trains and ships in 1921
WHY STANDARDS MATTER TO YOU
Protect customer investment
The world of containers is moving very quickly. Protect your investment in training,
software, and building infrastructure.
Enable ecosystems of products and tools to form
Cloud providers, software providers, communities and individual contributors can all
build tools.
Allow communities with competing interests to work together
There are many competing interests, but as a community we have common goals.
SIMILAR TO REAL SHIPPING CONTAINERS
Standards in different places achieve different goals
The analogy is strikingly good. The
importance of standards is critical:
● Failures are catastrophic in a fully
automated environment, such as the port of
Shanghai (think CI/CD)
● Something so simple, requires precise
specification for interoperability (Files &
Metadata)
● Only way to protect investment in
equipment & infrastructure (container
orchestration & build processes)
How did we get here?
1979: chroot syscall
2000: Jails added to FreeBSD
2001: Linux-VServer project
2003: SELinux added to Linux mainline
2005: Full release of Solaris Zones
2006: Process confinement
2007: GPC renamed cgroups
2008: Linux Container project (LXC)
2008: Kernel & user namespaces
2013: DotCloud PyCon lightning talk
2013: DotCloud becomes Docker
2013: Red Hat Enterprise Linux
2014: Google Kubernetes
Where are we going?
2015: Red Hat Container Platform 3.0
2015: Standards via OCI and CNCF
2015: Tectonic announced
2016: Skopeo project launched
2016: CRI-O project launched under the name OCID
2016: Containerd project launched
2016: Docker engine 1.12 adds swarm
2017: v1.0 of the image & runtime specs
2017: Buildah released and ships in RHEL
2017: Docker includes the new containerd
2017: Moby project announced
2017: Kata merges Clear & RunV projects
2018: v1.0 of the distribution spec
2018: CRI-O is GA and powers OpenShift Online
2018: Podman released and ships in RHEL
ARCHITECTURE
The Internet is WRONG :-)
Important corrections
● Containers do not run ON docker.
Containers are processes - they run
on the Linux kernel. Containers are
Linux.
● The docker daemon is one of the
many user space tools/libraries that
talks to the kernel to set up
containers
Containers Are Open
Established in June 2015 by Docker and other leaders in the container
industry, the OCI currently contains three specifications which govern
building, running, and moving containers.
Standards Are Well Governed
● Governed by The Linux
Foundation
● Ecosystem includes:
○ Vendors
○ Cloud Providers
○ Open Source Projects
OVERVIEW OF THE DIFFERENT STANDARDS
Vendor, Community, and Standards Body driven
Many different standards
WORKING TOGETHER
Standards in different places achieve different goals
Different standards are focused on
different parts of the stack.
● Container Images & Registries
● Container Runtimes
● Container Networking
WHAT ARE CONTAINERS ANYWAY?
Data and metadata
Container images need to express user’s
intent when built and run.
● How to run
● What to run
● Where to run
IMAGE AND RUNTIME SPECIFICATIONS
Powerful standards which enable communities and companies to build best of breed tools
Fancy files and fancy processes
WORKFLOW OF CONTAINERS
The building blocks of how a container goes from image to running process
● Allows users to build container images with any tool they choose. Different tools are good for different use cases.
● The container engine is responsible for creating the config.json file and unpacking images into a root file system.
● OCI compliant runtimes can consume the config.json and root filesystem, and tell the kernel to create a container.
● OCI compliant runtimes can be built for multiple operating systems including Linux, Windows, and Solaris.
TYING IT ALL TOGETHER
These standards are extremely powerful
WORKING TOGETHER
Technical example
Different standards are focused on
different parts of the stack.
● Tools like crictl use the CRI
standard
● Tools like Podman use standard
libraries
● Tools like runc are widely used
THE COMMUNITY LANDSCAPE
Open Source, Leadership & Standards
The landscape is made up of committees,
standards bodies, and open source
projects:
● Docker/Moby
● Kubernetes/OpenShift
● OCI Specifications
● Cloud Native Technical Leadership
CONTAINER ECOSYSTEM
AN OPEN SOURCE SUPPLY CHAIN
One big tool, or best of breed Unix like tools based on standards
BASIC CONTAINERS ARE SIMILAR TO PDF?
Find, Run, Build, and Share. Collaboration with any reader/writer
MINIMUM TO BUILD OR RUN A CONTAINER?
Standards and open source code
● A standard definition for a container at rest
○ OCI Image Specification - includes image and metadata in a bundle
● A standard mechanism to pull the bundle from a container registry to the host
○ OCI Distribution Specification - specifies protocol for registry servers
○ github.com/containers/image
● Ability to uncompress and map the OCI image bundle to local storage
○ github.com/containers/storage
● A standard mechanism for running a container
○ OCI Runtime Specification - expects only a root file system and config.json
○ The default runc implementation of the Runtime Spec (same tool Docker uses)
WHAT ELSE DOES KUBERNETES NEED?
Standards and open source code
● The minimum to build or run a container
AND
● A standard way for the Kubelet to communicate with the Container Engine
○ Container Runtime Interface (CRI) - the protocol between the Kubelet and Engine
● A daemon which communicates with CRI
○ gRPC Server - a daemon or shim which implements this server specification
● A standard way for humans to interface with the gRPC server to troubleshoot and debug
○ crictl - a node-based CLI tool that can list images, view running containers, etc
THERE ARE NOW ALTERNATIVES
Moving to Podman in RHEL 8 and CRI-O in OpenShift 4
THE UNDERLYING ECOSYSTEM
Many tools and libraries
skopeo
CREATING DOWNSTREAM PRODUCTS
Release timing is critical to solving problems
THE JOURNEY
Can start anywhere
● Traditional Development: Find, Run, Build, Share - RHEL (Podman/Buildah/Skopeo) and Quay
● Cloud Native: Integrate, Deploy - OpenShift (Kubernetes)
CUSTOMER NEEDS
Mapping customer needs to solutions
Capability  | Platform                | Product                  | Container Engine
Single Node | Linux & Container Tools | Red Hat Enterprise Linux | Podman
Multi Node  | Linux & Kubernetes      | OpenShift                | CRI-O
Red Hat Enterprise Linux 8
The container tools module
PODMAN ARCHITECTURE
Find, Run, Build, and Share. Collaboration with any reader/writer
APPLICATION STREAMS USE MODULES
Modules are the mechanism of delivering multiple streams (versions) of software within a major release. This
also works the other way round: a single stream across multiple major releases.
Modules are collections of packages representing a logical unit e.g. an application, a language stack, a
database, or a set of tools. These packages are built, tested, and released together.
Each module defines its own lifecycle which is closer to the natural life of the app rather than the RHEL
lifecycle.
● PostgreSQL 9.4 stream - 5 years of updates
● PostgreSQL 10 stream - 5 years of updates
● PHP 7.1 stream - 3 years of updates
● PHP 7.2 stream - 3 years of updates
Red Hat Enterprise Linux High Touch Beta
THE CONTAINER TOOLS RELEASES
One Module delivered with multiple Application Streams based on different use cases:
● The rhel8 stream delivers new versions for developers
● The versioned, stable streams provide stability for operations
○ Created once a year, supported for two years
○ Only backports of critical fixes
● rhel8 - rolling, fast stream
● V1 - stable stream, 2 years of updates
● V2 - stable stream, 2 years of updates
OpenShift 4
CRI-O and Buildah as a library
CRI-O ARCHITECTURE
Run containers
BUILDAH ARCHITECTURE
Build and share containers
IN LOCKSTEP WITH KUBERNETES
All components for running containers released, tested, and supported together for reliability:
● CRI-O moves in lock-step with the underlying Kubernetes
● The runc container runtime is delivered side by side
● Buildah delivered as a library specifically for OpenShift. No commands for users.
OpenShift 4.x, supported and updated together:
● Kubernetes 1.13 + CRI-O 1.13
● Kubernetes 1.14 + CRI-O 1.14
● Kubernetes 1.15 + CRI-O 1.15
PRODUCTION IMAGE BUILDS
Fancy Files
How do we currently collaborate in the user space?
Fancy Files
The future of collaboration in the user space...
Fancy Files
The future of collaboration in the user space...
INTERMEDIATE ARCHITECTURE
THE ORCHESTRATION TOOLCHAIN
On Multiple Hosts
The orchestration toolchain adds the
following:
● More daemons (it’s a party) :-)
● Scheduling across multiple hosts
● Application Orchestration
● Distributed builds (OpenShift)
● Registry (OpenShift)
THE LOGIC
Bringing it All Together
ADVANCED ARCHITECTURE
TYPICAL ARCHITECTURE
Bringing it All Together
In distributed systems, the user must interact through APIs
HISTORY
THE HISTORY OF CONTAINERS
1979: chroot syscall
2000: Jails added to FreeBSD
2001: Linux-VServer project
2003: SELinux added to Linux mainline
2005: Full release of Solaris Zones
2006: Process confinement
2007: GPC renamed cgroups
2008: Linux Container project (LXC)
2008: Kernel & user namespaces
2013: DotCloud PyCon lightning talk
2013: DotCloud becomes Docker
2013: Red Hat Enterprise Linux
2014: Google Kubernetes
CONTAINER INNOVATION IS NOT FINISHED
2015: Red Hat Container Platform 3.0
2015: Standards via OCI and CNCF
2015: Tectonic announced
2016: Skopeo project launched
2016: CRI-O project launched under the name OCID
2016: Containerd project launched
2016: Docker engine 1.12 adds swarm
2017: v1.0 of the image & runtime specs
2017: Buildah released and ships in RHEL
2017: Docker includes the new containerd
2017: Moby project announced
2017: Kata merges Clear & RunV projects
2018: v1.0 of the distribution spec
2018: CRI-O is GA and powers OpenShift Online
2018: Podman released and ships in RHEL
THANK YOU
plus.google.com/+RedHat facebook.com/redhatinc
linkedin.com/company/red-hat twitter.com/RedHatNews
youtube.com/user/RedHatVideos
Code for Attendance + Session Survey
AMER - UNIVERSAL BASE
1. In the mobile app, go to the My Attendance page by clicking “More” at the bottom navigation bar
2. On the My Attendance page, please enter the below PIN code in the designated box: SPZCZ
3. Tap Save to submit your PIN
Code for Attendance + Session Survey
AMER - PODMAN
1. In the mobile app, go to the My Attendance page by clicking “More” at the bottom navigation bar
2. On the My Attendance page, please enter the below PIN code in the designated box: JYHFB
3. Tap Save to submit your PIN
Mounts
Copy on write vs. bind mounts
AGENDA
L103118 - Linux container internals
10:15AM—10:25AM INTRODUCTION
10:25AM—10:40AM ARCHITECTURE
10:40AM—11:05AM CONTAINER IMAGES
11:05AM—11:35AM CONTAINER HOSTS
11:35AM—12:05PM CONTAINER ORCHESTRATION
12:05PM—12:15PM CONCLUSION
Materials
The lab is made up of multiple documents and a GitHub repository
● Presentation (Google Presentation): http://bit.ly/2pYAI9W
● Lab Guide (this document): http://bit.ly/2mIElPG
● Exercises (GitHub): http://bit.ly/2n5NtPl
CONTACT INFORMATION
We All Love Questions
● Jamie Duncan: @jamieeduncan jduncan@redhat.com
● Billy Holmes: @gonoph111 biholmes@redhat.com
● John Osborne: @openshiftfed josborne@redhat.com
● Scott McCarty: @fatherlinux smccarty@redhat.com