CI CD
Introduction
Virtualisation
Vagrant
Containerisation
Kubernetes
Definition of columns
Ansible
Puppet
Installation
Hello world
Terraform
Docker Provider
Example Usage
Registry Credentials
Certificate information
Argument Reference
CI CD Tools
CI/CD defined
What Is Jenkins?
What Is Travis CI?
A Side-by-Side Comparison
Configure bitbucket-pipelines.yml
Key concepts
Keywords
pipelines
default
branches
tags
bookmarks
custom
pull-requests
parallel
step
name
image
Examples
trigger
deployment
size
script
pipes
after-script
artifacts
options
max-time
clone
lfs
depth
definitions
services
caches
Drone.io
Set up Infrastructure
Install Prerequisites
Configure Drone
Recap
Practical: Giving Drone access to Google Cloud
Conclusion
PS - Cleanup (IMPORTANT!)
Build matrix
Prerequisites
On Windows
Setup wizard
Unlocking Jenkins
Wrapping up
Application-release automation
Contents
Deploy a Spring Boot application to Cloud Foundry with GitLab CI/CD
Introduction
Requirements
Blacklist
Example
Introduction
Since 1936, technology has continued to advance thanks to the Turing machine, invented by the great master of the same name, Alan Turing.
We must not forget that a large part of the scientific art that shapes our everyday life was initiated by the algorithms introduced into science by Andalusian scholars.
All of this was then developed and adapted to information processing by well-known scientists such as the great Dijkstra, to name only one.
Today we see artificial intelligence increasingly taking the lead over humans; the irony is that it is made by humans themselves, and therein lies the beauty of science: the trust we place in the work we carry out at the heart of the scientific universe, to best serve future generations.
However, like every other technological phenomenon, it does not lack detractors, who point for example to the mass destruction of employment and to road accidents.
AI is not new, but it still has a long way to go and much progress to make. This makes it the most popular tool and technology area of the moment.
Technically speaking, any algorithm that automates actions previously performed by humans can be defined as artificial intelligence, as the name suggests.
This phenomenon, which keeps spreading, replacing for instance inventory agents in factories, holds a multitude of secrets that we will focus on here, especially on the conceptual and technical sides.
Virtualisation:
Vagrant
Today we can deploy virtual compute machines at scale thanks to Docker Swarm or Kubernetes, and we can even provision them all via Vagrant.
Explanation:
Vagrant.configure("2") do |config|
  config.vm.define "web" do |web|
    web.vm.box = "apache"
  end
  config.vm.define "db" do |db|
    db.vm.box = "mysql"
  end
end
Here we define two virtual machines, web and db, built from the apache and mysql boxes, which are deployed when vagrant up is run.
Containerisation:
A container is a standard unit of software that packages up code and all its dependencies so the
application runs quickly and reliably from one computing environment to another. A Docker
container image is a lightweight, standalone, executable package of software that includes
everything needed to run an application: code, runtime, system tools, system libraries and settings.
Container images become containers at runtime and in the case of Docker containers - images
become containers when they run on Docker Engine. Available for both Linux and Windows-based
applications, containerized software will always run the same, regardless of the infrastructure.
Containers isolate software from its environment and ensure that it works uniformly despite
differences for instance between development and staging.
Docker containers that run on Docker Engine:
Standard: Docker created the industry standard for containers, so they could be portable anywhere.
Lightweight: Containers share the machine's OS system kernel and therefore do not require an OS per application, driving higher server efficiencies and reducing server and licensing costs.
Secure: Applications are safer in containers and Docker provides the strongest default isolation capabilities in the industry.
Kubernetes:
On these same virtual machines we can then launch and install microservices through Docker, the container and image management tool, which can itself be managed at scale via Kubernetes.
example:
version: '2'
services:
  nginx:
    build: nginx
    restart: always
    ports:
      - 8080:80
    volumes_from:
      - wordpress
  wordpress:
    image: wordpress:php7.1-fpm-alpine
    environment:
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DB_PASSWORD: example
  mysql:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - ./demo-db:/var/lib/mysql
Here, for example, we see how we can define microservices with their environments, to be deployed later in the VM created earlier by our Vagrant tool.
We can of course do without Docker and rely only on the Vagrant tool; however, this process is more complex and less commonly used.
Another methodology is the use of the Kubernetes solution to manage all processes.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
After creating this file, we run the command kubectl create -f file_name.yaml.
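To reach the replicated pods, a Service object is typically created alongside the ReplicationController. A minimal sketch, reusing the app: nginx label from the file above (the service name and the NodePort type are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: nginx-service      # hypothetical name
spec:
  type: NodePort
  selector:
    app: nginx             # matches the label set on the pods above
  ports:
  - port: 80
    targetPort: 80

Created the same way with kubectl create -f, this makes the two nginx replicas reachable through a single, stable endpoint.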
IaaS Provider | Config. Mgmt. | OS | Networking | Docs | Support Level
Google Kubernetes Engine | - | - | GCE | docs | Commercial
Stackpoint.io | - | multi-support | multi-support | docs | Commercial
Madcore.Ai | Jenkins DSL | Ubuntu | flannel | docs | Community (@madcore-ai)
Platform9 | - | multi-support | multi-support | docs | Commercial
Kublr | custom | multi-support | multi-support | docs | Commercial
Kubermatic | - | multi-support | multi-support | docs | Commercial
Giant Swarm | - | CoreOS | flannel and/or Calico | docs | Commercial
Azure Kubernetes Service | - | Ubuntu | Azure | docs | Commercial
Azure (IaaS) | - | Ubuntu | Azure | docs | Community (Microsoft)
GCE | CoreOS | CoreOS | flannel | docs | Community (@pires)
CloudStack | Ansible | CoreOS | flannel | docs | Community (@sebgoa)
VMware vSphere | any | multi-support | multi-support | docs | Community
Bare-metal | custom | CentOS | flannel | docs | Community (@coolsvap)
Rackspace | custom | CoreOS | flannel/calico/canal | docs | Commercial
AWS | Saltstack | Debian | AWS | docs | Community (@justinsb)
AWS | kops | Debian | AWS | docs | Community (@justinsb)
Bare-metal | custom | Ubuntu | flannel | docs | Community (@resouer, @WIZARD-CXY)
oVirt | - | - | - | docs | Community (@simon3z)
any | any | any | any | docs | Community (@erictune)
any | any | any | any | docs | Commercial and Community
any | Gardener Project/Cluster-Operator | multi-support | multi-support | docs | Community and Commercial
Alibaba Cloud Container Service For Kubernetes | ROS | CentOS | flannel/Terway | docs | Commercial
Agile Stacks | Terraform | CoreOS | multi-support | docs | Commercial
IBM Cloud Kubernetes Service | - | Ubuntu | calico | docs | Commercial
Digital Rebar | kubeadm | any | metal | docs | Community (@digitalrebar)
Mirantis Cloud Platform | Salt | Ubuntu | multi-support | docs | Commercial
Definition of columns
IaaS Provider is the product or organization which provides the virtual or physical machines
(nodes) that Kubernetes runs on.
Config. Mgmt. is the configuration management system that helps install and maintain
Kubernetes on the nodes.
Conformance indicates whether a cluster created with this configuration has passed the
project’s conformance tests for supporting the API and base features of Kubernetes v1.0.0.
Support Levels
Commercial: a commercial offering with its own support arrangements.
Community: actively supported by community contributions; may not work with more recent releases of Kubernetes.
Inactive: not actively maintained. Not recommended for first-time Kubernetes users, and may be removed.
Notes has other relevant information, such as the version of Kubernetes used.
Example:
Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"

  config.vm.provision "docker" do |docker|
    docker.pull_images "progrium/consul"
    docker.pull_images "progrium/registrator"
    docker.run "progrium/consul"
  end
end
Here we provision Docker via Vagrant and then ask it to pull specific images and run a container, all without leaving the Vagrantfile.
Ansible:
Ansible's docker module manages the lifecycle of Docker containers. Containers are matched either by name (if provided) or by an exact match of the image they were launched with and the command they're running. Typical tasks look like this:

# Ensure that a data container with the name "mydata" exists. If no container
# by this name exists, it will be created, but not started.
- docker:
    name: mydata
    image: busybox
    state: present
    volumes:
      - /data

# Ensure that a Redis server is running, using the volume from the data
# container, and exposing the default Redis port.
- docker:
    name: myredis
    image: redis
    state: started
    expose:
      - 6379
    volumes_from:
      - mydata

# Ensure that a container is running with the specified name and exact image.
# If a container with the same name but a different image or command is found,
# it is stopped and removed, and a new one will be launched in its place.
# In addition:
# - link this container to the existing redis container launched above with
#   an alias.
# - grant the container read/write permissions for the host's /dev/sda device.
# - bind TCP port 9000 within the container to port 8080 on all interfaces
#   on the host.
# - bind UDP port 9001 within the container to port 8081 on the host, only
#   listening on localhost.
# - specify 2 ip resolutions.
- docker:
    name: myapplication
    image: someuser/appimage
    state: reloaded
    pull: always
    links:
      - "myredis:aliasedredis"
    devices:
      - "/dev/sda:/dev/xvda:rwm"
    ports:
      - "8080:9000"
      - "127.0.0.1:8081:9001/udp"
    extra_hosts:

# Ensure that exactly five containers are running with this exact image and
# command. If fewer than five are running, more will be launched; if more are
# running, the excess will be stopped.
- docker:
    state: reloaded
    count: 5
    image: someuser/anotherappimage
    command: sleep 1d

# Restart a container.
- docker:
    name: myservice
    image: someuser/serviceimage
    state: restarted

# Stop all containers running the given image.
- docker:
    image: someuser/oldandbusted
    state: stopped

# Stop and remove the named container.
- docker:
    name: ohno
    image: someuser/oldandbusted
    state: absent

# Run a container with a custom log driver and options.
- docker:
    name: myservice
    image: someservice/someimage
    state: reloaded
    log_driver: syslog
    log_opt:
      syslog-address: tcp://my-syslog-server:514
      syslog-facility: daemon
      syslog-tag: myservice
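These tasks are shown standalone; in practice they sit under the tasks key of a playbook. A minimal sketch, where the file name and the docker_hosts inventory group are assumptions:

# site.yml - hypothetical playbook wrapping docker tasks like those above
- hosts: docker_hosts          # assumed inventory group of Docker hosts
  become: yes
  tasks:
    - name: ensure the data container exists
      docker:
        name: mydata
        image: busybox
        state: present
        volumes:
          - /data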
Puppet:
Installation
Packaged as a Puppet module, image_build is available on the Forge. You can install it with the usual
tools, including puppet module install.
After installing the module, you can use some new Puppet commands, including puppet docker,
which in turn has two subcommands: one triggers a build of an image, while the other outputs the
intermediary Dockerfile. The examples directory contains a set of examples for experimenting with.
Let’s look at one of those now.
Hello world
We'll create a Docker image running Nginx and serving a simple text file. This is a realistic but
obviously simplistic example; it could be any application, such as a custom Java or Ruby or other
application.
First, let’s use a few Puppet modules from the Forge. We'll use the existing Nginx module and specify
its dependencies. We'll also use the dummy_service module to ignore service resources in the Nginx
module. We do this by creating a standard Puppetfile.
$ cat Puppetfile
forge 'https://forgeapi.puppetlabs.com'
mod 'puppet/nginx'
mod 'puppetlabs/stdlib'
mod 'puppetlabs/concat'
mod 'puppetlabs/apt'
mod 'puppetlabs/dummy_service'
Next, let’s write a simple manifest. Disabling nginx daemon mode isn't supported by the module just
yet (but the folks maintaining the module have just merged this capability), so let’s drop a file in
place with an exec. Have a look at manifests/init.pp:
include 'dummy_service'

class { 'nginx': }

nginx::resource::vhost { 'default':
  www_root => '/var/www/html',
}

file { '/var/www/html/index.html':
  ensure  => present,
  content => 'Hello world!',   # the simple text file served by nginx
}
Let’s also provide some metadata for the image we intend to build. Take a look at metadata.yaml.
It’s worth noting that this is the only new bit so far.
cmd: nginx
expose: 80
image_name: puppet/nginx
That's it. Run puppet docker build from the module directory and you should see the build output and a new image being saved locally. We've aimed for a user experience that's at least as simple as running docker build.
Let’s run the resulting image and confirm it’s serving the content we added. We expose the
webserver on port 8080 to the local host to make that easier.
83d5fbe370e84d424c71c1c038ad1f5892fec579d28b9905cd1e379f9b89e36d
$ curl http://0.0.0.0:8080
The image could be run via the Puppet Docker module, or atop any of the container schedulers.
In this article we are going to explore two interesting pieces of technology. The first is Habitat, an automation tool that automates the process of building and publishing Docker images. The second is Automate, Chef's CI/CD tool with a new dashboard and better features. As an added bonus, I will also share some tips that I use to make my life easier while handling CI/CD pipelines. So let's get started.
Habitat
Introduction to Habitat
Habitat is a new tool introduced by Chef. It basically serves one purpose: to automate the process of building a container image as easily as possible. You can think of it as a Dockerfile for Docker, except that it adds new features for building images and for publishing them from a CI/CD perspective. The tool was introduced in 2016 and is still in the development phase. It is written in Rust and is reactive by nature. Now let's do the installation:
$ curl https://raw.githubusercontent.com/habitat-sh/habitat/master/components/hab/install.sh | sudo bash
After the installation, try running it on the command line using the below command:
$ hab
hab 0.51.0/20171219021329
USAGE:
hab [SUBCOMMAND]
FLAGS:
SUBCOMMANDS:
ALIASES:
If you receive the above output, then you have successfully installed habitat.
Habitat Architecture
Now upon closely looking at its architecture and how to write it. You can clearly observe the various
files one has to write in order to bring up the container/image. The main file in this section is the
plan.sh file which is responsible for the deployment strategy/dependencies/package name of the
habitat image. It is mandatory to make this file and configure it properly in order to achieve the best
results.
Next is the default.toml file. This file contains information about the ports and external configuration of your application. It is similar to having nginx.conf for nginx or apache2.conf for Apache, which I believe is an interesting and good idea.
As for hooks, I observed their usage while exploring some of the samples provided by the Habitat team in their docs. In simple terms, hooks break your application's requirements down into multiple stages, each run in its own order while running your application, much like the ENTRYPOINT in a Dockerfile. For example, the init hook below contains your initialization commands.
Some sample examples:
# Default.toml
port = 8090
# Hooks
## Init
cd {{pkg.svc_path}}
## Run
cd "{{pkg.svc_var_path}}"
$env:HAB_CONFIG_PATH="{{pkg.svc_config_path}}"
# Plan.sh
pkg_origin=ramitsurana
pkg_name=hadoop
pkg_version="0.0.1"
pkg_license=('MIT')
pkg_upstream_url=https://github.com/ramitsurana/chef-automate-habitat
pkg_deps=(core/vim core/jre8)
pkg_binds=()

do_build() {
  return 0
}

do_install() {
  return 0
}
Habitat Builder
The Habitat Builder is a place similar to Docker Hub/Quay.io. It is a place where you can
automatically check in you code with habitat and build a variety of different container images. It also
enables you to publish your docker images on docker hub by connecting your docker hub account.To
get started sign up at Habitat Builder.
The term origin here can be defined as a namespace which is created by the user/organization to
build one’s own packages. It is similar to defining your name in the dockerhub account.
As you can observe from above, Habitat asks you to connect your GitHub account and specify the path at which your plan.sh file is placed. By default it searches for your plan.sh file under the habitat folder. You can specify your own path and use the Docker Hub integration if you wish to publish your images to Docker Hub.
Similar to DockerHub, you can also connect your ECR Registry on your AWS account by visiting the
Integrations section.
After creating a package/build you can observe the dependencies by scrolling down the page:
Here you can observe that it consists of 2 sections, labelled Transitive dependencies and Dependencies. In simple terms, the transitive dependencies are the basic set of packages required by every application you wish to build; these are provisioned and managed by the Habitat team. You can treat them as similar to the FROM section when writing a Dockerfile.
On the other hand, the Dependencies label signifies the extra packages you have mentioned in your plan.sh file that are used by your application.
Habitat Studio
Habitat Studio is another important feature of Habitat that allows you to test and run your application in a simulated, production-like environment before you publish it. If you are familiar with Python, you can think of it as similar to virtualenv. So let's try out hab studio.
In case you are wondering how to create a new GitHub access token, please open the following URL.
Copy the generated token into the hab CLI tool and you are good to go.
Do make sure to save this token; we will use it in the next part of the article.
Docker Vs Habitat
Chef Automate
Introduction to Chef Automate
Chef Automate is a CI/CD solution provided by Chef to cover your end-to-end delivery requirements. It provides the tools needed to make your life easier and simpler. It has default integrations for features and tools like InSpec for compliance, LDAP/SAML support, Slack integration for notifications, and more.
Trying Out on Local System:
Chef Automate can be easily tried on your local system by downloading Chef Automate from here
ramit@ramit-Inspiron-3542:~$ automate-ctl
create-enterprise
create-user
create-users
delete-enterprise
delete-project
delete-runner
.....
Check if everything is good or not:
✔ [passed] CPU at least 4 cores
✔ [passed] /var has at least 80GB free
....
$ automate-ctl reconfigure
Do make sure to install the license file required for running Chef Automate from here. It's a 30-day free trial. As per its current pricing page, the fee for Chef Automate on AWS is $0.0155 per node per hour.
Please make sure to note the FQDN for both your Chef server and your Chef Automate server.
Setup
Let’s get started:
Using the AWS console, we can start 2 EC2 instances of the t2.large instance type. Make sure to configure your security groups as shown below.
Make sure to add port 8989 for Git on the Chef Automate server.
After bringing up the chef-server machine, please log into the machine and use git to clone the
following repo:
Run scripts/install-chef-server.sh :
$ chmod +x $HOME/chef-automate-habitat/scripts/install-chef-server.sh
$ sudo $HOME/chef-automate-habitat/scripts/install-chef-server.sh $CHEF_AUTOMATE_FQDN ramit
Copy the required files (for example, the license file) from your local machine to the Chef server using scp.
Then copy the required files from the Chef server to Chef Automate using scp, and copy the new PEM file from the Chef server to your local machine.
Now your Chef server is fully up and ready. We now move on to the Chef Automate server after getting into it over ssh. Follow the steps below:
$ chmod +x $HOME/chef-automate-habitat/scripts/install-chef-automate.sh
$ sudo $HOME/chef-automate-habitat/scripts/install-chef-automate.sh $CHEF_SERVER_FQDN ramit
After completing the above steps, you can open the DNS name/IP of the Chef Automate server.
Hoorah! You have successfully configured Chef Automate and you are now ready to log in.
Let's start exploring some new features of the Chef Automate dashboard:
With the user name admin and your password, try to log in. You will observe the following screen:
For shutting down chef automate:
Some of the chef automate internals that I observed while exploring this tool are as follows:
Elasticsearch
Logstash
Nginx
Postgresql
RabbitMq
You can also use chef automate liveness agent for sending keepalive messages to Chef Automate,
which prevents nodes that are up but not frequently running Chef Client from appearing as “missing”
in the Automate UI. At the time of writing, it is currently in development.
Some popular CI/CD tools to drive such pipelines are:
1. Jenkins
2. Gitlab
3. Travis
Use CI Web Pages for better output in Web Development Related Projects
You can use this script in Gitlab (.gitlab-ci.yml) to obtain the output at http://<-USERNAME-OF-GITLAB->.gitlab.io/<-PROJECT-NAME->/
pages:
  stage: deploy
  script:
    - mkdir .public
    - cp -r * .public
    - mv .public public
  artifacts:
    paths:
      - public
  only:
    - master
For GitHub use the below script in (_config.yml) to obtain the output at http://<-USERNAME-OF-GITHUB->.github.io/<-PROJECT-NAME->/
theme: jekyll-theme-cayman
As Kohsuke correctly said, it is important that we adopt new methods to trigger the pipelines.
One of the mistakes one can make while checking out multiple repositories in a pipeline is that an unintended commit on another repo might trigger the pipeline. The best way to check out the repo is this:
checkout(
    poll: false,
    scm: [
        $class: 'GitSCM',
        userRemoteConfigs: [[
            url: "${MY_URL}.git",
            credentialsId: CREDENTIALS_ID]],
        extensions: [
            [$class: 'DisableRemotePoll'],
        ]
    ]
)
Python is a super amazing and fun language to work with. One of the reasons I recommend it is its built-in data structures and standard libraries, such as dict, json, and csv.
Terraform:
Docker Provider
The Docker provider is used to interact with Docker containers and images. It uses the Docker API to
manage the lifecycle of Docker containers. Because the Docker provider uses the Docker API, it is
immediately compatible not only with single server Docker but Swarm and any additional Docker-
compatible API hosts.
Use the navigation to the left to read about the available resources.
Example Usage
# Configure the Docker provider
provider "docker" {
  host = "tcp://127.0.0.1:2376/"
}

# Create a container
resource "docker_container" "foo" {
  image = "${docker_image.ubuntu.latest}"
  name  = "foo"
}

resource "docker_image" "ubuntu" {
  name = "ubuntu:latest"
}
Registry Credentials
Registry credentials can be provided on a per-registry basis with the registry_auth field, passing
either a config file or the username/password directly.
Note: The config file location refers to the machine Terraform runs on, even if the specified Docker host is on another machine.
provider "docker" {
  host = "tcp://localhost:2376"

  registry_auth {
    address     = "registry.hub.docker.com"
    config_file = "~/.docker/config.json"
  }

  registry_auth {
    address  = "quay.io:8181"
    username = "someuser"
    password = "somepass"
  }
}

# illustrative resource labels
data "docker_registry_image" "private" {
  name = "myorg/privateimage"
}

data "docker_registry_image" "private_quay" {
  name = "quay.io:8181/myorg/privateimage"
}
Note: When passing in a config file, make sure every repo in the auths object has a corresponding auth string.
In this case, either use username and password directly, set the environment variables DOCKER_REGISTRY_USER and DOCKER_REGISTRY_PASS, or add the base64-encoded string manually (here dXNlcjpwYXNz= is user:pass encoded):
{
  "auths": {
    "repo.mycompany:8181": {
      "auth": "dXNlcjpwYXNz="
    }
  }
}
Certificate information
Specify certificate information either with a directory or directly with the content of the files for
connecting to the Docker host via TLS.
provider "docker" {
  host      = "tcp://your-host-ip:2376/"
  cert_path = "${pathexpand("~/.docker")}"
  # or pass the file contents directly:
  # cert_material = "${file(pathexpand("~/.docker/cert.pem"))}"
  # key_material  = "${file(pathexpand("~/.docker/key.pem"))}"
}
Argument Reference
The following arguments are supported:
host - (Required) This is the address to the Docker host. If this is blank,
the DOCKER_HOST environment variable will also be read.
cert_path - (Optional) Path to a directory with certificate information for connecting to the Docker host via TLS. It is expected that the 3 files {ca, cert, key}.pem are present in the path. If the path is blank, the DOCKER_CERT_PATH environment variable will also be checked.
NOTE on Certificates and docker-machine: As per Docker Remote API documentation, in any
docker-machine environment, the Docker daemon uses an encrypted TCP socket (TLS) and
requires cert_path for a successful connection. As an alternative, if using docker-machine,
run eval $(docker-machine env <machine-name>) prior to running Terraform, and the host
and certificate path will be extracted from the environment.
CI CD Tools:
Continuous integration (CI) and continuous delivery (CD) embody a culture, set of operating
principles, and collection of practices that enable application development teams to deliver code
changes more frequently and reliably. The implementation is also known as the CI/CD pipeline and is
one of the best practices for devops teams to implement.
CI/CD defined
Continuous integration is a coding philosophy and set of practices that drive development teams to
implement small changes and check in code to version control repositories frequently. Because most
modern applications require developing code in different platforms and tools, the team needs a
mechanism to integrate and validate its changes.
The technical goal of CI is to establish a consistent and automated way to build, package, and test
applications. With consistency in the integration process in place, teams are more likely to commit
code changes more frequently, which leads to better collaboration and software quality.
Continuous delivery picks up where continuous integration ends. CD automates the delivery of
applications to selected infrastructure environments. Most teams work with multiple environments
other than the production, such as development and testing environments, and CD ensures there is
an automated way to push code changes to them. CD automation then performs any necessary
service calls to web servers, databases, and other services that may need to be restarted or follow
other procedures when applications are deployed.
Continuous integration and delivery requires continuous testing because the objective is to deliver
quality applications and code to users. Continuous testing is often implemented as a set of
automated regression, performance, and other tests that are executed in the CI/CD pipeline.
A mature CI/CD practice has the option of implementing continuous deployment where application
changes run through the CI/CD pipeline and passing builds are deployed directly to production
environments. Teams practicing continuous delivery can elect to deploy to production on a daily or even hourly schedule, though continuous deployment isn't always optimal for every business application.
Teams implementing continuous integration often start with version control configuration and
practice definitions. Even though checking in code is done frequently, features and fixes are
implemented on both short and longer time frames. Development teams practicing continuous
integration use different techniques to control what features and code is ready for production.
One technique is to use version-control branching. A branching strategy such as Gitflow is selected to
define protocols over how new code is merged into standard branches for development, testing and
production. Additional feature branches are created for ones that will take longer development
cycles. When the feature is complete, the developers can then merge the changes from feature
branches into the primary development branch. This approach works well, but it can become difficult
to manage if there are many features being developed concurrently.
There are other techniques for managing features. Some teams also use feature flags, a
configuration mechanism to turn on or off features and code at run time. Features that are still under
development are wrapped with feature flags in the code, deployed with the master branch to
production, and turned off until they are ready to be used.
The build process itself is then automated by packaging all the software, database, and other
components. For example, if you were developing a Java application, CI would package all the static
web server files such as HTML, CSS, and JavaScript along with the Java application and any database
scripts.
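As an illustration of such a packaging step (not taken from a specific project), a GitLab CI job might look roughly like the sketch below; the image, Maven goal, and artifact paths are assumptions:

# .gitlab-ci.yml - illustrative packaging job; image, goal, and paths are assumptions
build:
  stage: build
  image: maven:3.6-jdk-8          # containerised build environment
  script:
    - mvn -B package              # compiles the Java code and runs the unit tests
  artifacts:
    paths:
      - target/*.jar              # the packaged application
      - src/main/webapp/          # static HTML, CSS, and JavaScript shipped with it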
CI not only packages all the software and database components, but the automation will also execute
unit tests and other testing. This testing provides feedback to developers that their code changes
didn’t break any existing unit tests.
Most CI/CD tools let developers kick off builds on demand, triggered by code commits in the version
control repository, or on a defined schedule. Teams need to discuss the build schedule that works
best for the size of the team, the number of daily commits expected, and other application
considerations. A best practice is to ensure that commits and builds are fast; otherwise, slow builds may impede the progress of teams trying to code fast and commit frequently.
A best practice is to enable and require developers to run all or a subset of regressions tests in their
local environments. This step ensures that developers only commit code to version control after
regression tests pass on the code changes.
Regression tests are just the start. Performance testing, API testing, static code analysis, security
testing, and other testing forms can also be automated. The key is to be able to trigger these tests
either through command line, webhook, or web service and that they respond with success or fail
status codes.
Once testing is automated, continuous testing implies that the automation is integrated into the
CI/CD pipeline. Some unit and functionality tests can be integrated into CI to flag issues before or during the integration process. Tests that require a full delivery environment, such as performance and security testing, are often integrated into CD and performed after builds are delivered to target environments.
CD automation typically includes steps such as:
Executing any required infrastructure steps that are automated as code to stand up or tear down cloud infrastructure.
Managing the environment variables and configuring them for the target environment.
Pushing application components to their appropriate services, such as web servers, API
services, and database services.
Executing any steps required to restart services or call service endpoints that are needed for new code pushes.
More sophisticated CD may have other steps such as performing data synchronizations, archiving
information resources, or performing some application and library patching.
Once a CI/CD tool is selected, development teams must make sure that all environment variables are
configured outside the application. CI/CD tools allow setting these variables, masking variables such
as passwords and account keys, and configuring them at time of deployment for the target
environment.
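For example, in GitLab CI syntax a deployment job can reference variables that are defined and masked in the project settings instead of being hard-coded in the repository; the variable names and deploy script below are assumptions:

deploy:
  stage: deploy
  script:
    # DB_PASSWORD and API_KEY are masked CI/CD variables set in the project
    # settings and injected at deployment time for the target environment
    - ./deploy.sh --db-password "$DB_PASSWORD" --api-key "$API_KEY"
  environment: production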
Many teams implementing CI/CD pipelines on cloud environments also use containers such as
Docker and Kubernetes. Containers allow packaging and shipping applications in standard, portable
ways. The containers can then be used to scale up or tear down environments that have variable
workloads.
CD tools also provide dashboard and reporting functions. If builds or deliveries fail, they alert
developers with information on the failed builds. They integrate with version control and agile tools,
so they can be used to look up what code changes and user stories made up a build.
CI/CD pipelines are designed for businesses that want to improve applications frequently and require
a reliable delivery process. The added effort to standardize builds, develop tests, and automate
deployments is the manufacturing process for deploying code changes. Once in place, it enables
teams to focus on the process of enhancing applications and less on the system details of delivering it
to computing environments.
CI/CD is a devops best practice because it addresses the misalignment between developers who want
to push changes frequently, with operations that want stable applications. With automation in place,
developers can push changes more frequently. Operations teams see greater stability because
environments have standard configurations, there is continuous testing in the delivery process,
environment variables are separated from the application, and rollback procedures are automated.
Getting started with CI/CD requires development teams and operational teams to collaborate on
technologies, practices, and priorities. Teams need to develop consensus on the right approaches for
their business and technologies so that once CI/CD is in place the team is onboard with following
practices consistently.
CI/CD TOOLS
Programmers used to be solely responsible for integrating their own code, but now CI tools running
on a server handle integration automatically. Such tools can be set up to build at scheduled intervals
or as new code enters the repository. Test scripts are then automatically run to make sure everything
behaves as it should. After that, builds can be easily deployed on a testing server and made available
for demo or release. Some continuous integration tools even automatically generate documentation
to assist with quality control and release management.
Of course, different development teams have different needs, which is why there are dozens of CI
tools to choose from today. There is rarely a one-size-fits-all solution in software development. The
best CI tool for open source projects might not be ideal for enterprise software. For example, let’s
compare two CI tools intended for different types of jobs: Jenkins vs Travis CI.
What Is Jenkins?
Jenkins is a self-contained, Java-based CI tool. Jenkins CI is open source, and the software offers a lot
of flexibility in terms of when and how frequently integration occurs. Developers can also specify
conditions for customized builds. Jenkins is supported by a massive plugin archive, so developers can
alter how the software looks and operates to their liking. For example, the Jenkins Pipeline suite of
plugins comes with tools that let developers model simple-to-complex delivery pipelines as code
using the Pipeline DSL. There are also plugins that extend functionality for things like authentication
and alerts. If you want to run Jenkins in collaboration with Kubernetes and Docker, there are also
plugins for that.
While the software’s high level of customizability is seen as a benefit to many, Jenkins can take a
while to configure to your liking. Unlike tools like Travis CI that are ready to use out-of-the-box,
Jenkins can require hours or even days of set up time depending on your needs.
Jenkins Features:
Integrates with most tools in the continuous integration and delivery toolchain
Jenkins Pros:
Free to download and use
Jenkins Cons:
What Is Travis CI?
Travis CI is another CI tool that's free to download, but unlike Jenkins, it also comes with free
hosting. Therefore, developers don’t need to provide their own dedicated server. While Travis CI can
be used for open source projects at no cost, developers must purchase an enterprise plan for private
projects.
Since the Travis CI server resides on the cloud, it’s easy to test projects in any environment, device or
operating system. Such testing can be performed synchronously on Linux and macOS machines.
Another benefit of the hosted environment is that the Travis CI community handles all server
maintenance and updates. With Jenkins, those responsibilities are left to the development team.
Of course, teams working on highly-sensitive projects may be wary of sharing everything with a third-
party, so many large corporations and government agencies would rather run continuous integration
on their own servers so that they have complete control.
Travis CI supports Docker and dozens of languages, but it pales in comparison to Jenkins when it
comes to options for customization. Travis CI also lacks the immense archive of plugins that Jenkins
boasts. Consequently, Travis CI offers less functionality, but it’s also much easier to configure; you
can have Travis CI set up and running within minutes rather than hours or days.
Another selling point of Travis CI is the build matrix feature, which allows you to accelerate the
testing process by breaking it into parts. For example, you can split unit tests and integration tests
into separate build jobs that run parallel to take advantage of your account’s full build capacity. For
more information about the build matrix option, see the official Travis CI docs.
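As a rough sketch of what such a .travis.yml build matrix can look like (the language versions, environment values, and test runner script are assumptions):

language: python
python:
  - "3.6"
  - "3.7"
env:
  - TEST_SUITE=unit
  - TEST_SUITE=integration
# the two lists above expand into a 2 x 2 matrix of four parallel jobs
script:
  - ./run-tests.sh "$TEST_SUITE"   # hypothetical test runner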
Travis CI Features:
Supports the following languages: Android, C, C#, C++, Clojure, Crystal, D, Dart, Erlang, Elixir,
F#, Go, Groovy, Haskell, Haxe, Java, JavaScript (with Node.js), Julia, Objective-C, Perl, Perl6,
PHP, Python, R, Ruby, Rust, Scala, Smalltalk and Visual Basic
Travis CI Pros:
Travis CI Cons:
A Side-by-Side Comparison
From a cost perspective, both Jenkins and Travis CI are free to download and use for open source
projects, but Jenkins requires developers to run and maintain their own dedicated server, so that
could be considered an extra expense. If you can’t or don’t want to configure Jenkins on your own
server, there are cloud hosting services specifically for Jenkins. If you need help setting up a Jenkins
server, there are plenty of tutorials online to help you perform any type of set up you need.
Travis CI offers hosting for free, but you’ll have to pay for an enterprise plan if your project is private.
Travis CI enterprise plans start at $129 per month and go up based on the level of support you
require. Fortunately, they don’t charge per project, so if you have multiple projects that need
hosting, then you can really get your money’s worth. Travis CI’s maintenance-free hosting is a major
plus since they take care of server updates and the like. All developers must do is maintain a config
file. If you host Jenkins on your own server, then you are of course responsible for maintaining it.
Fortunately, Jenkins itself requires little maintenance, and it comes with a built-in GUI tool to
facilitate easy updates.
If you want a CI tool that you can quickly set up and begin using right away, then Travis CI won’t
disappoint. It takes very little effort to get started; just create a config file and start integrating.
Jenkins, on the other hand, requires extensive setup, so you’ll be disappointed if you were hoping to
just dive right in. How long Jenkins takes to configure will depend on the complexity of your project
and the level of customization you desire.
As far as performance goes, Jenkins and Travis CI are pretty evenly matched. Which one will work
best for your project depends on your preferences. If you’re looking for a CI tool with seemingly
unlimited customizability, and you have the time to set it up, then Jenkins will certainly meet your
expectations. If you’re working on an open source project, Travis CI may be the better fit since
hosting is free and requires minimal configuration. If you’re developing a private enterprise project,
and you already have a server for hosting, then Jenkins may be preferable. Since they are free to
download, you have nothing to lose by experimenting with both and performing your own Jenkins vs
Travis CI comparison; you may end up using both tools for different jobs.
Jenkins vs Travis CI - In Summary
When it comes to comparing Jenkins vs Travis CI, there is no absolute “winner”. Travis CI is ideal for
open source projects that require testing in multiple environments, and Jenkins is better suited
for larger projects that require a high degree of customization.
Therefore, professional developers can benefit from familiarizing themselves with both tools. If
you’re working on an open source project with a small team, Travis CI is probably a good choice since
it’s free and easy to set up; however, if you find yourself working for a large company, you’re more
likely to work with tools like Jenkins.
Bitbucket pipeline
Configure bitbucket-pipelines.yml
Bitbucket Pipelines is an integrated CI/CD service, built into Bitbucket. It allows you to automatically
build, test and even deploy your code, based on a configuration file in your repository. Essentially, we
create containers in the cloud for you. Inside these containers you can run commands (like you might
on a local machine) but with all the advantages of a fresh system, custom configured for your needs.
To set up Pipelines you need to create and configure the bitbucket-pipelines.yml file in the
root directory of your repository. This file is your build configuration, and using configuration-as-code
means it is versioned and always in sync with the rest of your code.
The bitbucket-pipelines.yml file holds all the build configurations for your repository. YAML is
a file format that is easy to read, but writing it requires care. Indenting must use spaces, as tab
characters are not allowed.
There is a lot you can configure in the bitbucket-pipelines.yml file, but at its most basic the
required keywords are:
step: each step starts a new Docker container with a clone of your repository, then runs the
contents of your script section.
At the center of Pipelines is the bitbucket-pipelines.yml file. It defines all your build
configurations (pipelines) and needs to be created in the root of your repository. With 'configuration
as code', your bitbucket-pipelines.yml is versioned along with all the other files in your
repository, and can be edited in your IDE. If you've not yet created this file, you might like to read Get
started with Bitbucket Pipelines first.
Pipelines can contain any software language that can be run on Linux. We have some examples, but
at its most basic a bitbucket-pipelines.yml file could look like this:
pipelines:
  default:
    - step:
        script:
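For reference, a complete minimal file could look like the following sketch, where the echo command is only an illustration:

pipelines:
  default:
    - step:
        script:
          - echo "Hello, Bitbucket Pipelines"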
You can then build on this using the keywords listed below.
If you have a complex configuration there are a couple of techniques that you might find useful:
You can use pipes, which simplify common multi-step actions.
If you have multiple steps performing similar actions, you can add YAML anchors to easily reuse sections of your configuration (see the sketch after this list).
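A minimal sketch of that anchor technique, assuming a reusable build-and-test step (the step contents are illustrative):

definitions:
  steps:
    - step: &build-test          # the anchor gives this step a reusable name
        name: Build and test
        script:
          - npm install
          - npm test

pipelines:
  default:
    - step: *build-test          # the alias reuses the anchored step
  branches:
    master:
      - step: *build-test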
Key concepts
A pipeline is made up of a set of steps.
Each step in your pipeline runs a separate Docker container. If you want, you can use
different types of container for each step, by selecting different images.
The step runs the commands you provide in the environment defined by the image.
default - All commits trigger this pipeline, unless they match one of the other sections
tags (Git only) or bookmarks (Mercurial only) - Specify the name of a tag or bookmark, or use
a glob pattern.
pull-requests - Specify the name of a branch, or use a glob pattern, and the pipeline will only
run when there is a pull request on this branch.
Keywords
You can define your build pipelines by using a selection of the following keywords. They are listed in the order in which you might use them.
Keyword: Description
default: Contains the pipeline definition for all branches that don't match a pipeline definition in other sections.
branches: Contains pipeline definitions for specific branches.
tags: Contains pipeline definitions for specific Git tags and annotated tags.
step: Defines a build execution unit. This defines the commands executed and the settings of a unique container.
name: Defines a name for a step to make it easier to see what each step is doing in the display.
image: The Docker image to use for a step. If you don't specify the image, your pipelines run in the default Bitbucket image. This can also be defined globally to use the same image type for every step.
deployment: Sets the type of environment for your deployment step.
size: Used to provision extra resources for pipelines and steps.
script: Contains the list of commands that are executed to perform the build.
artifacts: Defines files that are produced by a step, such as reports and JAR files, that you want to share with a following step.
max-time: The maximum time (in minutes) a step can execute for. Use a whole number greater than 0 and less than 120. If you don't specify a max-time, it defaults to 120.
lfs: Enables the download of LFS files in your clone. This defaults to false if not specified.
depth: Defines the depth of Git clones for all pipelines.
definitions: Defines resources, such as services and custom caches, that you want to use elsewhere in your pipeline configurations.
services: Defines services you would like to use with your build, which are run in separate but linked containers.
pipelines
The start of your pipelines definitions. Under this keyword you must define your build pipelines using
at least one of the following:
default (for all branches that don't match any of the following)
tags (Git)
bookmarks (Mercurial)
image: node:10.15.0

pipelines:
  default:
    - step:
        script:
          - npm install
          - npm test
    - step:
        script:
          - npm install
          - npm test
  branches:
    staging:
      - step:
          name: Clone
          script:
            - ...
default
The default pipeline runs on every push to the repository, unless a branch-specific pipeline is defined.
You can define a branch pipeline in the branches section.
branches
Defines a section for all branch-specific build pipelines. The names or expressions in this section are
matched against:
You can use glob patterns for handling the branch names.
See Branch workflows for more information about configuring pipelines to build specific branches in
your repository.
tags
Defines all tag-specific build pipelines. The names or expressions in this section are matched against
tags and annotated tags in your Git repository. You can use glob patterns for handling the tag names.
bookmarks
Defines all bookmark-specific build pipelines. The names or expressions in this section are matched
against bookmarks in your Mercurial repository. You can use glob patterns for handling the tag
names.
image: node:10.15.0

pipelines:
  default:
    - step:
        script:
          - npm install
          - npm test
    - step:
        script:
          - npm install
          - npm test
  branches:
    staging:
      - step:
          name: Clone
          script:
            - ...
custom
Defines pipelines that can only be triggered manually or scheduled from the Bitbucket Cloud
interface.
image: node:10.15.0

pipelines:
  custom: # Pipelines that are only triggered manually or on a schedule
    sonar: # The name that is displayed in the list in the Bitbucket Cloud GUI
      - step:
          script:
            - ...
  branches:
    staging:
      - step:
          script:
            - ...
With a configuration like the one above, you should see the following pipelines in the Run pipeline dialog in Bitbucket Cloud:
pull-requests
A special pipeline which only runs on pull requests. Pull-requests has the same level of
indentation as branches.
This type of pipeline runs a little differently to other pipelines. When it's triggered, we'll merge the
destination branch into your working branch before it runs. If the merge fails we will stop the
pipeline.
This only applies to pull requests initiated from within your repository; pull requests from a forked
repository will not trigger the pipeline.
pipelines:
  pull-requests:
    '**': # this runs as default for any branch not elsewhere defined
      - step:
          script:
            - ...
      - step:
          script:
            - ...
  branches:
    staging:
      - step:
          script:
            - ...
Tip: If you already have branches in your configuration, and you want them all to only run on pull
requests, you can simply replace the keyword branches with pull-requests (if you already
have a pipeline for default you will need to move this under pull-requests and change the
keyword from default to '**' to run).
Pull request pipelines run in addition to any branch and default pipelines that are defined, so if the
definitions overlap you may get 2 pipelines running at the same time!
parallel
Parallel steps enable you to build and test faster, by running a set of steps at the same time.
The total number of build minutes used by a pipeline will not change if you make the steps parallel,
but you'll be able to see the results sooner.
There is a limit of 10 for the total number of steps you can run in a pipeline, regardless of whether
they are running in parallel or serial.
pipelines:
  default:
    - step:
        name: Build
        script:
          - ./build.sh
    - parallel:
        - step:
            name: Integration 1
            script:
              - ./integration-tests.sh --batch 1
        - step:
            name: Integration 2
            script:
              - ./integration-tests.sh --batch 2
    - step:
        script:
          - ./deploy.sh
step
Defines a build execution unit. Steps are executed in the order that they appear in the bitbucket-pipelines.yml file. You can use up to 10 steps in a pipeline.
Each step in your pipeline will start a separate Docker container to run the commands configured in
the script. Each step can be configured to:
Use a different Docker image.
Steps can be configured to wait for a manual trigger before running. To define a step as manual,
add trigger: manual to the step in your bitbucket-pipelines.yml file. Manual steps:
Can only be executed in the order that they are configured. You cannot skip a manual step.
If your build uses both manual steps and artifacts, the artifacts are stored for 7 days following the
execution of the step that produced them. After this time, the artifacts expire and any manual steps
in the pipeline can no longer be executed. For more information, see Manual steps and artifact
expiry.
Note: You can't configure the first step of the pipeline as a manual step.
name
You can add a name to a step to make displays and reports easier to read and understand.
image
You can define images at the global or step level. You can't define an image at the branch
level.
image: <your_account/repository_details>:<tag>
For more information about using and creating images, see Use Docker images as build
environments.
Examples
pipelines:
  default:
    - step:
        script:
          - npm install
          - npm test
    - step:
        script:
          - npm install
          - npm test
trigger
Specifies whether a step will run automatically or only after someone manually triggers it. You can
define the trigger type as manual or automatic. If the trigger type is not defined, the step defaults
to running automatically. The first step cannot be manual. If you want to have a whole pipeline only
run from a manual trigger then use a custom pipeline.
pipelines:
  default:
    - step:
        image: node:10.15.0
        script:
          - npm install
          - npm test
        artifacts:
          - dist/**
    - step:
        name: Deploy
        image: python:3.5.1
        trigger: manual
        script:
          - python deploy.py
deployment
Sets the type of environment for your deployment step, used in the Deployments dashboard.
The following step will display in the test environment in the Deployments view:
- step:
    image: aws-cli:1.0
    deployment: test
    script:
      - ...
size
You can allocate additional resources to a step, or to the whole pipeline. By specifying the size of 2x,
you'll have double the resources available (eg. 4GB memory → 8GB memory).
Step-level:
pipelines:
  default:
    - step:
        script:
          - ...
    - step:
        size: 2x # double resources for this step
        script:
          - ...
Pipeline-level (applies to all steps):
options:
  size: 2x
pipelines:
  default:
    - step:
        script:
          - ...
script
Contains a list of commands that are executed in sequence. Scripts are executed in the order in
which they appear in a step. We recommend that you move large scripts to a separate script file and
call it from the bitbucket-pipelines.yml.
pipes
We are gradually rolling out this feature, so if you don't see pipes in your editor yet, you can edit the
configuration directly, or join our alpha group which has full access.
Pipes make complex tasks easier, by doing a lot of the work behind the scenes. This means you can
just select which pipe you want to use, and supply the necessary variables. You can look at the
repository for the pipe to see what commands it is running. Learn more about pipes.
default:
  - step:
      script:
        - pipe: atlassian/opsgenie-send-alert:0.2.0
          variables:
            GENIE_KEY: $GENIE_KEY
            PRIORITY: "P1"
after-script
Commands inside an after-script section will run when the step succeeds or fails. This could be useful
for clean up commands, test coverage, notifications, or rollbacks you might want to run, especially if
your after-script uses the value of BITBUCKET_EXIT_CODE.
pipelines:
  default:
    - step:
        script:
          - npm install
          - npm test
        after-script:
          - ...
artifacts
Defines files to be shared from one step to a later step in your pipeline. Artifacts can be defined
using glob patterns.
pipelines:
  default:
    - step:
        image: node:10.15.0
        script:
          - npm install
          - npm test
        artifacts:
          - dist/**
    - step:
        image: python:3.5.1
        script:
          - python deploy-to-production.py
options
Contains global settings that apply to all your pipelines. Currently the only option to define is max-time.
max-time
You can define the maximum time a step can execute for (in minutes) at the global level or step level.
Use a whole number greater than 0 and less than 120.
options:
  max-time: 60
pipelines:
  default:
    - step:
        script:
          - ...
    - step:
        max-time: 5
        script:
          - ...
clone
Contains settings for when we clone your repository into a container. Settings here include:
lfs
A global setting that specifies that Git LFS files should be downloaded with the clone.
clone:
  lfs: true

pipelines:
  default:
    - step:
        script:
          - echo "Clone and download my LFS files!"
depth
This global setting defines how many commits we clone into the pipeline container. Use a whole
number greater than zero or if you want to clone everything (which will have a speed impact)
use full.
If you don't specify the Git clone depth, it defaults to the last 50, to try and balance the time it takes
to clone and how many commits you might need.
clone:
  depth: full # clone everything; alternatively use a whole number of commits
pipelines:
  default:
    - step:
        name: Cloning
        script:
          - ...
definitions
Define resources used elsewhere in your pipeline configuration. Resources can include:
services that run in separate Docker containers – see Use services and databases in Bitbucket
Pipelines.
YAML anchors - a way to define a chunk of your yaml for easy re-use - see YAML anchors.
services
Rather than trying to build all the resources you might need into one large image, we can spin up
separate docker containers for services. This will tend to speed up the build, and makes it very easy
to change a single service without having to redo your whole image.
So if we want a redis service container we could add:
definitions:
  services:
    redis:
      image: redis
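A step can then reference the service by the name defined above; a minimal sketch, where the script command is an assumption:

pipelines:
  default:
    - step:
        script:
          - redis-cli -h localhost ping   # the redis service is reachable on localhost
        services:
          - redis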
caches
Re-downloading dependencies from the internet for each step of a build can take a lot of time. Using
a cache they are downloaded once to our servers and then locally loaded into the build each time.
definitions:
  caches:
    bundler: vendor/bundle
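A step then opts into that cache by name; a minimal sketch, where the bundle command is an assumption:

pipelines:
  default:
    - step:
        caches:
          - bundler
        script:
          - bundle install --path vendor/bundle   # later runs restore the cached gems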
Note that glob expressions such as '*/feature' require quotes in the YAML file.
Don't see your language? Don't worry, there is always the Other option in the More menu if you
can't see what you need. This uses our default Docker image that contains many popular build tools,
and you can add your own using apt-get commands in your script.
Select Commit file when you are happy with your edit and ready to run your first pipeline.
Pipelines will now automatically trigger whenever you push changes to your repository, running the
default pipeline.
To get the most out of pipelines, you can add more to the bitbucket-pipelines.yml file. For
example, you can define which Docker image you'd like to use for your build, create build
configurations for specific branches, tags, and bookmarks, make sure any test reports are displayed, or define which artifacts you'd like to pass between steps.
Drone
Continuous Integration and Continuous Deployment are the next step to go through if you want a
production-grade microservice application. Let's revisit Webflix to make this point clear.
Webflix is running a whole lot of services in K8s. Each of these services is associated with some code
stored in a repository somewhere. Let's say Webflix wisely chooses to use Git to store their code and
they follow a feature branching strategy.
Branching strategies are a bit (a lot) outside the scope of this article, but basically, what this means is
that if a developer wants to make a new feature, they code up that feature on a new Git branch.
Once the developer is confident that their feature is complete, they request that their feature branch
gets merged into the master branch. Once the code is merged to the master branch, it should mean
that it is ready to be deployed into production.
If all of this sounds pretty cryptic, I would suggest you take some time to learn about Git. Git is
mighty. Long live the Git.
The process of deploying code to production is not so straightforward. First, we should make sure all
of the unit tests pass and have good coverage. Then, since we are working with microservices, there
is probably a Docker image to build and push.
Once that is done, it would be good to make sure that the Docker image actually works by doing
some tests against a live container (maybe a group of live containers). It might also be necessary to
measure the performance of the new image by running some tests with a tool like Locust. The
deployment process can get very complex if there are multiple developers working on multiple
services at the same time, since we would need to keep track of version compatibility for the various
services.
There are loads of CI/CD tools around and they have their own ways of configuring their pipelines (a
pipeline is a series of steps code needs to go through when it is pushed). There are many good books
(like this one) dedicated to designing deployment pipelines, but, in general, you'll want to do
something like this:
1. run the unit tests and check their coverage
2. build the Docker image and push it to a registry
3. set up a test environment where the new container can run within a realistic context
4. run some tests against the live container(s) in that environment
5. maybe run a few more tests, e.g., saturation tests with locust.io or similar
6. deploy the new image to production
If, for example, one of the test steps fails, then the code will not get deployed to production. The
pipeline will skip to the end of the process and notify the team that the deployment was a failure.
You can also set up pipelines for merge/pull requests, e.g., if a developer requests a merge, execute
the above pipeline but LEAVE OUT STEP 6 (deploying to production).
Drone.io
Drone is a container based Continuous Delivery system. It's open source, highly configurable (every
build step is executed by a container!) and has a lot of plugins available. It's also one of the easier
CI/CD systems to learn.
In this section, we're going to set up Drone on a VM in Google Cloud and get it to play nice with
GitLab. It works fine with GitHub and other popular Git applications as well. I just like GitLab.
Now I'll be working on the assumption that you have been following along since part 1 of this series.
We already have a K8s cluster set up on Google Cloud, and it is running a deployment containing a
really simple web app. Thus far, we've been interacting with our cluster via the Google Cloud shell, so
we're going to keep doing that. If any of this stuff bothers you, please take a look at part 2.
Set up Infrastructure
The first thing we'll do is set up a VM (Google Cloud calls this a compute instance) with a static IP
address. We'll make sure that Google's firewall lets in HTTP traffic.
When working with compute instances, we need to continuously be aware of regions and zones. It's
not too complex. In general, you just want to put your compute instances close to where they will be
accessed from.
I'll be using europe-west1-d as my zone, and europe-west1 as my region. Feel free to just copy me
for this tutorial. Alternatively, take a look at Google's documentation and pick what works best
for you.
The first step is to reserve a static IP address. We have named ours drone-ip.
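Reserving the address comes down to something like this (region as chosen above):
gcloud compute addresses create drone-ip --region europe-west1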
This outputs:
Created [https://www.googleapis.com/compute/v1/projects/codementor-tutorial/regions/europe-
west1/addresses/drone-ip].
Now take a look at it and take note of the actual IP address. We'll need it later:
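Describing the address will show it (region as before):
gcloud compute addresses describe drone-ip --region europe-west1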
address: 35.233.66.226
creationTimestamp: '2018-06-21T02:40:37.744-07:00'
description: ''
id: '431436906006760570'
kind: compute#address
name: drone-ip
region: https://www.googleapis.com/compute/v1/projects/codementor-tutorial/regions/europe-
west1
selfLink: https://www.googleapis.com/compute/v1/projects/codementor-tutorial/regions/europe-
west1/addresses/drone-ip
status: RESERVED
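Next we create the VM itself. Something along these lines will do (no machine type or image is specified, so gcloud falls back to its defaults):
gcloud compute instances create drone-vm --zone europe-west1-d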
This outputs:
Created [https://www.googleapis.com/compute/v1/projects/codementor-tutorial/zones/europe-
west1-d/instances/drone-vm].
Alright, now we have a VM and a static IP. We need to tie them together:
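Let's first check what access config the instance currently has; the relevant part of the describe output is shown below:
gcloud compute instances describe drone-vm --zone europe-west1-d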
networkInterfaces:
- accessConfigs:
- kind: compute#accessConfig
name: external-nat
natIP: 35.195.196.332
type: ONE_TO_ONE_NAT
A VM can have at most one accessConfig. We'll need to delete the existing one and replace it
with a static IP address config. First, we delete it:
gcloud compute instances delete-access-config drone-vm \
    --access-config-name "external-nat" --zone europe-west1-d
This outputs:
Updated [https://www.googleapis.com/compute/v1/projects/codementor-tutorial/zones/europe-
west1-d/instances/drone-vm].
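Then we attach a new access config that uses our reserved static address (the access config name and zone are the ones we saw above):
gcloud compute instances add-access-config drone-vm \
    --access-config-name "external-nat" \
    --address 35.233.66.226 \
    --zone europe-west1-d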
This outputs:
Updated [https://www.googleapis.com/compute/v1/projects/codementor-tutorial/zones/europe-
west1-d/instances/drone-vm].
And now we need to configure the firewall to allow HTTP traffic. Google's firewall rules can be added
and removed from specific instances through use of tags.
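The tag tied to Google's default allow-HTTP firewall rule is http-server, so tagging the instance looks something like:
gcloud compute instances add-tags drone-vm --tags http-server --zone europe-west1-d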
This outputs:
Updated [https://www.googleapis.com/compute/v1/projects/codementor-tutorial/zones/europe-
west1-d/instances/drone-vm].
If you wanted to allow HTTPS traffic, there is a tag for that too, but setting up HTTPS is a bit outside
the scope of this article.
Awesome! Now we have a VM with a static IP address and it can talk to the outside world via HTTP.
Install Prerequisites
In order to get Drone to run, we need to install Docker and Docker-Compose. Let's do that now:
SSH into our VM from your Google Cloud shell like so:
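The gcloud SSH wrapper takes care of key creation for us; with the zone we chose earlier it looks like:
gcloud compute ssh drone-vm --zone europe-west1-d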
When it asks for passphrases, you can leave them blank for the purpose of this tutorial. That said, it's
not really good practice.
Okay, now you have a shell into your new VM. Brilliant.
Now enter:
uname -a
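This just confirms you are on the VM. Next, install Docker and Docker Compose; on the default Debian image something along these lines works (package names can differ between distributions):
sudo apt-get update
sudo apt-get install -y docker.io docker-compose
# let the current user talk to the Docker daemon without sudo (log out and back in for it to take effect)
sudo usermod -aG docker $USER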
In GitLab, go to your user settings and click on applications. You want to create a new application.
Enter drone as the name. As a callback URL, use http://35.233.66.226/authorize. The IP address there
is the static IP address we just generated.
GitLab will now output an application ID and secret and some other stuff. Take note of these values,
Drone is going to need them.
You'll also need to create a Git repo that you own. Thus far, we've been
using https://gitlab.com/sheena.oconnell/tutorial-codementor-deploying-microservices.git. You
would want something like https://gitlab.com/${YOUR_GITLAB_USER}/tutorial-codementor-
deploying-microservices.git to exist. Don't put anything inside your repo just yet, we'll get to that a
bit later.
Configure Drone
First, we'll need to set up some environmental variables. Let's make a new file
called .drone_secrets.sh
nano .drone_secrets.sh
#!/bin/sh
export DRONE_HOST=http://35.233.66.226
export DRONE_SECRET=<any-long-random-string>   # shared between the Drone server and agent (see the compose file below)
export DRONE_ADMIN="sheena.oconnell"
export DRONE_GITLAB_CLIENT=<client>
export DRONE_GITLAB_SECRET=<secret>
Once you have finished editing the file, press: Ctrl+x then y then enter to save and exit.
chmod +x .drone_secrets.sh
source .drone_secrets.sh
nano docker-compose.yml
version: '2'
services:
drone-server:
image: drone/drone:0.8
ports:
- 80:8000
- 9000
volumes:
- /var/lib/drone:/var/lib/drone/
restart: always
environment:
- DRONE_HOST=${DRONE_HOST}
- DRONE_SECRET=${DRONE_SECRET}
- DRONE_ADMIN=${DRONE_ADMIN}
- DRONE_GITLAB=true
- DRONE_GITLAB_CLIENT=${DRONE_GITLAB_CLIENT}
- DRONE_GITLAB_SECRET=${DRONE_GITLAB_SECRET}
drone-agent:
image: drone/agent:0.8
command: agent
restart: always
depends_on:
- drone-server
volumes:
- /var/run/docker.sock:/var/run/docker.sock
environment:
- DRONE_SERVER=drone-server:9000
- DRONE_SECRET=${DRONE_SECRET}
Now this compose file requires a few environmental settings to be available. Luckily, we've already
set those up. Save and exit just like before.
Now run
docker-compose up
There will be a whole lot of output. Open a new browser window and navigate to your Drone host. In
my case, that is: http://35.233.66.226. You will be redirected to an oAuth authorization page on
GitLab. Choose to authorize access to your account. You will then be redirected back to your Drone
instance. After a little while, you will see a list of your repos.
Each repo will have a toggle button on the right of the page. Toggle whichever one(s) you want to set
up CI/CD for. If you have been following along, there should be a repo called ${YOUR_GITLAB_USER}/
tutorial-codementor-deploying-microservices. Go ahead and activate that one.
Recap
Alright! So far so good. We've got Drone CI all set up and talking to GitLab.
Practical: Giving Drone access to Google Cloud
We start off by creating a service account. This is sort of like a user. Like users, service accounts have
credentials and rights and they can authenticate with Google Cloud. To learn all about service
accounts, you can refer to Google's official docs.
gcloud iam service-accounts create drone-sa \
    --display-name "drone-sa"
This outputs a confirmation that the service account was created.
Now we want to give that service account permissions. It will need to push images to the Google
Cloud container registry (which is based on Google Storage), and it will need to roll out upgrades to
our application deployment.
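Granting those roles can be done with add-iam-policy-binding; roughly like this (PROJECT_ID as used elsewhere in this tutorial; storage.admin covers pushing images and container.developer covers rolling out to the cluster). The command prints the updated policy, which should include the bindings shown below:
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
    --member serviceAccount:drone-sa@${PROJECT_ID}.iam.gserviceaccount.com \
    --role roles/storage.admin
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
    --member serviceAccount:drone-sa@${PROJECT_ID}.iam.gserviceaccount.com \
    --role roles/container.developer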
bindings:
- members:
- serviceAccount:service-241386104325@compute-system.iam.gserviceaccount.com
role: roles/compute.serviceAgent
- members:
- serviceAccount:drone-sa@codementor-tutorial.iam.gserviceaccount.com
role: roles/container.developer
- members:
- serviceAccount:service-241386104325@container-engine-robot.iam.gserviceaccount.com
role: roles/container.serviceAgent
- members:
- serviceAccount:241386104325-compute@developer.gserviceaccount.com
- serviceAccount:241386104325@cloudservices.gserviceaccount.com
- serviceAccount:service-241386104325@containerregistry.iam.gserviceaccount.com
role: roles/editor
- members:
- user:yourname@gmail.com
role: roles/owner
- members:
- serviceAccount:drone-sa@codementor-tutorial.iam.gserviceaccount.com
role: roles/storage.admin
etag: BwVvOooDQaI=
version: 1
If you wanted to give your Drone instance access to other Google Cloud functionality (for example, if
you needed it to interact with App Engine), you can get a full list of available roles like so:
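Something like the following lists every predefined role:
gcloud iam roles list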
Now we create some credentials for our service account. Any device with this key file will have all of
the rights given to our service account. You can invalidate key files at any time. We are going to name
our key key.json
gcloud iam service-accounts keys create key.json \
    --iam-account drone-sa@${PROJECT_ID}.iam.gserviceaccount.com
This outputs a confirmation that the key has been created and saved as key.json.
Now we need to make the key available to Drone. We'll do this by using Drone's front-end. Point
your browser at the Drone front-end (in my case http://35.233.66.226). Navigate to the repository
that you want to deploy. Click on the menu button on the top right of the screen and select secrets.
cat key.json
{
  "type": "service_account",
  "project_id": "codementor-tutorial",
  "private_key_id": "111111111111111111111111",
  "private_key": "-----BEGIN PRIVATE KEY-----\n lots and lots of stuff =\n-----END PRIVATE KEY-----\n",
  "client_email": "drone-sa@codementor-tutorial.iam.gserviceaccount.com",
  "client_id": "xxxxxxxxxxxxxxxx",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/drone-sa%40codementor-tutorial.iam.gserviceaccount.com"
}
Create a new secret named GOOGLE_CREDENTIALS, copy the whole JSON, paste it into the secret value field, and click on save.
Now Drone has access to our Google Cloud resources (although we still need to tell it how to access
the key file), and it knows about our repo. Now we need to tell Drone what exactly we need done
when we push code to our project. We do this by specifying a pipeline in a file named .drone.yml in
the root of our Git repo. .drone.yml is written in YAML format. Here is a cheat-sheet that I've found
quite useful.
It's time to put something in your tutorial-codementor-deploying-microservices repo. We'll just copy
over everything from the repo created for this series of articles. In a terminal somewhere (your local
computer, Google Cloud shell, or wherever):
git clone https://gitlab.com/sheena.oconnell/tutorial-codementor-deploying-microservices.git
cd tutorial-codementor-deploying-microservices
# set the git repo origin to your very own repo
git remote set-url origin https://gitlab.com/${YOUR_GITLAB_USER}/tutorial-codementor-deploying-microservices.git
# and push your changes. This will trigger the deployment pipeline already specified by my .drone.yml
git push origin master
This will kick off our pipeline. You can watch it happen in the Drone web front-end.
pipeline:
unit-test:
image: python:3
commands:
- python -m pytest
gcr:
image: plugins/gcr
registry: eu.gcr.io
repo: codementor-tutorial/codementor-tutorial
secrets: [GOOGLE_CREDENTIALS]
when:
branch: master
deploy:
image: google/cloud-sdk:latest
environment:
PROJECT_ID: codementor-tutorial
COMPUTE_ZONE: europe-west1-d
CLUSTER_NAME: hello-codementor
secrets: [GOOGLE_CREDENTIALS]
commands:
when:
branch: master
As pipelines go, it's quite a small one. We have specified three steps: unit-test, gcr, and deploy. It
helps to keep Docker-compose in mind when working with Drone. Each step is run as a Docker
container. So each step is based on a Docker image. For the most part, you get to specify exactly
what happens on those containers through use of commands.
unit-test:
image: python:3
commands:
- python -m pytest
This step is fairly straightforward. Whenever any changes are made to the repo (on any branch) then
the unit tests are run. If the tests pass, Drone will proceed to the next step. In our case, all of the rest
of the steps only happen on the master branch, so if you are in a feature branch, the only thing this
pipeline will do is run unit tests.
gcr:
image: plugins/gcr
registry: eu.gcr.io
repo: codementor-tutorial/codementor-tutorial
secrets: [GOOGLE_CREDENTIALS]
when:
branch: master
The gcr step is all about building our application Docker image and pushing it into the Google Cloud
Registry (GCR). It is a special kind of step as it is based on a plugin. We won't go into detail on how
plugins work here. Just think of it as an image that takes in special parameters. This one is configured
to push images to eu.gcr.io/codementor-tutorial/codementor-tutorial.
The tags argument contains a list of tags to be applied. Here, we make use of some variables supplied
by Drone. DRONE_COMMIT is the Git commit hash. Each build of each repo is numbered, so we use
that as a tag too. Drone supplies a whole lot of variables, take a look here for a nice list.
The next thing is secrets. Remember that secret we copy-pasted into Drone just a few minutes ago?
Its name was GOOGLE_CREDENTIALS. This line makes sure that the contents of that secret are
available to the step's container in the form of an environmental variable
named GOOGLE_CREDENTIALS.
deploy:
image: google/cloud-sdk:latest
environment:
PROJECT_ID: codementor-tutorial
COMPUTE_ZONE: europe-west1-d
CLUSTER_NAME: hello-codementor
secrets: [GOOGLE_CREDENTIALS]
commands:
- yes | apt-get install python3
when:
branch: master
Here our base image is supplied by Google. It gives us gcloud and a few bells and whistles.
environment lets us set up environmental variables that will be accessible in the running container,
and the secrets work as before.
Now we have a bunch of commands. These execute in order and you should recognize most of it. The
only really strange part is how we authenticate as our service account (drone-sa). The line that does
the actual authentication is gcloud auth activate-service-account --key-file key.json. It requires a key
file. Now, ideally we would just do something like this:
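A rough sketch of those commands, with the deployment and container names assumed from the kubectl output shown further down:
commands:
  - echo "$GOOGLE_CREDENTIALS" > key.json
  - gcloud auth activate-service-account --key-file key.json
  - gcloud config set project $PROJECT_ID
  - gcloud config set compute/zone $COMPUTE_ZONE
  - gcloud container clusters get-credentials $CLUSTER_NAME
  # "codementor-tutorial" as the deployment and container name is an assumption based on the output below
  - kubectl set image deployment/codementor-tutorial codementor-tutorial=eu.gcr.io/codementor-tutorial/codementor-tutorial:commit_$DRONE_COMMIT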
Now that everything is set up, if you make a change to your code and push it to master, you will be
able to watch the pipeline get executed by keeping an eye on the Drone front-end.
Once the pipeline is complete, you will be able to make sure that your deployment is updated by
taking a look at the pods on the gcloud command line:
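Describing the pods is enough for this (kubectl is available in the Cloud Shell):
kubectl describe pods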
Outputs:
We'll get a whole lot of output here. The part that is interesting to us is:
Containers:
codementor-tutorial:
Image: eu.gcr.io/codementor-tutorial/codementor-
tutorial:commit_cb5d5ca61661954d7d139b2a1d60060cba5c4f2f
Now, if you were to check your Git log, the last commit to master that you pushed would have the
commit SHA cb5d5ca61661954d7d139b2a1d60060cba5c4f2f. Isn't that neat?
Conclusion
Wow, we made it! If you've worked through all of the practical examples, you've accomplished a lot.
You are now acquainted with Docker — you built an image and instantiated a container for that
image. Then, you got your images running on a Kubernetes cluster that you set up yourself. You then
manually scaled and rolled out updates to your application.
In this part, you got a simple CI/CD pipeline up and running from scratch by provisioning a VM,
installing Drone and its prerequisites, and getting it to play nice with GitLab and Google Kubernetes
Engine.
PS - Cleanup (IMPORTANT!)
Clusters cost money, so it would be best to shut it down if you aren't using it. Go back to the Google
Cloud Shell and do the following:
## now we need to wait a bit for Google to delete some forwarding rules for us. Keep an eye on them by executing this command:
gcloud compute forwarding-rules list
## once the forwarding rules are deleted then it is safe to delete the cluster:
gcloud container clusters delete hello-codementor --zone europe-west1-d
This post contains affiliate links to books that I really enjoy, which means I may receive a commission
if you purchase something through these links.
Circle CI
Travis CI and CircleCI are almost the same.
Both of them:
Are cloud-based
Support many languages out of the box: Android, C, C#, C++, Clojure, Crystal, D, Dart, Erlang, Elixir, F#, Go, Groovy, Haskell, Haxe, Java, JavaScript (with Node.js), Julia, Objective-C, Perl, Perl6, PHP, Python, R, Ruby, Rust, Scala, Smalltalk, Visual Basic
Build matrix
language: python
python:
- "2.7"
- "3.4"
- "3.5"
env:
- DJANGO='django>=1.8,<1.9'
- DJANGO='django>=1.9,<1.10'
- DJANGO='django>=1.10,<1.11'
- DJANGO='https://github.com/django/django/archive/master.tar.gz'
matrix:
allow_failures:
- env: DJANGO='https://github.com/django/django/archive/master.tar.gz'
A build matrix is a tool that lets you run tests against different versions of the language and of your
packages. You can customize it in various ways; for example, failures in certain environments can
trigger notifications without failing the whole build (that's helpful for development versions of packages).
Jenkins / Blue Ocean
Prerequisites
o On Windows
o Setup wizard
Follow up (optional)
Wrapping up
This tutorial shows you how to use the Blue Ocean feature of Jenkins to create a Pipeline that will
orchestrate building a simple application.
Before starting this tutorial, it is recommended that you run through at least one of the initial set of
tutorials from the Tutorials overview page first to familiarize yourself with CI/CD concepts (relevant
to a technology stack you’re most familiar with) and how these concepts are implemented in Jenkins.
This tutorial uses the same application that the Build a Node.js and React app with npm tutorial is
based on. Therefore, you’ll be building the same application although this time, completely through
Blue Ocean. Since Blue Ocean provides a simplified Git-handling experience, you’ll be interacting
directly with the repository on GitHub (as opposed to a local clone of this repository).
Duration: This tutorial takes 20-40 minutes to complete (assuming you’ve already met
the prerequisites below). The exact duration will depend on the speed of your machine and whether
or not you’ve already run Jenkins in Docker from another tutorial.
You can stop this tutorial at any point in time and continue from where you left off.
If you've already run through another tutorial, you can skip the Prerequisites and Run Jenkins in
Docker sections below and proceed on to forking the sample repository. If you need to restart
Jenkins, simply follow the restart instructions in Stopping and restarting Jenkins and then proceed
on.
Prerequisites
o 10 GB of drive space for Jenkins and your Docker images and containers.
o Docker - Read more about installing Docker in the Installing Docker section of
the Installing Jenkins page.
Note: If you use Linux, this tutorial assumes that you are not running Docker
commands as the root user, but instead with a single user account that also has
access to the other tools used throughout this tutorial.
To run Jenkins in Docker, follow the relevant instructions below for either macOS and
Linux or Windows.
You can read more about Docker container and image concepts in the Docker and Downloading and
running Jenkins in Docker sections of the Installing Jenkins page.
docker run \
  --rm \
  -u root \
  -p 8080:8080 \
  -v jenkins-data:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$HOME":/home \
  jenkinsci/blueocean
Maps the /var/jenkins_home directory in the container to the Docker volume with the
name jenkins-data. If this volume does not exist, then this docker run command will
automatically create the volume for you.
Maps the $HOME directory on the host (i.e. your local machine, usually the /Users/<your-username> directory) to the /home directory in the container.
Note: If copying and pasting the command snippet above doesn’t work, try copying and pasting this
annotation-free version here:
docker run \
--rm \
-u root \
-p 8080:8080 \
-v jenkins-data:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$HOME":/home \
jenkinsci/blueocean
On Windows
1. Open up a command prompt window.
2. Run the jenkinsci/blueocean image as a container in Docker using the following docker
run command (bearing in mind that this command automatically downloads the image if this
hasn’t been done):
docker run ^
  --rm ^
  -u root ^
  -p 8080:8080 ^
  -v jenkins-data:/var/jenkins_home ^
  -v /var/run/docker.sock:/var/run/docker.sock ^
  -v "%HOMEPATH%":/home ^
  jenkinsci/blueocean
For an explanation of these options, refer to the macOS and Linux instructions above.
This means you could access the Jenkins/Blue Ocean container (through a separate
terminal/command prompt window) with a docker exec command like:
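For example (the container name or ID is whatever docker ps reports for the jenkinsci/blueocean container):
docker ps
docker exec -it <container-name-or-id> bash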
Setup wizard
Before you can access Jenkins, there are a few quick "one-off" steps you’ll need to perform.
Unlocking Jenkins
When you first access a new Jenkins instance, you are asked to unlock it using an automatically-
generated password.
1. After the 2 sets of asterisks appear in the terminal/command prompt window, browse
to http://localhost:8080 and wait until the Unlock Jenkins page appears.
2. From your terminal/command prompt window again, copy the automatically-generated
alphanumeric password (between the 2 sets of asterisks).
3. On the Unlock Jenkins page, paste this password into the Administrator password field and
click Continue.
The setup wizard shows the progression of Jenkins being configured and the suggested plugins being
installed. This process may take a few minutes.
1. When the Create First Admin User page appears, specify your details in the respective fields
and click Save and Finish.
2. When the Jenkins is ready page appears, click Start using Jenkins.
Notes:
o This page may indicate Jenkins is almost ready! instead and if so, click Restart.
o If the page doesn’t automatically refresh after a minute, use your web browser to
refresh the page manually.
3. If required, log in to Jenkins with the credentials of the user you just created and you’re
ready to start using Jenkins!
1. Run the same docker run … command you ran for macOS, Linux or Windows above.
Note: This process also updates the jenkinsci/blueocean Docker image, if an updated
one is available.
2. Browse to http://localhost:8080.
Fork the simple "Welcome to React" Node.js and React application on GitHub into your own GitHub
account.
1. Ensure you are signed in to your GitHub account. If you don’t yet have a GitHub account, sign
up for a free one on the GitHub website.
1. Go back to Jenkins and ensure you have accessed the Blue Ocean interface. To do this, make
sure you:
2. In the Welcome to Jenkins box at the center of the Blue Ocean interface, click Create a new
Pipeline to begin the Pipeline creation wizard.
Note: If you don’t see this box, click New Pipeline at the top right.
4. In Connect to GitHub, click Create an access key here. This opens GitHub in a new browser
tab.
Note: If you previously configured Blue Ocean to connect to GitHub using a personal access
token, then Blue Ocean takes you directly to step 9 below.
5. In the new tab, sign in to your GitHub account (if necessary) and on the GitHub New
Personal Access Token page, specify a brief Token description for your GitHub access token
(e.g. Blue Ocean).
Note: An access token is usually an alphanumeric string that represents your GitHub
account along with permissions to access various GitHub features and areas through your
GitHub account. This access token will have the appropriate permissions pre-selected, which
Blue Ocean requires to access and interact with your GitHub account.
6. Scroll down to the end of the page (leaving all other Select scopes options with their default
settings) and click Generate token.
7. On the resulting Personal access tokens page, copy your newly generated access token.
8. Back in Blue Ocean, paste the access token into the Your GitHub access token field and
click Connect.
Jenkins now has access to your GitHub account (provided by your access token).
9. In Which organization does the repository belong to?, click your GitHub account (where you
forked the repository above).
1. Following on from creating your Pipeline project (above), in the Pipeline editor,
select docker from the Agent dropdown in the Pipeline Settings panel on the right.
2. In the Image and Args fields that appear, specify node:6-alpine and -p
3000:3000 respectively.
Note: For an explanation of these values, refer to annotations 1 and 2 of the Declarative Pipeline in
the "Create your initial Pipeline…" section of the Build a Node.js and React app tutorial.
3. Back in the main Pipeline editor, click the + icon, which opens the new stage panel on the
right.
4. In this panel, type Build in the Name your stage field and then click the Add Step button
below, which opens the Choose step type panel.
5. In this panel, click Shell Script near the top of the list (to choose that step type), which opens
the Build / Shell Script panel, where you can enter this step’s values.
Tip: The most commonly used step types appear closest to the top of this list. To find other steps
further down this list, you can filter this list using the Find steps by name option.
6. In the Build / Shell Script panel, specify npm install.
Note: For an explanation of this step, refer to annotation 4 of the Declarative Pipeline in the Create
your initial Pipeline… section of the Build a Node.js and React app tutorial.
7. ( Optional ) Click the top-left back arrow icon to return to the main Pipeline editor.
8. Click the Save button at the top right to begin saving your new Pipeline with its "Build" stage.
9. In the Save Pipeline dialog box, specify the commit message in the Description field (e.g. Add
initial Pipeline (Jenkinsfile)).
10. Leaving all other options as is, click Save & run and Jenkins proceeds to build your Pipeline.
11. When the main Blue Ocean interface appears, click the row to see Jenkins build your Pipeline
project.
Note: You may need to wait several minutes for this first run to complete. During this time,
Jenkins does the following:
a. Commits your Pipeline as a Jenkinsfile to the only branch (i.e. master) of your
repository.
d. Executes the Build stage (defined in the Jenkinsfile) on the Node container.
(During this time, npm downloads many dependencies necessary to run your Node.js
and React application, which will ultimately be stored in the
local node_modules directory within the Jenkins home directory).
12. The Blue Ocean interface turns green if Jenkins built your application successfully.
14. Click the X at the top-right to return to the main Blue Ocean interface.
Note: Before continuing on, you can check that Jenkins has created a Jenkinsfile for you at the
root of your forked GitHub repository (in the repository’s sole master branch).
2. Click the master branch’s "Edit Pipeline" icon to open the Pipeline editor for this
branch.
3. In the main Pipeline editor, click the + icon to the right of the Build stage you
created above to open the new stage panel on the right.
4. In this panel, type Test in the Name your stage field and then click the Add Step button
below to open the Choose step typepanel.
5. In this panel, click Shell Script near the top of the list.
6. In the resulting Test / Shell Script panel, specify ./jenkins/scripts/test.sh and then
click the top-left back arrow icon to return to the Pipeline stage editor.
7. At the lower-right of the panel, click Settings to reveal this section of the panel.
8. Click the + icon at the right of the Environment heading (for which you’ll configure an
environment directive).
9. In the Name and Value fields that appear, specify CI and true, respectively.
Note: For an explanation of this directive and its step, refer to annotations 1 and 3 of the Declarative
Pipeline in the Add a test stage… section of the Build a Node.js and React app tutorial.
10. ( Optional ) Click the top-left back arrow icon to return to the main Pipeline editor.
11. Click the Save button at the top right to begin saving your Pipeline with its new "Test"
stage.
12. In the Save Pipeline dialog box, specify the commit message in the Description field (e.g. Add
'Test' stage).
13. Leaving all other options as is, click Save & run and Jenkins proceeds to build your amended
Pipeline.
14. When the main Blue Ocean interface appears, click the top row to see Jenkins build your
Pipeline project.
Note: You’ll notice from this run that Jenkins no longer needs to download the Node Docker
image. Instead, Jenkins only needs to run a new container from the Node image downloaded
previously. Therefore, running your Pipeline this subsequent time should be much faster.
If your amended Pipeline ran successfully, here’s what the Blue Ocean interface should look
like. Notice the additional "Test" stage. You can click on the previous "Build" stage circle to
access the output from that stage.
15. Click the X at the top-right to return to the main Blue Ocean interface.
2. Click the master branch’s "Edit Pipeline" icon to open the Pipeline editor for this
branch.
3. In the main Pipeline editor, click the + icon to the right of the Test stage you created above to
open the new stage panel.
4. In this panel, type Deliver in the Name your stage field and then click the Add Step button
below to open the Choose step type panel.
5. In this panel, click Shell Script near the top of the list.
6. In the resulting Deliver / Shell Script panel, specify ./jenkins/scripts/deliver.sh and
then click the top-left back arrow icon to return to the Pipeline stage editor.
Note: For an explanation of this step, refer to the deliver.sh file itself located in
the jenkins/scripts of your forked repository on GitHub.
8. In the Choose step type panel, type input into the Find steps by name field.
9. Click the filtered Wait for interactive input step type.
10. In the resulting Deliver / Wait for interactive input panel, specify Finished using the
web site? (Click "Proceed" to continue) in the Message field and then click the top-left
back arrow icon to return to the Pipeline stage editor.
Note: For an explanation of this step, refer to annotation 4 of the Declarative Pipeline in the Add a
final deliver stage…section of the Build a Node.js and React app tutorial.
14. ( Optional ) Click the top-left back arrow icon to return to the main Pipeline editor.
15. Click the Save button at the top right to begin saving your Pipeline with its new "Deliver"
stage.
16. In the Save Pipeline dialog box, specify the commit message in the Description field (e.g. Add
'Deliver' stage).
17. Leaving all other options as is, click Save & run and Jenkins proceeds to build your amended
Pipeline.
18. When the main Blue Ocean interface appears, click the top row to see Jenkins build your
Pipeline project.
If your amended Pipeline ran successfully, here’s what the Blue Ocean interface should look
like. Notice the additional "Deliver" stage. Click on the previous "Test" and "Build" stage
circles to access the outputs from those stages.
19. Ensure you are viewing the "Deliver" stage (click it if necessary), then click the
green ./jenkins/scripts/deliver.sh step to expand its content and scroll down until
you see the http://localhost:3000 link.
20. Click the http://localhost:3000 link to view your Node.js and React application running
(in development mode) in a new web browser tab. You should see a page/site with the
title Welcome to React on it.
21. When you are finished viewing the page/site, click the Proceed button to complete the
Pipeline’s execution.
22. Click the X at the top-right to return to the main Blue Ocean interface, which lists your
previous Pipeline runs in reverse chronological order.
Follow up (optional)
If you check the contents of the Jenkinsfile that Blue Ocean created at the root of your
forked creating-a-pipeline-in-blue-ocean repository, notice the location of
the environment directive. This directive’s location within the "Test" stage means that the
environment variable CI (with its value of true) is only available within the scope of this "Test"
stage.
You can set this directive in Blue Ocean so that its environment variable is available globally
throughout the Pipeline (as is the case in the Build a Node.js and React app with npm tutorial). To do this:
1. From the main Blue Ocean interface, click Branches at the top-right to access your
repository's master branch.
2. Click the master branch’s "Edit Pipeline" icon to open the Pipeline editor for this
branch.
3. In the main Pipeline editor, click the Test stage you created above to begin editing it.
4. In the stage panel on the right, click Settings to reveal this section of the panel.
5. Click the minus (-) icon at the right of the CI environment directive (you created earlier) to
delete it.
6. Click the top-left back arrow icon to return to the main Pipeline editor.
7. In the Pipeline Settings panel, click the + icon at the right of the Environment heading (for
which you'll configure a global environment directive).
8. In the Name and Value fields that appear, specify CI and true, respectively.
9. Click the Save button at the top right to begin saving your Pipeline with its relocated
environment directive.
10. In the Save Pipeline dialog box, specify the commit message in the Description field
(e.g. Make environment directive global).
11. Leaving all other options as is, click Save & run and Jenkins proceeds to build your amended
Pipeline.
12. When the main Blue Ocean interface appears, click the top row to see Jenkins build your
Pipeline project.
You should see the same build process you saw when you completed adding the final deliver
stage (above). However, when you inspect the Jenkinsfile again, you’ll notice that
the environment directive is now a sibling of the agent section.
Wrapping up
Well done! You’ve just used the Blue Ocean feature of Jenkins to build a simple Node.js and React
application with npm!
The "Build", "Test" and "Deliver" stages you created above are the basis for building other
applications in Jenkins with any technology stack, including more complex applications and ones that
combine multiple technology stacks together.
Because Jenkins is extremely extensible, it can be modified and configured to handle practically any
aspect of build orchestration and automation.
You may also want to check out:
The User Handbook, for more detailed information about using Jenkins, such as Pipelines (in
particular Pipeline syntax) and the Blue Ocean interface.
The Jenkins blog, for the latest events, other tutorials and updates.
CA ARA
Application-release automation
Application-release automation (ARA) refers to the process of packaging and deploying
an application or update of an application from development, across various environments, and
ultimately to production.[1] ARA solutions must combine the capabilities of deployment automation,
environment management and modeling, and release coordination.[2]
Contents
1 Relationship with DevOps
3 ARA Solutions
4 References
ARA Solutions
Gartner and Forrester have published lists of ARA tools in their ARA Magic Quadrant and Wave
reports respectively.[7][8] All ARA solutions must include capabilities in automation, environment
modeling, and release coordination. Additionally, the solution must provide this functionality without
reliance on other tools.[9]
Solution           Released by
BuildMaster        Inedo
DeployHub          OpenMake Software
FlexDeploy         Flexagon
Puppet Enterprise  Puppet
GitLab CI
Deploy a Spring Boot application to Cloud Foundry with GitLab CI/CD
All the code for this project can be found in this GitLab repo.
In case you’re interested in deploying Spring Boot applications to Kubernetes using GitLab CI/CD,
read through the blog post Continuous Delivery of a Spring Boot application with GitLab CI and
Kubernetes.
Requirements
We assume you are familiar with Java, GitLab, Cloud Foundry, and GitLab CI/CD.
To follow along with this tutorial you will need the following:
An account on Pivotal Web Services (PWS) or any other Cloud Foundry instance
An account on GitLab
Note: You will need to replace the api.run.pivotal.io URL in all the commands below with
the API URL of your CF instance if you're not deploying to PWS.
Create a manifest.yml file in the root of your project; it tells Cloud Foundry how to deploy the application:
---
applications:
- name: gitlab-hello-world
random-route: true
memory: 1G
path: target/demo-0.0.1-SNAPSHOT.jar
Configure GitLab CI/CD to deploy your application
Now we need to add the GitLab CI/CD configuration file ( .gitlab-ci.yml) to our project’s root.
This is how GitLab figures out what commands need to be run whenever code is pushed to our
repository. We will add the following .gitlab-ci.yml file to the root directory of the repository,
GitLab will detect it automatically and run the steps defined once we push our code:
image: java:8
stages:
- build
- deploy
build:
stage: build
script: ./mvnw package
artifacts:
paths:
- target/demo-0.0.1-SNAPSHOT.jar
production:
stage: deploy
script:
- ./cf push
only:
- master
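Note that the production job assumes a cf binary is sitting in the job's working directory and that you are already logged in. One way to take care of that, assuming PWS's standard Linux CLI download endpoint and the CF_USERNAME/CF_PASSWORD variables described below, is a before_script on the production job along these lines:
  before_script:
    - curl --location "https://cli.run.pivotal.io/stable?release=linux64-binary&source=github" | tar zx
    - ./cf login -u $CF_USERNAME -p $CF_PASSWORD -a api.run.pivotal.io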
We’ve used the java:8 docker image to build our application as it provides the up-to-date Java 8
JDK on Docker Hub. We’ve also added the only clause to ensure our deployments only happen when
we push to the master branch.
Now, since the steps defined in .gitlab-ci.yml require credentials to login to CF, you’ll need to
add your CF credentials as environment variables on GitLab CI/CD. To set the environment variables,
navigate to your project's Settings > CI/CD and expand Variables. Name the
variables CF_USERNAME and CF_PASSWORD and set them to the correct values.
Once set up, GitLab CI/CD will deploy your app to CF at every push to your repository’s default
branch. To see the build logs or watch your builds running live, navigate to CI/CD > Pipelines.
Caution: It is considered best practice for security to create a separate deploy user for your
application and add its credentials to GitLab instead of using a developer’s credentials.
To start a manual deployment in GitLab go to CI/CD > Pipelines then click on Run Pipeline. Once the
app is finished deploying it will display the URL of your application in the logs for
the production job like:
instances: 1/1
usage: 1G x 1 instances
urls: gitlab-hello-world-undissembling-hotchpot.cfapps.io
stack: cflinuxfs2
You can then visit your deployed application (for this example, https://gitlab-hello-world-
undissembling-hotchpot.cfapps.io/) and you should see the “Spring is here!” message.
Semaphore CI
Publishing Docker images on DockerHub
Pushing images to the official registry is straightforward. You'll need to create a secret for the login
username and password, then call docker login with the appropriate environment variables. The
first step is to create a secret for DOCKER_USERNAME and DOCKER_PASSWORD with the sem tool.
# secret.yml
apiVersion: v1alpha
kind: Secret
metadata:
  name: docker
data:
  env_vars:
    - name: DOCKER_USERNAME
      value: <your-dockerhub-username>
    - name: DOCKER_PASSWORD
      value: <your-dockerhub-password>
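The secret is then created from that file with the sem CLI, roughly:
sem create -f secret.yml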
# .semaphore/pipeline.yml
version: "v1.0"
name: Docker Hub push
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804
blocks:
  - name: Build and push
    task:
      secrets:
        - name: docker
      prologue:
        commands:
          - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
      jobs:
        - name: Docker build
          commands:
            - checkout
            - docker-compose build
            - docker-compose push
GoCD
In my second stint at ThoughtWorks I spent a little over a year working on their Continuous Delivery
tool GoCD, now Open Source! Most of the time working with customers was more about
helping them understand and adopt Continuous Delivery rather than the tool itself (as it should be).
The most common remark I got though was “Jenkins does CD pipelines too” and my reply would
invariably be “on the surface, but they are not a first class built-in concept” and a blank stare usually
followed.
I use Jenkins as an example since it’s the most widespread CI tool but this is really just an excuse to
talk about concepts central to Continuous Delivery regardless of tools.
It’s often hard to get the message across because it assumes people are comfortable with at least 3
concepts:
WHAT PIPELINES REALLY ARE AND WHY THEY ARE KEY TO A SUCCESSFUL CD INITIATIVE
WHAT FIRST CLASS BUILT-IN CONCEPT MEANS AND WHY IT’S KEY
I’ll assume everyone agrees with the definitions Martin posted on his site. If you haven’t seen them
yet here they are: Continuous Delivery and (Deployment) Pipelines. In particular on Deployment
Pipelines he writes (emphasis mine):
“A deployment pipeline is a way to deal with this by breaking up your build into stages […] to detect
any changes that will lead to problems in production. These can include performance, security, or
usability issues […] should enable collaboration between the various groups involved in delivering
software and provide everyone visibility about the flow of changes in the system, together with a
thorough audit trail.”
If you prefer I can categorically say what a pipeline is NOT: just a graphic doodle.
Since Continuous Integration (CI) mainly focuses on development teams and much of the waste in
releasing software comes from its progress through testing and operations, CD is all about:
1. Finding and removing bottlenecks, often by breaking the sequential nature of the cycle. No
inflexible monolithic scripts, no slow sequential testing, no flat and simplistic workflows, no single
tool to rule them all
2. Relentless automation, eliminating dull work and the waste of human error, shortening feedback
loops and ensuring repeatability. When you do fail (and you will) the repeatable nature of
automated tasks allows you to easily track down the problem
3. Optimising & Visualising, making people from different parts of the organisation Collaborate on
bringing value (the software) to the users (production) as quickly and reliably as possible
Commitment to automation is not enough: scripting and automated testing are mostly localised
activities that often create local maxima with manual gatekeepers – the infamous “throwing over the
wall” – to the detriment of the end-to-end value creating process resulting in wasted time and longer
cycle-times.
Jenkins and GoCD Pipelines are so hard to compare because their premises are completely different:
Jenkins pipelines are somewhat simplistic and comparing the respective visualisations is in fact
misleading (Jenkins top, GoCD bottom):
1. An entire row of boxes you see in the Jenkins visualisation is a pipeline as per the original
definition in the book (that Jez Humble now kind of regrets :-)) each box you see in the Jenkins
pipeline is the equivalent of a single Task in GoCD
2. in GoCD each box you see is an entire pipeline in itself that usually is chained to other pipelines
both upstream and downstream. Furthermore each can contain multiple Stages that can contain
multiple Jobs which in turn can contain multiple Tasks
I hear you: “ok, cool but why is this significant?” and this is where it’s important to understand…
You might have seen this diagram already in the GoCD documentation: Although it really is a
simplification (here a more accurate but detail-dense one), it tries to convey visually 2 very important
and often misunderstood/ignored characteristics of GoCD:
1. its 4 built-in powerful abstractions and their relationship: Tasks inside Jobs inside Stages inside
Pipelines
2. the fact that some are executed in parallel (depending on agents availability) while others
sequentially:
Without geeking out into Barbara Liskov’s “The Power of Abstraction”-level of details we can say that
a good design is one that finds powerful yet simple abstractions, making a complex problem
tractable.
Indeed that’s what a tool that strives to support you in your CD journey should do (because the
journey is yours and a tool can only help or get in the way, either way it won’t transform you
magically overnight): make your complex and often overcomplicated path from check-in to
production tractable. At the same time “all non-trivial abstractions, to some degree, are leaky” as
Joel Spolsky so simply put it in “The Law of Leaky Abstractions” therefore the tricky balance to
achieve here is:
“to have powerful enough abstractions (the right ones) to make it possible to model your path to
production effectively and, importantly, remodel it as you learn and evolve it over time while, at the
same time, resist the temptation to continuously introduce new, unnecessary abstractions that are
only going to make things more difficult in the long run because they will be leaky”
And of course we believed (and I still do) we struck the right balance since we’ve been exploring,
practicing and evolving the practice of Continuous Delivery from before its formal definition.
This is the reason why you are supposed to model your end to end Value Stream Map connecting
multiple pipelines together in both direction – upstream and downstream – while everyone seems to
still be stuck at the definition by the book that (seems to) indicate you should have one single, fat
pipeline that covers the entire flow. To some extent this could be easily brushed off as just semantics
but it makes a real difference when it’s not about visual doodles but about real life. It may appear
overkill to have four levels of abstraction for work execution but the moment you start doing more
than single team Continuous Integration (CI), they become indispensable.
For instance, it is trivial in GoCD to set up an integration pipeline that feeds off multiple upstream
component pipelines and also feeds off an integration test repository. It is also easy to define
different triggering behaviours for Pipelines and Stages: if we had only two abstractions, say Jobs and
Stages, they’d be overloaded with different behaviour configurations for different contexts. Jobs and
Stages are primitives, they can and should be extended to achieve higher order abstractions. By
doing so, we avoid primitive obsession at an architectural level. Also note that the alternating
execution behaviour of the four abstractions (parallel, sequential, parallel, sequential) is designed
deliberately so that you have the ability to parallelise and sequentialise your work as needed at two
different levels of granularity.
In order for Pipelines to be considered true first class built-in concepts rather than merely visual
doodles it must be possible to:
Not all Pipelines are created equal, let’s see why the above points are important by looking at how
they are linked to the CD best practices.
Only build your binary once: Pipeline support for dependency and Fetch Artifact
Each change should propagate instantly: Pipeline support for SCM poll, post commit, multi instance pipeline runs
If any part fails – stop the line: Basic Pipeline modeling & Lock Pipelines
Provide fast and useful feedback: Pipeline Visualization + VSM + Compare pipelines + Fan-in
Last but not least Pipelines as first class built-in concepts are part of the reason why we were able to
release the first ever (and at the moment still only, AFAIK) intelligent dependency management to
automatically address the Dreaded Diamond Dependency problem and avoid wasted builds,
inconsistent results, incorrect feedback, and running code with the wrong tests: in GoCD we called
it Fan-in Dependency Management. GoCD’s Fan-in material resolution ensures that a pipeline
triggers only when all its upstream pipelines have triggered off the same version of an ancestor
pipeline or material. This will be the case when you have multiple components building in separate
pipelines which all have the same ancestor and you want downstream pipelines to all use the same
version of the artifact from the ancestor pipeline.
Further reading
If you haven’t read the Continuous Delivery book you should but chapter 5 ‘Anatomy of the
Deployment Pipeline' is available for free, get it now. Some time ago a concise but exhaustive 5-part series
on "How do I do CD with Go?" was published on the Studios blog and I still highly recommend it:
4. GoCD environments
and last but not least take a look at how a Value Stream Map visualisation helps the Mingle team day
in, day out in Tracing our path to production.
Navigate to the new pipeline you created by clicking on the Edit link under the Actions
against it. You can also click on the name of the pipeline.
You will notice an existing material. Click on the "Add new material" link.
Click "Save".
Blacklist
Often you want to specify a set of files that Go should ignore when it checks for changes.
Repository changesets which contain only these files will not automatically trigger a pipeline. These
are detailed in the ignore section of the configuration reference.
Click "Save".
Add a new stage to an existing pipeline
Now that you have a pipeline with a single stage, let's add more stages to it.
Navigate to the new pipeline you created by clicking on the Edit link under the Actions
against it. You can also click on the name of the pipeline.
You will notice that a defaultStage exists. Click on the "Add new stage" link.
Fill in the details for the first job and first task belonging to this job. You can add more
jobs and add more tasks to the jobs.
Click on help icon next to the fields to get additional details about the fields you are editing.
Click "Save".
Click on the stage name that you want to edit on the tree as shown below. The
"defaultStage" is being edited.
For task types Ant, Nant and Rake, the build file and target will default as per the tool used.
For example, Ant task, would look for build.xml in the working directory, and use the default
task if nothing is mentioned.
Click on help icon next to the fields to get additional details about the fields you are editing.
Click "Save"
Click on the job name that you want to edit on the tree as shown below. The "defaultJob" is
being edited.
Click on "Add new task". You can choose the task type from Ant, Nant, Rake and Fetch
Artifact. Or you can choose "More..." to choose a command from command repository or
specify your own command
Click on help icon next to the fields to get additional details about the fields you are editing.
Click "Save"
The Advanced Options section allows you to specify a Task in which you can provide the actions
(typically clean up) that need to be taken when a user chooses to cancel the stage.
If the user is a pipeline group admin, she can clone the new pipeline into a group that she has access
to. If the user is an admin she can clone the pipeline into any group or give a new group name, in
which case the group gets created.
Select a pipeline group. If you are an admin, you will be able to enter the name of the
pipeline group using the auto suggest or enter a new group name
Click "Save"
Warning: Pipeline history is not removed from the database and artifacts are not removed from
artifact storage, which may cause conflicts if a pipeline with the same name is later re-created.
Pipeline Templates
Templating helps to create reusable workflows in order to make tasks like creating and maintaining
branches, and managing large number of pipelines easier.
Pipeline Templates can be managed from the Templates tab on the Administration Page.
Clicking on the "Add New Template" brings up the following form which allows you to create a fresh
template, or extract it from an existing pipeline. Once saved, the pipeline indicated will also start
using this newly created template.
A template can also be extracted from a pipeline using the "Extract Template" link. This can be found
on the "Pipelines" tab in the Administration page.
Example
As an example, assume that there is a pipeline group called "my-app" and it contains a pipeline called
"app-trunk" which builds the application from trunk. Now, if we need to create another pipeline
called "app-1.0-branch" which builds 1.0 version of the application, we can use Pipeline Templates as
follows
Using Administration UI
Create a template "my-app-build" by extracting it from the pipeline "app-trunk", as shown in
the previous section.
Create a new pipeline "app-1.0-branch" which defines SCM material with the branch url and
uses the template "my-app-build".
Using XML
Power users can configure the above as follows:
<pipelines group="my-app">
  <pipeline name="app-trunk" template="my-app-build">
    <materials>
      <!-- SCM material pointing at the trunk URL -->
    </materials>
  </pipeline>
  <pipeline name="app-1.0-branch" template="my-app-build">
    <materials>
      <!-- SCM material pointing at the 1.0 branch URL -->
    </materials>
  </pipeline>
</pipelines>
<templates>
  <pipeline name="my-app-build">
    <stage name="build">
      <jobs>
        <job name="compile">
          <tasks>
            <!-- build tasks, e.g. an Ant task with a compile target -->
          </tasks>
        </job>
      </jobs>
    </stage>
  </pipeline>
</templates>
Go Administrators can now enable any Go user to edit a template by making them a template
administrator.
Template administrators can view and edit the templates to which they have permissions, on the
template tab of the admin page. Template administrators will, however, not be able to add, delete or
change permissions for a template. They will also be able to see the number of pipelines in which the
template is being used, but not the details of those pipelines.
Pipeline Templates can now be viewed by Administrators and Pipeline Group Administrators while
editing or creating a Pipeline.
1. Shows the details of the job "compile-job" configured for the stage "compile".
2. Indicates that the working directory set for the task is "go/service_1", which is followed by
the "$" symbol and then the command.
3. If any "On Cancel Task" has been configured, it will be indicated like this.
See also...
Templates - Configuration Reference
If you add a manual approval to the first stage in a pipeline, it will prevent the pipeline from being
triggered from version control. Instead, it will only pick up changes when you trigger the pipeline
manually (this is sometimes known as "forcing the build").
You can control who can trigger manual approvals. See the section on Adding authorization to
approvals for more details.
Bibliography:
https://ramitsurana.github.io/myblog/
https://www.docker.com/ressources
https://highops.com/insights/
https://www.codementor.io/sheena/