
DevOps

DevOps is the combination of cultural philosophies, practices, and tools that increases an
organization's ability to deliver applications and services at high velocity: evolving and improving
products at a faster pace than organizations using traditional software development and
infrastructure management processes.
Tools used to achieve it:

Git
Maven
Jenkins
Ansible
Docker
Kubernetes

DevOps tool : Git

Git is an Open Source Distributed Version Control System. Now that’s a lot of words to define Git.

Let me break it down and explain the wording :


· Control System: This basically means that Git is a content tracker. So Git can be used to store
content — it is mostly used to store code due to the other features it provides.

· Version Control System: The code which is stored in Git keeps changing as more code is added.
Also, many developers can add code in parallel. So Version Control System helps in handling this by
maintaining a history of what changes have happened. Also, Git provides features like branches
and merges, which I will be covering later.

· Distributed Version Control System: Git has a remote repository which is stored in a server and a
local repository which is stored in the computer of each developer. This means that the code is not
just stored in a central server, but the full copy of the code is present in all the developers’
computers. Git is a Distributed Version Control System since the code is present in every
developer’s computer. I will explain the concept of remote and local repositories later in this
article.
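For example, cloning a remote repository gives you the complete history locally, so you can inspect it without any network connection (the URL here is only a placeholder):

    git clone https://github.com/<user>/<repo>.git    # full copy of the repository, including its history
    cd <repo>
    git log --oneline                                 # browse the entire history offline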

Case :

Suppose you are working on a project.

You wrote the source code and it was working fine, but then you changed the code because it needed to change. After a few minutes you changed it again, and again... and again.

Now you are stuck and frustrated: your current code is broken and all you want is your initial code back. But you can't have it, because you never saved the previous versions. All you can do is *cry in a corner*.

Wait... I have a solution.

The first time, I saved my code as source_code.cpp

then I saved the new version as final_source_code.cpp

and then again as final_source_code1.cpp

and then final_source_code2.cpp

and then ...

... final_source_code50.cpp

I have saved all versions of my code. But as you can see, this "solution" is a new problem in itself.

Now, consider a different scenario. You are working on a team project, and your team members are working from remote locations. How will you all work on the same source code in real time?

Big problem, isn't it?


To solve all these problems, we use a version control system (VCS). It manages the changes to code or files: it stores every change, and the revisions can be compared, restored or merged.

Git is a popular, free and open source version control system.

A repository is the data structure used by a VCS to store metadata for a set of files and/or directories. It stores the set of files as well as the history of changes made to those files.

Commands :

Tell Git who you are
  Configure the author name and email address to be used with your commits.
  (Note that Git strips some characters, for example trailing periods, from user.name.)
      git config --global user.name "testuser"
      git config --global user.email "test@example.com"

Create a new local repository
      git init

Check out a repository
  Create a working copy of a local repository:
      git clone /path/to/repository
  For a remote server, use:
      git clone username@host:/path/to/repository

Add files
  Add one or more files to staging (index):
      git add <filename>
      git add *

Commit
  Commit changes to head (but not yet to the remote repository):
      git commit -m "Commit message"
  Commit any files you've added with git add, and also commit any files you've changed since then:
      git commit -a

Push
  Send changes to the master branch of your remote repository:
      git push origin master

Status
  List the files you've changed and those you still need to add or commit:
      git status

Connect to a remote repository
  If you haven't connected your local repository to a remote server, add the server to be able to push to it:
      git remote add origin <server>
  List all currently configured remote repositories:
      git remote -v

Branches
  Create a new branch and switch to it:
      git checkout -b <branchname>
  Switch from one branch to another:
      git checkout <branchname>
  List all the branches in your repo, and also tell you what branch you're currently in:
      git branch
  Delete the feature branch:
      git branch -d <branchname>
  Push the branch to your remote repository, so others can use it:
      git push origin <branchname>
  Push all branches to your remote repository:
      git push --all origin
  Delete a branch on your remote repository:
      git push origin :<branchname>

Update from the remote repository
  Fetch and merge changes on the remote server to your working directory:
      git pull
  To merge a different branch into your active branch:
      git merge <branchname>

Tags
  You can use tagging to mark a significant changeset, such as a release:
      git tag 1.0.0 <commitID>
  CommitId is the leading characters of the changeset ID, up to 10, but it must be unique. Get the ID using:
      git log
  Push all tags to the remote repository:
      git push --tags origin

Undo local changes
  If you mess up, you can replace the changes in your working tree with the last content in head (changes already added to the index, as well as new files, will be kept):
      git checkout -- <filename>
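To see how these commands fit together, here is a minimal end-to-end sketch (the file name and remote address are placeholders for illustration):

    git init demo && cd demo             # create a new local repository
    echo "hello" > readme.txt
    git add readme.txt                   # stage the file
    git commit -m "Add readme"           # record it in the local history
    git remote add origin <server>       # connect to a remote repository
    git push origin master               # publish the commit
    git status                           # confirm the working tree is clean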
git stash - How to Save Your Changes Temporarily

There are lots of situations where a clean working copy is recommended or even required: when merging branches, when pulling from a remote, or simply when checking out a different branch.

The "git stash" command can help you to (temporarily but safely) store your uncommitted local changes - and leave you with a clean working copy.

git stash: a Clipboard for Your Changes


Let's say you currently have a couple of local modifications:

$ git status
modified: index.php
modified: css/styles.css

If you have to switch context - e.g. because you need to work on an urgent bug - you need to get these changes out of the way. You shouldn't just commit them, of course, because it's unfinished work.

This is where "git stash" comes in handy:

$ git stash
Saved working directory and index state WIP on master:
2dfe283 Implement the new login box
HEAD is now at 2dfe283 Implement the new login box

Your working copy is now clean: all uncommitted local changes have been saved on this kind of "clipboard" that Git's Stash represents. You're ready to start your new task (for example by pulling changes from remote or simply switching branches).

Continuing Where You Left Off


As already mentioned, Git's Stash is meant as temporary storage. When you're ready to continue where you left off, you can restore the saved state easily:

$ git stash pop

The "pop" flag will reapply the last saved state and, at the same time, delete its representation on the Stash (in other words: it does the clean-up for you).
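Besides pop, a few related stash commands are worth knowing; a small sketch of how they are typically used (the stash entry shown matches the example above):

    $ git stash list                   # show everything currently on the stash
    stash@{0}: WIP on master: 2dfe283 Implement the new login box
    $ git stash apply stash@{0}        # reapply a stash but keep it on the list
    $ git stash drop stash@{0}         # delete it once you no longer need it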

First, why would I Need to Rebase Something?


Let's say you're a junior dev starting at a cupcake store called Cupid's Cupcakes.
It does lots of online selling, and has many experienced devs constantly
improving it. You're brought in to mostly work on the front-end.

Your first assignment is updating a card component. When people look for
cupcakes to buy, each is in one of these cards. So you go to the repo, pull the
most recent version of the master branch, create a new branch from that one,
and get to work!

A few commits later, you're all set. The card looks nicer, all the tests pass, and
you've even improved the mobile layout. All that's left is to merge your feature
branch back into master branch so it goes live!

But wait a moment!


Unsurprisingly, other people were working on the site while you were making
this card component.

§ One developer changed the navigation


§ One adjusted the database fields to remove unneeded info

§ Another added extra info about each cupcake

§ Someone else secretly embezzled money through the store's bank records

All these changes make you worry. What if someone merged a change that
affects or overlaps with the ones you made? It could lead to bugs in the cupcake
website! If you look at the different changes made, one does! (Another change
should be reported to the police, but that's actually less important). Is there a
safe way to merge your changes without risking any conflicts, and missing out on
all the other changes made?

Situations like these are a big example of when you'd want to rebase.

What are the details of Rebasing?


Let's say when you created your branch off of the master branch, the master
branch was on commit #1. Every commit in your branch was layered on top of
commit #1. When you're ready to merge your branch, you discover other
developers made changes and the most recent commit is commit #5.

Rebasing is taking all your branch's commits and adding them on top of
commit #5 instead of commit #1. If you consider commit #1 as the "base" of
your branch, you're changing that base to the most recent one, commit #5.
Hence why it's called rebasing!
Okay, so HOW do I Rebase something?
So you've got this great card component for Cupid's Cupcakes. Now that you
know what a rebase is, let's look at the how in more detail.

First, make sure you have the most up-to-date version of the branch you're
rebasing on. Let's keep assuming it's the master branch in this example. Run git
checkout master to, y'know, check it out, and then run git pull to get the most
recent version. Then check out your branch again - here it'd be with git checkout updated-card or something similar.
A straightforward rebase has a pretty simple command structure: git rebase
<branch>. branch is the one you're rebasing off of. So here you'd run git rebase
master. Assuming there's no conflicts, that's all the rebase needs!
The rebase itself technically removes your old commits and makes new commits
identical to them, rewriting the repo's commit history. That means pushing the
rebase to the remote repo will need some extra juice. Using git push --force will do
the trick fine, but a safer option is git push --force-with-lease. The latter will alert
you of any upstream changes you hadn't noticed and prevent the push. This way
you avoid overwriting anyone else's work, so it's the safer option.
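Putting those commands together, the whole flow for the example above looks roughly like this:

    git checkout master              # switch to the branch you are rebasing onto
    git pull                         # make sure it is up to date
    git checkout updated-card        # back to your feature branch
    git rebase master                # replay your commits on top of the latest master
    git push --force-with-lease      # safely update the remote copy of your branch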
With all that, your rebase is now complete! However, rebases won't always go so
smoothly...

How do I Handle Rebase Conflicts?


Remember how we worried our new card would conflict with someone else's
changes? Turns out, one does! One developer added extra info onto the new
cupcake card, such as calorie count or how many elves it takes to make it at
night. The updated markup from both sets of changes is in the same lines - this
means the rebase can't happen automatically. Git won't know which parts of the
changes to keep and which to remove. It must be resolved!

Thankfully, git makes this very easy. During the rebase, git adds each commit
onto the new base one by one. If it reaches a commit with a conflict, it'll pause
the rebase and resume once it's fixed.

If you've dealt with merge conflicts before, rebase conflicts are handled
essentially the same way. Running git status will tell you where the conflicts are,
and the two conflicting sections of code will be next to each other so you can
decide how to fix them.
Once everything is fixed, add and commit the changes like you would a normal
merge conflict. Then run git rebase --continue so git can rebase the rest of your
commits. It'll pause for any more conflicts, and once they're set you just need to
push --force-with-lease.
There are two lesser-used options you could also use. One is git rebase --abort, which
would bring you back to before you started the rebase. It's useful for
unexpected conflicts that you can't rush a decision for. Another is git rebase --skip,
which skips over the commit causing the conflict altogether. Unless it's an
unneeded commit and you're feeling lazy, you likely won't use it much.
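In command form, a conflicted rebase is usually resolved like this (the file name is a placeholder):

    git rebase master            # pauses when it hits a conflict
    git status                   # shows which files are conflicted
    # edit the conflicted files, then:
    git add <fixed-file>
    git rebase --continue        # replay the remaining commits
    # or, if needed:
    git rebase --abort           # give up and return to the pre-rebase state
    git rebase --skip            # drop the conflicting commit entirely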

Git has taken the programming community by storm. Hundreds of thousands of organizations and developers are starting to use Git as their Version Control System (VCS). But you might wonder: what makes Git so special?

In this post, I'm going to dig into one of the more mysterious aspects of Git --- the 3-Tree Architecture.

To get started, let's first take a look at how a typical VCS works. Usually, a VCS works by having two places to store things:

1. Working Copy
2. Repository

The working copy is the place where you make your changes. Whenever you edit something, it is saved in the working copy, which is physically stored on disk.

The repository is the place where all the versions of the files, the commits, the logs, etc. are stored. It is also saved on disk and has its own set of files.

You cannot, however, change or get the files in a repository directly; to retrieve a specific file from there, you have to check it out.

Checking out is the process of getting files from the repository into your working copy. This is because you can only edit files when they are in your working copy. When you are done editing a file, you save it back to the repository by committing it.

Committing is the process of putting the files back from the working copy into the repository.

3-Tree Architecture vs 2-Tree Architecture

In this process, the Working Copy and the Repository are saved on disk as a series of folders and files, like a tree: a folder represents a branch of the tree and a file represents a leaf. Hence, this architecture is called the 2-Tree Architecture, because you have two trees in there -- Working Copy and Repository. The most famous VCS with this kind of architecture is Subversion (SVN).

Now that you know what a 2-Tree Architecture looks like, it is interesting to note that Git has a different one: it is instead powered by 3 trees!

Why three, you might ask?

Well, Git also has the Working Copy and the Repository, but it adds an extra tree in between.

As you can see, there is a new tree called Staging. What is this for?

This is one of the fundamental differences of Git that sets it apart from other VCSs. The Staging tree (usually termed the Staging area) is a place where you prepare all the things that you are going to commit.

In Git, you don't move things directly from your working copy to the repository; you have to stage them first. One of the main benefits of this is, let's say:

You made changes to 10 of your files; 2 of the files are related to fixing an alignment issue in a webpage, while the other 8 changed files are related to the database connection.
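With the staging area, those two groups of changes can go into two separate, focused commits; a sketch with invented file names:

    git add styles.css header.css            # stage only the files for the alignment fix
    git commit -m "Fix alignment issue"
    git add db.php connection.php            # now stage the database-related files
    git commit -m "Update database connection"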
Maven :
What is Maven?
Maven is a project management and comprehension tool that provides developers a complete build
lifecycle framework. Development team can automate the project's build infrastructure in almost no time as
Maven uses a standard directory layout and a default build lifecycle.
In a multiple-development-team environment, Maven can set up the way to work as per standards in a very short time. As most of the project setups are simple and reusable, Maven makes the life of a developer easy while creating reports, checks, and build and testing automation setups.

This Maven section covers the basic and advanced concepts of Apache Maven and is aimed at beginners and professionals alike.

Maven is a powerful project management tool that is based on the POM (Project Object Model). It is used for project builds, dependency management and documentation.

Understanding the problem without Maven

There are many problems that we face during project development. They are discussed below:
1) Adding a set of JARs in each project: In the case of Struts, Spring and Hibernate frameworks, we need to add a set of JAR files in each project. It must also include all the dependencies of those JARs.
2) Creating the right project structure: We must create the right project structure in servlet, Struts etc., otherwise it will not be executed.
3) Building and deploying the project: We must build and deploy the project so that it may work.

What it does?

Maven simplifies the above mentioned problems. It mainly does the following tasks:
1. It makes a project easy to build.
2. It provides a uniform build process (a Maven project can be shared by all Maven projects).
3. It provides project information (log document, cross-referenced sources, mailing list, dependency list, unit test reports etc.).
4. It is easy to migrate to new features of Maven.

What is Build Tool

A build tool takes care of everything required for building a project. It does the following:
o Generates source code (if auto-generated code is used)
o Generates documentation from source code
o Compiles source code
o Packages compiled code into a JAR or ZIP file
o Installs the packaged code in the local repository, server repository, or central repository

POM stands for Project Object Model. It is the fundamental unit of work in Maven: an XML file that resides in the base directory of the project as pom.xml. The POM contains information about the project and the various configuration details used by Maven to build the project(s). It also contains default values for most projects; examples are the build directory, which is target; the source directory, which is src/main/java; the test source directory, which is src/test/java; and so on.

The POM was renamed from project.xml in Maven 1 to pom.xml in Maven 2. Instead of having a maven.xml file that contains the goals that can be executed, the goals or plugins are now configured in the pom.xml. The POM also contains the goals and plugins. When executing a task or goal, Maven looks for the POM in the current directory, reads it, gets the needed configuration information, and then executes the goal.

Some of the configuration that can be specified in the POM is the following −

● project dependencies
● plugins
● goals
● build profiles
● project version
● developers
● mailing list
Before creating a POM, we should first decide the project group (groupId), its name (artifactId) and its version, as these attributes help in uniquely identifying the project in the repository.

POM Example

<project xmlns="http://maven.apache.org/POM/4.0.0"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
   http://maven.apache.org/xsd/maven-4.0.0.xsd">

   <modelVersion>4.0.0</modelVersion>
   <groupId>com.companyname.project-group</groupId>
   <artifactId>project</artifactId>
   <version>1.0</version>

</project>

It should be noted that there should be a single POM file for each project.
● All POM files require the project element and three mandatory fields: groupId, artifactId,
version.
● A project's notation in the repository is groupId:artifactId:version.
● Minimal requirements for a POM −

Sr.No.  Node & Description

1  Project root
   This is the project root tag. You need to specify the basic schema settings such as the Apache schema and the w3.org specification.

2  Model version
   The model version should be 4.0.0.

3  groupId
   This is the id of the project's group. It is generally unique amongst an organization or a project. For example, a banking group com.company.bank has all bank related projects.

4  artifactId
   This is the id of the project. It is generally the name of the project, for example consumer-banking. Along with the groupId, the artifactId defines the artifact's location within the repository.

5  version
   This is the version of the project. Along with the groupId, it is used within an artifact's repository to separate versions from each other. For example:
   com.company.bank:consumer-banking:1.0
   com.company.bank:consumer-banking:1.1

artifactId is the name of the jar without the version. If you created it, then you can choose whatever name you want, with lowercase letters and no strange symbols. If it's a third-party jar, you have to take the name of the jar as it's distributed, e.g. maven, commons-math.
groupId identifies your project uniquely across all projects, so we need to enforce a naming schema. It has to follow the package name rules, which means it has to be at least a domain name you control, and you can create as many subgroups as you want. Look at "More information about package names", e.g. org.apache.maven, org.apache.commons.

Install Maven :

We need Java on the server before installing Maven.

As shown on the terminal, we can run sudo yum update to apply all updates. If we enter this command, a list of updates will be shown. Then enter "y" to continue and complete the update.

Check Java using java and javac. When we enter java and javac respectively, we get two messages returned.

For most AWS instances a JRE is installed, which means we can run Java programs on them. However, some AWS instances do not have a JDK, so we need to install the JDK manually by running:

yum install java-devel

Now we can use javac to compile our Java classes.

Edit a Java class using vim and execute the Java program

We can use vim to create a Java file on the instance. For example, we want to create a Java class named "Hello.java" that prints "Hello World!" on the screen. We can run the following command:

vim Hello.java

It will create a file named "Hello.java" under the current directory and switch to the viewing mode. By typing i, you enter the editing mode. Now type the following code:

public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}

Then press Esc to leave the editing mode, input :w to save the content and input :q to quit.

Now we are ready to compile the Java file and run it:

javac Hello.java
java Hello
Now we can install maven :

wget http://mirror.olnevhost.net/pub/apache/maven/maven-3/3.0.5/binaries/apache-maven-3.0.5-bin.tar.gz

Basically, just go to the Maven site, find the version of Maven and the file type you want, and use that mirror in the wget statement above. Afterwards the process is easy:

1. Run the wget command from the directory you want to extract Maven to.
2. Run the following to extract the tar:
   tar xvf apache-maven-3.0.5-bin.tar.gz
3. Move Maven to /usr/local/apache-maven:
   mv apache-maven-3.0.5 /usr/local/apache-maven
4. Add the environment variables to your ~/.bashrc file:
   export M2_HOME=/usr/local/apache-maven
   export M2=$M2_HOME/bin
   export PATH=$M2:$PATH
5. Reload the file:
   source ~/.bashrc
6. Verify everything is working with the following command:

mvn -version

Alternatively, install from a yum repository:

sudo wget http://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo -O /etc/yum.repos.d/epel-apache-maven.repo
sudo sed -i s/\$releasever/6/g /etc/yum.repos.d/epel-apache-maven.repo
sudo yum install -y apache-maven
mvn --version

Create a folder and start building the code:

mkdir project
cd project/
mvn archetype:generate
mvn clean package

Alternatively, take the code from Git, add the pom.xml file and build the code. Let's do this process manually.

What is Build Lifecycle?


A Build Lifecycle is a well-defined sequence of phases, which define the order in which the goals are to be
executed. Here phase represents a stage in life cycle. As an example, a typical Maven Build
Lifecycle consists of the following sequence of phases.

Phase - Handles - Description

prepare-resources - resource copying
    Resource copying can be customized in this phase.

validate - validating the information
    Validates if the project is correct and if all necessary information is available.

compile - compilation
    Source code compilation is done in this phase.

test - testing
    Tests the compiled source code using a suitable testing framework.

package - packaging
    This phase creates the JAR/WAR package as mentioned in the packaging in pom.xml.

install - installation
    This phase installs the package in the local/remote Maven repository.

deploy - deploying
    Copies the final package to the remote repository.

There are always pre and post phases to register goals, which must run prior to, or after a particular phase.
When Maven starts building a project, it steps through a defined sequence of phases and executes goals,
which are registered with each phase.
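Because the phases run in order, invoking one phase also runs every phase before it; for example:

    mvn package    # runs validate, compile and test first, then creates the JAR/WAR
    mvn install    # runs everything up to package, then copies the artifact into the local repository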

cd test/
git clone https://github.com/vinayRaj98/vinayproject
ls
mvn
mvn archetype:generate
ls
cd test1/
ls
cd ..
ls
mvn clean package
Maven Repository
A Maven repository is a directory of packaged JAR files along with pom.xml files. Maven searches for dependencies in the repositories. There are 3 types of Maven repository:
1. Local Repository
2. Central Repository
3. Remote Repository

Maven searches for the dependencies in the following order:


Local repository then Central repository then Remote repository.

If dependency is not found in these repositories, maven stops processing and throws an error.

1) Maven Local Repository

Maven local repository is located in your local system. It is created by the maven when you run any maven
command.
By default, maven local repository is %USER_HOME%/.m2 directory. For example: C:\Users\SSS IT\.m2.
Update location of Local Repository

We can change the location of maven local repository by changing the settings.xml file. It is located
in MAVEN_HOME/conf/settings.xml, for example: E:\apache-maven-3.1.1\conf\settings.xml.
Let's see the default code of settings.xml file.
settings.xml
...
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <!-- localRepository
   | The path to the local repository maven will use to store artifacts.
   |
   | Default: ${user.home}/.m2/repository
  <localRepository>/path/to/local/repo</localRepository>
  -->
  ...
</settings>
Now change the path to local repository. After changing the path of local repository, it will look like this:
settings.xml
...
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository>e:/mavenlocalrepository</localRepository>
  ...
</settings>
As you can see, now the path of local repository is e:/mavenlocalrepository.

2) Maven Central Repository

Maven central repository is located on the web. It has been created by the apache maven community itself.
The path of central repository is: http://repo1.maven.org/maven2/.
The central repository contains a lot of common libraries that can be viewed by this
url http://search.maven.org/#browse.

3) Maven Remote Repository

A Maven remote repository is located on the web. Some libraries can be missing from the central repository, such as the JBoss library, so we need to define a remote repository in the pom.xml file.
Let's see the code to add the jUnit library in pom.xml file.
pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
  http://maven.apache.org/xsd/maven-4.0.0.xsd">

  <modelVersion>4.0.0</modelVersion>

  <groupId>com.javatpoint.application1</groupId>
  <artifactId>my-application1</artifactId>
  <version>1.0</version>
  <packaging>jar</packaging>

  <name>Maven Quick Start Archetype</name>
  <url>http://maven.apache.org</url>

  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.8.2</version>
      <scope>test</scope>
    </dependency>
  </dependencies>

</project>
You can search any repository from Maven official website mvnrepository.com.
What are Maven Plugins?
Maven is actually a plugin execution framework where every task is actually done by plugins. Maven
Plugins are generally used to −

● create jar file


● create war file
● compile code files
● unit testing of code
● create project documentation
● create project reports
A plugin generally provides a set of goals, which can be executed using the following syntax −

mvn [plugin-name]:[goal-name]
For example, a Java project can be compiled with the maven-compiler-plugin's compile-goal by running the
following command.

mvn compiler:compile
Plugin Types
Maven provides the following two types of plugins −

Sr.No. Type & Description

1 Build plugins
They execute during the build process and should be configured in the <build/>
element of pom.xml.

2 Reporting plugins
They execute during the site generation process and they should be configured in
the <reporting/> element of the pom.xml.

Following is the list of few common plugins −

Sr.No. Plugin & Description

1 clean
Cleans up target after the build. Deletes the target directory.

2 compiler
Compiles Java source files.

3 surefire
Runs the JUnit unit tests. Creates test reports.

4 jar
Builds a JAR file from the current project.

5 war
Builds a WAR file from the current project.

6 javadoc
Generates Javadoc for the project.
7 antrun
Runs a set of Ant tasks from any phase of the build.

Next, open the command console and go to the folder containing pom.xml and execute the
following mvn command.

C:\MVN\project>mvn clean
Maven will start processing and displaying the clean phase of clean life cycle.

[INFO] Scanning for projects...


[INFO] ------------------------------------------------------------------
[INFO] Building Unnamed - com.companyname.projectgroup:project:jar:1.0
[INFO] task-segment: [post-clean]
[INFO] ------------------------------------------------------------------
[INFO] [clean:clean {execution: default-clean}]
[INFO] [antrun:run {execution: id.clean}]
[INFO] Executing tasks
[echo] clean phase
[INFO] Executed tasks
[INFO] ------------------------------------------------------------------
[INFO] BUILD SUCCESSFUL
[INFO] ------------------------------------------------------------------
[INFO] Total time: < 1 second
[INFO] Finished at: Sat Jul 07 13:38:59 IST 2012
[INFO] Final Memory: 4M/44M
[INFO] ------------------------------------------------------------------

Versions :

Release Artifacts
These are specific, point-in-time releases. Released artifacts are considered to be solid, stable, and perpetual in order to guarantee that builds which depend upon them are repeatable over time. Released JAR artifacts are associated with PGP signatures and checksums that verify both the authenticity and integrity of the binary software artifact. The Central Maven repository stores release artifacts.

Snapshot Artifacts
Snapshots capture a work in progress and are used during development. A Snapshot artifact has both a
version number such as “1.3.0” or “1.3” and a timestamp. For example, a snapshot artifact for
commons-lang 1.3.0 might have the name commons-lang-1.3.0-20090314.182342-1.jar.

Transitive dependency in Maven :

There are two types of Maven dependencies:

● Direct: These are dependencies defined in your pom.xml file under the <dependencies/> section.
● Transitive: These are dependencies that are dependencies of your direct dependencies.
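To see which transitive dependencies your direct dependencies pull in, Maven can print the whole tree; for example (the exact output depends on your own pom.xml):

    mvn dependency:tree    # lists each direct dependency with its transitive dependencies indented below it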
Jenkins

Scripted vs. Declarative


The best way to explain the differences is using an example, so you can find below a Pipeline in
Scripted syntax and just after it the same version but translated to Declarative syntax.

The notes after each example call out the main differences.
Scripted style

properties([
    parameters([
        gitParameter(branch: '',
            branchFilter: 'origin/(.*)',
            defaultValue: 'master',
            description: '',
            name: 'BRANCH',
            quickFilterEnabled: false,
            selectedValue: 'NONE',
            sortMode: 'NONE',
            tagFilter: '*',
            type: 'PT_BRANCH')
    ])
])

def SERVER_ID = "artifactory"

node {
    stage("Checkout") {
        git branch: "${params.BRANCH}", url: 'https://github.com/sergiosamu/blog-pipelines.git'
    }
    stage("Build") {
        try {
            withMaven(maven: "Maven363") {
                sh "mvn package"
            }
        } catch (error) {
            currentBuild.result = 'UNSTABLE'
        }
    }
    stage("Publish artifact") {
        def server = Artifactory.server "$SERVER_ID"

        def uploadSpec = """{
            "files": [
                {
                    "pattern": "target/blog-pipelines*.jar",
                    "target": "libs-snapshot-local/com/sergiosanchez/pipelines/"
                }
            ]
        }"""

        server.upload(uploadSpec)
    }
}
Input parameters as defined in the properties section

Variables are defined in Groovy language

The first element of a Scripted Pipeline is node

Error control is managed with a try/catch clause in Groovy Syntax

Artifactory configuration is defined through variables


Declarative style

properties([
    parameters([
        gitParameter(branch: '',
            branchFilter: 'origin/(.*)',
            defaultValue: 'master',
            description: '',
            name: 'BRANCH',
            quickFilterEnabled: false,
            selectedValue: 'NONE',
            sortMode: 'NONE',
            tagFilter: '*',
            type: 'PT_BRANCH')
    ])
])

pipeline {
    agent any

    environment {
        SERVER_ID = 'artifactory'
    }

    stages {
        stage("Checkout") {
            steps {
                git branch: "${params.BRANCH}", url: 'https://github.com/sergiosamu/blog-pipelines.git'
            }
        }
        stage("Build") {
            steps {
                warnError("Unit tests failed") {
                    withMaven(maven: "Maven363") {
                        sh "mvn package"
                    }
                }
            }
        }
        stage("Publish artifact") {
            steps {
                rtUpload (
                    serverId: "$SERVER_ID",
                    spec: '''{
                        "files": [
                            {
                                "pattern": "target/blog-pipelines*.jar",
                                "target": "libs-snapshot-local/com/sergiosanchez/pipelines/"
                            }
                        ]
                    }'''
                )
            }
        }
    }
}

Input parameters are defined in the same way as in the Scripted Pipeline, because the properties section is outside the pipeline main structure.

The first element of a Declarative Pipeline is pipeline. This is the best way to identify a Declarative Pipeline.

Variables are defined in the environment section. No Groovy-like variable declarations are allowed in Declarative syntax.

A try/catch structure is not allowed, like any other Groovy syntax. The custom step warnError is used to manage the build state.

The Artifactory plugin provides a step to easily upload an artifact without requiring Groovy code.

Pipeline Syntaxes Declarative v/s Scripted


Jenkins pipeline supports two different syntaxes

1. Declarative Syntax
2. Scripted Syntax

Declarative Syntax
Declarative pipeline syntax offers an easy way to create pipelines. It contains a predefined
structure to create Jenkins pipelines. It gives you the ability to control all aspects of a pipeline
execution in a simple, straightforward manner.
Scripted Syntax
The scripted pipeline was the first syntax of the Jenkins pipeline. We use groovy script inside node
scope to define scripted pipeline, so it becomes a little bit difficult to start with for someone who
doesn’t have an idea about groovy. Scripted Jenkins pipeline runs on the Jenkins master with the
help of a lightweight executor. It uses very few resources to translate the pipeline into atomic
commands. Both declarative and scripted syntax are different from each other and we define them
differently.
Jenkinsfile (Declarative Pipeline)

pipeline {
    agent any

    tools {
        maven 'maven_3_5_0'
    }

    stages {
        stage('Checkout Code from Git') {
            steps {
                git 'https://github.com/SaumyaBhushan/Selenium_Test_Automation.git'
            }
        }
        stage('compile stage') {
            steps {
                bat "mvn clean compile"
            }
        }
        stage('testing stage') {
            steps {
                bat "mvn test"
            }
        }
    }
}

Jenkinsfile (Scripted Pipeline)

node {
    // In scripted syntax there is no tools block; the tool step exposes the configured Maven installation
    def mvnHome = tool 'maven_3_5_0'

    stage('Checkout Code from Git') {
        git 'https://github.com/SaumyaBhushan/Selenium_Test_Automation.git'
    }

    stage('Compile') {
        bat "${mvnHome}\\bin\\mvn clean compile"
    }

    stage('Test') {
        bat "${mvnHome}\\bin\\mvn test"
    }
}

Required field in syntax


In a scripted pipeline we write everything inside node{}, so node is the required field; it is equivalent to the pipeline and agent fields in declarative syntax. Node is a crucial first step in a scripted pipeline as it allocates an executor and workspace for the Pipeline; without a node, a Pipeline cannot do any work.
In declarative syntax, the pipeline block must be top-level. After that, we write the agent field, which says on which agent the pipeline should execute. "agent any" means it will execute on any available agent. The agent directive instructs Jenkins to allocate an executor and workspace for the Pipeline. Without an agent directive, not only is the Declarative Pipeline not valid, it would not be capable of doing any work!
The next one is stages, where we define all the jobs or tasks that are going to be done. Inside the stages field, we can define various stages and steps. Inside steps, we write the actual script that will execute, like mvn test, mvn install, etc.
Post Attribute in Jenkinsfile
It executes the mentioned logic after the execution of all the stages. Inside post there are different conditions that you can execute. These conditions are:

● always (an example can be sending an email to the team after the builds run)
● success
● failure

post {

    always {
        // this condition always gets executed, no matter whether the build has failed or succeeded
    }

    success {
        // execute scripts that are only relevant when the build succeeds
    }

    failure {
        // execute scripts that are only relevant when the build fails
    }
}

Environmental Variable in Jenkinsfile


Jenkins Pipeline exposes environment variables via the global variable env, which is available from anywhere within a Jenkinsfile.
The full list of environment variables accessible from within Jenkins Pipeline is documented at localhost:8080/pipeline-syntax/globals#env, or http://localhost:8080/env-vars.html/, assuming a Jenkins master is running on localhost:8080, and includes:

BUILD_NUMBER - The current build number, such as "153".
BUILD_ID - The current build ID, identical to BUILD_NUMBER for builds created in 1.597+, but a YYYY-MM-DD_hh-mm-ss timestamp for older builds.
BUILD_DISPLAY_NAME - The display name of the current build, which is something like "#153" by default.
JOB_NAME - Name of the project of this build, such as "foo" or "foo/bar".
BUILD_TAG - String of "jenkins-${JOB_NAME}-${BUILD_NUMBER}". All the forward slashes ("/") in the JOB_NAME are replaced with dashes ("-"). Convenient to put into a resource file, a jar file, etc. for easier identification.
EXECUTOR_NUMBER - The unique number that identifies the current executor (among executors of the same machine) that is carrying out this build. This is the number you see in the "build executor status", except that the number starts from 0, not 1.
NODE_NAME - Name of the agent if the build is on an agent, or "master" if run on the master.
NODE_LABELS - The labels of the node, separated by whitespace.
WORKSPACE - The absolute path of the directory assigned to the build as a workspace.

There is a long list, so you can check it from the above link.
Setting up an Environmental Variable
You can set environment variables depending on which syntax you are following; it is different for Declarative and Scripted Pipelines.
Declarative Pipeline supports an environment directive, whereas users of Scripted Pipeline must use the withEnv step.
Conditionals in Jenkinsfile / when statement
Suppose you only want to run the tests on the development branch build and you don't want to run tests for other builds. What you can do here is, inside the stage block, define a when expression which says when this stage should execute.

stage('compile stage') {

    when {
        expression {
            // environment variables
            BRANCH_NAME == 'dev' || BRANCH_NAME == 'master' && CODE_CHANGES == true
        }
    }

    steps {
        bat "mvn clean compile"
    }
}

This part of the stage will only execute if the current branch is dev; if not, it is just going to be skipped. You can also apply a boolean expression in case you only want to run that step when some condition is true, like CODE_CHANGES == true.
Ansible :

Why do we need Configuration Management tool?

Anyone who works as an operations engineer has witnessed a bunch of issues with the manual configuration approach and the many repetitive tasks which are time-consuming. How many times have key resources left the company, while the new engineers struggle to understand the environment and to start performing the tasks without escalation? Server configuration is a very broad landscape which needs to be maintained properly from the beginning. Organization standards will be documented in a knowledge base, but people will forget or miss following them due to resource crunch, laziness or skill gaps. Scripting is one option to automate and maintain the configuration, but it's not an easy task.

What is Ansible?

A configuration management and orchestration tool is the solution to eliminate all these problems in system management. Ansible is one of the most popular ones and is supported by Red Hat. Ansible is a simple IT automation engine that saves time and makes teams more productive, so human resources can spend more time on innovation and make the operation more cost-effective.

Why Ansible?

● Ansible is free and Open Source.


● Agentless. Ansible doesn't require any agent on client machines, unlike other automation tools in the market (Puppet, Chef, Salt). It uses the SSH protocol to connect to the servers. Ansible requires Python on client machines to make use of modules, but it also works with systems that don't have Python installed by using the "raw" module.
● Ansible uses YAML language which is very easy to learn.
● Supported by Red Hat.

How does Ansible work?

Ansible works by connecting to your servers using SSH and pushing out small programs, called "Ansible modules", to them. Using these modules and playbooks (small pieces of YAML code), we are able to perform specific tasks on all the Ansible clients. The specific task could be installing packages, restarting services, rebooting servers, etc. There are lots of things that you can do using Ansible.
Ansible – Tower

Ansible Use cases

● Provisioning
● Configuration Management
● App Deployment
● Continuous Delivery
● Security & Compliance
● Orchestration

Ansible – Supported Operating Systems

● Linux, including RHEL, CentOS, Fedora, Ubuntu, and others.


● Windows and Windows Server
● UNIX
● OS X

Ansible – Supported Hypervisors

● VMware
● Red Hat Enterprise Virtualization (RHEV)
● Libvirt
● Xenserver
● Vagrant
Passwordless :

SSH passwordless authentication :

1. Create two servers (one Ansible control node and one managed node).

2. Copy the pem of the node to the Ansible control node:
   scp -i ~/Downloads/linuxpem.pem ~/Downloads/linuxpem.pem ubuntu@13.232.161.106:/home/ubuntu/

3. Create a key pair on the control node with ssh-keygen.

4. ssh -i linuxpem.pem ubuntu@172.31.16.8 mkdir -p .ssh
5. cat .ssh/id_rsa.pub | ssh -i linuxpem.pem ubuntu@172.31.16.8 'cat >> .ssh/authorized_keys'
6. ssh -i linuxpem.pem ubuntu@172.31.16.8 "chmod 700 .ssh; chmod 640 .ssh/authorized_keys"
7. ssh ubuntu@172.31.6.235

Install ansible :

sudo apt-get update -y


sudo apt-get install software-properties-common
sudo apt-add-repository -y ppa:ansible/ansible
sudo apt-get update
sudo apt-get install -y ansible
Playbooks are the files where Ansible code is written. Playbooks are written in YAML format. YAML stands for "YAML Ain't Markup Language" (it originally stood for Yet Another Markup Language). Playbooks are one of the core features of Ansible and tell Ansible what to execute. They are like a to-do list for Ansible that contains a list of tasks.
Playbooks contain the steps which the user wants to execute on a particular machine. Playbooks are run
sequentially. Playbooks are the building blocks for all the use cases of Ansible.
Playbook Structure
Each playbook is an aggregation of one or more plays in it. Playbooks are structured using Plays. There can
be more than one play inside a playbook.
The function of a play is to map a set of instructions defined against a particular host.
YAML is strict about indentation and typing, so extra care needs to be taken while writing YAML files. There are different YAML editors, but we will prefer to use a simple editor like Notepad++. Just open Notepad++, copy and paste the YAML below, and change the language to YAML (Language → YAML).
A YAML file starts with --- (3 hyphens).
Create a Playbook
Let us start by writing a sample YAML file. We will walk through each section written in a yaml file.

---
- name: install and configure DB
  hosts: testServer
  become: yes

  vars:
    oracle_db_port_value: 1521

  tasks:
    - name: Install the Oracle DB
      yum: <code to install the DB>

    - name: Ensure the installed service is enabled and running
      service:
        name: <your service name>

The above is a sample Playbook where we are trying to cover the basic syntax of a playbook. Save the
above content in a file as test.yml. A YAML syntax needs to follow the correct indentation and one needs to
be a little careful while writing the syntax.
The Different YAML Tags
Let us now go through the different YAML tags. The different tags are described below −
name
This tag specifies the name of the Ansible playbook. As in what this playbook will be doing. Any logical
name can be given to the playbook.
hosts
This tag specifies the lists of hosts or host group against which we want to run the task. The hosts field/tag
is mandatory. It tells Ansible on which hosts to run the listed tasks. The tasks can be run on the same
machine or on a remote machine. One can run the tasks on multiple machines and hence hosts tag can have
a group of hosts’ entry as well.
vars
Vars tag lets you define the variables which you can use in your playbook. Usage is similar to variables in
any programming language.
tasks
All playbooks should contain tasks or a list of tasks to be executed. Tasks are a list of actions one needs to perform. A tasks field contains the name of the task. This works as the help text for the user. It is not mandatory, but proves useful in debugging the playbook. Each task internally links to a piece of code called a module: the module that should be executed, and the arguments required for that module.

This is a summary of the Ansible Components :

1- Inventories :

-Static or Local /etc/ansible/hosts

-Can be called from a different file via the " -i " option

-Can also be Dynamic , can be provided via a program
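As a rough illustration (group and host names are invented), a static inventory file and the -i option look like this:

    # myhosts.ini
    [webservers]
    web1.example.com
    web2.example.com

    [dbservers]
    db1.example.com

    ansible-playbook -i myhosts.ini site.yml    # use this inventory instead of /etc/ansible/hosts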

########################################################

2- Modules :

-Modules are the tools in the workshop

-Ansible has many Modules which can be run directly or via playbooks against hosts " local and remote "

-Like " yum , ping " Modules
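For example, modules can also be run ad-hoc from the command line, without a playbook (the webservers group name is just an example):

    ansible all -m ping                                           # run the ping module against every host in the inventory
    ansible webservers -b -m yum -a "name=httpd state=present"    # use the yum module with become to install a package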

########################################################

3- Variables :

-Variables are how we deal with the differences between systems, since not all systems are the same

-Variable names should contain only letters, numbers and underscores

-Variables should always start with a letter

-Variables can be defined in the inventory and playbook


########################################################

4- Ansible Facts

-Facts are the way of getting data from systems

-You can use these facts in playbook variables

-Gathering facts can be disabled in a playbook :

- it is not always required

- can speed up execution :

- hosts: mainhosts
  gather_facts: no

########################################################

5- Play and Playbooks

-Playbooks are the instruction manuals, the hosts are the raw materials

-A playbook is made up of individual plays

-A play is a set of tasks mapped to a group of hosts

-Playbooks are in YAML Format

########################################################

6- Configuration Files :

-The default is " /etc/ansible/ansible.cfg "

-The config file is read when a Playbook is run

-We can use config files other than the default as follow :

- ANSIBLE_CONFIG (an environmental Variable)

- ansible.cfg ( in the current directory )

- ~/.ansible.cfg ( in the home directory )

- /etc/ansible/ansible.cfg

########################################################

7- Templates :

-Is the definition and set of parameters for running an ansible job
-Job templates are useful to execute the same job many times

-Variables can be used in the templates

########################################################

8- Handlers :

-A task in a Playbook can trigger a handler (via notify)

-Used to run follow-up actions, such as restarting a service, only when a task reports a change

-Called at the end of each play

########################################################

9- Roles :

-A Playbook is a standalone file Ansible runs to set up your servers

-Roles can be thought of as a playbook that's split into multiple files :

- One file for tasks , one for variables , one for handlers

-They are the method to package up tasks, handlers and everything else

########################################################

10- Ansible Vault :

-Ansible Vault is a secure store

-It allows Ansible to keep sensitive data :

- Passwords

- Encrypted Files

-ansible-vault command is used to edit files

-The command line flag " --ask-vault-pass " or " --vault-password-file "
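A minimal usage sketch (the file names are examples only):

    ansible-vault create secrets.yml                                # create a new encrypted file
    ansible-vault edit secrets.yml                                  # edit it again later
    ansible-playbook site.yml --ask-vault-pass                      # prompt for the vault password at run time
    ansible-playbook site.yml --vault-password-file ~/.vault_pass   # or read the password from a file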

Roles:

Roles are really all about keeping ourselves organised.

If left unchecked, our Playbooks can quickly become large and unwieldy.
Ansible uses the concept of Roles to address this problem. By following a standardised directory structure, we can keep our Ansible project structure nice and tidy, along with maintaining some semblance of sanity and order.

So really, a Role is nothing more than a more specific Playbook. We already have
covered the basics of Playbooks, and a Role takes the concept of a Playbook and
applies it to a common set of tasks to achieve a specific thing.

That sounds quite vague. An example may serve us better.

Let us imagine we have a list of common tasks we always want to perform on every
server we manage.

We want to install some software (git, curl, htop, whatever), we want our authorised
SSH keys to be set so we don't have to muck about with passwords, and it'd be quite
nice if our User accounts were created, along with our standard home directory
structure.
We could think about these as our 'Common' tasks.

This would make a perfect Role. The 'Common Role'.

With a Common role defined, we can then remove all that standard set up from every
Playbook we have, and simply request that the Playbook includes that Role when it
executes.

In many ways, it's pretty similar to Traits in PHP.

Ansible Galaxy - Home of Many, Many Roles


Now, we will come back to Ansible Galaxy in more depth in a future video, but I want
to cover it here - briefly - because of its relation to Roles.

I would strongly encourage you to browse the Ansible Galaxy as it's really what
piqued my interest in Ansible, when compared to other similar infrastructure
automation systems like Chef and Puppet.

Ansible Galaxy is like the Apple App Store for geeks. Think of any 'thing' you might
want to play with - Redis, Jenkins, Blackfire, Logstash, NodeJs - and there will, more
likely than not, be a Role created by a friendly community member to download and
use with almost no effort.
Of course, life is never that easy, and many of the Roles on Ansible Galaxy will need
at least a basic grasp of the software you are trying to install, before you can make the
most of the Role in your own setup.

Again, we will come back to Ansible Galaxy in more detail in a future video.

Role Your Own


As already mentioned, by following this standard Role directory structure, we can
leverage the powers of Ansible to organise our infrastructure into subsets of repeatable
tasks, which can be easily read and understood by our Playbooks.

We covered using ansible-galaxy init your-role-name-here in the video on using Git with
Ansible, but not a lot was said on why we were doing that.
Using ansible-galaxy init will generate us a standardised directory structure for our
Role.
We can then populate the individual files and folders with our own data, and bonza, we
have a working Role.

I recommend following the method I used in the Git with Ansible video as we likely
won't be working locally on the server, so won't have easy access to
the ansible-galaxy command every time we want to create a new Role.
Simply, if we create our Role using ansible-galaxy then all the files we need
- /tasks/main.yml, /handlers/main.yml, vars/main.yml, etc, will be created for us already,
and we can just copy and paste our existing Playbook entries into the files and life will
be good.
Creating those files by hand isn't a problem - nor does ansible-galaxy do anything
particularly special - it's just a time saver.
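As a quick sketch (the role name common is just an example, and the exact layout can vary a little between Ansible versions):

    ansible-galaxy init common      # generates the standard role skeleton
    ls common
    # defaults  files  handlers  meta  README.md  tasks  templates  tests  vars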

Example
In the video we migrate from having all our Apache set up - tasks and handlers - in one
Playbook, and instead, we start moving these blocks of config (the tasks block, the
handlers block) into their own yaml
files: roles/apache/tasks/main.yml and roles/apache/handlers/main.yml.
The actual file contents don't change, only their locations.

We start off with our original apache-playbook.yml - and remember, this is merely a
demonstration, this won't get you a working Apache install at this stage:
First of all, we need to create our Apache Role:

cp -R roles/__template__ roles/apache
That creates us the desired role structure.

apache-playbook.yml contents:

---
- hosts: all

  vars:
    - website_dir: /var/www/oursite.dev/web

  tasks:
    - name: Install Apache
      apt: pkg=apache2 state=installed update_cache=true
      notify:
        - start apache

    - name: Create website directory
      file: dest={{ website_dir }} mode=775 state=directory owner=www-data group=www-data
      notify:
        - restart apache

  handlers:
    - name: start apache
      service: name=apache2 state=started

    - name: restart apache
      service: name=apache2 state=restarted


And we then extract the tasks section into roles/apache/tasks/main.yml.
Notice, we don't need the tasks section heading.

---
- name: Install Apache
  apt: pkg=apache2 state=installed update_cache=true
  notify:
    - start apache

- name: Create website directory
  file: dest={{ website_dir }} mode=775 state=directory owner=www-data group=www-data
  notify:
    - restart apache
Next, extract the handlers section into roles/apache/handlers/main.yml.
Again, we don't need the handlers section heading.

---
- name: start apache
  service: name=apache2 state=started

- name: restart apache
  service: name=apache2 state=restarted


Now, back in apache-playbook.yml, we can remove the entire tasks and handlers sections, and replace them with roles.
We use the standard yml syntax for listing our roles, of which we could have more than one.

---
- hosts: all

  vars:
    - website_dir: /var/www/oursite.dev/web

  roles:
    - apache

We could go further and extract the variables out also - it's the exact same process.

Running our apache-playbook.yml still works exactly the same as before the change:

ansible-playbook apache-playbook.yml -k -K -s
But note, the output changes ever so slightly.

We will now see the Role name as part of the task:

GATHERING FACTS ***

ok: [127.0.0.1]

TASK: [apache | Install Apache] ***

ok: [127.0.0.1]

*cut*
Notice the [apache | Install Apache] line - this now takes the format of:
[role name | task name]
This can be helpful for identifying where things are as your Playbooks grow in size
and complexity.

Demo machines :

3.93.186.254(ansible)
chmod 400 ec2ami.pem
ssh -i "ec2ami.pem" ubuntu@ec2-3-93-186-254.compute-1.amazonaws.com

3.88.11.7(machine1) docker
chmod 400 LinuxDemo.pem
ssh -i "LinuxDemo.pem" ubuntu@ec2-3-88-11-7.compute-1.amazonaws.com

54.224.110.214(machine2)
chmod 400 awsdemo.pem
ssh -i "awsdemo.pem" ubuntu@ec2-54-224-110-214.compute-1.amazonaws.com

---
- hosts: all
  become: yes

  tasks:
    - name: Ensure Chrony (for time synchronization) is installed.
      yum:
        name: chrony
        state: present

    - name: Ensure chrony is running.
      service:
        name: chronyd
        state: started
        enabled: yes

# The same as the above play, but in super-compact form!

- hosts: all
  become: yes
  tasks:
    - yum: name=chrony state=present
    - service: name=chronyd state=started enabled=yes
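To actually run a play like the one above you would point it at an inventory; a minimal sketch, assuming the play is saved as chrony-playbook.yml and your inventory file is called hosts:

ansible-playbook -i hosts chrony-playbook.yml -K    # -K prompts for the become (sudo) password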
---
- name: Download packer binaries
  unarchive: src={{ pl_packer_download_url }} remote_src=True dest=/usr/local/bin

- name: Set environment
  lineinfile: dest=/root/.bashrc line='export PATH="$PATH:/usr/local/bin/packer"' state=present
Docker : image / container

What is Docker ? – Docker is a containerization platform that packages your application and all its
dependencies together in the form of a docker container to ensure that your application works seamlessly in
any environment.
What is Container ? – A Docker Container is a standardized unit which can be created on the fly to deploy a
particular application or environment. It could be an Ubuntu container, a CentOS container, etc. to fulfil the
requirement from an operating system point of view. Also, it could be an application-oriented container
like a CakePHP container or a Tomcat-Ubuntu container, etc.
Let’s understand it with an example:
A company needs to develop a Java application. In order to do so, the developer will set up an environment
with a Tomcat server installed in it. Once the application is developed, it needs to be tested by the tester.
Now the tester will again set up a Tomcat environment from scratch to test the application. Once the
application testing is done, it will be deployed on the production server. Again, the production server needs an
environment with Tomcat installed on it, so that it can host the Java application. Notice that the same Tomcat
environment setup is done thrice. There are some issues that I have listed below with this approach:
1) There is a loss of time and effort.
2) There could be a version mismatch in the different setups, i.e. the developer & tester may have installed
Tomcat 7, whereas the system admin installed Tomcat 9 on the production server.
Now, I will show you how a Docker container can be used to prevent this loss.
In this case, the developer will create a Tomcat docker image (a Docker Image is nothing but a blueprint to
deploy multiple containers of the same configuration) using a base image like Ubuntu, which already
exists on Docker Hub (Docker Hub has some base docker images available for free). Now this image can
be used by the developer, the tester and the system admin to deploy the Tomcat environment. This is how
a docker container solves the problem.
However, now you might think that this can be done using Virtual Machines as well. However, there is a
catch if you choose to use a virtual machine. Let's see a comparison between a Virtual Machine and a Docker
Container to understand this better.

● Size – This parameter will compare Virtual Machine & Docker Container on the resources they
utilize.
● Startup – This parameter will compare them on the basis of their boot time.
● Integration – This parameter will compare their ability to integrate with other tools with ease.
Size
The following image explains how a Virtual Machine and a Docker Container utilize the resources allocated to
them.

Start-Up

When it comes to start-up, a Virtual Machine takes a lot of time to boot up because the guest operating system
needs to start from scratch, which will then load all the binaries and libraries. This is time consuming and
will prove very costly at times when quick startup of applications is needed. In the case of a Docker
container, since it runs on the host OS and does not boot a guest operating system, it starts in a matter of seconds.

Advantages :

CI Efficiency
Docker enables you to build a container image and use that same image across every step of the deployment
process. A huge benefit of this is the ability to separate non-dependent steps and run them in parallel. The
length of time it takes from build to production can be sped up notably.

Compatibility and Maintainability


Eliminate the “it works on my machine” problem once and for all. One of the benefits that the entire team
will appreciate is parity. Parity, in terms of Docker, means that your images run the same no matter which
server or whose laptop they are running on. For your developers, this means less time spent setting up
environments, debugging environment-specific issues, and a more portable and easy-to-set-up codebase.
Parity also means your production infrastructure will be more reliable and easier to maintain.

Simplicity and Faster Configurations


One of the key benefits of Docker is the way it simplifies matters. Users can take their own configuration,
put it into code, and deploy it without any problems. As Docker can be used in a wide variety of
environments, the requirements of the infrastructure are no longer linked with the environment of the
application.

Rapid Deployment
Docker manages to reduce deployment to seconds. This is due to the fact that it creates a container for every
process and does not boot an OS. Data can be created and destroyed without worry that the cost to bring it
up again would be higher than what is affordable.

Continuous Deployment and Testing


Docker ensures consistent environments from development to production. Docker containers are configured
to maintain all configurations and dependencies internally; you can use the same container from
development to production making sure there are no discrepancies or manual intervention.

Multi-Cloud Platforms
One of Docker’s greatest benefits is portability. Over the last few years, all major cloud computing providers,
including Amazon Web Services (AWS) and Google Compute Platform (GCP), have embraced Docker’s
availability and added individual support. Docker containers can be run inside an Amazon EC2 instance,
Google Compute Engine instance, Rackspace server, or VirtualBox, provided that the host OS supports
Docker. If this is the case, a container running on an Amazon EC2 instance can easily be ported between
environments, for example to VirtualBox, achieving similar consistency and functionality. Also, Docker
works very well with other providers like Microsoft Azure, and OpenStack, and can be used with various
configuration managers like Chef, Puppet, and Ansible, etc.

Isolation
Docker ensures your applications and resources are isolated and segregated. Docker makes sure each
container has its own resources that are isolated from other containers. You can have various containers for
separate applications running completely different stacks. Docker helps you ensure clean app removal since
each application runs on its own container. If you no longer need an application, you can simply delete its
container. It won’t leave any temporary or configuration files on your host OS.

On top of these benefits, Docker also ensures that each application only uses resources that have been
assigned to them. A particular application won’t use all of your available resources, which would normally
lead to performance degradation or complete downtime for other applications.

Security
The last of these benefits of using docker is security. From a security point of view, Docker ensures that
applications that are running on containers are completely segregated and isolated from each other, granting
you complete control over traffic flow and management. No Docker container can look into processes
running inside another container. From an architectural point of view, each container gets its own set of
resources ranging from processing to network stacks.

● Images can be version controlled as well and we build image once which runs in all environment.
Docker Architecture :
The basic architecture of Docker consists of 3 major parts:
1. Docker Host
2. Docker Client
3. Registry - dockerhub

Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the
heavy lifting of the building, running, and distributing your Docker containers.
The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote
Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX sockets or a
network interface.

The Docker Host


Docker Host runs the Docker Daemon. Docker Daemon listens for Docker requests.
Docker requests could be ‘docker run’, ‘docker build’, anything.
It manages docker objects such as images, containers, networks, and volumes.

The Docker Client


Docker Client is used to trigger Docker commands. When we send any command (docker build, docker run,
etc.) the docker client sends these commands to the Docker daemon, which will then deal with them.
Note: The Docker client can communicate with more than one daemon.

Docker Registries
The Registry is a stateless, highly scalable server-side application that stores and lets you distribute Docker
images. You can create your own image or you can use public registries namely, Docker Hub and Docker
Cloud. Docker is configured to look for images on Docker Hub by default.
We can create our own registry in fact.

So, when we run the command docker pull or docker run, the required images are pulled from your
configured registry. When you use the docker push command, your image is pushed to your configured
registry.
We will look deep into docker commands in the next blog.
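As a small taste of that flow in the meantime, a hedged sketch of pulling, tagging and pushing an image (yourusername/myapp is a placeholder repository on your own Docker Hub account):

docker pull ubuntu:16.04                          # pull from the configured registry (Docker Hub by default)
docker tag ubuntu:16.04 yourusername/myapp:1.0    # re-tag it under your own repository
docker login                                      # authenticate against the registry
docker push yourusername/myapp:1.0                # push the image up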

Docker Objects
Docker images, containers, networks, volumes, plugins etc are the Docker objects.
In Dockerland, there are images and there are containers. The two are closely related, but distinct. But it all
starts with a Dockerfile.

A Dockerfile is a file that you create which in turn produces a Docker image when you build it. It contains a
bunch of instructions which informs Docker HOW the Docker image should get built.

You can relate it to cooking. In cooking you have recipes. A recipe lets you know all of the steps you must
take in order to produce whatever you’re trying to cook.

The act of cooking is building the recipe.

A Dockerfile is a recipe or a blueprint for building Docker images and the act of running a separate build
command produces the Docker image from the recipe.

– Docker Images
An image is an inert, immutable, file that’s essentially a snapshot of a container. It is simply a template
with instructions for creating a Docker container.
Images are stored in a Docker registry such as registry.hub.docker.com. Because they can become quite
large, images are designed to be composed of layers of other images, allowing a minimal amount of data to
be sent when transferring images over the network.

– Docker Containers
To use a programming metaphor, if an image is a class, then a container is an instance of a class—a runtime
object. They are lightweight and portable encapsulations of an environment in which to run applications.
You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a
container to one or more networks, attach storage to it, or even create a new image based on its current state.
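For example, a typical container lifecycle from the CLI looks roughly like this (nginx is just an arbitrary public image used for illustration):

docker run -d --name mynginx nginx   # create and start a container from the nginx image
docker stop mynginx                  # stop the running container
docker start mynginx                 # start it again
docker rm -f mynginx                 # force-remove it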

Docker Installation :

(A) Official Ubuntu Repositories


$ sudo apt-get install docker.io
In the past this way was discouraged as the docker package was super outdated. The universe
sources are fairly recent now.
(B) Official Docker Way
The Ubuntu installation instructions list all you need in detail, but in most cases it boils down to:
(1) Set up the docker repository

sudo apt-get update


sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu
$(lsb_release -cs) stable"
(2) Install Docker CE

sudo apt-get update


sudo apt-get install docker-ce
(3) Verify the installation

sudo docker run hello-world
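Optionally, to avoid prefixing every docker command with sudo, you can add your user to the docker group (you will need to log out and back in for the change to take effect):

sudo usermod -aG docker $USER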


What is Syntax?
Very simply, syntax in programming means a structure to order commands, arguments, and everything else
that is required to program an application to perform a procedure (i.e. a function / collection of instructions).

These structures are based on rules, clearly and explicitly defined, and they are to be followed by the
programmer to interface with whichever computer application (e.g. interpreters, daemons etc.) uses or
expects them. If a script (i.e. a file containing series of tasks to be performed) is not correctly structured (i.e.
wrong syntax), the computer program will not be able to parse it. Parsing roughly can be understood as
going over an input with the end goal of understanding what is meant.

Dockerfiles use simple, clean, and clear syntax which makes them strikingly easy to create and use. They are
designed to be self explanatory, especially because they allow commenting just like a good and properly
written application source-code.

Dockerfile Syntax Example


Dockerfile syntax consists of two kinds of main line blocks: comments and commands + arguments.

# Line blocks used for commenting


command argument argument ..
A Simple Example:

# Print "Hello docker!"


RUN echo "Hello docker!"

Dockerfile Commands (Instructions)


Currently there are about a dozen different commands which Dockerfiles can contain to have Docker
build an image. In this section, we will go over all of them, individually, before working on a Dockerfile
example.

Note: As explained in the previous section (Dockerfile Syntax), all these commands are to be listed (i.e.
written) successively, inside a single plain text file (i.e. Dockerfile), in the order you would like them
performed (i.e. executed) by the docker daemon to build an image. However, some of these commands (e.g.
MAINTAINER) can be placed anywhere you see fit (but always after the FROM command), as they do not
constitute an execution step but rather the value of a definition (i.e. just some additional information).

ADD
The ADD command gets two arguments: a source and a destination. It basically copies the files from the
source on the host into the container's own filesystem at the set destination. If, however, the source is a URL
(e.g. http://github.com/user/file/), then the contents of the URL are downloaded and placed at the
destination.

Example:

# Usage: ADD [source directory or URL] [destination directory]


ADD /my_app_folder /my_app_folder
CMD
The command CMD, similarly to RUN, can be used for executing a specific command. However, unlike
RUN it is not executed during build, but when a container is instantiated using the image being built.
Therefore, it should be considered as an initial, default command that gets executed (i.e. run) with the
creation of containers based on the image.

To clarify: an example for CMD would be running an application upon creation of a container which is
already installed using RUN (e.g. RUN apt-get install …) inside the image. This default application
execution command set with CMD is used when no other command is passed during the creation of the
container; if a command is passed (e.g. to docker run), it overrides the CMD.

Example:

# Usage 1: CMD application "argument", "argument", ..


CMD "echo" "Hello docker!"

ENTRYPOINT
ENTRYPOINT argument sets the concrete default application that is used every time a container is created
using the image. For example, if you have installed a specific application inside an image and you will use
this image to only run that application, you can state it with ENTRYPOINT and whenever a container is
created from that image, your application will be the target.

If you couple ENTRYPOINT with CMD, you can remove "application" from CMD and just leave
"arguments" which will be passed to the ENTRYPOINT.

Example:

# Usage: ENTRYPOINT application "argument", "argument", ..


# Remember: arguments are optional. They can be provided by CMD
# or during the creation of a container.
ENTRYPOINT echo

# Usage example with CMD:


# Arguments set with CMD can be overridden during *run*
CMD "Hello docker!"
ENTRYPOINT echo
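Note that in the shell form shown above the CMD arguments are not actually passed to the ENTRYPOINT; for that, the exec (JSON array) form is needed. A minimal sketch, assuming an image tagged demo-entry built from a Dockerfile that uses the exec form:

# Dockerfile assumed for this sketch:
#   ENTRYPOINT ["echo"]
#   CMD ["Hello docker!"]
docker build -t demo-entry .
docker run demo-entry                    # prints: Hello docker!
docker run demo-entry "Something else"   # CMD is overridden at run time, prints: Something else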

ENV
The ENV command is used to set the environment variables (one or more). These variables consist of “key
value” pairs which can be accessed within the container by scripts and applications alike. This functionality
of Docker offers an enormous amount of flexibility for running programs.

Example:

# Usage: ENV key value


ENV SERVER_WORKS 4
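A quick hedged sketch of how that variable shows up inside a container (demo-env is an assumed image name built with the ENV line above, and a shell is assumed to be available in the image); the value can also be overridden at run time with -e:

docker run --rm demo-env sh -c 'echo $SERVER_WORKS'                      # prints 4
docker run --rm -e SERVER_WORKS=8 demo-env sh -c 'echo $SERVER_WORKS'    # prints 8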
EXPOSE
The EXPOSE command is used to associate a specified port to enable networking between the running
process inside the container and the outside world (i.e. the host).

Example:

# Usage: EXPOSE [port]


EXPOSE 8080
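Note that EXPOSE on its own only documents the port; to reach it from the host you still publish it at run time (demo-web is an assumed image name):

docker run -d -p 8080:8080 demo-web   # map host port 8080 to container port 8080
docker run -d -P demo-web             # publish all EXPOSEd ports to random host ports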
To learn about Docker networking, check out the Docker container networking documentation.

FROM
FROM directive is probably the most crucial amongst all others for Dockerfiles. It defines the base image to
use to start the build process. It can be any image, including the ones you have created previously. If a
FROM image is not found on the host, Docker will try to find it (and download) from the Docker Hub or
other container repository. It needs to be the first command declared inside a Dockerfile.

Example:

# Usage: FROM [image name]


FROM ubuntu

MAINTAINER
One of the commands that can be set anywhere in the file - although it would be better if it was declared on
top - is MAINTAINER. This non-executing command declares the author, hence setting the author field of
the images. It should nonetheless come after the FROM command.

Example:

# Usage: MAINTAINER [name]


MAINTAINER authors_name

RUN
The RUN command is the central executing directive for Dockerfiles. It takes a command as its argument
and runs it to form the image. Unlike CMD, it actually is used to build the image (forming another layer on
top of the previous one which is committed).

Example:

# Usage: RUN [command]


RUN aptitude install -y riak

USER
The USER directive is used to set the UID (or username) which is to run the container based on the image
being built.

Example:

# Usage: USER [UID]


USER 751
VOLUME
The VOLUME command is used to enable access from your container to a directory on the host machine
(i.e. mounting it).

Example:

# Usage: VOLUME ["/dir_1", "/dir_2" ..]


VOLUME ["/my_files"]
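A hedged sketch of how that plays out at run time (my_image and the host path are illustrative):

docker run -d --name files-demo my_image                 # Docker creates an anonymous volume for /my_files
docker inspect -f '{{ .Mounts }}' files-demo             # shows where that volume lives on the host
docker run -d -v /home/user/files:/my_files my_image     # or bind-mount a host directory explicitly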

WORKDIR
The WORKDIR directive is used to set where the command defined with CMD is to be executed.

Example:

# Usage: WORKDIR /path


WORKDIR ~/

Examples:

Defining Our File and Its Purpose


Albeit optional, it is always a good practice to let yourself and everybody figure out (when necessary) what
this file is and what it is intended to do. For this, we will begin our Dockerfile with fancy comments (#) to
describe it.

############################################################
# Dockerfile to build MongoDB container images
# Based on Ubuntu
############################################################

Setting The Base Image to Use


# Set the base image to Ubuntu
FROM ubuntu
Defining The Maintainer (Author)
# File Author / Maintainer
MAINTAINER Example McAuthor

Setting Arguments and Commands for Downloading MongoDB


################## BEGIN INSTALLATION ######################
# Install MongoDB Following the Instructions at MongoDB Docs
# Ref: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/

# Add the package verification key


RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10

# Add MongoDB to the repository sources list


RUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee
/etc/apt/sources.list.d/mongodb.list

# Update the repository sources list


RUN apt-get update

# Install MongoDB package (.deb)


RUN apt-get install -y mongodb-10gen

# Create the default data directory


RUN mkdir -p /data/db

##################### INSTALLATION END #####################

Setting The Default Port For MongoDB


# Expose the default port
EXPOSE 27017

# Default port to execute the entrypoint (MongoDB)


CMD ["--port 27017"]

# Set default container command


ENTRYPOINT ["/usr/bin/mongod"]

Saving The Dockerfile


After you have appended everything to the file, it is time to save and exit. Press CTRL+X and then Y to
confirm and save the Dockerfile.

This is what the final file should look like:

############################################################
# Dockerfile to build MongoDB container images
# Based on Ubuntu
############################################################

# Set the base image to Ubuntu


FROM ubuntu
# File Author / Maintainer
MAINTAINER Example McAuthor

################## BEGIN INSTALLATION ######################


# Install MongoDB Following the Instructions at MongoDB Docs
# Ref: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/

# Add the package verification key


RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10

# Add MongoDB to the repository sources list


RUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee
/etc/apt/sources.list.d/mongodb.list

# Update the repository sources list


RUN apt-get update

# Install MongoDB package (.deb)


RUN apt-get install -y mongodb-10gen

# Create the default data directory


RUN mkdir -p /data/db

##################### INSTALLATION END #####################

# Expose the default port


EXPOSE 27017

# Default port to execute the entrypoint (MongoDB)


CMD ["--port 27017"]

# Set default container command


ENTRYPOINT ["/usr/bin/mongod"]

Building Our First Image


Using the explanations from before, we are ready to create our first MongoDB image with docker!

docker build -t my_mongodb .
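And then, as a hedged follow-up, running a container from that image (the container name and port mapping are just examples):

docker run -d --name my_mongodb_instance -p 27017:27017 my_mongodb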

Custom :

FROM ubuntu:16.04
LABEL maintainer='Vinay'

RUN apt-get update -y


RUN apt-get install apache2 -y
RUN apt-get install wget -y
RUN apt-get install unzip -y
RUN apt-get install git -y
WORKDIR /tmp

RUN wget https://github.com/vinayRaj98/vinayproject/archive/master.zip

RUN unzip master.zip

RUN cp -r vinayproject-master/* /var/www/html/

EXPOSE 80

CMD ["apachectl", "-D", "FOREGROUND"]

Network drivers
Docker’s networking subsystem is pluggable, using drivers. Several drivers exist by default, and provide
core networking functionality:
● bridge: The default network driver. If you don’t specify a driver, this is the type of network you are
creating. Bridge networks are usually used when your applications run in standalone
containers that need to communicate. See bridge networks.
● host: For standalone containers, remove network isolation between the container and the Docker
host, and use the host’s networking directly. host is only available for swarm services on Docker
17.06 and higher. See use the host network.
● overlay: Overlay networks connect multiple Docker daemons together and enable swarm services
to communicate with each other. You can also use overlay networks to facilitate communication
between a swarm service and a standalone container, or between two standalone containers on
different Docker daemons. This strategy removes the need to do OS-level routing between these
containers. See overlay networks.
● none: For this container, disable all networking. Usually used in conjunction with a custom network
driver. none is not available for swarm services. See disable container networking.
● Network plugins: You can install and use third-party network plugins with Docker. These plugins are
available from Docker Hub or from third-party vendors. See the vendor’s documentation for
installing and using a given network plugin.

Network driver summary


● User-defined bridge networks are best when you need multiple containers to communicate on
the same Docker host.
● Host networks are best when the network stack should not be isolated from the Docker host, but
you want other aspects of the container to be isolated.
● Overlay networks are best when you need containers running on different Docker hosts to
communicate, or when multiple applications work together using swarm services.
● Macvlan networks are best when you are migrating from a VM setup or need your containers to
look like physical hosts on your network, each with a unique MAC address.
● Third-party network plugins allow you to integrate Docker with specialized network stacks.

Bridge network demo :

docker network ls
docker run -itd --name=alpine1 alpine

docker network ls
docker network inspect <network id>
Create one more container (alpine2) in the same way.

docker attach alpine1

ping <IP address of alpine2>

Custom :

docker network create --driver=bridge test

docker run -itd --name=alpine1 --network=test alpine

Create one more container (alpine2) on the test network.

ping alpine2

Docker container :

● Create two containers:

docker run -d --name web1 -p 8001:80 eboraas/apache-php
docker run -d --name web2 -p 8002:80 eboraas/apache-php
● Important note: it is very important to explicitly specify a name with --name for your
containers, otherwise I’ve noticed that it would not work with the random names that Docker
assigns to your containers.
● Then create a new network:
docker network create myNetwork
● After that connect your containers to the network:
docker network connect myNetwork web1
docker network connect myNetwork web2
● Check if your containers are part of the new network:
docker network inspect myNetwork
● Then test the connection:
docker exec -ti web1 ping web2

Install ping and telnet :


apt-get install telnet
apt-get install iputils-ping
version: '3'

services:
  web:
    image: nginx

  db:
    image: mysql
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_USER=user
      - MYSQL_PASSWORD=password
      - MYSQL_DATABASE=demodb
$ cat docker-compose.yml
version: '3'
services:
  nginx-1:
    image: nginx
    hostname: nginx-1.docker
    network_mode: bridge
  linux-1:
    image: alpine
    hostname: linux-1.docker
    command: sh -c 'apk add --update bind-tools && tail -f /dev/null'
    network_mode: bridge # that way it can resolve other containers' names even from inside, nginx-2 for example
Deploy Kubernetes application on AWS Using Kops :

Introduction :
Containers are a method of operating system virtualization that allow you to run an application and its
dependencies in resource-isolated processes. Containers allow you to easily package an application's code,
configurations, and dependencies into easy to use building blocks that deliver environmental consistency,
operational efficiency, developer productivity, and version control. Containers can help ensure that
applications deploy quickly, reliably, and consistently regardless of deployment environment. Containers
also give you more granular control over resources giving your infrastructure improved efficiency. Running
containers in the AWS Cloud allows you to build robust, scalable applications and services by leveraging the
benefits of the AWS Cloud such as elasticity, availability, security, and economies of scale. You also pay for
only as much resources as you use.

Any containerized application typically consists of multiple containers. There are containers for the
application itself, a database, possibly a web server, and so on. During development, it’s normal to build and
test this multi-container application on a single host. This approach works fine during early dev and test
cycles but becomes a single point of failure for production, when application availability is critical.

In such cases, a multi-container application can be deployed on multiple hosts. Customers may need an
external tool to manage such multi-container, multi-host deployments. Container orchestration frameworks
provides the capability of cluster management, scheduling containers on different hosts, service discovery
and load balancing, crash recovery, and other related functionalities. There are multiple options for container
orchestration on Amazon Web Services: Amazon ECS, Docker for AWS, and DC/OS.

Another popular option for container orchestration on AWS is Kubernetes. There are multiple ways to run
a Kubernetes cluster on AWS. This multi-part blog series provides a brief overview and explains some of
these approaches in detail. This first post explains how to create a Kubernetes cluster on AWS using kops.

We develop the application and deploy it on AWS using kops.
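As a rough, hedged sketch of that kops flow (the cluster name, S3 state bucket and availability zone below are placeholders you would replace with your own values):

export KOPS_STATE_STORE=s3://my-kops-state-bucket
kops create cluster --name=mycluster.k8s.local --zones=us-east-1a --node-count=2
kops update cluster --name=mycluster.k8s.local --yes
kops validate cluster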

Problem Statement :

Before the use of Docker containers, we were using VM clusters to host our applications. Here are some
of the disadvantages of the same :

Cons of clustered host environments :


While the pros of clustered host environments are compelling, there are some implementation and
management drawbacks.

Disadvantage no 1: Implementation and configuration complexity


Configuration complexity may be the top disadvantage of clustered hosts. Creating the clustering framework,
managing the connectivity among hosts and configuring the shared storage are not easy tasks and can
involve multiple teams, depending on the organization. You should not be scared by the added complexity,
however. For the most part, the technology works; but with added complexity, there is an increased chance
of overlooking something that could jeopardize system stability.

Disadvantage no. 2: Update and upgrade factors


Upgrades to newer product versions and hardware components can cause difficulties as well. Because a
virtual host cluster connects multiple systems, there are numerous, complex interactions that occur between
components.
Simply updating the multipath I/O (MPIO) drivers on one host, for example, affects the entire cluster. First,
it affects how efficiently the other nodes pass off logical unit numbers (LUNs) to one another. Also, before updating
the MPIO drivers, the firmware for all the host bus adapter (HBA) cards across the entire cluster need to be
up to date. If this is not the case, the HBA driver must be installed first.

With standalone hosts, this can be addressed with one or two reboots. In a clustered environment, however,
the coordination across many virtual host servers is difficult. Upgrading the actual virtual host software,
however, can be an even greater challenge because of cluster node interactions and the different supporting
software versions (i.e., System Center Virtual Machine Manager, Data Protection Manager, etc.).

Generally, vendors provide detailed, step by step instructions for many of these complex updates; and, for
the most part, they go smoothly.

Disadvantage no. 3: Cluster cost factors


Cost is another major consideration. To implement a clustered virtual host environment, you need to
duplicate parts of the infrastructure and maintain the same VM-to-host ratios at times. Also, most vendor
implementations require a storage area network or separate disk subsystem. An open source iSCSI or
cheaper disk array would be more prudent, but these options come with performance and stability
issues. Ultimately, each organization has to determine whether a clustered virtual host environment is the
right virtualization architecture for its business model. While virtual host clusters involve additional
configuration complexity, upgrading issues and potentially additional costs, your environment can benefit
from enhanced server or application availability and improved management.

System Architecture :

Minion Node Architecture

Docker: One of the basic requirements of a node is Docker. Docker is responsible for pulling down and
running container from Docker images. Read here for more information on docker .

Kube-Proxy: Every node in the cluster runs a simple network proxy. Using this proxy, the cluster routes
requests to the correct container on a node.

Kubelet: It is an agent process that runs on each node. It is responsible for managing pods and their
containers. It deals with pod specifications, which are defined in YAML or JSON format. Kubelet takes the
pod specifications and checks whether the pods are running healthy or not.
Flannel: It is an overlay network that works by assigning a range of subnet addresses. It is used to assign IPs
to each pod running in the cluster and to enable pod-to-pod and pod-to-service communication.

Context diagram :

HLL diagram :
A K8s setup consists of several parts, some of them optional, some mandatory for the whole system to
function.

Master Node

The master node is responsible for the management of Kubernetes cluster. This is the entry point of all
administrative tasks. The master node is the one taking care of orchestrating the worker nodes, where the
actual services are running.

Let's dive into each of the components of the master node.

API server

The API server is the entry points for all the REST commands used to control the cluster. It processes the
REST requests, validates them, and executes the bound business logic. The result state has to be persisted
somewhere, and that brings us to the next component of the master node.

etcd storage

etcd is a simple, distributed, consistent key-value store. It’s mainly used for shared configuration and service
discovery.
It provides a REST API for CRUD operations as well as an interface to register watchers on specific nodes,
which enables a reliable way to notify the rest of the cluster about configuration changes.
An example of data stored by Kubernetes in etcd is jobs being scheduled, created and deployed, pod/service
details and state, namespaces and replication information, etc.

scheduler

The deployment of configured pods and services onto the nodes happens thanks to the scheduler component.
The scheduler has the information regarding resources available on the members of the cluster, as well as the
ones required for the configured service to run and hence is able to decide where to deploy a specific
service.
controller-manager

Optionally you can run different kinds of controllers inside the master node. controller-manager is a daemon
embedding those.
A controller uses apiserver to watch the shared state of the cluster and makes corrective changes to the
current state to change it to the desired one.
An example of such a controller is the Replication controller, which takes care of the number of pods in the
system. The replication factor is configured by the user, and it's the controller’s responsibility to recreate a
failed pod or remove an extra-scheduled one.
Other examples of controllers are endpoints controller, namespace controller, and serviceaccounts controller,
but we will not dive into details here.

Worker node

The pods are run here, so the worker node contains all the necessary services to manage the networking
between the containers, communicate with the master node, and assign resources to the containers
scheduled.

Docker

Docker runs on each of the worker nodes, and runs the configured pods. It takes care of downloading the
images and starting the containers.

kubelet

kubelet gets the configuration of a pod from the apiserver and ensures that the described containers are up
and running. This is the worker service that’s responsible for communicating with the master node.
It also communicates with etcd, to get information about services and write the details about newly created
ones.
kube-proxy

kube-proxy acts as a network proxy and a load balancer for a service on a single worker node. It takes care
of the network routing for TCP and UDP packets.
kubectl

And the final bit – a command line tool to communicate with the API service and send commands to the
master node.
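A few everyday kubectl commands as a quick illustration (the deployment name demo and the nginx image are placeholders):

kubectl get nodes                                          # list master and worker nodes
kubectl get pods --all-namespaces                          # list pods across all namespaces
kubectl create deployment demo --image=nginx               # run a sample workload
kubectl expose deployment demo --port=80 --type=NodePort   # make it reachable via a node port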

Why Docker? Pros and Cons


What are Containers?
Well, I guess the best I can do is to quote from Docker team:
A container image is a lightweight, stand-alone, executable package of a piece of software that includes
everything needed to run it: code, runtime, system tools, system libraries, settings.

So I guess developers got pissed off by the whole “Linux” or “Windows” thing, and they were like: “Let’s
build something that can run both Windows and Linux applications, regardless of the operating system or
environment”, then containers were invented!

The idea is Containers will isolate “our code” from what is “not our code”, to make sure the “works on my
machine” situation doesn’t happen.

From Virtual Machines to Containers


So before Containers showed up, we used to use VMs to host our application, and I guess people liked it,
because we were able to get a big server and slice it up to several VMs and have multiple computers and
simulate a network. Now that Containers showed up, it seems like VMs aren’t a good idea anymore, because
it seems like Containers give us a better level of abstraction than VMs.

Though some people might argue that we might not even need Docker if we choose the right Cloud platform
and use PaaS (Platform as a Service) offerings, which will give us a higher level of abstraction, but again
others might argue that, that way, you are kind of tied to that Cloud Provider, which again might not
necessarily be a bad thing, considering what they offer these days!

Also, even some of the Cloud providers do not natively support Linux or Windows, so now with
Containers, you can put your code in some container and then move your container into your cloud provider
if you like.

Remind me what Virtual Machines are!


Virtual machines (VMs) are an abstraction of physical hardware, that would slice your one giant physical
server into multiple ones. The “hypervisor” or “VMM (Virtual Machine Monitor)” provides the capability to
run multiple Virtual Machines on one set of hardware, and each one of these VMs will have an OS (you need
to have licenses, update and patch them and everything IT related you do with all of your regular
computers).

Tell me again about Containers!


Containers are an abstraction at the app layer that packages code and dependencies together. Multiple
containers can run on the same machine and share the OS kernel with other containers, each running as
isolated processes in user space. Since Containers do not have a full blown Operating System, they take up
less space compared to VMs.

The following image from the Docker website explains the differences between Containers and VMs:
Deeper dive into Virtualization
As mentioned before Virtualization is handled by Hypervisor, and it basically manages the CPU’s “root
mode” and, by some sort of interception, manages to create an illusion for the VM’s Operating System as if it
has its own hardware. If you are interested to know who did this first, to send them a “Thank You” note, it
was “VMWare”.

So ultimately, the hypervisor facilitates running multiple separate operating systems on the same hardware.
All the VM operating systems (known as Guest OSes) go through the boot process to load the kernel and all the
other OS modules, just like regular computers, hence the slowness! And if you are curious about the
isolation between the guests and hosts, I should say, you can have pretty strict security between them.

Deeper dive into Containers


A bit more than 10 years ago, some folks from Google came up with the namespaces concept. Yeah, exactly as
developers are familiar with, the idea is, we want to put hardware resources into namespaces, and only give
permission to use resources to other resources or software, only if they belong to a specific namespace. So
you basically can tell processes, what is their namespace, and what hardware namespaces they can access.

So this basically creates a level of isolation, where each process has only access to the resources that are in
their own namespace.

This is how Docker works! Each container runs in its own namespace and all containers use the same kernel
to manage the namespaces.

Now because kernel is the control plane here and knows the namespace that was assigned to the process, it
makes sure that process can only access resources in its own namespace.

As you can see the isolation level in Docker is not as strong as VMs as they all share the same kernel, also
because of the same reason they are much lighter than VMs.

Advantages of using Docker :


Running applications in containers instead of virtual machines is gaining momentum in the IT world. The
technology is considered to be one of the fastest growing in the recent history of the software industry. At its
heart lies Docker, a platform that allows users to easily pack, distribute, and manage applications within
containers. In other words, It is an open-source project that automates the deployment of applications inside
software containers.

Docker really makes it easier to create, deploy, and run applications by using containers, and containers
allow a developer to package up an application with all of the parts it needs, such as libraries and other
dependencies, and ship it all out as one package. By doing so, the developer can be assured that the
application will run on any other Linux machine regardless of any customized settings that machine might
have that could differ from the machine used for writing and testing the code.

Docker Statistics & Facts

● 2/3 of companies that try using Docker, adopt it. Most companies who will adopt have already done
so within 30 days of initial production usage, and almost all the remaining adopters convert within
60 days.
● Docker adoption is up 30% in the last year.
● Adopters multiply their containers by five. Docker adopters approximately quintuple the average
number of running containers they have in production between their first and tenth month of usage.
● PHP, Ruby, Java, and Node are the main programming frameworks used in containers.

Popularity & Benefits of Using Docker


Why do large companies like ING, Paypal, ADP, and Spotify keep using Docker? Why is Docker adoption
growing that fast? Let’s cover the top advantages of docker to better understand it.

Return on Investment and Cost Savings


The first advantage of using docker is ROI. The biggest driver of most management decisions when
selecting a new product is the return on investment. The more a solution can drive down costs while raising
profits, the better a solution it is, especially for large, established companies, which need to generate steady
revenue over the long term.

In this sense, Docker can help facilitate this type of savings by dramatically reducing infrastructure
resources. The nature of Docker is that fewer resources are necessary to run the same application. Because
of the reduced infrastructure requirements Docker has, organizations are able to save on everything from
server costs to the employees needed to maintain them. Docker allows engineering teams to be smaller and
more effective.

Standardization and Productivity


Docker containers ensure consistency across multiple development and release cycles, standardizing your
environment. One of the biggest advantages to a Docker-based architecture is actually standardization.
Docker provides repeatable development, build, test, and production environments. Standardizing service
infrastructure across the entire pipeline allows every team member to work in a production parity
environment. By doing this, engineers are more equipped to efficiently analyze and fix bugs within the
application. This reduces the amount of time wasted on defects and increases the amount of time available
for feature development.

As we mentioned, Docker containers allow you to commit changes to your Docker images and version
control them. For example, if you perform a component upgrade that breaks your whole environment, it is
very easy to rollback to a previous version of your Docker image. This whole process can be tested in a few
minutes. Docker is fast, allowing you to quickly make replications and achieve redundancy. Also, launching
Docker images is as fast as running a machine process.
CI Efficiency
Docker enables you to build a container image and use that same image across every step of the deployment
process. A huge benefit of this is the ability to separate non-dependent steps and run them in parallel. The
length of time it takes from build to production can be sped up notably.

Compatibility and Maintainability


Eliminate the “it works on my machine” problem once and for all. One of the benefits that the entire team
will appreciate is parity. Parity, in terms of Docker, means that your images run the same no matter which
server or whose laptop they are running on. For your developers, this means less time spent setting up
environments, debugging environment-specific issues, and a more portable and easy-to-set-up codebase.
Parity also means your production infrastructure will be more reliable and easier to maintain.

Simplicity and Faster Configurations


One of the key benefits of Docker is the way it simplifies matters. Users can take their own configuration,
put it into code, and deploy it without any problems. As Docker can be used in a wide variety of
environments, the requirements of the infrastructure are no longer linked with the environment of the
application.

Rapid Deployment
Docker manages to reduce deployment to seconds. This is due to the fact that it creates a container for every
process and does not boot an OS. Data can be created and destroyed without worry that the cost to bring it
up again would be higher than what is affordable.

Continuous Deployment and Testing


Docker ensures consistent environments from development to production. Docker containers are configured
to maintain all configurations and dependencies internally; you can use the same container from
development to production making sure there are no discrepancies or manual intervention.

If you need to perform an upgrade during a product’s release cycle, you can easily make the necessary
changes to Docker containers, test them, and implement the same changes to your existing containers. This
sort of flexibility is another key advantage of using Docker. Docker really allows you to build, test, and
release images that can be deployed across multiple servers. Even if a new security patch is available, the
process remains the same. You can apply the patch, test it, and release it to production.

Multi-Cloud Platforms
One of Docker’s greatest benefits is portability. Over the last few years, all major cloud computing providers,
including Amazon Web Services (AWS) and Google Compute Platform (GCP), have embraced Docker’s
availability and added individual support. Docker containers can be run inside an Amazon EC2 instance,
Google Compute Engine instance, Rackspace server, or VirtualBox, provided that the host OS supports
Docker. If this is the case, a container running on an Amazon EC2 instance can easily be ported between
environments, for example to VirtualBox, achieving similar consistency and functionality. Also, Docker
works very well with other providers like Microsoft Azure, and OpenStack, and can be used with various
configuration managers like Chef, Puppet, and Ansible, etc.

Isolation
Docker ensures your applications and resources are isolated and segregated. Docker makes sure each
container has its own resources that are isolated from other containers. You can have various containers for
separate applications running completely different stacks. Docker helps you ensure clean app removal since
each application runs on its own container. If you no longer need an application, you can simply delete its
container. It won’t leave any temporary or configuration files on your host OS.
On top of these benefits, Docker also ensures that each application only uses resources that have been
assigned to them. A particular application won’t use all of your available resources, which would normally
lead to performance degradation or complete downtime for other applications.

Security
The last of these benefits of using docker is security. From a security point of view, Docker ensures that
applications that are running on containers are completely segregated and isolated from each other, granting
you complete control over traffic flow and management. No Docker container can look into processes
running inside another container. From an architectural point of view, each container gets its own set of
resources ranging from processing to network stacks.

How to create test Docker Image :

Docker is an operating-system-level virtualization mainly intended for developers and sysadmins.


Docker makes it easier to create and deploy applications in an isolated environment. A Dockerfile is a script
that contains collections of commands and instructions that will be automatically executed in sequence in
the docker environment for building a new docker image.
In this tutorial, I will show you how to create your own docker image with a dockerfile. I will explain the
dockerfile script in detail to enable you to build your own dockerfile scripts.
Prerequisite
● A Linux Server - I will use Ubuntu 16.04 as the host machine, and Ubuntu 16.04 as the docker base
image.
● Root Privileges.
● Understanding Docker command
Introduction to the Dockerfile Command

A dockerfile is a script which contains a collection of dockerfile commands and operating system commands
(ex: Linux commands). Before we create our first dockerfile, you should become familiar with the
dockerfile command.
Below are some dockerfile commands you must know:
FROM
The base image for building a new image. This command must be on top of the dockerfile.
MAINTAINER
Optional, it contains the name of the maintainer of the image.
RUN
Used to execute a command during the build process of the docker image.
ADD
Copy a file from the host machine to the new docker image. There is an option to use a URL for the file;
docker will then download that file to the destination directory.
ENV
Define an environment variable.
CMD
Used for executing commands when we build a new container from the docker image.
ENTRYPOINT
Define the default command that will be executed when the container is running.
WORKDIR
Sets the working directory in which the command defined with CMD is to be executed.
USER
Set the user or UID for the container created with the image.
VOLUME
Enable access/linked directory between the container and the host machine.
Now let's start creating our first dockerfile.
Step 1 - Installing Docker

Login to your server and update the software repository.

ssh root@192.168.1.248
apt-get update

Install docker.io with this apt command:

apt-get install docker.io

When the installation is finished, start the docker service and enable it to start at boot time:

systemctl start docker


systemctl enable docker

Docker has been installed and is running on the system.


Step 2 - Create Dockerfile

In this step, we will create a new directory for the dockerfile and define what we want to do with that
dockerfile.
Create a new directory and a new and empty dockerfile inside that directory.

mkdir ~/myimages
cd myimages/
touch Dockerfile

Next, define what we want to do with our new custom image. In this tutorial, I will install Nginx and
PHP-FPM 7 using an Ubuntu 16.04 docker image. Additionally, we need Supervisord, so we can start Nginx
and PHP-FPM 7 both in one command.
Edit the 'Dockerfile' with vim:

vim Dockerfile

On the top of the file, add a line with the base image (Ubuntu 16.04) that we want to use.
#Download base image ubuntu 16.04
FROM ubuntu:16.04
Update the Ubuntu software repository inside the dockerfile with the 'RUN' command.
# Update Ubuntu Software repository
RUN apt-get update
Then install the applications that we need for the custom image. Install Nginx, PHP-FPM and Supervisord
from the Ubuntu repository with apt. Add the RUN commands for Nginx and PHP-FPM installation.
# Install nginx, php-fpm and supervisord from ubuntu repository
RUN apt-get install -y nginx php7.0-fpm supervisor && \
rm -rf /var/lib/apt/lists/*
At this stage, all applications are installed and we need to configure them. We will configure Nginx for
handling PHP applications by editing the default virtual host configuration. We can replace it our new
configuration file, or we can edit the existing configuration file with the 'sed' command.
In this tutorial, we will replace the default virtual host configuration with a new configuration by using the
'COPY' dockerfile command.
#Define the ENV variable
ENV nginx_vhost /etc/nginx/sites-available/default
ENV php_conf /etc/php/7.0/fpm/php.ini
ENV nginx_conf /etc/nginx/nginx.conf
ENV supervisor_conf /etc/supervisor/supervisord.conf

# Enable php-fpm on nginx virtualhost configuration


COPY default ${nginx_vhost}
RUN sed -i -e 's/;cgi.fix_pathinfo=1/cgi.fix_pathinfo=0/g' ${php_conf} && \
echo "\ndaemon off;" >> ${nginx_conf}
Next, configure Supervisord for Nginx and PHP-FPM. We will replace the default Supervisord configuration
with a new configuration by using the 'COPY' command.
#Copy supervisor configuration
COPY supervisord.conf ${supervisor_conf}
Now create a new directory for the php-fpm sock file and change the owner of the /var/www/html directory
and PHP directory to www-data.
RUN mkdir -p /run/php && \
chown -R www-data:www-data /var/www/html && \
chown -R www-data:www-data /run/php
Next, define the volume so we can mount the directories listed below to the host machine.
# Volume configuration
VOLUME ["/etc/nginx/sites-enabled", "/etc/nginx/certs", "/etc/nginx/conf.d", "/var/log/nginx",
"/var/www/html"]
Finally, set up the default container command 'CMD' and open the ports for HTTP and HTTPS. We will create
a new start.sh file for the default 'CMD' command that runs when the container starts. The file contains the
'supervisord' command, and we will copy the file to the new image with the 'COPY' dockerfile command.
# Configure Services and Port
COPY start.sh /start.sh
CMD ["./start.sh"]

EXPOSE 80 443
Save the file and exit.
Here is the complete Dockerfile in one piece:
#Download base image ubuntu 16.04
FROM ubuntu:16.04

# Update Software repository


RUN apt-get update

# Install nginx, php-fpm and supervisord from ubuntu repository


RUN apt-get install -y nginx php7.0-fpm supervisor && \
rm -rf /var/lib/apt/lists/*

#Define the ENV variable


ENV nginx_vhost /etc/nginx/sites-available/default
ENV php_conf /etc/php/7.0/fpm/php.ini
ENV nginx_conf /etc/nginx/nginx.conf
ENV supervisor_conf /etc/supervisor/supervisord.conf

# Enable php-fpm on nginx virtualhost configuration


COPY default ${nginx_vhost}
RUN sed -i -e 's/;cgi.fix_pathinfo=1/cgi.fix_pathinfo=0/g' ${php_conf} && \
echo "\ndaemon off;" >> ${nginx_conf}
#Copy supervisor configuration
COPY supervisord.conf ${supervisor_conf}

RUN mkdir -p /run/php && \


chown -R www-data:www-data /var/www/html && \
chown -R www-data:www-data /run/php

# Volume configuration
VOLUME ["/etc/nginx/sites-enabled", "/etc/nginx/certs", "/etc/nginx/conf.d", "/var/log/nginx",
"/var/www/html"]

# Configure Services and Port


COPY start.sh /start.sh
CMD ["./start.sh"]

EXPOSE 80 443
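After saving, a hedged sketch of building and running the image (the tag and host port are example choices; the default, supervisord.conf and start.sh files referenced by the COPY instructions must exist next to the Dockerfile):

docker build -t nginx-php .
docker run -d --name test-container -p 8080:80 nginx-php
curl http://localhost:8080/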

How is Kubernetes a Platform ?


Even though Kubernetes provides a lot of functionality, there are always new scenarios that would benefit
from new features. Application-specific workflows can be streamlined to accelerate developer velocity. Ad
hoc orchestration that is acceptable initially often requires robust automation at scale. This is why
Kubernetes was also designed to serve as a platform for building an ecosystem of components and tools to
make it easier to deploy, scale, and manage applications.
Labels empower users to organize their resources however they please. Annotations enable users to decorate
resources with custom information to facilitate their workflows and provide an easy way for management
tools to checkpoint state.
This design has enabled a number of other systems to build atop Kubernetes.

What kubernetes is not


Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system. Since Kubernetes operates
at the container level rather than at the hardware level, it provides some generally applicable features
common to PaaS offerings, such as deployment, scaling, load balancing, logging, and monitoring. However,
Kubernetes is not monolithic, and these default solutions are optional and pluggable. Kubernetes provides
the building blocks for building developer platforms, but preserves user choice and flexibility where it is
important.
Kubernetes:
● Does not limit the types of applications supported. Kubernetes aims to support an extremely diverse variety
of workloads, including stateless, stateful, and data-processing workloads. If an application can run in a
container, it should run great on Kubernetes.
● Does not deploy source code and does not build your application. Continuous Integration, Delivery, and
Deployment (CI/CD) workflows are determined by organization cultures and preferences as well as
technical requirements.
● Does not provide application-level services, such as middleware (e.g., message buses), data-processing
frameworks (for example, Spark), databases (e.g., mysql), caches, nor cluster storage systems (e.g., Ceph) as
built-in services. Such components can run on Kubernetes, and/or can be accessed by applications running
on Kubernetes through portable mechanisms, such as the Open Service Broker.
● Does not dictate logging, monitoring, or alerting solutions. It provides some integrations as proof of concept,
and mechanisms to collect and export metrics.
● Does not provide nor mandate a configuration language/system (e.g., jsonnet). It provides a declarative API
that may be targeted by arbitrary forms of declarative specifications.
● Does not provide nor adopt any comprehensive machine configuration, maintenance, management, or
self-healing systems.
Additionally, Kubernetes is not a mere orchestration system. In fact, it eliminates the need for orchestration.
The technical definition of orchestration is execution of a defined workflow: first do A, then B, then C. In
contrast, Kubernetes is comprised of a set of independent, composable control processes that continuously
drive the current state towards the provided desired state. It shouldn’t matter how you get from A to C.
Centralized control is also not required. This results in a system that is easier to use and more powerful,
robust, resilient, and extensible.

WHAT IS AMAZON EC2?

Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web
Services (AWS) cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can
develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual
servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to
scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast
traffic.

Amazon EC2 provides the following features:

● Virtual computing environments, known as instances


● Preconfigured templates for your instances, known as Amazon Machine Images (AMIs), that package
the bits you need for your server (including the operating system and additional software)
● Various configurations of CPU, memory, storage, and networking capacity for your instances, known
as instance types
● Secure login information for your instances using key pairs (AWS stores the public key, and you
store the private key in a secure place)
● Storage volumes for temporary data that's deleted when you stop or terminate your instance, known
as instance store volumes
● Persistent storage volumes for your data using Amazon Elastic Block Store (Amazon EBS), known
as Amazon EBS volumes
● Multiple physical locations for your resources, such as instances and Amazon EBS volumes, known
as regions and Availability Zones
● A firewall that enables you to specify the protocols, ports, and source IP ranges that can reach your
instances using security groups
● Static IPv4 addresses for dynamic cloud computing, known as Elastic IP addresses
● Metadata, known as tags, that you can create and assign to your Amazon EC2 resources
● Virtual networks you can create that are logically isolated from the rest of the AWS cloud, and that
you can optionally connect to your own network, known as virtual private clouds (VPCs)

STEPS TO LAUNCH EC2 ON UBUNTU SERVER:

Step 1: Launch an Amazon EC2 Instance:


a. Open the Amazon EC2 console and then click Launch Instance to create and
configure your virtual machine.
Step 2: Configure your Instance:
You are now in the EC2 Launch Instance Wizard, which will help you configure and launch your
instance.
a. In this screen, you are shown options to choose an Amazon Machine Image (AMI). AMIs are
preconfigured server templates you can use to launch an instance. Each AMI includes an operating system,
and can also include applications and application servers.
b. You will now choose an instance type. Instance types comprise varying combinations of CPU,
memory, storage, and networking capacity so you can choose the appropriate mix for your applications. For
more information, see Amazon EC2 Instance Types.
c. Review the configuration, storage, tagging, and security settings that have been selected for your
instance. You have the option to customize these settings before launching.
d. On the next screen you will be asked to choose an existing key pair or create a new key pair. A key
pair is used to securely access your Linux instance using SSH. AWS stores the public part of the key pair
which is just like a house lock. You download and use the private part of the key pair which is just like a
house key. Select Create a new key pair and give it the name MyKeyPair. Next click the Download Key
Pair button. After you download the MyKeyPair key, you will want to store your key in a secure location. If
you lose your key, you won't be able to access your instance. If someone else gets access to your key, they
will be able to access your instance. After you have stored your key pair, click Launch Instance to start your
Linux instance.
e. Click View Instances on the next screen to view your instances and see the status of the instance
you have just started.
f. In a few minutes, the Instance State column on your instance will change to "running" and a Public
IP address will be shown. You can refresh these Instance State columns by pressing the refresh button on the
right just above the table. Copy the Public IP address of your AWS instance, so you can use it when we
connect to the instance using SSH in Step 3.

Step 3: Connect to your Instance:


After launching your instance, it's time to connect to it using SSH.
a. Select Windows below to see instructions for installing Git Bash which includes SSH. Download
Git for Windows here. Run the downloaded installer accepting the default settings (this will install Git Bash
as part of Git).
b. Right click on your desktop (not on an icon or file) and select Git Bash Here to open a Git Bash
command prompt.
c. Use SSH to connect to your instance. In this case the user name is ec2-user, the SSH key is stored
in the directory we saved it to in step 2-part d, and the IP address is from step 2-part f. The format is ssh -i
{full path of your .pem file} ec2-user@{instance IP address}.
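For example, if the MyKeyPair.pem file was saved to your Downloads folder and the instance's public IP address
is 203.0.113.25 (a placeholder address), the command would be:

ssh -i ~/Downloads/MyKeyPair.pem ec2-user@203.0.113.25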
Step 4: Terminate Your Instance:
You can easily terminate the instance from the EC2 console. In fact, it is a best practice to terminate
instances you are no longer using so you don’t keep getting charged for them.
a. Back on the EC2 Console, select the box next to the instance you created. Then click
the Actions button, navigate to Instance State, and click Terminate.
b. You will be asked to confirm your termination - select Yes, Terminate.
Note: This process can take several seconds to complete. Once your instance has been terminated, the
Instance State will change to terminated on your EC2 Console.
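
If you prefer the command line, the same launch-and-terminate cycle can be driven with the AWS CLI. A hedged
sketch, assuming the CLI is installed and configured, and using placeholder AMI, security group, and instance IDs:

# Launch one t2.micro instance from an AMI
aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t2.micro \
    --key-name MyKeyPair --security-group-ids sg-0123456789abcdef0

# Terminate it when you are done so you are not charged for it
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0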

● AS – Auto Scaling
● Amazon EC2 Auto Scaling helps you ensure that you have the correct number of Amazon EC2
instances available to handle the load for your application. You create collections of EC2 instances,
called Auto Scaling groups. You can specify the minimum number of instances in each Auto Scaling
group, and Amazon EC2 Auto Scaling ensures that your group never goes below this size. You can
specify the maximum number of instances in each Auto Scaling group, and Amazon EC2 Auto
Scaling ensures that your group never goes above this size. If you specify the desired capacity, either
when you create the group or at any time thereafter, Amazon EC2 Auto Scaling ensures that your
group has this many instances. If you specify scaling policies, then Amazon EC2 Auto Scaling can
launch or terminate instances as demand on your application increases or decreases.
● For more information about the benefits of Amazon EC2 Auto Scaling, see Benefits of Auto Scaling.

● Auto Scaling Components

● The following table describes the key components of Amazon EC2 Auto Scaling.

Groups

Your EC2 instances are organized into groups so that they can be treated as a logical unit for the purposes of
scaling and management. When you create a group, you can specify its minimum, maximum, and desired
number of EC2 instances. For more information, see Auto Scaling Groups.

Launch configurations

Your group uses a launch configuration as a template for its EC2 instances. When you create a launch
configuration, you can specify information such as the AMI ID, instance type, key pair, security groups, and
block device mapping for your instances. For more information, see Launch Configurations.

Scaling options

Amazon EC2 Auto Scaling provides several ways for you to scale your Auto Scaling groups. For example,
you can configure a group to scale based on the occurrence of specified conditions (dynamic scaling) or on a
schedule. For more information, see Scaling Options.
Amazon EC2: Auto Scaling

In the traditional IT world, there are a limited number of servers to handle the application load. When the number
of requests increases, the load on the servers also increases, which causes latency and failures.

Amazon Web Services provides the Amazon EC2 Auto Scaling service to overcome this failure. Auto Scaling
ensures that Amazon EC2 instances are sufficient to run your application. You can create an auto-scaling
group which contains a collection of EC2 instances. You can specify a minimum number of EC2 instances in
that group and auto-scaling will maintain and ensure the minimum number of EC2 instances. You can also
specify a maximum number of EC2 instances in each auto scaling group so that auto-scaling will ensure
instances never go beyond that maximum limit.

You can also specify desired capacity and auto-scaling policies for the Amazon EC2 auto-scaling. By using
the scaling policy, auto-scaling can launch or terminate the EC2 instances depending on the demand.

Auto Scaling Components

1. Groups

Groups are logical collections of EC2 instances with similar characteristics, grouped for scaling and
management purposes. Using auto scaling groups you can increase the number of instances to
improve your application performance, and you can decrease the number of instances depending on the
load to reduce your cost. The auto-scaling group also maintains a fixed number of instances even if an
instance becomes unhealthy.
To meet the desired capacity the auto scaling group launches enough EC2 instances, and it maintains these
EC2 instances by performing periodic health checks on the instances in the group. If any instance becomes
unhealthy, the auto-scaling group terminates the unhealthy instance and launches another instance to replace
it. Using scaling policies you can automatically increase or decrease the number of running EC2 instances in
the group to meet changing conditions.

2. Launch Configuration

The launch configuration is a template used by an auto scaling group to launch EC2 instances. You can specify
the Amazon Machine Image (AMI), instance type, key pair, security groups, etc., while creating the
launch configuration. You can also modify the launch configuration after creation. A launch configuration can
be used for multiple auto scaling groups.

3. Scaling Plans

Scaling plans tell Auto Scaling when and how to scale. Amazon EC2 auto-scaling provides several ways
for you to scale the auto scaling group.

Maintaining current instance levels at all times:- You can configure and maintain a specified number of
running instances at all times in the auto scaling group. To achieve this, Amazon EC2 auto-scaling
performs periodic health checks on running EC2 instances within the auto scaling group. If an instance
becomes unhealthy, auto-scaling terminates that instance and launches a new instance to replace it.

Manual Scaling:- In Manual scaling, you specify only the changes in maximum, minimum, or desired
capacity of your auto scaling groups. Auto-scaling maintains the instances with updated capacity.

Scale based on Schedule:- In some cases, you know exactly when your application traffic becomes high,
for example during a limited-time offer or on a particular day with peak load. In such cases, you can scale
your application based on scheduled scaling. You can create a scheduled action which tells Amazon EC2
auto-scaling to perform the scaling action at a specific time.

Scale based on demand:- This is the most advanced scaling model; resources scale by using a scaling
policy. Based on specific parameters you can scale your resources in or out. You can create a policy by
defining a parameter such as CPU utilization, memory, or network in/out. For example, you can
dynamically scale your EC2 instances when CPU utilization exceeds 70%. If CPU utilization
crosses this threshold value, auto scaling launches new instances using the launch configuration. You
should specify two scaling policies, one for scaling in (terminating instances) and one for scaling out
(launching instances).

Types of scaling policies:-

● Target tracking scaling:- Based on the target value for a specific metric, increase or decrease the current
capacity of the auto scaling group (a hedged CLI sketch follows this list).

● Step scaling:- Based on a set of scaling adjustments, increase or decrease the current capacity of the
group that vary based on the size of the alarm breach.
● Simple scaling:- Increase or decrease the current capacity of the group based on a single scaling
adjustment.
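
As an illustration of the target tracking type, here is a hedged AWS CLI sketch that keeps the average CPU
utilisation of a group around 70% (the group and policy names are placeholders):

aws autoscaling put-scaling-policy \
    --auto-scaling-group-name my-web-asg \
    --policy-name cpu70-target-tracking \
    --policy-type TargetTrackingScaling \
    --target-tracking-configuration '{"PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"}, "TargetValue": 70.0}'

With target tracking, Auto Scaling creates and manages the CloudWatch alarms for scale-out and scale-in itself,
so you do not need the two separate policies that step or simple scaling require.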
Setup

As a pre-requisite, you need to create an AMI of your application which is running on your EC2 instance.

● Setup: Launch Configuration:

1. Go to EC2 console and click on Launch Configuration from Auto Scaling


2. From Choose AMI, select the Amazon Machine Image from My AMIs tab, which was used to create the
image for your web application.

3. Then, select the instance type which is suitable for your web application and click Next: Configure
details.

4. On Configure details, name the launch configuration, assign a specific IAM role if your web application
requires one, and optionally enable detailed monitoring.

5. After that, Add the storage and Security Groups then go for review.
Note: Open the required ports for your application to run.

6. Click on Create launch configuration and choose an existing key pair or create a new one (an equivalent
AWS CLI sketch is shown below).
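
The same launch configuration can also be created from the AWS CLI. A hedged sketch in which the AMI,
security group, and key pair values are placeholders for your own:

aws autoscaling create-launch-configuration \
    --launch-configuration-name my-web-lc \
    --image-id ami-0abcdef1234567890 \
    --instance-type t2.micro \
    --key-name MyKeyPair \
    --security-groups sg-0123456789abcdef0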

● Setup: Auto Scaling Group:

1. From EC2 console click on Auto Scaling Group which is below the launch configuration. Then click on
create auto scaling group.
2. From the Auto Scaling Group page, you can create the group using either a launch configuration or a launch
template. Here I have created it using a launch configuration. You can also create a new launch configuration
from this page. Since you have already created the launch configuration, you can create the auto scaling
group by choosing “Use an existing launch configuration”.

3. After clicking on next step, you can configure the group name, the group's initial size, and the VPC and
subnets. You can also attach a load balancer to the auto scaling group by clicking Advanced Details.
After that, click next to configure scaling policies.

4. On the scaling policy page, you can specify the minimum and maximum number of instances in this group.
Here you can use a target tracking policy to configure the scaling policies. As the metric type you can specify,
for example, CPU utilisation or Network In/Out, and you can give the target value as well. Depending on the
target value the scaling policy will work. You can also disable scale-in from here.

You can also use Step and simple scaling policies.

These work based on alarms, so first create the alarm by clicking on ‘add new alarm’.
Here the alarm created is based on CPU utilisation above 65%. If CPU utilisation crosses 65%, auto
scaling launches new instances based on the step action.

You can specify more step actions based on your load, but with a simple policy you can’t define different
actions for different ranges of CPU utilisation. You also need to configure scale-in policies for when the traffic
becomes low, as that reduces the billing.
5. Next, click on ‘Next: Configure Notification’ to receive notifications for launch, terminate, and failure events
at your email address, then enter the tags and click on ‘Create auto scaling group’ (an equivalent AWS CLI
sketch is shown below).
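
An equivalent AWS CLI sketch for the group itself (names, sizes, and the subnet ID are placeholders; my-web-lc
is the launch configuration created above):

aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name my-web-asg \
    --launch-configuration-name my-web-lc \
    --min-size 1 --max-size 4 --desired-capacity 2 \
    --vpc-zone-identifier subnet-0123456789abcdef0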

● ELB – Elastic Load Balancer

Elastic Load Balancing distributes incoming application or network traffic across multiple targets, such as
Amazon EC2 instances, containers, and IP addresses, in multiple Availability Zones. Elastic Load Balancing
scales your load balancer as traffic to your application changes over time, and can scale to the vast majority
of workloads automatically.

Load Balancer Benefits

A load balancer distributes workloads across multiple compute resources, such as virtual servers. Using a
load balancer increases the availability and fault tolerance of your applications.

You can add and remove compute resources from your load balancer as your needs change, without
disrupting the overall flow of requests to your applications.

You can configure health checks, which are used to monitor the health of the compute resources so that the
load balancer can send requests only to the healthy ones. You can also offload the work of encryption and
decryption to your load balancer so that your compute resources can focus on their main work.

Features of Elastic Load Balancing

Elastic Load Balancing supports three types of load balancers: Application Load Balancers, Network Load
Balancers, and Classic Load Balancers. You can select a load balancer based on your application needs. For
more information, see Comparison of Elastic Load Balancing Products.

For more information about using each load balancer, see the User Guide for Application Load Balancers,
the User Guide for Network Load Balancers, and the User Guide for Classic Load Balancers.

Accessing Elastic Load Balancing

You can create, access, and manage your load balancers using any of the following interfaces:
● AWS Management Console— Provides a web interface that you can use to access Elastic Load
Balancing.
● AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS
services, including Elastic Load Balancing, and is supported on Windows, Mac, and Linux. For more
information, see AWS Command Line Interface.
● AWS SDKs — Provides language-specific APIs and takes care of many of the connection details,
such as calculating signatures, handling request retries, and error handling. For more information,
see AWS SDKs.
● Query API— Provides low-level API actions that you call using HTTPS requests. Using the Query
API is the most direct way to access Elastic Load Balancing, but it requires that your application
handle low-level details such as generating the hash to sign the request, and error handling.
Creation of ELB :
I’ve recently received some questions about the AWS Application Load Balancer, what advantages it
provides, and how to monitor it. AWS is already calling the original Elastic Load Balancer its ‘Classic’
Load Balancer, so if you’re anxious to understand why so many are using it over the Classic ELB, this post
is for you.

This post will describe the AWS Application Load Balancer, when to use it, and introduce how to connect it
with your EC2 instances and autoscaling groups. Additional resources on integrating ECS Containers with
the Application Load Balancer are also provided.

Monitoring the AWS Application Load Balancer

If you already have an Application Load Balancer set up and just need to monitor it, check out the Sumo
Logic AWS Application Load Balancer application, and sign up for Sumo Logic Free on the Sumo Logic website.

What is the AWS Application Load Balancer?

The AWS Application Load Balancer is the newest load balancer technology in the AWS product suite.
Some of the benefits it provides are:

● Path Based Routing


o Select where to send requests based on the path of http request
o This allows for multiple Target Groups behind a single Application Load Balancer, with EC2 and
Container support
o For example, you might route general requests to one target group of containers/EC2s, and route
requests to render images to another microservice-specific (image rendering) target group
o See AWS’s documentation here for a full overview

● Containerized Application Support


o Specify dynamic ports in the ECS container task definition
o When a new task is added to the fleet, the ECS scheduler auto-assigns it to the ALB using that port
o Share the ALB amongst multiple services using path-based routing
o Improve cost efficiency by running more components of your application per EC2 fleet
*See AWS’s announcement here for more details
● Better Health Checks
o Specify a custom set of HTTP response codes as a ‘healthy’ response
● HTTP/2 Support, WebSockets Support
o See this AWS post for more details
● New Pricing Model
o You pay for each hour the ALB is running
o You also pay for the number of Load Balancer Capacity Units (LCUs) used
o Only the largest dimension for LCUs is used to calculate your bill (see the worked example below)
▪ Active Connections: 1 LCU = 3000 active connections per minute
▪ New Connections: 1 LCU = 25 new connections per second
▪ Bandwidth: 1 LCU = 2.22 Mb per second
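
A quick worked example with the rates above (illustrative numbers only): if in a given hour the ALB averages
9,000 active connections per minute (9,000 / 3,000 = 3 LCUs), 25 new connections per second (1 LCU), and
2.22 Mb per second of bandwidth (1 LCU), you are billed for 3 LCUs for that hour, because only the largest of
the three dimensions counts, plus the hourly ALB charge.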

AWS Application Load Balancer vs. Classic Load Balancer

Despite the enhanced functionality of the ALB, there are a few reasons you might elect to use the Classic
Load Balancer for your stack:

● Your application requires Application Controlled Sticky Sessions (rather than duration based)
● Your application needs to distribute TCP/IP requests – this is only supported with the Classic Load
Balancer
If you’re looking for containerized application support, path based routing, better health checks, websocket
support, or HTTP/2 support, the Application Load Balancer is the right choice for you.
How do I use it?

First, you’ll need to create your load balancer. A description of how to do this can be found in AWS’s
documentation. Make sure you make the following selections while setting up the load balancer (a hedged
CLI sketch follows the list):

● Step 1:
o Set ‘Scheme’ to ‘Internet Facing’ and make sure there is a Listener on port 80 (HTTP)
o Select the Default VPC, or if launching the ALB into another VPC, select one where you have
testing servers running or are able to launch servers for testing
● Step 3: Create or use an existing security group that allows inbound HTTP traffic of port 80
● Step 4: Create a new Target Group and select port 80/protocol HTTP
● Step 5: Skip for now and create the load balancer
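
If you prefer the CLI, here is a rough sketch of the same setup; the subnet, security group, and VPC IDs are
placeholders, and the ARNs used by the listener come from the output of the two commands before it:

# Create an internet-facing ALB across two subnets
aws elbv2 create-load-balancer --name my-alb \
    --subnets subnet-0aaa1111 subnet-0bbb2222 --security-groups sg-0123456789abcdef0

# Create a target group on HTTP port 80 in the same VPC
aws elbv2 create-target-group --name my-targets \
    --protocol HTTP --port 80 --vpc-id vpc-0123456789abcdef0

# Add a listener on port 80 that forwards to the target group
aws elbv2 create-listener --load-balancer-arn <load-balancer-arn> \
    --protocol HTTP --port 80 \
    --default-actions Type=forward,TargetGroupArn=<target-group-arn>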
Distribute Traffic to Existing EC2 Instances
Check ALB Configuration

1. Before you begin, verify that your ALB has a Listener set to port 80 – we will test with HTTP requests,
although when using your load balancer in production make sure to only allow interactions via HTTPS port 443
o To verify, go to the EC2 Dashboard > Load Balancers > Select your ALB > Select the ‘Listeners’ tab

2. Next, double check that the Application Load Balancer’s security group allows inbound HTTP and HTTPS
inbound traffic
o To check this, go to the EC2 Dashboard > Load Balancers > Select your ALB > Under ‘Description’
click on ‘Security group’ > Make sure the correct security group is selected and choose the ‘Inbound
Rules’ tab

Send AWS Application Load Balancer Traffic to an EC2 Instance


If you have an existing test server located in the same VPC as your ALB, follow these steps:

1. First, navigate to the EC2 Dashboard > Load Balancers > Select your ALB > Select ‘Targets’ tab > Select
‘Edit’
2. Select the test server(s) you want to distribute traffic to and click ‘Add to Registered’, then click ‘Save’

If you want to create a test server to connect to the ALB, follow these steps:

1. Launch a Linux AMI (see documentation here for more info). While launching, you must ensure that:
o Step 3: You have selected the same VPC as the VPC your ALB was launched into
o Step 3: You have a running web server technology and a sample web page – under ‘Advanced Details’
you can use the following bootstrap script if you are not familiar with this:

#!/bin/bash

yum install httpd -y

service httpd start

mkdir /var/www/html/test

echo 'Your Application Load Balancer test page!' > /var/www/html/test/index.html

o Step 6: Allow inbound HTTP traffic from your ALB’s security group

2. Now that you have a running web server to test with, navigate to the EC2 Dashboard > Load Balancers >
Select your ALB > Select ‘Targets’ tab > Select ‘Edit’
3. Select the test server(s) you want to distribute traffic to and click ‘Add to Registered’, then click ‘Save’ (a
hedged CLI equivalent is sketched below)
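
The ‘Add to Registered’ step maps to the following AWS CLI calls (a hedged sketch; the target group ARN and
instance ID are placeholders):

# Register the test instance with the ALB's target group
aws elbv2 register-targets --target-group-arn <target-group-arn> --targets Id=i-0123456789abcdef0

# Confirm the target passes its health check before relying on it
aws elbv2 describe-target-health --target-group-arn <target-group-arn>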

How we use Kops:

KOPS - Kubernetes Operations

1. Launch one Ubuntu instance and execute below steps to install kops.

2. kops binary download


curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s
https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name
| cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops

3. aws cli setup to enable ubuntu to interact with aws.


apt-get update
apt-get install -y python-pip
pip install awscli

aws --version

4.
- Create an IAM user & make a note of the access key & secret access key
- Create an S3 bucket and enable versioning.

aws configure -- give the access key & secret access key details here

5. kubectl installation (K8s cli)


snap install kubectl --classic
kubectl version
ssh-keygen -f .ssh/id_rsa

6. Environment variables setup -- Remember the cluster name should end with k8s.local.
Update these two variables in .bashrc & .profile in the ~ dir:

export KOPS_CLUSTER_NAME=advith.k8s.local
export KOPS_STATE_STORE=s3://kops-state-advith-bucket

7. Create cluster:: -- This will actually prepare the configuration files.


kops create cluster \
--node-count=1 \
--node-size=t2.micro \
--master-size=t2.micro \
--zones=us-east-1a \
--name=${KOPS_CLUSTER_NAME}

(Optional) If you want to review & edit the cluster configuration:

kops edit cluster --name ${KOPS_CLUSTER_NAME}

If you're okay with the configuration, run the command with --yes as
below:
kops update cluster --name ${KOPS_CLUSTER_NAME} --yes

Output shows like below..::


Cluster is starting. It should be ready in a few minutes.

Suggestions:
* validate cluster: kops validate cluster
* list nodes: kubectl get nodes --show-labels
* ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.advith.k8s.local
* the admin user is specific to Debian. If not using Debian please use
the appropriate user based on your OS.
* read about installing addons at:
https://github.com/kubernetes/kops/blob/master/docs/addons.md.

To validate the cluster::


kops validate cluster
Validating cluster advith.k8s.local

INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-east-1a Master m3.medium 1 1
us-east-1a
nodes Node t2.medium 1 1
us-east-1a
NODE STATUS
NAME ROLE READY
ip-172-20-52-91.ec2.internal node True
ip-172-20-54-252.ec2.internal master True

Your cluster advith.k8s.local is ready

8. deploying dashboard feature::


kubectl apply -f
https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/rec
ommended/kubernetes-dashboard.yaml

Edit master's security group:


- Make sure 443 port is allowed from ANYWHERE in aws security group.

To get admin user's password::


root@ip-172-31-94-144:~# kops get secrets kube --type secret -oplaintext
(or: grep password: ~/.kube/config)
srlmyMCrxeIWfV6fhdElz1alo7lKWTeg

Launch the Kubernetes dashboard URL:

https://<master dns>/ui
Log in with the admin user and the password retrieved above.

-- Select the token option and paste the below one.

Token generation for admin:


root@ip-172-31-94-144:~# kops get secrets admin --type secret -oplaintext
8XmR3sAZCsV38gGCa5OhTYXtOPpBztTR

root@ip-172-31-94-144:~# kubectl cluster-info


Kubernetes master is running at
https://api-advith-k8s-local-df1a7n-1016419148.us-east-1.elb.amazonaws.com
KubeDNS is running at
https://api-advith-k8s-local-df1a7n-1016419148.us-east-1.elb.amazonaws.com/a
pi/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl


cluster-info dump'.

root@ip-172-31-94-144:~# kubectl get nodes -- To get the nodes status


NAME STATUS ROLES AGE VERSION
ip-172-20-59-100.ec2.internal Ready node 8m v1.9.8
ip-172-20-63-182.ec2.internal Ready master 9m v1.9.8

Deploy hello-minikube to validate::


root@ip-172-31-94-144:~# kubectl run hello-minikube
--image=gcr.io/google_containers/echoserver:1.4 --port=8080
deployment.apps/hello-minikube created
root@ip-172-31-94-144:~# kubectl expose deployment hello-minikube
--type=NodePort
service/hello-minikube exposed

kubectl get service

https://master-dns:nodeport/
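
To confirm the service responds, note the node port that kubectl get service reports and request it from outside
the cluster. A sketch with placeholder values, assuming the security group allows traffic to the node port:

kubectl get service hello-minikube
# PORT(S) will show something like 8080:3xxxx/TCP; the second number is the node port
curl http://<master-or-node-public-dns>:<node-port>/
# the echoserver image simply echoes the request details back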

Code Samples :

Index.php:

<!DOCTYPE html>

<html lang="en">

<head>

<meta charset="utf-8">

<meta http-equiv="X-UA-Compatible" content="IE=edge">

<meta name="viewport" content="width=device-width, initial-scale=1">

<link rel="shortcut icon" href="images/favicon.ico" type="image/x-icon">

<title>Cockatiel</title>

<!-- style -->

<link href="css/style.css" rel="stylesheet" type="text/css">

<!-- Bootstrap -->

<link href="css/bootstrap.min.css" rel="stylesheet" type="text/css">

<!-- carousel -->

<link href="css/owl.carousel.css" rel="stylesheet" type="text/css">

<!-- responsive -->

<link href="css/responsive.css" rel="stylesheet" type="text/css">

<!-- font-awesome -->

<link href="css/font-awesome.min.css" rel="stylesheet" type="text/css">


<!-- font-awesome -->

<link href="css/animate.min.css" rel="stylesheet" type="text/css">

<link href="css/popup.css" rel="stylesheet" type="text/css">

</head>

<body class="module-home" data-spy="scroll" data-target=".navbar">

<!-- header -->

<header role="header" class="header-top" id="headere-top">

<div class="header-fixed-wrapper" role="header-fixed">

<div class="container">

<!-- hgroup -->

<hgroup class="row">

<!-- logo -->

<div class="col-xs-5 col-sm-2 col-md-2 col-lg-2">

<h1>

<a href="#headere-top" title="Rooky"><img src="" alt="Cock" title="Rooky"/></a>

</h1>

</div>

<!-- logo -->

<!-- nav -->

<nav role="navigation" class="col-xs-12 col-sm-10 col-md-10 col-lg-10 navbar


navbar-default">

<div class="navbar-header">

<button type="button" class="navbar-toggle collapsed" data-toggle="collapse"


data-target="#navbar" aria-expanded="false" aria-controls="navbar">

<span class="sr-only">Toggle navigation</span>


<span class="icon-bar"></span>

<span class="icon-bar"></span>

<span class="icon-bar"></span>

</button>

</div>

<div id="navbar" class="navbar-collapse collapse">

<ul class="nav navbar-nav">

<li class="active"><a href="#headere-top" title="Home">Home</a></li>

<li><a href="#section-two" title="Features">Introduction</a></li>

<li><a href="more.php" title="Pricing">More About Cockatiel</a></li>

<li><a href="suppl.php" title="Team">Supplies</a></li>

<li><a href="#section-five" title="Contact">Contact</a></li>

<li><a href="#section-six" title="Join Us">Get in Touch</a></li>

</ul>

</div>

</nav>

<!-- nav -->

</hgroup>

<!-- hgroup -->

</div>

</div>

<!-- banner Text -->

<section class="text-center">

<h2>Cockatiel</h2>
<a href="#" class="button-header">Get Strated</a>

</section>

<!-- banner Text -->

<!-- banner image -->

<figure>

<div class="parallax-window item tp-banner-container" data-parallax="scroll"


data-image-src="images/1.jpg"></div>

</figure>

<!-- banner image -->

</header>

<!-- header -->

<!-- main -->

<main role="main" id=" main-wrapper">

<section class="section-two" id="section-two">

<!-- image-content -->

<section>

<div class="col-xs-12 col-sm-6 col-md-6">

<article>

<h2>NATIVE TO</h2>

<p>Grasslands of Australia. Wild cockatiels are predominately grey and white with bright
orange cheek patches, which are brighter on the male.</p>

<ul>

<li>LIFE SPAN:up to about 25 years</li>

<li>AVERAGE ADULT SIZE:11-14 inches long</li>


<li>AGE OF SEXUAL MATURITY: 4-6 months</li>

</ul>

</article>

</div>

<div class="col-xs-12 col-sm-6 col-md-6">

<figure class="row" style="background-image:url('images/image7.jpeg')"></figure>

</div>

</section>

<!-- image-content -->

<div class="clearfix"></div>

<!-- image-content -->

<section>

<div class="col-xs-12 col-sm-6 col-md-6">

<figure class="row" style="background-image:url('images/image6.jpeg')"></figure>

</div>

<div class="col-xs-12 col-sm-6 col-md-6">

<article>

<h2>MALE OR FEMALE?:</h2>

<p>cockatiels are sexually dimorphic, which means males and females are visually
different. Female cockatiels have small white dots on the tops of the tips of their flight feathers and black
barring and stripes on the undersides of their wings and tail.
</p>

<p>However, all cockatiels have the markings of a female until they are six months old,
after that, the males lose these features. Male cockatiels also have brighter orange cheek patches and usually
have a greater ability to talk.</p>

</article>

</div>

</section>
<!-- image-content -->

<div class="clearfix"></div>

<section>

<div class="col-xs-12 col-sm-6 col-md-6">

<article>

<h2>PHYSICAL CHARACTERISTICS</h2>

<p>cockatiels are beautiful, small-bodied birds that have varied colorations from all grey
to all brown. Some popular types are: grey, lutino, white-faced, cinnamon, pied and albino. A single bird can
also be a combination of any of these or a color “mutation” of any one or more. cockatiels have a proud
posture, small dark eyes and a long tail.</p>
<p>All cockatiels have a head crest, which the bird can
raise or lower depending on mood and stimulation. cockatiels are a “powder-down” bird. This means they
have an extra powdery substance in their feathers. This powder can be very irritating to those owners and
handlers with allergies and asthma. If you, or a family member, have these issues, a different parrot species
may be more suited to you. Other “powder down” birds include cockatoos and African greys.

</p>

</article>

</div>

<div class="col-xs-12 col-sm-6 col-md-6">

<figure class="row" style="background-image:url('images/image2.jpeg')"></figure>

</div>

</section>

<div class="clearfix"></div>

<section>

<div class="col-xs-12 col-sm-6 col-md-6">

<figure class="row" style="background-image:url('images/image8.jpeg')"></figure>

</div>

<div class="col-xs-12 col-sm-6 col-md-6">

<article>

<h2>SIGNS OF A HEALTHY BIRD</h2>


<p>A healthy bird should be perky, active and alert with bright, clear eyes, cere(fleshy
nose area) and nares (nostrils). You should observe your bird eating and drinking throughout the day,
although they may prefer to eat when you are eating, as they are flock oriented animals. Your bird should
appear well groomed with neat, bright feathers. The feathers should be mostly smoothed to the body at</p>
<p>rest – not continually fluffed. The feet and legs should be smooth and free of lumps,
scabs and rough scales.
Birds vocalize regularly with chirps, clicks, whistles and learned words. They enjoy communicating and
mimicking. Your
bird should be interested in communicating, but may be shy or intimidated around new people or in new
environments.
A healthy bird is confident and inquisitive, although cautious and aware as well.
</p>
</article>

</div>

</section>

<div class="clearfix"></div>
<section>

<div class="col-xs-12 col-sm-6 col-md-6">

<article>

<h2>SUPPLEMENTS:</h2>

<p>The only supplement that should be necessary if you are feeding your cockatiel
correctly is calcium.</p>
<p>Calcium can usually be offered in the form of a
cuttlebone or calcium treat that attaches to the inside of your bird’s cage. If you notice that your bird does
not touch his cuttlebone or calcium treat, a powdered supplement such as packaged oyster shell can be added
directly to your pet’s food. Follow the directions on the supplement package.</p>
<p>For optimal physiologic use of the calcium you are
giving your bird, the bird should be exposed to UVB light for at
least 3-4 hours a day (or more or less depending on the species). Please see our UVB Lighting for
Companion Birds
and Reptiles handout for further information about UVB light.

</p>

</article>

</div>

<div class="col-xs-12 col-sm-6 col-md-6">

<figure class="row" style="background-image:url('images/image15.jpeg')"></figure>

</div>

</section>
<div class="clearfix"></div>

<section>

<div class="col-xs-12 col-sm-6 col-md-6">

<figure class="row" style="background-image:url('images/image8.jpeg')"></figure>

</div>

<div class="col-xs-12 col-sm-6 col-md-6">

<article>

<h2>WATER ?</h2>

<p>Fresh water must be available to your cockatiel at all times. Because your pet will
often even bathe in his water, it must be checked and changed several times a day. It is recommended that
the bowl be wiped clean with a paper towel at every change to prevent a slimy film from collecting on the
inside of the bowl. This ‘slime’ will harbor bacteria, which can be dangerous for your bird. Thoroughly
wash the bowl with a mild dishwashing detergent and water at least once a day.
All water given to birds for drinking, as well as water used for misting, soaking or bathing must be 100%
free of chlorine and heavy metals. (Not all home water filtration systems remove 100% of the chlorine and
heavy metals from tap water).</p>
<p>We recommend that you use unflavored bottled drinking water or bottled natural spring water; never
use untreated tap water. If tap water is used, you should treat it with a de-chlorinating treatment. If you do
not want to chemically dechlorinate the water, you can leave an open container of tap water out for at least
24 hours.
</p>
</article>

</div>

</section>

<div class="clearfix"></div>

<section>

<div class="col-xs-12 col-sm-6 col-md-6">

<article>

<h2>ENRICHMENT</h2>

<p>In the wild, birds spend most of their day from morning until night foraging for their
food. In our homes in a cage, their food is right at their beaks, no need to go hunting! Because of this, it is
very easy for our pet birds to become bored and lazy. Since these animals are so intelligent, it is a horrible
sentence to be banished to a cage with nothing to do. </p>
<p>“Enrichment” is important because it will keep your
cockatiel’s mind busy!
At least three different types of toys should be available to your bird in his cage at one time. cockatiels enjoy
shiny, wooden, rope, foraging, and plastic toys. It is very important to purchase toys made specifically for
birds as they are much more
</p>
<p>likely to be safer in construction and material. Birds can be poisoned by dangerous metals, such as lead
or zinc. They can also chew off small pieces of improperly manufactured “toys” and ingest them, which of
course can lead to a variety of health problems. Be sure to include “foraging” toys. These types of toys
mimic the work that a bird might do to find food in the wild.

</p>

</article>

</div>

<div class="col-xs-12 col-sm-6 col-md-6">

<figure class="row" style="background-image:url('images/image15.jpeg')"></figure>

</div>

</section>
<div class="emplty"></div>

<div class="clearfix"></div>

<section>

<div class="col-xs-12 col-sm-6 col-md-6">

<figure class="row" style="background-image:url('images/image19.jpeg')"></figure>

</div>

<div class="col-xs-12 col-sm-6 col-md-6">

<article>

<h2>HOUSING & ENVIRONMENT</h2>

<p>Cockatiels need a clean, warm, mentally stimulating environment.


A single bird’s cage can be about 18” x 18” x 24”. Two birds should have a cage no smaller than 28”x 24”x
36”. The basic rule of thumb is the bigger the better!
The spacing between the bars of the cage should be no wider than 3/8 inch to a ½ inch. If the bars are too far
apart, your crafty bird is very likely to try to squeeze through them and get stuck.
The cage should be placed in a family centered room where the bird(s) will feel a part of the “flock”;
however the back of the cage should be positioned against a wall to provide security. Your cockatiel will feel
threatened and nervous if it is in direct traffic</p>

<p>At least three clean bowls should be ready for use: one for fresh water, one for seed/pellets and one for
fresh foods.</p>
<p>Your bird may appreciate a cage cover for nighttime. The cover can block out any extraneous light and
create a more secure sleeping place. Be careful not to use any fabrics for your cover that your bird might
catch his claws or beak in, or that he might pull strings from and eat.</p>
</article>

</div>

</section>

<div class="clearfix"></div>

<br>
<section class="section-five" id="section-five">

<div class="container">

<header role="title-page" class="text-center">

<h2>IF PROBLEMS ARISE, CALL YOUR AVIAN VETERINARIAN IMMEDIATELY!<br/></h2>


<p><ul style="color:#fff;">
• Fluffed feathers, missing patches of feathers, feathers being
purposely plucked.<br>
• Evidence that your bird has stopped grooming him/herself.<br>
• Bird sitting still and low on perch with a puffed up appearance, drooping wings<br>
• may also stay at bottom of cage.<br>
• Beak swelling or unusual marks on cere.<br>
• Nasal discharge, eye discharge, wheezing or coughing.<br>
• Any change in stools including color or consistency.<br>
• Loss of appetite.<br>
• Favoring of one foot, holding a wing differently, presence of any blood.<br>
</ul>

</p>

</header>
<!-- subscribe -->

<div class="subscribe-form">

<div class="ntify_form">

<form method="post" action="php/subscribe.php" name="subscribeform"


id="subscribeform">

<input name="email" type="email" id="subemail" placeholder="Email Address">

<button type="submit" name="" value="Submit">

Subscribe <i class="fa fa-envelope" aria-hidden="true"></i></button>

</form>

<!-- subscribe message -->

<div id="mesaj"></div>

<!-- subscribe message -->

</div>

</div>

<!-- subscribe -->

</div>

</section>

<!-- section-five -->

<!-- section-six -->

<section class="section-six" id="section-six">

<div class="container">

<header role="title-page" class="text-center">

<h4>Get in touch</h4>

<h2>Have any questions? Our team will happy to<br/>answer your questionss.</h2>

</header>

<!-- contact-form -->


<div class="contact-form">

<div id="message"></div>

<form method="post" action="php/contactfrom.php" name="cform" id="cform">

<div class="col-md-6 col-lg-6 col-sm-6">

<input name="name" id="name" type="text" placeholder="Full Name">

</div>

<div class="col-md-6 col-lg-6 col-sm-6">

<input name="email" id="email" type="email" placeholder="Email Address">

</div>

<div class="clearfix"></div>

<textarea name="comments" id="comments" cols="" rows="" placeholder="Question in


Detail"></textarea>

<div class="clearfix"></div>

<input name="" type="submit" value="Send mail">

<div id="simple-msg"></div>

</form>

</div>

<!-- contact-form -->

<div class="clearfix"></div>

</div>

<!-- map -->

<div class="map-wrapper">

<div id="surabaya"></div>

</div>

<!-- map -->

</section>

<!-- section-six -->


<!-- footer -->

<footer role="footer" class="footer text-center">

<div class="container">

<!-- socil-icons -->

<section role="socil-icons" class="socil-icons">

<a href="#"><i class="fa fa-twitter" aria-hidden="true"></i></a>

<a href="#"><i class="fa fa-facebook" aria-hidden="true"></i></a>

<a href="#"><i class="fa fa-linkedin" aria-hidden="true"></i></a>

<a href="#"><i class="fa fa-google-plus" aria-hidden="true"></i></a>

</section>

<!-- socil-icons -->

<!-- nav -->

<nav role="footer-nav">

<a href="#">Terms of Use </a>

<a href="#">Privacy Policy</a>

</nav>

<!-- nav -->

<p class="copy">&copy; 2018 name. All rights reserved. Made with by <a
href="http://sousukeinfosolutions.com/" target="_blank">Sousuke</a></p>

</div>

</footer>

<!-- footer -->

</main>

<!-- main -->


<!-- jQuery (necessary for Bootstrap's JavaScript plugins) -->

<script src="js/jquery.min.js" type="text/javascript"></script>

<script src="js/parallax.min.js" type="text/javascript"></script>

<script type="text/javascript">

$('.parallax-window').parallax({});

</script>

<script src="js/main.js" type="text/javascript"></script>

<script src="js/owl.carousel.js" type="text/javascript"></script>

<script src="https://maps.googleapis.com/maps/api/js?v=3.exp&sensor=false"></script>

<script src="js/maps.js" type="text/javascript"></script>

<script type="text/javascript" src="js/jquery.mb.YTPlayer.js"></script>

<script type="text/javascript" src="js/video.js"></script>

<script src="js/custom.js" type="text/javascript"></script>

<script src="js/jquery.magnific-popup.min.js" type="text/javascript"></script>

<script src="js/jquery.contact.js" type="text/javascript"></script>

<script src="js/bootstrap.min.js" type="text/javascript"></script>

<script src="js/html5shiv.min.js" type="text/javascript"></script>

</body>

</html>

Main.js:

$(document).ready(function() {

//#HEADER
var slideHeight = $(window).height();

$('#headere-top figure .item').css('height',slideHeight);

$(window).resize(function(){'use strict',

$('#headere-top figure .item').css('height',slideHeight);

});

//Scroll Menu

$(window).on('scroll', function(){

if( $(window).scrollTop()>600 ){

$('.header-top .header-fixed-wrapper').addClass('navbar-fixed-top animated fadeInDown');

} else {

$('.header-top .header-fixed-wrapper').removeClass('navbar-fixed-top animated fadeInDown');

}

});

$(window).scroll(function(){

if ($(this).scrollTop() > 200) {

$('#menu').fadeIn(500);

} else {

$('#menu').fadeOut(500);

}
});

// Navigation Scroll

$(window).scroll(function(event) {

Scroll();

});

$('.navbar-collapse ul li a').on('click', function() {

$('html, body').animate({scrollTop: $(this.hash).offset().top - 1}, 1000);

return false;

});

// User define function

function Scroll() {

var contentTop = [];

var contentBottom = [];

var winTop = $(window).scrollTop();

var rangeTop = 200;

var rangeBottom = 500;

$('.navbar-collapse').find('.scroll a').each(function(){

contentTop.push( $( $(this).attr('href') ).offset().top);

contentBottom.push( $( $(this).attr('href') ).offset().top + $( $(this).attr('href')


).height() );

})

$.each( contentTop, function(i){

if ( winTop > contentTop[i] - rangeTop ){

$('.navbar-collapse li.scroll')

.removeClass('active')
.eq(i).addClass('active');

}

})

};

// affix

var width = $(window).width();

var top = $('.tp-banner-container').length == 0 ? -1 : $('.section-one').offset().top - $('.navbar').height() * 2;

$('.navbar').affix({

offset: {

top: top

, bottom: function () {

return (this.bottom = $('.footer').outerHeight(true))

}

}

});

var owl = $("#owl-demo");

owl.owlCarousel({

itemsCustom : [

[0, 1],

[450, 1],

[600, 1],

[700, 1],

[1000, 1],
[1200, 1],

[1400, 1],

[1600, 1]

],

navigation : true,

autoPlay : 3000,

});

$('.popup-youtube, .popup-vimeo, .popup-gmaps').magnificPopup({

disableOn: 700,

type: 'iframe',

mainClass: 'mfp-fade',

removalDelay: 160,

preloader: false,

fixedContentPos: false

});

});

References :

● https://github.com/kubernetes/kops/blob/master/docs/cli/kops_create_secret_encryptionconfig.md
● https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/
● https://github.com/kubernetes/kops/blob/master/nodeup/pkg/model/kube_apiserver.go#L61
● https://github.com/georgebuckerfield/kops/blob/master/pkg/apis/kops/cluster.go#L162
