DevOps Notes - 2
DevOps is the combination of cultural philosophies, practices, and tools that increases an
organization's ability to deliver applications and services at high velocity: evolving and improving
products at a faster pace than organizations using traditional software development and
infrastructure management processes.
Tools to achieve this:
● GIT
● Maven
● Jenkins
● Ansible
● Docker
● Kubernetes
DevOps tool : GIT
Git is an Open Source Distributed Version Control System. Now that’s a lot of words to define Git.
· Version Control System: The code which is stored in Git keeps changing as more code is added.
Also, many developers can add code in parallel. So Version Control System helps in handling this by
maintaining a history of what changes have happened. Also, Git provides features like branches
and merges, which I will be covering later.
· Distributed Version Control System: Git has a remote repository which is stored in a server and a
local repository which is stored in the computer of each developer. This means that the code is not
just stored in a central server, but the full copy of the code is present in all the developers’
computers. Git is a Distributed Version Control System since the code is present in every
developer’s computer. I will explain the concept of remote and local repositories later in this
article.
Case :
Suppose you wrote some source code. It was working fine, but you kept changing it as new requirements came in. A few minutes later you changed it again, and again... and again.
Now you are frustrated: your current code is broken and all you want is your initial code back. But you can't get it, because you never saved the previous versions. All you can do is *cry in a corner*.
…………..final_source_code50.cpp
"I have saved all versions of my code." But as you can see, this solution is a new problem in itself: you end up with dozens of copies like the file above.
Now suppose a different scenario: you are working on a team project, and your team members are working from remote locations. How will you all work on the same source code in real time?
A repository is a data structure used by a VCS to store metadata for a set of files and/or directories. It stores the set of files as well as the history of changes made to those files.
Commands :
Changes already added to the index, as well as new files, will be kept.
git stash - How to Save Your Changes Temporarily
There are lots of situations where a clean working copy is recommended or even required: when merging branches, when pulling from a remote, or simply when checking out a different branch.
The "git stash" command can help you to (temporarily but safely) store your
uncommitted local changes - and leave you with a clean working copy.
$ git status
modified: index.php
modified: css/styles.css
If you have to switch context - e.g. because you need to work on an urgent bug - you need to get these changes out of the way. You shouldn't just commit them, of course, because the work is only half-done.
$ git stash
Your working copy is now clean: all uncommitted local changes have been saved on this kind of "clipboard" that Git's Stash represents. You're ready to start your new task. Later, when you want to continue where you left off, you can restore the saved state easily:
$ git stash pop
The "pop" flag will reapply the last saved state and, at the same time, delete its
representation on the Stash (in other words: it does the clean-up for you).
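A minimal end-to-end stash workflow might look like this (branch names are illustrative):
$ git stash              # save uncommitted changes and clean the working copy
$ git stash list         # show everything currently on the stash
$ git checkout hotfix    # switch context, e.g. to fix an urgent bug
... fix and commit the bug ...
$ git checkout my-feature
$ git stash pop          # reapply the last saved state and drop it from the stash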
Your first assignment is updating a card component. When people look for
cupcakes to buy, each is in one of these cards. So you go to the repo, pull the
most recent version of the master branch, create a new branch from that one,
and get to work!
A few commits later, you're all set. The card looks nicer, all the tests pass, and
you've even improved the mobile layout. All that's left is to merge your feature
branch back into master branch so it goes live!
But while you were working, other people merged their own changes into the master branch - for example:
§ Someone else secretly embezzled money through the store's bank records
All these changes make you worry. What if someone merged a change that
affects or overlaps with the ones you made? It could lead to bugs in the cupcake
website! If you look at the different changes made, one does! (Another change
should be reported to the police, but that's actually less important). Is there a
safe way to merge your changes without risking any conflicts, and missing out on
all the other changes made?
Situations like these are a big example of when you'd want to rebase.
Rebasing takes all your branch's commits and adds them on top of the latest master commit (commit #5 in this example) instead of the older commit you branched from (commit #1). If you consider commit #1 as the "base" of your branch, you're changing that base to the most recent one, commit #5. Hence why it's called rebasing!
Okay, so HOW do I Rebase something?
So you've got this great card component for Cupid's Cupcakes. Now that you
know what a rebase is, let's look at the how in more detail.
First, make sure you have the most up-to-date version of the branch you're
rebasing on. Let's keep assuming it's the master branch in this example. Run git
checkout master to, y'know, check it out, and then run git pull to get the most
recent version. Then check out your branch again - here it'd be with git checkout updated-card or something similar.
A straightforward rebase has a pretty simple command structure: git rebase
<branch>. branch is the one you're rebasing off of. So here you'd run git rebase
master. Assuming there's no conflicts, that's all the rebase needs!
The rebase itself technically removes your old commits and makes new commits
identical to them, rewriting the repo's commit history. That means pushing the
rebase to the remote repo will need some extra juice. Using git push --force will do
the trick fine, but a safer option is git push --force-with-lease. The latter will alert
you of any upstream changes you hadn't noticed and prevent the push. This way
you avoid overwriting anyone else's work, so it's the safer option.
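Putting those steps together, the happy-path sequence looks roughly like this, using the branch names from this example:
git checkout master
git pull
git checkout updated-card
git rebase master
git push --force-with-lease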
With all that, your rebase is now complete! However, rebases won't always go so
smoothly...
Thankfully, git makes this very easy. During the rebase, git adds each commit
onto the new base one by one. If it reaches a commit with a conflict, it'll pause
the rebase and resume once it's fixed.
If you've dealt with merge conflicts before, rebase conflicts are handled
essentially the same way. Running git status will tell you where the conflicts are,
and the two conflicting sections of code will be next to each other so you can
decide how to fix them.
Once everything is fixed, add and commit the changes like you would a normal
merge conflict. Then run git rebase --continue so git can rebase the rest of your
commits. It'll pause for any more conflicts, and once they're set you just need to
push --force-with-lease.
There's two lesser-used options you could also use. One is git rebase --abort, which
would bring you back to before you started the rebase. It's useful for
unexpected conflicts that you can't rush a decision for. Another is git rebase --skip,
which skips over the commit causing the conflict altogether. Unless it's an
unneeded commit and you're feeling lazy, you likely won't use it much.
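When a conflict does pause the rebase, the resolution loop is roughly the following (the file name is illustrative):
git status                  # see which files are conflicted
# edit the conflicting file(s) and keep the code you want
git add card-component.css
git rebase --continue       # or: git rebase --abort / git rebase --skip
git push --force-with-lease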
In this post, I'm going to dig into one of the more mysterious aspects of Git --- the 3-Tree Architecture.
To get started, let's first take a look at how a typical VCS works. Usually, a VCS works by having two places to store things:
1. Working Copy
2. Repository
The working copy is the place where you make your changes. Whenever you edit something, it is saved in the working copy, which is physically stored on disk.
The repository is the place where all the versions of the files, commits, logs etc. are stored. It is also saved on disk and has its own set of files.
You cannot, however, change or get the files in a repository directly; in order to retrieve a specific file from there, you have to check it out.
Checking out is the process of getting files from the repository into your working copy. This is because you can only edit files when they are in your working copy. When you are done editing a file, you save it back to the repository by committing it.
Committing is the process of putting files back from the working copy into the repository.
3-Tree Architecture vs 2-Tree Architecture
In this process, the Working Copy and the Repository are each saved on disk as a series of folders and files, like a tree: a folder represents a branch and a file represents a leaf. Hence, this architecture is called the 2-Tree Architecture, because you have two trees in there -- the Working Copy and the Repository. The most famous VCS with this kind of architecture is Subversion (SVN).
Now that you know what a 2-Tree Architecture looks like, it is interesting to note that Git has a different one: it is instead powered by 3 trees!
Git also has the Working Copy and the Repository, but it adds an extra tree in between.
As you can see above, there is a new tree called Staging. What is this for?
This is one of the fundamental differences of Git that sets it apart from other VCSs: the Staging tree (usually termed the staging area) is a place where you prepare all the things that you are going to commit.
In Git, you don't move things directly from your working copy to the repository; you have to stage them first. One of the main benefits of this is, let's say:
You made changes to 10 files; 2 of the files are related to fixing an alignment issue on a webpage, while the other 8 changed files are related to a database connection. With staging, you can group the 2 alignment files into one commit and the 8 database files into another, keeping each commit focused on a single change.
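A sketch of how that separation looks on the command line (file names are illustrative):
$ git add header.css layout.css          # stage only the alignment fix
$ git commit -m "Fix header alignment"
$ git add db_connect.php db_config.php   # now stage the database-related changes
$ git commit -m "Rework database connection"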
Maven :
What is Maven?
Maven is a project management and comprehension tool that provides developers a complete build lifecycle framework. A development team can automate the project's build infrastructure in almost no time, as Maven uses a standard directory layout and a default build lifecycle.
In a multiple-development-team environment, Maven can set up the way to work as per standards in a very short time. As most project setups are simple and reusable, Maven makes the life of a developer easy while creating reports, checks, builds and testing automation setups.
Maven tutorial provides basic and advanced concepts of apache maven technology. Our maven tutorial is
developed for beginners and professionals.
Maven is a powerful project management tool that is based on the POM (project object model). It is used for project builds, dependency management and documentation.
There are many problems that we face during project development. They are discussed below:
1) Adding a set of JARs to each project: In the case of Struts, Spring or Hibernate frameworks, we need to add a set of JAR files to each project, and we must include all the dependencies of those JARs as well.
2) Creating the right project structure: We must create the right project structure for servlets, Struts etc., otherwise the project will not be executed.
3) Building and deploying the project: We have to build and deploy the project so that it may work.
What it does?
Maven simplifies the above mentioned problems. It mainly does the following tasks:
1. It makes a project easy to build
2. It provides a uniform build process (a Maven build can be shared across all Maven projects)
3. It provides project information (log document, cross-referenced sources, mailing list, dependency list, unit test reports etc.)
4. It is easy to migrate to new features of Maven
A build tool takes care of everything needed for building a project. It does the following:
o Generates source code (if auto-generated code is used)
o Generates documentation from source code
o Compiles source code
o Packages compiled code into a JAR or ZIP file
o Installs the packaged code in the local repository, server repository, or central repository
POM stands for Project Object Model. It is the fundamental unit of work in Maven. It is an XML file that resides in the base directory of the project as pom.xml.
The POM contains information about the project and various configuration details used by Maven to build the project(s).
It contains default values for most projects. Examples of this are the build directory, which is target; the source directory, which is src/main/java; the test source directory, which is src/test/java; and so on.
The POM was renamed from project.xml in Maven 1 to pom.xml in Maven 2. Instead of having a
maven.xml file that contains the goals that can be executed, the goals or plugins are now configured in
the pom.xml. When executing a task or goal, Maven looks for the POM in the current directory. It reads
the POM, gets the needed configuration information, then executes the goal.
The POM also contains the goals and plugins to be executed.
Some of the configurations that can be specified in the POM are the following −
● project dependencies
● plugins
● goals
● build profiles
● project version
● developers
● mailing list
Before creating a POM, we should first decide the project group (groupId), its name (artifactId) and its
version as these attributes help in uniquely identifying the project in repository.
POM Example
<project xmlns = "http://maven.apache.org/POM/4.0.0"
   xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation = "http://maven.apache.org/POM/4.0.0
   http://maven.apache.org/xsd/maven-4.0.0.xsd">
   <modelVersion>4.0.0</modelVersion>
   <groupId>com.companyname.project-group</groupId>
   <artifactId>project</artifactId>
   <version>1.0</version>
</project>
It should be noted that there should be a single POM file for each project.
● All POM files require the project element and three mandatory fields: groupId, artifactId,
version.
● Projects notation in repository is groupId:artifactId:version.
● Minimal requirements for a POM −
1 Project root
This is the project root tag. You need to specify the basic schema settings such as the Apache schema and w3.org specification.
2 Model version
Model version should be 4.0.0.
3 groupId
This is the Id of the project's group. It is generally unique within an organization or a project. For example, a banking group com.company.bank has all bank-related projects.
4 artifactId
This is the Id of the project. It is generally the name of the project, for example consumer-banking. Along with the groupId, the artifactId defines the artifact's location within the repository.
5 version
This is the version of the project. Along with the groupId, it is used within an artifact's repository to separate versions from each other. For example −
com.company.bank:consumer-banking:1.0
com.company.bank:consumer-banking:1.1
artifactId is the name of the jar without the version. If you created it, then you can choose whatever name you want, with lowercase letters and no strange symbols. If it's a third-party jar, you have to take the name of the jar as it's distributed, e.g. maven, commons-math.
groupId identifies your project uniquely across all projects, so we need to enforce a naming schema. It has to follow the package name rules, which means it has to be at least a domain name you control, and you can create as many subgroups as you want. See the guidance on package names for more information, e.g. org.apache.maven, org.apache.commons.
Install Maven :
Most AWS instances have a JRE installed, which means we can run Java programs on them. However, some AWS instances do not have a JDK, so we need to install the JDK manually by running:
yum install java-devel
Now we can use javac to compile our java classes.
We can use vim to create a Java file on the instance. For example, we want to create a Java class named "Hello.java" that prints "Hello World!" on the screen. We can run the following command:
vim Hello.java
It will create a file named "Hello.java" under the current directory and switch to the viewing mode:
What you will see after running "vim Hello.java"
By typing i, you enter the editing mode. Now type the following code:
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}
Then press Esc to leave the editing mode, input :w to save the content and input :q to quit.
Edit, escape, write and quit
Now we are ready to compile the java file and run it:
javac Hello.java
java Hello
Now we can install maven :
wget http://mirror.olnevhost.net/pub/apache/maven/maven-3/3.0.5/binaries/apache-maven-3.0.5-bin.tar.gz
Basically, just go to the Maven site, find the version of Maven you want and the file type, and use that mirror in the wget statement above.
Afterwards the process is easy:
1. Run the wget command from the directory you want to extract Maven to.
2. Run the following to extract the tar:
tar xvf apache-maven-3.0.5-bin.tar.gz
3. Move Maven to /usr/local/apache-maven:
mv apache-maven-3.0.5 /usr/local/apache-maven
4. Next add the environment variables to your ~/.bashrc file:
export M2_HOME=/usr/local/apache-maven
export M2=$M2_HOME/bin
export PATH=$M2:$PATH
5. Execute this command:
source ~/.bashrc
6. Verify everything is working with the following command:
mvn -version
And :
mkdir project
cd project/
mvn archetype:generate
mvn clean package
Alternatively, take the code from Git, add a pom.xml file to it and build the code.
Let's do this process manually.
There are always pre and post phases to register goals, which must run prior to, or after a particular phase.
When Maven starts building a project, it steps through a defined sequence of phases and executes goals,
which are registered with each phase.
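As a rough sketch of how that sequencing behaves, running a later phase of the default lifecycle automatically runs every earlier phase first:
mvn package    # runs validate, compile, test, then package
mvn install    # runs everything up to package, then installs the artifact into the local repository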
cd test/
git clone https://github.com/vinayRaj98/vinayproject
ls
mvn
mvn archetype:generate
ls
cd test1/
ls
cd ..
ls
mvn clean package
Maven Repository
A Maven repository is a directory of packaged JAR files together with their pom.xml files. Maven searches for dependencies in the repositories. There are 3 types of Maven repositories:
1. Local Repository
2. Central Repository
3. Remote Repository
If a dependency is not found in any of these repositories, Maven stops processing and throws an error.
The Maven local repository is located on your local system. It is created by Maven when you run any Maven command.
By default, the Maven local repository is the %USER_HOME%/.m2 directory. For example: C:\Users\SSS IT\.m2.
Update location of Local Repository
We can change the location of maven local repository by changing the settings.xml file. It is located
in MAVEN_HOME/conf/settings.xml, for example: E:\apache-maven-3.1.1\conf\settings.xml.
Let's see the default code of settings.xml file.
settings.xml
...
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <!-- localRepository
   | The path to the local repository maven will use to store artifacts.
   |
   | Default: ${user.home}/.m2/repository
  <localRepository>/path/to/local/repo</localRepository>
  -->
  ...
</settings>
Now change the path to local repository. After changing the path of local repository, it will look like this:
settings.xml
...
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository>e:/mavenlocalrepository</localRepository>
  ...
</settings>
As you can see, now the path of local repository is e:/mavenlocalrepository.
Maven central repository is located on the web. It has been created by the apache maven community itself.
The path of central repository is: http://repo1.maven.org/maven2/.
The central repository contains a lot of common libraries that can be viewed by this
url http://search.maven.org/#browse.
A Maven remote repository is also located on the web. Some libraries, such as the JBoss libraries, can be missing from the central repository, so we need to define a remote repository in the pom.xml file.
Let's see the code to add the jUnit library in pom.xml file.
pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
  http://maven.apache.org/xsd/maven-4.0.0.xsd">

  <modelVersion>4.0.0</modelVersion>

  <groupId>com.javatpoint.application1</groupId>
  <artifactId>my-application1</artifactId>
  <version>1.0</version>
  <packaging>jar</packaging>

  <name>Maven Quick Start Archetype</name>
  <url>http://maven.apache.org</url>

  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.8.2</version>
      <scope>test</scope>
    </dependency>
  </dependencies>

</project>
You can search for any artifact at mvnrepository.com.
What are Maven Plugins?
Maven is actually a plugin execution framework where every task is actually done by plugins. Maven plugins are generally used to create JAR and WAR files, compile code, run unit tests, and create project documentation and reports. A plugin goal is invoked with:
mvn [plugin-name]:[goal-name]
For example, a Java project can be compiled with the maven-compiler-plugin's compile goal by running the following command.
mvn compiler:compile
Plugin Types
Maven provides the following two types of plugins −
1 Build plugins
They execute during the build process and should be configured in the <build/>
element of pom.xml.
2 Reporting plugins
They execute during the site generation process and they should be configured in
the <reporting/> element of the pom.xml.
1 clean
Cleans up target after the build. Deletes the target directory.
2 compiler
Compiles Java source files.
3 surefire
Runs the JUnit unit tests. Creates test reports.
4 jar
Builds a JAR file from the current project.
5 war
Builds a WAR file from the current project.
6 javadoc
Generates Javadoc for the project.
7 antrun
Runs a set of Ant tasks from any phase of the build.
Next, open the command console and go to the folder containing pom.xml and execute the
following mvn command.
C:\MVN\project>mvn clean
Maven will start processing and displaying the clean phase of clean life cycle.
Versions :
Release Artifacts
These are specific, point-in-time releases. Released artifacts are considered to be solid, stable, and perpetual
in order to guarantee that builds which depend upon them are repeatable over time. Released JAR artifacts
are associated with PGP signatures and checksums that verify both the authenticity and integrity of the binary
software artifact. The Central Maven repository stores release artifacts.
Snapshot Artifacts
Snapshots capture a work in progress and are used during development. A snapshot artifact has a version ending in -SNAPSHOT (such as "1.3.0-SNAPSHOT"), which is expanded into a timestamp when the artifact is deployed. For example, a snapshot artifact for commons-lang 1.3.0 might have the name commons-lang-1.3.0-20090314.182342-1.jar.
● Direct: These are dependencies defined in your pom.xml file under the <dependencies/>section.
● Transitive: These are dependencies that are dependencies of your direct dependencies.
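To see both kinds at once, Maven can print the resolved dependency graph; direct dependencies appear at the top level and transitive ones are nested underneath:
mvn dependency:tree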
Jenkins
The example below uses the Scripted style; the main points to note are explained after the listing.
Scripted style

properties([
    parameters([
        gitParameter(branch: '',
                     branchFilter: 'origin/(.*)',
                     defaultValue: 'master',
                     description: '',
                     name: 'BRANCH',
                     quickFilterEnabled: false,
                     selectedValue: 'NONE',
                     sortMode: 'NONE',
                     tagFilter: '*',
                     type: 'PT_BRANCH')
    ])
])

def SERVER_ID = "artifactory"

node {
    stage("Checkout") {
        git branch: "${params.BRANCH}", url: 'https://github.com/sergiosamu/blog-pipelines.git'
    }
    stage("Build") {
        try {
            withMaven(maven: "Maven363") {
                sh "mvn package"
            }
        } catch (error) {
            currentBuild.result = 'UNSTABLE'
        }
    }
    stage("Publish artifact") {
        def server = Artifactory.server "$SERVER_ID"

        def uploadSpec = """{
            "files": [
                {
                    "pattern": "target/blog-pipelines*.jar",
                    "target": "libs-snapshot-local/com/sergiosanchez/pipelines/"
                }
            ]
        }"""

        server.upload(uploadSpec)
    }
}
● Input parameters are defined in the properties section, which sits outside the pipeline's main structure, so they are defined in the same way in both Scripted and Declarative Pipelines.
● The first element of a Declarative Pipeline is pipeline; this is the easiest way to identify a Declarative Pipeline (a Scripted Pipeline starts with node instead).
● In a Declarative Pipeline, arbitrary Groovy such as a try/catch structure is not allowed; the custom step warnError is used instead to manage the build state.
1. Declarative Syntax
2. Scripted Syntax
Declarative Syntax
Declarative pipeline syntax offers an easy way to create pipelines. It contains a predefined
structure to create Jenkins pipelines. It gives you the ability to control all aspects of a pipeline
execution in a simple, straightforward manner.
Scripted Syntax
The scripted pipeline was the first syntax of the Jenkins pipeline. We use groovy script inside node
scope to define scripted pipeline, so it becomes a little bit difficult to start with for someone who
doesn’t have an idea about groovy. Scripted Jenkins pipeline runs on the Jenkins master with the
help of a lightweight executor. It uses very few resources to translate the pipeline into atomic
commands. Both declarative and scripted syntax are different from each other and we define them
differently.
Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    tools {
        maven 'maven_3_5_0'
    }
    stages {
        stage('checkout stage') {          // stage name assumed; the original listing was truncated
            steps {
                git 'https://github.com/SaumyaBhushan/Selenium_Test_Automation.git'
            }
        }
        stage('compile stage') {
            steps {
                sh 'mvn clean compile'     // assumed step body for the compile stage
            }
        }
        stage('testing stage') {
            steps {
                sh 'mvn test'              // assumed step body for the testing stage
            }
        }
    }
}
Jenkinsfile (Scripted Pipeline)
node {
    // 'tools' is a Declarative directive; in a Scripted Pipeline the Maven installation is resolved with 'tool'
    def mvnHome = tool 'maven_3_5_0'
    stage('Checkout') {
        git 'https://github.com/SaumyaBhushan/Selenium_Test_Automation.git'
    }
    stage('Compile') {
        sh "${mvnHome}/bin/mvn clean compile"   // assumed step body
    }
    stage('Test') {
        sh "${mvnHome}/bin/mvn test"            // assumed step body
    }
}
● always (for example, sending an email to the team after the build runs)
● success
● failure
post {
    always {
        // this condition always gets executed, no matter whether the build failed or succeeded
    }
    success {
        // execute scripts that are only relevant when the build succeeds
    }
    failure {
        // execute scripts that are only relevant when the build fails
    }
}

when {
    expression {
        BRANCH_NAME == 'dev'   // assumed condition, matching the branch check described below
    }
}
steps {
    ...
}
This part of the stage will only execute if the current branch is dev; if not, it is just skipped. You can also apply a boolean expression in case you only want to run that step when some condition is true, like CODE_CHANGES == true.
Ansible :
Anyone who works as an operations engineer has witnessed a bunch of issues with the manual configuration approach and with repetitive tasks that are time-consuming. How many times have key resources left the company while the new engineer struggles to understand the environment and to start performing tasks without escalation? Server configuration is a very broad landscape which needs to be maintained properly from the beginning. Organization standards will be documented in KM, but people will forget or fail to follow them due to resource crunch, laziness and skill gaps. Scripting is one of the options to automate and maintain the configuration, but it's not an easy task.
What is Ansible?
A configuration management and orchestration tool is the solution to eliminate all these problems in system management. Ansible is one of the most popular ones and is supported by Red Hat. Ansible is a simple IT automation engine that saves time and makes you more productive. Human resources can spend more time on innovation to make operations more cost-effective.
Why Ansible?
Ansible works by connecting to your servers using "SSH" and pushing out small programs, called "Ansible modules", to them. Using these modules and playbooks (small pieces of YAML code), we should be able to perform a specific task on all the Ansible clients. The specific task could be installing packages, restarting services, rebooting servers etc. There are lots of things that you can do using Ansible.
Ansible – Tower
● Provisioning
● Configuration Management
● App Deployment
● Continuous Delivery
● Security & Compliance
● Orchestration
Supported virtualization platforms include:
● VMware
● Red Hat Enterprise Virtualization (RHEV)
● Libvirt
● Xenserver
● Vagrant
Passwordless :
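A minimal sketch of setting up passwordless SSH from the Ansible control node to a managed machine (the user and IP are illustrative, taken from the demo machines listed later):
ssh-keygen -t rsa                # generate a key pair on the control node
ssh-copy-id ubuntu@3.88.11.7     # copy the public key to the managed node
ssh ubuntu@3.88.11.7             # should now log in without a password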
Install ansible :
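A typical installation on an Ubuntu control node might look like this (use yum/dnf on RHEL-based systems):
sudo apt-get update
sudo apt-get install -y ansible
ansible --version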
---
- hosts: testServer
  become: yes
  vars:
    oracle_db_port_value: 1521
  tasks:
    - name: Ensure the database service is running   # assumed task; the original listing omitted the module arguments
      service:
        name: oracle-db                               # placeholder service name
        state: started
The above is a sample playbook where we are trying to cover the basic syntax of a playbook. Save the above content in a file as test.yml. YAML syntax needs to follow the correct indentation, so one needs to be a little careful while writing it.
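The playbook would then be run against the inventory with something like the following (the inventory path is illustrative):
ansible-playbook -i /etc/ansible/hosts test.yml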
The Different YAML Tags
Let us now go through the different YAML tags. The different tags are described below −
name
This tag specifies the name of the Ansible playbook. As in what this playbook will be doing. Any logical
name can be given to the playbook.
hosts
This tag specifies the lists of hosts or host group against which we want to run the task. The hosts field/tag
is mandatory. It tells Ansible on which hosts to run the listed tasks. The tasks can be run on the same
machine or on a remote machine. One can run the tasks on multiple machines and hence hosts tag can have
a group of hosts’ entry as well.
vars
Vars tag lets you define the variables which you can use in your playbook. Usage is similar to variables in
any programming language.
tasks
All playbooks should contain tasks, i.e. a list of actions to be executed. Each task has a name field, which works as help text for the user; it is not mandatory but proves useful in debugging the playbook. Each task internally links to a piece of code called a module and specifies the module that should be executed, along with the arguments that are required for that module.
1- Inventories :
-The inventory lists the hosts Ansible manages; a non-default inventory file can be passed via the " -i " option
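A small inventory sketch in INI format (the group name matches the one used in the facts example below; the IPs are the demo machines listed later):
[mainhosts]
3.88.11.7
54.224.110.214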
########################################################
2- Modules :
-Ansible has many Modules which can be run directly or via playbooks against hosts " local and remote "
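For example, modules can be run ad hoc straight from the command line (the group name is from the inventory sketch above):
ansible all -m ping
ansible mainhosts -m yum -a "name=httpd state=present" --become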
########################################################
3- Variables :
-Variables are how we deal with the differences between systems since not all systems are the same
4- Ansible Facts :
-Facts are details Ansible gathers about the managed hosts; gathering can be disabled per play:
- hosts: mainhosts
  gather_facts: no
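Facts can also be inspected directly with the setup module:
ansible mainhosts -m setup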
########################################################
5- Playbooks :
-Playbooks are the instruction manuals, and the hosts are the raw materials
-A play maps a group of hosts to a set of tasks; a playbook is a list of plays
########################################################
6- Configuration Files :
-The default config file is /etc/ansible/ansible.cfg
-We can use a different config file by placing an ansible.cfg in the project directory or pointing the ANSIBLE_CONFIG environment variable at it
########################################################
7- Templates :
-A job template is the definition and set of parameters for running an Ansible job
-Job templates are useful to execute the same job many times
########################################################
8- Handlers :
-Handlers are tasks that run only when notified by another task (for example, restarting a service after its configuration changes)
########################################################
9- Roles :
- One file for tasks, one for variables, one for handlers
-They are the method to package up tasks, handlers and everything else
########################################################
10- Vault :
- Used to encrypt sensitive data such as passwords and whole files
- Encrypted content is unlocked at runtime with the command line flag " --ask-vault-pass " or " --vault-password-file "
Roles:
If left unchecked, our Playbooks can quickly become large and unwieldy.
Ansible uses the concept of Roles to address this problem. By following a standardised
directory structure, we can keep our Ansible project structure nice and tidy, along with
maintaining some semblance of sanity and order.
So really, a Role is nothing more than a more specific Playbook. We already have
covered the basics of Playbooks, and a Role takes the concept of a Playbook and
applies it to a common set of tasks to achieve a specific thing.
Let us imagine we have a list of common tasks we always want to perform on every
server we manage.
We want to install some software (git, curl, htop, whatever), we want our authorised
SSH keys to be set so we don't have to muck about with passwords, and it'd be quite
nice if our User accounts were created, along with our standard home directory
structure.
We could think about these as our 'Common' tasks.
With a Common role defined, we can then remove all that standard set up from every
Playbook we have, and simply request that the Playbook includes that Role when it
executes.
I would strongly encourage you to browse the Ansible Galaxy as it's really what piqued my interest in Ansible, when compared to other similar infrastructure automation systems like Chef and Puppet.
Ansible Galaxy is like the Apple App Store for geeks. Think of any 'thing' you might
want to play with - Redis, Jenkins, Blackfire, Logstash, NodeJs - and there will, more
likely than not, be a Role created by a friendly community member to download and
use with almost no effort.
Of course, life is never that easy, and many of the Roles on Ansible Galaxy will need
at least a basic grasp of the software you are trying to install, before you can make the
most of the Role in your own setup.
Again, we will come back to Ansible Galaxy in more detail in a future video.
We covered using ansible-galaxy init your-role-name-here in the video on using Git with
Ansible, but not a lot was said on why we were doing that.
Using ansible-galaxy init will generate us a standardised directory structure for our
Role.
We can then populate the individual files and folders with our own data, and bonza, we
have a working Role.
I recommend following the method I used in the Git with Ansible video as we likely
won't be working locally on the server, so won't have easy access to
the ansible-galaxy command every time we want to create a new Role.
Simply, if we create our Role using ansible-galaxy then all the files we need
- /tasks/main.yml, /handlers/main.yml, vars/main.yml, etc, will be created for us already,
and we can just copy and paste our existing Playbook entries into the files and life will
be good.
Creating those files by hand isn't a problem - nor does ansible-galaxy do anything
particularly special - it's just a time saver.
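For reference, the layout that ansible-galaxy init generates looks roughly like this (role name illustrative):
roles/apache/
  defaults/main.yml
  files/
  handlers/main.yml
  meta/main.yml
  tasks/main.yml
  templates/
  tests/
  vars/main.yml
  README.md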
Example
In the video we migrate from having all our Apache set up - tasks and handlers - in one
Playbook, and instead, we start moving these blocks of config (the tasks block, the
handlers block) into their own yaml
files: roles/apache/tasks/main.yml and roles/apache/handlers/main.yml.
The actual file contents don't change, only their locations.
We start off with our original apache-playbook.yml - and remember, this is merely a
demonstration, this won't get you a working Apache install at this stage:
First of all, we need to create our Apache Role:
cp -R roles/__template__ roles/apache
That creates us the desired role structure.
apache-playbook.yml contents:
---
- hosts: all
  vars:
    - website_dir: /var/www/oursite.dev/web
  tasks:
    - name: Install Apache                        # assumed task body; the original listing only kept the notify lines
      apt: name=apache2 update_cache=yes state=latest
      notify:
        - start apache
    - name: Create website directory              # assumed task body
      file: path={{ website_dir }} state=directory
      notify:
        - restart apache
  handlers:
    - name: start apache
      service: name=apache2 state=started
    - name: restart apache
      service: name=apache2 state=restarted
First, extract the tasks section into roles/apache/tasks/main.yml, then extract the handlers section into roles/apache/handlers/main.yml.
In both cases we don't need the tasks: / handlers: section heading itself.
roles/apache/handlers/main.yml then contains just the handler entries, for example:
---
- name: start apache
  service: name=apache2 state=started
- name: restart apache
  service: name=apache2 state=restarted

The slimmed-down apache-playbook.yml now simply references the role:
---
- hosts: all
  vars:
    - website_dir: /var/www/oursite.dev/web
  roles:
    - apache
We could go further and extract the variables out also - it's the exact same process.
Running our apache-playbook.yml still works exactly the same as before the change:
ansible-playbook apache-playbook.yml -k -K -s
But note, the output changes ever so slightly.
TASK: [apache | Install Apache] *********************************************
ok: [127.0.0.1]
*cut*
Notice the [apache | Install Apache] line - this now takes the format of:
[role name | task name]
This can be helpful for identifying where things are as your Playbooks grow in size
and complexity.
Demo machines :
3.93.186.254(ansible)
chmod 400 ec2ami.pem
ssh -i "ec2ami.pem" ubuntu@ec2-3-93-186-254.compute-1.amazonaws.com
3.88.11.7(machine1) docker
chmod 400 LinuxDemo.pem
ssh -i "LinuxDemo.pem" ubuntu@ec2-3-88-11-7.compute-1.amazonaws.com
54.224.110.214(machine2)
chmod 400 awsdemo.pem
ssh -i "awsdemo.pem" ubuntu@ec2-54-224-110-214.compute-1.amazonaws.com
---
- hosts: all
  become: yes
  tasks:
    - name: Ensure Chrony (for time synchronization) is installed.
      yum:
        name: chrony
        state: present
What is Docker ? – Docker is a containerization platform that packages your application and all its
dependencies together in the form of a docker container to ensure that your application works seamlessly in
any environment.
What is a Container ? – A Docker container is a standardized unit which can be created on the fly to deploy a particular application or environment. It could be an Ubuntu container, a CentOS container, etc. to fulfill the requirement from an operating system point of view. Also, it could be an application-oriented container like a CakePHP container or a Tomcat-Ubuntu container etc.
Let’s understand it with an example:
A company needs to develop a Java application. In order to do so, the developer will set up an environment with a Tomcat server installed in it. Once the application is developed, it needs to be tested by the tester. Now the tester will again set up a Tomcat environment from scratch to test the application. Once the application testing is done, it will be deployed on the production server. Again, production needs an environment with Tomcat installed on it, so that it can host the Java application. If you look closely, the same Tomcat environment setup is done thrice. There are some issues that I have listed below with this approach:
1) There is a loss of time and effort.
2) There could be a version mismatch in different setups i.e. the developer & tester may have installed
tomcat 7, however the system admin installed tomcat 9 on the production server.
Now, I will show you how Docker container can be used to prevent this loss.
In this case, the developer will create a tomcat docker image ( A Docker Image is nothing but a blueprint to
deploy multiple containers of the same configurations ) using a base image like Ubuntu, which is already
existing in Docker Hub (Docker Hub has some base docker images available for free) . Now this image can
be used by the developer, the tester and the system admin to deploy the tomcat environment. This is how
docker container solves the problem.
However, now you might think that this can be done using virtual machines as well. But there is a catch if you choose to use a virtual machine. Let's see a comparison between a virtual machine and a Docker container to understand this better.
● Size – This parameter will compare Virtual Machine & Docker Container on the resources they utilize.
● Startup – This parameter will compare on the basis of their boot time.
● Integration – This parameter will compare on their ability to integrate with other tools with ease.
Size
The following image explains how Virtual Machine and Docker Container utilizes the resources allocated to
them.
Start-Up
When it comes to start-up, a virtual machine takes a lot of time to boot up because the guest operating system needs to start from scratch, which will then load all the binaries and libraries. This is time consuming and will prove very costly at times when quick startup of applications is needed. In the case of a Docker container, there is no guest OS to boot, so the container starts in a matter of seconds.
Advantages :
CI Efficiency
Docker enables you to build a container image and use that same image across every step of the deployment
process. A huge benefit of this is the ability to separate non-dependent steps and run them in parallel. The
length of time it takes from build to production can be sped up notably.
Rapid Deployment
Docker manages to reduce deployment to seconds. This is due to the fact that it creates a container for every
process and does not boot an OS. Data can be created and destroyed without worry that the cost to bring it
up again would be higher than what is affordable.
Multi-Cloud Platforms
One of Docker’s greatest benefits is portability. Over the last few years, all major cloud computing providers,
including Amazon Web Services (AWS) and Google Compute Platform (GCP), have embraced Docker’s
availability and added individual support. Docker containers can be run inside an Amazon EC2 instance,
Google Compute Engine instance, Rackspace server, or VirtualBox, provided that the host OS supports
Docker. If this is the case, a container running on an Amazon EC2 instance can easily be ported between
environments, for example to VirtualBox, achieving similar consistency and functionality. Also, Docker
works very well with other providers like Microsoft Azure, and OpenStack, and can be used with various
configuration managers like Chef, Puppet, and Ansible, etc.
Isolation
Docker ensures your applications and resources are isolated and segregated. Docker makes sure each
container has its own resources that are isolated from other containers. You can have various containers for
separate applications running completely different stacks. Docker helps you ensure clean app removal since
each application runs on its own container. If you no longer need an application, you can simply delete its
container. It won’t leave any temporary or configuration files on your host OS.
On top of these benefits, Docker also ensures that each application only uses resources that have been
assigned to them. A particular application won’t use all of your available resources, which would normally
lead to performance degradation or complete downtime for other applications.
Security
The last of these benefits of using docker is security. From a security point of view, Docker ensures that
applications that are running on containers are completely segregated and isolated from each other, granting
you complete control over traffic flow and management. No Docker container can look into processes
running inside another container. From an architectural point of view, each container gets its own set of
resources ranging from processing to network stacks.
● Images can be version-controlled as well, and we build an image once which then runs in all environments.
Docker Architecture :
The basic architecture of Docker consists of 3 major parts:
1. Docker Host
2. Docker Client
3. Registry - dockerhub
Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers.
The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote
Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX sockets or a
network interface.
Docker Registries
The Registry is a stateless, highly scalable server-side application that stores and lets you distribute Docker
images. You can create your own image or you can use public registries namely, Docker Hub and Docker
Cloud. Docker is configured to look for images on Docker Hub by default.
We can create our own registry in fact.
So, when we run the command docker pull or docker run, the required images are pulled from your
configured registry. When you use the docker push command, your image is pushed to your configured
registry.
We will look deep into docker commands in the next blog.
Docker Objects
Docker images, containers, networks, volumes, plugins etc are the Docker objects.
In Dockerland, there are images and there are containers. The two are closely related, but distinct. But it all
starts with a Dockerfile.
A Dockerfile is a file that you create which in turn produces a Docker image when you build it. It contains a
bunch of instructions which informs Docker HOW the Docker image should get built.
You can relate it to cooking. In cooking you have recipes. A recipe lets you know all of the steps you must
take in order to produce whatever you’re trying to cook.
A Dockerfile is a recipe or a blueprint for building Docker images and the act of running a separate build
command produces the Docker image from the recipe.
– Docker Images
An image is an inert, immutable, file that’s essentially a snapshot of a container. It is simply a template
with instructions for creating a Docker container.
Images are stored in a Docker registry such as registry.hub.docker.com. Because they can become quite
large, images are designed to be composed of layers of other images, allowing a minimal amount of data to
be sent when transferring images over the network.
– Docker Containers
To use a programming metaphor, if an image is a class, then a container is an instance of a class—a runtime
object. They are lightweight and portable encapsulations of an environment in which to run applications.
You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a
container to one or more networks, attach storage to it, or even create a new image based on its current state.
Docker Installation :
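A quick sketch of installing Docker on an Ubuntu demo machine (package names vary by distribution; Docker's convenience script is an alternative):
sudo apt-get update
sudo apt-get install -y docker.io
sudo systemctl enable --now docker
docker --version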
Dockerfile Syntax :
These syntax structures are based on rules, clearly and explicitly defined, and they are to be followed by the programmer to interface with whichever computer application (e.g. interpreters, daemons etc.) uses or expects them. If a script (i.e. a file containing a series of tasks to be performed) is not correctly structured (i.e. has wrong syntax), the computer program will not be able to parse it. Parsing can roughly be understood as going over an input with the end goal of understanding what is meant.
Dockerfiles use simple, clean, and clear syntax which makes them strikingly easy to create and use. They are
designed to be self explanatory, especially because they allow commenting just like a good and properly
written application source-code.
Dockerfile Commands :
Note: As explained in the previous section (Dockerfile Syntax), all these commands are to be listed (i.e. written) successively, inside a single plain text file (i.e. the Dockerfile), in the order you would like them performed (i.e. executed) by the Docker daemon to build an image. However, some of these commands (e.g. MAINTAINER) can be placed anywhere you see fit (but always after the FROM command), as they do not constitute any execution but rather the value of a definition (i.e. just some additional information).
ADD
The ADD command gets two arguments: a source and a destination. It basically copies the files from the
source on the host into the container's own filesystem at the set destination. If, however, the source is a URL
(e.g. http://github.com/user/file/), then the contents of the URL are downloaded and placed at the
destination.
Example:
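An illustrative usage, with placeholder paths:
ADD /my_app_folder /my_app_folder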
CMD
The CMD command, unlike RUN, is not executed during the build but when a container is created from the image; it sets the default command for the container. To clarify: an example for CMD would be running an application upon creation of a container, where the application was already installed using RUN (e.g. RUN apt-get install ...) inside the image. This default command set with CMD is used when no command is passed during container creation; if one is passed, it overrides the CMD.
Example:
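An illustrative default command:
CMD ["echo", "Hello docker!"]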
ENTRYPOINT
ENTRYPOINT argument sets the concrete default application that is used every time a container is created
using the image. For example, if you have installed a specific application inside an image and you will use
this image to only run that application, you can state it with ENTRYPOINT and whenever a container is
created from that image, your application will be the target.
If you couple ENTRYPOINT with CMD, you can remove "application" from CMD and just leave
"arguments" which will be passed to the ENTRYPOINT.
Example:
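An illustrative pairing of ENTRYPOINT with CMD, where the CMD value is just the default argument:
ENTRYPOINT ["wc"]
CMD ["--help"]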
ENV
The ENV command is used to set the environment variables (one or more). These variables consist of “key
value” pairs which can be accessed within the container by scripts and applications alike. This functionality
of Docker offers an enormous amount of flexibility for running programs.
Example:
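For instance, setting a placeholder variable:
ENV SERVER_WORKS 4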
FROM
FROM directive is probably the most crucial amongst all others for Dockerfiles. It defines the base image to
use to start the build process. It can be any image, including the ones you have created previously. If a
FROM image is not found on the host, Docker will try to find it (and download) from the Docker Hub or
other container repository. It needs to be the first command declared inside a Dockerfile.
Example:
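For instance, basing the build on a stock Ubuntu image:
FROM ubuntu:16.04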
MAINTAINER
One of the commands that can be set anywhere in the file - although it would be better if it was declared on
top - is MAINTAINER. This non-executing command declares the author, hence setting the author field of
the images. It should come nonetheless after FROM.
Example:
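For instance, with a placeholder author name:
MAINTAINER authors_name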
RUN
The RUN command is the central executing directive for Dockerfiles. It takes a command as its argument
and runs it to form the image. Unlike CMD, it actually is used to build the image (forming another layer on
top of the previous one which is committed).
Example:
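An illustrative example, installing a package into the image:
RUN apt-get update && apt-get install -y nginx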
USER
The USER directive is used to set the UID (or username) which is to run the container based on the image
being built.
Example:
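For instance, running as a non-root UID (the value is illustrative):
USER 751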
WORKDIR
The WORKDIR directive is used to set where the command defined with CMD is to be executed.
Example:
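An illustrative example, combined with a CMD that runs from that directory:
WORKDIR /app
CMD ["ls", "-la"]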
Examples:
############################################################
# Dockerfile to build MongoDB container images
# Based on Ubuntu
############################################################
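The body of such a Dockerfile might look roughly like this sketch (the package name and paths are illustrative of an older Ubuntu/MongoDB setup, not a tested build):
FROM ubuntu
MAINTAINER authors_name
RUN apt-get update && apt-get install -y mongodb    # illustrative package name
RUN mkdir -p /data/db                                # default MongoDB data directory
EXPOSE 27017
ENTRYPOINT ["/usr/bin/mongod"]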
Custom :
FROM ubuntu:16.04
LABEL maintainer='Vinay'
EXPOSE 80
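Assuming this is saved as Dockerfile in the current directory, it could be built and inspected like this (the image tag is illustrative):
docker build -t custom-image .
docker images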
Network drivers
Docker’s networking subsystem is pluggable, using drivers. Several drivers exist by default, and provide
core networking functionality:
● bridge: The default network driver. If you don’t specify a driver, this is the type of network you are
creating. Bridge networks are usually used when your applications run in standalone
containers that need to communicate. See bridge networks.
● host: For standalone containers, remove network isolation between the container and the Docker host, and use the host’s networking directly. host is only available for swarm services on Docker 17.06 and higher. See use the host network.
● overlay: Overlay networks connect multiple Docker daemons together and enable swarm services
to communicate with each other. You can also use overlay networks to facilitate communication
between a swarm service and a standalone container, or between two standalone containers on
different Docker daemons. This strategy removes the need to do OS-level routing between these
containers. See overlay networks.
● none: For this container, disable all networking. Usually used in conjunction with a custom network
driver. none is not available for swarm services. See disable container networking.
● Network plugins: You can install and use third-party network plugins with Docker. These plugins are
available from Docker Hub or from third-party vendors. See the vendor’s documentation for
installing and using a given network plugin.
docker network ls
docker run -itd --name=alpine1 alpine
docker network ls
docker network inspect <network-id>
Create one more container (alpine2) in the same way, then ping alpine2 from alpine1 to check connectivity on the default bridge network.
Custom :
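A sketch of creating and using a user-defined bridge network (the names are illustrative):
docker network create --driver bridge my-net
docker run -itd --name=alpine3 --network=my-net alpine
docker run -itd --name=alpine4 --network=my-net alpine
docker exec -it alpine3 ping -c 2 alpine4    # user-defined networks resolve container names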
Docker Compose :
services:
  web:
    image: nginx
  db:
    image: mysql
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_USER=user
      - MYSQL_PASSWORD=password
      - MYSQL_DATABASE=demodb
$ cat docker-compose.yml
version: '3'
services:
  nginx-1:
    image: nginx
    hostname: nginx-1.docker
    network_mode: bridge
  linux-1:
    image: alpine
    hostname: linux-1.docker
    command: sh -c 'apk add --update bind-tools && tail -f /dev/null'
    network_mode: bridge # this way it can resolve the other containers' names from inside, e.g. nginx-2
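Either compose file would be started from the directory containing docker-compose.yml with:
docker-compose up -d
docker-compose ps
docker-compose down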
Deploy Kubernetes application on AWS Using Kops :
Introduction :
Containers are a method of operating system virtualization that allow you to run an application and its
dependencies in resource-isolated processes. Containers allow you to easily package an application's code,
configurations, and dependencies into easy to use building blocks that deliver environmental consistency,
operational efficiency, developer productivity, and version control. Containers can help ensure that
applications deploy quickly, reliably, and consistently regardless of deployment environment. Containers
also give you more granular control over resources giving your infrastructure improved efficiency. Running
containers in the AWS Cloud allows you to build robust, scalable applications and services by leveraging the
benefits of the AWS Cloud such as elasticity, availability, security, and economies of scale. You also pay only for the resources you use.
Any containerized application typically consists of multiple containers. There are containers for the
application itself, a database, possibly a web server, and so on. During development, it’s normal to build and
test this multi-container application on a single host. This approach works fine during early dev and test
cycles but becomes a single point of failure for production, when application availability is critical.
In such cases, a multi-container application can be deployed on multiple hosts. Customers may need an
external tool to manage such multi-container, multi-host deployments. Container orchestration frameworks
provides the capability of cluster management, scheduling containers on different hosts, service discovery
and load balancing, crash recovery, and other related functionalities. There are multiple options for container
orchestration on Amazon Web Services: Amazon ECS, Docker for AWS, and DC/OS.
Another popular option for container orchestration on AWS is Kubernetes. There are multiple ways to run
a Kubernetes cluster on AWS. This multi-part blog series provides a brief overview and explains some of
these approaches in detail. This first post explains how to create a Kubernetes cluster on AWS using kops.
Problem Statement :
Before the use of Docker containers, we were using VM clusters to host our applications. Here are some of the disadvantages of that approach:
With standalone hosts, this can be addressed with one or two reboots. In a clustered environment, however,
the coordination across many virtual host servers is difficult. Upgrading the actual virtual host software,
however, can be an even greater challenge because of cluster node interactions and the different supporting
software versions (i.e., System Center Virtual Machine Manager, Data Protection Manager, etc.).
Generally, vendors provide detailed, step by step instructions for many of these complex updates; and, for
the most part, they go smoothly.
System Architecture :
Docker: One of the basic requirements of a node is Docker. Docker is responsible for pulling down and running containers from Docker images.
Kube-Proxy: Every node in the cluster runs a simple network proxy. Using this proxy, the cluster routes requests to the correct container on a node.
Kubelet: It is an agent process that runs on each node. It is responsible for managing pods and their containers. It deals with pod specifications, which are defined in YAML or JSON format. Kubelet takes the pod specifications and checks whether the pods are running healthily or not.
Flannel: It is an overlay network that works by assigning a range of subnet addresses. It is used to assign IPs to each pod running in the cluster and to enable pod-to-pod and pod-to-service communication.
Context diagram :
HLL diagram :
A K8s setup consists of several parts, some of them optional, some mandatory for the whole system to
function.
Master Node
The master node is responsible for the management of Kubernetes cluster. This is the entry point of all
administrative tasks. The master node is the one taking care of orchestrating the worker nodes, where the
actual services are running.
API server
The API server is the entry point for all the REST commands used to control the cluster. It processes the
REST requests, validates them, and executes the bound business logic. The result state has to be persisted
somewhere, and that brings us to the next component of the master node.
etcd storage
etcd is a simple, distributed, consistent key-value store. It’s mainly used for shared configuration and service
discovery.
It provides a REST API for CRUD operations as well as an interface to register watchers on specific nodes,
which enables a reliable way to notify the rest of the cluster about configuration changes.
An example of data stored by Kubernetes in etcd is jobs being scheduled, created and deployed, pod/service
details and state, namespaces and replication information, etc.
scheduler
The deployment of configured pods and services onto the nodes happens thanks to the scheduler component.
The scheduler has the information regarding resources available on the members of the cluster, as well as the
ones required for the configured service to run and hence is able to decide where to deploy a specific
service.
controller-manager
Optionally you can run different kinds of controllers inside the master node. controller-manager is a daemon
embedding those.
A controller uses apiserver to watch the shared state of the cluster and makes corrective changes to the
current state to change it to the desired one.
An example of such a controller is the Replication controller, which takes care of the number of pods in the
system. The replication factor is configured by the user, and it's the controller’s responsibility to recreate a
failed pod or remove an extra-scheduled one.
Other examples of controllers are endpoints controller, namespace controller, and serviceaccounts controller,
but we will not dive into details here.
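You can see this reconciliation behaviour for yourself. A minimal sketch, assuming kubectl is configured and the public nginx image is reachable:
kubectl create deployment nginx --image=nginx
kubectl scale deployment nginx --replicas=3
kubectl get pods                      # three pods are running
kubectl delete pod <one-of-the-pods>  # simulate a failed pod
kubectl get pods                      # the controller has already started a replacement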
Worker node
The pods are run here, so the worker node contains all the necessary services to manage the networking
between the containers, communicate with the master node, and assign resources to the scheduled
containers.
Docker
Docker runs on each of the worker nodes, and runs the configured pods. It takes care of downloading the
images and starting the containers.
kubelet
kubelet gets the configuration of a pod from the apiserver and ensures that the described containers are up
and running. This is the worker service that’s responsible for communicating with the master node.
It also communicates with etcd, to get information about services and write the details about newly created
ones.
kube-proxy
kube-proxy acts as a network proxy and a load balancer for a service on a single worker node. It takes care
of the network routing for TCP and UDP packets.
kubectl
And the final bit – a command line tool to communicate with the API service and send commands to the
master node.
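A few typical kubectl commands, just to show how it drives the API server (a sketch, assuming a working kubeconfig; the manifest, pod and service names are hypothetical):
kubectl get nodes
kubectl get pods --all-namespaces
kubectl apply -f my-app.yaml        # my-app.yaml is a hypothetical manifest
kubectl logs <pod-name>
kubectl describe service <service-name>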
So I guess developers got pissed off by the whole “Linux” or “Windows” thing, and they were like: “Let's
build something that can run both Windows and Linux applications, regardless of the operating system or
environment”, and then containers were invented!
The idea is that containers will isolate “our code” from what is “not our code”, to make sure the “works on my
machine” situation doesn't happen.
Some people might argue that we might not even need Docker if we choose the right cloud platform and
use PaaS (Platform as a Service) offerings, which give us a higher level of abstraction. Others might argue
that this way you are kind of tied to that cloud provider, which again might not necessarily be a bad thing,
considering what they offer these days!
Also, some cloud providers do not natively support both Linux and Windows, so with containers you can
put your code in a container and then move that container to your cloud provider if you like.
The following image from Docker website explains the differences Between Containers and VMs:
Deeper dive into Virtualization
As mentioned before, virtualization is handled by the hypervisor: it manages the CPU's “root mode” and, by
some sort of interception, creates an illusion for the VM's operating system that it has its own hardware. If
you are interested to know who did this first, to send them a “Thank You” note, it was VMware.
So ultimately, the hypervisor facilitates running multiple separate operating systems on the same hardware.
All the VM operating systems (known as guest OSes) go through the boot process to load the kernel and all
the other OS modules, just like regular computers, hence the slowness! And if you are curious about the
isolation between the guests and hosts, I should say you can have pretty strict security between them.
Containers take a different approach: instead of virtualizing hardware, the Linux kernel places processes
into namespaces (for process IDs, network interfaces, mounts, and so on). This basically creates a level of
isolation where each process only has access to the resources in its own namespace.
This is how Docker works! Each container runs in its own namespace, and all containers use the same kernel
to manage the namespaces.
Now, because the kernel is the control plane here and knows which namespace was assigned to a process, it
makes sure that the process can only access resources in its own namespace.
As you can see, the isolation level in Docker is not as strong as with VMs, since all containers share the
same kernel; for the same reason, containers are much lighter than VMs.
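You can observe this namespace isolation with a quick experiment. A sketch, assuming Docker is installed and the small alpine image can be pulled:
ps aux | wc -l               # many processes are visible on the host
docker run --rm alpine ps    # only the container's own processes are visible
docker run --rm alpine hostname   # the container has its own hostname, set per namespace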
Docker really makes it easier to create, deploy, and run applications by using containers, and containers
allow a developer to package up an application with all of the parts it needs, such as libraries and other
dependencies, and ship it all out as one package. By doing so, the developer can be assured that the
application will run on any other Linux machine regardless of any customized settings that machine might
have that could differ from the machine used for writing and testing the code.
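In practice that packaging step is just a build, a push and a run. A minimal sketch, assuming a project directory containing a Dockerfile; the image name is made up and the push assumes you are logged in to a registry:
docker build -t myshop/web:1.0 .          # bake code, libraries and dependencies into one image
docker push myshop/web:1.0                # ship it to a registry
docker run -d -p 8080:80 myshop/web:1.0   # run the exact same package on any Docker host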
● 2/3 of companies that try using Docker, adopt it. Most companies who will adopt have already done
so within 30 days of initial production usage, and almost all the remaining adopters convert within
60 days.
● Docker adoption is up 30% in the last year.
● Adopters multiply their containers by five. Docker adopters approximately quintuple the average
number of running containers they have in production between their first and tenth month of usage.
● PHP, Ruby, Java, and Node are the main programming frameworks used in containers.
Docker can also help facilitate cost savings by dramatically reducing the infrastructure resources required:
by its nature, fewer resources are necessary to run the same application. Because of these reduced
infrastructure requirements, organizations are able to save on everything from server costs to the employees
needed to maintain them. Docker allows engineering teams to be smaller and more effective.
As we mentioned, Docker containers allow you to commit changes to your Docker images and version
control them. For example, if you perform a component upgrade that breaks your whole environment, it is
very easy to roll back to a previous version of your Docker image. This whole process can be tested in a few
minutes. Docker is fast, allowing you to quickly make replications and achieve redundancy. Also, launching
Docker images is as fast as running a machine process.
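A rough sketch of that rollback idea, using hypothetical image tags and container names:
docker build -t myapp:1.1 .          # image containing the component upgrade
docker run -d --name web myapp:1.1
# the upgrade breaks the environment - roll back to the previous image version
docker stop web && docker rm web
docker run -d --name web myapp:1.0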
CI Efficiency
Docker enables you to build a container image and use that same image across every step of the deployment
process. A huge benefit of this is the ability to separate non-dependent steps and run them in parallel. The
length of time it takes from build to production can be sped up notably.
Rapid Deployment
Docker manages to reduce deployment to seconds. This is due to the fact that it creates a container for every
process and does not boot an OS. Data can be created and destroyed without worry that the cost to bring it
up again would be higher than what is affordable.
If you need to perform an upgrade during a product’s release cycle, you can easily make the necessary
changes to Docker containers, test them, and implement the same changes to your existing containers. This
sort of flexibility is another key advantage of using Docker. Docker really allows you to build, test, and
release images that can be deployed across multiple servers. Even if a new security patch is available, the
process remains the same. You can apply the patch, test it, and release it to production.
Multi-Cloud Platforms
One of Docker's greatest benefits is portability. Over the last few years, all major cloud computing providers,
including Amazon Web Services (AWS) and Google Cloud Platform (GCP), have embraced Docker's
availability and added individual support. Docker containers can be run inside an Amazon EC2 instance,
Google Compute Engine instance, Rackspace server, or VirtualBox, provided that the host OS supports
Docker. If this is the case, a container running on an Amazon EC2 instance can easily be ported between
environments, for example to VirtualBox, achieving similar consistency and functionality. Also, Docker
works very well with other providers like Microsoft Azure, and OpenStack, and can be used with various
configuration managers like Chef, Puppet, and Ansible, etc.
Isolation
Docker ensures your applications and resources are isolated and segregated. Docker makes sure each
container has its own resources that are isolated from other containers. You can have various containers for
separate applications running completely different stacks. Docker helps you ensure clean app removal since
each application runs on its own container. If you no longer need an application, you can simply delete its
container. It won’t leave any temporary or configuration files on your host OS.
On top of these benefits, Docker also ensures that each application only uses resources that have been
assigned to them. A particular application won’t use all of your available resources, which would normally
lead to performance degradation or complete downtime for other applications.
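Resource limits are set per container at run time. A sketch with assumed values and names:
# cap this container at half a CPU and 256 MB of RAM
docker run -d --name api --cpus="0.5" --memory="256m" myapp:1.0
docker stats api      # verify the limits and the current usage
# clean removal: the app and its files disappear with the container
docker rm -f api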
Security
The last of these benefits of using docker is security. From a security point of view, Docker ensures that
applications that are running on containers are completely segregated and isolated from each other, granting
you complete control over traffic flow and management. No Docker container can look into processes
running inside another container. From an architectural point of view, each container gets its own set of
resources ranging from processing to network stacks.
A Dockerfile is a script that contains a collection of Dockerfile instructions and operating system commands
(e.g., Linux commands). Before we create our first Dockerfile, you should become familiar with these
Dockerfile instructions.
Below are some dockerfile commands you must know:
FROM
The base image for building the new image. This instruction must be at the top of the Dockerfile.
MAINTAINER
Optional, it contains the name of the maintainer of the image.
RUN
Used to execute a command during the build process of the docker image.
ADD
Copy a file from the host machine into the new Docker image. You can also use a URL for the file; Docker
will then download that file to the destination directory.
ENV
Define an environment variable.
CMD
Defines the default command to run when a container is started from the image (it can be overridden at run time).
ENTRYPOINT
Defines the command that always runs when the container starts; arguments from CMD (or the command line) are appended to it.
WORKDIR
Sets the working directory for the RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it.
USER
Set the user or UID for the container created with the image.
VOLUME
Creates a mount point so that a directory can be shared between the container and the host machine.
Now let's start creating our first Dockerfile.
Step 1 - Installing Docker
ssh root@192.168.1.248
apt-get update
When the installation is finished, start the docker service and enable it to start at boot time:
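The actual install and service commands for this step are not included in these notes. On Ubuntu 16.04 a common approach (an assumption, not the only way) is:
apt-get install -y docker.io
systemctl start docker
systemctl enable docker
docker --version     # confirm the installation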
In this step, we will create a new directory for the dockerfile and define what we want to do with that
dockerfile.
Create a new directory and a new and empty dockerfile inside that directory.
mkdir ~/myimages
cd myimages/
touch Dockerfile
Next, define what we want to do with our new custom image. In this tutorial, I will install Nginx and
PHP-FPM 7 using an Ubuntu 16.04 docker image. Additionally, we need Supervisord, so we can start Nginx
and PHP-FPM 7 both in one command.
Edit the 'Dockerfile' with vim:
vim Dockerfile
On the top of the file, add a line with the base image (Ubuntu 16.04) that we want to use.
#Download base image ubuntu 16.04
FROM ubuntu:16.04
Update the Ubuntu software repository inside the dockerfile with the 'RUN' command.
# Update Ubuntu Software repository
RUN apt-get update
Then install the applications that we need for the custom image. Install Nginx, PHP-FPM and Supervisord
from the Ubuntu repository with apt. Add the RUN commands for Nginx and PHP-FPM installation.
# Install nginx, php-fpm and supervisord from ubuntu repository
RUN apt-get install -y nginx php7.0-fpm supervisor && \
rm -rf /var/lib/apt/lists/*
At this stage, all applications are installed and we need to configure them. We will configure Nginx for
handling PHP applications by editing the default virtual host configuration. We can replace it with our new
configuration file, or we can edit the existing configuration file with the 'sed' command.
In this tutorial, we will replace the default virtual host configuration with a new configuration by using the
'COPY' dockerfile command.
#Define the ENV variable
ENV nginx_vhost /etc/nginx/sites-available/default
ENV php_conf /etc/php/7.0/fpm/php.ini
ENV nginx_conf /etc/nginx/nginx.conf
ENV supervisor_conf /etc/supervisor/supervisord.conf
EXPOSE 80 443
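The COPY step referenced above (and the container start command) is not reproduced in these notes. A hedged sketch of what that part of the Dockerfile might look like, assuming you have prepared files named default (the Nginx vhost), supervisord.conf and start.sh next to the Dockerfile - those file names are assumptions:
# Copy the prepared configuration files into the image (file names are assumed)
COPY default ${nginx_vhost}
COPY supervisord.conf ${supervisor_conf}
COPY start.sh /start.sh
RUN chmod +x /start.sh
# Start supervisord, which in turn starts nginx and php-fpm, when the container runs
CMD ["./start.sh"]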
Save the file and exit.
Here is the outline of the complete Dockerfile, from the base image down to the volume configuration and the exposed ports:
#Download base image ubuntu 16.04
FROM ubuntu:16.04
# Volume configuration
VOLUME ["/etc/nginx/sites-enabled", "/etc/nginx/certs", "/etc/nginx/conf.d", "/var/log/nginx", "/var/www/html"]
EXPOSE 80 443
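From here the usual next step is to build and test the image. A short sketch with assumed image/container names, host directory and ports:
docker build -t nginx_image .
docker run -d -v /webroot:/var/www/html -p 8080:80 --name test nginx_image
curl http://localhost:8080/     # should reach Nginx running inside the container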
Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web
Services (AWS) cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can
develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual
servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to
scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast
traffic.
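Launching a server is a single API call. A sketch using the AWS CLI; the AMI ID, key pair and security group below are placeholders:
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t2.micro \
  --key-name my-key \
  --security-group-ids sg-xxxxxxxx \
  --count 1
aws ec2 describe-instances --filters "Name=instance-state-name,Values=running"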
● AS – Auto Scaling
● Amazon EC2 Auto Scaling helps you ensure that you have the correct number of Amazon EC2
instances available to handle the load for your application. You create collections of EC2 instances,
called Auto Scaling groups. You can specify the minimum number of instances in each Auto Scaling
group, and Amazon EC2 Auto Scaling ensures that your group never goes below this size. You can
specify the maximum number of instances in each Auto Scaling group, and Amazon EC2 Auto
Scaling ensures that your group never goes above this size. If you specify the desired capacity, either
when you create the group or at any time thereafter, Amazon EC2 Auto Scaling ensures that your
group has this many instances. If you specify scaling policies, then Amazon EC2 Auto Scaling can
launch or terminate instances as demand on your application increases or decreases.
● For more information about the benefits of Amazon EC2 Auto Scaling, see Benefits of Auto Scaling.
● The key components of Amazon EC2 Auto Scaling are groups, launch configurations, and scaling
options; each is described in the sections below.
In the traditional IT world, there is a limited number of servers to handle the application load. When the number
of requests increases, the load on the servers also increases, which causes latency and failures.
Amazon Web Services provides the Amazon EC2 Auto Scaling service to overcome these failures. Auto Scaling
ensures that there are sufficient Amazon EC2 instances to run your application. You can create an Auto Scaling
group which contains a collection of EC2 instances. You can specify a minimum number of EC2 instances in
that group, and Auto Scaling will maintain and ensure that minimum number of EC2 instances. You can also
specify a maximum number of EC2 instances in each Auto Scaling group, so that Auto Scaling ensures
instances never go beyond that maximum limit.
You can also specify desired capacity and auto-scaling policies for the Amazon EC2 auto-scaling. By using
the scaling policy, auto-scaling can launch or terminate the EC2 instances depending on the demand.
1. Groups
Groups are the logical groups which contain the collection of EC2 instances with similar characteristics for
scaling and management purpose. Using the auto scaling groups you can increase the number of instances to
improve your application performance and also you can decrease the number of instances depending on the
load to reduce your cost. The auto-scaling group also maintains a fixed number of instances even if an
instance becomes unhealthy.
To meet the desired capacity, the Auto Scaling group launches enough EC2 instances, and it also maintains
these EC2 instances by performing periodic health checks on the instances in the group. If any instance
becomes unhealthy, the Auto Scaling group terminates the unhealthy instance and
launches another instance to replace it. Using scaling policies you can increase or decrease the number of
running EC2 instances in the group automatically to meet the changing conditions.
2. Launch Configuration
The launch configuration is a template used by an Auto Scaling group to launch EC2 instances. You can specify
the Amazon Machine Image (AMI), instance type, key pair, security groups, etc. while creating the launch
configuration. You can also modify the launch configuration after creation. A launch configuration can
be used by multiple Auto Scaling groups.
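Both objects can also be created from the AWS CLI. A sketch with placeholder names, AMI, key pair, security group and subnet IDs:
aws autoscaling create-launch-configuration \
  --launch-configuration-name web-lc \
  --image-id ami-xxxxxxxx \
  --instance-type t2.micro \
  --key-name my-key \
  --security-groups sg-xxxxxxxx

aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-configuration-name web-lc \
  --min-size 1 --max-size 4 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-aaaa,subnet-bbbb"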
3. Scaling Plans
A scaling plan tells Auto Scaling when and how to scale. Amazon EC2 Auto Scaling provides several ways
for you to scale the Auto Scaling group.
Maintaining the current instance level at all times:- You can configure and maintain a specified number of
running instances at all times in the Auto Scaling group. To achieve this, Amazon EC2 Auto Scaling
performs periodic health checks on the running EC2 instances within the Auto Scaling group. If any instance
becomes unhealthy, Auto Scaling terminates that instance and launches a new instance to replace it.
Manual Scaling:- In Manual scaling, you specify only the changes in maximum, minimum, or desired
capacity of your auto scaling groups. Auto-scaling maintains the instances with updated capacity.
Scale based on schedule:- In some cases, you know exactly when your application traffic becomes high,
for example during a limited-time offer or on a particular day with peak load. In such cases, you can scale
your application using scheduled scaling. You create a scheduled action which tells Amazon EC2
Auto Scaling to perform the scaling action at the specified time.
Scale based on demand:- This is the most advanced scaling model: resources scale by using a scaling
policy. You can scale in or scale out your resources based on specific parameters. You create a policy by
defining a parameter such as CPU utilization, memory, network in/out, etc. For example, you can
dynamically scale out your EC2 instances when CPU utilization goes beyond 70%. If CPU utilization
crosses this threshold value, Auto Scaling launches new instances using the launch configuration. You
should specify two scaling policies, one for scaling in (terminating instances) and one for scaling out
(launching instances).
● Target tracking scaling:- Based on the target value for a specific metric, Increase or decrease the current
capacity of the auto scaling group.
● Step scaling:- Based on a set of scaling adjustments, increase or decrease the current capacity of the
group by amounts that vary with the size of the alarm breach.
● Simple scaling:- Increase or decrease the current capacity of the group based on a single scaling
adjustment.
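As an example of the target tracking option above, a policy that keeps average CPU around the utilisation threshold discussed earlier can be attached from the CLI. A sketch; the group and policy names are assumptions:
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name web-asg \
  --policy-name cpu-target-70 \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":70.0}'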
Setup
As a pre-requisite, you need to create an AMI of your application which is running on your EC2 instance.
3. Then, select the instance type which is suitable for your web application and click Next: Configure
details.
4. On the Configure details page, name the launch configuration; you can assign a specific IAM role for
your web application if one is required, and you can also enable detailed monitoring.
5. After that, Add the storage and Security Groups then go for review.
Note: Open the required ports for your application to run.
6. Click on Create launch configuration and choose an existing key pair or create a new key pair.
1. From EC2 console click on Auto Scaling Group which is below the launch configuration. Then click on
create auto scaling group.
2. From the Auto Scaling Group page, you can create the group using either a launch configuration or a
launch template. Here I have created it using a launch configuration (you can also create a new launch
configuration from this page). Since you have already created the launch configuration, you can create the
Auto Scaling group by choosing “Use an existing launch configuration”.
3. After clicking on the next step, you can configure the group name, the group's initial size, and the VPC and
subnets. You can also configure a load balancer with the Auto Scaling group by clicking Advanced Details.
After that, click next to configure the scaling policies.
4. On the scaling policy page, you can specify the minimum and maximum number of instances in this group.
Here you can use a target tracking policy to configure the scaling policies. As the metric type you can specify,
for example, CPU utilisation or Network In/Out, and you can also give the target value. The scaling policy
works based on that target value. You can also disable scale-in from here.
Step scaling works based on an alarm, so first create the alarm by clicking on ‘add new alarm’.
Here the alarm created is based on CPU utilisation above 65%. If CPU utilisation crosses 65%, Auto
Scaling launches new instances based on the step action.
You can specify more step actions based on your load, whereas with a simple policy you can’t differentiate
the action depending on the percentage of CPU utilisation. You also need to configure scale-in policies so
that instances are removed once the traffic becomes low, as this reduces the billing.
5. Next, click on ‘Next: Configure Notification’ to get notifications for launch, terminate, failure, etc. sent
to your mail ID, then enter the tags and click on ‘Create auto scaling group’.
Elastic Load Balancing distributes incoming application or network traffic across multiple targets, such as
Amazon EC2 instances, containers, and IP addresses, in multiple Availability Zones. Elastic Load Balancing
scales your load balancer as traffic to your application changes over time, and can scale to the vast majority
of workloads automatically.
A load balancer distributes workloads across multiple compute resources, such as virtual servers. Using a
load balancer increases the availability and fault tolerance of your applications.
You can add and remove compute resources from your load balancer as your needs change, without
disrupting the overall flow of requests to your applications.
You can configure health checks, which are used to monitor the health of the compute resources so that the
load balancer can send requests only to the healthy ones. You can also offload the work of encryption and
decryption to your load balancer so that your compute resources can focus on their main work.
Elastic Load Balancing supports three types of load balancers: Application Load Balancers, Network Load
Balancers, and Classic Load Balancers. You can select a load balancer based on your application needs. For
more information, see Comparison of Elastic Load Balancing Products.
For more information about using each load balancer, see the User Guide for Application Load Balancers,
the User Guide for Network Load Balancers, and the User Guide for Classic Load Balancers.
You can create, access, and manage your load balancers using any of the following interfaces:
● AWS Management Console— Provides a web interface that you can use to access Elastic Load
Balancing.
● AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS
services, including Elastic Load Balancing, and is supported on Windows, Mac, and Linux. For more
information, see AWS Command Line Interface.
● AWS SDKs — Provides language-specific APIs and takes care of many of the connection details,
such as calculating signatures, handling request retries, and error handling. For more information,
see AWS SDKs.
● Query API— Provides low-level API actions that you call using HTTPS requests. Using the Query
API is the most direct way to access Elastic Load Balancing, but it requires that your application
handle low-level details such as generating the hash to sign the request, and error handling.
Creation of ELB :
I’ve recently received some questions about the AWS Application Load Balancer, what advantages it
provides, and how to monitor it. AWS is already calling the original Elastic Load Balancer its ‘Classic’
Load Balancer, so if you’re anxious to understand why so many are using the ALB over the Classic ELB,
this post is for you.
This post will describe the AWS Application Load Balancer, when to use it, and introduce how to connect it
with your EC2 instances and autoscaling groups. Additional resources on integrating ECS Containers with
the Application Load Balancer are also provided.
If you already have an Application Load Balancer set up and just need to monitor it, check out the Sumo
Logic AWS Application Load Balancer Application . You can sign up for Sumo Logic Free here.
The AWS Application Load Balancer is the newest load balancer technology in the AWS product suite.
Some of the benefits it provides are containerized application support, path-based routing, improved health
checks, WebSocket support, and HTTP/2 support.
Despite the enhanced functionality of the ALB, there are a few reasons you might elect to use the Classic
Load Balancer for your stack:
● Your application requires Application Controlled Sticky Sessions (rather than duration based)
● Your application needs to distribute TCP/IP requests – this is only supported with the Classic Load
Balancer
If you’re looking for containerized application support, path based routing, better health checks, websocket
support, or HTTP/2 support, the Application Load Balancer is the right choice for you.
How do I use it?
First, you’ll need to create your load balancer. A description of how to do this can be found in AWS’s
documentation here. Make sure you make the following selections while setting up the load balancer:
● Step 1:
o Set ‘Scheme’ to ‘Internet Facing’ and make sure there is a Listener on port 80 (HTTP)
o Select the Default VPC, or if launching the ALB into another VPC, select one where you have
testing servers running or are able to launch servers for testing
● Step 3: Create or use an existing security group that allows inbound HTTP traffic of port 80
● Step 4: Create a new Target Group and select port 80/protocol HTTP
● Step 5: Skip for now and create the load balancer
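The same ALB setup can also be scripted with the AWS CLI. A sketch; all names, subnet, security-group, VPC and instance IDs below are placeholders:
aws elbv2 create-load-balancer --name my-alb \
  --subnets subnet-aaaa subnet-bbbb --security-groups sg-xxxxxxxx --scheme internet-facing
aws elbv2 create-target-group --name my-targets \
  --protocol HTTP --port 80 --vpc-id vpc-xxxxxxxx
aws elbv2 register-targets --target-group-arn <target-group-arn> --targets Id=i-xxxxxxxx
aws elbv2 create-listener --load-balancer-arn <alb-arn> \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>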
Distribute Traffic to Existing EC2 Instances
Check ALB Configuration
1. Before you begin, verify that your ALB has a Listener set to port 80 – we will test with HTTP requests,
although when using your load balancer in production, make sure to only allow interactions via HTTPS on port 443.
o To verify, go to the EC2 Dashboard > Load Balancers > Select your ALB > Select the ‘Listeners’ tab
2. Next, double check that the Application Load Balancer’s security group allows inbound HTTP and HTTPS
traffic.
o To check this, go to the EC2 Dashboard > Load Balancers > Select your ALB > Under ‘Description’
click on ‘Security group’ > Make sure the correct security group is selected and choose the ‘Inbound
Rules’ tab
1. First, navigate to the EC2 Dashboard > Load Balancers > Select your ALB > Select ‘Targets’ tab > Select
‘Edit’
2. Select the test server(s) you want to distribute traffic to and click ‘Add to Registered’, then click ‘Save’
If you want to create a test server to connect to the ALB, follow these steps:
1. Launch a Linux AMI (see documentation here for more info). While launching, you must ensure that:
o Step 3: You have selected the same VPC as the VPC your ALB was launched into
o Step 3: You have a running web server technology and a sample web page – under ‘Advanced Details’
you can use a bootstrap (user data) script if you are not familiar with this (a fuller sketch is given after these steps):
o #!/bin/bash
o mkdir /var/www/html/test
o Step 6: Allow inbound HTTP traffic from your ALB’s security group
2. Now that you have a running web server to test with, navigate to the EC2 Dashboard > Load Balancers >
Select your ALB > Select ‘Targets’ tab > Select ‘Edit’
3. Select the test server(s) you want to distribute traffic to and click ‘Add to Registered’, then click ‘Save’
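A fuller version of the bootstrap (user data) script from step 1 might look like the sketch below. It assumes an Amazon Linux 2 AMI with yum and Apache (httpd); adjust the package manager and web server for your distribution:
#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
mkdir -p /var/www/html/test
echo "<h1>Hello from $(hostname -f)</h1>" > /var/www/html/test/index.html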
KOPS - Kubernetes Operations
1. Launch one Ubuntu instance and execute the steps below to install kops. Verify that the AWS CLI is installed:
aws --version
4. Prepare the AWS resources that kops needs:
- Create an IAM user & make a note of its access key & secret key
- Create an S3 bucket (the kops state store) and enable versioning.
- aws configure -- give the access key & secret key details here.
If you're okay with the configuration, run the update command with --yes as shown below:
kops update cluster --name ${KOPS_CLUSTER_NAME} --yes
Suggestions:
* validate cluster: kops validate cluster
* list nodes: kubectl get nodes --show-labels
* ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.advith.k8s.local
* the admin user is specific to Debian. If not using Debian please use
the appropriate user based on your OS.
* read about installing addons at:
https://github.com/kubernetes/kops/blob/master/docs/addons.md.
INSTANCE GROUPS
NAME               ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-east-1a  Master  m3.medium    1    1    us-east-1a
nodes              Node    t2.medium    1    1    us-east-1a

NODE STATUS
NAME                           ROLE    READY
ip-172-20-52-91.ec2.internal   node    True
ip-172-20-54-252.ec2.internal  master  True
https://master-dns:nodeport/
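Putting the kops pieces together, an end-to-end flow typically looks something like the sketch below. The cluster name is taken from the output above; the state-store bucket name and zone are assumptions:
export KOPS_CLUSTER_NAME=advith.k8s.local
export KOPS_STATE_STORE=s3://my-kops-state-bucket   # the versioned S3 bucket created earlier

kops create cluster \
  --zones us-east-1a \
  --master-size m3.medium \
  --node-size t2.medium \
  --node-count 1 \
  --name ${KOPS_CLUSTER_NAME}

kops update cluster --name ${KOPS_CLUSTER_NAME} --yes
kops validate cluster
kubectl get nodes --show-labels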
Code Samples :
Index.php:
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Cockatiel</title>
  <!-- NOTE: the jQuery/plugin stylesheet and script includes were not preserved in these notes -->
</head>
<body>
<main>
  <header>
    <div class="container">
      <hgroup class="row">
        <h1></h1>
      </hgroup>
      <nav>
        <!-- NOTE: the navbar markup was only partially preserved; the toggle button is reconstructed -->
        <div class="navbar-header">
          <button type="button" class="navbar-toggle">
            <span class="icon-bar"></span>
            <span class="icon-bar"></span>
          </button>
        </div>
      </nav>
    </div>
    <section class="text-center">
      <h2>Cockatiel</h2>
      <a href="#" class="button-header">Get Started</a>
    </section>
    <figure></figure>
  </header>

  <section>
    <article>
      <h2>NATIVE TO</h2>
      <p>Grasslands of Australia. Wild cockatiels are predominately grey and white with bright orange cheek patches, which are brighter on the male.</p>
    </article>
  </section>
  <div class="clearfix"></div>

  <section>
    <article>
      <h2>MALE OR FEMALE?</h2>
      <p>Cockatiels are sexually dimorphic, which means males and females are visually different. Female cockatiels have small white dots on the tops of the tips of their flight feathers and black barring and stripes on the undersides of their wings and tail.</p>
      <p>However, all cockatiels have the markings of a female until they are six months old; after that, the males lose these features. Male cockatiels also have brighter orange cheek patches and usually have a greater ability to talk.</p>
    </article>
  </section>

  <!-- image-content -->
  <div class="clearfix"></div>

  <section>
    <article>
      <h2>PHYSICAL CHARACTERISTICS</h2>
      <p>Cockatiels are beautiful, small-bodied birds that have varied colorations from all grey to all brown. Some popular types are: grey, lutino, white-faced, cinnamon, pied and albino. A single bird can also be a combination of any of these or a color “mutation” of any one or more. Cockatiels have a proud posture, small dark eyes and a long tail.</p>
      <p>All cockatiels have a head crest, which the bird can raise or lower depending on mood and stimulation. Cockatiels are a “powder-down” bird. This means they have an extra powdery substance in their feathers. This powder can be very irritating to those owners and handlers with allergies and asthma. If you, or a family member, have these issues, a different parrot species may be more suited to you. Other “powder down” birds include cockatoos and African greys.</p>
    </article>
  </section>
  <div class="clearfix"></div>

  <section>
    <article>
      <h2>SUPPLEMENTS:</h2>
      <p>The only supplement that should be necessary if you are feeding your cockatiel correctly is calcium.</p>
      <p>Calcium can usually be offered in the form of a cuttlebone or calcium treat that attaches to the inside of your bird’s cage. If you notice that your bird does not touch his cuttlebone or calcium treat, a powdered supplement such as packaged oyster shell can be added directly to your pet’s food. Follow the directions on the supplement package.</p>
      <p>For optimal physiologic use of the calcium you are giving your bird, the bird should be exposed to UVB light for at least 3-4 hours a day (or more or less depending on the species). Please see our UVB Lighting for Companion Birds and Reptiles handout for further information about UVB light.</p>
    </article>
  </section>
  <div class="clearfix"></div>

  <section>
    <article>
      <h2>WATER?</h2>
      <p>Fresh water must be available to your cockatiel at all times. Because your pet will often even bathe in his water, it must be checked and changed several times a day. It is recommended that the bowl be wiped clean with a paper towel at every change to prevent a slimy film from collecting on the inside of the bowl. This ‘slime’ will harbor bacteria, which can be dangerous for your bird. Thoroughly wash the bowl with a mild dishwashing detergent and water at least once a day. All water given to birds for drinking, as well as water used for misting, soaking or bathing must be 100% free of chlorine and heavy metals. (Not all home water filtration systems remove 100% of the chlorine and heavy metals from tap water.)</p>
      <p>We recommend that you use unflavored bottled drinking water or bottled natural spring water; never use untreated tap water. If tap water is used, you should treat it with a de-chlorinating treatment. If you do not want to chemically dechlorinate the water, you can leave an open container of tap water out for at least 24 hours.</p>
    </article>
  </section>
  <div class="clearfix"></div>

  <section>
    <article>
      <h2>ENRICHMENT</h2>
      <p>In the wild, birds spend most of their day from morning until night foraging for their food. In our homes in a cage, their food is right at their beaks, no need to go hunting! Because of this, it is very easy for our pet birds to become bored and lazy. Since these animals are so intelligent, it is a horrible sentence to be banished to a cage with nothing to do.</p>
      <p>“Enrichment” is important because it will keep your cockatiel’s mind busy! At least three different types of toys should be available to your bird in his cage at one time. Cockatiels enjoy shiny, wooden, rope, foraging, and plastic toys. It is very important to purchase toys made specifically for birds as they are much more likely to be safer in construction and material. Birds can be poisoned by dangerous metals, such as lead or zinc. They can also chew off small pieces of improperly manufactured “toys” and ingest them, which of course can lead to a variety of health problems. Be sure to include “foraging” toys. These types of toys mimic the work that a bird might do to find food in the wild.</p>
    </article>
  </section>
  <div class="empty"></div>
  <div class="clearfix"></div>

  <section>
    <article>
      <p>At least three clean bowls should be ready for use: one for fresh water, one for seed/pellets and one for fresh foods.</p>
      <p>Your bird may appreciate a cage cover for nighttime. The cover can block out any extraneous light and create a more secure sleeping place. Be careful not to use any fabrics for your cover that your bird might catch his claws or beak in, or that he might pull strings from and eat.</p>
    </article>
  </section>
  <div class="clearfix"></div>
  <br>

  <section class="section-five" id="section-five">
    <div class="container">
      <!-- subscribe -->
      <div class="subscribe-form">
        <div class="ntify_form">
          <!-- NOTE: the subscribe form fields were not preserved in these notes -->
          <form></form>
          <div id="mesaj"></div>
        </div>
      </div>
    </div>
  </section>

  <section>
    <div class="container">
      <header>
        <h4>Get in touch</h4>
        <h2>Have any questions? Our team will be happy to<br/>answer your questions.</h2>
      </header>
      <div id="message"></div>
      <!-- NOTE: the contact form fields were not preserved in these notes -->
      <form>
        <div id="simple-msg"></div>
      </form>
      <div class="clearfix"></div>
    </div>
    <div class="map-wrapper">
      <div id="surabaya"></div>
    </div>
  </section>

  <footer>
    <div class="container">
      <nav role="footer-nav"></nav>
      <p class="copy">© 2018 name. All rights reserved. Made with by <a href="http://sousukeinfosolutions.com/" target="_blank">Sousuke</a></p>
    </div>
  </footer>
</main>

<script type="text/javascript">
  $('.parallax-window').parallax({});
</script>
<script src="https://maps.googleapis.com/maps/api/js?v=3.exp&sensor=false"></script>
</body>
</html>
Main.js:
$(document).ready(function() {
    //#HEADER
    var slideHeight = $(window).height();

    $(window).resize(function() {
        'use strict';
        // recalculate the header height on resize
        slideHeight = $(window).height();
    });

    // Scroll Menu: show the fixed menu only after scrolling past 600px
    $(window).scroll(function() {
        if ($(window).scrollTop() > 600) {
            $('#menu').fadeIn(500);
        } else {
            $('#menu').fadeOut(500);
        }
    });

    // Navigation Scroll
    $(window).scroll(function(event) {
        Scroll();
    });

    function Scroll() {
        // NOTE: the scroll-spy logic (computing the index `i` of the section currently
        // in view) was not preserved in these notes; 0 is a placeholder.
        var i = 0;
        $('.navbar-collapse').find('.scroll a').each(function() {
            // the original code compared each section's offset with the scroll position here
        });
        $('.navbar-collapse li.scroll')
            .removeClass('active')
            .eq(i).addClass('active');
    }

    // affix (Bootstrap) - the original offset values were not preserved; 0 is a placeholder
    $('.navbar').affix({
        offset: {
            top: 0
        }
    });

    // Owl Carousel - the original element selector was not preserved; '.owl-carousel' is assumed
    var owl = $('.owl-carousel');
    owl.owlCarousel({
        itemsCustom: [
            [0, 1],
            [450, 1],
            [600, 1],
            [700, 1],
            [1000, 1],
            [1200, 1],
            [1400, 1],
            [1600, 1]
        ],
        navigation: true,
        autoPlay: 3000
    });

    // Magnific Popup - the original selector was not preserved; '.popup-video' is assumed
    $('.popup-video').magnificPopup({
        disableOn: 700,
        type: 'iframe',
        mainClass: 'mfp-fade',
        removalDelay: 160,
        preloader: false,
        fixedContentPos: false
    });
});