CLOUD COMPUTING
Unit Structure
1.0 Objectives
1.1 Introduction to Cloud Computing
1.2 Characteristics and benefits of Cloud Computing
1.3 Basic concepts of Distributed Systems
1.4 Web 2.0
1.5 Service-Oriented Computing
1.6 Utility-Oriented Computing
1.7 Let us Sum Up
1.8 List of References
1.9 Unit End Exercises
1.0 OBJECTIVE
Virtualization
Grid Computing
Utility Computing
Virtualization
Types of Virtualization
1. Hardware virtualization
2. Server virtualization
3. Storage virtualization
4. Operating system virtualization
5. Data Virtualization
1. It is used in the healthcare industry.
Fig. 1. SOA
Grid Computing
Utility Computing
● Cloud computing promises to transform computing into a utility
delivered over the Internet.
● Enterprise architecture is a function within IT departments that has
developed over time, playing a high value role in managing transitions
to new technologies, such as cloud computing.
● The term cloud historically denotes an abstraction of the network in
system diagrams. This meaning carries over to cloud computing, which
refers to an Internet-centric way of computing.
● The Internet plays a fundamental role in cloud computing, since it is
the medium or platform through which many cloud computing services
are delivered and made accessible.
Fig. 4 Cloud computing environment
● The utility-oriented approach is an important aspect of cloud computing.
● Cloud computing concentrates on delivering services with a given
pricing model, in most cases a pay-per-use method.
● All these operations can be executed and billed simply by entering the
credit card details and accessing the exposed services through a Web
browser.
● Characteristics
○ Resource Pooling
Cloud providers pool computing resources to serve multiple customers
with the help of a multi-tenant model.
○ On-Demand Self-Service
It is one of the key and valuable features of Cloud Computing as the user
can regularly monitor the server uptime, capabilities, and allotted network
storage.
○ Easy Maintenance
The servers are easy to maintain, downtime is minimal, and in some
situations there is no downtime at all.
○ Availability
The capacity of the cloud can be adjusted according to use and extended
considerably. Storage usage is monitored, and the user can purchase
extra cloud storage if needed for a very small amount.
○ Automatic System
Cloud computing automatically analyses the resources needed and supports
a metering capability at some level of service.
○ Economical
○ Security
Cloud providers create snapshots of the stored data so that the data may
not get lost even if one of the servers is damaged.
○ Measured Service
Cloud computing resources are measured as they are used, and the
provider uses these measurements for reporting and billing.
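The pay-per-use model mentioned above can be made concrete with a small sketch. The resource names and rates below are purely hypothetical and not taken from any real provider; the point is only to show how metered usage turns into a bill.

# Minimal sketch of metered, pay-per-use billing (hypothetical rates).
HOURLY_RATES = {"small_vm": 0.05, "large_vm": 0.20}   # price per instance-hour
STORAGE_RATE = 0.02                                    # price per GB per month

def monthly_bill(usage_hours, storage_gb):
    # usage_hours maps a flavor name to the hours it was used this month.
    compute = sum(HOURLY_RATES[flavor] * hours for flavor, hours in usage_hours.items())
    storage = STORAGE_RATE * storage_gb
    return round(compute + storage, 2)

# A tenant that ran one small VM for 720 hours and stored 100 GB of data.
print(monthly_bill({"small_vm": 720}, 100))   # 36.0 + 2.0 = 38.0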
● The characterization of a distributed system is given by the following
definition: “A distributed system is a collection of independent computers
that appears to its users as a single coherent system.”
● In fact, one of the driving factors of cloud computing has been the
availability of the large computing facilities of IT companies such as
Amazon and Google.
● Offering their computing capabilities as a service provided these
companies with an opportunity for better utilization of their infrastructure.
● There are three major milestones that have led to cloud computing:
1. Mainframe Computing
● Mainframes controlled large computational facilities with multiple
processing units.
● Mainframes are powerful, highly reliable computers specialized for
large data movement and massive input/output operations.
● Mainframe computing was used by large organizations for bulk data
processing tasks such as online transactions, enterprise resource
planning, and other operations requiring the processing of significant
amounts of data.
● One of the most prominent features of mainframe computing was high
reliability: mainframes were “always on” and capable of tolerating
failures transparently.
● No system shutdown was needed to replace failed components, and the
system kept working without any interruption.
● Batch processing was the main application of mainframes. Although
their popularity and deployments have diminished, extended versions of
such systems are still in use for transaction processing.
● Examples include online banking, airline ticket booking, supermarkets,
telcos, and government services.
2. Cluster computing
● Cluster computing started as a low-cost alternative to mainframes and
supercomputers.
● The technological advances that produced faster and more powerful
mainframes and supercomputers eventually generated, as a side effect,
an increased availability of cheap commodity machines.
● These machines are connected by a high-bandwidth network and
controlled by specific software tools that manage them as a single
system.
● Cluster technology contributed to the evolution of tools and frameworks
for distributed computing, examples including Condor, Parallel Virtual
Machine (PVM), and Message Passing Interface (MPI).
3. Grid computing
● Grid computing, by analogy with the power grid, proposed a new
approach for accessing large computational power, large storage
facilities, and a variety of services.
● Users can consume resources in the same manner as they use other
utilities such as power, gas, and water.
● Grids initially developed as aggregations of geographically scattered
clusters connected by means of Internet connections.
● These types of clusters belonged to different organizations, and
arrangements were made among them to share the computational
power.
● Unlike a large cluster, a computing grid was a dynamic collection of
heterogeneous computing nodes, and its scale could be nationwide or
even worldwide.
● Several developments made possible the spread of computing grids:
● Compared with mainframes, clouds are characterized by virtually
unbounded capacity, tolerance to failures, and being always on.
● As in clusters, the computing nodes that form the infrastructure of
computing clouds are commodity machines.
● The services made available by a cloud vendor are consumed on a
pay-per-use basis, and clouds fully implement the utility vision
introduced by grid computing.
● Many Web 2.0 applications would not have been possible without the
support of AJAX technology.
● Facebook is a social networking site that leverages user activity to
provide content, and Blogger, like any other blogging site, provides an
online diary that is fed by users.
● Web 2.0 comprises applications and frameworks for delivering rich
Internet applications.
● Virtually any piece of code that performs a task can be turned into a
service and expose its functionality through a network-accessible
protocol.
● A service is supposed to be loosely coupled, reusable, programming-
language independent, and location transparent. Loose coupling allows
services to serve different scenarios more easily and makes them
reusable.
● Independence from a specific platform increases service accessibility.
Accordingly, a wider range of clients can be served, since clients can
discover services in global registries and consume them in a
location-transparent manner.
● Services are organized and aggregated into a service-oriented
architecture (SOA), which is a logical way of organizing software
systems to provide end users, or other entities distributed over the
network, with services through published and discoverable interfaces.
● Service-oriented computing introduces and diffuses two important
concepts, which are also fundamental to cloud computing:
1. quality of service (QoS)
2. Software-as-a-Service (SaaS)
○ Quality of service:
■ QoS requirements are established between the client and the provider
through a service-level agreement (SLA) that identifies the minimum
values of the QoS attributes that must be satisfied upon the service call.
○ Software-as-a-Service
■ The idea of Software-as-a-Service introduces a new delivery model for
applications.
■ The term has been inherited from the world of application service
providers (ASPs), which deliver software-services-based solutions
across the wide area network from a central datacenter and make them
available on a rental basis.
■ The ASP is responsible for maintaining the infrastructure and making
the application available, and the client is freed from maintenance
costs and difficult upgrades.
■ This software delivery model is possible because economies of scale
are reached by means of time sharing.
■ The SaaS approach achieves its full development with service-oriented
computing.
■ Loosely coupled software components allow the delivery of complex
business processes and transactions as a service while allowing
applications to be composed on the fly and services to be reused from
everywhere and by anyone.
● The vision of delivering computing as a utility, like water, electrical
power, and telephone connection, has a long history but has become a
reality only with the advent of cloud computing.
● This vision is captured in John McCarthy’s observation: “If computers
of the kind I have advocated become the computers of the future, then
computing may someday be organized as a public utility, just as the
telephone system is a public utility. The computer utility could become
the pillar of a new and important industry.”
● The capillary diffusion of the Internet and the Web provided the
technological means to realize utility computing on a worldwide scale
and through simple interfaces.
● Computing grids provided a planet-scale distributed computing
infrastructure that was accessible on demand, bringing the concept of
utility computing to a new level.
● With utility computing accessible on a wider scale, it became easier to
provide a trading infrastructure where grid products such as storage,
computation, and services are offered for bid or sold.
● E-commerce technology provided the infrastructure support for utility
computing. For example, the interest in buying goods online spread to
the wider public: food, clothes, multimedia products, and online
services such as storage space and Web hosting.
● Applications were not only distributed, they started to be composed as
a mesh of services provided by different entities.
● These services, accessible through the Internet, were made available
by charging according to usage.
● Service-oriented computing (SOC) broadened the concept of what could
be accessed as a utility in a computer system: not only computing power
and storage but also services and application components could be
acquired and integrated on demand.
1.8 LIST OF REFERENCES
● Mastering Cloud Computing, Rajkumar Buyya, Christian Vecchiola, S.
Thamarai Selvi, Tata McGraw Hill Education Private Limited, 2013.
1.9 UNIT END EXERCISES
8. Explain the Cloud Computing Reference Model.
2
ELEMENTS OF PARALLEL COMPUTING
Unit Structure
2.0 Objectives
2.1 Introduction
2.2 Elements of Parallel Computing
2.3 Elements of Distributed Computing
2.4 Technologies for Distributed Computing
2.5 Summary
2.6 Reference for further reading
2.7 Unit End Exercises
2.0 OBJECTIVES
● To understand the concept of Parallel & Distributed computing.
● To study the elements of parallel and distributed computing.
2.1 INTRODUCTION
● The simultaneous growth in the availability of big data and in the
computing power needed to process it pushed for performing computing
tasks in parallel.
● The first steps in this direction led to the development of parallel
computing, which encompasses techniques, architectures, and systems
for performing multiple activities in parallel.
● The term parallel computing has blurred edges with the term
distributed computing and is often used in place of the latter.
● In this chapter, we associate it with its proper characterization, which
involves the introduction of parallelism within a single computer by
coordinating the activity of multiple processors together.
● A given task is divided into multiple subtasks using a divide and
conquer technique (data structure), and each subtask is processed on a
different central processing unit (CPU).
● Programming on a multiprocessor system with the divide and conquer
technique is called parallel programming.
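As an illustration of this divide-and-conquer style of parallel programming, the following minimal Python sketch splits a hypothetical task (summing squares) into chunks and processes each chunk on a different CPU using the standard multiprocessing module.

# Minimal divide-and-conquer sketch: split the data, process chunks on separate CPUs.
from multiprocessing import Pool

def subtask(chunk):
    # Hypothetical subtask: sum the squares of one chunk of the data.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    # Divide: split the input into one chunk per worker.
    chunks = [data[i::n_workers] for i in range(n_workers)]
    # Conquer: each chunk is processed on a different CPU, then partial results are combined.
    with Pool(processes=n_workers) as pool:
        partial = pool.map(subtask, chunks)
    print(sum(partial))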
○ Hardware refinements such as pipelining, superscalar execution, and
the like are not scalable and require sophisticated compiler technology.
Developing such compiler technology is a hard task.
● An SISD (single-instruction, single-data) computing system is a
uniprocessor machine capable of executing a single instruction, which
operates on a single data stream, as shown in figure 1.
● In SISD, machine instructions are processed sequentially; therefore,
computers adopting this model are popularly called sequential
computers.
● An MISD (multiple-instruction, single-data) computing system is a
multiprocessor machine capable of executing different instructions on
different processing elements, but all of them operating on the same
data set, as shown in figure 3. For example, statements such as
y = sin(x) + cos(x) + tan(x)
carry out different operations on the same data set. Machines built
using the MISD model are not useful in most applications; a few
machines have been built, but none of them is available commercially.
They became more of an intellectual exercise than a practical
configuration.
Fig. 3 Multiple-instruction, single-data (MISD) architecture.
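Real MISD machines are rare, but the idea behind the statement y = sin(x) + cos(x) + tan(x) can be mimicked in software: different instructions applied to the same data item. The sketch below uses Python threads purely as an illustration of the concept, not as a true MISD implementation.

# Toy illustration of the MISD idea: different operations applied to the same data.
import math
from concurrent.futures import ThreadPoolExecutor

x = 0.5                                      # the single shared data item
operations = [math.sin, math.cos, math.tan]  # the different "instructions"

with ThreadPoolExecutor(max_workers=len(operations)) as pool:
    results = list(pool.map(lambda op: op(x), operations))

y = sum(results)                             # y = sin(x) + cos(x) + tan(x)
print(y)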
Fig. 4 Multiple-instructions, multiple-data (MIMD) architecture.
○ Data parallelism
○ Process parallelism
○ Farmer-and-worker model
● These types of models are all suitable for task level parallelism.
● In the case of data parallelism, the divide-and-conquer technique is
used to split data into multiple sets, and each data set is processed on
different processing elements using the same instruction.
● This approach is well suited to processing on machines based on the
SIMD model. In the case of process parallelism, a given operation has
multiple (but distinct) activities that can be processed on multiple
processors.
● In the farmer-and-worker model, a job distribution approach is used:
one processor is designated as the master and all the other processing
elements are designated as slaves; the master allocates jobs to the slave
processing elements and, on completion, they inform the master, which
in turn collects the results.
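A minimal sketch of the farmer-and-worker (master/slave) distribution just described is shown below, using Python multiprocessing queues; the job itself (squaring a number) is a hypothetical placeholder for real work.

# Farmer-and-worker sketch: the master puts jobs on a queue, workers return results.
from multiprocessing import Process, Queue

def worker(jobs, results):
    # Slave: take jobs from the master until a None sentinel arrives.
    while True:
        job = jobs.get()
        if job is None:
            break
        results.put(job * job)        # hypothetical unit of work

if __name__ == "__main__":
    jobs, results = Queue(), Queue()
    workers = [Process(target=worker, args=(jobs, results)) for _ in range(3)]
    for w in workers:
        w.start()
    for job in range(10):             # the master allocates jobs
        jobs.put(job)
    for _ in workers:                 # one sentinel per worker
        jobs.put(None)
    collected = [results.get() for _ in range(10)]   # the master collects results
    for w in workers:
        w.join()
    print(sorted(collected))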
Levels of parallelism
● Levels of parallelism are identified based on the chunks of code (the
grain size) that can be a potential candidate for parallelism. The table
below lists the categories of code granularity for parallelism.
● All these approaches have a common goal:
○ To boost processor efficiency by hiding latency.
○ To conceal latency, there must be another thread ready to run every
time a lengthy operation occurs.
The idea is to execute concurrently two or more single-threaded
applications, such as compiling, text formatting, database searching, and
device simulation.
● As shown in the table and depicted in figure 5, parallelism within an
application can be discovered at several levels.
○ Large grain (or task level)
○ Medium grain (or control level)
Levels of Parallelism
Fig. 5 Levels of parallelism in an application.
2.3 ELEMENTS OF DISTRIBUTED COMPUTING
● In distributed computing we extend these concepts and explore how
multiple activities can be performed by leveraging systems composed of
multiple heterogeneous machines.
● Here we will learn the most common guidelines and patterns for
implementing distributed computing systems from the perspective of
the software designer.
● The components of a distributed system communicate through some
form of message passing. This is a term that encompasses several
communication models.
Architectural styles for distributed computing
● Architectural styles are mainly used to determine the vocabulary of
components and connectors that are used as instances of the style,
together with a set of constraints on how they can be combined.
● Architectural styles for distributed systems are helpful in
understanding the different roles of components in the system and how
they are distributed across multiple machines.
● Organization of the architectural styles into two major classes:
○ Software architectural styles
○ System architectural styles
● The first class relates to the logical organization of the software; the
second class includes all those styles that describe the physical
organization of distributed software systems in terms of their major
components.
Components and connectors
● These are the basic elements with which architectural styles are
defined.
● A component represents a unit of software that encapsulates a function
or a feature of the system.
● Remote procedure call (RPC) is the fundamental abstraction that
enables the execution of procedures on a client’s request.
● RPC allows extending the concept of a procedure call beyond the
boundaries of a single process and a single memory address space.
● The called procedure and calling procedure may reside on the same
system or they may be on different systems in a network.
● Therefore, developing a system leveraging RPC for IPC involves the
following steps:
○ Design and implementation of the server procedures that will be
exposed for remote invocation.
○ Registration of remote procedures with the RPC server on the node
where they will be made accessible.
○ Design and implementation of the client code that invokes the remote
procedures (RPC).
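The steps above can be illustrated with a minimal sketch based on Python's standard xmlrpc modules; the procedure name, address, and port are hypothetical, and XML-RPC is only one of many possible RPC stacks.

# Server side: design, implement, and register a procedure for remote invocation.
from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):
    # The server procedure exposed for remote invocation.
    return a + b

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(add, "add")      # registration with the RPC server on this node
server.serve_forever()

# Client side (run in a separate process): the client code invokes the remote procedure.
#   from xmlrpc.client import ServerProxy
#   proxy = ServerProxy("http://localhost:8000/")
#   print(proxy.add(2, 3))   # executed on the server, the result is returned to the client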
● Distributed object frameworks leverage the basic mechanism
introduced with RPC and extend it to enable the remote invocation of
object methods and to keep track of references to objects made
available through a network connection.
3. With respect to the RPC model, the infrastructure manages instances
of types that are exposed through well-known interfaces rather than
procedures. Therefore, the common interaction pattern is the following:
a. The server process maintains a registry of active objects that are made
available to other processes. Depending on the specific implementation,
active objects can be published using interface definitions or class
definitions.
● DCOM, later integrated and evolved into COM+, is the solution
provided by Microsoft for distributed object programming before the
introduction of .NET technology.
● DCOM introduces a set of features allowing the use of COM components
beyond the process boundaries.
.NET remoting
● .NET Remoting is the technology enabling IPC among .NET
applications.
● It provides developers with a uniform platform for retrieving remote
objects from within any application developed in any of the languages
supported by .NET.
Service-oriented computing
● Service-oriented computing organizes distributed systems in terms of
services, which represent the main abstraction for building systems.
● Service orientation expresses applications and software systems as
aggregations of services that are coordinated within a service-oriented
architecture (SOA).
● A service encapsulates a software component that enables a set of
coherent and related functionalities that can be reused and integrated
into huge and more complex applications. The term service is a
general abstraction that encompasses various different
implementations using different technologies and protocols.
● Four major characteristics that identify a service:
1. Boundaries are explicit.
2. Services are autonomous
3. Services share the schema and contracts, not class or interface
definitions.
4. Service compatibility is determined based on policy.
Service-oriented architecture
● SOA is an architectural style supporting service orientation.
● It arranges a software system into a collection of interacting services.
● SOA encompasses a set of design principles that structure system
development and provide means for integrating components into a
coherent and decentralized system.
2.5 LET US SUM UP
● Parallel and distributed computing emerged as a solution for solving
complex computational problems.
● Parallel computing introduces models and architectures for performing
multiple tasks within a single computing node or a set of tightly
coupled nodes with homogeneous hardware.
● Parallelism is achieved by leveraging hardware capable of processing
multiple instructions in parallel.
● Distributed systems constitute a large umbrella under which several
different software systems are classified.
2.7 UNIT END EXERCISES
1. What is the difference between parallel and distributed computing?
2. What is a SIMD architecture?
3. Describe the different levels of parallelism that can be obtained in a
computing system.
4. What is a distributed system? What are the components that
characterize it?
3
CLOUD COMPUTING ARCHITECTURE
Unit Structure
3.0 Objective
3.1 Introduction
3.2 Cloud Computing Architecture
3.3 The cloud reference model
3.4 Cloud Computing Services: SAAS, PAAS, IAAS
3.5 Types of clouds.
3.6 Summary
3.7 Reference for further reading
3.8 Unit End Exercises
3.0 OBJECTIVE
● To understand the architecture of cloud computing.
3.1 INTRODUCTION
● Cloud computing can be defined as the practice of using a network of
remote servers hosted on the Internet to store, manage, and process
data, rather than a local server or a personal computer.
● Organizations offering such types of cloud computing services are
called cloud providers and charge for cloud computing services based
on their usage.
● Grids and clusters are the base for cloud computing.
● When delivering a specific service to the end user, different layers can
be stacked on top of the virtual infrastructure: a virtual machine
manager, a development platform, or a specific application middleware.
● The cloud computing paradigm emerged as a result of the convergence
of various existing models, technologies, and concepts that changed the
way we deliver and use IT services.
● A definition of Cloud computing:
“Cloud computing is a utility-oriented and Internet-centric way of
delivering IT services on demand. These services cover the entire
computing stack: from the hardware infrastructure packaged as a set of
virtual machines to software services such as development platforms and
distributed applications.”
● IT services can be consumed as utilities and delivered through a
network, most likely the Internet; cloud computing makes this possible.
● The characterization of cloud computing includes various aspects:
infrastructure, development platforms, applications, and services.
Fig. 1 The cloud computing architecture.
The management layer is frequently integrated with other IaaS solutions
that provide physical infrastructure and add value to them.
the Internet.
● A SaaS implementation should feature such behavior automatically,
whereas PaaS and IaaS provide this functionality as a part of the API
exposed to users.
● The reference model also introduces the concept of Everything-as-a-
Service (XaaS), whereby the different components of a system can be
delivered as a service.
● SaaS is a one-to-many software delivery model whereby an application
is shared across multiple users.
● Examples include CRM and ERP applications that address common
needs of almost all enterprises, from small and medium-sized to large
businesses.
● This scenario facilitates the development of software platforms that
provide a general set of features and support specialization and ease of
integration of new components.
● SaaS applications are naturally multitenant.
● The term SaaS was coined in 2001 by the Software & Information
Industry Association (SIIA).
● The analysis done by SIIA was mainly oriented to cover application
service providers (ASPs) and all their variations, which capture the
concept of software applications consumed as a service in a broad
sense.
● Core characteristics of SaaS:
○ The product sold to customers is application access.
○ The application is centrally managed.
○ The service delivered is one-to-many.
Platform as a service
● Platform-as-a-Service (PaaS) provides a development and deployment
platform for running applications in the cloud.
Fig.2. The Platform-as-a-Service reference model
● Application management is the key functionality of the middleware
systems.
● PaaS provides applications with a runtime environment and does not
expose any service for managing the underlying infrastructure.
Infrastructure as a service or hardware as a service
● Infrastructure-as-a-Service is the most popular and developed market
segment of cloud computing.
● It delivers customizable infrastructure on demand.
● IaaS offerings range from single servers to entire infrastructures,
including network devices, load balancers, and database and Web
servers.
● The main technology used to deliver and implement these solutions is
hardware virtualization:
○ one or more virtual machines configured and interconnected
○ Virtual machines also constitute the atomic components that are
installed and charged according to the specific features of the virtual
hardware:
■ memory
■ number of processors, and
■ disk storage
● IaaS exposes all the benefits of hardware virtualization:
○ workload partitioning
■ application isolation
■ sandboxing, and
■ hardware tuning
● HaaS allows better utilization of the IT infrastructure and provides a
more secure environment for executing third-party applications.
3.5 TYPES OF CLOUDS
● Public clouds.
○ The cloud is open to the wider public.
● Private clouds.
○ The cloud is executed within the private property of an institution and
generally made accessible to the members of the institution
● Hybrid clouds.
○ The cloud is a combination of the two previous clouds and most likely
identifies a private cloud that has been augmented with services hosted
in a public cloud.
● Community clouds.
○ The cloud is characterized by a multi-administrative domain involving
different deployment models (public, private, and hybrid).
Public clouds
● Public clouds account for the first expression of cloud computing.
es
● They are an awareness of the canonical view of cloud computing in
which the services provided are made available to anyone, from
ot
38
Private clouds
● Private clouds are similar to public clouds, but their resource-
provisioning model is restricted within the boundaries of an
organization.
● Private clouds have the benefit of keeping the core business operations
in house by depending on the existing IT infrastructure and reducing
the cost of maintaining it once the cloud has been set up.
● The private cloud can provide services to a different range of users.
● Another advantage of private clouds is the possibility of testing
applications and systems at a comparatively lower cost than in public
clouds before deploying them on the public virtual infrastructure.
● The main advantages of a private cloud computing infrastructure:
1. Customer information protection.
2. Infrastructure ensuring SLAs.
3. Compliance with standard procedures and operations.
Hybrid clouds
● A hybrid cloud is an attractive opportunity for taking advantage of the
best of both private and public clouds, which has led to the development
and diffusion of hybrid clouds.
● Hybrid clouds enable enterprises to exploit existing IT infrastructures,
maintain sensitive information within the premises, and naturally grow
and shrink by provisioning external resources and releasing them when
they are no longer needed.
● Figure 5 gives a general overview of a hybrid cloud:
○ It is a heterogeneous distributed system consisting of a private cloud
that integrates supplementary services or resources from one or more
public clouds.
○ For this reason, they are also called heterogeneous clouds.
○ Hybrid clouds address scalability issues by leveraging external
resources for exceeding capacity demand.
Community clouds
● Figure 6 shows the usage scenario of community clouds, together with
a reference architecture.
● The users of a specific community cloud fall into a well-identified
community sharing the same concerns or needs; they can be government
bodies, industries, or even simple users, but all of them focus on the
same issues in their interaction with the cloud.
○ Media industry
○ Healthcare industry.
■ In the healthcare industry, there are different scenarios in which
community clouds are used.
■ Community clouds provide a global platform on which to share
information and knowledge without revealing sensitive data, which is
maintained within the private infrastructure.
■ The naturally hybrid deployment model of community clouds supports
storing patient data in a private cloud while using the shared
infrastructure for noncritical services and for automating processes
within hospitals.
○ Public sector.
■ Legal and political restrictions in the public sector can limit the
adoption of public cloud offerings.
■ Governmental processes involve several institutions and agencies and
are aimed at providing strategic solutions at local, national, and
international administrative levels.
■ They involve business-to-administration, citizen-to-administration, and
possibly business-to-business processes.
■ Examples include invoice approval, infrastructure planning, and public
hearings.
○ Scientific research.
● Openness.
Clouds are open systems in which fair competition between different
solutions can occur.
● Community.
Providing resources and services, the infrastructure turns out to be more
scalable.
● Graceful failures.
Since there is no single provider or vendor in control of the infrastructure,
there is no single point of failure.
3.6 SUMMARY
● Three service models. Software-as-a-Service (SaaS), Platform-as-a-
Service (PaaS), and Infrastructure-as-a-Service (IaaS).
● Four deployment models. Public clouds, private clouds, community
clouds, and hybrid clouds.
● Although cloud computing has been rapidly adopted in industry, there
are several open research challenges in areas such as the management
of cloud computing systems, their security, and social and
organizational issues.
3.7 REFERENCE FOR FURTHER READING
● Enterprise Cloud Computing: Technology, Architecture, Applications,
Gautam Shroff, Cambridge University Press, 2010
● Mastering Cloud Computing, Rajkumar Buyya, Christian Vecchiola and
S. Thamarai Selvi, Tata McGraw-Hill Education, 2013
● Cloud Computing: A Practical Approach, Anthony T. Velte, Tata
McGraw Hill, 2009
3.8 UNIT END EXERCISES
3. What does the acronym SaaS mean? How does it relate to cloud
computing?
4. Classify the various types of clouds.
5. Give an example of the public cloud.
4
VIRTUALIZATION
Unit Structure
4.0 Objective
4.1 Introduction
4.2 Characteristics of Virtualized Environments
4.3 Taxonomy of Virtualization Techniques.
4.4 Summary
4.5 Reference for further reading
4.6 Unit End Exercises
4.0 OBJECTIVE
● To understand the fundamental components of cloud computing
● To study how applications run in an execution environment using
virtualization.
4.1 INTRODUCTION
● Virtualization is a large umbrella of technologies and concepts that aim
to provide an abstract environment, whether virtual hardware or an
operating system, in which to run applications.
b. All these desktop computers have enough resources to host a virtual
machine manager and execute a virtual machine with acceptable
performance.
c. The same consideration applies to the high-end side of the market,
where supercomputers can provide huge compute power that can
accommodate the execution of hundreds or thousands of virtual machines.
3. Lack of space
a. The ongoing need for additional capacity, whether storage or compute
power, makes data centers grow rapidly.
b. Companies such as Google and Microsoft expand their infrastructures
by building data centers as large as football fields that are able to host
thousands of nodes.
c. Although this is viable for IT giants, in most cases enterprises cannot
afford to build another data center to accommodate additional resource
capacity.
5. Rise of administrative costs.
a. Power consumption and cooling costs have become higher than the
cost of the IT equipment itself.
b. The increased demand for extra capacity translates into more servers
in a data center,
c. which in turn is responsible for a significant increase in administrative
costs.
d. Common system administration duties consist of hardware
monitoring, defective hardware replacement, server setup and updates,
server resources monitoring, and backups.
e. These are labor-intensive operations, and the higher the number of
servers that have to be managed, the higher the administrative costs.
f. Virtualization helps reduce the number of servers required for a given
workload, thus reducing the cost of the administrative manpower.
● Virtualization is a broad concept that refers to the creation of a virtual
version of something, whether hardware, a software environment,
storage, or a network.
● A virtualized environment has three major components: guest, host,
and virtualization layer. The guest represents the system component
that interacts with the virtualization layer rather than with the host, as
would normally happen.
● The host represents the original environment where the guest is
supposed to be managed.
● The virtualization layer is responsible for recreating the same or a
different environment where the guest will operate, as shown in figure 1.
Figure 1. Virtualization reference model.
● The most intuitive and popular form is hardware virtualization, which
also constitutes the original realization of the virtualization concept.
● In hardware virtualization, the guest is installed on top of virtual
hardware that is controlled and managed by the virtualization layer,
also called the virtual machine manager. The host is represented by the
physical hardware and, in some cases, the operating system, that
defines the environment where the virtual machine manager is running.
● In the case of virtual storage, the guests might be client applications or
users that interact with the virtual storage management software
deployed on top of the real storage system.
● Virtual networking is similar: the guest applications and users interact
with a virtual network, such as a VPN, which is managed by specific
software (a VPN client) that uses the physical network available on the
node. VPNs are useful for creating the illusion of being within a
different physical network and thus accessing resources that would
otherwise not be available.
● Virtualization also provides a great opportunity for customizing the
execution environment of applications.
● Hardware virtualization solutions like VMware Desktop, VirtualBox,
and Parallels provide the ability to create a virtual computer with
customized virtual hardware on top of which a new operating system
can be installed.
● The file system exposed by the virtual computer is completely
separated from that of the host machine.
1. Sharing.
a. Virtualization enables the formation of a separate computing
environment within the same host.
b. In this way it is possible to fully utilize the capabilities of a powerful
guest, which would otherwise be underutilized.
c. Sharing also helps reduce the number of active servers and limit
power consumption.
2. Aggregation.
3. Emulation.
a. Guest programs are executed inside an environment that is controlled
by the virtualization layer, which ultimately is a program.
b. This enables controlling and tuning the environment that is exposed to
guests. For instance, a completely different environment with respect
to the host can be emulated, thus allowing the execution of guest
programs that require specific characteristics not available in the
physical host.
c. This feature becomes very important for testing purposes, where a
specific guest has to be validated against different platforms or
architectures and the wide range of options is not easily attainable
during development.
4. Isolation.
a. Virtualization provides guests, whether they are operating systems,
applications, or other entities, with a completely separate environment
in which they are executed.
b. The guest program accomplishes its activity by interacting with an
abstraction layer, which gives access to the underlying resources.
c. Isolation brings several benefits; for example, it allows multiple guests
to run on the same host without interfering with each other.
d. Second, it provides a segregation between the host and the guest.
e. The virtual machine can filter the activity of the guest and prevent
harmful operations against the host.
4.2.3 Portability
4.3 TAXONOMY OF VIRTUALIZATION TECHNIQUES
● Within these two categories we can list various techniques that offer
the guest a different type of virtual computing environment: bare
hardware, operating system resources, low-level programming
languages, and application libraries.
Execution virtualization
● Execution virtualization consists of all methods that aim to emulate an
execution environment that is different from the one hosting the
virtualization layer.
● All these techniques focus their interest on providing support for the
execution of programs, whether these are the operating system, a
binary specification of a program compiled against an abstract
machine model, or an application.
● Hence, execution virtualization can be implemented directly on top of
the hardware by the operating system, an application, or libraries
dynamically or statically linked to an application image.
1. Machine reference model
● Virtualizing an execution environment at different levels of the
computing stack requires a reference model that defines the interfaces
between the levels of abstractions, which hide implementation details.
● From this viewpoint, virtualization techniques actually replace one of
ot
the layers and intercept the calls that are directed toward it.
● Therefore, a clear separation between layers simplifies their
implementation, which only requires the emulation of the interfaces and
a proper interaction with the underlying layer.
Figure 4 A machine reference model.
2. Hardware-level virtualization
● Hardware-level virtualization is a virtualization technique that provides
an abstract execution environment in terms of computer hardware on
top of which a guest operating system can be run.
● In this model, the guest is represented by the operating system, the
host by the physical computer hardware, the virtual machine by its
emulation, and the virtual machine manager by the hypervisor, as
shown in figure 5.
Hypervisors
● A fundamental element of hardware virtualization is the hypervisor. It
recreates a hardware environment in which guest operating systems are
installed.
● Types of hypervisor:
○ Type I hypervisors run on top of the hardware. Therefore, they take
the place of the operating system and interact directly with the ISA
interface exposed by the underlying hardware, which they emulate in
order to allow the management of guest operating systems. This type
of hypervisor is also called a native virtual machine.
○ Type II hypervisors require the support of an operating system to
provide virtualization services. This means they are programs managed
by the operating system, which interact with it through the ABI and
emulate the ISA of virtual hardware for guest operating systems, as
shown in figure 6.
○ This technique was originally introduced in the IBM System/370.
Examples are the extensions to the x86-64 bit architecture introduced
with Intel VT and AMD V.
● Full virtualization.
○ Full virtualization is the ability to run a program, such as an operating
system, directly on top of a virtual machine and without any
modification, as though it were run on the raw hardware.
● Paravirtualization.
○ This is a non-transparent virtualization solution that allows
implementing thin virtual machine managers.
○ Paravirtualization techniques expose a software interface to the virtual
machine that is slightly modified from that of the host and, as a
consequence, guests need to be modified.
● Partial virtualization.
○ Partial virtualization provides a partial emulation of the underlying
hardware, thus not allowing the complete execution of the guest
operating system in complete isolation.
○ Partial virtualization allows many applications to run transparently, but
not all the features of the operating system can be supported.
3. Application-level virtualization
● Application-level virtualization allows applications to be run in runtime
environments that do not natively support all the features required by
such applications.
4.4 SUMMARY
● The term virtualization is a large umbrella under which a variety of
technologies and concepts are classified.
● The common root of all forms of virtualization is the ability to provide
the illusion of a specific environment, whether a runtime environment,
a storage facility, a network connection, or a remote desktop, by using
some kind of emulation or abstraction layer.
4.5 REFERENCE FOR FURTHER READING
● Cloud Computing: A Practical Approach, Anthony T. Velte, Tata
McGraw Hill, 2009
● https://www.instructables.com/How-to-Create-a-Virtual-Machine/
● https://www.redhat.com/en/topics/virtualization/what-is-KVM
● https://u-next.com/blogs/cloud-computing/challenges-of-cloud-
computing/
5
VIRTUALIZATION & CLOUD
COMPUTING
Unit Structure
5.0 Objective
5.1 Introduction
5.2 Virtualization and Cloud Computing
5.3 Pros and Cons of Virtualization
5.4 Virtualization using KVM
5.5 Creating virtual machines
5.6 oVirt - management tool for virtualization environment
5.7 Open challenges of Cloud Computing
5.8 Summary
5.9 Reference for further reading
5.0 OBJECTIVE
5.1 INTRODUCTION
● Virtualization is the “creation of a virtual version of something, such
as a server, a desktop, a storage device, an operating system or
network resources”.
● In other words, virtualization is a technique that allows sharing a single
physical instance of a resource or an application among multiple
customers and organizations.
● It does this by assigning a logical name to a physical storage and
providing a pointer to that physical resource when demanded.
5.2 VIRTUALIZATION AND CLOUD COMPUTING
● Virtualization plays an important role in cloud computing since it
allows for the appropriate degree of customization, security, isolation,
and manageability that are fundamental for delivering IT services on
demand.
● Virtualization technologies are primarily used to offer configurable
computing environments and storage.
● Network virtualization is less popular and, in most cases, is a
complementary feature, which is naturally needed in building virtual
computing systems.
● Particularly important is the role of the virtual computing environment
and execution virtualization techniques. Among these, hardware and
programming language virtualization are the techniques adopted in
cloud computing systems.
● Hardware virtualization is an enabling factor for solutions in the
Infrastructure-as-a-Service (IaaS) market segment, while programming
language virtualization is a technology leveraged in Platform-as-a-
Service (PaaS) offerings.
● In both cases, the capability of offering a customizable and sandboxed
environment constituted an attractive business opportunity for
companies featuring a large computing infrastructure that was
otherwise largely underutilized.
Figure 1 Live migration and server consolidation
● Portability is another advantage of virtualization, especially for
execution virtualization techniques.
● Virtual machine instances are normally represented by one or more
files that can be easily transported with respect to physical systems.
● Portability and self-containment simplify their administration. Java
code is compiled once and runs everywhere. This needs the Java
virtual machine to be installed on the host.
● Portability and self-containment helps to reduce the costs of
maintenance.
● Multiple systems can securely coexist and share the resources of the
underlying host without interfering with each other.
● This is essential for server consolidation, which allows adjusting the
number of active physical resources dynamically according to the
current load of the system, thus creating the opportunity to save energy
and to have less impact on the environment.
● Performance degradation: because virtualization interposes an
abstraction layer between the guest and the host, the guest can
experience increased latencies.
● The causes of performance degradation can be traced back to the
overhead introduced by the following activities:
○ Maintaining the status of virtual processors
○ Support of privileged instructions
○ Support of paging within VM
○ Console functions
● In hardware virtualization, the virtual machine can sometimes simply
provide a default graphics card that maps only a subset of the features
available in the host.
● In the case of programming-level virtual machines, some of the features
of the underlying operating system may become inaccessible unless
specific libraries are used.
● For example, in the first version of Java the support for graphics
programming was very limited, and the look and feel of applications
was very poor compared to native applications.
● These problems were resolved by providing a new framework called
Java Swing for designing the user interface, and further development
was done by integrating support for the OpenGL libraries in the
software development kit.
● Virtualization opens the door to a new and unexpected form of
phishing.
● The capability of emulating a host in a completely transparent manner
paved the way for malicious programs designed to extract sensitive
information from the guest.
● In hardware virtualization, malicious programs can preload themselves
before the operating system and act as a thin virtual machine manager
toward it.
● The operating system is then controlled and can be manipulated to
extract important information of interest to third parties.
Working of KVM
● KVM converts Linux into a type-1 hypervisor.
● All hypervisors require some OS-level components, such as a memory
manager, process scheduler, input/output (I/O) stack, device drivers,
security manager, and a network stack, to run VMs.
● KVM has all these components because it is part of the Linux kernel.
● Every VM is implemented as a regular Linux process, scheduled by the
standard Linux scheduler, with dedicated virtual hardware such as a
network card, graphics adapter, CPU(s), memory, and disks.
KVM features
KVM is part of Linux, and Linux is part of KVM. Everything Linux has,
KVM has too. But there are certain features that make KVM an
enterprise's preferred hypervisor.
● Security
KVM employs a combination of Security-Enhanced Linux (SELinux) and
secure virtualization (sVirt) for enhanced VM security and isolation.
● Storage
● Hardware support
KVM can use a broad variety of certified Linux-supported hardware
platforms.
● Memory management
KVM inherits the memory management features of Linux, including
non-uniform memory access and kernel same-page merging.
● Live migration
KVM supports live migration, which is the ability to move a running VM
between physical hosts with no service interruption.
● Lower latency and higher prioritization
The Linux kernel features real-time extensions that allow VM-based apps
to run at lower latency with better prioritization (compared to bare metal).
● Managing KVM
It is possible to manually manage a handful of VMs on a single
workstation without a management tool.
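Beyond manual management, KVM guests are commonly scripted through the libvirt API. The sketch below assumes the libvirt Python bindings are installed and that the local hypervisor is reachable at the usual qemu:///system URI; it simply lists the guests known to the host.

# Minimal sketch: list KVM guests on the local host via libvirt (requires libvirt-python).
import libvirt

conn = libvirt.open("qemu:///system")       # connect to the local KVM/QEMU hypervisor
try:
    for dom in conn.listAllDomains():       # running as well as defined-but-stopped guests
        state = "running" if dom.isActive() else "shut off"
        print(dom.name(), state)
finally:
    conn.close()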
Keep all of the default settings. You will be prompted to install several
Oracle components. Install all of them.
Start VirtualBox and click on 'New' in the menu. Enter the name of your
VM. This is how you will identify it in VirtualBox, so name it something
meaningful to you. Select Type and Version. This depends on what OS you
are installing.
This depends on how much memory you have on your host computer.
Never allocate more than half of your available RAM. If you are creating
If you already have an existing VM that you want to add, select "Use an
existing virtual hard drive file." Otherwise select "Create a virtual hard
drive now."
Select 'VDI.' This is usually the best option. The VM will be stored in a
single file on your computer with the .vdi extension.
Step 7: Setup File Location and Size
By default, Virtualbox selects the minimum size you should choose.
Depending on what you want to do with the VM you may want to select a
bigger size.
Step 8: Install the Operating System
Double click on your newly created VM (It will be on the left hand side
and will have the name you gave it in Step 2). Browse to your installation
media or .iso file. Finish installation.
5.6 OVIRT - MANAGEMENT TOOL FOR VIRTUALIZATION
ENVIRONMENT
● oVirt is an open source data center virtualization platform developed
and supported by Red Hat. oVirt, which provides large-scale,
centralized management for server and desktop virtualization, was
designed as an open source alternative to VMware vCenter.
● oVirt provides Kernel-based Virtual Machine (KVM) management for
multi-node virtualization. KVM is a virtualization infrastructure that
turns the Linux kernel into a hypervisor.
● Features of oVirt
○ OVirt enables centralized management of VMs, networking
configurations, hosts, and compute and storage resources from the web
based front end.
○ oVirt also provides features for disaster recovery (DR) and
hyperconverged infrastructure deployments.
○ Features for the management of compute resources include:
■ CPU pinning,
■ same-page merging and
■ memory over commitment.
■ live snapshots,
■ the creation of VM templates and VMs,
■ automated configuration
Components of oVirt
1. oVirt engine
a. The oVirt engine acts as the control center for oVirt environments.
b. The engine enables admins to define hosts and networks, as well as to
add storage, create VMs and manage user permissions.
c. Included in the oVirt engine is a graphical user interface (GUI), which
manages oVirt infrastructure resources.
d. The oVirt engine can be installed on a stand-alone server or in a node
cluster in a VM.
2. oVirt node
a. The oVirt node is a server that runs on CentOS, Fedora or Red Hat
Enterprise Linux with a virtual desktop and server manager (VDSM)
daemon and KVM hypervisor.
b. The VDSM controls the resources available to the node, including
compute, networking and storage resources.
5.7 OPEN CHALLENGES OF CLOUD COMPUTING
1. Security
● The main concern when investing in cloud services is security. This is
because your data gets stored and processed by a third-party vendor
and you cannot see it.
● We hear about broken authentication, compromised credentials,
account hijacking, data breaches, and so on in particular organizations,
which makes one a little more doubtful.
2. Password Security
● As large numbers of people access cloud accounts, it sometimes
becomes vulnerable. Anybody who knows the password or hacks into
the cloud will be able to access confidential information.
● Organizations should use multi-level authentication and make sure that
passwords remain secure. Passwords should also be updated regularly,
especially when a particular employee leaves the organization.
3. Cost Management
● Cloud computing allows access to application software over a fast
Internet connection and lets you save on investing in costly computer
hardware, software, and maintenance.
6. Control or Governance
● One more ethical issue in cloud computing is maintaining proper
control over asset management and maintenance.
● There should be a dedicated team to make sure that the assets used to
implement cloud services are used according to agreed policies and
dedicated procedures.
7. Compliance
● Another major risk of cloud computing is maintaining compliance.
● By compliance we mean a set of rules about what data is allowed to be
moved and what should be kept in house to maintain compliance.
● The organizations hence follow and respect the compliance rules set by
various government bodies.
8. Multiple Cloud Management
● Companies have begun to invest in multiple public clouds, multiple
private clouds or a combination of both is called the hybrid cloud.
● This has expanded rapidly in recent times.
● So it has become important to list the various challenges faced by such
types of organizations and find solutions to grow with the trend.
9. Creating a private cloud
● Implementing an internal cloud is beneficial because all the data
remains secure in house.
10. Performance
● When business applications migrate to a cloud or a third-party vendor,
the business performance starts to depend on the service provider as
well.
● Another major issue in cloud computing is investing in the right cloud
service provider.
11. Migration
● Migration is nothing but updating an application and a new
application or an existing application to a cloud. In the case of a new
application, the process is good and straightforward.
5.8 SUMMARY
● Virtualization has become very popular and extensively used,
especially in cloud computing.
● All these concepts play a fundamental role in building cloud
computing infrastructure and services in which hardware; IT
infrastructure, applications, and services are delivered on demand
through the Internet or more generally via a network connection.
● oVirt is an open source data center virtualization platform developed
and supported by Red Hat. It provides large-scale, centralized
management for server and desktop virtualization and was designed as
an open source alternative to VMware vCenter.
5.9 REFERENCE FOR FURTHER READING
● Mastering Cloud Computing, Rajkumar Buyya, Christian Vecchiola and
S. Thamarai Selvi, Tata McGraw-Hill Education, 2013
● Cloud Computing: A Practical Approach, Anthony T Velte, Tata
Mcgraw Hill, 2009
● https://www.instructables.com/How-to-Create-a-Virtual-Machine/
● https://www.redhat.com/en/topics/virtualization/what-is-KVM
● https://u-next.com/blogs/cloud-computing/challenges-of-cloud-
computing/
6
OPEN STACK
Unit Structure
6.1 Objectives
6.2 Introduction to Open Stack
6.3 OpenStack test-drive
6.4 Basic Open Stack operations
6.5 OpenStack CLI and APIs
6.6 Tenant model operations
6.7 Quotas, Private cloud building blocks
6.8 Controller deployment
6.9 Networking deployment
6.10 Block Storage deployment
6.11 Compute deployment
6.16 Questions
6.17 References
6.1 OBJECTIVES
At the end of this unit, the student will be able to
2. OpenStack consists of a set of interrelated components, each of which
provides a specific function in the cloud computing environment.
These components include:
Dashboard (Horizon): This component provides a web-based user
interface for managing the cloud computing environment.
Orchestration (Heat): This component provides automated deployment
and management of cloud applications.
7. Test OpenStack: To test OpenStack, you can create and deploy virtual
machines, test network connectivity, and simulate workload scenarios
to test the performance and scalability of the environment.
8. Keep in mind that deploying and configuring OpenStack can be a
complex process, and it requires some level of expertise in cloud
computing and networking. You may want to seek assistance from the
OpenStack community or a professional services provider to ensure a
smooth and successful deployment.
10. Deploy your own OpenStack environment: If you want to deploy your
own OpenStack environment, you can use tools such as DevStack or
Packstack. DevStack is a script that automates the installation of
OpenStack on a single machine, while Packstack is a similar tool that
can be used to deploy OpenStack on multiple machines. To deploy
OpenStack on your own, you will need to have a server or virtual
machine that meets the hardware and software requirements for
OpenStack.
11. Once you have a test environment set up, you can use the OpenStack
web interface (Horizon) or command-line interface (CLI) to create
virtual machines, networks, and storage resources. You can also
explore the different OpenStack components and their functionality,
such as the compute (Nova) and networking (Neutron) components.
To create a network, you can use the OpenStack web interface or CLI
and specify the network name, subnet, and IP address range. You can
also attach instances to networks and configure security settings for the
network.
To create a volume, you can use the OpenStack web interface or CLI
and specify the volume size, type, and other settings. To create an
object storage container, you can use the Swift CLI and specify the
container name and other settings.
OpenStack includes an identity service (Keystone) that allows you to
create and manage users, projects, and roles. Users are granted roles
that define their level of access to OpenStack resources. To create a
user, you can use the OpenStack web interface or CLI and specify the
user name, password, and other settings. To create a project, you can
also use the OpenStack web interface or CLI and specify the project
name and description.
To launch an instance using the CLI, you can use the openstack server
create command and specify the necessary parameters.
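A minimal sketch of such a launch is shown below; the image, flavor, network, and key pair names are placeholders for objects that already exist in your environment:

    # Launch a small instance from an existing image on an existing network
    openstack server create \
      --image cirros-0.6.2 \
      --flavor m1.small \
      --network private-net \
      --key-name demo-key \
      demo-instance

    # Check the build status of the new instance
    openstack server show demo-instance -c status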
Create a network: You can create a new network for your instances by
selecting the Network tab in the Horizon dashboard and clicking on
"Create Network." You will be prompted to specify the network type,
subnet details, and other configuration options.
To create a network using the CLI, you can use the openstack network
create command and specify the necessary parameters.
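A hedged example follows; the network name, subnet name, and address range are illustrative and should be adapted to your own environment:

    # Create a tenant network and a subnet with a private address range
    openstack network create demo-net
    openstack subnet create demo-subnet \
      --network demo-net \
      --subnet-range 192.168.10.0/24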
Attach a volume: You can attach a volume to an instance to provide additional storage by selecting the Compute tab in the Horizon dashboard, clicking on the instance, and selecting "Attach Volume." You will be prompted to select the volume and specify the device name.
To attach a volume using the CLI, you can use the openstack server
add volume command and specify the necessary parameters.
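For instance (the volume and server names below are placeholders), attaching and then verifying a volume might look like this sketch:

    # Attach an existing volume to a running instance
    openstack server add volume demo-instance demo-volume

    # Confirm the attachment from the volume side
    openstack volume show demo-volume -c attachments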
To manage security groups using the CLI, you can use the openstack
security group create and openstack security group rule create
commands.
To resize an instance using the CLI, you can use the openstack server
resize command and specify the necessary parameters.
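The sketch below combines both of these operations; the security group, rule, flavor, and instance names are illustrative only:

    # Create a security group and allow inbound SSH
    openstack security group create web-sg
    openstack security group rule create web-sg \
      --protocol tcp --dst-port 22 --remote-ip 0.0.0.0/0

    # Resize an instance to a larger flavor, then confirm the resize
    # (older clients use: openstack server resize --confirm demo-instance)
    openstack server resize --flavor m1.medium demo-instance
    openstack server resize confirm demo-instance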
6.5 OPENSTACK CLI AND APIS
1. OpenStack provides a command-line interface (CLI) and APIs that
allow you to manage and automate your cloud resources.
3. The CLI uses the OpenStack API to communicate with the OpenStack
services. The CLI is available for all OpenStack services, including
Compute, Networking, Identity, Image, and Block Storage.
4. To use the OpenStack CLI, you need to install the OpenStack client on
your local machine. The client is available for Linux, macOS, and
Windows. Once you have installed the client, you can use the
openstack command to interact with OpenStack services.
5. The OpenStack APIs are a set of RESTful APIs that allow you to programmatically interact with OpenStack services. The APIs provide a standardized way of accessing OpenStack services and can be used by developers to create custom applications and tools that interact with OpenStack services.
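As a hedged illustration of the raw REST interface, the request below asks the Identity (Keystone) v3 API for a token; the endpoint host, domain, project, and credentials are placeholders for your own deployment:

    # Request an authentication token from the Identity (Keystone) v3 API
    curl -s -i -X POST http://controller:5000/v3/auth/tokens \
      -H "Content-Type: application/json" \
      -d '{"auth": {"identity": {"methods": ["password"],
            "password": {"user": {"name": "demo",
              "domain": {"name": "Default"}, "password": "secret"}}},
            "scope": {"project": {"name": "demo-project",
              "domain": {"name": "Default"}}}}'
    # The token is returned in the X-Subject-Token response header;
    # pass it as X-Auth-Token on subsequent API calls.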
8. Overall, the OpenStack CLI and APIs are powerful tools that allow
you to manage and automate your cloud resources. Whether you prefer
to use the CLI or APIs, OpenStack provides a flexible and extensible
platform for building and managing cloud infrastructure.
6.6 TENANT MODEL OPERATIONS
2. Here are some basic OpenStack tenant model operations:
Creating a tenant: You can create a new tenant using the OpenStack
CLI or APIs. When you create a tenant, you specify a name and an
optional description.
Creating users: Once you have created a tenant, you can create one or
more users within that tenant. Users are granted access to the resources
associated with their tenant.
Assigning roles: You can assign roles to users within a tenant. Roles
are used to define what actions a user can perform on a specific
resource. For example, you might assign a user the role of "admin" for
a particular project, giving them full access to all resources within that
project.
Creating projects: You can create one or more projects and assign users and roles to those projects (a combined CLI sketch follows this list).
Managing user access to tenants: You can control which users have access to a tenant's resources by assigning roles to those users. You can assign multiple roles to a user, and you can assign roles to users at the tenant level or the project level.
Managing resources: Once you have created a tenant, you can create
and manage resources within that tenant. You can create instances,
volumes, networks, and images, and you can assign those resources to
the tenant.
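A minimal sketch of these operations with the CLI; the project, user, and role names are placeholders, and note that recent OpenStack releases use the term "project" for what older documentation calls a tenant:

    # Create a project (tenant), a user inside it, and grant the user a role
    openstack project create --description "Demo tenant" demo-project
    openstack user create --project demo-project --password secret demo-user
    # (the default role name may be "member" or "_member_" depending on the release)
    openstack role add --project demo-project --user demo-user member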
6.7 QUOTAS, PRIVATE CLOUD BUILDING BLOCKS
2. Here are some examples of quotas that can be set in OpenStack:
Floating IP quotas: This sets a limit on the number of floating IPs that
a tenant can allocate.
3. Private cloud building blocks are the components that are used to build
a private cloud infrastructure. These components include the physical
hardware, such as servers and storage devices, as well as the software,
such as OpenStack, that is used to manage the cloud infrastructure.
Compute nodes: These are the physical servers that are used to host
virtual machines.
Storage nodes: These are the physical servers that are used to provide
storage for the cloud infrastructure.
Networking hardware: This includes switches and routers that are used
to connect the cloud infrastructure to the external network.
Virtualization software: This is the software that is used to create and manage virtual machines.
7. Setting quotas helps to prevent overutilization of resources and ensure
fair resource allocation among different tenants and users. Quotas can
be managed using the OpenStack CLI or APIs.
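For example (the project name and limits are illustrative), quotas can be viewed and adjusted like this:

    # Show the current quotas for a project
    openstack quota show demo-project

    # Raise the instance, vCPU, and floating IP limits for that project
    openstack quota set --instances 20 --cores 40 --floating-ips 10 demo-project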
8. A private cloud is a cloud infrastructure that is dedicated to a single
organization. Private clouds offer several benefits, including enhanced
security, greater control over resources, and increased flexibility.
Virtualization: A hypervisor allows multiple virtual machines to run on a single physical server.
Identity and Access Management (IAM): IAM is used to manage user
access to cloud resources. You can use IAM tools such as OpenStack
Keystone to authenticate and authorize users and assign roles and
permissions.
Monitoring and Management: Monitoring and management tools are
used to ensure that your private cloud infrastructure is running
smoothly. You can use tools such as Nagios or Zabbix to monitor
system performance and detect issues before they become problems.
10. Overall, these building blocks are the foundation of a private cloud
infrastructure, and they enable you to create a flexible, scalable, and
secure cloud environment that meets your organization's needs.
6.8 CONTROLLER DEPLOYMENT
2. The controller node hosts the central OpenStack services, such as the Identity, Image, and API services, and coordinates the other machines in the deployment, such as the compute, storage, and network nodes.
3. Here are some general steps for deploying the controller node:
Install the base operating system: The first step is to install the
operating system on the server that will become the controller node.
Many OpenStack distributions provide pre-configured images that you
can use.
Install OpenStack packages: Next, you need to install the OpenStack
packages on the controller node. This can be done using package
managers like yum or apt-get.
Verify the installation: After the services are configured, you can
verify that they are working properly by running various tests and
checks. For example, you can use the OpenStack CLI to check that
you can authenticate and access OpenStack services.
4. It's important to note that the controller node deployment process can vary depending on the specific OpenStack distribution and version you are using, as well as the requirements of your environment. It's always a good idea to consult the documentation and follow best practices for your particular deployment.
Prepare the controller node: The first step is to prepare the controller
node by installing the operating system and configuring the network
interfaces. You should also configure the hostname, domain name, and
time zone.
Configure the database: OpenStack uses a database to store
configuration information and metadata about resources. You need to
configure the database service (e.g., MySQL or MariaDB) and create
the necessary databases and users.
Start the services: After configuring the services, you can start them on
the controller node using the service manager of your operating system
(e.g., systemctl or service).
Verify the installation: Finally, you should verify that the OpenStack
services are running correctly by using the OpenStack CLI or API to
create and manage resources.
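A short verification sketch, assuming the admin credentials have already been loaded into the shell (for example from an admin-openrc file, whose name and contents depend on your distribution):

    # Load admin credentials (file name is deployment-specific)
    source admin-openrc

    # Confirm that Keystone issues tokens and that services and endpoints are registered
    openstack token issue
    openstack service list
    openstack endpoint list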
7. Overall, deploying an OpenStack controller node requires careful
planning and configuration, but it is a critical step in creating a
functional and scalable cloud infrastructure.
6.9 NETWORKING DEPLOYMENT
1. In OpenStack, the networking component (Neutron) is responsible for providing network connectivity to instances and for connecting them to external networks. The general deployment steps are:
Install and configure the Neutron service: The first step is to install the
Neutron service on the controller node and configure it to provide
network connectivity. This involves configuring the Neutron server,
the Neutron API, and the Neutron plugin (e.g., ML2). You also need to
configure the Neutron database and the message queue service.
Configure the OVS agent: The next step is to configure the Open
vSwitch (OVS) agent, which provides virtual network connectivity to
instances. This involves configuring the OVS service, creating the
necessary bridges and ports, and configuring the OVS firewall.
Create networks, subnets, and routers: Once the Neutron service and
OVS agent are configured, you can create networks, subnets, and
routers. A network is a logical abstraction that provides connectivity
between instances, while a subnet is a range of IP addresses that can be
used by instances in a network. A router is a virtual device that
connects two or more networks.
Launch instances: Finally, you can launch instances and attach them to
the networks and security groups you created. The instances should be
able to communicate with each other and with the external network
through the router.
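Continuing the earlier network example, a router can be created and wired up as in the following sketch; the router name and the external network name ("public") are placeholders for your deployment:

    # Create a router, attach it to the external network, and plug in the tenant subnet
    openstack router create demo-router
    openstack router set demo-router --external-gateway public
    openstack router add subnet demo-router demo-subnet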
Overall, deploying OpenStack networking requires careful planning and configuration, but it is a critical step in creating a functional and scalable cloud infrastructure.
6.10 BLOCK STORAGE DEPLOYMENT
1. In OpenStack, the Block Storage service (Cinder) provides persistent block volumes that can be attached to instances and that remain available independently of the lifecycle of the instance.
Install and configure the Cinder service: The first step is to install the
Cinder service on the controller node and configure it to provide block
storage. This involves configuring the Cinder server, the Cinder API,
the Cinder scheduler, and the Cinder volume service.
Create volume types: A volume type is a way to define the
characteristics of a block volume, such as the size, performance, and
availability. You need to create volume types that reflect the different
needs of your applications.
Create block volumes: Once the storage backends and volume types
are configured, you can create block volumes. A block volume is a
persistent block storage device that can be attached to an instance.
Create storage pools and volumes: Once the Cinder service and storage backend are configured, you can create storage pools and volumes. A storage pool is a group of storage devices that are used to create volumes, while a volume is a block-level storage device that can be attached to an instance.
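As a sketch (the type, size, and names are illustrative), creating and attaching a volume might look like this:

    # Define a volume type, create a 10 GB volume of that type, and attach it
    openstack volume type create standard
    openstack volume create --size 10 --type standard demo-volume
    openstack server add volume demo-instance demo-volume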
6.11 COMPUTE DEPLOYMENT
Install and configure the Nova compute service: The first step is to
install and configure the Nova compute service on each compute node.
This involves installing the necessary packages, configuring the Nova
compute service, and setting up the networking.
Configure the hypervisor: Several hypervisors are supported by OpenStack Compute, such as KVM, Xen, and VMware. You need to configure the hypervisor based on the type of virtualization you are using.
Create and manage instances: Once the Nova compute service and hypervisor are configured, you can create and manage instances. An instance is a virtual machine that runs on the compute node and provides computing resources to your users and applications.
Install and configure the Nova services: The first step in deploying
Nova is to install and configure the Nova services on the controller
node. This involves configuring the Nova API, the Nova conductor,
and the Nova database.
Install and configure the Nova compute nodes: The next step is to
install and configure the Nova compute nodes. This involves
configuring the Nova compute service, setting up networking, and
configuring the hypervisor.
Configure the Nova scheduler: The scheduler decides which compute node should host each instance based on various criteria, such as available resources, affinity, and anti-affinity.
Create flavors: Flavors are predefined templates that define the size, CPU, memory, and disk specifications of an instance. You can create different flavors based on the requirements of your applications and workloads.
Create and manage instances: Once Nova is deployed, you can create and manage instances using the Nova API or the OpenStack dashboard. You can select the appropriate flavor for your instances, attach storage volumes, and configure networking.
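A brief sketch of defining a flavor and booting an instance from it; all names and sizes are placeholders:

    # Define a flavor with 2 vCPUs, 4 GB RAM, and a 20 GB root disk
    openstack flavor create --vcpus 2 --ram 4096 --disk 20 m2.custom

    # Boot an instance with that flavor
    openstack server create --flavor m2.custom --image cirros-0.6.2 \
      --network demo-net demo-compute-test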
By taking advantage of these features and tools, you can build a robust, scalable, and secure cloud infrastructure that meets your business requirements.
Operating System: Choose the operating system for the nodes, and
ensure that it is compatible with the OpenStack version.
Storage: Choose the storage solution for the environment, and ensure
that it is compatible with OpenStack. Consider using redundant storage
systems for high availability.
Configure OpenStack: After installation, configure OpenStack
according to the requirements of the production environment. This
includes configuring compute, networking, and storage.
6.14 APPLICATION ORCHESTRATION USING
OPENSTACK HEAT
1. OpenStack Heat is a service that provides orchestration capabilities to
OpenStack. Heat enables automated provisioning of infrastructure and
applications on top of OpenStack, by defining templates that describe
the desired configuration of the resources.
Create a Heat template: A Heat template is a text file that defines the resources required to deploy an application. The template is written in YAML using the HOT (Heat Orchestration Template) format; a minimal sketch follows this list.
Update the stack: If changes need to be made to the stack, the Heat
template can be modified and uploaded to Heat. Heat will then update
the stack by making the necessary changes to the resources.
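A minimal, hedged sketch of the whole cycle is given below. The HOT template defines a single server; the image, flavor, and network names are placeholders, and the openstack stack commands assume the Orchestration CLI plugin (python-heatclient) is installed:

    # Write a minimal HOT template to a local file
    cat > demo-stack.yaml <<'EOF'
    heat_template_version: 2015-10-15
    description: Minimal Heat example that boots one server
    resources:
      demo_server:
        type: OS::Nova::Server
        properties:
          image: cirros-0.6.2
          flavor: m1.small
          networks:
            - network: demo-net
    EOF

    # Create the stack, then apply template changes later with an update
    openstack stack create -t demo-stack.yaml demo-stack
    openstack stack update -t demo-stack.yaml demo-stack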
6.15 SUMMARY
In this chapter we learned about OpenStack and its components. The key points are summarized as follows:
The OpenStack APIs are available for all OpenStack services, including Compute, Networking, Identity, Image, and Block Storage. The APIs are based on industry standards such as JSON, XML, and HTTP.
6.16 QUESTIONS
1. What is OpenStack?
2. Write a short note on
i. OpenStack test-drive
ii. Basic OpenStack operations
iii. OpenStack CLI and APIs
iv. Tenant model operations
v. Quotas, Private cloud building blocks
3. Explain the following concepts in detail.
i. Controller deployment
ii. Networking deployment
iii. Block Storage deployment
iv. Compute deployment
v. Deploying and utilizing OpenStack in production environments
6.17 REFERENCES
1. OpenStack Essentials, Dan Radez, PACKT Publishing, 2015
3. https://www.openstack.org