Unit 1 Notes
1. INTRODUCTION:
Cloud Computing: It is the practice of storing and accessing data, applications and resources over the Internet (“the cloud”) instead of on your own computer or physical server, allowing on-demand and flexible resource management.
Computing technology has evolved over the years. There have been steady developments in computing hardware, software architecture, web technology and network communications over the last decade. The speed of internetworks has increased day by day while also becoming cheaper. All these developments set the stage for the revolutionary concept of ‘cloud computing’.
Cloud computing provides the means for users to easily avail computing facilities whenever and
wherever required. They need not worry about setting up infrastructure, purchasing new equipment or
investing in the procurement of licensed software. Rather they can access any volume, large or small, of
computing facilities in exchange for some nominal payment.
Every new technology emerges with the promise to resolve the shortcomings of the existing ones.
Traditional computing has played a pivotal role in the field of computing and communication over the
past few decades.
Not many decades back, enterprises used to run their businesses merely with the aid of pen, paper, telephone and fax machine. Gradually, computer systems intruded into manual processes and started automating them. Pen and paper were replaced by digital communication, and even phone and faxing services started being managed by computers.
Businesses, from local to global, are dependent on computing systems for almost everything they do. Even individuals depend heavily on computing systems for their day-to-day activities. Easy and cheap access to computing facilities has become essential for everyone.
1. Enterprise Perspective
2. Individual User’s Perspective
1. Enterprise Perspective
Business without the help of computing services is beyond imagination today, and customized software packages manage business activities. Most organizations use ERP packages (implemented by some IT enterprise) to get maximum benefit from regular business operations. To run these enterprise resource planning applications, business organizations need to invest huge volumes of capital to set up the required IT infrastructure. Servers, client terminals and network infrastructure are required, and they have to be put together in a proper manner. Moreover, arranging adequate power supply, cooling systems and provisioning space also consumes a major part of the IT budget.
Business application package implementation also over-burdens IT enterprises with many other costs. Setting up infrastructure, installation of OS and device drivers, and management of routers, firewalls, proxy servers etc. are all responsibilities of the enterprise in the traditional computing approach. Enterprises (or IT service firms) need to maintain a team of experts (a system maintenance team) in order to manage the whole thing, which is a burden for HR management and incurs recurring capital investment (for salaries). Can enterprises get relief from these responsibilities and difficulties? It would help them concentrate fully on the functioning of their business applications.
Even those IT enterprises whose sole business interest is developing applications are bound to set up computing infrastructure before they start any development work. This is an extra burden for enterprises who are only interested in application development. They can outsource the management of infrastructure to some third party, but the cost and quality of such services vary quite a bit. Can IT enterprises avert such difficulties?
Computing infrastructure requires adequate hardware procurement. This procurement is costly, and it is not a one-time investment: after every few years, existing devices become outdated as more powerful devices appear. It becomes difficult to compete in the market with outdated hardware infrastructure, and advanced software applications also require upgraded hardware in order to maximize business output. Can this process of upgrading hardware on a regular basis be eliminated from an enterprise’s responsibility?
It is not unusual to find an updated version of an application, with new releases that are more advanced and better suited to the changing business scenario. Adopting an updated version of an application requires effort from the subscriber’s end: fresh installation and integration of components need to be done. Can subscribers be relieved of this difficulty of periodically upgrading the applications?
An enterprise can thus be overburdened when handling multiple such issues in the traditional way of computing, starting with the huge initial investment needed for setting up computing infrastructure.
2. Individual User’s Perspective
Individual users have also been consumers of computing for a long time. They use computing for various purposes like editing documents, developing programs, playing games, watching videos, accessing the Internet etc.
To work with software applications (like text editors, image editors, programming tools, games etc.), users first need to procure a computing system on which these applications can run. For general users (who don’t want to experiment with computer hardware devices), this initial capital investment for setting up computing infrastructure is often more than that for the software applications they use!
A hardware component may fail for many reasons, and maintenance of the hardware infrastructure is the users’ responsibility. Time, cost and uncertainty are involved in the process. Could users get relief from these responsibilities and difficulties?
Software licensing cost needs a separate budgetary allocation. Licenses are sold for a fixed period of time (usually one year). If software is used for 2–5 hours per day on average during the licensing period, that represents only about 8%–20% utilization of the entire investment (2/24 ≈ 8%, 5/24 ≈ 21%). Could this cost be reduced? Could the licensing fee be paid on an hourly-usage basis?
Users are burdened with the installation and critical customization of software. They also have to troubleshoot when the software crashes. Professional help can be obtained against payment, or users can troubleshoot themselves, thereby investing more time. Could users get relief from these responsibilities and difficulties?
Users need to have physical access to the system in order to use a personal computing system. Though portable computing devices are available (like laptops, tablets etc.), it may not be possible to carry them all the time. Could there be a way of accessing personal computing systems remotely, from any location, at any time?
Within a few years, hardware systems become outdated, and it becomes difficult to run advanced or new software on them. Users have no option but to throw out the whole setup and replace it with a new one. Could there be a permanent solution to this wastage (from the users’ end)?
IT service outsourcing is a popular model which has been adopted by enterprises over the last two decades, where all application development and implementation related activities/responsibilities are transferred to IT service companies. But traditional outsourcing is not the best solution to all the problems mentioned above.
For example, enterprises that only use applications to run their business activities have some kind of
computing requirements, but those IT service organizations to which they outsource the complex tasks of
application development or implementation have their separate computing requirements.
The application development team of an IT service organization may in turn depend on some system (a physical computing system), whereas the assembly and maintenance group, who need not have knowledge of application development, may have entirely different concerns.
They will certainly have separate requirements. Difficulties faced by users (of computing) depend on the
layers of computing they work on.
Computers and computing have become an integral part of our daily lives. Different people use different
categories of computing facilities. These computing facilities can be segmented into three categories:
1. Infrastructure
2. Platform
3. Application
These three categories of computing facilities form three layers in the basic architecture of computing.
Figure 1.1 represents the relationships between these three entities.
1. Infrastructure
The bottom layer or the foundation is the ‘computing infrastructure’ facility. This includes all physical
computing devices or hardware components like the processor, memory, network, storage devices and
other hardware appliances. Infrastructure refers to computing resources in their bare-metal form (without
any layer of software installed over them, not even the operating system). This layer needs basic
amenities like electric supply, cooling system etc.
2. Platform
In computing, platform is the underlying system over which applications run. It can be said that the
platform consists of the physical computing device (hardware) loaded with layer(s) of software where the
program or application can run. The term ‘computing platform’ can refer to different levels of abstraction:
A fully configured physical computer loaded with an operating system is considered as a platform for
computing. Different platforms can be installed over the same computing infrastructure. Linux or
Windows operating systems installed over the same physical computer(computing infrastructure) can
provide two different computing platforms.
The platform layer is also the place where software developers work. Hence Integrated Development
Environments (IDEs) and runtimes are part of this layer. Java Development Kit (JDK) or .NET are
examples of popular computing frameworks. Software applications can be developed and run over these
platforms.
3. Application
Applications (application software) constitute the topmost layer of this layered architecture. This layer generally provides interfaces for interaction with external systems (human or machine) and is accessed by the end users of computing. A user actually works at the application layer while editing a document, playing a game or using the calculator on a computer. At this layer, organizations access enterprise applications through application interfaces to run their business.
Different types of people work at different layers of computing. They need to have different
skill-sets and knowledge
Elements of these computing layers:
A person working online or offline with some software application on a personal computing device or
directly accessing the Internet basically consumes all these three facilities together. The term ‘compute’
refers to the set of resources required for assembling a computing system or computer, particularly
processor and memory components.
The boundaries between computing layers were not very clear to the general users.
Most end users unknowingly had to bear the burden of being concerned about all these three computing
layers, while their actual area of interest was only around the application layer. Developers had to be
concerned about the infrastructure layer apart from their own layer of activities (the platform layer).
These vendors or internal system management departments are responsible for managing the whole
computing infrastructure of the enterprise, including
Planning
Designing
Assembling of physical servers
Network or data storage infrastructure build-ups
Other services like load balancing of the electricity supply.
The whole process of setting up of such an infrastructure is quite complex. For a new office, this process
generally takes weeks and sometimes months.
Even the purchase cycle of additional hardware to power up the infrastructure is not counted in hours or
minutes, but in days or weeks.
Protection and security of the infrastructure appears as an extra burden for any organization.
For example, the Java Runtime Environment (which provides a platform for applications) is essential to run any Java-based application.
Traditionally, the entire responsibility of platform installation, configuration, updates and staffing at appropriate levels often falls on the shoulders of the people concerned with application development, or even on users who are interested only in using applications.
Thus, the actual assignments get delayed. Licensing, time-bound installation of patches and maintenance of the platform cause difficulty for users (application developers or application users).
At application layers, the users are the end users of computing. They need not have any knowledge about
a complex computing system. Their only interests are to access or use different software applications to
fulfill requirements of personal or business related works.
Software applications provide solutions for various needs. In the traditional model, the computing infrastructure, the computing platform, or both often become the concern of application subscribers, which is an undesirable scenario. Apart from this, traditional software applications attract fixed, prepaid licensing costs and annual support costs.
The three main layers of computing, infrastructure, platform and application, can be delivered to consumers as ready-made facilities arranged and maintained by others, whenever they are needed, relieving consumers of the burden of arranging everything themselves. This new model of computing is known as cloud computing.
Cloud computing precipitated a significant shift in the responsibilities of managing computing resources.
Computing facilities in this model are supplied in the same way as a civic authority supplies water or electricity in a city.
Customers can use those facilities without being worried about how they are being supplied or who is
managing all of these activities.
This applies to all three major aspects of computing represented in the three-layered computing architecture.
The only things required at the customer’s end to avail these ready-made computing services (infrastructure, platform or software) are an Internet connection and a basic computing device (PC, laptop, tablet etc.) on which a software interface to the cloud computing system can run. These computing devices need not be highly configured, since the local computers no longer have to do heavy tasks and cloud interface applications are fairly lightweight.
Cloud computing vendors who are Independent Software Vendors (ISV) generally provide the
service.
These vendors are reputed computing/IT giants. They are the owners or developers of the clouds and manage everything. Users can use cloud services on a pay-per-use basis, just as they pay a monthly bill
for electricity service. This is profitable for both parties, the customers and the vendors. Vendors can supply the service at cheaper rates because of the large size of their business (economies of scale), since the number of computing subscribers is very large.
This model of computing is much talked about because it provides a lot of flexibility compared to the
traditional way of computing. For instance, it drastically reduces the cost and complexity of owning and
operating computers and networks. Moreover, since specialized computing vendors manage the cloud, the quality of service undoubtedly gets better.
The Concerns
Adoption of cloud computing also requires entrusting another company (the cloud vendor) with
subscriber’s personal, official and confidential data. But these threats existed all along in traditional computing, where enterprises depended on the IT service firms to whom they outsourced various computing activities.
Cloud computing promotes the idea of outsourcing of computing services, similar to the way electricity
requirements are being outsourced today. Users can avail electricity without being bothered about its
generation technique, where it comes from or how it is being transported. The only thing to know is that the usage is metered and will be billed accordingly.
Individuals as well as enterprises have been using cloud computing in some form or the other through the Internet.
When individuals use e-mail services like Yahoo Mail or Gmail, or social networking services like Facebook, Twitter etc., they actually use a cloud computing service.
Picture or video sharing activities via mobile phones, which have been quite popular, are also based on
the cloud computing model.
To access these services, users need not install any heavy application in local computing devices (like
desktop, laptop, tablet etc.) apart from having some web browser or web-based app
1. Technological Influences
2. Operational or Business Influences
The key technological influences behind cloud service adoption are
Universal Network Connectivity:
Cloud computing services are generally accessed through high speed network or Internet. Well-connected
digital communication network spread across the world is necessary for ubiquitous access to cloud
facility. As high speed network communication infrastructure has become available around the world,
access to the cloud computing facility from any location has become a reality.
High-Performance Computing:
High-performance computing (HPC) systems needed specialized hardware components which were
costly. Affording HPC was once beyond the imagination of small enterprises and individuals. Cloud
computing has made HPC affordable for everyone by aggregating computing power to produce
computing performance for executing high performance tasks.
Commoditization:
A product or service turns into a commodity when it becomes marketable and can be interchanged with another product of the same type that is also available in the market. This is possible when products or services from multiple vendors provide more or less the same value to customers, and customers have the option of replacing one product with the product of some other vendor. Cloud offerings from different providers create the same scenario. This commoditization of cloud services has created an irrefutable marketplace for cloud adoption.
There are many business factors associated with moving into the cloud environment.
Outsourcing
IT outsourcing is a common phenomenon among enterprises. Cloud computing utility services provide scope for using facilities entirely managed by others.
Speed or Responsiveness
The time required to develop, configure or launch new systems in a cloud computing environment is much less than in the traditional one.
Automation
Automatic availability of systems, recovery from failure, load balancing, performance monitoring of systems and applications, and auto-maintenance of computing systems are some of the features offered by the cloud environment that provide a lot of advantages.
Adopting the latest technology also used to mean a considerable investment for existing users. Cloud computing eliminates these barriers, as customers need to invest very little capital to start.
Studies show that, on average, about 70 to 80 percent of an organization’s IT budget goes to the operation and maintenance of existing systems and infrastructure. This cost is drastically reduced in the cloud computing regime.
Mobility
Cloud services can also grow or shrink as needed. They can expand or shrink with the business, which provides a higher cost–benefit ratio. When a business needs more computing/IT support it consumes more cloud services; when it needs less, it consumes less. Since payment is on a usage basis, this elasticity of cloud services provides great flexibility to consumers. Cloud computing has thereby made it easier for anyone to avail state-of-the-art computing facilities. Businesses can exhibit more speed and agility; agility enables a system or business to respond quickly to a changing atmosphere or situation without losing momentum.
Flexibility
Cloud services have emerged as a lucrative option for individual users of computing as well as for most enterprises. But along with many driving forces, a few resisting forces also exist that decide the level and speed of movement towards cloud services.
Cloud computing has introduced a real paradigm shift in the scope of computing. Unlike the conventional
uses of computer technology, it facilitates computing as a utility service which is delivered on demand.
The computing facility is managed by providers and can be measured in usage volume or usage time.
In utility computing, the cost of running systems round the clock moves towards the provider’s end. Subscribers are relieved of the responsibility of system administration, maintenance, and 24 × 7 energy and cooling support.
This is a basis for cost savings, because subscribers can use the service by paying a very nominal fee. The provider, on the other hand, can offer the service at a nominal fee because of its volume of business.
The cloud computing model shifts the majority of infrastructure and other system management tasks towards cloud vendors. Dedicated teams at the vendor’s end take care of all these activities.
Thus, the users can enjoy a sense of relief and can concentrate only on their area (layer) of
computing interest without bothering about the management of the underlying computing layers.
Cloud computing does not charge subscribers when they do not use it, and the charge is not fixed; it depends on the duration of usage.
Any use is metered, and users are charged a reasonable fee according to their consumption. This reduces the cost of computing.
In cloud computing, users can easily access supercomputer-like computing power at reasonable cost if necessary. Earlier, in the traditional approach, only big corporations could afford high-end computing.
Storage is another important issue for users. The cloud provides as much storage as required; it is virtually unlimited, which is viewed as a big benefit for users.
6. Quality of Service
7. Reliability
The ability to deliver quality service and to support load balancing, backup and recovery in case of failure makes reputed cloud vendors highly reliable; reliability often emerges as a big worry in traditional computing.
In cloud computing, subscribers no longer need to plan for these complex tasks, as vendors take care of those issues and do them better.
8. Continuous Availability
Reputed cloud vendors assure almost 24 × 7 service availability. Statistics have shown that service uptime (delivered by reputed vendors) counted over a year generally does not fall below 99.9%, which corresponds to less than about nine hours of downtime per year.
Such guaranteed continuous availability of cloud service is a big enabler for any business.
9. Locational Independence/Convenience of Access
Cloud computing is available everywhere via Internet. Users can access it through any
computing device like PCs, or portable computing devices like tablet, laptop or smart
phone.
The only thing required to avail cloud computing through those devices is access to the Internet, irrespective of geographic location or time zone.
Resiliency is the ability to reduce the magnitude and/or duration of disruptions caused by undesirable circumstances.
Cloud computing is built on resilient computing infrastructure, and thus cloud services are more resilient to attacks and faults.
Infrastructure resiliency is achieved through infrastructure redundancy combined with effective mechanisms to anticipate, absorb and adapt to such disruptions.
Deployment time in the cloud environment is significantly shorter than it was in the traditional computing environment. This is possible since resource provisioning is rapid and automatic in the cloud environment.
In a highly competitive market, the ability to deploy quickly brings significant business advantages.
The issue of software upgrades causes a lot of headache in the traditional computing environment. New patches are released every now and then, and users need to apply those patches periodically.
In the cloud computing environment, this upgrade happens automatically.
Cloud vendors always deliver the latest available version of any software (if not asked otherwise).
The upgraded environment becomes available to users almost immediately after release, whenever the user logs in next.
Breakdown of systems due to sudden technical failure or natural disaster is a major concern
for users.
Specially, any damage to physical storage devices may cause huge commercial loss.
Cloud services delivered by reputed vendors have robust recovery systems incorporated in their setup. Thus, systems and data remain better protected, in terms of safety and security, in cloud computing than in the traditional setup.
Cloud computing provides numerous benefits. But, like any new technology, this model of computing
also brings some challenges with it.
Different vendors are coming up with cloud computing facilities for public use which are, to various extents, proprietary.
Applications developed on these proprietary clouds are difficult to move to other cloud platforms due to vendor lock-in.
This problem limits the portability of applications. Hence, it often becomes a challenge to move from one cloud provider to another.
2. Inter-operability Problem
In cloud computing, users or enterprises need to store data outside their network boundary
protected by firewalls.
Thus the trust boundary of enterprises expands up to the external cloud.
Security of users’ data largely depends on the cloud vendors.
This may introduce some extent of vulnerabilities to the security of data.
Another concern arises when a cloud computing facility accessed by multiple parties causes
overlapping of trust boundaries.
Cloud computing is built and governed by the policies of computing vendor or service provider.
Consumers are relieved of the tiring responsibility of managing the computing system.
While this turns out to be a major benefit, the low control over the governance of the computing environment sometimes raises concerns among consumers who used to enjoy full control over self-owned traditional data centers.
The main concern is how a vendor operates the cloud.
A certain (though limited) degree of operational control is given to subscribers depending on the type of service, and the service level agreement plays an important role in this regard.
Cloud computing vendors build data centers at locations of their convenience, both geographic and economic. A vendor may even have more than one data center dispersed over multiple geographic locations.
Since subscribers remotely access cloud computing over the Internet, they may not be aware of
the actual location of the resources they consume. More importantly, the storage location of
subscriber’s data may not be within the country or region of the subscriber.
This sometimes poses serious legal concerns.
Most regulatory frameworks hold cloud consumer organizations responsible for the security, integrity and storage of data even when, in reality, it is held by an external cloud vendor.
In such a scenario, resolving multi-regional compliance and legal issues is a soaring challenge for cloud computing.
6. Bandwidth Cost
It is a fact that, while the pay-per-use model of cloud computing cuts down costs since subscribers only pay for the resources or services they use, the model brings an associated cost with it: the cost of the network bandwidth used to access the service.
In the current age of Internet, cost of bandwidth is very low at moderate speed of access.
But more bandwidth can provide higher speed which is essential for high quality service.
While low-cost bandwidth may often fulfill the requirements of general applications, data-intensive applications (those dealing with critical and huge volumes of data) demand higher bandwidth, which may add a little more to the total cost of computing.
IV CLOUD COMPUTING SERVICES
Cloud computing environment is built of heterogeneous computing systems spread over different
geographic locations. Together they all act as a single system. Applications running on those disparate
systems communicate with each other through the web services.
A web service is a way of establishing communication between two software systems over an internetwork. Web services use a standardized way of data exchange, since different software systems might be built using different programming languages and run on different platforms.
This standardization is very important so that communication remains independent of programming languages and platforms.
Web Service
Web service describes the method of establishing communication between two web-based applications.
World Wide Web Consortium (W3C) defines web services as “a software system designed to support
interoperable machine-to-machine interaction over a network”.
Web services are generally categorized into two different classes. Based on how the service is implemented, a web service can be either SOAP-based or REST-based.
SOAP-based web services use the XML format for communicating messages among web applications, as XML is an open format recognized by all kinds of applications.
In this approach, HTTP (hyper-text transfer protocol) is used for passing messages.
SOAP was originally developed by Microsoft because older Remote Procedure Call (RPC)-based message-passing technologies like DCOM (Distributed Component Object Model) or CORBA (Common Object Request Broker Architecture) did not work well over the Internet.
This was primarily because those technologies relied on binary messaging, whereas the XML format of messaging performs well over the Internet.
SOAP was accepted as a standard when Microsoft submitted it to the Internet Engineering Task Force (IETF).
The rules of SOAP communication are described in Web Services Description Language (WSDL) format. The rules are stored in files with the .wsdl extension.
SOAP is often considered complex since creation of the XML structure is mandatory for passing messages.
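As a rough illustration of the idea (not tied to any real service), the sketch below builds a minimal SOAP 1.1 envelope in XML and posts it over HTTP using only Python’s standard library. The endpoint URL, the GetServerStatus operation and its namespace are hypothetical placeholders.

```python
# Minimal sketch: posting a SOAP 1.1 envelope over HTTP.
# The endpoint, namespace and "GetServerStatus" operation are hypothetical.
import urllib.request

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"  # standard SOAP 1.1 namespace

envelope = f"""<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="{SOAP_ENV}">
  <soap:Body>
    <GetServerStatus xmlns="http://example.com/cloud-demo">
      <ServerId>vm-001</ServerId>
    </GetServerStatus>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    url="http://example.com/cloud-demo/service",      # hypothetical SOAP endpoint
    data=envelope.encode("utf-8"),
    headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": "http://example.com/cloud-demo/GetServerStatus",
    },
    method="POST",
)

with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))   # the reply is itself an XML SOAP envelope
```

The mandatory XML envelope is the “complexity” the notes refer to: even this trivial request needs the full SOAP structure.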
Instead of using XML to make a request, REST relies on a global identifier to locate resources. Thus, a separate resource-discovery mechanism is not needed.
The global identifier assigned to each resource makes the method of accessing resources uniform: any resource at a known address can be accessed by any application agent.
REST allows many standard formats like XML, JavaScript Object Notation (JSON) or plain text as well
as any other agreed upon formats for data exchange.
REST is an architectural style for designing networked applications. Here, simple HTTP is used to make calls between machines identified by their URLs (Uniform Resource Locators), which is simpler than mechanisms like CORBA, DCOM or SOAP.
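For contrast, a REST request needs no XML envelope: the resource is addressed directly by its URL and the reply can be plain JSON. The sketch below, again using only the Python standard library, fetches a hypothetical resource; the URL and response fields are illustrative assumptions, not any particular provider’s API.

```python
# Minimal sketch of a REST call: the resource is identified by its URL
# and the response is parsed as JSON. The endpoint is hypothetical.
import json
import urllib.request

url = "https://api.example.com/v1/servers/vm-001"   # global identifier of the resource

with urllib.request.urlopen(url) as response:
    server = json.loads(response.read().decode("utf-8"))

# Assuming the (hypothetical) service returns fields like these:
print(server.get("id"), server.get("status"))
```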
Both SOAP and REST can be used for cloud computing development. The choice depends on the design
and implementation requirements.
ROLE OF API:
An API (Application Programming Interface) is a set of defined functions or methods that applications use to interact with a software component.
It defines the contract of communication, or standard interface, provided by software components so that other software components can interact with them.
APIs play an important role in cloud computing. When cloud services are released, corresponding APIs (referred to as cloud APIs) are also released, as they are critical for the usefulness and operational success of those services. Cloud services generally provide well-defined APIs for their consumers so that anyone can access and use the capabilities offered to develop an application or service.
Requests for data or computation can be made to cloud services through cloud APIs. Cloud APIs expose their features via REST or SOAP.
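To illustrate how a cloud API can expose a capability over REST, the sketch below wraps a hypothetical “create virtual machine” call behind a small client class. The base URL, the /servers path, the request fields and the API-key header are all assumptions for illustration; real providers define their own endpoints and authentication schemes.

```python
# Hypothetical cloud API client: requests a new virtual machine via a REST call.
# Endpoint paths, field names and the API-key header are illustrative assumptions.
import json
import urllib.request


class CloudClient:
    def __init__(self, base_url: str, api_key: str):
        self.base_url = base_url.rstrip("/")
        self.api_key = api_key

    def create_server(self, name: str, cpus: int, memory_gb: int) -> dict:
        """Ask the provider to provision a VM and return its description."""
        body = json.dumps({"name": name, "cpus": cpus, "memory_gb": memory_gb})
        request = urllib.request.Request(
            url=f"{self.base_url}/servers",
            data=body.encode("utf-8"),
            headers={
                "Content-Type": "application/json",
                "X-Api-Key": self.api_key,   # placeholder authentication scheme
            },
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read().decode("utf-8"))


# Usage against a hypothetical provider:
# client = CloudClient("https://api.example.com/v1", api_key="secret")
# vm = client.create_server(name="web-01", cpus=2, memory_gb=4)
```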
Both vendor specific and cross platform APIs are available, but cross platform APIs are still not
widespread in cloud computing arena and are available for specific functional areas only.
Clubbed together, these three service models are commonly referred to as the SPI (Software-Platform-Infrastructure) model. Cloud service providers arrange these services for cloud consumers. The NIST reference architecture represents these services under the ‘service orchestration’ component of the provider (Figure 5.1).
The service layer is the topmost layer of cloud service orchestration, above the resource abstraction and control layer. The service layer includes the three major cloud services: SaaS, PaaS and IaaS. The PaaS layer resides over IaaS, and the SaaS layer resides over PaaS.
Infrastructure-as-a-Service
Platform-as-a-Service
Software-as-a-Service
1. E-mail facility is one common example of a SaaS application that is used by everyone.
2. The CRM (customer relationship management) package of Salesforce.com has gained popularity among enterprises since the early 2000s.
3. SAP (Systems, Applications and Products), the Enterprise Resource Planning (ERP) solution provider, entered the SaaS CRM and ERP markets with its ‘Business ByDesign’ solution.
4. Oracle launched its CRM SaaS ‘On Demand’. There are also many popular SaaS offerings for general users in the market today, like Google Apps, Microsoft Office 365 and others.
SERVICE ABSTRACTION
● The word ‘abstraction’ is derived from Latin words ‘abs’ meaning ‘away from’ and ‘trahere’
meaning ‘to draw’. Abstraction takes away or hides the characteristics in order to reduce
something into an essential utility.
● In computing, a layer of abstraction is a way of hiding the implementation details of a particular set of functionalities. In the cloud computing model, the level of service abstraction rises while moving from IaaS towards SaaS.
● At the IaaS level, consumers have the freedom of building computing infrastructure (essentially
in virtual mode) from a set of available options.
● They can configure machines, set up networks and select storage. All of the devices are available in virtual mode, and consumers should have the knowledge of building the required infrastructure from scratch. At the PaaS level, the abstraction intensifies. The underlying virtual infrastructure of the IaaS level remains hidden from the consumers.
● They can simply work with chosen platform environment without knowing how the underlying
system has been configured.
● At this level, consumers have full control over the PaaS environment and can configure and manage utilities like web servers, database servers and different application development environments. This abstraction intensifies further at the SaaS level, where consumers remain unaware even of the technology behind an application. They simply use applications without knowing anything about their implementation.
THE SPI MODEL
The three service models SaaS-PaaS-IaaS together are referred to as the SPI model of cloud computing.
In this layered architecture of the cloud service model, moving upward, each service layer is empowered by the capabilities of the service layer(s) beneath it.
Thus, PaaS is empowered by the IaaS layer, and SaaS is empowered by the capabilities of the underlying IaaS and PaaS layers. The figure below shows the layered cloud SPI model with mention of popular commercial services.
One point to note here is that there is no functional relation between cloud deployment models and
delivery models.
Any delivery model can exist in any deployment scenario and thus any delivery/deployment pairing
is possible. Although, SaaS offerings are mostly public services and hence the SaaS/public
combination is more common than the other combinations.
UBIQUITOUS CLOUD
The idea of ubiquitous computing is about making computing facilities available everywhere and at all times.
Cloud computing further strengthens the idea of ubiquitous computing. Ubiquitous cloud refers to the use
of computing resources spread over geographic locations from any place and any time.
The fact is that earlier users used to access websites or web portals consisting of static and dynamic pages via the Internet, but now they access cloud computing too.
Cloud computing has its own characteristics. It follows the utility service model and it is measurable, so users can be billed as per use. More importantly, it can deliver both software and hardware to users over an internetwork or the Internet in a special form called a ‘service’.
Cloud computing does not mean simple static or dynamic web content; it is much more than that.
V RESOURCE VIRTUALIZATION
WHAT IS VIRTUALIZATION
● In a simple sense, virtualization is the logical separation of physical resources from users’ direct access while still fulfilling their service needs.
● At the end, of course, the physical resources are responsible for providing those services. The idea of virtualizing a computer system’s resources (including processor, memory, storage etc.) has been well established for many decades.
● Virtualization provides a level of logical abstraction that liberates user-installed software (starting from the operating system and including other system as well as application software) from being tied to a specific set of hardware.
● Rather, users install everything over the logical operating environment (rather than the physical one) created through virtualization.
Any kind of computing resources can be virtualized. Apart from basic computing devices like processor,
primary memory, other resources like storage, network devices (like switch, router etc.), the
communication links and peripheral devices (like keyboard, mouse, printer etc.) can also be virtualized.
But, it should be noted that in case of core computing resources a virtualized component can only be
operational when a physical resource empowers it from the back end.
For example, a virtual processor can only work when there is a physical processor linked with it. The figure below represents a virtualized computing environment comprising processor, memory and storage disk.
The virtualization layer transforms these physical computing devices into virtual form and presents them to the user.
The important thing to note here is that the simulated devices produced through virtualization may or may not resemble the actual physical components (in quality, architecture or quantity).
For instance, in the figure below, users get access to three processors while there is only one physical processor in reality; or a 32-bit processor can be produced (in virtual form) from a 64-bit physical processor.
● The software for virtualization consists of a set of control programs. It offers all of the physical
computing resources in custom made simulated (virtual) form which users can utilize to build
virtual computing setup or virtual computers or virtual machines (VM).
● Users can install an operating system over a virtual computer just as they do over a physical computer. An operating system installed over a virtual computing environment is known as a guest operating system.
● When virtualization technique is in place, the guest OS executes as if it were running directly on
the physical machine.
● Machine virtualization (also called server virtualization) is the concept of creating virtual
machine (or virtual computer) on actual physical machine.
● The parent system on which the virtual machines run is called the host system, and the virtual
machines are themselves referred as guest systems.
● In a conventional computing system, there has always been a one-to-one relationship between the physical computer and the operating system: at any time, only a single OS can run on a physical machine.
● Hardware virtualization eliminates this limitation of having a one-to-one relationship between
physical hardware and operating system.
● It facilitates the running of multiple computer systems having their own operating systems on
single physical machine.
● As shown in the figure, the operating systems of the guest systems running over the same physical machine need not be similar. All these virtual machines running over a single host system remain independent of each other. Operating systems are installed into those virtual machines.
● These guest systems can access the hardware of the host system and can run applications within their own operating environments.
● There are two different techniques of server or machine virtualization: the hosted approach and the bare-metal approach.
● The techniques differ depending on the type of hypervisor used.
● Although the techniques differ, they have the same ultimate goal: creating a platform where multiple virtual machines can share the same system resources. Each technique is simply a different way of achieving this goal.
1.Hosted Approach
● In this approach, an operating system is first installed on the physical machine to activate it.
● This OS installed over the host machine is referred as host operating system. The hypervisor is
then installed over this host OS. This type of hypervisor is referred to as Type 2 hypervisor or
Hosted hypervisor.
● Figure below represents the hosted machine virtualization technique. So, here the host OS
works as the first layer of software over the physical resources.
● Hypervisor is the second layer of software and guest operating systems run as the third layer of
software. Products like VMWare Workstation and Microsoft Virtual PC are the most common
examples of type 2 hypervisors.
Benefits: In this approach, the host OS supplies the hardware drivers for the underlying physical resources. This eases the installation and configuration of the hypervisor and makes Type 2 hypervisors compatible with a wide variety of hardware platforms.
Drawbacks: A hosted hypervisor does not have direct access to the hardware resources; hence, all requests from virtual machines must go through the host OS. This may degrade the performance of the virtual machines.
2. Bare Metal Approach
● In this approach of machine virtualization, the hypervisor is directly installed over the physical machine. Since the hypervisor is the first layer over the hardware resources, the technique is referred to as the bare-metal approach.
● Here, the VMM or hypervisor communicates directly with the system hardware. In this approach, the hypervisor acts as a low-level virtual machine monitor and is also called a Type 1 hypervisor or native hypervisor. VMware’s ESX and ESXi Servers, Microsoft’s Hyper-V and the Xen solution are some examples of bare-metal hypervisors.
Benefits: Since the bare-metal hypervisor can directly access the hardware resources, in most cases it provides better performance compared to the hosted hypervisor. For bigger applications like enterprise data centers, bare-metal virtualization is more suitable because it usually provides advanced features for resource and security management. Administrators get more control over the host environment.
Drawbacks: As a hypervisor usually has only a limited set of device drivers built into it, bare-metal hypervisors have limited hardware support and cannot run on a wide variety of hardware platforms.
● The hypervisor or virtual machine monitor (VMM) presents a virtual operating platform before
the guest systems. It also monitors and manages the execution of guest systems and the virtual
machines.
● All of the virtual machines run as self-sufficient computers isolated from others, even though
they are served by the same set of physical resources.
● Alternately, it can be said that a hypervisor or VMM facilitates and monitors the execution of
virtual machines and allows the sharing of the underlying physical resources among them.
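As an optional, concrete illustration of a program talking to a hypervisor of the kind described above, the sketch below uses the libvirt Python binding to connect to a local hypervisor and list its guest virtual machines with their state. It assumes the libvirt-python package is installed and that a hypervisor such as KVM/QEMU is reachable at the qemu:///system URI; on other setups the connection URI would differ.

```python
# Sketch: querying a hypervisor through the libvirt Python binding.
# Assumes libvirt-python is installed and a local KVM/QEMU hypervisor is running.
import libvirt

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
try:
    print("Host:", conn.getHostname())
    for dom in conn.listAllDomains():   # every guest (virtual machine) known to the host
        state = "running" if dom.isActive() else "shut off"
        print(f"guest {dom.name()}: {state}, max memory {dom.maxMemory()} KiB")
finally:
    conn.close()
```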
The following sections focus on different hypervisor-based virtualization approaches.
● In full virtualization (also called native virtualization), the hypervisor fully simulates or emulates the underlying hardware. Virtual machines run over this virtual set of hardware.
● The guest operating systems assume that they are running on actual physical resources and thus
remain unaware that they have been virtualized.
● This enables unmodified versions of available operating systems (like Windows, Linux etc.) to run as guest OS over the hypervisor. In this model, it is the responsibility of the hypervisor to handle all OS-to-hardware (i.e. guest OS to physical hardware) requests while the guest machines are running.
● The guest OS remains completely isolated from the physical resource layers by the hypervisor. This provides flexibility, as almost all of the available operating systems can work as guest OS. VMware’s virtualization product VMware ESXi Server and Microsoft Virtual Server are a few examples of full virtualization solutions.
● ‘Para’ is an English affix of Greek origin that means ‘beside’ or ‘alongside’. In full virtualization, the operating systems which run in VMs as guest OSs need not have any prior knowledge that they will run over a virtualized platform.
● They do not need any special modification (or, functionality incorporation) for running over the
hypervisors and are installed in their original form.
● All virtualization activities, like translating instructions and establishing communication between the different guest OSs and the underlying hardware platform, are managed by the hypervisor.
● In para-virtualization, a portion of the virtualization management task is transferred (from the
hypervisor) towards the guest operating systems.
● Normal versions of available operating systems are not capable of doing this. They need special
modification for this capability inclusion. This modification is called porting.
● Each guest OS is explicitly ported for the para-API (application program interface). A model of para-virtualization is shown in the figure.
● Thus, in para-virtualization, each guest OS needs to have prior knowledge that it will run over
the virtualized platform.
● Moreover, it also has to know which particular hypervisor it will run on. Depending on the hypervisor, the guest OS is modified as required to participate in the virtualization management task.
● Unmodified versions of available operating systems (like Windows, Linux) cannot be used in para-virtualization. Since it involves modification of the OS, para-virtualization is also sometimes referred to as OS-assisted virtualization.
● This technique relieves the hypervisor, to some extent, from handling the entire virtualization task. The best-known example of a para-virtualization hypervisor is the open-source Xen project, which uses a customized Linux kernel.
3. Hardware-Assisted Virtualization
● The utility service model of cloud computing requires maintaining huge amounts of all types of computing resources in order to provide different services to consumers.
● For this purpose, cloud service providers create pools of computing resources. Effective pooling or grouping of resources requires appropriate system design and architectural planning.
● In the traditional computing model, silos are built with very little or no inter-connection. In cloud computing, on the other hand, consumers use a well-connected pool of computing resources.
● Consumers gain almost no knowledge of or control over the locations from which physical resources are allotted to them. At best, providers sometimes ask for a choice of geographic location (country or continent) from which a consumer wants to get resources.
Resource Pooling Architecture
● A resource pooling architecture is designed to combine multiple pools of resources, where each pool groups together identical computing resources.
● The challenge is to build an automated system which will ensure that all of the pools work together in a synchronized manner. Computing resources can broadly be divided into three categories: computer/server, network and storage.
● Hence, physical computing resources to support these three purposes must be configured in cloud data centers in good quantity. Again, a computer’s capability mainly depends on two resource components: processor and memory.
● Thus, resource pooling mainly concentrates on developing rich pools of four computing resources: processors, memory, network devices and storage.
1.1 Computer or Server Pool
● Server pools are developed by building pools of physical machines installed with operating systems and necessary system software.
● Virtual machines are built on these physical servers and combined into a virtual machine pool. Physical processor and memory components from the respective pools are later linked with these virtual servers, in virtualized mode, to increase the capacity of the servers.
● Dedicated processor pools are made of processors of various capacities. Memory pools are built in a similar fashion. Processor and memory are allotted to the virtual machines as and when required.
● Again, they are returned to the pools of free components when the load on a virtual server decreases (a simplified sketch of this allocate-and-release behaviour follows below). Figure 8.2 shows a resource pool comprising three separate pools of resources. This is a simplified demonstration of resource pooling; in reality, pooling is implemented with other essential resources in a more structured manner.
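The sketch below models, in a very simplified way, the allocation behaviour described above: processor and memory units are drawn from free pools when a virtual server needs them and returned when its load decreases. All class and method names here are illustrative, not part of any real cloud product.

```python
# Simplified model of resource pooling: CPUs and memory are allotted to virtual
# machines from shared pools and returned when no longer needed.
# All names are illustrative; real cloud platforms are far more elaborate.

class ResourcePool:
    def __init__(self, total_cpus: int, total_memory_gb: int):
        self.free_cpus = total_cpus
        self.free_memory_gb = total_memory_gb
        self.allocations = {}          # vm_name -> (cpus, memory_gb)

    def allocate(self, vm_name: str, cpus: int, memory_gb: int) -> bool:
        """Give a VM resources from the free pools, if enough are available."""
        if cpus > self.free_cpus or memory_gb > self.free_memory_gb:
            return False               # pool exhausted; in reality another pool may be tried
        self.free_cpus -= cpus
        self.free_memory_gb -= memory_gb
        self.allocations[vm_name] = (cpus, memory_gb)
        return True

    def release(self, vm_name: str) -> None:
        """Return a VM's resources to the free pools when its load decreases."""
        cpus, memory_gb = self.allocations.pop(vm_name)
        self.free_cpus += cpus
        self.free_memory_gb += memory_gb


pool = ResourcePool(total_cpus=16, total_memory_gb=64)
pool.allocate("vm-web", cpus=4, memory_gb=8)
pool.allocate("vm-db", cpus=8, memory_gb=32)
print(pool.free_cpus, pool.free_memory_gb)   # 4 CPUs and 24 GB left in the pools
pool.release("vm-web")
print(pool.free_cpus, pool.free_memory_gb)   # resources returned: 8 CPUs, 32 GB free
```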
1.2 Storage Pool
Storage is another essential resource with rapidly growing requirements, frequently accessed by applications as well as by consumers of computing. Storage pools are made of block-based storage disks. They are configured with proper partitioning and formatting, and are made available to consumers in virtualized mode. Data stored on those virtualized storage devices is actually saved on these pre-configured physical disks.
1.3 Network Pool
Elements of all the resource pools, and the pools themselves, owned by a service provider remain well-connected with each other.
This networking facilitates the cloud service at the provider’s end. As cloud computing offers consumers the facility of creating virtualized networks, pools of physical network devices are also maintained in the data centers.
Pools of networking components are composed of different preconfigured network connectivity devices
like switches, routers and others. Consumers are offered virtualized versions of these components. They
can configure those virtual network components in their own way to build their network.
● Resource sharing leads to a higher resource utilization rate in cloud computing. As a large number of applications run over a pool of resources, the average utilization of each resource component can be increased by sharing it among different applications, since all of the applications do not generally reach their peak demand at the same time.
● Cloud computing allows sharing of pooled and virtualized resources among applications, users
and servers.
● The implementation needs appropriate architectural support. While servers are shared among users and applications, resources like storage, I/O and communication bandwidth are generally shared among several virtual machines.
● Resource sharing in utility service environment does not come without its own set of challenges.
● The main challenge is to guarantee the Quality of Service (QoS), as performance isolation is a
crucial condition for QoS.
● The sharing may affect the run-time behaviour of other applications, as multiple applications compete for the same set of resources. It may also become difficult to predict response and turnaround times.