
Unit II: Introduction to Cloud Computing (BCAM051)

Dr. Arvinda Kushwaha
Professor
Content
• DETAILED SYLLABUS
• Cloud Enabling Technologies Service Oriented Architecture
• REST and Systems of Systems
• Web Services
• Publish-Subscribe Model
• Basics of Virtualization
• Types of Virtualization
• Implementation Levels of Virtualization
• Virtualization Structures
• Tools and Mechanisms
• Virtualization of CPU – Memory – I/O Devices
• Virtualization Support and Disaster Recovery
DETAILED SYLLABUS

Unit: II
Topic: Cloud Enabling Technologies Service Oriented Architecture: REST and Systems of Systems – Web Services – Publish, Subscribe Model – Basics of Virtualization – Types of Virtualization – Implementation Levels of Virtualization – Virtualization Structures – Tools and Mechanisms – Virtualization of CPU – Memory – I/O Devices – Virtualization Support and Disaster Recovery.
Proposed Lectures: 8

Text books:
1. Kai Hwang, Geoffrey C. Fox, Jack G. Dongarra, "Distributed and Cloud Computing: From Parallel Processing to the Internet of Things", Morgan Kaufmann Publishers, 2012.
2. John W. Rittinghouse and James F. Ransome, "Cloud Computing: Implementation, Management and Security", CRC Press, 2017.
3. Rajkumar Buyya, Christian Vecchiola, S. Thamarai Selvi, "Mastering Cloud Computing", Tata McGraw Hill, 2013.
4. Toby Velte, Anthony Velte, Robert Elsenpeter, "Cloud Computing – A Practical Approach", Tata McGraw Hill, 2009.
5. George Reese, "Cloud Application Architectures: Building Applications and Infrastructure in the Cloud: Transactional Systems for EC2 and Beyond (Theory in Practice)", O'Reilly, 2009.
Cloud Enabling Technologies
Cloud Enabling Technologies are the fundamental technologies that support the
creation, deployment, and functioning of cloud computing. They provide the
foundation upon which cloud services (IaaS, PaaS, SaaS) are built. Without them, the
cloud cannot deliver scalability, flexibility, and efficiency.

Major Cloud Enabling Technologies

Service-Oriented Architecture (SOA)
• Provides a modular approach to designing software as services.
• Enables reusability, interoperability, and integration of applications in the
cloud.
Web Services & APIs
• Allow communication between distributed applications via standards like
REST, SOAP, XML, JSON.
• Essential for integrating different systems and enabling cloud-based
applications.
Virtualization
• Abstracts physical hardware into virtual machines (VMs) or containers.
• Provides efficient use of resources, scalability, and isolation of workloads.
Multitenancy
• Multiple users (tenants) share the same infrastructure while
maintaining data isolation.
• Optimizes resource utilization and reduces cost.
Service Level Agreements (SLAs)
• Define measurable commitments (e.g., uptime, availability, response
time) between provider and consumer.
• Ensure trust and accountability in cloud services.
Internet and Broadband Networks
• High-speed internet acts as the backbone for accessing and delivering
cloud services globally.
Parallel and Distributed Computing
• Underpins cloud’s ability to handle large-scale data processing.
• Frameworks like Hadoop and Spark leverage this for big data
analytics.
Scalability and Elasticity Mechanisms
• Automatically scale resources up/down based on demand.
• Ensures efficient resource usage and cost-effectiveness.
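The scaling mechanism described above can be sketched in a few lines. The following is a minimal, hypothetical threshold-based autoscaler loop in Python; get_average_cpu() and set_instance_count() stand in for a real provider's monitoring and provisioning APIs and are assumptions, not actual library calls.

import random
import time

def get_average_cpu() -> float:
    # Placeholder: a real autoscaler would query the provider's
    # monitoring service for a fleet-wide CPU average (0.0 to 1.0).
    return random.random()

def set_instance_count(n: int) -> None:
    # Placeholder: a real autoscaler would call the provider's
    # provisioning API to add or remove instances.
    print(f"desired instance count: {n}")

def autoscale(min_instances=1, max_instances=10,
              scale_up_at=0.80, scale_down_at=0.30):
    # Naive elasticity loop: add capacity under sustained load,
    # release it when demand drops, within fixed bounds.
    instances = min_instances
    while True:
        cpu = get_average_cpu()
        if cpu > scale_up_at and instances < max_instances:
            instances += 1
        elif cpu < scale_down_at and instances > min_instances:
            instances -= 1
        set_instance_count(instances)
        time.sleep(60)  # re-evaluate once per minute

Real platforms implement far more sophisticated policies (cooldown periods, predictive scaling), but the feedback loop is the same idea.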
Service Oriented Architecture (SOA)
Service-Oriented Architecture (SOA) is a design paradigm and
architectural pattern in software development where software
components (known as services) are created, deployed, and
consumed over a network. Each service represents a discrete, self-
contained unit of functionality that can be accessed over a network
by other applications or services. These services can be reused,
combined, and orchestrated to fulfil specific business requirements,
making SOA a highly modular and flexible approach to software
development.
SOA is a powerful approach for building large, distributed, and
scalable systems, and it aligns well with cloud computing
environments, where services need to be loosely coupled, easy to
scale, and able to integrate with various technologies and platforms.
Architecture of SOA
The architecture of SOA is structured around services that are loosely
coupled and communicate over a network, typically using standard
protocols and formats (such as HTTP, SOAP, REST, XML, and JSON).
Below is an outline of the core components and the architecture of SOA:
Core Components of SOA:
1. Services
2. Service Consumer
3. Service Provider
4. Service Registry
5. Service Bus (Enterprise Service Bus – ESB)
6. Service Orchestration
7. Communication Layer
Services: Services are the core building blocks in SOA. Each
service is a self-contained, reusable, and discrete unit of
functionality. Services are typically well-defined, and they expose
clear interfaces (often called APIs) that other services or applications can interact with. Services communicate over a
network, and they may use standard protocols such as SOAP
(Simple Object Access Protocol) or REST (Representational State
Transfer) for interaction.
Service Consumer: A Service Consumer is any client or
application that calls or consumes the functionality provided by a
service. Service consumers could be other services, web
applications, desktop applications, or mobile applications. They
invoke service functionality by calling service operations
(methods) via a network.
Service Provider: A Service Provider is the entity that implements the
service. It hosts the service logic, processes the requests from
consumers, and sends back the appropriate responses.
Service Registry: A Service Registry is a directory where services are
published and discovered. It allows service consumers to find services
based on their functionality and specifications. The registry stores
information such as the service location, interface details, and protocols
supported. Examples of registries include UDDI(Universal Description,
Discovery, and Integration) and service catalogs used in cloud
environments.
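The publish/discover cycle a registry supports can be illustrated with a toy in-memory analogue (this is a didactic sketch, not UDDI or any real catalog product; the endpoint URL is hypothetical):

class ServiceRegistry:
    # Toy registry: providers publish endpoints under a capability
    # name; consumers discover them by that capability.
    def __init__(self):
        self._services = {}  # capability -> list of service records

    def publish(self, capability, name, endpoint, protocol="REST"):
        record = {"name": name, "endpoint": endpoint, "protocol": protocol}
        self._services.setdefault(capability, []).append(record)

    def discover(self, capability):
        return self._services.get(capability, [])

registry = ServiceRegistry()
registry.publish("payment-processing", "PayService",
                 "https://pay.example.com/api")  # hypothetical endpoint
print(registry.discover("payment-processing"))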
Service Bus (Enterprise Service Bus – ESB): An Enterprise Service Bus
(ESB) is a middleware that facilitates communication and interaction
between services. It handles tasks like routing, transformation,
orchestration, and message handling. The ESB acts as a mediator and
helps ensure that services can communicate in a standardized way, even
if they are built using different technologies. ESBs enable integration of
various services, manage service interaction, and ensure that
communication between services is efficient and reliable.
Service Orchestration: Service Orchestration refers to the
process of combining multiple services into a workflow to
complete a business process. The orchestration layer manages
how services interact, coordinate, and execute together to
achieve a goal (e.g., a complex transaction or workflow).
Orchestration tools help define the sequence of service
invocations, error handling, and transaction management.
Communication Layer: The Communication Layer defines the
protocols used for communication between services and their
consumers. Common protocols include HTTP, SOAP, REST, JMS
(Java Message Service), and MQTT. This layer ensures that
services can communicate over the network efficiently and
reliably.
Characteristics of SOA
Service-Oriented Architecture (SOA) offers several key characteristics
that define its approach and make it highly suitable for enterprise
applications and cloud computing. Below are the defining
characteristics of SOA:
1. Loose Coupling
2. Interoperability
3. Reusability
4. Scalability
5. Flexibility and Extensibility
6. Loose Communication and Asynchronous Messaging
7. Standardized Communication
8. Service Discovery
9. Business-Driven Design
10. Transaction Management and Security
1. Loose Coupling
• In SOA, services are loosely coupled, meaning that they are independent of
each other. A change in one service (such as an upgrade or modification) does
not affect the other services that interact with it, as long as the interface
remains consistent.
• This loose coupling allows for greater flexibility, as services can be modified,
replaced, or extended without disrupting other parts of the system.
2. Interoperability :
• Interoperability is a core principle of SOA. Services in SOA are built to interact
across different platforms, technologies, and programming languages.
• By using standard communication protocols (e.g., HTTP, SOAP, REST), SOA
allows systems built on different technologies to communicate seamlessly,
making it ideal for integrating legacy systems with modern systems.
3. Reusability
• Services in SOA are designed to be reusable across different applications.
Once a service is developed, it can be reused by different consumers (such as
other services, applications, or business processes).
• This reduces redundancy and promotes efficient development practices by
enabling developers to leverage existing services for new business logic.
4. Scalability
• SOA promotes scalability because services can be independently
scaled based on demand. If a particular service is receiving more
requests than others, it can be scaled without affecting the other
services.
• Cloud environments, which provide elastic scaling capabilities,
align well with SOA because they can automatically scale services
based on load.
5. Flexibility and Extensibility
• Flexibility in SOA comes from its modular structure. New services
can be added without disrupting the entire system, and existing
services can be modified independently.
• Extensibility allows systems to evolve by adding new services or
updating existing services without significant changes to the
architecture.
6. Loose Communication and Asynchronous Messaging:
• SOA supports both synchronous and asynchronous
communication, meaning services can exchange messages either
in real-time (synchronous) or at a later time (asynchronous).
• Asynchronous communication, especially in the form of message
queues or event-driven messaging, allows systems to be more
responsive and resilient to failures.
7. Standardized Communication:
• SOA ensures that services communicate using standardized
formats and protocols, typically XML-based, such as SOAP or
REST, which allow for easy integration between different
systems.
• XML, JSON, and WSDL are common standards used in SOA for
message formatting and service description.
8. Service Discovery
• Service discovery allows clients or applications to locate available services
dynamically. In SOA, this is typically achieved via a Service Registry, where
services are published and can be searched based on specific criteria.
• Discovery can be either manual or automated depending on the system and
tools used.
9. Business-Driven Design
• SOA focuses on creating services that align with business goals and
processes. Services are designed around business functionalities (e.g.,
payment processing, inventory management, or customer management).
• This alignment helps in optimizing business processes and enabling better
integration across departments, teams, and systems.
10. Transaction Management and Security
• SOA ensures transaction management by using industry standards like WS-
Transaction for managing distributed transactions across services. This is
essential in business-critical applications.
• Security is implemented using standards like WS-Security, ensuring that
services are accessed securely, and the data exchanged is protected from
unauthorized access.
REST and Systems of Systems
In the context of modern distributed architectures, REST
(Representational State Transfer) and Systems of Systems (SoS) are
concepts that often interact with each other in designing scalable,
flexible, and interoperable solutions.

What is REST (Representational State Transfer)?


REST (Representational State Transfer) is an architectural style for
designing networked applications. It is based on a set of principles
that define how to interact with resources (data or services) over the
web in a stateless manner. REST is particularly well-suited for the
development of web services and APIs, as it leverages the HTTP
protocol, which is ubiquitous and well-supported.
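To make these principles concrete, here is a minimal sketch of a resource-oriented API using the Flask framework (assuming Flask is installed; the books resource is purely illustrative). Each HTTP verb maps to an operation on the resource, and the server holds no client session state between requests:

from flask import Flask, jsonify, request

app = Flask(__name__)
books = {1: {"id": 1, "title": "Distributed and Cloud Computing"}}

@app.route("/books", methods=["GET"])
def list_books():
    # GET returns a representation of the collection resource.
    return jsonify(list(books.values()))

@app.route("/books/<int:book_id>", methods=["GET"])
def get_book(book_id):
    book = books.get(book_id)
    return (jsonify(book), 200) if book else ("Not Found", 404)

@app.route("/books", methods=["POST"])
def create_book():
    # POST creates a new resource from the request representation.
    new_id = max(books) + 1
    books[new_id] = {"id": new_id, **request.get_json()}
    return jsonify(books[new_id]), 201

if __name__ == "__main__":
    app.run()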
What is a System of Systems (SoS)?
A System of Systems (SoS) is a complex configuration of multiple
independent systems that work together to achieve a common goal or
perform a specific function. SoS are typically composed of various
autonomous systems or subsystems that may operate independently,
but when integrated, they can deliver greater functionality than the
sum of their individual parts.
Web Services
Web services are standardized ways to enable communication
between applications over a network, most commonly the
internet. They allow different software applications to interact
with one another regardless of their underlying platforms,
languages, or technologies.
Let's take a closer look at some of the key components and
technologies related to web services:
• Simple Object Access Protocol (SOAP)
• Web Services Description Language (WSDL)
• Universal Description, Discovery and Integration (UDDI)
Simple Object Access Protocol (SOAP)
SOAP (Simple Object Access Protocol) is a protocol specification used to
enable communication between applications over a network. It is a key part
of the web services stack and is used for exchanging structured information
between systems. SOAP is platform-independent and relies on XML to
encode messages.
Advantages of SOAP:
Platform and language independence: SOAP is not dependent on the
operating system or programming language of the client or server.
Security: SOAP supports WS-Security, a standard for securing web services.
Reliability: SOAP supports reliable messaging and transactions.

Disadvantages of SOAP:
Complexity: SOAP can be more complex and slower compared to other web
service protocols, such as REST.
Heavyweight: The use of XML and the extensive message structure adds
overhead in terms of size and processing time.
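As a sketch of what a SOAP exchange looks like on the wire, the snippet below builds an XML envelope by hand and POSTs it with the requests library. The endpoint URL, namespace, and GetPrice operation are hypothetical placeholders, not a real service:

import requests

url = "https://example.com/services/stock"  # hypothetical endpoint
envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetPrice xmlns="http://example.com/stock">
      <Symbol>ACME</Symbol>
    </GetPrice>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    url,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/stock/GetPrice"},
)
print(response.status_code)
print(response.text)  # the reply is itself a SOAP XML envelope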
Web Services Description Language (WSDL)
WSDL (Web Services Description Language) is an XML-based language used
for describing the functionality offered by a web service. A WSDL document
defines the operations provided by a web service, the format of the
messages, and how to communicate with the service (e.g., transport
protocol and address).
Advantages of WSDL:
Standardization: WSDL provides a standard way to describe web services,
making it easier for developers to interact with them.
Automation: Tools can automatically generate client code based on a WSDL
file, saving time and effort.
Interoperability: WSDL supports a wide range of platforms and
programming languages.
Disadvantages of WSDL:
Complexity: WSDL files can become complex and cumbersome to write and
maintain, especially for large-scale services.
Verbosity: Since it is based on XML, WSDL files can become quite large and
difficult to manage, especially as services evolve.
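Because WSDL is machine-readable, client-side stubs can be generated or built at runtime. A short sketch with the third-party zeep library (assuming it is installed; the WSDL URL and GetPrice operation are hypothetical and would be dictated by the real service description):

from zeep import Client

# zeep downloads and parses the WSDL, then exposes the operations it
# describes as ordinary Python methods on client.service.
client = Client("https://example.com/services/stock?wsdl")
price = client.service.GetPrice(Symbol="ACME")
print(price)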
Universal Description, Discovery, and Integration (UDDI)
UDDI (Universal Description, Discovery, and Integration) is a directory service
specification used to enable businesses to discover each other's web
services. UDDI allows organizations to publish their web services and makes
it easier for clients to find those services, based on certain criteria (e.g.,
service type, location, etc.).
Advantages of UDDI:
Centralized Service Discovery: UDDI provides a centralized place to discover
available web services.
Automation: UDDI allows automated discovery of services, helping clients
dynamically find and integrate services.
Disadvantages of UDDI:
Decline in Use: UDDI is not widely used anymore, especially after the rise of
RESTful APIs and other decentralized service discovery mechanisms.
Complexity: UDDI can be complex to implement and manage, and its
centralized nature has made it less appealing in distributed systems.
Web Services Protocol Stack
The Web Services Protocol Stack is a collection of standards and technologies
that work together to support web services. These protocols help in defining
how to interact with, describe, and secure web services. The most common
protocols in this stack include:
SOAP: The protocol used to encode the message content and specify how to
exchange messages over HTTP or other transport protocols.
WSDL: A standard that describes the service, the operations it exposes, and the
data formats it uses.
UDDI: A service discovery protocol that enables businesses to register and
search for web services.
XML: The foundational markup language used to structure the messages
exchanged by web services.
WS-Security: A specification that provides a framework for securing SOAP
messages, including encryption, digital signatures, and authentication.
WS-Reliable Messaging: A protocol that ensures reliable delivery of SOAP
messages, even in case of failures.
WS-Addressing: A standard that allows SOAP messages to include additional
routing information, such as addressing and transport details.
Publish Subscribe Model
The Publish-Subscribe (Pub/Sub) Model is a messaging pattern that
decouples the message producers (publishers) from the message
consumers (subscribers). It is widely used in cloud computing for
building scalable, asynchronous, and event-driven architectures.
In cloud computing, the Pub/Sub model enables real-time data
streaming, event-driven applications, and communication between
distributed systems. It is a key component of cloud-native
architectures, especially in environments with dynamic and scalable
services
How the Publish-Subscribe Model Works:
The Pub/Sub pattern works through a central message broker or
event bus, which handles the distribution of messages to the
relevant subscribers. The communication between the components
is asynchronous and typically based on the concept of topics or
channels that subscribers express interest in.
There are two major strategies for dispatching the event to the
subscribers.
Push strategy:
• It is the responsibility of the publisher to notify all the subscribers (e.g., by method invocation).
Pull strategy:
• The publisher simply makes available the message for a specific
event.
• It is the responsibility of the subscribers to check whether there
are messages on the events that are registered.
• Subscriptions are used to filter out part of the events produced by
publishers.
• In software architecture, the Publish/Subscribe pattern is a messaging pattern and a network-oriented architectural pattern.
• It describes how two different parts of a message passing system
connect and communicate with each other.
There are three main components to the Publish Subscribe Model:
• Publishers
• Subscribers
• Event bus/broker
Publishers:
• Broadcast messages, with no knowledge of the subscribers.
Subscribers:
• They ‘listen’ out for messages regarding topics/categories that they are interested in, without any knowledge of who the publishers are.
Event Bus:
• Transfers the messages from the publishers to the subscribers.
• Each subscriber only receives the subset of messages that matches the topics or categories it has subscribed to.
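A minimal in-process sketch of these three roles, with a topic-based event bus standing in for a real broker such as Google Cloud Pub/Sub or Apache Kafka:

from collections import defaultdict

class EventBus:
    # Toy topic-based broker: routes each published message to the
    # callbacks subscribed to that topic.
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # The publisher knows only the topic, never the subscribers.
        for callback in self._subscribers[topic]:
            callback(message)

bus = EventBus()
bus.subscribe("orders", lambda m: print("billing saw:", m))
bus.subscribe("orders", lambda m: print("shipping saw:", m))
bus.publish("orders", {"order_id": 42, "item": "widget"})
bus.publish("payments", {"amount": 10})  # no subscribers: dropped

Note the decoupling: the publisher and the two subscribers never reference each other, only the shared topic name.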
Subscription Models
• Topic-based Model
• Type-based Model
• Concept-based Model
• Content-based Model
Topic-based Model
• Events are grouped in topics.
• A subscriber declares its interest for a particular topic to receive
all events pertaining to that topic.
• Each topic corresponds to a logical channel ideally connecting
each possible publisher to all interested subscribers.
• Requires the messages to be broadcast into logical channels.
• Subscribers only receive messages from the logical channels they care about (and have subscribed to).
Type-based Model
• A pub/sub variant in which events are objects belonging to a specific type, which can thus encapsulate attributes as well as methods.
• Types represent a more robust data model for the application developer.
• Type safety is enforced at the pub/sub system, rather than inside the application.
• The declaration of a desired type is the main discriminating attribute.
Concept-based Model
• Allows event schemas to be described at a higher level of abstraction by using ontologies.
• Provides a knowledge base for an unambiguous interpretation of the event structure, by using metadata and mapping functions.
Content-based Model
• The system allows subscribers to receive messages based on the content of the messages.
• Subscribers themselves must sort out junk messages from the ones they want.
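The difference between topic-based and content-based filtering can be seen by swapping the topic name for a subscriber-supplied predicate over the message content. A toy sketch, in the same spirit as the event bus above:

class ContentBasedBus:
    # Toy content-based broker: each subscription is a predicate over
    # message content plus a callback, instead of a fixed topic.
    def __init__(self):
        self._subscriptions = []  # (predicate, callback) pairs

    def subscribe(self, predicate, callback):
        self._subscriptions.append((predicate, callback))

    def publish(self, message):
        for predicate, callback in self._subscriptions:
            if predicate(message):  # deliver only matching content
                callback(message)

bus = ContentBasedBus()
bus.subscribe(lambda m: m.get("price", 0) > 100,
              lambda m: print("high-value trade:", m))
bus.publish({"symbol": "ACME", "price": 150})  # delivered
bus.publish({"symbol": "ACME", "price": 50})   # filtered out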
Advantages of the Pub/Sub Model in Cloud Computing
Loose Coupling: The Pub/Sub pattern promotes loose coupling between
components of the system, which enhances modularity and
maintainability. Changes to the publisher or subscriber don’t affect the
other as long as the message format remains consistent.
Scalability: Pub/Sub models are inherently scalable. Publishers and
subscribers can scale independently of one another, which is essential for
cloud applications with dynamic load.
Asynchronous Communication: Pub/Sub allows for asynchronous
communication, meaning that services can publish messages without
needing to wait for responses. This is essential for building responsive,
high-performance applications.
Fault Tolerance: Since communication is decoupled, it’s easier to build
fault-tolerant systems. If a subscriber fails, messages can be buffered until
the subscriber is back online. Many cloud-based Pub/Sub systems have
built-in retry and persistence mechanisms to ensure reliability.
Flexibility: You can add or remove subscribers dynamically, without
needing to alter the publisher logic. This flexibility is crucial for evolving
systems and cloud-based applications that need to adapt to changing
requirements.
Challenges with the Pub/Sub Model
Message Ordering: In some scenarios, the order in which messages are
processed may be important. However, in distributed systems like
Pub/Sub, there’s no guarantee that messages will be delivered in the
same order they were published, especially in the case of high-volume,
distributed deployments.
Message Duplication: Message brokers may occasionally deliver
duplicate messages to subscribers. While many systems handle this
by using deduplication techniques, it’s a challenge that needs to be
addressed in certain use cases, such as financial transactions.
Latency: While Pub/Sub can be fast, there can be a slight latency in
message delivery, especially in systems with large numbers of
publishers or subscribers. This may not be ideal for use cases
requiring ultra-low latency.
Scaling the Broker: In cloud environments, managing the scalability
of the message broker itself can be complex. Many cloud platforms
offer managed Pub/Sub services (e.g., Google Cloud Pub/Sub, AWS
SNS), but self-hosting Pub/Sub systems (like Apache Kafka) require
careful configuration and scaling.
Applications
Used in a wide range of group communication applications
including
➢ Software Distribution
➢ Internet TV
➢ Audio or Video-conferencing
➢ Virtual Classroom
➢ Multi-party Network Games
➢ Distributed Cache Update
It can also be used in even larger group communication applications, such as broadcasting and content distribution.
➢ News and Sports Ticker Services
➢ Real-time Stock Quotes and Updates
➢ Market Tracker
➢ Popular Internet Radio Sites
Basics of Virtualization
• Virtualization is a technique, which allows sharing single physical instance
of an application or resource among multiple organizations or tenants
(customers).
• Virtualization is a proven technology that makes it possible to run multiple operating systems and applications on the same server at the same time.
• Virtualization is the process of creating a logical(virtual) version of a
server operating system, a storage device, or network services.
• The technology that works behind virtualization is known as a virtual machine monitor (VMM), or virtual machine manager, which separates compute environments from the actual physical infrastructure.
• Virtualization -- the abstraction of computer resources.
• Virtualization hides the physical characteristics of computing resources
from their users, applications, or end users.
• This includes making a single physical resource (such as a server, an
operating system, an application, or storage device) appear to function as
multiple virtual resources.
• It can also include making multiple physical resources (such as
storage devices or servers) appear as a single virtual resource.
• In computing, virtualization refers to the act of creating a virtual (rather than actual) version of something, like computer hardware platforms, operating systems, storage devices, and computer network resources. It involves the creation of a virtual machine over the existing operating system and hardware.
• Host machine: The machine on which the virtual machine is
created.
• Guest machine: each virtual machine is referred to as a guest machine.
• Hypervisor: Hypervisor is a firmware or low-level program that acts
as a Virtual Machine Manager.
Characteristics of Virtualization
The key characteristics of virtualization, particularly in cloud
computing, enable it to provide the flexibility, scalability, and
resource efficiency that cloud environments require:

1. Abstraction of Physical Resources
2. Resource Pooling
3. Isolation
4. Dynamic Resource Allocation
5. Live Migration
6. Efficiency in Hardware Utilization
7. Encapsulation and Snapshots
8. Multi-Tenancy
Advantages of Virtualization
1. Reduced Costs
2. Efficient Hardware Utilization
3. Better Resource Utilization and Increased Performance
4. Testing for Software Development
5. Increased Availability
6. Energy Savings
7. Shifting Local Infrastructure to the Cloud Quickly
8. Possibility to Divide Services
9. Running Applications Not Supported by the Host
Disadvantages of Virtualization

1. Performance Overhead
2. Complexity in Management
3. Security Risks
4. Resource Contention
5. Dependency on the Hypervisor
6. Licensing and Cost Complexity
7. Limited Support for Certain Applications
8. Storage I/O Bottlenecks
9. Initial Setup and Migration Complexity
Types of Virtualization

1. Desktop Virtualization
2. Application Virtualization
3. Server Virtualization
4. Storage Virtualization
5. Network Virtualization
Desktop Virtualization
Desktop Virtualization allows users to access a virtual desktop
environment hosted on a remote server or in a data centre rather
than running directly on their local machine. This means that a user
can access their desktop, including applications and files, from any
device that connects to the virtual desktop infrastructure (VDI).

Examples:
• VMware Horizon View: A solution for delivering virtual
desktops and applications to end users.
• Citrix Virtual Apps and Desktops: Provides virtual
desktops and applications for remote access.
Benefits:
• Centralized management of desktop environments,
making updates and maintenance easier.
• Enhanced security since data and applications reside in
the data centre, not on local devices.
• Flexibility for remote work and access from multiple
devices.
Challenges:
• High performance requirements (e.g., network bandwidth
and server resources) to ensure a good user experience.
• Initial setup can be complex and costly for smaller
organizations.
Application Virtualization

Application Virtualization involves decoupling applications from
the underlying operating system so that they can run in a virtualized
environment without being installed on a local machine. This allows
applications to be executed on different operating systems or
hardware platforms without modification.
Examples:
• Microsoft App-V: A tool for delivering virtualized applications
to end users.
• VMware ThinApp: A solution for packaging and running
applications in isolated containers.
• Citrix XenApp: A platform for delivering virtualized
applications to end users.
Benefits:
• Reduces application conflicts, as virtualized applications run in
isolation.
• Simplifies application deployment and management by
centralizing applications in a data center.
• Enables mobility since applications can be accessed from
different devices without requiring local installation.
Challenges:
• Compatibility issues may arise with certain applications or
operating systems.
• Some applications may experience performance degradation
due to the abstraction layer.
Server Virtualization
Server Virtualization is the process of creating multiple virtual
servers on a single physical server. Each virtual server (also called a
virtual machine (VM)) operates as an independent system with its
own operating system (OS) and applications, though it shares the
underlying hardware resources (CPU, memory, storage, etc.) with
other virtual machines.
Examples:
• VMware vSphere: A suite for managing virtualized server
environments.
• Microsoft Hyper-V: A hypervisor for running virtualized
servers and workloads.
• KVM (Kernel-based Virtual Machine): An open-source
hypervisor for Linux-based systems.
Benefits:
• Improved hardware utilization by consolidating multiple
physical servers into a single physical host.
• Easier provisioning of new server instances and improved
scalability.
• Enhanced disaster recovery through VM migration and
replication.
Challenges:
• Resource contention may occur if too many virtual
machines are placed on a single physical server.
• Requires skilled management to ensure proper allocation
of resources and avoid performance degradation.
Storage Virtualization
Storage Virtualization is the process of pooling together
multiple physical storage devices into a single virtualized storage
resource. This abstraction layer allows users and applications to
access storage without knowing the specifics of the underlying
hardware. The primary goal of storage virtualization is to
improve storage management, flexibility, and performance.
Examples:
• VMware vSAN: A software-defined storage solution that
aggregates local storage resources into a virtual storage
pool.
• IBM Storwize: A line of virtualized storage systems that
combine multiple storage devices into a single unit.
• EMC VMAX: A virtualized storage array that provides high
performance and scalability.
Benefits:
• Simplified management of storage resources, reducing the
complexity of handling multiple storage devices.
• Improved storage utilization by dynamically allocating
resources based on demand.
• Enhanced data protection and disaster recovery by
enabling features like replication and snapshotting.
Challenges:
• Initial setup and configuration of storage virtualization can
be complex and resource-intensive.
• Performance may suffer if there is improper configuration
or inadequate hardware to support the virtualized layer.
Network Virtualization
Network Virtualization involves abstracting and pooling network
resources (such as bandwidth, routers, switches, and firewalls) to
create virtual networks that behave like physical networks but with
more flexibility and scalability. This allows for the creation of
multiple, isolated networks on a single physical network
infrastructure.
Examples:
• VMware NSX: A network virtualization platform that
provides virtual network overlays, security, and automation.
• Cisco ACI (Application Centric Infrastructure): A solution that
integrates network virtualization with application policies.
• OpenStack Neutron: An open-source project that provides
networking as a service for OpenStack cloud environments.
Benefits:
• Simplifies network management and configuration by
abstracting network resources into virtual devices.
• Enables more flexible and dynamic network configurations
that can be tailored to specific application needs.
• Increases network security and isolation, especially in multi-
tenant environments like data centres or cloud
infrastructures.
Challenges:
• Network virtualization solutions can introduce complexity,
especially in large, enterprise-scale networks.
• Performance can be impacted if not properly configured or
if the underlying physical network infrastructure cannot
support the virtualized layer.
Summary of Virtualization Types

Desktop Virtualization
• Description: Virtual desktops are hosted on remote servers and accessed via the network.
• Examples: VMware Horizon, Citrix Virtual Apps and Desktops
• Benefits: Centralized desktop management, remote access, enhanced security, reduced hardware requirements.
• Challenges: Performance and network dependency, complex setup for smaller businesses.

Application Virtualization
• Description: Applications run in isolated environments without being installed on the local device.
• Examples: Microsoft App-V, Citrix XenApp
• Benefits: Reduced application conflicts, centralized management, mobile access to applications.
• Challenges: Compatibility issues, potential performance degradation.

Server Virtualization
• Description: Multiple virtual servers are hosted on a single physical server using hypervisor technology.
• Examples: VMware vSphere, Microsoft Hyper-V, KVM
• Benefits: Improved hardware utilization, scalability, easier provisioning of new servers.
• Challenges: Resource contention, complex resource management.

Storage Virtualization
• Description: Aggregates multiple storage devices into a unified virtual storage pool.
• Examples: VMware vSAN, IBM Storwize, EMC VMAX
• Benefits: Simplified storage management, improved utilization, enhanced disaster recovery.
• Challenges: Initial setup complexity, potential performance issues.

Network Virtualization
• Description: Creates virtual networks that run on top of physical networks, abstracting network resources.
• Examples: VMware NSX, Cisco ACI, OpenStack Neutron
• Benefits: Flexibility, simplified network management, improved security and isolation, supports multi-tenant environments.
• Challenges: Performance impact, complexity in large-scale deployment.
Implementation Levels of Virtualization
1. Instruction Set Architecture (ISA) Level
2. Hardware Abstraction Layer (HAL)
3. Operating System-Level Virtualization
4. Library-Level Virtualization
5. Application-Level Virtualization
Virtualization Structures
Virtualization structures define the architecture or framework
through which virtualization is implemented. The two main
types of virtualization structures are Hosted Structure (Type II)
and Bare-Metal Structure (Type I). Each structure has its own
characteristics, advantages, and ideal use cases, primarily
depending on whether the hypervisor runs on a host operating
system or directly on the hardware.
Hosted Structure (Type II)
A Hosted Virtualization Structure (also referred to as Type II
Hypervisor) relies on an existing operating system (host OS) to
manage hardware resources. In this structure, the hypervisor runs
as a software application on top of the host operating system, and
virtual machines (VMs) are created and managed within this
environment.
Key Characteristics of Hosted Structure (Type II):
Hypervisor Role: The hypervisor in a hosted structure operates as
an application on the host OS. The hypervisor accesses the
hardware resources indirectly through the host operating system.
Dependence on Host OS: The hypervisor does not have direct
access to the hardware. Instead, it depends on the host operating
system to manage resources like CPU, memory, and storage.
Performance Overhead: Because the hypervisor runs as an
application on top of a host operating system, there is an inherent
performance overhead. The VM must share system resources with
the host OS, which can limit efficiency and performance.
Ease of Use: Type II hypervisors are typically easier to install and
configure since they work on top of an existing OS (such as
Windows, Linux, or macOS). This makes them more suitable for
desktop environments, developers, or test environments.
Examples of Hosted (Type II) Hypervisors:
VMware Workstation: A popular tool for desktop virtualization,
enabling users to run multiple operating systems on a single
physical machine.
Oracle VirtualBox: A free, open-source hypervisor that
supports multiple guest operating systems on a host OS.
Parallels Desktop: Primarily for macOS users, allowing users to
run Windows or other operating systems alongside macOS.
Microsoft Virtual PC: A now-legacy tool for virtualization,
typically used on Windows platforms.
QEMU: An open-source emulator and virtualizer, supporting a
variety of platforms and architectures.
Bare-Metal Structure (Type I)
A Bare-Metal Virtualization Structure (also referred to as Type I
Hypervisor) operates directly on the physical hardware without the
need for a host operating system. The hypervisor is the first layer
that interacts with the hardware, managing the resources and
creating virtual machines
Key Characteristics of Bare-Metal Structure (Type I):
Hypervisor Role: In a bare-metal structure, the hypervisor interacts
directly with the physical hardware, which allows it to have full control
over system resources such as CPU, memory, storage, and networking. The
hypervisor manages all virtual machine creation, resource allocation, and
hardware interfacing.
No Host OS: Unlike Type II hypervisors, there is no host operating system
involved. The bare-metal hypervisor directly controls the hardware and
provides isolation for each virtual machine. This results in better
performance and efficiency.
High Performance: Because the hypervisor runs directly on the hardware,
it has access to the full power of the system, and virtual machines benefit
from better performance without the overhead of a host operating system.
This makes Type I hypervisors ideal for enterprise environments or data
centers.
Resource Management: Type I hypervisors offer more efficient and
scalable resource management. They are designed for running multiple
virtual machines, supporting resource allocation, isolation, and load
balancing.
Examples of Bare-Metal (Type I) Hypervisors:
VMware ESXi: A leading enterprise hypervisor used in data centers
for server virtualization. ESXi runs directly on hardware and is highly
optimized for performance and scalability.
Microsoft Hyper-V: A Type I hypervisor integrated into Windows
Server, designed for enterprise environments. It supports high
scalability, live migration, and other advanced features.
Xen: An open-source Type I hypervisor used by cloud platforms like Amazon Web Services (AWS). It supports both full virtualization and paravirtualization for better performance.
KVM (Kernel-based Virtual Machine): A Type I hypervisor integrated into the Linux kernel. KVM is used in many cloud environments (like OpenStack) for virtualization on Linux-based systems.
Oracle VM Server for x86: A Type I hypervisor optimized for running
Oracle applications, particularly in enterprise environments.
Feature Comparison: Hosted (Type II) vs. Bare-Metal (Type I)

Hypervisor Placement
• Hosted (Type II): Runs on top of a host operating system.
• Bare-Metal (Type I): Runs directly on the physical hardware.

Performance
• Hosted (Type II): Lower performance due to host OS overhead.
• Bare-Metal (Type I): Higher performance with minimal overhead.

Use Case
• Hosted (Type II): Desktop virtualization, development, testing.
• Bare-Metal (Type I): Enterprise environments, data centers, cloud.

Resource Management
• Hosted (Type II): Dependent on host OS for resource allocation.
• Bare-Metal (Type I): Manages resources directly, better isolation.

Complexity
• Hosted (Type II): Easier to install and manage.
• Bare-Metal (Type I): More complex setup and management.

Scalability
• Hosted (Type II): Suitable for small-scale or non-production use.
• Bare-Metal (Type I): Highly scalable for large environments.

Examples
• Hosted (Type II): VMware Workstation, Oracle VirtualBox, Parallels Desktop.
• Bare-Metal (Type I): VMware ESXi, Microsoft Hyper-V, Xen, KVM.
Virtualization Tools
Virtualization Tools and Mechanisms in Cloud Computing are
essential components that enable cloud providers and
organizations to optimize hardware usage, improve scalability,
increase efficiency, and offer isolated environments for different
workloads.
Virtualization tools are software solutions or platforms that enable
the creation, management, and orchestration of virtual
environments. These tools abstract physical hardware resources
(such as CPU, memory, storage, and network) and allocate them to
virtual machines (VMs), containers, and other virtualized resources.
Virtualization tools help enhance system flexibility, improve
resource utilization, and simplify management.
Here are the key virtualization tools across different areas of
virtualization:
1. Hypervisor (Virtual Machine Monitor) Tools
Hypervisors are software layers that manage the creation and
operation of virtual machines (VMs). They abstract physical
hardware and allocate resources to VMs.
• VMware ESXi (Type 1, Bare-metal Hypervisor)
• Microsoft Hyper-V (Type 1 and Type 2)
• KVM (Kernel-based Virtual Machine) (Type 1)
• Xen Hypervisor (Type 1)
• Oracle VM VirtualBox (Type 2)
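Several of these hypervisors (KVM and Xen in particular) can be managed programmatically through the libvirt API. A small read-only sketch using the libvirt-python bindings, assuming libvirt is installed and a local QEMU/KVM host is running:

import libvirt

# Open a read-only connection to the local QEMU/KVM hypervisor.
conn = libvirt.openReadOnly("qemu:///system")

for dom in conn.listAllDomains():
    # info() returns [state, maxMemory, memory, vCPUs, cpuTime].
    state, max_mem, mem, vcpus, cpu_time = dom.info()
    print(f"{dom.name()}: vCPUs={vcpus}, memory={mem // 1024} MiB")

conn.close()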
2. Containerization Tools
Containerization is a form of operating system-level virtualization
where applications and their dependencies are isolated into
containers. These containers share the same OS kernel but run in
isolated environments.
• Docker
• Kubernetes
• OpenShift
• LXC (Linux Containers)
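Containers can likewise be driven from code. A sketch with the Docker SDK for Python (the docker package), assuming a local Docker daemon is available:

import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run a throwaway container: it shares the host kernel but executes
# inside its own isolated namespaces.
output = client.containers.run("alpine", "echo hello from a container",
                               remove=True)
print(output.decode())

# List containers currently running on this host.
for container in client.containers.list():
    print(container.name, container.status)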
3. Storage Virtualization Tools
Storage virtualization abstracts and aggregates storage resources,
allowing them to be managed as a single entity. These tools improve
storage utilization, scalability, and flexibility.
• VMware vSAN
• Ceph
• Storage Spaces Direct (Microsoft)
4. Network Virtualization Tools
Network virtualization allows the abstraction of physical
networking hardware to create multiple virtual networks. It helps
in optimizing network resources, enhancing scalability, and
improving the flexibility of network management.
VMware NSX: A network virtualization platform that enables the
creation and management of virtual networks within VMware
environments.
Open vSwitch (OVS): An open-source virtual switch that enables
virtual network functionality, often used with hypervisors like KVM
and Xen.
Cisco ACI (Application Centric Infrastructure): A network
virtualization solution by Cisco that integrates network hardware
with software to provide policy-driven automation and scalability.
OpenStack Neutron: An open-source networking service for
managing virtual networks in OpenStack cloud environments.
Virtualization Mechanisms
Virtualization mechanisms are the underlying technologies and
techniques that enable the creation and management of virtual
environments. These mechanisms vary based on the type of
virtualization being implemented, such as hardware virtualization,
OS-level virtualization, and application-level virtualization.
1. Hardware Virtualization
Hardware virtualization refers to the abstraction of physical
hardware to create multiple virtual environments. This is typically
managed by a hypervisor (Type 1 or Type 2).
CPU Virtualization
Memory Virtualization
I/O Virtualization
2. Operating System-Level Virtualization (Containers)
Operating system-level virtualization, or containerization, provides
isolated environments for applications to run on a shared OS
kernel. The OS itself is responsible for resource allocation and
isolation.
• Namespaces
• Control Groups (cgroups)
• Union File Systems
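On Linux, control groups are exposed as a filesystem, so the mechanism can be shown directly. The sketch below assumes cgroup v2 mounted at /sys/fs/cgroup and root privileges; it illustrates the mechanism containers build on, and is not a container runtime:

import os

CGROUP_ROOT = "/sys/fs/cgroup"      # assumes cgroup v2 is mounted here
group = os.path.join(CGROUP_ROOT, "demo")

os.makedirs(group, exist_ok=True)   # create a new cgroup

# Cap memory for every process placed in this group (cgroup v2 file).
with open(os.path.join(group, "memory.max"), "w") as f:
    f.write(str(256 * 1024 * 1024))  # 256 MiB

# Move the current process into the group; the kernel now accounts
# for and limits its memory usage.
with open(os.path.join(group, "cgroup.procs"), "w") as f:
    f.write(str(os.getpid()))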
3. Storage Virtualization
Storage virtualization abstracts physical storage devices and
presents them as a single logical pool of storage. It is managed by
specialized software to optimize storage utilization.
• Thin Provisioning
• RAID Virtualization: Redundant Array of Independent Disks (RAID) is used in virtualized environments to combine multiple physical storage devices into a single virtual volume. This improves data redundancy and performance.
4. Network Virtualization
Network virtualization decouples the network services from the
physical hardware, creating virtual networks that can be managed
and configured independently of the underlying infrastructure.
Overlay Networks: These are virtual networks built on top of existing
physical networks, allowing the creation of isolated networks for different
tenants or applications.
Mechanism: Technologies like VXLAN (Virtual Extensible LAN) and
NVGRE (Network Virtualization using Generic Routing Encapsulation)
provide the tunneling protocols used to create overlay networks.
SDN (Software-Defined Networking): SDN decouples the network control
plane from the data plane, enabling centralized control of the entire
network infrastructure through software.
Mechanism: The SDN controller manages flow tables and network
policies across the network, enabling dynamic network management
and automation.
Virtualization of CPU
CPU virtualization refers to the abstraction of physical CPU resources to create
multiple virtual CPUs (vCPUs) that can be used by different virtual machines (VMs)
or processes. This enables multiple operating systems to run concurrently on a
single physical machine while being isolated from each other. CPU virtualization is a
core component of modern virtualization technologies, such as hypervisors (e.g.,
VMware, KVM, Hyper-V), and allows for efficient use of hardware resources in a
data center or cloud environment.
Key Concepts of CPU Virtualization:
1. Virtual CPUs (vCPUs)
2. Hypervisor (Virtual Machine Monitor - VMM)
3. CPU Scheduling
4. Instruction Set Virtualization
5. Context Switching
6. Nested Virtualization
7. CPU Pinning (or Affinity)
8. Performance Considerations
9. CPU Virtualization Extensions (Intel VT-x / AMD-V)
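Whether a host CPU provides the hardware-assisted extensions in item 9 can be checked from the flags Linux exposes in /proc/cpuinfo: vmx indicates Intel VT-x and svm indicates AMD-V. A small Linux-only sketch:

def hw_virt_support(cpuinfo_path="/proc/cpuinfo"):
    # Report which hardware virtualization extension the CPU exposes.
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                if "vmx" in flags:
                    return "Intel VT-x"
                if "svm" in flags:
                    return "AMD-V"
                return "no hardware virtualization extension reported"
    return "flags line not found"

print(hw_virt_support())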
Benefits of CPU Virtualization
Resource Efficiency: By virtualizing the CPU, a single physical
machine can run multiple VMs, each of which can run its own OS
and applications. This improves resource utilization and reduces
hardware costs.
Isolation: Each virtual machine is isolated from others, meaning
that a failure or crash in one VM does not affect others. This
isolation is critical for security and reliability.
Flexibility: Virtualized systems can be easily scaled by adding or
removing VMs as needed. Administrators can dynamically allocate
CPU resources to VMs based on workload demands.
Consolidation: CPU virtualization allows for the consolidation of
multiple workloads onto fewer physical servers, optimizing data
center space, power, and cooling requirements.
Challenges and Considerations
Overhead: Although CPU virtualization is highly efficient, it
introduces some overhead, especially when hardware-assisted
features (e.g., Intel VT-x) are not available. The degree of overhead
depends on the workload and the virtualization technology.
Performance Impact: High-performance workloads, especially
those requiring heavy computation, may not perform as well in a
virtualized environment due to the context switching and resource
contention.
Compatibility: Not all applications are designed to run in virtual
environments, and some may experience issues when virtualized,
especially if they require direct access to hardware resources.
Resource Contention: Overcommitment of CPU resources (allocating more vCPUs than physical CPUs) can lead to contention and performance degradation, especially under heavy workloads.
Virtualization of Memory
Memory virtualization is the abstraction of physical memory to allow multiple virtual machines (VMs) to operate independently, with each VM
appearing to have its own private memory space, even though they share
the same physical memory. It is an essential component of virtualization,
as it enables efficient and secure memory management in a multi-tenant
environment.
Benefits of Memory Virtualization
Efficient Memory Use: Memory virtualization enables better utilization of
available physical memory through techniques like ballooning and memory overcommitment.
Isolation: It provides strong isolation between VMs, ensuring that each has
its own virtual memory space, protecting against memory leaks and
unauthorized access.
Flexibility: Memory allocation can be adjusted dynamically, with VMs
being granted or limited access to memory as needed based on workload
demands.
Challenges of Memory Virtualization
Performance Overhead: Techniques like memory paging,
translation lookaside buffers (TLBs), and shadow page tables can
introduce overhead, impacting performance, especially with
memory-intensive workloads.
Memory Fragmentation: Over time, as memory is dynamically
allocated and deallocated, fragmentation can occur, leading to
inefficient use of physical memory.
Virtualization of I/O Devices
I/O virtualization involves abstracting and sharing input/output
(I/O) devices, such as storage devices, network interfaces, and
graphics adapters, among multiple virtual machines (VMs) running
on a hypervisor. It allows VMs to access I/O devices as if they have
dedicated hardware while enabling efficient sharing of the physical
resources.
Benefits of I/O Virtualization
Resource Efficiency: By allowing multiple VMs to share physical I/O
devices, resources can be allocated more efficiently.
Isolation: Each VM can have its own virtualized I/O device, ensuring
that the failure or misconfiguration of one VM’s I/O does not affect
others.
Flexibility: Administrators can dynamically allocate, reassign, and optimize I/O resources for VMs as workload demands change.
Challenges of I/O Virtualization
Performance Overhead: Emulating devices or virtualizing I/O
operations can add overhead, particularly for I/O-intensive
workloads.
Complexity in Device Assignment: Configuring PCI passthrough or SR-IOV requires careful management and may not be available for
all devices or hypervisors.
Device Compatibility: Not all devices are easily virtualizable, and
some may require special drivers or support from both the
hypervisor and guest OS.
Virtualization Support and Disaster Recovery
Disaster recovery (DR) in virtualized environments involves
strategies and technologies that help restore virtual machines and
their associated data in the event of a failure, whether it's hardware
failure, data corruption, or a natural disaster.
Benefits of Virtualization in Disaster Recovery
Faster Recovery: Virtualized environments enable faster recovery
times through features like live migration, snapshots, and replication.
Cost Efficiency: Virtualized disaster recovery can be more cost-
effective than traditional physical disaster recovery solutions, as it
often requires fewer physical resources and less downtime.
Geographic Flexibility: Disaster recovery sites can be located
remotely, leveraging the flexibility of virtualization to replicate VMs
across different geographical regions.
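Snapshots, one of the recovery features mentioned above, can be scripted against the hypervisor. A sketch using the libvirt-python bindings on a QEMU/KVM host; the domain name web01 is a hypothetical example:

import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("web01")    # hypothetical VM name

# Create a named snapshot to serve as a recovery point.
snapshot_xml = """
<domainsnapshot>
  <name>pre-maintenance</name>
  <description>Recovery point before applying updates</description>
</domainsnapshot>"""
snap = dom.snapshotCreateXML(snapshot_xml, 0)

# Roll the VM back to the recovery point if something goes wrong.
dom.revertToSnapshot(snap, 0)
conn.close()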
Challenges of Disaster Recovery in Virtualization
Complexity: Configuring and managing disaster recovery in
virtualized environments can be complex, requiring careful
planning and testing to ensure effectiveness.
Data Integrity: Ensuring that replicated data is consistent and free
of corruption when restored is a challenge in virtualized disaster
recovery.
Bandwidth Requirements: VM replication and backup processes
can require significant bandwidth, especially for large
environments.
