Unit-II

Cloud Enabling Technologies: Implementation Levels of


Virtualization, Virtualization Structures/Tools and
Mechanisms, Virtualization of CPU, Memory and I/O Devices,
Virtual Clusters and Resource Management, Virtualization
for Data-Center Automation.

Virtualization: Virtualization is a fundamental technology in
cloud computing that partitions a single physical machine into
multiple virtual machines, thereby allowing more efficient
scaling and allocation of resources.
Virtualization is the most important enabling technology in
cloud computing; without virtualization there is no cloud
computing.
The term virtualization can be used in many respects for
computers. It is the process of creating a virtual
environment of something, which may include hardware
platforms, storage devices, operating systems, network
resources, etc.
Virtualization is the ability to share the physical
instance of a single application or resource among
multiple organizations or users. This is done by
assigning a logical name to each physical resource
and providing a pointer to that physical resource on
demand.
Over an existing operating system and hardware, we
generally create a virtual machine, and above it we run
other operating systems or applications. This is called
hardware virtualization.
The virtual machine provides an environment that
is logically distinct (separate or different) from its
underlying hardware. Here, the physical system or machine is
the host and the virtual machine is the guest machine. The
virtual environment is managed by software or firmware,
which is termed a hypervisor.

[Figure-1: Cloud virtualization: four virtual machines (VM1 to VM4) running on top of a hypervisor]
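The host/guest relationship in Figure 1 can be sketched as a minimal model. This is a hypothetical illustration only (the class and method names are invented, not a real hypervisor API): a hypervisor object multiplexes several guest VMs, each with its own guest OS, onto one host's physical resources.

```python
# Minimal, hypothetical model of Figure 1: several guest VMs on one hypervisor.
class VirtualMachine:
    def __init__(self, name, guest_os, mem_mb):
        self.name = name
        self.guest_os = guest_os      # each VM runs its own guest OS
        self.mem_mb = mem_mb

class Hypervisor:
    """Sits between the physical hardware and the guest VMs."""
    def __init__(self, host_cpus, host_mem_mb):
        self.host_cpus = host_cpus
        self.host_mem_mb = host_mem_mb
        self.vms = []

    def create_vm(self, name, guest_os, mem_mb):
        # refuse to overcommit physical memory in this simple model
        used = sum(vm.mem_mb for vm in self.vms)
        if used + mem_mb > self.host_mem_mb:
            raise MemoryError("insufficient physical memory")
        vm = VirtualMachine(name, guest_os, mem_mb)
        self.vms.append(vm)
        return vm

hv = Hypervisor(host_cpus=4, host_mem_mb=8192)
hv.create_vm("VM1", "Linux", 2048)
hv.create_vm("VM2", "Windows", 2048)
print([vm.guest_os for vm in hv.vms])   # different guest OSes share one host
```

The key point the sketch captures is that the guest OSes never see each other; only the hypervisor sees the real hardware totals.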

How does virtualization work in the cloud?

Virtualization plays a significant role in cloud technology and its
working mechanism.
In the cloud, users share not only the data and applications
located in the cloud but also the underlying infrastructure, with
the help of virtualization. Virtualization is used mainly to
provide applications in standard versions to cloud
customers.
2.1. Implementation Levels of Virtualization: Virtualization
is a computer architecture technology by which multiple virtual
machines (VMs) are multiplexed on the same hardware unit.
The purpose of a virtual machine is to improve resource sharing
by many users and to improve computer performance in terms of
resource utilization and application flexibility.
Hardware resources (CPU, memory, I/O devices, etc.) or
software resources (operating system and applications) can be
virtualized at various layers of functionality.
The primary objective is to improve system efficiency by
separating hardware from software. Ex: Users can gain
access to more memory through this concept of VMs.
A VM image, with sufficient storage, can be installed on
another host computer, even if the processors and operating
systems of the two machines are different.
Levels of Virtualization Implementation: A traditional
computer system runs with a host operating system
specially tailored for its hardware architecture. This is
represented in Figure 2.1a.

After virtualization, different user applications,
managed by their own operating systems (guest OS), can run
on the same hardware, independent of the host operating
system. This is usually done by adding a
virtualization layer, as Figure 2.1b shows.
This virtualization layer is called the Virtual Machine Monitor or
Hypervisor. The virtual machines can be seen in the upper
boxes, where applications run on their own guest operating
systems over virtualized CPU, memory and I/O devices.
The main function of the software layer for virtualization is to
virtualize the physical hardware of a host machine into virtual
resources to be used by the virtual machines.
Virtualization can be implemented at several
operational levels of a computer system by introducing a
virtualization layer at that level.
The five virtualization levels are the Instruction
Set Architecture (ISA) level, Hardware level, Operating
System level, Library support level, and Application
level.
These levels, shown in Figure 2.1b, are
described below.

Instruction Set Architecture (ISA) Level: At the ISA level,
virtualization is performed by the host machine emulating a
given ISA. Ex: ISA emulation enables binary code written for
one processor family to run on an x86-based host machine.
Any physical machine can construct virtual ISAs through
instruction emulation. The most basic form of emulation is code
interpretation.
An interpreter program works on the
instructions one by one, and this process is slow.
To speed up execution, dynamic binary
translation is used, which translates blocks/chunks of
source instructions to target instructions.
The basic blocks/chunks can also be extended to program
traces or superblocks to increase translation efficiency. This
emulation requires binary translation and
optimization. Hence, a virtual ISA requires adding a processor-
specific software translation layer to the compiler.
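The speed-up from block-level translation over one-at-a-time interpretation can be sketched with a toy, hypothetical ISA (the instruction names and the caching scheme here are illustrative inventions, not a real binary translator):

```python
# Toy sketch: per-instruction interpretation vs. cached block "translation".
def interpret(instr, env):
    """Decode and execute one toy instruction against register file `env`."""
    op, *args = instr.split()
    if op == "MOV":
        env[args[0]] = int(args[1])
    elif op == "ADD":
        env[args[0]] = env.get(args[1], 0) + env.get(args[2], 0)
    return env

translation_cache = {}

def translate_block(block):
    """'Translate' a basic block of guest instructions into one host callable.

    The block is decoded once and cached; repeated executions of the same
    block skip the per-instruction dispatch, mimicking dynamic binary
    translation's advantage over pure interpretation.
    """
    key = tuple(block)
    if key not in translation_cache:
        def compiled(env, _block=list(block)):
            for instr in _block:
                interpret(instr, env)
            return env
        translation_cache[key] = compiled
    return translation_cache[key]

block = ["MOV r1 5", "MOV r2 7", "ADD r3 r1 r2"]
env = translate_block(block)({})
print(env["r3"])   # 12
```

A real translator would emit host machine code rather than a Python closure, but the structure (translate a block once, cache it, reuse it) is the same idea the text describes.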
Hardware Abstraction Level: Hardware-level virtualization is
performed on the bare hardware. This approach generates a
virtual hardware environment and manages the underlying
hardware through virtualization.
The idea is to virtualize a computer's resources so that multiple
users can utilize them concurrently. Ex: The Xen hypervisor (VMM)
runs Linux or other guest operating systems and their applications.
Operating System Level: This refers to an abstraction layer
between the OS and the user applications.
OS-level virtualization creates isolated containers on a
single physical server, and the OS instances utilize the software and
hardware in data centers. The containers behave like real
servers.
OS-level virtualization is used in creating virtual hosting
environments to allocate hardware resources among a large
number of mutually distrusting users. It can also be used to indirectly
consolidate server hardware by moving services on different hosts
into different virtual machines (containers) on one server.
Library Support Level: Most applications use Application
Programming Interfaces (APIs) exported by user-level libraries
rather than lengthy system calls to the operating system.
Virtualization with library interfaces is possible by controlling
the communication link between the applications and those
APIs, via API hooks.
User-Application Level: Application-level virtualization
virtualizes an application as a virtual machine. This is also known as
process-level virtualization. For instance, high-level-language
virtual machines enable the
execution of programs created and compiled for an abstract
machine specification. Application-level virtualization is also
used for application streaming and application isolation.
It is easier to distribute and remove an application from user
desktops since it is packaged and isolated from the host
operating system and other applications. For example, LANDesk is an
application virtualization platform that deploys applications as
self-contained executable files in an isolated environment,
without requiring actual installation or system changes.
2.2. Virtualization Structures/Tools and Mechanisms:
There are three classes of virtual machine architecture.
Before virtualization, the operating system manages the
hardware of a traditional computer.
After virtualization, a virtualization layer is inserted
between the hardware and the OS.
The virtualization layer is responsible for converting portions of
the physical hardware into virtual hardware.
In this way, different operating systems such as
Windows and Linux can run concurrently on the same
machine.
The classes of virtual machine
architecture are distinguished by the position of the
virtualization layer:

1. Hypervisor Architecture
2. Para-virtualization
3. Host-based virtualization.
Hypervisor Architecture:
 The hypervisor or VMM supports hardware-level
virtualization on bare-metal devices like CPU,
memory, hard disk and network interfaces.
 The hypervisor is software that sits between the hardware and the
operating system.
 The hypervisor provides hypercalls for the guest operating
systems and applications.
 A hypervisor can assume a micro-kernel architecture or a
monolithic architecture, depending on its functionality.

Hypercall: A hypercall is a software trap
from a domain to the hypervisor, just as a system call is a
software trap from a user program to the kernel. A
domain uses hypercalls to request privileged
operations, such as updating page tables.
Software Trap: A software trap is a program exception or
fault, classically a synchronous interrupt caused by an
exceptional condition, which results in a switch to kernel
mode. It can also refer to a software interrupt initiating a
context switch to a monitor program or debugger.
Domain: A domain is a group of computers/devices on a network,
managed as a unit with common rules and procedures. Ex:
Within the Internet, all devices sharing a common
part of the IP address are said to be in the same domain.
Page Table: A page table is the data structure used by the
virtual memory system in an OS to store the mapping
between virtual addresses and physical addresses.
Kernel: A kernel is the central part of an operating system.
It manages the jobs/processes assigned to the computer and
hardware resources like memory and CPU time. There are two
kinds, as follows:
1.Monolithic Kernel: Operating systems commonly use a
monolithic kernel. When support for a particular device is added,
the kernel size increases. The main disadvantage of a
monolithic kernel is that a defective component can damage
the whole kernel.
Ex: Memory management, processor scheduling and device drivers (a
device driver is a computer program that operates or controls a
particular kind of device attached to the computer) all run
inside the kernel.
2.Micro-kernel: In a micro-kernel, only the basic
functions are included, e.g., memory management and
processor scheduling. An operating system cannot run
on a micro-kernel alone, which slows the operating
system down somewhat. A micro-kernel hypervisor's virtualization code is
smaller than a monolithic hypervisor's. A hypervisor's primary
function is to transform physical devices into virtual resources
that are used by the virtual machines (VMs).
Xen Architecture: Xen is an open-source hypervisor
program developed at Cambridge University.
Xen is a micro-kernel hypervisor, whose policies are
implemented by Domain 0 for control and I/O, and by several
guest domains for user applications.

As Figure 3.5 shows, Xen itself does not include any
device drivers.
Instead, it provides a mechanism by which a guest operating system
can have direct access to the physical devices.
The size of Xen is small, and it provides a virtual
environment between the hardware and the operating
system.
The core components of Xen are the hypervisor, kernel
and applications. The guest OS with the control ability is
called Domain 0, and the others are called Domain U.
Domain 0 is loaded first when the system boots; it can
access the hardware directly and manages devices by
allocating the hardware resources to the guest domains
(Domain U). Xen's Domain 0 is based on the Linux
operating system.
Domain 0 is a privileged virtual machine that can access and
manage all the other virtual machines on the same host.

If a user has access to Domain 0 (VMM), the user can

create, copy, save, modify or share the files and resources
of all the VMs.
This gives Domain 0 users great power, but it also makes
Domain 0 a privileged target for hackers: whoever controls it
can control all the VMs and the host system.
Xen users should therefore be provided with a complete description
of any security vulnerabilities as soon as they are disclosed.

Binary Translation with Full Virtualization: Depending on
implementation technologies, hardware
virtualization can be classified into two categories:

1. Full virtualization
2. Host-based virtualization.

Full Virtualization:
 Full virtualization does not require modification of the guest
operating system.
 It relies on binary translation to trap and
virtualize the execution of certain sensitive instructions.
 Noncritical instructions run directly on the host hardware. The goal
is to approach native performance by executing ordinary
instructions directly.
 The difficulty is that the critical (sensitive) instructions must
first be discovered and trapped in software, then executed in
a virtual (emulated) manner.
 Trapping the critical instructions improves the security of
the system while keeping the performance acceptable.
Binary Translation of Guest OS Requests Using a VMM:

 VMware software mainly uses this approach.

 In Figure 3.6, VMware software puts the VMM at
Ring 0 and the guest operating system at Ring 1.
 The VMM scans the instruction stream to identify
complex and privileged instructions, traps them into
the VMM, and emulates the behaviour of these
instructions.
 Binary translation is the method used for this emulation (A =>
97 => 01100001).
 Note: Full virtualization combines binary
translation and direct execution. The guest OS is
completely decoupled from the hardware and runs
virtually (like an emulator).
 Full virtualization's binary translation is
time-consuming.
 Binary translation adds a performance cost, although caching
the translated instruction blocks can improve system
performance.
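The trap-and-emulate idea behind this approach can be sketched in a few lines. This is a toy illustration under stated assumptions (the instruction names and the `VMM` class are hypothetical): unprivileged guest instructions run "directly", while sensitive ones trap into the VMM, which emulates their effect on the VM's virtual state.

```python
# Hypothetical sketch of trap-and-emulate in full virtualization.
PRIVILEGED = {"LOAD_PAGE_TABLE", "DISABLE_INTERRUPTS"}   # toy sensitive set

class VMM:
    def __init__(self):
        self.log = []

    def emulate(self, instr):
        # the VMM applies the effect to the VM's *virtual* state only,
        # never to the real hardware configuration
        self.log.append(f"emulated {instr}")

def run_guest(instructions, vmm):
    """Dispatch each guest instruction: direct execution or trap to the VMM."""
    executed_directly = []
    for instr in instructions:
        if instr in PRIVILEGED:
            vmm.emulate(instr)               # software trap into the VMM
        else:
            executed_directly.append(instr)  # runs at native speed
    return executed_directly

vmm = VMM()
direct = run_guest(["ADD", "LOAD_PAGE_TABLE", "SUB"], vmm)
print(direct, vmm.log)   # ['ADD', 'SUB'] ['emulated LOAD_PAGE_TABLE']
```

The design point the sketch makes concrete: the common-case (unprivileged) path pays no virtualization cost; only the rare sensitive instructions take the slow emulated path.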
Host-Based Virtualization: In a host-based virtualization
system, both a host OS and a guest OS are used, and a
virtualization layer is built between them.
The host OS is still responsible for managing the
hardware resources. Some VMs can host dedicated applications,
while other applications can run on the host OS directly, allowing users to
install the VM architecture without modifying the host OS.

The virtualization software can rely on the host OS to provide
device drivers and other low-level services, which makes the
installation and maintenance of the virtual machines (VMs)
simpler.
Another advantage is the possibility of using
many different host machine configurations. However, a hardware
access request from an application passes through four layers of
mapping between the guest and host operating systems, which
degrades performance.
Binary translation may also be necessary when the guest OS's
instruction set architecture differs from that of the hardware; this
adds cost and time, further reducing speed and performance.

Para-Virtualization: In para-virtualization, the guest

operating system is modified to recognize its virtualized
environment and use "hypercalls" to communicate with the
hypervisor directly.
This approach improves performance and efficiency
compared to full virtualization by reducing the overhead
associated with emulating hardware.
Key Concepts:
Hypervisor: The software layer that manages and allocates
resources to virtual machines.
Guest OS: The operating system running inside a virtual
machine.
Hypercalls: Special instructions used by the guest OS to
communicate with the hypervisor, bypassing the need to
emulate hardware.
Awareness: In para virtualization, the guest OS knows it's
virtualized and cooperates with the hypervisor for better
performance.
How it Works?
Modified Guest OS: Para virtualization requires modifications
to the guest operating system's kernel to enable it to use
hypercalls.
Direct Communication: Instead of relying on hardware
emulation, the guest OS uses hypercalls to directly
request services from the hypervisor.
Reduced Overhead: This direct communication reduces the
overhead associated with emulating hardware, leading to
better performance.
Example:
Xen: A popular open-source hypervisor that utilizes para-
virtualization.
AWS: Amazon Web Services has used para-virtualization
techniques in its cloud platform to optimize VM
performance, according to CloudDefense.AI.
Benefits:
Improved Performance: Direct communication with the
hypervisor reduces overhead and improves speed.
Increased Efficiency: Paravirtualization allows for better
resource utilization.
Simplified Hypervisor: The hypervisor can be simpler
because it doesn't need to emulate all hardware.
Better Scalability: Easier to scale virtual machines with
paravirtualization.
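The hypercall mechanism described above can be sketched as follows. All names here are hypothetical (this is not Xen's actual hypercall ABI): the modified guest kernel calls the hypervisor explicitly through a dispatch table, instead of issuing a privileged instruction that would have to be trapped and emulated.

```python
# Hypothetical sketch of a paravirtualized guest using hypercalls.
class Hypervisor:
    def __init__(self):
        self.page_tables = {}            # per-VM page tables, VMM-owned

    def hypercall(self, name, **kwargs):
        # a dispatch table replaces hardware emulation entirely
        return getattr(self, "hc_" + name)(**kwargs)

    def hc_update_page_table(self, vm_id, virt, phys):
        # privileged operation performed on the guest's behalf
        self.page_tables.setdefault(vm_id, {})[virt] = phys
        return True

class ParavirtGuestKernel:
    """A guest kernel that *knows* it is virtualized: privileged work
    is requested through hypercalls rather than executed directly."""
    def __init__(self, vm_id, hv):
        self.vm_id, self.hv = vm_id, hv

    def map_page(self, virt, phys):
        return self.hv.hypercall("update_page_table",
                                 vm_id=self.vm_id, virt=virt, phys=phys)

hv = Hypervisor()
guest = ParavirtGuestKernel("vm1", hv)
guest.map_page(0x1000, 0x9F000)
print(hv.page_tables["vm1"][0x1000])   # hypervisor holds the real mapping
```

Compare this with the trap-and-emulate path of full virtualization: here there is nothing to scan or trap, because the guest cooperates, which is exactly where the performance advantage comes from.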
Para-Virtualization with Compiler Support:
 Para-virtualization modifies the guest operating system.
 A para-virtualized VM provides special APIs, which are
used by applications after those modifications.
 Para-virtualization tries to reduce the virtualization
burden/overhead to improve performance.
 This is done by modifying only the guest OS kernel.
 This can be seen in Figure 3.7.
Ex: A typical para-virtualization architecture for an
x86 processor inserts a virtualization layer
between the hardware and the operating system.
According to the x86 ring definition, the virtualization
layer is installed at Ring 0.
Figure 3.8 demonstrates that para-virtualization replaces
instructions that cannot be virtualized with hypercalls that
communicate directly with the VMM.
The guest OS kernel, modified for virtualization, can no longer
run on the hardware directly; privileged operations must go
through the virtualization layer.
Disadvantages of Para-Virtualization:
 While para-virtualization reduces overhead, its
reliability and scalability may be challenged because it
must support both the modified guest and host operating
systems.
 The maintenance cost of para-virtualization is high due to
the potential need for deep kernel modifications.
 Para-virtualization's performance advantage varies with
the workload, but it is easier and more practical than full
virtualization because it largely avoids binary
translation.
 Many products such as Xen, KVM, and VMware ESX use
para-virtualization to improve performance over binary
translation.
Difference Between Full Virtualization and
Paravirtualization
Virtualization allows one computer to function as multiple
computers by sharing its resources across different
environments. CPU virtualization includes full
virtualization and paravirtualization.
In full virtualization, the guest operating system runs
without knowing it is virtualized; binary translation is used to
handle sensitive instructions and system calls.
Paravirtualization modifies the OS to use hypercalls instead
of certain instructions, making the process more efficient but
requiring changes to the OS before compiling.
The following compares Full Virtualization and
Paravirtualization in operating systems.
But first, let's understand what each of these terms means.
What is Full Virtualization?
Full Virtualization was introduced by IBM in 1966. It was the first
software solution for server virtualization; it uses
binary translation and direct execution techniques.
In full virtualization, the virtual machine completely
isolates the guest OS from the virtualization layer and
hardware.
Microsoft and Parallels systems are examples of full
virtualization.
What is Paravirtualization?
Paravirtualization is the category of CPU virtualization that
uses hypercalls for operations, handling instructions at compile
time.
In paravirtualization, the guest OS is not completely isolated; it
is partially isolated by the virtual machine from the
virtualization layer and hardware.
VMware and Xen are some examples of paravirtualization.
The differences between Full Virtualization and Paravirtualization
are as follows:

1. In full virtualization, virtual machines permit the execution of
an unmodified OS in an entirely isolated way. In paravirtualization,
the virtual machine does not implement full isolation of the OS;
rather, it provides a different API, which is used once the OS is
modified.
2. Full virtualization is less secure, while paravirtualization is
more secure.
3. Full virtualization uses binary translation and direct execution
as techniques for operations, while paravirtualization uses
hypercalls at compile time.
4. Full virtualization is slower in operation than
paravirtualization.
5. Full virtualization is more portable and compatible;
paravirtualization is less portable and compatible.
6. Examples of full virtualization are Microsoft and Parallels
systems. Examples of paravirtualization are Microsoft Hyper-V,
Citrix Xen, etc.
7. Full virtualization supports all guest operating systems without
modification. In paravirtualization, the guest operating system has
to be modified, and only a few operating systems support it.
8. In full virtualization, the guest operating system issues
hardware calls. In paravirtualization, the guest operating system
communicates directly with the hypervisor using drivers.
9. Full virtualization is less streamlined compared to
paravirtualization, which is more streamlined.
10. Full virtualization provides the best isolation, while
paravirtualization provides less isolation.

VMM Design Requirements and Providers: The Virtual Machine

Monitor (VMM) is also known as the hypervisor. Hardware-level
virtualization inserts a layer between the physical hardware and
the traditional OS.
This layer (VMM or hypervisor) manages the hardware
resources of the computer effectively.
By using a VMM, different traditional operating systems
can run on the same set of hardware simultaneously.
Requirements for a VMM:
(a) A VMM should offer applications an environment
essentially identical to the original machine.
(b) Programs running in this environment should
show at worst only minor decreases in speed.
(c) A VMM should be in complete control of the
system resources.
Differences may arise from the availability of system
resources and from timing dependencies: multiple virtual
machines reduce per-VM hardware demands, but their combined
demand can exceed what the physical machine provides. A VMM
should therefore manage the VMs efficiently.
To guarantee the efficiency of the VMM, a statistically dominant
subset of the virtual processor's instructions must be executed
directly by the physical processor, without VMM intervention.
Complete control of resources by the VMM involves three
aspects:
(1) The VMM is responsible for allocating hardware resources
to programs.
(2) A program cannot access any resource that has not been
allocated to it.
(3) Under certain circumstances, the VMM can regain control
of resources already allocated.
A VMM can be challenging to implement on certain
processors; hardware-assisted virtualization is required when
the processor is not designed to meet the VMM requirements.
Virtualization Support at the Operating System Level:
Cloud computing is transforming the computing industry by
transferring the hardware and management costs of a data
center to third parties, much as banks take over the safekeeping
of money.
The challenges of cloud computing are:
(a) The need to use a variable number of physical machines and
virtual machines (VMs) depending on a problem's requirements,
such as using a single CPU at one moment and multiple
CPUs at another.
(b) The slow process of instantiating new VMs: new VMs
are either fresh boots or replicates of a VM
template, unaware of the current application state.
Why OS-Level Virtualization?
The drawbacks of hardware-level virtualization are:
(a) Initiating a hardware-level virtual machine is slow,
because each virtual machine creates its own image
from scratch.
(b) The virtual machine images contain a high degree
of redundant content.
(c) Low density and slow performance.
(d) Hardware modifications may be needed.
OS-Level Virtualization: OS-level virtualization partitions a
system's physical resources and allows multiple isolated virtual
machines (VMs) within a single OS kernel.
These VMs, called Virtual Execution Environments (VEs),
have their own processes, file systems, user accounts, network
interfaces, routing tables, and firewalls.
Because the VM containers share a single OS kernel, this
technique is also called single-OS-image virtualization; it suits
users who require the same operating system kernel, as shown in
Figure 3.3.

Advantages of OS Extensions (OS-Level Virtualization):

 At the OS level, virtual machines (VMs) offer excellent
scalability, low resource requirements, and minimal
startup and shutdown cost.
 An OS-level virtual machine (VM) can easily synchronize
state changes with its host environment when necessary.
These benefits are achieved through two mechanisms of OS-
level virtualization:
(a) All OS-level VMs on the same physical machine share
a single OS kernel.
(b) The virtualization layer can be designed to allow
processes in VMs to access as many resources as possible
from the host machine, without modifying them.
Disadvantages of OS Extensions:
The main disadvantage of OS extensions is that all VMs on a
single container host must have the same guest OS.
Different OS-level VMs must belong to the same OS family.
For example, a Windows distribution cannot run in a Linux-based
container.
To implement OS-level virtualization, isolated execution
environments (VMs) are created based on a single OS
kernel.
There are two ways to implement virtual root directories:
duplicating common resources for each VM, or sharing most
resources with the host environment.
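The "virtual root directory" idea can be sketched with a toy model. This is purely illustrative (the `Container` class and the `/srv/containers` layout are hypothetical, not a real container runtime): every container believes it owns `/`, while the shared kernel remaps each container's paths into a private subtree on the host.

```python
# Hypothetical sketch of per-container virtual root directories.
class Container:
    def __init__(self, name, host_root):
        # e.g. container "web" gets /srv/containers/web as its private root
        self.name = name
        self.root = host_root.rstrip("/") + "/" + name

    def resolve(self, path):
        """Map a path as the container sees it onto the real host path."""
        return self.root + "/" + path.lstrip("/")

web = Container("web", "/srv/containers")
db = Container("db", "/srv/containers")
# both containers believe they own /etc/passwd, but the host paths differ
print(web.resolve("/etc/passwd"))   # /srv/containers/web/etc/passwd
print(db.resolve("/etc/passwd"))    # /srv/containers/db/etc/passwd
```

Real systems implement this remapping inside the kernel (e.g. chroot-style root switching), which is why all containers must share that one kernel, exactly as the disadvantages above note.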
Virtualization on Linux or Windows Platforms: Generally,
OS-level virtualization systems are Linux-based.
 Windows-based virtualization platforms are not much in
use.
 The Linux kernel provides an abstraction layer, enabling
software processes to interact with and manage resources
without requiring knowledge of the hardware details.
 Linux platforms typically use patched kernels to provide the
added virtualization functionality.
Middleware Support for Virtualization: Library-level
virtualization, also known as user-level Application Binary
Interface or API emulation, creates execution environments for
running alien programs, using API call interception and
remapping functions.
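API call interception and remapping can be sketched as a wrapper layer between an application and the "real" library call. All function names and the remapping table below are hypothetical, invented for illustration:

```python
# Hypothetical sketch of library-level virtualization via API interception.
def real_open_device(device_id):
    """Stands in for the real library/system call."""
    return f"handle-for-{device_id}"

# remapping table maintained by the virtualization layer
DEVICE_REMAP = {"gpu0": "gpu2"}     # app asks for gpu0, really gets gpu2

def intercepted_open_device(device_id):
    # intercept the API call, remap its argument, then forward it
    return real_open_device(DEVICE_REMAP.get(device_id, device_id))

print(intercepted_open_device("gpu0"))   # handle-for-gpu2
print(intercepted_open_device("gpu1"))   # handle-for-gpu1 (unmapped: passthrough)
```

The application is unchanged; only the library boundary it calls through is virtualized, which is the defining property of this level.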
2.3. Virtualization of CPU, Memory and I/O Devices: Any
computer architecture must provide support for virtualization.
To understand how an architecture can support virtualization,
we need to keep the following in mind:
1. All virtual machines need a hypervisor/Virtual Machine
Monitor (VMM).
2. A virtual machine should be behaviourally the same as the
actual machine.
3. The hypervisor or VMM needs complete control over
resources.
4. A virtual machine needs to support all machine/assembly
instructions supported by the actual machine.
 If we look at point 4 in particular, we can see a dependency
on the architecture.
 If we are looking to run a system that supports a RISC
(Reduced Instruction Set Computer) architecture, then
the underlying machine architecture's instruction
set (ISA) should also support (directly or indirectly) all
the instructions in the RISC virtual machine's architecture.
 Modern processors provide a special running
mode and instructions, known as hardware-assisted
virtualization.
 The hypervisor/VMM and the guest operating system run in
different modes.
 A software trap in the hypervisor/VMM catches all
sensitive instructions of the guest operating system and
its applications.
H/W Support for Virtualization:
 Modern operating systems and processors permit
multiple processes to run simultaneously.
 A protection mechanism should exist in the processor so
that instructions from different processes cannot all
access the hardware directly;
 otherwise, this would lead to a system crash.
 All processors have at least two modes, namely
user mode and supervisor mode, to control direct
access to the hardware.
 Instructions running in supervisor mode are called
privileged instructions, and the others are called
unprivileged instructions.
 Ex: VMware Workstation.

CPU Virtualization: CPU virtualization involves a single

CPU acting as if it were multiple separate CPUs.
 The most common reason for doing this is to run
multiple different operating systems on one
machine.
 CPU virtualization emphasizes performance and runs
directly on the available CPUs whenever possible.
 The underlying physical resources are used whenever
possible, and the virtualization layer runs
instructions only as needed to make virtual
machines operate as if they were running directly on
a physical machine.
 When many virtual machines are running on an ESXi
host, those virtual machines compete for CPU resources.
 When CPU contention occurs, the ESXi host time-slices the
physical processors across all virtual machines, so that
each virtual machine runs as if it had its
specified number of virtual processors.
 A virtual machine is a duplicate of an existing system.
 The host processor executes the majority of instructions.
 Unprivileged instructions run on the host machine directly.
 Other instructions must be handled carefully.
 These critical instructions are of three types: privileged,
control-sensitive and behaviour-sensitive.
 Privileged => executed in a privileged mode and
trapped if executed outside that mode.
 Control-sensitive => attempt to change the
configuration of the resources used.
 Behaviour-sensitive => have different behaviours
in different situations (e.g., under high load or differing
storage capacity).
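The three-way classification above can be sketched as a small dispatcher. The instruction names here are hypothetical stand-ins chosen for illustration; the point is only the classification logic a VMM must apply before deciding whether an instruction may run directly on the host CPU:

```python
# Toy classifier for the critical-instruction kinds named above.
PRIVILEGED = {"HLT", "LGDT"}            # must run in supervisor mode, else trap
CONTROL_SENSITIVE = {"SET_CR3"}         # try to change resource configuration
BEHAVIOUR_SENSITIVE = {"READ_MODE"}     # behave differently depending on mode

def classify(instr):
    if instr in PRIVILEGED:
        return "privileged"
    if instr in CONTROL_SENSITIVE:
        return "control-sensitive"
    if instr in BEHAVIOUR_SENSITIVE:
        return "behaviour-sensitive"
    return "unprivileged"               # safe to run directly on the host CPU

stream = ["ADD", "HLT", "SET_CR3", "READ_MODE"]
print([classify(i) for i in stream])
# ['unprivileged', 'privileged', 'control-sensitive', 'behaviour-sensitive']
```

Only the "unprivileged" class takes the fast direct-execution path; everything else must be trapped or translated, which is what the preceding bullets describe.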
Hardware-Assisted CPU Virtualization:
Since full virtualization and para-virtualization are
complicated, this newer methodology tries to simplify the
situation.
Intel and AMD add an additional privilege mode level
(often called Ring -1) to the x86 processors.
The OS can then still run at Ring 0 and the hypervisor below it.
Note: all privileged instructions are trapped into the
hypervisor. Hence, no modifications are required to the VMs
at the operating system level.
VMCS => Virtual Machine Control Structure (Intel's per-VM control data structure)
VMX => Virtual Machine Extensions (Intel's CPU virtualization instructions/mode)

Memory Virtualization:

 In a traditional computer, the operating system maintains
the mappings from virtual memory to machine memory (MM)
using page tables; this is a one-stage mapping from
virtual memory to machine memory.

 Virtual memory is a feature of an operating
system (OS) that allows a computer to compensate for
shortages of physical memory by temporarily
transferring pages of data from random
access memory (RAM) to disk storage.

 Machine memory is the physical memory that the
host machine can allocate to the virtual machines.

 All modern x86 processors contain a memory
management unit (MMU) and a translation look-
aside buffer (TLB) to optimize virtual memory
performance.
 In a virtual execution environment, memory
virtualization involves sharing the physical system
memory (RAM) and dynamically allocating it to
the physical memory of the VMs.

Stages in Memory Virtualization: Each guest OS still expects to manage a contiguous physical memory of its own, so two stages of mapping are needed:
 Virtual memory to physical memory.
 Physical memory to machine memory.
Other points: MMU support is needed so that the guest OS continues to control the mapping of virtual addresses to the physical memory of its VM, while the VMM maps that physical memory to the machine memory. All this is depicted in Figure 3.12 [1].
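The two-stage mapping just described can be sketched with a pair of toy lookup tables. This is illustrative only: page numbers, table contents, and the 4 KB page size are invented assumptions, and real page tables are multi-level hardware structures, not dicts.

```python
# Two-stage address translation sketch (toy dicts, invented numbers).
# Stage 1: guest page table maps virtual page -> guest-"physical" page.
# Stage 2: VMM table maps guest-physical page -> machine page.

guest_page_table = {0: 5, 1: 2}    # VA page -> PA page (guest's view)
vmm_p2m_table    = {5: 9, 2: 7}    # PA page -> MA page (VMM's view)
PAGE = 4096                        # assumed page size

def translate(va):
    page, off = divmod(va, PAGE)
    pa_page = guest_page_table[page]   # stage 1 (controlled by guest OS)
    ma_page = vmm_p2m_table[pa_page]   # stage 2 (controlled by VMM)
    return ma_page * PAGE + off

ma = translate(1 * PAGE + 8)   # VA page 1 -> PA page 2 -> MA page 7
```

Performing both lookups on every access is slow, which is exactly why the shadow page table described next collapses the two stages into one.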
 VA - Virtual Address; PA - Physical Address; MA - Machine Address
 For each page table of a guest OS, a corresponding page table is allocated in the Virtual Machine Monitor (VMM, the software that enables the creation, management and control of virtual machines). The page table in the VMM, which handles all of this, is called a shadow page table.
 As can be seen, this process is nested and inter-connected at different levels through the addresses concerned.
 If any change occurs in the guest's virtual memory page table or in the Translation Look-aside Buffer (TLB), the shadow page table in the VMM is updated accordingly.
I/O Virtualization: This involves managing the routing of I/O requests between virtual devices and the shared physical hardware.
There are three ways to implement it: full device emulation (imitation), para-virtualization and direct I/O.
 Full Device Emulation: This approach emulates well-known, real-world devices.
 All the functions of a device or bus infrastructure, such as device enumeration, identification and interrupts, are replicated in software. This software resides in the VMM and acts as a virtual device.
 The I/O requests are trapped in the VMM and handled accordingly.
 The emulation approach can be seen in Figure 3.14 [1].
Para-VZ: Para-virtualization of I/O is taken up because software emulation runs slower than the hardware it emulates.
 In para-virtualization, the frontend driver runs in Domain-U; it manages the I/O requests of the guest OS.
 The backend driver runs in Domain-0 and is responsible for managing the real I/O devices. This (para) methodology gives better performance but has a higher CPU overhead.
 Direct I/O VZ: The VM accesses devices directly to achieve high performance at lower cost. Currently, it is used mainly for mainframe operating systems.
Ex: VMware Workstation for I/O VZ. NIC => Network Interface Controller
Virtualization in Multi-Core Processors:
Virtualizing a multi-core processor is more complicated than virtualizing a uni-core processor.
Multi-core processors achieve high performance by integrating multiple cores on a chip, but their virtualization poses new challenges.
The main difficulty is that applications must be parallelized to use all the cores, and this task must be accomplished by software, which is a much harder problem.
To reach these goals, new programming models, algorithms, languages and libraries are needed to increase the parallelism.
Physical versus Virtual Processor Cores:
A multi-core virtualization method was proposed to allow hardware designers to obtain an abstraction of the lowest-level details of all the cores.
This technique alleviates (lessens) the burden of managing the hardware resources in software.
It is located under the ISA (Instruction Set Architecture) and remains unmodified by the OS or hypervisor. This can be seen in Figure 3.16.
Virtual Hierarchy:
The emerging concept of many-core chip multiprocessors (CMPs) is a new computing landscape (background).
Instead of supporting time-sharing jobs on one or a few cores, the abundant cores can be used in a space-sharing manner: single- or multi-threaded jobs are simultaneously assigned to separate groups of cores.
Thus, the cores are separated from each other and no interference takes place. Jobs run in parallel for long time intervals.
To optimize (use effectively) such workloads, a virtual hierarchy has been proposed to overlay (place on top) a coherence (consistency) and caching hierarchy onto a physical processor.
A virtual hierarchy can adapt itself to fit how the jobs are carried out and how the workspace is shared, depending upon the workload and the availability of the cores.
CMPs use a physical hierarchy of two or more cache levels that statically determine the cache (memory) allocation and mapping.
A virtual hierarchy is a cache hierarchy that can adapt to fit the workloads.
The first level of the hierarchy locates data blocks close to the cores to increase access speed; it then establishes a shared-cache domain and a point of coherence, thus increasing communication speed between the levels.
This idea can be seen in Figure 3.17(a).
Space sharing is applied to assign three workloads to three clusters of virtual cores: VM0 and VM3 for the database workload, VM1 and VM2 for the web-server workload, and VM4-VM7 for the middleware workload.
The basic assumption here is that each workload runs in its own VM. However, within a single OS, space sharing applies equally.
To address this problem, Marty and Hill suggested a two-level virtual coherence (consistency) and caching (store) hierarchy. This can be seen in Figure 3.17(b) [1].
4. Virtual Clusters and Resource Management: A virtual cluster is a practical method of ensuring high availability of servers and of the network, much like a virtual network.
Virtual machines installed for different services make use of virtual clusters.
A virtual network interconnects the virtual machines in a virtual cluster. The first level reduces access time and performance interference by having each virtual machine run inside its own virtual cluster; the second level maintains a globally shared memory.
Multiprogramming and server consolidation are examples of space-shared workloads that a virtual hierarchy can accommodate. The emerging concept of many-core chip multiprocessors is a new computing landscape (background).
Space sharing allows concurrent allocation of single- or multi-threaded jobs to numerous cores, increasing workload capacity, especially as more virtual machines join the clusters.
A service is required to manage the configuration information of the virtual machines (VMs), including capacity and speed. With a virtual network, VMs are able to communicate freely and self-configure.
Physical versus Virtual Clusters: A physical cluster is a collection of interconnected physical servers. The following problems must be discussed:
1. Live migration (relocation) of virtual machines
2. Memory and file migrations
3. Dynamic deployment of virtual clusters.
Virtual clusters are made of virtual machines installed on one or more physical clusters.
A virtual network interconnects the virtual machines of a virtual cluster across several physical networks. The concept can be observed in Figure 3.18.
VMs are dynamically delivered to a virtual cluster with the following characteristics:
 The virtual cluster nodes can be either physical or virtual (VMs), with different operating systems.
 A VM runs with a guest OS that manages the resources allocated to it from the physical machine.
 One goal of virtual machines is to consolidate several functions on a single server.
 VMs can be replicated across multiple servers to promote fault tolerance, parallelism, and disaster recovery.
 The number of nodes in a virtual cluster can grow or shrink dynamically.
 While the failure of some physical nodes makes work more difficult, the high fault tolerance of virtual machines guarantees that such failures are not harmful.
Since virtualization is widely used, it is necessary to manage virtual machines (VMs) on virtual clusters effectively.
The virtual computing environment should provide high performance in virtual cluster deployment, monitoring of large clusters, scheduling of resources, fault tolerance and so on.
Figure 3.19 shows the concept of a virtual cluster based on application partitioning.
The different colours represent nodes in different virtual clusters.
The concept of Single Storage Images (SSI) from various VMs and clusters is crucial in this context.
Software packages can be pre-installed as templates, and users can build their own software stacks on top of them.
Note: The boundary of a virtual cluster may change as VM nodes are added, removed, or migrated dynamically.
Fast Deployment and Effective Scheduling:
 The system should efficiently construct and distribute
software loads (OS, libraries, apps) to physical nodes
within the cluster, swiftly switching runtime
environments between virtual clusters.
NOTE: Green computing is a methodology that uses
computers and their resources in an
environmentally responsible and eco-friendly
manner. The study of developing, producing,
utilizing, and discarding computing equipment in a
manner that minimizes their environmental impact
is another definition of it.
 Technologists should focus on utilizing available
resources efficiently and cost-effectively to enhance
performance and throughput. To achieve this goal,
parallelism should be implemented in all instances
and virtual machines/clusters should be utilized.
This can minimize overhead, achieve load
balancing, and implement scale-up and scale-down
mechanisms on virtual clusters.
 The virtual clusters must be dynamically re-
clustered using mapping methods.
High Performance Virtual Storage: A template must be prepared for the construction and usage of virtual machines (VMs) and distributed to the physical hosts.
Software tools should reduce the time it takes for users to adapt to new environments and to customize existing ones. Users should be identified by their profiles, which are stored in data blocks. All these methods increase the performance of virtual storage. Ex: Dropbox
Steps to deploy (arrange/install) a group of VMs onto a
target cluster:
 Preparing the disk image (SSI)
 Configuring the virtual machines
 Choosing the destination nodes
 Executing the VM deployment commands at every
host
A template is a disk image (SSI) that hides the distributed environment from the user. It may consist of an OS and some applications.
Templates are selected by users as per their requirements and can be implemented in Copy-on-Write (CoW) format. A new CoW backing file is compact and easy to create and transfer, thereby reducing space consumption.
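The Copy-on-Write idea can be sketched as a shared base image plus a small per-VM delta. This is a toy block store: the block numbers and contents are invented, and real CoW images (e.g., qcow2) work at the disk-block level with on-disk metadata.

```python
# Copy-on-Write disk-image sketch (toy block store, invented data).
# Reads fall through to the shared base template unless the VM has
# written its own private copy of the block.

base_image = {0: b"kernel", 1: b"libs", 2: b"app"}   # shared template

class CowImage:
    def __init__(self, base):
        self.base = base
        self.delta = {}            # only blocks this VM has modified

    def read(self, blk):
        # Prefer the private copy; otherwise read the shared base.
        return self.delta.get(blk, self.base[blk])

    def write(self, blk, data):
        self.delta[blk] = data     # base template stays untouched

vm = CowImage(base_image)
vm.write(2, b"patched-app")
```

Because each VM stores only its delta, a freshly cloned image costs almost no space, which is why CoW templates are "compact and easy to create and transfer".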
VMs are configured with names, disk images, network settings, CPU and memory; doing this by hand can be overwhelming for large numbers of VMs.
The process can be simplified by configuring similar virtual machines from pre-edited profiles.
The deployment principle should be able to meet the VM requirements and balance the workloads.
Managing a Virtual Cluster:
There exist four approaches:
1. We can use a guest-based manager, in which the cluster manager resides inside a guest OS. Ex: a Linux cluster can run different guest operating systems on top of the Xen hypervisor.
2. We can use a host-based manager that functions as a cluster manager on the host systems. Ex: the VMware HA (High Availability) system, which can restart a guest system after a failure.
3. An independent cluster manager can be used on both the host and the guest, but this makes the infrastructure complex.
4. An integrated cluster manager can be utilized on both guest and host operating systems, ensuring a clear distinction between physical and virtual resources.
Enabling virtual machine (VM) live migration with minimal
overhead significantly improves virtual cluster management
schemes.
Virtual clusters, used in grids, clouds, and HPC platforms,
prioritize fault tolerance and dynamic resource usage,
reducing migration(transfer) time and bandwidth for
efficient HPC performance.
Live VM Migration: Transferring a live virtual machine between physical servers without interrupting its operation is known as "live migration"; it allows load balancing, maintenance, and other operational duties without downtime.
In a cluster with mixed modes of host and guest systems, the procedure typically keeps everything running on the physical machines.
When a VM fails, it can be replaced by another VM on a different node, as long as both run the same guest OS. This is called a fail-over (a procedure by which a system automatically transfers control to a duplicate system when it detects a fault or failure) of a physical system to a virtual machine. Compared to a physical-to-physical failover, this methodology has more flexibility. A VM must stop functioning if its host node fails; this can be addressed by migrating a similar VM from one node to another.
The live migration process is depicted in Figure 3.20.
A VM can be in one of the following states:
(a) Inactive State: defined by the virtualization platform; the VM is not enabled.
(b) Active State: the VM has been instantiated on the virtualization platform to perform a task.
(c) Paused State: the VM has been instantiated but is temporarily disabled, or is waiting to process a task.
(d) Suspended State: the VM enters this state when its machine file and virtual resources are stored back to disk.
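The four states above can be modeled as a small state machine. The transition set below is an illustrative assumption (the text defines the states but not the exact events), not a platform specification.

```python
# Toy VM lifecycle state machine for the four states described above.
# The event names and allowed transitions are illustrative assumptions.

TRANSITIONS = {
    ("inactive",  "create"):  "active",
    ("active",    "pause"):   "paused",
    ("paused",    "resume"):  "active",
    ("active",    "suspend"): "suspended",  # state saved back to disk
    ("suspended", "resume"):  "active",
    ("active",    "stop"):    "inactive",
}

def step(state, event):
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"illegal transition: {event} from {state}")
    return TRANSITIONS[(state, event)]

s = "inactive"
for ev in ["create", "suspend", "resume", "pause"]:
    s = step(s, ev)     # inactive -> active -> suspended -> active -> paused
```

Modeling the lifecycle this way makes it easy to check, for instance, that a suspended VM must be resumed before it can be paused.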
Live Migration Steps:
This consists of six steps.
(a) Steps 0 and 1: Start the migration: determine the VM to migrate and the destination host, based on load-balancing and server-consolidation policies.
(b) Step 2: Transfer memory (transfer the memory data, then iteratively re-copy any data that changes during the process). This goes on until the changed memory is small enough to be handled directly.
(c) Step 3: Suspend the VM and copy the last portion of the data.
(d) Steps 4 and 5: Commit and activate the new host. Here, all the data is recovered, and the VM is resumed from exactly the point where it was suspended, but on the new host.
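The iterative memory-transfer phase (Step 2) can be sketched as a loop. All numbers here are invented for illustration: the page count, the stopping threshold, and the assumption that the guest dirties roughly 10% of the pages sent in each round.

```python
# Illustrative pre-copy memory-transfer loop for live migration.
# Pages dirtied while a round is being sent are re-sent in the next
# round, until the dirty set is small enough to copy during a brief
# suspend (Step 3). The dirtying model and numbers are made up.

import random

random.seed(0)
PAGES = 256
THRESHOLD = 8           # suspend when this few pages remain dirty

def dirtied_during_round(n_sent):
    # Assumption: the guest dirties ~10% of the pages just sent.
    return set(random.sample(range(PAGES), max(1, n_sent // 10)))

to_send = set(range(PAGES))       # round 0: send all memory pages
rounds = 0
while len(to_send) > THRESHOLD:
    sent = len(to_send)
    to_send = dirtied_during_round(sent)   # re-send what got dirtied
    rounds += 1

# Step 3: suspend the VM and copy the final <= THRESHOLD pages;
# len(to_send) is what determines the downtime.
```

Note how the dirty set shrinks each round (256, then ~25, then ~2 pages here), which is why pre-copy keeps downtime short, and also why a workload that dirties pages faster than they can be sent would never converge.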
Virtual clusters are used to utilize computing resources efficiently, allowing high performance while resolving issues of configuration coexistence and OS interaction.
Memory Migration (MM): Memory migration is a process that occurs between a physical host and any other physical or virtual machine.
The techniques used today depend upon the guest OS. The memory to be migrated can range from megabytes to gigabytes.
The Internet Suspend-Resume (ISR) technique exploits temporal locality, since the memory states of the suspended and resumed instances of a virtual machine (VM) largely overlap.
Temporal locality (TL) refers to the fact that the memory states differ only by the amount of work done since the VM was last suspended.
To exploit the TL, each file is represented as a tree of small sub-files. A copy of this tree exists in both the running and the suspended instances of the VM.
The tree representation of a file is advantageous, and caching guarantees that only the modified files are transmitted.
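The "transmit only what changed" idea behind the sub-file tree can be sketched by hashing fixed-size chunks and shipping only the chunks whose hashes differ. The chunk size, data, and use of SHA-256 are illustrative assumptions, not the actual ISR file format.

```python
# Chunk-hash diff sketch (chunk size and data are invented). Both the
# suspended and resumed sides keep hashes of each chunk; only chunks
# whose hashes differ need to be transmitted on resume.

import hashlib

def chunk_hashes(data, size=4):
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    return [hashlib.sha256(c).hexdigest() for c in chunks], chunks

def changed_chunks(old, new, size=4):
    old_h, _ = chunk_hashes(old, size)
    new_h, new_c = chunk_hashes(new, size)
    return {i: new_c[i] for i, h in enumerate(new_h)
            if i >= len(old_h) or old_h[i] != h}

suspended = b"AAAABBBBCCCC"    # state saved at suspend time
resumed   = b"AAAAXXXXCCCC"    # one chunk changed since then
delta = changed_chunks(suspended, resumed)   # only the changed chunk
```

With a real tree of sub-files, whole unchanged subtrees can be skipped by comparing a single parent hash, which is what makes the representation advantageous for large memory states.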
File System Migration: To support VM migration from one cluster to another, a consistent and location-independent view of the file system must be available on all hosts.
Each VM is provided with its own virtual disk, to which its file system is mapped.
The contents of the VM can be transmitted across the cluster through the inter-connections (mappings) between the hosts. However, migrating an entire host's contents is not sensible, due to cost and security problems.
Instead, a global file system is offered to all host machines, enabling VMs to be located without copying files and guaranteeing seamless (unified) network access. The VMM accesses only the local file system of a machine, and the original/modified files are stored at their respective systems only.
This decoupling improves security and performance but increases the overhead of the VMM: every file has to be stored in virtual disks in its local files.
Smart Copying ensures that, after being resumed from the suspended state, a VM does not receive a whole file as a backup; it receives only the changes that were made. This technique reduces the amount of data that has to be moved between the two locations.
Network Migration: A migrating VM should maintain its open network connections without relying on forwarding or mobility mechanisms.
Each VM should be assigned unique IP and MAC (Media Access Control) addresses, different from those of the host machine. The mapping of the IP and MAC addresses to their respective VMs is done by the VMM.
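The VMM's address mapping can be sketched as a table in which the IP/MAC belong to the VM, not the host. The addresses and host names below are invented for illustration.

```python
# Toy VMM address table (all addresses invented). Each VM keeps its
# own IP and MAC, independent of the host machine; on migration only
# the host binding changes, so open connections keep their endpoints.

vm_addresses = {
    "vm1": {"ip": "10.0.0.11", "mac": "52:54:00:00:00:01", "host": "hostA"},
    "vm2": {"ip": "10.0.0.12", "mac": "52:54:00:00:00:02", "host": "hostA"},
}

def migrate(vm, new_host):
    # IP and MAC travel with the VM; only its host location changes.
    vm_addresses[vm]["host"] = new_host

migrate("vm1", "hostB")
```

On a real LAN, the new host would additionally announce the moved MAC/IP (e.g., via a gratuitous ARP) so that switches redirect traffic to the new location.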
If the migration destination is on the same LAN, the VM's IP and MAC addresses simply move with it and are announced at the new location; migration to another network requires additional redirection mechanisms, although the guest OS itself remains unchanged.
Note that live migration involves transferring a virtual machine (VM) without suspending it, using a migration daemon program; this enables efficient system maintenance, reconfiguration, load balancing, and improved fault tolerance.
There are two approaches to live migration: pre-copy and post-copy.
(a) In pre-copy, which is the approach mainly used in live migration, all memory pages are first transferred; the modified pages are then copied iteratively in subsequent rounds. Here, some performance degradation occurs because the migration keeps encountering dirty pages (pages that change during the transfer) [10] before reaching the final state. The number of iterations can also grow, causing another problem. To counter these problems, a check-pointing/recovery process is used at different points to handle them and increase the performance.
(b) In post-copy, all memory pages are transferred only once during the migration process. The threshold time allocated for migration is reduced. However, the downtime is higher than in pre-copy.
NOTE: Downtime is the time during which a system is out of action or cannot handle other work. Ex: live migration between two Xen-enabled hosts: Figure 3.22.
CBC Compression => Context-Based Compression
RDMA => Remote Direct Memory Access
5. Virtualization for Data Centre Automation:
 Large data centres are being constructed and automated by companies such as Google, Microsoft, IBM and Apple.
 With virtualization, data-centre maintenance time is decreased, service accessibility is improved, and the number of virtual clients that can be served is increased.
 Workload balancing, backup services, and high availability (HA) are additional elements that affect the setup and utilization of data centres.
5.1. Server Consolidation in Data Centres: In data centres, heterogeneous workloads may run at different times. There are two types:
 Interactive Workloads: These may be idle at one point in time and peak at another. (Top priority)
 Non-Interactive Workloads: These require no user interaction once they have been submitted. For instance, HPC jobs.
The data centre should be able to handle the workload with acceptable performance at both peak and normal levels.
Data centres frequently suffer from under-utilization of resources such as hardware, space, power and cost, at various levels and times.
The method of server consolidation can be employed to overcome this disadvantage.
This improves the server utility ratio of hardware devices by
reducing the number of physical servers.
There exist two types of server consolidation:
 Centralised and physical consolidation (merging)
 Virtualization-based server consolidation.
Nowadays, the second approach is popular and has certain benefits:
 Consolidation increases hardware utilization.
 It enables more agile (active) provisioning of the available resources.
 The total cost of owning and using the data centre is reduced (low maintenance, low cooling, low cabling etc.)
 It ensures business continuity and availability, since the crash of a guest OS does not impact the host OS.
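How consolidation raises utilization can be sketched with a first-fit packing of VM loads onto as few servers as possible. The CPU demands, the 100-unit capacity, and the first-fit policy are all illustrative assumptions; production schedulers use far more sophisticated placement.

```python
# First-fit sketch of virtualization-based server consolidation
# (loads and capacities are made-up numbers). Each physical server
# offers 100 units of capacity; VMs are packed onto as few servers
# as possible, raising the utilization of each powered-on server.

def consolidate(vm_loads, capacity=100):
    servers = []                      # each entry = remaining capacity
    placement = {}
    for vm, load in vm_loads.items():
        for i, free in enumerate(servers):
            if load <= free:          # first server with enough room
                servers[i] -= load
                placement[vm] = i
                break
        else:                         # nothing fits: power on a new one
            servers.append(capacity - load)
            placement[vm] = len(servers) - 1
    return placement, len(servers)

loads = {"web": 30, "db": 60, "cache": 20, "batch": 50, "mail": 10}
placement, n_servers = consolidate(loads)   # 5 VMs fit on 2 servers
```

Five lightly loaded physical servers collapse onto two hosts here, which is exactly the "fewer physical servers, higher utility ratio" effect described above.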
NOTE:
 To automate (VZ) data centers one must consider several
factors like resource scheduling, control
management, performance of analytical models and
so on.
 This improves the utilization in data centers and gives
high performance.
 Scheduling and reallocation can be done at different
levels at VM level, Server level and Data Center
level, but generally any one (or two) level is used at a
time.
The schemes that can be considered are:
(a) Dynamic CPU allocation scheme:
 This is based on VM utilization and application-level QoS (Quality of Service) metrics.
 The CPU allocation should change automatically according to the demands and workloads, to deliver the best possible performance.
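A dynamic allocation policy of this kind can be sketched as a simple feedback rule. The thresholds, step size, and bounds below are invented assumptions; a real controller would be driven by measured QoS metrics.

```python
# Illustrative dynamic CPU-share policy (all thresholds and the step
# size are made-up). Shares are raised when a VM is using nearly all
# of its current allocation and lowered when it is mostly idle.

def adjust_shares(shares, utilization, step=10,
                  high=0.9, low=0.3, floor=10, cap=100):
    if utilization > high:
        return min(cap, shares + step)     # VM is starved: grant more CPU
    if utilization < low:
        return max(floor, shares - step)   # VM is idle: reclaim CPU
    return shares                          # within the QoS band: no change

shares = 50
shares = adjust_shares(shares, 0.95)   # busy VM gets a bigger share
```

Running this rule periodically for every VM lets the data centre shift CPU toward the workloads that need it, which is the essence of the scheme above.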
(b) Another scheme uses a two-level resource management system to handle the complexity of the requests and allocations.
 The resources are distributed automatically and independently to spread the workload among a data centre's servers.
 Finally, we should efficiently balance power saving against data-centre performance, to achieve high performance and high throughput in the different situations as they demand.
Virtual Storage Management:
 Storage virtualization, together with the deployment of access-restricted virtual machines, is at the core of data-centre transformation.
 Chips are not replaced, CPUs are not updated often, and host/guest operating systems are not modified to accommodate changing circumstances.
 Virtual machine storage methods are not yet as effective as they need to be.
 The flood of thousands of virtual machines in a data centre, together with their very large numbers of storage images (SSI), could potentially overwhelm the data centre.
 Efficient storage methods must therefore be developed, reducing image size by storing shared parts at different locations. The solution here is Content Addressable Storage (CAS). Ex: the Parallax system architecture (a distributed storage system). This can be observed in Figure 3.26.
Note:
 Parallax itself runs as a user-level application in a storage appliance VM, providing Virtual Disk Images (VDIs).
 A Virtual Disk Image (VDI) can be accessed in a transparent manner from any host machine in the Parallax cluster.
 The VDI is a core concept of the storage methodology used by Parallax.
Example of Eucalyptus for Virtual Networking of Private
Cloud: It is an open-source software system intended for
IaaS clouds. This is seen in Figure 3.27.
Instance Manager (IM): It controls the execution, inspection and termination of VM instances on the host machine where it runs.
Group Manager (GM): It gathers information about VM execution and schedules VMs on specific IMs; it also manages the virtual instance network.
Cloud Manager (CM): It is an entry-point into the cloud for
both users and administrators. It gathers information about the
resources, allocates them by proper scheduling, and
implements them through the GMs.
Trust Management in VZ Data Centers:
 Recall that a VMM (hypervisor) is a layer between the host OS and the hardware that creates one or more VMs on a single platform.
 A VM encapsulates the guest OS and its current state and can be transported through the network as a storage image.
 During network transport, intruders may attack both the image and the host system. Ex: a subtle problem lies in reusing a random number for cryptography.
VM-based Intrusion Detection: Intrusions are unauthorized accesses to a computer by other network users.
An intrusion detection system (IDS) built on the host OS can be divided into two types: a host-based IDS (HIDS) and a network-based IDS (NIDS).
A virtualization-based intrusion detection system can isolate each virtual machine on the VMM and work on the concerned system without contact with the others.
Any problem with one VM will therefore not pose problems for the other VMs.
In addition, the VMM regularly audits the hardware allocation and usage of the virtual machines, to notice any abnormal changes. The host machine and the guest OS are fully isolated from each other.
A methodology on this basis can be seen in Figure 3.29 [1].
 The above figure proposes that the IDS run only on a highly privileged VM.
 Note that policies play an important role here.
 A policy framework can monitor the actions or events in the guest operating systems of different VMs, using an OS interface library to determine which access is secure and which is not.
 Deciding which access constitutes an intrusion and which does not is challenging without some time delay.
 These systems may also use access 'logs' to analyze which accesses are intrusions and which are secure.
 The IDS log service is based on the OS kernel, and the UNIX kernel is hard to break; so even if a host machine is taken over by hackers, the IDS logbook remains unaffected.
 The security problems of the cloud mainly arise in the transport of the images through the network from one location to another.
 The VMM needs to be utilized more effectively and efficiently to close potential hacking opportunities.