Unit 3
The machine on which the virtual machine is to be created is known as the Host
Machine, and that virtual machine is referred to as the Guest Machine.
Types of Virtualization:
1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.
1) Hardware Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed
directly on the hardware system, it is known as hardware virtualization.
The main job of the hypervisor is to control and monitor the processor, memory,
and other hardware resources.
2) Operating system Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed
on the host operating system instead of directly on the hardware system, it is known
as operating system virtualization.
Usage:
Operating system virtualization is mainly used for testing applications on different
platforms or operating systems.
3) Server Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed
directly on the server system, it is known as server virtualization.
Usage:
Server virtualization is done because a single physical server can be divided into
multiple servers on demand and for load balancing.
4) Storage Virtualization:
Storage virtualization is the process of grouping the physical storage from multiple
network storage devices so that it looks like a single storage device.
Usage:
Storage virtualization is mainly done for back-up and recovery purposes.
Advantages of Virtualization
Virtualization allows multiple workloads to share the same hardware, which
improves utilization, lowers hardware costs, and makes backup and migration
easier.
Disadvantages of Virtualization
The disadvantages of virtualization are quite limited in nature: the virtualization
layer adds some performance overhead, initial setup can be costly, and managing a
virtualized environment requires skilled staff.
Sensor Virtualization
Earthquake sensors detect seismic waves. They come in many shapes and sizes and
work in different ways. Some sensors are placed on the ground, while others are
attached to buildings or other structures. But what is the alternative when a
physical sensor cannot be placed in the chosen position due to spatial constraints?
Virtual sensors are software-based models of physical sensors that can simulate
their behaviour and generate sensor readings without the need for actual physical
hardware. They can be used as digital twins to monitor or control a physical
sensor, providing cost-effective and scalable solutions for certain applications.
The automotive industry heavily relies on sensing technology for many processes
related to safety, entertainment, traffic control, navigation and guidance. As
vehicles gain autonomy, this reliance on sensing devices will likely increase.
However, physical sensors used in cars can be expensive and, in some situations,
unreliable. Virtual sensors are becoming a valuable alternative for car
manufacturers. They can provide a redundant safety backup to physical sensors
and are fundamental in the development of more advanced driver assistance
systems (ADAS) and therefore for the realization of autonomous vehicles.
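To make the idea concrete, here is a minimal Python sketch of a virtual sensor
standing in for a physical one. The VirtualSensor class, the speed_model function,
and all numbers are hypothetical illustrations, not any vendor's API.

    import random


    class VirtualSensor:
        """A minimal software model of a physical sensor (a 'digital twin').

        It derives a reading from other available signals instead of
        measuring directly, and adds Gaussian noise so downstream code
        sees data shaped like real sensor output.
        """

        def __init__(self, model, noise_std=0.05):
            self.model = model          # function: inputs -> estimated value
            self.noise_std = noise_std  # simulated measurement noise

        def read(self, **inputs):
            estimate = self.model(**inputs)
            return estimate + random.gauss(0.0, self.noise_std)


    # Hypothetical example: estimate vehicle speed from wheel RPM and tire
    # circumference, standing in for a failed physical speed sensor.
    def speed_model(wheel_rpm, tire_circumference_m):
        return wheel_rpm * tire_circumference_m / 60.0  # metres per second


    speed_sensor = VirtualSensor(speed_model, noise_std=0.1)
    print(speed_sensor.read(wheel_rpm=600, tire_circumference_m=2.0))  # ~20 m/s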
HVM
HVM (Hardware Virtual Machine) is AWS's fully virtualized machine type: guest
operating systems run unmodified on the hypervisor, which uses hardware-assisted
virtualization.
One of the key advantages of HVM virtualization is its ability to support a wide
range of operating systems. This flexibility empowers AWS users to deploy VMs
with different operating systems simultaneously, enabling diverse workloads
within the same infrastructure. With HVM, users can run Windows, Linux, and
even BSD-based operating systems on AWS, providing a versatile environment for
various applications.
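As a hedged illustration, the following Python sketch uses the boto3 library to find
an Amazon-owned HVM Linux image and launch a VM from it. It assumes configured
AWS credentials; the region, AMI name pattern, and instance type are placeholder
choices, and a real deployment would also sort images by creation date.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Find Amazon-owned Linux images that use HVM virtualization.
    images = ec2.describe_images(
        Owners=["amazon"],
        Filters=[
            {"Name": "virtualization-type", "Values": ["hvm"]},
            {"Name": "name", "Values": ["al2023-ami-*-x86_64"]},  # placeholder pattern
        ],
    )
    ami_id = images["Images"][0]["ImageId"]

    # Launch one HVM-backed VM; Windows or BSD AMIs launch the same way.
    ec2.run_instances(
        ImageId=ami_id,
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )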
In terms of security, HVM virtualization plays a vital role in safeguarding the data
and systems hosted on AWS. By isolating each VM from others and the hypervisor
itself, HVM ensures that potential vulnerabilities or malicious activities in one VM
do not impact others. This isolation mitigates the risk of unauthorized access and
keeps sensitive information protected within individual VMs.
TYPE-1 Hypervisor:
The hypervisor runs directly on the underlying host system. It is also known as a
“Native Hypervisor” or “Bare metal hypervisor”. It does not require any base
server operating system. It has direct access to hardware resources. Examples of
Type 1 hypervisors include VMware ESXi, Citrix XenServer, and Microsoft Hyper-V.
TYPE-2 Hypervisor:
A host operating system runs on the underlying host system. It is also known as a
"Hosted Hypervisor". Such hypervisors do not run directly on the underlying
hardware; rather, they run as an application on a host system (physical machine).
Basically, the software is installed on an operating system, and the hypervisor asks
the operating system to make hardware calls. Examples of Type 2 hypervisors
include VMware Player and Parallels Desktop. Hosted hypervisors are often found
on endpoints like PCs. A Type-2 hypervisor is very useful for engineers and
security analysts (for checking malware, malicious source code, and newly
developed applications).
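The products above are GUI-based; as one programmatic illustration of working
with a hosted hypervisor, this sketch uses the libvirt Python bindings against a
per-user QEMU session. It assumes libvirt-python and QEMU are installed; it is a
sketch, not the API of VMware Player or Parallels.

    import libvirt  # pip install libvirt-python; requires libvirt + QEMU installed

    # 'qemu:///session' runs QEMU as an unprivileged per-user process,
    # i.e. a hosted (Type-2 style) setup on top of the host OS.
    conn = libvirt.open("qemu:///session")

    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "shut off"
        print(f"{dom.name()}: {state}")

    conn.close()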
Pros & Cons of Type-2 Hypervisor:
Pros: Such hypervisors allow quick and easy access to a guest operating system
alongside the running host machine. These hypervisors usually come with
additional useful features for guest machines; such tools enhance the
coordination between the host machine and the guest machine.
Cons: Because there is no direct access to the physical hardware resources, these
hypervisors lag behind Type-1 hypervisors in performance. There are also
potential security risks: an attacker who compromises the host operating system
through a security weakness can also access the guest operating systems.
Type 1 hypervisors offer much better performance than Type 2 ones because
there’s no middle layer, making them the logical choice for mission-critical
applications and workloads. But that’s not to say that hosted hypervisors don’t
have their place – they’re much simpler to set up, so they’re a good bet if, say,
you need to deploy a test environment quickly. One of the best ways to determine
which hypervisor meets your needs is to compare their performance metrics.
These include CPU overhead, the maximum amounts of host and guest memory,
and support for virtual processors. The following factors should be examined
before choosing a suitable hypervisor:
1. Understand your needs: The company and its applications are the reason for
the data center (and your job). Besides your company’s needs, you (and your co-
workers in IT) also have your own needs. Needs for a virtualization hypervisor
are:
a. Flexibility
b. Scalability
c. Usability
d. Availability
e. Reliability
f. Efficiency
g. Reliable support
2. The cost of a hypervisor: For many buyers, the toughest part of choosing a
hypervisor is striking the right balance between cost and functionality. While a
number of entry-level solutions are free, or practically free, the prices at the
opposite end of the market can be staggering. Licensing frameworks also vary, so
it’s important to be aware of exactly what you’re getting for your money.
3. Virtual machine performance: Virtual systems should meet or exceed the
performance of their physical counterparts, at least in relation to the applications
within each server. Everything beyond meeting this benchmark is profit.
5. Test for yourself: You can gain basic experience from your existing desktop
or laptop. You can run both VMware vSphere and Microsoft Hyper-V in either
VMware Workstation or VMware Fusion to create a nice virtual learning and
testing environment.
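As a rough way to combine the criteria from point 1 above, the sketch below scores
two candidate hypervisors against weighted needs. All weights, names, and scores
are invented placeholders; in practice they would come from your own benchmarks
and requirements.

    # Hypothetical scoring sketch: weight the selection criteria and
    # compare candidate hypervisors. All numbers are illustrative
    # placeholders, not measured benchmarks.
    weights = {"flexibility": 2, "scalability": 3, "usability": 1,
               "availability": 3, "cost": 2}

    candidates = {
        "Hypervisor A (Type 1)": {"flexibility": 4, "scalability": 5,
                                  "usability": 3, "availability": 5, "cost": 2},
        "Hypervisor B (Type 2)": {"flexibility": 3, "scalability": 2,
                                  "usability": 5, "availability": 3, "cost": 5},
    }

    for name, scores in candidates.items():
        total = sum(weights[k] * scores[k] for k in weights)
        print(f"{name}: {total}")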
A virtual machine monitor (VMM) is traditionally described in terms of three
modules: a dispatcher, an allocator, and an interpreter.
1. DISPATCHER:
The dispatcher behaves like the entry point of the monitor and reroutes the
instructions of the virtual machine instance to one of the other two modules.
2. ALLOCATOR:
The allocator is responsible for deciding the system resources to be provided
to the virtual machine instance. It means whenever a virtual machine tries to
execute an instruction that results in changing the machine resources
associated with the virtual machine, the allocator is invoked by the dispatcher.
3. INTERPRETER:
The interpreter module consists of interpreter routines. These are executed,
whenever a virtual machine executes a privileged instruction.
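A toy Python sketch of how these three modules might fit together is shown below;
the instruction names and resource model are invented for illustration, since a
real monitor operates at the hardware trap level.

    class VirtualMachineMonitor:
        """Toy sketch of the three classic VMM modules described above."""

        def __init__(self):
            self.resources = {"memory_pages": 1024}  # toy resource pool

        def dispatch(self, vm_id, instruction):
            # DISPATCHER: entry point; route the trapped instruction.
            if instruction.startswith("alloc"):
                return self.allocate(vm_id, instruction)
            return self.interpret(vm_id, instruction)

        def allocate(self, vm_id, instruction):
            # ALLOCATOR: decide which system resources the VM receives.
            _, amount = instruction.split()
            self.resources["memory_pages"] -= int(amount)
            return f"VM {vm_id} granted {amount} pages"

        def interpret(self, vm_id, instruction):
            # INTERPRETER: emulate a privileged instruction on the VM's behalf.
            return f"VM {vm_id}: emulated privileged instruction '{instruction}'"


    vmm = VirtualMachineMonitor()
    print(vmm.dispatch(1, "alloc 16"))
    print(vmm.dispatch(2, "halt"))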
A logical partition (LPAR) is a subset of a computer's hardware resources,
virtualized as a separate computer. The LPAR has its own operating system (OS),
applications, and configuration, just like its physical counterpart. If an LPAR is set
up with resources comparable to a physical server and both are running the same
OS and applications, they will seem like similar systems from the outside.
The number of logical partitions that can be created on a computer depends on its
hardware, OS and available resources. For example, an IBM Power Systems server
can host up to 1,000 LPARs. No matter how many LPARs are running on a
physical computer, each one looks like an independent system.
Logical partitions hosted on the same computer run in isolation from each other.
They do not interfere with each other's operations no matter what operating
systems or applications they run. For instance, an IT team can create LPARs that
run IBM AIX, IBM i and Linux all on the same server. They can also create
LPARs that run the same OS, with each LPAR using its own OS installation. The
following figure represents a server running five LPARs with three different
operating systems.
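As a minimal sketch of the idea (not IBM's actual LPAR tooling), the following
Python model shows a server carving isolated partitions, each with its own OS, out
of a fixed pool of CPUs and memory; all names and sizes are made up.

    class Server:
        def __init__(self, cpus, memory_gb):
            self.free_cpus = cpus
            self.free_memory_gb = memory_gb
            self.lpars = []

        def create_lpar(self, name, os_name, cpus, memory_gb):
            # Each LPAR gets a dedicated slice; refuse if the pool is exhausted.
            if cpus > self.free_cpus or memory_gb > self.free_memory_gb:
                raise ValueError("not enough free resources")
            self.free_cpus -= cpus
            self.free_memory_gb -= memory_gb
            self.lpars.append({"name": name, "os": os_name,
                               "cpus": cpus, "memory_gb": memory_gb})


    server = Server(cpus=64, memory_gb=512)
    server.create_lpar("lpar1", "IBM AIX", cpus=16, memory_gb=128)
    server.create_lpar("lpar2", "IBM i", cpus=16, memory_gb=128)
    server.create_lpar("lpar3", "Linux", cpus=8, memory_gb=64)
    print(server.lpars, server.free_cpus, server.free_memory_gb)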
Storage Virtualization
Traditionally, there has been a strong link between a physical host and its locally
installed storage devices. However, that paradigm has been changing drastically, to
the point where local storage is often no longer needed. As technology progresses,
more advanced storage devices are coming to the market that provide more
functionality and render local storage obsolete.
File servers: The operating system writes the data to a remote location with no
need to understand how to write to the physical media.
WAN Accelerators: Instead of sending multiple copies of the same data over the
WAN environment, WAN accelerators will cache the data locally and present the
re-requested blocks at LAN speed, while not impacting the WAN performance.
SAN and NAS: Storage is presented over the network to the operating system. NAS
presents the storage as file-level operations (like NFS). SAN technologies present
the storage as block-level storage (like Fibre Channel). With SAN, the operating
system issues instructions as if the storage were a locally attached device.
Storage Tiering: Utilizing the storage pool concept as a stepping stone, storage
tiering analyzes the most commonly used data and places it on the highest-
performing storage pool. The least-used data is placed on the weakest-performing
storage pool.
This operation is done automatically without any interruption of service to the data
consumer.
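A simplified Python sketch of this placement logic is shown below; the file names,
access counts, and tier capacity are invented for illustration.

    # Simplified tiering sketch: place the most frequently accessed items on
    # the fast pool and the rest on the slow pool. Access counts are made up.
    access_counts = {"report.db": 950, "archive.tar": 3, "logs.txt": 40,
                     "index.db": 800}

    FAST_TIER_SLOTS = 2  # capacity of the high-performing pool

    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    placement = {name: ("fast-pool" if i < FAST_TIER_SLOTS else "slow-pool")
                 for i, name in enumerate(ranked)}
    print(placement)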
Benefits of storage virtualization include:
1. Data is stored in more convenient locations away from the specific host. In
the case of a host failure, the data is not necessarily compromised.
2. The storage devices can perform advanced functions like replication,
deduplication, and disaster recovery functionality.
3. By abstracting the storage level, IT operations become more flexible in how
storage is provided, partitioned, and protected (a toy sketch of this
abstraction follows).
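As a toy illustration of this abstraction, the sketch below presents several physical
devices as one contiguous logical device by mapping logical block addresses to
(device, local block) pairs. The device names and sizes are made up.

    # Toy sketch of storage pooling: several physical devices appear as one
    # logical device through a logical-to-physical block mapping.
    devices = [("disk_a", 1000), ("disk_b", 2000), ("disk_c", 1500)]  # sizes in blocks


    def map_block(logical_block):
        offset = logical_block
        for name, size in devices:
            if offset < size:
                return name, offset
            offset -= size
        raise ValueError("logical block out of range")


    print(map_block(500))   # ('disk_a', 500)
    print(map_block(2500))  # ('disk_b', 1500)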
A Storage Area Network (SAN) is used for transferring data between servers and
storage devices over Fibre Channel links and switches. In a SAN (Storage Area
Network), data is identified by disk block. Protocols used in SANs include SCSI
(Small Computer System Interface), SATA (Serial Advanced Technology
Attachment), etc.
Components of Storage Area Network (SAN):
1. Node ports
2. Cables
3. Interconnect devices such as Hubs, switches, directors
4. Storage arrays
5. SAN management Software
In Network Attached Storage (NAS), data is identified by file name as well as byte
offset. In Network Attached Storage, the file system is managed by the head unit
(CPU and memory). For backup and recovery, files are used instead of a
block-by-block copying technique.
Components of Network Attached Storage (NAS):
1. Head unit: CPU, Memory
2. Network Interface Card (NIC)
3. Optimized operating system
4. Protocols
5. Storage protocols: ATA (Advanced Technology Attachment), SCSI, FC (Fibre
Channel)
The difference between Storage Area Network (SAN) and Network Attached
Storage (NAS) are as follows:
SAN                                          NAS
SAN stands for Storage Area Network.         NAS stands for Network Attached Storage.
SAN (Storage Area Network) is more costly.   NAS (Network Attached Storage) is less expensive than SAN.
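The difference shows up directly in how a client addresses the storage. In the
hedged Python sketch below, the NAS path is a hypothetical NFS mount used
through ordinary file operations, while the SAN path is a hypothetical raw block
device addressed by disk block (running it for real would require root access and
would overwrite data).

    import os

    # NAS: the client sees files; a mounted NFS share is used like any path.
    with open("/mnt/nfs_share/report.txt", "w") as f:  # hypothetical mount point
        f.write("hello over NFS\n")

    # SAN: the client sees a raw block device and addresses it by disk block.
    BLOCK_SIZE = 512
    fd = os.open("/dev/sdb", os.O_WRONLY)        # hypothetical SAN LUN
    os.lseek(fd, 100 * BLOCK_SIZE, os.SEEK_SET)  # seek to block 100
    os.write(fd, b"\x00" * BLOCK_SIZE)           # write one raw block
    os.close(fd)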
Server Virtualization
Figure: Server Virtualization
To implement server virtualization, a hypervisor is installed on the server; it
manages and allocates the host hardware resources to each virtual machine. This
hypervisor sits over the server hardware and regulates the resources of each VM.
A user can increase or decrease resources or delete an entire VM as needed.
Servers with VMs created on them in this way constitute server virtualization, and
the concept of users controlling these VMs through the internet is called cloud
computing.
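As one hedged illustration of managing such VMs programmatically, the sketch
below uses the libvirt Python bindings to resize and delete a VM. The connection
URI and VM name are placeholders, and other hypervisors expose similar
operations through their own APIs.

    import libvirt  # assumes libvirt-python and a local hypervisor are installed

    conn = libvirt.open("qemu:///system")     # system-level (server) hypervisor
    dom = conn.lookupByName("webserver-vm")   # hypothetical VM name

    # Increase the VM's configured memory to 4 GiB (value is in KiB).
    dom.setMemoryFlags(4 * 1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_CONFIG)

    # Or remove the VM entirely: stop it, then delete its definition.
    if dom.isActive():
        dom.destroy()
    dom.undefine()

    conn.close()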
Advantages of Server Virtualization:
Each server in server virtualization can be restarted separately without affecting
the operation of other virtual servers.
Server virtualization lowers the cost of hardware by dividing a single server
into several virtual private servers.
One of the major benefits of server virtualization is disaster recovery. In server
virtualization, data may be stored and retrieved from any location and moved
rapidly and simply from one server to another.
It enables users to keep their private information in the data centers.
Disadvantages of Server Virtualization:
The major drawback of server virtualization is that all websites hosted by the
server will cease to exist if the server goes offline.
The effectiveness of virtualized environments is difficult to measure accurately.
It consumes a significant amount of RAM.
Setting it up and maintaining it are challenging.
Many essential databases and applications do not support virtualization.
This section is an introduction to virtual data centers and the benefits of cloud-
based infrastructure. Learn how to take advantage of this strategy and capitalize on
the flexibility, scalability, and cost savings of cloud computing.
What is a Virtual Data Center?
A virtual data center (VDC) is a set of cloud resources that support a business with
computing capabilities. A VDC eliminates the need for a company to set up and
run an on-prem data center. Another common name for a VDC is a software-
defined data center.
A VDC pools resources such as:
Servers.
Processing power (CPU).
Storage clusters (RAM and disk space).
Networking components and bandwidth.
Like a regular data center, a VDC provides computing capabilities that enable
workloads of business apps and activities, such as:
File sharing.
Email operations.
Productivity apps.
CRM and ERP platforms.
Database operations.
Big data.
AIOps and machine learning.
Communication and collaboration apps.
The main upside of virtual data centers is the ability to add or remove capacity
without having to set up or take down hardware. Every abstracted component runs
on the provider's virtual machines (VMs), and the client pays for usage on a
pay-as-you-use basis.
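A minimal sketch of how such pay-as-you-use billing might be computed is shown
below; the rates and usage figures are invented placeholders, not any provider's
actual prices.

    # Illustrative pay-as-you-use billing sketch with made-up hourly rates.
    rates = {"vcpu_hour": 0.04, "gb_ram_hour": 0.005, "gb_storage_hour": 0.0001}

    usage = {"vcpu_hour": 2 * 720,          # 2 vCPUs for a 720-hour month
             "gb_ram_hour": 8 * 720,        # 8 GB RAM
             "gb_storage_hour": 100 * 720}  # 100 GB disk

    monthly_cost = sum(rates[k] * usage[k] for k in rates)
    print(f"Estimated monthly cost: ${monthly_cost:.2f}")  # $93.60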
The comparison below contrasts a traditional data center with a VDC:

Networking gear:
  Traditional data center: The team must plan for and set up switch ports, routers, and cabling.
  Virtual data center: Relies on software-defined networking (SDN) and virtual routers to scale network capacity up or down.

Data center migration:
  Traditional data center: Migration is a slow and expensive project.
  Virtual data center: Migrating a VDC is quick, simple, and cheap.

Workload migration:
  Traditional data center: Difficult to move a workload from one piece of hardware to another.
  Virtual data center: Easy workload migration between hardware platforms.

Server anti-virus management:
  Traditional data center: Each server needs a separate anti-virus program.
  Virtual data center: Anti-virus operates at the hypervisor level.

Disaster recovery:
  Traditional data center: DR occurs on a per-application basis, and every app has a different solution.
  Virtual data center: DR is a service and enables center-wide strategies.

Future planning:
  Traditional data center: Requires accurate estimation of future needs to avoid unnecessary overhead.
  Virtual data center: The company pays only for the needed capacity and can scale up and down to meet current requirements. No overhead.