
Unit-3

Virtualization in Cloud Computing

Virtualization is the "creation of a virtual (rather than actual) version of something,


such as a server, a desktop, a storage device, an operating system or network
resources".

In other words, Virtualization is a technique, which allows to share a single


physical instance of a resource or an application among multiple customers and
organizations. It does by assigning a logical name to a physical storage and
providing a pointer to that physical resource when demanded.

What is the concept behind Virtualization?

Creation of a virtual machine over the existing operating system and hardware is known as Hardware Virtualization. A virtual machine provides an environment that is logically separated from the underlying hardware.

The machine on which the virtual machine is created is known as the Host Machine, and that virtual machine is referred to as the Guest Machine.

Types of Virtualization:

1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.

1) Hardware Virtualization:

When the virtual machine software or virtual machine manager (VMM) is installed directly on the hardware system, it is known as hardware virtualization.

The main job of the hypervisor is to control and monitor the processor, memory and other hardware resources.

After virtualization of the hardware system, we can install different operating systems on it and run different applications on those OSs.

Usage:

Hardware virtualization is mainly done for server platforms, because controlling virtual machines is much easier than controlling a physical server.
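
On Linux hosts, you can check whether the CPU exposes the hardware-assisted virtualization extensions that hardware virtualization relies on. Below is a minimal Python sketch (illustrative, Linux-only) that looks for the Intel VT-x ("vmx") or AMD-V ("svm") CPU flags in /proc/cpuinfo:

```python
# Minimal check for hardware virtualization support on Linux.
# Intel CPUs expose the "vmx" flag (VT-x); AMD CPUs expose "svm" (AMD-V).

def hw_virtualization_supported(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "vmx" in flags or "svm" in flags
    return False

if __name__ == "__main__":
    print("Hardware virtualization supported:", hw_virtualization_supported())
```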

2) Operating System Virtualization:

When the virtual machine software or virtual machine manager (VMM) is installed on the host operating system instead of directly on the hardware system, it is known as operating system virtualization.

Usage:

Operating system virtualization is mainly used for testing applications on different OS platforms.

3) Server Virtualization:

When the virtual machine software or virtual machine manager (VMM) is installed directly on the server system, it is known as server virtualization.

Usage:

Server virtualization is done because a single physical server can be divided into multiple servers on demand and for load balancing.

4) Storage Virtualization:

Storage virtualization is the process of grouping physical storage from multiple network storage devices so that it looks like a single storage device.

Storage virtualization is also implemented by using software applications.

Usage:

Storage virtualization is mainly done for backup and recovery purposes.
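
To make the idea concrete, here is a toy Python sketch (purely illustrative, not a real storage stack) of the core trick: presenting several physical devices as one logical block address space and translating a logical block number to a (device, offset) pair:

```python
# Toy illustration: a logical volume that concatenates several
# physical devices into one contiguous block address space.

class PhysicalDevice:
    def __init__(self, name, num_blocks):
        self.name = name
        self.num_blocks = num_blocks

class LogicalVolume:
    """Maps a single logical block range onto multiple physical devices."""
    def __init__(self, devices):
        self.devices = devices

    def locate(self, logical_block):
        # Walk the devices in order until the block falls inside one of them.
        offset = logical_block
        for dev in self.devices:
            if offset < dev.num_blocks:
                return dev.name, offset
            offset -= dev.num_blocks
        raise ValueError("block beyond end of volume")

vol = LogicalVolume([PhysicalDevice("disk0", 1000), PhysicalDevice("disk1", 2000)])
print(vol.locate(1500))   # -> ('disk1', 500): one address space, two disks
```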

How does virtualization work in cloud computing?

Virtualization plays a very important role in cloud computing technology. Normally in cloud computing, users share the data present in the cloud, such as applications, but with virtualization users actually share the infrastructure.

The main use of virtualization technology is to provide applications in their standard versions to cloud users. Suppose the next version of an application is released; the cloud provider would then have to supply the latest version to all of its cloud users, which is practically not feasible because it is expensive.

To overcome this problem, we use virtualization technology. With virtualization, all the servers and software applications required by the cloud providers are maintained by third parties, and the cloud providers pay them on a monthly or annual basis.

Advantages of Virtualization
Here are some Pros/Benefits of Virtualization:

• Virtualization offers several benefits, such as helping with cost reduction and boosting productivity in the development process.
• It does away with the need for a highly complex IT infrastructure.
• It facilitates remote access to resources and promotes faster scalability.
• It is highly flexible, allowing users to run multiple desktop operating systems on one standard machine.
• It reduces the risks involved in system failures and also enables flexible data transfer between different virtual servers.
• The working process in virtualization is highly streamlined and agile, which ensures that users work and operate most economically.

Disadvantages of Virtualization
The disadvantages of virtualization are fairly limited in nature. Here are the cons/disadvantages of virtualization:

• The transition of an existing hardware setup to a virtualized setup requires an extensive time investment, so it can be regarded as a time-intensive process.
• There is a lack of skilled resources available to help with the transition from an existing physical setup to a virtual one.
• Because skilled resources are scarce, implementing virtualization calls for high-cost engagements.
• If the transition process is not handled meticulously, it also poses a security risk to sensitive data.

Sensor Virtualization
Earthquake sensors detect seismic waves. They come in many shapes and sizes and
work in different ways. Some sensors are placed on the ground but others are
attached to buildings or other structures. But what is the alternative when a
physical sensor cannot be placed in the chosen position due to spatial conditions?

Virtual sensors are software-based models of physical sensors that can simulate
their behaviour and generate sensor readings without the need for actual physical
hardware. They can be used as digital twins to monitor or control a physical
sensor, providing cost-effective and scalable solutions for certain applications.

Virtual sensors leverage developments on the artificial intelligence and machine learning front to allow for data-driven approaches to estimate key process
parameters. In addition to being less expensive, virtual sensors provide an
interesting alternative when a physical sensor cannot be placed in the preferred
position due to spatial conditions (e.g. lack of room for a sensor) or a hostile
environment (e.g. exposure to acids or extreme temperatures). Virtual sensor
technology can reduce signal noise and, thus, increase confidence in the signals
when a sensor’s output is confirmed by other sensors measuring the same
phenomenon. Finally, virtual sensors are extremely flexible and can be redesigned
as required, whereas physical sensors, once installed, often can only be
repositioned by mechanical intervention.
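
As a concrete illustration of the data-driven approach described above, the following Python sketch (synthetic data and made-up coefficients, illustrative only) fits a least-squares model that estimates an unmeasurable quantity from two physical sensors, then serves as a "virtual sensor" for new readings:

```python
import numpy as np

# Illustrative "virtual sensor": estimate a quantity we cannot measure
# directly from two physical sensors that correlate with it.
rng = np.random.default_rng(0)

# Synthetic calibration data: readings from two physical sensors ...
sensor_a = rng.uniform(20, 80, 200)
sensor_b = rng.uniform(0, 10, 200)
# ... and the target quantity recorded during a calibration campaign.
target = 0.7 * sensor_a + 2.0 * sensor_b + rng.normal(0, 0.5, 200)

# Fit a linear model (least squares) to act as the virtual sensor.
X = np.column_stack([sensor_a, sensor_b, np.ones_like(sensor_a)])
coeffs, *_ = np.linalg.lstsq(X, target, rcond=None)

def virtual_sensor(a_reading, b_reading):
    return coeffs[0] * a_reading + coeffs[1] * b_reading + coeffs[2]

print(virtual_sensor(50.0, 5.0))  # estimated reading, no physical sensor needed
```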

Cost is a key advantage

Industry 4.0 is an important driver of virtual sensing technology. The information needed to digitize a factory plant is obtained from many field sensors. If only physical sensors are used for this purpose, the cost of digitizing a factory can be prohibitive for many companies. This cost can be minimized using virtual sensors.

The automotive industry heavily relies on sensing technology for many processes
related to safety, entertainment, traffic control, navigation and guidance. As
vehicles gain autonomy, this reliance on sensing devices will likely increase.
However, physical sensors used in cars can be expensive and, in some situations,
unreliable. Virtual sensors are becoming a valuable alternative for car
manufacturers. They can provide a redundant safety backup to physical sensors
and are fundamental in the development of more advanced driver assistance
systems (ADAS) and therefore for the realization of autonomous vehicles.

HVM

HVM (Hardware Virtual Machine) virtualization is a concept crucial to understanding the underlying technology of Amazon Web Services (AWS). AWS, the leading cloud service provider, utilizes HVM virtualization to offer its customers a highly secure and efficient cloud computing environment.

HVM virtualization differs from other virtualization techniques, such as para-virtualization, in that it allows for direct access to underlying hardware resources. This access means that virtual machines (VMs) running on AWS can operate at near-native performance levels, enhancing overall efficiency and minimizing performance bottlenecks.

One of the key advantages of HVM virtualization is its ability to support a wide
range of operating systems. This flexibility empowers AWS users to deploy VMs
with different operating systems simultaneously, enabling diverse workloads
within the same infrastructure. With HVM, users can run Windows, Linux, and
even BSD-based operating systems on AWS, providing a versatile environment for
various applications.
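
As a hedged sketch of how this looks in practice, the snippet below uses the AWS SDK for Python (boto3) to check an AMI's virtualization type before launching it; the AMI ID and region are placeholders, not real resources:

```python
import boto3

# Sketch: confirm an AMI uses HVM virtualization before launching it.
# The AMI ID and region below are placeholders for illustration.
ec2 = boto3.client("ec2", region_name="us-east-1")

image = ec2.describe_images(ImageIds=["ami-0123456789abcdef0"])["Images"][0]
print("Virtualization type:", image["VirtualizationType"])  # "hvm" or "paravirtual"

if image["VirtualizationType"] == "hvm":
    ec2.run_instances(
        ImageId=image["ImageId"],
        InstanceType="t3.micro",   # current-generation instance types require HVM
        MinCount=1,
        MaxCount=1,
    )
```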

In terms of security, HVM virtualization plays a vital role in safeguarding the data
and systems hosted on AWS. By isolating each VM from others and the hypervisor
itself, HVM ensures that potential vulnerabilities or malicious activities in one VM
do not impact others. This isolation mitigates the risk of unauthorized access and
keeps sensitive information protected within individual VMs.

Furthermore, HVM virtualization includes security features like Secure Boot, which verifies the integrity of the guest operating system during startup, and the ability to encrypt VM instances using AWS Key Management Service (KMS). These additional security measures add layers of protection to user data and help fulfill compliance requirements.

In conclusion, HVM virtualization is a core component of the AWS infrastructure and offers numerous benefits to users in terms of performance, flexibility, and security. Understanding HVM is crucial for businesses and individuals aiming to leverage the power and potential of AWS while ensuring the utmost protection for their cloud-based assets. By harnessing the capabilities of HVM virtualization, AWS users can confidently deploy and manage their applications in a highly secure and efficient cloud environment.

Study about Hypervisor

A hypervisor is a form of virtualization software used in cloud hosting to divide and allocate resources on various pieces of hardware. The program which provides partitioning, isolation, or abstraction is called a virtualization hypervisor. The hypervisor is a hardware virtualization technique that allows multiple guest operating systems (OS) to run on a single host system at the same time. A hypervisor is sometimes also called a virtual machine manager (VMM).
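
For a concrete feel, here is a short sketch using the libvirt Python bindings (assuming libvirt-python is installed and a local KVM/QEMU daemon is running; the connection URI is host-specific) to list the guest machines a hypervisor is currently managing:

```python
import libvirt  # pip install libvirt-python; requires a local libvirt daemon

# Sketch: ask the hypervisor (KVM/QEMU via libvirt) which guests it manages.
conn = libvirt.open("qemu:///system")  # URI is an assumption for this example
try:
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "stopped"
        print(f"{dom.name()}: {state}")
finally:
    conn.close()
```
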
Types of Hypervisor –

TYPE-1 Hypervisor:
The hypervisor runs directly on the underlying host system. It is also known as a
“Native Hypervisor” or “Bare metal hypervisor”. It does not require any base
server operating system. It has direct access to hardware resources. Examples of
Type 1 hypervisors include VMware ESXi, Citrix XenServer, and Microsoft
Hyper-V hypervisor.

Pros & Cons of Type-1 Hypervisor:


Pros: Such kinds of hypervisors are very efficient because they have direct
access to the physical hardware resources(like Cpu, Memory, Network, and
Physical storage). This causes the empowerment of the security because there is
nothing any kind of the third party resource so that attacker couldn’t compromise
with anything.
Cons: One problem with Type-1 hypervisors is that they usually need a dedicated
separate machine to perform their operation and to instruct different VMs and
control the host hardware resources.

TYPE-2 Hypervisor:
A host operating system runs on the underlying host system. It is also known as a "Hosted Hypervisor". Such hypervisors do not run directly over the underlying hardware; rather, they run as an application in a host system (physical machine). Basically, the software is installed on an operating system, and the hypervisor asks the operating system to make hardware calls. Examples of Type 2 hypervisors include VMware Player and Parallels Desktop. Hosted hypervisors are often found on endpoints like PCs. The Type-2 hypervisor is very useful for engineers and security analysts (for checking malware or malicious source code and newly developed applications).
Pros & Cons of Type-2 Hypervisor:
Pros: Such hypervisors allow quick and easy access to a guest operating system alongside the running host machine. These hypervisors usually come with additional useful features for guest machines. Such tools enhance the coordination between the host machine and the guest machine.
Cons: There is no direct access to the physical hardware resources here, so the efficiency of these hypervisors lags behind that of Type-1 hypervisors. There are also potential security risks: an attacker who compromises a weakness in the host operating system can then also access the guest operating system.

Choosing the right hypervisor:

Type 1 hypervisors offer much better performance than Type 2 ones because
there’s no middle layer, making them the logical choice for mission-critical
applications and workloads. But that’s not to say that hosted hypervisors don’t
have their place – they’re much simpler to set up, so they’re a good bet if, say,
you need to deploy a test environment quickly. One of the best ways to determine
which hypervisor meets your needs is to compare their performance metrics.
These include CPU overhead, the amount of maximum host and guest memory,
and support for virtual processors. The following factors should be examined
before choosing a suitable hypervisor:

1. Understand your needs: The company and its applications are the reason for the data center (and your job). Besides your company's needs, you (and your co-workers in IT) also have your own needs. Needs for a virtualization hypervisor are:

a. Flexibility
b. Scalability
c. Usability
d. Availability
e. Reliability
f. Efficiency
g. Reliable support

2. The cost of a hypervisor: For many buyers, the toughest part of choosing a
hypervisor is striking the right balance between cost and functionality. While a
number of entry-level solutions are free, or practically free, the prices at the
opposite end of the market can be staggering. Licensing frameworks also vary, so
it’s important to be aware of exactly what you’re getting for your money.
3. Virtual machine performance: Virtual systems should meet or exceed the
performance of their physical counterparts, at least in relation to the applications
within each server. Everything beyond meeting this benchmark is profit.

4. Ecosystem: It's tempting to overlook the role of a hypervisor's ecosystem – that is, the availability of documentation, support, training, third-party developers and consultancies, and so on – in determining whether or not a solution is cost-effective in the long term.

5. Test for yourself: You can gain basic experience from your existing desktop
or laptop. You can run both VMware vSphere and Microsoft Hyper-V in either
VMware Workstation or VMware Fusion to create a nice virtual learning and
testing environment.

HYPERVISOR REFERENCE MODEL:

There are three main modules that coordinate in order to emulate the underlying hardware:

1. DISPATCHER:
The dispatcher behaves like the entry point of the monitor and reroutes the
instructions of the virtual machine instance to one of the other two modules.

2. ALLOCATOR:
The allocator is responsible for deciding the system resources to be provided
to the virtual machine instance. It means whenever a virtual machine tries to
execute an instruction that results in changing the machine resources
associated with the virtual machine, the allocator is invoked by the dispatcher.

3. INTERPRETER:
The interpreter module consists of interpreter routines. These are executed,
whenever a virtual machine executes a privileged instruction.
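
The control flow among the three modules can be summarized with a toy Python sketch (conceptual only; the instruction names are made up for illustration):

```python
# Toy sketch of the hypervisor reference model: the dispatcher routes each
# trapped instruction either to the allocator (resource changes) or to an
# interpreter routine (privileged instructions).

RESOURCE_INSTRUCTIONS = {"set_memory", "attach_disk"}      # hypothetical names
PRIVILEGED_INSTRUCTIONS = {"halt", "io_write"}             # hypothetical names

def allocator(vm, instruction):
    print(f"allocator: adjusting resources of {vm} for '{instruction}'")

def interpreter(vm, instruction):
    print(f"interpreter: emulating privileged '{instruction}' for {vm}")

def dispatcher(vm, instruction):
    """Entry point of the monitor: reroutes the trapped instruction."""
    if instruction in RESOURCE_INSTRUCTIONS:
        allocator(vm, instruction)
    elif instruction in PRIVILEGED_INSTRUCTIONS:
        interpreter(vm, instruction)
    else:
        print(f"{vm}: '{instruction}' runs directly on the hardware")

dispatcher("vm1", "set_memory")   # -> allocator
dispatcher("vm1", "halt")         # -> interpreter
dispatcher("vm1", "add")          # unprivileged, no trap needed
```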

What is a logical partition (LPAR)?

A logical partition (LPAR) is a subset of a computer's processor, memory and I/O resources that behaves much like a physical server. A computer can host multiple LPARs, each one running independently of the others.

The LPAR has its own operating system (OS), applications and configurations, just
like its physical counterpart. If an LPAR is set up with resources comparable to a
physical server and they're both running the same OS and applications, they will
seem like similar systems from the outside.

The number of logical partitions that can be created on a computer depends on its
hardware, OS and available resources. For example, an IBM Power Systems server
can host up to 1,000 LPARs. No matter how many LPARs are running on a
physical computer, each one looks like an independent system.

Logical partition advantages


Logical partitions offer several important advantages over physical machines. They
help to consolidate services and resources, reducing the need for equipment and the
maintenance overhead that goes with it. Logical partitions also make it easy to
assign hardware resources to different LPARs and to move those resources around
as needed, providing IT teams with a great deal of flexibility. For example, a team
can create mixed production and quality assurance environments on a single
machine or run integrated clusters on that machine.

Logical partitions hosted on the same computer run in isolation from each other.
They do not interfere with each other's operations no matter what operating
systems or applications they run. For instance, an IT team can create LPARs that
run IBM AIX, IBM i and Linux all on the same server. They can also create
LPARs that run the same OS, with each LPAR using its own OS installation.

Figure – a server running five LPARs with three different operating systems

Storage Virtualization

As we know, there has been a strong link between the physical host and its locally installed storage devices. However, that paradigm has been changing drastically, and local storage is almost no longer needed. As technology progresses, more advanced storage devices are coming to the market that provide more functionality and render local storage obsolete.

Storage virtualization is a major component of storage servers, in the form of functional RAID levels and controllers. Operating systems and applications with raw device access can write to the disks directly by themselves. The controllers configure the local storage in RAID groups and present the storage to the operating system depending upon the configuration. However, the storage is abstracted, and the controller determines how to write the data or retrieve the requested data for the operating system.

Storage virtualization is becoming more and more important in various other forms:

File servers: The operating system writes the data to a remote location with no
need to understand how to write to the physical media.

WAN Accelerators: Instead of sending multiple copies of the same data over the
WAN environment, WAN accelerators will cache the data locally and present the
re-requested blocks at LAN speed, while not impacting the WAN performance.

SAN and NAS: Storage is presented to the operating system over the network. NAS presents the storage as file operations (like NFS). SAN technologies present the storage as block-level storage (like Fibre Channel). SAN technologies receive operating instructions exactly as if the storage were a locally attached device.

Storage Tiering: Utilizing the storage pool concept as a stepping stone, storage tiering analyzes the most commonly used data and places it on the highest-performing storage pool. The least-used data is placed on the weakest-performing storage pool.

This operation is done automatically without any interruption of service to the data
consumer.
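
A minimal sketch of such a tiering decision (a naive illustrative policy, not any vendor's actual algorithm) could look like this in Python:

```python
# Toy storage-tiering policy: place the most frequently accessed blocks
# on the fast pool and everything else on the slower pool.

def assign_tiers(access_counts, fast_capacity):
    """access_counts: {block_id: hits}; returns {block_id: 'ssd' | 'hdd'}."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    hot = set(ranked[:fast_capacity])
    return {blk: ("ssd" if blk in hot else "hdd") for blk in ranked}

counts = {"blk1": 900, "blk2": 15, "blk3": 430, "blk4": 2}
print(assign_tiers(counts, fast_capacity=2))
# -> {'blk1': 'ssd', 'blk3': 'ssd', 'blk2': 'hdd', 'blk4': 'hdd'}
```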

Advantages of Storage Virtualization

1. Data is stored in more convenient locations, away from the specific host. In the case of a host failure, the data is not necessarily compromised.
2. The storage devices can perform advanced functions like replication, deduplication, and disaster recovery functionality.
3. By abstracting the storage level, IT operations become more flexible in how storage is provided, partitioned, and protected.

Storage Area Network (SAN) is used for transferring data between servers and storage devices through Fibre Channel and switches. In SAN (Storage Area Network), data is identified by disk block. Protocols used in SAN are SCSI (Small Computer System Interface), SATA (Serial Advanced Technology Attachment), etc.
Components of Storage Area Network (SAN):
1. Node ports
2. Cables
3. Interconnect devices such as Hubs, switches, directors
4. Storage arrays
5. SAN management Software
In Network Attached Storage (NAS), data is identified by file name as well as byte offset. In Network Attached Storage, the file system is managed by head units such as the CPU and memory. Here, for backup and recovery, files are used instead of the block-by-block copying technique.
Components of Network Attached Storage (NAS):
1. Head unit: CPU, Memory
2. Network Interface Card (NIC)
3. Optimized operating system
4. Protocols (e.g., NFS, CIFS)
5. Storage protocols: ATA (Advanced Technology Attachment), SCSI, FC (Fibre
Channel)
The differences between Storage Area Network (SAN) and Network Attached Storage (NAS) are as follows:

| SAN | NAS |
|---|---|
| SAN stands for Storage Area Network. | NAS stands for Network Attached Storage. |
| In SAN, data is identified by disk block. | In NAS, data is identified by file name as well as byte offset. |
| In SAN, the file system is managed by servers. | In NAS, the file system is managed by the head unit. |
| SAN is more costly. | NAS is less expensive than SAN. |
| SAN is more complex than NAS. | NAS is less complex than SAN. |
| Protocols used in SAN are SCSI, SATA, etc. | Protocols used in NAS are File server, CIFS (Common Internet File System), etc. |
| For backups and recovery in SAN, the block-by-block copying technique is used. | For backups and recovery in NAS, files are used. |
| SAN gives high performance in high-speed traffic systems. | NAS is not suitable for environments with high-speed traffic. |
| SAN needs more time and effort in organizing and controlling. | NAS is easy to manage and provides a simple interface for organizing and controlling. |
| SAN does not depend on the LAN and uses a high-speed Fibre Channel network. | NAS needs TCP/IP networks and depends on the LAN. |
| Mostly used in enterprise environments. | Applications include small-sized organizations and homes. |
| It has lower latency. | Compared to SAN, NAS has higher latency. |
| SAN supports virtualization. | NAS does not support virtualization. |
| The working of SAN is not affected by network traffic bottlenecks. | The working of NAS is affected by network traffic bottlenecks. |

Server Virtualization

Server virtualization is one of the most important parts of cloud computing. Cloud computing is composed of two words, cloud and computing: cloud means the Internet, and computing means solving problems with the help of computers. Computing is related to the CPU and RAM in the digital world. Now consider a situation: you are using Mac OS on your machine, but a particular application for your project can be operated only on Windows. You can either buy a new machine running Windows or create a virtual environment in which Windows can be installed and used. The second option is better because of its lower cost and easier implementation. This scenario is called virtualization. In it, a virtual CPU, RAM, NIC and other resources are provided to the OS, which it needs in order to run. These resources are virtually provided and controlled by an application called a hypervisor. The new OS running on these virtual hardware resources is collectively called a Virtual Machine (VM).

Figure – Virtualization on local machine

Now migrate this concept to data centers, where lots of servers (machines with fast CPUs, large RAM and enormous storage) are available. The enterprise owning the data center provides the resources requested by customers as per their needs. Data centers have all the resources, and on a user's request, a particular amount of CPU, RAM, NIC and storage with a preferred OS is provided to the user. This concept of virtualization, in which services are requested and provided over the Internet, is called Server Virtualization.

Figure – Server Virtualization

To implement server virtualization, a hypervisor is installed on the server, which manages and allocates the host hardware resources to each virtual machine. This hypervisor sits over the server hardware and regulates the resources of each VM. A user can increase or decrease resources or delete an entire VM as per his/her need. Servers with VMs created on them constitute server virtualization, and the concept of users controlling these VMs through the Internet is called cloud computing.
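
As an illustrative sketch of these operations, the snippet below uses the libvirt Python bindings to grow a guest's resources and then remove it; the domain name "guest1" and the connection URI are assumptions for the example:

```python
import libvirt

# Sketch: grow a guest's resources, or remove the guest entirely
# (assumes a defined domain named "guest1"; names are illustrative).
conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("guest1")

# Raise the configured vCPU count and memory for the next boot
# (the memory value is in KiB and must not exceed the domain's maximum).
dom.setVcpusFlags(4, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
dom.setMemoryFlags(4 * 1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_CONFIG)

# Deleting the VM: stop it if running, then remove its definition.
if dom.isActive():
    dom.destroy()
dom.undefine()
conn.close()
```
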
Advantages of Server Virtualization:
• Each server in server virtualization can be restarted separately without affecting the operation of other virtual servers.
• Server virtualization lowers the cost of hardware by dividing a single server into several virtual private servers.
• One of the major benefits of server virtualization is disaster recovery. In server virtualization, data may be stored and retrieved from any location and moved rapidly and simply from one server to another.
• It enables users to keep their private information in the data centers.
Disadvantages of Server Virtualization:
• The major drawback of server virtualization is that all websites hosted by the server will cease to exist if the server goes offline.
• The effectiveness of virtualized environments is difficult to measure.
• It consumes a significant amount of RAM.
• Setting it up and maintaining it are challenging.
• Virtualization is not supported by many essential databases and apps.

A virtual data center is a more cost-effective, flexible, and practical alternative to an on-prem data center. Instead of relying on physical hardware, a virtual data center allows a company to use cloud-based resources and create a scalable infrastructure that aligns perfectly with operational needs.

This article is an introduction to virtual data centers and the benefits of cloud-based infrastructure. Learn how to take advantage of this strategy and capitalize on the flexibility, scalability, and cost-savings of cloud computing.
What is a Virtual Data Center?

A virtual data center (VDC) is a set of cloud resources that support a business with computing capabilities. A VDC eliminates the need for a company to set up and run an on-prem data center. Another common name for a VDC is a software-defined data center.

A software-based data center offers the same capabilities as its physical counterpart. It allows a business to set up:

• Servers.
• Processing power (CPU).
• Storage clusters (RAM and disk space).
• Networking components and bandwidth.

Like a regular data center, a VDC provides computing capabilities that enable
workloads of business apps and activities, such as:

• File sharing.
• Email operations.
• Productivity apps.
• CRM and ERP platforms.
• Database operations.
• Big data.
• AIOps and machine learning.
• Communication and collaboration apps.
The main upside of virtual data centers is the ability to add or remove capacity without having to set up or take down hardware. Every abstracted component runs on the provider's virtual machine (VM), and the client pays for the usage on a pay-as-you-use basis.

Virtualization of physical components offers a lot of advantages, and companies opt to deploy a VDC in pursuit of:

• Flexible and scalable infrastructure.
• Shorter time-to-market and idea-to-cash cycles.
• High availability.
• Higher levels of IT setup customization.
• Cost reductions (no rental, power, cooling, maintenance, or hardware costs).

Traditional Data Center vs. Virtual Data Center

The table below highlights the main differences between on-prem and virtual data centers.

| Point of comparison | Traditional data center | Virtual data center |
|---|---|---|
| Definition | A facility that houses computer hardware and provides computing capabilities. | A pool of cloud resources that uses virtualization to provide computing capabilities. |
| Costs | A high upfront cost as companies need to buy hardware and rent space. High electricity, cooling, and maintenance costs. | A cost-effective pay-as-you-use payment model. No initial investment necessary. |
| Main investment type | Capital expenditure (CapEx) as companies need to acquire and maintain physical assets. | Operating expenses (OpEx) as the company pays ongoing costs only. |
| Setup speed | Designing and building a data center can take months. Each new piece of hardware requires purchasing, configuring, and racking. | Building a new VDC is typically a matter of days. Adding new VMs and capabilities requires minutes. |
| Hardware dedication | The data center owner makes full use of CPU, memory, storage, and network resources. | A single machine can host multiple VMs, so clients can see performance issues if the vendor's device has too many tenants. |
| Servers | The team deploys physical servers with fixed CPUs, memory, and storage. Limited upgrade options and time-consuming server management. | The team deploys resizable virtual servers that keep up with current workload demands. |
| Networking gear | The team must plan for and set up switch ports, routers, and cabling. | Relies on software-defined networks (SDN) and virtual routers to scale network capacity up or down. |
| Security considerations | Data center security starts with entry restrictions and verifiable access to server racks. The in-house team is in charge of IT-level security, too. | The team focuses on IT-level security, while the provider takes care of physical protection. Most vendors offer services for IT security as well. |
| Security centralization | Hard to implement and manage centralized security. | Has centralized security and management. |
| Staff requirements | Companies need trained personnel to rack and stack equipment. Most companies have separate compute, storage, and network teams. | As little as two or three people can manage a VDC, but staff members require strong expertise. |
| Data center migration | Migration is a slow and expensive project. | Migrating a VDC is quick, simple, and cheap. |
| Workload migration | Difficult to move the workload from one hardware platform to another. | Easy workload migration between hardware platforms. |
| Scalability | Relatively static and predictable, and typically goes one way (adding more equipment). | Dynamic provisioning enables teams to scale the number of VMs up and down with speed and ease. |
| Power consumption | A traditional data center is a large consumer of power. | Users do not cover power expenses. |
| Maintenance complexity | Many repetitive tasks and coordination work, but not a lot of necessary expertise. | Fewer repetitive tasks, but the team requires deep expertise. |
| Backups | Requires backup agents that the team must deploy, patch, and manage. | The hypervisor provides LAN-free and agentless backup services. |
| Server anti-virus management | Each server needs a separate anti-virus program. | Anti-virus operates at the hypervisor level. |
| Firewalls | Firewalls are centrally located and typically not part of the server. | A built-in property of the VM. |
| Disaster recovery | DR occurs on a per-application basis, and every app has a different solution. | DR is a service and enables center-wide strategies. |
| Future planning | Requires accurate estimation of future needs to avoid unnecessary overhead. | The company pays only for the needed capacity and can scale up and down to meet current requirements. No overhead. |
