
Unit 2.

Utility Computing, Elastic Computing, Ajax: asynchronous 'rich' interfaces, Mashups: user interface services, Virtualization Technology: virtualization applications in enterprises, pitfalls of virtualization, Multitenant software: multi-entity support, multischema approach, multitenancy using cloud data stores.

----------------------------------------------------------------------------------------------------------------------------

| Aspect | Utility Computing | Elastic Computing |
| --- | --- | --- |
| Definition | Renting computing resources (e.g., storage, servers) on a pay-per-use basis. | Dynamically adding or removing resources based on real-time demand. |
| Payment Model | Pay only for what you use, like an electricity bill. | Pay based on usage, with resources scaling automatically when needed. |
| Resource Management | Resources are allocated manually based on user requests. | Resources are managed automatically depending on traffic and workload. |
| Scalability | Limited and manual; you need to request more resources if needed. | Highly scalable; the system adjusts resources automatically. |
| Flexibility | Static: you get what you ask for and pay for usage. | Dynamic: resources grow or shrink without manual intervention. |
| Focus | Cost efficiency by paying only for used resources. | Performance optimization through automatic scaling. |
| Usage Scenario | Renting cloud storage (e.g., Google Drive, AWS S3). | Running a web app with unpredictable traffic (e.g., AWS Auto Scaling). |
| Example Analogy | Paying for electricity: you pay only for the units consumed. | Hiring extra pizza staff during rush hours and sending them home afterward. |
| Cloud Services Examples | AWS S3, Google Cloud Storage. | AWS EC2 Auto Scaling, Google Cloud Run. |
In Short:
● Utility Computing: Pay-as-you-go, manually adjusted resources.
● Elastic Computing: Pay-as-you-go, auto-adjusting resources based on demand.

Utility Computing
Utility Computing is a cloud computing model where computational resources—such as
storage, processing power, and applications—are provided to users on a pay-as-you-go basis. It
abstracts the underlying infrastructure, allowing users to access and utilize resources without
the need for upfront capital investments in hardware or software.

Key Characteristics:
1. On-Demand Resource Allocation:
Resources like CPU, memory, storage, and network bandwidth are provisioned as
needed, similar to public utilities like electricity or water.
2. Metered Usage:
Usage is monitored, metered, and billed based on actual consumption (e.g., per CPU
cycle, GB of storage, or bandwidth utilization).
3. Scalability and Elasticity:
Users can scale resources up or down based on workload requirements, ensuring
efficient resource utilization.
4. Abstraction of Infrastructure:
The complexity of the underlying infrastructure is hidden from users, enabling them to
focus on application development rather than infrastructure management.
5. Multi-Tenancy:
Resources are shared among multiple tenants while ensuring isolation and security
through virtualization.

Examples:
● Amazon Web Services (AWS) S3: Pay for storage usage based on the amount of data
stored and the number of requests made.
● Google Cloud Compute Engine: Pay based on the number of virtual machine hours
consumed.
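To make the metered, pay-per-use model above concrete, here is a minimal Python sketch that computes a hypothetical monthly storage bill; the unit prices and usage figures are made-up placeholders, not real provider rates.

```python
# Toy illustration of metered (pay-per-use) billing.
# The unit prices below are hypothetical placeholders, not real provider rates.
PRICE_PER_GB_MONTH = 0.02      # storage, USD per GB-month (assumed)
PRICE_PER_1K_REQUESTS = 0.005  # request charge, USD per 1,000 requests (assumed)

def monthly_bill(gb_stored: float, requests: int) -> float:
    """Return the total charge for one month of usage."""
    storage_cost = gb_stored * PRICE_PER_GB_MONTH
    request_cost = (requests / 1000) * PRICE_PER_1K_REQUESTS
    return round(storage_cost + request_cost, 2)

# 500 GB stored and 2 million requests in the month
print(monthly_bill(500, 2_000_000))  # 10.0 storage + 10.0 requests = 20.0
```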

Elastic Computing
Elastic Computing refers to the capability of a cloud computing environment to automatically
provision and deprovision computing resources in response to fluctuating workloads. It ensures
optimal resource utilization, cost-efficiency, and performance by scaling resources dynamically
based on real-time demand.

Key Characteristics:
1. Dynamic Resource Scaling:
○ Resources (e.g., compute, memory, and storage) are automatically added or
removed based on workload variations.
○ Vertical scaling (adding more resources to an instance) or horizontal scaling
(adding more instances) can be applied.
2. Real-Time Adaptability:
○ The system monitors performance metrics (e.g., CPU usage, network traffic) to
adjust resources without manual intervention.
○ Ensures applications maintain performance under varying loads.
3. Cost Efficiency:
○ Pay only for the resources utilized during peak and non-peak periods.
○ Reduces the need for over-provisioning infrastructure.
4. Fault Tolerance and High Availability:
○ Elastic architectures often use load balancers and redundant resources to handle
failures and maintain availability.
5. Infrastructure as Code (IaC):
○ Resource elasticity is typically configured using declarative templates and
tools such as AWS CloudFormation or Terraform.

Examples:
● AWS EC2 Auto Scaling: Automatically adjusts the number of EC2 instances based on
defined policies.
● Google Kubernetes Engine (GKE): Scales pods up or down according to workload
demand.
● Azure Virtual Machine Scale Sets (VMSS): Adjusts virtual machines based on
performance metrics.
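To illustrate the dynamic scaling described above, the following is a minimal, provider-agnostic sketch of a threshold-based autoscaling decision; the CPU thresholds and instance limits are illustrative assumptions, and real services such as EC2 Auto Scaling or GKE implement far richer policies.

```python
# Minimal sketch of threshold-based horizontal scaling (provider-agnostic).
# The thresholds and instance limits are illustrative assumptions.
SCALE_OUT_CPU = 70.0   # add an instance above this average CPU %
SCALE_IN_CPU = 30.0    # remove an instance below this average CPU %
MIN_INSTANCES, MAX_INSTANCES = 1, 10

def desired_instances(current: int, avg_cpu_percent: float) -> int:
    """Decide the next instance count from the observed average CPU load."""
    if avg_cpu_percent > SCALE_OUT_CPU and current < MAX_INSTANCES:
        return current + 1          # scale out under heavy load
    if avg_cpu_percent < SCALE_IN_CPU and current > MIN_INSTANCES:
        return current - 1          # scale in when load drops
    return current                  # otherwise hold steady

print(desired_instances(3, 85.0))  # 4 -> scale out
print(desired_instances(3, 20.0))  # 2 -> scale in
```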

What is Virtualization?
Virtualization is a technology that creates virtual representations of various computing
resources, allowing more efficient utilization of physical hardware.

● Host Machine: The physical machine on which virtual machines are created is
known as the host machine.
● Guest Machine: The virtual machines created on the host machine are called
guest machines.

Virtualization is a very important concept in cloud computing. A cloud vendor that provides cloud services owns the physical resources, such as servers, storage devices, and network devices, and rents them out so that users do not have to worry about managing physical hardware themselves.

However, renting dedicated physical hardware to each customer is very costly, and most customers would not use that hardware to its full capacity. Virtualization solves this problem.

It is an effective approach that not only makes efficient use of physical resources but also reduces the vendor's costs: a cloud vendor can virtualize a single large server and offer several smaller virtual servers to multiple customers.
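As a rough illustration of carving one large server into smaller guest allocations, here is a simplified Python sketch; the host capacity figures and guest names are invented, and a real hypervisor of course does far more than this bookkeeping.

```python
# Simplified illustration of slicing one physical host into smaller guest VMs.
# A real hypervisor does far more (scheduling, isolation, device emulation).

HOST = {"cpu_cores": 64, "ram_gb": 256}   # assumed host capacity
guests = []                               # allocated guest machines

def create_guest(name: str, cpu_cores: int, ram_gb: int) -> bool:
    """Allocate a guest VM if the host still has enough free capacity."""
    used_cpu = sum(g["cpu_cores"] for g in guests)
    used_ram = sum(g["ram_gb"] for g in guests)
    if used_cpu + cpu_cores > HOST["cpu_cores"] or used_ram + ram_gb > HOST["ram_gb"]:
        return False                      # not enough capacity left
    guests.append({"name": name, "cpu_cores": cpu_cores, "ram_gb": ram_gb})
    return True

print(create_guest("customer-a", 8, 32))   # True
print(create_guest("customer-b", 16, 64))  # True
```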

Characteristics of Virtualization
1. Isolation
2. Encapsulation
3. Hardware Independence
4. Resource Sharing
5. Snapshot and Cloning
6. Dynamic Resource Allocation
7. High Availability and Fault Tolerance
8. Scalability

Pitfalls of Virtualization

Slower Performance:
Virtual machines (VMs) can be slower than regular computers because they share
resources and add an extra layer of software.

Resource Sharing Issues:


If one VM uses too much CPU, memory, or storage, it can slow down other VMs
running on the same machine.

Complex to Manage:
Managing multiple VMs, especially in big systems, can get complicated and requires
skilled people.

Security Risks:
Hackers can exploit vulnerabilities in the virtualization layer (hypervisor) to access
sensitive data.

High Costs:
Even though you save on hardware, virtualization software and licenses can be
expensive.

Slower Input/Output (I/O) Operations:


Applications that handle lots of data might run slower because virtualized storage and
networks aren't as fast as physical ones.

Backup and Snapshots Can Slow Systems:


Taking backups or snapshots of VMs during busy times can cause performance issues.

What is Cloud Analytics?

Cloud analytics is the practice of storing and processing data in the cloud to obtain useful business insights. As with on-premises data analytics, algorithms are applied to large data sets to find patterns, forecast outcomes, and produce other information useful to business decision-makers.

In other words, cloud analytics means applying analytic algorithms to data stored in a private or public cloud and then delivering the desired outcome.

Types of Cloud Analytics


There are three main types:

1. Public Cloud: Like a public library, anyone can access the computing power to
crunch data. It’s affordable but less secure.

2. Private Cloud: Imagine a private study room. Only you have access to the
computing power, making it very secure but expensive.

3. Hybrid Cloud: A mix of public and private clouds. You can store sensitive data in
the private room and analyze other data in the public library.

Key Components:

1. Data Collection: Ingesting data from various sources (e.g., databases, IoT
devices, and applications).
2. Data Storage: Storing data in cloud-based repositories like Amazon S3 or
Google Cloud Storage.
3. Data Processing: Utilizing frameworks like Apache Spark or AWS Glue to
process and clean the data.
4. Data Analysis: Applying analytical models using tools like Google BigQuery or
Azure Synapse Analytics.
5. Data Visualization: Presenting insights through dashboards using services like
Power BI or Looker.
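The tiny end-to-end sketch below walks through the collection, processing, and analysis steps above on an in-memory sample; the data values are invented, and a real pipeline would read from cloud storage and use engines such as Spark or BigQuery.

```python
# Minimal in-memory sketch of the collect -> process -> analyze steps above.
# Real pipelines would pull from cloud storage (e.g., S3) and use Spark/BigQuery.

raw_events = [                                   # 1. Data collection (assumed sample)
    {"region": "north", "patients": 120},
    {"region": "south", "patients": None},       # a dirty record
    {"region": "north", "patients": 80},
]

clean = [e for e in raw_events if e["patients"] is not None]   # 3. Data processing

totals = {}                                      # 4. Data analysis: aggregate per region
for e in clean:
    totals[e["region"]] = totals.get(e["region"], 0) + e["patients"]

for region, total in sorted(totals.items()):     # 5. Visualization stand-in: print a report
    print(f"{region}: {total} patients")
```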

Benefits:

● Scalability: Easily handle growing datasets.
● Accessibility: Access data and reports from any device with internet connectivity.
● Cost Efficiency: Pay only for the resources used.

Use Case:
A healthcare provider might use cloud analytics to analyze patient data from different
hospitals, helping identify disease trends and improve treatment strategies across
regions.

Block-Level Storage Virtualization

1. Block storage in System Design


Block storage is a technique for keeping data in chunks or blocks of an exact size.
Every block has its own address, functions independently, and can store any kind of
data. Block storage lacks a predetermined structure, unlike file storage, which arranges
data in a hierarchy. It is frequently utilized in systems like databases and virtual
machines that require great performance and scalability.

Key features of Block Storage


1. High Performance: Block storage is perfect for high-performance applications
since it is designed for quick read/write operations.
2. Flexibility: Since it does not impose a particular structure, it allows data to be
stored in any format.
3. Scalability: Blocks can be added or removed easily to scale the storage up or
down.
4. Independence: Each block operates independently, enabling precise control and
management of data.
5. Use in Distributed Systems: Block storage can be distributed across multiple
servers for redundancy and improved performance.

Example of Block Storage

Consider a cloud-based database service where you need to store a large amount of
structured data. The data is broken into smaller pieces (blocks) and distributed across a
storage area network. When you access the database, the system retrieves the required
blocks and reassembles them into meaningful data for your application. Amazon Elastic
Block Store (EBS) is a real-world example of block storage.
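The following toy Python sketch mimics the behaviour described in this example: data is split into fixed-size, addressable blocks and reassembled on read. The 4-byte block size is purely illustrative; real block devices such as EBS operate at a much lower level.

```python
# Toy sketch of block storage: split data into fixed-size, addressable blocks
# and reassemble them on read. Real systems (e.g., AWS EBS) work at the device level.

BLOCK_SIZE = 4  # bytes per block (tiny, for illustration only)

def write_blocks(data: bytes) -> dict[int, bytes]:
    """Split data into fixed-size blocks keyed by block address."""
    return {addr: data[i:i + BLOCK_SIZE]
            for addr, i in enumerate(range(0, len(data), BLOCK_SIZE))}

def read_blocks(blocks: dict[int, bytes]) -> bytes:
    """Reassemble the original data by reading blocks in address order."""
    return b"".join(blocks[addr] for addr in sorted(blocks))

blocks = write_blocks(b"hello block storage")
print(blocks[0])               # b'hell' -- each block is addressed independently
print(read_blocks(blocks))     # b'hello block storage'
```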

2. Object Storage in System Design


With object storage, data is kept as discrete units known as “objects.” Each object
contains a unique identifier, metadata (information about the data), and the actual data.
Object storage is therefore very flexible, scalable, and well suited to storing vast
amounts of unstructured data, such as backups, videos, and pictures.
● Object storage doesn’t use fixed-sized blocks or a hierarchical file system like file
or block storage does.
● Instead, it organizes data into a flat structure, which is easier to scale and
manage in distributed environments.

Key features of Object Storage


1. Scalability: Object storage is perfect for cloud applications because it can
manage massive volumes of data.
2. Metadata Richness: Metadata is stored in every object to help with data
management, indexing, and searching.
3. Global Accessibility: Objects can be accessed via HTTP/HTTPS, making it
suitable for web-based applications.
4. Cost-Effective for Unstructured Data: Large amounts of unstructured data, such
as logs, media files, and backups, are ideal for this storage.
5. Resilient and Durable: To provide stability and fault tolerance, object storage
systems frequently duplicate data across different locations.

Example of Object Storage

Imagine you have a video streaming platform where users upload thousands of videos
daily. Each video is stored as an object along with its metadata (e.g., title, description,
and upload date).

The unique identifier for each video makes it easy to retrieve and manage.
Services like Amazon S3 and Google Cloud Storage use object storage to handle such
use cases.
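As a hedged sketch of how such an upload might look with the AWS SDK, the snippet below stores a video as an S3 object keyed by its identifier, with the title and description attached as metadata. It assumes boto3 is installed, AWS credentials are configured, and a bucket named "example-video-bucket" already exists; all of these are assumptions for illustration.

```python
# Sketch of storing a video as an object with metadata in Amazon S3 via boto3.
# Assumes boto3 is installed, AWS credentials are configured, and a bucket
# named "example-video-bucket" already exists (all assumptions).
import boto3

s3 = boto3.client("s3")

def upload_video(video_id: str, data: bytes, title: str, description: str) -> None:
    """Store the video bytes as an object, keyed by its unique identifier."""
    s3.put_object(
        Bucket="example-video-bucket",
        Key=f"videos/{video_id}.mp4",          # unique identifier for the object
        Body=data,
        Metadata={"title": title, "description": description},  # user-defined metadata
    )

def fetch_video(video_id: str) -> bytes:
    """Retrieve the object by its key and return the raw bytes."""
    resp = s3.get_object(Bucket="example-video-bucket", Key=f"videos/{video_id}.mp4")
    return resp["Body"].read()
```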

3. File Storage in System Design


Similar to how we arrange files on a computer, file storage is a conventional technique
of storing data in a hierarchical system of files and folders. Every file has a name and
directory path, which helps access and navigation. Applications that need regular
updates and organized data management are best suited for it.

Key features of File Storage


1. Hierarchical Organization: Data is stored in a clear folder-and-file structure,
making it easy to locate and manage.
2. Simplicity: File storage systems are easy to set up and use for small-scale
applications.
3. Compatibility: Works well with legacy applications and systems that require
traditional file access methods.
4. Shared Access: Supports multi-user environments with file permissions and
version control.
5. Data Integrity: Ensures consistency and integrity through locking mechanisms
during file updates.

Example of File Storage

Consider a team working on a shared document. Each team member can access, edit,
and save the document stored on a shared network drive. The file system keeps track
of the file’s location and changes. Network-attached storage (NAS) devices commonly
use file storage for shared access in small to medium-scale environments.
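A small sketch of hierarchical file storage using only Python's standard library is shown below; the folder names and file contents are invented for illustration.

```python
# Small sketch of hierarchical file storage using Python's standard library.
# The directory layout and contents are invented purely for illustration.
from pathlib import Path

root = Path("shared_drive") / "projects" / "report"   # folder hierarchy
root.mkdir(parents=True, exist_ok=True)

doc = root / "draft.txt"                               # file identified by its path
doc.write_text("Quarterly report, version 1\n")

# Any team member (process) with access can locate the file by the same path.
print(doc.read_text())
print([p.name for p in root.iterdir()])                # list the folder contents
```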

| Aspect | Block Storage | Object Storage | File Storage |
| --- | --- | --- | --- |
| Storage Structure | Divides data into fixed-size blocks, each with a unique identifier. | Stores data as objects with metadata and a unique ID in a flat structure. | Organizes data in a hierarchical structure of files and folders. |
| Use Case | Ideal for databases, virtual machines, and transactional workloads requiring high performance. | Best for storing large amounts of unstructured data, like multimedia files or backups. | Suitable for structured file storage and shared file access, such as documents and spreadsheets. |
| Performance | High performance and low latency, especially for read/write operations. | Optimized for scalability and durability, not real-time performance. | Moderate performance; dependent on file system and storage device. |
| Scalability | Scales well but may require manual configuration for capacity expansion. | Highly scalable; can handle massive amounts of data across distributed systems. | Limited scalability compared to object storage; suitable for smaller systems. |
| Metadata Handling | Minimal metadata, often handled by the application layer. | Extensive metadata stored with each object, enabling advanced search and analytics. | Basic metadata, such as file name, type, and permissions. |
| Durability | Requires manual backup or snapshot configurations for data durability. | Highly durable with built-in redundancy across multiple locations. | Data durability depends on the underlying file system and backup strategies. |
| Examples | AWS EBS, Google Persistent Disks, SAN (Storage Area Network). | AWS S3, Azure Blob Storage, Google Cloud Storage. | Network Attached Storage (NAS), Shared Drives, Local File Systems. |

Fault Tolerance in Cloud Computing

Fault tolerance in cloud computing refers to a system's ability to keep running even when a software or hardware component malfunctions or goes down. It is critical for increasing a system's reliability and keeping it useful to users in all circumstances.

As with fault tolerance in any distributed system, the available resources and probable points of failure must be continuously monitored.

Fault tolerance in cloud computing is essentially a blueprint for continuing operation when some components fail or become unavailable.

It helps businesses assess their infrastructure needs and requirements, and it keeps services available if the relevant equipment becomes unavailable for whatever reason.

The capacity of a system to recognize errors and recover without failing can be provided by hardware, software, or a combined approach that uses load balancers. As a result, fault tolerance solutions are most commonly used for mission-critical applications or systems.

Main Concepts behind Fault Tolerance in Cloud Computing System


1. Redundancy: When a system part fails or goes down, it is critical to have
backup systems. For example, a website that uses MS SQL as its database may
fail mid-operation due to a hardware issue. When the original database goes
offline, a backup database must take over as part of the redundancy strategy.

2. Replication: A fault-tolerant system operates on the principle of running
multiple replicas of every service. If one component of the system fails, the
additional instances can be used to keep the service operational.
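The sketch below illustrates the replication idea in miniature: a query is sent to the first healthy replica and fails over to the next one when a replica is down. The endpoint names are invented, and the "database call" is a stand-in function rather than a real client.

```python
# Minimal sketch of replication-based fault tolerance: try each replica in turn
# and fall back to the next one when a replica is down. Endpoints are invented.

class ReplicaDown(Exception):
    """Raised when a replica cannot serve the request."""

def query_replica(endpoint: str, sql: str) -> str:
    """Stand-in for a real database call; here the primary is pretending to fail."""
    if endpoint == "db-primary":
        raise ReplicaDown(f"{endpoint} is unavailable")
    return f"result of '{sql}' from {endpoint}"

def fault_tolerant_query(sql: str) -> str:
    """Send the query to the first healthy replica."""
    for endpoint in ["db-primary", "db-replica-1", "db-replica-2"]:
        try:
            return query_replica(endpoint, sql)
        except ReplicaDown:
            continue                      # fail over to the next replica
    raise RuntimeError("all replicas are down")

print(fault_tolerant_query("SELECT COUNT(*) FROM orders"))
```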

Cloud Ecosystem

A cloud ecosystem is a complex system of cloud services, platforms, and infrastructure used for the storage, processing, and distribution of data and applications over the Internet.

It consists of multiple parts, including cloud providers, software developers, users, and other services, which are integrated into a productive and adaptable architecture of computing assets.

This ecosystem makes it possible for businesses and individuals to lease computing capacity on demand, supporting flexibility, innovation, and cost control in the digital landscape.

Service Models in a Cloud Ecosystem


1. Infrastructure as a Service (IaaS):
In IaaS, the provider rents out computing infrastructure over the internet. Resources
such as virtual machines, disks, storage, and network equipment are available for
hire with no predefined time limits.
The end user or client controls the operating system, applications, and development
frameworks, while the cloud provider manages the network, storage, and server
facilities.

2. Platform as a Service (PaaS):
PaaS provides a platform on which developers can build and host their applications
without having to worry about the underlying hardware that supports them.
Developers can focus on their code and applications, while the cloud provider
operates the runtime environments, databases, and middleware.

3. Software as a Service (SaaS):
SaaS is a model in which software applications are delivered over the web and paid
for on a per-period (subscription) basis. They are accessed through a browser or
APIs and do not require installation, updates, or other management on local devices.
Under SaaS licensing models, the cloud provider deploys the software and manages
the infrastructure, databases, and application code, while the user simply accesses
the software as a service.
