Cloud Computing Unit 2
Utility Computing vs. Elastic Computing

| Aspect | Utility Computing | Elastic Computing |
|---|---|---|
| Payment Model | Pay only for what you use, like an electricity bill. | Pay based on usage, with resources scaling automatically when needed. |
| Scalability | Limited and manual; you need to request more resources if needed. | Highly scalable; the system adjusts resources automatically. |
| Flexibility | Static: you get what you ask for and pay for usage. | Dynamic: resources grow or shrink without manual intervention. |
| Usage Scenario | Renting cloud storage (e.g., Google Drive, AWS S3). | Running a web app with unpredictable traffic (e.g., AWS Auto Scaling). |
| Example Analogy | Paying for electricity: you pay only for the units consumed. | Hiring extra pizza staff during rush hours and sending them home afterward. |
| Cloud Services Examples | AWS S3, Google Cloud Storage. | AWS EC2 Auto Scaling, Google Cloud Run. |
In Short:
● Utility Computing: Pay-as-you-go, manually adjusted resources.
● Elastic Computing: Pay-as-you-go, auto-adjusting resources based on demand.
Utility Computing
Utility Computing is a cloud computing model where computational resources—such as
storage, processing power, and applications—are provided to users on a pay-as-you-go basis. It
abstracts the underlying infrastructure, allowing users to access and utilize resources without
the need for upfront capital investments in hardware or software.
Key Characteristics:
1. On-Demand Resource Allocation:
Resources like CPU, memory, storage, and network bandwidth are provisioned as
needed, similar to public utilities like electricity or water.
2. Metered Usage:
Usage is monitored, metered, and billed based on actual consumption (e.g., per CPU
cycle, GB of storage, or bandwidth utilization).
3. Scalability and Elasticity:
Users can scale resources up or down based on workload requirements, ensuring
efficient resource utilization.
4. Abstraction of Infrastructure:
The complexity of the underlying infrastructure is hidden from users, enabling them to
focus on application development rather than infrastructure management.
5. Multi-Tenancy:
Resources are shared among multiple tenants while ensuring isolation and security
through virtualization.
Examples:
● Amazon Web Services (AWS) S3: Pay for storage usage based on the amount of data
stored and the number of requests made.
● Google Cloud Compute Engine: Pay based on the number of virtual machine hours
consumed.
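To make the metered-usage idea concrete, here is a minimal billing sketch showing how a pay-as-you-go charge could be computed from recorded consumption. The rates and usage figures are hypothetical placeholders, not actual AWS or Google Cloud pricing.

```python
# Hypothetical pay-as-you-go billing: charge only for what was consumed.
# Rates below are illustrative placeholders, not real provider prices.
RATES = {
    "storage_gb_month": 0.023,   # per GB stored per month
    "requests_per_1000": 0.005,  # per 1,000 API requests
    "vm_hours": 0.045,           # per virtual machine hour
}

def monthly_bill(usage: dict) -> float:
    """Compute a utility-style bill from metered usage."""
    cost = usage.get("storage_gb_month", 0) * RATES["storage_gb_month"]
    cost += usage.get("requests", 0) / 1000 * RATES["requests_per_1000"]
    cost += usage.get("vm_hours", 0) * RATES["vm_hours"]
    return round(cost, 2)

# Example: 500 GB stored, 2 million requests, 300 VM hours in one month.
print(monthly_bill({"storage_gb_month": 500, "requests": 2_000_000, "vm_hours": 300}))
```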
Elastic Computing
Elastic Computing refers to the capability of a cloud computing environment to automatically
provision and deprovision computing resources in response to fluctuating workloads. It ensures
optimal resource utilization, cost-efficiency, and performance by scaling resources dynamically
based on real-time demand.
Key Characteristics:
1. Dynamic Resource Scaling:
○ Resources (e.g., compute, memory, and storage) are automatically added or
removed based on workload variations.
○ Vertical scaling (adding more resources to an instance) or horizontal scaling
(adding more instances) can be applied.
2. Real-Time Adaptability:
○ The system monitors performance metrics (e.g., CPU usage, network traffic) to
adjust resources without manual intervention.
○ Ensures applications maintain performance under varying loads.
3. Cost Efficiency:
○ Pay only for the resources utilized during peak and non-peak periods.
○ Reduces the need for over-provisioning infrastructure.
4. Fault Tolerance and High Availability:
○ Elastic architectures often use load balancers and redundant resources to handle
failures and maintain availability.
5. Infrastructure as Code (IaC):
○ Resource elasticity is typically configured using declarative configurations and
cloud services like AWS CloudFormation or Terraform.
Examples:
● AWS EC2 Auto Scaling: Automatically adjusts the number of EC2 instances based on
defined policies.
● Google Kubernetes Engine (GKE): Scales pods up or down according to workload
demand.
● Azure Virtual Machine Scale Sets (VMSS): Adjusts virtual machines based on
performance metrics.
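The scaling policies mentioned above ultimately come down to comparing a monitored metric against thresholds and adding or removing instances. The sketch below shows that decision logic in plain Python; the thresholds and instance limits are illustrative assumptions, not any provider's API or defaults.

```python
# Minimal horizontal-scaling decision logic (illustrative only).
MIN_INSTANCES, MAX_INSTANCES = 2, 10
SCALE_OUT_AT, SCALE_IN_AT = 70.0, 30.0  # average CPU % thresholds

def desired_instance_count(current: int, avg_cpu: float) -> int:
    """Return the new instance count based on the observed CPU load."""
    if avg_cpu > SCALE_OUT_AT and current < MAX_INSTANCES:
        return current + 1          # scale out under heavy load
    if avg_cpu < SCALE_IN_AT and current > MIN_INSTANCES:
        return current - 1          # scale in when load drops
    return current                  # otherwise leave the fleet unchanged

# Example: 3 running instances at 85% average CPU -> scale out to 4.
print(desired_instance_count(3, 85.0))
```

In a real deployment the metric would come from a monitoring service (e.g., CloudWatch) and the count change would be applied by the autoscaler, not by application code.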
What is Virtualization?
Virtualization is a technology that creates virtual representations of various computing
resources, allowing more efficient utilization of physical hardware.
● Host Machine: The machine on which the virtual machine is going to be created
is known as Host Machine.
● Guest Machine: The virtual machines that are created on the Host Machine are
called Guest Machines.
Renting out a dedicated physical server to each customer is expensive, and most customers never use its full capacity. Virtualization solves this problem: it makes efficient use of physical hardware and reduces costs for vendors. A cloud vendor can virtualize a single large server and offer several smaller virtual servers to multiple customers.
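As a rough illustration of carving one large physical server into smaller virtual machines, the sketch below models a host with fixed CPU and RAM capacity and allocates slices of it to guests. The sizes and class names are hypothetical; real hypervisors such as KVM or VMware do this at the hardware level.

```python
# Toy model of a host machine partitioned into guest VMs (illustrative only).
from dataclasses import dataclass, field

@dataclass
class Host:
    cpus: int
    ram_gb: int
    guests: list = field(default_factory=list)

    def create_guest(self, name: str, cpus: int, ram_gb: int) -> bool:
        """Allocate a guest VM only if the host still has spare capacity."""
        used_cpus = sum(g["cpus"] for g in self.guests)
        used_ram = sum(g["ram_gb"] for g in self.guests)
        if used_cpus + cpus > self.cpus or used_ram + ram_gb > self.ram_gb:
            return False  # not enough physical resources left
        self.guests.append({"name": name, "cpus": cpus, "ram_gb": ram_gb})
        return True

# One big 32-core / 128 GB server shared by several smaller tenants.
host = Host(cpus=32, ram_gb=128)
print(host.create_guest("tenant-a", 8, 32))   # True
print(host.create_guest("tenant-b", 16, 64))  # True
print(host.create_guest("tenant-c", 16, 64))  # False: exceeds capacity
```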
Characteristics of Virtualization
1. Isolation
2. Encapsulation
3. Hardware Independence
4. Resource Sharing
5. Snapshot and Cloning
6. Dynamic Resource Allocation
7. High Availability and Fault Tolerance
8. Scalability
Pitfalls of Virtualization
Slower Performance:
Virtual machines (VMs) can be slower than regular computers because they share
resources and add an extra layer of software.
Complex to Manage:
Managing multiple VMs, especially in big systems, can get complicated and requires
skilled people.
Security Risks:
Hackers can exploit vulnerabilities in the virtualization layer (hypervisor) to access
sensitive data.
High Costs:
Even though you save on hardware, virtualization software and licenses can be
expensive.
Cloud Analytics
Cloud analytics is the practice of applying analytic algorithms to data stored in a private or public cloud and delivering the desired outcome.
1. Public Cloud: Like a public library, anyone can access the computing power to
crunch data. It’s affordable but less secure.
2. Private Cloud: Imagine a private study room. Only you have access to the
computing power, making it very secure but expensive.
3. Hybrid Cloud: A mix of public and private clouds. You can store sensitive data in
the private room and analyze other data in the public library.
Key Components:
1. Data Collection: Ingesting data from various sources (e.g., databases, IoT
devices, and applications).
2. Data Storage: Storing data in cloud-based repositories like Amazon S3 or
Google Cloud Storage.
3. Data Processing: Utilizing frameworks like Apache Spark or AWS Glue to
process and clean the data.
4. Data Analysis: Applying analytical models using tools like Google BigQuery or
Azure Synapse Analytics.
5. Data Visualization: Presenting insights through dashboards using services like
Power BI or Looker.
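As a small end-to-end sketch of the collection, processing, and analysis steps above, the snippet below pulls a CSV file from an S3 bucket and aggregates it with pandas. The bucket name, object key, and column names are hypothetical; at larger scale a framework such as Spark, Glue, or BigQuery would take the place of pandas.

```python
# Minimal cloud-analytics pipeline sketch (hypothetical bucket/key/columns).
import boto3
import pandas as pd

def analyze_patient_visits(bucket: str, key: str) -> pd.DataFrame:
    """Fetch raw data from object storage, clean it, and aggregate it."""
    s3 = boto3.client("s3")
    obj = s3.get_object(Bucket=bucket, Key=key)      # data collection
    df = pd.read_csv(obj["Body"])                    # data ingestion
    df = df.dropna(subset=["region", "diagnosis"])   # data processing / cleaning
    # Data analysis: count visits per diagnosis in each region.
    return df.groupby(["region", "diagnosis"]).size().reset_index(name="visits")

# Example call (bucket and key are placeholders):
# summary = analyze_patient_visits("my-analytics-bucket", "visits/2024.csv")
# summary.to_csv("regional_trends.csv", index=False)  # feed a dashboard/BI tool
```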
Benefits:
● Scalability: analytics capacity grows with the volume of data.
● Cost efficiency: pay only for the storage and compute actually used.
● Accessibility: insights are available from anywhere through managed services.
Use Case:
A healthcare provider might use cloud analytics to analyze patient data from different
hospitals, helping identify disease trends and improve treatment strategies across
regions.
Block Storage
Consider a cloud-based database service where you need to store a large amount of structured data. The data is broken into smaller pieces (blocks) and distributed across a storage area network. When you access the database, the system retrieves the required blocks and reassembles them into meaningful data for your application, as the sketch below illustrates. Amazon Elastic Block Store (EBS) is a real-world example of block storage.
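The following sketch imitates what a block storage layer does conceptually: it splits a byte stream into fixed-size blocks, stores them by index, and reassembles them on read. The block size and function names are arbitrary choices for illustration; systems like EBS do this transparently below the file system.

```python
# Conceptual sketch of block storage: fixed-size blocks, reassembled on read.
BLOCK_SIZE = 4  # bytes; real block storage typically uses 4 KiB or larger

def split_into_blocks(data: bytes) -> dict[int, bytes]:
    """Break data into fixed-size blocks keyed by block number."""
    return {i: data[off:off + BLOCK_SIZE]
            for i, off in enumerate(range(0, len(data), BLOCK_SIZE))}

def reassemble(blocks: dict[int, bytes]) -> bytes:
    """Fetch the blocks in order and join them back into the original data."""
    return b"".join(blocks[i] for i in sorted(blocks))

blocks = split_into_blocks(b"structured database record")
assert reassemble(blocks) == b"structured database record"
print(len(blocks), "blocks stored")
```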
Object Storage
Imagine you have a video streaming platform where users upload thousands of videos
daily. Each video is stored as an object along with its metadata (e.g., title, description,
and upload date).
The unique identifier for each video makes it easy to retrieve and manage.
Services like Amazon S3 and Google Cloud Storage use object storage to handle such
use cases.
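For the video platform example, here is a hedged sketch of how an object and its metadata might be written to and read back from an S3-compatible store. The bucket name, object key, file name, and metadata fields are hypothetical placeholders.

```python
# Storing a video as an object with metadata (bucket, key, and file are placeholders).
import boto3

s3 = boto3.client("s3")

# Upload: the object key acts as the unique identifier for the video.
with open("cat_compilation.mp4", "rb") as video:
    s3.put_object(
        Bucket="video-platform-uploads",
        Key="videos/2024/cat_compilation.mp4",
        Body=video,
        Metadata={"title": "Cat Compilation", "uploaded": "2024-05-01"},
    )

# Retrieve: look up the object and its metadata by that same identifier.
obj = s3.head_object(Bucket="video-platform-uploads",
                     Key="videos/2024/cat_compilation.mp4")
print(obj["Metadata"])  # {'title': 'Cat Compilation', 'uploaded': '2024-05-01'}
```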
File Storage
Consider a team working on a shared document. Each team member can access, edit,
and save the document stored on a shared network drive. The file system keeps track
of the file’s location and changes. Network-attached storage (NAS) devices commonly
use file storage for shared access in small to medium-scale environments.
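In contrast, file storage is navigated through a path hierarchy, and each file carries basic metadata such as its name, size, and permissions. A short sketch using Python's standard library (the mount path shown is hypothetical) illustrates this:

```python
# Browsing a hierarchical, file-storage style share; the path is hypothetical.
import stat
from pathlib import Path

share = Path("/mnt/team-share/projects")           # e.g., a NAS mount point

for path in share.rglob("*.docx"):                 # walk the folder hierarchy
    info = path.stat()
    print(path,                                    # full path inside the tree
          info.st_size,                            # size in bytes
          stat.filemode(info.st_mode))             # permissions, e.g. -rw-r--r--
```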
| Aspect | Block Storage | Object Storage | File Storage |
|---|---|---|---|
| Storage Structure | Divides data into fixed-size blocks, each with a unique identifier. | Stores data as objects with metadata and a unique ID in a flat structure. | Organizes data in a hierarchical structure of files and folders. |
| Scalability | Scales well but may require manual configuration for capacity expansion. | Highly scalable; can handle massive amounts of data across distributed systems. | Limited scalability compared to object storage; suitable for smaller systems. |
| Metadata Handling | Minimal metadata, often handled by the application layer. | Extensive metadata stored with each object, enabling advanced search and analytics. | Basic metadata, such as file name, type, and permissions. |
| Examples | AWS EBS, Google Persistent Disks, SAN (Storage Area Network). | AWS S3, Azure Blob Storage, Google Cloud Storage. | Network Attached Storage (NAS), Shared Drives, Local File Systems. |
Fault Tolerance in Cloud Computing
Fault tolerance in cloud computing refers to a system's ability to keep running even when a software or hardware component malfunctions and goes down. It is critical for increasing a system's reliability and keeping it useful to users in all circumstances. As with fault tolerance in any distributed system, available resources and probable breakdowns must be monitored continuously.
Fault tolerance in cloud computing is essentially a blueprint for continuing to work when some components fail or become unavailable.
A system's capacity to recognize errors and recover without failing can be managed in hardware, in software, or with a mixed approach that uses load balancers. As a result, fault tolerance solutions are most commonly used for mission-critical applications or systems.
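One common building block of fault-tolerant designs is failover: if one redundant replica is down, the request is retried against the next one. The sketch below shows that idea in plain Python with hypothetical endpoint URLs; production systems would rely on load balancers and health checks rather than application code.

```python
# Simple failover across redundant replicas (endpoint URLs are hypothetical).
import urllib.request

REPLICAS = [
    "https://app-primary.example.com/health",
    "https://app-standby-1.example.com/health",
    "https://app-standby-2.example.com/health",
]

def call_with_failover(urls: list[str], timeout: float = 2.0) -> str:
    """Try each replica in turn and return the first successful response."""
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read().decode()
        except OSError:
            continue  # this replica is unreachable; fail over to the next one
    raise RuntimeError("all replicas unavailable")

# Example call (the placeholder endpoints above would all fail):
# print(call_with_failover(REPLICAS))
```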
Cloud Ecosystem
A cloud ecosystem consists of multiple parts: cloud providers, software developers, users, and other services, all integrated into a productive and adaptable architecture for delivering computing resources.