Cloud Computing Course Plan
Hey there, I'll provide you the topics of cloud computing in this chat, and you have to provide me the
industry-level course, notes, code, diagrams, and examples. Explain everything in as much
detail as possible. Try to explain in simple and logical terms,
and don't start providing anything yet; wait for me to provide the topics.
✅
Absolutely!
I'm ready. Send me the topics, and for each topic I'll create an industry-level course plan, detailed
notes, runnable code, diagrams, and real-world examples, with simple and logical Hinglish
explainers.
When you send the topics, also mention these 3 things (so the content is perfectly tailored):
Bro English
Got it✅
Send me the topics, and I’ll create an industry-level course for each, including:
What is Cloud Computing?
Alright — let’s start with the industry-level, simple yet deep explanation of Cloud Computing 🚀
1. Definition (Simple + Professional)
Cloud Computing is the delivery of computing services — such as servers, storage, databases,
networking, software, analytics, and AI — over the internet (“the cloud”) instead of using local computers
or physical on-premises servers.
You rent resources from a cloud provider like AWS, Azure, or GCP and pay only for what you use.
Think of it like Netflix for IT infrastructure — you don’t buy DVDs (servers), you stream (use) on
demand.
Example:
E-commerce sites scale up during Black Friday sales and scale down after.
4. Deployment Models
Public Cloud – Shared infrastructure (AWS, Azure, GCP).
Hybrid Cloud – Combination of public + private cloud.
┌─────────────────────────────────────┐
│ Front-End │
│ (Browser, Mobile App, API Client) │
└─────────────────────────────────────┘
│
▼
┌─────────────────────────────────────┐
│ Internet │
└─────────────────────────────────────┘
│
▼
┌─────────────────────────────────────┐
│ Back-End │
│ Cloud Infrastructure: │
│ - Compute (VMs, Containers) │
│ - Storage (Block, Object) │
│ - Databases (SQL, NoSQL) │
│ - Networking │
│ - AI/ML Services │
└─────────────────────────────────────┘
│
▼
┌─────────────────────────────────────┐
│ Cloud Provider Management │
│ (Monitoring, Billing, Scaling) │
└─────────────────────────────────────┘
import boto3

# Create an EC2 resource handle (assumes AWS credentials are already configured)
ec2 = boto3.resource('ec2')

instance = ec2.create_instances(
    ImageId='ami-0abcdef1234567890',  # Example AMI
    MinCount=1,
    MaxCount=1,
    InstanceType='t2.micro',
    KeyName='my-key-pair'
)
Uses AWS SDK (boto3) to create a new virtual machine in the cloud.
You can install your app on it and make it accessible over the internet.
7. Real-World Example
Netflix uses AWS to stream movies worldwide — scaling up during peak hours.
Instagram runs its backend services on AWS and uses cloud storage for billions of photos.
If you want, I can now create a short “exam-ready” notes version + real interview Q&A for Cloud
Computing so you can quickly revise.
Do you want me to do that before moving to the next topic?
On-Demand Self-Service
Alright — here’s the industry-level, simple, and detailed explanation of On-Demand Self-Service in
Cloud Computing 🚀
1. Definition (Simple + Professional)
On-Demand Self-Service is a core feature of cloud computing where you can provision (set up)
computing resources automatically, without requiring manual approval or human interaction
with the cloud provider.
This means:
If you need a server, storage space, or database — you can get it instantly via a web portal, CLI, or
API.
You don’t need to call the cloud provider’s sales or support team.
Think of it like ordering food from Swiggy/Zomato instead of calling the restaurant — you choose
what you want, pay, and get it delivered instantly.
They had to purchase hardware, wait for delivery, install, configure — could take weeks.
Perfect for startups, dev teams, and enterprises needing speed & flexibility.
Example:
A developer needs a test environment → spins up a VM in AWS EC2 in 2 minutes → deletes it after
testing → pays only for those 2 minutes.
[ User / Developer ]
│
▼
[ Self-Service Portal / CLI / API ]
│
▼
[ Cloud Management Layer ]
│
▼
[ Automated Resource Provisioning ]
│
▼
[ Compute / Storage / Network Resources Ready to Use ]
4. Industry Example
AWS EC2: Create a new server from the AWS Console in under 60 seconds.
Google Cloud Storage: Create a storage bucket instantly without emailing support.
Azure Database: Provision an SQL Database in minutes via the Azure portal.
import boto3
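# NOTE: rough sketch only -- the AMI ID and key pair below are placeholders,
# and boto3 credentials are assumed to be configured.
ec2 = boto3.resource('ec2')

# Self-service: request a VM through the API, use it, then delete it yourself
instances = ec2.create_instances(
    ImageId='ami-0abcdef1234567890',
    MinCount=1, MaxCount=1,
    InstanceType='t2.micro',
    KeyName='my-key-pair'
)
instances[0].wait_until_running()
print("VM ready:", instances[0].id)

instances[0].terminate()  # stop paying the moment you are done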
You send a request to AWS via API → AWS automatically provisions a VM → no human approval
needed.
E-commerce companies spin up extra servers during high traffic and scale down after.
7. Advantages
✅ Speed – No delays from human intervention.
✅ Cost Efficiency – Pay only for what you use, for as long as you use it.
✅ Scalability – Can add or remove resources in real time.
✅ Automation – Integrates with scripts and tools for zero-touch deployment.
8. Quick Summary (One-Liner)
On-Demand Self-Service lets you instantly provision and manage cloud resources yourself —
without waiting for manual approval.
If you want, I can now prepare a comparison chart between On-Demand Self-Service and traditional IT
provisioning so you can use it in notes or interviews.
Do you want me to make that next?
Here’s the industry-ready, simple yet deep explanation of Broad Network Access in cloud computing
🚀
1. Definition (Simple + Professional)
Broad Network Access means cloud services are available over the network and can be accessed
from anywhere, on any device, using standard internet protocols.
In simple terms:
If you have an internet connection, you can access cloud resources — whether you’re on a laptop,
tablet, smartphone, or IoT device — without being in a specific physical location.
Think of it like Google Drive: You can open your files from home, office, or while traveling — using a
phone, PC, or tablet.
2. Why It Matters (Industry Perspective)
Before cloud:
You could only access company applications from office desktops or VPN-connected systems.
Employees, customers, and partners can connect securely from anywhere in the world.
This enables remote work, global applications, and always-on services.
Example:
Netflix streams movies globally over the internet using cloud infrastructure.
Zoom meetings run in the cloud and are accessible on any device.
3. Key Characteristics
Device Independence – Works on desktop, laptop, mobile, or thin clients.
4. Workflow Diagram
┌────────────────────────────┐
│ Cloud Service │
│ (App, DB, Storage, etc.) │
└────────────────────────────┘
▲
┌────────────────────────────┐
│ Internet │
└────────────────────────────┘
┌───────────────┬───────────────┬───────────────┐
│ Laptop │ Smartphone │ Tablet │
│ (HTTP/HTTPS) │ (App API) │ (Browser) │
└───────────────┴───────────────┴───────────────┘
5. Industry Examples
AWS S3: Access files from any location using an HTTPS URL.
Microsoft Office 365: Use Word, Excel, and PowerPoint in a browser anywhere.
6. Example Code – Accessing a Cloud Resource via API
python
import requests

# Call a cloud-hosted REST API over the internet (placeholder URL)
response = requests.get("https://api.example.com/data")

if response.status_code == 200:
    print("Data from cloud service:", response.json())
else:
    print("Error:", response.status_code)
The cloud service responds over the internet — demonstrating broad access.
7. Advantages
✅ Remote accessibility – Users can work from anywhere.
✅ Multi-device compatibility – Works on desktops, mobiles, tablets, IoT.
✅ Global user base – Applications can serve customers worldwide.
✅ Faster collaboration – Teams in different regions can use the same tools.
8. One-Line Summary
Broad Network Access means cloud services are accessible anytime, anywhere, on any
device, using standard network protocols.
If you want, I can now give you a side-by-side industry case study showing how Broad Network Access
transformed a real company’s operations.
Do you want me to prepare that next?
Resource Pooling
2. Key Idea
Multiple customers share the same pool of hardware, software, and networking resources.
Customers don’t know the exact location of the resources, but may choose a region (e.g., "US-
East", "Asia-Pacific").
Each company had its own dedicated servers — expensive and underused most of the time.
Example:
In AWS EC2, your VM runs on a physical server that is also running VMs for other customers — but
thanks to virtualization, they are isolated from each other.
┌─────────────────────────┐
│ Resource Pool │
│ (Compute, Storage, DB) │
└─────────────────────────┘
▲
┌─────────────┼─────────────┐
│ │ │
┌────────────┐ ┌────────────┐ ┌────────────┐
│ Customer A │ │ Customer B │ │ Customer C │
└────────────┘ └────────────┘ └────────────┘
5. Industry Examples
AWS S3: One massive storage pool used by millions of users — data logically separated.
Google Cloud Compute Engine: Multiple VMs from different customers share the same
underlying hardware.
Microsoft Azure SQL Database: Multiple tenants’ databases share the same backend
infrastructure.
import boto3

s3 = boto3.client('s3')  # assumes AWS credentials are already configured

bucket_name = "my-unique-bucket-name-12345"
s3.create_bucket(Bucket=bucket_name)
Your bucket is stored in AWS’s massive storage infrastructure, shared with millions of other buckets
— but your data remains private.
7. Advantages
✅ Cost efficiency – Shared infrastructure reduces costs for all customers.
✅ Scalability – Resources can be dynamically allocated where needed.
✅ Flexibility – Customers can choose location (region/zone) without knowing exact hardware.
✅ High utilization – No idle physical servers wasting money.
8. Security in Resource Pooling
Virtualization keeps tenants isolated.
9. One-Line Summary
Resource Pooling means cloud providers serve multiple customers using a shared pool of
virtualized resources, dynamically allocated to meet demand.
If you want, I can also make a "Resource Pooling vs Dedicated Hosting" comparison table so it’s easy
to use for interviews and notes.
Do you want me to create that next?
Rapid Elasticity
Here’s the clear, detailed, and industry-oriented explanation of Rapid Elasticity in Cloud Computing
🚀
1. Definition (Simple + Professional)
Rapid Elasticity is the cloud’s ability to automatically scale resources up or down quickly based on
workload demand — ensuring applications always have the right amount of computing power.
Think of it like a rubber band — it stretches when more is needed and shrinks when demand drops.
2. Key Idea
Resources are elastic → you can add or remove compute power, storage, or bandwidth almost
instantly.
Scaling can be automatic (triggered by usage thresholds) or manual (triggered by the user).
Customers only pay for the resources used during that time.
Example:
E-commerce site during a festival sale → Auto-scaling launches extra servers → After the sale,
servers shut down → Cost saved.
┌─────────────────────┐
│ Monitoring System │
│ (CPU, Traffic, etc.)│
└─────────────────────┘
│
▼
┌─────────────────────┐
│ Auto-Scaling Rules │
│ (e.g., CPU > 70%) │
└─────────────────────┘
│
▼
┌─────────────────────┐
│ Provision/Remove │
│ Resources │
└─────────────────────┘
5. Industry Examples
AWS Auto Scaling Groups – Automatically adds/removes EC2 instances based on demand.
Azure App Service Scaling – Scales web apps up/down by request count.
import boto3
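# NOTE: rough sketch only -- assumes an existing Auto Scaling Group named
# 'web-asg' and configured boto3 credentials.
autoscaling = boto3.client('autoscaling')

# Instantly raise the number of running servers to handle a traffic spike
autoscaling.set_desired_capacity(
    AutoScalingGroupName='web-asg',  # placeholder group name
    DesiredCapacity=5,
    HonorCooldown=False
)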
Updates the capacity of an Auto Scaling Group in AWS — instantly adding more servers to handle
increased traffic.
7. Advantages
✅ Performance stability – Always enough resources for workload.
✅ Cost efficiency – Scale down when not needed to save money.
✅ Business agility – Respond instantly to market demand.
✅ Automation – No manual intervention required.
8. Real-World Case Study
Netflix: Automatically scales streaming servers during peak viewing hours (evening times) and
reduces them overnight.
Ticket booking sites: Scale massively during big event releases (e.g., concerts, cricket matches).
9. One-Line Summary
Rapid Elasticity means cloud resources can be scaled up or down quickly and automatically
to meet changing demand, ensuring performance and cost efficiency.
If you want, I can now create a diagram comparing Rapid Elasticity with Scalability so you can clearly
see the difference — this is a common interview question.
Do you want me to prepare that next?
Measured Service
Think of it like your electricity bill — the meter records how much you use, and you pay accordingly.
2. Key Idea
Cloud resources are quantified and billed like utilities.
Companies bought fixed hardware → often underused, wasting money.
Example:
You run a virtual server for 10 hours → pay only for 10 hours, not for the entire month.
┌─────────────────────┐
│ Cloud Service │
│ (Compute, Storage) │
└─────────────────────┘
│
▼
┌─────────────────────┐
│ Metering System │
│ (Tracks Usage) │
└─────────────────────┘
│
▼
┌─────────────────────┐
│ Billing & Reporting │
└─────────────────────┘
│
▼
┌─────────────────────┐
│ Customer Dashboard │
│ (View & Manage) │
└─────────────────────┘
5. Industry Examples
AWS CloudWatch + Billing Console – Tracks EC2, S3, Lambda usage.
Azure Monitor – Tracks VM hours, storage GB, data transfers.
import boto3

# CloudWatch meters how much each resource is actually used
cloudwatch = boto3.client('cloudwatch')

response = cloudwatch.get_metric_statistics(
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-0123456789abcdef0'}],
    StartTime='2025-08-01T00:00:00Z',
    EndTime='2025-08-14T00:00:00Z',
    Period=3600,
    Statistics=['Average']
)
print(response['Datapoints'])
7. Advantages
✅ Cost control – Pay only for actual usage.
✅ Transparency – Real-time tracking of resource consumption.
✅ Efficiency – Encourages optimal use of resources.
✅ Automation – Providers automatically monitor and bill.
8. Real-World Case Study
Startup on AWS: Runs 5 EC2 servers for testing → after testing, shuts them down → Bill reflects
exact hours used.
9. One-Line Summary
Measured Service means cloud usage is automatically tracked and billed like a utility,
ensuring you pay only for what you consume.
If you want, I can now compile all six NIST cloud computing characteristics (including this one) into a
single exam-ready chart with definitions, diagrams, and real-world examples.
That way, you’ll have the whole set in one place.
Alright — here’s the industry-level, detailed but simple explanation of Pre-Cloud Systems and Their
Limitations 🚀
1. What Were Pre-Cloud Systems?
Before cloud computing, most organizations ran their IT infrastructure in-house or used traditional
hosting. This was called On-Premises Infrastructure.
Key features:
Traditional Web Hosting – Rent physical servers from a hosting provider. Example: GoDaddy dedicated hosting.
Colocation – Rent space in a data center to keep your own servers. Example: Rackspace colocation services.
Everything was manual — scaling up meant physically buying and installing more hardware.
b) Slow Provisioning
Setting up a new server could take weeks (purchase, shipping, installation, configuration).
No instant deployment for new projects.
c) Over-Provisioning or Under-Provisioning
d) Limited Scalability
e) Maintenance Overhead
IT teams had to handle patching, backups, hardware repairs, and security manually.
Downtime risk during maintenance.
g) Geographical Limitations
5. Real-World Example
Scenario:
A retail company in 2005 wants to launch an online store.
Servers run at only 20% utilization most of the year, but during sales events they are overloaded.
Adding new capacity means ordering new hardware — another month delay.
| Pre-Cloud Problem | Cloud Solution |
| --- | --- |
| Weeks to provision | Deploy in minutes |
| Under/Over-provisioning | Auto-scaling & elasticity |
7. One-Line Summary
Pre-cloud systems relied on costly, slow, and inflexible physical infrastructure — making
scaling, maintenance, and global access challenging.
If you want, I can make you a visual timeline diagram showing the evolution from Pre-Cloud → Early
Cloud → Modern Cloud so it’s easier to remember for exams and interviews.
Do you want me to prepare that?
Alright, let’s break down Key Milestones in Cloud Computing with industry-level clarity, logical flow,
real-world examples, and diagrams so you understand why each milestone matters.
What it is:
Virtualization allows one physical server to run multiple virtual machines (VMs) with their own OS and
applications.
Instead of dedicating a whole server to one task, you can share it among multiple workloads.
Industry Example:
Diagram:
+-----------------------------+
| Physical Server (Hardware) |
+-----------------------------+
| Hypervisor (VMware, Xen) |
+-----------------------------+
| VM1 | VM2 | VM3 |
+------+-------+--------------+
| OS | OS | OS |
+------+-------+--------------+
| Apps | Apps | Apps |
What happened:
Amazon Web Services launched S3 (storage) and EC2 (compute), making infrastructure available over
the internet on a pay-as-you-go basis.
Real-World Impact:
Why it mattered:
What it is:
Lightweight packaging of applications with all dependencies so they run consistently anywhere.
Why it’s better than traditional VMs:
Smaller footprint.
Easier deployment and scaling.
Industry Example:
Diagram:
+----------------------------+
| Host OS |
+----------------------------+
| Docker Engine |
+----------------------------+
| Container 1 | Container 2 |
| App + Deps | App + Deps |
+-------------+--------------+
Why needed:
As companies ran thousands of containers, manual management became impossible.
Deployment
Scaling
Networking
Load balancing
Example:
Concept:
Run code without managing servers. Pay only for execution time.
Why important:
Example:
7. Multi-Cloud & Hybrid Cloud (2018–Present)
Why:
Example:
Banks use hybrid cloud to keep sensitive data on-prem but run analytics in the cloud.
Timeline Summary
If you want, I can now create you a single clean diagram showing the entire evolution of cloud computing
from pre-cloud to modern hybrid/multi-cloud, so it’s easier to remember for interviews and real-world
work.
Do you want me to make that visual?
Alright — let’s go deep dive into Infrastructure as a Service (IaaS) so you fully get the industry-level
picture.
IaaS is a cloud computing model where a provider delivers virtualized computing resources over the
internet.
Instead of buying and managing physical servers, storage devices, and networking equipment, you
rent them on-demand from a cloud provider.
You still manage the operating system, applications, and data, but the provider handles the
hardware, virtualization, networking, and physical security.
Examples of IaaS
┌─────────────────────────────┐
│ End User / Developer │
└───────────┬─────────────────┘
│ Internet
┌───────────▼─────────────────┐
│ AWS Management Console │
└───────────┬─────────────────┘
│
┌───────────▼───────────────────────────────┐
│ IaaS Infrastructure │
│ │
│ ┌──────────────┬─────────────┬─────────┐ │
│ │ Compute (EC2)│ Storage(EBS)│ Network │ │
│ └──────────────┴─────────────┴─────────┘ │
└──────────────────────────────────────────┘
python
import boto3
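# NOTE: rough sketch only -- the AMI ID, key pair, and Availability Zone below are
# placeholders, and boto3 credentials are assumed to be configured.
ec2 = boto3.client('ec2')

# Compute: launch a virtual server on demand
run = ec2.run_instances(
    ImageId='ami-0abcdef1234567890',
    InstanceType='t2.micro',
    MinCount=1, MaxCount=1,
    KeyName='my-key-pair'
)
instance_id = run['Instances'][0]['InstanceId']

# Storage: rent a 20 GiB block volume instead of buying a physical disk
volume = ec2.create_volume(AvailabilityZone='us-east-1a', Size=20, VolumeType='gp3')

print("IaaS resources provisioned:", instance_id, volume['VolumeId'])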
Alright, let’s break down Platform as a Service (PaaS) in a detailed, industry-level, yet simple way so
you’ll fully understand it.
3.2.1 Platform as a Service (PaaS)
Definition
Platform as a Service (PaaS) is a cloud computing model that provides hardware, software, and
infrastructure required for application development and deployment — without you having to
manage servers, storage, or networking.
You focus on writing and deploying your code, while the provider handles the operating system,
middleware, runtime, and scaling.
In short:
PaaS = Pre-built platform where you just build and run your apps.
Examples of PaaS
Google App Engine – Lets you deploy Python, Java, Node.js apps without managing infrastructure.
AWS Elastic Beanstalk – Deploy and manage applications in multiple languages.
Microsoft Azure App Service – Build and host web apps, REST APIs.
Heroku – Popular among startups for quick app deployment.
python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    return "Hello, Cloud World!"

if __name__ == '__main__':
    app.run()
1. Create an app.yaml configuration file:
yaml
runtime: python39
entrypoint: gunicorn -b :$PORT main:app
2. Deploy with:
bash
gcloud app deploy
Google handles:
Server provisioning
Load balancing
Auto-scaling
Security patches
User Code
│
▼
PaaS Platform
├── Runtime (Java, Python, Node.js, etc.)
├── Middleware (Databases, Messaging)
├── OS Management
├── Scaling & Load Balancing
└── Monitoring & Logging
│
▼
Cloud Infrastructure (Servers, Storage, Network)
Advantages
✅ Faster development
✅ No need to manage infrastructure
✅ Built-in scalability and security
✅ Supports collaboration
Disadvantages
Alright, let’s break Software as a Service (SaaS) down in a simple but industry-level way so you get
both clear understanding and real-world relevance.
SaaS is a cloud computing service model where software applications are delivered over the internet
on a subscription basis, eliminating the need for users to install, maintain, or manage hardware and
software infrastructure.
In simpler words:
Instead of buying software, installing it on your computer, and worrying about updates, you just open a
browser or app and start using it. Everything (storage, updates, security) is handled by the provider.
Key Characteristics
1. Hosted in the Cloud – The application is run from the provider’s servers.
2. Accessible via Internet – Usually through a web browser or lightweight client.
3. Subscription Model – Pay monthly/yearly instead of a one-time purchase.
4. Automatic Updates – The provider handles all updates, patches, and improvements.
5. Scalable – Easily add or remove users.
6. Multi-Tenant Architecture – One software instance serves multiple customers while keeping their
data separate.
Examples
Microsoft 365 → Word, Excel, Teams online.
Dropbox → Cloud storage and file sharing.
Zoom → Video conferencing.
Real-World Analogy
Think of Netflix: You don’t download movies to own them, you just log in and watch — no installation
hassles, no storage issues, updates happen in the background.
Advantages
Disadvantages
Diagram
┌─────────────────────┐
│ SaaS Layer │ <- Applications (Salesforce, Google Workspace)
├─────────────────────┤
│ PaaS Layer │ <- Platforms & frameworks (Google App Engine)
├─────────────────────┤
│ │
│ IaaS Layer │ <- Servers, storage, networking (AWS EC2)
└─────────────────────┘
Cloud Infrastructure
Before SaaS: Businesses had to buy expensive video conferencing hardware and software licenses.
After SaaS: Zoom allows unlimited participants, global access, and instant updates — all via the
cloud, without any on-premise setup.
If you want, I can also give you a side-by-side table comparing IaaS, PaaS, and SaaS with real
industry examples, so you’ll have a complete reference for your notes.
Do you want me to make that next?
Alright, let’s go deep into Function as a Service (FaaS) / Serverless Computing in an industry-level
way — clear, logical, with examples, diagrams, and even some code.
1. Definition
Function as a Service (FaaS) is a cloud computing model where you write individual functions (small
pieces of code) that execute only when triggered and only for the time they run.
No server management — the cloud provider handles provisioning, scaling, and maintenance.
You only pay for execution time, not idle server time.
Often called Serverless Computing (though servers do exist, you just don’t manage them).
2. Key Characteristics
1. Event-Driven Execution
Runs when triggered by events (HTTP requests, file uploads, database updates, etc.).
2. Automatic Scaling
Scales up instantly if 1,000 users hit your function at once, then scales down when not in use.
3. Short-lived Execution
Functions run for a limited time (e.g., AWS Lambda has a 15-minute max).
4. Stateless
Each execution is independent; no memory is shared between runs.
5. Pay-per-Use
Pay only for the exact compute time and memory used.
5. Industry-Level Example
Let’s say we have an e-commerce site that wants to send a confirmation email when a user places an
order.
Flow Diagram:
[User Places Order] → [Event in Order Service] → [AWS Lambda Function Triggered] → [Send Email
via SES] → [Success Response]
python
import json

def lambda_handler(event, context):
    # Log event
    print("Received event: ", json.dumps(event))

    # Read the user's name from the triggering event (placeholder field)
    name = event.get('name', 'Guest')

    # Create a message
    message = f"Hello, {name}! Welcome to our platform."

    # Return as JSON
    return {
        'statusCode': 200,
        'body': json.dumps({'message': message})
    }
Example Response:
json
{
"message": "Hello, Vaibhav! Welcome to our platform."
}
7. Advantages
✅ No infrastructure management
✅ Automatic scaling
✅ Cost-effective (no pay for idle time)
✅ Quick deployment of small features
✅ High availability by default
8. Limitations
⚠ Cold Starts — First execution after a long idle period can be slow.
⚠ Execution Time Limit — Not suitable for long-running processes.
⚠ Statelessness — Cannot store session data directly (need external DB or cache).
⚠ Vendor Lock-In — Code often tied to a provider's environment.
9. Comparison Table
| Feature | FaaS / Serverless | Traditional Server |
| --- | --- | --- |
| Scaling | Automatic | Manual/Configured |
| Pricing | Pay per execution | Pay per uptime |
| Management | Fully managed | User managed |
| State | Stateless | Stateful possible |
10. Summary Diagram
┌─────────────┐ ┌─────────────────────┐
│ Event (API, │────▶│ Lambda Function │
│ File Upload)│ │ Executes Code │
└─────────────┘ └─────────────────────┘
│
▼
[Result Returned]
If you want, I can next give you a detailed industry-level AWS Lambda real project with deployment
steps so you can actually run a FaaS application yourself. That will make the concept crystal clear.
Public Cloud
4.1.1 Characteristics and Use Cases
Alright — let’s break down Public Cloud in a detailed, industry-ready way so you understand it
completely, both conceptually and practically.
The Public Cloud is a type of cloud deployment where computing resources (servers, storage,
databases, networking, etc.) are owned, managed, and operated by third-party cloud service
providers and delivered over the internet to multiple customers (also called tenants).
These resources are shared among all customers, but each customer’s data and applications remain
logically isolated.
1. Multi-Tenancy
Multiple customers share the same physical infrastructure.
Logical separation ensures data privacy.
2. Pay-as-You-Go Pricing
You only pay for what you use — compute hours, storage GBs, bandwidth, etc.
No upfront capital expenditure (CapEx).
3. High Scalability
Resources can scale up or down quickly based on demand.
4. Global Accessibility
Services can be accessed from anywhere with internet connectivity.
Data centers located worldwide reduce latency.
5. No Maintenance Burden
The cloud provider handles all hardware, updates, and infrastructure maintenance.
Microsoft Azure
Google Cloud Platform (GCP)
IBM Cloud
Oracle Cloud
1. Web Hosting
Hosting websites or blogs using services like AWS S3 + CloudFront or Azure Web Apps.
Quick to deploy, low-cost for small traffic, and scalable for large traffic.
Create isolated environments for developing and testing applications without buying hardware.
Example: Use AWS EC2 or Azure Virtual Machines for dev/test environments.
Processing massive datasets using cloud-native tools like AWS EMR, Google BigQuery, or Azure
Synapse.
Delivering apps like Office 365, Dropbox, or Salesforce entirely over the cloud.
Diagram – Public Cloud
Internet
|
-------------------
| Public Cloud |
-------------------
| SaaS | PaaS | IaaS|
-------------------
/ | \
Multiple Apps Storage
Tenants Hosted Hosted
Here Here
💡 Industry Tip:
Public Cloud is best for businesses that need rapid scaling, want to avoid upfront hardware costs, and do not have strict data-residency requirements.
It’s not always ideal for highly sensitive government or military data — for that, Private or Hybrid Cloud
is better.
If you want, I can now explain 4.2 Private Cloud – Characteristics and Use Cases so you can directly
compare it with Public Cloud.
Private Cloud
4.2.1 Characteristics and Use Cases
Alright, let’s break down Private Cloud in an industry-level, easy-to-understand, and detailed manner
with examples, diagrams, and practical insights.
It offers the same benefits as public cloud (scalability, self-service, elasticity) but with greater control,
security, and customization.
Key Characteristics
| Feature | Description |
| --- | --- |
| Dedicated Resources | Hardware, storage, and network are not shared with other organizations. |
| High Security & Compliance | Better suited for industries with strict data regulations (e.g., banking, healthcare). |
Example Providers
VMware vSphere / vCloud – For enterprises running their own virtualization infrastructure.
Microsoft Azure Stack – Hybrid cloud solution to run Azure services on-premises.
OpenStack – Open-source software for creating private clouds.
Architecture Diagram
+-----------------------------+
| Organization |
| (IT Staff & Admin Control) |
+-----------------------------+
|
+-----------------------------+
| Private Cloud Infrastructure |
| (Servers, Storage, LAN) |
+-----------------------------+
| | |
+-------+ +------+ +--------+
|Compute| |Storage| |Network|
+-------+ +------+ +--------+
|
+-------------+
| Applications |
+-------------+
Use Cases
Large Enterprises Running ERP systems and sensitive R&D projects internally.
Advantages
If you want, I can next cover Hybrid Cloud, which blends both Private & Public Cloud, and explain why
90%+ enterprises prefer it today.
Hybrid Cloud
4.3.1 Characteristics and Use Cases
Hybrid Cloud
4.3.1 Characteristics and Use Cases
Definition
A Hybrid Cloud is a cloud computing environment that combines public and private cloud
infrastructures, allowing data and applications to be shared between them.
It offers the best of both worlds—the scalability and cost-effectiveness of the public cloud with the
control and security of the private cloud.
Key Characteristics
1. Integration of Public and Private Clouds
Applications and workloads can move between environments as needed.
Example: A company might keep sensitive customer data in a private cloud but use a public
cloud for less sensitive operations like analytics.
2. Workload Portability
Workloads can be shifted dynamically between the two clouds depending on cost,
performance, or security requirements.
3. Flexible Deployment
Organizations can decide which parts of their workloads run on the public cloud and which
remain in-house.
4. Scalability with Control
Use public cloud to handle spikes in demand ("cloud bursting") without investing in additional
private infrastructure.
5. Security with Accessibility
Sensitive workloads remain on a secure private cloud while other processes benefit from the
accessibility of the public cloud.
6. Cost Optimization
Critical workloads stay in the private cloud, while high-demand or temporary tasks use cheaper
public resources.
Architecture Diagram
+-----------------------+
| Private Cloud |
| (On-Prem / Secure) |
+-----------------------+
| ^
Secure Link | | Data Sync
v |
+-----------------------+
| Hybrid Cloud |
| Management Layer |
+-----------------------+
| ^
| |
v |
+-----------------------+
| Public Cloud |
| (AWS / Azure / GCP) |
+-----------------------+
The Hybrid Cloud Management Layer ensures seamless integration, data synchronization, and
workload movement between clouds.
Use Cases
1. Cloud Bursting
During high demand, workloads overflow from private to public cloud to avoid downtime.
Example: An e-commerce site moves extra traffic handling to AWS during sales.
Store sensitive financial/medical data in private cloud while running customer apps in the
public cloud.
Example: A healthcare provider keeps patient records on-premises but uses Azure for analytics.
3. Disaster Recovery
Keep backups in the public cloud while core systems run in a private cloud.
✅ Industry Example:
Netflix: Uses AWS (public cloud) for content delivery but keeps recommendation engine data in a
private cloud for security.
Banking Sector: Keeps transaction systems in private cloud, but runs marketing analytics in public
cloud.
If you want, I can now cover Community Cloud (4.4), which is the fourth deployment model in cloud
computing.
Community Cloud
4.4.1 Characteristics and Use Cases
Characteristics:
Shared Infrastructure: Designed for a specific group of organizations with shared needs.
Cost Sharing: Costs are distributed among participants, making it more affordable than a private
cloud.
Common Compliance & Security: Meets regulatory or operational requirements common to all
members (e.g., HIPAA for healthcare).
Controlled Access: Only authorized members of the community can use it.
Custom Governance: Policies and controls are set collectively by community members.
Use Cases:
1. Government Agencies – Multiple departments sharing infrastructure for public services while
maintaining security and compliance.
2. Healthcare Institutions – Hospitals sharing medical data platforms while adhering to privacy
regulations like HIPAA.
3. Financial Organizations – Banks collaborating on secure platforms for fraud detection or payment
processing.
4. Research Communities – Universities sharing computing resources for joint research projects.
If you want, I can now prepare a comparison table of Public, Private, Hybrid, and Community Cloud
so you have a quick reference. That would make revisions much faster.
Cost Efficiency
Reduced Infrastructure Costs
Pay-as-You-Go Model:
You pay only for the resources you use (compute, storage, bandwidth), avoiding wasted capacity.
Reduced Maintenance Costs:
Cloud providers handle hardware upgrades, patching, and repairs, removing the need for large in-
house IT teams.
Economies of Scale:
Providers operate at massive scale, offering resources at a lower per-unit cost than individual
companies can achieve.
Energy Savings:
No need to run and cool large data centers, reducing electricity costs.
Example
Instead of spending ₹50–80 lakhs on setting up an on-premises data center, a startup can rent cloud
infrastructure for a fraction of that cost per month, scaling up or down as needed.
If you want, I can also prepare a short 2–3 line version of this for quick exam answers so you don’t have
to memorize the entire explanation. Would you like me to do that?
Meaning
Dynamic resource allocation in cloud computing refers to the automatic adjustment of computing
resources (CPU, memory, storage, bandwidth) based on real-time demand. This ensures applications
run smoothly without manual intervention.
Key Characteristics
Example
An e-commerce website during a Diwali sale may experience 10x more visitors. Cloud services
automatically allocate more servers to handle traffic. After the sale, resources scale back down to save
costs.
If you want, I can also create a comparison chart between static vs. dynamic resource allocation so
you remember this topic instantly in exams.
Enhanced Collaboration
5.3.1 Remote Work Support
Meaning
Remote work support in cloud computing refers to the ability for employees, teams, or stakeholders
to access files, applications, and communication tools from anywhere with an internet connection. This
is a key advantage of cloud services for modern, distributed workforces.
Key Characteristics
Anywhere, Anytime Access: Work resources are available globally via the internet.
Real-Time Collaboration: Multiple users can work on the same document or project
simultaneously (e.g., Google Docs, Microsoft 365).
Device Independence: Access from laptops, desktops, tablets, or smartphones.
Centralized Data: All files are stored in the cloud, ensuring consistency across teams.
Integrated Communication Tools: Built-in video conferencing, chat, and project management
tools.
Example
A software development team spread across India, the UAE, and the USA can use cloud tools like
Slack, GitHub, and Google Workspace to code, share documents, conduct meetings, and track
progress—without being physically together.
If you want, I can also give you a real-life workflow diagram showing how remote teams collaborate
using the cloud.
Meaning
Data resilience strategies in cloud computing ensure that data remains safe, accessible, and
recoverable in the event of failures, cyberattacks, or natural disasters. These strategies minimize
downtime and prevent data loss.
Key Strategies
2. Geo-Redundancy
Replicate data across different geographic data centers to avoid regional disasters affecting
all copies.
3. Snapshot & Versioning
Keep multiple versions of files and system snapshots for quick rollback in case of corruption or
ransomware.
Example
A financial services company uses AWS for hosting applications. They store database backups in three
different AWS regions and implement automated failover. If the primary server fails in Mumbai, the
workload instantly switches to Singapore without data loss.
If you want, I can make a clear disaster recovery architecture diagram for this section so it’s visually
easy to understand.
Meaning
In cloud computing, vendor-managed updates mean that the cloud service provider (e.g., AWS, Azure,
Google Cloud) automatically handles software patches, security updates, and infrastructure
maintenance without requiring manual intervention from the customer.
Key Points
Customers benefit from the latest features and improvements without extra cost or effort.
4. Security Compliance
Ensures systems meet compliance requirements (e.g., ISO, GDPR, HIPAA) by keeping them
updated.
5. Performance Optimization
Example
A company using Microsoft Azure SQL Database doesn’t need to manually apply security patches or
upgrade the database engine. Azure automatically updates the service in the background, ensuring
both security and optimal performance without affecting day-to-day operations.
If you’d like, I can also prepare a side-by-side table comparing vendor-managed updates vs. self-
managed updates for this section so it’s easier to grasp.
Meaning
A data breach occurs when unauthorized individuals gain access to sensitive, protected, or confidential
data, such as personal information, financial records, intellectual property, or trade secrets. In cloud
environments, breaches can happen due to vulnerabilities in software, weak access controls, or insider
threats.
1. Unauthorized Access
Weak passwords, compromised credentials, or poor authentication allow attackers to infiltrate
systems.
2. Misconfigured Cloud Settings
Publicly exposed storage buckets or incorrect access permissions can leak data.
3. Insider Threats
Employees or contractors with legitimate access may intentionally or accidentally disclose data.
4. Third-Party Vulnerabilities
5. Insecure APIs
Poorly protected APIs can be exploited to extract sensitive information.
Consequences
Example
In 2019, a misconfigured Amazon S3 bucket exposed personal data of millions of users. The breach
occurred because the storage was left publicly accessible without authentication, highlighting how small
misconfigurations can cause large-scale data leaks.
If you want, I can next create 6.1.2: Compliance Challenges to continue your Security and Privacy
Concerns section.
Meaning
In cloud computing, services are delivered over the internet. This means continuous internet
connectivity is essential to access applications, data, and resources. Any downtime—whether due to
network issues, ISP failures, or service provider outages—can disrupt operations.
1. ISP Outages
Network provider failures can block access to cloud services.
2. Cloud Service Provider Downtime
Even top providers like AWS, Azure, or Google Cloud may face occasional outages.
3. Hardware Failures
Router, modem, or local network equipment malfunctions.
4. Natural Disasters
Storms, earthquakes, or floods damaging network infrastructure.
5. Cyberattacks
Distributed Denial of Service (DDoS) attacks can make services inaccessible.
Implications of Downtime
Business Disruption
Interrupts day-to-day operations, especially for real-time services like e-commerce or SaaS
platforms.
Financial Loss
Every minute of downtime can cost businesses thousands of dollars in lost sales and
productivity.
Customer Dissatisfaction
Service unavailability can frustrate users and damage brand reputation.
Example
In December 2021, an AWS outage affected major websites and apps globally, causing streaming
platforms, payment systems, and IoT devices to go offline for hours. Businesses relying solely on cloud-
hosted infrastructure faced significant losses during this downtime.
If you’d like, I can also prepare 6.2.2: Latency and Performance Issues so your Dependency on Internet
Connectivity section is complete.
Restricted Configuration Options – Vendors may only allow specific versions of software or
predefined system settings, limiting the ability to fine-tune environments for specialized needs.
Limited Feature Modifications – Users cannot always modify or add custom features beyond
what the provider offers.
Compliance Restrictions – Cloud providers may have standard compliance measures, but
organizations with unique regulatory requirements might find it difficult to adapt.
Performance Tuning Restrictions – In many cases, performance parameters such as CPU limits,
memory allocation, or network configurations are fixed by the vendor.
Example:
A company using a SaaS CRM platform like Salesforce might want to implement a niche workflow
automation feature, but is unable to do so because the vendor does not support custom code beyond
predefined APIs and tools.
If you want, I can now move to Vendor Lock-In Risks (6.4.1) to complete this section on cloud
disadvantages.
Vendor Lock-In
6.4.1 Migration Challenges
Data Portability Issues – Different vendors use unique storage formats and APIs, making data
transfer to another provider time-consuming and technically complex.
Application Compatibility Problems – Applications developed using vendor-specific services,
SDKs, or integrations may need to be rewritten to function in another environment.
High Switching Costs – Migrating workloads often involves significant financial investment,
including reconfiguration, testing, and downtime.
Skill Dependency – Staff trained on a particular vendor’s ecosystem may require retraining to
work with another provider’s tools and processes.
Contractual Obligations – Long-term contracts or early termination fees can further discourage
switching providers.
Example:
An organization using AWS Lambda for serverless computing may find it difficult to migrate to Azure
Functions because of differences in execution models, triggers, and vendor-specific integrations. This
could require rewriting large portions of the application.
Next, I can prepare Future Trends in Cloud Computing (Section 7) to smoothly transition into the
conclusion of your document.
Data Security and Encryption
Encryption at Rest – Data stored in cloud servers is encrypted to protect it from physical theft or
unauthorized access.
Encryption in Transit – Data moving between a client and cloud services (or between cloud
components) is encrypted using protocols like TLS/SSL to prevent interception.
End-to-End Encryption – Ensures that only the sender and intended recipient can read the data,
even the cloud provider cannot access it.
Key Management – Secure generation, storage, and rotation of encryption keys, often handled via
cloud Key Management Services (KMS).
Compliance Requirements – Meeting industry standards such as GDPR, HIPAA, and PCI DSS that
mandate specific encryption practices.
Example:
A healthcare company storing patient records in the cloud uses AES-256 encryption for data at rest and
TLS 1.3 for secure transmission, ensuring compliance with HIPAA regulations.
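As an illustrative sketch (boto3 with a placeholder bucket, object key, and KMS key alias), the upload below asks S3 to encrypt the object at rest with a KMS-managed key, while the HTTPS connection boto3 uses provides encryption in transit:
python
import boto3

s3 = boto3.client('s3')  # API calls travel over HTTPS (TLS) -- encryption in transit

# Ask S3 to encrypt the object at rest using a KMS-managed key (placeholder names)
s3.put_object(
    Bucket='patient-records-bucket',
    Key='records/patient-123.json',
    Body=b'{"patient": "..."}',
    ServerSideEncryption='aws:kms',
    SSEKMSKeyId='alias/records-key'
)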
If you want, I can now prepare AI and Machine Learning Integration in Cloud Computing (7.2) so
your section flows logically toward upcoming innovations.
1. AES (Advanced Encryption Standard)
Type: Symmetric key encryption (same key to encrypt and decrypt).
Strength: Fast and very strong; AES-256 is the industry standard for encrypting data at rest.
Usage Example: Encrypting files before uploading to Amazon S3.
2. RSA (Rivest–Shamir–Adleman)
Type: Asymmetric key encryption (public and private key pair).
Strength: Strong for secure key exchange and digital signatures.
Usage Example: Securely transmitting AES keys between cloud servers.
3. Triple DES (3DES)
Type: Symmetric encryption, applies DES three times.
Strength: More secure than DES but slower than AES; being phased out for stronger methods.
4. Blowfish / Twofish
Type: Symmetric encryption, good for fast encryption with smaller data sizes.
Cloud Example:
A financial services company encrypts customer transaction data using AES-256 before storage and uses
RSA encryption for secure key exchange between their on-premises server and the cloud.
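A minimal sketch of that client-side AES-256 step (using the third-party cryptography package; the data and key handling are simplified placeholders):
python
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

key = AESGCM.generate_key(bit_length=256)   # 256-bit AES key (keep it in a KMS in practice)
nonce = os.urandom(12)                      # must be unique per encryption

plaintext = b"customer transaction data"    # placeholder data
ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)

# Upload `ciphertext` to the cloud; decrypt later with the same key and nonce
assert AESGCM(key).decrypt(nonce, ciphertext, None) == plaintext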
If you want, I can also write the 7.1.2 Authentication and Access Control section so it continues the
security discussion naturally.
HIPAA (Health Insurance Portability and Accountability Act) – Protects healthcare data. Region: United States.
PCI DSS (Payment Card Industry Data Security Standard) – Protects payment card data.
CCPA (California Consumer Privacy Act) – Protects data privacy for California residents.
If you want, I can now cover 7.1.5 Security Best Practices to wrap up this cloud security section
smoothly.
Identity and Access Management (IAM) is a framework of policies, processes, and technologies that
ensures the right individuals have the appropriate access to technology resources. In cloud computing,
IAM enables organizations to define and manage user identities, authentication, and authorization,
ensuring that resources are accessed securely and only by authorized users.
IAM systems typically include:
User Authentication – Verifying the identity of a user (e.g., via passwords, biometrics, MFA).
Authorization – Determining what actions a user can perform after authentication.
Audit and Monitoring – Tracking access activities for security and compliance.
Examples of IAM Services:
AWS IAM, Azure Active Directory (Microsoft Entra ID), and Google Cloud IAM.
Role-Based Access Control is a method within IAM that assigns permissions to users based on their roles
within an organization. Instead of assigning permissions individually, roles are created with specific
access rights, and users are assigned to those roles.
Key Features of RBAC:
Least Privilege Principle – Users are granted the minimum permissions required to perform their
duties.
Example Scenario:
Admin Role – Can manage all cloud resources and user accounts.
Developer Role – Can deploy and manage applications but not modify IAM policies.
Viewer Role – Can only view resources without making changes.
Benefits of RBAC:
Example:
In AWS IAM, you can create an "EC2Admin" role with permissions to start, stop, and configure EC2
instances. Any user assigned this role will automatically get these permissions without direct user-
specific assignments.
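A rough sketch (boto3, with a hypothetical policy name and a simplified permission list) of how such an "EC2Admin" permission set could be created programmatically:
python
import boto3, json

iam = boto3.client('iam')

# Permissions granted to anyone holding the EC2Admin role (least privilege in mind)
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances", "ec2:DescribeInstances"],
        "Resource": "*"
    }]
}

iam.create_policy(
    PolicyName='EC2AdminPolicy',              # hypothetical policy name
    PolicyDocument=json.dumps(policy_doc)
)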
If you want, I can now continue with 7.3 Multi-Factor Authentication (MFA) to complete the IAM
section.
Data governance and privacy in cloud computing involve establishing policies, processes, and standards
to ensure that data is managed responsibly, securely, and in compliance with legal requirements. It
defines how data is collected, stored, processed, shared, and disposed of, with a strong emphasis on
maintaining confidentiality, integrity, and availability. Privacy specifically focuses on protecting sensitive
and personal information from unauthorized access or misuse, ensuring compliance with laws such as
GDPR, HIPAA, and CCPA. In a cloud environment, data governance also includes vendor agreements,
audit trails, and consistent monitoring to prevent violations and maintain trust.
Data Minimization – Collect and store only the data that is necessary for operations.
User Consent – Ensure transparent communication and obtain consent for data collection and
usage.
Data Location Awareness – Understand where data is physically stored, as laws vary by country or
region.
Anonymization & Pseudonymization – Use techniques to mask personally identifiable
information (PII) to protect user privacy.
Third-Party Sharing Controls – Restrict and monitor data access by cloud vendors or other
partners.
Retention & Deletion Policies – Define timelines for securely deleting unnecessary data.
Compliance Alignment – Continuously verify that data handling meets standards like GDPR,
HIPAA, or ISO 27018.
By integrating strong governance policies with robust privacy measures, organizations can protect user
trust, meet legal obligations, and reduce the risk of regulatory penalties.
If you want, I can now move to Cloud Computing Service Models (IaaS, PaaS, SaaS) to continue the
next section of your notes.
7.4.1 Provider vs. User Responsibilities
✅ Example:
If a company stores customer data in AWS S3:
AWS is responsible for securing the underlying storage infrastructure, while the company is responsible for configuring bucket permissions, encryption, and access policies correctly.
If you want, I can now create a table comparison of responsibilities for IaaS, PaaS, and SaaS so this
section becomes visually clear and ready for your document. That will make it more professional and
exam-friendly.
1. Object Storage
Definition:
Stores data as objects, each containing the data itself, metadata, and a unique identifier. Data is
managed in a flat structure instead of a hierarchical file system.
Characteristics:
Examples:
Amazon S3
Google Cloud Storage
Azure Blob Storage
Use Cases:
2. File Storage
Definition:
Stores data in a hierarchical file and folder structure, similar to traditional file systems.
Characteristics:
Examples:
Use Cases:
3. Block Storage
Definition:
Stores data in fixed-size blocks without metadata, similar to traditional hard drives. Each block has a
unique address and can be modified independently.
Characteristics:
Examples:
Use Cases:
✅ If you want, I can now prepare "Cloud Storage Access Methods" as the next logical section to follow
this. That will connect storage types to how they are actually accessed in real-world systems.
Object Storage (e.g., AWS S3)
Key Characteristics
1. Scalability – Can handle petabytes of data seamlessly without complex scaling configurations.
2. Metadata-rich – Stores detailed metadata for each object, enabling better indexing and retrieval.
3. Global Accessibility – Data can be accessed from anywhere via HTTP/HTTPS APIs.
4. Durability – Providers like AWS S3 offer 99.999999999% (11 nines) durability by replicating data
across multiple locations.
5. Cost Efficiency – Pay only for the storage used, with tiered pricing for different access frequencies.
6. Immutable Storage – Supports versioning and write-once-read-many (WORM) compliance.
Use Cases
Backup and Archiving – Store long-term backups and archival data cost-effectively.
Big Data and Analytics – Store large datasets for processing with cloud analytics tools.
Media Storage – Host images, videos, and audio files for streaming platforms.
Disaster Recovery – Maintain offsite copies of critical data for rapid recovery.
Static Website Hosting – Host static content directly from storage buckets.
💡 Example: Amazon S3 allows businesses to store any volume of data, integrate with AWS analytics
tools, and manage data lifecycle policies automatically.
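A quick sketch (boto3, with placeholder bucket and object names) of storing and retrieving an object with its metadata:
python
import boto3

s3 = boto3.client('s3')

# Each object = data + metadata + a unique key, addressed over HTTPS
s3.put_object(
    Bucket='my-media-bucket',           # placeholder bucket
    Key='videos/intro.mp4',
    Body=b'...video bytes...',          # placeholder contents
    Metadata={'category': 'marketing'}
)

obj = s3.get_object(Bucket='my-media-bucket', Key='videos/intro.mp4')
print(obj['Metadata'], obj['ContentLength'])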
If you want, I can now give you "File Storage" as the next type so you can complete your cloud storage
section.
Key Characteristics
Persistent Storage: Data remains available even after the attached instance is stopped (unless
deleted).
Attachable to Compute Instances: Typically used as a virtual hard drive attached to cloud servers
(e.g., EC2 instances in AWS).
File System Control: The user installs a file system (e.g., ext4, NTFS) to manage data.
AWS EBS provides block-level storage volumes that can be attached to EC2 instances. These volumes
behave like physical drives but can be resized and backed up via snapshots without downtime.
3. High-Performance Applications: Suitable for workloads requiring fast read/write operations like
analytics or financial applications.
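A brief sketch (boto3; the instance ID, Availability Zone, and size are placeholders) of creating, attaching, and snapshotting a block volume:
python
import boto3

ec2 = boto3.client('ec2')

# Create a block volume and attach it to a running instance as a virtual disk
vol = ec2.create_volume(AvailabilityZone='us-east-1a', Size=100, VolumeType='gp3')
ec2.get_waiter('volume_available').wait(VolumeIds=[vol['VolumeId']])

ec2.attach_volume(
    VolumeId=vol['VolumeId'],
    InstanceId='i-0123456789abcdef0',   # placeholder instance ID
    Device='/dev/sdf'
)

# Point-in-time backup via snapshot, without downtime
ec2.create_snapshot(VolumeId=vol['VolumeId'], Description='nightly backup')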
If you want, I can now prepare the next one "File Storage (e.g., AWS EFS)" so your "Types of Cloud
Storage" section stays complete.
Key Characteristics
Hierarchical Structure: Data is stored as files in folders, just like on a traditional computer.
Shared Access: Multiple users or instances can access the same files simultaneously.
Standard Protocols: Uses protocols like NFS (Network File System) or SMB (Server Message Block).
Elastic Scaling: Automatically grows and shrinks storage capacity based on usage (in services like
AWS EFS).
AWS EFS provides scalable, managed file storage that can be mounted to multiple EC2 instances at once.
It automatically adjusts storage size and offers high availability across multiple Availability Zones.
Common Use Cases
If you want, I can now give you a comparison table of Object, Block, and File storage so it’s easier to
revise in one glance. That would make your notes more exam/interview-friendly.
Cloud Databases
Cloud Databases
A cloud database is a database that runs on a cloud computing platform rather than on on-premises
servers. It is managed, maintained, and scaled by a cloud service provider, and can be accessed over the
internet.
Key Characteristics
Fully Managed Services: The provider handles setup, backups, patching, scaling, and security.
Scalability: Can scale vertically (more resources to a single node) or horizontally (more nodes) on
demand.
High Availability: Built-in redundancy and replication across multiple regions or availability zones.
Flexible Pricing: Pay-as-you-go or reserved capacity models.
Accessible Anywhere: Accessible over the internet via APIs, SDKs, or standard database protocols.
Snowflake
Benefits
If you want, I can now make you a visual diagram showing all types of cloud databases with
examples so it’s easier to recall during interviews. That would make it stick in your memory much faster.
AWS RDS (Amazon Relational Database Service) supports popular relational database engines
like MySQL, PostgreSQL, MariaDB, Oracle, and Microsoft SQL Server.
Key Features:
Structured Data Storage: Ideal for applications with well-defined data models.
ACID Compliance: Ensures Atomicity, Consistency, Isolation, and Durability for reliable
transactions.
Automated Management: AWS RDS handles backups, patching, monitoring, and scaling.
Multi-AZ Deployments: For high availability and disaster recovery.
Read Replicas: Improve read performance for large-scale applications.
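For reference, a minimal boto3 sketch of provisioning an RDS instance (identifiers, credentials, and sizes below are purely illustrative):
python
import boto3

rds = boto3.client('rds')

# Launch a small managed PostgreSQL instance (all values illustrative)
rds.create_db_instance(
    DBInstanceIdentifier='demo-postgres-db',
    DBInstanceClass='db.t3.micro',
    Engine='postgres',
    MasterUsername='dbadmin',
    MasterUserPassword='ChangeMe123!',   # store real credentials in Secrets Manager
    AllocatedStorage=20,                 # GiB
    BackupRetentionPeriod=7,             # keep daily automated backups for 7 days
    MultiAZ=False,
)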
Use Cases:
If you want, I can now give you the next part on Non-Relational Databases (e.g., AWS DynamoDB) to
continue your cloud storage section.
AWS DynamoDB – A fully managed, serverless NoSQL database service that provides single-digit
millisecond latency at any scale.
Key Features:
Flexible Schema: No fixed structure; supports JSON-like documents, key-value pairs, and more.
High Performance: Consistent low-latency reads and writes.
Automatic Scaling: Handles millions of requests per second without manual intervention.
Fully Managed: AWS handles backups, security, and infrastructure.
Global Tables: Enables multi-region, active-active database replication.
Use Cases:
Real-time analytics
Gaming leaderboards
IoT device data storage
Session management and caching
Social media apps with dynamic content
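A small boto3 sketch of the key-value access style (the table name and key schema are hypothetical — e.g., a leaderboard table with a composite key):
python
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('GameScores')   # hypothetical table: partition key 'player_id', sort key 'game'

# Write an item
table.put_item(Item={'player_id': 'p123', 'game': 'space-run', 'score': 9200})

# Read it back by key
response = table.get_item(Key={'player_id': 'p123', 'game': 'space-run'})
print(response.get('Item'))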
Next, I can give you Data Warehouses (e.g., AWS Redshift) to complete the cloud databases section.
In-memory databases store data entirely in RAM instead of on disk, which allows ultra-fast data
retrieval and processing. They are ideal for scenarios where low latency and high throughput are
critical.
Example Service:
Redis – An open-source, in-memory key-value store used as a database, cache, and message
broker. Managed versions are available via Amazon ElastiCache for Redis.
Common Use Cases:
Caching Layer: Reduce database load by storing frequently accessed data in memory.
Session Storage: Store user session information for web applications.
Real-Time Analytics: Handle leaderboard updates, gaming stats, and financial tickers.
Message Queues: Implement lightweight messaging and task queues.
Rate Limiting: Control request rates in APIs and services.
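A quick redis-py sketch showing two of these patterns — caching with a TTL and a simple rate-limit counter (host and key names are illustrative; the same calls should work against a managed Redis endpoint such as ElastiCache):
python
import redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

# Caching / session storage with a 30-minute expiry
r.setex('session:42', 1800, 'user=priya;cart=3')
print(r.get('session:42'))

# Simple fixed-window rate limiting: count requests per client per minute
hits = r.incr('rate:client-7')
if hits == 1:
    r.expire('rate:client-7', 60)
print('requests this minute:', hits)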
If you want, I can now break down Data Warehouses (e.g., AWS Redshift) so your cloud
storage/database notes are complete.
8.3.2 Backup Solutions
Here’s a clear and detailed breakdown for your notes on Data Replication and Redundancy —
including Replication Strategies and Backup Solutions:
Replication means creating copies of data across different servers or locations to improve availability,
performance, and disaster recovery.
1. Synchronous Replication
How it works: Data is written to the primary and replica at the same time.
Pros: Strong consistency (replica is always up-to-date).
Cons: Higher latency (write operation waits for both servers to confirm).
Use Case: Financial transactions where data accuracy is critical.
2. Asynchronous Replication
How it works: Data is written to the primary first, then sent to the replica in the background.
Pros: Low latency, faster writes.
Cons: Risk of data loss if the primary fails before syncing.
3. Multi-Master Replication
How it works: Multiple nodes can accept read/write requests and sync changes.
Pros: High availability, no single point of failure.
Cons: Conflict resolution complexity.
Use Case: Global applications with regional write operations.
4. Geo-Replication
Examples in AWS:
Backups are snapshots or copies of data used to restore systems after failures, corruption, or
accidental deletions.
1. Full Backup
2. Incremental Backup
How it works: Saves only data changed since the last backup.
Pros: Fast backups, minimal storage.
Cons: Slower restore (requires multiple backup sets).
Use Case: Daily backups.
3. Differential Backup
EBS Snapshots
✅ If you want, I can now create a diagram showing replication strategies and backup types so it’s
easier to revise visually.
Would you like me to prepare that?
Isolation: Ensures resources are separated from other customers in the same cloud provider
network.
Customization: Users can control network configurations, including IP addressing, DNS settings,
and firewall rules.
Security: Integrated with security groups and network access control lists (ACLs) for traffic filtering.
Subnets in a VPC:
Definition: Subnets are subdivisions of a VPC’s IP address range. They help organize and
segregate resources within the network.
Purpose:
Example:
This setup allows tight control over traffic flow while enabling secure, scalable cloud deployments.
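As a sketch, creating a VPC with two subnets via boto3 (CIDR blocks and AZ names are illustrative):
python
import boto3

ec2 = boto3.client('ec2')

# Create the VPC with a /16 address range
vpc = ec2.create_vpc(CidrBlock='10.0.0.0/16')
vpc_id = vpc['Vpc']['VpcId']

# Carve out two /24 subnets inside the VPC (e.g., one public, one private)
ec2.create_subnet(VpcId=vpc_id, CidrBlock='10.0.1.0/24', AvailabilityZone='us-east-1a')
ec2.create_subnet(VpcId=vpc_id, CidrBlock='10.0.2.0/24', AvailabilityZone='us-east-1b')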
Do you want me to also write 9.1.2 Internet Gateway and NAT Gateway next so it flows naturally?
Load Balancing and Traffic Management
9.2.1 Load Balancer Types (e.g., ELB)
Cloud providers like AWS, Azure, and Google Cloud offer managed load balancers with various types
designed for different workloads.
If you want, I can now cover 9.3 Content Delivery Networks (CDNs) so we complete this performance
optimization section.
1. Reduced Latency
Content is served from the nearest edge server, reducing the time it takes for data to travel.
2. Faster Website Loading
Cached files load faster, improving user experience and SEO rankings.
3. Bandwidth Optimization
Offloads traffic from the origin server, reducing hosting costs.
4. DDoS Protection
Many CDNs, like Cloudflare, offer security features to mitigate Distributed Denial-of-Service
attacks.
5. High Availability
If one server fails, traffic is rerouted to the next closest server, ensuring uninterrupted access.
6. Global Reach
Allows businesses to efficiently serve content to a worldwide audience without building
infrastructure in every region.
💡 Example:
Cloudflare caches your website assets across its global network.
When a user in India accesses your site, content is served from the nearest Indian edge server
instead of your U.S.-based origin server, ensuring faster load times and reduced network strain.
If you want, I can now continue with Edge Computing since it naturally follows CDNs in cloud
networking topics.
IP Addressing:
Every resource in the cloud (e.g., virtual machine, load balancer) is assigned an IP address to
communicate over the network.
Types of IP Addresses:
Public IP – Accessible over the internet.
Private IP – Used for communication within a private network (e.g., inside a VPC).
Elastic IP (AWS) – A static public IP that can be remapped to different instances.
DNS (Domain Name System):
DNS translates human-readable domain names (e.g., example.com) into IP addresses.
This eliminates the need for users to remember numerical IPs.
Amazon Route 53 is AWS’s highly available and scalable DNS web service.
Key Features:
Domain Registration – Purchase and manage domain names directly from AWS.
DNS Resolution – Converts domain names into IP addresses using a global DNS network.
Health Checks & Failover – Automatically redirects traffic if a service becomes unavailable.
Simple Routing – Directs traffic to a single resource.
Weighted Routing – Distributes traffic based on defined weights.
Latency-based Routing – Routes users to the resource with the lowest network latency.
Geolocation Routing – Directs traffic based on the requester’s geographic location.
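For illustration, a boto3 sketch that upserts a simple A record in a hosted zone (the zone ID, domain, and IP are placeholders):
python
import boto3

route53 = boto3.client('route53')

route53.change_resource_record_sets(
    HostedZoneId='Z0EXAMPLE12345',            # placeholder hosted zone ID
    ChangeBatch={
        'Changes': [{
            'Action': 'UPSERT',               # create the record or update it if it exists
            'ResourceRecordSet': {
                'Name': 'app.example.com',
                'Type': 'A',
                'TTL': 300,
                'ResourceRecords': [{'Value': '203.0.113.10'}],
            },
        }]
    },
)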
If you want, I can now explain "Cloud Monitoring and Management Tools" next so your notes stay in a
perfect sequence.
Virtualization Basics
10.1.1 Virtual Machines
10.1.2 Hypervisors (e.g., VMware, Hyper-V)
Here’s the explanation for Virtualization Basics with 10.1.1 Virtual Machines and 10.1.2 Hypervisors
— keeping it structured and interview-friendly.
Definition:
A Virtual Machine is a software-based emulation of a physical computer. It has its own CPU, memory,
storage, and network interfaces, all virtualized from the underlying physical hardware.
Key Features:
Resource Allocation: Hardware resources are allocated and managed via a hypervisor.
Advantages:
Example:
Running Ubuntu and Windows Server on the same laptop/server using VirtualBox or VMware
Workstation.
Definition:
A hypervisor (Virtual Machine Monitor – VMM) is software that enables virtualization by separating the
physical hardware from the virtual machines.
Types of Hypervisors:
1. Type 1 (Bare-metal):
Installed directly on the physical hardware.
More efficient and secure.
Examples: VMware ESXi, Microsoft Hyper-V (Server version), Xen, KVM.
2. Type 2 (Hosted):
Runs on top of an existing operating system.
Easier for personal or development use.
Examples: VMware Workstation, Oracle VirtualBox, Parallels Desktop.
Functions of a Hypervisor:
Example in Cloud:
AWS uses a customized Xen and Nitro Hypervisor for EC2 instances.
“VMs virtualize hardware and run a full OS per instance, while containers virtualize the OS
kernel and share it among applications, making them lighter and faster to start.”
If you want, I can now prepare the "10.2 Containers vs. VMs" section so your cloud computing notes
flow smoothly from here.
Containerization
10.2.1 Docker Basics
10.2 Containerization
Containerization is a form of virtualization that encapsulates an application and its dependencies into
a lightweight, portable container that can run consistently across different environments. Unlike virtual
machines, containers share the host OS kernel, making them faster and more resource-efficient.
Docker is the most popular containerization platform. It allows developers to build, ship, and run
applications inside containers.
Key Concepts in Docker:
1. Images:
Read-only templates used to create containers.
Can include application code, runtime, libraries, and OS dependencies.
Example: python:3.11 image includes Python runtime on Linux.
2. Containers:
Runtime instances of Docker images.
Lightweight and isolated, sharing the host OS kernel.
Example: Running a container from the nginx image starts a web server instance.
3. Dockerfile:
| Command | Purpose |
|---|---|
| `docker pull <image>` | Download an image from Docker Hub |
| `docker build -t <name> .` | Build an image from a Dockerfile |
| `docker run -d -p 8080:80 <image>` | Run a container in detached mode and map ports |
| `docker ps` | List running containers |
| `docker stop <container>` | Stop a running container |
| `docker rm <container>` | Remove a container |
Example Workflow:
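One possible workflow, sketched here with the Docker SDK for Python rather than the raw CLI (assumes Docker is running locally and the docker package is installed; image and container names are illustrative):
python
import docker

client = docker.from_env()

# Pull the image, then run a container in detached mode mapping host port 8080 to container port 80
client.images.pull('nginx:latest')
container = client.containers.run('nginx:latest', name='demo-web',
                                  detach=True, ports={'80/tcp': 8080})

# List running containers, then stop and remove ours
for c in client.containers.list():
    print(c.name, c.status)

container.stop()
container.remove()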
If you want, I can continue with 10.2.2 Kubernetes Basics next to explain container orchestration at an
industry level.
| Feature | Virtual Machines (VMs) | Containers |
|---|---|---|
| Isolation | Strong isolation between VMs | Process-level isolation; less strict than VMs |
| Portability | Needs compatible hypervisor | Runs anywhere with a container runtime (Docker, Podman) |
| Use Case | Running multiple OS environments, legacy apps | Microservices, CI/CD pipelines, cloud-native apps |
| Management | Managed via hypervisors (VMware, Hyper-V) | Managed via container runtimes & orchestrators (Docker, Kubernetes) |
Key Insights:
VMs are better when strong isolation or different OS environments are needed.
Containers are ideal for modern, cloud-native, microservices-based applications where speed,
scalability, and resource efficiency matter.
Many organizations use a hybrid approach, running containers inside VMs for added isolation and
management flexibility.
If you want, I can now move to 10.4 Container Orchestration (e.g., Kubernetes) to explain how
containers are managed at scale in the industry.
Container Orchestration
10.4.1 Kubernetes Overview
1. Cluster
A set of worker machines (nodes) running containerized applications managed by Kubernetes.
2. Node
A single machine (physical or virtual) in the cluster.
Can be a Master Node (control plane) or Worker Node (runs application workloads).
3. Pod
The smallest deployable unit in Kubernetes, usually containing one or more containers that
share storage, network, and configuration.
4. Deployment
Defines desired state for pods (number of replicas, updates) and manages scaling and rolling
updates.
5. Service
Exposes pods to internal or external networks and provides load balancing.
6. ConfigMap & Secret
Store configuration data and sensitive information (like passwords or API keys) for containers.
7. Ingress
Manages external access to services, usually via HTTP/HTTPS.
Benefits of Kubernetes:
Example:
Suppose you deploy a web application with multiple microservices:
Kubernetes ensures that each microservice runs in the required number of pods, automatically
replaces failed pods, balances traffic across healthy pods, and scales the application during traffic
spikes.
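A minimal sketch of interacting with such a cluster using the official Kubernetes Python client (assumes a local kubeconfig is available; the deployment name "web-frontend" is hypothetical):
python
from kubernetes import client, config

config.load_kube_config()          # authenticate using ~/.kube/config
apps = client.AppsV1Api()

# List deployments in the default namespace with their desired replica counts
for dep in apps.list_namespaced_deployment(namespace='default').items:
    print(dep.metadata.name, dep.spec.replicas)

# Scale the (hypothetical) web-frontend deployment to 5 replicas
apps.patch_namespaced_deployment_scale(
    name='web-frontend',
    namespace='default',
    body={'spec': {'replicas': 5}},
)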
If you want, I can now cover 10.4.2 Kubernetes Architecture Diagram with Explanation to make this
section visually clear and easier to remember for exams or interviews.
1. Docker Swarm
Overview:
Native clustering and orchestration tool for Docker containers.
Turns a group of Docker engines into a single virtual Docker engine.
Key Features:
Ease of Use: Simple setup for developers already familiar with Docker.
Integrated with Docker CLI: Commands like docker service create are used to deploy containers.
Service Discovery: Automatically assigns containers a DNS name.
Load Balancing: Distributes traffic across running containers.
Scaling: Easily scale services up or down with a single command.
Use Cases:
2. Apache Mesos
Overview:
Cluster manager that can run and orchestrate containers as well as other workloads.
Often used in big data environments (Hadoop, Spark).
Key Features:
3. Nomad (HashiCorp)
Overview:
Key Features:
| Tool | Complexity | Scalability | Use Case |
|---|---|---|---|
| Kubernetes | High | Very High | Large-scale, production-grade deployments |
| Docker Swarm | Low | Medium | Small-to-medium deployments |
| Nomad | Medium | High | Flexible workloads, lightweight orchestration |
| Mesos | High | Very High | Big data & mixed workloads |
💡 Key Insight:
Kubernetes dominates large-scale production environments due to its ecosystem, flexibility, and
cloud provider integrations.
Docker Swarm is better for simpler deployments or teams already familiar with Docker CLI.
If you want, I can now create a visual diagram comparing Kubernetes, Docker Swarm, and Nomad to
make this section easier to revise quickly.
Serverless Concepts
1. No Server Management
Developers do not manage the underlying servers, OS, or runtime.
Infrastructure is fully abstracted by the cloud provider.
2. Event-Driven Architecture
Functions are triggered by events such as HTTP requests, database changes, file uploads, or
scheduled tasks.
3. Automatic Scaling
Serverless functions scale automatically with demand.
No need to pre-allocate resources.
4. Pay-Per-Use Billing
Charges are based on the number of requests and execution duration, not idle time.
5. Stateless Functions
11.2 Examples of Serverless Services
If you want, I can now explain “Serverless Architecture with AWS Lambda Example” next, including
code snippets and diagram for a clear, practical understanding.
Event-Driven Architecture
11.2.1 Event Triggers
An event trigger is a condition or action that initiates the execution of a serverless function. These
triggers can come from various cloud services or external inputs.
Common Event Triggers in Cloud Serverless Platforms:
If you want, I can now cover 11.3 Serverless Workflow Example with Code to show a real-world
implementation of event-driven serverless architecture.
FaaS Examples
1. AWS Lambda (Amazon Web Services)
python
import boto3
from PIL import Image

def lambda_handler(event, context):
    # Read the bucket and object key from the S3 event that triggered the function
    s3 = boto3.client('s3')
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    download_path = '/tmp/{}'.format(key)

    # Download the uploaded image, create a 128x128 thumbnail, and upload the result
    s3.download_file(bucket, key, download_path)
    img = Image.open(download_path)
    img.thumbnail((128, 128))
    img.save(download_path)
    s3.upload_file(download_path, bucket, 'resized-' + key)
3. Azure Functions
5. Other Examples
AWS Fargate: Serverless container execution (runs containers without managing servers).
Alibaba Function Compute: Event-driven functions on Alibaba Cloud.
No Server Management: Focus purely on writing code.
Automatic Scaling: Functions scale automatically with load.
Event-driven: Only runs when triggered, reducing cost.
Microservices-friendly: Perfect for modular application design.
If you want, I can now create a diagram showing FaaS workflow with triggers, execution, and
storage integration — perfect for understanding and interviews.
AWS Lambda
AWS Lambda
AWS Lambda is Amazon Web Services’ serverless, event-driven computing platform that allows you to
run code without provisioning or managing servers. Lambda automatically scales and executes code in
response to events, charging only for actual compute time.
1. Event-Driven Execution
Lambda functions are triggered by events such as file uploads (S3), database updates
(DynamoDB), API calls (API Gateway), or scheduled events (CloudWatch).
2. No Server Management
AWS handles the infrastructure, OS, and runtime environment. Developers focus purely on
code.
3. Automatic Scaling
Lambda automatically scales to handle any number of incoming events.
4. Pay-Per-Use
Billed based on the number of requests and execution duration (rounded to 1ms).
5. Language Support
Python, Node.js, Java, Go, Ruby, .NET Core, and custom runtimes.
6. Stateless Functions
Each invocation is independent; persistent state must be stored externally (e.g., S3,
DynamoDB).
1. Trigger/Event Source
An event occurs (S3 upload, DynamoDB update, HTTP request via API Gateway).
2. Lambda Function
Runs the code in response to the event.
3. Execution Role
IAM Role defines permissions (e.g., access to S3, DynamoDB).
4. Output/Integration
Processes data or triggers downstream services.
Can interact with other AWS services like S3, DynamoDB, SNS, SQS, CloudWatch.
python
import boto3

def lambda_handler(event, context):
    # Extract the bucket and object key from the S3 event that triggered this function
    bucket_name = event['Records'][0]['s3']['bucket']['name']
    object_key = event['Records'][0]['s3']['object']['key']
    print(f"New file uploaded: {object_key} in bucket {bucket_name}")
Use Cases
If you want, I can next explain AWS Lambda Lifecycle and Deployment Methods, including zip
uploads, container images, and versioning, for a complete hands-on understanding.
1. Event-Driven
Functions are triggered by events such as HTTP requests, Cloud Storage changes, Pub/Sub
messages, or Firebase events.
2. No Server Management
Developers focus on code; Google manages the infrastructure, runtime, and scaling.
3. Automatic Scaling
Functions scale automatically based on traffic or event load.
4. Pay-Per-Use
Billing is based on execution time, memory allocated, and number of invocations.
5. Language Support
Node.js, Python, Go, Java, Ruby, and .NET.
6. Stateless Execution
Each function invocation is independent; persistent state must be stored externally (e.g.,
Firestore, Cloud Storage).
1. Trigger/Event Source
Examples: HTTP requests via HTTPS endpoint, Pub/Sub messages, Cloud Storage uploads.
2. Cloud Function
Executes code in response to the trigger.
3. IAM Permissions
Cloud Functions uses service accounts to access other GCP services securely.
4. Integration
Can interact with other GCP services like Cloud Firestore, Cloud Pub/Sub, BigQuery, and Cloud
Storage.
python
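# Illustrative sketch (not from the original notes): a minimal HTTP-triggered Cloud Function.
# The runtime passes a Flask request object to the entry point; names here are examples only.
def hello_http(request):
    name = request.args.get('name', 'World')
    return f'Hello, {name}!'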
Use Cases
If you want, I can next cover Azure Functions so your serverless examples are complete across the
three major cloud providers.
Azure Functions
Azure Functions
Azure Functions is Microsoft Azure’s serverless computing service that enables developers to run code
on-demand without managing servers. Like AWS Lambda and Google Cloud Functions, it scales
automatically and is triggered by events.
1. Event-Driven
Functions can be triggered by HTTP requests, timers, Azure Storage events, Service Bus
messages, or Cosmos DB changes.
2. No Server Management
Developers focus only on code; Azure manages the infrastructure, runtime, and scaling.
3. Automatic Scaling
Azure Functions automatically scales based on workload and event frequency.
4. Pay-Per-Use
Billed for execution time, memory, and number of executions.
5. Language Support
C#, Java, JavaScript, TypeScript, Python, PowerShell, and custom handlers.
6. Stateless Execution
Each function execution is independent. Persistent state can be stored in external services like
Azure Blob Storage or Cosmos DB.
1. Trigger/Event Source
Examples: HTTP request, Blob storage file upload, Service Bus queue message, timer-based
cron job.
2. Function App
A container for one or more related functions, sharing configuration and deployment.
3. Execution Context
Provides runtime information, logging, and environment variables to the function.
4. Integration
Functions can interact with other Azure services like Event Grid, Cosmos DB, Storage, and Logic
Apps.
python
import logging

def main(blob: bytes):
    # Blob-trigger entry point: logs the size of the uploaded blob
    logging.info(f"Processed blob of size {len(blob)} bytes.")
Use Cases
If you want, I can now create a Serverless Comparison Table: AWS Lambda vs Google Cloud
Functions vs Azure Functions to summarize all three major cloud providers in one view.
Pay-as-You-Go Model
12.1.1 Usage-Based Pricing
The Pay-as-You-Go (PAYG) model is a cloud computing pricing strategy where users are charged based
on actual usage of cloud resources instead of a fixed fee. This model enables cost efficiency and
scalability for businesses, as they pay only for what they consume.
12.1.1 Usage-Based Pricing
Definition:
Key Features:
1. Granular Billing:
Users are billed in units like seconds, GB, or requests.
Example Scenarios:
Cost Efficiency: Pay only for what you use; idle resources are not billed.
Flexibility: Scale workloads without worrying about wasted capacity.
Encourages Optimization: Users optimize code, storage, and usage to reduce costs.
Ideal for Startups & Variable Workloads: Start small and scale as needed without upfront
investment.
If you want, I can next explain 12.2 Reserved Instances vs Spot Instances, which complements pay-as-
you-go with cost-saving strategies in cloud pricing.
Reserved Instances and Discounts
12.2.1 Cost-Saving Options
2. Savings Plans
Definition: A flexible pricing model offering lower rates in exchange for a committed amount of compute usage (measured in $/hour).
Benefit: Automatically applies discounted rates to eligible instances, even if instance type or
region changes.
Use Case: Workloads that may shift over time or need flexibility.
3. Spot Instances
Definition: Unused compute capacity offered at steep discounts (up to 90% off).
Caveat: Can be interrupted with short notice (AWS Spot, GCP Preemptible).
Use Case: Batch processing, testing, data analysis, or fault-tolerant workloads.
4. Free Tiers
Many cloud providers offer limited free usage for the first 12 months, or indefinitely for small workloads.
Example: AWS Free Tier provides 750 hours/month of t2.micro EC2 and 5GB S3 storage.
Key Advantages of Reserved Instances & Discounts
If you want, I can next explain 12.3 Cost Monitoring and Billing Tools, including examples like AWS
Cost Explorer, Azure Cost Management, and GCP Billing Reports, which help track usage and
optimize cloud spending.
Features:
View costs by service, linked account, or tags.
Forecast future usage and costs.
Identify idle resources or underutilized instances.
Example Use: Detect that an EC2 instance runs 24/7 but is only needed 8 hours/day → switch to
on-demand or schedule stop/start to save costs.
Purpose: Provides recommendations for cost optimization, security, performance, and fault
tolerance.
Cost Recommendations:
Idle EC2 instances
Features:
Detailed usage reports by project, product, or SKU
Recommendations for rightsizing VMs or using committed use discounts
💡 Example Scenario:
A startup runs 10 EC2 instances 24/7.
Using AWS Cost Explorer, they find 4 are idle most of the day.
Action: Stop those 4 during off-hours → save ~40% on monthly compute costs.
If you want, I can next explain 12.4 Billing Models Across AWS, Azure, and GCP, so you have a
complete understanding of cloud pricing strategies across major providers.
13.1.1 CI/CD Pipelines
Definition:
A CI/CD pipeline is an automated workflow that moves code from development to production in stages,
ensuring consistent quality and faster release cycles.
Key Components of a CI/CD Pipeline:
text
▼
Production Deployment
💡 Industry Example:
AWS CodePipeline + CodeBuild + CodeDeploy:
Developer pushes code → CodePipeline triggers → CodeBuild runs tests → Docker image
stored → CodeDeploy deploys to EC2 / ECS / Lambda automatically.
If you want, I can next explain 13.2 CI/CD Tools and Examples, including Jenkins, GitHub Actions, and
GitLab CI with a sample pipeline setup and YAML/code examples for practice.
13.2.1 Terraform
Overview:
Key Features:
State Management: Maintains a state file to track deployed resources.
Modularity: Supports reusable modules for consistent infrastructure patterns.
Immutable Infrastructure: Changes are applied incrementally, reducing downtime.
hcl
provider "aws" {
  region = "us-east-1"
}

# Illustrative EC2 resource; the AMI ID below is a placeholder example
resource "aws_instance" "my_instance" {
  ami           = "ami-0abcdef1234567890"
  instance_type = "t2.micro"

  tags = {
    Name = "MyTerraformInstance"
  }
}
Commands:
bash
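# Standard Terraform workflow (illustrative; run inside the directory containing the .tf files)
terraform init      # download the AWS provider and initialize the working directory
terraform plan      # preview the changes Terraform would make
terraform apply     # create or update the resources defined in the configuration
terraform destroy   # tear everything down when it is no longer needed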
Overview:
AWS-native IaC service that allows provisioning AWS resources using JSON or YAML templates.
Fully integrated with AWS services.
Key Features:
Declarative Syntax: Define what resources are needed, AWS handles the creation order.
Stacks: Resources are grouped into stacks for easy management.
Change Sets: Preview proposed changes before applying updates.
Automation: Supports automatic rollback if errors occur during deployment.
yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
MyEC2Instance:
Type: AWS::EC2::Instance
Properties:
InstanceType: t2.micro
ImageId: ami-0c55b159cbfafe1f0
Tags:
- Key: Name
Value: MyCFInstance
Commands / Workflow:
💡 Industry Insight:
Terraform is widely used for multi-cloud deployments and DevOps pipelines.
CloudFormation is preferred for AWS-native environments and tight integration with AWS
services.
If you want, I can next cover 13.3 CI/CD + IaC Example Workflow, showing how Terraform or
CloudFormation integrates with Jenkins/GitHub Actions to automatically deploy cloud infrastructure
alongside applications.
Configuration management
Application deployment
Task automation
Key Features:
1. Agentless:
No software installation required on target machines; connects over SSH.
2. Declarative Language:
Uses YAML playbooks to define desired state of systems.
3. Idempotent:
Ensures running the same playbook multiple times doesn’t change the system if it’s already in
the desired state.
4. Extensible:
Modules available for managing cloud services (AWS, Azure, GCP), networking, databases, and
applications.
1. Inventory:
Defines the hosts or groups of hosts to manage.
ini
[webservers]
web1.example.com
web2.example.com
2. Playbook:
YAML file defining tasks to configure hosts.
yaml
3. Modules:
Pre-built units to perform tasks like installing packages, managing files, or interacting with
cloud resources.
4. Roles:
Organize playbooks into reusable units for complex deployments.
Example: Deploying a Web Server with Ansible
Inventory file:
ini
[webservers]
192.168.1.10
192.168.1.11
Playbook:
yaml
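# Illustrative playbook (assumes Debian/Ubuntu targets); apt and service are standard Ansible modules
- name: Deploy a basic web server
  hosts: webservers
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
        update_cache: yes
    - name: Ensure nginx is running and enabled on boot
      service:
        name: nginx
        state: started
        enabled: yes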
Run Command:
bash
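# Run the playbook against the hosts in the inventory file (file names are illustrative)
ansible-playbook -i inventory.ini webserver.yml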
If you want, I can next cover 13.4 Integrating IaC and Automation Tools in CI/CD, showing a complete
DevOps pipeline workflow from code commit to automated infrastructure and application deployment.
Cloud Migration Strategies
Pros: Maintains critical workloads while planning migration.
Cons: Partial migration may increase hybrid management complexity.
If you want, I can next cover 14.4 Cloud Migration Challenges and Best Practices, including downtime
reduction, security considerations, and cost optimization techniques.
Types of Migrations
14.2.1 Lift and Shift
14.2.2 Refactoring
14.2.3 Replatforming
Definition:
Moving applications to the cloud without making any changes to the code or architecture.
Characteristics:
Pros:
Cons:
Example:
Definition:
Characteristics:
Changes application architecture, e.g., breaking monoliths into microservices.
Pros:
Cons:
Example:
Rewriting a monolithic application into AWS Lambda functions + DynamoDB or Azure Functions
+ Cosmos DB.
Definition:
Minor modifications to applications to better leverage cloud capabilities without full redesign.
Characteristics:
Pros:
Cons:
Example:
Moving a self-managed database to AWS RDS or containerizing an existing web app to run on AWS
ECS/EKS or Azure AKS.
| Migration Type | Change Level | Speed | Cloud Optimization | Example |
|---|---|---|---|---|
| Refactor | Major | Slow | High | Monolith → Lambda / Microservices |
If you want, I can next explain 14.3 Migration Planning & Tools, including AWS Migration Hub, Azure
Migrate, and GCP Migrate, showing how they support each type of migration.
1. Resource Monitoring
Track CPU, memory, disk usage, network traffic, and storage.
2. Application Performance Monitoring (APM)
Monitor response times, error rates, and throughput of applications.
3. Log Management
Centralized logging for analysis and troubleshooting.
4. Alerting & Notifications
Automatic alerts for thresholds, failures, or unusual behavior via email, SMS, or dashboards.
5. Visualization & Dashboards
Graphical representation of metrics and trends for easier analysis.
6. Automated Remediation
Integration with automation tools to fix issues automatically (e.g., restart instances).
| Tool | Cloud Provider | Key Features | Use Case |
|---|---|---|---|
| AWS CloudWatch | AWS | Metrics, logs, alarms, dashboards | Monitor EC2, Lambda, S3, RDS |
| Azure Monitor | Azure | Metrics, logs, Application Insights | Track VMs, containers, web apps |
| Google Cloud Monitoring | GCP | Metrics, dashboards, alerts, uptime checks | Monitor Compute Engine, Cloud Functions, BigQuery |
| Datadog | Multi-cloud | Metrics, logs, traces, APM | Full-stack monitoring for hybrid environments |
| New Relic | Multi-cloud | APM, dashboards, synthetic monitoring | Application performance optimization |
| Prometheus + Grafana | Open-source | Metrics collection, visualization | Cloud-native and Kubernetes monitoring |
bash
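# Illustrative AWS CLI command matching the description below: average CPU over 5-minute
# periods, alarm after 2 breaching periods, notify an SNS topic (IDs and ARNs are placeholders)
aws cloudwatch put-metric-alarm \
  --alarm-name high-cpu-demo \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts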
What it does:
Monitors the average CPU usage of an EC2 instance every 5 minutes.
If CPU exceeds 80% for 2 evaluation periods, an SNS notification is sent.
If you want, I can next explain 15.5 Cloud Monitoring in DevOps, including how CI/CD pipelines
integrate with monitoring tools for automated alerts and remediation.
AWS CloudWatch
AWS CloudWatch
AWS CloudWatch is Amazon Web Services’ monitoring and observability service for cloud resources
and applications. It allows you to collect, visualize, and act on metrics, logs, and events from AWS
resources or custom applications.
1. Metrics Monitoring
Track performance metrics like CPU usage, memory, disk I/O, network traffic, and request
counts.
Supports both AWS-managed metrics (EC2, RDS, S3) and custom metrics from your
applications.
2. Logs Management
Collect logs from EC2, Lambda, VPC, and on-prem applications.
Allows searching, filtering, and retention of logs.
Monitor EC2 instance CPU utilization and send an alert when it exceeds 80%.
Track Lambda function invocation count and errors.
Aggregate and analyze application logs to detect errors or performance issues.
Trigger auto-scaling when metrics exceed thresholds.
text
[Resources] → [CloudWatch Metrics / Logs] → [Alarms & Events] → [Actions (SNS, Lambda, Auto
Scaling)]
Metrics/Logs: Stored in CloudWatch for visualization and analysis.
bash
Benefits of CloudWatch
Centralized Monitoring: Track all AWS resources and custom metrics in one place.
Proactive Alerts: Prevent downtime by detecting issues early.
Automation: Trigger auto-scaling or Lambda functions automatically.
Cost Efficiency: Optimize resource usage based on real-time data.
Security & Compliance: Monitor logs for security events or unusual activity.
If you want, I can next explain AWS CloudWatch Logs, Dashboards, and Events in more detail with
examples, diagrams, and code snippets to give you a complete hands-on understanding.
Azure Monitor
Azure Monitor
Azure Monitor is Microsoft Azure’s comprehensive cloud monitoring and observability service. It
helps collect, analyze, and act on telemetry data from Azure resources, applications, and on-premises
environments to ensure performance, availability, and operational health.
1. Metrics Collection
Monitors resource performance like CPU, memory, disk, and network usage.
Provides both platform metrics (Azure services) and custom application metrics.
2. Log Analytics
Centralized collection and querying of logs from resources and applications.
Uses Kusto Query Language (KQL) for powerful analysis.
3. Alerts and Notifications
Configure alerts on metrics, logs, or activity to trigger email, SMS, webhook, or automated
actions.
Supports dynamic thresholds and machine-learning-based anomaly detection.
4. Application Insights
Monitors application performance, dependencies, and errors.
Tracks response times, request rates, and failures for web applications.
5. Dashboards and Visualization
Create custom dashboards to visualize metrics and logs across services in real time.
6. Autoscale Integration
Use monitored metrics to automatically scale VMs, App Services, or other resources.
7. Workbooks and Analytics
Create interactive reports and visualizations using collected telemetry.
text
bash
Comprehensive Visibility: Unified monitoring for Azure resources, applications, and on-premises
environments.
Proactive Issue Detection: Alerts and anomaly detection prevent downtime.
Integration with Automation: Autoscale, Logic Apps, and remediation actions.
Cost Optimization: Identify underutilized resources to save costs.
Enhanced Security & Compliance: Monitor logs for suspicious activity.
If you want, I can next explain Azure Monitor vs AWS CloudWatch vs GCP Cloud Monitoring, with a
comparison table, diagrams, and real-world examples to understand all three major cloud
monitoring services side by side.
Definition:
Log aggregation is the process of collecting logs from multiple sources (servers, applications,
cloud services) into a centralized system for storage, analysis, and visualization.
2. Faster troubleshooting of errors or performance issues.
3. Historical analysis for compliance and auditing.
4. Integration with alerting systems to detect anomalies automatically.
1. Elasticsearch
Distributed search and analytics engine.
Stores and indexes logs for fast querying.
2. Logstash
Collects, parses, and transforms logs from multiple sources.
Can handle JSON, CSV, syslog, or custom log formats.
3. Kibana
Visualization and dashboard tool.
Helps analyze trends, create graphs, and detect anomalies.
text
Example Scenario:
Web server logs are generated across 5 servers.
Logstash collects logs and sends them to Elasticsearch.
Kibana displays traffic trends, errors, and response times on a centralized dashboard.
1. Tracing
Tracks requests as they flow through distributed systems (e.g., OpenTelemetry, Jaeger).
2. Metrics
Numerical measurements like CPU usage, memory, request latency.
3. Alerting
Detect anomalies or threshold breaches in logs or metrics.
💡 Industry Insight:
Companies often combine metrics + logs + traces to achieve full observability.
Example: ELK + Prometheus + Grafana stack for monitoring cloud-native applications.
If you want, I can next explain 15.2.2 Distributed Tracing (e.g., Jaeger, OpenTelemetry), including
diagrams and example setups for cloud applications and microservices.
Performance Optimization
15.3.1 Tuning Techniques
2. Storage Optimization
3. Database Optimization
Read Replicas & Sharding: Distribute load across multiple database nodes.
Connection Pooling: Reduce overhead of opening/closing database connections.
4. Network Optimization
Content Delivery Networks (CDNs): Cache static content closer to users (e.g., CloudFront, Azure
CDN).
Load Balancing: Distribute traffic evenly across servers (e.g., AWS ELB, Azure Load Balancer).
Compression: Compress data before transmission to reduce latency.
5. Application-Level Optimization
Code Profiling & Refactoring: Identify bottlenecks and optimize slow functions.
Asynchronous Processing: Use message queues or serverless functions for background tasks
(e.g., SQS, Azure Queue, Cloud Tasks).
Batch Processing: Aggregate small operations into fewer batch requests to reduce overhead.
4. Result: Application handles traffic spikes without manual intervention, reducing latency and
avoiding over-provisioning.
If you want, I can next cover 15.3.2 Load Testing and Benchmarking, including tools, examples, and
diagrams for simulating and optimizing cloud workloads.
16.1.1 Hadoop
Overview:
Apache Hadoop is an open-source framework for distributed storage and processing of large
datasets.
Uses commodity hardware to store and process data efficiently.
Key Components:
4. Hadoop Ecosystem Tools
Hive: SQL-like querying
Pig: Script-based data analysis
HBase: NoSQL database for real-time access
java
// Mapper: emits (word, 1) for every token in the input line
public class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
        }
    }
}

// Reducer: sums the counts emitted for each word
public class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}
Overview:
Apache Spark is an open-source distributed computing framework for big data processing.
Faster than Hadoop MapReduce because it processes data in-memory, rather than
reading/writing to disk after every operation.
Key Components:
1. Spark Core – Core engine for distributed task scheduling, memory management, fault tolerance.
2. Spark SQL – Structured data processing using SQL queries.
3. Spark Streaming – Real-time stream processing.
4. MLlib – Machine learning library for scalable algorithms.
5. GraphX – Graph processing library for complex relationships.
Features:
python
from pyspark import SparkContext

sc = SparkContext("local", "WordCount")
text_file = sc.textFile("hdfs://path/to/input.txt")

# Classic word count: split lines into words, map each word to 1, then sum per word
counts = text_file.flatMap(lambda line: line.split()) \
    .map(lambda word: (word, 1)) \
    .reduceByKey(lambda a, b: a + b)

counts.saveAsTextFile("hdfs://path/to/output")
💡 Industry Insight:
Use Hadoop when batch processing very large datasets with fault tolerance.
Use Spark when low-latency, interactive, or machine learning workloads are needed.
If you want, I can next cover 16.2 Cloud-Based Big Data Services, including AWS EMR, Azure
HDInsight, and Google Dataproc, with diagrams, examples, and sample code for cloud deployment.
Cloud-Based ML and AI
16.2.1 AWS SageMaker
16.2.2 Google AI Platform
16.2.1 AWS SageMaker
Overview:
AWS SageMaker is a fully managed service to build, train, and deploy machine learning models
at scale.
Removes the need to set up servers, GPUs, or infrastructure for ML workflows.
Key Features:
1. Data Preparation
Integrates with S3 for data storage.
Supports data labeling with SageMaker Ground Truth.
2. Model Training
Provides built-in algorithms (XGBoost, Linear Learner, etc.).
python
import sagemaker
from sagemaker import LinearLearner
sagemaker_session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerRole"
Use Cases:
Fraud detection
Predictive maintenance
Customer recommendation systems
Overview:
Google AI Platform (Vertex AI) is Google Cloud’s managed service for building, training, and
deploying ML models.
Supports end-to-end ML workflow and integrates with BigQuery, Dataflow, and Cloud Storage.
Key Features:
python
from google.cloud import aiplatform

# Initialize the Vertex AI SDK with a project and region
aiplatform.init(project='my-project', location='us-central1')
Use Cases:
Image and video analysis
Natural language processing (chatbots, sentiment analysis)
Predictive analytics for business intelligence
💡 Industry Insight:
SageMaker is widely used in enterprise AWS environments.
Vertex AI excels in data-driven analytics pipelines using Google Cloud services.
Both reduce operational overhead, allowing data scientists to focus on model development and
insights rather than infrastructure.
If you want, I can next cover 16.3 Cloud-Based AI Tools for NLP, Vision, and Recommendation
Systems, including examples with AWS Comprehend, Rekognition, and Google AI APIs.
Data Warehousing
16.3.1 AWS Redshift
16.3.2 Google BigQuery
Overview:
Amazon Redshift is a fully managed cloud data warehouse that allows petabyte-scale data
storage and analysis.
Optimized for analytical workloads rather than transactional workloads.
Key Features:
1. Columnar Storage
Stores data in columns instead of rows for faster query performance.
2. Massively Parallel Processing (MPP)
Distributes data and queries across multiple nodes for parallel computation.
3. Redshift Spectrum
Query data directly from S3 without loading into Redshift.
4. Integration
Works with AWS Glue, QuickSight, and SageMaker for ETL and analytics.
5. Security
Supports encryption at rest and in transit, IAM, VPC, and audit logging.
bash
sql
-- Create a table
CREATE TABLE sales (
sale_id INT,
product_name VARCHAR(100),
quantity INT,
sale_date DATE
);
Use Cases:
16.3.2 Google BigQuery
Overview:
Google BigQuery is a serverless, fully managed cloud data warehouse that enables fast SQL
analytics over large datasets.
Focused on high-speed query execution without worrying about infrastructure.
Key Features:
1. Serverless Architecture
No need to manage clusters or nodes; scales automatically.
2. Standard SQL Support
Use ANSI SQL for querying structured and semi-structured data.
python
from google.cloud import bigquery

client = bigquery.Client()

# Create dataset
dataset_id = "my_project.my_dataset"
dataset = bigquery.Dataset(dataset_id)
dataset.location = "US"
dataset = client.create_dataset(dataset)

# Create table
table_id = "my_project.my_dataset.sales"
schema = [
    bigquery.SchemaField("sale_id", "INTEGER"),
    bigquery.SchemaField("product_name", "STRING"),
    bigquery.SchemaField("quantity", "INTEGER"),
    bigquery.SchemaField("sale_date", "DATE"),
]
table = bigquery.Table(table_id, schema=schema)
table = client.create_table(table)
sql
-- Top-selling products by total quantity sold
SELECT product_name, SUM(quantity) AS total_sold
FROM `my_project.my_dataset.sales`
GROUP BY product_name
ORDER BY total_sold DESC;
Use Cases:
💡 Industry Insight:
Redshift is preferred for organizations with AWS-centric architectures and predictable
workloads.
BigQuery is ideal for ad-hoc analytics, serverless setups, and pay-per-query billing.
Both integrate well with BI tools like Tableau, Power BI, and Looker.
If you want, I can next cover 16.4 Cloud Data Lakes vs Data Warehouses, including diagrams,
examples, and use cases, showing when to use Redshift/BigQuery versus S3/Azure Data Lake Storage.
Overview:
IoT devices (sensors, cameras, wearables, industrial equipment) generate large amounts of data.
Sending all data to the cloud can be slow, costly, and inefficient.
Edge computing devices (gateways, microcontrollers, or edge servers) process data locally and
only send relevant summaries or alerts to the cloud.
text
[IoT Devices/Sensors]
│
▼
[Edge Gateway / Edge Node] -- Local processing, filtering, aggregation
│
▼
[Cloud Platform] -- Centralized storage, analytics, ML, dashboards
1. IoT Devices
Generate raw data (temperature, video, motion, machine telemetry).
2. Edge Nodes
Pre-process data, run real-time analytics, detect anomalies.
Can run lightweight AI/ML models (e.g., TensorFlow Lite).
3. Cloud Platform
Store historical data, run heavy analytics, train ML models, and serve long-term dashboards.
Examples: AWS IoT Core, Azure IoT Hub, Google Cloud IoT Core.
Scenario:
1. Low Latency: Immediate response for critical applications (e.g., autonomous vehicles, industrial
automation).
2. Bandwidth Optimization: Reduces amount of data sent to the cloud.
3. Data Privacy: Sensitive data can be processed locally without leaving the premises.
4. Reliability: Edge nodes can continue functioning even with intermittent internet connectivity.
5. Cost Efficiency: Less cloud storage and data transfer costs.
Industry Tools for IoT and Edge Integration
python
import random
import time

def get_sensor_data():
    # Placeholder for reading a real sensor (illustrative simulated temperature)
    return {"temperature": round(random.uniform(20.0, 90.0), 2)}

while True:
    data = get_sensor_data()
    if data["temperature"] > 80:  # edge-side filtering: forward only anomalies to the cloud
        print("Anomaly - sending to cloud:", data)
    time.sleep(2)
Explanation:
💡 Industry Insight:
Edge computing is essential in autonomous vehicles, smart factories, remote monitoring, and
healthcare IoT.
Combining IoT devices + Edge + Cloud creates a hybrid architecture that balances speed,
scalability, and analytics.
I can next cover 17.1.2 Edge AI and Real-Time Analytics, including tools, frameworks, and cloud
integration examples for intelligent IoT processing.
Multi-Cloud and Hybrid Cloud
17.2.1 Strategies and Benefits
1. Definitions
Hybrid Cloud: A combination of on-premises infrastructure and public cloud services that work
together.
Multi-Cloud: Using two or more cloud providers (e.g., AWS, Azure, GCP) for different workloads
or redundancy.
Multi-Cloud Strategies
text
Explanation:
| Tool | Purpose |
|---|---|
| VMware Cloud on AWS | Hybrid cloud management for VMware workloads |
| Azure Arc | Manage multi-cloud and on-prem resources |
| Google Anthos | Run apps across GCP, on-prem, and other clouds |
| HashiCorp Terraform | Multi-cloud infrastructure provisioning |
| Kubernetes | Orchestrate containerized apps across clouds |
💡 Industry Insight:
Large enterprises often adopt hybrid + multi-cloud to balance compliance, cost, and innovation.
Examples:
Netflix uses AWS primarily but has multi-region setups for redundancy.
Financial institutions keep sensitive data on-premises and leverage cloud for analytics.
I can next explain 17.3 Cloud Governance and Cost Management, including policies, best practices,
and real-world cloud management examples.