Unit V - App Implementation in Cloud
Agenda
• Virtual Machines
• Docker Container
• Kubernetes
● Cloud service providers (CSPs) are companies that offer cloud computing services—like servers, storage, databases, networking, and software—over the internet.
● Instead of owning and maintaining physical servers and infrastructure, organizations and individuals can rent these resources from a CSP and pay only for what they use.
● CSPs typically offer services in three main models:
● Infrastructure as a Service (IaaS): Basic computing resources like virtual machines, storage, and
networks.
● Platform as a Service (PaaS): Tools and platforms to develop, run, and manage applications.
● Software as a Service (SaaS): Fully managed applications delivered over the internet.
Popular cloud service providers include:
● Microsoft Azure
● IBM Cloud
● Oracle Cloud
● Alibaba Cloud
● A virtual private cloud (VPC) is a secure, isolated private cloud hosted within a public cloud.
● VPC customers can run code, store data, host websites, and do anything else they could do in an ordinary private cloud, but the private cloud is hosted remotely by a public cloud provider. (Not all private clouds are hosted this way.)
● VPCs combine the scalability and convenience of public cloud computing with the data isolation of private cloud computing.
● Imagine a public cloud as a crowded restaurant, and a virtual private cloud as a reserved table in that
crowded restaurant.
● Even though the restaurant is full of people, a table with a "Reserved" sign on it can only be accessed by the party who made the reservation.
● Similarly, a public cloud is crowded with various cloud customers accessing computing resources but a
VPC reserves some of those resources for use by only one customer.
● A public cloud is shared cloud infrastructure. Multiple customers of the cloud vendor access that same infrastructure, although their data is not shared, just like every person in a restaurant orders from the same kitchen but is served a different dish.
● Public cloud service providers include AWS, Google Cloud Platform, and Microsoft Azure, among others.
● The technical term for multiple separate customers accessing the same cloud infrastructure is
"multitenancy"
• A private cloud, however, is single-tenant. A private cloud is a cloud service that is exclusively offered to one organization. A virtual private cloud (VPC) is a private cloud within a public cloud; no one else shares the VPC with that customer.
• A VPC isolates computing resources from the other computing resources available in the public cloud.
The key technologies for isolating a VPC from the rest of the public cloud are:
Subnets:
A subnet is a range of IP addresses within a network that are reserved so that they are not available to everyone within the network, essentially dividing part of the network for private use. In a VPC these are private IP addresses that are not accessible via the public Internet, unlike typical IP addresses, which are publicly visible. (A minimal code sketch of creating a VPC with a private subnet appears after this list.)
• VLAN:
A LAN is a local area network, or a group of computing devices that are all connected to each other in one physical location.
A VLAN is a virtual LAN. Like a subnet, a VLAN is a way of partitioning a network, but the partitioning
takes place at a different layer within the OSI model (layer 2 instead of layer 3).
• VPN:
A virtual private network (VPN) uses encryption to create a private network over the top of a public network. VPN traffic passes through publicly shared Internet infrastructure (routers, switches, etc.), but the traffic is scrambled and not visible to anyone else. A VPC will have a dedicated subnet and VLAN that are only accessible by the VPC customer.
This prevents anyone else within the public cloud from accessing computing resources within the VPC, effectively placing the "Reserved" sign on the table. The VPC customer connects via VPN to their VPC, so that data passing into and out of the VPC is not visible to other public cloud users.
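As an illustration of the subnet idea above, here is a minimal sketch using the AWS SDK for Python (boto3) to create a VPC and reserve a private subnet inside it. The region, CIDR ranges, and variable names are illustrative assumptions, not part of the course material, and the calls assume AWS credentials are already configured.

```python
# Minimal sketch: carving a private subnet out of a VPC with boto3
# (region and CIDR ranges are illustrative).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the VPC with a private address range (RFC 1918 space).
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Reserve a slice of that range as a subnet; without an internet gateway
# or public IP mapping, its addresses stay private to the VPC.
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
print("VPC:", vpc_id, "Subnet:", subnet["Subnet"]["SubnetId"])
```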
Benefits of a VPC:
• Scalability: Because a VPC is hosted by a public cloud provider, customers can add more computing resources on demand.
• Easy hybrid cloud deployment: It's relatively simple to connect a VPC to a public cloud or to on-premises infrastructure via a VPN.
• Better performance: Cloud-hosted websites and applications typically perform better than those hosted on local on-premises servers.
• Better security: The public cloud providers that offer VPCs often have more resources for updating and
maintaining the infrastructure, especially for small and mid-market businesses. For large enterprises or
any companies that face extremely tight data security regulations, this is less of an advantage.
What is scaling?
• Scaling means adjusting the amount of computing resources (servers, CPU, memory, storage) an application uses as demand changes; doing this with physical hardware is slow and expensive.
• The cloud has dramatically simplified these scaling problems by making it easier to scale up or down on demand.
• Primarily, there are two ways to scale in the cloud: horizontally or vertically.
• When you scale horizontally, you are scaling out or in, which refers to the number of provisioned
resources.
• When you scale vertically, it’s often called scaling up or down, which refers to the power and capacity
of an individual resource.
• Types of scaling
• Horizontal Scaling (Scaling Out/In): Adding or removing servers or machines to distribute the load, i.e., adding more instances or nodes to a system (scale out) or removing them when demand drops (scale in).
Merits:
• Improves availability and fault tolerance: if one server fails, the others keep serving traffic.
• Capacity can keep growing by adding more machines.
Demerits:
• More complex—requires load balancing, data syncing, and distributed systems management.
Example:
• Adding more servers behind a load balancer to handle increased website traffic.
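A hedged sketch of what scaling out can look like in practice, using boto3 against an existing AWS Auto Scaling group; the group name and capacities are hypothetical placeholders.

```python
# Minimal sketch of scaling out/in with boto3: change the number of
# instances in an existing Auto Scaling group.
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out: run six web servers behind the group's load balancer.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="web-asg",
    DesiredCapacity=6,
)

# Scale in later by lowering the same value (e.g., back to 2).
```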
• Vertical Scaling (Scaling Up/Down): Increasing or decreasing the resources (CPU, RAM, storage) of a single server or machine.
Merits:
• Good for databases or legacy apps that can't run on multiple servers.
Demerits:
• Single point of failure—if that one server goes down, you're in trouble.
Example:
• Upgrading from a 4-core CPU to a 16-core CPU, or from 16 GB RAM to 64 GB on a single server.
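A hedged boto3 sketch of the same idea: resizing one EC2 instance to a larger instance type (scaling up). The instance ID and instance types are hypothetical, and the instance must be stopped before its type can be changed.

```python
# Minimal sketch of scaling up with boto3: stop an EC2 instance,
# switch it to a larger instance type, then start it again.
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # hypothetical instance ID

ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Move from a 4-vCPU to a 16-vCPU instance type.
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "m5.4xlarge"},
)
ec2.start_instances(InstanceIds=[instance_id])
```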
Manual Scaling:
• Manual scaling is just as it sounds. It requires an engineer to manage scaling up and out or down and
in. In the cloud, both vertical and horizontal scaling can be accomplished with the push of a button, so
the actual scaling isn’t terribly difficult when compared to managing a data center.
• However, because it requires a team member's attention, manual scaling cannot take into account all of the minute-by-minute fluctuations in demand.
• This also can lead to human error. An individual might forget to scale back down, leading to extra
charges.
Scheduled Scaling:
• Scheduled scaling solves some of the problems with manual scaling. This makes it easier to tailor your
provisioning to your actual usage without requiring a team member to make the changes manually
every day.
• If you know when peak activity occurs, you can schedule scaling based on your usual demand curve.
For example, you can scale out to ten instances from 5 p.m. to 10 p.m., then back into two instances
from 10 p.m. to 7 a.m., and then back out to five instances until 5 p.m. Look for a cloud management
platform with Heat Maps that can visually identify such peaks and valleys of usage.
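A minimal boto3 sketch of the schedule described above, assuming an existing Auto Scaling group named web-asg (a hypothetical name). Each recurring action changes the group's desired capacity at a fixed time of day; Recurrence uses cron syntax and is evaluated in UTC unless a time zone is given.

```python
# Minimal sketch of scheduled scaling with boto3: recurring actions that
# resize the group at fixed times every day (times/sizes are illustrative).
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="evening-peak",
    Recurrence="0 17 * * *",   # every day at 17:00
    DesiredCapacity=10,
)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="night-scale-in",
    Recurrence="0 22 * * *",   # every day at 22:00
    DesiredCapacity=2,
)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="daytime",
    Recurrence="0 7 * * *",    # every day at 07:00
    DesiredCapacity=5,
)
```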
Automatic Scaling:
• Automatic scaling (also known as Auto Scaling) is when your compute, database, and storage resources are scaled automatically by the cloud platform based on demand, according to rules or metrics you define.
• For example, AWS Auto Scaling adds instances when traffic spikes and removes them when it's low.
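A minimal boto3 sketch of automatic scaling, assuming the same hypothetical web-asg group: a target-tracking policy that keeps average CPU utilization near 50%, so instances are added when traffic spikes and removed when it drops.

```python
# Minimal sketch of automatic scaling with boto3: a target-tracking policy
# on an existing Auto Scaling group (names and target value are illustrative).
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```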
• Scaling your infrastructure means handling more load—but it also means spending more.
Vertical scaling costs:
• Cost Model: You pay more as you upgrade to larger machines (more CPU, RAM, etc.).
• Pricing Pattern: Costs grow non-linearly; a machine with 2x the power might cost more than 2x the price.
• When it makes sense: Short-term needs, quick fixes, or when horizontal scaling isn't possible.
Horizontal scaling costs:
• Cost Model: You pay per instance, so cost grows roughly in proportion to the number of instances you run.
• Best for: Handling traffic spikes without overpaying when demand is low.
| Scaling Type       | Users (Low Traffic) | Users (Peak Traffic) | Cost Implication                |
|--------------------|---------------------|----------------------|---------------------------------|
| No Scaling         | App is slow         | App may crash        | Low cost, poor performance      |
| Vertical Scaling   | Fast                | Slow or crash        | Medium cost, limited power      |
| Horizontal Scaling | Fast                | Fast                 | Higher cost, better performance |
Virtual Machines
• A virtual machine (VM) is a software-based computer that exists within another computer's operating system, often used for the purposes of testing, backing up data, or running SaaS applications. It runs its own operating system (OS) and applications just like a real computer—but it's hosted on virtualized hardware.
• One powerful physical computer (called a host) can run many VMs (called guests) at the same time, each sharing the physical machine's hardware resources.
• Each VM runs its own OS (Windows, Linux, etc.) and is completely isolated from other VMs.
• VMs are created and managed by a hypervisor, which comes in two types:
• Type 1 (bare-metal): Runs directly on the hardware (e.g., VMware ESXi, Microsoft Hyper-V)
• Type 2 (hosted): Runs on top of a host operating system (e.g., Oracle VirtualBox, VMware Workstation)
• Cloud providers like AWS, Azure, and Google Cloud offer virtual machines as a service, so you don't have to buy or maintain any physical hardware.
Examples:
• AWS EC2 (Elastic Compute Cloud), Azure Virtual Machines, Google Compute Engine
• You can: Choose your OS, Select the amount of CPU, RAM, and storage, Start, stop, or scale VMs as
needed.
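A minimal boto3 sketch of those operations: launching a VM (an EC2 instance) with a chosen OS image and size. The AMI ID and key pair name are hypothetical placeholders, and configured AWS credentials are assumed.

```python
# Minimal sketch of provisioning a VM with boto3: pick the OS (via an AMI),
# pick the instance size (CPU/RAM), and launch it.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # the OS image, e.g. an Ubuntu AMI (placeholder)
    InstanceType="t3.micro",          # CPU/RAM size
    KeyName="my-ssh-key",             # SSH key pair for login (placeholder)
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched VM:", instance_id)

# Stop or terminate it when no longer needed:
# ec2.stop_instances(InstanceIds=[instance_id])
# ec2.terminate_instances(InstanceIds=[instance_id])
```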
Docker Container
What is Docker?
• Docker is an open-source platform for developing, shipping, and running applications in lightweight, portable containers.
• Think of Docker as a tool that lets you package your app and everything it needs (code, libraries, dependencies) into a single, portable unit — a container — that can run anywhere: your laptop, a server, or the cloud.
What is a Docker Container?
• A Docker container is a lightweight, standalone, and executable package that includes everything needed to run an application:
• Code
• System tools
• Libraries
• Config files
Why Docker?
• Docker uses the host machine's OS kernel and runs containers as isolated processes.
• You can run multiple containers on the same machine without them interfering with each other.
• Key concepts:
| Term       | Meaning                                                |
|------------|--------------------------------------------------------|
| Image      | A read-only template used to create containers         |
| Container  | A running instance of an image                         |
| Dockerfile | A text file with instructions to build a Docker image  |
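A minimal sketch using the Docker SDK for Python (the docker package) that ties the three concepts together: build an image from a Dockerfile in the current directory, then run a container from it. The image tag and port mapping are illustrative, and a local Docker daemon is assumed.

```python
# Minimal sketch with the Docker SDK for Python (pip install docker).
import docker

client = docker.from_env()  # talk to the local Docker daemon

# Image: a read-only template built from a Dockerfile.
image, _logs = client.images.build(path=".", tag="myapp:latest")

# Container: a running instance of that image.
container = client.containers.run(
    "myapp:latest",
    detach=True,
    ports={"80/tcp": 8080},  # map host port 8080 -> container port 80
)
print(container.short_id, container.status)
```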
VM vs Docker Container
Note:
• Containers share the same underlying OS kernel of the host system, so you can’t run entirely different
OS types (like Windows and Linux) side-by-side in containers on the same server
• You can’t run a Windows container on a Linux host natively, or vice versa.
• If You Want Full OS Flexibility use Virtual Machines (VMs) instead of containers.
• Figure: integration of VMs and Docker containers for running multiple operating systems on a single host.
What is Kubernetes?
• Kubernetes is an open-source platform for automating the deployment, scaling, and management of containerized applications.
• Imagine running an app made of 50 microservices, each in its own container: someone has to make sure they all start on time, don't crash, and scale when needed.
• Kubernetes solves this by handling: Container deployment, Load balancing, Auto-scaling, Health checks
1. Cluster: A group of machines (physical or virtual) that Kubernetes uses to run your containers.
2. Node: A single machine (physical or virtual) in the cluster that actually runs the pods.
3. Pod: The smallest unit in Kubernetes. A pod contains one or more containers that share the same network and storage.
4. Deployment: A blueprint that defines how many replicas (copies) of a pod you want and how to manage updates (a minimal code sketch follows this list).
5. Service: A stable way to access pods. It acts like a load balancer and makes sure traffic gets to the right pods.
Example Flow
1. You package your app as a Docker image and push it to a container registry.
2. You create a deployment that tells Kubernetes how many replicas of the app to run.
3. Kubernetes schedules the pods onto nodes in the cluster and keeps them running.
4. You create a service so users can access your app via a single IP/URL.
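Continuing the flow, a minimal sketch of step 4 with the same Python client: a Service that gives the pods a single stable address and load-balances traffic across them. Names, ports, and the service type are illustrative.

```python
# Minimal sketch: expose the pods from the deployment above via a Service.
from kubernetes import client, config

config.load_kube_config()

service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="web-service"),
    spec=client.V1ServiceSpec(
        selector={"app": "web"},  # route traffic to pods with this label
        ports=[client.V1ServicePort(port=80, target_port=80)],
        type="LoadBalancer",      # ask the cloud provider for an external IP
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```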
Benefits of Kubernetes
1. Automated deployment and scaling of containers
2. Self-healing apps (failed containers are restarted automatically)
3. Built-in load balancing and service discovery
Ethernet and Switches
• In cloud computing, Ethernet and network switches still play critical roles — but in ways that are largely hidden from the end user, inside the cloud provider's data centers.
• Ethernet is a standard for networking that defines how data is transmitted over physical media (usually
cables like Cat5e, Cat6). It’s used for local area networks (LANs) and is the foundation of much of
today's network infrastructure — including in data centers that power the cloud.
• Switches are networking devices that connect multiple devices on a LAN and use MAC addresses to forward data to the correct destination device.
Ethernet
• Local Area Networks (LANs): Ethernet is the fundamental technology for wired connections within
data centers, which are the physical backbone of cloud infrastructure. It allows servers, storage
systems, and networking devices to communicate with each other at high speeds.
• High-Speed Data Transfer: Cloud environments rely on the rapid movement of massive amounts of data. Ethernet provides the necessary bandwidth and low latency for efficient data transfer between servers, storage systems, and data centers.
Ethernet
• Evolving Standards: While initially used for LANs, Ethernet has evolved significantly with faster
speeds (Gigabit Ethernet, 10 Gigabit Ethernet, and beyond) and wider area network (WAN) capabilities
(Carrier Ethernet). This makes it crucial for connecting different data centers and providing connectivity
to users.
• Private Connections: Ethernet provides secure and reliable private connections, essential for the security and performance requirements of cloud environments.
• Underlying Technology: Even with the rise of wireless technologies, Ethernet remains a critical
underlying technology in data centers to ensure stable and high-performance connections for core
infrastructure.
Ethernet speeds
• Fast Ethernet: Supports speeds up to 100 Mbps, commonly used in older networks.
• Gigabit Ethernet: Supports speeds up to 1 Gbps, the common standard for modern LANs.
• 10 Gigabit Ethernet: Offers speeds of 10 Gbps, commonly used in high-speed enterprise networks and data centers.
• 25/40/100 Gigabit Ethernet: Supports speeds of 25 Gbps, 40 Gbps, and 100 Gbps, respectively, used in data centers and cloud backbone networks.
Ethernet cabling
• Twisted-Pair Cables: Commonly used, especially in LANs, with different categories (Cat5, Cat5e, Cat6, Cat6a) supporting different speeds and distances.
• Fiber-Optic Cables: Provide high bandwidth and are suitable for long-distance communication and high-speed data center links.
• Coaxial Cables: Less common now, but used in older networks and for cable internet access.
• Cloud networks rely on Ethernet for connecting servers, storage systems, and networking equipment.
• High-speed Ethernet (Gigabit Ethernet, 10 Gigabit Ethernet, and higher) is crucial for efficient data transfer within and between data centers.
• Ethernet is used in cloud services like Elastic Load Balancing (ELB) to distribute traffic across multiple
instances.
Switches
• Connecting Devices: Ethernet switches are networking devices that connect multiple devices (servers,
storage, routers) within a data center network. They act as central points for all the wired connections.
• Efficient Data Forwarding: Unlike older hubs that broadcast data to all connected devices, switches
learn the Media Access Control (MAC) addresses of connected devices and forward data packets only
to the intended destination. This significantly reduces network congestion and improves performance.
• Network Segmentation: Switches enable the creation of Virtual Local Area Networks (VLANs). VLANs logically segment the network, allowing for better organization, security, and traffic management within the data center.
Switches
• Redundancy and High Availability: Modern switches often support features like link aggregation and spanning tree protocols, which provide redundancy and prevent network loops, ensuring high availability.
• Scalability: As cloud environments grow, switches provide the necessary scalability to connect
increasing numbers of devices and handle higher traffic loads. Different types of switches (access,
aggregation, core, data center) are used at various levels of the network hierarchy to manage traffic
effectively.
Switches
• Cloud-Managed Switches: Some switches are specifically designed for cloud environments, offering features like centralized management, remote configuration, and enhanced monitoring capabilities.
• Virtual Switches: These operate within virtual machine environments, providing networking capabilities
for VMs.
• Routing Switches: Also known as Layer 3 switches, they can route traffic between different network
segments.
• PoE (Power over Ethernet) Switches: Deliver power to devices like wireless access points over
Ethernet cables.