
FOG AND EDGE COMPUTING

The Internet of Things (IoT) refers to the network of interconnected devices,
sensors, and objects that collect and exchange data over the internet without
requiring human intervention. These devices are embedded with electronics,
software, and sensors that enable them to connect, communicate, and interact
with each other, as well as with cloud-based applications and services.
Here's an overview of IoT:
1. Devices and Sensors: IoT devices can range from simple sensors and
actuators to complex smart appliances and industrial machinery. These
devices are equipped with sensors to collect data from the environment,
such as temperature, humidity, light, motion, and more.
2. Connectivity: IoT devices typically communicate using various connectivity
technologies, including Wi-Fi, Bluetooth, Zigbee, RFID, cellular networks
(2G, 3G, 4G, and now 5G), and satellite. The choice of connectivity depends
on factors such as range, power consumption, bandwidth, and cost.
3. Data Processing and Analytics: The data collected by IoT devices is
transmitted to centralized servers or cloud platforms for processing,
analysis, and storage. This data can be analyzed in real-time to derive
insights, detect patterns, predict trends, and optimize processes.
4. Applications and Services: IoT enables a wide range of applications and
services across various industries, including:
• Smart Homes: IoT devices such as smart thermostats, lights, security
cameras, and appliances enable homeowners to automate and
control their home environment remotely.
• Smart Cities: IoT is used to monitor and manage city infrastructure,
including transportation systems, energy grids, waste management,
public safety, and environmental monitoring.
• Industrial IoT (IIoT): IoT sensors and devices are deployed in
industrial settings to monitor equipment performance, optimize
processes, improve efficiency, and enable predictive maintenance.
• Healthcare: IoT devices such as wearable fitness trackers, remote
patient monitoring systems, and medical sensors enable personalized
healthcare monitoring and management.

LOAD BALANCING + BEFORE AND AFTER F&E COMPUTING


SMALL BUSINESS NETWORK SETUP
1. User mobility refers to the movement of users between different network access points
while maintaining continuous connectivity and seamless access to network services.
a. The solution is to ensure smooth handovers between different network access
points, minimizing latency and packet loss.
2. In ensuring reliability:
a. High Availability: Ensuring that network services and resources are consistently
available and accessible to users, minimizing downtime and disruptions.
b. Resilience: Building networks that can withstand and recover from unexpected
events, such as network congestion, equipment failures, or cyberattacks.
Resilient networks leverage techniques like load balancing, traffic engineering,
and rapid recovery protocols to maintain performance and functionality under
adverse conditions.
3. In ensuring service mobility:
a. User Mobility: Supporting seamless mobility for users as they move between
different network environments or access technologies, such as transitioning
between Wi-Fi and cellular networks or roaming between different geographic
regions.
4. Multiple administrative domains:
a. Determining the most suitable edge node for end-user device services involves
selecting the nearest server or considering potential servers in other geographic
regions.
b. Despite network heterogeneity, user experience integrity must be maintained
while hiding the technical complexities from end-user devices.
c. Addressing these challenges requires a solution blending centralized and
distributed system characteristics to achieve a global network view and
synchronization.
5. SDN (software-defined networking) offers a solution to these networking challenges by
separating the control plane (the decision-making plane) from the data plane (the
packet-forwarding plane) while maintaining a centralized view of network resources
(a toy sketch of this split appears after this list).
a. This approach simplifies network management and improves efficiency by
utilizing resources more effectively.
b. The control plane communicates with the network nodes via the OpenFlow
protocol, a standardized communication protocol that enables centralized control
of the switches and routers within an SDN.
c. To meet the demands of edge computing, SDN's control plane should be
distributed to support orchestration with multiple control instances.
6. Low-overhead virtualization refers to a virtualization approach that minimizes the
additional computational, memory, and performance costs typically associated with
running virtualized environments.
7. What is a federated edge environment? (Federated: an organization or system
composed of multiple individual entities.)
a. It is a distributed computing architecture where computing resources, services,
and data are spread across multiple edge computing nodes.
b. Discovering Edge Resource also poses a management challenge. Groups of edge
nodes in specific geographic locations must be visible to each other to facilitate
efficient resource sharing and coordination.
c. Deploying services and applications on the edge presents a management
challenge. Determining the suitability of edge nodes for service deployment
involves assessing factors such as resource availability and anticipated workload
to ensure optimal performance

d. Migrating services across the edge poses a management challenge due to
differences in technology and resource constraints compared to cloud
environments.
i. The shortest network path for migrating services between edge nodes
must be determined, considering real-time constraints and resource
availability, to ensure efficient service migration.
e. Load balancing at the edge presents a management challenge, particularly when
dealing with varying levels of service subscriptions and resource allocations.
i. Traditional monitoring methods are inadequate due to resource
constraints on edge nodes.
ii. Mechanisms must be established to ensure fair workload distribution by
scaling resources for heavily subscribed services while reallocating
resources from dormant ones, maintaining workload integrity.
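
To make the control-plane/data-plane separation in point 5 concrete, here is a minimal toy sketch in Python. It models the idea only; it does not implement the actual OpenFlow wire protocol, and all class names, fields, and addresses are illustrative assumptions.

```python
# Toy illustration of the SDN control/data-plane split (not the real
# OpenFlow protocol): switches only match packets against flow tables;
# a controller makes every forwarding decision and installs the rules.

class Switch:
    """Data plane: forwards packets by flow-table lookup only."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # match field (dst) -> output port

    def handle_packet(self, dst, controller):
        if dst in self.flow_table:    # fast path: rule already installed
            return self.flow_table[dst]
        # Table miss: ask the control plane for a decision ("packet-in").
        port = controller.decide(self.name, dst)
        self.flow_table[dst] = port   # controller installs the rule
        return port

class Controller:
    """Control plane: holds the global network view, computes routes."""
    def __init__(self, topology):
        self.topology = topology      # global view: (switch, dst) -> port

    def decide(self, switch, dst):
        return self.topology.get((switch, dst), 0)  # 0 = default port

ctrl = Controller({("s1", "10.0.0.2"): 3})
s1 = Switch("s1")
print(s1.handle_packet("10.0.0.2", ctrl))  # 3 (rule now cached in s1)
```

The point of the split is visible in the code: the switch never computes a route itself; every decision comes from the controller's global view, and the installed flow rule is just a cached result of that decision.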

HCI stands for "Hyperconverged Infrastructure." It is a software-defined IT infrastructure
framework that integrates compute, storage, networking, and virtualization resources into a
single, integrated platform, typically managed through a centralized management interface.
8. What is the point of using edge servers?
a. Reducing Latency: Edge servers are strategically positioned closer to end-users,
devices, and data sources at the network edge. This proximity minimizes the
time it takes for data to travel between the source and the server, thereby
reducing latency. Lower latency results in faster response times for applications
and services, improving user experience, especially for latency-sensitive
applications like real-time communication, gaming, and IoT.
b. Processing Data Locally: Edge servers enable data processing and analysis to
occur closer to where data is generated. This localized processing helps alleviate
the burden on centralized data centers and cloud environments, reducing the
need to transmit large volumes of data over long distances. By processing data
locally, edge servers can improve scalability, efficiency, and cost-effectiveness
for applications that require real-time or near-real-time insights.
c. Enhancing Reliability and Resilience: Edge servers contribute to the overall
reliability and resilience of distributed systems by providing redundancy and
fault tolerance at the network edge. By distributing computing resources across
multiple edge locations, organizations can mitigate the impact of network
outages, hardware failures, or other disruptions.
d. Enabling Content Delivery: Edge servers are often used in content delivery
networks (CDNs) to cache and deliver content closer to end-users. By distributing
content across edge servers located in different geographic regions, CDNs can
reduce latency, improve performance, and optimize bandwidth usage for web
applications, media streaming, and software distribution. Edge servers play a
crucial role in delivering static and dynamic content efficiently to users
worldwide.
9. ENORM (Edge NOde Resource Management): primarily addresses deployment and load-
balancing challenges at individual edge nodes.
a. Provides decentralized control: it does not rely on a master controller to manage
edge nodes; instead, it assumes edge nodes are visible to cloud servers.
b. Provides a provisioning mechanism: robust provisioning that facilitates workload
deployment from cloud servers to edge servers.
c. Enhances QoS: by partitioning cloud server resources and offloading them to
edge nodes, ENORM enhances the overall quality of service for applications,
optimizing performance and resource utilization.
10. A sink device, in the context of networking and communication, refers to a device that
receives data or information from other devices or sources but typically does not
actively transmit data. Sink devices are often endpoints or destinations within a
communication network where data is collected, processed, or consumed.
11. In the context of edge systems, network slicing involves partitioning the network
infrastructure to provide customized connectivity and services for edge computing
applications.
Use cases:
a. IoT Applications: Network slicing facilitates tailored connectivity for IoT devices,
ensuring efficient communication and data transfer in IoT deployments.
b. Low-Latency Services: Supports the delivery of low-latency services such as real-
time analytics, augmented reality (AR), and virtual reality (VR) by prioritizing
network resources.
c. Mobile Edge Computing (MEC): Network slicing enables MEC platforms to create
dedicated network slices for edge applications, enhancing performance and
scalability

5G slicing
5G network slicing is a revolutionary concept that enables the partitioning of a single physical
5G network infrastructure into multiple virtual networks, each tailored to specific applications,
services, or customers. It allows network operators to allocate dedicated virtualized resources
such as bandwidth, computing power, and storage on-demand, according to the unique
requirements of different use cases.
Here's how it works:
1. Virtualization: The underlying physical infrastructure of a 5G network is virtualized using
technologies like software-defined networking (SDN) and network function
virtualization (NFV). This enables the creation of multiple virtual networks or "slices" on
top of the shared physical infrastructure.
2. Slice Creation: Each network slice is created based on specific requirements such as
latency, bandwidth, security, and reliability. For example, a network slice for
autonomous vehicles might prioritize low latency and high reliability, while a slice for IoT
devices might prioritize low power consumption and massive connectivity.
3. Resource Allocation: Once the slices are defined, the network operator allocates
resources dynamically to meet the performance objectives of each slice. This includes
bandwidth, computing resources, storage, and network functions.
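As a rough illustration of steps 2 and 3, the sketch below expresses slices as plain data with per-slice QoS targets and selects a suitable slice for a request. The slice names, field names, and numbers are illustrative assumptions, not a standardized schema.

```python
# Minimal sketch of 5G network slices expressed as data, with a simple
# selection function. All names and figures are illustrative.

SLICES = {
    "autonomous_vehicles": {"max_latency_ms": 5,   "min_bandwidth_mbps": 50,
                            "reliability": 0.99999},
    "massive_iot":         {"max_latency_ms": 500, "min_bandwidth_mbps": 1,
                            "reliability": 0.99},
    "video_streaming":     {"max_latency_ms": 100, "min_bandwidth_mbps": 25,
                            "reliability": 0.999},
}

def pick_slice(required_latency_ms, required_bandwidth_mbps):
    """Return names of slices whose guarantees meet the request."""
    return [name for name, qos in SLICES.items()
            if qos["max_latency_ms"] <= required_latency_ms
            and qos["min_bandwidth_mbps"] >= required_bandwidth_mbps]

print(pick_slice(10, 25))  # ['autonomous_vehicles']
```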
Optimization Opportunities along the Fog Architecture
• Edge Resource Allocation: Distribute resources effectively across edge devices to
minimize latency and maximize performance. Use intelligent algorithms to allocate tasks
based on proximity and available resources.
• Task Offloading: Offload tasks from resource-constrained edge devices to more
powerful nodes within the Fog network. Employ dynamic task offloading strategies
based on real-time conditions and priorities (a minimal sketch follows this list).
• Load Balancing: Implement load balancing mechanisms to evenly distribute workloads
among Fog nodes. This prevents overload on specific nodes and ensures optimal
resource utilization.
• Network Optimization: Optimize network communication protocols and routing
algorithms to reduce latency and improve bandwidth utilization within the Fog network.
Utilize techniques such as edge caching and content delivery networks (CDNs) to serve
frequently accessed data locally.
• Security Measures: Implement robust security measures to protect data and resources
within the Fog network. This includes encryption, authentication, and intrusion
detection systems to safeguard against cyber threats and unauthorized access.
• Scalability: Design Fog architectures to be scalable, allowing for easy expansion as the
network grows. Employ modular and flexible designs that can accommodate changes in
workload and resource availability.
• Energy Efficiency: Develop energy-efficient algorithms and protocols to minimize power
consumption in edge devices. This includes techniques such as dynamic voltage and
frequency scaling (DVFS) and sleep modes to optimize energy usage without sacrificing
performance.
• Fault Tolerance: Implement fault-tolerant mechanisms to ensure continuous operation
in the event of node failures or network disruptions. This includes redundancy, failover
mechanisms, and automated recovery procedures to maintain system reliability.
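
As a minimal sketch of the dynamic task offloading mentioned above, the following Python function compares an estimated local completion time against transfer-plus-compute time on a fog node. The linear cost model and all parameter values are simplifying assumptions.

```python
# Minimal sketch of a dynamic task-offloading decision: run the task
# locally or ship it to a fog node, whichever finishes sooner.

def completion_time(cycles, cpu_hz, data_bits=0, link_bps=None):
    """Estimated time = optional transfer time + compute time."""
    transfer = data_bits / link_bps if link_bps else 0.0
    return transfer + cycles / cpu_hz

def should_offload(task_cycles, data_bits, local_hz, fog_hz, link_bps):
    local = completion_time(task_cycles, local_hz)
    remote = completion_time(task_cycles, fog_hz, data_bits, link_bps)
    return remote < local

# A 2-gigacycle task with 8 Mb of input, weak device, strong fog node:
print(should_offload(2e9, 8e6, local_hz=1e9, fog_hz=8e9, link_bps=50e6))
# True: 0.16 s transfer + 0.25 s remote compute beats 2.0 s locally
```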
Optimization Opportunities along the Service Life Cycle
• Define Requirements: At the design stage of fog services, optimization focuses mainly
on the cloud and edge layers of the architecture due to limited information about end
devices. Decisions are made based on device types rather than specific instances.
Design-time optimization aims to maximize efficiency and performance based on
available architectural knowledge.
• Design and Deployment-time: During service deployment, optimization decisions can
leverage detailed information about available resources. For example, knowledge of
edge resource capacity allows for task allocation between cloud and edge resources to
be optimized. Deployment-time optimization aims to fine-tune resource utilization and
task distribution for optimal performance.
• Run-time Monitor and Analyze: Many critical optimization aspects become apparent
only during system operation. Factors such as specific end device capabilities and
dynamic task offloading patterns influence optimization decisions. Run-time
optimization involves continuous monitoring, analysis, and adjustment of system
parameters to maintain effectiveness and efficiency. Algorithms must be fast due to
limited optimization time during runtime, focusing on adapting existing setups rather
than redesigning from scratch.
• Importance of optimization: Run-time optimization plays a crucial role in fog computing
systems, as it allows for dynamic adjustments to changing conditions and requirements.
However, it poses challenges such as limited optimization time and the need to consider
costs associated with system changes. Effective run-time optimization ensures that fog
services operate at peak performance and efficiency, maximizing the benefits of
distributed computing architectures.
Q) How does a formal modeling framework for fog computing help address optimization
challenges and improve system efficiency?
Formal Modeling Framework for Fog Computing
Components and Benefits of a Formal Modeling Framework:
1. System Representation:
o Modeling Components: Define the components of the fog computing system,
including edge devices, fog nodes, cloud servers, network links, and data flows.
o Interactions and Dependencies: Represent interactions between components,
such as data exchanges, processing dependencies, and communication protocols.
2. Optimization Objectives:
o Resource Utilization: Optimize the allocation and utilization of computational,
storage, and network resources.
o Latency Minimization: Minimize data processing and transmission delays to
improve response times.
3. Constraint Definition:
o Capacity Constraints: Define the limits of resources (CPU, memory, bandwidth)
for each node.
o QoS Requirements: Specify Quality of Service (QoS) requirements, such as
latency thresholds, reliability, and availability.
o Security Constraints: Incorporate security measures, including data privacy and
access control.
4. Algorithm Development:
o Optimization Algorithms: Develop algorithms to solve specific optimization
problems, such as task scheduling, load balancing, and data placement.
o Simulation and Validation: Use simulations to validate the effectiveness of the
algorithms under various scenarios and workloads.
5. Analytical Tools:
o Performance Analysis: Analyze system performance using metrics like
throughput, latency, resource utilization, and energy consumption.
o Bottleneck Identification: Identify and address bottlenecks in the system that
hinder performance.
6. Scenario Planning:
o What-if Analysis: Perform what-if analysis to evaluate the impact of different
configurations, workloads, and failures on system performance.
o Scalability Testing: Test the scalability of the system by modeling different levels
of demand and resource availability.
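As a toy illustration of how the constraint definitions and optimization objectives above might be exercised in code, the sketch below places tasks greedily on nodes while respecting CPU capacity and per-task latency bounds. Node and task fields are illustrative assumptions; a real framework would use a proper solver or simulator.

```python
# Toy sketch of the constraint side of a formal fog model: greedy task
# placement under CPU-capacity and latency constraints, preferring the
# lowest-latency feasible node (latency minimization objective).

nodes = [  # each node: free CPU units and its network latency to users
    {"name": "edge-1",  "cpu": 4,  "latency_ms": 5},
    {"name": "fog-1",   "cpu": 8,  "latency_ms": 20},
    {"name": "cloud-1", "cpu": 64, "latency_ms": 90},
]

tasks = [  # each task: CPU demand and maximum tolerable latency (QoS)
    {"name": "video-analytics", "cpu": 3, "max_latency_ms": 10},
    {"name": "batch-report",    "cpu": 6, "max_latency_ms": 200},
]

def place(tasks, nodes):
    plan = {}
    for t in tasks:
        # Feasible nodes: enough free CPU, latency within the QoS bound.
        ok = [n for n in nodes
              if n["cpu"] >= t["cpu"]
              and n["latency_ms"] <= t["max_latency_ms"]]
        if not ok:
            plan[t["name"]] = None       # model flags an infeasible task
            continue
        best = min(ok, key=lambda n: n["latency_ms"])
        best["cpu"] -= t["cpu"]          # capacity-constraint bookkeeping
        plan[t["name"]] = best["name"]
    return plan

print(place(tasks, nodes))
# {'video-analytics': 'edge-1', 'batch-report': 'fog-1'}
```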
Q) How does FEC address issues like scalability, security, cognition, agility, latency, and
efficiency?
Scalability
• Hierarchical Resource Distribution: FEC allows computational tasks to be distributed
across multiple layers of the network, from local edge devices to intermediary fog nodes
and up to the cloud. This hierarchical distribution ensures that resources can be scaled
up or down depending on demand.
• Dynamic Resource Allocation: FEC systems can dynamically allocate resources based on
current workloads. This dynamic allocation helps in efficiently managing resources
during peak times and scaling down when demand is low (a small autoscaling sketch
follows these points).
• Load Balancing: By implementing load balancing techniques, FEC can evenly distribute
tasks across multiple nodes, preventing any single node from becoming a bottleneck and
thus supporting scalable operations.
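A small sketch of the dynamic resource allocation point, under the assumption that each service replica handles a fixed request rate (the capacity figure and bounds below are illustrative):

```python
# Small sketch of dynamic resource allocation: derive a replica count
# from the observed request rate, scaling out under load and scaling
# in when demand is low.

import math

def desired_replicas(requests_per_s, capacity_per_replica=100,
                     min_replicas=1, max_replicas=20):
    needed = math.ceil(requests_per_s / capacity_per_replica)
    return max(min_replicas, min(max_replicas, needed))

print(desired_replicas(40))   # 1  -> scaled down in a quiet period
print(desired_replicas(950))  # 10 -> scaled up for a peak
```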
Security
• Local Data Processing: Processing data closer to its source (at the edge or fog nodes)
minimizes the amount of data that needs to travel across the network, reducing the
exposure to potential attacks during transmission.
• Data Encryption and Authentication: FEC systems can implement robust encryption and
authentication mechanisms at the edge and fog levels, ensuring secure data handling
and transmission.
Cognition
• Real-time Data Processing: FEC supports real-time data analysis and decision-making,
allowing systems to respond to events as they occur. This capability is crucial for
applications like autonomous vehicles, industrial automation, and healthcare
monitoring.
• Context-aware Services: FEC can offer context-aware services by processing data locally
and taking immediate actions based on the current context and environment.
Agility
• Adaptive Load Management: FEC systems can adaptively manage loads by shifting tasks
between nodes based on real-time conditions, ensuring optimal performance even in
dynamic environments.
Q) Explain the concept of collaborative edge computing and how it leverages multiple
geographically distributed edge nodes for data sharing and resource collaboration.
• Decentralization: Processing is distributed across numerous edge nodes rather than
relying solely on centralized cloud servers.
• Inter-node Communication: Edge nodes communicate with each other to share data,
tasks, and resources.
• Resource Pooling: Collective resources (computational power, storage, etc.) of multiple
edge nodes are pooled together to handle workloads efficiently.
Benefits of Collaborative Edge Computing
1. Improved Latency and Response Time: By processing data closer to the source, edge
computing significantly reduces latency compared to centralized cloud computing.
Collaborative edge computing further improves this by allowing nodes to work together,
reducing the need for data to travel long distances.
2. Enhanced Scalability: The distributed nature of edge nodes allows the system to scale
more effectively. New nodes can be added to the network as demand increases,
ensuring continuous service availability and load distribution.
3. Fault Tolerance and Reliability: Collaborative edge computing provides better fault
tolerance. If one node fails, others can take over its tasks, ensuring system reliability and
continuous operation.
4. Optimized Resource Utilization: By leveraging resources from multiple nodes,
collaborative edge computing ensures optimal utilization of available computational
power and storage, preventing bottlenecks and underutilization.
Leveraging Geographically Distributed Edge Nodes
To understand how collaborative edge computing leverages geographically distributed edge
nodes, consider the following aspects:
1. Data Sharing:
o Local Data Aggregation: Edge nodes collect and aggregate data from nearby
devices. This data can be processed locally to reduce the volume of data that
needs to be sent to centralized servers.
2. Resource Collaboration:
o Task Offloading: Edge nodes can offload tasks to each other based on their
current workload and resource availability. This ensures that no single node
becomes a bottleneck and that resources are used efficiently (sketched after
this list).
o Collaborative Processing: Multiple nodes can work together on complex tasks.
For example, a video surveillance system might split video analysis tasks across
several edge nodes to speed up processing and reduce latency.
3. Geographically Aware Applications:
o Location-based Services: Applications that require geographic awareness, such
as augmented reality, geofencing, and location-based analytics, benefit from
collaborative edge computing. Edge nodes can provide localized processing and
insights, tailored to specific geographic regions.
o Environmental Monitoring: In scenarios like environmental monitoring, edge
nodes distributed across different locations can collect and analyze data locally,
providing real-time insights and collaborative analysis of environmental
conditions.
4. Data Privacy and Security:
o Local Data Processing: Sensitive data can be processed locally on edge nodes,
minimizing the need to transmit it over the network and reducing exposure to
potential security threats.
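As a minimal sketch of the task offloading and resource pooling described above: each node advertises its load, and an overloaded node hands a task to its least-loaded peer. The threshold and all names are illustrative assumptions.

```python
# Minimal sketch of resource collaboration between edge nodes: an
# overloaded node offloads a task to the least-loaded of its peers.

class EdgeNode:
    def __init__(self, name, load=0.0):
        self.name, self.load, self.peers = name, load, []

    def advertise(self):
        # Stand-in for the gossip/heartbeat messages nodes would exchange.
        return {"name": self.name, "load": self.load}

    def run_or_offload(self, task_cost, threshold=0.8):
        if self.load + task_cost <= threshold:
            self.load += task_cost
            return self.name                        # run locally
        target = min(self.peers, key=lambda p: p.load)  # least-loaded peer
        target.load += task_cost
        return target.name                          # collaborative offload

a, b, c = EdgeNode("a", 0.7), EdgeNode("b", 0.2), EdgeNode("c", 0.5)
a.peers = [b, c]
print(a.run_or_offload(0.3))  # 'b' -> task offloaded to least-loaded peer
```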
Example Use Cases
1. Smart Cities:
o Traffic Management: Edge nodes deployed at traffic lights and intersections can
share data and collaborate to optimize traffic flow, reduce congestion, and
respond to incidents in real-time.
2. Healthcare:
o Remote Patient Monitoring: Edge nodes in hospitals and clinics can collaborate
to monitor patients' vital signs in real-time, share critical health data, and provide
immediate responses to emergencies.
Q) How would you implement load balancing for an edge computing application in a smart
city to ensure efficient resource utilization and low-latency response times across distributed
edge servers?
Implementing Load Balancing in a Smart City
Steps and Techniques for Implementation:
1. Dynamic Resource Monitoring:
o Performance Indicators: Track performance indicators like response times,
throughput, and error rates.
2. Load Balancing Algorithms:
o Round Robin: Distribute tasks cyclically across edge servers. This is simple but
may not account for varying server loads.
o Resource-Based Allocation: Allocate tasks based on current resource availability.
For example, direct tasks to servers with the most available CPU or memory.
o Geographic Proximity: Route tasks to the nearest edge server to minimize
latency. (These three strategies are sketched after this list.)
3. Edge Node Coordination:
o Distributed Coordination: Use protocols like the Gossip protocol to enable edge
nodes to share their status and load information.
o Centralized Controller: Implement a central controller to make real-time load
balancing decisions based on global knowledge of the network state.
4. Task Scheduling and Offloading:
o Predictive Scheduling: Use machine learning to predict future load patterns and
proactively schedule tasks.
o Task Offloading: Offload tasks to other less loaded nodes or even to the cloud if
local resources are insufficient.
5. Latency Optimization:
o Edge Caching: Cache frequently accessed data at the edge nodes to reduce
retrieval time and network congestion (a small LRU cache sketch closes this
section).
o Prioritization of Critical Tasks: Implement priority queuing to ensure time-
sensitive tasks are processed with higher priority.
6. Redundancy and Fault Tolerance:
o Replication: Replicate critical tasks and data across multiple edge nodes to
ensure reliability.
o Failover Mechanisms: Implement failover strategies to reroute tasks from failed
nodes to healthy ones without interruption.
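The sketch below illustrates the three load-balancing strategies from step 2 (round robin, resource-based allocation, and geographic proximity) side by side. Server names, positions, and capacities are illustrative assumptions.

```python
# Minimal sketch of three load-balancing strategies for distributing
# tasks across edge servers: round robin, resource-based, proximity.

import itertools
import math

servers = [
    {"name": "edge-north", "free_cpu": 2, "pos": (0.0, 1.0)},
    {"name": "edge-south", "free_cpu": 6, "pos": (0.0, -1.0)},
    {"name": "edge-east",  "free_cpu": 4, "pos": (1.0, 0.0)},
]

rr = itertools.cycle(servers)

def round_robin():
    return next(rr)["name"]            # cycle through servers, ignore load

def resource_based():
    return max(servers, key=lambda s: s["free_cpu"])["name"]

def nearest(client_pos):
    return min(servers,
               key=lambda s: math.dist(s["pos"], client_pos))["name"]

print(round_robin())        # edge-north
print(resource_based())     # edge-south (most free CPU)
print(nearest((0.9, 0.1)))  # edge-east (closest to the client)
```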
Example Scenario: Traffic Management System
• Sensors and Cameras: Deployed at intersections to monitor traffic flow.
• Edge Nodes: Process data locally to detect congestion and accidents.
• Load Balancer: Distributes data processing tasks based on current load and proximity to
ensure timely responses.
• Central Controller: Coordinates data from multiple nodes to provide a city-wide view of
traffic conditions and optimize traffic signals in real-time.
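Finally, a small sketch of the edge caching mentioned under Latency Optimization: a fixed-size LRU store that serves hot data locally and evicts the least recently used entry when full. The capacity and key names are illustrative.

```python
# Small sketch of an edge cache: a fixed-size LRU store that keeps
# frequently accessed data at the edge node.

from collections import OrderedDict

class EdgeCache:
    def __init__(self, capacity=3):
        self.capacity, self.store = capacity, OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None                     # miss -> fetch from origin/cloud
        self.store.move_to_end(key)         # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

cache = EdgeCache()
cache.put("map-tile-42", b"...")
print(cache.get("map-tile-42") is not None)  # True: served from the edge
```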
