Data Link Layer
Sublayers of the Data Link Layer:
1. Logical Link Control (LLC) Sublayer:
○ Responsible for managing communication between the network layer
and the data link layer.
○ Provides error checking and control, as well as flow control.
○ Allows multiple network protocols to operate over the same physical
medium.
2. Media Access Control (MAC) Sublayer:
○ Manages protocol access to the physical network medium.
○ Determines how devices on the network communicate and access the
shared medium.
○ Handles addressing and the framing of data packets.
Ethernet: Ethernet is the traditional technology for connecting devices in a wired
local area network (LAN) or wide area network. It enables devices to communicate
with each other via a protocol, which is a set of rules or common network
language.
Advantages of Ethernet
Ethernet has many benefits for users, which is why it grew so popular. Here are
some of the common benefits of Ethernet:
● Relatively low cost.
● Generally resistant to noise.
● Good data transfer quality.
● Speed.
● Reliability.
● Data security, as common firewalls can be used.
Disadvantages of Ethernet
Despite its widespread use, Ethernet does have its share of disadvantages, such as
the following:
● Intended for smaller, shorter-distance networks.
● Limited mobility.
● Use of longer cables can create cross-talk.
● Doesn't work well with real-time or interactive applications.
● Speeds decrease with increased traffic.
How it works: Ethernet is a widely used networking technology that facilitates
communication between devices in a local area network (LAN). Here’s a
simplified explanation of how it works:
1. Basic Structure
● Frames: Data is transmitted in units called frames, which contain source and
destination MAC (Media Access Control) addresses, as well as error-
checking information.
● Medium: Ethernet can use various physical media, including twisted pair
cables (like Cat5e or Cat6), fiber optics, or coaxial cables.
2. Communication Process
● Sending Data: When a device wants to send data, it creates an Ethernet
frame containing the necessary information and data payload.
● Addressing: Each device on the network has a unique MAC address. The
sender’s MAC address is included in the frame, along with the recipient's
MAC address.
● Collision Detection: Ethernet uses a protocol called Carrier Sense Multiple
Access with Collision Detection (CSMA/CD). This means that before a
device transmits data, it "listens" to the network to check if it’s clear. If two
devices transmit simultaneously, a collision occurs, and both devices will
stop, wait a random amount of time, and try again.
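The "wait a random amount of time" step uses truncated binary exponential backoff: after each successive collision, the range of possible waits doubles. A minimal sketch in Python (slot counts are illustrative, not a full 802.3 implementation):

```python
import random

def backoff_slots(attempt, max_exponent=10):
    """After the n-th collision, a station waits a random number of slot
    times chosen uniformly from 0 .. 2^k - 1, where k = min(n, 10).
    This is the truncated binary exponential backoff of classic CSMA/CD."""
    k = min(attempt, max_exponent)
    return random.randint(0, 2 ** k - 1)

# Each repeated collision doubles the waiting range, making it less
# likely that the two stations pick the same slot again.
for attempt in (1, 2, 3):
    print(f"after collision {attempt}: wait 0..{2 ** min(attempt, 10) - 1} slots,"
          f" e.g. {backoff_slots(attempt)}")
```

Because both stations draw independently from a growing range, the probability of colliding again falls with every retry.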
3. Receiving Data
● Frame Reception: When a device receives a frame, it checks the destination
MAC address. If it matches its own, the device processes the data; if not, it
ignores the frame.
● Error Checking: Each frame includes a Frame Check Sequence (FCS) for
error detection. If the frame is corrupted, it will be discarded.
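The FCS is a CRC-32 computed over the frame. As an illustration, Python's `zlib.crc32` uses the same CRC-32 polynomial (real Ethernet hardware complements and bit-orders the value differently, so this is a sketch of the idea, not a wire-exact FCS):

```python
import zlib

def append_fcs(frame_bytes: bytes) -> bytes:
    """Append a 4-byte CRC-32 checksum, standing in for the Ethernet FCS."""
    fcs = zlib.crc32(frame_bytes) & 0xFFFFFFFF
    return frame_bytes + fcs.to_bytes(4, "little")

def frame_is_valid(frame_with_fcs: bytes) -> bool:
    """Recompute the CRC over the payload and compare it with the trailer."""
    payload, trailer = frame_with_fcs[:-4], frame_with_fcs[-4:]
    return zlib.crc32(payload) & 0xFFFFFFFF == int.from_bytes(trailer, "little")

frame = append_fcs(b"hello ethernet")
print(frame_is_valid(frame))            # True
corrupted = bytes([frame[0] ^ 1]) + frame[1:]   # flip one bit in the payload
print(frame_is_valid(corrupted))        # False -> receiver discards the frame
```

A single flipped bit always changes the CRC, which is exactly why the receiver can detect and discard corrupted frames.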
4. Switches and Hubs
● Hubs: In a basic setup, a hub broadcasts incoming frames to all devices.
This can lead to collisions and inefficiencies.
● Switches: More commonly, networks use switches that intelligently send
frames only to the intended recipient, reducing collisions and improving
efficiency.
Ethernet Technology:
IEEE 802.3: This is the main set of standards governing Ethernet technology. It
includes various specifications for different speeds and media types.
Common Types:
● 10BASE-T: 10 Mbps over twisted pair cable.
● 100BASE-TX (Fast Ethernet): 100 Mbps, also over twisted pair.
● 1000BASE-T (Gigabit Ethernet): 1 Gbps over twisted pair.
● 10GBASE-T: 10 Gbps over twisted pair.
● 100BASE-FX: Fast Ethernet over fiber optics.
● 1000BASE-SX/LX: Gigabit Ethernet over fiber optics.
Ethernet Frame Structure:
1. Preamble: An Ethernet frame starts with a 7-Byte preamble. This is a
pattern of alternating 0's and 1's that indicates the start of the frame and
allows the sender and receiver to establish bit synchronization.
2. Start of Frame Delimiter (SFD): This is a 1-Byte field that is always set
to 10101011. The SFD signals that the frame proper begins next, starting
with the destination address.
3. Destination Address: This is a 6-Byte field that contains the MAC address
of the machine for which data is destined.
4. Source Address: This is a 6-Byte field that contains the MAC address of
the source machine
5. Length: This is a 2-Byte field that indicates the length of the data
field. Although a 16-bit field could hold values from 0 to 65535, the value
here cannot exceed 1500 because of Ethernet's maximum payload size; values
of 1536 and above are interpreted as an EtherType instead.
6. Data: This is the place where the actual data is inserted, also known
as the Payload. Both the IP header and data will be inserted here if
Internet Protocol is used over Ethernet.
7. CRC: This is a 4-Byte field carrying a 32-bit checksum, the Frame Check
Sequence (FCS), computed over the frame so that the receiver can detect
corrupted frames.
8. VLAN Tagging: This tag allows network administrators to logically
separate a physical network into multiple virtual networks, each with its own
VLAN ID.
9. Jumbo Frames: In addition to the standard Ethernet frame size of 1518
bytes, some network devices support Jumbo Frames, which are frames with
a payload larger than 1500 bytes.
10. EtherType Field: The EtherType field in the Ethernet frame header
identifies the protocol carried in the payload of the frame. For example, a
value of 0x0800 indicates that the payload is an IP packet, while a value of
0x0806 indicates that the payload is an ARP (Address Resolution Protocol)
packet.
11. Multicast and Broadcast Frames: In addition to unicast frames (which are
sent to a specific destination MAC address), Ethernet also supports
multicast and broadcast frames.
12. Collision Detection: In half-duplex Ethernet networks, collisions can occur
when two devices attempt to transmit data at the same time. To detect
collisions, Ethernet uses a Carrier Sense Multiple Access with Collision
Detection (CSMA/CD) protocol.
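The header layout above (destination MAC, source MAC, then EtherType/length) can be unpacked directly from raw bytes. A small sketch, assuming an Ethernet II frame and using a hand-built example frame:

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Unpack the first 14 bytes of an Ethernet II frame:
    6-byte destination MAC, 6-byte source MAC, 2-byte EtherType."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    mac = lambda b: ":".join(f"{x:02x}" for x in b)
    return mac(dst), mac(src), ethertype

# Hand-built frame: broadcast destination, made-up source, EtherType 0x0806 (ARP),
# padded to a plausible length with zero bytes.
frame = bytes.fromhex("ffffffffffff" "aabbccddeeff" "0806") + b"\x00" * 28
dst, src, etype = parse_ethernet_header(frame)
print(dst)          # ff:ff:ff:ff:ff:ff  (broadcast)
print(hex(etype))   # 0x806 (ARP)
```

Note how the all-ones destination address marks a broadcast frame, and the EtherType value matches the 0x0806 ARP example given above.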
Switch: Forwarding and filtering are two fundamental processes that switches
use to manage network traffic:
● Forwarding: This is the process of sending incoming frames to the
appropriate output port based on the destination MAC address. When a
switch receives a frame, it examines the MAC address and forwards the
frame only to the port connected to the destination device.
How forwarding works:
MAC Address Learning: When a switch receives a frame, it reads the source
MAC address and associates it with the port on which the frame was received. This
information is stored in a MAC address table (also known as a forwarding table or
content-addressable memory).
Forwarding Frames: For every incoming frame, the switch checks the destination
MAC address against its MAC address table:
o If the address is found: The switch forwards the frame only to the
corresponding port.
o If the address is not found: The switch floods the frame to all ports
(except the port it was received on), allowing the destination device to
respond and allowing the switch to learn the destination address for
future frames.
● Filtering: This refers to the process of blocking frames from being
forwarded to certain ports. A switch filters out frames destined for MAC
addresses that are not part of the connected network or for which there is no
valid entry in the switch's MAC address table. Filtering helps to prevent
unnecessary traffic from congesting the network.
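The learning, forwarding, flooding, and filtering behaviour described above can be sketched with a plain dictionary as the MAC address table. Port numbers and MAC strings here are made up for illustration:

```python
class LearningSwitch:
    """Minimal sketch of MAC learning, forwarding, flooding, and filtering."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}  # MAC address -> port it was last seen on

    def handle_frame(self, in_port, src_mac, dst_mac):
        # Learn: associate the source MAC with the arrival port.
        self.mac_table[src_mac] = in_port
        if dst_mac in self.mac_table:
            out = self.mac_table[dst_mac]
            # Filter: never send a frame back out the port it arrived on.
            return set() if out == in_port else {out}
        # Unknown destination: flood to every port except the arrival port.
        return self.ports - {in_port}

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.handle_frame(1, "aa:aa", "bb:bb"))  # unknown dst -> flood {2, 3, 4}
print(sw.handle_frame(2, "bb:bb", "aa:aa"))  # aa:aa was learned on port 1 -> {1}
print(sw.handle_frame(1, "aa:aa", "bb:bb"))  # bb:bb now known -> {2}
```

After the flooded frame's recipient replies, the switch has learned both addresses and all further traffic between the two hosts uses exactly one output port.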
Properties of Link-Layer Switching:
1. Transparent Operation:
o Switches operate at Layer 2 of the OSI model and are typically
transparent to devices on the network. They do not modify the frames
being transmitted.
2. Reduced Collision Domains:
o Each switch port represents a separate collision domain, which
reduces the likelihood of collisions and increases overall network
performance.
3. High Throughput:
o Switches can handle multiple simultaneous connections and provide
high throughput, making them suitable for high-traffic environments.
4. Full-Duplex Communication:
o Switches support full-duplex communication, allowing simultaneous
sending and receiving of frames on each port, further improving
performance.
5. Support for VLANs:
o Many switches can implement VLANs, allowing network
segmentation for better traffic management and security.
6. Quality of Service (QoS):
o Some switches support QoS features, enabling prioritization of certain
types of traffic (e.g., voice, video) to ensure performance and
reliability.
Drawbacks of Link Layer Switches
1. Broadcast Traffic:
o Issue: Switches forward broadcast frames to all devices on the same
network segment, which can lead to excessive traffic, reduced
network performance, and congestion.
2. Limited Scalability:
o Issue: As the number of devices increases, the amount of broadcast
and multicast traffic can overwhelm the network, making it less
efficient.
3. Security Concerns:
o Issue: All devices within the same broadcast domain can
communicate freely, potentially leading to security vulnerabilities, as
unauthorized devices may gain access to sensitive data.
4. Network Management:
o Issue: Managing a large flat network can be complex, making it
difficult to enforce policies or control traffic flows effectively.
5. Physical Limitations:
o Issue: Traditional switches operate within a single physical network,
limiting their ability to separate different types of traffic or user
groups.
VLAN: A Virtual Local Area Network (VLAN) is a logical grouping of devices
within a larger physical network that allows for better management, security, and
efficiency. VLANs enable network administrators to segment networks into
smaller, isolated sections, even if the devices are physically connected to the same
switch.
VLAN for solving the drawbacks of Link Layer switch:
Reduced Broadcast Domains:
● Solution: VLANs create separate logical networks within the same physical
switch. This segmentation reduces the size of broadcast domains, limiting
broadcast traffic and enhancing overall performance.
Improved Scalability:
● Solution: By dividing the network into VLANs, organizations can manage
growth more effectively. Each VLAN can grow independently, reducing the
risk of broadcast storms.
Enhanced Security:
● Solution: VLANs can isolate sensitive data and resources. By grouping
users based on roles or departments, organizations can restrict access to only
those who need it, improving security.
Simplified Network Management:
● Solution: VLANs allow for easier implementation of network policies. For
example, Quality of Service (QoS) settings can be applied to specific
VLANs to prioritize traffic for critical applications.
Flexibility and Efficiency:
● Solution: VLANs enable the dynamic grouping of devices, regardless of
their physical location. This flexibility allows for more efficient use of
network resources and simplifies the reconfiguration of network segments.
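VLAN membership is carried in the frame itself: an IEEE 802.1Q tag (TPID 0x8100 plus a 16-bit TCI holding the 12-bit VLAN ID) is inserted between the source MAC and the EtherType. A sketch, with placeholder frame bytes:

```python
import struct

def add_vlan_tag(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an IEEE 802.1Q tag after the two 6-byte MAC addresses."""
    assert 0 <= vlan_id < 4096          # VLAN ID is a 12-bit field
    tci = (priority << 13) | vlan_id    # 3-bit priority, 1-bit DEI=0, 12-bit ID
    tag = struct.pack("!HH", 0x8100, tci)
    return frame[:12] + tag + frame[12:]

def get_vlan_id(frame: bytes):
    """Return the VLAN ID if the frame carries an 802.1Q tag, else None."""
    if int.from_bytes(frame[12:14], "big") == 0x8100:
        return int.from_bytes(frame[14:16], "big") & 0x0FFF
    return None

# Placeholder frame: 12 zero bytes for the MACs, EtherType 0x0800, zero payload.
untagged = bytes(12) + b"\x08\x00" + b"\x00" * 46
tagged = add_vlan_tag(untagged, vlan_id=20)
print(get_vlan_id(untagged))   # None
print(get_vlan_id(tagged))     # 20
```

A VLAN-aware switch reads this ID on ingress and only forwards the frame to ports belonging to the same VLAN, which is what enforces the isolation described above.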
MPLS: Multiprotocol Label Switching (a "layer 2.5" protocol)
MPLS can create end-to-end paths that act like circuit-switched connections
but deliver layer 3 packets. It allows IP packets to be forwarded at the
layer 2 switching level without being passed up to layer 3.
Routing is a layer 3 function and switching is a layer 2 function; MPLS
makes routers on the Internet act like switches on a network, which is why
MPLS is called a layer 2.5 protocol.
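The switch-like behaviour comes from forwarding on the label alone: each router swaps the incoming label for an outgoing one using a simple table, with no IP lookup. A sketch with a hypothetical label table for one router:

```python
# Hypothetical label-forwarding table for one MPLS router:
# (incoming port, incoming label) -> (outgoing port, outgoing label).
# No layer 3 (IP) lookup is needed -- just one exact-match table hit.
LFIB = {
    (1, 100): (3, 200),
    (2, 150): (3, 210),
    (3, 200): (1, 100),
}

def switch_packet(in_port: int, label: int):
    """Forward a labelled packet: look up the (port, label) pair and
    return the outgoing port together with the swapped label."""
    return LFIB[(in_port, label)]

print(switch_packet(1, 100))   # (3, 200): out port 3, label swapped to 200
```

Chaining such tables across routers yields the end-to-end label-switched path that behaves like a circuit while still carrying ordinary IP packets.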
Data Centre: Data centers are specialized facilities designed to house computer
systems and associated components, such as telecommunications and storage
systems. They are essential for various reasons:
1. Centralized Management: Data centers allow organizations to centralize
their IT resources, making it easier to manage servers, applications, and data.
2. Scalability: They provide the infrastructure necessary to scale operations
quickly, accommodating increasing data loads and user demands.
3. Reliability and Redundancy: Data centers are built with redundancy in
mind, ensuring high availability of services through backup systems and
disaster recovery options.
4. Security: They incorporate physical and network security measures to
protect sensitive data from unauthorized access and cyber threats.
5. Energy Efficiency: Modern data centers focus on energy-efficient designs
to reduce operational costs and environmental impact.
Components of a Data Center
1. Hosts (Blades): Serve content (e.g., web pages, videos), store data (e.g.,
emails, documents), and perform distributed computations. Each host
operates as a separate computing unit within the data center.
2. Racks: Organize multiple hosts (blades) in a compact manner, allowing
for efficient use of space and easy management of the servers. Each rack
typically houses 20 to 40 blades.
3. Top of Rack (TOR) Switch: Connects all hosts within a rack and facilitates
communication between them. It also links the rack to other switches in the
data center and manages network traffic to and from the hosts.
4. Network Interface Card (NIC): Installed in each host, the NIC enables the
host to connect to the TOR switch and facilitates network communication.
5. Border Routers: Connect the data center network to the public Internet.
They manage the traffic flow between external clients and internal hosts,
ensuring efficient data exchange.
6. Data Center Network: The overall interconnection system that includes
racks, switches, and routers, designed to manage and optimize traffic
between internal hosts and external clients.
7. Cooling Systems: Maintain optimal temperature and humidity levels
within the data center to prevent overheating and ensure the reliable
operation of all equipment.
8. Power Supply Units (PSUs): Provide uninterrupted power to the data
center equipment, often including backup systems like uninterruptible power
supplies (UPS) and generators to maintain operation during outages.
9. Storage Systems: Offer additional data storage solutions, including disk
drives and storage area networks (SANs), for managing and backing up data
efficiently.
10. Physical Security Measures: Protect the data center from unauthorized
access and physical threats through access control systems, surveillance
cameras, and other security protocols.
Step-by-Step Load Balancing Process
1. Client Request:
○ An external client sends a request to a publicly accessible IP address
associated with an application.
2. Load Balancer Receives Request:
○ The request first reaches the load balancer, which is configured to
handle traffic for specific applications.
3. Load Assessment:
○ The load balancer checks the current load on each host (server) that
handles the application. This assessment includes metrics like CPU
load, memory usage, and active connections.
4. Routing Decision:
○ Using a predefined algorithm (e.g., round-robin, least connections, or
IP hash), the load balancer determines the optimal host to handle the
incoming request.
5. Forwarding the Request:
○ The load balancer forwards the request to the selected host, translating
the public IP address to the host’s internal IP address.
6. Host Processes Request:
○ The chosen host processes the request, which may involve additional
calls to other hosts or services.
7. Response Sent to Load Balancer:
○ Once processing is complete, the host sends the response back to the
load balancer.
8. Response Relay:
○ The load balancer receives the response and translates the internal IP
address back to the client’s public IP address, then forwards the
response to the client.
9. Security Layer:
○ Throughout the process, the load balancer prevents clients from
accessing hosts directly, enhancing security by masking the internal
network structure.
10. Ongoing Load Monitoring:
○ The load balancer continuously monitors the performance of each host
and adjusts traffic distribution in real time based on current load
conditions.
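The routing decision in step 4 can be sketched for two of the named algorithms, round-robin and least connections. Host addresses and the connection counter are illustrative stand-ins for the real load metrics:

```python
import itertools

class LoadBalancer:
    """Sketch of two routing policies from step 4."""

    def __init__(self, hosts):
        self.hosts = hosts
        self.active = {h: 0 for h in hosts}   # active connections per host
        self._rr = itertools.cycle(hosts)     # round-robin iterator

    def round_robin(self):
        return next(self._rr)

    def least_connections(self):
        return min(self.hosts, key=lambda h: self.active[h])

    def dispatch(self, policy="least_connections"):
        host = self.round_robin() if policy == "round_robin" else self.least_connections()
        self.active[host] += 1   # step 5: forward the request to the chosen host
        return host

lb = LoadBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print([lb.dispatch("round_robin") for _ in range(4)])
# ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1'] -> .1 now has 2 connections
print(lb.dispatch())   # least connections picks a less loaded host: 10.0.0.2
```

Round-robin ignores load and simply cycles; least connections consults the counters the balancer updates on every dispatch, which is the "ongoing load monitoring" of step 10 in miniature.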