Cloud Lab Report 10

The document outlines a Cloud Computing Lab course (CSE 4257) with various lab exercises focused on virtual machines, cloud databases, and cloud storage services. Key labs include installing Ubuntu on VirtualBox, creating a NoSQL database with Firebase Firestore, and managing files using Google Drive API. Each lab provides step-by-step procedures, objectives, and conclusions highlighting the practical applications of cloud computing technologies.


Course Name: Cloud Computing Lab

Course Code: CSE 4257

Submitted by:
Name: Ismail Hossen Hridoy
Class Roll: 3..
Exam Roll: 15..
Reg No: 28..
Batch: CSE 3rd
Session: 2019-20

Submitted to:
Lecturer, Department of CSE
Mymensingh Engineering College

Submission Date:                              Signature:


INDEX

Lab no.  Lab name                                                            Page no.  Date

1        Install a virtual machine (e.g., VirtualBox or VMware) and          1-8       2/7/2025
         configure a guest operating system (e.g., Ubuntu/Linux)

2        Create and access a database on a cloud platform (Firebase          9-12      3/7/2025
         Firestore) and retrieve data from the database

3        Interact with cloud storage services (Google Drive API) to          13-15     6/7/2025
         perform file operations (Upload, Delete, Share)

4        Install and verify a working C compiler (like GCC) in the VM        16-17     7/7/2025
         for development purposes

5        Install and configure web servers in the VM and serve sample        18-19     8/7/2025
         webpages. Visualize webpage from both host and the guest OS

6        Demonstrate file transfer using shared folders between local        20-21     9/7/2025
         system and VM

7        Configure firewall rules to allow/block specific ports in the VM    22-24     10/7/2025

Signature:

Lab no. 1

Lab name: Install a virtual machine (e.g., VirtualBox or VMware) and configure a
guest operating system (e.g., Ubuntu/Linux).

Objectives:
Ubuntu is a free and open-source operating system based on Linux. Ubuntu
24.04 LTS (code-named Noble Numbat) is a recent long-term support release that
enhances security, updates the desktop experience with GNOME 46, and improves
usability. Oracle VirtualBox is a feature-rich open-source tool that lets us
create and run multiple virtual machines on one system simultaneously. We can
install Ubuntu 24.04 on VirtualBox to run it alongside our primary operating system.

Procedure: Step-by-Step Instructions:

Step 1: Download and Install VirtualBox.

1. Open a web browser and go to the VirtualBox website (https://www.virtualbox.org/).
2. Click on "Download VirtualBox" and choose the Windows hosts version.
3. Once the download is complete, run the installer and follow the on-screen
instructions to install VirtualBox on your Windows 11 machine.

Step 2: Download Ubuntu 24.04 LTS ISO.

1. Visit the Ubuntu website (https://ubuntu.com/download) and download the
Ubuntu 24.04 LTS ISO file.
2. Save the ISO file to a location on your computer where you can easily find it later.

Step 3: Create a New Virtual Machine in VirtualBox.

1. Open VirtualBox and click on the "New" button to create a new virtual machine.
2. In the "Name and Operating System" window, enter a name for the virtual machine
(e.g., "Ubuntu 24.04 LTS"). Ensure the "Type" is set to "Linux" and the "Version" is
set to "Ubuntu (64-bit)".
3. Click "Next" to proceed.

Step 4: Allocate Memory (RAM).

1. In the "Memory Size" window, allocate the amount of RAM for the virtual machine.
A minimum of 2048 MB (2 GB) is recommended, but you can allocate more if your
system allows.
2. Click "Next" to continue.

Step 5: Create a Virtual Hard Disk.

1. Select "Create a virtual hard disk now" and click "Create".


2. In the "Hard Disk File Type" window, choose "VDI (VirtualBox Disk Image)" and
click "Next".
3. For "Storage on Physical Hard Disk", choose either "Dynamically allocated" or
"Fixed size" based on your preference. "Dynamically allocated" is more flexible.
4. Click "Next" to proceed.

Step 6: Specify the Size of the Virtual Hard Disk.

1. Set the size of the virtual hard disk. A minimum of 25 GB is recommended for
Ubuntu.
2. Click "Create" to finish setting up the virtual hard disk.

Step 7: Configure the Virtual Machine Settings.

1. Select the newly created virtual machine in the VirtualBox Manager and click on
"Settings".
2. Go to the "System" tab and ensure that "Enable EFI" is unchecked (unless needed
for specific purposes).
3. Go to the "Storage" tab, click on the empty optical drive under "Controller: IDE",
then click on the disk icon and choose "Choose a disk file".
4. Locate and select the Ubuntu 24.04 LTS ISO file you downloaded earlier.
5. Click "OK" to save the settings.

Step 8: Start the Virtual Machine.

1. With the virtual machine selected, click on the "Start" button.


2. The virtual machine will boot from the Ubuntu ISO file. Follow the on-screen
instructions to install Ubuntu.

Step 9: Install Ubuntu 24.04 LTS.

1. Select "Install Ubuntu" from the options.


2. Choose a language and keyboard layout.
3. Follow the prompts to set up installation preferences, including updates and
third-party software.
4. Select "Erase disk and install Ubuntu" (this will only affect the virtual hard disk).
5. Follow the remaining prompts to create a user account and complete the
installation.

Step 10: Finalize Installation and Reboot.

1. Once the installation is complete, restart the virtual machine when prompted.
2. Remove the installation media by going to "Devices" > "Optical Drives" > "Remove
disk from virtual drive" in the VirtualBox menu.
3. Press "Enter" to reboot the virtual machine.
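For reference, the same VM can also be created from the command line with VBoxManage, VirtualBox's bundled CLI. A rough sketch only (the VM name, disk filename, and ISO filename below are assumptions matching the GUI steps above; this requires VirtualBox to be installed on the host):

```
# Create and register the VM (GUI Steps 3-4, scripted)
VBoxManage createvm --name "Ubuntu 24.04 LTS" --ostype Ubuntu_64 --register
VBoxManage modifyvm "Ubuntu 24.04 LTS" --memory 2048 --cpus 2

# 25 GB virtual disk on a SATA controller (GUI Steps 5-6)
VBoxManage createmedium disk --filename Ubuntu2404.vdi --size 25600
VBoxManage storagectl "Ubuntu 24.04 LTS" --name "SATA" --add sata
VBoxManage storageattach "Ubuntu 24.04 LTS" --storagectl "SATA" \
  --port 0 --device 0 --type hdd --medium Ubuntu2404.vdi

# Attach the installer ISO and boot (GUI Steps 7-8)
VBoxManage storageattach "Ubuntu 24.04 LTS" --storagectl "SATA" \
  --port 1 --device 0 --type dvddrive --medium ubuntu-24.04-desktop-amd64.iso
VBoxManage startvm "Ubuntu 24.04 LTS"
```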

Input/Output:

Conclusion:

Ubuntu:
Ubuntu is an operating system for computers, just like Windows or macOS, but it is
based on Linux, a free, open-source system that serves as the foundation for many
software projects.
Ubuntu provides a user-friendly desktop environment.

VirtualBox:
VirtualBox is a program that lets you run a different operating system inside your
current one; it is like having a computer within your computer.
VirtualBox creates a safe, isolated space where you can experiment with other
operating system environments without changing anything on your main Windows or
macOS machine.

Lab no. 2

Lab name: Create and access a database on a cloud platform (Firebase Firestore) and retrieve
data from the database.

Objectives:
The primary goal of this experiment is to understand and implement a fundamental cloud database
architecture. This involves creating a structured, NoSQL database on a cloud platform and accessing
it from a client-side application.
The specific objectives for this lab are:
 To set up a new project on the Google Firebase platform.
 To create and configure a Cloud Firestore database, a flexible, scalable NoSQL document
database.
 To understand the data model of Firestore, consisting of collections, documents, and fields.
 To manually populate the database with sample data for testing purposes.
 To integrate the Firebase SDK into a standard HTML web page.
 To write JavaScript code that authenticates with the Firebase project, retrieves data from the
Firestore collection, and dynamically renders it on the web page.

Procedure:

This experiment is divided into two main parts: setting up the cloud backend (Firebase Firestore) and
creating the local web client (HTML/JavaScript) to interact with it.

Part A: Setting up the Cloud Firestore Database

1. Project Creation:
o Navigated to the Firebase Console.
o Logged in with a Google account and clicked "Add project".
o Provided a unique project name (e.g., cloud-lab-2-db) and accepted the terms. Google
Analytics was disabled for this simple project.
o The project was provisioned, and we were redirected to the project dashboard.

2. Database Creation:
o From the left-hand navigation menu, under the "Build" section, selected "Firestore
Database".
o Clicked the "Create database" button.
o Security Rules: For this lab, we started in Test mode, which allows open read/write
access for a limited time. A warning about this insecure configuration was noted.
o Location: Chose a cloud Firestore location (e.g., us-central). This cannot be changed
later.
o The database was initialized.

3. Data Population:
o Inside the Firestore UI, clicked "+ Start collection".
o Entered a Collection ID: students.
o Clicked "Next" to create the first document in this collection.
o An Auto-ID was generated for the Document ID.
o Added the following fields to the document:
 name (string): "Alice Johnson"
 adress (string): "dhaka"
 age (number): 21
o Clicked "Save".
o Repeated the process by clicking "+ Add document" to create additional student
documents.

Part B: Creating the Web Client to Access Data

1. Registering the Web App in Firebase:


o Navigated back to the Project Overview (by clicking the gear icon ⚙️ > Project
settings).
o In the "Your apps" section, clicked the web icon (</>) to add a new web app.
o Gave the app a nickname (e.g., "Web Client") and clicked "Register app".
o Firebase generated a configuration object (firebaseConfig). This object contains the
unique keys and IDs needed for our HTML file to connect to this specific Firebase
project. This object was copied for later use.

2. Creating the Local HTML and JavaScript Files:


o On our local PC, a new folder was created. Inside it, a file named index.html was
created.
o The basic HTML structure was added to index.html. A div with the ID
data-container was included as a placeholder where the fetched data would be displayed.

3. Integrating the Firebase SDK and Writing the Script:


o The following code was written inside the index.html file. It includes:
 Importing the necessary Firebase SDK modules (app and firestore).
 The firebaseConfig object copied from the Firebase console.
 A script to initialize Firebase and fetch/display the data.
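The report describes but does not reproduce the script itself; a minimal sketch of what such an index.html could contain is shown below. The SDK version number and the placeholder firebaseConfig values are assumptions — the real config object is the one copied from the Firebase console:

```html
<!DOCTYPE html>
<html>
<body>
  <h1>Students</h1>
  <div id="data-container"></div>

  <script type="module">
    // Import the Firebase app and Firestore modules (version is an assumption)
    import { initializeApp } from "https://www.gstatic.com/firebasejs/10.12.0/firebase-app.js";
    import { getFirestore, collection, getDocs } from "https://www.gstatic.com/firebasejs/10.12.0/firebase-firestore.js";

    // Placeholder config -- replace with the object from the Firebase console
    const firebaseConfig = {
      apiKey: "YOUR_API_KEY",
      authDomain: "cloud-lab-2-db.firebaseapp.com",
      projectId: "cloud-lab-2-db",
    };

    const app = initializeApp(firebaseConfig);
    const db = getFirestore(app);

    // Fetch every document in the "students" collection and render it
    const snapshot = await getDocs(collection(db, "students"));
    const container = document.getElementById("data-container");
    snapshot.forEach((doc) => {
      const d = doc.data();
      container.innerHTML += `<p>${d.name} - ${d.adress} - ${d.age}</p>`;
    });
  </script>
</body>
</html>
```

Opening this file in a browser (with a real config object and the Test-mode rules from Part A) lists each student document inside the data-container div.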

Input and Output:

Figure: Firebase Database

Figure: HTML file in a Web Browser



Conclusion:

This laboratory exercise was successfully completed. We have demonstrated the end-to-end process
of creating a serverless database architecture using Google Firebase's Cloud Firestore and accessing
it from a simple client-side application.

The key takeaways from this experiment are:


1. Ease of Setup: Cloud platforms like Firebase have dramatically simplified the process of
provisioning and deploying a scalable, globally available database. What once required
significant server administration can now be accomplished in minutes.
2. Structured vs. Unstructured Data: Unlike file-based storage in a service such
as Google Drive (covered in Lab 3), Firestore stores structured data in a queryable
format. This enables applications to perform complex data retrieval, not just file
downloading.
3. The Power of SDKs: The Firebase SDK abstracts away the complexity of REST API calls and
authentication. By including the SDK and the configuration object, our web client could
communicate securely and efficiently with the database.
4. Foundation for Modern Apps: This client-database model is the cornerstone of modern web
and mobile application development, especially within a serverless paradigm. It allows
developers to build rich, data-driven applications without managing a traditional backend
server.

In conclusion, this lab provided invaluable hands-on experience with a real-world cloud database
service, bridging the theoretical concepts of cloud computing with a practical and tangible
implementation. Future work could involve exploring write operations (adding data from the client),
real-time data listeners, and implementing proper security rules for a production environment.

Lab no. 3

Lab name: Interact with cloud storage services (Google Drive API) to perform file
operations (Upload, Delete, Share).

Objectives: This lab is designed to provide a foundational understanding of
cloud-based data management using a widely accessible platform. While not a
traditional database, Google Drive serves as an excellent example of a cloud
storage service where data can be created, accessed, shared, and managed.
The primary objectives of this experiment are:
 To understand the core concept of cloud storage delivered as a service
(often categorized under SaaS).
 To perform fundamental data lifecycle operations: uploading (Create),
viewing/downloading (Read), renaming (Update), and deleting (Delete) a
data file on a cloud platform.
 To explore and configure access control and collaboration features by sharing
data with specific permissions.
 To differentiate between a simple file-based data store (like Google Drive)
and a structured cloud database (like Firebase or AWS RDS).

Procedure: This experiment was conducted using a standard web browser and a
Google Account. The data used was a sample CSV file named project_data.csv.

Part A: Data Creation (Uploading a File)


1. Access the Platform: Navigated to the Google Drive web interface by opening
a browser and going to drive.google.com.
2. Authentication: Logged in using valid Google Account credentials.
3. Initiate Upload: Clicked the "+ New" button located on the top-left of the
interface.
4. Select Operation: From the dropdown menu, selected "File upload".
5. Choose Data: A local file browser window opened. We navigated to the
location of our sample file, project_data.csv, selected it, and clicked "Open".
6. Confirmation: Monitored the upload progress indicator at the bottom-right of
the screen until it confirmed that the upload was complete. The
file project_data.csv was now visible in the "My Drive" section.
Part B: Data Retrieval and Management (Accessing and Updating)
1. Read Operation (View): To access the data, we double-clicked
the project_data.csv file. Google Drive opened it in a preview mode using
Google Sheets, allowing us to view its contents directly in the browser.

2. Read Operation (Download): To retrieve a local copy, we right-clicked the file


and selected the "Download" option. The file was saved to the local
machine's default download folder.
3. Update Operation (Rename): To update the file's metadata, we right-clicked
on project_data.csv and selected "Rename". We changed the name
to project_data_final.csv and clicked "OK".
Part C: Data Sharing (Configuring Access Control)
1. Initiate Sharing: Right-clicked the renamed file, project_data_final.csv, and
selected the "Share" option.
2. Configure Permissions: In the sharing dialog box, two methods were explored:
o Direct Sharing: Added a specific collaborator's email address in the
"Add people and groups" field. The permission level was set
to "Viewer", ensuring they could read the file but not edit or delete it.
o General Link Sharing: Under "General access," the setting was
changed from "Restricted" to "Anyone with the link". The permission
level for the link was kept at "Viewer".
3. Finalize Sharing: Clicked "Done" to apply the settings. The file icon now
displayed a small "people" symbol, indicating it was shared.
Part D: Data Deletion (Removing a File)
1. Soft Delete: To remove the file, we right-clicked
on project_data_final.csv and selected the "Move to Trash" (or "Remove")
option. The file disappeared from the "My Drive" view.
2. Permanent Delete: We then navigated to the "Trash" folder from the left-hand
menu. We located the file, right-clicked it, and selected "Delete forever" to
permanently erase it from the cloud platform.
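The steps above were performed through the web UI; the same operations are exposed by the Drive REST API v3, which the lab title references. A hedged sketch using curl (this assumes a valid OAuth 2.0 access token in $ACCESS_TOKEN, obtained through Google Cloud's OAuth flow; FILE_ID stands for the id field returned by the upload call):

```
# Upload (Create): simple media upload; the JSON response contains the new FILE_ID
curl -X POST -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: text/csv" \
  --data-binary @project_data.csv \
  "https://www.googleapis.com/upload/drive/v3/files?uploadType=media"

# Rename (Update): patch the file's metadata
curl -X PATCH -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "project_data_final.csv"}' \
  "https://www.googleapis.com/drive/v3/files/FILE_ID"

# Share: grant anyone-with-the-link read access
curl -X POST -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"role": "reader", "type": "anyone"}' \
  "https://www.googleapis.com/drive/v3/files/FILE_ID/permissions"

# Delete: permanently remove the file
curl -X DELETE -H "Authorization: Bearer $ACCESS_TOKEN" \
  "https://www.googleapis.com/drive/v3/files/FILE_ID"
```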

Input and Output:

Figure: File upload in Google Drive



Conclusion:

This lab successfully demonstrated the fundamental principles of data management


on a cloud platform using Google Drive. We were able to perform the complete
lifecycle of a data file—creation (upload), retrieval (view/download), modification
(rename), and deletion. Furthermore, the powerful and intuitive collaboration
features of cloud services were explored by configuring granular sharing permissions.
Through this experiment, we concluded that while services like Google Drive are
incredibly effective for storing and sharing individual, unstructured data files (like
documents, spreadsheets, and images), they function differently from a true cloud
database. A platform like Firebase Firestore or AWS RDS manages structured data,
allowing for complex queries, data indexing for performance, and transactional
integrity across multiple data points.

In essence, this lab served as an essential introduction to the concept of entrusting


data to the cloud. It established a practical baseline for understanding cloud
interactions, which is the foundational knowledge required before advancing to the
more complex and powerful world of structured cloud databases.

Lab no. 4

Lab name: Install and verify a working C compiler (like GCC) in the VM for
development purposes.

Objectives: GCC, the GNU Compiler Collection, is a compiler system developed to
support various programming languages. It is the standard compiler used in most
projects related to GNU and Linux, for example, the Linux kernel. The objective is
to install the GCC compiler, build a simple C program, and verify that everything
is working correctly inside the Ubuntu virtual machine.

Procedure: Step-by-Step Instructions:

Step 1: Install GCC:

 sudo apt update


 sudo apt install gcc

Step 2: Verify GCC Installation:

 gcc --version

Step 3: Install Additional Build Tools:

 sudo apt install build-essential

(This package includes the GCC compiler, make, libraries, and other tools needed for
compiling most C/C++ programs.)

Step 4: Navigate to the Desktop Directory:

 cd ~/Desktop/

Step 5: Create a New C File:

 touch demo.c

Step 6: Open the File and Write Some C Code:

 gedit demo.c (or any text editor, e.g., nano demo.c)

Step 7: Compile the C Program:

 gcc demo.c -o test

(This compiles demo.c and creates an output binary named test.)

Step 8: Run the Compiled Program:

 ./test

Input/Output:

Conclusion:

GCC (GNU Compiler Collection) is an open-source compiler system that has become
a critical part of software development. Here’s why it’s important:

 Multi-language support: While most people know it for C and C++, GCC supports
additional languages like Go, Fortran, and Ada.
 Powerful optimizations: It offers advanced code optimization capabilities,
helping developers create fast, efficient programs.
 Widely used: Many open-source projects, including Linux, rely on GCC for code
compilation and development.

GCC is a must-have for Ubuntu users and Linux developers, whether working on a
personal project or contributing to open-source software.

Lab no. 5

Lab name: Install and configure web servers in the VM and serve sample
webpages. Visualize webpage from both host and the guest OS.

Objectives: Apache is one of the most popular web servers on the internet and has
historically served a large share of all active websites. This lab assumes a server
running Ubuntu, a non-root user with sudo privileges, and an active firewall.

Procedure: Step-by-step instructions:

Step 1: Install and configure web server into guest OS:

 sudo apt update


 sudo apt install apache2
 sudo systemctl status apache2
 sudo systemctl start apache2
 cd /var/www/html
 ls (shows index.html, the Apache2 default page)

Step 2: Open the default webpage through Firefox:

 search => http://localhost

Step 3: Create a new HTML file for the new webpage:

 sudo gedit welcome.html


 ls (shows index.html, welcome.html)
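The report does not show the page contents; a minimal welcome.html might look like the following (the text of the page is an assumption):

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Welcome</title>
  </head>
  <body>
    <h1>Welcome to my Apache test page!</h1>
    <p>Served from /var/www/html on the Ubuntu guest.</p>
  </body>
</html>
```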

Step 4: Open the new created webpage through Firefox:

 search => http://localhost/welcome.html

Step 5: Visualize the webpage from the host OS:

 Power off the VM


 Create port:
I. Open VM settings
II. Go to Network
III. Select: Adapter -> NAT
IV. Port Forwarding -> Name: Apache; Protocol: TCP; Host port: 8080; Guest
port: 80

Step 6: Open the webpage from host OS:

 search => http://localhost:8080

Input/Output:

Conclusion:

Apache is versatile and very modular, so configuration needs will differ depending
on your setup. After reviewing the general use cases above, you should have a good
understanding of what the main configuration files are used for and how they
interact with each other. If you need to know about specific configuration options,
the provided files are well commented and Apache provides excellent documentation.
Hopefully, the configuration files will not be as intimidating now, and you will
feel more comfortable experimenting with and modifying them to suit your needs.

Lab no. 6

Lab Name: Demonstrate file transfer using shared folders between local system
and VM.

Objectives: Create a folder or file on the local system and share it with the VM
by installing Guest Additions on the VM.

Procedure: Step-by-step instructions:

Step 1: Install the Guest Additions manually into VM:

 sudo apt update


 sudo apt install build-essential
 sudo apt install dkms
 sudo apt install linux-headers-$(uname -r)

Step 2: Activate guest additions:

I. Go to Menubar
II. Click on “Devices”
III. Select “insert guest additions CD image” (display CD image)
IV. Click on “CD image” (open VBox_GAs folder)
V. Open terminal on the folder:
 ls
 ./autorun.sh
VI. Devices => Shared clipboard -> Bidirectional
Drag and drop -> Bidirectional
VII. Restart VM

Step 3: Create a folder on local system and share this with VM:

I. Go to VM settings
II. Open Shared Folders
III. Add Folder Path
IV. Select Auto-mount and Make Permanent
V. Start VM and open terminal:
 sudo adduser $USER vboxsf
 sudo reboot

Step 4: Open Folder and check:

 ls /media/sf_Feelings (display the file list of the shared folder)

Input/Output:

Conclusion:

Transferring files between a virtual machine and a host computer is a fundamental


skill for virtualization users. By employing shared folders, using drag-and-drop
features, leveraging network protocols, and applying cloud storage solutions or USB
device sharing, users can effectively manage their data between environments.

Lab no. 7

Lab Name: Configure firewall rules to allow/block specific ports in the VM.

Objectives: The primary objective of this experiment is to gain hands-on


proficiency in managing a host-based firewall on a Linux system. This involves using
the Uncomplicated Firewall (UFW) tool on an Ubuntu Virtual Machine to control
network traffic by allowing and blocking specific ports.
The specific learning goals are:
 To install and activate the UFW firewall service.
 To understand and configure UFW's default policies for incoming and
outgoing traffic.
 To create specific, allow rules for essential services like SSH (port 22) and
HTTP (port 80).
 To create deny rules to explicitly block traffic on specific ports, such as FTP
(port 21).
 To use command-line tools (nmap, ftp, httpie) to test and verify the state of
network ports, confirming that the firewall rules are being enforced correctly.
 To learn how to manage the firewall ruleset, including viewing numbered
rules and deleting specific entries.

Procedure: This experiment was performed on an Ubuntu VM running in


VirtualBox. All actions were conducted via the command-line interface.
Part A: UFW Installation and Initial Configuration
1. Installation: The Uncomplicated Firewall (UFW) package, which provides a
user-friendly interface for managing iptables, was installed using the APT
package manager.
Command: sudo apt install ufw -y
2. Initial Status Check: Before activation, the status of UFW was checked to
confirm that the firewall is inactive by default.
Command: sudo ufw status verbose
3. Defining Base Rules: To ensure essential services remain accessible after
activation, allow rules were created for SSH (port 22) and HTTP (port 80).
Allowing SSH is critical to prevent being locked out of a remote server.
Commands: sudo ufw allow ssh, sudo ufw allow 80
4. Firewall Activation: The firewall was enabled. The system prompts for
confirmation, as this action can disrupt active network connections.
Command: sudo ufw enable
5. Verifying Active Rules: The status was checked again to view the list of active
rules in a numbered format, which is useful for future management.
Command: sudo ufw status numbered

Part B: Testing an ALLOWED Port (HTTP)


1. Verification Tool Installation: The nmap utility, a powerful network scanner,
was installed to perform port-level diagnostics.
Command: sudo apt install nmap -y
2. Port Scan: A TCP connect scan was performed on port 80 of the local machine
(localhost) to verify the allow rule.
Command: sudo nmap -sT -p 80 localhost
The expected output is STATE: open, confirming that the firewall is permitting
traffic on this port.
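Where nmap is unavailable, bash's built-in /dev/tcp pseudo-device offers a rough stand-in for a single-port connect test. This is a sketch, not part of the original procedure; the check_port helper name is ours:

```shell
# Report whether a TCP port accepts connections, using bash's /dev/tcp.
check_port() {
  local host=$1 port=$2
  if timeout 1 bash -c "echo > /dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

check_port localhost 80   # prints "open" if a web server is listening, else "closed"
```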

Part C: Blocking a Port and Verifying the Block


1. Creating a DENY Rule: An explicit deny rule was added to block all incoming
traffic on port 80.
Command: sudo ufw deny 80
2. Verification of Block: The nmap scan was repeated for port 80 to test the new
rule.
Command: sudo nmap -sT -p 80 localhost
The expected output is STATE: closed, indicating that the firewall is now actively
rejecting connections on this port.
3. Rule List Confirmation: The numbered rule list was displayed again to observe
the new DENY rule's position and confirm the change.
Command: sudo ufw status numbered

Part D: System Cleanup


1. Resetting the Firewall: To return the system to a clean state for future
experiments, the reset command was used. This command disables the firewall and
deletes all user-added rules.
Command: sudo ufw reset

Input/Output:

Action/Phase: Initial Firewall Setup
  Command: sudo ufw enable
  Observed output: Firewall is active and enabled on system startup

Action/Phase: View Initial Rules
  Command: sudo ufw status numbered
  Observed output: A numbered list showing ALLOW rules for ports 22 (SSH) and
  80 (HTTP) from any source. Status: active

Action/Phase: Test Allowed HTTP Port
  Command: sudo nmap -sT -p 80 localhost
  Observed output: PORT STATE SERVICE / 80/tcp open http -- confirms the port
  is accessible

Action/Phase: Block HTTP Port
  Command: sudo ufw deny 80
  Observed output: Rule added

Action/Phase: Test Blocked HTTP Port
  Command: sudo nmap -sT -p 80 localhost
  Observed output: PORT STATE SERVICE / 80/tcp closed http -- confirms the port
  is now inaccessible

Action/Phase: View Final Ruleset
  Command: sudo ufw status numbered
  Observed output: The numbered list now includes a new DENY IN rule for port 80,
  placed before the generic ALLOW IN rule, demonstrating rule precedence

Action/Phase: Reset Firewall
  Command: sudo ufw reset
  Observed output: Prompts the user with "Resetting all rules to installed
  defaults. Proceed with operation (y|n)?"

Conclusion: This experiment successfully demonstrated the configuration and


management of a host-based firewall on an Ubuntu operating system using UFW.
The objectives of the lab were fully met.
Key Analytical Findings:
1. Simplicity and Power: UFW lives up to its name by providing a
straightforward syntax that abstracts the complexity of the
underlying iptables system. This makes basic firewall management highly
accessible.
2. Rule Precedence: The experiment clearly showed how UFW processes rules. When
both an allow 80 and a deny 80 rule exist, UFW applies the first matching rule in
the list; because the DENY rule ended up ahead of the generic ALLOW rule, the port
was effectively blocked. This highlights the importance of a well-structured
ruleset.
3. The Critical Role of Verification: The procedure emphasized that simply
adding a rule is not enough. Using an independent tool like nmap to probe
the port's state is an essential verification step to confirm that the security
policy is being enforced as intended.
4. Host-Based vs. Network-Based Security: This lab provided practical insight
into host-based security, where the firewall runs directly on the machine it is
protecting. This is a fundamental layer of defense that complements
network-level firewalls (like those in cloud VPCs or physical network
appliances).
In conclusion, this hands-on exercise provided a robust understanding of how to
secure a Linux server at the host level. The ability to install, configure, and verify
firewall rules using standard command-line tools is a fundamental and critical skill
for any system administrator or cloud engineer.
