
Annexure-I

RHCSA Summer Training: Mastering Linux System Administration

Name of the Organization: Centre for Professional Enhancement


A training report
Submitted in partial fulfilment of the requirements for the award of degree of
Bachelor of Technology in Computer Science and Engineering
(Cyber Security)
Submitted to
LOVELY PROFESSIONAL UNIVERSITY
PHAGWARA, PUNJAB
From 10/06/2025 to 18/07/2025

SUBMITTED BY
Name of student: Isha Rani
Registration Number: 12310584
Signature of the student: Isha Rani
Annexure-II: Student Declaration
To whom so ever it may concern

I, Isha Rani, Registration Number 12310584, hereby declare that the work done by me
on “RHCSA Summer Training: Mastering Linux System Administration” from 10 June 2025 to
18 July 2025, is a record of original work for the partial fulfilment of the requirements for the award
of the degree, Bachelor of Technology in Computer Science and Engineering (Cyber Security).

Isha Rani (12310584)

Isha Rani

Dated: 15/08/2025
Training certificate from organization/ Company
Training certificates from Course Platform

1. Red Hat System Administration I

2. Red Hat System Administration II


Acknowledgement

I am deeply grateful to my faculty and the institution for the opportunity to take the course in Red Hat
System Administration. With the completion of this course, I have furthered my academic journey and
acquired both theoretical and practical knowledge in Linux system administration.

I am thankful to Red Hat for crafting this well-thought-out and detailed curriculum. The modules on
Linux fundamentals, user and group management, file systems, process monitoring, networking, and
security have greatly enhanced my knowledge of system administration.

With profound gratitude, I thank the instructors for their support, motivation, and wise counsel during
the course. Without their help, completing this project would not have been possible.
List of Tables

 Table 1: Scope & Applicability (Chapter 1, Section 1.3)
 Table 2: Comparison Table: Manual vs Automated File Archiving (Chapter 2, Task 2)
 Table 3: Comparison Table: Manual vs Automated Disk Usage Monitoring (Chapter 2, Task 3)
 Table 4: Comparison Table: Manual vs Automated Expiry Monitoring (Chapter 3, Task 4)
 Table 5: Comparison Table: Manual vs Automated Session Tracking (Chapter 3, Task 5)
 Table 6: Comparison Table: Manual Check vs Automated Detector (Chapter 4, Task 6)
 Table 7: Normal vs Suspicious Login (Chapter 4, Task 7)
 Table 8: Preventive vs Reactive (Comparison) (Chapter 5, Section 5.6)
 Table 9: Summary of Tasks Implemented (Chapter 6, Section 6.1)
 Table 10: Challenges Faced & Solutions (Chapter 6, Section 6.4)
List of Figures/ Charts

o Flowchart 1: Chapter 1, Section 1.5 - Overall Work Flow Chart
o Flowchart 2: Chapter 2, Task 1 - Disk Space Alert System
o Flowchart 3: Chapter 2, Task 2 - Old File Archiver
o Flowchart 4: Chapter 2, Task 3 - Email Disk Usage Report
o Flowchart 5: Chapter 3, Task 4 - User Account Expiry Notification
o Flowchart 6: Chapter 3, Task 5 - User Session Logger
o Flowchart 7: Chapter 4, Task 6 - Zombie Process Detector
o Flowchart 8: Chapter 4, Task 7 - Suspicious Login Monitor
o Flowchart 9: Chapter 4, Task 8 - New USB Device Notifier
o Flowchart 10: Chapter 5, Task 9 - System Update Tracker
o Flowchart 11: Chapter 5, Task 10 - Database Service Watchdog
o Flowchart 12: Chapter 5, Section 5.7 - Combined Work Plan Flow
List of Abbreviations

1. IT: Information Technology
2. GB: Gigabyte
3. MB: Megabyte
4. RHEL: Red Hat Enterprise Linux
5. USB: Universal Serial Bus
6. DB: Database
7. SQL: Structured Query Language
8. OS: Operating System
9. RCE: Remote Code Execution
10. MTTR: Mean Time To Repair
11. PCI-DSS: Payment Card Industry Data Security Standard
12. SOC 2: Service Organization Control 2 (a cybersecurity compliance framework)
13. CI/CD: Continuous Integration / Continuous Deployment
14. AWS: Amazon Web Services
15. Azure: Microsoft Azure
16. SMTP: Simple Mail Transfer Protocol
17. POSIX: Portable Operating System Interface
Chapter 1: INTRODUCTION TO THE PROJECT UNDERTAKEN

1.1 Objectives of the Project

The primary objective of this project is to design and implement a set of automated scripts that can
simplify routine system administration tasks. System administrators in modern IT infrastructures are
responsible for ensuring that servers, applications, and services run smoothly without interruption.
However, many of the routine tasks such as monitoring disk usage, managing user accounts, tracking
system updates, and detecting abnormal activities consume significant time and effort if done
manually.

Automation addresses this challenge by reducing human intervention and minimizing the chances of
errors. By implementing shell scripts for various administrative tasks, this project aims to:

 Automate repetitive activities such as disk monitoring and user session tracking.
 Improve system reliability by detecting issues (e.g., zombie processes, suspicious logins)
before they escalate.
 Save administrator time and effort, allowing them to focus on critical problem-solving.
 Enhance security by monitoring unauthorized access or abnormal device connections.
 Provide structured reports (e.g., disk usage via email) to keep administrators updated.

1.2 Importance of Automation in System Administration

Automation has become an essential component of modern IT system administration. Traditionally,
system administrators relied on manual checks, periodic inspections, and human-driven processes to
keep systems healthy. However, with the growth of cloud computing, virtualization, and large-scale
data centers, the demand for faster and more reliable operations has increased.

Some key reasons why automation is important in system administration are:

1. Efficiency and Time-Saving – Automated scripts run instantly and on schedule, whereas
manual checks can take hours.
2. Error Reduction – Human errors (e.g., missing a log entry or forgetting to apply updates) are
reduced significantly.
3. Consistency – Tasks such as backups, updates, or monitoring are performed consistently
without variation.
4. Security Enhancement – Automated alerts for suspicious logins or USB device connections
increase the system’s resilience against threats.
5. Scalability – A single administrator can manage hundreds of servers using automation tools
and scripts.

Real-World Example: Large organizations like Google, Amazon, and Microsoft rely heavily on
automation tools such as Ansible, Puppet, Chef, and Bash scripting to maintain thousands of servers
efficiently. Without automation, managing such scale would be nearly impossible.

1.3 Scope and Applicability of the Project

The scope of this project covers three main areas of system administration:

1. Disk Monitoring and File Management
o Detect low disk space and alert the administrator.
o Archive old files to save space.
o Send usage reports via email.
2. User and Account Management
o Monitor user account expiry and notify administrators.
o Track user login sessions and maintain session logs.
3. System Security and Reliability
o Detect zombie processes.
o Monitor suspicious login attempts.
o Notify administrators of new USB device connections.
o Track critical services such as databases to ensure availability.

Table 1: Scope & Applicability

Task                      Area                  Purpose
Disk Space Alert          Disk Monitoring       Prevent system crash due to storage overflow
User Expiry Notification  User Management       Improve account security
Suspicious Login Monitor  Security              Detect unauthorized access attempts
Database Watchdog         Service Availability  Ensure smooth running of critical applications

1.4 Relevance in Real-World IT Environments

In real-world IT infrastructures, system administration is one of the most crucial domains. Data
centers, cloud environments, enterprise networks, and even small businesses depend on uninterrupted
IT operations. Failures such as disk overflows, unauthorized logins, or service downtime can result in
heavy financial and reputational losses.

Automation in system administration has become not just an advantage, but a necessity. The scripts
developed in this project are miniature representations of real-world IT solutions, scaled down for
educational purposes. The relevance can be summarized as:

 Data Centers: Prevent storage overuse and ensure availability of mission-critical services.
 Enterprises: Secure user accounts and monitor suspicious login attempts.
 Cloud Platforms: Automate scaling, monitoring, and reporting functions.
 SMBs (Small and Medium Businesses): Reduce dependency on dedicated IT staff by
enabling self-monitoring systems.

Thus, the project demonstrates how automation can transform reactive administration into proactive
system management.

1.5 Work Plan and Implementation Approach

The project was carried out following a structured work plan, ensuring that each task was carefully
analyzed, designed, implemented, and tested.

Work Plan Steps:

1. Requirement Analysis – Identify common system administration challenges (disk usage,
security, user management).
2. Task Breakdown – Divide the project into 10 automation tasks across disk management, user
management, security, and service monitoring.
3. Script Development – Write Bash scripts to automate each task.
4. Testing – Execute scripts in a Linux environment and test with different scenarios (low disk
space, failed logins, etc.).
5. Documentation – Record outputs, create flowcharts, and prepare detailed explanations.
6. Integration – Ensure tasks can be scheduled using cron jobs or integrated into system
workflows.
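The integration step above can be captured in a single crontab entry per task. The schedule below is a hedged illustration only; the script paths and run times are assumptions, not the project's actual configuration:

```
# Illustrative crontab (edit with: crontab -e); all paths are hypothetical
*/30 * * * *  /home/admin/scripts/disk_alert.sh          # Task 1: every 30 minutes
0 2 * * 0     /home/admin/scripts/file_archiver.sh       # Task 2: weekly, Sunday 2 AM
0 8 * * *     /home/admin/scripts/disk_report.sh         # Task 3: daily 8 AM
0 9 * * *     /home/admin/scripts/user_expiry_notify.sh  # Task 4: daily 9 AM
*/10 * * * *  /home/admin/scripts/session_logger.sh      # Task 5: every 10 minutes
```

Each field is, in order: minute, hour, day of month, month, day of week, followed by the command to run.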

Flowchart 1: Overall Work Flow Chart

Start

Identify Task → Write Bash Script → Test in Linux → Validate Outputs

Schedule Automation (Cron Jobs) → Generate Reports

Final Integration → Documentation → Completion

End
Chapter 2: DISK AND FILE MANAGEMENT AUTOMATION

Efficient disk and file management is a critical responsibility of system administrators. Without proper
monitoring and maintenance, systems may run out of storage, resulting in crashes, application
downtime, or data corruption. Automating disk-related tasks ensures proactive detection of storage
issues and prevents service disruption.

This chapter covers automation scripts that focus on:

 Monitoring available disk space.
 Archiving old files to optimize storage.
 Sending automated disk usage reports via email.

Task 1: Disk Space Alert System


Objective:

The goal of this task is to monitor disk usage continuously and alert the system administrator whenever
disk usage exceeds a predefined threshold (e.g., 80%). This prevents storage overflows that could lead
to application or database failures.

Problem Statement:

Manual monitoring of disk space using commands like df -h is time-consuming and prone to human
error. In a production environment, ignoring disk usage can cause:

 Sudden server crashes.
 Application downtime.
 Loss of logs or important files due to insufficient storage.

To resolve this, an automated script can run at regular intervals, check disk space, and send alerts if
usage crosses the threshold.

Implementation Approach:

1. Define a threshold (e.g., 80%).
2. Use the df -h command to check current disk usage.
3. Extract usage percentage for each partition.
4. If usage > threshold, send an alert message (via email or log file).
5. Schedule the script using a cron job for periodic monitoring.

Flowchart 2: Disk Space Alert System


┌────────────────────┐
│ Start Script │
└───────┬────────────┘

┌────────▼─────────┐
│ Check Disk Usage │
└────────┬─────────┘

┌────────▼─────────┐
│ Compare with │
│ Threshold (80%) │
└────────┬─────────┘

┌──────────▼─────────┐
│ Usage > Threshold? │
└───────┬────────────┘
│Yes

┌──────────────────────────┐
│ Send Alert (Log/Email) │
└───────────┬─────────────┘


┌─────────────┐
│ End │
└─────────────┘

Bash Script: Disk Space Alert


#!/bin/bash

# Threshold for disk usage (in percentage)
THRESHOLD=80

# Get current disk usage for the root partition
USAGE=$(df / | grep / | awk '{print $5}' | sed 's/%//g')

# Log file location
LOGFILE="/var/log/disk_alert.log"

# Check if usage exceeds threshold
if [ "$USAGE" -ge "$THRESHOLD" ]; then
    MESSAGE="Warning: Disk usage has reached $USAGE% on $(hostname) at $(date)"
    echo "$MESSAGE" >> "$LOGFILE"

    # Optional: Send email alert (requires mail utilities configured)
    # echo "$MESSAGE" | mail -s "Disk Alert on $(hostname)" admin@example.com
fi

Explanation of Script

 THRESHOLD=80 → The maximum allowed disk usage percentage.
 df / → Checks disk usage of the root partition (/).
 awk '{print $5}' → Extracts the percentage usage.
 sed 's/%//g' → Removes the % sign for comparison.
 If usage is greater than or equal to 80%, a warning is logged (or emailed).

Sample Output (Log File Entry)


Warning: Disk usage has reached 82% on server1 at Thu Aug 29 14:10:35 IST 2025
Warning: Disk usage has reached 85% on server1 at Thu Aug 29 15:05:12 IST 2025
Testing the Script

1. Fill the disk with dummy files using dd to simulate low space.
2. Run the script and check if the log entry is generated.
3. Schedule the script with cron:

# Run every 30 minutes
*/30 * * * * /home/user/disk_alert.sh
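Step 1 of the test mentions filling the disk with dd; a minimal illustrative command follows, where the file path and size are assumptions chosen for a safe demonstration:

```shell
# Create a 100 MB dummy file in /tmp to simulate growing disk usage;
# increase count on a real test system to approach the 80% threshold.
dd if=/dev/zero of=/tmp/fillfile bs=1M count=100

# Inspect the file, then remove it once testing is done:
ls -lh /tmp/fillfile
# rm /tmp/fillfile
```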

Task 2: Old File Archiver


Objective

The objective of this task is to identify files that have not been used or modified for a long period (e.g.,
30 days) and archive them into a compressed file. This helps in freeing disk space and organizing
data without permanently deleting files.

Problem Statement

In large IT environments, logs, temporary files, and unused data keep accumulating.
If these files are not managed:

 Disk usage grows unnecessarily.
 Backups become slower and consume more storage.
 Searching for important files becomes harder.

Manual cleanup is risky because an administrator may delete files still needed by users or applications.
An automated script ensures a systematic and safe archiving process.

Implementation Approach

1. Define the directory to be scanned (e.g., /home, /var/log).
2. Use the find command to search for files older than a specific number of days (e.g., 30 days).
3. Compress these files into an archive (.tar.gz) to save space.
4. Move the archive to a backup directory.
5. (Optional) Delete the old files after successful archiving.
6. Schedule the script using cron for periodic execution.
Flowchart 3: Old File Archiver

┌────────────────────┐
│ Start Script │
└───────┬────────────┘

┌────────▼───────────┐
│ Scan Target Folder │
└────────┬───────────┘

┌────────▼───────────┐
│ Identify Files │
│ Older than 30 Days │
└────────┬───────────┘

┌────────▼───────────┐
│ Compress into │
│ Archive (.tar.gz) │
└────────┬───────────┘

┌────────▼───────────┐
│ Move to Backup Dir │
└────────┬───────────┘

┌────────▼────────┐
│ Delete Original │
│ (Optional) │
└────────┬────────┘

┌────────▼────────┐
│ End │
└─────────────────┘

Bash Script: Old File Archiver


#!/bin/bash

# Directory to scan
TARGET_DIR="/home/user/data"

# Backup location
BACKUP_DIR="/home/user/backup"

# Days threshold
DAYS=30

# Create backup directory if it does not exist
mkdir -p "$BACKUP_DIR"

# Find files older than $DAYS days and archive them
ARCHIVE_NAME="archive_$(date +%Y%m%d).tar.gz"
find "$TARGET_DIR" -type f -mtime +"$DAYS" -print | tar -czf "$BACKUP_DIR/$ARCHIVE_NAME" -T -

# Log action
echo "Archived files older than $DAYS days from $TARGET_DIR into $ARCHIVE_NAME at $(date)" >> /var/log/file_archiver.log

Explanation of Script

 find $TARGET_DIR -type f -mtime +$DAYS → Finds files not modified in the last 30 days.
 tar -czf → Creates a compressed archive in .tar.gz format.
 mkdir -p → Ensures the backup directory exists.
 A log entry is created for tracking.

Sample Output (Log Entry)


Archived files older than 30 days from /home/user/data into archive_20250828.tar.gz at Thu Aug 28
16:32:12 IST 2025

Testing the Script

1. Create test files with old timestamps:

   touch -d "40 days ago" oldfile1.txt
   touch -d "50 days ago" oldfile2.log

2. Run the script and check if files are archived.

3. Verify contents of the archive:

   tar -tzf /home/user/backup/archive_YYYYMMDD.tar.gz

Table 2: Comparison Table: Manual vs Automated File Archiving


Aspect         Manual Archiving                        Automated Archiving (Script)
Time Required  High (searching, compressing, moving)   Very low (runs in seconds)
Human Error    High (risk of deleting needed files)    Low (predefined rules)
Consistency    Irregular, depends on admin discipline  Regular via cron jobs
Efficiency     Time-consuming and repetitive           Fast, scalable, repeatable
Tracking       No logs by default                      Logs every execution

Advantages

 Saves disk space by archiving unused files.
 Organizes data for easier backup.
 Reduces clutter in user directories.
 Prevents accidental deletion by compressing instead of removing.
Task 3: Email Disk Usage Report
Objective

The goal of this task is to automatically generate a disk usage report for the system and send it to
the system administrator via email. This ensures proactive monitoring of disk usage before storage
runs out.

Problem Statement

Disk space is one of the most critical resources in IT infrastructure.

 If a server runs out of disk space, applications may crash.
 Logs may stop writing, causing loss of troubleshooting data.
 Databases may become corrupted if unable to write new transactions.

Relying on manual monitoring (using commands like df -h) is unreliable, especially when
managing multiple servers.
Hence, automation ensures regular, timely reporting without manual intervention.

Implementation Approach

1. Collect disk usage statistics using the df -h command.
2. Format the output for readability.
3. Save the report into a temporary file.
4. Use the system’s mail utility (e.g., mailx, sendmail, or ssmtp) to send the report.
5. Schedule the script to run periodically (daily/weekly) via cron jobs.

Flowchart 4: Email Disk Usage Report


┌────────────────────┐
│ Start Script │
└───────┬────────────┘

┌────────▼───────────┐
│ Run df -h Command │
└────────┬───────────┘

┌────────▼───────────┐
│ Format Disk Report │
└────────┬───────────┘

┌────────▼───────────┐
│ Save to Temp File │
└────────┬───────────┘

┌────────▼───────────┐
│ Send Email Report │
└────────┬───────────┘

┌────────▼────────┐
│ End │
└─────────────────┘

Sample Bash Script: Email Disk Usage Report


#!/bin/bash

# Recipient email address
TO="admin@example.com"

# Subject of email
SUBJECT="Disk Usage Report - $(hostname)"

# Temporary file to store report
REPORT="/tmp/disk_report.txt"

# Collect disk usage info
echo "Disk Usage Report for $(hostname)" > "$REPORT"
echo "Generated on: $(date)" >> "$REPORT"
echo "----------------------------------" >> "$REPORT"
df -h >> "$REPORT"

# Send email (using mailx)
mail -s "$SUBJECT" "$TO" < "$REPORT"

# Log the activity
echo "Disk usage report emailed to $TO at $(date)" >> /var/log/disk_report.log

Explanation of Script

 df -h → Displays human-readable disk usage (in GB/MB).
 mail -s → Sends email with subject line.
 hostname → Inserts the server’s name for identification.
 A log file (/var/log/disk_report.log) keeps track of sent reports.

Sample Email Output

Subject: Disk Usage Report - server01

Body:

Disk Usage Report for server01
Generated on: Thu Aug 28 23:40:10 IST 2025
----------------------------------
Filesystem Size Used Avail Use% Mounted on
/dev/sda1  50G  42G  8G    85%  /
tmpfs      2.0G 200M 1.8G  10%  /run
/dev/sdb1  100G 60G  40G   60%  /data
Testing the Script

1. Install mail utilities:

   sudo apt install mailutils -y   # Debian/Ubuntu
   sudo yum install mailx -y       # RHEL

2. Run the script manually to verify the email is sent.

3. Check system logs (/var/log/mail.log or /var/log/maillog) for confirmation.

Table 3: Comparison Table: Manual vs Automated Disk Usage Monitoring


Aspect         Manual Monitoring            Automated Email Report
Time Required  Requires login & running df  No time, runs automatically
Scalability    Difficult on many servers    Easy, works across servers
Accuracy       Depends on admin checking    Always accurate & timely
Notification   None                         Email alert delivered directly
Log Records    Must be manually noted       Logs every report sent

Chapter 3: USER AND ACCOUNT MANAGEMENT


Task 4: User Account Expiry Notification
Objective

The purpose of this task is to notify system administrators when a user account is about to
expire. This ensures uninterrupted access for legitimate users and prevents unexpected lockouts.

Problem Statement

In enterprise systems, user accounts are often created with an expiry date for:

 Temporary employees,
 Contractors,
 Interns, or
 Guest users.

If these accounts expire without warning:

 The user may lose access suddenly.
 Business operations may be disrupted.
 IT teams may face urgent support requests.

Hence, it is important to automatically notify administrators and/or the users a few days before
account expiration.
Implementation Approach

1. Check the expiry date of each user account using chage -l username.
2. Extract the account expiration field.
3. Compare the expiry date with the current date.
4. If the account is about to expire (e.g., within 7 days), trigger a notification.
5. Send the notification via email or system log.
6. Schedule the script to run daily using cron.

Flowchart 5: User Account Expiry Notification
┌────────────────────┐
│ Start Script │
└─────────┬──────────┘

┌────────▼────────┐
│ Get User List │
└────────┬────────┘

┌────────▼─────────────┐
│ Check Expiry Date │
└────────┬─────────────┘

┌─────────▼───────────┐
│ Expiring in < 7 Days?│───No───> End
└─────────┬───────────┘
│Yes
┌─────────▼───────────┐
│ Send Notification │
└─────────┬───────────┘

┌─────────▼────────┐
│ End │
└──────────────────┘

Sample Bash Script: User Account Expiry Notification


#!/bin/bash

# Email recipient (system admin)
ADMIN="admin@example.com"

# Days before expiry to notify
THRESHOLD=7

# Iterate over all local users (reading /etc/shadow requires root)
for user in $(cut -f1 -d: /etc/shadow); do
    # Extract and trim the "Account expires" field
    expiry_date=$(chage -l "$user" | grep "Account expires" | cut -d: -f2- | xargs)

    # Skip users whose account never expires
    if [[ $expiry_date == "never" ]]; then
        continue
    fi

    # Convert expiry date to seconds since the epoch
    expiry_sec=$(date -d "$expiry_date" +%s)
    today_sec=$(date +%s)

    # Days left until expiry
    days_left=$(( (expiry_sec - today_sec) / 86400 ))

    if [[ $days_left -le $THRESHOLD && $days_left -ge 0 ]]; then
        echo "User $user account expires in $days_left days (on $expiry_date)" \
            | mail -s "User Account Expiry Alert: $user" "$ADMIN"
    fi
done

Explanation

 /etc/shadow → Stores user password and expiry details.
 chage -l username → Displays account expiry details.
 date -d → Converts expiry date into a timestamp.
 mail → Sends email notification to admin.
 The script skips accounts that never expire.

Sample Output Email

Subject: User Account Expiry Alert: john

Body:

User john account expires in 5 days (on 2025-09-02).

Testing

1. Create a temporary user with expiry:

   sudo useradd -e 2025-09-05 tempuser

2. Run the script manually and check email.

3. Verify cron runs daily at 9 AM:

   0 9 * * * /home/admin/scripts/user_expiry_notify.sh

Table 4: Comparison Table: Manual vs Automated Expiry Monitoring


Aspect        Manual Check                     Automated Script
Time Taken    Must run chage -l user manually  No time, runs daily
Scalability   Difficult with many users        Works for all accounts easily
Accuracy      May miss expiry dates            Always checks systematically
Notification  None                             Automatic email alerts

Advantages

 Prevents unexpected lockouts.
 Saves administrator time.
 Reduces support tickets.
 Increases system reliability.

Task 5: User Session Logger

Objective

The goal of this task is to log all user login and logout activities.
This ensures that administrators can monitor who accessed the system, at what time, and from
where.

Problem Statement

In multi-user Linux environments:

 Users log in and out frequently.
 Security teams need a record of activities for auditing.
 Suspicious or unauthorized logins must be detected quickly.

Without proper logging:

 It becomes difficult to track misuse.
 System accountability is reduced.
 Investigations during incidents take longer.

Thus, a User Session Logger automates the monitoring of all user sessions.

Implementation Approach

1. Use Linux utilities like who, last, and w to capture session info.
2. Extract details: username, login time, logout time, source IP, and TTY.
3. Save details into a log file (e.g., /var/log/user_sessions.log).
4. Schedule script via cron or run as a background daemon.
5. Optionally send alerts for suspicious logins (e.g., odd hours or unknown IP).
Flowchart 6: User Session Logger
┌──────────────────────┐
│ Start Script │
└───────────┬──────────┘

┌────────▼─────────┐
│ Capture Session │
│ (who / last) │
└────────┬─────────┘

┌───────────▼────────────┐
│ Extract User, Time, IP │
└───────────┬────────────┘

┌──────────▼─────────────┐
│ Save to Log File │
└──────────┬─────────────┘

┌───────────▼───────────┐
│ Check for Suspicious │
│ Logins (Optional) │
└───────────┬───────────┘

┌─────────▼─────────┐
│ End │
└───────────────────┘

Sample Bash Script: User Session Logger


#!/bin/bash

# Log file path
LOGFILE="/var/log/user_sessions.log"

# Capture current date and time
timestamp=$(date +"%Y-%m-%d %H:%M:%S")

# Get session details
sessions=$(who)

# Append to log file
echo "[$timestamp] Active Sessions:" >> "$LOGFILE"
echo "$sessions" >> "$LOGFILE"
echo "---------------------------------------" >> "$LOGFILE"

Sample Log File Output


[2025-08-28 22:30:55] Active Sessions:
ayu pts/0 2025-08-28 22:00 (192.168.1.20)
ishu pts/1 2025-08-28 22:05 (192.168.1.30)
---------------------------------------

Testing

1. Log in with two different users (ayu, ishu).
2. Run the script manually.
3. Check /var/log/user_sessions.log.
4. Verify cron runs every 10 minutes:

   */10 * * * * /home/admin/scripts/session_logger.sh

Table 5: Comparison Table: Manual vs Automated Session Tracking


Aspect             Manual Monitoring          Automated Logger
Time Taken         Run who or last manually   Automatically logged
Historical Record  Lost after logout          Stored in permanent log file
Alerts             None                       Email notifications possible
Scalability        Works for few users only   Works for hundreds of users

Advantages

 Provides a detailed audit trail of user activities.
 Helps in security investigations.
 Detects unauthorized logins quickly.
 Saves administrator effort.
Chapter 4: SYSTEM SECURITY AND PROCESS MONITORING

4.1 Introduction

System security and process monitoring form the backbone of stable IT infrastructure. In modern
computing environments, servers and workstations must not only perform tasks efficiently but also
remain secure, reliable, and continuously monitored. Threats such as unauthorized logins, malicious
device usage, and unmonitored processes can result in system instability, security breaches, or data
loss.

Automation ensures that such risks are minimized by running scripts that constantly monitor the
system, identify irregular activities, and notify administrators in real time.

This chapter implements three crucial automation tasks:

1. Zombie Process Detector
2. Suspicious Login Monitor
3. New USB Device Notifier

4.2 Task 6: Zombie Process Detector


Objective

To identify and report zombie processes in Linux systems, ensuring the process table is not overloaded
and preventing performance issues.

Problem Statement

 Zombie processes remain in the process table despite being terminated.
 Accumulation of zombie processes can block creation of new processes.
 Manual detection (ps, top) is time-consuming and error-prone.

Implementation Approach

1. Use ps to list processes and filter state Z.
2. Log details into /var/log/zombie_processes.log.
3. Send an alert to the administrator if the zombie count exceeds a threshold.
4. Schedule via cron to run automatically.
Flowchart 7: Zombie Process Detector
┌────────────────────┐
│ Start Script │
└─────────┬──────────┘

┌────────▼─────────┐
│ Check Processes │
│ (ps command) │
└────────┬─────────┘

┌──────────▼───────────┐
│ Filter State = Z │
└──────────┬───────────┘

┌────────────▼─────────────┐
│ Log Zombie Processes │
└────────────┬─────────────┘

┌────────────▼─────────────┐
│ Alert if > Threshold │
└────────────┬─────────────┘

┌────────▼─────────┐
│ End │
└──────────────────┘

Script
#!/bin/bash
LOGFILE="/var/log/zombie_processes.log"
ADMIN="admin@example.com"
timestamp=$(date +"%Y-%m-%d %H:%M:%S")

# List processes whose state column contains Z (zombie/defunct)
zombies=$(ps -eo pid,ppid,state,cmd | awk '$3 ~ /Z/ {print $0}')

if [ -n "$zombies" ]; then
    echo "[$timestamp] Zombie processes found:" >> "$LOGFILE"
    echo "$zombies" >> "$LOGFILE"
    echo "-------------------------------------" >> "$LOGFILE"

    count=$(echo "$zombies" | wc -l)

    if [ "$count" -gt 3 ]; then
        echo "Warning: $count zombie processes detected." \
            | mail -s "Zombie Process Alert" "$ADMIN"
    fi
else
    echo "[$timestamp] No zombie processes detected." >> "$LOGFILE"
fi

Table 6: Comparison Table: Manual Check vs Automated Detector

Aspect           Manual Check     Automated Detector
Effort           High             Low
Detection Speed  Human dependent  Instant
Alerts           None             Email notification
Scalability      Poor             Excellent
4.3 Task 7: Suspicious Login Monitor
Objective

To monitor user login activity and detect unauthorized or suspicious login attempts.

Problem Statement

 Attackers may attempt brute-force or unauthorized logins.
 Manual review of /var/log/auth.log is time-consuming.
 An automated solution is needed to notify administrators instantly.

Implementation Approach

1. Read /var/log/auth.log or journalctl entries.
2. Search for patterns like “Failed password” or “Invalid user”.
3. Log suspicious activities into /var/log/suspicious_login.log.
4. Send an alert if suspicious logins exceed the threshold.

Flowchart 8: Suspicious Login Monitor
┌───────────────────┐
│ Start Monitoring │
└─────────┬─────────┘

┌───────▼────────┐
│ Read auth logs │
└───────┬────────┘

┌───────────▼───────────┐
│ Detect Failed/Invalid │
└───────────┬───────────┘

┌───────────▼───────────┐
│ Log Suspicious Events │
└───────────┬───────────┘

┌───────────▼───────────┐
│ Alert if > Threshold │
└───────────┬───────────┘

┌───────▼───────┐
│ End │
└───────────────┘

Script
#!/bin/bash
LOGFILE="/var/log/suspicious_login.log"
ADMIN="admin@example.com"
timestamp=$(date +"%Y-%m-%d %H:%M:%S")

# On Debian/Ubuntu, auth events are logged to /var/log/auth.log;
# on RHEL-family systems, use /var/log/secure instead.
suspicious=$(grep "Failed password\|Invalid user" /var/log/auth.log)

if [ -n "$suspicious" ]; then
    echo "[$timestamp] Suspicious login attempts detected:" >> "$LOGFILE"
    echo "$suspicious" >> "$LOGFILE"
    echo "-------------------------------------" >> "$LOGFILE"

    count=$(echo "$suspicious" | wc -l)

    if [ "$count" -gt 5 ]; then
        echo "Warning: $count suspicious login attempts detected." \
            | mail -s "Login Alert" "$ADMIN"
    fi
else
    echo "[$timestamp] No suspicious login activity." >> "$LOGFILE"
fi

Table 7: Normal vs Suspicious Login

Activity                     Normal            Suspicious
Successful login             Yes               No
Failed password (1–2 times)  Possible mistake  Multiple times in a row
Invalid username             No                Yes
Login from new IP            Allowed           Needs verification

4.4 Task 8: New USB Device Notifier


Objective

To detect and alert administrators whenever a new USB storage device is connected to the system.

Problem Statement

 USB devices can be used to copy sensitive data or inject malware.
 Manual checking with dmesg or lsblk is not feasible.
 Need a script that triggers on new USB events.

Implementation Approach

1. Use udevadm or monitor the /dev directory for new devices.
2. Log details (device ID, vendor, model, time).
3. Send an instant alert to the administrator.

Flowchart 9: New USB Device Notifier
┌─────────────────┐
│ Start Script │
└───────┬─────────┘

┌─────────▼────────┐
│ Monitor USB port │
└─────────┬────────┘

┌─────────▼─────────┐
│ Detect New Device │
└─────────┬─────────┘

┌─────────▼─────────┐
│ Log Device Info │
└─────────┬─────────┘

┌─────────▼────────┐
│ Alert Admin │
└──────────────────┘

Script
#!/bin/bash
LOGFILE="/var/log/usb_monitor.log"
ADMIN="admin@example.com"
udevadm monitor --udev | while read -r line; do
    if echo "$line" | grep -q "add.*usb"; then
        timestamp=$(date +"%Y-%m-%d %H:%M:%S")
        echo "[$timestamp] New USB device connected: $line" >> "$LOGFILE"
        echo "Alert: New USB device connected at $timestamp" \
            | mail -s "USB Alert" "$ADMIN"
    fi
done

Sample Log
[2025-08-28 23:15:05] New USB device connected:
/devices/pci0000:00/0000:00:14.0/usb1/1-2
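The udevadm monitor loop above only runs while its shell session stays alive. A more robust pattern is a udev rule that invokes a handler on each USB add event. The sketch below is illustrative: the rule path, handler path, and the use of ID_VENDOR/ID_MODEL properties are assumptions, not part of the original script.

```shell
#!/bin/bash
# Hypothetical handler /usr/local/sbin/usb_notify.sh, invoked by a udev rule such as:
#   # /etc/udev/rules.d/99-usb-notify.rules
#   ACTION=="add", SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_device", \
#     RUN+="/usr/local/sbin/usb_notify.sh $env{ID_VENDOR} $env{ID_MODEL}"
# (reload rules with: udevadm control --reload)

LOGFILE="${LOGFILE:-/var/log/usb_monitor.log}"

# Format one log line for a newly attached device.
log_event() {
    local vendor="${1:-unknown}" model="${2:-unknown}"
    printf '[%s] New USB device: vendor=%s model=%s\n' \
        "$(date +'%Y-%m-%d %H:%M:%S')" "$vendor" "$model"
}

# Append to the log only when called with device details (i.e., by udev).
if [ "$#" -ge 1 ]; then
    log_event "$1" "${2:-}" >> "$LOGFILE"
fi
```

Since udev starts the handler itself, this survives logouts and reboots without a persistent monitoring process.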

4.5 Conclusion of Chapter 4

 The Zombie Process Detector ensures that defunct (zombie) processes do not accumulate and
clutter the process table.
 The Suspicious Login Monitor strengthens system security by detecting brute-force or
unauthorized login attempts.
 The New USB Device Notifier prevents data theft and malware injection through external
devices.
CHAPTER 5: SYSTEM MAINTENANCE AND SERVICE AVAILABILITY

5.1 Introduction

Reliable systems demand both proactive maintenance (keeping software current and clean)
and reactive resilience (self-healing when a critical service fails). In production, update negligence
leads to vulnerabilities, while delayed service restarts cause outages. This chapter operationalizes two
core automations: a System Update Tracker and a Database Service Watchdog.

5.2 Objectives & Importance

Objectives

1. Automate identification and reporting of OS/package updates.
2. Enforce timely patching to minimize security risk.
3. Continuously verify DB service health and auto-recover on failure.
4. Maintain auditable logs and alerts for change/incident tracking.

Why this matters

 Security: Unpatched systems are prime targets (privilege escalation, RCE).
 Availability: DB crashes stall end-user transactions; watchdogs limit downtime.
 Efficiency: Removes repetitive manual checks; standardizes ops across fleets.
 Compliance: Change logs support audits (ISO 27001, SOC 2, PCI-DSS).

5.3 Task 9 — System Update Tracker

5.3.1 Workflow (Debian/Ubuntu + RHEL/CentOS/Alma/Rocky/Fedora)

1. Detect distribution and package manager (apt, dnf, or yum).
2. Query available updates; classify (security vs. general where possible).
3. Generate a timestamped report (package, current → new version).
4. Log to /var/log/system_update_tracker.log and optionally email.
5. (Optional) Apply security-only updates during maintenance windows.

5.3.2 Flowchart 10

Start

Detect OS/Package Manager

List Available Updates

Any updates?
├─ No → Log "No updates" → End
└─ Yes → Build Report → Log → (Optional) Email/Admin Notify → (Optional) Auto-apply
security → End

5.3.3 Bash Script — system_update_tracker.sh

#!/usr/bin/env bash
# System Update Tracker
# Supports: Debian/Ubuntu (apt), RHEL-family (dnf/yum)
# Logs: /var/log/system_update_tracker.log
# Optional email via mail/mailx if available

set -euo pipefail

LOGFILE="/var/log/system_update_tracker.log"
REPORT="/tmp/update_report_$(date +%F_%H%M%S).txt"
ADMIN_EMAIL="${ADMIN_EMAIL:-admin@example.com}"
SEND_EMAIL="${SEND_EMAIL:-false}"       # set to "true" to send email
AUTO_SECURITY="${AUTO_SECURITY:-false}" # set to "true" to auto-apply security-only updates (where supported)

timestamp() { date +"%Y-%m-%d %H:%M:%S"; }

have_cmd() { command -v "$1" >/dev/null 2>&1; }

detect_pkg_mgr() {
    if have_cmd apt; then echo "apt"; return
    elif have_cmd dnf; then echo "dnf"; return
    elif have_cmd yum; then echo "yum"; return
    else
        echo "ERROR: No supported package manager found (apt/dnf/yum)." >&2
        exit 1
    fi
}

list_updates_apt() {
    # Avoid interactive prompts
    DEBIAN_FRONTEND=noninteractive apt update -y >/dev/null 2>&1 || true
    apt list --upgradable 2>/dev/null | grep -v "^Listing..." || true
}

list_updates_dnf() {
    dnf check-update -q || true
}

list_updates_yum() {
    yum check-update -q || true
}

apply_security_apt() {
    # Security repo naming can vary; unattended-upgrades is more robust in practice.
    if have_cmd unattended-upgrade; then
        unattended-upgrade -d --dry-run >/dev/null 2>&1 || true
        unattended-upgrade -d || true
    else
        echo "[WARN] $(timestamp) unattended-upgrades not installed; skipping auto security updates." >> "$LOGFILE"
    fi
}

apply_security_dnf() {
    # On many RHEL derivatives, 'dnf update --security' works when metadata is available
    dnf -y update --security || true
}

apply_security_yum() {
    # Yum has limited security metadata support (requires yum-plugin-security)
    if rpm -q yum-plugin-security >/dev/null 2>&1; then
        yum -y --security update || true
    else
        echo "[WARN] $(timestamp) yum-plugin-security not installed; skipping auto security updates." >> "$LOGFILE"
    fi
}

maybe_send_email() {
    local subject="$1"
    local body_file="$2"
    if [[ "$SEND_EMAIL" == "true" ]] && (have_cmd mail || have_cmd mailx); then
        (have_cmd mail && mail -s "$subject" "$ADMIN_EMAIL" < "$body_file") || \
        (have_cmd mailx && mailx -s "$subject" "$ADMIN_EMAIL" < "$body_file") || \
        echo "[WARN] $(timestamp) Failed to send email to $ADMIN_EMAIL" >> "$LOGFILE"
    fi
}

main() {
    local pmgr; pmgr=$(detect_pkg_mgr)
    echo "=== System Update Report @ $(timestamp) ===" > "$REPORT"
    echo "Host: $(hostname -f 2>/dev/null || hostname)" >> "$REPORT"
    echo "OS Package Manager: $pmgr" >> "$REPORT"
    echo "----------------------------------------------" >> "$REPORT"

    local updates=""
    case "$pmgr" in
        apt) updates="$(list_updates_apt)" ;;
        dnf) updates="$(list_updates_dnf)" ;;
        yum) updates="$(list_updates_yum)" ;;
    esac

    if [[ -z "$updates" ]]; then
        echo "[INFO] $(timestamp) No updates available." | tee -a "$LOGFILE"
        echo "No updates available." >> "$REPORT"
    else
        echo "$updates" >> "$REPORT"
        echo "[INFO] $(timestamp) Updates found. Report: $REPORT" | tee -a "$LOGFILE"
    fi
    maybe_send_email "System Update Report: $(hostname)" "$REPORT"

    if [[ "$AUTO_SECURITY" == "true" ]]; then
        echo "[INFO] $(timestamp) Attempting security-only updates..." | tee -a "$LOGFILE"
        case "$pmgr" in
            apt) apply_security_apt ;;
            dnf) apply_security_dnf ;;
            yum) apply_security_yum ;;
        esac
    fi
}

main "$@"

Usage & Scheduling

# 1) Make executable
sudo install -m 0755 system_update_tracker.sh /usr/local/sbin/system_update_tracker.sh

# 2) Optional email + security-only auto-update via env vars
sudo bash -c 'cat >/etc/system_update_tracker.env' <<'EOF'
ADMIN_EMAIL=admin@example.com
SEND_EMAIL=true
AUTO_SECURITY=false
EOF

# 3) Cron (daily 06:00)
sudo bash -c 'cat >/etc/cron.d/system_update_tracker' <<'EOF'
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ADMIN_EMAIL=admin@example.com
SEND_EMAIL=true
AUTO_SECURITY=false
0 6 * * * root source /etc/system_update_tracker.env && /usr/local/sbin/system_update_tracker.sh
EOF
5.4 Task 10 — Database Service Watchdog

5.4.1 Approach

 Monitor DB service health using systemctl and a lightweight connection probe (defensive).
 On failure: attempt restart → verify → log → alert (email).
 Maintain a rotating log for auditability.

5.4.2 Flowchart 11

Start

Check systemd "active"?
├─ Yes → (Optional) TCP/SQL probe → Healthy → Sleep/Exit
└─ No → Restart service

Verify status

Success? ──── Yes → Log + Notify (info)
└─ No → Log + Notify (critical)

5.4.3 Bash Script — db_service_watchdog.sh (MySQL/MariaDB/PostgreSQL)

#!/usr/bin/env bash
# Database Service Watchdog (MySQL/MariaDB/PostgreSQL)
# Logs: /var/log/mysql_watchdog.log or /var/log/pgsql_watchdog.log

set -euo pipefail

DB_KIND="${DB_KIND:-mysql}" # mysql|mariadb|pgsql
ADMIN_EMAIL="${ADMIN_EMAIL:-admin@example.com}"
SEND_EMAIL="${SEND_EMAIL:-true}"
LOGFILE=""
SERVICE_NAME=""

timestamp() { date +"%Y-%m-%d %H:%M:%S"; }

have_cmd() { command -v "$1" >/dev/null 2>&1; }

notify() {
    local subject="$1"; local msg="$2"
    echo "[$(timestamp)] $subject - $msg" >> "$LOGFILE"
    if [[ "$SEND_EMAIL" == "true" ]]; then
        if have_cmd mail; then
            echo "$msg" | mail -s "$subject" "$ADMIN_EMAIL" || true
        elif have_cmd mailx; then
            echo "$msg" | mailx -s "$subject" "$ADMIN_EMAIL" || true
        fi
    fi
}

probe_mysql() {
    # Requires mysql client; adjust credentials or rely on socket auth if configured
    if have_cmd mysql; then
        mysql --protocol=socket -e "SELECT 1;" >/dev/null 2>&1 || return 1
    else
        return 0 # Skip probe if client absent
    fi
}

probe_pgsql() {
    # Requires psql; assumes ident/socket auth for local probe
    if have_cmd psql; then
        psql -tAc "SELECT 1;" >/dev/null 2>&1 || return 1
    else
        return 0
    fi
}

setup_service() {
    case "$DB_KIND" in
        mysql|mariadb)
            SERVICE_NAME="$(systemctl list-unit-files | awk '/mariadb\.service/ {print "mariadb"; exit}')"
            [[ -z "$SERVICE_NAME" ]] && SERVICE_NAME="mysql"
            LOGFILE="/var/log/mysql_watchdog.log"
            ;;
        pgsql|postgres|postgresql)
            SERVICE_NAME="$(systemctl list-units --type=service --all | awk '/postgresql.*\.service/ {print $1; exit}')"
            [[ -z "$SERVICE_NAME" ]] && SERVICE_NAME="postgresql"
            LOGFILE="/var/log/pgsql_watchdog.log"
            ;;
        *)
            echo "Unsupported DB_KIND: $DB_KIND (use mysql|mariadb|pgsql)" >&2; exit 1
            ;;
    esac
}

main() {
    setup_service

    if systemctl is-active --quiet "$SERVICE_NAME"; then
        # Deep probe
        if [[ "$DB_KIND" =~ ^(mysql|mariadb)$ ]]; then
            if ! probe_mysql; then
                notify "DB Probe Fail" "Service '$SERVICE_NAME' active but SQL probe failed on host $(hostname). Attempting restart."
                systemctl restart "$SERVICE_NAME" || true
            else
                echo "[$(timestamp)] $SERVICE_NAME healthy." >> "$LOGFILE"
                exit 0
            fi
        else
            if ! probe_pgsql; then
                notify "DB Probe Fail" "Service '$SERVICE_NAME' active but SQL probe failed on host $(hostname). Attempting restart."
                systemctl restart "$SERVICE_NAME" || true
            else
                echo "[$(timestamp)] $SERVICE_NAME healthy." >> "$LOGFILE"
                exit 0
            fi
        fi
    else
        notify "DB Down" "Service '$SERVICE_NAME' is NOT active on host $(hostname). Attempting restart."
        systemctl restart "$SERVICE_NAME" || true
    fi

    # Post-restart verification
    if systemctl is-active --quiet "$SERVICE_NAME"; then
        # Re-probe
        if [[ "$DB_KIND" =~ ^(mysql|mariadb)$ ]]; then
            if probe_mysql; then
                notify "DB Restarted" "Service '$SERVICE_NAME' restarted successfully and probe passed."
                exit 0
            fi
        else
            if probe_pgsql; then
                notify "DB Restarted" "Service '$SERVICE_NAME' restarted successfully and probe passed."
                exit 0
            fi
        fi
    fi

    notify "DB Critical" "Failed to restore '$SERVICE_NAME' to healthy state. Manual intervention required."
    exit 2
}

main "$@"

Usage & Scheduling

# 1) Install and configure
sudo install -m 0755 db_service_watchdog.sh /usr/local/sbin/db_service_watchdog.sh
sudo bash -c 'cat >/etc/db_watchdog.env' <<'EOF'
DB_KIND=mysql # mysql|mariadb|pgsql
ADMIN_EMAIL=admin@example.com
SEND_EMAIL=true
EOF

# 2) Cron (every 5 minutes)
sudo bash -c 'cat >/etc/cron.d/db_watchdog' <<'EOF'
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
DB_KIND=mysql
ADMIN_EMAIL=admin@example.com
SEND_EMAIL=true
*/5 * * * * root source /etc/db_watchdog.env && /usr/local/sbin/db_service_watchdog.sh
EOF
# (Alternative) systemd timer (more reliable than cron on reboot)
# Create /etc/systemd/system/db-watchdog.service
sudo bash -c 'cat >/etc/systemd/system/db-watchdog.service' <<'EOF'
[Unit]
Description=Database Service Watchdog

[Service]
Type=oneshot
EnvironmentFile=-/etc/db_watchdog.env
ExecStart=/usr/local/sbin/db_service_watchdog.sh
EOF

# Create /etc/systemd/system/db-watchdog.timer
sudo bash -c 'cat >/etc/systemd/system/db-watchdog.timer' <<'EOF'
[Unit]
Description=Run Database Watchdog every 5 minutes

[Timer]
OnBootSec=2min
OnUnitActiveSec=5min
Unit=db-watchdog.service

[Install]
WantedBy=timers.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now db-watchdog.timer

Sample Log Snippets

[2025-08-28 14:35:21] mariadb healthy.
[2025-08-28 15:10:44] DB Down - Service 'mariadb' is NOT active on host db01. Attempting restart.
[2025-08-28 15:10:47] DB Restarted - Service 'mariadb' restarted successfully and probe passed.
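Section 5.4.1 calls for a rotating log, which the script itself does not implement. A logrotate drop-in can cover it; the following is a sketch with illustrative retention settings and a path chosen to match the watchdog's log files:

```
# /etc/logrotate.d/db_watchdog  (illustrative path)
/var/log/mysql_watchdog.log /var/log/pgsql_watchdog.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
    create 0640 root root
}
```

With this in place, logrotate's daily cron/timer run keeps eight compressed weekly archives per log and recreates the file with root-only write access.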

5.5 Case Examples / Use Cases

 E-commerce peak hour: MySQL crashes under sudden load; the watchdog restarts it within
seconds, auto-notifies admins, and MTTR remains <1 minute—users barely notice.
 Patch Tuesday: Update Tracker emails a morning digest of pending security updates; ops
schedules them for 23:00 maintenance, keeping compliance green.

5.6 Preventive vs Reactive (Comparison)

Table 8: Preventive vs Reactive

Dimension   Preventive (Update Tracker)            Reactive (DB Watchdog)
Goal        Reduce risk before failure             Restore service after failure
Trigger     Cron/systemd timer                     Health check/failed probe
Outcome     Patched, hardened system               Minimal downtime, fast recovery
Metrics     Patch latency, # vulnerable packages   MTTR, # incidents auto-resolved
5.7 Work Plan (Combined Flow, Flowchart 12)

┌─────────────────────────┐
│ Schedule via cron/timer│
└───────────┬─────────────┘

┌──────────────────────▼──────────────────────┐
│ Run Update Tracker Script │
└───────────┬─────────────────────┬──────────┘
│ │
Updates? ───┘ └───No → Log ✓
│Yes

Generate Report → Log → (Email) → (Optional Security Auto-Update)

│ (parallel schedule)

┌─────────────────────────┐
│ Run DB Watchdog │
└───────────┬─────────────┘

Service Active? ──┴───No → Restart → Verify → Notify
Yes │

Probe SQL → Healthy → Log ✓

5.8 Advanced Enhancements (Future Scope)

 Central Monitoring: Export logs/metrics to Prometheus + Grafana dashboards.
 ChatOps Alerts: Send alerts to Slack/MS Teams with incident buttons.
 Ansible/Puppet: Fleet-wide rollout and idempotent state enforcement.
 Canary Restarts: Stagger restarts per shard/replica to avoid thundering herds.
 Backup Hooks: On repeated DB failures, trigger snapshot/backup and escalate.

5.9 Summary

This chapter implemented two high-impact automations:

 System Update Tracker standardizes patch hygiene, cutting security exposure and supporting
audits.
 Database Service Watchdog keeps critical data services online with self-heal logic and
verifiable logs.

Chapter 6: Conclusion

6.1 Summary of Tasks Implemented

The project focused on automating essential system administration tasks across areas such as disk
monitoring, user management, system security, and service availability. The table below summarizes
the tasks, their purpose, and the benefit each provides in real-world IT environments:

Table 9: Summary of Implemented Tasks

Task                                       Purpose                                              Benefit
Task 1: Disk Space Alert System            Monitors storage space and alerts at thresholds      Prevents downtime due to storage exhaustion
Task 2: Old File Archiver                  Automates archiving/removal of outdated files        Saves disk space, improves performance
Task 3: User Account Expiry Notification   Notifies admins of expiring user accounts            Prevents unauthorized access and account misuse
Task 4: System Update Tracker              Tracks system updates and patches                    Ensures system security and stability
Task 5: Zombie Process Detector            Detects and logs zombie processes                    Improves system health and resource usage
Task 6: Suspicious Login Monitor           Monitors unusual login activities                    Enhances security and intrusion detection
Task 7: Email Disk Usage Report            Generates disk usage reports and sends via email     Provides regular insights to administrators
Task 8: USB Device Notifier                Notifies when new USB devices are attached           Prevents data theft and unauthorized transfers
Task 9: Database Service Watchdog          Ensures database services are running                Minimizes downtime and ensures service reliability
Task 10: User Session Logger               Tracks and logs user login sessions                  Provides accountability and an audit trail

6.2 Key Observations

During the implementation of automation scripts, several important observations were made:

 Cron jobs are an effective scheduling mechanism for recurring system tasks.
 Logging and reporting are as critical as execution, since administrators need actionable insights
rather than just raw data.
 Automation reduces repetitive manual intervention, freeing administrators for higher-level
tasks.
 Security automation (e.g., suspicious login monitoring, USB device alerts) provides a first line
of defense against insider threats.
 Some tasks (e.g., service watchdogs) require error handling and retry mechanisms to ensure
reliability.
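The retry requirement noted in the last observation can be met with a small generic wrapper; this is a sketch, and the example command, attempt count, and delay are illustrative:

```shell
#!/bin/bash
# retry <max_attempts> <delay_seconds> <command...>
# Re-runs the command until it succeeds or attempts are exhausted.
retry() {
    local max="$1" delay="$2"; shift 2
    local attempt=1
    until "$@"; do
        if [ "$attempt" -ge "$max" ]; then
            echo "retry: giving up after $attempt attempts: $*" >&2
            return 1
        fi
        attempt=$((attempt + 1))
        sleep "$delay"
    done
}

# Example: retry a service restart up to 3 times, 5 seconds apart.
# retry 3 5 systemctl restart mariadb
```

Wrapping flaky steps (service restarts, mail delivery) in such a function keeps the calling scripts simple while making transient failures self-healing.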

6.3 Benefits of Automation in System Administration

Automation has proven to be a transformative approach in modern IT infrastructure. The project
highlighted the following benefits:

1. Efficiency & Time-Saving: Routine administrative tasks that previously required hours can be
executed automatically within seconds.
2. Reduced Human Error: Manual execution often introduces mistakes; automation ensures
consistency and accuracy.
3. Proactive Monitoring: Alerts and notifications help administrators act before failures cause
downtime, especially in critical services like databases.
4. Scalability: Scripts can be extended across multiple servers, making them suitable for large-scale
environments like cloud data centers.
5. Improved Security Posture: Automated checks such as suspicious login monitoring and USB
device detection help reduce insider threats and unauthorized access risks.

Real-World Relevance:
Data centers, financial institutions, e-commerce platforms, and cloud providers (AWS, Azure, Google
Cloud) heavily rely on automation tools (like Ansible, Puppet, Nagios) to maintain uptime, security,
and efficiency. The tasks developed in this project represent the foundation of such enterprise-level
automation.

6.4 Challenges Faced & Solutions

Table 10: Challenges Faced & Solutions

Challenge                                                          Solution Adopted
Configuring cron jobs for multiple tasks with overlapping          Used staggered timings and combined logging to avoid conflicts
schedules
Handling false positives in suspicious login monitoring            Implemented IP whitelisting and stricter regex-based log parsing
Ensuring email alerts were delivered reliably                      Configured Postfix/Sendmail and tested with multiple SMTP servers
Avoiding system overhead from frequent checks                      Optimized script execution frequency (e.g., every 5 mins vs. every 1 min)
Maintaining script portability across different Linux              Used POSIX-compliant shell scripting and environment variable checks
distributions
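The portability point in the last row can be illustrated with a snippet that sticks to POSIX sh constructs (no bash-only features), so the same code runs under dash, bash, and BusyBox ash; the package-manager names mirror those used in the Chapter 5 script:

```shell
#!/bin/sh
# Portability sketch: POSIX sh only (no arrays, no [[ ]]),
# so it behaves identically across distributions and shells.

# True if the named command exists on PATH.
have_cmd() { command -v "$1" >/dev/null 2>&1; }

# Detect the package manager without distro-specific assumptions.
if have_cmd apt-get; then
    PKG_MGR="apt"
elif have_cmd dnf; then
    PKG_MGR="dnf"
elif have_cmd yum; then
    PKG_MGR="yum"
else
    PKG_MGR="unknown"
fi
echo "Detected package manager: $PKG_MGR"
```

Environment checks like these let one script adapt to Debian- and RHEL-family systems instead of shipping per-distribution variants.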

6.5 Future Scope

While the project successfully implemented 10 critical automation tasks, there is considerable scope
for expansion:
1. Integration with Ansible/Puppet:
Converting shell scripts into Ansible playbooks or Puppet manifests for easier deployment
across large IT infrastructures.
2. Container & Cloud Monitoring:
Extending automation to Docker containers and Kubernetes clusters for modern DevOps
environments.
3. Centralized Dashboard:
Building a web-based dashboard to visualize logs, alerts, and system health in real-time.
4. Advanced Security Enhancements:
Using machine learning models for anomaly detection in login attempts and process
monitoring.
5. CI/CD Pipeline Integration:
Linking update and service watchdog tasks with continuous integration pipelines for
automatic rollback in case of failures.

6.6 Conclusion

In conclusion, this project successfully demonstrated how automation enhances system administration
by improving efficiency, reducing risks, and ensuring service reliability. The tasks implemented
are practical, relevant, and scalable to real-world IT environments.

Through a combination of disk monitoring, user management, security automation, and service
availability, this work provides a foundation for larger-scale enterprise automation systems. The
experience also highlighted the challenges of designing robust scripts and reinforced the importance
of logging, reporting, and proactive monitoring in IT system administration.

7 References
1. A. Frisch, R. S., Bash Cookbook, 1st ed., O'Reilly Media, 2007, pp. 120-155, 300-340.
2. N. Matotek, D., Pro Linux System Administration, 2nd ed., Apress, 2019, pp. 45-88, 601-650.
3. The Linux Documentation Project: Bash Guide for Beginners, 2008. https://tldp.org/LDP/Bash-Beginners-Guide/html/ (Accessed on 28th Aug 2025).
4. GNU Operating System: Bash Reference Manual, 2020. https://www.gnu.org/software/bash/manual/ (Accessed on 28th Aug 2025).
5. Linux man-pages project: cron(8), crontab(5), systemd.timer(5), 2024. https://man7.org/linux/man-pages/ (Accessed on 28th Aug 2025).
6. Red Hat, Inc.: System Administrator's Guide, 2023. https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/system_administrators_guide/ (Accessed on 28th Aug 2025).
7. Ubuntu Documentation: Server Guide, 2024. https://ubuntu.com/server/docs (Accessed on 28th Aug 2025).
8. M. K. Loukides, in Unix for Advanced Users (Ed.: A. Oram), O'Reilly Media, 1993, pp. 95–142.
9. W. Shotts, The Linux Command Line, 5th ed., No Starch Press, 2019, pp. 201-250, 311-370.
10. IBM Documentation: Linux Performance Monitoring, 2022. https://www.ibm.com/docs/en/linux-on-systems?topic=management-performance-monitoring (Accessed on 28th Aug 2025).
