Operating System Report Class CC05 Group 04
Ho Chi Minh City University of Technology
Faculty of Computer Science and Engineering
1 Acknowledgment
Our team extends our deepest gratitude to Mr. Nguyen Phuong Duy and Mrs. Le Thanh
Van from the Ho Chi Minh City University of Technology - Ho Chi Minh City National
University for providing us with the fundamental knowledge necessary to accomplish
our assignment. As we worked together on this significant task, we recognized that our
understanding is still developing, and we encountered several challenges in the stages
of learning, assessment, and presentation. We eagerly await your valuable feedback and
guidance to further refine and enhance our skills. We sincerely appreciate your support.
Thank you.
Contents
1 Acknowledgment
2 Background
  2.1 Scheduling
    2.1.1 Scheduling Concepts
    2.1.2 Multi-level Queue
  2.2 Memory Management
    2.2.1 Virtual Memory
    2.2.2 Physical Memory
    2.2.3 Paging
  2.3 System Call
    2.3.1 System Call Mechanism
    2.3.2 Types of System Call
3 Implementation
  3.1 Scheduler
    3.1.1 Queue Data Structure (queue.c)
  3.2 Memory Management
    3.2.1 Paging (mm.c)
    3.2.2 Physical Memory Management (mm-memphy.c)
    3.2.3 Virtual Memory Management (mm-vm.c)
    3.2.4 Virtual Memory Management (libmem.c)
  3.3 Syscall
    3.3.1 Terminating Processes (sys_killall.c)
4 Question Answering
  4.1 Scheduler
  4.2 Memory Management
  4.3 System Call
  4.4 Synchronization
2 Background
This section provides the foundational knowledge and theoretical framework necessary
for implementing a simple operating system (OS) as part of this assignment. The focus is
on three major components—scheduling, memory management, and system calls.
2.1 Scheduling
2.1.1 Scheduling Concepts
Scheduling is a fundamental OS mechanism, responsible for allocating CPU time among multiple processes in a multi-processor setting to optimize performance and resource utilization. It ensures efficient utilization of the CPU, fairness among processes, and responsiveness of the system.
Each process is described by a process control block (PCB); two of its fields matter most for scheduling:
- Priority: A static priority value (lower value means higher priority) that influences scheduling decisions.
- Code: A pointer to the process's text segment (instructions), stored separately from RAM for simplicity.
With these units in place, several key metrics guide the design of scheduling algorithms:
- CPU Utilization: Maximizing the use of CPU resources to keep processors active.
- Turnaround Time: Minimizing the total time from process submission to completion.
- Response Time: Reducing the delay between a process’s request and its first CPU
allocation.
- Fairness: Ensuring equitable CPU access to prevent any process from being indefi-
nitely delayed.
2.1.2 Multi-level Queue
This assignment adopts the Multi-Level Queue (MLQ) scheduling algorithm, inspired by the Linux kernel's scheduling approach but simplified for educational purposes. This algorithm allocates CPU time efficiently among different types of processes. Processes are grouped into several distinct queues based on specific characteristics, each associated with a priority level ranging from 0 to MAX_PRIO. Each queue is designed for a specific category of tasks, such as system processes, interactive user applications, or background batch jobs.
Each queue in the MLQ system can employ its own scheduling algorithm. For instance,
a queue containing interactive processes might use Round Robin scheduling, while a queue
with batch processes could use First Come First Serve (FCFS). The queues themselves
are prioritized, meaning the scheduler always selects a process from the highest-priority
queue before considering lower-priority ones. This ensures that critical system tasks are
attended to before less urgent jobs.
The key features of this MLQ implementation are:
- Time Slot Allocation: The CPU allocates time slots to each queue based on its priority. The slot duration for a queue is calculated as slot = MAX_PRIO - prio, where prio is the queue's priority value. This ensures that higher-priority processes (those with lower prio values) receive more CPU time.
- Traversal Mechanism: The scheduler traverses the ready queues in a fixed order,
ensuring that all queues are eventually serviced. However, if a queue exhausts its
slot, the CPU switches to the next queue, even if the current queue has remaining
processes. This enforces strict priority while preventing starvation of lower-priority
processes.
- Process Lifecycle: When a new process is created, the loader assigns it a PCB and
places it into the appropriate ready queue based on its priority (prio). The CPU
executes the process for its allocated time slice, then re-enqueues it to its priority
queue if it hasn’t finished. The scheduler then selects the next process using the
MLQ policy.
2.2 Memory Management
2.2.1 Virtual Memory
Virtual memory provides each process with an independent address space, abstracting physical memory details. The process control block includes a memory management structure that tracks virtual memory areas, each corresponding to a segment such as code, data, or stack. These areas are mapped to physical memory through page tables, allowing processes to operate as if they have contiguous memory, even when physical allocation is fragmented.
2.2.2 Physical Memory
Physical memory in this system consists of two types of devices: RAM (primary memory) and SWAP (secondary memory). Their roles and characteristics are:
- RAM: The primary memory device, directly accessible by the CPU via the address
bus. It is divided into fixed-size frames.
- SWAP: A secondary memory device used to extend available memory. SWAP cannot
be accessed directly by the CPU; data must be moved to RAM for processing. The
system supports up to four SWAP devices, each with configurable sizes. SWAP is
typically larger and cheaper than RAM, making it ideal for storing infrequently
accessed pages.
2.2.3 Paging
Paging divides the virtual address space into pages, mapped to physical frames via
a page table stored in the page global directory. Each virtual address consists of a page
number and offset, with the page table entry specifying the corresponding frame number
or SWAP location if the page is not in RAM. This mechanism supports memory swapping,
where pages are moved between RAM and SWAP to manage memory constraints, and
provides isolation by maintaining separate page tables for each process.
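As an illustration, the split of a virtual address into page number and offset, and its recombination with a frame number, can be sketched as follows. The 256-byte page size (8 offset bits) is an assumption for this example; the assignment's actual page size may differ.

```c
#include <stdint.h>

#define OFFSET_BITS 8                      /* assumed: 256-byte pages */
#define PAGE_SIZE   (1u << OFFSET_BITS)

/* High bits of the virtual address select the page. */
static inline uint32_t page_number(uint32_t vaddr) { return vaddr >> OFFSET_BITS; }

/* Low bits are the byte offset within that page. */
static inline uint32_t page_offset(uint32_t vaddr) { return vaddr & (PAGE_SIZE - 1); }

/* Physical address = frame number combined with the unchanged offset. */
static inline uint32_t phys_addr(uint32_t fpn, uint32_t vaddr)
{
    return (fpn << OFFSET_BITS) | page_offset(vaddr);
}
```

If the page table maps page 0x12 to frame 0x5, virtual address 0x1234 translates to physical address 0x534.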
2.3 System Call
2.3.1 System Call Mechanism
System calls enable controlled communication between user processes and the kernel, transitioning from user mode to kernel mode to perform privileged operations. Key aspects include:
- Mode Switching: Processes request services, triggering a switch to kernel mode for
secure execution, then return to user mode with results.
- Argument Passing: Parameters are passed via registers or memory, validated by the
kernel to ensure system integrity.
2.3.2 Types of System Call
System calls are categorized by their functionality, each supporting distinct operating system tasks. These include:
- Memory Management: Handle memory allocation and access, coordinating with the
memory unit for address translation.
- System Information: Provide data on system state, aiding diagnostics and process
adaptation.
3 Implementation
3.1 Scheduler
3.1.1 Queue Data Structure (queue.c)
The function enqueue(struct queue_t *q, struct pcb_t *proc) is responsible for
adding a new process (proc) into the queue (q). This is invoked whenever the system needs
to insert a process into the waiting queue for scheduling.
Initially, the function checks the validity of the input parameters. If either q or proc
is NULL, the function terminates.
Next, the function checks whether the queue is already full by comparing q->size to MAX_QUEUE_SIZE. If the queue is full, it also terminates at this point.
If the inputs are valid and the queue is not full, the process is inserted at the end of
the queue’s internal array, and the size of the queue is incremented by 1 to reflect the
successful addition.
The function struct pcb_t *dequeue(struct queue_t *q) selects and removes the
process with the highest priority from the queue.
As with enqueue, the function first validates the input. If q is NULL or if the queue
is empty (q->size == 0), the function returns NULL to indicate that no processes are
available for removal.
If validation succeeds, the function traverses the queue to find the process with the
smallest priority value (representing the highest priority, assuming lower values indicate
higher scheduling priority).
After identifying the highest-priority process, it is saved into a temporary pointer. To
maintain the structure of the queue, all elements after the removed process are shifted
one position to the left, closing the gap.
The queue’s size is then decremented by 1, and the pointer to the removed process is
returned.
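A minimal, self-contained sketch of the two functions described above follows. The struct fields shown are simplified assumptions for illustration; the assignment's headers define the real pcb_t and queue_t.

```c
#include <stddef.h>

#define MAX_QUEUE_SIZE 10

/* Simplified stand-ins for the assignment's types (assumption). */
struct pcb_t {
    unsigned int prio;              /* lower value = higher priority */
};

struct queue_t {
    struct pcb_t *proc[MAX_QUEUE_SIZE];
    int size;
};

/* Append proc at the tail if inputs are valid and the queue is not full. */
void enqueue(struct queue_t *q, struct pcb_t *proc)
{
    if (q == NULL || proc == NULL)
        return;
    if (q->size >= MAX_QUEUE_SIZE)
        return;
    q->proc[q->size++] = proc;
}

/* Remove and return the process with the smallest prio value,
 * shifting later elements left to close the gap. */
struct pcb_t *dequeue(struct queue_t *q)
{
    if (q == NULL || q->size == 0)
        return NULL;

    int best = 0;
    for (int i = 1; i < q->size; i++)
        if (q->proc[i]->prio < q->proc[best]->prio)
            best = i;

    struct pcb_t *victim = q->proc[best];
    for (int i = best; i < q->size - 1; i++)
        q->proc[i] = q->proc[i + 1];
    q->size--;
    return victim;
}
```

Note that dequeue is O(n) in the queue size; with one queue per priority level (as in the MLQ design below), the scan is usually short.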
This file is responsible for managing the process queues for a simple scheduler in an
operating system.
The function struct pcb_t *get_mlq_proc() is used to select and retrieve a process
from the multi-level queue (MLQ) based on a priority scheduling policy, where higher-
priority processes are given more CPU time.
• Before selecting and retrieving a process, we first check if any queue is non-empty
using a for loop. If all the queues are empty, the function returns NULL. Otherwise,
the process continues.
• After that, we use a while loop to iterate through all the priority queues in mlq_ready_queue[]
from i = 0 (highest priority) to i = MAX_PRIO - 1 (lowest priority).
• The slot[] array is used to track the number of attempts for each queue. During
each iteration, the value of slot[i] is decremented. If this value becomes 0, the
process cannot be retrieved from this queue in the current loop.
• When i exceeds MAX_PRIO, meaning all queues have been checked, the slot[] array is refilled, with each entry slot[j] reset to MAX_PRIO - j. This allows the lower-priority queues to be reused after a number of iterations. After resetting, i is set back to 0, and the loop continues with the next round of checks.
• If the condition is met, a process is dequeued using the dequeue function (with
mutex protection to ensure synchronization). After retrieving the process, the value
of slot[i] is decremented by 1. Once the process is retrieved from the queue, the
mutex is released.
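A single-threaded sketch of this traversal, reduced to choosing which queue to serve next, is shown below. Queue contents are abstracted to per-queue counts; the real get_mlq_proc dequeues a PCB and holds a mutex around the queue access.

```c
#include <stddef.h>

#define MAX_PRIO 5   /* assumption: small value for illustration */

/* Given per-queue process counts and remaining slot budgets, return
 * the index of the queue to serve next, or -1 if all queues are empty.
 * Budgets follow the report's rule: slot[j] refills to MAX_PRIO - j. */
int pick_queue(const int count[MAX_PRIO], int slot[MAX_PRIO])
{
    int total = 0;
    for (int i = 0; i < MAX_PRIO; i++)
        total += count[i];
    if (total == 0)
        return -1;                      /* nothing to schedule */

    for (int i = 0; ; i++) {
        if (i >= MAX_PRIO) {
            /* all queues checked: refill budgets and start over */
            for (int j = 0; j < MAX_PRIO; j++)
                slot[j] = MAX_PRIO - j;
            i = 0;
        }
        if (count[i] > 0 && slot[i] > 0) {
            slot[i]--;                  /* consume one slot of queue i */
            return i;
        }
    }
}
```

Because the refill always leaves the lowest-priority queue with a budget of at least 1, every non-empty queue is eventually served, which is the starvation-prevention property described above.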
void put_proc(struct pcb_t *proc) This function returns a currently running pro-
cess to the multi-level queue (MLQ) for rescheduling and records it in the running_list
for monitoring.
• It first acquires a mutex lock to prevent race conditions when modifying running_list,
ensuring thread-safe operation. Then it calls enqueue(proc->running_list, proc);
to add the process to the running list.
After enqueuing, it releases the mutex. Finally, it reinserts the process into the MLQ
according to its priority level.
void add_proc(struct pcb_t *proc) This function adds a new process into the sched-
uler.
struct pcb_t *get_proc() This function retrieves the next process from the ready
queue for execution.
3.2 Memory Management
3.2.1 Paging (mm.c)
This section presents the paging mechanism used in the simple operating system for memory management. It allocates memory to processes in pages and frames, and supports swapping when RAM is insufficient.
vmap_page_range Function
The function's main loop maps each available frame to a virtual page one by one, updating the page table and the FIFO page list accordingly. If not all pages can be mapped (pgit < pgnum), the function returns the number of missing pages; otherwise, it returns 0 on success.
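Under the simplifying assumption that a page table entry holds just a frame number (the real PTE is bit-encoded, and the enlisting into the FIFO page list is omitted here), the mapping loop might look like:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical frame-list node, as produced by the frame allocator. */
struct framephy_struct {
    int fpn;                           /* frame page number */
    struct framephy_struct *fp_next;
};

/* Map up to pgnum consecutive pages, starting at page pgn_start, onto
 * the frames in `frames`.  Returns the number of pages left unmapped
 * (0 on full success), matching the behavior described in the text. */
int vmap_page_range_sketch(uint32_t *pgd, int pgn_start, int pgnum,
                           struct framephy_struct *frames)
{
    int pgit = 0;
    struct framephy_struct *fp = frames;
    for (; fp != NULL && pgit < pgnum; fp = fp->fp_next, pgit++)
        pgd[pgn_start + pgit] = (uint32_t)fp->fpn;   /* set the PTE */
    return pgnum - pgit;   /* missing pages if the frame list ran short */
}
```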
alloc_pages_range Function
The loop

    for (pgit = 0; pgit < req_pgnum; pgit++)

iterates to allocate one frame per iteration. It allocates memory for each list node and stores the frame number. If head is NULL, it initializes the list.
If MEMPHY_get_freefp fails, the function rolls back the allocations, returning every frame it has already claimed to the free pool.
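This allocate-then-rollback pattern can be sketched with a hypothetical stack of free frame numbers standing in for MEMPHY_get_freefp:

```c
#include <stdlib.h>

/* Node of the frame list built by the allocation (illustrative name). */
struct frame_node {
    int fpn;
    struct frame_node *next;
};

/* Stand-in for MEMPHY_get_freefp: pop a frame number from a stack of
 * free frames; returns 0 on success, -1 when the pool is exhausted. */
static int get_freefp_stub(int *pool, int *top, int *fpn)
{
    if (*top == 0)
        return -1;
    *fpn = pool[--(*top)];
    return 0;
}

/* Allocate req_pgnum frames into *head; if a request fails midway,
 * roll back by returning every frame taken so far and report -1. */
int alloc_pages_range_sketch(int *pool, int *top, int req_pgnum,
                             struct frame_node **head)
{
    *head = NULL;
    for (int pgit = 0; pgit < req_pgnum; pgit++) {
        int fpn;
        if (get_freefp_stub(pool, top, &fpn) != 0) {
            while (*head) {                     /* rollback */
                struct frame_node *next = (*head)->next;
                pool[(*top)++] = (*head)->fpn;  /* frame goes back */
                free(*head);
                *head = next;
            }
            return -1;
        }
        struct frame_node *node = malloc(sizeof *node);
        if (node == NULL)
            return -1;                          /* sketch: no cleanup */
        node->fpn = fpn;
        node->next = *head;                     /* push onto the list */
        *head = node;
    }
    return 0;
}
```

The rollback leaves the free pool exactly as it was before the failed request, so a partial allocation never leaks frames.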
init_mm Function
The function init_mm sets up the memory space (mm_struct) for a new process. Since
this is the first VMA (vma0), it is added directly to the process’s VMA list.
3.2.2 Physical Memory Management (mm-memphy.c)
MEMPHY_dump Function
3.2.3 Virtual Memory Management (mm-vm.c)
This section describes the virtual memory management of a process in the simple operating system.
Function get_vm_area_node_at_brk
Function validate_overlap_vm_area
Function inc_vma_limit
The function inc_vma_limit extends the virtual memory space of a process (pcb_t)
by allocating a new virtual address region and mapping it to physical memory.
The process begins by allocating a new vm_rg_struct to describe the corresponding
physical memory region. The requested memory size is aligned to the system’s page size
to ensure proper paging. The number of required pages is computed based on the aligned
size, and the new region’s start and end addresses are determined.
Using vmaid, the function identifies the VMA to be extended and stores its previous
limit. It then verifies that the newly requested virtual memory range does not overlap
with existing VMAs. If validation passes, the function proceeds to update the VMA’s
limit and maps the new range to physical memory, thereby allowing the process to use
the extended memory space.
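The page-size alignment step described above can be illustrated as follows (a 256-byte page is assumed for the example; the assignment's paging macros define the real size):

```c
/* Assumed page size for illustration only. */
#define PAGE_SIZE 256UL

/* Round a requested size up to the next multiple of the page size. */
static inline unsigned long align_up(unsigned long sz)
{
    return (sz + PAGE_SIZE - 1) / PAGE_SIZE * PAGE_SIZE;
}

/* Number of whole pages needed to back the aligned request. */
static inline unsigned long pages_needed(unsigned long sz)
{
    return align_up(sz) / PAGE_SIZE;
}
```

For example, a 600-byte request is aligned up to 768 bytes and therefore needs 3 pages.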
3.2.4 Virtual Memory Management (libmem.c)
This file is responsible for allocating, freeing, and accessing virtual memory regions using a paging mechanism.
Function int __alloc(int vmaid, size_t size) The function __alloc carries out
virtual memory allocation for the calling process within the memory region identified by
vmaid.
• Initial allocation:
We first attempt a normal allocation by searching for a free region. If a suitable re-
gion is found, we update its metadata and return the symbol table entry (symrgtbl)
to the caller.
Function int __free(struct pcb_t *caller, int vmaid, int rgid) This function
frees a previously allocated virtual memory region, identified by rgid, within the VMA
numbered vmaid for the calling process caller.
• First, the region node (rgnode) corresponding to rgid is retrieved from the symbol
table symrgtbl to obtain the region’s metadata.
• The function verifies that rgid lies within the valid range [0, PAGING_MAX_SYMTBL_SZ).
• Finally, the freed region is added back into the list of free regions.
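The validate-and-release bookkeeping can be sketched as follows; the used flag is an illustrative stand-in for the real region metadata kept in symrgtbl:

```c
#define PAGING_MAX_SYMTBL_SZ 30   /* assumed table size for the sketch */

/* Simplified symbol-table entry (assumption, not the real struct). */
struct vm_rg {
    unsigned long rg_start, rg_end;
    int used;
};

/* Sketch of __free's bookkeeping: validate rgid, look the region up in
 * the symbol table, and mark it as returned to the free pool.
 * Returns 0 on success, -1 on an invalid or already-freed region. */
int free_region_sketch(struct vm_rg symrgtbl[PAGING_MAX_SYMTBL_SZ], int rgid)
{
    if (rgid < 0 || rgid >= PAGING_MAX_SYMTBL_SZ)
        return -1;                /* rgid outside the valid range */
    if (!symrgtbl[rgid].used)
        return -1;                /* double free or never allocated */
    symrgtbl[rgid].used = 0;      /* region goes back to the free list */
    return 0;
}
```

The range check makes a double free or an out-of-range rgid harmless instead of corrupting the table.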
Library Functions: liballoc and libfree These two functions provide a simplified
library interface for virtual memory allocation and deallocation:
• liballoc(proc, size): Allocates a new virtual memory region of size bytes for
process proc and records its descriptor in symrgtbl[reg_index].
• libfree(proc, reg_index): Frees the virtual memory region for process proc iden-
tified by reg_index, returning it to the pool of available regions.
Function int pg_getpage(int pgn) This function handles page-fault processing when
the process tries to access page number pgn.
• Inspect the Page Table Entry (PTE): Retrieve the PTE for page pgn and check
PAGING_PAGE_PRESENT(pte).
• Page already resident in RAM: Simply retrieve its frame number (fpn) from the PTE.
• Page not present in RAM: The page must first be brought back from SWAP; a victim page is selected (tracked via the FIFO page list), its contents are swapped out if necessary, the faulting page is swapped in, and the PTE is updated with the new frame number.
• Return: On success, return the frame number (fpn) for the caller to continue access.
If an error occurs during swap operations, return a negative error code.
The functions pg_getval and pg_setval perform byte-level reads and writes, respec-
tively, on a process’s virtual memory at address addr.
– Extract the page number (pgn) and the byte offset within that page from addr.
– Use the frame number returned by pg_getpage and the offset to compute the
exact physical address.
– For pg_getval:
∗ Populate a regs structure with the physical address and target buffer.
∗ Call syscall(caller, 17, &regs) to request that the kernel read one byte from that physical address.
∗ On success, the byte is written into *data.
– For pg_setval:
∗ Populate regs with the physical address and the byte to write.
∗ Call syscall(caller, 17, &regs) to request that the kernel perform the write.
• Return value:
Function libread
• Call to __read:
• Update destination:
– When the debug flag IODUMP is enabled, it prints details of the read
operation and dumps the current page table and physical memory state
for diagnostic purposes.
• Return value:
Function get_free_vmrg_area
• First, obtain a pointer to the virtual memory area (VMA) by its vmaid
in the caller process.
• Initialize newrg in an invalid state so that later code can detect whether
allocation succeeded.
– If the free block is large enough (end - start >= required size),
assign the new region (newrg) to start at rg_it->start and update
the free block accordingly.
– If the old block has no remaining space (start >= end), remove this
node from the list.
– If it’s the head node, update cur_vma->vm_freerg_list to the next
node.
– Otherwise, link the previous node (prev) to rg_it->rg_next.
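Putting the steps above together, a first-fit sketch can be written as follows. Field names follow the report's description; the unlink logic mirrors the head/previous-node cases listed above.

```c
#include <stddef.h>

/* Free-region node, as described in the report. */
struct vm_rg_struct {
    unsigned long rg_start, rg_end;
    struct vm_rg_struct *rg_next;
};

/* First-fit: find a free region of at least `size` bytes, carve the
 * allocation from its front, and unlink the node if fully consumed.
 * Returns 0 on success, -1 if no block is large enough. */
int get_free_vmrg_area_sketch(struct vm_rg_struct **free_list,
                              unsigned long size,
                              struct vm_rg_struct *newrg)
{
    newrg->rg_start = newrg->rg_end = 0;    /* invalid until success */

    struct vm_rg_struct *prev = NULL;
    for (struct vm_rg_struct *it = *free_list; it != NULL;
         prev = it, it = it->rg_next) {
        if (it->rg_end - it->rg_start >= size) {
            newrg->rg_start = it->rg_start;
            newrg->rg_end = it->rg_start + size;
            it->rg_start += size;           /* shrink the free block */
            if (it->rg_start >= it->rg_end) {
                /* block fully used: unlink it from the list */
                if (prev == NULL)
                    *free_list = it->rg_next;  /* head case */
                else
                    prev->rg_next = it->rg_next;
            }
            return 0;
        }
    }
    return -1;
}
```

(In the real code the fully consumed node would also be reclaimed; ownership of free-list nodes is outside this sketch.)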
3.3 Syscall
3.3.1 Terminating Processes (sys_killall.c)
The purpose of this file is to locate and terminate all processes whose
names match the name provided by the caller.
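The matching step can be sketched over a plain array instead of the scheduler's queues; the struct and names here are hypothetical simplifications, while the real sys_killall walks the running and ready lists and dequeues each match.

```c
#include <string.h>

/* Hypothetical minimal process record for the sketch. */
struct proc_rec {
    char name[100];
    int pid;
};

/* Collect the PIDs of all processes whose name matches `name`;
 * returns how many matches were found. */
int killall_sketch(const struct proc_rec *procs, int n,
                   const char *name, int *killed_pids)
{
    int killed = 0;
    for (int i = 0; i < n; i++)
        if (strcmp(procs[i].name, name) == 0)
            killed_pids[killed++] = procs[i].pid;   /* mark for kill */
    return killed;
}
```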
4 Question Answering
4.1 Scheduler
1. What are the advantages of using the scheduling algorithm described in this assign-
ment compared to other scheduling algorithms you have learned?
In this assignment, we use the Multi-Level Queue (MLQ) scheduling algorithm. Based on the following criteria, this algorithm proves more efficient than the other scheduling algorithms we have studied so far, which include First-Come, First-Served (FCFS), Shortest Job Next (SJN), Round Robin (RR), and Priority Scheduling:
- Priority support
+ MLQ: Dedicated priority queues give direct, flexible priority support; the scheduler always serves the highest-priority non-empty queue first.
+ Others: Other algorithms generally lack flexible priority support: FCFS (no priority), SJN (short-task focus), RR (equal time slices), and Priority Scheduling (single-queue limits).
- Fairness/Starvation Prevention
+ MLQ: MLQ’s fixed-slot system ensures CPU time for low-priority processes,
preventing starvation.
+ Others: Other algorithms struggle to ensure fairness. FCFS blocks short tasks.
SJN delays long ones, RR slows critical tasks, and Priority Scheduling neglects
low-priority tasks.
- Resource Utilization
+ MLQ: MLQ optimizes CPU use by prioritizing high-priority tasks and supporting multi-processor systems.
+ Others: FCFS (idle time), SJN (delays long tasks), RR (switching overhead),
or Priority Scheduling (unused cycles).
- Simplicity
+ Others: FCFS is extremely simple but lacks flexibility and prioritization. SJN requires accurate estimation of job lengths. RR is simple to implement but requires careful tuning of time slices to balance responsiveness and overhead. Priority Scheduling is simple but has severe starvation issues.
According to the comparison above, even though MLQ may not be the easiest algorithm to implement, it is the easiest to manage once in place. Moreover, MLQ offers several advantages over other scheduling algorithms such as FCFS, SJN, RR, and Priority Scheduling. Its key strengths, including priority-based scheduling, fairness and reduced starvation, adaptability to diverse workloads, efficient resource utilization, and multi-processor support, make it a far more effective method for handling systems
with mixed workloads, where critical tasks must be prioritized without neglecting lower-
priority processes. By segregating processes into priority queues with tailored scheduling
policies, MLQ achieves a balance of performance, fairness, and flexibility that is superior
to other algorithms we have learned so far.
4.2 Memory Management
- Modularity and structure clarity: Segments with specific roles (code, data, stack) create a clear, modular memory structure, simplifying OS design and maintenance.
2. What will happen if we divide the address to more than 2 levels in the paging
memory management system?
A multi-level paging system with more than two levels resolves virtual-to-physical address mappings through a hierarchical traversal of the page tables. This approach brings both advantages and challenges.
Advantages:
- Increased address space: More indexing bits expand the virtual address space, supporting future memory demands or complex applications.
- Fine-grained control: Smaller address units enable precise allocation, ideal for the OS's segmented memory areas.
- Reduced page table size: Unlike a single-level system, multi-level paging allocates tables only for used address regions, minimizing memory overhead, freeing space for actual data, and improving resource efficiency.
Challenges:
- Increased overhead: Additional memory accesses to traverse page table levels in-
crease address translation latency, potentially slowing performance in the single-
RAM system.
- Fragmentation: Higher levels may fragment physical memory and page table entries.
In the simple operating system, this could negatively affect allocation efficiency and
complicate swapping between RAM and SWAP.
Dividing the address space into more than two levels in the simple OS’s paging sys-
tem would provide advantages like a larger address space, fine-grained control, and re-
duced page table size, supporting scalability and efficient memory use. However, it would
introduce challenges such as increased overhead, greater complexity, and potential frag-
mentation, which could undermine performance and simplicity in the current single-level,
resource-limited design, suggesting caution in adopting this approach without significant
system enhancements.
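For illustration, a hypothetical 3-level split of a 32-bit virtual address is shown below; the field widths (10-bit directory, 8-bit middle, 6-bit table index, 8-bit offset) are chosen arbitrarily for the example.

```c
#include <stdint.h>

/* Assumed layout: | 10-bit pgd | 8-bit pmd | 6-bit pte | 8-bit offset | */
#define OFF_BITS 8
#define PTE_BITS 6
#define PMD_BITS 8

static inline uint32_t off_of(uint32_t va) { return va & ((1u << OFF_BITS) - 1); }
static inline uint32_t pte_of(uint32_t va) { return (va >> OFF_BITS) & ((1u << PTE_BITS) - 1); }
static inline uint32_t pmd_of(uint32_t va) { return (va >> (OFF_BITS + PTE_BITS)) & ((1u << PMD_BITS) - 1); }
static inline uint32_t pgd_of(uint32_t va) { return va >> (OFF_BITS + PTE_BITS + PMD_BITS); }
```

Each added level means one more index extraction and one more memory access on the translation path, which is exactly the latency cost discussed above.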
3. What are the advantages and disadvantages of segmentation with paging?
Segmentation provides logical division, while paging handles physical allocation, com-
bining the strengths of both approaches. Below are the advantages and disadvantages of
this scheme.
Advantages:
- Flexible memory management: Logical segments with fixed-size pages enable dy-
namic, efficient allocation for diverse process needs.
- Protection and isolation: Segments enforce logical boundaries, and paging ensures
physical isolation, enhancing security and preventing unauthorized access.
- Memory sharing: Paging allows multiple processes to map shared physical pages to
different segments, reducing memory duplication for common code or data.
Disadvantages:
- Overhead: Segment descriptors and page tables add memory access and storage
costs, impacting performance.
- Page Table Size: Large address spaces demand extensive page tables, consuming
memory and slowing operations.
Segmentation with paging in the simple OS offers advantages like flexible memory
management, robust protection and isolation, efficient memory sharing, and simplified
address translation, enhancing system versatility and security. However, it introduces
disadvantages such as fragmentation, increased complexity, overhead, and large page table
sizes, which could strain the OS’s limited resources and complicate its simple design,
necessitating a balance between functionality and efficiency.
4.3 System Call
- Kernel Retrieval: The kernel accesses the complex argument by translating the referenced virtual address to a physical location, retrieving the data securely through the memory management system.
This approach ensures that complex arguments can be passed efficiently despite reg-
ister limitations, maintaining security and isolation by leveraging the operating system’s
memory management capabilities.
2. What happens if the syscall job implementation takes too long to execute?
If a system call’s execution takes too long in the Simple Operating System, several
adverse effects can occur, impacting system performance, responsiveness, and stability:
- Delayed Process Scheduling: Running in kernel mode, long system calls preempt
user processes, delaying higher-priority tasks and reducing responsiveness.
In short, a system call that takes too long to execute disrupts scheduling, causes
resource contention, undermines fairness, and risks instability, degrading the Simple Op-
erating System’s performance and reliability in a multi-processor environment.
4.4 Synchronization
1. What happens if the synchronization is not handled in your Simple OS? Illustrate
the problem of your simple OS (assignment outputs) by example if you have any.
If synchronization is not handled in the Simple Operating System, the lack of coor-
dination in accessing shared resources in a multi-processor environment leads to several
critical issues, compromising system reliability and performance:
- Data Corruption: Unsynchronized concurrent writes can corrupt shared data. For example, simultaneous modifications to a process list may overwrite valid entries, leading to lost or invalid process states.
- System Crashes: Uncontrolled access to critical system data can trigger errors, such
as accessing invalid memory or dereferencing corrupted pointers, potentially causing
the operating system to crash or become unresponsive.
During development, we encountered a race condition. When compiling and running the code, it appeared to return the correct output at first; however, when we ran it many times in quick succession, the output was wrong and eventually led to a segmentation fault. After collecting all the outputs into an error log file, here is what we found:
Nature of the Error In the error log, we used Valgrind to help find the error source and its location. After careful examination, we identified data races in the queue management functions responsible for checking queue status, adding processes, and removing processes. The log also pinpointed a specific race between a queue status check and a queue modification, confirming that unsynchronized reads and writes to the queue's state cause the inconsistencies.
Causes of Race Condition After discussion, we agreed that the error stems from
insufficient synchronization:
– Queue Management: The MLQ scheduler’s priority queues store processes, with
threads checking queue status and adding/removing processes. Some operations
lack locks, allowing threads to read stale queue sizes or process lists while others
modify them.
– Timer Coordination: The timer’s time-slot counter, accessed by CPU and loader
threads, lacks synchronization, causing inconsistent timing.
Impact on Functionality Knowing the causes, we then analyzed the impact more deeply through the outputs and error logs. Several key components are disrupted by the race condition:
– Syscalls: Operations like process termination may miss targets due to queue incon-
sistencies.
Mitigation Strategies Although we tried our best to fix the error, none of our attempted solutions produced the desired output, especially under time pressure. Still, we were able to identify some possible mitigation strategies:
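One such strategy, sketched below, is to guard every read and write of the shared queue state with a single pthread mutex; this is a minimal illustration of the discipline, not the assignment's actual queue code.

```c
#include <pthread.h>

/* One lock protects all accesses to the shared queue state. */
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static int queue_size = 0;

void locked_enqueue(void)
{
    pthread_mutex_lock(&queue_lock);
    queue_size++;                 /* modification is atomic w.r.t. readers */
    pthread_mutex_unlock(&queue_lock);
}

int locked_empty(void)
{
    pthread_mutex_lock(&queue_lock);
    int empty = (queue_size == 0);   /* status check under the same lock */
    pthread_mutex_unlock(&queue_lock);
    return empty;
}
```

Because the status check and the modification now serialize on the same lock, the stale-read race reported by Valgrind cannot occur.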
In summary, the race condition we encountered in the Simple OS, caused by unsynchronized queue and timer access, leads to incorrect scheduling, corrupted outputs, and instability. It results in incomplete logs, incorrect memory dumps, and erroneous syscall results, failing the assignment's goals. Mutex-based synchronization and robust error handling would ensure reliable operation and correct outputs.