
Assignment 1 BIS125 Operating Systems

Question 1 a. Differentiate between a process and a thread. [6]

A process is an independent program in execution that has its own memory space. It is the basic
unit of resource allocation and scheduling in an operating system. In contrast, a thread is a
smaller unit of execution within a process. Multiple threads can exist within the same process,
sharing the same memory and resources.

Memory Allocation

Each process has its own memory space, including the code segment, data segment, and stack.
This isolation provides better security and stability. Threads, on the other hand, share the same
memory space of the process, allowing for more efficient communication but potentially leading
to issues such as race conditions.

Overhead

Creating and managing processes involves more overhead due to the need for separate memory
allocation and protection. Threads are lightweight and have less overhead, making them faster
to create and manage.

Communication

Inter-process communication (IPC) mechanisms (like pipes and message queues) are required for
processes to communicate, which can be complex and slower. Threads can communicate more
easily through shared variables since they operate in the same memory space.

Scheduling

The operating system schedules processes based on various criteria, which may involve context
switching. Thread scheduling is generally faster due to lower context-switching costs, as threads
in the same process share resources.

Execution

Each process runs independently. If one process crashes, it does not affect the others. If a thread
encounters an error, it can potentially crash the entire process, affecting all threads within it.
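The memory-sharing difference can be seen in a minimal Python sketch using the standard threading and multiprocessing modules: a thread's update to a global variable is visible to the parent afterwards, while a child process works on its own copy of memory.

import threading
import multiprocessing

counter = 0  # global variable in the parent's address space

def bump():
    """Increment the module-level counter by one."""
    global counter
    counter += 1

if __name__ == "__main__":
    # A thread shares the parent's memory, so its update is visible afterwards.
    t = threading.Thread(target=bump)
    t.start()
    t.join()
    print("after thread:", counter)   # 1 -- same address space

    # A child process gets its own copy of memory, so its update is not visible here.
    p = multiprocessing.Process(target=bump)
    p.start()
    p.join()
    print("after process:", counter)  # still 1 -- separate address space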

b. Discuss each of the important information stored in the PCB. [4]

The Process Control Block (PCB) is a data structure used by the operating system to store all the
information about a process. Key components encompass the following:

Process State

This indicates the current state of the process (e.g., running, waiting, ready, terminated). It helps
the OS manage the process lifecycle.

Process ID (PID)
A unique identifier assigned to each process, enabling the OS to track and manage processes
effectively.

CPU Registers

The PCB stores the values of the CPU registers when the process is not executing. This allows the
OS to resume the process from the same point when it is scheduled again.

Memory Management Information

This includes details about the process's memory allocation, such as page tables, segment tables,
and limits. It helps the OS manage memory resources efficiently.
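A simplified sketch of the PCB as a data structure (the field names below are illustrative, not taken from any real kernel) shows how this information is grouped in one place:

from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    """Illustrative PCB: the fields mirror the categories described above."""
    pid: int                        # unique process identifier
    state: str = "ready"            # e.g. "new", "ready", "running", "waiting", "terminated"
    cpu_registers: dict = field(default_factory=dict)  # saved register values (PC, SP, ...)
    page_table: dict = field(default_factory=dict)     # memory-management information

# The OS would save registers here on a context switch and restore them later.
pcb = ProcessControlBlock(pid=42)
pcb.state = "running"
pcb.cpu_registers = {"PC": 0x4000, "SP": 0x7FFF}
print(pcb)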

Question 2

a) Discuss any four goals of scheduling. [8]

One goal of scheduling is maximizing CPU utilization. The objective is to keep the CPU as busy as
possible to ensure efficient processing of tasks.

Another goal is minimizing turnaround time. This refers to reducing the total time taken from the
submission of a process to its completion, enhancing user satisfaction.

Minimizing waiting time is also crucial. By reducing the time processes spend in the ready queue,
the overall efficiency of the system improves, leading to faster response times for users.

Lastly, ensuring fairness among processes is important. This means that all processes should
receive a reasonable share of the CPU time, preventing starvation and ensuring that no process
is indefinitely delayed.
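These goals can be quantified. The short sketch below uses a made-up workload of (arrival time, burst time) pairs serviced in arrival order and computes the turnaround and waiting times that the second and third goals aim to minimize:

# Hypothetical workload: (arrival time, CPU burst) pairs, serviced in arrival order.
jobs = [(0, 8), (1, 4), (2, 9)]

clock = 0
turnarounds, waits = [], []
for arrival, burst in jobs:
    start = max(clock, arrival)            # CPU may sit idle until the job arrives
    finish = start + burst
    turnarounds.append(finish - arrival)   # turnaround = completion - submission
    waits.append(start - arrival)          # waiting = time spent in the ready queue
    clock = finish

print("average turnaround:", sum(turnarounds) / len(turnarounds))
print("average waiting:", sum(waits) / len(waits))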

b) Briefly explain any three differences between paging and segmentation. [6]

Paging divides the process's memory into fixed-size blocks called pages, while segmentation
divides memory into variable-sized segments based on logical divisions, such as functions or data
structures.

In paging, the entire process is treated uniformly without regard to its logical structure, which
can lead to internal fragmentation. Segmentation, on the other hand, preserves the logical
structure of the program, which can reduce fragmentation but may complicate memory
allocation.

Lastly, paging uses page tables to map virtual pages to physical frames, while segmentation uses segment tables that record a base address and a limit for each segment. The two approaches therefore differ in how address translation and memory bookkeeping are carried out.
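The paging side of this mapping can be sketched in a few lines (the page size and page-table contents below are assumed, not prescribed): a logical address is split into a page number and an offset, and the page table supplies the physical frame.

PAGE_SIZE = 4096                       # assumed fixed page size (4 KB)
page_table = {0: 5, 1: 9, 2: 1}        # hypothetical page -> frame mapping

def translate(logical_address: int) -> int:
    """Translate a logical address to a physical one using the page table."""
    page = logical_address // PAGE_SIZE      # which fixed-size page
    offset = logical_address % PAGE_SIZE     # position within that page
    frame = page_table[page]                 # look up the physical frame
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))   # page 1, offset 0x234 -> frame 9 -> 0x9234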

c) What is the difference between SJF and SRT scheduling algorithms? [6]

Shortest Job First (SJF) scheduling selects the process with the smallest execution time to run
next, aiming to minimize average waiting time. It is a non-preemptive algorithm, meaning once a
process starts executing, it cannot be interrupted until it finishes.
Shortest Remaining Time (SRT) scheduling, on the other hand, is a preemptive version of SJF. It allows a currently running process to be interrupted if a new process arrives with a shorter remaining time. This can lead to better average response times in some scenarios but introduces additional complexity in managing process states.
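The difference can be illustrated with an invented workload. In the sketch below, a preemptive SRT scheduler re-evaluates remaining times every time unit, so a newly arrived short job can interrupt the running one; under non-preemptive SJF the same workload would run A to completion first (A arrives alone at time 0), then C, then B.

# Hypothetical jobs: name -> (arrival time, burst time)
jobs = {"A": (0, 7), "B": (2, 4), "C": (4, 1)}

def srt_schedule(jobs):
    """Preemptive SRT: each time unit, run the arrived job with the least remaining time."""
    remaining = {name: burst for name, (_, burst) in jobs.items()}
    timeline, t = [], 0
    while any(remaining.values()):
        ready = [n for n, (arr, _) in jobs.items() if arr <= t and remaining[n] > 0]
        if not ready:
            t += 1
            continue
        run = min(ready, key=lambda n: remaining[n])   # may preempt the previous choice
        remaining[run] -= 1
        timeline.append(run)
        t += 1
    return timeline

print("".join(srt_schedule(jobs)))  # AABBCBBAAAAA: C preempts B at time 4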

Question 3

a. Differentiate the following memory management terms. [10]

i. External Fragmentation and Internal Fragmentation


External fragmentation occurs when free memory is divided into small, non-contiguous
blocks over time, making it difficult to allocate larger contiguous memory segments even
though the total free memory is sufficient. Internal fragmentation happens when
allocated memory blocks are larger than necessary, leading to wasted space within those
blocks. For example, if a process is allocated 10 KB but only uses 7 KB, the remaining 3 KB is internal fragmentation (made concrete in the sketch after this list).
ii. Contiguous and Non-contiguous Memory Allocation
Contiguous memory allocation requires that each process be allocated a single
contiguous block of memory. This simplifies memory management and access but can
lead to fragmentation issues. Non-contiguous memory allocation allows a process to be
allocated memory in multiple segments scattered throughout physical memory, reducing
fragmentation and allowing better utilization of available memory, but it increases
management complexity.
iii. Fixed Memory Partitioning and Variable/Dynamic Memory Partitioning
Fixed memory partitioning divides memory into a set number of fixed-size partitions at
system startup. Each partition can hold one process, leading to potential internal
fragmentation if processes do not fully utilize their allocated space. Variable or dynamic
memory partitioning allocates memory in variable-sized chunks based on the needs of
processes. This approach can lead to more efficient use of memory and reduced
fragmentation, but it can also result in external fragmentation as memory is allocated
and freed over time.
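A small sketch (the partition size and request sizes are invented) makes the internal-fragmentation arithmetic from part (i) and the fixed-partitioning case from part (iii) concrete:

# Fixed partitioning: every partition has the same size, so any unused
# space inside an allocated partition is internal fragmentation.
PARTITION_SIZE = 10 * 1024                   # assumed 10 KB partitions
requests = [7 * 1024, 3 * 1024, 9 * 1024]    # hypothetical process sizes in bytes

internal_waste = sum(PARTITION_SIZE - r for r in requests)
print("internal fragmentation:", internal_waste, "bytes")   # 3 KB + 7 KB + 1 KB = 11 KB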

Question 4

a) Disk Scheduling Algorithms

Given a disk with cylinders numbered 0 to 199, the following request queue: 95, 180, 34, 119, 11, 123, 62, 64, and the head initially at cylinder 50, we illustrate how the operating system will service these requests under different disk scheduling algorithms.

i. Circular Look [5]

In Circular Look (C-LOOK) scheduling, the head moves in one direction servicing requests until it reaches the last request in that direction, then jumps back to the lowest pending request and continues servicing in the same direction.

Initial Head Position: 50


Servicing Order: 62, 64, 95, 119, 123, 180, 11, 34

62 - 50 = 12
64 - 62 = 2
95 - 64 = 31
119 - 95 = 24
123 - 119 = 4
180 - 123 = 57
180 - 11 = 169 (jump back to the lowest pending request)
34 - 11 = 23

Total head movement = 12 + 2 + 31 + 24 + 4 + 57 + 169 + 23 = 322 cylinders. (If the return jump from 180 to 11 is not counted, the total is 153 cylinders.)
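The arithmetic can be checked with a short Python sketch that builds the C-LOOK servicing order and sums the head movements (the jump from 180 back to 11 is counted as one move):

requests, head = [95, 180, 34, 119, 11, 123, 62, 64], 50

# C-LOOK: service everything at or above the head going up, then jump to the
# lowest pending request and continue upward.
up = sorted(r for r in requests if r >= head)
down = sorted(r for r in requests if r < head)
order = up + down

total = sum(abs(b - a) for a, b in zip([head] + order, order))
print(order, total)   # [62, 64, 95, 119, 123, 180, 11, 34] 322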

ii. Shortest Seek Time First (SSTF) [5]

In Shortest Seek Time First scheduling, the head services the request closest to its current
position.

Initial Head Position: 50

Servicing Order: 62, 64, 34, 11, 95, 119, 123, 180

62 - 50 = 12
64 - 62 = 2
64 - 34 = 30
34 - 11 = 23
95 - 11 = 84
119 - 95 = 24
123 - 119 = 4
180 - 123 = 57

Total head movement = 12 + 2 + 30 + 23 + 84 + 24 + 4 + 57 = 236 cylinders.
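A short sketch that repeatedly picks the pending request closest to the current head position reproduces the same order and total:

requests, head = [95, 180, 34, 119, 11, 123, 62, 64], 50

# SSTF: always service the pending request with the smallest seek distance.
pending, order, total = list(requests), [], 0
while pending:
    nxt = min(pending, key=lambda r: abs(r - head))
    total += abs(nxt - head)
    head = nxt
    order.append(nxt)
    pending.remove(nxt)

print(order, total)   # [62, 64, 34, 11, 95, 119, 123, 180] 236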

iii. Scan [5]

In Scan scheduling, the head moves in one direction servicing requests until it reaches the end,
then reverses direction.

Initial Head Position: 50


Servicing Order: 62, 64, 95, 119, 123, 180, (end of disk at 199), 34, 11

The total movement is obtained by summing the distances between consecutive head positions:

62 - 50 = 12
64 - 62 = 2
95 - 64 = 31
119 - 95 = 24
123 - 119 = 4
180 - 123 = 57
199 - 180 = 19
199 - 34 = 165 (the head reverses direction at the end of the disk)
34 - 11 = 23

Total head movement = 12 + 2 + 31 + 24 + 4 + 57 + 19 + 165 + 23 = 337 cylinders.
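The same check for SCAN, sweeping up to the end of the disk before reversing:

requests, head, DISK_MAX = [95, 180, 34, 119, 11, 123, 62, 64], 50, 199

# SCAN: sweep upward to the end of the disk (cylinder 199), then reverse
# and service the remaining requests on the way back down.
up = sorted(r for r in requests if r >= head)
down = sorted((r for r in requests if r < head), reverse=True)
order = up + [DISK_MAX] + down

total = sum(abs(b - a) for a, b in zip([head] + order, order))
print(order, total)   # [62, 64, 95, 119, 123, 180, 199, 34, 11] 337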

iv. Circular Scan (C-Scan) [5]

In Circular Scan scheduling, the head moves in one direction to service requests and jumps back
to the beginning without servicing any requests on the return trip.

Initial Head Position: 50

Servicing Order: 62, 64, 95, 119, 123, 180, 199, 0, 11, 34

62 - 50 = 12
64 - 62 = 2
95 - 64 = 31
119 - 95 = 24
123 - 119 = 4
180 - 123 = 57
199 - 180 = 19
199 - 0 = 199 (return jump to cylinder 0; no requests are serviced on the way back)
11 - 0 = 11
34 - 11 = 23

Total head movement = 12 + 2 + 31 + 24 + 4 + 57 + 19 + 199 + 11 + 23 = 382 cylinders. (If the return jump from 199 to 0 is not counted, the total is 183 cylinders.)
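And the corresponding check for C-SCAN, where the head returns to cylinder 0 without servicing anything on the way back:

requests, head, DISK_MAX = [95, 180, 34, 119, 11, 123, 62, 64], 50, 199

# C-SCAN: sweep up to the end of the disk, jump back to cylinder 0 without
# servicing anything, then sweep upward again for the remaining requests.
up = sorted(r for r in requests if r >= head)
low = sorted(r for r in requests if r < head)
order = up + [DISK_MAX, 0] + low

total = sum(abs(b - a) for a, b in zip([head] + order, order))
print(order, total)   # [62, 64, 95, 119, 123, 180, 199, 0, 11, 34] 382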
