Sample Final
Part A: [True/False]
1. Direct Memory Access (DMA) bypasses the cache or TLB to read directly from main
memory.
3. Multiple threads within the same process share heap memory but not stack
memory.
4. In UNIX, the exec() system call creates a new child process, and the fork() system
call replaces the current process’s program with a new program and executes it.
7. The following pseudo-code for a process and TestAndSet function, without any other
assumptions, solves the critical section problem between N processes:
8. The Progress requirement for the critical section problem states that: if no process
is currently in the critical section and then one requests to enter the critical section,
this requesting process will be granted access to the critical section in a finite period
of time.
9. Paging breaks up a program’s view of memory into logical blocks such as “main”,
“global variables”, and functions/procedures.
10. Late binding of memory addresses enables the compiler to produce more efficient
code than early binding.
11. Consider an OS that employs the working set model to track the number of recent
page references. To avoid the problem of thrashing, it swaps out a process if the
total demand for frames (the sum of the working-set sizes) exceeds the number of
available frames.
12. External fragmentation is free memory between allocated partitions. Internal
fragmentation is memory that is internal to an allocated partition but is not used.
13. The dynamic storage allocation strategies of worst-fit and best-fit are better than
first-fit in terms of speed and storage utilization.
15. The Banker’s Algorithm is used to avoid deadlocks when considering multiple
instances of each resource type.
16. Consider a system having p processes, where each process needs a maximum of
m instances of resource type R1. Given that there are r instances of resource type
R1 in total, we can ensure that the system is deadlock-free by guaranteeing enough
instances of R1, i.e. ensuring that: r > p(m - 1)
18. The Indexed Allocation scheme increases storage overhead by including an index
table, which is a block containing the addresses of each actual file block.
19. In a filesystem using tree-structured directories, the leaf nodes are directories and
the interior nodes are files.
Consider N processes sharing the CPU in a round-robin fashion (N>=2). Assume that each context switch
takes S ms and that each time quantum is Q ms. For simplicity, assume that processes never block on any
event and simply switch between the CPU and the ready queue. Also, assume that a process is still in the
ready queue while a context switch is happening.
a) What happens if Q is much smaller than S? What happens when Q → ∞, i.e. is much larger than the
maximum turnaround time of all the processes? Be brief (1-2 sentences max) in your answer.
b) If you use RR for scheduling, which of the three performance metrics (waiting, response, turnaround time) is
more likely to be improved? Why (1-2 sentences max)?
Question B.3: CPU Scheduling Algorithms
Consider the following set of processes, given their arrival times and CPU burst times (in ms):

Process  Arrival Time  Burst Time
P1       0             5
P2       0             10
P3       4             15
P4       18            10
P5       22            20
a) Draw the Gantt chart for the Round Robin (time quantum = 5) scheduling algorithm. Use the same
implementation you used for the programming assignment, i.e. the processes should always run in PID order.
Assume there is no context-switch overhead. Show your work for partial credit.
b) Compute the following performance metrics given your above Gantt chart. Show your work
for partial credit.
Recall how we use the following semaphores to enforce an execution order in which P1 runs first, P2 and P3 each run after P1, and P4 runs only after both P2 and P3 have finished:
s1=0; s2=0; s3=0;
P1: body; V(s1); V(s1);
P2: P(s1); body; V(s2);
P3: P(s1); body; V(s3);
P4: P(s2); P(s3); body;
Where the semaphores s1, s2, and s3 are created with an initial value of 0.
Shared Data:
Semaphore wrt initialized to 1
Semaphore mutex initialized to 1
Integer read_count initialized to 0
Writer Process:
do {
_____________
...
/* writing is performed */
...
_____________
} while (true);
Reader Process:
do {
wait(mutex);
read_count++;
if (read_count == 1)
_____________
signal(mutex);
...
/* reading is performed */
...
_____________
read_count--;
if (read_count == 0)
_____________
_____________
} while (true);
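For self-checking, one standard way to fill the blanks is the classic first readers-writers (readers-preference) solution; try completing the skeleton yourself before reading on:

```
Writer Process:
do {
    wait(wrt);
    ...
    /* writing is performed */
    ...
    signal(wrt);
} while (true);

Reader Process:
do {
    wait(mutex);
    read_count++;
    if (read_count == 1)
        wait(wrt);       /* first reader locks out writers */
    signal(mutex);
    ...
    /* reading is performed */
    ...
    wait(mutex);
    read_count--;
    if (read_count == 0)
        signal(wrt);     /* last reader lets writers back in */
    signal(mutex);
} while (true);
```

Here mutex protects read_count, while wrt gives writers exclusive access against both other writers and the first/last reader.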
Question C.3: Deadlocks & safe state
Consider the following snapshot of a system:
     Allocation   Max        Available
     A B C D      A B C D    A B C D
P0   3 0 2 1      4 2 4 2    1 0 0 0
P1   0 1 0 1      0 2 2 2
P2   1 2 0 0      3 2 1 0
P3   0 1 1 2      1 1 1 2
P4   0 0 1 1      1 0 2 1
a) Fill in the Need matrix (Need = Max - Allocation):

Process   Need
          A B C D
P0
P1
P2
P3
P4
b) Is the system in a safe state? If yes, give a safe sequence of processes. If not, explain why the system is
not in a safe state.
c) If a request from process P4 arrives for (1,0,0,0), can the request be granted immediately? Please state the
reason.
Part D: [Virtual Memory Management]
a) What is the effective access time (i.e. reading a word in memory) with no caching and a two-level page
table?
b) Consider the above scenario but with a TLB having a cache hit rate of 98%. If the TLB takes 20 ns to
access, what is the effective access time of this setup when considering this TLB?
Segment   Base   Length   Base after compaction
0         100    300
1         1400   600
2         450    100
3         3200   80
4         2200   500
5         3300   33
1. Given the original base addresses, what are the physical addresses for the following logical addresses? If
it’s an invalid address, just write “invalid”. Note that (X, Y) => segment X, offset Y
a) (0, 350)
b) (1, 599)
c) (2, 50)
d) (3, 81)
e) (4, 300)
f) (5, 0)
g) (5, 34)
2. If the above segments are the only segments in the system, i.e. there is only 1 process, and all segments
are full, fill the right-most column of the base address table above after completing a disk compaction.
Question D.4: FIFO Page Replacement
Consider the following page reference string:
e, c, b, e, a, g, d, c, e, g, d, a
Considering 4 frames, fill in the following table and then answer how many page faults would occur with the
FIFO page replacement algorithm.
RS: reference string; F0: frame 0, F1: frame 1, etc.
Hint: all frames are initially empty, so your first unique pages will all cost one fault each.
Time 1 2 3 4 5 6 7 8 9 10 11 12
RS e c b e a g d c e g d a
F0
F1
F2
F3
Page fault?
c) Briefly (1-2 sentences) explain Belady’s Anomaly that can occur in FIFO Page Replacement.
Part E: [Filesystems]
Consider a file currently consisting of 10 blocks. Assume that the file control block and the new block
information to be added are already in memory. Calculate how many disk I/O operations are required for the
linked allocation strategy, if, for one block, the following conditions hold:
HINTS: 1) ignore disk I/O associated with the file control block. 2) each read and each write is an explicit disk
I/O. 3) assume a pointer to the end of the list for linked allocation
Consider a two-level indexed allocation scheme that uses only a single top-level index table. If each block is B
bytes and each index entry is P bytes, what is the maximum file length in bytes?