Operating Systems
Threads 2
Threading Issues
- fork() & exec() System Calls
- Thread Cancellation
- Signal Handling
- Thread Pools
- Thread-Specific Data
- Scheduler Activations
fork() & exec() System Calls
- fork() creates a separate, duplicate process
- What happens when fork() is invoked by a thread?
- Some UNIX systems provide two versions of fork():
  - one that duplicates all threads
  - one that duplicates only the calling thread
- The exec() system call works the same way as before: the program specified in its parameters replaces the entire process, all threads included
- Which version of fork() is appropriate is decided by the calling process:
  - if exec() is going to be called soon after forking, duplicating only the calling thread is sufficient (see the sketch below)
  - otherwise, duplicate all threads
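A minimal POSIX sketch of the fork-then-exec pattern from a thread (the program /bin/ls is just an illustrative choice); because exec() immediately replaces the whole process image, duplicating only the calling thread in the child is sufficient here:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    /* Worker thread: forks a child that immediately calls exec(). */
    static void *spawn_ls(void *arg)
    {
        (void)arg;
        pid_t pid = fork();
        if (pid == 0) {
            /* Child: replace the process image with /bin/ls. */
            execl("/bin/ls", "ls", "-l", (char *)NULL);
            perror("execl");          /* reached only if exec() fails */
            _exit(EXIT_FAILURE);
        } else if (pid > 0) {
            waitpid(pid, NULL, 0);    /* parent thread waits for the child */
        } else {
            perror("fork");
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, spawn_ls, NULL);
        pthread_join(tid, NULL);
        return 0;
    }

(Compile with -pthread. On POSIX systems fork() creates a child containing only a copy of the calling thread, which matches the fork-then-exec case above.)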
Thread Cancellation
Thread cancellation is the task of terminating a thread before it has completed
- Examples:
  - multiple threads concurrently search a database; one thread finds the result, so the others are canceled
  - the user presses the stop button while a web page is still loading (the loading is done by multiple threads)
- The thread that is to be canceled is called the target thread
Cancellation of the target thread may occur in two different scenarios:
- Asynchronous cancellation: one thread immediately terminates the target thread
- Deferred cancellation: the target thread periodically checks whether it should terminate, which allows graceful termination (sketched below)
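A small Pthreads sketch of deferred cancellation (the default cancelability type): the target thread decides where it is safe to terminate by calling pthread_testcancel() at well-defined points:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Deferred cancellation: a pending cancellation request is honored
     * only at cancellation points such as pthread_testcancel(). */
    static void *worker(void *arg)
    {
        (void)arg;
        int oldtype;
        pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, &oldtype);
        for (;;) {
            /* ... perform one unit of work ... */
            pthread_testcancel();   /* safe point to terminate if cancellation is pending */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);
        sleep(1);
        pthread_cancel(tid);        /* request cancellation of the target thread */
        pthread_join(tid, NULL);    /* wait until the target actually terminates */
        puts("target thread canceled");
        return 0;
    }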
Signal Handling
A signal is used to notify a process that an event has occurred (UNIX terminology)
A signal may be synchronous or asynchronous; either way, the same pattern is followed:
- a signal is generated by the occurrence of a particular event
- the generated signal is delivered to a process
- once delivered, the signal must be handled by the process
Synchronous Signals
- Generated by the actions of a process (e.g., division by 0, illegal memory access)
- Delivered to the same process that caused the signal to be generated
Asynchronous Signals
- Generated by an event external to the receiving process (e.g., terminating a process with Ctrl+C, expiry of a timer)
- Delivered to a process other than the one that caused the signal to be generated
Signal Handlers
Every signal has to be handled by one of two possible handlers:
- Default signal handler
  - defined by the OS for each signal
- User-defined signal handler (see the sketch below)
  - overrides the OS-defined default handler
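A minimal sketch of installing a user-defined handler with the POSIX sigaction() call; SIGINT (Ctrl+C) is chosen only as an example, and the handler overrides the default action of terminating the process:

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    /* User-defined handler for SIGINT; only async-signal-safe calls
     * such as write() should be used inside a handler. */
    static void on_sigint(int signo)
    {
        (void)signo;
        const char msg[] = "caught SIGINT\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);
    }

    int main(void)
    {
        struct sigaction sa;
        sa.sa_handler = on_sigint;        /* user-defined signal handler */
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGINT, &sa, NULL);     /* replaces the default handler */

        for (;;)
            pause();                      /* wait for signals */
    }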
Signal handling in single-threaded processes is straightforward; in multi-threaded processes it is more complicated: where should the signal be delivered?
- Deliver the signal to the thread to which the signal applies
- Deliver the signal to every thread in the process
- Deliver the signal to certain threads in the process
- Assign a specific thread to receive all signals for the process
Which delivery method is used depends on the type of signal generated:
- synchronous signals are delivered to the thread that caused them
- the choice is less clear-cut for asynchronous signals (one common approach is sketched below)
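One common resolution for asynchronous signals is the last option listed above: block the signal in every thread and assign one thread to receive it. A Pthreads sketch of that pattern (SIGTERM is only an example):

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>

    /* Dedicated signal-handling thread: all threads block SIGTERM, so the
     * signal is consumed only by the thread waiting in sigwait(). */
    static void *signal_thread(void *arg)
    {
        sigset_t *set = arg;
        int signo;
        for (;;) {
            sigwait(set, &signo);   /* wait synchronously for a blocked signal */
            printf("signal %d handled by dedicated thread\n", signo);
        }
        return NULL;
    }

    int main(void)
    {
        sigset_t set;
        pthread_t tid;

        sigemptyset(&set);
        sigaddset(&set, SIGTERM);
        /* Block SIGTERM in the main thread; threads created afterwards
         * inherit this signal mask. */
        pthread_sigmask(SIG_BLOCK, &set, NULL);

        pthread_create(&tid, NULL, signal_thread, &set);
        /* ... other worker threads do their normal work here ... */
        pthread_join(tid, NULL);
        return 0;
    }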
Handling Asynchronous Signals
- UNIX lets individual threads specify which signals they will accept and which they will block
- An asynchronous signal may therefore be delivered only to the accepting threads (or to the first accepting thread found)
- Standard UNIX function for delivering a signal to a process: kill(pid_t pid, int signal)
- POSIX Pthreads function for delivering a signal to a particular thread: pthread_kill(pthread_t tid, int signal) (see the sketch below)
- Windows does not have signals; they can be emulated using Asynchronous Procedure Calls (APCs)
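A small Pthreads sketch of directing a signal at one particular thread with pthread_kill(); SIGUSR1 is only an example, and the sleep() is a crude way to make sure the handler is installed before the signal is sent:

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void on_sigusr1(int signo)
    {
        (void)signo;
        const char msg[] = "SIGUSR1 delivered to the worker thread\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);
    }

    static void *worker(void *arg)
    {
        (void)arg;
        struct sigaction sa;
        sa.sa_handler = on_sigusr1;       /* install a handler for SIGUSR1 */
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGUSR1, &sa, NULL);

        pause();                          /* wait until a signal arrives */
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);
        sleep(1);
        pthread_kill(tid, SIGUSR1);       /* deliver the signal to that thread only */
        pthread_join(tid, NULL);
        return 0;
    }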
Asynchronous Procedure Calls
- A user thread can specify a function to be called when it receives notification of a particular event (sketched below)
- More straightforward than signals: an APC is always delivered to a particular thread
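A minimal Win32 sketch of the idea, assuming the standard QueueUserAPC()/SleepEx() APIs; the data value 42 is arbitrary. The queued routine runs in the context of the chosen thread the next time that thread enters an alertable wait:

    #include <windows.h>
    #include <stdio.h>

    /* APC routine: executed by the target thread itself once it becomes alertable. */
    static VOID CALLBACK on_event(ULONG_PTR param)
    {
        printf("APC delivered with data %llu\n", (unsigned long long)param);
    }

    static DWORD WINAPI worker(LPVOID arg)
    {
        (void)arg;
        /* Alertable wait: returns WAIT_IO_COMPLETION after an APC has run. */
        SleepEx(INFINITE, TRUE);
        return 0;
    }

    int main(void)
    {
        HANDLE thread = CreateThread(NULL, 0, worker, NULL, 0, NULL);
        QueueUserAPC(on_event, thread, 42);   /* queue the callback to that specific thread */
        WaitForSingleObject(thread, INFINITE);
        CloseHandle(thread);
        return 0;
    }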
Thread Pools
- Creating threads may be lighter-weight than creating processes, but an unlimited number of threads could still exhaust system resources
- Solution: create a number of threads in advance and put them into a pool, where they sit and wait for work
- When work becomes available, awaken a thread from the pool and let it run the work (see the sketch below)
  - if no thread is available, the work waits until one becomes available
- When a thread completes its work, it returns to the pool
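A compact Pthreads sketch of the idea: a fixed set of worker threads sleeping on a condition variable and a bounded work queue. The pool size, the queue capacity, and the lack of a shutdown path are simplifications:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define POOL_SIZE  4
    #define QUEUE_CAP 16

    typedef void (*task_fn)(void *);
    struct task { task_fn fn; void *arg; };

    /* Circular work queue shared by all workers in the pool. */
    static struct task queue[QUEUE_CAP];
    static int q_head, q_tail, q_len;
    static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  q_nonempty = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  q_nonfull  = PTHREAD_COND_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&q_lock);
            while (q_len == 0)                       /* sit and wait for work */
                pthread_cond_wait(&q_nonempty, &q_lock);
            struct task t = queue[q_head];
            q_head = (q_head + 1) % QUEUE_CAP;
            q_len--;
            pthread_cond_signal(&q_nonfull);
            pthread_mutex_unlock(&q_lock);

            t.fn(t.arg);                             /* run the work, then return to the pool */
        }
        return NULL;
    }

    /* Submit work; blocks while the queue is full. */
    static void pool_submit(task_fn fn, void *arg)
    {
        pthread_mutex_lock(&q_lock);
        while (q_len == QUEUE_CAP)
            pthread_cond_wait(&q_nonfull, &q_lock);
        queue[q_tail] = (struct task){ fn, arg };
        q_tail = (q_tail + 1) % QUEUE_CAP;
        q_len++;
        pthread_cond_signal(&q_nonempty);            /* awaken a waiting worker */
        pthread_mutex_unlock(&q_lock);
    }

    static void print_task(void *arg)
    {
        printf("task %ld handled by thread %lu\n",
               (long)arg, (unsigned long)pthread_self());
    }

    int main(void)
    {
        pthread_t tids[POOL_SIZE];
        for (int i = 0; i < POOL_SIZE; i++)          /* create the pool up front */
            pthread_create(&tids[i], NULL, worker, NULL);

        for (long i = 0; i < 10; i++)
            pool_submit(print_task, (void *)i);

        sleep(1);                                    /* crude: let the tasks drain before exiting */
        return 0;
    }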
Benefits of thread pools:
- Assigning work to an existing, waiting thread is faster than creating a new thread
- The pool limits the total number of threads, which helps avoid exhausting system resources
- The number of threads in the pool can be set heuristically based on factors such as the number of CPUs, the amount of physical memory, and the expected load
Thread-Specific Data
- Most data is shared between the threads of a process; this sharing is one source of threads' efficiency
- In some circumstances, however, each thread needs its own copy of certain data, called thread-specific data (see the sketch below)
- Most thread libraries provide support for thread-specific data
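A minimal Pthreads sketch using pthread_key_create()/pthread_setspecific(); each thread stores and reads its own private value under the same key (C11 _Thread_local storage would be an alternative):

    #include <pthread.h>
    #include <stdio.h>

    /* One key, but each thread gets its own private value under it. */
    static pthread_key_t tsd_key;

    static void *worker(void *arg)
    {
        /* Store a per-thread copy: other threads cannot see this value. */
        pthread_setspecific(tsd_key, arg);

        long *my_value = pthread_getspecific(tsd_key);
        printf("thread %lu sees value %ld\n",
               (unsigned long)pthread_self(), *my_value);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        long a = 1, b = 2;

        pthread_key_create(&tsd_key, NULL);   /* NULL: no destructor needed here */
        pthread_create(&t1, NULL, worker, &a);
        pthread_create(&t2, NULL, worker, &b);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        pthread_key_delete(tsd_key);
        return 0;
    }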
Scheduler Activations
- Communication is required between the kernel and the thread library
- This allows the number of kernel threads to be dynamically adjusted for good performance
- Implemented by placing an intermediate data structure between the kernel and user threads, called a lightweight process (LWP)
- The LWP acts like a virtual processor to the user-thread library
- The kernel provides the application with a set of virtual processors (LWPs)
- The application can schedule user threads onto any available virtual processor
- The kernel must inform the application about certain events; this mechanism is called an upcall
- Upcalls are handled by an upcall handler in the thread library, and upcall handlers must also run on a virtual processor
One event that triggers an upcall is when an application thread is about to block:
- the kernel makes an upcall to the application
- the kernel allocates a new virtual processor to the application
- the application runs an upcall handler on this new LWP
  - the handler saves the state of the blocking thread and relinquishes the LWP on which that thread was running
- the upcall handler then schedules another eligible thread to run on the new LWP
- the kernel makes another upcall when the blocking thread becomes eligible to run again
Reading Exercise
Read the section on Operating-System Examples (Section 4.5 in the 7th edition of the textbook)