Multitasking/Time-sharing Operating Systems

Time sharing is a technique that enables many people, located at various terminals, to use a particular computer system
at the same time. Time-sharing, or multitasking, is a logical extension of multiprogramming: the processor's time is
shared among multiple users simultaneously, which is why the approach is termed time-sharing. The main difference between
multiprogramming batch systems and time-sharing systems is that the objective of a multiprogramming batch system is to
maximize processor use, whereas the objective of a time-sharing system is to minimize response time.
Multiple jobs are executed by the CPU by switching between them, and the switches occur so frequently that each user
receives an immediate response. For example, in transaction processing the processor executes each user program in a
short burst, or quantum, of computation. That is, if n users are present, each user gets the CPU for one quantum in turn.
When a user submits a command, the response time is a few seconds at most.
The operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of the
processor's time. Computer systems that were designed primarily as batch systems have been modified into time-sharing systems.
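
To make the quantum idea concrete, here is a minimal round-robin sketch in Python; the job names, run lengths, and quantum size below are invented for illustration, not taken from any particular system:

from collections import deque

QUANTUM = 2  # time-slice length, in arbitrary time units

def round_robin(jobs):
    # jobs is a list of (name, remaining_time) pairs
    queue = deque(jobs)
    clock = 0
    while queue:
        name, remaining = queue.popleft()
        ran = min(QUANTUM, remaining)    # run for at most one quantum
        clock += ran
        print(f"t={clock:2d}: ran {name} for {ran} unit(s)")
        if remaining > ran:
            queue.append((name, remaining - ran))  # back of the queue

round_robin([("user1", 5), ("user2", 3), ("user3", 4)])

With n jobs and quantum q, each job waits at most (n - 1) * q time units before it runs again, which is what gives each user the impression of having the machine to themselves.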
Advantages of time-sharing operating systems are as follows:

 Provides the advantage of quick response.
 Avoids duplication of software.
 Reduces CPU idle time.
Disadvantages of time-sharing operating systems are as follows:

 Problems of reliability.
 Questions of security and integrity of user programs and data.
 Problems of data communication.

Distributed Operating System


Distributed systems use multiple central processors to serve multiple real-time applications and multiple users. Data
processing jobs are distributed among the processors according to which one can perform each job most efficiently.
The processors communicate with one another through various communication lines (such as high-speed buses or
telephone lines). These are referred to as loosely coupled systems or distributed systems. Processors in a distributed system
may vary in size and function, and are referred to as sites, nodes, computers, and so on.
The advantages of distributed systems are as follows:

 With resource sharing, a user at one site may be able to use the resources available at another.
 Sites can speed up the exchange of data with one another, for example via electronic mail.
 If one site fails in a distributed system, the remaining sites can potentially continue operating.
 Better service to the customers.
 Reduction of the load on the host computer.
 Reduction of delays in data processing.
A distributed operating system is a model in which distributed applications run on multiple computers linked by a
communication network. A distributed operating system is an extension of the network operating system that supports higher
levels of communication and integration of the machines on the network.
Such a system looks to its users like an ordinary centralized operating system but runs on multiple, independent central
processing units (CPUs).
These systems are referred to as loosely coupled systems, where each processor has its own local memory and processors
communicate with one another through various communication lines, such as high-speed buses or telephone lines. By
loosely coupled we mean that such computers possess no hardware connection at the CPU-memory bus level,
but are connected by external interfaces that run under the control of software.
A distributed OS involves a collection of autonomous computer systems, capable of communicating and cooperating
with each other through a LAN or WAN. A distributed OS provides a virtual machine abstraction to its users and wide
sharing of resources such as computational capacity, I/O, and files.
The structure contains a set of individual computer systems and workstations connected via communication systems, but
this structure alone does not make a distributed system: it is the software, not the hardware, that determines
whether a system is distributed or not.
The users of a true distributed system should not need to know on which machine their programs are running or where their
files are stored. LOCUS and MICROS are well-known examples of distributed operating systems.
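
As a rough sketch of the message passing such systems rely on, the following Python fragment sends a request from one "node" to another over a TCP socket; the host, port, and message are made-up values for illustration:

import socket
import threading

HOST, PORT = "127.0.0.1", 50007  # illustrative address, not from the text

# Bind the listening socket first so the client cannot connect too early.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, PORT))
srv.listen(1)

def serve_one():
    conn, _ = srv.accept()
    with conn:
        data = conn.recv(1024)         # request from the remote node
        conn.sendall(b"ack: " + data)  # reply over the same connection

t = threading.Thread(target=serve_one)
t.start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect((HOST, PORT))
    client.sendall(b"job #42")
    print(client.recv(1024))           # prints b'ack: job #42'

t.join()
srv.close()

In a real distributed OS the same exchange would cross machine boundaries, and the operating system, not the application, would decide which node services the request.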
Distributed systems provide the following advantages:

 Sharing of resources.
 Reliability.
 Communication.
 Computation speedup.

Distributed systems are potentially more reliable than a central system because if a system has only one instance of some
critical component, such as a CPU, disk, or network interface, and that component fails, the system will go down. When
there are multiple instances, the system may be able to continue in spite of occasional failures. In addition to hardware
failures, one can also consider software failures. Distributed systems allow both hardware and software errors to be dealt
with.
A distributed system is managed by a distributed operating system. A distributed operating system manages the system's
shared resources used by multiple processes, the process scheduling activity (how processes are allocated to available
processors), the communication and synchronization between running processes, and so on. The software for parallel
computers can also be tightly coupled or loosely coupled. Loosely coupled software allows the computers and users of a
distributed system to be independent of each other while retaining a limited ability to cooperate. An example of such a
system is a group of computers connected through a local network: every computer has its own memory and hard disk, and there
are some shared resources such as files and printers. If the interconnection network breaks down, individual computers can
still be used, but without some features, such as printing to a non-local printer.

Multiprogramming Operating System


To overcome the problem of underutilization of the CPU and main memory, multiprogramming was introduced.
Multiprogramming is the interleaved execution of multiple jobs by the same computer.
In a multiprogramming system, when one program is waiting for an I/O transfer, another program is ready to utilize the
CPU, so it is possible for several jobs to share the CPU's time. It is important to note that multiprogramming does not
mean the execution of jobs at the same instant of time. Rather, it means that a number of jobs are available to the CPU
(placed in main memory), and a portion of one is executed, then a segment of another, and so on.
A program in execution is called a "process", "job", or "task". The concurrent execution of programs improves the
utilization of system resources and enhances system throughput as compared to batch and serial processing. In this
system, when a process requests I/O, the CPU is meanwhile assigned to another ready process, so when a process switches
to an I/O operation, the CPU is not left idle.
Multiprogramming is a common approach to resource management. The essential components of a single-user operating
system include a command processor, an input/output control system, a file system, and a transient area. A
multiprogramming operating system builds on this base, subdividing the transient area to hold several independent
programs and adding resource management routines to the operating system's basic functions.
A multiprogramming operating system is one that allows end-users to run more than one program at a time. The
development of such a system, the first type to allow this functionality, was a major step in the development of
sophisticated computers. The technology works by allowing the central processing unit (CPU) of a computer to switch
between two or more running tasks when the CPU is idle.
A multiprogramming operating system works by analyzing the current CPU activity in the computer. When the CPU is idle,
that is, when it is between tasks, it has the opportunity to use that downtime to run tasks for another program. In this way,
the functions of several programs may be executed sequentially. For example, when the CPU is waiting for the end-user to
enter numbers to be calculated, instead of being entirely idle it may load the components of a web page the user is
accessing.
The main benefit of this functionality is that it reduces wasted time in the system's operations. As in a business,
efficiency is the key to getting the most out of the enterprise. This type of operating system eliminates waste in the
system by ensuring that the computer's CPU runs at maximum capacity more of the time. The result is a smoother computing
experience from the end-user's point of view, since program commands are constantly being executed in the background,
helping to speed the execution of programs.
In a multiprogramming system there are one or more programs loaded in main memory which are ready to execute. Only
one program at a time is able to get the CPU for executing its instructions (i.e., there is at most one process running on the
system) while all the others are waiting their turn.
The main idea of multiprogramming is to maximize the use of CPU time. Suppose the currently running process is
performing an I/O task (which, by definition, does not need the CPU to be accomplished). The OS may then interrupt that
process and give control to one of the other in-memory programs that are ready to execute (a process context switch).
In this way, no CPU time is wasted waiting for the I/O task to complete, and a running process keeps executing until it
either voluntarily releases the CPU or blocks for an I/O operation. The ultimate goal of multiprogramming is therefore
to keep the CPU busy as long as there are processes ready to execute.
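
The following toy Python sketch shows this overlap: one "job" blocks on simulated I/O (time.sleep stands in for the transfer) while the CPU is used by another. Both job names and durations are invented:

import threading
import time

def io_bound_job():
    print("job A: starts I/O and releases the CPU")
    time.sleep(0.5)                    # simulated I/O transfer
    print("job A: I/O complete")

def cpu_bound_job():
    total = sum(i * i for i in range(1_000_000))  # keeps the CPU busy
    print(f"job B: computed {total}")

a = threading.Thread(target=io_bound_job)
a.start()
cpu_bound_job()   # runs while job A is blocked on "I/O"
a.join()

Without the overlap, the half-second of "I/O" would be pure idle time; with it, job B's computation fills the gap.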

Parallel Processing Systems


Parallel processing systems are designed to speed up the execution of programs by dividing a program into multiple
fragments and processing these fragments simultaneously. Such systems are multiprocessor systems, also known as tightly
coupled systems. Parallel systems deal with the simultaneous use of multiple computer resources, which can include a single
computer with multiple processors, a number of computers connected by a network to form a parallel processing cluster, or
a combination of both.
Parallel computing is an evolution of serial computing in which a job is broken into discrete parts that can be executed
concurrently. Each part is further broken down into a series of instructions, and instructions from each part execute
simultaneously on different CPUs.
Parallel systems are more difficult to program than computers with a single processor because the architectures of parallel
computers vary widely and the processes on multiple CPUs must be coordinated and synchronized. Several models
for connecting processors and memory modules exist, and each topology requires a different programming model. The
three models most commonly used in building parallel computers are synchronous processors each with its
own memory, asynchronous processors each with its own memory, and asynchronous processors with a common, shared
memory. Flynn classified computer systems based on the parallelism in their instruction and data streams (SISD, SIMD,
MISD, MIMD). Multiprocessor systems, in turn, are of two types:

 Asymmetric Multiprocessing
 Symmetric Multiprocessing
Parallel operating systems are primarily concerned with managing the resources of parallel machines. A parallel computer
is a set of processors that are able to work cooperatively to solve a computational problem, so a parallel computer may be
a supercomputer with hundreds or thousands of processors or a network of workstations.
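
As a small illustration of dividing a program into fragments processed simultaneously, the sketch below splits a sum of squares across four worker processes using Python's multiprocessing module; the fragment count and work function are arbitrary choices:

from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))  # one fragment of the job

if __name__ == "__main__":
    n, parts = 10_000_000, 4
    step = n // parts
    # split [0, n) into `parts` equal fragments
    fragments = [(i * step, (i + 1) * step) for i in range(parts)]
    with Pool(processes=parts) as pool:
        # each fragment's instructions execute on a separate CPU
        total = sum(pool.map(partial_sum, fragments))
    print(total)

On a machine with four or more cores, the fragments genuinely run at the same time, which is the simultaneous use of multiple computer resources described above.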
