The UNIX Operating System
Perhaps the key to the continuing growth of the UNIX system is the free-market demands placed upon
suppliers who produce and support software built to open standards. The "open systems" approach is in
bold contrast to other operating environments that lock in their customers with resultant high switching
costs. UNIX system suppliers, on the other hand, must constantly provide the highest quality systems in
order to retain their customers. Those who become dissatisfied with one UNIX system implementation
retain the ability to easily move to another UNIX system implementation.
The continuing success of the UNIX system should come as no surprise. No other operating environment
enjoys the support of every major system supplier. Mention the UNIX system and IT professionals
immediately think not only of the operating system itself, but also of the large family of application
software that the UNIX system supports. In the IT marketplace, the UNIX system has been the catalyst for
sweeping changes that have empowered consumers to seek the best-of-breed without the arbitrary
constraints imposed by proprietary environments.
In a nutshell then, the UNIX system is the users' and suppliers' operating environment of choice. The
UNIX system represents the best collective efforts of competing suppliers, the most refined standards in
the public domain, and the rock-solid stability that comes from years of quality assurance testing, mission-
critical use, and refinement.
This white paper examines the UNIX system with a special concern for both its extraordinary past and its
equally extraordinary prospects for the future.
In the UNIX system's early days, security was virtually nonexistent, and the UNIX system became the
first operating system to suffer attacks mounted over the nascent Internet. As the UNIX system matured,
however, its security architecture shifted from centralized to distributed authentication and authorization
systems.
Now a single Graphical User Interface, shipped and supported by all major vendors, has replaced
command-line syntax, and security systems, up to and including B1, provide appropriate controls over
access to the UNIX system.
The Value of Standards
The UNIX system's increasing popularity spawned the development of a number of variations of the UNIX
operating system in the 1980s, and the existence of these caused a mid-life crisis. Standardization had
progressed slowly and methodically in domains such as telecommunications and third-generation
languages; yet no one had addressed standards at the operating system level. For suppliers, the thought
of a uniform operating environment was disconcerting. Consumer lock-in was woven tightly into the fabric
of the industry. Individual consumers, particularly those with UNIX system experience, envisioned
standardized environments, but had no way to pull the market in their direction.
However, for one category of consumer, governments, the standardization of the UNIX system was both
desirable and within reach. Governments have clout and are the largest consumers of information
technology products and services in the world. Driven by the need to improve commonality, both US and
European governments endorsed a shift to the UNIX system. The Institute of Electrical and Electronics
Engineers' POSIX family of standards, along with standards from ISO, ANSI, and others, led the way.
Consortia such as the X/Open Company (merged with the Open Software Foundation in 1995 to form The
Open Group) hammered out draft standards to accelerate the process.
In 1994, the definitive specification of what constitutes a UNIX system was finalized through X/Open
Company's consensus process. The Single UNIX Specification was born not from a theoretical, ivory-tower
approach, but from analysis of the applications in use in businesses across the world.
With the active support of government and commercial buyers alike, vendors began to converge on
products that implement the Single UNIX Specification, and now all major vendors have products labeled
UNIX 95, which indicates that the vendor guarantees that the product conforms to the Single UNIX
Specification.
Vendors continue to add value to the UNIX system, particularly in areas of new technology; however, that
value will always be built upon a single, consensus standard. Meanwhile, the functionality of the UNIX
system was established and the mid-life crisis was resolved: suppliers today provide UNIX systems built
upon that single, consensus standard.
It is also important to remember that even when variance among UNIX systems was at its worst, IT
professionals agreed that migration among UNIX system variants was far easier than migration among
the proprietary alternatives.
Now with UNIX 95 branded products available from all major systems vendors, the buyer can for the first
time buy systems from different manufacturers, safe in the knowledge that each one is guaranteed to
implement the complete functionality of the Single UNIX Specification and will continue to do so.
UNIX system suppliers can assure customers that they own a standards-based system by registering
their products to use the Open Brand. Below is a list of suppliers who give users this guarantee.
IBM: IBM POWER, POWER2, and PowerPC™ systems with IBM AIX® Version 4.2 or later
IBM: OS/390 Version 1 Release 2 or later, with the OS/390 V1R2 or later Security Server and the OS/390 V1R2 or later C/C++ Compiler, on IBM System/390 processors that support OS/390 Version 1 Release 2
NCR: NCR UNIX System V Release 4 MP-RAS Release 3.02 or later on NCR WorldMark Series and System 3000 Series
NEC: UX/4800 R12.3 and later on UP4800 and EWS4800 Series
SCO: SCO UnixWare® Family R2.1.1 and later for single and multiprocessor Intel™ 386/486 or Pentium® PCs conforming to PC/AT architectures
What do buyers expect from an Open Systems strategy based on the UNIX system? In 1996 EvansGroup
Technology carried out research among computer system buyers in the United States and Europe.
When asked about the benefits of open systems, the key issues of compatibility, flexibility, and cost
emerged.
Respondents ranked the various benefits of Open Systems; flexibility, for example, was cited by 70% of
respondents.
The UNIX system today is more robust, reliable, and scalable than Windows NT. Analysts say this
observation, widely reported from many different viewpoints, makes practical sense: engineers at
Microsoft are retracing steps that the UNIX system has already completed. How could it be otherwise?
In sharp contrast to the open standards that define the UNIX system, Windows NT technology remains
fiercely proprietary. Microsoft remains ambivalent toward the world of standards. Choosing NT entangles
customers with nonstandard utilities, directories, and software tools that do not conform to any de jure or
consensus standards.
Particularly when high performance is at issue, hardware suppliers suggest the UNIX system, rather than
Windows NT. The primary appeal of NT is for low-end, office-centered, departmental applications.
Unit shipment growth rates for Windows NT exceed the rates for the UNIX system, which is to be
expected for a new product. However, revenue growth in UNIX system sales is much higher than for NT. It
is reasonable to expect Windows NT to take a share in the operating systems market, along with other
more specialized operating systems. There is no evidence today to indicate that NT will be dominant; in
fact, most IT professionals predict that it will not.
Windows NT Server 4.0 is still not a full-function server operating system. While it does support multi-user
computing via third-party add-on tools, it lacks certain fundamental features that the UNIX system is
known for providing, such as directory services for managing user access and peripherals over a
distributed enterprise network.
The presence of the UNIX system in the marketplace has been good for Windows NT. The UNIX system
established the market for cross-platform client and server operating environments that NT seeks to
address. In turn, NT will improve the market for UNIX systems in the future. That is, competition among
UNIX system providers will be enhanced by competition with NT. The choice between open and
proprietary products will be quite crisp.
The market's pull for the UNIX system was amplified by other events as well. The availability of relational
database management systems, the shift to the client/server architecture, and the introduction of low-cost
UNIX system servers together set the stage for business applications to flourish. For client/server
systems, the networking strengths of the UNIX system shine. Standardized relational database engines
delivered on low-cost high-performance UNIX system servers offered substantial cost savings over
proprietary alternatives.
UNIX system suppliers also have a proud tradition to uphold, both of integration with legacy systems and
of innovation. No other system can ensure that disparate systems, usually proprietary systems, can be
integrated, allowing the buyer's investment in data and information to be realized with minimal
disruption and reinvestment.
There is every reason to believe that the UNIX system will continue to be the platform of choice for
innovative development. In the near term, for example, UNIX system vendors will define the scope of
Java and provide the distributed computing environment into which the Network Computer terminal will fit,
enabling it to thrive and grow.
How will Java and the Network Computer terminal manifest themselves? The exact answer is unknown;
however, in open computing, the process for finding that answer is well understood. The UNIX system
community has set aside (via consensus standards) the wasteful task of developing arbitrary differences
among computing environments. Rather than building proprietary traps, this community is actively
seeking ways to add value to the UNIX system with improved scalability, reliability, price/performance,
and customer service.
Java and the Network Computer terminal offer several potential advantages for consumers. One key
advantage is a smaller, lighter, standards-based client. A second advantage is a specification that is not
controlled by one company, but is developed to the benefit of all by an open, consensus process. A third is
greater code reuse and a component software market based on object technology such as CORBA and
Java. All of these options and more are being deployed first by members of the UNIX system community.
Scalability is here today, enabling applications to run on systems ranging from small-scale machines
through to the largest servers required. The UNIX system is available on hardware ranging from low-cost
PC-class servers through parallel architectures that harness together 60 or more processors. This range is
wider, and the choice of hardware more cost-effective, than for any other system. The UNIX system is the
only option for Massively Parallel Processing (MPP).
A robust operating system is tough enough to perform successfully under a variety of different operating
conditions. By virtue of its worldwide deployment by an international community of system vendors, the
UNIX system has earned the reputation for robustness.
Uniform operating system services are at the heart of the standardized UNIX system. Many enterprise
systems are assembled with hardware from several different sources. Atop these different hardware
platforms, the UNIX operating system provides a uniform platform for database management systems
and application software.
The market for the UNIX system continues to expand. IDC estimates the market at US$ 39 billion in 1996
and forecasts the market to be US$ 50 billion in the year 2000. In addition, the installed base of the UNIX
system has an estimated value of US$ 122 billion. These market estimates lead to several conclusions
about the UNIX system, as follows:
An annual market of US$ 39 billion is large enough to remain attractive to many suppliers and to provide
sufficient revenue to fund continuing high levels of investment in support and product enhancement.
The UNIX system's growth rates, which appear modest in comparison to the unit shipment growth of
newer products, are anchored by an enormous installed base. High unit shipment growth rates are typical
of new entries in a marketplace.
In key benchmarks and mission-critical applications, the UNIX system consistently performs better.
The UNIX system is the dominant software platform for Relational Database Management Systems.
Investment in developing and enhancing UNIX system products is significantly larger than in any other
operating environment.
The most significant consequence of the Single UNIX Specification initiative is that it shifts the focus of
attention away from incompatible UNIX system product implementations on to compliance with a single,
agreed-upon set of APIs. If an operating system meets the specification, and commonly available
applications can run on it, then it can reliably be viewed as open.
So, the future looks as though it will be about a set of sturdy and dependable specifications standing as a
firm foundation upon which many competing product implementations will be built.
By developing a single specification for the UNIX system, The Open Group and the computer industry
have completed the foundation of open systems.
The next version of the Single UNIX Specification, known as Version 2, was announced in March 1997.
Products guaranteed to conform to this specification will carry the label UNIX 98. The major
enhancements in Version 2 include the following:
Year 2000 Alignment: changes to minimize the impact of the Millennium Rollover.
Threads: POSIX 1003.1c-1995. The Threads extensions permit the development of applications that make
significant performance gains on multiprocessor hardware (a minimal sketch follows this list).
Large File Summit extensions: permit UNIX systems to support files of arbitrary size; this is of particular
relevance to database applications.
Networking Services: the specifications are aligned with the POSIX 1003.1g standard.
MSE: the Multibyte Support Extension is now aligned with ISO C Amendment 1, 1995.
Dynamic linking extensions: permit applications to share common code, and ease the maintenance of bug
fixes and performance enhancements across applications.
N-bit cleanup (64 bit and beyond): removes any architectural dependencies in the Single UNIX
Specification; this is of particular relevance with the ongoing move to 64-bit (and beyond) CPUs.
Real-time extensions: an optional feature group, allowing procurement of X/Open real-time systems with
predictable, bounded behavior.
CDE: inclusion of the existing specifications for the graphical user interface, CDE, as an option in the
UNIX 98 brand.
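As an illustration of the Threads extension listed above, here is a minimal sketch, not taken from the specification itself, written in C against the POSIX pthread interface: two threads of control share one process, and on a multiprocessor the kernel may schedule them on separate CPUs.

/* Minimal sketch: two POSIX threads running in the same address space. */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    /* Each thread executes this function independently. */
    printf("worker %ld running\n", (long)arg);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    /* Create two threads sharing the process's address space. */
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);

    /* Wait for both to complete before exiting. */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

Built with a conforming compiler and the pthread library, the two workers can execute truly in parallel on multiprocessor hardware.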
In Summary, …
When the history of the information age is written, the extraordinary dynamics of the UNIX system
marketplace will be seen as playing an important role. The UNIX system was developed at just the right
time and place to be the critical enabler for a revolution in information technology. Client/server
architectures, the Internet, object databases, heterogeneous transaction processing, and Web computing
all emerged on the shoulders of the UNIX system.
Most importantly, the UNIX system continues to be a driving force for innovation because of its
commitment to standards. When proprietary differences are set aside (and with the wide implementation
of the Single UNIX Specification, they are set aside), suppliers compete by adding value. This fundamental
tenet is the reason that the UNIX system has thrived - and will continue to thrive in the years to come.
Each computer system includes a basic set of programs called the operating system. The most
important program in the set is called the kernel. It is loaded into RAM when the system boots
and contains many critical procedures that are needed for the system to operate. The other
programs are less crucial utilities; they can provide a wide variety of interactive experiences for
the user, as well as doing all the jobs the user bought the computer for, but the essential shape and
capabilities of the system are determined by the kernel. The kernel provides key facilities to
everything else on the system and determines many of the characteristics of higher software.
Hence, we often use the term "operating system" as a synonym for "kernel." The operating system must
fulfill two main objectives:
Interact with the hardware components, servicing all low-level programmable elements
included in the hardware platform.
Provide an execution environment to the applications that run on the computer system
(the so-called user programs).
Some operating systems allow all user programs to directly play with the hardware components
(a typical example is MS-DOS). In contrast, a Unix-like operating system hides all low-level
details concerning the physical organization of the computer from applications run by the user.
When a program wants to use a hardware resource, it must issue a request to the operating
system. The kernel evaluates the request and, if it chooses to grant the resource, interacts with
the proper hardware components on behalf of the user program.
To enforce this mechanism, modern operating systems rely on the availability of specific
hardware features that forbid user programs to directly interact with low-level hardware
components or to access arbitrary memory locations. In particular, the hardware introduces at
least two different execution modes for the CPU: a nonprivileged mode for user programs and a
privileged mode for the kernel. Unix calls these User Mode and Kernel Mode, respectively.
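As a concrete illustration, here is a minimal sketch (not from the text; the file name is arbitrary) of a user program that wants to write data to disk: it cannot touch the disk controller itself, so it issues system calls, and the CPU runs in Kernel Mode only while the kernel services each request.

/* Minimal sketch: a User Mode program asks the kernel, via system calls,
 * to operate the hardware on its behalf. */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* open() and write() are system calls: the CPU switches to Kernel Mode,
     * the kernel drives the disk and its caches, then returns to User Mode. */
    int fd = open("/tmp/example.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return 1;
    write(fd, "hello\n", 6);
    close(fd);
    return 0;
}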
In the rest of this chapter, we introduce the basic concepts that have motivated the design of Unix
over the past two decades, as well as Linux and other operating systems. While the concepts are
probably familiar to you as a Linux user, these sections try to delve into them a bit more deeply
than usual to explain the requirements they place on an operating system kernel. These broad
considerations refer to virtually all Unix-like systems. The other chapters of this book will
hopefully help you understand the Linux kernel internals.
A multiuser system is a computer that is able to concurrently and independently execute several
applications belonging to two or more users. Concurrently means that applications can be active
at the same time and contend for the various resources such as CPU, memory, hard disks, and so
on. Independently means that each application can perform its task with no concern for what the
applications of the other users are doing. Switching from one application to another, of course,
slows down each of them and affects the response time seen by the users. Many of the
complexities of modern operating system kernels, which we will examine in this book, are
present to minimize the delays enforced on each program and to provide the user with responses
that are as fast as possible. A multiuser operating system must include, among other features:
A protection mechanism against buggy user programs that could block other applications
running in the system
A protection mechanism against malicious user programs that could interfere with or spy
on the activity of other users
An accounting mechanism that limits the amount of resource units assigned to each user
To ensure safe protection mechanisms, operating systems must use the hardware protection
associated with the CPU privileged mode. Otherwise, a user program would be able to directly
access the system circuitry and overcome the imposed bounds. Unix is a multiuser system that
enforces the hardware protection of system resources.
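As one concrete example of the accounting mechanism mentioned above, here is a minimal sketch (not from the text) showing how Unix-like kernels expose per-process resource limits that a program or administrator can query and tighten:

/* Minimal sketch: querying and lowering a per-process resource limit.
 * RLIMIT_NOFILE caps how many file descriptors this process may hold open. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    getrlimit(RLIMIT_NOFILE, &rl);          /* read the current limits */
    printf("soft limit: %llu, hard limit: %llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    rl.rlim_cur = 64;                       /* tighten the soft limit */
    setrlimit(RLIMIT_NOFILE, &rl);          /* the kernel now enforces it */
    return 0;
}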
In a multiuser system, each user has a private space on the machine; typically, he owns some
quota of the disk space to store files, receives private mail messages, and so on. The operating
system must ensure that the private portion of a user space is visible only to its owner. In
particular, it must ensure that no user can exploit a system application for the purpose of
violating the private space of another user.
All users are identified by a unique number called the User ID, or UID. Usually only a restricted
number of persons are allowed to make use of a computer system. When one of these users starts
a working session, the system asks for a login name and a password. If the user does not input a
valid pair, the system denies access. Because the password is assumed to be secret, the user's
privacy is ensured.
To selectively share material with other users, each user is a member of one or more user groups, which
are identified by a unique number called a user group ID. Each file is associated
with exactly one group. For example, access can be set so the user owning the file has read and
write privileges, the group has read-only privileges, and other users on the system are denied
access to the file.
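For instance, the permission scheme just described maps directly onto the mode bits set by the chmod() system call; the following is a minimal sketch (the file name is hypothetical, not from the text):

/* Minimal sketch: grant read/write to the owner, read-only to the group,
 * and no access to other users, as in the example above. */
#include <sys/stat.h>

int main(void)
{
    /* Equivalent to mode 0640: owner rw-, group r--, others --- */
    if (chmod("shared-notes.txt", S_IRUSR | S_IWUSR | S_IRGRP) != 0)
        return 1;   /* the file must already exist and belong to us */
    return 0;
}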
Any Unix-like operating system has a special user called root or superuser . The system
administrator must log in as root to handle user accounts, perform maintenance tasks such as
system backups and program upgrades, and so on. The root user can do almost everything,
because the operating system does not apply the usual protection mechanisms to her. In
particular, the root user can access every file on the system and can manipulate every running
user program.
1.4.3. Processes
All operating systems use one fundamental abstraction: the process. A process can be defined
either as "an instance of a program in execution" or as the "execution context" of a running
program. In traditional operating systems, a process executes a single sequence of instructions in
an address space; the address space is the set of memory addresses that the process is allowed to
reference. Modern operating systems allow processes with multiple execution flows, that is,
multiple sequences of instructions executed in the same address space.
Multiuser systems must enforce an execution environment in which several processes can be
active concurrently and contend for system resources, mainly the CPU. Systems that allow
concurrent active processes are said to be multiprogramming or multiprocessing.[*] It is
important to distinguish programs from processes; several processes can execute the same
program concurrently, while the same process can execute several programs sequentially.
[*]
Some multiprocessing operating systems are not multiuser; an example is Microsoft Windows
98.
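To make the distinction between programs and processes concrete, here is a minimal sketch (not from the text; it assumes /bin/date exists): fork() creates a second process that initially executes the same program, while execl() makes an existing process execute a new program.

/* Minimal sketch: several processes can run the same program (fork),
 * and one process can run several programs in sequence (exec). */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();              /* now two processes run this program */

    if (pid == 0) {
        /* Child: replace this program with /bin/date; the process remains,
         * but from now on it executes a different program. */
        execl("/bin/date", "date", (char *)NULL);
        _exit(127);                  /* reached only if exec fails */
    }

    waitpid(pid, NULL, 0);           /* parent waits for the child */
    printf("parent %d is still running the original program\n", (int)getpid());
    return 0;
}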
On uniprocessor systems, just one process can hold the CPU, and hence just one execution flow
can progress at a time. In general, the number of CPUs is always restricted, and therefore only a
few processes can progress at once. An operating system component called the scheduler
chooses the process that can progress. Some operating systems allow only nonpreemptable
processes, which means that the scheduler is invoked only when a process voluntarily
relinquishes the CPU. But processes of a multiuser system must be preemptable; the operating
system tracks how long each process holds the CPU and periodically activates the scheduler.
Unix is a multiprocessing operating system with preemptable processes . Even when no user is
logged in and no application is running, several system processes monitor the peripheral devices.
In particular, several processes listen at the system terminals waiting for user logins. When a user
inputs a login name, the listening process runs a program that validates the user password. If the
user identity is acknowledged, the process creates another process that runs a shell into which
commands are entered. When a graphical display is activated, one process runs the window
manager, and each window on the display is usually run by a separate process. When a user
creates a graphics shell, one process runs the graphics windows and a second process runs the
shell into which the user can enter the commands. For each user command, the shell process
creates another process that executes the corresponding program.
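As a rough illustration of that last step, here is a minimal sketch of a shell's read-fork-exec-wait loop (not the source of any real shell; for brevity it handles only commands without arguments):

/* Minimal sketch of a shell's main loop: one new process per command. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char cmd[256];

    for (;;) {
        printf("$ ");
        fflush(stdout);
        if (fgets(cmd, sizeof(cmd), stdin) == NULL)
            break;                            /* end of input: exit the shell */
        cmd[strcspn(cmd, "\n")] = '\0';       /* strip the trailing newline */
        if (cmd[0] == '\0')
            continue;

        pid_t pid = fork();                   /* create a process for the command */
        if (pid == 0) {
            execlp(cmd, cmd, (char *)NULL);   /* child executes the program */
            _exit(127);                       /* reached only if exec fails */
        }
        waitpid(pid, NULL, 0);                /* shell waits for completion */
    }
    return 0;
}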
Unix-like operating systems adopt a process/kernel model . Each process has the illusion that it's
the only process on the machine, and it has exclusive access to the operating system services.
Whenever a process makes a system call (i.e., a request to the kernel, see Chapter 10), the
hardware changes the privilege mode from User Mode to Kernel Mode, and the process starts the
execution of a kernel procedure with a strictly limited purpose. In this way, the operating system
acts within the execution context of the process in order to satisfy its request. Whenever the
request is fully satisfied, the kernel procedure forces the hardware to return to User Mode and the
process continues its execution from the instruction following the system call.
As stated before, most Unix kernels are monolithic: each kernel layer is integrated into the whole
kernel program and runs in Kernel Mode on behalf of the current process. In contrast,
microkernel operating systems demand a very small set of functions from the kernel, generally
including a few synchronization primitives, a simple scheduler, and an interprocess
communication mechanism. Several system processes that run on top of the microkernel
implement other operating system-layer functions, like memory allocators, device drivers, and
system call handlers.
Linux achieves many of these advantages through kernel modules; the main benefits of the module
approach include the following:
Modularized approach
Because any module can be linked and unlinked at runtime, system programmers must
introduce well-defined software interfaces to access the data structures handled by
modules. This makes it easy to develop new modules.
Platform independence
Even if it may rely on some specific hardware features, a module doesn't depend on a
fixed hardware platform. For example, a disk driver module that relies on the SCSI
standard works as well on an IBM-compatible PC as it does on Hewlett-Packard's Alpha.
Frugal main memory usage
A module can be linked to the running kernel when its functionality is required and
unlinked when it is no longer useful; this is quite useful for small embedded systems.
No performance penalty
Once linked in, the object code of a module is equivalent to the object code of the
statically linked kernel. Therefore, no explicit message passing is required when the
functions of the module are invoked.[*]
[*]
A small performance penalty occurs when the module is linked and unlinked. However, this penalty
can be compared to the penalty caused by the creation and deletion of system processes in microkernel
operating systems.
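To ground the discussion, here is a minimal sketch of a loadable module (an assumption of a reasonably recent Linux build environment; it is not an example from the text): a module is object code with well-known entry and exit points that the kernel calls when the module is linked in and unlinked.

/* Minimal sketch of a loadable kernel module: init and exit entry points. */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

static int __init hello_init(void)
{
    pr_info("hello: module linked into the running kernel\n");
    return 0;                       /* 0 means successful initialization */
}

static void __exit hello_exit(void)
{
    pr_info("hello: module unlinked\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");

Built against the headers of the running kernel, such a module would typically be linked in with insmod and unlinked with rmmod.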