DOS Unit 4

The document discusses distributed file systems, covering their design, implementation, and trends. It explains the file and directory service interfaces, semantics of file sharing, and the importance of system structure and file usage. Key topics include file operations, naming transparency, caching, replication, and the challenges of ensuring efficient access and management in a distributed environment.

Unit-IV DISTRIBUTED FILE SYSTEMS

Topics include

Distributed file system design

 The file service interface
 The directory service interface
 Semantics of file sharing

Distributed file system implementation

 File usage
 System structure
 Caching
 Replication

Trends in distributed file systems

 New hardware
 Scalability
 Wide area networking
 Mobile users
 Fault tolerance
 Multimedia

Introduction
 The key component of any distributed system is the file system. In distributed systems
the job of the file system is to store programs and data and make them available as
needed. The file service is the specification of what the file system offers to its clients. It
describes the primitives available, what parameters they take and what actions they
perform.
 A file server is a process that runs on some machine and helps implement the file
service. A system may have one file server or several. The clients should not even know
that the file service is distributed; it should look the same as a normal single-processor
file system. For example, a distributed system may have two servers that offer UNIX file
service and MS-DOS file service respectively, with each user process using the one
that is suitable for it.
Distributed file system design
A distributed file system has two components:
1) The file service interface
2) The directory service interface
1) The file service interface
 A file is an uninterpreted sequence of bytes, in a single-processor system or a distributed
system. A file can also be structured as a sequence of records, for example with operating
system calls to read or write a particular record. The record can usually be specified
by giving either its record number (i.e. position within the file) or the value of some
field.
 Files can have attributes, which are pieces of information about the file but which
are not part of the file itself. Typical attributes are the owner, size, creation date and access
permissions. The file service provides operations to read and write some of the
attributes. In a few advanced systems, it may be possible to create and manipulate
user-defined attributes as well. Another important aspect of the file model is whether files can
be modified after they have been created.
 In some distributed systems, the only file operations are CREATE and READ. Once a
file has been created, it cannot be changed. Such a file is said to be
immutable. Having files be immutable makes it much easier to support file caching
and replication because it eliminates all the problems associated with having to
update all copies of a file whenever it changes.
 Protection in distributed systems uses two main techniques: capabilities and access
control lists. With capabilities, each user holds a kind of ticket for each file it may
access, specifying which kinds of access are permitted (e.g. reading is allowed but
writing is not). An access control list scheme instead associates with each file a list
of users who may access the file and how.
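As a minimal sketch of the two protection schemes just described (file names, users, and data structures here are purely illustrative):

```python
# Access control list: each file carries a list of (user, permissions).
acl = {
    "report.txt": {"alice": {"read", "write"}, "bob": {"read"}},
}

def acl_allows(filename, user, operation):
    """Check the file's ACL for the requested kind of access."""
    return operation in acl.get(filename, {}).get(user, set())

# Capability: each user holds a ticket naming a file and the permitted accesses.
alice_capability = {"file": "report.txt", "rights": {"read", "write"}}

def capability_allows(cap, filename, operation):
    """The server only inspects the ticket, not a per-file user list."""
    return cap["file"] == filename and operation in cap["rights"]

print(acl_allows("report.txt", "bob", "write"))                   # False
print(capability_allows(alice_capability, "report.txt", "read"))  # True
```

The key difference visible in the sketch: with ACLs the server stores per-file user lists, while with capabilities the checking information travels with the user's ticket.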

File services can be divided into two types, depending on whether they support an
upload/download model or a remote access model.

1) In the upload/download model as shown in fig.(a) the file service provides only two
major operations: read file and write file. The read file operation transfers an entire file
from one of the file servers to the requesting client. The write file operation transfers an
entire file from client to server. The files can be stored in memory or on a local disk, as
needed. The advantage of the upload/download model is its simplicity and efficiency.

Fig. (a) The upload/download model (b) The remote access model
2) The other kind of file service is the remote access model as shown in fig.(b). In this
model, the file service provides a large number of operations for opening and closing
files, reading and writing parts of files, changing file attributes and so on. Here the file
system runs on the servers, not on the clients. The advantage of this model is that it does
not require much space on the clients, and it eliminates the need to transfer entire files
when only small pieces of them are needed.
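The contrast between the two models can be sketched as follows (class and method names are illustrative, not from any real file system):

```python
class UploadDownloadClient:
    """Whole files move between client and server; reads are then local."""
    def __init__(self, server):
        self.server, self.local = server, {}

    def read_file(self, name):
        self.local[name] = self.server[name]      # transfer the entire file
        return self.local[name]

    def write_file(self, name, data):
        self.local[name] = data
        self.server[name] = data                  # ship the whole file back

class RemoteAccessClient:
    """The file stays on the server; each operation is a remote request."""
    def __init__(self, server):
        self.server = server

    def read(self, name, offset, length):
        # Only the requested part of the file crosses the network.
        return self.server[name][offset:offset + length]

server = {"a.txt": b"hello world"}
print(UploadDownloadClient(server).read_file("a.txt"))   # b'hello world'
print(RemoteAccessClient(server).read("a.txt", 0, 5))    # b'hello'
```

Note how the remote access client never holds a full copy of the file, which is exactly why it needs so little client-side space.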

The directory service interface


 The directory service provides operations for creating and deleting directories, naming
and renaming files and moving them from one directory to another. The nature of the
directory service does not depend on whether individual files are transferred entirely or
accessed remotely.
 It defines an alphabet and syntax for file names. File names can be from 1 to some
maximum number of letters, numbers and certain special characters. Some systems
divide file names into two parts, such as prog.c for a C program or akarsh.txt for a text
file. The second part of the name, called the file extension, identifies the file type.
 All distributed systems allow directories to contain subdirectories, to make it possible for
users to group related files together. Operations are provided for creating and deleting
directories as well as entering, removing and looking up files in them. Each subdirectory
typically contains all the files for one large program or document (e.g. a book). Subdirectories can
contain their own subdirectories, leading to a tree of directories, which is known as a
hierarchical file system, shown in fig.(a) with five directories. In a tree, a link may
be removed only when the directory pointed to is empty.

Fig. (a) A directory tree contained on one machine (b) A directory graph on two machines
 Directories can also be represented in the form of a graph, as shown in fig.(b),
where directory D has a link to directory B. In a graph, a link may be removed as long as
at least one other link to the directory exists. A reference count is maintained in the upper
right-hand corner of each directory. When the link from A to B is removed, the reference
count of B drops from 2 to 1, as shown in fig.(b).

A key issue in the design of any distributed file system is whether or not all machines should
have exactly the same view of the directory hierarchy.
Fig.(a) Two file servers. The squares are directories and the circles are files. (b) A system in
which all clients have the same view of the file system (c) A system in which different clients
may have different views of the file system.

In fig.(a) we show two file servers each holding three directories and some files. In fig.(b) we
have a system in which all clients have the same view of the distributed file system. If the
path /D/E/x is valid on one machine, it is valid on all of them. In fig.(c) different machines can
have different views of the file system. For Ex: the path /D/E/x is valid on client 1 but not on
client 2.

Naming transparency

 Two forms of transparency are relevant. They are first one is location transparency,
means that the path name gives no hint as to where the file is located. A path like
/server1/dir1/dir2/x tells everyone that x is located on server 1, but it does not tell where
that server is located.
 The second one is location independencewhere a system in which files can be moved
without their names changing.

There are three common approaches to file and directory naming in a distributed system

1) Machine + path naming such as /machine/path
2) Mounting remote file systems on the local file hierarchy
3) A single name space that looks the same on all machines

Two-level naming

 Most distributed systems use some form of two-level naming. Files have symbolic names
such as prog.c, for use by people, and internal binary names, for use by the system itself.
Directories provide a mapping between these two naming levels. When a user opens a file,
the system looks up the symbolic name in the directory to get the binary name that will be
used to locate the file.
 The general naming scheme is to have the binary name indicate both a server and a
specific file on that server. This approach allows a directory on one server to hold a file
on a different server. An alternative is to use a symbolic link: a directory entry that maps
onto a (server, file name) pair, which can be looked up on the named server to find the
binary name. The symbolic link itself is just a path name.
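A tiny sketch of the two-level lookup (the directory contents and binary-name format are invented for illustration):

```python
# Directories map symbolic names, used by people, to binary names that
# identify both a server and an internal file id on that server.
directory = {
    "prog.c":     ("server1", 4711),   # binary name = (server, file id)
    "akarsh.txt": ("server2", 93),     # the file may live on another server
}

def open_file(symbolic_name):
    """Look up the symbolic name to obtain the binary name used internally."""
    server, file_id = directory[symbolic_name]
    return server, file_id

assert open_file("prog.c") == ("server1", 4711)
```

Because the binary name carries the server, a single directory can transparently point at files held on several different servers.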

Semantics of file sharing

When two or more users share the same file, it is necessary to define the semantics of reading
and writing to avoid problems. There are four ways of dealing with the shared files in a
distributed system. They are:

1) UNIX semantics

2) Session semantics
3) Immutable files
4) Transactions
1) UNIX semantics: In single-processor systems that permit processes to share files, such as
UNIX, the semantics state that when a READ operation follows a WRITE operation, the
READ returns the value just written, as shown in fig.(a). Similarly, when two WRITEs
happen one after the other, followed by a READ, the value read is the value stored by the
last write. This model is easy to understand and straightforward to implement. In a
distributed system, UNIX semantics can be achieved easily as long as there is only one
file server and clients do not cache files: all READs and WRITEs go directly to the file
server, which processes them strictly sequentially.

Fig.(a) On a single processor, when a READ follows a WRITE, the value returned by the
READ is the value just written (b) In a distributed system with caching

2) Session semantics: The performance of a distributed system in which all file requests
go to a single server is poor. This problem is avoided by allowing clients to maintain
local copies of heavily used files in their private caches. Under this method, "changes to an
open file are visible only to the process that modified the file. Only when the file is
closed are the changes made visible to other processes." In fig.(b), when A closes the
file, it sends a copy to the server, so that subsequent READs get the new value as required.
This rule is widely implemented and is known as session semantics. A problem arises when
two processes try to replace the same file at the same time; with session semantics, the
usual solution is to let one of the new files replace the old one.
3) Immutable files: A completely different approach to the semantics of file sharing in a
distributed system is to make all files immutable. There is no way to open a file for
writing; the only operations on files are CREATE and READ. Thus, while it becomes
impossible to modify the file x, it remains possible to replace x by a new file.
4) Transactions: A fourth way to deal with shared files in a distributed system is to use
transactions. To access a file or a group of files, a process first executes some type of
BEGIN TRANSACTION primitive to signal the start of the transaction. Then come
system calls to read and write one or more files. When the work has been completed, an
END TRANSACTION primitive is executed. The key property of this method is that the
system guarantees that all the calls contained within the transaction will be carried out in
order, without any interference from other, concurrent transactions. If two or more
transactions start up at the same time, the system ensures that the final result is the same
as if they were all run in some sequential order. For example, in a banking system
transactions make the programming much easier.
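The BEGIN/END pattern above can be sketched as follows. This is only the atomic-commit idea in miniature; a real transaction system would also handle aborts and concurrency control, and all names here are illustrative:

```python
class FileStore:
    def __init__(self):
        self.files, self.pending = {}, None

    def begin_transaction(self):
        self.pending = {}                 # buffer writes, don't apply yet

    def write(self, name, data):
        self.pending[name] = data         # change is private to the transaction

    def read(self, name):
        # Within the transaction, reads see the transaction's own writes.
        return self.pending.get(name, self.files.get(name))

    def end_transaction(self):
        self.files.update(self.pending)   # commit all buffered writes at once
        self.pending = None

store = FileStore()
store.begin_transaction()
store.write("balance", 100)
store.end_transaction()
print(store.files["balance"])   # 100
```

Until end_transaction runs, nothing in self.files changes, so other processes never observe a half-finished update.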

Distributed file system implementation

File usage

 In a distributed file system (DFS), multiple machines are used to provide the file
system's facility, and different file systems utilize different conceptual models of a file.
Before implementing any distributed system, it is useful to have a good idea of how it will
be used, to make sure that the most commonly executed operations will be efficient.
 Some measurements of file usage are static, meaning that they represent a
snapshot of the system at a certain instant. Static measurements are made by examining
the disk to see what is on it. They include the distribution of file sizes, the
distribution of file types and the amount of storage occupied by files of various types and
sizes.
 Other measurements are dynamic, made by modifying the file system to record all
operations to a log for subsequent analysis. These data yield information about the
relative frequency of various operations, the number of files open at any moment and the
amount of sharing that takes place.
 By combining the static and dynamic measurements, even though they are fundamentally
different, we can get a better view of how the file system is used. One problem that
always occurs with measurements of any existing system is knowing how typical the
observed user population is. For example, if the measurements are made at a university,
can we apply the same results to research labs, automation projects or banking systems?
No one really knows for sure until these systems are measured too.
 Another problem in making measurements is watching out for artifacts of the system
being measured. For example, when looking at the distribution of file names in an MS-DOS
system, one could quickly conclude that file names are never more than 8 characters. But
since MS-DOS does not allow more than 8 characters in a file name, it is impossible to tell
what users would do if they were not constrained to eight-character file names.
Observed file system properties
1) Most files are small (under 10 KB). This suggests that transferring entire files is
simple and efficient.
2) Most files have short lifetimes. A common pattern is to create a file, read it and then
delete it.
3) File sharing is unusual, so client caching with session semantics is a good trade-off
for better performance.
4) Distinct file classes with different properties exist and can be handled by different
mechanisms.
5) Reading is much more common than writing.
6) The average process uses only a few files.
7) Reads and writes are mostly sequential; random access is rare.
System structure

 In some systems, there is no distinction between clients and servers: all machines run the
same basic software. In other systems, the file server and directory server are just user
programs, so a system can be configured to run client and server software on the same
machines or not, as it wishes.
 Finally, there are systems in which clients and servers are fundamentally different
machines, in terms of either hardware or software. The servers may even run a different
version of the operating system from the clients. The file and directory services can also
be structured in different ways.
 One organization is to combine the file and directory services into a single server that
handles all the directory and file calls itself. Another possibility is to keep them separate.
In that case, opening a file requires going to the directory server to map its symbolic name
onto its binary name, and then going to the file server with the binary name to read or
write the file.
 In the normal case, the client sends a symbolic name to the directory server, which then
returns the binary name that the file server understands. It is also possible for a directory
hierarchy to be partitioned among multiple servers. Suppose we have a
system in which the current directory, on server 1, contains an entry a for a directory held
on server 2, which in turn contains an entry b for a directory held on server 3, which
contains an entry for c.

To look up a/b/c, the client sends a message to server 1, which manages its current directory. The
server finds a, but sees that the binary name refers to another server. It now has a choice. It can
either tell the client which server holds b and have the client look up b/c itself, as shown in fig.(a),
or it can forward the remainder of the request to server 2 itself and not reply at all, as shown in
fig.(b). The latter method, known as automatic lookup, is more efficient; the former requires
more messages, but leaves the clients aware of which server holds which directory.
Fig.(a) Iterative lookup of a/b/c

Fig.(b) Automatic lookup


The final structure issue that we will consider here is whether or not file, directory and other
servers should maintain state information about clients. There are stateless servers and stateful
servers.

Stateless servers:

 A stateless server keeps no state about its users. When a user accesses any resource,
the server does not keep track of the user's identity or of the actions performed so far,
so every time, the user has to re-identify itself to gain access.
 With a stateless server, each request must be self-contained: it must contain the full file
name and the offset within the file, in order to allow the server to do the work. An example
of a stateless transaction is doing a search online to answer a question. You type your
question into a search engine and hit enter. If your transaction is interrupted or closed
accidentally, you just start a new one. Ex: HTTP, UDP.

Advantages of stateless servers:

1) Fault tolerance
2) No OPEN/CLOSE calls needed
3) No server space wasted on tables
4) No limits on number of open files
5) No problems if a client crashes
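A self-contained stateless read might look like the following sketch (the file contents and request format are invented for illustration):

```python
FILES = {"/home/abc/data.txt": b"the quick brown fox"}

def handle_read(request):
    """Each request carries the full file name, offset and count, so the
    server keeps no per-client table between requests."""
    data = FILES[request["file"]]
    return data[request["offset"]:request["offset"] + request["count"]]

# The client repeats the full name and position on every request; if it
# crashes between the two calls, the server neither notices nor cares.
print(handle_read({"file": "/home/abc/data.txt", "offset": 0, "count": 3}))
print(handle_read({"file": "/home/abc/data.txt", "offset": 4, "count": 5}))
```

Because nothing survives on the server between calls, a server crash and reboot is invisible to clients, which is the source of the fault-tolerance advantage listed above.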

Stateful servers:

 Stateful applications and processes allow users to store, record and return to already
established information and processes over the internet. In stateful applications, the
server keeps track of the state of each user session and maintains information about the
user's interactions and past requests.
 They can be returned to again and again, like online banking or email. They are performed
in the context of previous transactions, and the current transaction may be affected by
what happened during previous ones.
 For these reasons, stateful applications use the same servers each time they process a
request from a user. A stateful server stores users' state information in the form of
sessions: it stores information such as the user's profile, preferences and actions, and
gives a personalized experience on the next visit. Ex: FTP, TELNET.

Advantages of stateful servers:

1) Shorter request messages
2) Better performance
3) Read-ahead possible
4) Idempotency easier
5) File locking possible

Caching

 In a distributed file system, files are stored across multiple servers or nodes, and file
caching involves temporarily storing frequently accessed files in memory or on local
disks to reduce the need for network or disk access.
 File caching enhances I/O performance because previously read files are kept in main
memory and are available locally. The performance improvement of the file system
depends on the locality of the file access pattern.
 Caching also helps reliability and scalability. It is an important feature of distributed
file systems because it improves performance by reducing network traffic and
minimizing disk access.
 In a client-server system in which both client and server have a main memory and a
disk, there are four places to store files: the server's disk, the server's main memory,
the client's disk (if available) or the client's main memory, as shown in the fig.
Fig. Four places to store files or parts of files

Cache Location: The file might be kept on the disk or in the main memory of the client or
the server in a client-server system with memory and disk.
Server's Disk: This is always the original location where the file is saved. There is enough
space here in case the file is modified and becomes longer. Additionally, the file is visible
to all clients.

Server’s Main Memory:


 The question is whether to cache the complete file or only its disk blocks when the file
is cached in the server's main memory. If the full file is cached, it can be stored in
contiguous locations, and high-speed transmission results in good performance.
 Disk block caching uses the cache and disk space more efficiently. A cache copy can
simply be discarded if there is an up-to-date copy on the disk; otherwise the cached
data must first be written back to the disk.
 Clients can easily and transparently access a file cached in the server's main memory.
The server can easily keep the disk and main-memory copies of the file consistent.
From the client's point of view, only one copy of each file exists in the system.
Client’s disk:
 The data can also be cached on the client's hard disk. Although network transfer is
avoided on a cache hit, the disk must still be accessed. Because the modified data
remain available after a crash or loss of connectivity, this technique improves
reliability.
 The information can then be recovered from the client's hard disk. Even if the client is
disconnected from the server, the file can still be accessed. Because access to the disk
is handled locally, there is no need to contact the server, which enhances scalability
and dependability.

Advantages:
 Reliability is increased, as data can be recovered in case of data loss.
 The client's disk has a significantly larger storage capacity than the client's main
memory, so more data can be cached.
Disadvantages:
 Access is slower than from main memory: the main memory of the server may be able
to provide a file faster than the client's disk can.
Client's Main Memory: Once it is agreed that files should be cached in the client's
memory, caching can take place in the user process's address space, in the kernel, or in a
cache manager running as a user process. There are various ways of doing caching in
client memory:

a) No caching
b) Caching in the user process:
 The simplest way is to cache files directly inside each user process's own address
space, as shown in fig.(b). The cache is managed by the system call library. As
files are opened, closed, read and written, the library simply keeps the most
heavily used ones around, so that when a file is reused, it may already be
available.
 When the process exits, all modified files are written back to the server. This
scheme is effective only if individual processes open and close files repeatedly.

c) Caching in the kernel:
 Here the file is cached in the kernel instead of in the user process's address
space, as shown. The kernel is in charge of the cache: as files are opened,
closed, read and written, it keeps the most frequently used ones around so
that they can be reused if necessary.
 The updated files are written back to the server once the work has been
completed. Unlike a per-process cache, the kernel cache survives process
exits, so a file reopened later, even by a different process, may still be cached.

d) Cache manager as a user process:
 A separate user-level cache manager can be used to cache the files. As a
result, the kernel no longer has to contain the file system code, and the
design becomes more isolated and flexible.
 The kernel can decide at run time how to allocate memory space between
programs and the cache. If the cache manager runs in virtual memory, the
kernel can keep some of the cached files on disk, and the blocks are brought
into main memory on a cache hit.

Advantages:
 This technique is more isolated and flexible (as the kernel no longer has to maintain
the file system code)
 When individual processes open and close files regularly, the access time decreases.
 Contributes to the scalability and reliability of the system.
Disadvantages:
 A separate user-level cache manager is required.

Cache Consistency – Cache Update Policy:

 When the cache is located on the client's node, numerous users can access the same
data or file at the same time. If all caches contain the same, most current data, they
are said to be consistent.
 The data may become inconsistent if some users modify the file. A distributed system
that uses a DFS must keep its data copies consistent. Depending on when changes are
propagated to the server and how the validity of cached data is checked, several
consistency strategies are possible: write-through, delayed write, write-on-close, and
centralized control.
 When the cache is located on the client's node and one user writes data to the cache,
the change must also be made visible to the other users. The update policy determines
when that writing is performed.

There are four cache update policies:

 Write-Through:
 When a user edits a cache entry in this method, the change is immediately
written through to the server. Any process that requests the file from the server
will now always receive the most up-to-date information.
 For example, a client process reads a file, caches it, and then exits. Another
client modifies the same file and sends the change to the server a short time
later.
 If a new process is started on the first machine with the cached copy of the
file, it would obtain an outdated copy.
 To avoid this, the cached copy is validated with the server by comparing the
times of modification of the two copies: the cached copy on the client's
machine and the master copy on the server.
 Delayed Write:
 To reduce network traffic, updates are written to the server periodically, or
batched together; this is known as 'delayed write'.
 This method improves performance by allowing a single bulk write operation
rather than many tiny writes. A short-lived temporary file may never need to
be stored on the file server at all.
 Write on close:
 One step further is to write the file back to the server only once it has been
closed; this policy is called 'write on close'.
 If two cached files are written back in quick succession, the second write
overwrites the first. This is comparable to what happens on a single-CPU
system when two processes each read or write a file in their own address
space and then write it back.
 Centralized Control:
 For tracking purposes, the client sends information about the files it has just
opened to the server, which then performs read, write, or both activities.
 Multiple processes may read from the same file, but once one process has
opened the file for writing, all other processes will be denied access.
 After the server receives notification that the file has been closed, it updates
its table, and only then can additional users access the file.
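The difference between the first two policies can be made concrete with a small sketch (class names and the dict-based "server" are illustrative):

```python
class WriteThroughCache:
    def __init__(self, server):
        self.server, self.cache = server, {}

    def write(self, name, data):
        self.cache[name] = data
        self.server[name] = data        # propagated to the server at once

class DelayedWriteCache:
    def __init__(self, server):
        self.server, self.cache, self.dirty = server, {}, set()

    def write(self, name, data):
        self.cache[name] = data
        self.dirty.add(name)            # remember it for a later batch flush

    def flush(self):                    # e.g. run periodically or on close
        for name in self.dirty:
            self.server[name] = self.cache[name]
        self.dirty.clear()

server1 = {}
WriteThroughCache(server1).write("x", 1)
print("x" in server1)                   # True: visible immediately

server2 = {}
dw = DelayedWriteCache(server2)
dw.write("x", 1)
print("x" in server2)                   # False: not yet propagated
dw.flush()
print("x" in server2)                   # True: visible after the flush
```

Write-on-close is simply the delayed-write flush triggered by the CLOSE operation, and centralized control moves the decision of who may read or write to the server's bookkeeping instead.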
Replication

 In a distributed system, data is stored over different computers in a network.
Therefore, we need to make sure that data is readily available for the users.
 Availability of the data is an important requirement, often accomplished by data
replication.
 Replication is the practice of keeping several copies of data in different places. It is
good to have replicas of a node in a network for the following reasons:
 If a node stops working, the distributed network will still work fine thanks to its
replicas. Replication thus increases the fault tolerance of the system.
 It also helps in load sharing, where the load on a server is shared among different
replicas.
 It enhances the availability of the data. If replicas are created and data is stored
near the consumers, it is easier and faster to fetch.
 It allows file access to occur even if one file server is down. A server crash should
not bring the entire system down while the server is rebooted.
 The fig. shows three ways replication can be done
(a) Explicit file replication
(b) Lazy file replication
(c) File replication using a group

Fig.(a) Explicit file replication (b) Lazy file replication (c) File replication
using a group

(a) Explicit file replication:
 The programmer controls the entire process. When a process makes a file,
it does so on one specific server. Then it can make additional copies on
other servers if needed.
 If the directory server permits multiple copies of a file, the network
addresses of all copies can be associated with the file name, as shown in
fig.(a), so that when the name is looked up, all copies will be found.
 When the file is subsequently opened, the copies can be tried sequentially
in some order until an available one is found.
 For example, suppose the home directory is /machine1/usr/abc. After
creating the file /machine1/usr/abc/xyz, the programmer can use the cp
command to make copies in /machine2/usr/abc/xyz and
/machine3/usr/abc/xyz.
(b) Lazy file replication:
 Here only one copy of each file is created, on some server. Later, the server
itself makes replicas on other servers automatically, without the
programmer's involvement.
 When making copies in the background like this, the system must watch out
for the possibility that the file changes before the copies can be made.
(c) File replication using a group:
Our final method is to use group communication as shown in fig.(c). In this
scheme, all WRITE system calls are simultaneously transmitted to all the servers,
so extra copies are made at the same time the original file is made.

Update protocols

The update protocols deal with the problem of updating existing replicated files: if, say, a
client crashes partway through an update, some copies of a file will have been changed and
others not. Then some reads may return the old value while others return the new one.

Two well-known algorithms solve this problem:

1) Primary copy replication
2) Voting
1) Primary copy replication:
 The first algorithm is primary copy replication, in which one server is designated
as the primary and the others as secondaries.
 When a replicated file is to be updated, the change is sent to the primary server,
which makes the change locally and then sends commands to the secondaries,
ordering them to change too.
 Reads can be done from any copy, primary or secondary. To guard against the
primary crashing before it has instructed all the secondaries, the update should
first be written to stable storage.
 In this way, when a server reboots after a crash, a check can be made to see if
any updates were in progress at the time of the crash, and the remaining
secondaries can then be updated.
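The primary copy flow can be sketched as follows (a simplification that omits stable storage and crash recovery; all names are illustrative):

```python
class PrimaryCopy:
    """Writes go to the primary, which propagates them to the secondaries;
    reads may be served from any copy."""
    def __init__(self, n_secondaries=2):
        self.primary = {}
        self.secondaries = [{} for _ in range(n_secondaries)]

    def write(self, name, data):
        self.primary[name] = data           # change is made locally first
        for s in self.secondaries:          # then pushed to every secondary
            s[name] = data

    def read(self, name, replica=0):
        # Any copy may serve the read; here we pick a secondary by index.
        return self.secondaries[replica].get(name, self.primary.get(name))

r = PrimaryCopy()
r.write("f", "v1")
print(r.read("f", replica=1))   # v1: every copy saw the update
```

The weak point visible even in this sketch is the loop over secondaries: if the primary dies mid-loop, the copies diverge, which is why the text requires logging the update to stable storage first.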
2) Voting:
 This is a more robust method, proposed by Gifford, as an alternative to primary
copy replication, in which no updates can be performed while the primary is
down.
 The basic idea of voting is to require clients to request and acquire the
permission of multiple servers before either reading or writing a replicated file.
 As an example of how the algorithm works, suppose that a file is replicated on N
servers. The rule is that to update the file, a client must first contact at least half
the servers plus one and get them to agree to do the update. Once they have
agreed, the file is changed and a new version number is associated with the new
file.
 The version number identifies the version of the file and is the same on all the
newly updated copies.
 To read a replicated file, a client must also contact at least half the servers plus
one and ask them to send the version numbers associated with the file. If all the
version numbers agree, this must be the most recent version.

 For example, if there are five servers and a client determines that three of them
have version 8, it is impossible that the other two have version 9, because any
successful update from version 8 to version 9 requires getting three servers to
agree to it, not just two.
 Three examples of the voting algorithm are shown in fig. (a), (b), and (c). To read a file
of which N replicas exist, a client needs to assemble a read quorum, an arbitrary
collection of any Nr servers. Similarly, to modify a file, a write quorum of at least Nw
servers is required. The values of Nr and Nw are subject to the constraint Nr + Nw > N,
which guarantees that every read quorum overlaps every write quorum in at least one
server.

Consider fig.(a), which has Nr = 3 and Nw = 10. Imagine that the most recent write quorum
consisted of the ten servers C through L. All of these got the new version and the new version
number. Any subsequent read quorum of three servers will have to contain at least one member of
this set, and will therefore see the latest version number. In fig.(c), Nr is reduced to 1, making it
possible to read a replicated file by finding any copy and using it; the price is that a write must
then update all N copies.
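The quorum rule can be sketched in a few lines of code. This is an illustration under the
constraint Nr + Nw > N only; the names are hypothetical, and a real implementation would
also handle locking, server failures, and concurrent quorum requests.

```python
# Sketch of quorum voting: a write updates any Nw servers with a new common
# version number; a read of any Nr servers finds the latest version because
# the two quorums must overlap in at least one server.

class Server:
    def __init__(self):
        self.version = 0
        self.data = None

def write(quorum, data):
    # All Nw servers in the write quorum agree, then receive the new
    # contents and a new (common) version number.
    new_version = max(s.version for s in quorum) + 1
    for s in quorum:
        s.version = new_version
        s.data = data

def read(quorum):
    # The highest version number seen among any Nr servers identifies
    # the most recent copy, thanks to the quorum overlap.
    return max(quorum, key=lambda s: s.version).data

N, Nr, Nw = 12, 3, 10
assert Nr + Nw > N                      # the quorum constraint
servers = [Server() for _ in range(N)]  # servers A through L

write(servers[N - Nw:], "version 1")    # write quorum: servers C..L
latest = read(servers[:Nr])             # read quorum: A, B, C (overlaps at C)
```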

Trends in distributed file systems


Distributed systems are undergoing a period of significant change, which can be traced back to
a number of growing trends.

1) New hardware
2) Scalability
3) Wide area networking
4) Mobile users
5) Fault tolerance
6) Multimedia

1) New hardware:
 Currently, all file servers use magnetic disks for storage. Within a few years,
memory may become so cheap that even small organizations can afford file
servers with gigabytes of physical memory.
 As a result, the file system may come to reside permanently in memory, and no
disks will be needed. Most current file systems organize each file as a collection
of blocks.
 With an in-core file system, it is easier to store each file contiguously in memory
rather than breaking it up into blocks. Contiguously stored files can also be sent
over a high-speed network at full speed.
 Optical disks can be used to store large numbers of files. They have the
following properties:
1) They have huge storage capacities
2) They allow random access
3) They are slow
4) They are cheap
 Another hardware development is the fiber optic network. Combining large
main memories with fiber optic networks, it becomes possible to eliminate the
client cache and the server's disk, and run everything out of the server's main
memory, backed up by optical disk.
 Consider the system of the figure, in which each network interface has a bit map
with one bit per cached file.
Fig. The hardware scheme to update shared files
 To modify a file, a processor must first set the corresponding bit, which it may do
only if no other processor is currently updating the file. Setting the bit causes the
interface to create and send a packet around the ring that checks and sets the bit
in all the other interfaces.
 After the file is locked, the processor updates the file. When the update is
complete, the processor clears the bit in the bit map, which causes the network
interface to locate the file using a table in memory; the bit is then cleared in the
bit map on all the machines.
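The locking part of this scheme can be simulated in software. This toy sketch only
mimics the check-then-set behavior of the circulating packet; the names are invented, and
the actual proposal performs these steps in the network interface hardware.

```python
# Toy simulation of the bit-map locking scheme: one lock bit per cached
# file, replicated at every interface on the ring.

class Interface:
    def __init__(self):
        self.bits = {}                        # file name -> lock bit

def try_lock(ring, fname):
    # The packet's first pass checks the bit at every interface...
    if any(iface.bits.get(fname, False) for iface in ring):
        return False                          # file is being updated elsewhere
    # ...and its second pass sets the bit everywhere, locking the file.
    for iface in ring:
        iface.bits[fname] = True
    return True

def unlock(ring, fname):
    # Clearing the bit on every machine marks the update as complete.
    for iface in ring:
        iface.bits[fname] = False

ring = [Interface() for _ in range(4)]
first = try_lock(ring, "report")    # succeeds: no one holds the lock
second = try_lock(ring, "report")   # fails: the bit is already set everywhere
unlock(ring, "report")
```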
2) Scalability:
 A definite trend in distributed systems is towards larger and larger systems.
Algorithms that work well for 100 machines may work poorly for 1,000 or
10,000 machines.
 To avoid this problem, the system must be partitioned into smaller units, each
kept relatively independent of the others.
 Having one server per unit scales much better than a single server for the whole
system. Resources and algorithms should not be linear in the number of users, so
having a server maintain a linear list of users for protection is not a good idea.
 Hash tables are acceptable, since their access time is essentially constant,
independent of the number of entries.
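The point about linear lists versus hash tables can be made concrete. In this small
illustration (all names made up), both lookups return the same answer, but the list search
cost grows with the number of users while the dict lookup cost does not.

```python
# Linear list vs. hash table for a per-user protection lookup.

users = [("user%d" % i, "read-write") for i in range(10000)]
user_list = list(users)     # linear list: search time grows with n
user_table = dict(users)    # hash table: near-constant lookup time

def rights_linear(name):
    # O(n): must scan the list entry by entry.
    for user, rights in user_list:
        if user == name:
            return rights
    return None

def rights_hashed(name):
    # O(1) on average, independent of how many users are registered.
    return user_table.get(name)
```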

3) Wide area networking:


 Most current work on distributed systems focuses on LAN-based systems. In the
future, LAN-based distributed systems will be interconnected to form transparent
distributed systems covering whole countries and continents.
 In most wide area networks, a large variety of equipment is encountered, because
there are multiple buyers with different budgets, and purchases are spread over
many years of rapid technological change.
 Thus wide area distributed systems will have to deal with heterogeneity.
4) Mobile users:
Portable computers are the fastest growing segment of the computer business. Laptop,
notebook, and pocket computers can be found everywhere these days, running a wide
variety of applications around the world. With fast-growing network technology, mobile
users are able to download the files or documents they need in very little time.
5) Fault tolerance:
Currently, when a computer goes down, its users must wait patiently until the system is
restarted. The trend, however, is toward systems that continue to work even after faults
appear, and such fault-tolerant systems are rapidly coming into existence.
6) Multimedia:
New applications, such as video-based ones, are being developed that let people around
the world work together easily. Supporting high-resolution video and graphics
efficiently places heavy new demands on distributed file systems.
