
OS CASE STUDY (REVIEW)

Group members: Mayank (16113) & Saloni (16107)


Course: B.Sc. (Hons) Computer Science, Section B
Subject: Operating System

INTRODUCTION:
An operating system is the most important software that runs on a computer. It manages the
computer's memory and processes, as well as all of its software and hardware. It also allows you
to communicate with the computer without knowing how to speak the computer's language. Without
an operating system, a computer is useless.

The operating system's job

Your computer's operating system (OS) manages all of the software and hardware on the computer.
Most of the time, there are several different computer programs running at the same time, and they all
need to access your computer's central processing unit (CPU), memory, and storage. The operating
system coordinates all of this to make sure each program gets what it needs.

History of Operating Systems

 The first computer, the Z1, was built between 1936 and 1938. It ran without an operating system.
 The first operating system appeared about twenty years later, in 1956.
 In the late 1960s, Bell Labs began building UNIX, an early multitasking, multi-user operating system.
 In 1977, the Apple series came into existence; Apple DOS 3.3 became its disk operating system.
 In 1981, Microsoft released its first operating system, MS-DOS, after purchasing the 86-DOS software from a Seattle company.
 The famous Microsoft Windows came into existence in 1985, when MS-DOS was paired with a graphical user interface (GUI).

Functions of Operating System

 Processor Management: An operating system manages the processor's work by allocating various jobs to it and ensuring that each process receives enough time from the processor to function properly.
 Memory Management: An operating system manages the allocation and deallocation of memory to various processes and ensures that one process does not consume the memory allocated to another.
 Device Management: There are various input and output devices. An OS controls the working of these input-output devices. It receives requests from these devices, performs the specific task, and communicates back to the requesting process.
 File Management: An operating system keeps track of information regarding the creation, deletion, transfer, copying, and storage of files in an organized way. It also maintains the integrity of the data stored in these files, including the file directory structure, by protecting against unauthorized access.
 Security: The operating system provides various techniques that assure the integrity and confidentiality of user data. The following security measures are used to protect user data:
 Protection against unauthorized access through login.
 Protection against intrusion by keeping the firewall active.
 Protecting system memory against malicious access.
 Displaying messages related to system vulnerabilities.
 Error Detection: From time to time, the operating system checks the system for any external threat or malicious software activity. It also checks the hardware for any type of damage and displays alerts so that appropriate action can be taken against any damage caused to the system.
 Job Scheduling: In a multitasking OS where multiple programs run simultaneously, the operating system determines which applications should run, in which order, and how much time should be allocated to each application.
Linux
Linux is a family of open-source operating systems, which means they can be modified and distributed by anyone around the world. This is different from proprietary software like Windows, which can only be modified by the company that owns it. The advantages of Linux are that it is free and that there are many different distributions, or versions, to choose from.
According to StatCounter Global Stats, Linux accounts for less than 2% of desktop operating system usage. However, most servers run Linux because it is relatively easy to customize. Linux is used by developers, businesses, and individuals who prefer an open-source, customizable operating system that can be tailored to specific needs.

Process Management in Linux


A process is a program in execution. It generally takes an input, processes it, and produces the appropriate output.
There are basically two types of processes:
1. Foreground processes: Also known as interactive processes, these are initiated by the user or programmer and cannot be started by system services. Such processes take input from the user and return output. While a foreground process is running, we cannot directly start a new process from the same terminal.
2. Background processes: Also known as non-interactive processes, these are initiated by the system itself or by users, and they can also be managed by users. Each background process has a unique process ID (PID) assigned to it, and we can start other processes from the same terminal from which it was started.

1. Process Management in Linux:

Process Creation:

When you open an application or run a command in Linux, it initiates a new process. Each
process is assigned a unique identifier called a Process ID (PID).
The operating system tracks and manages these processes, allocating resources like CPU
time and memory to them.
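A minimal sketch in C of how a new process comes into being, using the fork() and execlp() system calls (running ls is just an illustrative choice):

/* process_create.c - fork a child and replace it with a new program */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();              /* create a new process */

    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {
        /* child: its own unique PID is visible via getpid() */
        printf("child PID = %d, parent PID = %d\n", getpid(), getppid());
        execlp("ls", "ls", "-l", (char *)NULL);  /* replace the child's image */
        perror("execlp");            /* only reached if exec fails */
        _exit(EXIT_FAILURE);
    } else {
        /* parent: wait for the child to finish */
        waitpid(pid, NULL, 0);
        printf("parent %d reaped child %d\n", getpid(), pid);
    }
    return 0;
}

The kernel assigns the child its own PID, and the parent can wait for it to terminate before continuing.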

Process States:
As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. A process may be in one of the following states:
• New: The process is being created.
• Running: Instructions are being executed.
• Waiting: The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
• Ready: The process is waiting to be assigned to a processor.
• Terminated: The process has finished execution.

Processes can terminate in different ways:


Voluntarily: When a process completes its task.
Due to Error: If a process encounters an issue, it may terminate unexpectedly.
Manual Termination: Users can stop processes using the kill command, which sends
specific signals to processes.
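As a small illustration, the C sketch below installs a handler for SIGTERM, the default signal that the kill command sends, so the process can notice the request and exit cleanly:

/* sigterm_demo.c - catch SIGTERM, the default signal sent by `kill <pid>` */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t stop = 0;

static void on_term(int sig) {
    (void)sig;
    stop = 1;                        /* request a clean, voluntary exit */
}

int main(void) {
    struct sigaction sa = {0};       /* zero-initialise, then set the handler */
    sa.sa_handler = on_term;
    sigaction(SIGTERM, &sa, NULL);

    printf("running as PID %d; terminate with: kill %d\n", getpid(), getpid());
    while (!stop)
        pause();                     /* sleep until a signal arrives */

    puts("received SIGTERM, exiting cleanly");
    return 0;
}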
Priority and Scheduling:

Linux uses a priority system to manage processes. Higher-priority processes get more
CPU time.
The nice command allows users to adjust a process's priority.
The scheduler makes decisions about which process to run next based on priorities and the
process state.
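A sketch of the same idea from C, using the nice() and getpriority() calls that the nice command builds on (the increment of 10 is an arbitrary example):

/* nice_demo.c - lower this process's priority, as `nice -n 10 <cmd>` would */
#include <errno.h>
#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void) {
    errno = 0;
    int new_nice = nice(10);         /* add 10 to this process's nice value */
    if (new_nice == -1 && errno != 0) {
        perror("nice");
        return 1;
    }
    /* Confirm via getpriority(); a higher nice value means a lower priority. */
    int prio = getpriority(PRIO_PROCESS, 0);
    printf("nice value is now %d (reported priority %d)\n", new_nice, prio);
    return 0;
}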
Communication Between Processes:

Processes can communicate through Inter-Process Communication (IPC) mechanisms.


Common IPC methods include:
Pipes: Used for one-way communication between processes.
Signals: Used to notify a process of an event.
Sockets: Enable communication over networks.
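A minimal pipe sketch in C, assuming one-way communication from parent to child:

/* pipe_demo.c - one-way parent-to-child communication through a pipe */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                       /* fd[0] = read end, fd[1] = write end */
    char buf[64];

    if (pipe(fd) == -1) {
        perror("pipe");
        return 1;
    }
    if (fork() == 0) {
        /* child: reads what the parent writes */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n < 0)
            n = 0;
        buf[n] = '\0';
        printf("child received: %s\n", buf);
        close(fd[0]);
        return 0;
    }
    /* parent: writes a message into the pipe */
    close(fd[0]);
    const char *msg = "hello from the parent";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}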

2. Memory Management in Linux:


Virtual Memory:

Virtual memory allows Linux to use a combination of RAM and space on the hard drive as
if it were a single, larger memory.
It helps prevent running out of physical RAM by temporarily moving less-used data to disk.
Memory Allocation:

When a process needs memory, Linux allocates a portion of memory to it.


The memory allocated is often split into code, data, and stack segments for the process.

Page Tables:
Page tables are data structures that map virtual addresses used by processes to physical
memory addresses.
They help in the efficient management of memory and ensure that processes do not interfere
with each other.

Swapping:
When physical memory becomes limited, Linux moves some data from RAM to a special
area on the hard drive called swap space.
This frees up RAM for other processes, and swapped-out data can be retrieved when
needed.

Shared Memory:
Processes can share portions of memory to facilitate communication and data sharing.
This is commonly used in multi-process applications and for inter-process collaboration.

3. File Management in Linux:


File System Hierarchy:

The Linux file system follows a hierarchical structure starting with the root directory ("/").
Directories contain files and can have subdirectories, creating a tree-like organization.

File Types:
Linux recognizes various file types, including:
Regular files: Contain data, such as text or binary information.
Directories: Organize and store files and other directories.
Symbolic links: Point to other files or directories.
Device files: Represent hardware devices like hard drives and printers.

File Permissions:
Linux employs a permission system that defines who can access and modify files.
Permissions are categorized into read (r), write (w), and execute (x) for the owner, group,
and others.
You can use chmod to change file permissions and chown to change file ownership.
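For illustration, the same kind of permission change can be made programmatically with the chmod() system call; the file name notes.txt below is hypothetical:

/* perm_demo.c - set and read permission bits on a file (file name is illustrative) */
#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    const char *path = "notes.txt";  /* hypothetical file */

    /* rw- for owner, r-- for group and others, i.e. the same as chmod 644 */
    if (chmod(path, S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH) == -1) {
        perror("chmod");
        return 1;
    }

    struct stat st;
    if (stat(path, &st) == -1) {
        perror("stat");
        return 1;
    }
    /* Print the low nine permission bits in octal, as ls -l would summarise. */
    printf("%s mode = %o\n", path, st.st_mode & 0777);
    return 0;
}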

File Operations:
Linux provides various commands for managing files:
mkdir: Create directories.
touch: Create empty files.
cp: Copy files and directories.
mv: Move or rename files and directories.
rm: Remove files and directories.



Mounting:
Mounting is the process of connecting external devices, network shares, or file systems to
the existing directory structure.
It allows you to access the contents of these devices as if they were part of the file system.
Inodes:

Inodes are data structures that store metadata about files, such as ownership, permissions,
timestamps, and the location of the actual data blocks on disk.
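A small sketch that reads this inode metadata with the stat() system call (the file name notes.txt is again hypothetical):

/* inode_demo.c - read the metadata an inode stores for a file */
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

int main(void) {
    const char *path = "notes.txt";  /* hypothetical file */
    struct stat st;

    if (stat(path, &st) == -1) {
        perror("stat");
        return 1;
    }
    printf("inode number : %lu\n", (unsigned long)st.st_ino);
    printf("owner uid    : %u\n",  (unsigned)st.st_uid);
    printf("size (bytes) : %lld\n", (long long)st.st_size);
    printf("hard links   : %lu\n", (unsigned long)st.st_nlink);
    printf("last modified: %s",    ctime(&st.st_mtime));
    return 0;
}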
File Streams:

Every Linux process starts with three standard streams: standard input (stdin), standard output (stdout), and standard error (stderr).
These streams let a process receive input, produce output, and report errors separately.
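A minimal C example that uses all three streams:

/* streams_demo.c - the three standard streams every process starts with */
#include <stdio.h>

int main(void) {
    char name[64];

    fprintf(stdout, "Enter your name: ");               /* normal output */
    if (fgets(name, sizeof(name), stdin) == NULL) {     /* read from input */
        fprintf(stderr, "error: no input received\n");  /* error channel */
        return 1;
    }
    fprintf(stdout, "Hello, %s", name);
    return 0;
}

Redirecting the two output channels separately (for example ./a.out > out.log 2> err.log) shows that stdout and stderr are distinct streams.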

File Management in Linux


In Linux, most operations are performed on files. To organize these files, Linux uses directories (also known as folders), which are maintained in a tree-like structure; these directories are themselves a type of file. Linux has three types of files:
1. Regular Files: This is the most common file type in Linux. It includes text files, images, binary files, and so on. Such files can be created using the touch command, and they make up the majority of files on a Linux/UNIX system. A regular file may contain ASCII or human-readable text, executable program binaries, program data, and much more.
2. Directories: Windows calls directories folders. These are files that store a list of file names and related information. The root directory (/) is the base of the system, /home/ is the default location for users' home directories, /bin holds essential user binaries, /boot holds static boot files, and so on. New directories can be created with the mkdir command.
3. Special Files: These represent real physical devices, such as printers, and are used for I/O operations. Device or special files are used for device input/output (I/O) on UNIX and Linux systems, and they appear in the file system like ordinary directories or files.
In Unix systems, there are two types of special files for each device: character special files and block special files.

File and Directory Management:


Basic file and directory operations include:
mkdir (create a directory)
touch (create an empty file)
cp (copy files)
mv (move/rename files)
rm (remove files)
Most of these commands accept flags for additional options (e.g., -r for recursive operations).
File Permissions:
File permissions are viewed with ls -l and modified with chmod, while ownership is changed with chown.
The permission symbols (e.g., rwx) indicate read, write, and execute access for the owner, group, and others.

Package Management:
The apt package manager installs and updates software on Debian-based distributions. Common package management commands include:
sudo apt update (update the package list)
sudo apt install (install software)
sudo apt remove (uninstall software)

Process Memory Management in Linux

Process memory management is a crucial aspect of any operating system. In Linux, the memory management system is designed to manage memory usage efficiently, allowing processes to access and use the memory they require while preventing them from accessing memory they do not own. In this section, we discuss process memory management in Linux in detail, covering aspects such as memory allocation, virtual memory, memory mapping, and more.
Memory Allocation

Memory allocation is the process of assigning memory to a process or program. In Linux, the kernel supports two main methods of memory allocation: static and dynamic.

Static Memory Allocation

Static memory allocation is done at compile time, where the memory allocated to a program is fixed and cannot be changed during runtime. The memory is allocated in the program's data section or stack segment. The data section contains global and static variables, while the stack segment contains local variables.
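A short illustrative C snippet showing where statically allocated variables live:

/* static_alloc.c - where statically allocated variables live */
#include <stdio.h>

int counter = 0;                 /* global: data segment, size fixed at compile time */
static char banner[] = "hello";  /* file-scope static: also in the data segment */

int main(void) {
    static int calls = 0;        /* function-scope static: data segment, persists across calls */
    int local = 42;              /* local variable: stack segment */

    calls++;
    printf("%s %d %d %d\n", banner, counter, calls, local);
    return 0;
}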

Dynamic Memory Allocation

Dynamic memory allocation is done during runtime, where the memory allocated to a program can be adjusted based on the program's requirements. The C library provides functions such as malloc(), calloc(), and realloc() to allocate memory dynamically; under the hood they obtain memory from the kernel through system calls such as brk() and mmap(). These functions allocate memory from the heap segment of the program's address space.
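A minimal sketch of heap allocation with malloc(), realloc(), and free():

/* dynamic_alloc.c - heap allocation with malloc(), realloc() and free() */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* request space for 4 integers on the heap */
    int *data = malloc(4 * sizeof *data);
    if (data == NULL) {
        perror("malloc");
        return 1;
    }
    for (int i = 0; i < 4; i++)
        data[i] = i * i;

    /* grow the block to 8 integers; the first 4 values are preserved */
    int *bigger = realloc(data, 8 * sizeof *bigger);
    if (bigger == NULL) {
        free(data);
        perror("realloc");
        return 1;
    }
    data = bigger;
    printf("data[3] = %d\n", data[3]);

    free(data);                  /* return the memory to the allocator */
    return 0;
}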

Virtual Memory

Virtual memory is a memory management technique that allows a program to use more memory than is physically available in the system. In Linux, virtual memory is implemented using a combination of hardware and software. The hardware component is the Memory Management Unit (MMU), which is responsible for translating virtual memory addresses to physical memory addresses. The software component is the kernel's virtual memory manager, which manages the allocation and deallocation of virtual memory.
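One way to observe the virtual address space in practice is to read /proc/self/maps, which lists every mapped region of the calling process; a short sketch:

/* vm_maps.c - peek at this process's own virtual address space layout */
#include <stdio.h>

int main(void) {
    /* /proc/self/maps lists every mapped region: code, heap, stack, libraries */
    FILE *maps = fopen("/proc/self/maps", "r");
    if (maps == NULL) {
        perror("fopen");
        return 1;
    }
    char line[256];
    while (fgets(line, sizeof(line), maps) != NULL)
        fputs(line, stdout);     /* each line: address range, permissions, backing file */
    fclose(maps);
    return 0;
}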

Memory Mapping

Memory mapping is a technique that allows a process to access a file's contents as if they were part of the process's memory. In Linux, memory mapping is implemented using the mmap() system call, which maps a file into a process's virtual address space, allowing the process to read and write the file's contents as if they were part of its own memory. Memory mapping is commonly used in applications such as databases and multimedia players, where large files need to be accessed efficiently.
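A sketch of mmap() that maps a hypothetical, non-empty file notes.txt read-only and prints its contents:

/* mmap_demo.c - map a file into memory and read it like an array */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    const char *path = "notes.txt";      /* hypothetical file, assumed non-empty */
    int fd = open(path, O_RDONLY);
    if (fd == -1) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) == -1) { perror("fstat"); close(fd); return 1; }

    /* map the whole file read-only into this process's address space */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* the file's bytes are now ordinary memory */
    fwrite(p, 1, st.st_size, stdout);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}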

Shared Memory

Shared memory is a technique that allows multiple processes to access the same portion of memory. In Linux, System V shared memory is implemented using the shmget(), shmat(), and shmdt() system calls. The shmget() system call creates a shared memory segment, shmat() attaches the segment to a process's address space, and shmdt() detaches it. Shared memory is commonly used in inter-process communication, where multiple processes need to share data efficiently.
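A minimal sketch of the System V calls named above, with a parent and child sharing one segment:

/* shm_demo.c - parent and child share one memory segment (System V API) */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* create a private 4 KiB segment */
    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    if (shmid == -1) { perror("shmget"); return 1; }

    char *mem = shmat(shmid, NULL, 0);      /* attach to our address space */
    if (mem == (void *)-1) { perror("shmat"); return 1; }

    if (fork() == 0) {
        /* child writes into the shared segment */
        strcpy(mem, "written by the child");
        shmdt(mem);
        return 0;
    }
    wait(NULL);                             /* wait for the child */
    printf("parent reads: %s\n", mem);      /* same underlying memory */

    shmdt(mem);                             /* detach ... */
    shmctl(shmid, IPC_RMID, NULL);          /* ... and remove the segment */
    return 0;
}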

Swapping

Swapping is a technique that allows the kernel to move pages of memory from RAM to swap space on disk when the system's memory is low. In Linux, swapping is implemented using a combination of hardware and software. The hardware component is the disk, which provides the swap space. The software component is the kernel's page-reclaim code (the kswapd daemon), which manages the swapping process. When the system's memory runs low, the kernel selects pages of memory to swap out to disk, freeing up RAM for other processes.

Some additional concepts to consider include:

Kernel Memory Management

The Linux kernel itself also requires memory management, and it uses a separate set of techniques to manage kernel memory. Kernel memory is used to store the data structures and code required by the kernel to operate. The kernel uses techniques like memory mapping, page caching, and dedicated memory allocators to manage kernel memory.

Memory Protection

Memory protection is another critical aspect of memory management in Linux. Memory protection techniques prevent processes from accessing memory they are not authorized to access. The MMU implements memory protection by using page tables, which map virtual memory addresses to physical memory addresses and track the permissions of each memory page.

Memory Fragmentation

Memory fragmentation occurs when available memory is divided into small, non-contiguous chunks, making it difficult to allocate larger blocks of memory. Fragmentation can lead to performance issues and even crashes if the system runs out of memory. The Linux kernel uses several techniques to manage memory fragmentation, including memory compaction and defragmentation.

Memory Leak Detection

Failing to release dynamically allocated memory results in memory leaks, where memory is never returned to the system and can eventually cause a program to crash due to insufficient memory. Detecting and fixing memory leaks is crucial for maintaining system stability and performance. Linux provides several tools for detecting memory leaks, including valgrind, which can detect memory leaks and other memory-related issues.
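As an illustration, the snippet below leaks a single allocation on purpose; compiled with gcc -g and run under valgrind --leak-check=full, valgrind reports the unfreed block:

/* leak_demo.c - a deliberate leak for valgrind to report */
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *buf = malloc(100);         /* allocated ... */
    if (buf == NULL)
        return 1;
    strcpy(buf, "this block is never freed");
    return 0;                        /* ... but never passed to free(): a leak */
}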

Conclusion

In conclusion, process memory management is a crucial aspect of any operating system, and Linux is no exception. The Linux kernel provides a robust and efficient memory management system, allowing processes to access and use the memory they require while preventing them from accessing memory they do not own. In this case study, we discussed various aspects of process memory management in Linux, including memory allocation, virtual memory, memory mapping, shared memory, and swapping. Understanding these concepts is essential for any Linux developer or administrator who wants to manage memory usage efficiently.
