Embedded Mod III & V Notes

The document discusses Direct Memory Access (DMA) and device drivers in embedded systems, highlighting the importance of DMA for efficient data transfer without CPU intervention. It explains the roles of device drivers in managing hardware and handling interrupts, detailing their functions and types, including architecture-specific and generic drivers. Additionally, it covers interrupt handling mechanisms, prioritization, context switching, and the memory map concept in relation to memory device drivers.

Module 3

Reference
1. Embedded systems architecture, Tammy Noergaard
2. Embedded Systems: Architecture, Programming and Design, Raj Kamal

Direct Memory Access


DMA is required when a multi-byte data set, a burst of data, or a block of data is to be transferred between an external device and the system, or between two systems. A device that facilitates DMA transfer with a processing element (a single-purpose processor) is called a DMAC (DMA Controller).
The DMA-based method is useful when a block of bytes is transferred, for example, from disk to RAM or from RAM to disk. Repeatedly interrupting the processor for the transfer of every byte during a bulk transfer of data would waste too much processor time in context switching.
System performance improves when transfers from and to the peripherals (for example, between camera memory and a USB port) are processed separately.

After an ISR initiates and programs the DMAC, the DMAC sends a hold request to the CPU, and the CPU acknowledges it if the system memory buses are free for use.

Three modes:
Single transfer at a time, then release of the hold on the system bus.
Burst transfer at a time, then release of the hold on the system bus; a burst may be a few kB.
Bulk transfer, then release of the hold on the system bus after the transfer is completed.

DMA proceeds without CPU intervention except (i) at the start, for programming and initializing the DMAC, and (ii) at the end. Whenever an external device makes a DMA request to the DMAC, the DMAC interrupts the CPU at the start to initiate the DMA transfer and again at the end to notify (using an interrupt signal) that the DMA has completed.


When a DMA controller is used to transfer a block of bytes:
● ISRs are not called during the transfer of the bytes.
● An ISR is called only at the beginning of the transfer, to program the controller (DMAC).
● Another ISR is called only at the end of the transfer.
The ISR that initiates the DMA (Direct Memory Access) for the interrupting source simply programs the DMA registers with:
● the command (mode of transfer ─ bulk, burst, or single bytes),
● the data count (number of bytes to be transferred),
● the memory block address where access to the data is made, and
● the start address of the external device on the I/O bus.
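The ISR's programming of the DMAC can be sketched as below. The register names, layout, and mode values are illustrative assumptions for a hypothetical controller, not those of any particular chip; on real hardware the struct would be mapped to a fixed address (e.g. `#define DMAC ((dmac_regs_t *)0xFFF00000)`, also hypothetical).

```c
#include <stdint.h>

/* Hypothetical DMAC register block (names and layout are illustrative). */
typedef struct {
    volatile uint32_t command;     /* mode of transfer: bulk, burst, or single bytes */
    volatile uint32_t data_count;  /* number of bytes to transfer */
    volatile uint32_t mem_addr;    /* memory block address for the data */
    volatile uint32_t dev_addr;    /* start address of the external device on the I/O bus */
} dmac_regs_t;

enum { DMA_MODE_SINGLE = 0, DMA_MODE_BURST = 1, DMA_MODE_BULK = 2 };

/* What the initiating ISR does: program the four DMAC registers, then let
 * the transfer proceed without further CPU intervention. */
static void dmac_program(dmac_regs_t *dmac, uint32_t mode,
                         uint32_t count, uint32_t mem, uint32_t dev)
{
    dmac->data_count = count;
    dmac->mem_addr   = mem;
    dmac->dev_addr   = dev;
    dmac->command    = mode;   /* writing the command last starts the transfer */
}
```

After this ISR returns, the CPU resumes its instruction stream; the next ISR for this transfer runs only when the DMAC signals completion.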

Device Drivers, ISR, Interrupt Handling

Most embedded hardware requires some type of software initialization and management. The
software that directly interfaces with and controls this hardware is called a device driver. All
embedded systems that require software have, at the very least, device driver software in
their system software layer. Device drivers are the software libraries that initialize the
hardware and manage access to the hardware by higher layers of software. Device drivers
are the liaison between the hardware and the operating system, middleware, and application
layers. (See Figure.)

Different types of hardware will have different device driver requirements that need to be met.
● Even the same type of hardware, such as Flash memory, created by different manufacturers can require substantially different device driver software libraries to support it within the embedded device.

The types of hardware components needing the support of device drivers vary from board to board.
Device drivers are typically considered either architecture-specific or generic. A device driver
that is architecture-specific manages the hardware that is integrated into the master
processor (the architecture). Examples of architecture-specific drivers that initialize and
enable components within a master processor include on-chip memory, integrated memory
managers (memory management units (MMUs)), and floating-point hardware. A device driver
that is generic manages hardware that is located on the board and not integrated onto the
master processor. In a generic driver, there are typically architecture-specific portions of
source code, because the master processor is the central control unit and to gain access to
anything on the board usually means going through the master processor. However, the
generic driver also manages board hardware that is not specific to that particular processor, which means that a generic driver can be configured to run on a variety of architectures that contain the related board hardware for which the driver is written.

Regardless of the type of device driver or the hardware it manages, all device drivers are
generally made up of all or some combination of the following functions:
● Hardware Startup: initialization of the hardware upon PowerON or reset.
● Hardware Shutdown: configuring hardware into its PowerOFF state.
● Hardware Disable: allowing other software to disable hardware on-the-fly.
● Hardware Enable: allowing other software to enable hardware on-the-fly.
● Hardware Acquire: allowing other software to gain singular (locking) access to
hardware.
● Hardware Release: allowing other software to free (unlock) hardware.
● Hardware Read: allowing other software to read data from hardware.
● Hardware Write: allowing other software to write data to hardware.
● Hardware Install: allowing other software to install new hardware on-the-fly.
● Hardware Uninstall: allowing other software to remove installed hardware on-the-fly.
● Hardware Mapping: allowing for address mapping to and from hardware storage
devices when reading, writing, and/or deleting data.
● Hardware Unmapping: allowing for unmapping (removing) blocks of data from
hardware storage devices.
Device drivers may have additional functions, but some or all of the functions shown above
are what device drivers inherently have in common. These functions are based upon the
software’s implicit perception of hardware, which is that hardware is in one of three states at
any given time—inactive, busy, or finished. Hardware in the inactive state is interpreted as
being either disconnected (thus the need for an install function), without power (hence the
need for an initialization routine), or disabled (thus the need for an enable routine). The busy
and finished states are active hardware states, as opposed to inactive; thus the need for
uninstall, shutdown, and/or disable functionality. Hardware that is in a busy state is actively
processing some type of data and is not idle, and thus may require some type of release
mechanism. Hardware that is in the finished state is in an idle state, which then allows for
acquisition, read, or write requests, for example.

Again, device drivers may have all or some of these functions, and can integrate some of
these functions into single larger functions. Each of these driver functions typically has code
that interfaces directly to the hardware and code that interfaces to higher layers of software.
In some cases, the distinction between these layers is clear, while in other drivers, the code
is tightly integrated (see figure).
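One common way to organize the driver functions listed above is a table of function pointers that higher software layers call through, so the hardware-facing code stays behind a uniform interface. The sketch below is illustrative (the names are not a specific vendor API), with a trivial driver plugged in for demonstration.

```c
/* Illustrative device driver interface: one slot per driver function. */
typedef struct {
    int (*startup)(void);                   /* Hardware Startup: init on PowerON/reset */
    int (*shutdown)(void);                  /* Hardware Shutdown: enter PowerOFF state */
    int (*enable)(void);                    /* Hardware Enable (on-the-fly) */
    int (*disable)(void);                   /* Hardware Disable (on-the-fly) */
    int (*read)(void *buf, int len);        /* Hardware Read */
    int (*write)(const void *buf, int len); /* Hardware Write */
} device_driver_t;

/* A trivial demo driver whose functions simply report success. */
static int ok(void) { return 0; }
static int rd(void *buf, int len) { (void)buf; return len; }
static int wr(const void *buf, int len) { (void)buf; return len; }

static const device_driver_t demo_driver = { ok, ok, ok, ok, rd, wr };
```

Higher layers call `demo_driver.startup()` or `demo_driver.read(...)` without knowing which board hardware sits underneath; swapping hardware means swapping the table, not the callers.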

Depending on the master processor, different types of software can execute in different
modes, the most common being supervisory and user modes. These modes essentially differ
in terms of what system components the software is allowed access to, with software
running in supervisory mode having more access (privileges) than software running in user
mode. Device driver code typically runs in supervisory mode.

Generic drivers include code that initializes and manages access to the remaining major components of the board, including board buses (I2C, PCI, PCMCIA, etc.), off-chip memory (controllers, level 2+ cache, Flash, etc.), and off-chip I/O (Ethernet, RS-232, display, mouse, etc.).

Device Drivers for Interrupt Handling

As discussed previously, interrupts are signals triggered by some event during the execution
of an instruction stream by the master processor. What this means is that interrupts can be
initiated asynchronously, for external hardware devices, resets, power failures, etc., or
synchronously, for instruction-related activities such as system calls or illegal instructions.
These signals cause the master processor to stop executing the current instruction stream
and start the process of handling (processing) the interrupt.
The software that handles interrupts on the master processor and manages interrupt
hardware mechanisms (i.e., the interrupt controller) consists of the device drivers for
interrupt handling. At least four of the 10 functions from the list of device driver functionality
introduced at the start of this chapter are supported by interrupt-handling device drivers,
including:

● Interrupt Handling Startup: initialization of the interrupt hardware (interrupt controller, activating interrupts, etc.) upon PowerON or reset.
● Interrupt Handling Shutdown: configuring interrupt hardware (interrupt controller, deactivating interrupts, etc.) into its PowerOFF state.
● Interrupt Handling Disable: allowing other software to disable active interrupts on-the-fly (not allowed for non-maskable interrupts (NMIs), which are interrupts that cannot be disabled).
● Interrupt Handling Enable: allowing other software to enable inactive interrupts on-the-fly.
Plus one additional function unique to interrupt handling:

● Interrupt Handler Servicing: the interrupt handling code itself, which is executed after
the interruption of the main execution stream (this can range in complexity from a
simple non-nested routine to nested and/or reentrant routines).
How startup, shutdown, disable, enable, and service functions are implemented in software
usually depends on the following criteria:

● The types, number, and priority levels of interrupts available (determined by the
interrupt hardware mechanisms on-chip and on-board).
● How interrupts are triggered.
● The interrupt policies of components within the system that trigger interrupts, and the
services provided by the master CPU processing the interrupts.

The three main types of interrupts are software, internal hardware, and external hardware.
Software interrupts are explicitly triggered internally by some instruction within the current
instruction stream being executed by the master processor. Internal hardware interrupts, on
the other hand, are initiated by an event that is a result of a problem with the current
instruction stream that is being executed by the master processor because of the features (or
limitations) of the hardware, such as illegal math operations (overflow, divide-by-zero),
debugging (single-stepping, breakpoints), and invalid instructions (opcodes). Interrupts that
are raised (requested) by some internal event to the master processor (basically, software
and internal hardware interrupts) are also commonly referred to as exceptions or traps.
Exceptions are internally generated hardware interrupts triggered by errors that are detected
by the master processor during software execution, such as invalid data or a divide by zero.
How exceptions are prioritized and processed is determined by the architecture. Traps are
software interrupts specifically generated by the software, via an exception instruction.
Finally, external hardware interrupts are interrupts initiated by hardware other than the
master CPU (board buses, I/O, etc.)

For interrupts that are raised by external events, the master processor is either wired via an
input pin(s) called an IRQ (Interrupt Request Level) pin or port, to outside intermediary
hardware (e.g., interrupt controllers), or directly to other components on the board with
dedicated interrupt ports, that signal the master CPU when they want to raise the interrupt.
These types of interrupts are triggered in one of two ways: level-triggered or edge-triggered.
A level-triggered interrupt is initiated when its IRQ signal is at a certain level (i.e., HIGH or
LOW; see Figure 8-5a). These interrupts are processed when the CPU finds a request for a
level-triggered interrupt when sampling its IRQ line, such as at the end of processing each
instruction.
Edge-triggered interrupts are triggered when a change occurs on the IRQ line (from LOW to
HIGH/rising edge of signal or from HIGH to LOW/falling edge of signal; see Figure 8-5b). Once
triggered, these interrupts latch into the CPU until processed.

Both types of interrupts have their strengths and drawbacks. With a level-triggered interrupt,
as shown in the example in Figure 8-6a, if the request is being processed and has not been
disabled before the next sampling period, the CPU will try to service the same interrupt again.
On the flip side, if the level-triggered interrupt were triggered and then disabled before the
CPU’s sample period, the CPU would never note its existence and would therefore never
process it. Edge-triggered interrupts could have problems if they share the same IRQ line, if
they were triggered in the same manner at about the same time (say before the CPU could
process the first interrupt), resulting in the CPU being able to detect only one of the
interrupts (see Figure 8-6b).

Because of these drawbacks, level-triggered interrupts are generally recommended for interrupts that share IRQ lines, whereas edge-triggered interrupts are typically recommended for interrupt signals that are very short or very long.
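The sampling behavior behind these drawbacks can be sketched by simulating the IRQ line as a sequence of samples (one per instruction): a level-triggered request is seen at every sample while the line is HIGH, while an edge-triggered request latches only on a LOW-to-HIGH transition.

```c
/* Count how many service requests each triggering scheme sees for the same
 * waveform, where line[i] is the IRQ line level (0 or 1) at sample i. */
static void count_irqs(const int *line, int n, int *level_hits, int *edge_hits)
{
    int prev = 0;             /* assume the line starts LOW */
    *level_hits = 0;
    *edge_hits  = 0;
    for (int i = 0; i < n; i++) {
        if (line[i])          /* level: request seen at every sample while HIGH */
            (*level_hits)++;
        if (line[i] && !prev) /* edge: latched only on the rising edge */
            (*edge_hits)++;
        prev = line[i];
    }
}
```

For the waveform 0,1,1,1,0,1 the level-triggered scheme sees four requests (one per HIGH sample, i.e. the same interrupt re-serviced if not disabled in time) while the edge-triggered scheme sees two (one per rising edge), matching the trade-offs described above.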

At the point an IRQ of a master processor receives a signal that an interrupt has been raised,
the interrupt is processed by the interrupt-handling mechanisms within the system. These
mechanisms are made up of a combination of both hardware and software components. In
terms of hardware, an interrupt controller can be integrated onto a board, or within a
processor, to mediate interrupt transactions in conjunction with software.

Interrupt acknowledgment (IACK) is typically handled by the master processor when an external device triggers an interrupt. Because IACK cycles are a function of the local bus, the
IACK function of the master CPU depends on interrupt policies of system buses, as well as
the interrupt policies of components within the system that trigger the interrupts. With
respect to the external device triggering an interrupt, the interrupt scheme depends on
whether that device can provide an interrupt vector (a place in memory that holds the
address of an interrupt’s ISR (Interrupt Service Routine), the software that the master CPU
executes after the triggering of an interrupt). For devices that cannot provide an interrupt
vector, referred to as non-vectored interrupts, master processors implement an auto-
vectored interrupt scheme in which one ISR is shared by the non-vectored interrupts;
determining which specific interrupt to handle, interrupt acknowledgment, etc., are all
handled by the ISR software.
An interrupt-vectored scheme is implemented to support peripherals that can provide an
interrupt vector over a bus and where acknowledgment is automatic. An IACK-related
register on the master CPU informs the device requesting the interrupt to stop requesting
interrupt service, and provides what the master processor needs to process the correct
interrupt (such as the interrupt number and vector number). Based upon the activation of an
external interrupt pin, an interrupt controller’s interrupt select register, a device’s interrupt
select register, or some combination of the above, the master processor can determine which
ISR to execute. After the ISR completes, the master processor resets the interrupt status by
adjusting the bits in the processor’s status register or an interrupt mask in the external
interrupt controller. The interrupt request and acknowledgment mechanisms are determined
by the device requesting the interrupt (since it determines which interrupt service to trigger),
the master processor, and the system bus protocols.
Interrupt Priorities
Because there are potentially multiple components on an embedded board that may need to
request interrupts, the scheme that manages all of the different types of interrupts is priority-
based. This means that all available interrupts within a processor have an associated
interrupt level, which is the priority of that interrupt within the system. Typically, interrupts
starting at level “1” are the highest priority within the system and incrementally from there (2,
3, 4, etc.) the priorities of the associated interrupts decrease. Interrupts with higher levels
have precedence over any instruction stream being executed by the master processor,
meaning that not only do interrupts have precedence over the main program, but higher
priority interrupts have precedence over interrupts with lower priorities as well. When an interrupt is triggered, lower-priority interrupts are typically masked, meaning they are not allowed to trigger when the system is handling a higher-priority interrupt. The interrupt with the highest priority is usually called an NMI.
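The masking rule above can be expressed as a small predicate. The numbering convention follows the text (level 1 highest, larger numbers lower priority); the use of level 0 for "no interrupt being serviced" is an assumption for this sketch.

```c
/* Priority-based masking: while an interrupt at current_level is being
 * serviced, only strictly higher-priority requests may trigger.
 * Level 1 is the highest priority; level 0 means "nothing being serviced". */
static int irq_allowed(int request_level, int current_level)
{
    if (current_level == 0)               /* idle: any interrupt may trigger */
        return 1;
    return request_level < current_level; /* smaller number = higher priority */
}
```

An NMI would correspond to a request that this check can never mask, which is why it sits at the top of the priority scheme.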
Context Switching

After the hardware mechanisms have determined which interrupt to handle and have
acknowledged the interrupt, the current instruction stream is halted and a context switch is
performed, a process in which the master processor switches from executing the current
instruction stream to another set of instructions. This alternate set of instructions being
executed as the result of an interrupt is the ISR or interrupt handler. An ISR is simply a fast,
short program that is executed when an interrupt is triggered. The specific ISR executed for a
particular interrupt depends on whether a non-vectored or vectored scheme is in place. In the
case of a non-vectored interrupt, a memory location contains the start of an ISR that the PC
(program counter) or some similar mechanism branches to for all non-vectored interrupts.
The ISR code then determines the source of the interrupt and provides the appropriate
processing. In a vectored scheme, typically an interrupt vector table contains the address of
the ISR.

The steps involved in an interrupt context switch include stopping the current program’s
execution of instructions, saving the context information (registers, the PC, or similar
mechanism that indicates where the processor should jump back to after executing the ISR)
onto a stack, either dedicated or shared with other system software, and perhaps the
disabling of other interrupts. After the master processor finishes executing the ISR, it context
switches back to the original instruction stream that had been interrupted, using the context
information as a guide.
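The context-switch steps above can be sketched with the hardware state reduced to a small struct. The register names, the single save area, and the register-clobbering stand-in for the ISR's own register use are all illustrative; real context switches are done in architecture-specific assembly.

```c
#include <stdint.h>
#include <string.h>

/* Minimal stand-in for the CPU state that must survive an interrupt. */
typedef struct {
    uint32_t pc;        /* where to resume the interrupted stream */
    uint32_t sp;        /* stack pointer */
    uint32_t regs[4];   /* general-purpose registers */
} context_t;

static context_t saved;   /* stands in for the stack-based save area */
static int isr_ran = 0;
static void demo_isr(void) { isr_ran = 1; }   /* a trivial ISR for the demo */

static void save_context(const context_t *cpu) { saved = *cpu; }
static void restore_context(context_t *cpu)    { *cpu = saved; }

/* Steps from the text: save the context, run the ISR, restore the context. */
static void handle_interrupt(context_t *cpu, void (*isr)(void))
{
    save_context(cpu);            /* push PC, SP, registers to the save area */
    memset(cpu, 0, sizeof *cpu);  /* the ISR is now free to use the registers */
    isr();                        /* execute the interrupt service routine */
    restore_context(cpu);         /* resume the interrupted instruction stream */
}
```

After `handle_interrupt` returns, the context struct holds exactly the pre-interrupt values, which is what lets the original instruction stream continue as if nothing happened.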

Memory Map and Memory Device drivers


While all types of physical memory are two-dimensional arrays (matrices) made up of cells addressed by a unique row and column, the master processor and programmers view memory as a large one-dimensional array, commonly referred to as the memory map (see figure). In the memory map, each cell of the array is a row of bytes (8 bits) and the number of
bytes per row depends on the width of the data bus (8-, 16-, 32-, 64-bit, etc.). This, in turn,
depends on the width of the registers of the master architecture. When physical memory is
referenced from the software’s point-of-view it is commonly referred to as logical memory
and its most basic unit is the byte. Logical memory is made up of all the physical memory
(registers, ROM, and RAM) in the entire embedded system.
Figure. Sample Memory Map

The software must provide the processors in the system with the ability to access various
portions of the memory map. The software involved in managing the memory on the master
processor and on the board, as well as managing memory hardware mechanisms, consists of
the device drivers for the management of the overall memory subsystem. The memory
subsystem includes all types of memory management components, such as memory
controllers and MMU, as well as the types of memory in the memory map, such as registers,
cache, ROM, and DRAM. All or some combination of six of the 10 device driver functions
from the list of device driver functionality introduced at the start of this chapter are
commonly implemented, including:

● Memory Subsystem Startup: initialization of the hardware upon PowerON or reset (initialize translation lookaside buffers (TLBs) for the MMU, initialize/configure the MMU).
● Memory Subsystem Shutdown: configuring hardware into its PowerOFF state. (Note:
Under the MPC860, there is no necessary shutdown sequence for the memory
subsystem, so pseudocode examples are not shown.)
● Memory Subsystem Disable: allowing other software to disable hardware on-the-fly
(disabling cache).
● Memory Subsystem Enable: allowing other software to enable hardware on-the-fly
(enable cache).
● Memory Subsystem Write: storing in memory a byte or set of bytes (i.e., in cache,
ROM, and main memory).
● Memory Subsystem Read: retrieving from memory a “copy” of the data in the form of
a byte or set of bytes (i.e., in cache, ROM, and main memory).
Regardless of what type of data is being read or written, all data within memory is managed
as a sequence of bytes. While one memory access is limited to the size of the data bus,
certain architectures manage access to larger blocks (a contiguous set of bytes) of data,
called segments, and thus implement a more complex address translation scheme in which
the logical address provided via software is made up of a segment number (address of start
of segment) and offset (within a segment) which is used to determine the physical address of
the memory location.
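The segment-plus-offset translation described above reduces to a table lookup and an addition. The segment table below is an illustrative assumption (three segments with made-up base addresses), not any particular architecture's layout.

```c
#include <stdint.h>

/* Illustrative segment table: maps a segment number to the physical
 * start address of that segment. */
static const uint32_t segment_base[] = {
    0x00000000,  /* segment 0: e.g. code  */
    0x00100000,  /* segment 1: e.g. data  */
    0x00200000,  /* segment 2: e.g. stack */
};

/* Logical address = (segment number, offset within segment);
 * physical address = start of segment + offset. */
static uint32_t logical_to_physical(uint32_t segment, uint32_t offset)
{
    return segment_base[segment] + offset;
}
```

So the logical address (1, 0x44) resolves to physical address 0x00100044: the data segment's base plus the offset.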

The order in which bytes are retrieved or stored in memory depends on the byte-ordering scheme of an architecture. The two possible byte-ordering schemes are little-endian and big-endian. In little-endian mode, bytes are retrieved and stored lowest-order byte first, meaning the lowest byte sits at the lowest address (furthest to the left in a memory dump). In big-endian mode, bytes are accessed highest-order byte first, meaning the lowest byte sits at the highest address (furthest to the right).
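The byte ordering of the machine a program runs on can be observed directly by viewing a multi-byte value through a byte pointer:

```c
#include <stdint.h>
#include <string.h>

/* On a little-endian machine the lowest-order byte of 0x11223344 sits at
 * the lowest address (0x44 comes first); on a big-endian machine the
 * highest-order byte does (0x11 comes first). */
static int host_is_little_endian(void)
{
    uint32_t value = 0x11223344;
    uint8_t bytes[4];
    memcpy(bytes, &value, sizeof value);  /* view the value byte by byte */
    return bytes[0] == 0x44;              /* lowest-order byte at lowest address? */
}
```

This matters to memory device drivers because data moved between devices or buses with different endianness must have its bytes swapped on the way through.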

On-Board Bus Device Drivers

Associated with every bus is/are (1) some type of protocol that defines how devices gain
access to the bus (arbitration), (2) the rules attached devices must follow to communicate
over the bus (handshaking), and (3) the signals associated with the various bus lines. Bus
protocol is supported by the bus device drivers, which commonly include all or some
combination of all of the 10 functions from the list of device driver functionality introduced at
the start of this chapter, including:

● Bus Startup: initialization of the bus upon PowerON or reset.
● Bus Shutdown: configuring bus into its PowerOFF state.
● Bus Disable: allowing other software to disable bus on-the-fly.
● Bus Enable: allowing other software to enable bus on-the-fly.
● Bus Acquire: allowing other software to gain singular (locking) access to bus.
● Bus Release: allowing other software to free (unlock) bus.
● Bus Read: allowing other software to read data from bus.
● Bus Write: allowing other software to write data to bus.
● Bus Install: allowing other software to install new bus device on-the-fly for expandable
buses.
● Bus Uninstall: allowing other software to remove installed bus device on-the-fly for
expandable buses.
Which of the routines are implemented, and how they are implemented, depends on the actual bus. Below is an example of an I2C bus initialization routine, given as an example of a bus startup (initialization) device driver on the MPC860 microprocessor.

On-Board Bus Device Driver Example

I2C Bus Startup (Initialization) on the MPC860

The I2C (inter-IC) protocol is a serial bus protocol with one serial data line (SDA) and one serial clock line (SCL). With the I2C protocol, all devices attached to the bus have a unique address (identifier), and this identifier is part of the data stream transmitted over the SDA line.
The components on the master processor that support the I2C protocol are what need initialization. In the case of the MPC860, there is an integrated I2C controller on the master processor (see Figure 8-29). The I2C controller is made up of transmitter registers, receiver registers, a baud rate generator, and a control unit. The baud rate generator generates the clock signals when the I2C controller acts as the I2C bus master; if in slave mode, the controller uses the clock signal received from the master. In reception mode, data is transmitted from the SDA line into the control unit, through the shift register, which in turn transmits the data to the receive data register. The data that will be transmitted over the I2C bus from the PPC is initially stored in the transmit data register and transferred out through the shift register to the control unit and over the SDA line. Initializing the I2C bus on the MPC860 means initializing the I2C SDA and SCL pins, many of the I2C registers, some of the parameter RAM, and the associated buffer descriptors.
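The register-level flavor of such an initialization can be sketched as below. The register names (I2MOD, I2ADD, I2BRG, I2COM) follow the MPC860's I2C controller, but the struct layout and the bit values written here are simplified, illustrative assumptions; a real driver must use the documented register offsets and bit definitions, and must also set up the parameter RAM and buffer descriptors, which are omitted here.

```c
#include <stdint.h>

/* Simplified, illustrative view of the MPC860 I2C controller registers. */
typedef struct {
    volatile uint8_t i2mod;  /* mode register: enable bit, clock options */
    volatile uint8_t i2add;  /* this node's address on the I2C bus */
    volatile uint8_t i2brg;  /* baud rate generator divider for SCL */
    volatile uint8_t i2com;  /* command register: start transmit, etc. */
} i2c_regs_t;

/* Bus startup sketch: configure while disabled, then enable. */
static void i2c_init(i2c_regs_t *i2c, uint8_t own_addr, uint8_t brg_div)
{
    i2c->i2mod = 0x00;      /* disable the controller while configuring */
    i2c->i2add = own_addr;  /* unique identifier of this node on the bus */
    i2c->i2brg = brg_div;   /* SCL clock rate (used when acting as master) */
    i2c->i2mod = 0x01;      /* illustrative enable bit: turn controller on */
}
```

The disable-configure-enable ordering mirrors the general Bus Startup pattern: a bus should not be live while its address and clock parameters are still being written.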

Module 5
Inter-Process Communication and Synchronization of
Processes,Threads and Tasks

Process - Basic Concepts


1. A process consists of a sequentially executable program (code), the state of which is controlled by the OS.
2. The state during running of a process is represented by:
the process status (created, running, blocked, or finished),
the process structure ─ its data, objects, and resources, and
the process control block (PCB).
3. A process runs when it is scheduled to run by the OS (kernel). The OS gives control of the CPU on a process's request (system call). A process runs by executing instructions; continuous changes of its state take place as the program counter (PC) changes.

An application program can be said to consist of a number of processes.

A process is defined as an executing unit of computation that processes on a CPU and whose state is under the control of the kernel of an operating system.

E.g., processes in a mobile phone device's software:

1. Voice encoding and convoluting process,
2. Modulating process,
3. Display process,
4. GUIs (graphic user interfaces), and
5. Key input process ─ for provisioning of the user interrupts

Process Control Block


A data structure holding the information using which the OS controls the process state. It is stored in a protected memory area of the kernel.
It consists of information about the process state:
1. process ID, process priority, parent process (if any), child process (if any), and the address of the PCB of the next process that will run,
2. allocated program memory address blocks in physical memory and in secondary (virtual) memory for the process code,
3. allocated process-specific data address blocks,
4. allocated process-heap (data generated during the program run) addresses,
5. allocated process-stack addresses for the functions called during running of the process,
6. allocated addresses of the CPU register-save area for the process context, represented by the CPU registers, which include the program counter and stack pointer,
7. process-state signal mask [when the mask is set to 0 (active) the process is inhibited from running, and when reset to 1 the process is allowed to run],
8. signals (messages) dispatch table [process IPC functions],
9. OS-allocated resource descriptors (for example, file descriptors for open files, device descriptors for open (accessible) devices, device-buffer addresses and status, socket descriptors for open sockets), and
10. security restrictions and permissions.
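The PCB fields listed above can be sketched as a C struct. All field names, types, and array sizes here are illustrative assumptions; a real kernel's PCB is considerably more elaborate.

```c
#include <stdint.h>

/* Illustrative PCB; the comments map fields to the numbered list above. */
typedef struct pcb {
    uint32_t    pid;                /* 1. process ID */
    uint8_t     priority;           /* 1. process priority */
    struct pcb *parent;             /* 1. parent process, if any */
    struct pcb *child;              /* 1. child process, if any */
    struct pcb *next;               /* 1. PCB of the next process to run */
    void       *code_base;          /* 2. program memory blocks for the code */
    void       *data_base;          /* 3. process-specific data blocks */
    void       *heap_base;          /* 4. heap addresses */
    void       *stack_base;         /* 5. stack addresses */
    uint32_t    context[8];         /* 6. register-save area (PC, SP, ...) */
    uint32_t    signal_mask;        /* 7. process-state signal mask */
    int       (*dispatch[4])(void); /* 8. signal dispatch table (IPC) */
    int         fds[8];             /* 9. resource descriptors (files, devices, sockets) */
    uint32_t    permissions;        /* 10. security restrictions and permissions */
} pcb_t;
```

Because the PCB lives in protected kernel memory, user code never touches such a struct directly; the kernel reads and updates it on system calls and context switches.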

Context
The present CPU registers, which include the program counter and stack pointer, are called the context.
When the context is saved at the PCB-pointed process-stack and register-save area addresses, the running process stops. Another process's context then loads, and that process runs ─ this means the context has switched.

Thread
A thread is a process or sub-process within a process that has its own program counter, its own stack pointer and stack, and its own priority parameter for its scheduling by a thread scheduler.
Its variables load into the processor registers on context switching.
It has its own signal mask at the kernel.

A thread is considered a lightweight process and a process-level controlled entity.
[Lightweight means its running does not depend on system resources.]
A process is considered a heavyweight process and a kernel-level controlled entity.
A process can have multiple threads, which share the process structure.
A process thus can have code in secondary memory from which pages can be swapped into the physical primary memory during running of the process.
[Heavyweight means its running may depend on system resources.]

Thread's signal mask
When unmasked, it lets the thread activate and run.
When masked, the thread is put into a queue of pending threads.

Thread's Stack
A thread's stack is at a memory address block allocated by the OS.

Threads of a Process sharing Process Structure


Multiprocessing OS
A multiprocessing OS runs more than one process.
When a process consists of multiple threads, it is called a multithreaded process.
A thread can be considered a daughter process.
A thread defines a minimum unit of a multithreaded process that an OS schedules onto the CPU and allocates other system resources to.

Example ─ Multiple threads of Display process in Mobile Phone Device


Display_Time_Date thread ─ for displaying clock time and date.
Display_Battery thread ─ for displaying battery power.
Display_Signal thread ─ for displaying signal power for communication with
mobile service provider
Display_Profile thread ─ for displaying silent or sound-active mode.
Display_Message thread ─ for displaying unread message in the inbox.
Display_Call Status thread ─ for displaying call status: whether dialing or call waiting.
Display_Menu thread ─ for displaying menu.

Display threads can share the common memory blocks and resources allocated to the Display_Process.

Thread parameters
Each thread has independent parameters: an ID, priority, program counter, stack pointer, CPU registers, and its present status.
• Thread states ─ starting, running, blocked (sleep), and finished
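The sharing described above (each thread with its own stack, PC, and scheduling parameters, but common process memory) can be demonstrated with POSIX threads. This is a generic sketch, not the mobile phone display code itself: two threads of one process increment the same shared variable, with a mutex synchronizing access to it.

```c
#include <pthread.h>

/* Memory shared by all threads of the process, like the Display_Process
 * memory blocks shared by the display threads above. */
static int shared_counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&lock);    /* synchronize access to shared data */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

static int run_two_threads(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);  /* own stack, own PC */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return shared_counter;    /* both threads updated the same variable */
}
```

Each `worker` runs on its own stack with its own program counter, yet both see and update `shared_counter`, which is exactly the process-structure sharing the notes describe.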

Thread Control Block
A data structure holding the information using which the OS controls the thread state. It is stored in a protected memory area of the kernel and consists of information about the thread state.

Thread and Task


A thread is a concept used in Java or Unix.
A thread can either be a sub-process within a process or a process within an application program.
To schedule multiple processes, there are the concepts of forming thread groups and thread libraries.

TASK
Task is the term used for a process in RTOSes for embedded systems.
A task is a process, and the OS does the multitasking.
A task is a kernel-controlled entity, while a thread is a process-controlled entity.
[Since a task is similar to a process, everything applicable to a process is applicable to a task, with a change of term, e.g., TCB instead of PCB.]

An application program can be said to consist of a number of tasks.


A task is defined as an executing computational unit that processes on a
CPU and whose state is under the control of the kernel of an operating
system.

A task and its data include the task context and the TCB.


TCB ─ A data structure holding the information with which the OS
controls the task state. It is stored in a protected memory area of the
kernel and consists of the information about the task state.

Task Information in the TCB


Task ID, for example, a number between 0 and 255
task priority; if between 0 and 255, it is represented by a byte
parent task (if any)
child task (if any)
address of the TCB of the task that will run next
allocated program-memory address blocks in physical memory and in
secondary (virtual) memory for the task codes
allocated task-specific data address blocks
allocated task-heap (data generated during the program run) addresses
allocated task-stack addresses for the functions called during running of the
process
allocated addresses of the CPU register-save area for the task context (the CPU
registers, including the program counter and stack pointer)
task-state signal mask [when the mask is set to 0 (active) the task is inhibited
from running, and when reset to 1 the task is allowed to run]
task signals (messages) dispatch table [task IPC functions]
OS-allocated resource descriptors (for example, file descriptors for open
files, device descriptors for open (accessible) devices, device-buffer
addresses and status, socket descriptors for open sockets), and
security restrictions and permissions

Each task may be coded such that it is in an endless loop, waiting for an event
before its code starts running.
The event can be a message in a queue or mailbox,
a token or signal, or
the expiry of a delay period.
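The TCB fields listed above can be pictured as a C structure. Below is a minimal sketch; the field names, types and sizes are illustrative assumptions, not the layout of any particular RTOS:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative task states, as described in these notes */
typedef enum { TASK_STARTING, TASK_RUNNING, TASK_BLOCKED, TASK_FINISHED } task_state_t;

/* Hypothetical task control block; real RTOS TCBs differ in detail */
typedef struct tcb {
    uint8_t      task_id;      /* e.g. a number between 0 and 255 */
    uint8_t      priority;     /* 0..255, represented by a byte */
    struct tcb  *parent;       /* parent task, if any */
    struct tcb  *next;         /* TCB of the task that will run next */
    void        *code_base;    /* allocated program-memory block */
    void        *data_base;    /* task-specific data block */
    void        *stack_base;   /* task stack for called functions */
    size_t       stack_size;
    uint32_t     saved_pc;     /* register-save area: program counter */
    uint32_t     saved_sp;     /* register-save area: stack pointer */
    uint32_t     signal_mask;  /* task-state signal mask */
    task_state_t state;
} tcb_t;

/* Initialise a TCB to a known blocked state */
void tcb_init(tcb_t *t, uint8_t id, uint8_t prio) {
    t->task_id  = id;
    t->priority = prio;
    t->parent   = NULL;
    t->next     = NULL;
    t->state    = TASK_BLOCKED;
}
```

A kernel would keep one such structure per task in protected memory and chain them through the `next` field for scheduling.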

Task and its data including Context

Inter Process Communication

Processes executing concurrently in the operating system may be either
independent processes or cooperating processes. A process is independent if it
cannot be affected by the other processes executing in the system.

Inter-process communication (IPC) means that a process (scheduler, task or
ISR) generates some information by setting or resetting a token or value, or
generates an output, so that it lets another process take note of it, signals the
OS to start a process, or lets another process use it under the control of the OS.
There are numerous reasons for providing an environment which allows process
co-operation:

Information sharing: Since a number of users may be interested in the
same piece of information (for example, a shared file), an environment must be
provided that allows concurrent access to that information.
Computation speedup: If a particular piece of work is to run faster, it must be
broken into sub-tasks, each of which executes in parallel with the others. Note
that such a speed-up can be attained only when the computer has multiple
processing elements such as CPUs or I/O channels.
Modularity: The system may be built in a modular way by dividing the system
functions into separate processes or threads.
Convenience: Even a single user may work on many tasks at a time. For
example, a user may be editing, formatting, printing, and compiling in
parallel.
Cooperating processes require an inter-process communication (IPC)
mechanism that allows them to exchange data and other information. There are
two primary models of inter-process communication:

shared memory and
message passing.

In the shared-memory model, a region of memory shared by the cooperating
processes is established. Processes can then exchange information by reading
and writing data in the shared region. In the message-passing model,
communication takes place by way of messages exchanged among the
cooperating processes.
The two communications models are contrasted in figure below:

Sharing Data between the Processes -

Some data is common to different processes or tasks. Examples are as follows:

Time, which is updated continuously by one process and is also used by the
display process in a system.
Port input data, which is received by one process and further processed and
analysed by another process.
Memory buffer data, which is inserted by one process and further read (deleted),
processed and analysed by another process.

Shared Data Problem

Assume that at some instant, during operations on a variable, only a part of the
operation is completed and another part remains incomplete. At that moment,
assume that an interrupt occurs. Assume also that there is another function that
shares the same variable. The value of the variable may then differ from the one
expected had the earlier operation been completed.

Whenever another process shares partly operated data, the shared-data
problem arises.

Consider date d and time t, taken in the program as global variables. Assume
that a thread Update_Time_Date updates t and d on the system clock-tick
interrupt IS, and that a thread Display_Time_Date displays that t and d
information. Assume that when Update_Time_Date ran, t = 23:59:59 and
d = 17 Jul 2007. Display_Time_Date gets interrupted, and the display operations
on d and t are non-atomic: the display of d was completed but the display of t
was incomplete when interrupt IS occurred.
After a while, t changes to 00:00:00 and d to 18 Jul 2007 when the thread
Update_Time_Date runs. But the display will show t = 00:00:00 and
d = 17 Jul 2007 when the blocked thread Display_Time_Date restarts on return
from the interrupt.

Solutions to shared Data Problem


1. Use a reentrant function with atomic instructions in the section of a function
that needs complete execution before it can be interrupted. This section is
called the critical section.
2. Put the shared variable in a circular queue. A function that requires the value
of this variable always deletes (takes) it from the queue front, and another
function, which inserts (writes) the value of this variable, always does so at the
queue back.
3. Disable the interrupts (DI) before a critical section starts executing and
enable the interrupts (EI) on its completion.
4. Use lock ( ) when a critical section starts executing and use unlock ( ) on its
completion.
5. Use an IPC (inter-process communication) mechanism, for example a
semaphore.
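Solutions 3 and 4 above can be sketched in C. The sketch below guards the date-time pair of the earlier example with a lock so that the reader never sees a half-updated value; here `lock`/`unlock` stand in for disabling/enabling interrupts or taking a mutex (on a real multi-core system an atomic test-and-set instruction would be required; the simple flag is only an illustration):

```c
#include <stdbool.h>

/* Shared data: time t and date d, as in the example above */
static int t_hour, t_min, t_sec;
static int d_day, d_month, d_year;

static volatile bool locked = false;      /* stand-in for DI/EI or a mutex */

static void lock(void)   { while (locked) { /* busy-wait */ } locked = true; }
static void unlock(void) { locked = false; }

/* Update_Time_Date: the whole update is one critical section */
void update_time_date(int h, int m, int s, int dd, int mm, int yy) {
    lock();
    t_hour = h; t_min = m; t_sec = s;
    d_day = dd; d_month = mm; d_year = yy;
    unlock();
}

/* Display_Time_Date: reads a consistent (t, d) snapshot */
void read_time_date(int out[6]) {
    lock();
    out[0] = t_hour; out[1] = t_min;   out[2] = t_sec;
    out[3] = d_day;  out[4] = d_month; out[5] = d_year;
    unlock();
}
```

Because both the update and the read hold the lock for their entire critical section, the midnight rollover can never be observed half-done.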

Signal
One way of messaging is to use the OS function signal ( ), provided in Unix,
Linux and several RTOSes. Unix and Linux OSes use signals profusely and have
thirty-one different types of signals for the various events.
A signal is the software equivalent of the flag at a register that sets on a
hardware interrupt. Unless masked by a signal mask, the signal allows the
execution of the signal-handling function and allows the handler to run, just as a
hardware interrupt allows the execution of an ISR.
A signal is an IPC used for signaling from a process A to the OS to enable the
start of another process B. A signal is a one- or two-byte IPC from a process to
the OS.

A signal provides the shortest communication. The signal ( ) sends a one-bit
output for a process, which unmasks a signal mask of a process or task (called
the signal handler). The handler has coding similar to that of an ISR and runs in
a way similar to a highest-priority ISR.

Signal ( ) forces a signaled process or task, called the signal handler, to run. On
return from the signaled or forced task or process, the process which sent the
signal runs its codes, as happens on a return from an ISR.

An OS provision for signal as an IPC function means a provision for an
interrupt-message from one process or task to another.

Signal ( ) IPC functions

1. SigHandler ( ) ─ to create a signal handler corresponding to a signal
identified by the signal number and to define a pointer to the signal context.
The signal context saves the registers on signal.
2. Connect an interrupt vector to a signal number, with the signal-handler
function and signal-handler arguments. The interrupt vector provides the
program counter value for the signal-handler function address.
3. signal ( ) ─ to send a signal identified by a number to a signal-handler task.
4. Mask the signal.
5. Unmask the signal.
6. Ignore the signal.
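On a Unix/Linux host, the connect-raise-handle pattern above can be tried with the standard C `signal`/`raise` calls; the handler runs like an ISR when the signal is delivered. This is a host-OS sketch of the concept, not an RTOS API:

```c
#include <signal.h>

/* Flag set by the handler; sig_atomic_t keeps the write atomic */
static volatile sig_atomic_t got_signal = 0;

/* Signal handler: the software equivalent of an ISR */
static void on_sigusr1(int signo) {
    (void)signo;
    got_signal = 1;
}

/* Install the handler, raise the signal, report whether the handler ran */
int demo_signal(void) {
    signal(SIGUSR1, on_sigusr1);  /* connect the signal number to the handler */
    raise(SIGUSR1);               /* send the signal to this same process */
    return got_signal;            /* 1 if the handler executed */
}
```

`raise` delivers the signal synchronously, so by the time it returns the handler has already run, just as control returns from an ISR before the interrupted code resumes.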

Signal is used in the handling of exceptions. An exception is a process that is
executed on a specific reported run-time condition.
A signal reports an error (called an 'exception') during the running of a task and
then lets the scheduler initiate an error-handling process or ISR, for example an
error-logging task. Handling of a signal is similar to handling an ISR function
using an interrupt vector.

Unlike semaphores, a signal takes the shortest possible CPU time. Signals are
the flags or one- or two-byte messages used as IPC functions for synchronizing
the concurrent processing of tasks.

Adv of using signal


1. A signal is the software equivalent of the flag at a register that sets on a
hardware interrupt.
2. It is sent on some exception or on some condition which can be set during
the running of a process, task or thread.
3. Sending a signal is the software equivalent of throwing an exception in a
C/C++ or Java program.
4. Unless masked by a signal mask, the signal allows the execution of the
signal-handling process, just as a hardware interrupt allows the execution of an
ISR.
5. A signal is identical to setting a flag that is shared and used by another
interrupt-servicing process.
6. A signal raised by one process forces another process to interrupt and to
catch that signal, provided the signal is not masked at that process.

Drawbacks of using a Signal

1. A signal is handled only by a very high priority process (service routine). That
may disrupt the usual schedule and the usual priority-inheritance mechanism.
2. A signal may cause a reentrancy problem [the process not returning to a
state identical to the one before the signal-handler process executed].

Semaphores
Semaphores are a programming construct designed by E. W. Dijkstra in the late
1960s.
An OS provides the IPC functions for creating and using semaphores as event
flags, as a mutex for a resource key (for resource locking and unlocking onto a
process), and as counting and P-V semaphores.

Semaphore ─ an OS primitive for controlling access to critical regions.
Protocol:
1. Get access to the semaphore with the P() function (Dutch "Proberen" ─ to
test).
2. Perform the critical-region operations.
3. Release the semaphore with the V() function (Dutch "Verhogen" ─ to
increment).

Semaphore Functions
1. OSSemCreate ─ to create a semaphore and to initialize it.
2. OSSemPost ─ to send the semaphore to an event control block; its value
increments on event occurrence.
3. OSSemPend ─ to wait for the semaphore from an event; its value decrements
on taking note of that event occurrence.
4. OSSemAccept ─ to read and return the present semaphore value; if it shows
the occurrence of an event (by a non-zero value), it takes note of that and
decrements the value.
5. OSSemQuery ─ to query the semaphore for an event occurrence or
non-occurrence by reading its value; it returns the present semaphore value and
a pointer to the data structure OSSemData. The semaphore value does not
decrease. OSSemData points to the present value and a table of the tasks
waiting for the semaphore.
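The P/V protocol above can be sketched as a counting semaphore in C. In a real RTOS, P would block the task and the kernel would place it in a wait queue; in this single-threaded illustration P simply fails when the count is zero (a sketch of the concept, not the OSSem* implementation):

```c
/* Illustrative counting semaphore; name chosen to avoid the POSIX sem_t */
typedef struct { int count; } sem_sketch_t;

void sem_create(sem_sketch_t *s, int initial) { s->count = initial; }

/* P ("Proberen" - to test): take the semaphore; -1 if it would block */
int sem_P(sem_sketch_t *s) {
    if (s->count > 0) { s->count--; return 0; }
    return -1;   /* a real kernel would put the calling task in a wait queue */
}

/* V ("Verhogen" - to increment): release the semaphore */
void sem_V(sem_sketch_t *s) { s->count++; }
```

Initialized with a count of 1, this behaves as a binary semaphore (mutex-style resource key); larger initial counts model a pool of identical resources.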

Queue and Mailbox

Queue

Some OSes provide both the mailbox and the queue IPC functions. Every OS
provides queue IPC functions. When the IPC functions for a mailbox are not
provided by an OS, the OS employs a queue for the same purpose.
The OS provides for inserting and deleting the message pointers or messages.
Each queue for a message needs initialization (creation) before using the
functions in the scheduler for the message queue.
There may be a provision for multiple queues for the multiple types or
destinations of messages. Each queue has an ID. Each queue either has a
user-definable size (upper limit for the number of bytes) or a fixed pre-defined
size assigned by the scheduler.
When a queue becomes full, there may be a need for error handling and user
codes for blocking the task(s). There may not be self-blocking.

Queue functions

OSQCreate ─ to create a queue and initialize the queue message blocks, with
front and back as queue-top pointers, *QFRONT and *QBACK, respectively.
OSQPost ─ to post a message to the message block as per the queue back
pointer, *QBACK. (Used by ISRs and tasks.)
OSQPend ─ to wait for a queue message at the queue, and to read and delete it
when received.
OSQAccept ─ to read the message at the present queue front pointer after
checking its presence (yes or no); after the read, the queue front pointer
increments. (No wait. Used by ISRs and tasks.)
OSQFlush ─ to read the queue from front to back and delete the queue block,
as it is not needed later; after the flush, the queue front and back point to QTop,
the pointer to the start of the queue. (Used by ISRs and tasks.)
OSQQuery ─ to query the queue message block; the message is read but not
deleted. The function returns a pointer to the message queue, *QFRONT, if
there are messages in the queue, or else null. It also returns a pointer to the
queue data structure, which has *QFRONT, the number of queued messages,
the size of the queue, and a table of tasks waiting for messages from the queue.
OSQPostFront ─ to send a message as per the queue front pointer, *QFRONT.
Use of this function is made when a message is urgent or is of higher priority
than all the previously posted messages in the queue.
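The post/pend pair can be sketched as a fixed-size ring of message pointers with front and back indices, in the spirit of *QFRONT and *QBACK above. This is an illustration of the mechanism, not the OSQ* implementation:

```c
#include <stddef.h>

#define QSIZE 4                      /* illustrative capacity */

typedef struct {
    void *msg[QSIZE];                /* queued message pointers */
    int   front, back, count;
} msg_queue;

void q_create(msg_queue *q) { q->front = q->back = q->count = 0; }

/* OSQPost-style: insert at the back; -1 if the queue is full */
int q_post(msg_queue *q, void *m) {
    if (q->count == QSIZE) return -1;  /* caller must handle the error */
    q->msg[q->back] = m;
    q->back = (q->back + 1) % QSIZE;
    q->count++;
    return 0;
}

/* OSQPend-style: read and delete from the front; NULL if empty */
void *q_pend(msg_queue *q) {
    if (q->count == 0) return NULL;    /* a real kernel would block the task */
    void *m = q->msg[q->front];
    q->front = (q->front + 1) % QSIZE;
    q->count--;
    return m;
}
```

An OSQPostFront-style urgent post would insert at `front` (decrementing it modulo QSIZE) instead of at `back`, so the urgent message is the next one pended.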

Mail box

A message mailbox is for an IPC message through a message block at an OS
that can be used only by a single destined task.
A task, on an OS function call, puts (posts, sends) into the mailbox the message
or only a pointer to the mailbox message. A mailbox message may also include
a header to identify the message type.

Mailbox IPC features


The OS provides for inserting and deleting a message into the mailbox message
pointer. Deleting means the message pointer pointing to null. Each mailbox for
a message needs initialization (creation) before using the functions in the
scheduler, with the message pointer pointing to null.

There may be a provision for multiple mailboxes for the multiple types or
destinations of messages. Each mailbox has an ID. Each mailbox usually has
one message pointer only, which can point to a message.

Mailbox IPC functions


1. OSMBoxCreate ─ creates a box and initializes the mailbox contents with a
NULL pointer at *msg.
2. OSMBoxPost ─ sends a message at *msg, which now does not point to NULL.
3. OSMBoxWait (Pend) ─ waits for *msg not NULL; the message is read when
not NULL, and then *msg again points to NULL. Time-out and error-handling
functions can be provided with the Pend function arguments.
4. OSMBoxAccept ─ reads the message at *msg after checking its presence
(yes or no) [No wait]. Deletes (reads) the mailbox message when read, and
*msg again points to NULL.
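Since a mailbox holds a single message pointer that is NULL when empty, it can be sketched as one pointer slot. Again this is an illustration of the mechanism, not the OSMBox* implementation:

```c
#include <stddef.h>

typedef struct { void *msg; } mailbox;

/* Create: mailbox contents initialized with a NULL pointer (empty) */
void mbox_create(mailbox *mb) { mb->msg = NULL; }

/* OSMBoxPost-style: -1 if a message is already pending */
int mbox_post(mailbox *mb, void *m) {
    if (mb->msg != NULL) return -1;  /* one message pointer only */
    mb->msg = m;
    return 0;
}

/* OSMBoxAccept-style: read and delete; NULL if the box is empty */
void *mbox_accept(mailbox *mb) {
    void *m = mb->msg;
    mb->msg = NULL;                  /* reading empties the mailbox */
    return m;
}
```

A Pend variant would block (or time out) while `msg` is NULL rather than returning immediately.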

Pipes
A pipe is a device used for inter-process communication. A pipe has the
functions create, connect and delete, and functions similar to a device driver
(open, write, read, close).

Writing and reading a pipe: a message pipe is a device for inserting (writing)
into and deleting (reading) from, between two given interconnected tasks or
two sets of tasks. Writing to and reading from a pipe is like using the C
command fwrite with a file name to write into a named file, and the C command
fread with a file name to read from a named file.
1. One task in a set of tasks, using the function fwrite, can write to a pipe at the
back pointer address, *pBACK.
2. Another task in a set of tasks, using the function fread, can read from the
pipe at the front pointer address, *pFRONT.
Pipes are also like Java PipedInputOutputStreams; Java defines the classes for
the input and output streams.
In a pipe there may be no fixed number of bytes per message; there is an initial
pointer for the back and front, and there may be a limiting final back pointer. A
pipe can therefore be limited and have a variable number of bytes per message
between the initial and final pointers.
A pipe is unidirectional: one thread or task inserts into it and the other deletes
from it.
Example
pipeDevCreate ("/pipe/pipeCardInfo", 4, 32); /* Create a pipe pipeCardInfo, which can save
four messages, each of 32 bytes maximum */
fd = open ("/pipe/pipeCardInfo", O_WR, 0); /* Open a write-only device. The first argument is
the pipe ID /pipe/pipeCardInfo, the second argument is the option O_WR for write only, and
the third argument is 0 for unrestricted permission. */
...
while (1) {
...
cardTransactionNum = 0; /* At the start of the transactions with the machine */

write (fd, cardTransactionNum, 1); /* Write 1 byte for the transaction number after card
insertion */

Pipe Device Functions

1. pipeDevCreate ─ for creating a pipe device.
2. open ( ) ─ for opening the device to enable its use from the beginning of its
allocated buffer, with options and restrictions or permissions defined at the
time of opening.
3. connect ( ) ─ for connecting a thread or task inserting bytes into the pipe to
the thread or task deleting bytes from the pipe.
4. write ( ) ─ for inserting (writing) into the pipe from the bottom of the empty
memory space in the buffer allotted to it.
5. read ( ) ─ for deleting (reading) from the pipe from the bottom of the unread
memory spaces in the buffer filled after writing into the pipe.
6. close ( ) ─ for closing the device; it can be used again from the beginning of
its allocated buffer only after opening it again.
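On a Unix/Linux host the same create-write-read pattern can be tried with the POSIX `pipe()` call, which also gives a unidirectional byte stream between a writer end and a reader end. This is a host-OS sketch, not the pipeDevCreate API above:

```c
#include <unistd.h>
#include <string.h>

/* Create a pipe, write a message at the write end, read it at the read end.
   Returns 0 on success, -1 on any failure. */
int demo_pipe(void) {
    int fd[2];                       /* fd[0]: read end, fd[1]: write end */
    char buf[16] = {0};
    const char msg[] = "card";       /* 5 bytes including the terminator */

    if (pipe(fd) != 0) return -1;    /* create the pipe device */
    if (write(fd[1], msg, sizeof msg) != (ssize_t)sizeof msg) return -1;
    if (read(fd[0], buf, sizeof buf) != (ssize_t)sizeof msg) return -1;
    close(fd[0]);
    close(fd[1]);
    return strcmp(buf, "card") == 0 ? 0 : -1;
}
```

As in the notes, the pipe is unidirectional: bytes written at `fd[1]` can only be read, in order, at `fd[0]`.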

Socket
A socket provides a device-like mechanism for bidirectional communication. It
provides a bidirectional, pipe-like device which also uses a protocol between
the source and destination processes for transferring the bytes. The protocol
can be:
Connection-oriented ─ first a connection is established between source and
destination, and then the actual transfer of the data stream takes place, as in
TCP.
Connectionless ─ no connection is established between source and destination
before data transfer; a datagram is a unit of data which is independent and
need not be in sequence with the previously sent data, as in UDP.
A socket provides for establishing and closing a connection between source and
destination processes, using a protocol for transferring the bytes. It may
provide for listening from multiple sources or multicasting to multiple
destinations. Two tasks at two distinct places, or locally, interconnect through
sockets. Multiple tasks at multiple distinct places interconnect through sockets
to a socket at a server process. The client and server sockets can run on the
same CPU or at distant CPUs on the Internet.

Two processes (or sections of a task) at two sets of ports interconnect (perform
inter-process communication) through a socket at each. [These are virtual
(logical), and not physical, sockets.]
Refer text
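The bidirectional, connection-oriented behaviour can be tried on a Unix/Linux host with `socketpair()`, which yields two already-connected stream sockets within one process; either end may write and the other reads. This is a local sketch only; between machines the usual socket/bind/connect calls with TCP or UDP apply:

```c
#include <sys/socket.h>
#include <string.h>
#include <unistd.h>

/* Two connected stream sockets: data written at one end is read at the
   other, in both directions. Returns 0 on success, -1 on any failure. */
int demo_socket(void) {
    int sv[2];
    char buf[8] = {0};

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0) return -1;

    if (write(sv[0], "ping", 5) != 5) return -1;   /* client -> server */
    if (read(sv[1], buf, sizeof buf) != 5) return -1;
    if (strcmp(buf, "ping") != 0) return -1;

    if (write(sv[1], "pong", 5) != 5) return -1;   /* server -> client */
    if (read(sv[0], buf, sizeof buf) != 5) return -1;

    close(sv[0]);
    close(sv[1]);
    return strcmp(buf, "pong") == 0 ? 0 : -1;
}
```

The two-way exchange is what distinguishes the socket from the unidirectional pipe of the previous section.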
Remote procedure calls

Remote procedure calls (RPCs) permit remote invocation of processes in
distributed systems.
RPC is a method used for connecting two remotely placed functions by first
using a protocol for connecting the processes. It is used in the case of
distributed tasks. An RTOS can provide for the use of RPCs; these permit a
distributed environment for embedded systems.
The OS IPC function allows a function or method to run in another address
space on a shared network or on another remote computer. The client makes
the call to the function, which is local or remote, and the server response is
either remote or local to the call.
Both systems work in the peer-to-peer communication mode; each system in
peer-to-peer mode can make an RPC.
