VxWorks
INTRODUCTION:
A real-time operating system (RTOS) is an operating system that has been specifically developed for use in real-time
situations, most typically embedded applications. Embedded systems are often not recognisable as computers; instead,
they exist within objects that are present throughout our day-to-day lives.
Real-time and embedded systems often operate within constrained environments where processing power and
memory are limited. They often have to meet strict time deadlines, where performance is critical to the operation of
the system. VxWorks is an example of a real-time operating system, and it has a good track record as the basis
of many complex systems. These systems include some 30 million devices, ranging from network equipment and
automotive and aerospace systems to life-critical medical systems and even space exploration.
VxWorks was developed from an older operating system, VRTX. VRTX did not function well as an operating
system, so Wind River acquired the rights to resell VRTX and developed it into its own workable operating system,
VxWorks (some say the name means "VRTX now works"). Wind River then went on to develop a new kernel for
VxWorks and replaced the VRTX kernel with it.
This enabled VxWorks to become one of the leading operating systems for real-time applications.
Basic VxWorks System Tasks:
Depending on its configuration, VxWorks starts a variety of system tasks at boot time, which are always running. The
basic set of tasks is described below.
Root Task: The root task, tRootTask, is the first task executed by the kernel. Its entry point, usrRoot(), initializes most
VxWorks facilities. It spawns such tasks as the logging task, the exception task, the network task, and the tRlogind
daemon. Normally, the root task terminates and is deleted after all initialization has completed. For more information
about tRootTask and usrRoot(), see the VxWorks BSP Developer's Guide.
Logging Task: The log task, tLogTask, is used by VxWorks modules to log system messages without having to perform
I/O in the current task context. For more information, see the discussion of asynchronous input/output and the API
reference entry for logLib.
Exception Task: The exception task, tExcTask, supports the VxWorks exception handling package by performing
functions that cannot occur at interrupt level. It is also used for actions that cannot be performed in the current task's
context, such as task suicide. It must have the highest priority in the system. Do not suspend, delete, or change the
priority of this task. For more information, see the reference entry for excLib.
Network Task: The tNet0 task is the default network daemon. It handles the task-level (as opposed to interrupt-level)
processing required by the VxWorks network. For systems that have been configured with more than one network
daemon, the task names are tNetn. The task is primarily used by network drivers. Configure VxWorks with the
INCLUDE_NET_DAEMON component to spawn the tNet0 task.
WDB Target Agent Task: The WDB target agent task, tWdbTask, is created if the target agent is set to run in task mode.
It services requests from the host tools (by way of the target server); for information about this server, see the host
development environment documentation. Configure VxWorks with the INCLUDE_WDB component to include the
target agent.
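These components are selected when the kernel is configured. As a rough sketch, in a BSP-style build the selection amounts to preprocessor definitions in config.h (the exact mechanism depends on the project type; INCLUDE_LOGGING is assumed here as the component behind tLogTask):
/* config.h excerpt (illustrative) -- enable components so that the
 * corresponding system tasks are spawned at boot time.
 */
#define INCLUDE_LOGGING     /* logging task, tLogTask */
#define INCLUDE_NET_DAEMON  /* network daemon task, tNet0 */
#define INCLUDE_WDB         /* WDB target agent task, tWdbTask */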
KERNEL ARCHITECTURE:
Prior to VxWorks 6.0, the operating system provided a single memory space with no segregation of the operating system
from user applications. All tasks ran in supervisor mode.
Although this model afforded performance and flexibility when developing applications, only skilled
programming could ensure that kernel facilities and applications coexisted in the same memory space without interfering
with one another. With the release of VxWorks 6.0, the operating system provides support for real-time processes (RTPs)
that includes execution of applications in user mode and other features common to operating systems with a clear
delineation between kernel and applications. This architecture is often referred to as the process model. VxWorks has
adopted this model with a design specifically aimed to meet the requirements of determinism and speed that are required
for hard real-time systems. (For information about VxWorks processes and developing applications to run in processes,
see VxWorks Application Programmer’s Guide.) VxWorks 6.x provides full MMU-based protection of both kernel and
user space.
At the same time, VxWorks 6.x maintains a high level of backward compatibility with VxWorks 5.5.
Applications developed for earlier versions of VxWorks, and designed to run in kernel space, can be migrated to
VxWorks 6.x kernel space with minimal effort (in most cases, merely re-compilation). For more information on this topic,
see the VxWorks Application Programmer’s Guide. Naturally, new applications can be designed for kernel space as well,
when other considerations outweigh the advantages of protection that executing applications as processes affords. These
considerations might include:
■ Size. The overall size of a system is smaller without the components that provide process and MMU support.
■ Speed. Depending on the number of system calls an application might make, or how much I/O it is doing when running
as a process in user space, it might be faster running in the kernel.
■ Kernel-only features. Features such as watchdog timers, ISRs, and VxMP are available only in the kernel. In some
cases, however, there are alternatives for process-based applications (POSIX timers, for example).
■ Hardware access. If the application requires direct access to hardware, it can only do so from within the kernel.
VxWorks is flexible in terms of both the modularity of its features and extensibility. It can be configured as a minimal
kernel, or a full-featured operating system with user mode applications, file systems, networking, error detection and
reporting, and so on, or anything in between. The operating system can also be extended by adding custom components or
application modules to the kernel itself (for example, for new file systems, networking protocols, or drivers). The system
call interface can also be extended by adding custom APIs, which makes them available to process-based applications.
TASKS AND MULTITASKING:
VxWorks tasks are the basic unit of code execution in the operating system itself, as well as in applications that it
executes as processes. In other operating systems the term thread is used similarly.
Multitasking provides the fundamental mechanism for an application to control and react to multiple, discrete real-world
events. The VxWorks real-time kernel provides the basic multitasking environment. On a uniprocessor system,
multitasking creates the appearance of many threads of execution running concurrently when, in fact, the kernel
interleaves their execution on the basis of a scheduling policy. Each task has its own context, which is the CPU
environment and system resources that the task sees each time it is scheduled to run by the kernel. On a context switch,
a task's context is saved in the task control block (TCB).
A task’s context includes:
■ a thread of execution; that is, the task’s program counter
■ the task's virtual memory context (if process support is included)
■ the CPU registers and (optionally) coprocessor registers
■ stacks for dynamic variables and function calls
■ I/O assignments for standard input, output, and error
■ a delay timer
■ a time-slice timer
■ kernel control structures
■ signal handlers
■ task private environment (for environment variables)
■ error status (errno)
■ debugging and performance monitoring values
If VxWorks is configured without process support (the INCLUDE_RTP component), the context of a task does
not include its virtual memory context; all tasks run in a single common address space (the kernel). However, if
VxWorks is configured with process support—regardless of whether or not any processes are active—the context of a
kernel task does include its virtual memory context, because the system has the potential to operate with other virtual
memory contexts besides the kernel. That is, the system could have tasks running in several different virtual memory
contexts.
TASK STATES AND TRANSITIONS:
The kernel maintains the current state of each task in the system. A task changes from one state to another as a
result of activity such as certain function calls made by the application (for example, when attempting to take a semaphore
that is not available) and the use of development tools such as the debugger. The highest priority task that is in the ready
state is the task that executes. When tasks are created with taskSpawn(), they immediately enter the ready state. When
tasks are created with taskCreate(), or taskOpen() with the VX_TASK_NOACTIVATE options parameter, they are
instantiated in the suspended state. They can then be activated with taskActivate(), which causes them to enter the ready
state. The activation phase is fast, enabling applications to create tasks and activate them in a timely manner.
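The following minimal sketch shows both creation paths; the task name, priority, stack size, and entry routine are illustrative, and the TASK_ID_ERROR check reflects the VxWorks 6.x convention:
/* Sketch: creating tasks two ways (illustrative values throughout). */
#include <vxWorks.h>
#include <taskLib.h>

void myEntry (void)
    {
    /* task body */
    }

void createTasks (void)
    {
    TASK_ID tid;

    /* taskSpawn() creates and activates in one step: ready immediately */
    taskSpawn ("tDemo1", 100, 0, 20000, (FUNCPTR) myEntry,
               0, 0, 0, 0, 0, 0, 0, 0, 0, 0);

    /* taskCreate() instantiates the task in the suspended state ... */
    tid = taskCreate ("tDemo2", 100, 0, 20000, (FUNCPTR) myEntry,
                      0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
    if (tid != TASK_ID_ERROR)
        taskActivate (tid);   /* ... and this moves it to the ready state */
    }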
Task States and State Symbols
The table below describes the task states and the state symbols that you see when working with development tools.
Note that task states are additive; a task may be in more than one state at a time. Transitions may take place with regard
to one of multiple states. For example, a task may transition from pended to pended and stopped. And if it then becomes
unpended, its state simply becomes stopped.
Illustration of Basic Task State Transitions
Task Priorities
Task scheduling relies on a task’s priority. The VxWorks kernel provides 256 priority levels, numbered 0
through 255. Priority 0 is the highest and priority 255 is the lowest. A task is assigned its priority at creation, but you can
also change it programmatically thereafter.
Application Task Priorities
All application tasks should be in the priority range from 100 to 255.
Driver Task Priorities
In contrast to application tasks, which should be in the task priority range from 100 to 255, driver support
tasks (which are associated with an ISR) can be in the range of 51-99. These tasks are crucial; for example, if a support
task fails while copying data from a chip, the device loses that data. Examples of driver support tasks include tNet0
(the VxWorks network daemon task), an HDLC task, and so on. The tNet0 task has a priority of 50, so user tasks
should not be assigned priorities below that value; if they are, the network connection could die, preventing debugging
with the host tools.
The taskRotate() routine shifts a task from the front to the end of its priority list in the ready queue. For example, the
following call shifts the task that is at the front of the list for priority level 100 to the end: taskRotate (100); To shift the
task that is currently running to the end of its priority list, use TASK_PRIORITY_SELF as the parameter to taskRotate()
instead of the priority level.
For information about the ready queue, see Scheduling and the Ready Queue. The taskRotate() routine can be used as
an alternative to round-robin scheduling. It allows a program to control sharing of the CPU amongst tasks of the same
priority that are ready to run, rather than having the system do so at predetermined equal intervals. For information
about round-robin scheduling, see Round-Robin Scheduling.
Task Priority
Tasks are assigned a priority when they are created. You can change a task's priority level while it is executing by
calling taskPrioritySet(). The ability to change task priorities dynamically allows applications to track precedence
changes in the real world. Note that if a task's priority is changed with taskPrioritySet(), it is placed at the end of the
ready queue priority list for its new priority.
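A minimal sketch of a dynamic priority change follows; the task ID parameter and the new priority value are illustrative:
/* Sketch: query and change a task's priority at run time. */
#include <vxWorks.h>
#include <taskLib.h>

void demoPriorityChange (TASK_ID tid)
    {
    int oldPriority;

    taskPriorityGet (tid, &oldPriority);  /* read the current priority */
    taskPrioritySet (tid, 120);           /* the task now goes to the end of
                                             the priority-120 ready list */
    }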
Preemption Locks
The scheduler can be explicitly disabled and enabled on a per-task basis in the kernel with the routines taskLock() and
taskUnlock(). When a task disables the scheduler by calling taskLock(), no priority-based preemption can take place
while that task is running. If the task that has disabled the scheduler with taskLock() explicitly blocks or suspends, the
scheduler selects the next highest-priority eligible task to execute. When the preemption-locked task unblocks and begins
running again, preemption is again disabled.
NOTE: The taskLock() and taskUnlock() routines are provided for the UP configuration of VxWorks, but not the SMP
configuration. Several alternatives are available for SMP systems, including task-only spinlocks, which default to
taskLock() and taskUnlock() behavior in a UP system.
Note that preemption locks prevent task context switching, but do not lock out interrupt handling. Preemption locks can
be used to achieve mutual exclusion; however, keep the duration of preemption locking to a minimum.
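A short sketch of such a critical section follows (UP configuration; sharedCounter is a hypothetical variable shared only among tasks, never with an ISR):
/* Sketch: a brief preemption-locked critical section. */
#include <vxWorks.h>
#include <taskLib.h>

int sharedCounter;   /* hypothetical data shared between tasks only */

void bumpCounter (void)
    {
    taskLock ();       /* disable priority-based preemption */
    sharedCounter++;   /* safe against other tasks, NOT against ISRs */
    taskUnlock ();     /* re-enable preemption; keep this window short */
    }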
A Comparison of taskLock() and intLock()
When using taskLock(), consider that it does not achieve mutual exclusion with respect to ISRs. Generally, if interrupted
by hardware, the system will eventually return to your task. However, if you block, you lose task lockout. Thus, before
you return from the routine, taskUnlock() should be called. When a task is accessing a variable or data structure that is
also accessed by an ISR, you can use intLock() to achieve mutual exclusion. Using intLock() makes the operation atomic
in a single-processor environment. It is best if the operation is kept minimal, meaning a few lines of code and no function
calls. If the locked section is too long, it can directly impact interrupt latency and cause the system to become far less
deterministic.
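A sketch of the intLock() pattern follows; isrSharedFlag is a hypothetical variable shared with an ISR:
/* Sketch: protecting task/ISR shared data with an interrupt lock. */
#include <vxWorks.h>
#include <intLib.h>

volatile int isrSharedFlag;   /* hypothetical data also touched by an ISR */

void updateSharedFlag (int value)
    {
    int key = intLock ();     /* lock interrupts; key saves the old level */

    isrSharedFlag = value;    /* atomic with respect to ISRs on this CPU */
    intUnlock (key);          /* restore the previous interrupt level */
    }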
VxWorks Traditional Scheduler
The VxWorks traditional scheduler provides priority-based preemptive scheduling as well as the option of
programmatically initiating round-robin scheduling. The traditional scheduler may also be referred to as the original or
native scheduler.
The traditional scheduler is included in VxWorks by default with the INCLUDE_VX_TRADITIONAL_SCHEDULER
component.
Priority-Based Preemptive Scheduling
A priority-based preemptive scheduler preempts the CPU when a task has a higher priority than the currently running task.
Thus, the kernel ensures that the CPU is always allocated to the highest priority task that is ready to run. This means that
if a task—with a higher priority than that of the current task—becomes ready to run, the kernel immediately saves the
current task’s context, and switches to the context of the higher priority task. For example, in Figure, task t1 is preempted
by higher-priority task t2, which in turn is preempted by t3. When t3 completes, t2 continues executing. When t2
completes execution, t1 continues executing. The disadvantage of this scheduling policy is that, when multiple tasks of
equal priority must share the processor, if a single task never blocks, it can usurp the processor. Thus, other equal-priority
tasks are never given a chance to run. Round-robin scheduling solves this problem.
Scheduling and the Ready Queue
The VxWorks scheduler maintains a FIFO ready queue mechanism that includes lists of all the tasks that are ready to run
(that is, in the ready state) at each priority level in the system. When the CPU is available for a given priority level, the task
that is at the front of the list for that priority level executes.
A task’s position in the ready queue may change, depending on the operation performed on it, as follows:
■ If a task is preempted, the scheduler runs the higher priority task, but the preempted task retains its position at the front
of its priority list.
■ If a task is pended, delayed, suspended, or stopped, it is removed from the ready queue altogether. When it is
subsequently ready to run again, it is placed at the end of its ready queue priority list.
■ If a task’s priority is changed with taskPrioritySet(), it is placed at the end of its new priority list.
■ If a task’s priority is temporarily raised based on the mutual-exclusion semaphore priority-inheritance policy (using the
SEM_INVERSION_SAFE option), it returns to the end of its original priority list after it has executed at the elevated
priority.
The taskRotate() routine can be used to shift a task from the front to the end of its priority list.
Round-Robin Scheduling
VxWorks provides a round-robin extension to priority-based preemptive scheduling. Round-robin scheduling
accommodates instances in which there is more than one task of a given priority ready to run, and you want to
share the CPU amongst these tasks. The round-robin algorithm attempts to share the CPU amongst these tasks by using
time-slicing. Each task in a group of tasks with the same priority executes for a defined interval, or time slice, before
relinquishing the CPU to the next task in the group. No single task, therefore, can usurp the processor until it blocks.
When the time slice expires, the task moves to last place in the ready queue list for that priority.
Note that while round-robin scheduling is used in some operating systems to provide equal CPU time to all tasks
(or processes), regardless of their priority, this is not the case with VxWorks. Priority-based preemption is essentially
unaffected by the VxWorks implementation of round-robin scheduling. Any higher-priority task that is ready to run
immediately gets the CPU, regardless of whether or not the current task is done with its slice of execution time. When the
interrupted task gets to run again, it simply continues using its unfinished execution time. Round-robin scheduling is not
necessary for systems in which all tasks run at different priority levels. It is designed for systems in which multiple tasks
run at the same level.
Note that the taskRotate() routine can be used as an alternative to round-robin scheduling. It is useful for situations
in which you want to share the CPU amongst tasks of the same priority that are ready to run, but to do so as the program
requires, rather than at predetermined equal intervals.
Enabling Round-Robin Scheduling
Round-robin scheduling is enabled by calling kernelTimeSlice(), which takes a parameter for a time slice, or interval. It is
disabled by using zero as the argument to kernelTimeSlice().
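For example, the following sketch enables a half-second time slice and then disables round-robin again (the slice length is illustrative):
/* Sketch: turning round-robin scheduling on and off. */
#include <vxWorks.h>
#include <kernelLib.h>
#include <sysLib.h>

void demoRoundRobin (void)
    {
    kernelTimeSlice (sysClkRateGet () / 2);  /* slice = 1/2 second in ticks */

    /* ... equal-priority ready tasks now rotate every half second ... */

    kernelTimeSlice (0);                     /* zero disables round-robin */
    }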
Time-slice Counts and Preemption
The time-slice or interval defined with a kernelTimeSlice() call is the amount of time that each task is allowed to
run before relinquishing the processor to another equal-priority task. Thus, the tasks rotate, each executing for an equal
interval of time. No task gets a second slice of time before all other tasks in the priority group have been allowed to run.
If round-robin scheduling is enabled, and preemption is enabled for the executing task, the system tick handler increments
the task’s time-slice count. When the specified time-slice interval is completed, the system tick handler clears the
counter and the task is placed at the end of the ready queue priority list for its priority level. New tasks joining a given
priority group are placed at the end of the priority list for that priority with their run-time counter initialized to zero.
Enabling round-robin scheduling does not affect the performance of task context switches, nor is additional memory
allocated.
If a task blocks or is preempted by a higher priority task during its interval, its time-slice count is saved and then
restored when the task becomes eligible for execution. In the case of preemption, the task resumes execution once the
higher priority task completes, assuming that no other task of a higher priority is ready to run. In the case where the task
blocks, it is placed at the end of the ready queue list for its priority level. If preemption is disabled during round-robin
scheduling, the time-slice count of the executing task is not incremented.
Time-slice counts are accrued by the task that is executing when a system tick occurs, regardless of whether or not
the task has executed for the entire tick interval. Due to preemption by higher priority tasks or ISRs stealing CPU time
from the task, it is possible for a task to effectively execute for either more or less total CPU time than its allotted time
slice.
Figure shows round-robin scheduling for three tasks of the same priority: t1, t2, and t3. Task t2 is preempted by a higher
priority task t4 but resumes at the count where it left off when t4 is finished.
Priority inversion
Priority inversion arises when a higher-priority task is forced to wait an indefinite period of time for a lower-priority task
to complete.
Consider the scenario in Figure: t1, t2, and t3 are tasks of high, medium, and low priority, respectively. t3 has acquired
some resource by taking its associated binary guard semaphore. When t1 preempts t3 and contends for the resource by
taking the same semaphore, it becomes blocked. If we could be assured that t1 would be blocked no longer than the time it
normally takes t3 to finish with the resource, there would be no problem because the resource cannot be preempted.
However, the low-priority task is vulnerable to preemption by medium-priority tasks (like t2), which could inhibit t3 from
relinquishing the resource. This condition could persist, blocking t1 for an indefinite period of time.
Priority Inheritance Policy
The mutual-exclusion semaphore has the option SEM_INVERSION_SAFE, which enables a priority-inheritance policy.
The priority-inheritance policy assures that a task that holds a resource executes at the priority of the highest-priority task
that is blocked on that resource. Once the task's priority has been elevated, it remains at the higher level until all mutual-
exclusion semaphores that have contributed to the task's elevated priority are released. Hence, the inheriting task is
protected from preemption by any intermediate-priority tasks. This option must be used in conjunction with a priority
queue (SEM_Q_PRIORITY).
Note that after the inheriting task has finished executing at the elevated priority level, it returns to the end of the ready
queue priority list for its original priority.
In Figure, priority inheritance solves the problem of priority inversion by elevating the priority of t3 to the priority of t1
during the time t1 is blocked on the semaphore. This protects t3, and indirectly t1, from preemption by t2.
The following example creates a mutual-exclusion semaphore that uses the priority inheritance policy:
semId = semMCreate (SEM_Q_PRIORITY | SEM_INVERSION_SAFE);
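A sketch of how such a semaphore might guard a shared resource follows; protectedWork() is a hypothetical routine that accesses the resource:
/* Sketch: guarding a resource with an inversion-safe mutex. */
#include <vxWorks.h>
#include <semLib.h>

extern void protectedWork (void);   /* hypothetical resource access */

SEM_ID mutexId;

void initMutex (void)
    {
    mutexId = semMCreate (SEM_Q_PRIORITY | SEM_INVERSION_SAFE);
    }

void useResource (void)
    {
    semTake (mutexId, WAIT_FOREVER);  /* may raise the holder's priority */
    protectedWork ();                 /* access the shared resource */
    semGive (mutexId);                /* elevated priority ends when all
                                         contributing mutexes are released */
    }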
Counting Semaphores
Like the binary semaphore, but keeps track of the number of times a semaphore is given; optimized for guarding multiple
instances of a resource. Counting semaphores are another means to implement task synchronization and mutual exclusion.
The counting semaphore works like the binary semaphore except that it keeps track of the number of times a semaphore is
given. Every time a semaphore is given, the count is incremented; every time a semaphore is taken, the count is
decremented. When the count reaches zero, a task that tries to take the semaphore is blocked. As with the binary semaphore, if
a semaphore is given and a task is blocked, it becomes unblocked. However, unlike the binary semaphore, if a semaphore
is given and no tasks are blocked, then the count is incremented. This means that a semaphore that is given twice can be
taken twice without blocking. Table shows an example time sequence of tasks taking and giving a counting
semaphore that was initialized to a count of 3
Counting semaphores are useful for guarding multiple copies of resources. For example, the use of five tape drives might
be coordinated using a counting semaphore with an initial count of 5, or a ring buffer with 256 entries might be
implemented using a counting semaphore with an initial count of 256. The initial count is specified as an argument to the
semCCreate() routine.
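A minimal sketch of the tape-drive case follows (the drive-handling code is elided):
/* Sketch: a pool of 5 identical resources guarded by a counting semaphore. */
#include <vxWorks.h>
#include <semLib.h>

SEM_ID driveSem;

void initDrives (void)
    {
    driveSem = semCCreate (SEM_Q_FIFO, 5);  /* initial count = 5 drives */
    }

void useDrive (void)
    {
    semTake (driveSem, WAIT_FOREVER);  /* blocks only when all 5 are in use */
    /* ... use one tape drive ... */
    semGive (driveSem);                /* return the drive to the pool */
    }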
Read/Write Semaphores
A special type of semaphore that provides mutual exclusion for tasks that need write access to an object, and concurrent
access for tasks that only need read access to the object. This type of semaphore is particularly useful for SMP systems.
Read/write semaphores provide enhanced performance for applications that can effectively make use of
differentiation between read access to a resource and write access to a resource. A read/write semaphore can be taken in
either read mode or write mode. They are particularly suited to SMP systems (for information about the SMP
configuration of VxWorks, see VxWorks SMP).
A task holding a read/write semaphore in write mode has exclusive access to a resource. On the other hand, a
task holding a read/write semaphore in read mode does not have exclusive access. More than one task can take a
read/write semaphore in read mode, and gain access to the same resource. Because it is exclusive, write mode permits
only serial access to a resource, while read mode allows shared or concurrent access. In a multiprocessor system,
more than one task (running in different CPUs) can have read-mode access to a resource in a truly concurrent manner. In a
uniprocessor system, however, access is shared but the concurrency is virtual. More than one task can have read-mode
access to a resource at the same time, but since the tasks do not run simultaneously, access is effectively multiplexed.
All tasks that hold a read/write semaphore in read mode must give it up before any task can take it in write mode.
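A sketch of reader and writer tasks follows. It assumes the VxWorks 6.x semRWLib interface (semRWCreate() with a maximum-readers argument, semRTake() for read mode, semWTake() for write mode); check the semRWLib reference entry for the exact signatures:
/* Sketch (assumed semRWLib API): shared readers, exclusive writer. */
#include <vxWorks.h>
#include <semLib.h>

SEM_ID rwSem;

void initRW (void)
    {
    rwSem = semRWCreate (SEM_Q_PRIORITY, 8);  /* up to 8 concurrent readers */
    }

void reader (void)
    {
    semRTake (rwSem, WAIT_FOREVER);  /* read mode: concurrent access */
    /* ... read the resource ... */
    semGive (rwSem);
    }

void writer (void)
    {
    semWTake (rwSem, WAIT_FOREVER);  /* write mode: exclusive access */
    /* ... modify the resource ... */
    semGive (rwSem);
    }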
MESSAGE QUEUES
Multiple tasks can send to and receive from the same message queue. Full-duplex communication between two tasks
generally requires two message queues, one for each direction; see Figure. There are two message-queue subroutine
libraries in VxWorks. The first of these, msgQLib, provides VxWorks message queues, designed expressly for VxWorks;
the second, mqPxLib, is compliant with the POSIX standard (1003.1b) for real-time extensions.
Inter-Process Communication With Public Message Queues
VxWorks message queues can be created as private objects, which are accessible only within the memory space in which
they were created (kernel or process); or as public objects, which are accessible throughout the system.
VxWorks Message Queue Routines
VxWorks message queues are created, used, and deleted with the routines shown in Table. This library provides messages
that are queued in FIFO order, with a single exception: there are two priority levels, and messages marked as high
priority are attached to the head of the queue.
A message queue is created with msgQCreate(). Its parameters specify the maximum number of messages that can be
queued in the message queue and the maximum length in bytes of each message. Enough buffer space is allocated for the
specified number and length of messages.
A task or ISR sends a message to a message queue with msgQSend(). If no tasks are waiting for messages
on that queue, the message is added to the queue’s buffer of messages. If any tasks are already waiting for a message from
that message queue, the message is immediately delivered to the first waiting task.
A task receives a message from a message queue with msgQReceive(). If messages are already available in
the message queue’s buffer, the first message is immediately dequeued and returned to the caller. If no messages are
available, then the calling task blocks and is added to a queue of tasks waiting for messages. This queue of waiting tasks
can be ordered either by task priority or FIFO, as specified in an option parameter when the queue is created.
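The following sketch puts these routines together as a small producer/consumer pair; the queue geometry (10 messages of up to 64 bytes) and the message text are illustrative:
/* Sketch: producer/consumer over a VxWorks message queue. */
#include <vxWorks.h>
#include <msgQLib.h>
#include <stdio.h>
#include <string.h>

MSG_Q_ID demoQ;

void initQ (void)
    {
    demoQ = msgQCreate (10, 64, MSG_Q_FIFO);  /* 10 msgs, 64 bytes each */
    }

void producer (void)
    {
    char *msg = "hello";

    msgQSend (demoQ, msg, strlen (msg) + 1, WAIT_FOREVER, MSG_PRI_NORMAL);
    }

void consumer (void)
    {
    char buf[64];

    if (msgQReceive (demoQ, buf, sizeof (buf), WAIT_FOREVER) != ERROR)
        printf ("got: %s\n", buf);  /* blocks until a message arrives */
    }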
PIPES
Pipes provide an alternative interface to the message queue facility that goes through the VxWorks I/O system. Pipes are
virtual I/O devices managed by the driver pipeDrv. The routine pipeDevCreate() creates a pipe device and the underlying
message queue associated with that pipe. The call specifies the name of the created pipe, the maximum number of
messages that can be queued to it, and the maximum length of each message:
status = pipeDevCreate ("/pipe/name", max_msgs, max_length);
The created pipe is a normal, named I/O device. Tasks can use the standard I/O routines to open, read, and write pipes,
and invoke ioctl routines. As they do with other I/O devices, tasks block when they read from an empty pipe until data is
available, and block when they write to a full pipe until there is space available.
Like message queues, ISRs can write to a pipe, but cannot read from a pipe. As I/O devices, pipes provide one important
feature that message queues cannot: the ability to be used with select(). This routine allows a task to wait for data to be
available on any of a set of I/O devices. The select() routine also works with other asynchronous I/O devices, including
network sockets and serial devices. Thus, by using select(), a task can wait for data on a combination of several pipes,
sockets, and serial devices. Pipes also allow you to implement a client-server model of inter-task communication.
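The sketch below creates a pipe and waits on it with select(); the pipe name and sizes are illustrative:
/* Sketch: waiting for pipe data with select(). */
#include <vxWorks.h>
#include <pipeDrv.h>
#include <selectLib.h>
#include <ioLib.h>
#include <fcntl.h>

void pipeDemo (void)
    {
    int    fd;
    fd_set readFds;
    char   buf[64];

    pipeDevCreate ("/pipe/demo", 10, 64);  /* 10 messages, 64 bytes each */
    fd = open ("/pipe/demo", O_RDWR, 0);

    FD_ZERO (&readFds);
    FD_SET (fd, &readFds);

    /* block until data is available on any descriptor in the set */
    if (select (fd + 1, &readFds, NULL, NULL, NULL) > 0 &&
        FD_ISSET (fd, &readFds))
        read (fd, buf, sizeof (buf));      /* dequeue one message */

    close (fd);
    }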
SIGNALS
Signals are an operating system facility designed for handling exceptional conditions and asynchronously
altering the flow of control. In many respects signals are the software equivalent to hardware interrupts. Signals generated
by the operating system include those produced in response to bus errors and floating point exceptions. The signal facility
also provides APIs that can be used to generate and manage signals programmatically. In applications, signals are most
appropriate for error and exception handling, and not for general-purpose inter-task communication. Common uses
include using signals to kill processes and tasks, to send signal events when a timer has fired or message has arrived at a
message queue, and so on.
In accordance with POSIX, VxWorks supports 63 signals, each of which has a unique number and default
action (defined in signal.h). The value 0 is reserved for use as the NULL signal.
Signals can be raised (sent) from tasks to tasks or to processes. Signals can be either caught (received) or
ignored by the receiving task or process. Whether signals are caught or ignored generally depends on the setting of a
signal mask. In the kernel, signal masks are specific to tasks, and if no task is set up to receive a specific signal, it is
ignored. In user space, signal masks are specific to processes; and some signals, such as SIGKILL and SIGSTOP, cannot
be ignored.
To manage responses to signals, you can create and register signal handling routines that allow a task to
respond to a specific signal in whatever way is useful for your application. A kernel task or interrupt service routine (ISR)
can raise a signal for a specific task or process. In the kernel, signal generation and delivery runs in the context of the
task or ISR that generates the signal. In accordance with the POSIX standard, a signal sent to a process is handled by the
first available task that has been set up to handle the signal in the process.
Each kernel task has a signal mask associated with it. The signal mask determines which signals the task
accepts. By default, the signal mask is initialized with all signals unblocked (there is no inheritance of mask settings in the
kernel). The mask can be changed with sigprocmask(). Signal handlers in the kernel can be registered for a specific task.
A signal handler executes in the receiving task’s context and makes use of that task’s execution stack. The signal handler
is invoked even if the task is blocked (suspended or pended).
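A sketch of registering and raising a signal in the kernel follows; SIGUSR1, the handler, and the target task are illustrative, and the kernel kill() is assumed to take a task ID as its first argument, as described above:
/* Sketch: kernel-task signal handling with sigaction() and kill(). */
#include <vxWorks.h>
#include <signal.h>
#include <taskLib.h>

void usr1Handler (int sig)
    {
    /* runs in the receiving task's context, on that task's stack */
    }

void installHandler (void)
    {
    struct sigaction act;

    act.sa_handler = usr1Handler;
    act.sa_flags   = 0;
    sigemptyset (&act.sa_mask);
    sigaction (SIGUSR1, &act, NULL);   /* register the handler */
    }

void notifyTask (TASK_ID tid)
    {
    kill (tid, SIGUSR1);   /* raise the signal for the given task */
    }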
Synchronization
When used for task synchronization, a semaphore can represent a condition or event that a task is waiting for.
Initially, the semaphore is unavailable (empty). A task or ISR signals the occurrence of the event by giving the
semaphore. Another task waits for the semaphore by calling semTake(). The waiting task blocks until the event occurs
and the semaphore is given.
Note the difference in sequence between semaphores used for mutual exclusion and those used for
synchronization. For mutual exclusion, the semaphore is initially full, and each task first takes, then gives back the
semaphore. For synchronization, the semaphore is initially empty, and one task waits to take the semaphore given by
another task.
In Example, the init() routine creates the binary semaphore, attaches an ISR to an event, and spawns a task
to process the event. The routine task1() runs until it calls semTake(). It remains blocked at that point until an event causes
the ISR to call semGive(). When the ISR completes, task1() executes to process the event. There is an advantage of
handling event processing within the context of a dedicated task: less processing takes place at interrupt level, thereby
reducing interrupt latency. This model of event processing is recommended for real-time applications.
Example Using Semaphores for Task Synchronization
/* This example shows the use of semaphores for task synchronization. */
/* includes */
#include <vxWorks.h>
#include <semLib.h>
#include <taskLib.h>
#include <intLib.h>
#include <stdio.h>
#include <arch/arch/ivarch.h> /* replace arch with architecture type */
SEM_ID syncSem; /* ID of sync semaphore */
void task1 (void);
void eventInterruptSvcRout (void);
void init
    (
    int someIntNum
    )
    {
    /* connect interrupt service routine */
    intConnect (INUM_TO_IVEC (someIntNum), (VOIDFUNCPTR) eventInterruptSvcRout, 0);
    /* create semaphore */
    syncSem = semBCreate (SEM_Q_FIFO, SEM_EMPTY);
    /* spawn task used for synchronization */
    taskSpawn ("sample", 100, 0, 20000, (FUNCPTR) task1, 0,0,0,0,0,0,0,0,0,0);
    }
void task1 (void)
    {
    ...
    semTake (syncSem, WAIT_FOREVER); /* wait for event to occur */
    printf ("task 1 got the semaphore\n");
    ... /* process event */
    }
void eventInterruptSvcRout (void)
    {
    ...
    semGive (syncSem); /* let task 1 process event */
    ...
    }
Broadcast synchronization allows all tasks that are blocked on the same semaphore to be
unblocked atomically. Correct application behavior often requires a set of tasks to process an event before any task of the
set has the opportunity to process further events. The routine semFlush() addresses this class of synchronization problem
by unblocking all tasks pended on a semaphore.
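For instance, using the syncSem semaphore from the example above, a single call releases every pended task at once:
semFlush (syncSem); /* atomically unblocks all tasks pended on syncSem */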
WATCHDOG TIMERS:
VxWorks includes a watchdog-timer mechanism that allows any C function to be connected to a specified time delay.
Watchdog timers are maintained as part of the system clock ISR. Functions invoked by watchdog timers execute as
interrupt service code at the interrupt level of the system clock. Restrictions on ISRs apply to routines connected to
watchdog timers. The functions in Table are provided by the wdLib library.
A watchdog timer is first created by calling wdCreate(). Then the timer can be started by calling
wdStart(), which takes as arguments the number of ticks to delay, the C function to call, and an argument to be passed to
that function. After the specified number of ticks have elapsed, the function is called with the specified argument. The
watchdog timer can be canceled any time before the delay has elapsed by calling wdCancel().
Watchdog Timers
/* Creates a watchdog timer and sets it to go off in 3 seconds. */
/* includes */
#include <vxWorks.h>
#include <logLib.h>
#include <wdLib.h>
#include <sysLib.h>
/* defines */
#define SECONDS (3)
WDOG_ID myWatchDogId;
STATUS task (void)
    {
    /* create watchdog */
    if ((myWatchDogId = wdCreate ()) == NULL)
        return (ERROR);
    /* set timer to go off in SECONDS - printing a message to stdout */
    if (wdStart (myWatchDogId, sysClkRateGet () * SECONDS, (FUNCPTR) logMsg,
        (int) "Watchdog timer just expired\n") == ERROR)
        return (ERROR);
    /* ... */
    return (OK);
    }
MEMORY MANAGEMENT:
VxWorks provides memory management facilities for all code that executes in the kernel, as well as
memory management facilities for applications that execute as real-time processes. This chapter deals primarily with
kernel-space memory management, although it also provides information about what memory maps look like for systems
that include support for processes (and related facilities).
The VxWorks memory management system does not use swapping or paging, because the system allocates
memory within the physical address space without the need to swap data in and out of that space due to
memory constraints. VxWorks assumes that there is enough physical memory available to operate its kernel and the
applications that run on the operating system. Therefore VxWorks does not directly support a virtual memory
system. The amount of memory available to a VxWorks system depends upon the platform's hardware and the
constraints imposed by the memory management unit. This amount is usually determined dynamically by the platform
depending on how much memory is available, but in some architectures it is a hard-coded value. This value is returned by
the sysMemTop() routine, which sets the amount of memory available to the operating system for this session. There is
an additional virtual memory support component available, the VxVMI option, which is an architecture-independent
interface to the MMU. It is packaged separately as an add-on.
Information about configuring VxWorks with various memory management facilities is provided in the context of the
discussions of these facilities:
■ Shell Commands
■ System RAM Autosizing
■ Reserved Memory
■ Kernel Heap and Memory Partition Management
■ Memory Error Detection
■ Virtual Memory Management
Shell Commands
The shell's adrSpaceShow() show routine (for the C interpreter) or the adrspinfo command (for the command interpreter)
can be used to display an overview of the address space usage at the time of the call. These are included in the kernel with
the INCLUDE_ADR_SPACE_SHOW and INCLUDE_ADR_SPACE_SHELL_CMD components, respectively.
The rtpMemShow() show routine or the rtpmeminfo command can be used to display the private mappings of a process.
These are included with the INCLUDE_RTP_SHOW and INCLUDE_RTP_SHOW_SHELL_CMD components,
respectively. The kernel mappings can be displayed with the vmContextShow() show routine or the vmcontext command.
These are included with the INCLUDE_VM_SHOW and INCLUDE_VM_SHOW_SHELL_CMD components, respectively.
System RAM Autosizing
If LOCAL_MEM_AUTOSIZE is not defined, the top of the system RAM as reported by sysPhysMemTop() is the address
calculated as:
(LOCAL_MEM_LOCAL_ADRS + LOCAL_MEM_SIZE)
If the BSP is unable to perform run-time memory sizing, a compile-time error should be generated, informing the user of
the limitation. LOCAL_MEM_AUTOSIZE, LOCAL_MEM_LOCAL_ADRS, and LOCAL_MEM_SIZE are parameters
of the INCLUDE_MEM_CONFIG component.
Reserved Memory
VxWorks provides two kinds of reserved memory: user-reserved memory and persistent memory. Reserved memory is
not cleared by VxWorks at startup or during system operation. Boot loaders may or may not clear the area; see Boot
Loaders and Reserved Memory below.
User-reserved memory, configured with the BSP parameter USER_RESERVED_MEM, is part of the system RAM that
can be managed by kernel applications independently of the kernel heap.
Persistent memory, configured with the parameter PM_RESERVED_MEM, is the part of system RAM that is used by
the error detection and reporting facilities.
Boot Loaders and Reserved Memory
Boot loaders may or may not clear reserved memory, depending on the configuration that was used to create them. If the boot loader is
built with both USER_RESERVED_MEM and PM_RESERVED_MEM set to zero, the system RAM is cleared through the address
calculated as:
(LOCAL_MEM_LOCAL_ADRS + LOCAL_MEM_SIZE)
To ensure that reserved memory is not cleared, the boot loader should be created with the USER_RESERVED_MEM
and PM_RESERVED_MEM parameters set to the desired sizes; that is, the same values that are used to build the
downloaded VxWorks image.
Kernel Heap and Memory Partition Management
VxWorks provides facilities for heap access and memory partition management. The memLib and
memPartLib libraries provide routines to access the kernel heap, including standard ANSI-compatible routines as well as
routines to manipulate kernel memory partitions. The kernel heap is used by all code running in the kernel, including
kernel libraries and components, kernel applications, and by processes when executing system calls.
Memory partitions consist of areas of memory that are used for dynamic memory allocations by applications
and kernel components. Memory partitions may be used to reserve portions of memory for specific applications, or to
isolate dynamic memory usage on an application basis. The kernel heap is a specific memory partition, which is also
referred to as the system memory partition.
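A sketch of a private partition follows; the static pool and its size are illustrative:
/* Sketch: a private memory partition carved from a static buffer. */
#include <vxWorks.h>
#include <memPartLib.h>

static char pool[65536];   /* backing store for the partition */

void partitionDemo (void)
    {
    PART_ID partId;
    char *  p;

    partId = memPartCreate (pool, sizeof (pool));
    p = memPartAlloc (partId, 1024);   /* allocated from this partition,
                                          not from the kernel heap */
    if (p != NULL)
        memPartFree (partId, p);
    }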
Memory Error Detection
Support for memory error detection is provided by the following optional instrumentation libraries:
■ The memEdrLib library, which performs error checks of operations in the user heap and memory partitions in a process.
This library can be linked to an executable compiled with either the Wind River Compiler or the GNU compiler.
■ The Run-Time Error Checking (RTEC) facility, which checks for additional errors such as buffer overruns and
underruns, and performs static and automatic variable reference checks. This feature is provided only by the Wind River
Compiler.
Errors detected by these facilities are reported by the error detection and reporting facility, which must, therefore, be
included in the VxWorks kernel configuration.
Virtual Memory Management
VxWorks can be configured with an architecture-independent interface to the CPU’s memory management unit (MMU) to
provide virtual memory support.
This support includes the following features:
■ Setting up the kernel memory context at boot time.
■ Mapping pages in virtual space to physical memory.
■ Setting caching attributes on a per-page basis.
■ Setting protection attributes on a per-page basis.
■ Setting a page mapping as valid or invalid.
■ Locking and unlocking TLB entries for pages of memory.
■ Enabling page optimization.
The programmable elements of virtual memory (VM) support are provided by the vmBaseLib library.
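As an illustration of setting per-page protection attributes, the sketch below write-protects one page; it assumes the vmBaseLib routines vmStateSet() and vmPageSizeGet() and the VM_STATE_* constants from vmLib.h, so check the vmBaseLib reference entry before relying on it:
/* Sketch (assumed vmBaseLib API): write-protect one page of memory. */
#include <vxWorks.h>
#include <vmLib.h>

void protectPage (char *pageAlignedBuf)
    {
    /* NULL selects the current (kernel) virtual memory context */
    vmStateSet (NULL, (VIRT_ADDR) pageAlignedBuf, vmPageSizeGet (),
                VM_STATE_MASK_WRITABLE, VM_STATE_WRITABLE_NOT);
    }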
When process (RTP) support is included in VxWorks with the INCLUDE_RTP component, the virtual memory facilities
also provide system support for managing multiple virtual memory contexts, such as creation and deletion of
process memory context.
I/O SYSTEM.