Remove ada_user document.

We are going to make Ada a chapter in the c_user manual to simplify things.
Talked it over with Joel; the differences between ada_user and c_user are a
lot smaller than they used to be.

A simple few-page chapter will be enough for anyone to learn the differences
and will be a lot easier to maintain.
This commit is contained in:
Amar Takhar 2016-01-18 00:06:41 -05:00 committed by Amar Takhar
parent 3a717592a9
commit 11e1a6f969
43 changed files with 0 additions and 45880 deletions

Barrier Manager
###############
.. index:: barrier
Introduction
============
The barrier manager provides a unique synchronization
capability which can be used to have a set of tasks block
and be unblocked as a set. The directives provided by the
barrier manager are:
- ``rtems.barrier_create`` - Create a barrier
- ``rtems.barrier_ident`` - Get ID of a barrier
- ``rtems.barrier_delete`` - Delete a barrier
- ``rtems.barrier_wait`` - Wait at a barrier
- ``rtems.barrier_release`` - Release a barrier
Background
==========
A barrier can be viewed as a gate at which tasks wait until
the gate is opened. This has many analogies in the real world.
Horses and other farm animals may approach a closed gate and
gather in front of it, waiting for someone to open the gate so
they may proceed. Similarly, ticket holders gather at the gates
of arenas before concerts or sporting events waiting for the
arena personnel to open the gates so they may enter.
Barriers are useful during application initialization. Each
application task can perform its local initialization before
waiting for the application as a whole to be initialized. Once
all tasks have completed their independent initializations,
the "application ready" barrier can be released.
Automatic Versus Manual Barriers
--------------------------------
Just as with a real-world gate, barriers may be configured to be
manually opened or automatically opened. All tasks calling the
``rtems.barrier_wait`` directive on a manual barrier will block until a
controlling task invokes the ``rtems.barrier_release`` directive.
Automatic barriers are created with a limit to the number of
tasks which may simultaneously block at the barrier. Once
this limit is reached, all of the tasks are released. For
example, if the automatic limit is ten tasks, then the first
nine tasks calling the ``rtems.barrier_wait`` directive
will block. When the tenth task calls the ``rtems.barrier_wait`` directive, the nine
blocked tasks will be released and the tenth task returns
to the caller without blocking.
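The release rule above can be sketched as a small counting model in plain C. This is an illustration with hypothetical names, not the RTEMS implementation:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of an automatic barrier's counting rule. */
typedef struct {
    unsigned limit;    /* maximum_waiters configured at create time */
    unsigned waiting;  /* tasks currently blocked at the barrier */
} auto_barrier;

/* Models one call to the wait directive. Returns true when the caller
 * trips the release (all waiters unblock, caller does not block) and
 * false when the caller would block. */
bool auto_barrier_arrive(auto_barrier *b)
{
    if (b->waiting + 1 == b->limit) {
        b->waiting = 0;  /* the blocked tasks are released as a set */
        return true;     /* the caller returns without blocking */
    }
    b->waiting++;
    return false;
}
```

With a limit of ten, the first nine arrivals block and the tenth releases the set, matching the example in the text.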
Building a Barrier Attribute Set
--------------------------------
In general, an attribute set is built by a bitwise OR
of the desired attribute components. The following table lists
the set of valid barrier attributes:
- ``RTEMS.BARRIER_AUTOMATIC_RELEASE`` - automatically
  release the barrier when the configured number of tasks are blocked
- ``RTEMS.BARRIER_MANUAL_RELEASE`` - only release
  the barrier when the application invokes the ``rtems.barrier_release`` directive. (default)
*NOTE*: Barriers only support FIFO blocking order because all
waiting tasks are released as a set. Thus the released tasks
will all become ready to execute at the same time and compete
for the processor based upon their priority.
Attribute values are specifically designed to be
mutually exclusive, therefore bitwise OR and addition operations
are equivalent as long as each attribute appears exactly once in
the component list. An attribute listed as a default is not
required to appear in the attribute list, although it is a good
programming practice to specify default attributes. If all
defaults are desired, the attribute ``RTEMS.DEFAULT_ATTRIBUTES`` should be
specified on this call.
This example demonstrates the attribute_set parameter needed to create a
barrier with the automatic release policy. The ``attribute_set`` parameter passed to the ``rtems.barrier_create`` directive will be ``RTEMS.BARRIER_AUTOMATIC_RELEASE``. In this case, the
user must also specify the *maximum_waiters* parameter.
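Because each attribute occupies its own bit, building an attribute set is a plain bitwise OR. A sketch in C with made-up bit values (the real constants are defined by the RTEMS headers):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical bit values for illustration only; the actual constants
 * come from RTEMS. The manual-release default is all bits clear. */
#define BARRIER_MANUAL_RELEASE    0x00000000u
#define BARRIER_AUTOMATIC_RELEASE 0x00000010u

/* OR the desired attribute components together. */
uint32_t build_attributes(uint32_t a, uint32_t b)
{
    return a | b;
}
```

OR is idempotent, so listing an attribute twice is harmless with ``|`` but not with ``+``, which is why OR and addition are only equivalent when each attribute appears exactly once.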
Operations
==========
Creating a Barrier
------------------
The ``rtems.barrier_create`` directive creates
a barrier with a user-specified name and the desired attributes.
RTEMS allocates a Barrier Control Block (BCB) from the BCB free list.
This data structure is used by RTEMS to manage the newly created
barrier. Also, a unique barrier ID is generated and returned to
the calling task.
Obtaining Barrier IDs
---------------------
When a barrier is created, RTEMS generates a unique
barrier ID and assigns it to the created barrier until it is
deleted. The barrier ID may be obtained by either of two
methods. First, as the result of an invocation of the
``rtems.barrier_create`` directive, the barrier ID is stored in a
user-provided location. Second, the barrier ID may be obtained later
using the ``rtems.barrier_ident`` directive. The barrier ID is
used by other barrier manager directives to access this
barrier.
Waiting at a Barrier
--------------------
The ``rtems.barrier_wait`` directive is used to wait at
the specified barrier. Since a barrier is, by definition, never immediately available,
the task may wait forever for the barrier to be released or it may
specify a timeout. Specifying a timeout limits the interval the task will
wait before returning with an error status code.
If the barrier is configured as automatic and there are already
one less than the maximum number of waiters, then the call will
unblock all tasks waiting at the barrier and the caller will
return immediately.
When the task does wait to acquire the barrier, it
is placed in the barrier's task wait queue in FIFO order.
All tasks waiting on a barrier are returned an error
code when the barrier is deleted.
Releasing a Barrier
-------------------
The ``rtems.barrier_release`` directive is used to release
the specified barrier. When ``rtems.barrier_release``
is invoked, all tasks waiting at the barrier are immediately made ready
to execute and begin to compete for the processor.
Deleting a Barrier
------------------
The ``rtems.barrier_delete`` directive removes a barrier
from the system and frees its control block. A barrier can be
deleted by any local task that knows the barrier's ID. As a
result of this directive, all tasks blocked waiting for the
barrier to be released will be readied and returned a status code which
indicates that the barrier was deleted. Any subsequent
references to the barrier's name and ID are invalid.
Directives
==========
This section details the barrier manager's
directives. A subsection is dedicated to each of this manager's
directives and describes the calling sequence, related
constants, usage, and status codes.
BARRIER_CREATE - Create a barrier
---------------------------------
.. index:: create a barrier
**CALLING SEQUENCE:**
.. code:: ada

    procedure Barrier_Create (
       Name            : in     RTEMS.Name;
       Attribute_Set   : in     RTEMS.Attribute;
       Maximum_Waiters : in     RTEMS.Unsigned32;
       ID              :    out RTEMS.ID;
       Result          :    out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
- ``RTEMS.SUCCESSFUL`` - barrier created successfully
- ``RTEMS.INVALID_NAME`` - invalid barrier name
- ``RTEMS.INVALID_ADDRESS`` - ``id`` is NULL
- ``RTEMS.TOO_MANY`` - too many barriers created
**DESCRIPTION:**
This directive creates a barrier which resides on
the local node. The created barrier has the user-defined name
specified in ``name`` and the attributes specified in ``attribute_set``. For
control and maintenance of the barrier, RTEMS allocates and
initializes a BCB. The RTEMS-assigned barrier id is returned
in ``id``. This barrier id is used with other barrier related
directives to access the barrier.
Specifying ``RTEMS.BARRIER_AUTOMATIC_RELEASE`` in ``attribute_set``
causes tasks calling the ``rtems.barrier_wait`` directive to block until
there are ``maximum_waiters - 1`` tasks waiting at the barrier.
When the ``maximum_waiters`` task invokes the ``rtems.barrier_wait``
directive, the previous ``maximum_waiters - 1`` tasks are automatically
released and the caller returns.
In contrast, when the ``RTEMS.BARRIER_MANUAL_RELEASE``
attribute is specified, there is no limit on the number of
tasks that will block at the barrier. Only when the
``rtems.barrier_release`` directive is invoked are the tasks waiting at
the barrier unblocked.
**NOTES:**
This directive will not cause the calling task to be preempted.
The following barrier attribute constants are defined by RTEMS:
- ``RTEMS.BARRIER_AUTOMATIC_RELEASE`` - automatically
  release the barrier when the configured number of tasks are blocked
- ``RTEMS.BARRIER_MANUAL_RELEASE`` - only release
  the barrier when the application invokes the ``rtems.barrier_release``
  directive. (default)
BARRIER_IDENT - Get ID of a barrier
-----------------------------------
.. index:: get ID of a barrier
.. index:: obtain ID of a barrier
**CALLING SEQUENCE:**
.. code:: ada

    procedure Barrier_Ident (
       Name   : in     RTEMS.Name;
       ID     :    out RTEMS.ID;
       Result :    out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
- ``RTEMS.SUCCESSFUL`` - barrier identified successfully
- ``RTEMS.INVALID_NAME`` - barrier name not found
- ``RTEMS.INVALID_NODE`` - invalid node id
**DESCRIPTION:**
This directive obtains the barrier id associated
with the barrier name. If the barrier name is not unique,
then the barrier id will match one of the barriers with that
name. However, this barrier id is not guaranteed to
correspond to the desired barrier. The barrier id is used
by other barrier related directives to access the barrier.
**NOTES:**
This directive will not cause the running task to be
preempted.
BARRIER_DELETE - Delete a barrier
---------------------------------
.. index:: delete a barrier
**CALLING SEQUENCE:**
.. code:: ada

    procedure Barrier_Delete (
       ID     : in     RTEMS.ID;
       Result :    out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
- ``RTEMS.SUCCESSFUL`` - barrier deleted successfully
- ``RTEMS.INVALID_ID`` - invalid barrier id
**DESCRIPTION:**
This directive deletes the barrier specified by ``id``.
All tasks blocked waiting for the barrier to be released will be
readied and returned a status code which indicates that the
barrier was deleted. The BCB for this barrier is reclaimed
by RTEMS.
**NOTES:**
The calling task will be preempted if preemption is enabled
by the task's execution mode and a higher priority local task is
waiting on the deleted barrier. The calling task will NOT be
preempted if all of the tasks that are waiting on the barrier
are remote tasks.
The calling task does not have to be the task that
created the barrier. Any local task that knows the barrier
id can delete the barrier.
.. COMMENT: Barrier Obtain
BARRIER_WAIT - Wait at a barrier
--------------------------------
.. index:: wait at a barrier
**CALLING SEQUENCE:**
.. code:: ada

    procedure Barrier_Wait (
       ID      : in     RTEMS.ID;
       Timeout : in     RTEMS.Interval;
       Result  :    out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
- ``RTEMS.SUCCESSFUL`` - barrier released and task unblocked
- ``RTEMS.UNSATISFIED`` - barrier not available
- ``RTEMS.TIMEOUT`` - timed out waiting for barrier
- ``RTEMS.OBJECT_WAS_DELETED`` - barrier deleted while waiting
- ``RTEMS.INVALID_ID`` - invalid barrier id
**DESCRIPTION:**
This directive acquires the barrier specified by
id. The ``RTEMS.WAIT`` and ``RTEMS.NO_WAIT``
components of the options parameter indicate whether the calling task
wants to wait for the barrier to become available or return immediately
if the barrier is not currently available. With either ``RTEMS.WAIT`` or ``RTEMS.NO_WAIT``,
if the current barrier count is positive, then it is
decremented by one and the barrier is successfully acquired by
returning immediately with a successful return code.
Conceptually, the calling task should always be thought
of as blocking when it makes this call and being unblocked when
the barrier is released. If the barrier is configured for
manual release, this rule of thumb will always be valid.
If the barrier is configured for automatic release, all callers
will block except for the one which is the Nth task which trips
the automatic release condition.
The timeout parameter specifies the maximum interval the calling task is
willing to be blocked waiting for the barrier. If it is set to ``RTEMS.NO_TIMEOUT``, then the calling task will wait forever.
If the barrier is available or the ``RTEMS.NO_WAIT`` option
component is set, then timeout is ignored.
**NOTES:**
The following barrier acquisition option constants are defined by RTEMS:
- ``RTEMS.WAIT`` - task will wait for barrier (default)
- ``RTEMS.NO_WAIT`` - task should not wait
A clock tick is required to support the timeout functionality of
this directive.
.. COMMENT: Release Barrier
BARRIER_RELEASE - Release a barrier
-----------------------------------
.. index:: release a barrier
**CALLING SEQUENCE:**
.. code:: ada

    procedure Barrier_Release (
       ID       : in     RTEMS.ID;
       Released :    out RTEMS.Unsigned32;
       Result   :    out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
- ``RTEMS.SUCCESSFUL`` - barrier released successfully
- ``RTEMS.INVALID_ID`` - invalid barrier id
**DESCRIPTION:**
This directive releases the barrier specified by id.
All tasks waiting at the barrier will be unblocked.
If the running task's preemption mode is enabled and one of
the unblocked tasks has a higher priority than the running task, the
running task will be preempted.
**NOTES:**
The calling task may be preempted if it causes a
higher priority task to be made ready for execution.
.. COMMENT: COPYRIGHT (c) 1988-2008.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.

Board Support Packages
######################
.. index:: Board Support Packages
.. index:: BSPs
Introduction
============
.. index:: BSP, definition
A board support package (BSP) is a collection of
user-provided facilities which interface RTEMS and an
application with a specific hardware platform. These facilities
may include hardware initialization, device drivers, user
extensions, and a Multiprocessor Communications Interface
(MPCI). However, a minimal BSP need only support processor
reset and initialization and, if needed, a clock tick.
Reset and Initialization
========================
An RTEMS based application is initiated or
re-initiated when the processor is reset. This initialization
code is responsible for preparing the target platform for the
RTEMS application. Although the exact actions performed by the
initialization code are highly processor and target dependent,
the logical functionality of these actions are similar across a
variety of processors and target platforms.
Normally, the BSP and some of the application initialization are
intertwined in the RTEMS initialization sequence controlled by
the shared function ``boot_card()``.
The reset application initialization code is executed
first when the processor is reset. All of the hardware must be
initialized to a quiescent state by this software before
initializing RTEMS. When in quiescent state, devices do not
generate any interrupts or require any servicing by the
application. Some of the hardware components may be initialized
in this code as well as any application initialization that does
not involve calls to RTEMS directives.
The processor's Interrupt Vector Table which will be used by the
application may need to be set to the required value by the reset
application initialization code. Because interrupts are enabled
automatically by RTEMS as part of the context switch to the first task,
the Interrupt Vector Table MUST be set before this directive is invoked
to ensure correct interrupt vectoring. The processor's Interrupt Vector
Table must be accessible by RTEMS as it will be modified by RTEMS when
installing user Interrupt Service Routines (ISRs). On some CPUs, RTEMS
installs its own Interrupt Vector Table as part of initialization and
thus these requirements are met automatically. The reset code which is
executed before the call to any RTEMS initialization routines has the
following requirements:
- Must not make any blocking RTEMS directive calls.
- If the processor supports multiple privilege levels, must leave
the processor in the most privileged, or supervisory, state.
- Must allocate a stack of sufficient size to execute the initialization
and shutdown of the system. This stack area will NOT be used by any task
once the system is initialized. This stack is often reserved via the
linker script or in the assembly language start up file.
- Must initialize the stack pointer for the initialization process to
  point to the allocated stack area.
- Must initialize the processor's Interrupt Vector Table.
- Must disable all maskable interrupts.
- If the processor supports a separate interrupt stack, must allocate
the interrupt stack and initialize the interrupt stack pointer.
At the end of the initialization sequence, RTEMS does not return to the
BSP initialization code, but instead context switches to the highest
priority task to begin application execution. This task is typically
a User Initialization Task which is responsible for performing both
local and global application initialization which is dependent on RTEMS
facilities. It is also responsible for initializing any higher level
RTEMS services the application uses such as networking and blocking
device drivers.
Interrupt Stack Requirements
----------------------------
The worst-case stack usage by interrupt service
routines must be taken into account when designing an
application. If the processor supports interrupt nesting, the
stack usage must include the deepest nest level. The worst-case
stack usage must account for the following requirements:
- Processor's interrupt stack frame
- Processor's subroutine call stack frame
- RTEMS system calls
- Registers saved on stack
- Application subroutine calls
The size of the interrupt stack must be greater than or equal to the
configured minimum stack size.
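The sizing rule can be written down directly: take the worst-case usage across the deepest nesting level, but never go below the configured minimum. A hedged sketch, with invented byte counts and a hypothetical helper name:

```c
#include <assert.h>

/* Hypothetical sizing helper: worst-case bytes per interrupt level
 * times the deepest nesting level, clamped to the configured minimum
 * stack size. The figures passed in are illustrative only. */
unsigned interrupt_stack_size(unsigned worst_bytes_per_level,
                              unsigned max_nesting,
                              unsigned min_stack_size)
{
    unsigned worst = worst_bytes_per_level * max_nesting;
    return worst > min_stack_size ? worst : min_stack_size;
}
```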
Processors with a Separate Interrupt Stack
------------------------------------------
Some processors support a separate stack for interrupts. When an
interrupt is vectored and the interrupt is not nested, the processor
will automatically switch from the current stack to the interrupt stack.
The size of this stack is based solely on the worst-case stack usage by
interrupt service routines.
The dedicated interrupt stack for the entire application on some
architectures is supplied and initialized by the reset and initialization
code of the user's Board Support Package. Whether allocated and
initialized by the BSP or RTEMS, since all ISRs use this stack, the
stack size must take into account the worst case stack usage by any
combination of nested ISRs.
Processors Without a Separate Interrupt Stack
---------------------------------------------
Some processors do not support a separate stack for interrupts. In this
case, without special assistance every task's stack must include
enough space to handle the task's worst-case stack usage as well as
the worst-case interrupt stack usage. This is necessary because the
worst-case interrupt nesting could occur while any task is executing.
On many processors without dedicated hardware managed interrupt stacks,
RTEMS manages a dedicated interrupt stack in software. If this capability
is supported on a CPU, then it is logically equivalent to the processor
supporting a separate interrupt stack in hardware.
Device Drivers
==============
Device drivers consist of control software for
special peripheral devices and provide a logical interface for
the application developer. The RTEMS I/O manager provides
directives which allow applications to access these device
drivers in a consistent fashion. A Board Support Package may
include device drivers to access the hardware on the target
platform. These devices typically include serial and parallel
ports, counter/timer peripherals, real-time clocks, disk
interfaces, and network controllers.
For more information on device drivers, refer to the
I/O Manager chapter.
Clock Tick Device Driver
------------------------
Most RTEMS applications will include a clock tick
device driver which invokes the ``rtems.clock_tick``
directive at regular intervals. The clock tick is necessary if
the application is to utilize timeslicing, the clock manager, the
timer manager, the rate monotonic manager, or the timeout option on blocking
directives.
The clock tick is usually provided as an interrupt from a counter/timer
or a real-time clock device. When a counter/timer is used to provide the
clock tick, the device is typically programmed to operate in continuous
mode. This mode selection causes the device to automatically reload the
initial count and continue the countdown without programmer intervention.
This reduces the overhead required to manipulate the counter/timer in
the clock tick ISR and increases the accuracy of tick occurrences.
The initial count can be based on the ``microseconds_per_tick`` field
in the RTEMS Configuration Table. An alternate approach is to set
the initial count for a fixed time period (such as one millisecond)
and have the ISR invoke ``rtems.clock_tick`` on the
configured ``microseconds_per_tick`` boundaries. Obviously, this
can induce some error if the configured ``microseconds_per_tick``
is not evenly divisible by the chosen clock interrupt quantum.
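The induced error is simple modular arithmetic: with a 1000 microsecond interrupt quantum, a configured tick of 10000 microseconds divides evenly, while 2500 microseconds would lose 500 microseconds per tick. A sketch with hypothetical helper names:

```c
#include <assert.h>

/* Interrupts per RTEMS clock tick when the ISR fires every quantum_us
 * microseconds; the division truncates. */
unsigned interrupts_per_tick(unsigned microseconds_per_tick,
                             unsigned quantum_us)
{
    return microseconds_per_tick / quantum_us;
}

/* Microseconds of error induced per tick when the configured tick
 * length is not an exact multiple of the quantum. */
unsigned tick_error_us(unsigned microseconds_per_tick,
                       unsigned quantum_us)
{
    return microseconds_per_tick % quantum_us;
}
```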
It is important to note that the interval between
clock ticks directly impacts the granularity of RTEMS timing
operations. In addition, the frequency of clock ticks is an
important factor in the overall level of system overhead. A
high clock tick frequency results in less processor time being
available for task execution due to the increased number of
clock tick ISRs.
User Extensions
===============
RTEMS allows the application developer to augment
selected features by invoking user-supplied extension routines
when the following system events occur:
- Task creation
- Task initiation
- Task reinitiation
- Task deletion
- Task context switch
- Post task context switch
- Task begin
- Task exits
- Fatal error detection
User extensions can be used to implement a wide variety of
functions including execution profiling, non-standard
coprocessor support, debug support, and error detection and
recovery. For example, the context of a non-standard numeric
coprocessor may be maintained via the user extensions. In this
example, the task creation and deletion extensions are
responsible for allocating and deallocating the context area,
the task initiation and reinitiation extensions would be
responsible for priming the context area, and the task context
switch extension would save and restore the context of the
device.
For more information on user extensions, refer to the `User Extensions Manager`_ chapter.
Multiprocessor Communications Interface (MPCI)
==============================================
RTEMS requires that an MPCI layer be provided when a
multiple node application is developed. This MPCI layer must
provide an efficient and reliable communications mechanism
between the multiple nodes. Tasks on different nodes
communicate and synchronize with one another via the MPCI. Each
MPCI layer must be tailored to support the architecture of the
target platform.
For more information on the MPCI, refer to the
Multiprocessing Manager chapter.
Tightly-Coupled Systems
-----------------------
A tightly-coupled system is a multiprocessor
configuration in which the processors communicate solely via
shared global memory. The MPCI can simply place the RTEMS
packets in the shared memory space. The two primary
considerations when designing an MPCI for a tightly-coupled
system are data consistency and informing another node of a
packet.
The data consistency problem may be solved using
atomic "test and set" operations to provide a "lock" in the
shared memory. It is important to minimize the length of time
any particular processor locks a shared data structure.
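The "test and set" lock described above can be sketched with C11 atomics. On a real tightly-coupled system the flag would live in the shared memory region visible to every node; the names here are illustrative:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* A lock word that would notionally reside in shared global memory. */
static atomic_flag packet_lock = ATOMIC_FLAG_INIT;

/* Atomically test and set the flag; returns true if the lock was
 * acquired, false if another processor already holds it. */
bool packet_lock_try(void)
{
    return !atomic_flag_test_and_set(&packet_lock);
}

/* Release the lock; keep the locked interval short, as the text
 * advises. */
void packet_lock_release(void)
{
    atomic_flag_clear(&packet_lock);
}
```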
The problem of informing another node of a packet can
be addressed using one of two techniques. The first technique
is to use an interprocessor interrupt capability to cause an
interrupt on the receiving node. This technique requires that
special support hardware be provided by either the processor
itself or the target platform. The second technique is to have
a node poll for arrival of packets. The drawback to this
technique is the overhead associated with polling.
Loosely-Coupled Systems
-----------------------
A loosely-coupled system is a multiprocessor
configuration in which the processors communicate via some type
of communications link which is not shared global memory. The
MPCI sends the RTEMS packets across the communications link to
the destination node. The characteristics of the communications
link vary widely and have a significant impact on the MPCI
layer. For example, the bandwidth of the communications link
has an obvious impact on the maximum MPCI throughput.
The characteristics of a shared network, such as
Ethernet, lend themselves to supporting an MPCI layer. These
networks provide both the point-to-point and broadcast
capabilities which are expected by RTEMS.
Systems with Mixed Coupling
---------------------------
A mixed-coupling system is a multiprocessor
configuration in which the processors communicate via both
shared memory and communications links. A unique characteristic
of mixed-coupling systems is that a node may not have access to
all communication methods. There may be multiple shared memory
areas and communication links. Therefore, one of the primary
functions of the MPCI layer is to efficiently route RTEMS
packets between nodes. This routing may be based on numerous
algorithms. In addition, the router may provide alternate
communications paths in the event of an overload or a partial
failure.
Heterogeneous Systems
---------------------
Designing an MPCI layer for a heterogeneous system
requires special considerations by the developer. RTEMS is
designed to eliminate many of the problems associated with
sharing data in a heterogeneous environment. The MPCI layer
need only address the representation of thirty-two (32) bit
unsigned quantities.
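For a 32-bit unsigned quantity, that representation issue reduces to byte-order conversion between nodes of differing endianness. A self-contained sketch (not the RTEMS MPCI API):

```c
#include <assert.h>
#include <stdint.h>

/* Reverse the byte order of a 32-bit unsigned quantity. */
uint32_t swap32(uint32_t value)
{
    return (value >> 24)
         | ((value >> 8) & 0x0000FF00u)
         | ((value << 8) & 0x00FF0000u)
         | (value << 24);
}
```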
For more information on supporting a heterogeneous
system, refer to the Supporting Heterogeneous Environments section in
the Multiprocessing Manager chapter.
.. COMMENT: COPYRIGHT (c) 1988-2002.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.

Chains
######
.. index:: chains
Introduction
============
The Chains API is an interface to the Super Core (score) chain
implementation. The Super Core uses chains for all list type
functions. This includes wait queues and task queues. The Chains API
provided by RTEMS is:
- ``rtems.chain_node`` - Chain node used in user objects
- ``rtems.chain_control`` - Chain control node
- ``rtems.chain_initialize`` - initialize the chain with nodes
- ``rtems.chain_initialize_empty`` - initialize the chain as empty
- ``rtems.chain_is_null_node`` - Is the node NULL ?
- ``rtems.chain_head`` - Return the chains head
- ``rtems.chain_tail`` - Return the chains tail
- ``rtems.chain_are_nodes_equal`` - Are the nodes equal ?
- ``rtems.chain_is_empty`` - Is the chain empty ?
- ``rtems.chain_is_first`` - Is the Node the first in the chain ?
- ``rtems.chain_is_last`` - Is the Node the last in the chain ?
- ``rtems.chain_has_only_one_node`` - Does the node have one node ?
- ``rtems.chain_node_count_unprotected`` - Returns the node count of the chain (unprotected)
- ``rtems.chain_is_head`` - Is the node the head ?
- ``rtems.chain_is_tail`` - Is the node the tail ?
- ``rtems.chain_extract`` - Extract the node from the chain
- ``rtems.chain_extract_unprotected`` - Extract the node from the chain (unprotected)
- ``rtems.chain_get`` - Return the first node on the chain
- ``rtems.chain_get_unprotected`` - Return the first node on the chain (unprotected)
- ``rtems.chain_get_first_unprotected`` - Get the first node on the chain (unprotected)
- ``rtems.chain_insert`` - Insert the node into the chain
- ``rtems.chain_insert_unprotected`` - Insert the node into the chain (unprotected)
- ``rtems.chain_append`` - Append the node to chain
- ``rtems.chain_append_unprotected`` - Append the node to chain (unprotected)
- ``rtems.chain_prepend`` - Prepend the node to the front of the chain
- ``rtems.chain_prepend_unprotected`` - Prepend the node to chain (unprotected)
Background
==========
The Chains API maps to the Super Core Chains API. Chains are
implemented as a doubly linked list of nodes anchored to a control
node. The list starts at the control node and is terminated at the
control node. A node has previous and next pointers. Being a doubly
linked list, nodes can be inserted and removed without the need to
traverse the chain.
Chains have a small memory footprint, can be used in interrupt
service routines, and are thread safe in a multi-threaded
environment. The directives list which operations disable interrupts.
Chains are very useful in Board Support packages and applications.
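The anchored doubly linked list can be sketched in a few lines of self-contained C; the names and layout are illustrative, not the Super Core implementation. Because each node stores both neighbours, extraction never traverses the chain:

```c
#include <assert.h>

/* Illustrative chain node; the control node doubles as the anchor. */
typedef struct node {
    struct node *next;
    struct node *prev;
} node;

/* An empty chain points back at its own control node. */
void chain_init(node *control) { control->next = control->prev = control; }

int chain_is_empty(const node *control) { return control->next == control; }

/* O(1) append just before the control node, i.e. at the tail. */
void chain_append(node *control, node *n)
{
    n->prev = control->prev;
    n->next = control;
    control->prev->next = n;
    control->prev = n;
}

/* O(1) extraction: no traversal is needed because the node knows
 * both of its neighbours. */
void chain_extract(node *n)
{
    n->prev->next = n->next;
    n->next->prev = n->prev;
}
```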
Nodes
-----
A chain is made up from nodes that originate from a chain control
object. A node is of type ``rtems.chain_node``. The node
is designed to be part of a user data structure and a cast is used to
move from the node address to the user data structure address. For
example:
.. code:: c

    typedef struct foo
    {
        rtems.chain_node node;
        int              bar;
    } foo;
creates a type ``foo`` that can be placed on a chain. To get the
foo structure from the list you perform the following:
.. code:: c

    foo* get_foo(rtems.chain_control* control)
    {
        return (foo*) rtems.chain_get(control);
    }
The node is placed at the start of the user's structure to allow the
node address on the chain to be easily cast to the user's structure
address.
Controls
--------
A chain is anchored with a control object. The chain control provides
the user with access to the nodes on the chain. The control is the
head and tail of the chain.
.. code:: c

    Control
      first ------------------------>
      permanent_null <--------------- NODE
      last ------------------------->
The implementation does not require special checks for manipulating
the first and last nodes on the chain. To accomplish this, the
``rtems.chain_control`` structure is treated as two overlapping
``rtems.chain_node`` structures. The permanent head of the chain
overlays a node structure on the ``first`` and ``permanent_null``
fields. The permanent tail of the chain overlays a node structure on
the ``permanent_null`` and ``last`` fields of the structure.
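The overlay can be made concrete with a small C sketch. The field names are illustrative; the point is only that the three pointers of the control double as two overlapping two-pointer nodes (assuming, as in practice, no padding between pointer fields):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative two-pointer node and three-pointer control. */
typedef struct cnode {
    struct cnode *next;
    struct cnode *prev;
} cnode;

typedef struct {
    cnode *first;          /* head.next                         */
    cnode *permanent_null; /* head.prev and tail.next, always 0 */
    cnode *last;           /* tail.prev                         */
} chain_control;
```

Casting ``&control.first`` yields the permanent head node and ``&control.permanent_null`` the permanent tail node, so the first and last real nodes need no special-case handling.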
Operations
==========
Multi-threading
---------------
Chains are designed to be used in a multi-threading environment. The
directives list which operations mask interrupts. Chains support
tasks and interrupt service routines appending and extracting nodes
without the need for extra locks. Chains, however, cannot ensure the
integrity of a chain for all operations. This is the responsibility of
the user. For example, an interrupt service routine extracting nodes
while a task is iterating over the chain can have unpredictable
results.
Creating a Chain
----------------
To create a chain you need to declare a chain control then add nodes
to the control. Consider a user structure and chain control:
.. code:: c

    typedef struct foo
    {
        rtems.chain_node node;
        uint8_t*         data;
    } foo;

    rtems.chain_control chain;
Add nodes with the following code:
.. code:: c

    rtems.chain_initialize_empty (&chain);

    for (i = 0; i < count; i++)
    {
        foo* bar = malloc (sizeof (foo));
        if (!bar)
            return -1;
        bar->data = malloc (size);
        rtems.chain_append (&chain, &bar->node);
    }
The chain is initialized and the nodes allocated and appended to the
chain. This is an example of a pool of buffers.
Iterating a Chain
-----------------
.. index:: chain iterate
Iterating a chain is a common function. The example shows how to
iterate the buffer pool chain created in the last section to find
buffers starting with a specific string. If the buffer is located it is
extracted from the chain and placed on another chain:
.. code:: c
void foobar (const char* match,
rtems.chain_control* chain,
rtems.chain_control* out)
{
rtems.chain_node* node;
foo* bar;
rtems.chain_initialize_empty (out);
node = chain->first;
while (!rtems.chain_is_tail (chain, node))
{
bar = (foo*) node;
rtems.chain_node* next_node = node->next;
if (strcmp (match, bar->data) == 0)
{
rtems.chain_extract (node);
rtems.chain_append (out, node);
}
node = next_node;
}
}
Directives
==========
This section details the Chains directives.
.. COMMENT: Initialize this Chain With Nodes
Initialize Chain With Nodes
---------------------------
.. index:: chain initialize
**CALLING SEQUENCE:**
**RETURNS**
Returns nothing.
**DESCRIPTION:**
This function takes a pointer to a chain control and initializes it
to contain a set of chain nodes. The chain will contain ``number_nodes``
chain nodes from the memory pointed to by ``start_address``. Each node
is assumed to be ``node_size`` bytes.
**NOTES:**
This call will discard any nodes on the chain.
This call does NOT initialize any user data on each node.
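As a hedged sketch of what such an initialization does (a self-contained miniature with hypothetical names, not the RTEMS implementation): the memory at ``start_address`` is cut into ``number_nodes`` pieces of ``node_size`` bytes each and linked into a list.

```c
/* Hypothetical mini API illustrating "initialize chain with nodes":
 * carve number_nodes nodes of node_size bytes out of one memory block
 * and link them.  Not the RTEMS implementation. */
#include <stddef.h>
#include <stdint.h>

typedef struct node {
    struct node *next;
} node;

typedef struct {
    node  *first;
    size_t count;
} list;

static void list_initialize_with_nodes(list *l, void *start_address,
                                       size_t number_nodes, size_t node_size)
{
    uint8_t *p = start_address;
    node *prev = NULL;

    l->first = NULL;
    l->count = number_nodes;

    for (size_t i = 0; i < number_nodes; ++i) {
        /* each node occupies node_size bytes of the supplied block */
        node *n = (node *) (p + i * node_size);
        n->next = NULL;
        if (prev)
            prev->next = n;
        else
            l->first = n;
        prev = n;
    }
}
```

Note that, as the manual warns, only the link fields are set up; any user data that follows the node header in each ``node_size`` slot is left untouched.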
.. COMMENT: Initialize this Chain as Empty
Initialize Empty
----------------
.. index:: chain initialize empty
**CALLING SEQUENCE:**
**RETURNS**
Returns nothing.
**DESCRIPTION:**
This function takes a pointer to a chain control and initializes it
to empty.
**NOTES:**
This call will discard any nodes on the chain.
Is Null Node ?
--------------
.. index:: chain is node null
**CALLING SEQUENCE:**
**RETURNS**
Returns ``true`` if the node pointer is NULL and ``false`` otherwise.
**DESCRIPTION:**
This function tests whether the node pointer is NULL, returning ``true`` if it is.
Head
----
.. index:: chain get head
**CALLING SEQUENCE:**
**RETURNS**
Returns the permanent head node of the chain.
**DESCRIPTION:**
This function returns a pointer to the first node on the chain.
Tail
----
.. index:: chain get tail
**CALLING SEQUENCE:**
**RETURNS**
Returns the permanent tail node of the chain.
**DESCRIPTION:**
This function returns a pointer to the last node on the chain.
Are Two Nodes Equal ?
---------------------
.. index:: chain are nodes equal
**CALLING SEQUENCE:**
**RETURNS**
This function returns ``true`` if the left node and the right node are
equal, and ``false`` otherwise.
**DESCRIPTION:**
This function returns ``true`` if the left node and the right node are
equal, and ``false`` otherwise.
Is the Chain Empty
------------------
.. index:: chain is chain empty
**CALLING SEQUENCE:**
**RETURNS**
This function returns ``true`` if there are no nodes on the chain and
``false`` otherwise.
**DESCRIPTION:**
This function returns ``true`` if there are no nodes on the chain and
``false`` otherwise.
Is this the First Node on the Chain ?
-------------------------------------
.. index:: chain is node the first
**CALLING SEQUENCE:**
**RETURNS**
This function returns ``true`` if the node is the first node on a chain
and ``false`` otherwise.
**DESCRIPTION:**
This function returns ``true`` if the node is the first node on a chain
and ``false`` otherwise.
Is this the Last Node on the Chain ?
------------------------------------
.. index:: chain is node the last
**CALLING SEQUENCE:**
**RETURNS**
This function returns ``true`` if the node is the last node on a chain
and ``false`` otherwise.
**DESCRIPTION:**
This function returns ``true`` if the node is the last node on a chain
and ``false`` otherwise.
Does this Chain have only One Node ?
------------------------------------
.. index:: chain only one node
**CALLING SEQUENCE:**
**RETURNS**
This function returns ``true`` if there is only one node on the chain
and ``false`` otherwise.
**DESCRIPTION:**
This function returns ``true`` if there is only one node on the chain
and ``false`` otherwise.
Returns the node count of the chain (unprotected)
-------------------------------------------------
.. index:: chain node count
**CALLING SEQUENCE:**
**RETURNS**
This function returns the node count of the chain.
**DESCRIPTION:**
This function returns the node count of the chain.
Is this Node the Chain Head ?
-----------------------------
.. index:: chain is node the head
**CALLING SEQUENCE:**
**RETURNS**
This function returns ``true`` if the node is the head of the chain
and ``false`` otherwise.
**DESCRIPTION:**
This function returns ``true`` if the node is the head of the chain
and ``false`` otherwise.
Is this Node the Chain Tail ?
-----------------------------
.. index:: chain is node the tail
**CALLING SEQUENCE:**
**RETURNS**
This function returns ``true`` if the node is the tail of the chain
and ``false`` otherwise.
**DESCRIPTION:**
This function returns ``true`` if the node is the tail of the chain
and ``false`` otherwise.
Extract a Node
--------------
.. index:: chain extract a node
**CALLING SEQUENCE:**
**RETURNS**
Returns nothing.
**DESCRIPTION:**
This routine extracts the node from the chain on which it resides.
**NOTES:**
Interrupts are disabled while extracting the node to ensure the
atomicity of the operation.
Use ``rtems.chain_extract_unprotected()`` to avoid disabling of
interrupts.
Get the First Node
------------------
.. index:: chain get first node
**CALLING SEQUENCE:**
**RETURNS**
Returns a pointer to a node. If a node was removed, then a pointer to
that node is returned. If the chain was empty, then NULL is
returned.
**DESCRIPTION:**
This function removes the first node from the chain and returns a
pointer to that node. If the chain is empty, then NULL is returned.
**NOTES:**
Interrupts are disabled while obtaining the node to ensure the
atomicity of the operation.
Use ``rtems.chain_get_unprotected()`` to avoid disabling of
interrupts.
Get the First Node (unprotected)
--------------------------------
.. index:: chain get first node
**CALLING SEQUENCE:**
**RETURNS:**
A pointer to the former first node is returned.
**DESCRIPTION:**
Removes the first node from the chain and returns a pointer to it. In case the
chain was empty, then the results are unpredictable.
**NOTES:**
The function does nothing to ensure the atomicity of the operation.
Insert a Node
-------------
.. index:: chain insert a node
**CALLING SEQUENCE:**
**RETURNS**
Returns nothing.
**DESCRIPTION:**
This routine inserts a node on a chain immediately following the
specified node.
**NOTES:**
Interrupts are disabled during the insert to ensure the atomicity of
the operation.
Use ``rtems.chain_insert_unprotected()`` to avoid disabling of
interrupts.
Append a Node
-------------
.. index:: chain append a node
**CALLING SEQUENCE:**
**RETURNS**
Returns nothing.
**DESCRIPTION:**
This routine appends a node to the end of a chain.
**NOTES:**
Interrupts are disabled during the append to ensure the atomicity of
the operation.
Use ``rtems.chain_append_unprotected()`` to avoid disabling of
interrupts.
Prepend a Node
--------------
.. index:: prepend node
**CALLING SEQUENCE:**
**RETURNS**
Returns nothing.
**DESCRIPTION:**
This routine prepends a node to the front of the chain.
**NOTES:**
Interrupts are disabled during the prepend to ensure the atomicity of
the operation.
Use ``rtems.chain_prepend_unprotected()`` to avoid disabling of
interrupts.
.. COMMENT: Copyright 2014 Gedare Bloom.
.. COMMENT: All rights reserved.

Clock Manager
#############
.. index:: clock
Introduction
============
The clock manager provides support for time of day
and other time related capabilities. The directives provided by
the clock manager are:
- ``rtems.clock_set`` - Set date and time
- ``rtems.clock_get`` - Get date and time information
- ``rtems.clock_get_tod`` - Get date and time in TOD format
- ``rtems.clock_get_tod_timeval`` - Get date and time in timeval format
- ``rtems.clock_get_seconds_since_epoch`` - Get seconds since epoch
- ``rtems.clock_get_ticks_per_second`` - Get ticks per second
- ``rtems.clock_get_ticks_since_boot`` - Get current ticks counter value
- ``rtems.clock_tick_later`` - Get tick value in the future
- ``rtems.clock_tick_later_usec`` - Get tick value in the future in microseconds
- ``rtems.clock_tick_before`` - Is tick value before a point in time
- ``rtems.clock_get_uptime`` - Get time since boot
- ``rtems.clock_get_uptime_timeval`` - Get time since boot in timeval format
- ``rtems.clock_get_uptime_seconds`` - Get seconds since boot
- ``rtems.clock_get_uptime_nanoseconds`` - Get nanoseconds since boot
- ``rtems.clock_set_nanoseconds_extension`` - Install the nanoseconds since last tick handler
- ``rtems.clock_tick`` - Announce a clock tick
Background
==========
Required Support
----------------
For the features provided by the clock manager to be
utilized, periodic timer interrupts are required. Therefore, a
real-time clock or hardware timer is necessary to create the
timer interrupts. The ``rtems.clock_tick``
directive is normally called
by the timer ISR to announce to RTEMS that a system clock tick
has occurred. Elapsed time is measured in ticks. A tick is
defined to be an integral number of microseconds which is
specified by the user in the Configuration Table.
Time and Date Data Structures
-----------------------------
The clock facilities of the clock manager operate
upon calendar time. These directives utilize the following date
and time record for the native time and date format:
.. code:: c
type Time_Of_Day is
record
Year : RTEMS.Unsigned32; -- year, A.D.
Month : RTEMS.Unsigned32; -- month, 1 .. 12
Day : RTEMS.Unsigned32; -- day, 1 .. 31
Hour : RTEMS.Unsigned32; -- hour, 0 .. 23
Minute : RTEMS.Unsigned32; -- minute, 0 .. 59
Second : RTEMS.Unsigned32; -- second, 0 .. 59
Ticks : RTEMS.Unsigned32; -- elapsed ticks between seconds
end record;
The native date and time format is the only format
supported when setting the system date and time using the
``rtems.clock_set`` directive. Some applications
expect to operate on a "UNIX-style" date and time data structure. The
``rtems.clock_get_tod_timeval`` directive always returns
the date and time in ``struct timeval`` format. The
``rtems.clock_get`` directive can optionally return
the current date and time in this format.
The ``struct timeval`` data structure has two fields: ``tv_sec``
and ``tv_usec`` which are seconds and microseconds, respectively.
The ``tv_sec`` field in this data structure is the number of seconds
since the POSIX epoch of January 1, 1970 but will never be prior to
the RTEMS epoch of January 1, 1988.
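The offset between the two epochs is a fixed number of seconds: 18 years, 4 of which (1972, 1976, 1980, 1984) contribute a leap day. A small self-contained check (illustrative, not RTEMS code) makes the relationship concrete:

```c
/* Compute the fixed offset between the POSIX epoch (1970-01-01) and
 * the RTEMS epoch (1988-01-01).  A tv_sec value, as described above,
 * never falls below this offset. */
#include <stdint.h>

static int is_leap(int year)
{
    return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
}

static int64_t seconds_1970_through_1988(void)
{
    int64_t days = 0;
    /* 18 whole years from 1970 up to (not including) 1988 */
    for (int year = 1970; year < 1988; ++year)
        days += is_leap(year) ? 366 : 365;
    return days * 24 * 60 * 60;
}
```

The result, 567993600 seconds, is the smallest ``tv_sec`` a correctly set RTEMS system will ever report.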
Clock Tick and Timeslicing
--------------------------
.. index:: timeslicing
Timeslicing is a task scheduling discipline in which
tasks of equal priority are executed for a specific period of
time before control of the CPU is passed to another task. It is
also sometimes referred to as the automatic round-robin
scheduling algorithm. The length of time allocated to each task
is known as the quantum or timeslice.
The system's timeslice is defined as an integral
number of ticks, and is specified in the Configuration Table.
The timeslice is defined for the entire system of tasks, but
timeslicing is enabled and disabled on a per task basis.
The ``rtems.clock_tick``
directive implements timeslicing by
decrementing the running task's time-remaining counter when both
timeslicing and preemption are enabled. If the task's timeslice
has expired, then that task will be preempted if there exists a
ready task of equal priority.
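A minimal sketch of the bookkeeping just described (illustrative only, not RTEMS internals): on each tick the running task's remaining quantum is decremented only when both timeslicing and preemption are enabled, and exhausting it signals that an equal-priority ready task may be scheduled.

```c
/* Illustrative timeslice bookkeeping, not the RTEMS implementation. */
typedef struct {
    int timeslice_enabled;
    int preemption_enabled;
    int ticks_remaining;   /* reloaded from ticks_per_timeslice */
} task_control;

/* Returns 1 if this tick exhausted the running task's quantum. */
static int clock_tick_timeslice(task_control *running, int ticks_per_timeslice)
{
    if (!running->timeslice_enabled || !running->preemption_enabled)
        return 0;                                   /* counter untouched */
    if (--running->ticks_remaining > 0)
        return 0;                                   /* quantum not yet used up */
    running->ticks_remaining = ticks_per_timeslice; /* start a new quantum */
    return 1;                                       /* may yield to equal priority */
}
```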
Delays
------
.. index:: delays
A sleep timer allows a task to delay for a given
interval or up until a given time, and then wake and continue
execution. This type of timer is created automatically by the
``rtems.task_wake_after``
and ``rtems.task_wake_when`` directives and, as a result,
does not have an RTEMS ID. Once activated, a sleep timer cannot
be explicitly deleted. Each task may activate one and only one
sleep timer at a time.
Timeouts
--------
.. index:: timeouts
Timeouts are a special type of timer automatically
created when the timeout option is used on the
``rtems.message_queue_receive``, ``rtems.event_receive``,
``rtems.semaphore_obtain`` and ``rtems.region_get_segment`` directives.
Each task may have one and only one timeout active at a time.
When a timeout expires, it unblocks the task with a timeout status code.
Operations
==========
Announcing a Tick
-----------------
RTEMS provides the ``rtems.clock_tick`` directive which is
called from the user's real-time clock ISR to inform RTEMS that
a tick has elapsed. The tick frequency value, defined in
microseconds, is a configuration parameter found in the
Configuration Table. RTEMS divides one million microseconds
(one second) by the number of microseconds per tick to determine
the number of calls to the ``rtems.clock_tick`` directive per second. The
frequency of ``rtems.clock_tick``
calls determines the resolution
(granularity) for all time dependent RTEMS actions. For
example, calling ``rtems.clock_tick``
ten times per second yields a higher
resolution than calling ``rtems.clock_tick``
two times per second. The ``rtems.clock_tick``
directive is responsible for maintaining both
calendar time and the dynamic set of timers.
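The arithmetic above can be written as a small helper (illustrative; the parameter mirrors the configured microseconds-per-tick value):

```c
/* One million microseconds per second, divided by the configured
 * microseconds per tick, gives the expected rtems_clock_tick call
 * rate.  Illustrative helper, not an RTEMS API. */
static unsigned ticks_per_second(unsigned microseconds_per_tick)
{
    return 1000000u / microseconds_per_tick;
}
```

For example, a 10000-microsecond tick yields 100 calls per second, while a 1000-microsecond tick yields 1000 and therefore a finer granularity for all time-dependent actions.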
Setting the Time
----------------
The ``rtems.clock_set`` directive allows a task or an ISR to
set the date and time maintained by RTEMS. If setting the date
and time causes any outstanding timers to pass their deadline,
then the expired timers will be fired during the invocation of
the ``rtems.clock_set`` directive.
Obtaining the Time
------------------
The ``rtems.clock_get`` directive allows a task or an ISR to
obtain the current date and time or date and time related
information. The current date and time can be returned in
either native or UNIX-style format. Additionally, the
application can obtain date and time related information such as
the number of seconds since the RTEMS epoch, the number of ticks
since the executive was initialized, and the number of ticks per
second. The information returned by the
``rtems.clock_get`` directive is
dependent on the option selected by the caller. This
is specified using one of the following constants
associated with the enumerated type ``rtems.clock_get_options``:

.. index:: rtems_clock_get_options
- ``RTEMS.CLOCK_GET_TOD`` - obtain native style date and time
- ``RTEMS.CLOCK_GET_TIME_VALUE`` - obtain UNIX-style
date and time
- ``RTEMS.CLOCK_GET_TICKS_SINCE_BOOT`` - obtain number of ticks
since RTEMS was initialized
- ``RTEMS.CLOCK_GET_SECONDS_SINCE_EPOCH`` - obtain number
of seconds since RTEMS epoch
- ``RTEMS.CLOCK_GET_TICKS_PER_SECOND`` - obtain number of clock
ticks per second
Calendar time operations will return an error code if
invoked before the date and time have been set.
Directives
==========
This section details the clock manager's directives.
A subsection is dedicated to each of this manager's directives
and describes the calling sequence, related constants, usage,
and status codes.
CLOCK_SET - Set date and time
-----------------------------
**CALLING SEQUENCE:**
.. index:: set the time of day
.. code:: c
procedure Clock_Set (
Time_Buffer : in RTEMS.Time_Of_Day;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - date and time set successfully
``RTEMS.INVALID_ADDRESS`` - ``time_buffer`` is NULL
``RTEMS.INVALID_CLOCK`` - invalid time of day
**DESCRIPTION:**
This directive sets the system date and time. The
date, time, and ticks in the time_buffer record are all
range-checked, and an error is returned if any one is out of its
valid range.
**NOTES:**
Years before 1988 are invalid.
The system date and time are based on the configured
tick rate (number of microseconds in a tick).
Setting the time forward may cause a higher priority
task, blocked waiting on a specific time, to be made ready. In
this case, the calling task will be preempted after the next
clock tick.
Re-initializing RTEMS causes the system date and time
to be reset to an uninitialized state. Another call to
``rtems.clock_set`` is required to re-initialize
the system date and time to application specific specifications.
CLOCK_GET - Get date and time information
-----------------------------------------
.. index:: obtain the time of day
**CALLING SEQUENCE:**
.. code:: c
procedure Clock_Get (
Option : in RTEMS.Clock_Get_Options;
Time_Buffer : in RTEMS.Address;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - current time obtained successfully
``RTEMS.NOT_DEFINED`` - system date and time is not set
``RTEMS.INVALID_ADDRESS`` - ``time_buffer`` is NULL
**DESCRIPTION:**
This directive is deprecated.
This directive obtains the system date and time. If
the caller is attempting to obtain the date and time (i.e.
option is set to either ``RTEMS.CLOCK_GET_SECONDS_SINCE_EPOCH``,
``RTEMS.CLOCK_GET_TOD``, or ``RTEMS.CLOCK_GET_TIME_VALUE``) and the date and time
has not been set with a previous call to
``rtems.clock_set``, then the ``RTEMS.NOT_DEFINED`` status code is returned.
The caller can always obtain the number of ticks per second (option is
``RTEMS.CLOCK_GET_TICKS_PER_SECOND``) and the number of
ticks since the executive was initialized (option is
``RTEMS.CLOCK_GET_TICKS_SINCE_BOOT``).
The ``option`` argument may take on any value of the enumerated
type ``rtems_clock_get_options``. The data type expected for
``time_buffer`` is based on the value of ``option`` as
indicated below:

.. index:: rtems_clock_get_options
- ``RTEMS.Clock_Get_TOD`` - Address of a variable of
  type RTEMS.Time_Of_Day
- ``RTEMS.Clock_Get_Seconds_Since_Epoch`` - Address of a
  variable of type RTEMS.Interval
- ``RTEMS.Clock_Get_Ticks_Since_Boot`` - Address of a
  variable of type RTEMS.Interval
- ``RTEMS.Clock_Get_Ticks_Per_Second`` - Address of a
  variable of type RTEMS.Interval
- ``RTEMS.Clock_Get_Time_Value`` - Address of a variable of
  type RTEMS.Clock_Time_Value
**NOTES:**
This directive is callable from an ISR.
This directive will not cause the running task to be
preempted. Re-initializing RTEMS causes the system date and
time to be reset to an uninitialized state. Another call to
``rtems.clock_set`` is required to re-initialize the
system date and time to application specific specifications.
CLOCK_GET_TOD - Get date and time in TOD format
-----------------------------------------------
.. index:: obtain the time of day
**CALLING SEQUENCE:**
.. code:: c
procedure Clock_Get_TOD (
Time_Buffer : in RTEMS.Time_Of_Day;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - current time obtained successfully
``RTEMS.NOT_DEFINED`` - system date and time is not set
``RTEMS.INVALID_ADDRESS`` - ``time_buffer`` is NULL
**DESCRIPTION:**
This directive obtains the system date and time. If the date and time
has not been set with a previous call to
``rtems.clock_set``, then the ``RTEMS.NOT_DEFINED`` status code is returned.
**NOTES:**
This directive is callable from an ISR.
This directive will not cause the running task to be
preempted. Re-initializing RTEMS causes the system date and
time to be reset to an uninitialized state. Another call to
``rtems.clock_set`` is required to re-initialize the
system date and time to application specific specifications.
CLOCK_GET_TOD_TIMEVAL - Get date and time in timeval format
-----------------------------------------------------------
.. index:: obtain the time of day
**CALLING SEQUENCE:**
.. code:: c
procedure Clock_Get_TOD_Timeval (
Time : in RTEMS.Timeval;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - current time obtained successfully
``RTEMS.NOT_DEFINED`` - system date and time is not set
``RTEMS.INVALID_ADDRESS`` - ``time`` is NULL
**DESCRIPTION:**
This directive obtains the system date and time in POSIX
``struct timeval`` format. If the date and time
has not been set with a previous call to
``rtems.clock_set``, then the ``RTEMS.NOT_DEFINED`` status code is returned.
**NOTES:**
This directive is callable from an ISR.
This directive will not cause the running task to be
preempted. Re-initializing RTEMS causes the system date and
time to be reset to an uninitialized state. Another call to
``rtems.clock_set`` is required to re-initialize the
system date and time to application specific specifications.
CLOCK_GET_SECONDS_SINCE_EPOCH - Get seconds since epoch
-------------------------------------------------------
.. index:: obtain seconds since epoch
**CALLING SEQUENCE:**
.. code:: c
procedure Clock_Get_Seconds_Since_Epoch(
The_Interval : out RTEMS.Interval;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - current time obtained successfully
``RTEMS.NOT_DEFINED`` - system date and time is not set
``RTEMS.INVALID_ADDRESS`` - ``the_interval`` is NULL
**DESCRIPTION:**
This directive returns the number of seconds since the RTEMS
epoch and the current system date and time. If the date and time
has not been set with a previous call to
``rtems.clock_set``, then the ``RTEMS.NOT_DEFINED`` status code is returned.
**NOTES:**
This directive is callable from an ISR.
This directive will not cause the running task to be
preempted. Re-initializing RTEMS causes the system date and
time to be reset to an uninitialized state. Another call to
``rtems.clock_set`` is required to re-initialize the
system date and time to application specific specifications.
CLOCK_GET_TICKS_PER_SECOND - Get ticks per second
-------------------------------------------------
.. index:: obtain seconds since epoch
**CALLING SEQUENCE:**
.. code:: c
function Clock_Get_Ticks_Per_Second
return RTEMS.Interval;
**DIRECTIVE STATUS CODES:**
NONE
**DESCRIPTION:**
This directive returns the number of clock ticks per second. This
is strictly based upon the microseconds per clock tick that the
application has configured.
**NOTES:**
This directive is callable from an ISR.
This directive will not cause the running task to be preempted.
CLOCK_GET_TICKS_SINCE_BOOT - Get current ticks counter value
------------------------------------------------------------
.. index:: obtain ticks since boot
.. index:: get current ticks counter value
**CALLING SEQUENCE:**
.. code:: c
function Clock_Get_Ticks_Since_Boot
return RTEMS.Interval;
**DIRECTIVE STATUS CODES:**
NONE
**DESCRIPTION:**
This directive returns the current tick counter value. With a 1ms clock tick,
this counter overflows after 50 days since boot. This is the historical
measure of uptime in an RTEMS system. The newer service
``rtems.clock_get_uptime`` is another and potentially more
accurate way of obtaining similar information.
**NOTES:**
This directive is callable from an ISR.
This directive will not cause the running task to be preempted.
CLOCK_TICK_LATER - Get tick value in the future
-----------------------------------------------
**CALLING SEQUENCE:**
**DESCRIPTION:**
Returns the ticks counter value delta ticks in the future.
**NOTES:**
This directive is callable from an ISR.
This directive will not cause the running task to be preempted.
CLOCK_TICK_LATER_USEC - Get tick value in the future in microseconds
--------------------------------------------------------------------
**CALLING SEQUENCE:**
**DESCRIPTION:**
Returns the ticks counter value at least delta microseconds in the future.
**NOTES:**
This directive is callable from an ISR.
This directive will not cause the running task to be preempted.
CLOCK_TICK_BEFORE - Is tick value before a point in time
-----------------------------------------------------------
**CALLING SEQUENCE:**
**DESCRIPTION:**
Returns true if the current ticks counter value indicates a time before the
time specified by the tick value and false otherwise.
**NOTES:**
This directive is callable from an ISR.
This directive will not cause the running task to be preempted.
**EXAMPLE:**
.. code:: c
rtems_status_code busy( void )
{
  rtems_interval timeout = rtems_clock_tick_later_usec( 10000 );
  do {
    if ( ok() ) {
      return RTEMS_SUCCESSFUL;
    }
  } while ( rtems_clock_tick_before( timeout ) );
  return RTEMS_TIMEOUT;
}
CLOCK_GET_UPTIME - Get the time since boot
------------------------------------------
.. index:: clock get uptime
.. index:: uptime
**CALLING SEQUENCE:**
.. code:: c
procedure Clock_Get_Uptime (
Uptime : out RTEMS.Timespec;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - clock tick processed successfully
``RTEMS.INVALID_ADDRESS`` - ``time_buffer`` is NULL
**DESCRIPTION:**
This directive returns the seconds and nanoseconds since the
system was booted. If the BSP supports nanosecond clock
accuracy, the time reported will probably be different on every
call.
**NOTES:**
This directive may be called from an ISR.
CLOCK_GET_UPTIME_TIMEVAL - Get the time since boot in timeval format
--------------------------------------------------------------------
.. index:: clock get uptime
.. index:: uptime
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
NONE
**DESCRIPTION:**
This directive returns the seconds and microseconds since the
system was booted. If the BSP supports nanosecond clock
accuracy, the time reported will probably be different on every
call.
**NOTES:**
This directive may be called from an ISR.
CLOCK_GET_UPTIME_SECONDS - Get the seconds since boot
-----------------------------------------------------
.. index:: clock get uptime
.. index:: uptime
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
The system uptime in seconds.
**DESCRIPTION:**
This directive returns the seconds since the system was booted.
**NOTES:**
This directive may be called from an ISR.
CLOCK_GET_UPTIME_NANOSECONDS - Get the nanoseconds since boot
-------------------------------------------------------------
.. index:: clock get nanoseconds uptime
.. index:: uptime
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
The system uptime in nanoseconds.
**DESCRIPTION:**
This directive returns the nanoseconds since the system was booted.
**NOTES:**
This directive may be called from an ISR.
CLOCK_SET_NANOSECONDS_EXTENSION - Install the nanoseconds since last tick handler
---------------------------------------------------------------------------------
.. index:: clock set nanoseconds extension
.. index:: nanoseconds extension
.. index:: nanoseconds time accuracy
**CALLING SEQUENCE:**
.. code:: c
NOT SUPPORTED FROM Ada BINDING
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - clock tick processed successfully
``RTEMS.INVALID_ADDRESS`` - ``time_buffer`` is NULL
**DESCRIPTION:**
This directive is used by the Clock device driver to install the
``routine`` which will be invoked by the internal RTEMS method used to
obtain a highly accurate time of day. It is usually called during
the initialization of the driver.
When the ``routine`` is invoked, it will determine the number of
nanoseconds which have elapsed since the last invocation of
the ``rtems.clock_tick`` directive. It should do
this as quickly as possible with as little impact as possible
on the device used as a clock source.
**NOTES:**
This directive may be called from an ISR.
This directive is called as part of every service to obtain the
current date and time as well as timestamps.
CLOCK_TICK - Announce a clock tick
----------------------------------
.. index:: clock tick
**CALLING SEQUENCE:**
.. code:: c
procedure Clock_Tick (
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - clock tick processed successfully
**DESCRIPTION:**
This directive announces to RTEMS that a system clock
tick has occurred. The directive is usually called from the
timer interrupt ISR of the local processor. This directive
maintains the system date and time, decrements timers for
delayed tasks, timeouts, rate monotonic periods, and implements
timeslicing.
**NOTES:**
This directive is typically called from an ISR.
The ``microseconds_per_tick`` and ``ticks_per_timeslice``
parameters in the Configuration Table contain the number of
microseconds per tick and number of ticks per timeslice,
respectively.
.. COMMENT: COPYRIGHT (c) 1988-2008.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.

Command and Variable Index
##########################
.. COMMENT: There are currently no Command and Variable Index entries.

import sys, os
sys.path.append(os.path.abspath('../common/'))
from conf import *
version = '1.0'
release = '5.0'

Constant Bandwidth Server Scheduler API
#######################################
.. index:: cbs
Introduction
============
Unlike simple schedulers, the Constant Bandwidth Server (CBS) requires
a special API for tasks to indicate their scheduling parameters.
The directives provided by the CBS API are:
- ``rtems.cbs_initialize`` - Initialize the CBS library
- ``rtems.cbs_cleanup`` - Cleanup the CBS library
- ``rtems.cbs_create_server`` - Create a new bandwidth server
- ``rtems.cbs_attach_thread`` - Attach a thread to server
- ``rtems.cbs_detach_thread`` - Detach a thread from server
- ``rtems.cbs_destroy_server`` - Destroy a bandwidth server
- ``rtems.cbs_get_server_id`` - Get an ID of a server
- ``rtems.cbs_get_parameters`` - Get scheduling parameters of a server
- ``rtems.cbs_set_parameters`` - Set scheduling parameters of a server
- ``rtems.cbs_get_execution_time`` - Get elapsed execution time
- ``rtems.cbs_get_remaining_budget`` - Get remaining execution time
- ``rtems.cbs_get_approved_budget`` - Get scheduler approved execution time
Background
==========
Constant Bandwidth Server Definitions
-------------------------------------
.. index:: CBS parameters
.. index:: rtems_cbs_parameters
The Constant Bandwidth Server API enables tasks to communicate with
the scheduler and indicate their scheduling parameters. The scheduler
has to be set up first (by defining ``CONFIGURE_SCHEDULER_CBS`` macro).
The difference to a plain EDF is the presence of servers.
It is a budget aware extension of the EDF scheduler, therefore, tasks
attached to servers behave in a similar way as with EDF unless they
exceed their budget.
The intention of servers is reservation of a certain computation
time (budget) of the processor for all subsequent periods. The structure
``rtems_cbs_parameters`` determines the behavior of
a server. It contains ``deadline`` which is equal to period,
and ``budget`` which is the time the server is allowed to
spend on CPU per each period. The ratio between those two parameters
yields the maximum percentage of the CPU the server can use
(bandwidth). Moreover, thanks to this limitation the overall
utilization of CPU is under control, and the sum of bandwidths
of all servers in the system yields the overall reserved portion
of processor. The rest is still available for ordinary tasks that
are not attached to any server.
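The admission arithmetic just described can be sketched as follows (illustrative only, not the CBS scheduler's actual code): each server's bandwidth is its budget divided by its deadline, and the sum over all servers is the reserved fraction of the CPU.

```c
/* Sketch of the CBS bandwidth arithmetic described in the text.
 * Field names follow rtems_cbs_parameters; the helper itself is
 * illustrative, not part of the CBS API. */
typedef struct {
    unsigned deadline;  /* equal to the period, in ticks             */
    unsigned budget;    /* CPU time allowed per period, in ticks     */
} cbs_params;

/* Total reserved CPU fraction; a feasible system keeps this <= 1.0,
 * and whatever remains is available to ordinary, unattached tasks. */
static double cbs_total_bandwidth(const cbs_params *servers, int count)
{
    double sum = 0.0;
    for (int i = 0; i < count; ++i)
        sum += (double) servers[i].budget / (double) servers[i].deadline;
    return sum;
}
```

For instance, a server with budget 25 and deadline 100 reserves 25% of the processor; two such reservations leave half the CPU for ordinary tasks.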
In order to make the server effective to the executing tasks,
tasks have to be attached to the servers. The
``rtems_cbs_server_id`` is a type denoting an id of a server
and ``rtems_id`` a type for id of tasks.
Handling Periodic Tasks
-----------------------
.. index:: CBS periodic tasks
Each task's execution begins with a default background priority
(see the chapter Scheduling Concepts to understand the concept of
priorities in EDF). Once you decide the tasks should start periodic
execution, you have two possibilities. Either you use only the Rate
Monotonic manager which takes care of periodic behavior, or you declare
deadline and budget using the CBS API in which case these properties
are constant for all subsequent periods, unless you change them using
the CBS API again. The task then only has to indicate the end of
each period using ``rtems_rate_monotonic_period``.
Registering a Callback Function
-------------------------------
.. index:: CBS overrun handler
In case tasks attached to servers are not aware of their execution time
and happen to exceed it, the scheduler does not guarantee execution any
more and pulls the priority of the task to background, which would
possibly lead to immediate preemption (if there is at least one ready
task with a higher priority). However, the task is not blocked but a
callback function is invoked. The callback function
(``rtems_cbs_budget_overrun``) might be optionally registered upon
a server creation (``rtems_cbs_create_server``).
This enables the user to define what should happen in case of budget
overrun. There is obviously no space for huge operations because the
priority is down and not real time any more; however, you can still at
least release resources for other tasks, restart the task, or log an
error message. Since the routine is called directly from the kernel,
use ``printk()`` instead of ``printf()``.
The calling convention of the callback function is:
Limitations
-----------
.. index:: CBS limitations
When using this scheduler you have to keep in mind several things:
- In the current implementation it is possible to attach only
  a single task to each server.
- If you have a task attached to a server and you voluntarily
  block it at the beginning of its execution, its priority will
  probably be pulled to background upon unblocking, and thus its
  deadline is no longer guaranteed. This is because you are effectively
  raising the computation time of the task. When unblocking, you should
  always be sure that the ratio between the remaining computation time
  and the remaining deadline is not higher than the utilization you have
  agreed upon with the scheduler.
Operations
==========
Setting up a server
-------------------
The directive ``rtems_cbs_create_server`` is used to create a new
server that is characterized by ``rtems_cbs_parameters``. You might
also want to register the ``rtems_cbs_budget_overrun`` callback
routine. After this step, tasks can be attached to the server. The
directive ``rtems_cbs_set_parameters`` can change the scheduling
parameters, avoiding the need to destroy and create the server again.
Attaching Task to a Server
--------------------------
If a task is attached to a server using ``rtems_cbs_attach_thread``,
the task's computation time per period is limited by the server, and
the deadline (period) of the task is equal to the deadline of the
server. This means that if you conclude a period using
``rate_monotonic_period``, the length of the next period is always
determined by the server's properties. The task has a guaranteed
bandwidth given by the server but should not exceed it; otherwise the
priority is pulled to background until the start of the next period
and the ``rtems_cbs_budget_overrun`` callback function is invoked.
When attaching a task to a server, the preemptability flag of the task
is raised; otherwise it would not be possible to control the execution
of the task.
Detaching Task from a Server
----------------------------
The directive ``rtems_cbs_detach_thread`` is simply the inverse of the
previous operation: the task continues its execution with its initial
priority. The preemptability of the task is restored to its initial
value.
Examples
--------
The following example presents a simple, common use of the API.
You can see the initialization and cleanup calls here; if there are
multiple tasks in the system, the initialization should obviously be
called before creating any of them.
Notice also that in this case we decided to register an overrun
handler, where ``NULL`` could be passed instead. This handler just
prints a message to the terminal; what else may be done here depends
on the specific application.
During the periodic execution, the remaining budget should be watched
to avoid overrun.
.. code:: c

    void overrun_handler (
      rtems_cbs_server_id server_id
    )
    {
      printk( "Budget overrun, fixing the task\n" );
      return;
    }

    rtems_task Tasks_Periodic(
      rtems_task_argument argument
    )
    {
      rtems_id             rmid;
      rtems_cbs_server_id  server_id;
      rtems_cbs_parameters params;

      params.deadline = 10;
      params.budget   = 4;

      rtems_cbs_initialize();
      rtems_cbs_create_server( &params, &overrun_handler, &server_id );
      rtems_cbs_attach_thread( server_id, RTEMS_SELF );
      rtems_rate_monotonic_create( argument, &rmid );

      while ( 1 ) {
        if ( rtems_rate_monotonic_period( rmid, params.deadline )
             == RTEMS_TIMEOUT )
          break;
        /* Perform some periodic action */
      }

      rtems_rate_monotonic_delete( rmid );
      rtems_cbs_cleanup();
      exit( 1 );
    }
Directives
==========
This section details the Constant Bandwidth Server directives.
A subsection is dedicated to each of this manager's directives
and describes the calling sequence, related constants, usage,
and status codes.
CBS_INITIALIZE - Initialize the CBS library
-------------------------------------------
.. index:: initialize the CBS library
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
``RTEMS.CBS_OK`` - successful initialization
``RTEMS.CBS_ERROR_NO_MEMORY`` - not enough memory for data
**DESCRIPTION:**
This routine initializes the library in terms of allocating the
necessary memory for the servers. In case not enough memory is
available in the system, ``RTEMS.CBS_ERROR_NO_MEMORY`` is returned;
otherwise, ``RTEMS.CBS_OK``.
**NOTES:**
Additional memory per each server is allocated upon invocation of
``rtems_cbs_create_server``.
Tasks in the system are not influenced, they still keep executing
with their initial parameters.
CBS_CLEANUP - Cleanup the CBS library
-------------------------------------
.. index:: cleanup the CBS library
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
``RTEMS.CBS_OK`` - always successful
**DESCRIPTION:**
This routine detaches all tasks from their servers, destroys all servers
and returns memory back to the system.
**NOTES:**
All tasks continue executing with their initial priorities.
CBS_CREATE_SERVER - Create a new bandwidth server
-------------------------------------------------
.. index:: create a new bandwidth server
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
``RTEMS.CBS_OK`` - successfully created
``RTEMS.CBS_ERROR_NO_MEMORY`` - not enough memory for data
``RTEMS.CBS_ERROR_FULL`` - maximum servers exceeded
``RTEMS.CBS_ERROR_INVALID_PARAMETER`` - invalid input argument
**DESCRIPTION:**
This routine prepares an instance of a constant bandwidth server.
The input parameter ``rtems_cbs_parameters`` specifies the scheduling
parameters of the server (period and budget). If these are not valid,
``RTEMS.CBS_ERROR_INVALID_PARAMETER`` is returned.
The ``budget_overrun_callback`` is an optional callback function, which
is invoked in case the server's budget within one period is exceeded.
The output parameter ``server_id`` becomes the id of the newly created
server. If there is not enough memory, ``RTEMS.CBS_ERROR_NO_MEMORY``
is returned. If the maximum server count in the system is exceeded,
``RTEMS.CBS_ERROR_FULL`` is returned.
**NOTES:**
No task execution is being influenced so far.
CBS_ATTACH_THREAD - Attach a thread to server
---------------------------------------------
.. index:: attach a thread to server
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
``RTEMS.CBS_OK`` - successfully attached
``RTEMS.CBS_ERROR_FULL`` - server maximum tasks exceeded
``RTEMS.CBS_ERROR_INVALID_PARAMETER`` - invalid input argument
``RTEMS.CBS_ERROR_NOSERVER`` - server is not valid
**DESCRIPTION:**
Attaches a task (``task_id``) to a server (``server_id``).
The server has to be previously created. From this point, the task is
scheduled according to the server parameters rather than its initial
priority. This implementation allows only one task per server; if the
user tries to bind another task to the same server,
``RTEMS.CBS_ERROR_FULL`` is returned.
**NOTES:**
Tasks attached to servers become preemptible.
CBS_DETACH_THREAD - Detach a thread from server
-----------------------------------------------
.. index:: detach a thread from server
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
``RTEMS.CBS_OK`` - successfully detached
``RTEMS.CBS_ERROR_INVALID_PARAMETER`` - invalid input argument
``RTEMS.CBS_ERROR_NOSERVER`` - server is not valid
**DESCRIPTION:**
This directive detaches a thread from its server. The task continues
its execution with its initial priority.
**NOTES:**
The server can be reused for any other task.
CBS_DESTROY_SERVER - Destroy a bandwidth server
-----------------------------------------------
.. index:: destroy a bandwidth server
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
``RTEMS.CBS_OK`` - successfully destroyed
``RTEMS.CBS_ERROR_INVALID_PARAMETER`` - invalid input argument
``RTEMS.CBS_ERROR_NOSERVER`` - server is not valid
**DESCRIPTION:**
This directive destroys a server. If any task was attached to the server,
the task is detached and continues its execution according to EDF rules
with initial properties.
**NOTES:**
This again enables one more task to be created.
CBS_GET_SERVER_ID - Get an ID of a server
-----------------------------------------
.. index:: get an ID of a server
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
``RTEMS.CBS_OK`` - successful
``RTEMS.CBS_ERROR_NOSERVER`` - server is not valid
**DESCRIPTION:**
This directive returns the id of the server belonging to a given task.
CBS_GET_PARAMETERS - Get scheduling parameters of a server
----------------------------------------------------------
.. index:: get scheduling parameters of a server
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
``RTEMS.CBS_OK`` - successful
``RTEMS.CBS_ERROR_INVALID_PARAMETER`` - invalid input argument
``RTEMS.CBS_ERROR_NOSERVER`` - server is not valid
**DESCRIPTION:**
This directive returns a structure with current scheduling parameters
of a given server (period and execution time).
**NOTES:**
It makes no difference if any task is assigned or not.
CBS_SET_PARAMETERS - Set scheduling parameters
----------------------------------------------
.. index:: set scheduling parameters
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
``RTEMS.CBS_OK`` - successful
``RTEMS.CBS_ERROR_INVALID_PARAMETER`` - invalid input argument
``RTEMS.CBS_ERROR_NOSERVER`` - server is not valid
**DESCRIPTION:**
This directive sets new scheduling parameters for the server. This
operation can be performed regardless of whether a task is assigned.
If a task is assigned, the parameters become effective immediately;
therefore, it is recommended to apply the change between two subsequent
periods.
**NOTES:**
There is an upper limit on both period and budget equal to (2^31)-1 ticks.
CBS_GET_EXECUTION_TIME - Get elapsed execution time
---------------------------------------------------
.. index:: get elapsed execution time
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
``RTEMS.CBS_OK`` - successful
``RTEMS.CBS_ERROR_INVALID_PARAMETER`` - invalid input argument
``RTEMS.CBS_ERROR_NOSERVER`` - server is not valid
**DESCRIPTION:**
This routine returns consumed execution time (``exec_time``) of a server
during the current period.
**NOTES:**
Absolute time (``abs_time``) is not currently supported.
CBS_GET_REMAINING_BUDGET - Get remaining execution time
-------------------------------------------------------
.. index:: get remaining execution time
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
``RTEMS.CBS_OK`` - successful
``RTEMS.CBS_ERROR_INVALID_PARAMETER`` - invalid input argument
``RTEMS.CBS_ERROR_NOSERVER`` - server is not valid
**DESCRIPTION:**
This directive returns remaining execution time of a given server for
current period.
**NOTES:**
If the execution time approaches zero, the assigned task should finish
computations of the current period.
CBS_GET_APPROVED_BUDGET - Get scheduler approved execution time
---------------------------------------------------------------
.. index:: get scheduler approved execution time
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
``RTEMS.CBS_OK`` - successful
``RTEMS.CBS_ERROR_INVALID_PARAMETER`` - invalid input argument
``RTEMS.CBS_ERROR_NOSERVER`` - server is not valid
**DESCRIPTION:**
This directive returns the server's approved budget for subsequent
periods.
.. COMMENT: COPYRIGHT (c) 1989-2011.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.
CPU Usage Statistics
####################
Introduction
============
The CPU usage statistics manager is an RTEMS support
component that provides a convenient way to manipulate
the CPU usage information associated with each task.
The routines provided by the CPU usage statistics manager are:
- ``rtems.cpu_usage_report`` - Report CPU Usage Statistics
- ``rtems.cpu_usage_reset`` - Reset CPU Usage Statistics
Background
==========
When analyzing and debugging real-time applications, it is important
to be able to know how much CPU time each task in the system consumes.
This support component provides a mechanism to easily obtain this
information with little burden placed on the target.
The raw data is gathered as part of performing a context switch. RTEMS
keeps track of the number of clock ticks that have occurred while the
task being switched out was executing. If the task has been running
less than 1 clock tick, then for the purposes of the statistics it is
assumed to have executed 1 clock tick. This results in some inaccuracy,
but the alternative is for the task to appear to have executed 0 clock
ticks.
RTEMS versions newer than the 4.7 release series, support the ability
to obtain timestamps with nanosecond granularity if the BSP provides
support. It is a desirable enhancement to change the way the usage
data is gathered to take advantage of this recently added capability.
Please consider sponsoring the core RTEMS development team to add
this capability.
Operations
==========
Report CPU Usage Statistics
---------------------------
The application may dynamically report the CPU usage for every
task in the system by calling the ``rtems.cpu_usage_report`` routine.
This routine prints a table with the following information per task:
- task id
- task name
- number of clock ticks executed
- percentage of time consumed by this task
The following is an example of the report generated.
+------------------------------------------------------------------------------+
|CPU USAGE BY THREAD |
+-----------+----------------------------------------+-------------------------+
|ID | NAME | SECONDS | PERCENT |
+-----------+----------------------------------------+---------------+---------+
|0x04010001 | IDLE | 0 | 0.000 |
+-----------+----------------------------------------+---------------+---------+
|0x08010002 | TA1 | 1203 | 0.748 |
+-----------+----------------------------------------+---------------+---------+
|0x08010003 | TA2 | 203 | 0.126 |
+-----------+----------------------------------------+---------------+---------+
|0x08010004 | TA3 | 202 | 0.126 |
+-----------+----------------------------------------+---------------+---------+
|TICKS SINCE LAST SYSTEM RESET: 1600 |
|TOTAL UNITS: 1608 |
+------------------------------------------------------------------------------+
Notice that the "TOTAL UNITS" is greater than the ticks per reset.
This is an artifact of the way in which RTEMS keeps track of CPU
usage. When a task is context switched into the CPU, the number
of clock ticks it has executed is incremented. While the task
is executing, this number is incremented on each clock tick.
Otherwise, if a task begins and completes execution between
successive clock ticks, there would be no way to tell that it
executed at all.
Another thing to keep in mind when looking at idle time is that many
systems, especially during debugging, have a task providing some type
of debug interface. It is usually fine to think of the total idle time
as being the sum of the IDLE task and a debug task that will not be
included in a production build of an application.
Reset CPU Usage Statistics
--------------------------
Invoking the ``rtems.cpu_usage_reset`` routine resets
the CPU usage statistics for all tasks in the system.
Directives
==========
This section details the CPU usage statistics manager's directives.
A subsection is dedicated to each of this manager's directives
and describes the calling sequence, related constants, usage,
and status codes.
cpu_usage_report - Report CPU Usage Statistics
----------------------------------------------
**CALLING SEQUENCE:**
.. code:: ada

    procedure CPU_Usage_Report;
**STATUS CODES: NONE**
**DESCRIPTION:**
This routine prints out a table detailing the CPU usage statistics for
all tasks in the system.
**NOTES:**
The table is printed using the ``printk`` routine.
cpu_usage_reset - Reset CPU Usage Statistics
--------------------------------------------
**CALLING SEQUENCE:**
.. code:: ada

    procedure CPU_Usage_Reset;
**STATUS CODES: NONE**
**DESCRIPTION:**
This routine re-initializes the CPU usage statistics for all tasks
in the system to their initial state, which is that a task has not
executed and thus has consumed no CPU time.
**NOTES:**
NONE
.. COMMENT: COPYRIGHT (c) 1988-2008.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.
Directive Status Codes
######################
Introduction
============
- ``RTEMS.SUCCESSFUL`` - successful completion
- ``RTEMS.TASK_EXITTED`` - returned from a task
- ``RTEMS.MP_NOT_CONFIGURED`` - multiprocessing not configured
- ``RTEMS.INVALID_NAME`` - invalid object name
- ``RTEMS.INVALID_ID`` - invalid object id
- ``RTEMS.TOO_MANY`` - too many
- ``RTEMS.TIMEOUT`` - timed out waiting
- ``RTEMS.OBJECT_WAS_DELETED`` - object was deleted while waiting
- ``RTEMS.INVALID_SIZE`` - invalid specified size
- ``RTEMS.INVALID_ADDRESS`` - invalid address specified
- ``RTEMS.INVALID_NUMBER`` - number was invalid
- ``RTEMS.NOT_DEFINED`` - item not initialized
- ``RTEMS.RESOURCE_IN_USE`` - resources outstanding
- ``RTEMS.UNSATISFIED`` - request not satisfied
- ``RTEMS.INCORRECT_STATE`` - task is in wrong state
- ``RTEMS.ALREADY_SUSPENDED`` - task already in state
- ``RTEMS.ILLEGAL_ON_SELF`` - illegal for calling task
- ``RTEMS.ILLEGAL_ON_REMOTE_OBJECT`` - illegal for remote object
- ``RTEMS.CALLED_FROM_ISR`` - invalid environment
- ``RTEMS.INVALID_PRIORITY`` - invalid task priority
- ``RTEMS.INVALID_CLOCK`` - invalid time buffer
- ``RTEMS.INVALID_NODE`` - invalid node id
- ``RTEMS.NOT_CONFIGURED`` - directive not configured
- ``RTEMS.NOT_OWNER_OF_RESOURCE`` - not owner of resource
- ``RTEMS.NOT_IMPLEMENTED`` - directive not implemented
- ``RTEMS.INTERNAL_ERROR`` - RTEMS inconsistency detected
- ``RTEMS.NO_MEMORY`` - could not get enough memory
Directives
==========
STATUS_TEXT - Returns the enumeration name for a status code
------------------------------------------------------------
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES**
The status code enumeration name or "?" in case the status code is invalid.
**DESCRIPTION:**
Returns the enumeration name for the specified status code.
.. COMMENT: Copyright 2015 embedded brains GmbH
.. COMMENT: All rights reserved.
Dual-Ported Memory Manager
##########################
.. index:: ports
.. index:: dual ported memory
Introduction
============
The dual-ported memory manager provides a mechanism
for converting addresses between internal and external
representations for multiple dual-ported memory areas (DPMA).
The directives provided by the dual-ported memory manager are:
- ``rtems.port_create`` - Create a port
- ``rtems.port_ident`` - Get ID of a port
- ``rtems.port_delete`` - Delete a port
- ``rtems.port_external_to_internal`` - Convert external to internal address
- ``rtems.port_internal_to_external`` - Convert internal to external address
Background
==========
.. index:: dual ported memory, definition
.. index:: external addresses, definition
.. index:: internal addresses, definition
A dual-ported memory area (DPMA) is a contiguous
block of RAM owned by a particular processor but which can be
accessed by other processors in the system. The owner accesses
the memory using internal addresses, while other processors must
use external addresses. RTEMS defines a port as a particular
mapping of internal and external addresses.
There are two system configurations in which
dual-ported memory is commonly found. The first is
tightly-coupled multiprocessor computer systems where the
dual-ported memory is shared between all nodes and is used for
inter-node communication. The second configuration is computer
systems with intelligent peripheral controllers. These
controllers typically utilize the DPMA for high-performance data
transfers.
Operations
==========
Creating a Port
---------------
The ``rtems.port_create`` directive creates a port into a DPMA
with the user-defined name. The user specifies the association
between internal and external representations for the port being
created. RTEMS allocates a Dual-Ported Memory Control Block
(DPCB) from the DPCB free list to maintain the newly created
DPMA. RTEMS also generates a unique dual-ported memory port ID
which is returned to the calling task. RTEMS does not
initialize the dual-ported memory area or access any memory
within it.
Obtaining Port IDs
------------------
When a port is created, RTEMS generates a unique port
ID and assigns it to the created port until it is deleted. The
port ID may be obtained by either of two methods. First, as the
result of an invocation of the ``rtems.port_create`` directive, the
port ID is stored in a user provided location. Second, the port ID
may be obtained later using the ``rtems.port_ident`` directive. The
port ID is used by other dual-ported memory manager directives to
access this port.
Converting an Address
---------------------
The ``rtems.port_external_to_internal`` directive is used to
convert an address from external to internal representation for
the specified port.
The ``rtems.port_internal_to_external`` directive is
used to convert an address from internal to external
representation for the specified port. If an attempt is made to
convert an address which lies outside the specified DPMA, then
the address to be converted will be returned.
Deleting a DPMA Port
--------------------
A port can be removed from the system and returned to
RTEMS with the ``rtems.port_delete`` directive. When a port is deleted,
its control block is returned to the DPCB free list.
Directives
==========
This section details the dual-ported memory manager's
directives. A subsection is dedicated to each of this manager's
directives and describes the calling sequence, related
constants, usage, and status codes.
PORT_CREATE - Create a port
---------------------------
.. index:: create a port
**CALLING SEQUENCE:**
.. code:: ada

    procedure Port_Create (
       Name           : in     RTEMS.Name;
       Internal_Start : in     RTEMS.Address;
       External_Start : in     RTEMS.Address;
       Length         : in     RTEMS.Unsigned32;
       ID             :    out RTEMS.ID;
       Result         :    out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - port created successfully
``RTEMS.INVALID_NAME`` - invalid port name
``RTEMS.INVALID_ADDRESS`` - address not on four byte boundary
``RTEMS.INVALID_ADDRESS`` - ``id`` is NULL
``RTEMS.TOO_MANY`` - too many DP memory areas created
**DESCRIPTION:**
This directive creates a port which resides on the
local node for the specified DPMA. The assigned port id is
returned in id. This port id is used as an argument to other
dual-ported memory manager directives to convert addresses
within this DPMA.
For control and maintenance of the port, RTEMS
allocates and initializes a DPCB from the DPCB free pool. Thus
memory from the dual-ported memory area is not used to store the
DPCB.
**NOTES:**
The ``Internal_Start`` and ``External_Start`` parameters
must be on a four byte boundary.
This directive will not cause the calling task to be
preempted.
PORT_IDENT - Get ID of a port
-----------------------------
.. index:: get ID of a port
.. index:: obtain ID of a port
**CALLING SEQUENCE:**
.. code:: ada

    procedure Port_Ident (
       Name   : in     RTEMS.Name;
       ID     :    out RTEMS.ID;
       Result :    out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - port identified successfully
``RTEMS.INVALID_ADDRESS`` - ``id`` is NULL
``RTEMS.INVALID_NAME`` - port name not found
**DESCRIPTION:**
This directive obtains the port ID associated with
the specified name. If the port name is not
unique, then the port ID will match one of the DPMAs with that
name. However, this port ID is not guaranteed to correspond to
the desired DPMA. The port ID is used to access this DPMA in
other dual-ported memory area related directives.
**NOTES:**
This directive will not cause the running task to be
preempted.
PORT_DELETE - Delete a port
---------------------------
.. index:: delete a port
**CALLING SEQUENCE:**
.. code:: ada

    procedure Port_Delete (
       ID     : in     RTEMS.ID;
       Result :    out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - port deleted successfully
``RTEMS.INVALID_ID`` - invalid port id
**DESCRIPTION:**
This directive deletes the dual-ported memory area
specified by id. The DPCB for the deleted dual-ported memory
area is reclaimed by RTEMS.
**NOTES:**
This directive will not cause the calling task to be
preempted.
The calling task does not have to be the task that
created the port. Any local task that knows the port id can
delete the port.
PORT_EXTERNAL_TO_INTERNAL - Convert external to internal address
----------------------------------------------------------------
.. index:: convert external to internal address
**CALLING SEQUENCE:**
.. code:: ada

    procedure Port_External_To_Internal (
       ID       : in     RTEMS.ID;
       External : in     RTEMS.Address;
       Internal :    out RTEMS.Address;
       Result   :    out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
``RTEMS.INVALID_ADDRESS`` - ``internal`` is NULL
``RTEMS.SUCCESSFUL`` - successful conversion
**DESCRIPTION:**
This directive converts a dual-ported memory address
from external to internal representation for the specified port.
If the given external address is invalid for the specified
port, then the internal address is set to the given external
address.
**NOTES:**
This directive is callable from an ISR.
This directive will not cause the calling task to be
preempted.
PORT_INTERNAL_TO_EXTERNAL - Convert internal to external address
----------------------------------------------------------------
.. index:: convert internal to external address
**CALLING SEQUENCE:**
.. code:: ada

    procedure Port_Internal_To_External (
       ID       : in     RTEMS.ID;
       Internal : in     RTEMS.Address;
       External :    out RTEMS.Address;
       Result   :    out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
``RTEMS.INVALID_ADDRESS`` - ``external`` is NULL
``RTEMS.SUCCESSFUL`` - successful conversion
**DESCRIPTION:**
This directive converts a dual-ported memory address
from internal to external representation so that it can be
passed to the owner of the DPMA represented by the specified port.
If the given internal address is an invalid dual-ported address,
then the external address is set to the given internal address.
**NOTES:**
This directive is callable from an ISR.
This directive will not cause the calling task to be
preempted.
.. COMMENT: COPYRIGHT (c) 1988-2008.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.
Event Manager
#############
.. index:: events
Introduction
============
The event manager provides a high performance method
of intertask communication and synchronization. The directives
provided by the event manager are:
- ``rtems.event_send`` - Send event set to a task
- ``rtems.event_receive`` - Receive event condition
Background
==========
Event Sets
----------
.. index:: event flag, definition
.. index:: event set, definition
.. index:: rtems_event_set
An event flag is used by a task (or ISR) to inform
another task of the occurrence of a significant situation.
Thirty-two event flags are associated with each task. A
collection of one or more event flags is referred to as an event
set. The data type ``rtems.event_set`` is used to manage
event sets.
The application developer should remember the following
key characteristics of event operations when utilizing the event
manager:
- Events provide a simple synchronization facility.
- Events are aimed at tasks.
- Tasks can wait on more than one event simultaneously.
- Events are independent of one another.
- Events do not hold or transport data.
- Events are not queued. In other words, if an event is
sent more than once to a task before being received, the second and
subsequent send operations to that same task have no effect.
An event set is posted when it is directed (or sent) to a task. A
pending event is an event that has been posted but not received. An event
condition is used to specify the event set which the task desires to receive
and the algorithm which will be used to determine when the request is
satisfied. An event condition is satisfied based upon one of two
algorithms which are selected by the user. The ``RTEMS.EVENT_ANY``
algorithm states that an event condition is satisfied when at least a
single requested event is posted. The ``RTEMS.EVENT_ALL`` algorithm
states that an event condition is satisfied when every requested event
is posted.
Building an Event Set or Condition
----------------------------------
.. index:: event condition, building
.. index:: event set, building
An event set or condition is built by a bitwise OR of
the desired events. The set of valid events is ``RTEMS.EVENT_0``
through ``RTEMS.EVENT_31``. If an event is not explicitly specified in
the set or condition, then it is not present. Events are specifically
designed to be mutually exclusive; therefore, bitwise OR and
addition operations are equivalent as long as each event appears
exactly once in the event set list.
For example, when sending the event set consisting of
``RTEMS.EVENT_6``, ``RTEMS.EVENT_15``, and ``RTEMS.EVENT_31``,
the event parameter to the ``rtems.event_send``
directive should be ``RTEMS.EVENT_6 or
RTEMS.EVENT_15 or RTEMS.EVENT_31``.
Building an EVENT_RECEIVE Option Set
------------------------------------
In general, an option is built by a bitwise OR of the
desired option components. The set of valid options for the
``rtems.event_receive`` directive are listed in the following table:
- ``RTEMS.WAIT`` - task will wait for event (default)
- ``RTEMS.NO_WAIT`` - task should not wait
- ``RTEMS.EVENT_ALL`` - return after all events (default)
- ``RTEMS.EVENT_ANY`` - return after any events
Option values are specifically designed to be
mutually exclusive, therefore bitwise OR and addition operations
are equivalent as long as each option appears exactly once in
the component list. An option listed as a default is not
required to appear in the option list, although it is a good
programming practice to specify default options. If all
defaults are desired, the option ``RTEMS.DEFAULT_OPTIONS`` should be
specified on this call.
This example demonstrates the option parameter needed
to poll for all events in a particular event condition to
arrive. The option parameter passed to the ``rtems.event_receive``
directive should be either ``RTEMS.EVENT_ALL or RTEMS.NO_WAIT``
or ``RTEMS.NO_WAIT``. The option parameter can be set to
``RTEMS.NO_WAIT`` because ``RTEMS.EVENT_ALL`` is the
default condition for ``rtems.event_receive``.
Operations
==========
Sending an Event Set
--------------------
The ``rtems.event_send`` directive allows a task (or an ISR) to
direct an event set to a target task. Based upon the state of
the target task, one of the following situations applies:
- Target Task is Blocked Waiting for Events
- If the waiting task's input event condition is
satisfied, then the task is made ready for execution.
- If the waiting task's input event condition is not
satisfied, then the event set is posted but left pending and the
task remains blocked.
- Target Task is Not Waiting for Events
- The event set is posted and left pending.
Receiving an Event Set
----------------------
The ``rtems.event_receive`` directive is used by tasks to
accept a specific input event condition. The task also
specifies whether the request is satisfied when all requested
events are available or any single requested event is available.
If the requested event condition is satisfied by pending
events, then a successful return code and the satisfying event
set are returned immediately. If the condition is not
satisfied, then one of the following situations applies:
- By default, the calling task will wait forever for the
event condition to be satisfied.
- Specifying the ``RTEMS.NO_WAIT`` option forces an immediate return
with an error status code.
- Specifying a timeout limits the period the task will
wait before returning with an error status code.
Determining the Pending Event Set
---------------------------------
A task can determine the pending event set by calling
the ``rtems.event_receive`` directive with a value of
``RTEMS.PENDING_EVENTS`` for the input event condition.
The pending events are returned to the calling task but the event
set is left unaltered.
Receiving all Pending Events
----------------------------
A task can receive all of the currently pending
events by calling the ``rtems.event_receive``
directive with a value of ``RTEMS.ALL_EVENTS``
for the input event condition and
``RTEMS.NO_WAIT or RTEMS.EVENT_ANY``
for the option set. The pending events are returned to the
calling task and the event set is cleared. If no events are
pending then the ``RTEMS.UNSATISFIED`` status code will be returned.
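The difference between querying and draining the pending set can be shown with a bit-mask sketch (an illustrative model; the pending set is simply modeled as an integer):

```python
# Illustrative model: a task's pending event set as a bit mask.
pending = 0b1010

# RTEMS.PENDING_EVENTS query: the pending set is returned but left unaltered.
event_out = pending
pending_after_query = pending

# ALL_EVENTS with (NO_WAIT or EVENT_ANY): everything pending is returned and
# the set is cleared; an empty set would instead yield RTEMS.UNSATISFIED.
ALL_EVENTS = 0xFFFFFFFF
received = pending & ALL_EVENTS
pending &= ~received

assert pending_after_query == 0b1010   # query left the set intact
assert received == 0b1010 and pending == 0  # drain returned and cleared it
```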
Directives
==========
This section details the event manager's directives.
A subsection is dedicated to each of this manager's directives
and describes the calling sequence, related constants, usage,
and status codes.
EVENT_SEND - Send event set to a task
-------------------------------------
.. index:: send event set to a task
**CALLING SEQUENCE:**
.. code:: c
procedure Event_Send (
ID : in RTEMS.ID;
Event_In : in RTEMS.Event_Set;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - event set sent successfully
``RTEMS.INVALID_ID`` - invalid task id
**DESCRIPTION:**
This directive sends an event set, event_in, to the
task specified by id. If a blocked task's input event condition
is satisfied by this directive, then it will be made ready. If
its input event condition is not satisfied, then the events
satisfied are updated and the events not satisfied are left
pending. If the task specified by id is not blocked waiting for
events, then the events sent are left pending.
**NOTES:**
Specifying ``RTEMS.SELF`` for id results in the event set being
sent to the calling task.
Identical events sent to a task are not queued. In
other words, the second, and subsequent, posting of an event to
a task before it can perform an ``rtems.event_receive``
has no effect.
The calling task will be preempted if it has
preemption enabled and a higher priority task is unblocked as
the result of this directive.
Sending an event set to a global task which does not
reside on the local node will generate a request telling the
remote node to send the event set to the appropriate task.
EVENT_RECEIVE - Receive event condition
---------------------------------------
.. index:: receive event condition
**CALLING SEQUENCE:**
.. code:: c
procedure Event_Receive (
Event_In : in RTEMS.Event_Set;
Option_Set : in RTEMS.Option;
Ticks : in RTEMS.Interval;
Event_Out : out RTEMS.Event_Set;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - event received successfully
``RTEMS.UNSATISFIED`` - input event not satisfied (``RTEMS.NO_WAIT``)
``RTEMS.INVALID_ADDRESS`` - ``event_out`` is NULL
``RTEMS.TIMEOUT`` - timed out waiting for event
**DESCRIPTION:**
This directive attempts to receive the event
condition specified in event_in. If event_in is set to
``RTEMS.PENDING_EVENTS``, then the current pending events are returned in
event_out and left pending. The ``RTEMS.WAIT`` and ``RTEMS.NO_WAIT`` options in the
option_set parameter are used to specify whether or not the task
is willing to wait for the event condition to be satisfied.
``RTEMS.EVENT_ANY`` and ``RTEMS.EVENT_ALL`` are used in the option_set parameter
to specify whether a single event or the complete event set
is necessary to satisfy the event condition. The event_out
parameter is returned to the calling task with the value that
corresponds to the events in event_in that were satisfied.
If pending events satisfy the event condition, then
event_out is set to the satisfied events and the pending events
in the event condition are cleared. If the event condition is
not satisfied and ``RTEMS.NO_WAIT`` is specified, then event_out is set to
the currently satisfied events. If the calling task chooses to
wait, then it will block waiting for the event condition.
If the calling task must wait for the event condition
to be satisfied, then the timeout parameter is used to specify
the maximum interval to wait. If it is set to ``RTEMS.NO_TIMEOUT``, then
the calling task will wait forever.
**NOTES:**
This directive only affects the events specified in
event_in. Any pending events that do not correspond to any of
the events specified in event_in will be left pending.
The following event receive option constants are defined by
RTEMS:
- ``RTEMS.WAIT`` task will wait for event (default)
- ``RTEMS.NO_WAIT`` task should not wait
- ``RTEMS.EVENT_ALL`` return after all events (default)
- ``RTEMS.EVENT_ANY`` return after any events
A clock tick is required to support the functionality of this directive.
.. COMMENT: COPYRIGHT (c) 1988-2002.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.

Example Application
###################
Currently there is no example Ada application provided.
.. COMMENT: COPYRIGHT (c) 1989-2011.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.

Fatal Error Manager
###################
.. index:: fatal errors
Introduction
============
The fatal error manager processes all fatal or irrecoverable errors and other
sources of system termination (for example after exit()). The directives
provided by the fatal error manager are:
- ``rtems.fatal_error_occurred`` - Invoke the fatal error handler
- ``rtems.fatal`` - Invoke the fatal error handler with error source
Background
==========
.. index:: fatal error detection
.. index:: fatal error processing
.. index:: fatal error user extension
The fatal error manager is called upon detection of
an irrecoverable error condition by either RTEMS or the
application software. Fatal errors can be detected from three
sources:
- the executive (RTEMS)
- user system code
- user application code
RTEMS automatically invokes the fatal error manager
upon detection of an error it considers to be fatal. Similarly,
the user should invoke the fatal error manager upon detection of
a fatal error.
Each static or dynamic user extension set may include
a fatal error handler. The fatal error handler in the static
extension set can be used to provide access to debuggers and
monitors which may be present on the target hardware. If any
user-supplied fatal error handlers are installed, the fatal
error manager will invoke them. If no user handlers are
configured or if all of the user handlers return control to the
fatal error manager, then the RTEMS default fatal error handler
is invoked. If the default fatal error handler is invoked, then
the system state is marked as failed.
Although the precise behavior of the default fatal
error handler is processor specific, in general, it will disable
all maskable interrupts, place the error code in a known
processor dependent place (generally either on the stack or in a
register), and halt the processor. The precise actions of the
RTEMS fatal error are discussed in the Default Fatal Error
Processing chapter of the Applications Supplement document for
a specific target processor.
Operations
==========
Announcing a Fatal Error
------------------------
.. index:: _Internal_errors_What_happened
The ``rtems.fatal_error_occurred`` directive is invoked when a
fatal error is detected. Before invoking any user-supplied
fatal error handlers or the RTEMS fatal error handler, the
``rtems.fatal_error_occurred`` directive stores useful information in the
variable ``_Internal_errors_What_happened``. This record
contains three pieces of information:

- the source of the error (API or executive core),
- whether the error was generated internally by the
  executive, and
- a numeric code to indicate the error type.
The error type indicator is dependent on the source
of the error and whether or not the error was internally
generated by the executive. If the error was generated
from an API, then the error code will be one of that API's
error or status codes. The status codes for the RTEMS
API are in cpukit/rtems/include/rtems/rtems/status.h. Those
for the POSIX API can be found in <errno.h>.
The ``rtems.fatal_error_occurred`` directive is responsible
for invoking an optional user-supplied fatal error handler
and/or the RTEMS fatal error handler. All fatal error handlers
are passed an error code to describe the error detected.
Occasionally, an application requires more
sophisticated fatal error processing such as passing control to
a debugger. For these cases, a user-supplied fatal error
handler can be specified in the RTEMS configuration table. The
User Extension Table field fatal contains the address of the
fatal error handler to be executed when the
``rtems.fatal_error_occurred`` directive is called. If the field is
set to NULL or if the
configured fatal error handler returns to the executive, then
the default handler provided by RTEMS is executed. This default
handler will halt execution on the processor where the error
occurred.
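The handler chain described above can be modeled in a few lines of Python. This is an illustrative sketch, not the RTEMS implementation; the handler names and the "return a value to take over" convention are invented for the example:

```python
def default_fatal_handler(error):
    # The RTEMS default handler disables interrupts and halts the processor;
    # modeled here as a descriptive string.
    return f"halted with error {error}"

def fatal_error_occurred(error, user_handlers):
    # Invoke each configured user extension's fatal handler in turn; if every
    # one returns control (modeled as returning None), fall through to the
    # default handler.
    for handler in user_handlers:
        result = handler(error)
        if result is not None:
            return result  # handler took over (e.g. passed control to a debugger)
    return default_fatal_handler(error)

# No user handler configured: the default handler halts the system.
assert fatal_error_occurred(42, []) == "halted with error 42"
# A user handler that returns control passes on to the default handler.
assert fatal_error_occurred(7, [lambda e: None]) == "halted with error 7"
```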
Directives
==========
This section details the fatal error manager's
directives. A subsection is dedicated to each of this manager's
directives and describes the calling sequence, related
constants, usage, and status codes.
FATAL_ERROR_OCCURRED - Invoke the fatal error handler
-----------------------------------------------------
.. index:: announce fatal error
.. index:: fatal error, announce
**CALLING SEQUENCE:**
.. code:: c
procedure Fatal_Error_Occurred (
The_Error : in RTEMS.Unsigned32
);
**DIRECTIVE STATUS CODES**
NONE
**DESCRIPTION:**
This directive processes fatal errors. If the FATAL
error extension is defined in the configuration table, then the
user-defined error extension is called. If configured and the
provided FATAL error extension returns, then the RTEMS default
error handler is invoked. This directive can be invoked by
RTEMS or by the user's application code including initialization
tasks, other tasks, and ISRs.
**NOTES:**
This directive supports local operations only.
Unless the user-defined error extension takes special
actions such as restarting the calling task, this directive WILL
NOT RETURN to the caller.
The user-defined extension for this directive may
wish to initiate a global shutdown.
FATAL - Invoke the fatal error handler with error source
--------------------------------------------------------
.. index:: announce fatal error
.. index:: fatal error, announce
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES**
NONE
**DESCRIPTION:**
This directive invokes the internal error handler with ``is_internal`` set to
false. See also ``rtems.fatal_error_occurred``.
EXCEPTION_FRAME_PRINT - Prints the exception frame
--------------------------------------------------
.. index:: exception frame
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES**
NONE
**DESCRIPTION:**
Prints the exception frame via printk().
FATAL_SOURCE_TEXT - Returns a text for a fatal source
-----------------------------------------------------
.. index:: fatal error
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES**
The fatal source text or "?" in case the passed fatal source is invalid.
**DESCRIPTION:**
Returns a text for a fatal source. The text for fatal source is the enumerator
constant.
INTERNAL_ERROR_TEXT - Returns a text for an internal error code
---------------------------------------------------------------
.. index:: fatal error
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES**
The error code text or "?" in case the passed error code is invalid.
**DESCRIPTION:**
Returns a text for an internal error code. The text for each internal error
code is the enumerator constant.
.. COMMENT: COPYRIGHT (c) 1988-2011.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.

Glossary
########
:dfn:`active`
A term used to describe an object
which has been created by an application.
:dfn:`aperiodic task`
A task which must execute only at
irregular intervals and has only a soft deadline.
:dfn:`application`
In this document, software which makes
use of RTEMS.
:dfn:`ASR`
see Asynchronous Signal Routine.
:dfn:`asynchronous`
Not related in order or timing to
other occurrences in the system.
:dfn:`Asynchronous Signal Routine`
Similar to a hardware
interrupt except that it is associated with a task and is run in
the context of a task. The directives provided by the signal
manager are used to service signals.
:dfn:`atomic operations`
Atomic operations are defined in terms of *ISO/IEC 9899:2011*.
:dfn:`awakened`
A term used to describe a task that has
been unblocked and may be scheduled to the CPU.
:dfn:`big endian`
A data representation scheme in which
the bytes composing a numeric value are arranged such that the
most significant byte is at the lowest address.
:dfn:`bit-mapped`
A data encoding scheme in which each bit
in a variable is used to represent something different. This
makes for compact data representation.
:dfn:`block`
A physically contiguous area of memory.
:dfn:`blocked task`
The task state entered by a task which has been previously started and cannot
continue execution until the reason for waiting has been satisfied. Blocked
tasks are not an element of the set of ready tasks of a scheduler instance.
:dfn:`broadcast`
To simultaneously send a message to a
logical set of destinations.
:dfn:`BSP`
see Board Support Package.
:dfn:`Board Support Package`
A collection of device
initialization and control routines specific to a particular
type of board or collection of boards.
:dfn:`buffer`
A fixed length block of memory allocated
from a partition.
:dfn:`calling convention`
The processor and compiler
dependent rules which define the mechanism used to invoke
subroutines in a high-level language. These rules define the
passing of arguments, the call and return mechanism, and the
register set which must be preserved.
:dfn:`Central Processing Unit`
This term is equivalent to
the terms processor and microprocessor.
:dfn:`chain`
A data structure which allows for efficient
dynamic addition and removal of elements. It differs from an
array in that it is not limited to a predefined size.
:dfn:`cluster`
We have clustered scheduling in case the set of processors of a system is
partitioned into non-empty pairwise disjoint subsets. These subsets are called
:dfn:`clusters`. Clusters with a cardinality of one are partitions. Each
cluster is owned by exactly one scheduler instance.
:dfn:`coalesce`
The process of merging adjacent holes into
a single larger hole. Sometimes this process is referred to as
garbage collection.
:dfn:`Configuration Table`
A table which contains
information used to tailor RTEMS for a particular application.
:dfn:`context`
All of the processor registers and
operating system data structures associated with a task.
:dfn:`context switch`
Alternate term for task switch.
Taking control of the processor from one task and transferring
it to another task.
:dfn:`control block`
A data structure used by the
executive to define and control an object.
:dfn:`core`
When used in this manual, this term refers to
the internal executive utility functions. In the interest of
application portability, the core of the executive should not be
used directly by applications.
:dfn:`CPU`
An acronym for Central Processing Unit.
:dfn:`critical section`
A section of code which must be
executed indivisibly.
:dfn:`CRT`
An acronym for Cathode Ray Tube. Normally used
in reference to the man-machine interface.
:dfn:`deadline`
A fixed time limit by which a task must
have completed a set of actions. Beyond this point, the results
are of reduced value and may even be considered useless or
harmful.
:dfn:`device`
A peripheral used by the application that
requires special operation software. See also device driver.
:dfn:`device driver`
Control software for special
peripheral devices used by the application.
:dfn:`directives`
RTEMS provided routines that provide
support mechanisms for real-time applications.
:dfn:`dispatch`
The act of loading a task's context onto
the CPU and transferring control of the CPU to that task.
:dfn:`dormant`
The state entered by a task after it is
created and before it has been started.
:dfn:`Device Driver Table`
A table which contains the
entry points for each of the configured device drivers.
:dfn:`dual-ported`
A term used to describe memory which
can be accessed at two different addresses.
:dfn:`embedded`
An application that is delivered as a
hidden part of a larger system. For example, the software in a
fuel-injection control system is an embedded application found
in many late-model automobiles.
:dfn:`envelope`
A buffer provided by the MPCI layer to
RTEMS which is used to pass messages between nodes in a
multiprocessor system. It typically contains routing
information needed by the MPCI. The contents of an envelope are
referred to as a packet.
:dfn:`entry point`
The address at which a function or task
begins to execute. In C, the entry point of a function is the
function's name.
:dfn:`events`
A method for task communication and
synchronization. The directives provided by the event manager
are used to service events.
:dfn:`exception`
A synonym for interrupt.
:dfn:`executing task`
The task state entered by a task after it has been given control of the
processor. On SMP configurations a task may be registered as executing on more
than one processor for short time frames during task migration. Blocked tasks
can be executing until they issue a thread dispatch.
:dfn:`executive`
In this document, this term is used to
refer to RTEMS. Commonly, an executive is a small real-time
operating system used in embedded systems.
:dfn:`exported`
An object known by all nodes in a
multiprocessor system. An object created with the GLOBAL
attribute will be exported.
:dfn:`external address`
The address used to access
dual-ported memory by all the nodes in a system which do not own
the memory.
:dfn:`FIFO`
An acronym for First In First Out.
:dfn:`First In First Out`
A discipline for manipulating entries in a data structure.
:dfn:`floating point coprocessor`
A component used in
computer systems to enhance performance in mathematically
intensive situations. It is typically viewed as a logical
extension of the primary processor.
:dfn:`freed`
A resource that has been released by the
application to RTEMS.
:dfn:`Giant lock`
The :dfn:`Giant lock` is a recursive SMP lock protecting most parts of the
operating system state. Virtually every operating system service must acquire
and release the Giant lock during its operation.
:dfn:`global`
An object that has been created with the
GLOBAL attribute and exported to all nodes in a multiprocessor
system.
:dfn:`handler`
The equivalent of a manager, except that it
is internal to RTEMS and forms part of the core. A handler is a
collection of routines which provide a related set of functions.
For example, there is a handler used by RTEMS to manage all
objects.
:dfn:`hard real-time system`
A real-time system in which a
missed deadline causes the work performed to have no value or
to result in a catastrophic effect on the integrity of the
system.
:dfn:`heap`
A data structure used to dynamically allocate
and deallocate variable sized blocks of memory.
:dfn:`heir task`
A task is an :dfn:`heir` if it is registered as an heir in a processor of the
system. A task can be the heir on at most one processor in the system. In
case the executing and heir tasks differ on a processor and a thread dispatch
is marked as necessary, then the next thread dispatch will make the heir task
the executing task.
:dfn:`heterogeneous`
A multiprocessor computer system composed of dissimilar processors.
:dfn:`homogeneous`
A multiprocessor computer system composed of a single type of processor.
:dfn:`ID`
An RTEMS assigned identification tag used to
access an active object.
:dfn:`IDLE task`
A special low priority task which assumes
control of the CPU when no other task is able to execute.
:dfn:`interface`
A specification of the methodology used
to connect multiple independent subsystems.
:dfn:`internal address`
The address used to access
dual-ported memory by the node which owns the memory.
:dfn:`interrupt`
A hardware facility that causes the CPU
to suspend execution, save its status, and transfer control to a
specific location.
:dfn:`interrupt level`
A mask used by the CPU to
determine which pending interrupts should be serviced. If a
pending interrupt is below the current interrupt level, then the
CPU does not recognize that interrupt.
:dfn:`Interrupt Service Routine`
An ISR is invoked by the
CPU to process a pending interrupt.
:dfn:`I/O`
An acronym for Input/Output.
:dfn:`ISR`
An acronym for Interrupt Service Routine.
:dfn:`kernel`
In this document, this term is used as a
synonym for executive.
:dfn:`list`
A data structure which allows for dynamic
addition and removal of entries. It is not statically limited
to a particular size.
:dfn:`little endian`
A data representation scheme in which
the bytes composing a numeric value are arranged such that the
least significant byte is at the lowest address.
:dfn:`local`
An object which was created with the LOCAL
attribute and is accessible only on the node it was created and
resides upon. In a single processor configuration, all objects
are local.
:dfn:`local operation`
The manipulation of an object which
resides on the same node as the calling task.
:dfn:`logical address`
An address used by an application.
In a system without memory management, logical addresses will
equal physical addresses.
:dfn:`loosely-coupled`
A multiprocessor configuration
where shared memory is not used for communication.
:dfn:`major number`
The index of a device driver in the
Device Driver Table.
:dfn:`manager`
A group of related RTEMS directives which
provide access and control over resources.
:dfn:`memory pool`
Used interchangeably with heap.
:dfn:`message`
A sixteen byte entity used to communicate
between tasks. Messages are sent to message queues and stored
in message buffers.
:dfn:`message buffer`
A block of memory used to store
messages.
:dfn:`message queue`
An RTEMS object used to synchronize
and communicate between tasks by transporting messages between
sending and receiving tasks.
:dfn:`Message Queue Control Block`
A data structure associated with each message queue used by RTEMS
to manage that message queue.
:dfn:`minor number`
A numeric value passed to a device
driver, the exact usage of which is driver dependent.
:dfn:`mode`
An entry in a task's control block that is
used to determine if the task allows preemption, timeslicing,
processing of signals, and the interrupt disable level used by
the task.
:dfn:`MPCI`
An acronym for Multiprocessor Communications
Interface Layer.
:dfn:`multiprocessing`
The simultaneous execution of two
or more processes by a multiple processor computer system.
:dfn:`multiprocessor`
A computer with multiple CPUs
available for executing applications.
:dfn:`Multiprocessor Communications Interface Layer`
A set
of user-provided routines which enable the nodes in a
multiprocessor system to communicate with one another.
:dfn:`Multiprocessor Configuration Table`
The data structure defining the characteristics of the multiprocessor
target system with which RTEMS will communicate.
:dfn:`multitasking`
The alternation of execution amongst a
group of processes on a single CPU. A scheduling algorithm is
used to determine which process executes at which time.
:dfn:`mutual exclusion`
A term used to describe the act of
preventing other tasks from accessing a resource simultaneously.
:dfn:`nested`
A term used to describe an ASR that occurs
during another ASR or an ISR that occurs during another ISR.
:dfn:`node`
A term used to reference a processor running
RTEMS in a multiprocessor system.
:dfn:`non-existent`
The state occupied by an uncreated or
deleted task.
:dfn:`numeric coprocessor`
A component used in computer
systems to enhance performance in mathematically intensive
situations. It is typically viewed as a logical extension of
the primary processor.
:dfn:`object`
In this document, this term is used to refer
collectively to tasks, timers, message queues, partitions,
regions, semaphores, ports, and rate monotonic periods. All
RTEMS objects have IDs and user-assigned names.
:dfn:`object-oriented`
A term used to describe systems
with common mechanisms for utilizing a variety of entities.
Object-oriented systems shield the application from
implementation details.
:dfn:`operating system`
The software which controls all
the computer's resources and provides the base upon which
application programs can be written.
:dfn:`overhead`
The portion of the CPU's processing power
consumed by the operating system.
:dfn:`packet`
A buffer which contains the messages passed
between nodes in a multiprocessor system. A packet is the
contents of an envelope.
:dfn:`partition`
An RTEMS object which is used to allocate
and deallocate fixed size blocks of memory from a dynamically
specified area of memory.
:dfn:`partition`
Clusters with a cardinality of one are :dfn:`partitions`.
:dfn:`Partition Control Block`
A data structure associated
with each partition used by RTEMS to manage that partition.
:dfn:`pending`
A term used to describe a task blocked
waiting for an event, message, semaphore, or signal.
:dfn:`periodic task`
A task which must execute at regular
intervals and comply with a hard deadline.
:dfn:`physical address`
The actual hardware address of a
resource.
:dfn:`poll`
A mechanism used to determine if an event has
occurred by periodically checking for a particular status.
Typical events include arrival of data, completion of an action,
and errors.
:dfn:`pool`
A collection from which resources are
allocated.
:dfn:`portability`
A term used to describe the ease with
which software can be rehosted on another computer.
:dfn:`posting`
The act of sending an event, message,
semaphore, or signal to a task.
:dfn:`preempt`
The act of forcing a task to relinquish the
processor and dispatching to another task.
:dfn:`priority`
A mechanism used to represent the relative
importance of an element in a set of items. RTEMS uses priority
to determine which task should execute.
:dfn:`priority boosting`
A simple approach to extend the priority inheritance protocol for clustered
scheduling is :dfn:`priority boosting`. In case a mutex is owned by a task of
another cluster, then the priority of the owner task is raised to an
artificially high priority, the pseudo-interrupt priority.
:dfn:`priority inheritance`
An algorithm that calls for
the lower priority task holding a resource to have its priority
increased to that of the highest priority task blocked waiting
for that resource. This avoids the problem of priority
inversion.
:dfn:`priority inversion`
A form of indefinite
postponement which occurs when a high priority task requests
access to a shared resource currently allocated to a low priority
task. The high priority task must block until the low priority
task releases the resource.
:dfn:`processor utilization`
The percentage of processor
time used by a task or a set of tasks.
:dfn:`proxy`
An RTEMS control structure used to represent,
on a remote node, a task which must block as part of a remote
operation.
:dfn:`Proxy Control Block`
A data structure associated
with each proxy used by RTEMS to manage that proxy.
:dfn:`PTCB`
An acronym for Partition Control Block.
:dfn:`PXCB`
An acronym for Proxy Control Block.
:dfn:`quantum`
The application defined unit of time in
which the processor is allocated.
:dfn:`queue`
Alternate term for message queue.
:dfn:`QCB`
An acronym for Message Queue Control Block.
:dfn:`ready task`
A task occupies this state when it is available to be given control of a
processor. A ready task has no processor assigned. The scheduler decided that
other tasks are currently more important. A task that is ready to execute and
has a processor assigned is called scheduled.
:dfn:`real-time`
A term used to describe systems which are
characterized by requiring deterministic response times to
external stimuli. The external stimuli require that the
response occur at a precise time or the response is incorrect.
:dfn:`reentrant`
A term used to describe routines which do
not modify themselves or global variables.
:dfn:`region`
An RTEMS object which is used to allocate
and deallocate variable size blocks of memory from a dynamically
specified area of memory.
:dfn:`Region Control Block`
A data structure associated
with each region used by RTEMS to manage that region.
:dfn:`registers`
Registers are locations physically
located within a component, typically used for device control or
general purpose storage.
:dfn:`remote`
Any object that does not reside on the local
node.
:dfn:`remote operation`
The manipulation of an object
which does not reside on the same node as the calling task.
:dfn:`return code`
Also known as error code or return
value.
:dfn:`resource`
A hardware or software entity to which
access must be controlled.
:dfn:`resume`
Removing a task from the suspend state. If
the task's state is ready following a call to the ``rtems.task_resume``
directive, then the task is available for scheduling.
:dfn:`return code`
A value returned by RTEMS directives to
indicate the completion status of the directive.
:dfn:`RNCB`
An acronym for Region Control Block.
:dfn:`round-robin`
A task scheduling discipline in which
tasks of equal priority are executed in the order in which they
are made ready.
:dfn:`RS-232`
A standard for serial communications.
:dfn:`running`
The state of a rate monotonic timer while
it is being used to delineate a period. The timer exits this
state by either expiring or being canceled.
:dfn:`schedulable`
A set of tasks which can be guaranteed
to meet their deadlines based upon a specific scheduling
algorithm.
:dfn:`schedule`
The process of choosing which task should
next enter the executing state.
:dfn:`scheduled task`
A task is :dfn:`scheduled` if it is allowed to execute and has a processor
assigned. Such a task executes currently on a processor or is about to start
execution. A task about to start execution is an heir task on exactly one
processor in the system.
:dfn:`scheduler`
A :dfn:`scheduler` or :dfn:`scheduling algorithm` allocates processors to a
subset of its set of ready tasks. So it manages access to the processor
resource. Various algorithms exist to choose the tasks allowed to use a
processor out of the set of ready tasks. One method is to assign each task a
priority number and assign the tasks with the lowest priority number to one
processor of the set of processors owned by a scheduler instance.
:dfn:`scheduler instance`
A :dfn:`scheduler instance` is a scheduling algorithm with a corresponding
context to store its internal state. Each processor in the system is owned by
at most one scheduler instance. The processor to scheduler instance assignment
is determined at application configuration time. See `Configuring Clustered Schedulers`_.
:dfn:`segments`
Variable sized memory blocks allocated
from a region.
:dfn:`semaphore`
An RTEMS object which is used to
synchronize tasks and provide mutually exclusive access to
resources.
:dfn:`Semaphore Control Block`
A data structure associated
with each semaphore used by RTEMS to manage that semaphore.
:dfn:`shared memory`
Memory which is accessible by
multiple nodes in a multiprocessor system.
:dfn:`signal`
An RTEMS provided mechanism to communicate
asynchronously with a task. Upon reception of a signal, the ASR
of the receiving task will be invoked.
:dfn:`signal set`
A thirty-two bit entity which is used to
represent a task's collection of pending signals and the signals
sent to a task.
:dfn:`SMCB`
An acronym for Semaphore Control Block.
:dfn:`SMP locks`
The :dfn:`SMP locks` ensure mutual exclusion on the lowest level and are a
replacement for the sections of disabled interrupts. Interrupts are usually
disabled while holding an SMP lock. They are implemented using atomic
operations. Currently a ticket lock is used in RTEMS.
:dfn:`SMP barriers`
The :dfn:`SMP barriers` ensure that a defined set of independent threads of
execution on a set of processors reaches a common synchronization point in
time. They are implemented using atomic operations. Currently a sense barrier
is used in RTEMS.
:dfn:`soft real-time system`
A real-time system in which a
missed deadline does not compromise the integrity of the system.
:dfn:`sporadic task`
A task which executes at irregular
intervals and must comply with a hard deadline. A minimum
period of time between successive iterations of the task can be
guaranteed.
:dfn:`stack`
A data structure that is managed using a Last
In First Out (LIFO) discipline. Each task has a stack
associated with it which is used to store return information
and local variables.
:dfn:`status code`
Also known as error code or return
value.
:dfn:`suspend`
A term used to describe a task that is not
competing for the CPU because it has had a ``rtems.task_suspend`` directive.
:dfn:`synchronous`
Related in order or timing to other
occurrences in the system.
:dfn:`system call`
In this document, this is used as an
alternate term for directive.
:dfn:`target`
The system on which the application will
ultimately execute.
:dfn:`task`
A logically complete thread of execution. It consists normally of a set of
registers and a stack. The terms :dfn:`task` and :dfn:`thread` are synonyms in
RTEMS. The scheduler assigns processors to a subset of the ready tasks.
:dfn:`Task Control Block`
A data structure associated with
each task used by RTEMS to manage that task.
:dfn:`task migration`
:dfn:`Task migration` occurs when a task stops execution on one processor
and resumes execution on another processor.
:dfn:`task processor affinity`
The set of processors on which a task is allowed to execute.
:dfn:`task switch`
Alternate terminology for context
switch. Taking control of the processor from one task and giving
it to another.
:dfn:`TCB`
An acronym for Task Control Block.
:dfn:`thread dispatch`
The :dfn:`thread dispatch` transfers control of the processor from the currently
executing thread to the heir thread of the processor.
:dfn:`tick`
The basic unit of time used by RTEMS. It is a
user-configurable number of microseconds. The current tick
expires when the ``rtems.clock_tick``
directive is invoked.
:dfn:`tightly-coupled`
A multiprocessor configuration
in which the processors communicate via shared memory.
:dfn:`timeout`
An argument provided to a number of
directives which determines the maximum length of time an
application task is willing to wait to acquire the resource if
it is not immediately available.
:dfn:`timer`
An RTEMS object used to invoke subprograms at
a later time.
:dfn:`Timer Control Block`
A data structure associated
with each timer used by RTEMS to manage that timer.
:dfn:`timeslicing`
A task scheduling discipline in which
tasks of equal priority are executed for a specific period of
time before being preempted by another task.
:dfn:`timeslice`
The application defined unit of time in
which the processor is allocated.
:dfn:`TMCB`
An acronym for Timer Control Block.
:dfn:`transient overload`
A temporary rise in system
activity which may cause deadlines to be missed. Rate Monotonic
Scheduling can be used to determine if all deadlines will be met
under transient overload.
:dfn:`user extensions`
Software routines provided by the
application to enhance the functionality of RTEMS.
:dfn:`User Extension Table`
A table which contains the
entry points for each user extension.
:dfn:`User Initialization Tasks Table`
A table which
contains the information needed to create and start each of the
user initialization tasks.
:dfn:`user-provided`
Alternate term for user-supplied.
This term is used to designate any software routines which must
be written by the application designer.
:dfn:`user-supplied`
Alternate term for user-provided.
This term is used to designate any software routines which must
be written by the application designer.
:dfn:`vector`
Memory pointers used by the processor to
fetch the address of routines which will handle various
exceptions and interrupts.
:dfn:`wait queue`
The list of tasks blocked pending the
release of a particular resource. Message queues, regions, and
semaphores have a wait queue associated with them.
:dfn:`yield`
When a task voluntarily releases control of the processor.

======================
RTEMS Ada Users Guide
======================
COPYRIGHT © 1988 - 2015.
On-Line Applications Research Corporation (OAR).
The authors have used their best efforts in preparing
this material. These efforts include the development, research,
and testing of the theories and programs to determine their
effectiveness. No warranty of any kind, expressed or implied,
with regard to the software or the material contained in this
document is provided. No liability arising out of the
application or use of any product described in this document is
assumed. The authors reserve the right to revise this material
and to make changes from time to time in the content hereof
without obligation to notify anyone of such revision or changes.
The RTEMS Project is hosted at http://www.rtems.org. Any
inquiries concerning RTEMS, its related support components, or its
documentation should be directed to the Community Project hosted at
http://www.rtems.org.
Any inquiries for commercial services including training, support, custom
development, or application development assistance should be directed to
http://www.rtems.com.
Table of Contents
-----------------
.. toctree::
preface
.. toctree::
:maxdepth: 3
:numbered:
overview
key_concepts
rtems_data_types
initialization_manager
task_manager
interrupt_manager
clock_manager
timer_manager
semaphore_manager
message_manager
event_manager
signal_manager
partition_manager
region_manager
dual_ports_memory_manager
io_manager
fatal_error_manager
scheduling_concepts
rate_monotonic_manager
barrier_manager
board_support_packages
user_extensions_manager
configuring_a_system
multiprocessing_manager
symmetric_multiprocessing_services
pci_library
stack_bounds_checker
cpu_usage_statistics
object_services
chains
red_black_trees
timepsec_helpers
constant_bandwidth_server_scheduler_api
directive_status_codes
linker_sets
example_application
glossary
command
* :ref:`genindex`
* :ref:`search`

Initialization Manager
######################
Introduction
============
The Initialization Manager is responsible for
initiating and shutting down RTEMS. Initiating RTEMS involves
creating and starting all configured initialization tasks, and
for invoking the initialization routine for each user-supplied
device driver. In a multiprocessor configuration, this manager
also initializes the interprocessor communications layer. The
directives provided by the Initialization Manager are:
- ``rtems.initialize_executive`` - Initialize RTEMS
- ``rtems.shutdown_executive`` - Shutdown RTEMS
Background
==========
Initialization Tasks
--------------------
.. index:: initialization tasks
Initialization task(s) are the mechanism by which
RTEMS transfers initial control to the user's application.
Initialization tasks differ from other application tasks in that
they are defined in the User Initialization Tasks Table and
automatically created and started by RTEMS as part of its
initialization sequence. Since the initialization tasks are
scheduled using the same algorithm as all other RTEMS tasks,
they must be configured at a priority and mode which will ensure
that they will complete execution before other application tasks
execute. Although there is no upper limit on the number of
initialization tasks, an application is required to define at
least one.
A typical initialization task will create and start
the static set of application tasks. It may also create any
other objects used by the application. Initialization tasks
which only perform initialization should delete themselves upon
completion to free resources for other tasks. Initialization
tasks may transform themselves into a "normal" application task.
This transformation typically involves changing priority and
execution mode. RTEMS does not automatically delete the
initialization tasks.
System Initialization
---------------------
System Initialization begins with board reset and continues
through RTEMS initialization, initialization of all device
drivers, and eventually a context switch to the first user
task. Remember, that interrupts are disabled during
initialization and the *initialization context* is not
a task in any sense and the user should be very careful
during initialization.
The BSP must ensure that there is enough stack
space reserved for the initialization context to
successfully execute the initialization routines for
all device drivers and, in multiprocessor configurations, the
Multiprocessor Communications Interface Layer initialization
routine.
The Idle Task
-------------
The Idle Task is the lowest priority task in a system
and executes only when no other task is ready to execute. The
default implementation of this task consists of an infinite
loop. RTEMS allows the Idle Task body to be replaced by a CPU
specific implementation, a BSP specific implementation or an
application specific implementation.
The Idle Task is preemptible and *WILL* be preempted when
any other task is made ready to execute. This characteristic is
critical to the overall behavior of any application.
Initialization Manager Failure
------------------------------
The ``rtems.fatal_error_occurred`` directive will
be invoked from ``rtems.initialize_executive``
for any of the following reasons:
- If either the Configuration Table or the CPU Dependent
Information Table is not provided.
- If the starting address of the RTEMS RAM Workspace,
supplied by the application in the Configuration Table, is NULL
or is not aligned on a four-byte boundary.
- If the size of the RTEMS RAM Workspace is not large
enough to initialize and configure the system.
- If the interrupt stack size specified is too small.
- If multiprocessing is configured and the node entry in
the Multiprocessor Configuration Table is not between one and
the maximum_nodes entry.
- If a multiprocessor system is being configured and no
Multiprocessor Communications Interface is specified.
- If no user initialization tasks are configured. At
least one initialization task must be configured to allow RTEMS
to pass control to the application at the end of the executive
initialization sequence.
- If any of the user initialization tasks cannot be
created or started successfully.
A discussion of RTEMS actions when a fatal error occurs
may be found in `Announcing a Fatal Error`_.
Operations
==========
Initializing RTEMS
------------------
The Initialization Manager ``rtems.initialize_executive``
directive is called by the ``boot_card`` routine. The ``boot_card``
routine is invoked by the Board Support Package once a basic C run-time
environment is set up. This consists of
- a valid and accessible text section, read-only data, read-write data and
zero-initialized data,
- an initialization stack large enough to initialize the rest of the Board
Support Package, RTEMS and the device drivers,
- all registers and components mandated by the Application Binary Interface, and
- disabled interrupts.
The ``rtems.initialize_executive`` directive uses a system
initialization linker set to initialize only those parts of the overall RTEMS
feature set that are necessary for a particular application. See `Linker Sets`_.
Each RTEMS feature used by the application may optionally register an
initialization handler. The system initialization API is available via
``#include <rtems/sysinit.h>``.
A list of all initialization steps follows. Some steps are optional depending
on the requested feature set of the application. The initialization steps are
executed in the order presented here.
:dfn:`RTEMS_SYSINIT_BSP_WORK_AREAS`
The work areas consisting of C Program Heap and the RTEMS Workspace are
initialized by the Board Support Package. This step is mandatory.
:dfn:`RTEMS_SYSINIT_BSP_START`
Basic initialization step provided by the Board Support Package. This step is
mandatory.
:dfn:`RTEMS_SYSINIT_DATA_STRUCTURES`
This directive is called when the Board Support Package has completed its basic
initialization and allows RTEMS to initialize the application environment based
upon the information in the Configuration Table, User Initialization Tasks
Table, Device Driver Table, User Extension Table, Multiprocessor Configuration
Table, and the Multiprocessor Communications Interface (MPCI) Table.
:dfn:`RTEMS_SYSINIT_BSP_LIBC`
Depending on the application configuration, the IO library and root filesystem
are initialized. This step is mandatory.
:dfn:`RTEMS_SYSINIT_BEFORE_DRIVERS`
This directive performs initialization that must occur between basic RTEMS data
structure initialization and device driver initialization. In particular, in a
multiprocessor configuration, this directive will create the MPCI Server Task.
:dfn:`RTEMS_SYSINIT_BSP_PRE_DRIVERS`
Initialization step performed right before device drivers are initialized
provided by the Board Support Package. This step is mandatory.
:dfn:`RTEMS_SYSINIT_DEVICE_DRIVERS`
This step initializes all statically configured device drivers and performs all
RTEMS initialization which requires device drivers to be initialized. This
step is mandatory.
In a multiprocessor configuration, this service will initialize the
Multiprocessor Communications Interface (MPCI) and synchronize with the other
nodes in the system.
:dfn:`RTEMS_SYSINIT_BSP_POST_DRIVERS`
Initialization step performed right after device drivers are initialized
provided by the Board Support Package. This step is mandatory.
The final action of the ``rtems.initialize_executive`` directive
is to start multitasking. RTEMS does not return to the initialization context
and the initialization stack may be re-used for interrupt processing.
Many of the RTEMS actions during initialization are based upon
the contents of the Configuration Table. For more information
regarding the format and contents of this table, please refer
to the chapter `Configuring a System`_.
The final action in the initialization sequence is the
initiation of multitasking. When the scheduler and dispatcher
are enabled, the highest priority, ready task will be dispatched
to run. Control will not be returned to the Board Support
Package after multitasking is enabled. The initialization stack may be re-used
for interrupt processing.
Shutting Down RTEMS
-------------------
The ``rtems.shutdown_executive`` directive is invoked by the
application to end multitasking and terminate the system.
Directives
==========
This section details the Initialization Manager's
directives. A subsection is dedicated to each of this manager's
directives and describes the calling sequence, related
constants, usage, and status codes.
INITIALIZE_EXECUTIVE - Initialize RTEMS
---------------------------------------
.. index:: initialize RTEMS
.. index:: start multitasking
**CALLING SEQUENCE:**
.. code:: c
NOT SUPPORTED FROM Ada BINDING
**DIRECTIVE STATUS CODES:**
NONE
**DESCRIPTION:**
Iterates through the system initialization linker set and invokes the
registered handlers. The final step is to start multitasking.
**NOTES:**
This directive should be called by ``boot_card`` only.
This directive *does not return* to the caller. Errors in the initialization
sequence are usually fatal and lead to a system termination.
SHUTDOWN_EXECUTIVE - Shutdown RTEMS
-----------------------------------
.. index:: shutdown RTEMS
**CALLING SEQUENCE:**
.. code:: c
procedure Shutdown_Executive(
Status : in RTEMS.Unsigned32
);
**DIRECTIVE STATUS CODES:**
NONE
**DESCRIPTION:**
This directive is called when the application wishes to shutdown RTEMS. The
system is terminated with a fatal source of ``RTEMS_FATAL_SOURCE_EXIT`` and
the specified ``result`` code.
**NOTES:**
This directive *must* be the last RTEMS directive
invoked by an application and it *does not return* to the caller.
This directive may be called any time.
.. COMMENT: COPYRIGHT (c) 1988-2014.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.

Interrupt Manager
#################
Introduction
============
Any real-time executive must provide a mechanism for
quick response to externally generated interrupts to satisfy the
critical time constraints of the application. The interrupt
manager provides this mechanism for RTEMS. This manager permits
quick interrupt response times by providing the critical ability
to alter task execution which allows a task to be preempted upon
exit from an ISR. The interrupt manager includes the following
directives:
- ``rtems.interrupt_catch`` - Establish an ISR
- ``rtems.interrupt_disable`` - Disable Interrupts
- ``rtems.interrupt_enable`` - Enable Interrupts
- ``rtems.interrupt_flash`` - Flash Interrupt
- ``rtems.interrupt_local_disable`` - Disable Interrupts on Current Processor
- ``rtems.interrupt_local_enable`` - Enable Interrupts on Current Processor
- ``rtems.interrupt_lock_initialize`` - Initialize an ISR Lock
- ``rtems.interrupt_lock_acquire`` - Acquire an ISR Lock
- ``rtems.interrupt_lock_release`` - Release an ISR Lock
- ``rtems.interrupt_lock_acquire_isr`` - Acquire an ISR Lock from ISR
- ``rtems.interrupt_lock_release_isr`` - Release an ISR Lock from ISR
- ``rtems.interrupt_is_in_progress`` - Is an ISR in Progress
Background
==========
Processing an Interrupt
-----------------------
.. index:: interrupt processing
The interrupt manager allows the application to
connect a function to a hardware interrupt vector. When an
interrupt occurs, the processor will automatically vector to
RTEMS. RTEMS saves and restores all registers which are not
preserved by the normal Ada calling convention
for the target
processor and invokes the user's ISR. The user's ISR is
responsible for processing the interrupt, clearing the interrupt
if necessary, and device specific manipulation.

.. index:: rtems_vector_number
The ``rtems.interrupt_catch``
directive connects a procedure to
an interrupt vector. The vector number is managed using
the ``rtems.vector_number`` data type.
The interrupt service routine is assumed
to abide by these conventions and have a prototype similar to
the following:
.. code:: c
NOT SUPPORTED FROM Ada BINDING
The vector number argument is provided by RTEMS to
allow the application to identify the interrupt source. This
could be used to allow a single routine to service interrupts
from multiple instances of the same device. For example, a
single routine could service interrupts from multiple serial
ports and use the vector number to identify which port requires
servicing.
To minimize the masking of lower or equal priority
level interrupts, the ISR should perform the minimum actions
required to service the interrupt. Other non-essential actions
should be handled by application tasks. Once the user's ISR has
completed, it returns control to the RTEMS interrupt manager
which will perform task dispatching and restore the registers
saved before the ISR was invoked.
The RTEMS interrupt manager guarantees that proper
task scheduling and dispatching are performed at the conclusion
of an ISR. A system call made by the ISR may have readied a
task of higher priority than the interrupted task. Therefore,
when the ISR completes, the postponed dispatch processing must
be performed. No dispatch processing is performed as part of
directives which have been invoked by an ISR.
Applications must adhere to the following rule if
proper task scheduling and dispatching is to be performed:
- *The interrupt manager must be used for all ISRs which
  may be interrupted by the highest priority ISR which invokes an
  RTEMS directive.*
Consider a processor which allows a numerically low
interrupt level to interrupt a numerically greater interrupt
level. In this example, if an RTEMS directive is used in a
level 4 ISR, then all ISRs which execute at levels 0 through 4
must use the interrupt manager.
Interrupts are nested whenever an interrupt occurs
during the execution of another ISR. RTEMS supports efficient
interrupt nesting by allowing the nested ISRs to terminate
without performing any dispatch processing. Only when the
outermost ISR terminates will the postponed dispatching occur.
RTEMS Interrupt Levels
----------------------
.. index:: interrupt levels
Many processors support multiple interrupt levels or
priorities. The exact number of interrupt levels is processor
dependent. RTEMS internally supports 256 interrupt levels which
are mapped to the processor's interrupt levels. For specific
information on the mapping between RTEMS and the target
processor's interrupt levels, refer to the Interrupt Processing
chapter of the Applications Supplement document for a specific
target processor.
Disabling of Interrupts by RTEMS
--------------------------------
.. index:: disabling interrupts
During the execution of directive calls, critical
sections of code may be executed. When these sections are
encountered, RTEMS disables all maskable interrupts before the
execution of the section and restores them to the previous level
upon completion of the section. RTEMS has been optimized to
ensure that interrupts are disabled for a minimum length of
time. The maximum length of time interrupts are disabled by
RTEMS is processor dependent and is detailed in the Timing
Specification chapter of the Applications Supplement document
for a specific target processor.
Non-maskable interrupts (NMI) cannot be disabled, and
ISRs which execute at this level MUST NEVER issue RTEMS system
calls. If a directive is invoked, unpredictable results may
occur due to the inability of RTEMS to protect its critical
sections. However, ISRs that make no system calls may safely
execute as non-maskable interrupts.
Operations
==========
Establishing an ISR
-------------------
The ``rtems.interrupt_catch``
directive establishes an ISR for
the system. The address of the ISR and its associated CPU
vector number are specified to this directive. This directive
installs the RTEMS interrupt wrapper in the processor's
Interrupt Vector Table and the address of the user's ISR in the
RTEMS Vector Table. This directive returns the previous
contents of the specified vector in the RTEMS Vector Table.
Directives Allowed from an ISR
------------------------------
Using the interrupt manager ensures that RTEMS knows
when a directive is being called from an ISR. The ISR may then
use system calls to synchronize itself with an application task.
The synchronization may involve messages, events or signals
being passed by the ISR to the desired task. Directives invoked
by an ISR must operate only on objects which reside on the local
node. The following is a list of RTEMS system calls that may be
made from an ISR:
- Task Management
Although it is acceptable to operate on the RTEMS_SELF task (i.e.,
the currently executing task) while in an ISR, this will refer
to the interrupted task. Most of the time, it is an application
implementation error to use RTEMS_SELF from an ISR.
- rtems_task_suspend
- rtems_task_resume
- Interrupt Management
- rtems_interrupt_enable
- rtems_interrupt_disable
- rtems_interrupt_flash
- rtems_interrupt_lock_acquire
- rtems_interrupt_lock_release
- rtems_interrupt_lock_acquire_isr
- rtems_interrupt_lock_release_isr
- rtems_interrupt_is_in_progress
- rtems_interrupt_catch
- Clock Management
- rtems_clock_set
- rtems_clock_get
- rtems_clock_get_tod
- rtems_clock_get_tod_timeval
- rtems_clock_get_seconds_since_epoch
- rtems_clock_get_ticks_per_second
- rtems_clock_get_ticks_since_boot
- rtems_clock_get_uptime
- rtems_clock_set_nanoseconds_extension
- rtems_clock_tick
- Timer Management
- rtems_timer_cancel
- rtems_timer_reset
- rtems_timer_fire_after
- rtems_timer_fire_when
- rtems_timer_server_fire_after
- rtems_timer_server_fire_when
- Event Management
- rtems_event_send
- rtems_event_system_send
- rtems_event_transient_send
- Semaphore Management
- rtems_semaphore_release
- Message Management
- rtems_message_queue_send
- rtems_message_queue_urgent
- Signal Management
- rtems_signal_send
- Dual-Ported Memory Management
- rtems_port_external_to_internal
- rtems_port_internal_to_external
- IO Management
The following services are safe to call from an ISR if and only if
the device driver service invoked is also safe. The IO Manager itself
is safe but the invoked driver entry point may or may not be.
- rtems_io_initialize
- rtems_io_open
- rtems_io_close
- rtems_io_read
- rtems_io_write
- rtems_io_control
- Fatal Error Management
- rtems_fatal
- rtems_fatal_error_occurred
- Multiprocessing
- rtems_multiprocessing_announce
Directives
==========
This section details the interrupt manager's
directives. A subsection is dedicated to each of this manager's
directives and describes the calling sequence, related
constants, usage, and status codes.
INTERRUPT_CATCH - Establish an ISR
----------------------------------
.. index:: establish an ISR
.. index:: install an ISR
**CALLING SEQUENCE:**
.. code:: c
NOT SUPPORTED FROM Ada BINDING
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - ISR established successfully
``RTEMS.INVALID_NUMBER`` - illegal vector number
``RTEMS.INVALID_ADDRESS`` - illegal ISR entry point or invalid ``old_isr_handler``
**DESCRIPTION:**
This directive establishes an interrupt service
routine (ISR) for the specified interrupt vector number. The
``new_isr_handler`` parameter specifies the entry point of the ISR.
The entry point of the previous ISR for the specified vector is
returned in ``old_isr_handler``.
To release an interrupt vector, pass the old handler's address obtained
when the vector was first captured.
**NOTES:**
This directive will not cause the calling task to be preempted.
INTERRUPT_DISABLE - Disable Interrupts
--------------------------------------
.. index:: disable interrupts
**CALLING SEQUENCE:**
.. code:: c
function Interrupt_Disable return RTEMS.ISR_Level;
**DIRECTIVE STATUS CODES:**
NONE
**DESCRIPTION:**
This directive disables all maskable interrupts and returns
the previous ``level``. A later invocation of the
``rtems.interrupt_enable`` directive should be used to
restore the interrupt level.
**NOTES:**
This directive will not cause the calling task to be preempted.
This directive is only available on uni-processor configurations. The
directive ``rtems.interrupt_local_disable`` is available on all
configurations.
INTERRUPT_ENABLE - Enable Interrupts
------------------------------------
.. index:: enable interrupts
**CALLING SEQUENCE:**
.. code:: c
procedure Interrupt_Enable (
Level : in RTEMS.ISR_Level
);
**DIRECTIVE STATUS CODES:**
NONE
**DESCRIPTION:**
This directive enables maskable interrupts to the ``level``
which was returned by a previous call to
``rtems.interrupt_disable``.
Immediately prior to invoking this directive, maskable interrupts should
be disabled by a call to ``rtems.interrupt_disable``
and will be enabled when this directive returns to the caller.
**NOTES:**
This directive will not cause the calling task to be preempted.
This directive is only available on uni-processor configurations. The
directive ``rtems.interrupt_local_enable`` is available on all
configurations.
INTERRUPT_FLASH - Flash Interrupts
----------------------------------
.. index:: flash interrupts
**CALLING SEQUENCE:**
.. code:: c
procedure Interrupt_Flash (
Level : in RTEMS.ISR_Level
);
**DIRECTIVE STATUS CODES:**
NONE
**DESCRIPTION:**
This directive temporarily enables maskable interrupts to the ``level``
which was returned by a previous call to
``rtems.interrupt_disable``.
Immediately prior to invoking this directive, maskable interrupts should
be disabled by a call to ``rtems.interrupt_disable``
and will be redisabled when this directive returns to the caller.
**NOTES:**
This directive will not cause the calling task to be preempted.
This directive is only available on uni-processor configurations. The
directives ``rtems.interrupt_local_disable`` and
``rtems.interrupt_local_enable`` are available on all
configurations.
INTERRUPT_LOCAL_DISABLE - Disable Interrupts on Current Processor
-----------------------------------------------------------------
.. index:: disable interrupts
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
NONE
**DESCRIPTION:**
This directive disables all maskable interrupts and returns
the previous ``level``. A later invocation of the
``rtems.interrupt_local_enable`` directive should be used to
restore the interrupt level.
**NOTES:**
This directive will not cause the calling task to be preempted.
On SMP configurations this will not ensure system wide mutual exclusion. Use
interrupt locks instead.
INTERRUPT_LOCAL_ENABLE - Enable Interrupts on Current Processor
---------------------------------------------------------------
.. index:: enable interrupts
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
NONE
**DESCRIPTION:**
This directive enables maskable interrupts to the ``level``
which was returned by a previous call to
``rtems.interrupt_local_disable``.
Immediately prior to invoking this directive, maskable interrupts should
be disabled by a call to ``rtems.interrupt_local_disable``
and will be enabled when this directive returns to the caller.
**NOTES:**
This directive will not cause the calling task to be preempted.
INTERRUPT_LOCK_INITIALIZE - Initialize an ISR Lock
--------------------------------------------------
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
NONE
**DESCRIPTION:**
Initializes an interrupt lock.
**NOTES:**
Concurrent initialization leads to unpredictable results.
INTERRUPT_LOCK_ACQUIRE - Acquire an ISR Lock
--------------------------------------------
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
NONE
**DESCRIPTION:**
Interrupts will be disabled. On SMP configurations this directive acquires an
SMP lock.
**NOTES:**
This directive will not cause the calling thread to be preempted. This
directive can be used in thread and interrupt context.
INTERRUPT_LOCK_RELEASE - Release an ISR Lock
--------------------------------------------
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
NONE
**DESCRIPTION:**
The interrupt status will be restored. On SMP configurations this directive
releases an SMP lock.
**NOTES:**
This directive will not cause the calling thread to be preempted. This
directive can be used in thread and interrupt context.
INTERRUPT_LOCK_ACQUIRE_ISR - Acquire an ISR Lock from ISR
---------------------------------------------------------
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
NONE
**DESCRIPTION:**
The interrupt status will remain unchanged. On SMP configurations this
directive acquires an SMP lock.
In case the corresponding interrupt service routine can be interrupted by
higher priority interrupts and these interrupts enter the critical section
protected by this lock, then the result is unpredictable.
**NOTES:**
This directive should be called from the corresponding interrupt service
routine.
INTERRUPT_LOCK_RELEASE_ISR - Release an ISR Lock from ISR
---------------------------------------------------------
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
NONE
**DESCRIPTION:**
The interrupt status will remain unchanged. On SMP configurations this
directive releases an SMP lock.
**NOTES:**
This directive should be called from the corresponding interrupt service
routine.
INTERRUPT_IS_IN_PROGRESS - Is an ISR in Progress
------------------------------------------------
.. index:: is interrupt in progress
**CALLING SEQUENCE:**
.. code:: c
function Interrupt_Is_In_Progress return RTEMS.Boolean;
**DIRECTIVE STATUS CODES:**
NONE
**DESCRIPTION:**
This directive returns ``TRUE`` if the processor is currently
servicing an interrupt and ``FALSE`` otherwise. A return value
of ``TRUE`` indicates that the caller is an interrupt service
routine, *NOT* a task. The directives available to an interrupt
service routine are restricted.
**NOTES:**
This directive will not cause the calling task to be preempted.
.. COMMENT: COPYRIGHT (c) 1988-2008
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.

I/O Manager
###########
.. index:: device drivers
.. index:: IO Manager
Introduction
============
The input/output interface manager provides a
well-defined mechanism for accessing device drivers and a
structured methodology for organizing device drivers. The
directives provided by the I/O manager are:
- ``rtems.io_initialize`` - Initialize a device driver
- ``rtems.io_register_driver`` - Register a device driver
- ``rtems.io_unregister_driver`` - Unregister a device driver
- ``rtems.io_register_name`` - Register a device name
- ``rtems.io_lookup_name`` - Look up a device name
- ``rtems.io_open`` - Open a device
- ``rtems.io_close`` - Close a device
- ``rtems.io_read`` - Read from a device
- ``rtems.io_write`` - Write to a device
- ``rtems.io_control`` - Special device services
Background
==========
Device Driver Table
-------------------
.. index:: Device Driver Table
Each application utilizing the RTEMS I/O manager must specify the
address of a Device Driver Table in its Configuration Table. This table
contains the entry points of each device driver to be initialized by
RTEMS during initialization. Each device driver may contain the
following entry points:
- Initialization
- Open
- Close
- Read
- Write
- Control
If the device driver does not support a particular
entry point, then that entry in the Configuration Table should
be NULL. RTEMS will return ``RTEMS.SUCCESSFUL`` as the executive's
return code and zero (0) as the device driver's return code for these
device driver entry points.
Applications can register and unregister drivers with the RTEMS I/O
manager avoiding the need to have all drivers statically defined and
linked into this table.
The :file:`confdefs.h` entry ``CONFIGURE_MAXIMUM_DRIVERS`` configures
the number of driver slots available to the application.
Major and Minor Device Numbers
------------------------------
.. index:: major device number
.. index:: minor device number
Each call to the I/O manager must provide a device's
major and minor numbers as arguments. The major number is the
index of the requested driver's entry points in the Device
Driver Table, and is used to select a specific device driver.
The exact usage of the minor number is driver specific, but it is
commonly used to distinguish between a number of devices
controlled by the same driver.

.. index:: rtems_device_major_number
.. index:: rtems_device_minor_number

The data types ``rtems.device_major_number`` and
``rtems.device_minor_number`` are used to
manipulate device major and minor numbers, respectively.
Device Names
------------
.. index:: device names
The I/O Manager provides facilities to associate a
name with a particular device. Directives are provided to
register the name of a device and to look up the major/minor
number pair associated with a device name.
Device Driver Environment
-------------------------
Application developers, as well as device driver
developers, must be aware of the following regarding the RTEMS
I/O Manager:
- A device driver routine executes in the context of the
invoking task. Thus if the driver blocks, the invoking task
blocks.
- The device driver is free to change the modes of the
invoking task, although the driver should restore them to their
original values.
- Device drivers may be invoked from ISRs.
- Only local device drivers are accessible through the I/O
manager.
- A device driver routine may invoke all other RTEMS
directives, including I/O directives, on both local and global
objects.
Although the RTEMS I/O manager provides a framework
for device drivers, it makes no assumptions regarding the
construction or operation of a device driver.
Runtime Driver Registration
---------------------------
.. index:: runtime driver registration
Board support package and application developers can select whether a
device driver is statically entered into the default device table or
registered at runtime.
Dynamic registration helps applications where:
#. The BSP and kernel libraries are common to a range of applications
   for a specific target platform. An application may be built upon a
   common library with all drivers. The application selects and registers
   the drivers. Uniform driver name lookup protects the application.

#. The type and range of drivers may vary as the application probes a
   bus during initialization.

#. Support for hot swap bus systems such as CompactPCI.

#. Support for runtime loadable driver modules.
Device Driver Interface
-----------------------
.. index:: device driver interface
When an application invokes an I/O manager directive,
RTEMS determines which device driver entry point must be
invoked. The information passed by the application to RTEMS is
then passed to the correct device driver entry point. RTEMS
will invoke each device driver entry point assuming it is
compatible with the following prototype:
.. code:: c
function IO_Entry (
   Major          : in RTEMS.Device_Major_Number;
   Minor          : in RTEMS.Device_Minor_Number;
   Argument_Block : in RTEMS.Address
) return RTEMS.Status_Code;
The format and contents of the parameter block are
device driver and entry point dependent.
It is recommended that a device driver avoid
generating error codes which conflict with those used by
application components. A common technique used to generate
driver specific error codes is to make the most significant part
of the status indicate a driver specific code.
Device Driver Initialization
----------------------------
RTEMS automatically initializes all device drivers
when multitasking is initiated via the ``rtems.initialize_executive``
directive. RTEMS initializes the device drivers by invoking
each device driver initialization entry point with the following
parameters:
major
    the major device number for this device driver.

minor
    zero.

argument_block
    will point to the Configuration Table.
The returned status will be ignored by RTEMS. If the driver
cannot successfully initialize the device, then it should invoke
the fatal_error_occurred directive.
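
A minimal initialization entry following this convention might look as
follows. This is a hypothetical sketch, not part of RTEMS: the procedure
name and body are invented for illustration, and a real driver would set
up its hardware here.

.. code:: ada

    --  Hypothetical driver initialization entry point (sketch).
    procedure Console_Initialize (
       Major    : in RTEMS.Device_Major_Number;
       Minor    : in RTEMS.Device_Minor_Number;
       Argument : in RTEMS.Address;
       Result   : out RTEMS.Status_Codes
    ) is
    begin
       --  Initialize hardware and driver data structures here.
       --  On an unrecoverable error a driver would typically
       --  invoke the fatal_error_occurred directive instead of
       --  returning a status code.
       Result := RTEMS.SUCCESSFUL;
    end Console_Initialize;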
Operations
==========
Register and Lookup Name
------------------------
The ``rtems.io_register_name`` directive associates a name with the
specified device (i.e. major/minor number pair). Device names
are typically registered as part of the device driver
initialization sequence. The ``rtems.io_lookup_name``
directive is used to
determine the major/minor number pair associated with the
specified device name. The use of these directives frees the
application from being dependent on the arbitrary assignment of
major numbers in a particular application. No device naming
conventions are dictated by RTEMS.
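
The pairing of these two directives can be sketched as follows. The
device name ``/dev/tty0`` and the major/minor values are illustrative
only, and the procedure names are assumed to match the calling
sequences shown later in this chapter.

.. code:: ada

    Major  : RTEMS.Device_Major_Number := 3;  --  illustrative value
    Info   : RTEMS.Driver_Name_t_Pointer;
    Result : RTEMS.Status_Codes;

    --  During driver initialization: associate a name with the
    --  major/minor number pair.
    RTEMS.IO_Register_Name ("/dev/tty0", Major, 0, Result);

    --  Later, an application task recovers the pair by name
    --  without depending on the major number assignment.
    RTEMS.IO_Lookup_Name ("/dev/tty0", Info, Result);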
Accessing a Device Driver
--------------------------
The I/O manager provides directives which enable the
application program to utilize device drivers in a standard
manner. There is a direct correlation between the RTEMS I/O
manager directives ``rtems.io_initialize``, ``rtems.io_open``,
``rtems.io_close``, ``rtems.io_read``, ``rtems.io_write``, and
``rtems.io_control`` and the underlying device driver entry points.
Directives
==========
This section details the I/O manager's directives. A
subsection is dedicated to each of this manager's directives and
describes the calling sequence, related constants, usage, and
status codes.
IO_REGISTER_DRIVER - Register a device driver
---------------------------------------------
.. index:: register a device driver
**CALLING SEQUENCE:**
.. code:: c
No Ada implementation.
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - successfully registered
``RTEMS.INVALID_ADDRESS`` - invalid registered major pointer
``RTEMS.INVALID_ADDRESS`` - invalid driver table
``RTEMS.INVALID_NUMBER`` - invalid major device number
``RTEMS.TOO_MANY`` - no available major device table slot
``RTEMS.RESOURCE_IN_USE`` - major device number entry in use
**DESCRIPTION:**
This directive attempts to add a new device driver to the Device Driver
Table. The user can specify a specific major device number via the
directive's ``major`` parameter, or let the registration routine find
the next available major device number by specifying a major number of
``0``. The selected major device number is returned via the
``registered_major`` directive parameter. The directive automatically
allocates major device numbers from the highest value down.
This directive automatically invokes the IO_INITIALIZE directive if
the driver address table has an initialization and open entry.
The directive returns RTEMS.TOO_MANY if Device Driver Table is
full, and RTEMS.RESOURCE_IN_USE if a specific major device
number is requested and it is already in use.
**NOTES:**
The Device Driver Table size is specified in the Configuration Table
configuration. This needs to be set to the maximum size the application
requires.
IO_UNREGISTER_DRIVER - Unregister a device driver
-------------------------------------------------
.. index:: unregister a device driver
**CALLING SEQUENCE:**
.. code:: c
No Ada implementation.
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - successfully registered
``RTEMS.INVALID_NUMBER`` - invalid major device number
**DESCRIPTION:**
This directive removes a device driver from the Device Driver Table.
**NOTES:**
Currently no specific checks are made and the driver is not closed.
IO_INITIALIZE - Initialize a device driver
------------------------------------------
.. index:: initialize a device driver
**CALLING SEQUENCE:**
.. code:: c
procedure IO_Initialize (
Major : in RTEMS.Device_Major_Number;
Minor : in RTEMS.Device_Minor_Number;
Argument : in RTEMS.Address;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - successfully initialized
``RTEMS.INVALID_NUMBER`` - invalid major device number
**DESCRIPTION:**
This directive calls the device driver initialization
routine specified in the Device Driver Table for this major
number. This directive is automatically invoked for each device
driver when multitasking is initiated via the
initialize_executive directive.
A device driver initialization module is responsible
for initializing all hardware and data structures associated
with a device. If necessary, it can allocate memory to be used
during other operations.
**NOTES:**
This directive may or may not cause the calling task
to be preempted. This is dependent on the device driver being
initialized.
IO_REGISTER_NAME - Register a device
------------------------------------
.. index:: register device
**CALLING SEQUENCE:**
.. code:: c
procedure IO_Register_Name (
Name : in String;
Major : in RTEMS.Device_Major_Number;
Minor : in RTEMS.Device_Minor_Number;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - successfully initialized
``RTEMS.TOO_MANY`` - too many devices registered
**DESCRIPTION:**
This directive associates name with the specified
major/minor number pair.
**NOTES:**
This directive will not cause the calling task to be
preempted.
IO_LOOKUP_NAME - Lookup a device
--------------------------------
.. index:: lookup device major and minor number
**CALLING SEQUENCE:**
.. code:: c
procedure IO_Lookup_Name (
Name : in String;
Device_Info : out RTEMS.Driver_Name_t_Pointer;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - successfully initialized
``RTEMS.UNSATISFIED`` - name not registered
**DESCRIPTION:**
This directive returns the major/minor number pair
associated with the given device name in ``device_info``.
**NOTES:**
This directive will not cause the calling task to be
preempted.
IO_OPEN - Open a device
-----------------------
.. index:: open a device
**CALLING SEQUENCE:**
.. code:: c
procedure IO_Open (
Major : in RTEMS.Device_Major_Number;
Minor : in RTEMS.Device_Minor_Number;
Argument : in RTEMS.Address;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - successfully initialized
``RTEMS.INVALID_NUMBER`` - invalid major device number
**DESCRIPTION:**
This directive calls the device driver open routine
specified in the Device Driver Table for this major number. The
open entry point is commonly used by device drivers to provide
exclusive access to a device.
**NOTES:**
This directive may or may not cause the calling task
to be preempted. This is dependent on the device driver being
invoked.
IO_CLOSE - Close a device
-------------------------
.. index:: close a device
**CALLING SEQUENCE:**
.. code:: c
procedure IO_Close (
Major : in RTEMS.Device_Major_Number;
Minor : in RTEMS.Device_Minor_Number;
Argument : in RTEMS.Address;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - successfully initialized
``RTEMS.INVALID_NUMBER`` - invalid major device number
**DESCRIPTION:**
This directive calls the device driver close routine
specified in the Device Driver Table for this major number. The
close entry point is commonly used by device drivers to
relinquish exclusive access to a device.
**NOTES:**
This directive may or may not cause the calling task
to be preempted. This is dependent on the device driver being
invoked.
IO_READ - Read from a device
----------------------------
.. index:: read from a device
**CALLING SEQUENCE:**
.. code:: c
procedure IO_Read (
Major : in RTEMS.Device_Major_Number;
Minor : in RTEMS.Device_Minor_Number;
Argument : in RTEMS.Address;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - successfully initialized
``RTEMS.INVALID_NUMBER`` - invalid major device number
**DESCRIPTION:**
This directive calls the device driver read routine
specified in the Device Driver Table for this major number.
Read operations typically require a buffer address as part of
the argument parameter block. The contents of this buffer will
be replaced with data from the device.
**NOTES:**
This directive may or may not cause the calling task
to be preempted. This is dependent on the device driver being
invoked.
IO_WRITE - Write to a device
----------------------------
.. index:: write to a device
**CALLING SEQUENCE:**
.. code:: c
procedure IO_Write (
Major : in RTEMS.Device_Major_Number;
Minor : in RTEMS.Device_Minor_Number;
Argument : in RTEMS.Address;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - successfully initialized
``RTEMS.INVALID_NUMBER`` - invalid major device number
**DESCRIPTION:**
This directive calls the device driver write routine
specified in the Device Driver Table for this major number.
Write operations typically require a buffer address as part of
the argument parameter block. The contents of this buffer will
be sent to the device.
**NOTES:**
This directive may or may not cause the calling task
to be preempted. This is dependent on the device driver being
invoked.
IO_CONTROL - Special device services
------------------------------------
.. index:: special device services
.. index:: IO Control
**CALLING SEQUENCE:**
.. code:: c
procedure IO_Control (
Major : in RTEMS.Device_Major_Number;
Minor : in RTEMS.Device_Minor_Number;
Argument : in RTEMS.Address;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - successfully initialized
``RTEMS.INVALID_NUMBER`` - invalid major device number
**DESCRIPTION:**
This directive calls the device driver I/O control
routine specified in the Device Driver Table for this major
number. The exact functionality of the driver entry called by
this directive is driver dependent. It should not be assumed
that the control entries of two device drivers are compatible.
For example, an RS-232 driver I/O control operation may change
the baud rate of a serial line, while an I/O control operation
for a floppy disk driver may cause a seek operation.
**NOTES:**
This directive may or may not cause the calling task
to be preempted. This is dependent on the device driver being
invoked.
.. COMMENT: COPYRIGHT (c) 1988-2002.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.

Key Concepts
############
Introduction
============
The facilities provided by RTEMS are built upon a
foundation of very powerful concepts. These concepts must be
understood before the application developer can efficiently
utilize RTEMS. The purpose of this chapter is to familiarize
one with these concepts.
Objects
=======
.. index:: objects
RTEMS provides directives which can be used to
dynamically create, delete, and manipulate a set of predefined
object types. These types include tasks, message queues,
semaphores, memory regions, memory partitions, timers, ports,
and rate monotonic periods. The object-oriented nature of RTEMS
encourages the creation of modular applications built upon
re-usable "building block" routines.
All objects are created on the local node as required
by the application and have an RTEMS assigned ID. All objects
have a user-assigned name. Although a relationship exists
between an object's name and its RTEMS assigned ID, the name and
ID are not identical. Object names are completely arbitrary and
selected by the user as a meaningful "tag" which may commonly
reflect the object's use in the application. Conversely, object
IDs are designed to facilitate efficient object manipulation by
the executive.
Object Names
------------
.. index:: object name
.. index:: rtems_object_name
An object name is an unsigned thirty-two bit entity
associated with the object by the user. The data type
``rtems.name`` is used to store object names.

.. index:: rtems_build_name
Although not required by RTEMS, object names are often
composed of four ASCII characters which help identify that object.
For example, a task which causes a light to blink might be
called "LITE". The ``rtems.build_name`` routine
is provided to build an object name from four ASCII characters.
The following example illustrates this:
.. code:: c
My_Name : RTEMS.Name;
My_Name = RTEMS.Build_Name( 'L', 'I', 'T', 'E' );
However, it is not required that the application use ASCII
characters to build object names. For example, if an
application requires one-hundred tasks, it would be difficult to
assign meaningful ASCII names to each task. A more convenient
approach would be to name them the binary values one through
one-hundred, respectively.
Object IDs
----------
.. index:: object ID
.. index:: object ID composition
.. index:: rtems_id
An object ID is a unique unsigned integer value which uniquely identifies
an object instance. Object IDs are passed as arguments to many directives
in RTEMS and RTEMS translates the ID to an internal object pointer. The
efficient manipulation of object IDs is critical to the performance
of RTEMS services. Because of this, there are two object Id formats
defined. Each target architecture specifies which format it will use.
There is a thirty-two bit format which is used for most of the supported
architectures and supports multiprocessor configurations. There is also
a simpler sixteen bit format which is appropriate for smaller target
architectures and does not support multiprocessor configurations.
Thirty-Two Bit Object ID Format
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The thirty-two bit format for an object ID is composed of four parts: API,
object class, node, and index. The data type ``rtems.id``
is used to store object IDs.
.. code:: c
31 27 26 24 23 16 15 0
+---------+-------+--------------+-------------------------------+
| | | | |
| Class | API | Node | Index |
| | | | |
+---------+-------+--------------+-------------------------------+
The most significant five bits are the object class. The next
three bits indicate the API to which the object class belongs.
The next eight bits (16-23) are the number of the node on which
this object was created. The node number is always one (1) in a single
processor system. The least significant sixteen bits form an
identifier within a particular object type. This identifier,
called the object index, ranges in value from 1 to the maximum
number of objects configured for this object type.
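
As a worked illustration of this layout (not an RTEMS directive), the
four fields can be combined with shift operations; the field values
below are arbitrary:

.. code:: ada

    with Interfaces; use Interfaces;

    procedure Build_Id_Example is
       Class : constant Unsigned_32 := 2;  --  bits 27-31
       API   : constant Unsigned_32 := 1;  --  bits 24-26
       Node  : constant Unsigned_32 := 1;  --  bits 16-23
       Index : constant Unsigned_32 := 5;  --  bits 0-15
       Id    : Unsigned_32;
    begin
       Id := Shift_Left (Class, 27) or Shift_Left (API, 24)
             or Shift_Left (Node, 16) or Index;
       --  Id = 16#1101_0005# for these example values.
    end Build_Id_Example;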
Sixteen Bit Object ID Format
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The sixteen bit format for an object ID is composed of three parts: API,
object class, and index. The data type ``rtems.id``
is used to store object IDs.
.. code:: c
15 11 10 8 7 0
+---------+-------+--------------+
| | | |
| Class | API | Index |
| | | |
+---------+-------+--------------+
The sixteen-bit format is designed to be as similar as possible to the
thirty-two bit format. The differences are limited to the elimination
of the node field and reduction of the index field from sixteen bits
to eight bits. Thus the sixteen bit format only supports up to 255 object
instances per API/Class combination and single processor systems.
As this format is typically utilized by sixteen-bit processors with
limited address space, this is more than enough object instances.
Object ID Description
---------------------
The components of an object ID make it possible
to quickly locate any object in even the most complicated
multiprocessor system. Object IDs are associated with an
object by RTEMS when the object is created and the corresponding
ID is returned by the appropriate object create directive. The
object ID is required as input to all directives involving
objects, except those which create an object or obtain the ID of
an object.
The object identification directives can be used to
dynamically obtain a particular object's ID given its name.
This mapping is accomplished by searching the name table
associated with this object type. If the name is non-unique,
then the ID associated with the first occurrence of the name
will be returned to the application. Since object IDs are
returned when the object is created, the object identification
directives are not necessary in a properly designed single
processor application.
In addition, services are provided to portably examine the
subcomponents of an RTEMS ID. These services are
described in detail later in this manual but are prototyped
as follows:

.. index:: obtaining class from object ID
.. index:: obtaining node from object ID
.. index:: obtaining index from object ID
.. index:: get class from object ID
.. index:: get node from object ID
.. index:: get index from object ID
.. index:: rtems_object_id_get_api
.. index:: rtems_object_id_get_class
.. index:: rtems_object_id_get_node
.. index:: rtems_object_id_get_index
.. code:: c
uint32_t rtems_object_id_get_api( rtems_id );
uint32_t rtems_object_id_get_class( rtems_id );
uint32_t rtems_object_id_get_node( rtems_id );
uint32_t rtems_object_id_get_index( rtems_id );
An object control block is a data structure defined
by RTEMS which contains the information necessary to manage a
particular object type. For efficiency reasons, the format of
each object type's control block is different. However, many of
the fields are similar in function. The number of each type of
control block is application dependent and determined by the
values specified in the user's Configuration Table. An object
control block is allocated at object create time and freed when
the object is deleted. With the exception of user extension
routines, object control blocks are not directly manipulated by
user applications.
Communication and Synchronization
=================================
.. index:: communication and synchronization
In real-time multitasking applications, the ability
for cooperating execution threads to communicate and synchronize
with each other is imperative. A real-time executive should
provide an application with the following capabilities:
- Data transfer between cooperating tasks
- Data transfer between tasks and ISRs
- Synchronization of cooperating tasks
- Synchronization of tasks and ISRs
Most RTEMS managers can be used to provide some form
of communication and/or synchronization. However, managers
dedicated specifically to communication and synchronization
provide well established mechanisms which directly map to the
application's varying needs. This level of flexibility allows
the application designer to match the features of a particular
manager with the complexity of communication and synchronization
required. The following managers were specifically designed for
communication and synchronization:
- Semaphore
- Message Queue
- Event
- Signal
The semaphore manager supports mutual exclusion
involving the synchronization of access to one or more shared
user resources. Binary semaphores may utilize the optional
priority inheritance algorithm to avoid the problem of priority
inversion. The message manager supports both communication and
synchronization, while the event manager primarily provides a
high performance synchronization mechanism. The signal manager
supports only asynchronous communication and is typically used
for exception handling.
Time
====
.. index:: time
The development of responsive real-time applications
requires an understanding of how RTEMS maintains and supports
time-related operations. The basic unit of time in RTEMS is
known as a tick. The frequency of clock ticks is completely
application dependent and determines the granularity and
accuracy of all interval and calendar time operations.

.. index:: rtems_interval
By tracking time in units of ticks, RTEMS is capable
of supporting interval timing functions such as task delays,
timeouts, timeslicing, the delayed execution of timer service
routines, and the rate monotonic scheduling of tasks. An
interval is defined as a number of ticks relative to the current
time. For example, when a task delays for an interval of ten
ticks, it is implied that the task will not execute until ten
clock ticks have occurred.
All intervals are specified using the data type ``rtems.interval``.
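
For example, the ten-tick delay described above could be requested as
follows; the binding name ``RTEMS.Tasks.Wake_After`` is an assumption
here and may differ between binding versions.

.. code:: ada

    Result : RTEMS.Status_Codes;

    --  Block the calling task for (approximately) ten clock ticks.
    RTEMS.Tasks.Wake_After (10, Result);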
A characteristic of interval timing is that the
actual interval period may be a fraction of a tick less than the
interval requested. This occurs because the time at which the
delay timer is set up occurs at some time between two clock
ticks. Therefore, the first countdown tick occurs in less than
the complete time interval for a tick. This can be a problem if
the clock granularity is large.
The rate monotonic scheduling algorithm is a hard
real-time scheduling methodology. This methodology provides
rules which allow one to guarantee that a set of independent
periodic tasks will always meet their deadlines even under
transient overload conditions. The rate monotonic manager
provides directives built upon the Clock Manager's interval
timer support routines.
Interval timing is not sufficient for the many
applications which require that time be kept in wall time or
true calendar form. Consequently, RTEMS maintains the current
date and time. This allows selected time operations to be
scheduled at an actual calendar date and time. For example, a
task could request to delay until midnight on New Year's Eve
before lowering the ball at Times Square.
The data type ``rtems.time_of_day`` is used to specify
calendar time in RTEMS services.
See `Time and Date Data Structures`_.

.. index:: rtems_time_of_day
Obviously, the directives which use intervals or wall
time cannot operate without some external mechanism which
provides a periodic clock tick. This clock tick is typically
provided by a real time clock or counter/timer device.
Memory Management
=================
.. index:: memory management
RTEMS memory management facilities can be grouped
into two classes: dynamic memory allocation and address
translation. Dynamic memory allocation is required by
applications whose memory requirements vary through the
application's course of execution. Address translation is
needed by applications which share memory with another CPU or an
intelligent Input/Output processor. The following RTEMS
managers provide facilities to manage memory:
- Region
- Partition
- Dual Ported Memory
RTEMS memory management features allow an application
to create simple memory pools of fixed size buffers and/or more
complex memory pools of variable size segments. The partition
manager provides directives to manage and maintain pools of
fixed size entities such as resource control blocks.
Alternatively, the region manager provides a more general
purpose memory allocation scheme that supports variable size
blocks of memory which are dynamically obtained and freed by the
application. The dual-ported memory manager provides executive
support for address translation between internal and external
dual-ported RAM address space.
.. COMMENT: COPYRIGHT (c) 1988-2002.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.

Linker Sets
###########
.. index:: linkersets
Introduction
============
Linker sets are a flexible means to create arrays of items out of a set of
object files at link-time. For example, it is possible to define an item *I*
of type *T* in object file *A* and an item *J* of type *T*
in object file *B* to be a member of a linker set *S*. The linker
will then collect these two items *I* and *J* and place them in
consecutive memory locations, so that they can be accessed like a normal array
defined in one object file. The size of a linker set is defined by its begin
and end markers. A linker set may be empty. It should only contain items of
the same type.
The following macros are provided to create, populate and use linker sets.
- ``RTEMS_LINKER_SET_BEGIN`` - Designator of the linker set begin marker
- ``RTEMS_LINKER_SET_END`` - Designator of the linker set end marker
- ``RTEMS_LINKER_SET_SIZE`` - The linker set size in characters
- ``RTEMS_LINKER_ROSET_DECLARE`` - Declares a read-only linker set
- ``RTEMS_LINKER_ROSET`` - Defines a read-only linker set
- ``RTEMS_LINKER_ROSET_ITEM_DECLARE`` - Declares a read-only linker set item
- ``RTEMS_LINKER_ROSET_ITEM_REFERENCE`` - References a read-only linker set item
- ``RTEMS_LINKER_ROSET_ITEM`` - Defines a read-only linker set item
- ``RTEMS_LINKER_ROSET_ITEM_ORDERED`` - Defines an ordered read-only linker set item
- ``RTEMS_LINKER_RWSET_DECLARE`` - Declares a read-write linker set
- ``RTEMS_LINKER_RWSET`` - Defines a read-write linker set
- ``RTEMS_LINKER_RWSET_ITEM_DECLARE`` - Declares a read-write linker set item
- ``RTEMS_LINKER_RWSET_ITEM_REFERENCE`` - References a read-write linker set item
- ``RTEMS_LINKER_RWSET_ITEM`` - Defines a read-write linker set item
- ``RTEMS_LINKER_RWSET_ITEM_ORDERED`` - Defines an ordered read-write linker set item
Background
==========
Linker sets are used not only in RTEMS, but also for example in Linux, in
FreeBSD, for the GNU C constructor extension and for global C++ constructors.
They provide a space efficient and flexible means to initialize modules. A
linker set consists of
- dedicated input sections for the linker (e.g. ``.ctors`` and ``.ctors.*`` in the case of global constructors),
- a begin marker (e.g. provided by ``crtbegin.o``), and
- an end marker (e.g. provided by ``crtend.o``).
A module may place a certain data item into the dedicated input section. The
linker will collect all such data items in this section and create begin and
end markers. The initialization code can then use the begin and end markers to
find all the collected data items (e.g. pointers to initialization functions).
In the linker command file of the GNU linker we need the following output
section descriptions.
.. code:: c
/* To be placed in a read-only memory region */
.rtemsroset : {
    KEEP (*(SORT(.rtemsroset.*)))
}

/* To be placed in a read-write memory region */
.rtemsrwset : {
    KEEP (*(SORT(.rtemsrwset.*)))
}
The ``KEEP()`` ensures that a garbage collection by the linker will not
discard the content of this section. This would normally be the case since the
linker set items are not referenced directly. The ``SORT()`` directive
sorts the input sections lexicographically. Please note the lexicographical
order of the ``.begin``, ``.content`` and ``.end`` section name parts
in the RTEMS linker set macros, which ensures that the positions of the begin
and end markers are correct.
So, what is the benefit of using linker sets to initialize modules? They can
be used to initialize and include only those RTEMS managers and other
components which are used by the application. For example, in case an
application uses
message queues, it must call ``rtems_message_queue_create()``. In the
module implementing this function, we can place a linker set item and register
the message queue handler constructor. Otherwise, in case the application does
not use message queues, there will be no reference to the
``rtems_message_queue_create()`` function and the constructor is not
registered, thus nothing of the message queue handler will be in the final
executable.
For an example see test program :file:`sptests/splinkersets01`.
Directives
==========
RTEMS_LINKER_SET_BEGIN - Designator of the linker set begin marker
------------------------------------------------------------------
**CALLING SEQUENCE:**
.. index:: RTEMS_LINKER_SET_BEGIN
.. code:: c

    volatile type *begin = RTEMS_LINKER_SET_BEGIN( set );
**DESCRIPTION:**
This macro generates the designator of the begin marker of the linker set
identified by ``set``. The item at the begin marker address is the first
member of the linker set if it exists, e.g. the linker set is not empty. A
linker set is empty, if and only if the begin and end markers have the same
address.
The ``set`` parameter itself must be a valid C designator on which no macro
expansion is performed. It uniquely identifies the linker set.
RTEMS_LINKER_SET_END - Designator of the linker set end marker
--------------------------------------------------------------
**CALLING SEQUENCE:**
.. index:: RTEMS_LINKER_SET_END
.. code:: c

    volatile type *end = RTEMS_LINKER_SET_END( set );
**DESCRIPTION:**
This macro generates the designator of the end marker of the linker set
identified by ``set``. The item at the end marker address is not a member
of the linker set. The ``set`` parameter itself must be a valid C designator on which no macro
expansion is performed. It uniquely identifies the linker set.
RTEMS_LINKER_SET_SIZE - The linker set size in characters
---------------------------------------------------------
**CALLING SEQUENCE:**
.. index:: RTEMS_LINKER_SET_SIZE
.. code:: c

    size_t size = RTEMS_LINKER_SET_SIZE( set );
**DESCRIPTION:**
This macro returns the size of the linker set identified by ``set`` in
characters. The ``set`` parameter itself must be a valid C designator on which no macro
expansion is performed. It uniquely identifies the linker set.
RTEMS_LINKER_ROSET_DECLARE - Declares a read-only linker set
------------------------------------------------------------
**CALLING SEQUENCE:**
.. index:: RTEMS_LINKER_ROSET_DECLARE
.. code:: c
RTEMS_LINKER_ROSET_DECLARE( set, type );
**DESCRIPTION:**
This macro generates declarations for the begin and end markers of a read-only
linker set identified by ``set``. The ``set`` parameter itself must be a valid C designator on which no macro
expansion is performed. It uniquely identifies the linker set. The ``type`` parameter defines the type of the linker set items. The type
must be the same for all macro invocations of a particular linker set.
RTEMS_LINKER_ROSET - Defines a read-only linker set
---------------------------------------------------
**CALLING SEQUENCE:**
.. index:: RTEMS_LINKER_ROSET
.. code:: c
RTEMS_LINKER_ROSET( set, type );
**DESCRIPTION:**
This macro generates definitions for the begin and end markers of a read-only
linker set identified by ``set``. The ``set`` parameter itself must be a valid C designator on which no macro
expansion is performed. It uniquely identifies the linker set. The ``type`` parameter defines the type of the linker set items. The type
must be the same for all macro invocations of a particular linker set.
RTEMS_LINKER_ROSET_ITEM_DECLARE - Declares a read-only linker set item
----------------------------------------------------------------------
**CALLING SEQUENCE:**
.. index:: RTEMS_LINKER_ROSET_ITEM_DECLARE
.. code:: c
RTEMS_LINKER_ROSET_ITEM_DECLARE( set, type, item );
**DESCRIPTION:**
This macro generates a declaration of an item contained in the read-only linker
set identified by ``set``. The ``set`` parameter itself must be a valid C designator on which no macro
expansion is performed. It uniquely identifies the linker set. The ``type`` parameter defines the type of the linker set items. The type
must be the same for all macro invocations of a particular linker set. The ``item`` parameter itself must be a valid C designator on which no macro
expansion is performed. It uniquely identifies an item in the linker set.
RTEMS_LINKER_ROSET_ITEM_REFERENCE - References a read-only linker set item
--------------------------------------------------------------------------
**CALLING SEQUENCE:**
.. index:: RTEMS_LINKER_ROSET_ITEM_REFERENCE
.. code:: c
RTEMS_LINKER_ROSET_ITEM_REFERENCE( set, type, item );
**DESCRIPTION:**
This macro generates a reference to an item contained in the read-only linker set
identified by ``set``. The ``set`` parameter itself must be a valid C designator on which no macro
expansion is performed. It uniquely identifies the linker set. The ``type`` parameter defines the type of the linker set items. The type
must be the same for all macro invocations of a particular linker set. The ``item`` parameter itself must be a valid C designator on which no macro
expansion is performed. It uniquely identifies an item in the linker set.
RTEMS_LINKER_ROSET_ITEM - Defines a read-only linker set item
-------------------------------------------------------------
**CALLING SEQUENCE:**
.. index:: RTEMS_LINKER_ROSET_ITEM
.. code:: c
RTEMS_LINKER_ROSET_ITEM( set, type, item );
**DESCRIPTION:**
This macro generates a definition of an item contained in the read-only linker set
identified by ``set``. The ``set`` parameter itself must be a valid C designator on which no macro
expansion is performed. It uniquely identifies the linker set. The ``type`` parameter defines the type of the linker set items. The type
must be the same for all macro invocations of a particular linker set. The ``item`` parameter itself must be a valid C designator on which no macro
expansion is performed. It uniquely identifies an item in the linker set.
RTEMS_LINKER_ROSET_ITEM_ORDERED - Defines an ordered read-only linker set item
------------------------------------------------------------------------------
**CALLING SEQUENCE:**
.. index:: RTEMS_LINKER_ROSET_ITEM_ORDERED
.. code:: c
RTEMS_LINKER_ROSET_ITEM_ORDERED( set, type, item, order );
**DESCRIPTION:**
This macro generates a definition of an ordered item contained in the read-only
linker set identified by ``set``. The ``set`` parameter itself must be a valid C designator on which no macro
expansion is performed. It uniquely identifies the linker set. The ``type`` parameter defines the type of the linker set items. The type
must be the same for all macro invocations of a particular linker set.
The ``item`` parameter itself must be a valid C designator on which no macro
expansion is performed. It uniquely identifies an item in the linker set. The ``order`` parameter must be a valid linker input section name part on
which macro expansion is performed. The items are lexicographically ordered
according to the ``order`` parameter within a linker set. Ordered items are
placed before unordered items in the linker set.
**NOTES:**
To be resilient to typos in the order parameter, it is recommended to use the
following construct in macros defining items for a particular linker set (see
enum in ``XYZ_ITEM()``).
.. code:: c

    #include <rtems/linkersets.h>

    typedef struct {
      int foo;
    } xyz_item;

    /* The XYZ-order defines */
    #define XYZ_ORDER_FIRST 0x00001000
    #define XYZ_ORDER_AND_SO_ON 0x00002000

    /* Defines an ordered XYZ-item */
    #define XYZ_ITEM( item, order ) \
      enum { xyz_##item = order - order }; \
      RTEMS_LINKER_ROSET_ITEM_ORDERED( \
        xyz, const xyz_item *, item, order \
      ) = { &item }

    /* Example item */
    static const xyz_item some_item = { 123 };
    XYZ_ITEM( some_item, XYZ_ORDER_FIRST );
RTEMS_LINKER_RWSET_DECLARE - Declares a read-write linker set
-------------------------------------------------------------
**CALLING SEQUENCE:**
.. index:: RTEMS_LINKER_RWSET_DECLARE
.. code:: c
RTEMS_LINKER_RWSET_DECLARE( set, type );
**DESCRIPTION:**
This macro generates declarations for the begin and end markers of a read-write
linker set identified by ``set``. The ``set`` parameter itself must be a valid C designator on which no macro
expansion is performed. It uniquely identifies the linker set. The ``type`` parameter defines the type of the linker set items. The type
must be the same for all macro invocations of a particular linker set.
RTEMS_LINKER_RWSET - Defines a read-write linker set
----------------------------------------------------
**CALLING SEQUENCE:**
.. index:: RTEMS_LINKER_RWSET
.. code:: c
RTEMS_LINKER_RWSET( set, type );
**DESCRIPTION:**
This macro generates definitions for the begin and end markers of a read-write
linker set identified by ``set``. The ``set`` parameter itself must be a valid C designator on which no macro
expansion is performed. It uniquely identifies the linker set. The ``type`` parameter defines the type of the linker set items. The type
must be the same for all macro invocations of a particular linker set.
RTEMS_LINKER_RWSET_ITEM_DECLARE - Declares a read-write linker set item
-----------------------------------------------------------------------
**CALLING SEQUENCE:**
.. index:: RTEMS_LINKER_RWSET_ITEM_DECLARE
.. code:: c
RTEMS_LINKER_RWSET_ITEM_DECLARE( set, type, item );
**DESCRIPTION:**
This macro generates a declaration of an item contained in the read-write linker
set identified by ``set``. The ``set`` parameter itself must be a valid C designator on which no macro
expansion is performed. It uniquely identifies the linker set. The ``type`` parameter defines the type of the linker set items. The type
must be the same for all macro invocations of a particular linker set. The ``item`` parameter itself must be a valid C designator on which no macro
expansion is performed. It uniquely identifies an item in the linker set.
RTEMS_LINKER_RWSET_ITEM_REFERENCE - References a read-write linker set item
---------------------------------------------------------------------------
**CALLING SEQUENCE:**
.. index:: RTEMS_LINKER_RWSET_ITEM_REFERENCE
.. code:: c
RTEMS_LINKER_RWSET_ITEM_REFERENCE( set, type, item );
**DESCRIPTION:**
This macro generates a reference to an item contained in the read-write linker set
identified by ``set``. The ``set`` parameter itself must be a valid C designator on which no macro
expansion is performed. It uniquely identifies the linker set. The ``type`` parameter defines the type of the linker set items. The type
must be the same for all macro invocations of a particular linker set. The ``item`` parameter itself must be a valid C designator on which no macro
expansion is performed. It uniquely identifies an item in the linker set.
RTEMS_LINKER_RWSET_ITEM - Defines a read-write linker set item
--------------------------------------------------------------
**CALLING SEQUENCE:**
.. index:: RTEMS_LINKER_RWSET_ITEM
.. code:: c
RTEMS_LINKER_RWSET_ITEM( set, type, item );
**DESCRIPTION:**
This macro generates a definition of an item contained in the read-write linker set
identified by ``set``. The ``set`` parameter itself must be a valid C designator on which no macro
expansion is performed. It uniquely identifies the linker set. The ``type`` parameter defines the type of the linker set items. The type
must be the same for all macro invocations of a particular linker set. The ``item`` parameter itself must be a valid C designator on which no macro
expansion is performed. It uniquely identifies an item in the linker set.
RTEMS_LINKER_RWSET_ITEM_ORDERED - Defines an ordered read-write linker set item
-------------------------------------------------------------------------------
**CALLING SEQUENCE:**
.. index:: RTEMS_LINKER_RWSET_ITEM_ORDERED
.. code:: c
RTEMS_LINKER_RWSET_ITEM_ORDERED( set, type, item, order );
**DESCRIPTION:**
This macro generates a definition of an ordered item contained in the read-write
linker set identified by ``set``. The ``set`` parameter itself must be a valid C designator on which no macro
expansion is performed. It uniquely identifies the linker set. The ``type`` parameter defines the type of the linker set items. The type
must be the same for all macro invocations of a particular linker set.
The ``item`` parameter itself must be a valid C designator on which no macro
expansion is performed. It uniquely identifies an item in the linker set. The ``order`` parameter must be a valid linker input section name part on
which macro expansion is performed. The items are lexicographically ordered
according to the ``order`` parameter within a linker set. Ordered items are
placed before unordered items in the linker set.
**NOTES:**
To be resilient to typos in the order parameter, it is recommended to use the
following construct in macros defining items for a particular linker set (see
enum in ``XYZ_ITEM()``).
.. code:: c

    #include <rtems/linkersets.h>

    typedef struct {
      int foo;
    } xyz_item;

    /* The XYZ-order defines */
    #define XYZ_ORDER_FIRST 0x00001000
    #define XYZ_ORDER_AND_SO_ON 0x00002000

    /* Defines an ordered XYZ-item */
    #define XYZ_ITEM( item, order ) \
      enum { xyz_##item = order - order }; \
      RTEMS_LINKER_RWSET_ITEM_ORDERED( \
        xyz, const xyz_item *, item, order \
      ) = { &item }

    /* Example item */
    static const xyz_item some_item = { 123 };
    XYZ_ITEM( some_item, XYZ_ORDER_FIRST );
.. COMMENT: COPYRIGHT (c) 1989-2014.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.

Message Manager
###############
.. index:: messages
.. index:: message queues
Introduction
============
The message manager provides communication and
synchronization capabilities using RTEMS message queues. The
directives provided by the message manager are:
- ``rtems.message_queue_create`` - Create a queue
- ``rtems.message_queue_ident`` - Get ID of a queue
- ``rtems.message_queue_delete`` - Delete a queue
- ``rtems.message_queue_send`` - Put message at rear of a queue
- ``rtems.message_queue_urgent`` - Put message at front of a queue
- ``rtems.message_queue_broadcast`` - Broadcast N messages to a queue
- ``rtems.message_queue_receive`` - Receive message from a queue
- ``rtems.message_queue_get_number_pending`` - Get number of messages pending on a queue
- ``rtems.message_queue_flush`` - Flush all messages on a queue
Background
==========
Messages
--------
A message is a variable length buffer where
information can be stored to support communication. The length
of the message and the information stored in that message are
user-defined and can be actual data, pointer(s), or empty.
Message Queues
--------------
A message queue permits the passing of messages among
tasks and ISRs. Message queues can contain a variable number of
messages. Normally messages are sent to and received from the
queue in FIFO order using the ``rtems.message_queue_send``
directive. However, the ``rtems.message_queue_urgent``
directive can be used to place
messages at the head of a queue in LIFO order.
Synchronization can be accomplished when a task can
wait for a message to arrive at a queue. Also, a task may poll
a queue for the arrival of a message.
The maximum length of a message which can be sent is set
on a per message queue basis. In general, the message content must be copied
to or from an internal buffer of the message queue, or directly to a peer in
certain cases. This copy operation is performed with interrupts disabled, so
it is advisable to keep the messages as short as possible.
Building a Message Queue Attribute Set
--------------------------------------
.. index:: message queue attributes
In general, an attribute set is built by a bitwise OR
of the desired attribute components. The set of valid message
queue attributes is provided in the following table:
- ``RTEMS.FIFO`` - tasks wait by FIFO (default)
- ``RTEMS.PRIORITY`` - tasks wait by priority
- ``RTEMS.LOCAL`` - local message queue (default)
- ``RTEMS.GLOBAL`` - global message queue
An attribute listed as a default is not required to
appear in the attribute list, although it is a good programming
practice to specify default attributes. If all defaults are
desired, the attribute ``RTEMS.DEFAULT_ATTRIBUTES``
should be specified on this call.
This example demonstrates the attribute_set parameter
needed to create a local message queue with the task priority
waiting queue discipline. The attribute_set parameter to the
``rtems.message_queue_create`` directive could be either
``RTEMS.PRIORITY`` or ``RTEMS.LOCAL or RTEMS.PRIORITY``.
The attribute_set parameter can be set to ``RTEMS.PRIORITY``
because ``RTEMS.LOCAL`` is the default for all created
message queues. If a similar message queue were to be known globally, then the
attribute_set parameter would be ``RTEMS.GLOBAL or RTEMS.PRIORITY``.
Building a MESSAGE_QUEUE_RECEIVE Option Set
-------------------------------------------
In general, an option is built by a bitwise OR of the
desired option components. The set of valid options for the
``rtems.message_queue_receive`` directive is
listed in the following table:
- ``RTEMS.WAIT`` - task will wait for a message (default)
- ``RTEMS.NO_WAIT`` - task should not wait
An option listed as a default is not required to
appear in the option OR list, although it is a good programming
practice to specify default options. If all defaults are
desired, the option ``RTEMS.DEFAULT_OPTIONS`` should
be specified on this call.
This example demonstrates the option parameter needed
to poll for a message to arrive. The option parameter passed to
the ``rtems.message_queue_receive`` directive should
be ``RTEMS.NO_WAIT``.
Operations
==========
Creating a Message Queue
------------------------
The ``rtems.message_queue_create`` directive creates a message
queue with the user-defined name. The user specifies the
maximum message size and maximum number of messages which can be
placed in the message queue at one time. The user may select
FIFO or task priority as the method for placing waiting tasks in
the task wait queue. RTEMS allocates a Queue Control Block
(QCB) from the QCB free list to maintain the newly created queue
as well as memory for the message buffer pool associated with
this message queue. RTEMS also generates a message queue ID
which is returned to the calling task.
For GLOBAL message queues, the maximum message size
is effectively limited to the longest message which the MPCI is
capable of transmitting.
Obtaining Message Queue IDs
---------------------------
When a message queue is created, RTEMS generates a
unique message queue ID. The message queue ID may be obtained
by either of two methods. First, as the result of an invocation
of the ``rtems.message_queue_create`` directive, the
queue ID is stored in a user provided location. Second, the queue
ID may be obtained later using the ``rtems.message_queue_ident``
directive. The queue ID is used by other message manager
directives to access this message queue.
Receiving a Message
-------------------
The ``rtems.message_queue_receive`` directive attempts to
retrieve a message from the specified message queue. If at
least one message is in the queue, then the message is removed
from the queue, copied to the caller's message buffer, and
returned immediately along with the length of the message. When
messages are unavailable, one of the following situations
applies:
- By default, the calling task will wait forever for the
message to arrive.
- Specifying the ``RTEMS.NO_WAIT`` option forces an immediate return
with an error status code.
- Specifying a timeout limits the period the task will
wait before returning with an error status.
If the task waits for a message, then it is placed in
the message queue's task wait queue in either FIFO or task
priority order. All tasks waiting on a message queue are
returned an error code when the message queue is deleted.
Sending a Message
-----------------
Messages can be sent to a queue with the ``rtems.message_queue_send``
and ``rtems.message_queue_urgent`` directives. These
directives work identically when tasks are waiting to receive a
message. A task is removed from the task waiting queue,
unblocked, and the message is copied to the waiting task's
message buffer.
When no tasks are waiting at the queue,
``rtems.message_queue_send`` places the
message at the rear of the message queue, while
``rtems.message_queue_urgent`` places the message at the
front of the queue. The message is copied to a message buffer
from this message queue's buffer pool and then placed in the
message queue. Neither directive can successfully send a
message to a message queue which has a full queue of pending
messages.
Broadcasting a Message
----------------------
The ``rtems.message_queue_broadcast`` directive sends the same
message to every task waiting on the specified message queue as
an atomic operation. The message is copied to each waiting
task's message buffer and each task is unblocked. The number of
tasks which were unblocked is returned to the caller.
Deleting a Message Queue
------------------------
The ``rtems.message_queue_delete`` directive removes a message
queue from the system and frees its control block as well as the
memory associated with this message queue's message buffer pool.
A message queue can be deleted by any local task that knows the
message queue's ID. As a result of this directive, all tasks
blocked waiting to receive a message from the message queue will
be readied and returned a status code which indicates that the
message queue was deleted. Any subsequent references to the
message queue's name and ID are invalid. Any messages waiting
at the message queue are also deleted and deallocated.
Directives
==========
This section details the message manager's
directives. A subsection is dedicated to each of this manager's
directives and describes the calling sequence, related
constants, usage, and status codes.
MESSAGE_QUEUE_CREATE - Create a queue
-------------------------------------
.. index:: create a message queue
**CALLING SEQUENCE:**
.. code:: c

    procedure Message_Queue_Create (
       Name             : in  RTEMS.Name;
       Count            : in  RTEMS.Unsigned32;
       Max_Message_Size : in  RTEMS.Unsigned32;
       Attribute_Set    : in  RTEMS.Attribute;
       ID               : out RTEMS.ID;
       Result           : out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - queue created successfully
``RTEMS.INVALID_NAME`` - invalid queue name
``RTEMS.INVALID_ADDRESS`` - ``id`` is NULL
``RTEMS.INVALID_NUMBER`` - invalid message count
``RTEMS.INVALID_SIZE`` - invalid message size
``RTEMS.TOO_MANY`` - too many queues created
``RTEMS.UNSATISFIED`` - unable to allocate message buffers
``RTEMS.MP_NOT_CONFIGURED`` - multiprocessing not configured
``RTEMS.TOO_MANY`` - too many global objects
**DESCRIPTION:**
This directive creates a message queue which resides
on the local node with the user-defined name specified in name.
For control and maintenance of the queue, RTEMS allocates and
initializes a QCB. Memory is allocated from the RTEMS Workspace
for the specified count of messages, each of max_message_size
bytes in length. The RTEMS-assigned queue id, returned in id,
is used to access the message queue.
Specifying ``RTEMS.PRIORITY`` in attribute_set causes tasks
waiting for a message to be serviced according to task priority.
When ``RTEMS.FIFO`` is specified, waiting tasks are serviced
in First In-First Out order.
**NOTES:**
This directive will not cause the calling task to be
preempted.
The following message queue attribute constants are
defined by RTEMS:
- ``RTEMS.FIFO`` - tasks wait by FIFO (default)
- ``RTEMS.PRIORITY`` - tasks wait by priority
- ``RTEMS.LOCAL`` - local message queue (default)
- ``RTEMS.GLOBAL`` - global message queue
Message queues should not be made global unless
remote tasks must interact with the created message queue. This
is to avoid the system overhead incurred by the creation of a
global message queue. When a global message queue is created,
the message queue's name and id must be transmitted to every
node in the system for insertion in the local copy of the global
object table.
For GLOBAL message queues, the maximum message size
is effectively limited to the longest message which the MPCI is
capable of transmitting.
The total number of global objects, including message
queues, is limited by the maximum_global_objects field in the
configuration table.
MESSAGE_QUEUE_IDENT - Get ID of a queue
---------------------------------------
.. index:: get ID of a message queue
**CALLING SEQUENCE:**
.. code:: c

    procedure Message_Queue_Ident (
       Name   : in  RTEMS.Name;
       Node   : in  RTEMS.Unsigned32;
       ID     : out RTEMS.ID;
       Result : out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - queue identified successfully
``RTEMS.INVALID_ADDRESS`` - ``id`` is NULL
``RTEMS.INVALID_NAME`` - queue name not found
``RTEMS.INVALID_NODE`` - invalid node id
**DESCRIPTION:**
This directive obtains the queue id associated with
the queue name specified in name. If the queue name is not
unique, then the queue id will match one of the queues with that
name. However, this queue id is not guaranteed to correspond to
the desired queue. The queue id is used with other message
related directives to access the message queue.
**NOTES:**
This directive will not cause the running task to be
preempted.
If node is ``RTEMS.SEARCH_ALL_NODES``, all nodes are searched
with the local node being searched first. All other nodes are
searched with the lowest numbered node searched first.
If node is a valid node number which does not
represent the local node, then only the message queues exported
by the designated node are searched.
This directive does not generate activity on remote
nodes. It accesses only the local copy of the global object
table.
MESSAGE_QUEUE_DELETE - Delete a queue
-------------------------------------
.. index:: delete a message queue
**CALLING SEQUENCE:**
.. code:: c

    procedure Message_Queue_Delete (
       ID     : in  RTEMS.ID;
       Result : out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - queue deleted successfully
``RTEMS.INVALID_ID`` - invalid queue id
``RTEMS.ILLEGAL_ON_REMOTE_OBJECT`` - cannot delete remote queue
**DESCRIPTION:**
This directive deletes the message queue specified by
id. As a result of this directive, all tasks blocked waiting to
receive a message from this queue will be readied and returned a
status code which indicates that the message queue was deleted.
If no tasks are waiting, but the queue contains messages, then
RTEMS returns these message buffers back to the system message
buffer pool. The QCB for this queue as well as the memory for
the message buffers is reclaimed by RTEMS.
**NOTES:**
The calling task will be preempted if its preemption
mode is enabled and one or more local tasks with a higher
priority than the calling task are waiting on the deleted queue.
The calling task will NOT be preempted if the tasks that are
waiting are remote tasks.
The calling task does not have to be the task that
created the queue, although the task and queue must reside on
the same node.
When the queue is deleted, any messages in the queue
are returned to the free message buffer pool. Any information
stored in those messages is lost.
When a global message queue is deleted, the message
queue id must be transmitted to every node in the system for
deletion from the local copy of the global object table.
Proxies, used to represent remote tasks, are
reclaimed when the message queue is deleted.
MESSAGE_QUEUE_SEND - Put message at rear of a queue
---------------------------------------------------
.. index:: send message to a queue
**CALLING SEQUENCE:**
.. code:: c

    procedure Message_Queue_Send (
       ID     : in  RTEMS.ID;
       Buffer : in  RTEMS.Address;
       Size   : in  RTEMS.Unsigned32;
       Result : out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - message sent successfully
``RTEMS.INVALID_ID`` - invalid queue id
``RTEMS.INVALID_SIZE`` - invalid message size
``RTEMS.INVALID_ADDRESS`` - ``buffer`` is NULL
``RTEMS.UNSATISFIED`` - out of message buffers
``RTEMS.TOO_MANY`` - queues limit has been reached
**DESCRIPTION:**
This directive sends the message buffer of size bytes
in length to the queue specified by id. If a task is waiting at
the queue, then the message is copied to the waiting task's
buffer and the task is unblocked. If no tasks are waiting at the
queue, then the message is copied to a message buffer which is
obtained from this message queue's message buffer pool. The
message buffer is then placed at the rear of the queue.
**NOTES:**
The calling task will be preempted if it has
preemption enabled and a higher priority task is unblocked as
the result of this directive.
Sending a message to a global message queue which
does not reside on the local node will generate a request to the
remote node to post the message on the specified message queue.
If the task to be unblocked resides on a different
node from the message queue, then the message is forwarded to
the appropriate node, the waiting task is unblocked, and the
proxy used to represent the task is reclaimed.
MESSAGE_QUEUE_URGENT - Put message at front of a queue
------------------------------------------------------
.. index:: put message at front of queue
**CALLING SEQUENCE:**
.. code:: c

    procedure Message_Queue_Urgent (
       ID     : in  RTEMS.ID;
       Buffer : in  RTEMS.Address;
       Size   : in  RTEMS.Unsigned32;
       Result : out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - message sent successfully
``RTEMS.INVALID_ID`` - invalid queue id
``RTEMS.INVALID_SIZE`` - invalid message size
``RTEMS.INVALID_ADDRESS`` - ``buffer`` is NULL
``RTEMS.UNSATISFIED`` - out of message buffers
``RTEMS.TOO_MANY`` - queues limit has been reached
**DESCRIPTION:**
This directive sends the message buffer of size bytes
in length to the queue specified by id. If a task is waiting on
the queue, then the message is copied to the task's buffer and
the task is unblocked. If no tasks are waiting on the queue,
then the message is copied to a message buffer which is obtained
from this message queue's message buffer pool. The message
buffer is then placed at the front of the queue.
**NOTES:**
The calling task will be preempted if it has
preemption enabled and a higher priority task is unblocked as
the result of this directive.
Sending a message to a global message queue which
does not reside on the local node will generate a request
telling the remote node to post the message on the specified
message queue.
If the task to be unblocked resides on a different
node from the message queue, then the message is forwarded to
the appropriate node, the waiting task is unblocked, and the
proxy used to represent the task is reclaimed.
MESSAGE_QUEUE_BROADCAST - Broadcast N messages to a queue
---------------------------------------------------------
.. index:: broadcast message to a queue
**CALLING SEQUENCE:**
.. code:: c

    procedure Message_Queue_Broadcast (
       ID     : in  RTEMS.ID;
       Buffer : in  RTEMS.Address;
       Size   : in  RTEMS.Unsigned32;
       Count  : out RTEMS.Unsigned32;
       Result : out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - message broadcasted successfully
``RTEMS.INVALID_ID`` - invalid queue id
``RTEMS.INVALID_ADDRESS`` - ``buffer`` is NULL
``RTEMS.INVALID_ADDRESS`` - ``count`` is NULL
``RTEMS.INVALID_SIZE`` - invalid message size
**DESCRIPTION:**
This directive causes all tasks that are waiting at
the queue specified by id to be unblocked and sent the message
contained in buffer. Before a task is unblocked, the message
buffer of size bytes in length is copied to that task's message
buffer. The number of tasks that were unblocked is returned in
count.
**NOTES:**
The calling task will be preempted if it has
preemption enabled and a higher priority task is unblocked as
the result of this directive.
The execution time of this directive is directly
related to the number of tasks waiting on the message queue,
although it is more efficient than the equivalent number of
invocations of ``rtems.message_queue_send``.
Broadcasting a message to a global message queue
which does not reside on the local node will generate a request
telling the remote node to broadcast the message to the
specified message queue.
When a task is unblocked which resides on a different
node from the message queue, a copy of the message is forwarded
to the appropriate node, the waiting task is unblocked, and the
proxy used to represent the task is reclaimed.
MESSAGE_QUEUE_RECEIVE - Receive message from a queue
----------------------------------------------------
.. index:: receive message from a queue
**CALLING SEQUENCE:**
.. code:: c
procedure Message_Queue_Receive (
ID : in RTEMS.ID;
Buffer : in RTEMS.Address;
Option_Set : in RTEMS.Option;
Timeout : in RTEMS.Interval;
Size : out RTEMS.Unsigned32;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - message received successfully
``RTEMS.INVALID_ID`` - invalid queue id
``RTEMS.INVALID_ADDRESS`` - ``buffer`` is NULL
``RTEMS.INVALID_ADDRESS`` - ``size`` is NULL
``RTEMS.UNSATISFIED`` - queue is empty
``RTEMS.TIMEOUT`` - timed out waiting for message
``RTEMS.OBJECT_WAS_DELETED`` - queue deleted while waiting
**DESCRIPTION:**
This directive receives a message from the message
queue specified in id. The ``RTEMS.WAIT`` and ``RTEMS.NO_WAIT`` options of the
options parameter allow the calling task to specify whether to
wait for a message to become available or return immediately.
For either option, if there is at least one message in the
queue, then it is copied to buffer, size is set to return the
length of the message in bytes, and this directive returns
immediately with a successful return code. The buffer has to be big enough to
receive a message of the maximum length with respect to this message queue.
If the calling task chooses to return immediately and
the queue is empty, then a status code indicating this condition
is returned. If the calling task chooses to wait at the message
queue and the queue is empty, then the calling task is placed on
the message wait queue and blocked. If the queue was created
with the ``RTEMS.PRIORITY`` option specified, then
the calling task is inserted into the wait queue according to
its priority. But, if the queue was created with the ``RTEMS.FIFO`` option specified, then the
calling task is placed at the rear of the wait queue.
A task choosing to wait at the queue can optionally
specify a timeout value in the timeout parameter. The timeout
parameter specifies the maximum interval to wait before the
calling task desires to be unblocked. If it is set to ``RTEMS.NO_TIMEOUT``, then the calling task will wait forever.
**NOTES:**
The following message receive option constants are
defined by RTEMS:
- ``RTEMS.WAIT`` - task will wait for a message (default)
- ``RTEMS.NO_WAIT`` - task should not wait
Receiving a message from a global message queue which
does not reside on the local node will generate a request to the
remote node to obtain a message from the specified message
queue. If no message is available and ``RTEMS.WAIT`` was specified, then
the task must be blocked until a message is posted. A proxy is
allocated on the remote node to represent the task until the
message is posted.
A clock tick is required to support the timeout functionality of
this directive.
MESSAGE_QUEUE_GET_NUMBER_PENDING - Get number of messages pending on a queue
----------------------------------------------------------------------------
.. index:: get number of pending messages
**CALLING SEQUENCE:**
.. code:: c
procedure Message_Queue_Get_Number_Pending (
ID : in RTEMS.ID;
Count : out RTEMS.Unsigned32;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - number of messages pending returned successfully
``RTEMS.INVALID_ADDRESS`` - ``count`` is NULL
``RTEMS.INVALID_ID`` - invalid queue id
**DESCRIPTION:**
This directive returns the number of messages pending on this
message queue in count. If no messages are present
on the queue, count is set to zero.
**NOTES:**
Getting the number of pending messages on a global message queue which
does not reside on the local node will generate a request to the
remote node to actually obtain the pending message count for
the specified message queue.
MESSAGE_QUEUE_FLUSH - Flush all messages on a queue
---------------------------------------------------
.. index:: flush messages on a queue
**CALLING SEQUENCE:**
.. code:: c
procedure Message_Queue_Flush (
ID : in RTEMS.ID;
Count : out RTEMS.Unsigned32;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - message queue flushed successfully
``RTEMS.INVALID_ADDRESS`` - ``count`` is NULL
``RTEMS.INVALID_ID`` - invalid queue id
**DESCRIPTION:**
This directive removes all pending messages from the
specified queue id. The number of messages removed is returned
in count. If no messages are present on the queue, count is set
to zero.
**NOTES:**
Flushing all messages on a global message queue which
does not reside on the local node will generate a request to the
remote node to actually flush the specified message queue.
.. COMMENT: COPYRIGHT (c) 1988-2002.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.

Multiprocessing Manager
#######################
.. index:: multiprocessing
Introduction
============
In multiprocessor real-time systems, new
requirements, such as sharing data and global resources between
processors, are introduced. This requires an efficient and
reliable communications vehicle which allows all processors to
communicate with each other as necessary. In addition, the
ramifications of multiple processors affect each and every
characteristic of a real-time system, almost always making them
more complicated.
RTEMS addresses these issues by providing simple and
flexible real-time multiprocessing capabilities. The executive
easily lends itself to both tightly-coupled and loosely-coupled
configurations of the target system hardware. In addition,
RTEMS supports systems composed of both homogeneous and
heterogeneous mixtures of processors and target boards.
A major design goal of the RTEMS executive was to
transcend the physical boundaries of the target hardware
configuration. This goal is achieved by presenting the
application software with a logical view of the target system
where the boundaries between processor nodes are transparent.
As a result, the application developer may designate objects
such as tasks, queues, events, signals, semaphores, and memory
blocks as global objects. These global objects may then be
accessed by any task regardless of the physical location of the
object and the accessing task. RTEMS automatically determines
that the object being accessed resides on another processor and
performs the actions required to access the desired object.
Simply stated, RTEMS allows the entire system, both hardware and
software, to be viewed logically as a single system.
Multiprocessing operations are transparent at the application level.
Operations on remote objects are implicitly processed as remote
procedure calls. Although remote operations on objects are supported
from Ada tasks, the calls used to support the multiprocessing
communications should be implemented in C and are not supported
in the Ada binding. Since there is no Ada binding for RTEMS
multiprocessing support services, all examples and data structures
shown in this chapter are in C.
Background
==========
.. index:: multiprocessing topologies
RTEMS makes no assumptions regarding the connection
media or topology of a multiprocessor system. The tasks which
compose a particular application can be spread among as many
processors as needed to satisfy the application's timing
requirements. The application tasks can interact using a subset
of the RTEMS directives as if they were on the same processor.
These directives allow application tasks to exchange data,
communicate, and synchronize regardless of which processor they
reside upon.
The RTEMS multiprocessor execution model is multiple
instruction streams with multiple data streams (MIMD). This
execution model has each of the processors executing code
independent of the other processors. Because of this
parallelism, the application designer can more easily guarantee
deterministic behavior.
By supporting heterogeneous environments, RTEMS
allows the systems designer to select the most efficient
processor for each subsystem of the application. Configuring
RTEMS for a heterogeneous environment is no more difficult than
for a homogeneous one. In keeping with the RTEMS philosophy of
providing transparent physical node boundaries, the minimal
heterogeneous processing required is isolated in the MPCI layer.
Nodes
-----
.. index:: nodes, definition
A processor in a RTEMS system is referred to as a
node. Each node is assigned a unique non-zero node number by
the application designer. RTEMS assumes that node numbers are
assigned consecutively from one to the ``maximum_nodes``
configuration parameter. The node
number, node, and the maximum number of nodes, maximum_nodes, in
a system are found in the Multiprocessor Configuration Table.
The maximum_nodes field and the number of global objects,
maximum_global_objects, are required to be the same on all nodes
in a system.
The node number is used by RTEMS to identify each
node when performing remote operations. Thus, the
Multiprocessor Communications Interface Layer (MPCI) must be
able to route messages based on the node number.
Global Objects
--------------
.. index:: global objects, definition
All RTEMS objects which are created with the GLOBAL
attribute will be known on all other nodes. Global objects can
be referenced from any node in the system, although certain
directive specific restrictions (e.g. one cannot delete a remote
object) may apply. A task does not have to be global to perform
operations involving remote objects. The maximum number of
global objects in the system is user configurable and can be
found in the maximum_global_objects field in the Multiprocessor
Configuration Table. The distribution of tasks to processors is
performed during the application design phase. Dynamic task
relocation is not supported by RTEMS.
Global Object Table
-------------------
.. index:: global objects table
RTEMS maintains two tables containing object
information on every node in a multiprocessor system: a local
object table and a global object table. The local object table
on each node is unique and contains information for all objects
created on this node whether those objects are local or global.
The global object table contains information regarding all
global objects in the system and, consequently, is the same on
every node.
Since each node must maintain an identical copy of
the global object table, the maximum number of entries in each
copy of the table must be the same. The maximum number of
entries in each copy is determined by the
maximum_global_objects parameter in the Multiprocessor
Configuration Table. This parameter, as well as the
maximum_nodes parameter, is required to be the same on all
nodes. To maintain consistency among the table copies, every
node in the system must be informed of the creation or deletion
of a global object.
Remote Operations
-----------------
.. index:: MPCI and remote operations
When an application performs an operation on a remote
global object, RTEMS must generate a Remote Request (RQ) message
and send it to the appropriate node. After completing the
requested operation, the remote node will build a Remote
Response (RR) message and send it to the originating node.
Messages generated as a side-effect of a directive (such as
deleting a global task) are known as Remote Processes (RP) and
do not require the receiving node to respond.
Other than taking slightly longer to execute
directives on remote objects, the application is unaware of the
location of the objects it acts upon. The exact amount of
overhead required for a remote operation is dependent on the
media connecting the nodes and, to a lesser degree, on the
efficiency of the user-provided MPCI routines.
The following shows the typical transaction sequence
during a remote application:
#. The application issues a directive accessing a
   remote global object.

#. RTEMS determines the node on which the object
   resides.

#. RTEMS calls the user-provided MPCI routine
   GET_PACKET to obtain a packet in which to build a RQ message.

#. After building a message packet, RTEMS calls the
   user-provided MPCI routine SEND_PACKET to transmit the packet to
   the node on which the object resides (referred to as the
   destination node).

#. The calling task is blocked until the RR message
   arrives, and control of the processor is transferred to another
   task.

#. The MPCI layer on the destination node senses the
   arrival of a packet (commonly in an ISR), and calls the
   ``rtems_multiprocessing_announce`` directive. This directive
   readies the Multiprocessing Server.

#. The Multiprocessing Server calls the user-provided
   MPCI routine RECEIVE_PACKET, performs the requested operation,
   builds an RR message, and returns it to the originating node.

#. The MPCI layer on the originating node senses the
   arrival of a packet (typically via an interrupt), and calls the RTEMS
   ``rtems_multiprocessing_announce`` directive. This directive
   readies the Multiprocessing Server.

#. The Multiprocessing Server calls the user-provided
   MPCI routine RECEIVE_PACKET, readies the original requesting
   task, and blocks until another packet arrives. Control is
   transferred to the original task which then completes processing
   of the directive.
If an uncorrectable error occurs in the user-provided
MPCI layer, the fatal error handler should be invoked. RTEMS
assumes the reliable transmission and reception of messages by
the MPCI and makes no attempt to detect or correct errors.
Proxies
-------
.. index:: proxy, definition
A proxy is an RTEMS data structure which resides on a
remote node and is used to represent a task which must block as
part of a remote operation. This action can occur as part of the
``rtems.semaphore_obtain`` and ``rtems.message_queue_receive`` directives. If the
object were local, the task's control block would be available
for modification to indicate it was blocking on a message queue
or semaphore. However, the task's control block resides only on
the same node as the task. As a result, the remote node must
allocate a proxy to represent the task until it can be readied.
The maximum number of proxies is defined in the
Multiprocessor Configuration Table. Each node in a
multiprocessor system may require a different number of proxies
to be configured. The distribution of proxy control blocks is
application dependent and is different from the distribution of
tasks.
Multiprocessor Configuration Table
----------------------------------
The Multiprocessor Configuration Table contains
information needed by RTEMS when used in a multiprocessor
system. This table is discussed in detail in the section
Multiprocessor Configuration Table of the Configuring a System
chapter.
Multiprocessor Communications Interface Layer
=============================================
The Multiprocessor Communications Interface Layer
(MPCI) is a set of user-provided procedures which enable the
nodes in a multiprocessor system to communicate with one
another. These routines are invoked by RTEMS at various times
in the preparation and processing of remote requests.
Interrupts are enabled when an MPCI procedure is invoked. It is
assumed that if the execution mode and/or interrupt level are
altered by the MPCI layer, they will be restored prior to
returning to RTEMS.

.. index:: MPCI, definition
The MPCI layer is responsible for managing a pool of
buffers called packets and for sending these packets between
system nodes. Packet buffers contain the messages sent between
the nodes. Typically, the MPCI layer will encapsulate the
packet within an envelope which contains the information needed
by the MPCI layer. The number of packets available is dependent
on the MPCI layer implementation.

.. index:: MPCI entry points
The entry points to the routines in the user's MPCI
layer should be placed in the Multiprocessor Communications
Interface Table. The user must provide entry points for each of
the following table entries in a multiprocessor system:
- initialization - initialize the MPCI
- get_packet - obtain a packet buffer
- return_packet - return a packet buffer
- send_packet - send a packet to another node
- receive_packet - called to get an arrived packet
A packet is sent by RTEMS in each of the following situations:
- an RQ is generated on an originating node;
- an RR is generated on a destination node;
- a global object is created;
- a global object is deleted;
- a local task blocked on a remote object is deleted;
- during system initialization to check for system consistency.
If the target hardware supports it, the arrival of a
packet at a node may generate an interrupt. Otherwise, the
real-time clock ISR can check for the arrival of a packet. In
any case, the ``rtems_multiprocessing_announce`` directive must be called
to announce the arrival of a packet. After exiting the ISR,
control will be passed to the Multiprocessing Server to process
the packet. The Multiprocessing Server will call the get_packet
entry to obtain a packet buffer and the receive_packet entry to
copy the message into the buffer obtained.
INITIALIZATION
--------------
The INITIALIZATION component of the user-provided
MPCI layer is called as part of the ``rtems_initialize_executive``
directive to initialize the MPCI layer and associated hardware.
It is invoked immediately after all of the device drivers have
been initialized. This component should adhere to the
following prototype:

.. index:: rtems_mpci_entry
.. code:: c
rtems_mpci_entry user_mpci_initialization(
rtems_configuration_table *configuration
);
where configuration is the address of the user's
Configuration Table. Operations on global objects cannot be
performed until this component is invoked. The INITIALIZATION
component is invoked only once in the life of any system. If
the MPCI layer cannot be successfully initialized, the fatal
error manager should be invoked by this routine.
One of the primary functions of the MPCI layer is to
provide the executive with packet buffers. The INITIALIZATION
routine must create and initialize a pool of packet buffers.
There must be enough packet buffers so RTEMS can obtain one
whenever needed.
GET_PACKET
----------
The GET_PACKET component of the user-provided MPCI
layer is called when RTEMS must obtain a packet buffer to send
or broadcast a message. This component should adhere to the
following prototype:
.. code:: c
rtems_mpci_entry user_mpci_get_packet(
rtems_packet_prefix **packet
);
where packet is the address of a pointer to a packet.
This routine always succeeds and, upon return, packet will
contain the address of a packet. If, for any reason, a packet
cannot be successfully obtained, then the fatal error manager
should be invoked.
RTEMS has been optimized to avoid the need for
obtaining a packet each time a message is sent or broadcast.
For example, RTEMS sends response messages (RR) back to the
originator in the same packet in which the request message (RQ)
arrived.
RETURN_PACKET
-------------
The RETURN_PACKET component of the user-provided MPCI
layer is called when RTEMS needs to release a packet to the free
packet buffer pool. This component should adhere to the
following prototype:
.. code:: c
rtems_mpci_entry user_mpci_return_packet(
rtems_packet_prefix *packet
);
where packet is the address of a packet. If the
packet cannot be successfully returned, the fatal error manager
should be invoked.
RECEIVE_PACKET
--------------
The RECEIVE_PACKET component of the user-provided
MPCI layer is called when RTEMS needs to obtain a packet which
has previously arrived. This component should adhere to the
following prototype:
.. code:: c
rtems_mpci_entry user_mpci_receive_packet(
rtems_packet_prefix **packet
);
where packet is a pointer to the address of a packet
into which the message from another node is to be placed. If a
message is available, then, upon return, packet will contain the
address of the message from another node. If no messages are
available, then, upon return, packet should contain NULL.
SEND_PACKET
-----------
The SEND_PACKET component of the user-provided MPCI
layer is called when RTEMS needs to send a packet containing a
message to another node. This component should adhere to the
following prototype:
.. code:: c
rtems_mpci_entry user_mpci_send_packet(
uint32_t node,
rtems_packet_prefix **packet
);
where node is the node number of the destination and packet is the
address of a packet which contains a message. If the packet cannot
be successfully sent, the fatal error manager should be invoked.
If node is set to zero, the packet is to be
broadcast to all other nodes in the system. Although some
MPCI layers will be built upon hardware which supports a
broadcast mechanism, others may be required to generate a copy
of the packet for each node in the system.
.. COMMENT: XXX packet_prefix structure needs to be defined in this document
Many MPCI layers use the ``packet_length`` field of the ``rtems_packet_prefix`` portion
of the packet to avoid sending unnecessary data. This is especially
useful if the media connecting the nodes is relatively slow.
The ``to_convert`` field of the ``rtems_packet_prefix`` portion of the
packet indicates how much of the packet in 32-bit units may require conversion
in a heterogeneous system.
Supporting Heterogeneous Environments
-------------------------------------
.. index:: heterogeneous multiprocessing
Developing an MPCI layer for a heterogeneous system
requires a thorough understanding of the differences between the
processors which comprise the system. One difficult problem is
the varying data representation schemes used by different
processor types. The most pervasive data representation problem
is the order of the bytes which compose a data entity.
Processors which place the least significant byte at the
smallest address are classified as little endian processors.
Little endian byte-ordering is shown below:
.. code:: c
+---------------+----------------+---------------+----------------+
| | | | |
| Byte 3 | Byte 2 | Byte 1 | Byte 0 |
| | | | |
+---------------+----------------+---------------+----------------+
Conversely, processors which place the most
significant byte at the smallest address are classified as big
endian processors. Big endian byte-ordering is shown below:
.. code:: c
+---------------+----------------+---------------+----------------+
| | | | |
| Byte 0 | Byte 1 | Byte 2 | Byte 3 |
| | | | |
+---------------+----------------+---------------+----------------+
Unfortunately, sharing a data structure between big
endian and little endian processors requires translation into a
common endian format. An application designer typically chooses
the common endian format to minimize conversion overhead.
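As a plain C illustration (not part of the RTEMS API), the host byte order can be determined by examining how a 32-bit value is laid out in memory, which is one way an MPCI layer might decide whether packet words need translation to the chosen common format:

.. code:: c

    #include <stdint.h>
    #include <stdio.h>

    /* Determine the byte order of the host by examining how a 32-bit
     * value is laid out in memory.  On a little endian host the least
     * significant byte (0x04) is stored at the smallest address. */
    static int host_is_little_endian(void)
    {
      const uint32_t probe = 0x01020304u;
      const unsigned char *bytes = (const unsigned char *) &probe;
      return bytes[0] == 0x04;
    }

    int main(void)
    {
      printf("host is %s endian\n",
             host_is_little_endian() ? "little" : "big");
      return 0;
    }

If the host format already matches the common format, the MPCI layer can skip conversion entirely for outgoing packets.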
Another issue in the design of shared data structures
is the alignment of data structure elements. Alignment is both
processor and compiler implementation dependent. For example,
some processors allow data elements to begin on any address
boundary, while others impose restrictions. Common restrictions
are that data elements must begin on either an even address or
on a long word boundary. Violation of these restrictions may
cause an exception or impose a performance penalty.
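The effect of such alignment restrictions can be observed directly with C's ``offsetof``; the structure below is a made-up example, not an RTEMS type. On most compilers, padding is inserted so that the 32-bit member begins on a suitably aligned boundary:

.. code:: c

    #include <assert.h>
    #include <stdalign.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* A hypothetical shared structure whose layout depends on the
     * alignment rules of the processor and compiler.  The int32_t
     * member typically cannot begin immediately after the lone char,
     * so padding bytes are inserted before it. */
    struct shared_record {
      char    tag;
      int32_t value;
    };

    int main(void)
    {
      size_t off = offsetof(struct shared_record, value);
      /* A member's offset is always a multiple of its alignment. */
      assert(off % alignof(int32_t) == 0);
      printf("value begins at offset %zu\n", off);
      return 0;
    }

Because the amount of padding may differ between the processors in a heterogeneous system, shared structures are usually laid out explicitly rather than left to each compiler.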
Other issues which commonly impact the design of
shared data structures include the representation of floating
point numbers, bit fields, decimal data, and character strings.
In addition, the representation method for negative integers
could be ones or twos complement. These factors combine to
increase the complexity of designing and manipulating data
structures shared between processors.
RTEMS addressed these issues in the design of the
packets used to communicate between nodes. The RTEMS packet
format is designed to allow the MPCI layer to perform all
necessary conversion without burdening the developer with the
details of the RTEMS packet format. As a result, the MPCI layer
must be aware of the following:
- All packets must begin on a four byte boundary.
- Packets are composed of both RTEMS and application data. All RTEMS data
is treated as 32-bit unsigned quantities and is in the first ``to_convert``
32-bit quantities of the packet. The ``to_convert`` field is part of the ``rtems_packet_prefix`` portion of the packet.
- The RTEMS data component of the packet must be in native
endian format. Endian conversion may be performed by either the
sending or receiving MPCI layer.
- RTEMS makes no assumptions regarding the application
data component of the packet.
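The rules above mean a converting MPCI layer only needs to byte-swap the first ``to_convert`` 32-bit words of a packet and may pass the application data through untouched. The following self-contained C sketch (the function names and the generic byte-swap helper are illustrative, not RTEMS routines) shows that partial conversion:

.. code:: c

    #include <stdint.h>
    #include <stdio.h>

    /* Generic 32-bit byte swap such as an MPCI layer might apply when
     * translating between big and little endian formats. */
    static uint32_t swap_u32(uint32_t v)
    {
      return (v << 24) | ((v & 0xFF00u) << 8) |
             ((v >> 8) & 0xFF00u) | (v >> 24);
    }

    /* Convert only the first to_convert 32-bit words of a packet: the
     * RTEMS data.  Application data beyond that point is untouched. */
    static void convert_rtems_words(uint32_t *words, uint32_t to_convert)
    {
      for (uint32_t i = 0; i < to_convert; ++i)
        words[i] = swap_u32(words[i]);
    }

    int main(void)
    {
      uint32_t packet[4] =
        { 0x11223344u, 0xAABBCCDDu, 0x01020304u, 0x0A0B0C0Du };
      convert_rtems_words(packet, 2);  /* RTEMS portion is two words */
      printf("0x%08X 0x%08X 0x%08X 0x%08X\n",
             packet[0], packet[1], packet[2], packet[3]);
      return 0;
    }

Whether the sending or the receiving node performs this swap is a design choice left to the MPCI layer, as noted above.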
Operations
==========
Announcing a Packet
-------------------
The ``rtems_multiprocessing_announce`` directive is called by
the MPCI layer to inform RTEMS that a packet has arrived from
another node. This directive can be called from an interrupt
service routine or from within a polling routine.
Directives
==========
This section details the additional directives
required to support RTEMS in a multiprocessor configuration. A
subsection is dedicated to each of this manager's directives and
describes the calling sequence, related constants, usage, and
status codes.
MULTIPROCESSING_ANNOUNCE - Announce the arrival of a packet
-----------------------------------------------------------
.. index:: announce arrival of package
**CALLING SEQUENCE:**
.. index:: rtems_multiprocessing_announce
.. code:: c
void rtems_multiprocessing_announce( void );
**DIRECTIVE STATUS CODES:**
NONE
**DESCRIPTION:**
This directive informs RTEMS that a multiprocessing
communications packet has arrived from another node. This
directive is called by the user-provided MPCI, and is only used
in multiprocessor configurations.
**NOTES:**
This directive is typically called from an ISR.
This directive will almost certainly cause the
calling task to be preempted.
This directive does not generate activity on remote nodes.
.. COMMENT: COPYRIGHT (c) 2014.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.

Object Services
###############
.. index:: object manipulation
Introduction
============
RTEMS provides a collection of services to assist in the
management and usage of the objects created and utilized
via other managers. These services assist in the
manipulation of RTEMS objects independent of the API used
to create them. The object related services provided by
RTEMS are:
- ``rtems.build_name`` - build object name from characters
- ``rtems.object_get_classic_name`` - lookup name from Id
- ``rtems.object_get_name`` - obtain object name as string
- ``rtems.object_set_name`` - set object name
- ``rtems.object_id_get_api`` - obtain API from Id
- ``rtems.object_id_get_class`` - obtain class from Id
- ``rtems.object_id_get_node`` - obtain node from Id
- ``rtems.object_id_get_index`` - obtain index from Id
- ``rtems.build_id`` - build object id from components
- ``rtems.object_id_api_minimum`` - obtain minimum API value
- ``rtems.object_id_api_maximum`` - obtain maximum API value
- ``rtems.object_id_api_minimum_class`` - obtain minimum class value
- ``rtems.object_id_api_maximum_class`` - obtain maximum class value
- ``rtems.object_get_api_name`` - obtain API name
- ``rtems.object_get_api_class_name`` - obtain class name
- ``rtems.object_get_class_information`` - obtain class information
Background
==========
APIs
----
RTEMS implements multiple APIs including an Internal API,
the Classic API, and the POSIX API. These
APIs share the common foundation of SuperCore objects and
thus share object management code. This includes a common
scheme for object Ids and for managing object names whether
those names be in the thirty-two bit form used by the Classic
API or C strings.
The object Id contains a field indicating the API that
an object instance is associated with. This field
holds a numerically small non-zero integer.
Object Classes
--------------
Each API consists of a collection of managers. Each manager
is responsible for instances of a particular object class.
Classic API Tasks and POSIX Mutexes are example classes.
The object Id contains a field indicating the class that
an object instance is associated with. This field
holds a numerically small non-zero integer. In all APIs,
a class value of one is reserved for tasks or threads.
Object Names
------------
Every RTEMS object which has an Id may also have a
name associated with it. Depending on the API, names
may be either thirty-two bit integers as in the Classic
API or strings as in the POSIX API.
Some objects have Ids but do not have a defined way to associate
a name with them. For example, POSIX threads have
Ids but per POSIX do not have names. In RTEMS, objects
not defined to have thirty-two bit names may have string
names assigned to them via the ``rtems.object_set_name``
service. The original impetus in providing this service
was so the normally anonymous POSIX threads could have
a user defined name in CPU Usage Reports.
Operations
==========
Decomposing and Recomposing an Object Id
----------------------------------------
Services are provided to decompose an object Id into its
subordinate components. The following services are used
to do this:
- ``rtems.object_id_get_api``
- ``rtems.object_id_get_class``
- ``rtems.object_id_get_node``
- ``rtems.object_id_get_index``
The following C language example illustrates the
decomposition of an Id and printing the values.
.. code:: c
void printObjectId(rtems_id id)
{
printf(
"API=%d Class=%d Node=%d Index=%d\\n",
rtems_object_id_get_api(id),
rtems_object_id_get_class(id),
rtems_object_id_get_node(id),
rtems_object_id_get_index(id)
);
}
This prints the components of the Ids as integers.
It is also possible to construct an arbitrary Id using
the ``rtems.build_id`` service. The following
C language example illustrates how to construct the
"next Id."
.. code:: c
rtems_id nextObjectId(rtems_id id)
{
return rtems_build_id(
rtems_object_id_get_api(id),
rtems_object_id_get_class(id),
rtems_object_id_get_node(id),
rtems_object_id_get_index(id) + 1
);
}
Note that this Id may not be valid in this
system or associated with an allocated object.
Printing an Object Id
---------------------
RTEMS also provides services to associate the API and Class
portions of an Object Id with strings. This allows the
application developer to provide more information about
an object in diagnostic messages.
In the following C language example, an Id is decomposed into
its constituent parts and "pretty-printed."
.. code:: c
void prettyPrintObjectId(rtems_id id)
{
int tmpAPI, tmpClass;
tmpAPI = rtems_object_id_get_api(id);
tmpClass = rtems_object_id_get_class(id);
printf(
"API=%s Class=%s Node=%d Index=%d\n",
rtems_object_get_api_name(tmpAPI),
rtems_object_get_api_class_name(tmpAPI, tmpClass),
rtems_object_id_get_node(id),
rtems_object_id_get_index(id)
);
}
Directives
==========
BUILD_NAME - Build object name from characters
----------------------------------------------
.. index:: build object name
**CALLING SEQUENCE:**
.. code:: c
procedure Build_Name(
c1 : in RTEMS.Unsigned8;
c2 : in RTEMS.Unsigned8;
c3 : in RTEMS.Unsigned8;
c4 : in RTEMS.Unsigned8;
Name : out RTEMS.Name
);
**DIRECTIVE STATUS CODES**
Returns a name constructed from the four characters.
**DESCRIPTION:**
This service takes the four characters provided as arguments
and constructs a thirty-two bit object name with ``c1``
in the most significant byte and ``c4`` in the least
significant byte.
**NOTES:**
This directive is strictly local and does not impact task scheduling.
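The packing described above can be sketched in a few lines of plain C; this is an illustration of the byte layout, not the RTEMS implementation:

.. code:: c

    #include <stdint.h>
    #include <stdio.h>

    /* Construct a thirty-two bit name with c1 in the most significant
     * byte and c4 in the least significant byte, as the Build_Name
     * description above specifies. */
    static uint32_t build_name(uint8_t c1, uint8_t c2,
                               uint8_t c3, uint8_t c4)
    {
      return ((uint32_t) c1 << 24) | ((uint32_t) c2 << 16) |
             ((uint32_t) c3 <<  8) |  (uint32_t) c4;
    }

    int main(void)
    {
      /* 'T' = 0x54, 'A' = 0x41, 'S' = 0x53, 'K' = 0x4B */
      printf("name = 0x%08X\n", build_name('T', 'A', 'S', 'K'));
      return 0;
    }

Viewed in memory, such a name reads as the four characters in order on a big endian machine, which is why four-character names like 'TASK' are the manual's usual convention.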
OBJECT_GET_CLASSIC_NAME - Lookup name from id
---------------------------------------------
.. index:: get name from id
.. index:: obtain name from id
**CALLING SEQUENCE:**
.. code:: c
procedure Object_Get_Classic_Name(
ID : in RTEMS.ID;
Name : out RTEMS.Name;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES**
``RTEMS.SUCCESSFUL`` - name looked up successfully
``RTEMS.INVALID_ADDRESS`` - invalid name pointer
``RTEMS.INVALID_ID`` - invalid object id
**DESCRIPTION:**
This service looks up the name for the object ``id`` specified
and, if found, places the result in ``*name``.
**NOTES:**
This directive is strictly local and does not impact task scheduling.
OBJECT_GET_NAME - Obtain object name as string
----------------------------------------------
.. index:: get object name as string
.. index:: obtain object name as string
**CALLING SEQUENCE:**
.. code:: c
procedure Object_Get_Name(
ID : in RTEMS.ID;
Name : out RTEMS.Name;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES**
Returns a pointer to the name if successful or ``NULL``
otherwise.
**DESCRIPTION:**
This service looks up the name of the object specified by ``id``
and places it in the memory pointed to by ``name``.
Every attempt is made to return name as a printable string even
if the object has the Classic API thirty-two bit style name.
**NOTES:**
This directive is strictly local and does not impact task scheduling.
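Converting a Classic API thirty-two bit name into a printable string,
as this directive attempts to do, can be sketched as follows. The
``'*'`` substitution for non-printable bytes is an assumption of this
sketch, not necessarily what RTEMS itself emits:

```c
#include <assert.h>
#include <ctype.h>
#include <stdint.h>

/* Unpack a thirty-two bit name into a NUL-terminated four-character
   string, replacing non-printable bytes with '*'. */
static void name_to_string(uint32_t name, char buffer[5])
{
  for (int i = 0; i < 4; ++i) {
    unsigned char c = (unsigned char) (name >> (24 - 8 * i));
    buffer[i] = isprint(c) ? (char) c : '*';
  }
  buffer[4] = '\0';
}
```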
OBJECT_SET_NAME - Set object name
---------------------------------
.. index:: set object name
**CALLING SEQUENCE:**
.. code:: c
procedure Object_Set_Name(
ID : in RTEMS.ID;
Name : in String;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES**
``RTEMS.SUCCESSFUL`` - name set successfully
``RTEMS.INVALID_ADDRESS`` - invalid name pointer
``RTEMS.INVALID_ID`` - invalid object id
**DESCRIPTION:**
This service sets the name of ``id`` to that specified
by the string located at ``name``.
**NOTES:**
This directive is strictly local and does not impact task scheduling.
If the object specified by ``id`` is of a class that
has a string name, this method will free the existing
name to the RTEMS Workspace and allocate enough memory
from the RTEMS Workspace to make a copy of the string
located at ``name``.
If the object specified by ``id`` is of a class that
has a thirty-two bit integer style name, then the first
four characters in ``*name`` will be used to construct
the name.
OBJECT_ID_GET_API - Obtain API from Id
--------------------------------------
.. index:: obtain API from id
**CALLING SEQUENCE:**
.. code:: c
procedure Object_Id_Get_API(
ID : in RTEMS.ID;
API : out RTEMS.Unsigned32
);
**DIRECTIVE STATUS CODES**
Returns the API portion of the object Id.
**DESCRIPTION:**
This directive returns the API portion of the provided object ``id``.
**NOTES:**
This directive is strictly local and does not impact task scheduling.
This directive does NOT validate the ``id`` provided.
OBJECT_ID_GET_CLASS - Obtain Class from Id
------------------------------------------
.. index:: obtain class from object id
**CALLING SEQUENCE:**
.. code:: c
procedure Object_Id_Get_Class(
ID : in RTEMS.ID;
The_Class : out RTEMS.Unsigned32
);
**DIRECTIVE STATUS CODES**
Returns the class portion of the object Id.
**DESCRIPTION:**
This directive returns the class portion of the provided object ``id``.
**NOTES:**
This directive is strictly local and does not impact task scheduling.
This directive does NOT validate the ``id`` provided.
OBJECT_ID_GET_NODE - Obtain Node from Id
----------------------------------------
.. index:: obtain node from object id
**CALLING SEQUENCE:**
.. code:: c
procedure Object_Id_Get_Node(
ID : in RTEMS.ID;
Node : out RTEMS.Unsigned32
);
**DIRECTIVE STATUS CODES**
Returns the node portion of the object Id.
**DESCRIPTION:**
This directive returns the node portion of the provided object ``id``.
**NOTES:**
This directive is strictly local and does not impact task scheduling.
This directive does NOT validate the ``id`` provided.
OBJECT_ID_GET_INDEX - Obtain Index from Id
------------------------------------------
.. index:: obtain index from object id
**CALLING SEQUENCE:**
.. code:: c
procedure Object_Id_Get_Index(
ID : in RTEMS.ID;
Index : out RTEMS.Unsigned32
);
**DIRECTIVE STATUS CODES**
Returns the index portion of the object Id.
**DESCRIPTION:**
This directive returns the index portion of the provided object ``id``.
**NOTES:**
This directive is strictly local and does not impact task scheduling.
This directive does NOT validate the ``id`` provided.
BUILD_ID - Build Object Id From Components
------------------------------------------
.. index:: build object id from components
**CALLING SEQUENCE:**
.. code:: c
function Build_Id(
the_api : in RTEMS.Unsigned32;
the_class : in RTEMS.Unsigned32;
the_node : in RTEMS.Unsigned32;
the_index : in RTEMS.Unsigned32
) return RTEMS.Id;
**DIRECTIVE STATUS CODES**
Returns an object Id constructed from the provided arguments.
**DESCRIPTION:**
This service constructs an object Id from the provided ``the_api``,
``the_class``, ``the_node``, and ``the_index``.
**NOTES:**
This directive is strictly local and does not impact task scheduling.
This directive does NOT validate the arguments provided
or the Object id returned.
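The composition and decomposition directives above can be illustrated
together. The exact field widths and positions used here (class in
bits 27-31, API in bits 24-26, node in bits 16-23, index in bits 0-15)
are an assumption for illustration; consult the RTEMS headers for the
authoritative layout:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of a 32-bit object Id: pack the API, class, node, and index
   fields, and extract them again.  Field positions are assumed. */
static uint32_t build_id(uint32_t api, uint32_t class_,
                         uint32_t node, uint32_t index)
{
  return (class_ << 27) | (api << 24) | (node << 16) | index;
}

static uint32_t id_get_api(uint32_t id)   { return (id >> 24) & 0x7;  }
static uint32_t id_get_class(uint32_t id) { return (id >> 27) & 0x1F; }
static uint32_t id_get_node(uint32_t id)  { return (id >> 16) & 0xFF; }
static uint32_t id_get_index(uint32_t id) { return id & 0xFFFF; }
```

A round trip through ``build_id`` and the accessors recovers each
component unchanged, which is the property the directives guarantee.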
OBJECT_ID_API_MINIMUM - Obtain Minimum API Value
------------------------------------------------
.. index:: obtain minimum API value
**CALLING SEQUENCE:**
.. code:: c
function Object_Id_API_Minimum return RTEMS.Unsigned32;
**DIRECTIVE STATUS CODES**
Returns the minimum valid value for the API portion of an object Id.
**DESCRIPTION:**
This service returns the minimum valid value for the API portion of
an object Id.
**NOTES:**
This directive is strictly local and does not impact task scheduling.
OBJECT_ID_API_MAXIMUM - Obtain Maximum API Value
------------------------------------------------
.. index:: obtain maximum API value
**CALLING SEQUENCE:**
.. code:: c
function Object_Id_API_Maximum return RTEMS.Unsigned32;
**DIRECTIVE STATUS CODES**
Returns the maximum valid value for the API portion of an object Id.
**DESCRIPTION:**
This service returns the maximum valid value for the API portion of
an object Id.
**NOTES:**
This directive is strictly local and does not impact task scheduling.
OBJECT_API_MINIMUM_CLASS - Obtain Minimum Class Value
-----------------------------------------------------
.. index:: obtain minimum class value
**CALLING SEQUENCE:**
.. code:: c
procedure Object_API_Minimum_Class(
API : in RTEMS.Unsigned32;
Minimum : out RTEMS.Unsigned32
);
**DIRECTIVE STATUS CODES**
If ``api`` is not valid, -1 is returned.
If successful, this service returns the minimum valid value for the
class portion of an object Id for the specified ``api``.
**DESCRIPTION:**
This service returns the minimum valid value for the class portion of
an object Id for the specified ``api``.
**NOTES:**
This directive is strictly local and does not impact task scheduling.
OBJECT_API_MAXIMUM_CLASS - Obtain Maximum Class Value
-----------------------------------------------------
.. index:: obtain maximum class value
**CALLING SEQUENCE:**
.. code:: c
procedure Object_API_Maximum_Class(
API : in RTEMS.Unsigned32;
Maximum : out RTEMS.Unsigned32
);
**DIRECTIVE STATUS CODES**
If ``api`` is not valid, -1 is returned.
If successful, this service returns the maximum valid value for the
class portion of an object Id for the specified ``api``.
**DESCRIPTION:**
This service returns the maximum valid value for the class portion of
an object Id for the specified ``api``.
**NOTES:**
This directive is strictly local and does not impact task scheduling.
OBJECT_GET_API_NAME - Obtain API Name
-------------------------------------
.. index:: obtain API name
**CALLING SEQUENCE:**
.. code:: c
procedure Object_Get_API_Name(
API : in RTEMS.Unsigned32;
Name : out String
);
**DIRECTIVE STATUS CODES**
If ``api`` is not valid, the string ``"BAD API"`` is returned.
If successful, this service returns a pointer to a string
containing the name of the specified ``api``.
**DESCRIPTION:**
This service returns the name of the specified ``api``.
**NOTES:**
This directive is strictly local and does not impact task scheduling.
The string returned is from constant space. Do not modify
or free it.
OBJECT_GET_API_CLASS_NAME - Obtain Class Name
---------------------------------------------
.. index:: obtain class name
**CALLING SEQUENCE:**
.. code:: c
procedure Object_Get_API_Class_Name(
The_API : in RTEMS.Unsigned32;
The_Class : in RTEMS.Unsigned32;
Name : out String
);
**DIRECTIVE STATUS CODES**
If ``the_api`` is not valid, the string ``"BAD API"`` is returned.
If ``the_class`` is not valid, the string ``"BAD CLASS"`` is returned.
If successful, this service returns a pointer to a string
containing the name of the specified ``the_api``/``the_class`` pair.
**DESCRIPTION:**
This service returns the name of the object class indicated by the
specified ``the_api`` and ``the_class``.
**NOTES:**
This directive is strictly local and does not impact task scheduling.
The string returned is from constant space. Do not modify
or free it.
OBJECT_GET_CLASS_INFORMATION - Obtain Class Information
-------------------------------------------------------
.. index:: obtain class information
**CALLING SEQUENCE:**
.. code:: c
procedure Object_Get_Class_Information(
The_API : in RTEMS.Unsigned32;
The_Class : in RTEMS.Unsigned32;
Info : out RTEMS.Object_API_Class_Information;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES**
``RTEMS.SUCCESSFUL`` - information obtained successfully
``RTEMS.INVALID_ADDRESS`` - ``info`` is NULL
``RTEMS.INVALID_NUMBER`` - invalid ``api`` or ``the_class``
If successful, the structure located at ``info`` will be filled
in with information about the specified ``api``/``the_class`` pairing.
**DESCRIPTION:**
This service returns information about the object class indicated by the
specified ``api`` and ``the_class``. This structure is defined as
follows:
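A sketch of the information record, modeled on the C API's
``rtems_object_api_class_information``; the field names and types here
follow the C headers and should be treated as assumptions of this
sketch rather than the definitive Ada record:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Per-class object information returned by this directive. */
typedef struct {
  uint32_t minimum_id;   /* lowest valid object Id for this class */
  uint32_t maximum_id;   /* highest valid object Id for this class */
  uint32_t maximum;      /* maximum number of objects configured */
  bool     auto_extend;  /* true if the class grows on demand */
  uint32_t unallocated;  /* number of objects currently unallocated */
} object_api_class_information;
```

For instance, subtracting ``unallocated`` from ``maximum`` yields the
number of objects of the class currently in use.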
**NOTES:**
This directive is strictly local and does not impact task scheduling.
.. COMMENT: COPYRIGHT (c) 1988-2008.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.
Overview
########
Introduction
============
RTEMS, Real-Time Executive for Multiprocessor Systems, is a
real-time executive (kernel) which provides a high performance
environment for embedded military applications including the
following features:
- multitasking capabilities
- homogeneous and heterogeneous multiprocessor systems
- event-driven, priority-based, preemptive scheduling
- optional rate monotonic scheduling
- intertask communication and synchronization
- priority inheritance
- responsive interrupt management
- dynamic memory allocation
- high level of user configurability
This manual describes the usage of RTEMS for
applications written in the Ada programming language. Those
implementation details that are processor dependent are provided
in the Applications Supplement documents. A supplement
document which addresses specific architectural issues that
affect RTEMS is provided for each processor type that is
supported.
Real-time Application Systems
=============================
Real-time application systems are a special class of
computer applications. They have a complex set of
characteristics that distinguish them from other software
problems. Generally, they must adhere to more rigorous
requirements. The correctness of the system depends not only on
the results of computations, but also on the time at which the
results are produced. The most important and complex
characteristic of real-time application systems is that they
must receive and respond to a set of external stimuli within
rigid and critical time constraints referred to as deadlines.
Systems can be buried by an avalanche of interdependent,
asynchronous or cyclical event streams.
Deadlines can be further characterized as either hard
or soft based upon the value of the results when produced after
the deadline has passed. A deadline is hard if the results have
no value or if their use will result in a catastrophic event.
In contrast, results which are produced after a soft deadline
may have some value.
Another distinguishing requirement of real-time
application systems is the ability to coordinate or manage a
large number of concurrent activities. Since software is a
synchronous entity, this presents special problems. One
instruction follows another in a repeating synchronous cycle.
Even though mechanisms have been developed to allow for the
processing of external asynchronous events, the software design
efforts required to process and manage these events and tasks
are growing more complicated.
The design process is complicated further by
spreading this activity over a set of processors instead of a
single processor. The challenges associated with designing and
building real-time application systems become very complex when
multiple processors are involved. New requirements such as
interprocessor communication channels and global resources that
must be shared between competing processors are introduced. The
ramifications of multiple processors complicate each and every
characteristic of a real-time system.
Real-time Executive
===================
Fortunately, real-time operating systems or real-time
executives serve as a cornerstone on which to build the
application system. A real-time multitasking executive allows
an application to be cast into a set of logical, autonomous
processes or tasks which become quite manageable. Each task is
internally synchronous, but different tasks execute
independently, resulting in an asynchronous processing stream.
Tasks can be dynamically paused for many reasons resulting in a
different task being allowed to execute for a period of time.
The executive also provides an interface to other system
components such as interrupt handlers and device drivers.
System components may request the executive to allocate and
coordinate resources, and to wait for and trigger synchronizing
conditions. The executive system calls effectively extend the
CPU instruction set to support efficient multitasking. By
causing tasks to travel through well-defined state transitions,
system calls permit an application to demand-switch between
tasks in response to real-time events.
By proper grouping of responses to stimuli into
separate tasks, a system can now asynchronously switch between
independent streams of execution, directly responding to
external stimuli as they occur. This allows the system design
to meet critical performance specifications which are typically
measured by guaranteed response time and transaction throughput.
The multiprocessor extensions of RTEMS provide the features
necessary to manage the extra requirements introduced by a
system distributed across several processors. It removes the
physical barriers of processor boundaries from the world of the
system designer, enabling more critical aspects of the system to
receive the required attention. Such a system, based on an
efficient real-time, multiprocessor executive, is a more
realistic model of the outside world or environment for which it
is designed. As a result, the system will always be more
logical, efficient, and reliable.
By using the directives provided by RTEMS, the
real-time applications developer is freed from the problem of
controlling and synchronizing multiple tasks and processors. In
addition, one need not develop, test, debug, and document
routines to manage memory, pass messages, or provide mutual
exclusion. The developer is then able to concentrate solely on
the application. By using standard software components, the
time and cost required to develop sophisticated real-time
applications is significantly reduced.
RTEMS Application Architecture
==============================
One important design goal of RTEMS was to provide a
bridge between two critical layers of typical real-time systems.
As shown in the following figure, RTEMS serves as a buffer between the
project dependent application code and the target hardware.
Most hardware dependencies for real-time applications can be
localized to the low level device drivers.
.. code:: c
+-----------------------------------------------------------+
| Application Dependent Software |
| +----------------------------------------+ |
| | Standard Application Components | |
| | +-------------+---+ |
| +---+-----------+ | | |
| | Board Support | | RTEMS | |
| | Package | | | |
+----+---------------+--------------+-----------------+-----+
| Target Hardware |
+-----------------------------------------------------------+
The RTEMS I/O interface manager provides an efficient tool for incorporating
these hardware dependencies into the system while simultaneously
providing a general mechanism to the application code that
accesses them. A well designed real-time system can benefit
from this architecture by building a rich library of standard
application components which can be used repeatedly in other
real-time projects.
RTEMS Internal Architecture
===========================
RTEMS can be viewed as a set of layered components that work in
harmony to provide a set of services to a real-time application
system. The executive interface presented to the application is
formed by grouping directives into logical sets called resource managers.
Functions utilized by multiple managers such as scheduling,
dispatching, and object management are provided in the executive
core. The executive core depends on a small set of CPU dependent routines.
Together these components provide a powerful run time
environment that promotes the development of efficient real-time
application systems. The following figure illustrates this organization:
.. code:: c
+-----------------------------------------------+
| RTEMS Executive Interface |
+-----------------------------------------------+
| RTEMS Core |
+-----------------------------------------------+
| CPU Dependent Code |
+-----------------------------------------------+
Subsequent chapters present a detailed description of the capabilities
provided by each of the following RTEMS managers:
- initialization
- task
- interrupt
- clock
- timer
- semaphore
- message
- event
- signal
- partition
- region
- dual ported memory
- I/O
- fatal error
- rate monotonic
- user extensions
- multiprocessing
User Customization and Extensibility
====================================
As thirty-two bit microprocessors have decreased in
cost, they have become increasingly common in a variety of
embedded systems. A wide range of custom and general-purpose
processor boards are based on various thirty-two bit processors.
RTEMS was designed to make no assumptions concerning the
characteristics of individual microprocessor families or of
specific support hardware. In addition, RTEMS allows the system
developer a high degree of freedom in customizing and extending
its features.
RTEMS assumes the existence of a supported
microprocessor and sufficient memory for both RTEMS and the
real-time application. Board dependent components such as
clocks, interrupt controllers, or I/O devices can be easily
integrated with RTEMS. The customization and extensibility
features allow RTEMS to efficiently support as many environments
as possible.
Portability
===========
The issue of portability was the major factor in the
creation of RTEMS. Since RTEMS is designed to isolate the
hardware dependencies in the specific board support packages,
the real-time application should be easily ported to any other
processor. The use of RTEMS allows the development of real-time
applications which can be completely independent of a particular
microprocessor architecture.
Memory Requirements
===================
Since memory is a critical resource in many real-time
embedded systems, RTEMS was specifically designed to automatically
leave out all services that are not required from the run-time
environment. Features such as networking, various filesystems,
and many other features are completely optional. This allows
the application designer the flexibility to tailor RTEMS to most
efficiently meet system requirements while still satisfying even
the most stringent memory constraints. As a result, the size
of the RTEMS executive is application dependent.
RTEMS requires RAM to manage each instance of an RTEMS object
that is created. Thus the more RTEMS objects an application
needs, the more memory that must be reserved. See `Configuring a System`_ for more details.
RTEMS utilizes memory for both code and data space.
Although RTEMS data space must be in RAM, its code space can be
located in either ROM or RAM.
Audience
========
This manual was written for experienced real-time
software developers. Although some background is provided, it
is assumed that the reader is familiar with the concepts of task
management as well as intertask communication and
synchronization. Since directives, user related data
structures, and examples are presented in Ada, a basic
understanding of the Ada programming language
is required to fully
understand the material presented. However, because of the
similarity of the Ada and C RTEMS implementations, users will
find that the use and behavior of the two implementations is
very similar. A working knowledge of the target processor is
helpful in understanding some of RTEMS features. A thorough
understanding of the executive cannot be obtained without
studying the entire manual because many of RTEMS concepts and
features are interrelated. Experienced RTEMS users will find
that the manual organization facilitates its use as a reference
document.
Conventions
===========
The following conventions are used in this manual:
- Significant words or phrases as well as all directive
names are printed in bold type.
- Items in bold capital letters are constants defined by
RTEMS. Each language interface provided by RTEMS includes a
file containing the standard set of constants, data types, and
record definitions which can be incorporated into the user
application.
- A number of type definitions are provided by RTEMS and
can be found in rtems.h.
- The characters "0x" preceding a number indicate that
the number is in hexadecimal format. Any other numbers are
assumed to be in decimal format.
Manual Organization
===================
This first chapter has presented the introductory and
background material for the RTEMS executive. The remaining
chapters of this manual present a detailed description of RTEMS
and the environment, including run time behavior, it creates for
the user.
A chapter is dedicated to each manager and provides a
detailed discussion of each RTEMS manager and the directives
which it provides. The presentation format for each directive
includes the following sections:
- Calling sequence
- Directive status codes
- Description
- Notes
The following provides an overview of the remainder
of this manual:
Chapter 2:
Key Concepts: presents an introduction to the ideas which are common
across multiple RTEMS managers.
Chapter 3:
RTEMS Data Types: describes the fundamental data types shared
by the services in the RTEMS Classic API.
Chapter 4:
Scheduling Concepts: details the various RTEMS scheduling algorithms
and task state transitions.
Chapter 5:
Initialization Manager: describes the functionality and directives
provided by the Initialization Manager.
Chapter 6:
Task Manager: describes the functionality and directives provided
by the Task Manager.
Chapter 7:
Interrupt Manager: describes the functionality and directives
provided by the Interrupt Manager.
Chapter 8:
Clock Manager: describes the functionality and directives
provided by the Clock Manager.
Chapter 9:
Timer Manager: describes the functionality and directives provided
by the Timer Manager.
Chapter 10:
Rate Monotonic Manager: describes the functionality and directives
provided by the Rate Monotonic Manager.
Chapter 11:
Semaphore Manager: describes the functionality and directives
provided by the Semaphore Manager.
Chapter 12:
Barrier Manager: describes the functionality and directives
provided by the Barrier Manager.
Chapter 13:
Message Manager: describes the functionality and directives
provided by the Message Manager.
Chapter 14:
Event Manager: describes the
functionality and directives provided by the Event Manager.
Chapter 15:
Signal Manager: describes the
functionality and directives provided by the Signal Manager.
Chapter 16:
Partition Manager: describes the
functionality and directives provided by the Partition Manager.
Chapter 17:
Region Manager: describes the
functionality and directives provided by the Region Manager.
Chapter 18:
Dual-Ported Memory Manager: describes
the functionality and directives provided by the Dual-Ported
Memory Manager.
Chapter 19:
I/O Manager: describes the
functionality and directives provided by the I/O Manager.
Chapter 20:
Fatal Error Manager: describes the functionality and directives
provided by the Fatal Error Manager.
Chapter 21:
Board Support Packages: defines the
functionality required of user-supplied board support packages.
Chapter 22:
User Extensions: shows the user how to
extend RTEMS to incorporate custom features.
Chapter 23:
Configuring a System: details the process by which one tailors RTEMS
for a particular single-processor or multiprocessor application.
Chapter 24:
Multiprocessing Manager: presents a
conceptual overview of the multiprocessing capabilities provided
by RTEMS as well as describing the Multiprocessing
Communications Interface Layer and Multiprocessing Manager
directives.
Chapter 25:
Stack Bounds Checker: presents the capabilities of the RTEMS
task stack checker which can report stack usage as well as detect
bounds violations.
Chapter 26:
CPU Usage Statistics: presents the capabilities of the CPU Usage
statistics gathered on a per task basis along with the mechanisms
for reporting and resetting the statistics.
Chapter 27:
Object Services: presents a collection of helper services useful
when manipulating RTEMS objects. These include methods to assist
in obtaining an object's name in printable form. Additional services
are provided to decompose an object Id and determine which API
and object class it belongs to.
Chapter 28:
Chains: presents the methods provided to build, iterate and
manipulate doubly-linked chains. This manager makes the
chain implementation used internally by RTEMS available to
user space applications.
Chapter 29:
Timespec Helpers: presents a set of helper services useful
when manipulating POSIX ``struct timespec`` instances.
Chapter 30:
Constant Bandwidth Server Scheduler API.
Chapter 31:
Directive Status Codes: provides a definition of each of the
directive status codes referenced in this manual.
Chapter 32:
Example Application: provides a template for simple RTEMS applications.
Chapter 33:
Glossary: defines terms used throughout this manual.
.. COMMENT: COPYRIGHT (c) 1988-2007.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.
.. COMMENT: The following figure was replaced with an ASCII equivalent.
.. COMMENT: Figure 2-1 Object ID Composition
Partition Manager
#################
.. index:: partitions
Introduction
============
The partition manager provides facilities to
dynamically allocate memory in fixed-size units. The directives
provided by the partition manager are:
- ``rtems.partition_create`` - Create a partition
- ``rtems.partition_ident`` - Get ID of a partition
- ``rtems.partition_delete`` - Delete a partition
- ``rtems.partition_get_buffer`` - Get buffer from a partition
- ``rtems.partition_return_buffer`` - Return buffer to a partition
Background
==========
Partition Manager Definitions
-----------------------------
.. index:: partition, definition
A partition is a physically contiguous memory area
divided into fixed-size buffers that can be dynamically
allocated and deallocated.
.. index:: buffers, definition
Partitions are managed and maintained as a list of
buffers. Buffers are obtained from the front of the partitions
free buffer chain and returned to the rear of the same chain.
When a buffer is on the free buffer chain, RTEMS uses two
pointer-sized words of memory from each buffer to maintain the chain.
When a buffer is allocated, the entire buffer is available for application use.
Therefore, modifying memory that is outside of an allocated
buffer could destroy the free buffer chain or the contents of an
adjacent allocated buffer.
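The free-buffer-chain scheme described above can be sketched in C.
While a buffer is free, its first two pointer-sized words hold the
chain links; once allocated, the whole buffer belongs to the
application. This is an illustration of the scheme, not the RTEMS
implementation; all names here are invented for the sketch:

```c
#include <stddef.h>

/* Chain links stored inside each free buffer. */
typedef struct free_node {
  struct free_node *next;
  struct free_node *prev;
} free_node;

typedef struct {
  free_node head;           /* sentinel for the free buffer chain */
} partition;

/* Carve the memory area into buffers and chain them all as free.
   The area must be pointer-aligned and each buffer must be large
   enough to hold two pointers. */
static void partition_init(partition *p, void *area,
                           size_t length, size_t buffer_size)
{
  p->head.next = p->head.prev = &p->head;
  char *cursor = area;
  for (size_t n = length / buffer_size; n > 0; --n, cursor += buffer_size) {
    free_node *node = (free_node *) cursor;
    node->prev = p->head.prev;          /* append to the rear */
    node->next = &p->head;
    p->head.prev->next = node;
    p->head.prev = node;
  }
}

static void *partition_get_buffer(partition *p)
{
  free_node *node = p->head.next;       /* take from the front */
  if (node == &p->head)
    return NULL;                        /* no buffer available */
  node->prev->next = node->next;
  node->next->prev = node->prev;
  return node;
}

static void partition_return_buffer(partition *p, void *buffer)
{
  free_node *node = buffer;             /* return to the rear */
  node->prev = p->head.prev;
  node->next = &p->head;
  p->head.prev->next = node;
  p->head.prev = node;
}
```

Note how a returned buffer goes to the rear of the chain while
allocation takes from the front, exactly as the text describes.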
Building a Partition Attribute Set
----------------------------------
.. index:: partition attribute set, building
In general, an attribute set is built by a bitwise OR
of the desired attribute components. The set of valid partition
attributes is provided in the following table:
- ``RTEMS.LOCAL`` - local partition (default)
- ``RTEMS.GLOBAL`` - global partition
Attribute values are specifically designed to be
mutually exclusive, therefore bitwise OR and addition operations
are equivalent as long as each attribute appears exactly once in
the component list. An attribute listed as a default is not
required to appear in the attribute list, although it is a good
programming practice to specify default attributes. If all
defaults are desired, the attribute ``RTEMS.DEFAULT_ATTRIBUTES`` should
be specified on this call. The attribute_set parameter should be
``RTEMS.GLOBAL`` to indicate that the partition
is to be known globally.
Operations
==========
Creating a Partition
--------------------
The ``rtems.partition_create`` directive creates a partition
with a user-specified name. The partition's name, starting
address, length, and buffer size are all specified to the
``rtems.partition_create`` directive.
RTEMS allocates a Partition Control
Block (PTCB) from the PTCB free list. This data structure is
used by RTEMS to manage the newly created partition. The number
of buffers in the partition is calculated based upon the
specified partition length and buffer size. If successful, the
unique partition ID is returned to the calling task.
Obtaining Partition IDs
-----------------------
When a partition is created, RTEMS generates a unique
partition ID and assigns it to the created partition until it
is deleted. The partition ID may be obtained by either of two
methods. First, as the result of an invocation of the
``rtems.partition_create`` directive, the partition
ID is stored in a user provided location. Second, the partition
ID may be obtained later using the ``rtems.partition_ident``
directive. The partition ID is used by other partition manager directives
to access this partition.
Acquiring a Buffer
------------------
A buffer can be obtained by calling the ``rtems.partition_get_buffer`` directive.
If a buffer is available, then
it is returned immediately with a successful return code.
Otherwise, an unsuccessful return code is returned immediately
to the caller. Tasks cannot block to wait for a buffer to
become available.
Releasing a Buffer
------------------
Buffers are returned to a partition's free buffer
chain with the ``rtems.partition_return_buffer`` directive. This
directive returns an error status code if the returned buffer
was not previously allocated from this partition.
Deleting a Partition
--------------------
The ``rtems.partition_delete`` directive allows a partition to
be removed and returned to RTEMS. When a partition is deleted,
the PTCB for that partition is returned to the PTCB free list.
A partition with buffers still allocated cannot be deleted. Any
task attempting to do so will be returned an error status code.
Directives
==========
This section details the partition manager's
directives. A subsection is dedicated to each of this manager's
directives and describes the calling sequence, related
constants, usage, and status codes.
PARTITION_CREATE - Create a partition
-------------------------------------
.. index:: create a partition
**CALLING SEQUENCE:**
.. code:: c
procedure Partition_Create (
Name : in RTEMS.Name;
Starting_Address : in RTEMS.Address;
Length : in RTEMS.Unsigned32;
Buffer_Size : in RTEMS.Unsigned32;
Attribute_Set : in RTEMS.Attribute;
ID : out RTEMS.ID;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - partition created successfully
``RTEMS.INVALID_NAME`` - invalid partition name
``RTEMS.TOO_MANY`` - too many partitions created
``RTEMS.INVALID_ADDRESS`` - address not on four byte boundary
``RTEMS.INVALID_ADDRESS`` - ``starting_address`` is NULL
``RTEMS.INVALID_ADDRESS`` - ``id`` is NULL
``RTEMS.INVALID_SIZE`` - length or buffer size is 0
``RTEMS.INVALID_SIZE`` - length is less than the buffer size
``RTEMS.INVALID_SIZE`` - buffer size not a multiple of 4
``RTEMS.MP_NOT_CONFIGURED`` - multiprocessing not configured
``RTEMS.TOO_MANY`` - too many global objects
**DESCRIPTION:**
This directive creates a partition of fixed size
buffers from a physically contiguous memory space which starts
at ``starting_address`` and is ``length`` bytes in size. Each allocated
buffer is to be of ``buffer_size`` in bytes. The assigned
partition id is returned in ``id``. This partition id is used to
access the partition with other partition related directives.
For control and maintenance of the partition, RTEMS allocates a
PTCB from the local PTCB free pool and initializes it.
**NOTES:**
This directive will not cause the calling task to be
preempted.
The ``starting_address`` must be properly aligned for the
target architecture.
The ``buffer_size`` parameter must be a multiple of
the CPU alignment factor. Additionally, ``buffer_size``
must be large enough to hold two pointers on the target
architecture. This is required for RTEMS to manage the
buffers when they are free.
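As a concrete illustration of these constraints, an application might check a candidate ``buffer_size`` before creating the partition. The helper name is hypothetical and not part of any RTEMS API:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical helper mirroring the documented buffer_size rules: it must
 * be a multiple of the CPU alignment factor and large enough to hold the
 * two pointers used to chain free buffers. Not part of the RTEMS API. */
static bool buffer_size_is_valid(size_t buffer_size, size_t cpu_alignment)
{
    return buffer_size >= 2 * sizeof(void *)
        && buffer_size % cpu_alignment == 0;
}
```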
Memory from the partition is not used by RTEMS to
store the Partition Control Block.
The following partition attribute constants are
defined by RTEMS:
- ``RTEMS.LOCAL`` - local partition (default)
- ``RTEMS.GLOBAL`` - global partition
The PTCB for a global partition is allocated on the
local node. The memory space used for the partition must reside
in shared memory. Partitions should not be made global unless
remote tasks must interact with the partition. This is to avoid
the overhead incurred by the creation of a global partition.
When a global partition is created, the partition's name and ID
must be transmitted to every node in the system for insertion in
the local copy of the global object table.
The total number of global objects, including
partitions, is limited by the maximum_global_objects field in
the Configuration Table.
PARTITION_IDENT - Get ID of a partition
---------------------------------------
.. index:: get ID of a partition
.. index:: obtain ID of a partition
**CALLING SEQUENCE:**
.. code:: ada

    procedure Partition_Ident (
        Name   : in     RTEMS.Name;
        Node   : in     RTEMS.Unsigned32;
        ID     :    out RTEMS.ID;
        Result :    out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - partition identified successfully
``RTEMS.INVALID_ADDRESS`` - ``id`` is NULL
``RTEMS.INVALID_NAME`` - partition name not found
``RTEMS.INVALID_NODE`` - invalid node id
**DESCRIPTION:**
This directive obtains the partition id associated
with the partition name. If the partition name is not unique,
then the partition id will match one of the partitions with that
name. However, this partition id is not guaranteed to
correspond to the desired partition. The partition id is used
with other partition related directives to access the partition.
**NOTES:**
This directive will not cause the running task to be
preempted.
If node is ``RTEMS.SEARCH_ALL_NODES``, all nodes are searched
with the local node being searched first. All other nodes are
searched with the lowest numbered node searched first.
If node is a valid node number which does not
represent the local node, then only the partitions exported by
the designated node are searched.
This directive does not generate activity on remote
nodes. It accesses only the local copy of the global object
table.
PARTITION_DELETE - Delete a partition
-------------------------------------
.. index:: delete a partition
**CALLING SEQUENCE:**
.. code:: ada

    procedure Partition_Delete (
        ID     : in     RTEMS.ID;
        Result :    out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - partition deleted successfully
``RTEMS.INVALID_ID`` - invalid partition id
``RTEMS.RESOURCE_IN_USE`` - buffers still in use
``RTEMS.ILLEGAL_ON_REMOTE_OBJECT`` - cannot delete remote partition
**DESCRIPTION:**
This directive deletes the partition specified by id.
The partition cannot be deleted if any of its buffers are still
allocated. The PTCB for the deleted partition is reclaimed by
RTEMS.
**NOTES:**
This directive will not cause the calling task to be
preempted.
The calling task does not have to be the task that
created the partition. Any local task that knows the partition
id can delete the partition.
When a global partition is deleted, the partition id
must be transmitted to every node in the system for deletion
from the local copy of the global object table.
The partition must reside on the local node, even if
the partition was created with the ``RTEMS.GLOBAL`` option.
PARTITION_GET_BUFFER - Get buffer from a partition
--------------------------------------------------
.. index:: get buffer from partition
.. index:: obtain buffer from partition
**CALLING SEQUENCE:**
.. code:: ada

    procedure Partition_Get_Buffer (
        ID     : in     RTEMS.ID;
        Buffer :    out RTEMS.Address;
        Result :    out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - buffer obtained successfully
``RTEMS.INVALID_ADDRESS`` - ``buffer`` is NULL
``RTEMS.INVALID_ID`` - invalid partition id
``RTEMS.UNSATISFIED`` - all buffers are allocated
**DESCRIPTION:**
This directive allows a buffer to be obtained from
the partition specified in id. The address of the allocated
buffer is returned in buffer.
**NOTES:**
This directive will not cause the running task to be
preempted.
All buffers begin on a four byte boundary.
A task cannot wait on a buffer to become available.
Getting a buffer from a global partition which does
not reside on the local node will generate a request telling the
remote node to allocate a buffer from the specified partition.
PARTITION_RETURN_BUFFER - Return buffer to a partition
------------------------------------------------------
.. index:: return buffer to partition
**CALLING SEQUENCE:**
.. code:: ada

    procedure Partition_Return_Buffer (
        ID     : in     RTEMS.ID;
        Buffer : in     RTEMS.Address;
        Result :    out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - buffer returned successfully
``RTEMS.INVALID_ADDRESS`` - ``buffer`` is NULL
``RTEMS.INVALID_ID`` - invalid partition id
``RTEMS.INVALID_ADDRESS`` - buffer address not in partition
**DESCRIPTION:**
This directive returns the buffer specified by buffer
to the partition specified by id.
**NOTES:**
This directive will not cause the running task to be
preempted.
Returning a buffer to a global partition which does
not reside on the local node will generate a request telling the
remote node to return the buffer to the specified partition.
Returning a buffer multiple times is an error. It will corrupt the internal
state of the partition.
.. COMMENT: COPYRIGHT (c) 1988-2002.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.

PCI Library
###########
.. index:: libpci
Introduction
============
The Peripheral Component Interconnect (PCI) bus is a very common computer
bus architecture that is found in almost every PC today. The PCI bus is
normally located at the motherboard where some PCI devices are soldered
directly onto the PCB and expansion slots allows the user to add custom
devices easily. There is a wide range of PCI hardware available implementing
all sorts of interfaces and functions.
This section describes the PCI Library available in RTEMS used to access the
PCI bus in a portable way across computer architectures supported by RTEMS.
The PCI Library aims to be compatible with PCI 2.3 with a couple of
limitations, for example there is no support for hot-plugging, 64-bit
memory space and cardbus bridges.
In order to support different architectures, and with small-footprint embedded
systems in mind, the PCI Library offers four different configuration options
listed below. It is selected during compile time by defining the appropriate
macros in confdefs.h. It is also possible to enable PCI_LIB_NONE (No
Configuration) which can be used for debugging PCI access functions.
- Auto Configuration (do Plug & Play)
- Read Configuration (read BIOS or boot loader configuration)
- Static Configuration (write user defined configuration)
- Peripheral Configuration (no access to cfg-space)
Background
==========
The PCI bus is constructed in a way where on-board devices and devices
in expansion slots can be automatically found (probed) and configured
using Plug & Play completely implemented in software. The bus is set up once
during boot up. The Plug & Play information can be read and written from
PCI configuration space. A PCI device is identified in configuration space by
a unique bus, slot and function number. Each PCI slot can have up to 8
functions and interface to another PCI sub-bus by implementing a PCI-to-PCI
bridge according to the PCI Bridge Architecture specification.
Using the unique [bus:slot:func] any device can be configured regardless of how
PCI is currently set up as long as all PCI buses are enumerated correctly. The
enumeration is done during probing, all bridges are given a bus number in
order for the bridges to respond to accesses from both directions. The PCI
library can assign address ranges to which a PCI device should respond using
Plug & Play technique or a static user defined configuration. After the
configuration has been performed the PCI device drivers can find devices by
the read-only PCI Class type, Vendor ID and Device ID information found in
configuration space for each device.
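The [bus:slot:func] triplet is compact enough to pack into a single integer handle, which is the spirit of the library's pci_dev_t type. A sketch of one possible encoding follows; the field widths mirror PCI configuration addressing, but the exact layout RTEMS uses is an assumption here:

```c
#include <stdint.h>

/* Hypothetical packing of [bus:slot:func] into one handle, in the spirit
 * of pci_dev_t: 8-bit bus, 5-bit slot, 3-bit function. The layout used
 * by the RTEMS PCI library itself may differ. */
typedef uint16_t pcidev_sketch;

static pcidev_sketch pcidev_make(unsigned bus, unsigned slot, unsigned func)
{
    return (pcidev_sketch)(((bus & 0xffu) << 8) |
                           ((slot & 0x1fu) << 3) |
                            (func & 0x7u));
}

static unsigned pcidev_bus(pcidev_sketch d)  { return (d >> 8) & 0xffu; }
static unsigned pcidev_slot(pcidev_sketch d) { return (d >> 3) & 0x1fu; }
static unsigned pcidev_func(pcidev_sketch d) { return d & 0x7u; }
```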
In some systems there is a boot loader or BIOS which have already configured
all PCI devices, but on embedded targets it is quite common that there is no
BIOS or boot loader, thus RTEMS must configure the PCI bus. Only the PCI host
may do configuration space access, the host driver or BSP is responsible to
translate the [bus:slot:func] into a valid PCI configuration space access.
If the target is not a host, but a peripheral, configuration space can not be
accessed, the peripheral is set up by the host during start up. In complex
embedded PCI systems the peripheral may need to access other PCI boards than
the host. In such systems a custom (static) configuration of both the host
and peripheral may be a convenient solution.
The PCI bus defines four interrupt signals INTA#..INTD#. The interrupt signals
must be mapped into a system interrupt/vector, it is up to the BSP or host
driver to know the mapping, however the BIOS or boot loader may use the
8-bit read/write "Interrupt Line" register to pass the knowledge along to the
OS.
The PCI standard recommends that the backplane route the interrupt
lines in a systematic way; however, the standard does not require it.
The PCI Auto Configuration Library implements the recommended way of routing
which is very common but it is also supported to some extent to override the
interrupt routing from the BSP or Host Bridge driver using the configuration
structure.
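The recommended routing mentioned above is the well-known INTx "swizzle": the interrupt pin a device drives is rotated by its slot (device) number when crossing a PCI-to-PCI bridge. A minimal sketch of that rule, with an illustrative function name:

```c
/* Recommended INTx routing across a PCI-to-PCI bridge: the pin seen on
 * the parent bus rotates with the device's slot number.
 * pin is 1..4 for INTA#..INTD#; the function name is illustrative only. */
static unsigned pci_swizzle(unsigned slot, unsigned pin)
{
    return ((pin - 1 + slot) % 4) + 1;
}
```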
Software Components
-------------------
The PCI library is located in cpukit/libpci; it consists of several parts:
- PCI Host bridge driver interface
- Configuration routines
- Access (Configuration, I/O and Memory space) routines
- Interrupt routines (implemented by BSP)
- Print routines
- Static/peripheral configuration creation
- PCI shell command
PCI Configuration
-----------------
During start up the PCI bus must be configured so that the host and
peripherals can access one another using Memory or I/O accesses and so that
interrupts are properly handled. Three different spaces are defined and
mapped separately:
#. I/O space (IO)

#. non-prefetchable Memory space (MEMIO)

#. prefetchable Memory space (MEM)
Regions of the same type (I/O or Memory) may not overlap which is guaranteed
by the software. MEM regions may be mapped into MEMIO regions, but MEMIO
regions can not be mapped into MEM, as that could lead to prefetching of
registers. The interrupt pin which a board is driving can be read out from
PCI configuration space, however it is up to software to know how interrupt
signals are routed between PCI-to-PCI bridges and how PCI INT[A..D]# pins are
mapped to system IRQ. In systems where previous software (boot loader or BIOS)
has already set this up, the configuration is overwritten or simply read out.
In order to support different configuration methods the following configuration
libraries are selectable by the user:
- Auto Configuration (run Plug & Play software)
- Read Configuration (relies on a boot loader or BIOS)
- Static Configuration (write user defined setup, no Plug & Play)
- Peripheral Configuration (user defined setup, no access to
configuration space)
A host driver can be made to support all configuration methods, or any
combination. It may be defined by the BSP which approach is used.
The configuration software is called from the PCI driver (pci_config_init()).
Regardless of configuration method a PCI device tree is created in RAM during
initialization; the tree can be accessed to find devices and resources without
accessing configuration space later on. The user is responsible to create the
device tree at compile time when using the static/peripheral method.
RTEMS Configuration selection
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The active configuration method can be selected at compile time in the same
way as other project parameters by including rtems/confdefs.h and setting
- CONFIGURE_INIT
- RTEMS_PCI_CONFIG_LIB
- CONFIGURE_PCI_LIB = PCI_LIB_(AUTO,STATIC,READ,PERIPHERAL)
See the RTEMS configuration section for how to set up the PCI library.
Auto Configuration
~~~~~~~~~~~~~~~~~~
The auto configuration software enumerates PCI buses and initializes all PCI
devices found using Plug & Play. The auto configuration software requires
that a configuration setup has been registered by the driver or BSP in order
to setup the I/O and Memory regions at the correct address ranges. PCI
interrupt pins can optionally be routed over PCI-to-PCI bridges and mapped
to a system interrupt number. BAR resources are sorted by size and required
alignment, unused "dead" space may be created when PCI bridges are present
because the PCI bridge window size does not equal the alignment. To cope with
that, resources are reordered to fit smaller BARs into the dead space to minimize
the PCI space required. If a BAR or ROM register can not be allocated a PCI
address region (due to too few resources available) the register will be given
the value of pci_invalid_address which defaults to 0.
The auto configuration routines support:
- PCI 2.3
- Little and big endian PCI bus
- one I/O 16 or 32-bit range (IO)
- memory space (MEMIO)
- prefetchable memory space (MEM), if not present MEM will be mapped into
MEMIO
- multiple PCI buses - PCI-to-PCI bridges
- standard BARs, PCI-to-PCI bridge BARs, ROM BARs
- Interrupt routing over bridges
- Interrupt pin to system interrupt mapping
Not supported:
- hot-pluggable devices
- Cardbus bridges
- 64-bit memory space
- 16-bit and 32-bit I/O address ranges at the same time
In PCI 2.3 there may exist I/O BARs that must be located in the low 64 kbytes
of the address range; to support this, the host driver or BSP must make sure
that the I/O address region lies within this range.
Read Configuration
~~~~~~~~~~~~~~~~~~
When a BIOS or boot loader has already set up the PCI bus, the configuration can
be read directly from the PCI resource registers and the buses are already
enumerated; this is a much simpler approach than configuring PCI ourselves. The
PCI device tree is automatically created based on the current configuration and
devices present. After initialization is done there is no difference between
the auto or read configuration approaches.
Static Configuration
~~~~~~~~~~~~~~~~~~~~
To support custom configurations and small-footprint PCI systems, the user may
provide the PCI device tree which contains the current configuration. The
PCI buses are enumerated and all resources are written to PCI devices during
initialization. When this approach is selected PCI boards must be located at
the same slots every time and devices can not be removed or added, Plug & Play
is not performed. Boards of the same type may of course be exchanged.
The user can create a configuration by calling pci_cfg_print() on a running
system that has had PCI set up by the auto or read configuration routines; it
can also be called from the PCI shell command. The user must provide the PCI device
tree named pci_hb.
Peripheral Configuration
~~~~~~~~~~~~~~~~~~~~~~~~
On systems where a peripheral PCI device needs to access other PCI devices than
the host, the peripheral configuration approach may be handy. Most PCI devices
answer the PCI host's requests and start DMA accesses into the host's memory;
however, in some complex systems PCI devices may want to access other devices
on the same bus or at another PCI bus.
A PCI peripheral is not allowed to do PCI configuration cycles, which
means that it must either rely on the host to give it the addresses it
needs, or that the addresses are predefined.
This configuration approach is very similar to the static option; however, the
configuration is never written to the PCI bus. Instead it is only used for drivers
to find PCI devices and resources using the same PCI API as for the host.
PCI Access
----------
The PCI access routines are low-level routines provided for drivers,
configuration software, etc. in order to access different regions in a way
not dependent upon the host driver, BSP or platform.
- PCI configuration space
- PCI I/O space
- Registers over PCI memory space
- Translate PCI address into CPU accessible address and vice versa
By using the access routines drivers can be made portable over different
architectures. The access routines take the architecture endianness into
consideration and let the host driver or BSP implement I/O space and
configuration space access.
Some non-standard hardware may also define the PCI bus big-endian, for example
the LEON2 AT697 PCI host bridge and some LEON3 systems may be configured that
way. It is up to the BSP to set the appropriate PCI endianness on compile time
(BSP_PCI_BIG_ENDIAN) in order for inline macros to be correctly defined.
Another possibility is to use the function pointers defined by the access
layer to implement drivers that support "run-time endianness detection".
Configuration space
~~~~~~~~~~~~~~~~~~~
Configuration space is accessed using the routines listed below. The
pci_dev_t type is used to specify a specific PCI bus, device and function. It
is up to the host driver or BSP to create a valid access to the requested
PCI slot. Requests made to slots that are not supported by hardware should
result in PCISTS_MSTABRT and/or data must be ignored (writes) or 0xffffffff
is always returned (reads).
.. code:: c

    /* Configuration Space Access Read Routines */
    extern int pci_cfg_r8(pci_dev_t dev, int ofs, uint8_t *data);
    extern int pci_cfg_r16(pci_dev_t dev, int ofs, uint16_t *data);
    extern int pci_cfg_r32(pci_dev_t dev, int ofs, uint32_t *data);

    /* Configuration Space Access Write Routines */
    extern int pci_cfg_w8(pci_dev_t dev, int ofs, uint8_t data);
    extern int pci_cfg_w16(pci_dev_t dev, int ofs, uint16_t data);
    extern int pci_cfg_w32(pci_dev_t dev, int ofs, uint32_t data);
I/O space
~~~~~~~~~
The BSP or driver provide special routines in order to access I/O space. Some
architectures have a special instruction accessing I/O space, others have it
mapped into a "PCI I/O window" in the standard address space accessed by the
CPU. The window size may vary and must be taken into consideration by the
host driver. The below routines must be used to access I/O space. The address
given to the functions is not a PCI I/O address; the caller must have
translated the PCI I/O address (available in the PCI BARs) into a BSP or host
driver custom address, see `Access functions`_ for how
addresses are translated.
.. code:: c

    /* Read a register over PCI I/O Space */
    extern uint8_t pci_io_r8(uint32_t adr);
    extern uint16_t pci_io_r16(uint32_t adr);
    extern uint32_t pci_io_r32(uint32_t adr);

    /* Write a register over PCI I/O Space */
    extern void pci_io_w8(uint32_t adr, uint8_t data);
    extern void pci_io_w16(uint32_t adr, uint16_t data);
    extern void pci_io_w32(uint32_t adr, uint32_t data);
Registers over Memory space
~~~~~~~~~~~~~~~~~~~~~~~~~~~
PCI host bridge hardware normally swaps data accesses into the endianness of the
host architecture in order to lower the load of the CPU, peripherals can do DMA
without swapping. However, the host controller can not separate a standard
memory access from a memory access to a register, registers may be mapped into
memory space. This leads to register content being swapped, which must be
swapped back. The below routines make it possible to access registers over PCI
memory space in a portable way on different architectures, the BSP or
architecture must provide necessary functions in order to implement this.
.. code:: c

    static inline uint16_t pci_ld_le16(volatile uint16_t *addr);
    static inline void pci_st_le16(volatile uint16_t *addr, uint16_t val);
    static inline uint32_t pci_ld_le32(volatile uint32_t *addr);
    static inline void pci_st_le32(volatile uint32_t *addr, uint32_t val);
    static inline uint16_t pci_ld_be16(volatile uint16_t *addr);
    static inline void pci_st_be16(volatile uint16_t *addr, uint16_t val);
    static inline uint32_t pci_ld_be32(volatile uint32_t *addr);
    static inline void pci_st_be32(volatile uint32_t *addr, uint32_t val);
In order to support a non-standard big-endian PCI bus the above pci_* functions
are required; pci_ld_le16 != ld_le16 on big-endian PCI buses.
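The effect of such helpers can be illustrated portably by assembling the value byte by byte; the real RTEMS inline functions are architecture-specific, and the sketch names below are not the actual API:

```c
#include <stdint.h>

/* Portable illustration of little-endian register loads: composing the
 * result byte by byte yields the same value on any host endianness.
 * These are sketches, not the RTEMS pci_ld_le* implementations. */
static uint16_t ld_le16_sketch(const volatile uint8_t *addr)
{
    return (uint16_t)(addr[0] | ((uint16_t)addr[1] << 8));
}

static uint32_t ld_le32_sketch(const volatile uint8_t *addr)
{
    return (uint32_t)addr[0]
         | ((uint32_t)addr[1] << 8)
         | ((uint32_t)addr[2] << 16)
         | ((uint32_t)addr[3] << 24);
}
```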
Access functions
~~~~~~~~~~~~~~~~
The PCI Access Library can provide device drivers with function pointers
executing the above Configuration, I/O and Memory space accesses. The
functions have the same arguments and return values as the above
functions.
The pci_access_func() function defined below can be used to get a function
pointer of a specific access type.
.. code:: c

    /* Get Read/Write function for accessing a register over PCI Memory Space
     * (non-inline functions).
     *
     * Arguments
     *   wr       0(Read), 1(Write)
     *   size     1(Byte), 2(Word), 4(Double Word)
     *   func     Where function pointer will be stored
     *   endian   PCI_LITTLE_ENDIAN or PCI_BIG_ENDIAN
     *   type     1(I/O), 3(REG over MEM), 4(CFG)
     *
     * Return
     *   0        Found function
     *   others   No such function defined by host driver or BSP
     */
    int pci_access_func(int wr, int size, void **func, int endian, int type);
PCI device drivers may be written to support run-time detection of endianness;
this is mostly for debugging or for development systems. When the product is
finally deployed macros switch to using the inline functions instead which
have been configured for the correct endianness.
PCI address translation
~~~~~~~~~~~~~~~~~~~~~~~
When PCI addresses, in both I/O and memory space, are not mapped 1:1, address
translation is needed before access. If drivers read the PCI resources directly
using configuration space routines or in the device tree, the addresses given
are PCI addresses. The below functions can be used to translate PCI addresses
into CPU accessible addresses or vice versa, translation may be different for
different PCI spaces/regions.
.. code:: c

    /* Translate PCI address into CPU accessible address */
    static inline int pci_pci2cpu(uint32_t *address, int type);

    /* Translate CPU accessible address into PCI address (for DMA) */
    static inline int pci_cpu2pci(uint32_t *address, int type);
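A minimal model of such a translation is a single window with a fixed offset. The real pci_pci2cpu/pci_cpu2pci consult per-region information supplied by the host driver; the structure and names below are assumptions for illustration:

```c
#include <stdint.h>

/* One hypothetical PCI memory window; the real library may keep several
 * regions with different offsets. */
struct pci_window {
    uint32_t pci_base;  /* region start as seen from the PCI bus */
    uint32_t cpu_base;  /* same region as seen by the CPU        */
    uint32_t size;
};

/* Translate a PCI address into a CPU accessible address in-place,
 * mirroring the pci_pci2cpu calling style. Returns 0 on success. */
static int pci2cpu_sketch(const struct pci_window *w, uint32_t *address)
{
    if (*address < w->pci_base || *address - w->pci_base >= w->size)
        return -1;  /* address outside the translated window */
    *address = *address - w->pci_base + w->cpu_base;
    return 0;
}
```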
PCI Interrupt
-------------
The PCI specification defines four different interrupt lines INTA#..INTD#,
the interrupts are low level sensitive which make it possible to support
multiple interrupt sources on the same interrupt line. Since the lines are
level sensitive the interrupt sources must be acknowledged before clearing the
interrupt controller, or the interrupt controller must be masked. The BSP must
provide a routine for clearing/acknowledging the interrupt controller, it is
up to the interrupt service routine to acknowledge the interrupt source.
The PCI Library relies on the BSP for implementing shared interrupt handling
through the BSP_PCI_shared_interrupt_* functions/macros, they must be defined
when including bsp.h.
PCI device drivers may use the pci_interrupt_* routines in order to call the
BSP specific functions in a platform independent way. The PCI interrupt
interface has been made similar to the RTEMS IRQ extension so that a BSP can
use the standard RTEMS interrupt functions directly.
PCI Shell command
-----------------
The RTEMS shell has a PCI command, ``pci``, which makes it possible to read/write
configuration space, print the current PCI configuration and print out a
configuration C-file for the static or peripheral library.
.. COMMENT: COPYRIGHT (c) 1988-2007.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.

Preface
#######
In recent years, the cost required to develop a
software product has increased significantly while the target
hardware costs have decreased. Now a larger portion of money is
expended in developing, using, and maintaining software. The
trend in computing costs is the complete dominance of software
over hardware costs. Because of this, it is necessary that
formal disciplines be established to increase the probability
that software is characterized by a high degree of correctness,
maintainability, and portability. In addition, these
disciplines must promote practices that aid in the consistent
and orderly development of a software system within schedule and
budgetary constraints. To be effective, these disciplines must
adopt standards which channel individual software efforts toward
a common goal.
The push for standards in the software development
field has been met with various degrees of success. The
Microprocessor Operating Systems Interfaces (MOSI) effort has
experienced only limited success. As popular as the UNIX
operating system has grown, the attempt to develop a standard
interface definition to allow portable application development
has only recently begun to produce the results needed in this
area. Unfortunately, very little effort has been expended to
provide standards addressing the needs of the real-time
community. Several organizations have addressed this need
during recent years.
The Real Time Executive Interface Definition (RTEID)
was developed by Motorola with technical input from Software
Components Group. RTEID was adopted by the VMEbus International
Trade Association (VITA) as a baseline draft for their proposed
standard multiprocessor, real-time executive interface, Open
Real-Time Kernel Interface Definition (ORKID). These two groups
are currently working together with the IEEE P1003.4 committee
to insure that the functionality of their proposed standards is
adopted as the real-time extensions to POSIX.
This emerging standard defines an interface for the
development of real-time software to ease the writing of
real-time application programs that are directly portable across
multiple real-time executive implementations. This interface
includes both the source code interfaces and run-time behavior
as seen by a real-time application. It does not include the
details of how a kernel implements these functions. The
standard's goal is to serve as a complete definition of external
interfaces so that application code that conforms to these
interfaces will execute properly in all real-time executive
environments. With the use of a standards compliant executive,
routines that acquire memory blocks, create and manage message
queues, establish and use semaphores, and send and receive
signals need not be redeveloped for a different real-time
environment as long as the new environment is compliant with the
standard. Software developers need only concentrate on the
hardware dependencies of the real-time system. Furthermore,
most hardware dependencies for real-time applications can be
localized to the device drivers.
A compliant executive provides simple and flexible
real-time multiprocessing. It easily lends itself to both
tightly-coupled and loosely-coupled configurations (depending on
the system hardware configuration). Objects such as tasks,
queues, events, signals, semaphores, and memory blocks can be
designated as global objects and accessed by any task regardless
of the processors on which the object and the accessing task reside.
The acceptance of a standard for real-time executives
will produce the same advantages enjoyed from the push for UNIX
standardization by AT&T's System V Interface Definition and
IEEE's POSIX efforts. A compliant multiprocessing executive
will allow close coupling between UNIX systems and real-time
executives to provide the many benefits of the UNIX development
environment to be applied to real-time software development.
Together they provide the necessary laboratory environment to
implement real-time, distributed, embedded systems using a wide
variety of computer architectures.
A study was completed in 1988, within the Research,
Development, and Engineering Center, U.S. Army Missile Command,
which compared the various aspects of the Ada programming
language as they related to the application of Ada code in
distributed and/or multiple processing systems. Several
critical conclusions were derived from the study. These
conclusions have a major impact on the way the Army develops
application software for embedded applications. These impacts
apply to both in-house software development and contractor
developed software.
A conclusion of the analysis, which has been
previously recognized by other agencies attempting to utilize
Ada in a distributed or multiprocessing environment, is that the
Ada programming language does not adequately support
multiprocessing. Ada does provide a mechanism for
multi-tasking, however, this capability exists only for a single
processor system. The language also does not have inherent
capabilities to access global named variables, flags or program
code. These critical features are essential in order for data
to be shared between processors. However, these drawbacks do
have workarounds which are sometimes awkward and defeat the
intent of software maintainability and portability goals.
Another conclusion drawn from the analysis was that
the run time executives being delivered with the Ada compilers
were too slow and inefficient to be used in modern missile
systems. A run time executive is the core part of the run time
system code, or operating system code, that controls task
scheduling, input/output management and memory management.
Traditionally, whenever efficient executive (also known as
kernel) code was required by the application, the user developed
in-house software. This software was usually written in
assembly language for optimization.
Because of this shortcoming in the Ada programming
language, software developers in research and development and
contractors for project-managed systems are mandated by
technology to purchase and utilize off-the-shelf third party
kernel code. The contractor, and eventually the Government,
must pay a licensing fee for every copy of the kernel code used
in an embedded system.
The main drawback to this development environment is
that the Government does not own, nor has the right to modify,
the code contained within the kernel. V&V techniques in this
situation are more difficult than if the complete source code
were available. Responsibility for system failures due to faulty
software is yet another area to be resolved under this
environment.
The Guidance and Control Directorate began a software
development effort to address these problems. A project to
develop an experimental run time kernel was begun that will
eliminate the major drawbacks of the Ada programming language
mentioned above. The Real Time Executive for Multiprocessor Systems
(RTEMS) provides full capabilities for management of tasks,
interrupts, time, and multiple processors in addition to those
features typical of generic operating systems. The code is
Government owned, so no licensing fees are necessary. RTEMS has
been implemented in both the Ada and C programming languages.
It has been ported to the following processor families:
- Altera NIOS II
- Analog Devices Blackfin
- Atmel AVR
- ARM
- Freescale (formerly Motorola) MC68xxx
- Freescale (formerly Motorola) MC683xx
- Freescale (formerly Motorola) ColdFire
- Intel i386 and above
- Lattice Semiconductor LM32
- NEC V850
- MIPS
- PowerPC
- Renesas (formerly Hitachi) SuperH
- Renesas (formerly Hitachi) H8/300
- Renesas M32C
- SPARC v7, v8, and v9
Support for other processor families, including RISC, CISC, and DSP, is
planned. Since almost all of RTEMS is written in a high level language,
ports to additional processor families require minimal effort.
RTEMS multiprocessor support is capable of handling
either homogeneous or heterogeneous systems. The kernel
automatically compensates for architectural differences (byte
swapping, etc.) between processors. This allows a much easier
transition from one processor family to another without a major
system redesign.
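For illustration, the kind of byte-order conversion involved in such
compensation can be sketched in C. This is a hypothetical helper, not the
actual RTEMS code; the RTEMS MPCI layer performs the equivalent work
internally.

```c
#include <stdint.h>

/* Reverse the byte order of a 32-bit value -- the sort of conversion a
 * heterogeneous multiprocessing layer applies when two nodes disagree on
 * endianness.  (Illustrative stand-in only.) */
static uint32_t swap32(uint32_t value)
{
  return  (value >> 24)               |
         ((value >>  8) & 0x0000FF00u) |
         ((value <<  8) & 0x00FF0000u) |
          (value << 24);
}
```

Applying the conversion twice yields the original value, which is why a
single symmetric routine suffices on both sides of a link.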
Since the proposed standards are still in draft form,
RTEMS cannot and does not claim compliance. However, the status
of the standard is being carefully monitored to guarantee that
RTEMS provides the functionality specified in the standard.
Once approved, RTEMS will be made compliant.
This document is a detailed users guide for a
functionally compliant real-time multiprocessor executive. It
describes the user interface and run-time behavior of Release
4.10.99.0 of the Ada interface
to RTEMS.
.. COMMENT: COPYRIGHT (c) 1988-2008.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.
.. COMMENT: This chapter is missing the following figures:
.. COMMENT: Figure 1-1 RTEMS Application Architecture
.. COMMENT: Figure 1-2 RTEMS Internal Architecture

Red-Black Trees
###############
.. index:: rbtrees
Introduction
============
The Red-Black Tree API is an interface to the SuperCore (score) rbtree
implementation. Within RTEMS, red-black trees are used when a binary search
tree is needed, including dynamic priority thread queues and non-contiguous
heap memory. The Red-Black Tree API provided by RTEMS is:
- ``rtems.rtems_rbtree_node`` - Red-Black Tree node embedded in another struct
- ``rtems.rtems_rbtree_control`` - Red-Black Tree control node for an entire tree
- ``rtems.rtems_rbtree_initialize`` - initialize the red-black tree with nodes
- ``rtems.rtems_rbtree_initialize_empty`` - initialize the red-black tree as empty
- ``rtems.rtems_rbtree_set_off_tree`` - Clear a nodes links
- ``rtems.rtems_rbtree_root`` - Return the red-black trees root node
- ``rtems.rtems_rbtree_min`` - Return the red-black trees minimum node
- ``rtems.rtems_rbtree_max`` - Return the red-black trees maximum node
- ``rtems.rtems_rbtree_left`` - Return a nodes left child node
- ``rtems.rtems_rbtree_right`` - Return a nodes right child node
- ``rtems.rtems_rbtree_parent`` - Return a nodes parent node
- ``rtems.rtems_rbtree_are_nodes_equal`` - Are the nodes equal?
- ``rtems.rtems_rbtree_is_empty`` - Is the red-black tree empty?
- ``rtems.rtems_rbtree_is_min`` - Is the node the minimum in the red-black tree?
- ``rtems.rtems_rbtree_is_max`` - Is the node the maximum in the red-black tree?
- ``rtems.rtems_rbtree_is_root`` - Is the node the root of the red-black tree?
- ``rtems.rtems_rbtree_find`` - Find the node with a matching key in the red-black tree
- ``rtems.rtems_rbtree_predecessor`` - Return the in-order predecessor of a node.
- ``rtems.rtems_rbtree_successor`` - Return the in-order successor of a node.
- ``rtems.rtems_rbtree_extract`` - Remove the node from the red-black tree
- ``rtems.rtems_rbtree_get_min`` - Remove the minimum node from the red-black tree
- ``rtems.rtems_rbtree_get_max`` - Remove the maximum node from the red-black tree
- ``rtems.rtems_rbtree_peek_min`` - Returns the minimum node from the red-black tree
- ``rtems.rtems_rbtree_peek_max`` - Returns the maximum node from the red-black tree
- ``rtems.rtems_rbtree_insert`` - Add the node to the red-black tree
Background
==========
The Red-Black Trees API is a thin layer above the SuperCore Red-Black Trees
implementation. A Red-Black Tree is defined by a control node with pointers to
the root, minimum, and maximum nodes in the tree. Each node in the tree
consists of a parent pointer, two children pointers, and a color attribute. A
tree is parameterized as either unique, meaning identical keys are rejected, or
not, in which case duplicate keys are allowed.
Users must provide a comparison functor that gets passed to functions that need
to compare nodes. In addition, no internal synchronization is offered within
the red-black tree implementation, thus users must ensure at most one thread
accesses a red-black tree instance at a time.
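As a hedged sketch of such a comparison functor, the following C fragment
compares two nodes through an embedded integer key. The type and field names
here are illustrative assumptions, not the actual RTEMS declarations; the
real functor receives pointers to ``rtems_rbtree_node`` structures.

```c
/* Hypothetical node type standing in for a structure that embeds an
 * rtems_rbtree_node; only the key matters for comparison purposes. */
typedef struct {
  int key;
} example_node;

/* User-supplied comparison following the strcmp() convention:
 * negative, zero, or positive depending on the key ordering. */
static int example_compare(const example_node *first,
                           const example_node *second)
{
  return (first->key > second->key) - (first->key < second->key);
}
```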
Nodes
-----
A red-black tree is made up of nodes that originate from a red-black tree
control object. A node is of type ``rtems.rtems_rbtree_node``. The node
is designed to be part of a user data structure. To obtain the encapsulating
structure, users can use the ``RTEMS_CONTAINER_OF`` macro.
The node can be placed anywhere within the user's structure and the macro will
calculate the structure's address from the node's address.
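The idiom behind ``RTEMS_CONTAINER_OF`` can be sketched in C as follows. This
is an illustrative re-implementation under an assumed user structure; the
real macro is provided by the RTEMS headers.

```c
#include <stddef.h>

/* Container-of idiom: given a pointer to a member, recover a pointer to
 * the enclosing structure by subtracting the member's offset. */
#define CONTAINER_OF(ptr, type, member) \
  ((type *)((char *)(ptr) - offsetof(type, member)))

/* Hypothetical user structure; the int member stands in for an embedded
 * rtems_rbtree_node. */
typedef struct {
  int payload; /* user data                        */
  int node;    /* stand-in for rtems_rbtree_node   */
} user_record;
```

Given the address of ``node``, ``CONTAINER_OF`` yields the address of the
whole ``user_record``, regardless of where the member sits in the layout.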
Controls
--------
A red-black tree is rooted with a control object. The Red-Black Tree control
object provides the user with access to the nodes on the red-black tree. The
implementation does not require special checks for manipulating the root of
the red-black tree. To accomplish this, the ``rtems.rtems_rbtree_control``
structure is treated as a ``rtems.rtems_rbtree_node`` structure with a
``NULL`` parent and a left child pointing to the root.
Operations
==========
Examples of using the red-black trees
can be found in the ``testsuites/sptests/sprbtree01/init.c`` file.
Directives
==========
Documentation for the Red-Black Tree Directives
-----------------------------------------------
.. index:: rbtree doc
Source documentation for the Red-Black Tree API can be found in the
generated Doxygen output for cpukit/sapi.
.. COMMENT: COPYRIGHT (c) 1988-2012.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.

Region Manager
##############
.. index:: regions
Introduction
============
The region manager provides facilities to dynamically
allocate memory in variable sized units. The directives
provided by the region manager are:
- ``rtems.region_create`` - Create a region
- ``rtems.region_ident`` - Get ID of a region
- ``rtems.region_delete`` - Delete a region
- ``rtems.region_extend`` - Add memory to a region
- ``rtems.region_get_segment`` - Get segment from a region
- ``rtems.region_return_segment`` - Return segment to a region
- ``rtems.region_get_segment_size`` - Obtain size of a segment
- ``rtems.region_resize_segment`` - Change size of a segment
Background
==========
Region Manager Definitions
--------------------------
.. index:: region, definition
.. index:: segment, definition
A region makes up a physically contiguous memory
space with user-defined boundaries from which variable-sized
segments are dynamically allocated and deallocated. A segment
is a variable size section of memory which is allocated in
multiples of a user-defined page size. This page size is
required to be a multiple of four greater than or equal to four.
For example, if a request for a 350-byte segment is made in a
region with 256-byte pages, then a 512-byte segment is allocated.
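The rounding rule in the example above can be expressed as a small C helper.
This is a sketch for illustration only; RTEMS performs the computation
internally when a segment is allocated.

```c
#include <stdint.h>

/* Round a requested size up to the next multiple of the region's page
 * size, e.g. 350 bytes with 256-byte pages rounds to 512.  Works for any
 * page size that is a multiple of four, as required above. */
static uint32_t round_to_page(uint32_t request, uint32_t page_size)
{
  return ((request + page_size - 1) / page_size) * page_size;
}
```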
Regions are organized as doubly linked chains of
variable sized memory blocks. Memory requests are allocated
using a first-fit algorithm. If available, the requester
receives the number of bytes requested (rounded up to the next
page size). RTEMS requires some overhead from the region's
memory for each segment that is allocated. Therefore, an
application should only modify the memory of a segment that has
been obtained from the region. The application should NOT
modify memory that lies within the region's boundaries but
outside of any obtained segment while the region is active in
the system.
Upon return to the region, the free block is
coalesced with its neighbors (if free) on both sides to produce
the largest possible unused block.
Building an Attribute Set
-------------------------
.. index:: region attribute set, building
In general, an attribute set is built by a bitwise OR
of the desired attribute components. The set of valid region
attributes is provided in the following table:
- ``RTEMS.FIFO`` - tasks wait by FIFO (default)
- ``RTEMS.PRIORITY`` - tasks wait by priority
Attribute values are specifically designed to be
mutually exclusive, therefore bitwise OR and addition operations
are equivalent as long as each attribute appears exactly once in
the component list. An attribute listed as a default is not
required to appear in the attribute list, although it is a good
programming practice to specify default attributes. If all
defaults are desired, the attribute ``RTEMS.DEFAULT_ATTRIBUTES`` should be
specified on this call.
This example demonstrates the attribute_set parameter
needed to create a region with the task priority waiting queue
discipline. The attribute_set parameter to the ``rtems.region_create``
directive should be ``RTEMS.PRIORITY``.
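A minimal C sketch of building such an attribute set follows. The constants
are stand-ins patterned after the description above (the default attribute
contributes no bits), not the actual RTEMS values, which live in the RTEMS
headers.

```c
#include <stdint.h>

/* Hypothetical attribute constants: each non-default attribute is a
 * distinct bit, so bitwise OR and addition agree as long as each
 * attribute appears exactly once in the component list. */
#define EX_DEFAULT_ATTRIBUTES 0x00000000u  /* all defaults            */
#define EX_FIFO               0x00000000u  /* default, contributes 0  */
#define EX_PRIORITY           0x00000004u

/* Priority-discipline wait queue, as in the example above. */
static uint32_t attrs = EX_DEFAULT_ATTRIBUTES | EX_PRIORITY;
```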
Building an Option Set
----------------------
In general, an option is built by a bitwise OR of the
desired option components. The set of valid options for the
``rtems.region_get_segment`` directive is listed in the following table:
- ``RTEMS.WAIT`` - task will wait for segment (default)
- ``RTEMS.NO_WAIT`` - task should not wait
Option values are specifically designed to be
mutually exclusive, therefore bitwise OR and addition operations
are equivalent as long as each option appears exactly once in
the component list. An option listed as a default is not
required to appear in the option list, although it is a good
programming practice to specify default options. If all
defaults are desired, the option ``RTEMS.DEFAULT_OPTIONS`` should be
specified on this call.
This example demonstrates the option parameter needed
to poll for a segment. The option parameter passed to the
``rtems.region_get_segment`` directive should
be ``RTEMS.NO_WAIT``.
Operations
==========
Creating a Region
-----------------
The ``rtems.region_create`` directive creates a region with the
user-defined name. The user may select FIFO or task priority as
the method for placing waiting tasks in the task wait queue.
RTEMS allocates a Region Control Block (RNCB) from the RNCB free
list to maintain the newly created region. RTEMS also generates
a unique region ID which is returned to the calling task.
It is not possible to calculate the exact number of
bytes available to the user since RTEMS requires overhead for
each segment allocated. For example, a region with one segment
that is the size of the entire region has more available bytes
than a region with two segments that collectively are the size
of the entire region. This is because the region with one
segment requires only the overhead for one segment, while the
other region requires the overhead for two segments.
Due to automatic coalescing, the number of segments
in the region dynamically changes. Therefore, the total
overhead required by RTEMS dynamically changes.
Obtaining Region IDs
--------------------
When a region is created, RTEMS generates a unique
region ID and assigns it to the created region until it is
deleted. The region ID may be obtained by either of two
methods. First, as the result of an invocation of the
``rtems.region_create`` directive, the region ID is stored in a
user-provided location. Second, the region ID may be obtained later
using the ``rtems.region_ident`` directive.
The region ID is used by other region manager directives to
access this region.
Adding Memory to a Region
-------------------------
The ``rtems.region_extend`` directive may be used to add memory
to an existing region. The caller specifies the size in bytes
and starting address of the memory being added.
NOTE: Please see the release notes or RTEMS source
code for information regarding restrictions on the location of
the memory being added in relation to memory already in the
region.
Acquiring a Segment
-------------------
The ``rtems.region_get_segment`` directive attempts to acquire
a segment from a specified region. If the region has enough
available free memory, then a segment is returned successfully
to the caller. When the segment cannot be allocated, one of the
following situations applies:
- By default, the calling task will wait forever to acquire the segment.
- Specifying the ``RTEMS.NO_WAIT`` option forces
an immediate return with an error status code.
- Specifying a timeout limits the interval the task will
wait before returning with an error status code.
If the task waits for the segment, then it is placed
in the region's task wait queue in either FIFO or task priority
order. All tasks waiting on a region are returned an error when
the region is deleted.
Releasing a Segment
-------------------
When a segment is returned to a region by the
``rtems.region_return_segment`` directive, it is merged with its
unallocated neighbors to form the largest possible segment. The
first task on the wait queue is examined to determine if its
segment request can now be satisfied. If so, it is given a
segment and unblocked. This process is repeated until the first
task's segment request cannot be satisfied.
Obtaining the Size of a Segment
-------------------------------
The ``rtems.region_get_segment_size`` directive returns the
size in bytes of the specified segment. The size returned
includes any "extra" memory included in the segment because of
rounding up to a page size boundary.
Changing the Size of a Segment
------------------------------
The ``rtems.region_resize_segment`` directive is used
to change the size in bytes of the specified segment. The size may be
increased or decreased. When increasing the size of a segment, it is
possible that the request cannot be satisfied. This directive provides
functionality similar to the ``realloc()`` function in the Standard
C Library.
Deleting a Region
-----------------
A region can be removed from the system and returned
to RTEMS with the ``rtems.region_delete``
directive. When a region is
deleted, its control block is returned to the RNCB free list. A
region with segments still allocated is not allowed to be
deleted. Any task attempting to do so will be returned an
error. As a result of this directive, all tasks blocked waiting
to obtain a segment from the region will be readied and returned
a status code which indicates that the region was deleted.
Directives
==========
This section details the region manager's directives.
A subsection is dedicated to each of this manager's directives
and describes the calling sequence, related constants, usage,
and status codes.
REGION_CREATE - Create a region
-------------------------------
.. index:: create a region
**CALLING SEQUENCE:**
.. code:: c
procedure Region_Create (
Name : in RTEMS.Name;
Starting_Address : in RTEMS.Address;
Length : in RTEMS.Unsigned32;
Page_Size : in RTEMS.Unsigned32;
Attribute_Set : in RTEMS.Attribute;
ID : out RTEMS.ID;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - region created successfully
``RTEMS.INVALID_NAME`` - invalid region name
``RTEMS.INVALID_ADDRESS`` - ``id`` is NULL
``RTEMS.INVALID_ADDRESS`` - ``starting_address`` is NULL
``RTEMS.INVALID_ADDRESS`` - address not on four byte boundary
``RTEMS.TOO_MANY`` - too many regions created
``RTEMS.INVALID_SIZE`` - invalid page size
**DESCRIPTION:**
This directive creates a region from a physically
contiguous memory space which starts at starting_address and is
length bytes long. Segments allocated from the region will be a
multiple of page_size bytes in length. The assigned region id
is returned in id. This region id is used as an argument to
other region related directives to access the region.
For control and maintenance of the region, RTEMS
allocates and initializes an RNCB from the RNCB free pool. Thus
memory from the region is not used to store the RNCB. However,
some overhead within the region is required by RTEMS each time a
segment is constructed in the region.
Specifying ``RTEMS.PRIORITY`` in attribute_set causes tasks
waiting for a segment to be serviced according to task priority.
Specifying ``RTEMS.FIFO`` in attribute_set or selecting
``RTEMS.DEFAULT_ATTRIBUTES`` will cause waiting tasks to
be serviced in First In-First Out order.
The ``starting_address`` parameter must be aligned on a
four byte boundary. The ``page_size`` parameter must be a multiple
of four greater than or equal to eight.
**NOTES:**
This directive will not cause the calling task to be
preempted.
The following region attribute constants are defined
by RTEMS:
- ``RTEMS.FIFO`` - tasks wait by FIFO (default)
- ``RTEMS.PRIORITY`` - tasks wait by priority
REGION_IDENT - Get ID of a region
---------------------------------
.. index:: get ID of a region
.. index:: obtain ID of a region
**CALLING SEQUENCE:**
.. code:: c
procedure Region_Ident (
Name : in RTEMS.Name;
ID : out RTEMS.ID;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - region identified successfully
``RTEMS.INVALID_ADDRESS`` - ``id`` is NULL
``RTEMS.INVALID_NAME`` - region name not found
**DESCRIPTION:**
This directive obtains the region id associated with
the region name to be acquired. If the region name is not
unique, then the region id will match one of the regions with
that name. However, this region id is not guaranteed to
correspond to the desired region. The region id is used to
access this region in other region manager directives.
**NOTES:**
This directive will not cause the running task to be preempted.
REGION_DELETE - Delete a region
-------------------------------
.. index:: delete a region
**CALLING SEQUENCE:**
.. code:: c
procedure Region_Delete (
ID : in RTEMS.ID;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - region deleted successfully
``RTEMS.INVALID_ID`` - invalid region id
``RTEMS.RESOURCE_IN_USE`` - segments still in use
**DESCRIPTION:**
This directive deletes the region specified by id.
The region cannot be deleted if any of its segments are still
allocated. The RNCB for the deleted region is reclaimed by
RTEMS.
**NOTES:**
This directive will not cause the calling task to be preempted.
The calling task does not have to be the task that
created the region. Any local task that knows the region id can
delete the region.
REGION_EXTEND - Add memory to a region
--------------------------------------
.. index:: add memory to a region
.. index:: region, add memory
**CALLING SEQUENCE:**
.. code:: c
procedure Region_Extend (
ID : in RTEMS.ID;
Starting_Address : in RTEMS.Address;
Length : in RTEMS.Unsigned32;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - region extended successfully
``RTEMS.INVALID_ADDRESS`` - ``starting_address`` is NULL
``RTEMS.INVALID_ID`` - invalid region id
``RTEMS.INVALID_ADDRESS`` - invalid address of area to add
**DESCRIPTION:**
This directive adds the memory which starts at
starting_address for length bytes to the region specified by id.
**NOTES:**
This directive will not cause the calling task to be preempted.
The calling task does not have to be the task that
created the region. Any local task that knows the region id can
extend the region.
REGION_GET_SEGMENT - Get segment from a region
----------------------------------------------
.. index:: get segment from region
**CALLING SEQUENCE:**
.. code:: c
procedure Region_Get_Segment (
ID : in RTEMS.ID;
Size : in RTEMS.Unsigned32;
Option_Set : in RTEMS.Option;
Timeout : in RTEMS.Interval;
Segment : out RTEMS.Address;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - segment obtained successfully
``RTEMS.INVALID_ADDRESS`` - ``segment`` is NULL
``RTEMS.INVALID_ID`` - invalid region id
``RTEMS.INVALID_SIZE`` - request is for zero bytes or exceeds
the size of maximum segment which is possible for this region
``RTEMS.UNSATISFIED`` - segment of requested size not available
``RTEMS.TIMEOUT`` - timed out waiting for segment
``RTEMS.OBJECT_WAS_DELETED`` - region deleted while waiting
**DESCRIPTION:**
This directive obtains a variable size segment from
the region specified by id. The address of the allocated
segment is returned in segment. The ``RTEMS.WAIT``
and ``RTEMS.NO_WAIT`` components
of the options parameter are used to specify whether the calling
task wishes to wait for a segment to become available or return
immediately if no segment is available. For either option, if a
sufficiently sized segment is available, then the segment is
successfully acquired by returning immediately with the
``RTEMS.SUCCESSFUL`` status code.
If the calling task chooses to return immediately and
a segment large enough is not available, then an error code
indicating this fact is returned. If the calling task chooses
to wait for the segment and a segment large enough is not
available, then the calling task is placed on the region's
segment wait queue and blocked. If the region was created with
the ``RTEMS.PRIORITY`` option, then the calling
task is inserted into the
wait queue according to its priority. However, if the region
was created with the ``RTEMS.FIFO`` option, then the calling
task is placed at the rear of the wait queue.
The timeout parameter specifies the maximum interval
that a task is willing to wait to obtain a segment. If timeout
is set to ``RTEMS.NO_TIMEOUT``, then the
calling task will wait forever.
**NOTES:**
The actual length of the allocated segment may be
larger than the requested size because a segment size is always
a multiple of the region's page size.
The following segment acquisition option constants
are defined by RTEMS:
- ``RTEMS.WAIT`` - task will wait for segment (default)
- ``RTEMS.NO_WAIT`` - task should not wait
A clock tick is required to support the timeout functionality of
this directive.
REGION_RETURN_SEGMENT - Return segment to a region
--------------------------------------------------
.. index:: return segment to region
**CALLING SEQUENCE:**
.. code:: c
procedure Region_Return_Segment (
ID : in RTEMS.ID;
Segment : in RTEMS.Address;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - segment returned successfully
``RTEMS.INVALID_ADDRESS`` - ``segment`` is NULL
``RTEMS.INVALID_ID`` - invalid region id
``RTEMS.INVALID_ADDRESS`` - segment address not in region
**DESCRIPTION:**
This directive returns the segment specified by
segment to the region specified by id. The returned segment is
merged with its neighbors to form the largest possible segment.
The first task on the wait queue is examined to determine if its
segment request can now be satisfied. If so, it is given a
segment and unblocked. This process is repeated until the first
task's segment request cannot be satisfied.
**NOTES:**
This directive will cause the calling task to be
preempted if one or more local tasks are waiting for a segment
and the following conditions exist:
- a waiting task has a higher priority than the calling task
- the size of the segment required by the waiting task
is less than or equal to the size of the segment returned.
REGION_GET_SEGMENT_SIZE - Obtain size of a segment
--------------------------------------------------
.. index:: get size of segment
**CALLING SEQUENCE:**
.. code:: c
procedure Region_Get_Segment_Size (
ID : in RTEMS.ID;
Segment : in RTEMS.Address;
Size : out RTEMS.Unsigned32;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - segment obtained successfully
``RTEMS.INVALID_ADDRESS`` - ``segment`` is NULL
``RTEMS.INVALID_ADDRESS`` - ``size`` is NULL
``RTEMS.INVALID_ID`` - invalid region id
``RTEMS.INVALID_ADDRESS`` - segment address not in region
**DESCRIPTION:**
This directive obtains the size in bytes of the specified segment.
**NOTES:**
The actual length of the allocated segment may be
larger than the requested size because a segment size is always
a multiple of the region's page size.
REGION_RESIZE_SEGMENT - Change size of a segment
------------------------------------------------
.. index:: resize segment
**CALLING SEQUENCE:**
.. code:: c
procedure Region_Resize_Segment (
ID : in RTEMS.ID;
Segment : in RTEMS.Address;
Size : in RTEMS.Unsigned32;
Old_Size : out RTEMS.Unsigned32;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - segment obtained successfully
``RTEMS.INVALID_ADDRESS`` - ``segment`` is NULL
``RTEMS.INVALID_ADDRESS`` - ``old_size`` is NULL
``RTEMS.INVALID_ID`` - invalid region id
``RTEMS.INVALID_ADDRESS`` - segment address not in region
``RTEMS.UNSATISFIED`` - unable to make segment larger
**DESCRIPTION:**
This directive is used to increase or decrease the size of
a segment. When increasing the size of a segment, it
is possible that there is not memory available contiguous
to the segment. In this case, the request is unsatisfied.
**NOTES:**
If an attempt to increase the size of a segment fails, then
the application may want to allocate a new segment of the desired
size, copy the contents of the original segment to the new, larger
segment and then return the original segment.
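That fallback resembles how ``realloc()`` recovers when it cannot grow a
block in place. A hedged C sketch of the pattern, with the standard
allocator standing in for the region directives:

```c
#include <stdlib.h>
#include <string.h>

/* Allocate a larger block, copy the old contents, and release the old
 * block -- the same recovery an application would perform with the
 * get-segment and return-segment directives when a resize reports
 * UNSATISFIED.  malloc()/free() are stand-ins for the region calls. */
static void *grow_by_copy(void *old, size_t old_size, size_t new_size)
{
  void *bigger = malloc(new_size);
  if (bigger != NULL) {
    memcpy(bigger, old, old_size);
    free(old);
  }
  return bigger;
}
```

On failure the original block is left untouched, so the caller can decide
whether to continue with the smaller segment or treat the condition as fatal.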
.. COMMENT: COPYRIGHT (c) 1988-2002.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.

RTEMS Data Types
################
Introduction
============
This chapter contains a complete list of the RTEMS primitive
data types in alphabetical order. This is intended to be
an overview and the user is encouraged to look at the appropriate
chapters in the manual for more information about the
usage of the various data types.
List of Data Types
==================
The following is a complete list of the RTEMS primitive
data types in alphabetical order:
- .. index:: rtems_address
``rtems.address`` is the data type used to manage
addresses. It is equivalent to
the System.Address data type.
- .. index:: rtems_asr
``rtems.asr`` is the return type for an
RTEMS ASR.
- .. index:: rtems_asr_entry
``rtems.asr_entry`` is the address of
the entry point to an RTEMS ASR.
- .. index:: rtems_attribute
``rtems.attribute`` is the data type used
to manage the attributes for RTEMS objects. It is primarily
used as an argument to object create routines to specify
characteristics of the new object.
- .. index:: rtems_boolean
``rtems.boolean`` may only take on the
values of ``TRUE`` and ``FALSE``.
This type is deprecated. Use "bool" instead.
- .. index:: rtems_context
``rtems.context`` is the CPU dependent
data structure used to manage the integer and system
register portion of each task's context.
- .. index:: rtems_context_fp
``rtems.context_fp`` is the CPU dependent
data structure used to manage the floating point portion of
each task's context.
- .. index:: rtems_device_driver
``rtems.device_driver`` is the
return type for an RTEMS device driver routine.
- .. index:: rtems_device_driver_entry
``rtems.device_driver_entry`` is the
entry point to an RTEMS device driver routine.
- .. index:: rtems_device_major_number
``rtems.device_major_number`` is the
data type used to manage device major numbers.
- .. index:: rtems_device_minor_number
``rtems.device_minor_number`` is the
data type used to manage device minor numbers.
- .. index:: rtems_double
``rtems.double`` is the RTEMS data
type that corresponds to double precision floating point
on the target hardware.
This type is deprecated. Use "double" instead.
- .. index:: rtems_event_set
``rtems.event_set`` is the data
type used to manage and manipulate RTEMS event sets
with the Event Manager.
- .. index:: rtems_extension
``rtems.extension`` is the return type
for RTEMS user extension routines.
- .. index:: rtems_fatal_extension
``rtems.fatal_extension`` is the
entry point for a fatal error user extension handler routine.
- .. index:: rtems_id
``rtems.id`` is the data type used
to manage and manipulate RTEMS object IDs.
- .. index:: rtems_interrupt_frame
``rtems.interrupt_frame`` is the
data structure that defines the format of the interrupt
stack frame as it appears to a user ISR. This data
structure may not be defined on all ports.
- .. index:: rtems_interrupt_level
``rtems.interrupt_level`` is the
data structure used with the ``rtems.interrupt_disable``,
``rtems.interrupt_enable``, and ``rtems.interrupt_flash`` routines. This
data type is CPU dependent and usually corresponds to
the contents of the processor register containing
the interrupt mask level.
- .. index:: rtems_interval
``rtems.interval`` is the data
type used to manage and manipulate time intervals.
Intervals are non-negative integers used to measure
the length of time in clock ticks.
- .. index:: rtems_isr
``rtems.isr`` is the return type
of a function implementing an RTEMS ISR.
- .. index:: rtems_isr_entry
``rtems.isr_entry`` is the address of
the entry point to an RTEMS ISR. It is equivalent to the
entry point of the function implementing the ISR.
- .. index:: rtems_mp_packet_classes
``rtems.mp_packet_classes`` is the
enumerated type which specifies the categories of
multiprocessing messages. For example, one of the
classes is for messages that must be processed by
the Task Manager.
- .. index:: rtems_mode
``rtems.mode`` is the data type
used to manage and dynamically manipulate the execution
mode of an RTEMS task.
- .. index:: rtems_mpci_entry
``rtems.mpci_entry`` is the return type
of an RTEMS MPCI routine.
- .. index:: rtems_mpci_get_packet_entry
``rtems.mpci_get_packet_entry`` is the address of
the entry point to the get packet routine for an MPCI implementation.
- .. index:: rtems_mpci_initialization_entry
``rtems.mpci_initialization_entry`` is the address of
the entry point to the initialization routine for an MPCI implementation.
- .. index:: rtems_mpci_receive_packet_entry
``rtems.mpci_receive_packet_entry`` is the address of
the entry point to the receive packet routine for an MPCI implementation.
- .. index:: rtems_mpci_return_packet_entry
``rtems.mpci_return_packet_entry`` is the address of
the entry point to the return packet routine for an MPCI implementation.
- .. index:: rtems_mpci_send_packet_entry
``rtems.mpci_send_packet_entry`` is the address of
the entry point to the send packet routine for an MPCI implementation.
- .. index:: rtems_mpci_table
``rtems.mpci_table`` is the data structure
containing the configuration information for an MPCI.
- .. index:: rtems_name
``rtems.name`` is the data type used to
contain the name of a Classic API object. It is an unsigned
thirty-two bit integer which can be treated as a numeric
value or initialized using ``rtems.build_name`` to
contain four ASCII characters.
- .. index:: rtems_option
``rtems.option`` is the data type
used to specify which behavioral options the caller desires.
It is commonly used with potentially blocking directives to specify
whether the caller is willing to block or return immediately with an error
indicating that the resource was not available.
- .. index:: rtems_packet_prefix
``rtems.packet_prefix`` is the data structure
that defines the first bytes in every packet sent between nodes
in an RTEMS multiprocessor system. It contains routing information
that is expected to be used by the MPCI layer.
- .. index:: rtems_signal_set
``rtems.signal_set`` is the data
type used to manage and manipulate RTEMS signal sets
with the Signal Manager.
- .. index:: int8_t
``int8_t`` is the C99 data type that corresponds to signed eight
bit integers. This data type is defined by RTEMS in a manner that
ensures it is portable across different target processors.
- .. index:: int16_t
``int16_t`` is the C99 data type that corresponds to signed
sixteen bit integers. This data type is defined by RTEMS in a manner
that ensures it is portable across different target processors.
- .. index:: int32_t
``int32_t`` is the C99 data type that corresponds to signed
thirty-two bit integers. This data type is defined by RTEMS in a manner
that ensures it is portable across different target processors.
- .. index:: int64_t
``int64_t`` is the C99 data type that corresponds to signed
sixty-four bit integers. This data type is defined by RTEMS in a manner
that ensures it is portable across different target processors.
- .. index:: rtems_single
``rtems.single`` is the RTEMS data
type that corresponds to single precision floating point
on the target hardware.
This type is deprecated. Use ``float`` instead.
- .. index:: rtems_status_codes
``rtems.status_codes`` is the return type for most
RTEMS services. This is an enumerated type of approximately twenty-five
values. In general, when a service returns a particular status code, it
indicates that a very specific error condition has occurred.
- .. index:: rtems_task
``rtems.task`` is the return type for an
RTEMS Task.
- .. index:: rtems_task_argument
``rtems.task_argument`` is the data
type for the argument passed to each RTEMS task. In RTEMS 4.7
and older, this is an unsigned thirty-two bit integer. In
RTEMS 4.8 and newer, this is based upon the C99 type ``uintptr_t``
which is guaranteed to be an integer large enough to hold a
pointer on the target architecture.
- .. index:: rtems_task_begin_extension
``rtems.task_begin_extension`` is the
entry point for a task beginning execution user extension handler routine.
- .. index:: rtems_task_create_extension
``rtems.task_create_extension`` is the
entry point for a task creation execution user extension handler routine.
- .. index:: rtems_task_delete_extension
``rtems.task_delete_extension`` is the
entry point for a task deletion user extension handler routine.
- .. index:: rtems_task_entry
``rtems.task_entry`` is the address of
the entry point to an RTEMS task. It is equivalent to the
entry point of the function implementing the task.
- .. index:: rtems_task_exitted_extension
``rtems.task_exitted_extension`` is the
entry point for a task exitted user extension handler routine.
- .. index:: rtems_task_priority
``rtems.task_priority`` is the data type
used to manage and manipulate task priorities.
- .. index:: rtems_task_restart_extension
``rtems.task_restart_extension`` is the
entry point for a task restart user extension handler routine.
- .. index:: rtems_task_start_extension
``rtems.task_start_extension`` is the
entry point for a task start user extension handler routine.
- .. index:: rtems_task_switch_extension
``rtems.task_switch_extension`` is the
entry point for a task context switch user extension handler routine.
- .. index:: rtems_tcb
``rtems.tcb`` is the data structure associated
with each task in an RTEMS system.
- .. index:: rtems_time_of_day
``rtems.time_of_day`` is the data structure
used to manage and manipulate calendar time in RTEMS.
- .. index:: rtems_timer_service_routine
``rtems.timer_service_routine`` is the
return type for an RTEMS Timer Service Routine.
- .. index:: rtems_timer_service_routine_entry
``rtems.timer_service_routine_entry`` is the address of
the entry point to an RTEMS TSR. It is equivalent to the
entry point of the function implementing the TSR.
- .. index:: rtems_vector_number
``rtems.vector_number`` is the data
type used to manage and manipulate interrupt vector numbers.
- .. index:: uint8_t
``uint8_t`` is the C99 data type that corresponds to unsigned
eight bit integers. This data type is defined by RTEMS in a manner that
ensures it is portable across different target processors.
- .. index:: uint16_t
``uint16_t`` is the C99 data type that corresponds to unsigned
sixteen bit integers. This data type is defined by RTEMS in a manner
that ensures it is portable across different target processors.
- .. index:: uint32_t
``uint32_t`` is the C99 data type that corresponds to unsigned
thirty-two bit integers. This data type is defined by RTEMS in a manner
that ensures it is portable across different target processors.
- .. index:: uint64_t
``uint64_t`` is the C99 data type that corresponds to unsigned
sixty-four bit integers. This data type is defined by RTEMS in a manner
that ensures it is portable across different target processors.
- .. index:: uintptr_t
``uintptr_t`` is the C99 data type that corresponds to the
unsigned integer type that is of sufficient size to represent addresses
as unsigned integers. This data type is defined by RTEMS in a manner
that ensures it is portable across different target processors.
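The four-character object name convention above can be illustrated with a small, self-contained model of how ``rtems.build_name`` packs characters into an unsigned thirty-two bit value (a sketch for illustration; the real macro is supplied by RTEMS):

.. code:: c

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Model of rtems_build_name: pack four ASCII characters into a
     * 32-bit name, first character in the most significant byte. */
    static uint32_t build_name(char c1, char c2, char c3, char c4)
    {
        return ((uint32_t)(uint8_t)c1 << 24) | ((uint32_t)(uint8_t)c2 << 16) |
               ((uint32_t)(uint8_t)c3 << 8)  |  (uint32_t)(uint8_t)c4;
    }

    int main(void)
    {
        uint32_t name = build_name('T', 'A', 'S', 'K');
        assert(name == 0x5441534BU);  /* 'T'=0x54 'A'=0x41 'S'=0x53 'K'=0x4B */
        printf("name = 0x%08X\n", name);
        return 0;
    }

The same value can equally be treated as a plain numeric identifier, which is why the type is defined as an unsigned integer rather than a character array.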
.. COMMENT: COPYRIGHT (c) 1988-2008.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.
Scheduling Concepts
###################
.. index:: scheduling
.. index:: task scheduling
Introduction
============
The concept of scheduling in real-time systems dictates the ability to
provide immediate response to specific external events, particularly
the necessity of scheduling tasks to run within a specified time limit
after the occurrence of an event. For example, software embedded in
life-support systems used to monitor hospital patients must take instant
action if a change in the patient's status is detected.
The component of RTEMS responsible for providing this capability is
appropriately called the scheduler. The scheduler's sole purpose is
to allocate the all-important resource of processor time to the various
tasks competing for attention.
Scheduling Algorithms
=====================
.. index:: scheduling algorithms
RTEMS provides a plugin framework which allows it to support
multiple scheduling algorithms. RTEMS now includes multiple
scheduling algorithms in the SuperCore and the user can select which
of these they wish to use in their application. In addition,
the user can implement their own scheduling algorithm and
configure RTEMS to use it.
Supporting multiple scheduling algorithms gives the end user the
option to select the algorithm which is most appropriate to their use
case. Most real-time operating systems schedule tasks using a priority
based algorithm, possibly with preemption control. The classic
RTEMS scheduling algorithm which was the only algorithm available
in RTEMS 4.10 and earlier, is a priority based scheduling algorithm.
This scheduling algorithm is suitable for single core (i.e., non-SMP)
systems and is now known as the *Deterministic Priority Scheduler*.
Unless the user configures another scheduling algorithm, RTEMS will use
this on single core systems.
Priority Scheduling
-------------------
.. index:: priority scheduling
When using priority based scheduling, RTEMS allocates the processor using
a priority-based, preemptive algorithm augmented to provide round-robin
characteristics within individual priority groups. The goal of this
algorithm is to guarantee that the task which is executing on the
processor at any point in time is the one with the highest priority
among all tasks in the ready state.
When a task is added to the ready chain, it is placed behind all other
tasks of the same priority. This rule provides a round-robin within
priority group scheduling characteristic. This means that in a group of
equal priority tasks, tasks will execute in the order they become ready
or FIFO order. Even though there are ways to manipulate and adjust task
priorities, the most important rule to remember is:
- *Priority based scheduling algorithms will always select the
highest priority task that is ready to run when allocating the processor
to a task.*
Priority scheduling is the most commonly used scheduling algorithm.
It should be used by applications in which multiple tasks contend for
CPU time or other resources and there is a need to ensure certain tasks
are given priority over other tasks.
There are a few common methods of accomplishing the mechanics of this
algorithm. These ways involve a list or chain of tasks in the ready state.
- The least efficient method is to randomly place tasks in the ready
chain forcing the scheduler to scan the entire chain to determine which
task receives the processor.
- A more efficient method is to schedule the task by placing it
in the proper place on the ready chain based on the designated scheduling
criteria at the time it enters the ready state. Thus, when the processor
is free, the first task on the ready chain is allocated the processor.
- Another mechanism is to maintain a list of FIFOs per priority.
When a task is readied, it is placed on the rear of the FIFO for its
priority. This method is often used with a bitmap to assist in locating
which FIFOs have ready tasks on them.
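The last method above (a FIFO per priority plus a bitmap) can be sketched in portable C. This is an illustrative model with invented names, not the RTEMS implementation; it uses a 32-priority bitmap where, following the RTEMS convention, the numerically lowest priority value is the most important:

.. code:: c

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_PRIORITIES 32   /* model: one bitmap word */
    #define QUEUE_DEPTH    8

    typedef struct {
        int      tasks[NUM_PRIORITIES][QUEUE_DEPTH]; /* FIFO per priority */
        int      head[NUM_PRIORITIES], tail[NUM_PRIORITIES];
        uint32_t bitmap;   /* bit p set => priority p has ready tasks */
    } ready_queue;

    static void ready(ready_queue *rq, int task_id, int prio)
    {
        /* place the readied task at the rear of its priority's FIFO */
        rq->tasks[prio][rq->tail[prio]++ % QUEUE_DEPTH] = task_id;
        rq->bitmap |= UINT32_C(1) << prio;
    }

    static int dispatch(ready_queue *rq) /* highest priority ready task */
    {
        if (rq->bitmap == 0)
            return -1;   /* no ready tasks */
        int prio = __builtin_ctz(rq->bitmap); /* lowest set bit wins */
        int task = rq->tasks[prio][rq->head[prio]++ % QUEUE_DEPTH];
        if (rq->head[prio] == rq->tail[prio])
            rq->bitmap &= ~(UINT32_C(1) << prio);
        return task;
    }

    int main(void)
    {
        ready_queue rq = {0};
        ready(&rq, 1, 5);
        ready(&rq, 2, 2);
        ready(&rq, 3, 2);            /* same priority: FIFO order */
        assert(dispatch(&rq) == 2);  /* priority 2 beats priority 5 */
        assert(dispatch(&rq) == 3);
        assert(dispatch(&rq) == 1);
        assert(dispatch(&rq) == -1);
        puts("ok");
        return 0;
    }

The bitmap makes finding the most important non-empty FIFO a constant-time operation, which is the essence of the deterministic scheduler described next.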
RTEMS currently includes multiple priority based scheduling algorithms
as well as other algorithms which incorporate deadline. Each algorithm
is discussed in the following sections.
Deterministic Priority Scheduler
--------------------------------
This is the scheduler implementation which has always been in RTEMS.
After the 4.10 release series, it was factored into pluggable scheduler
selection. It schedules tasks using a priority based algorithm which
takes into account preemption. It is implemented using an array of FIFOs
with a FIFO per priority. It maintains a bitmap which is used to track
which priorities have ready tasks.
This algorithm is deterministic (i.e., predictable and fixed) in execution
time. This comes at the cost of using slightly over three (3) kilobytes
of RAM on a system configured to support 256 priority levels.
This scheduler is only aware of a single core.
Simple Priority Scheduler
-------------------------
This scheduler implementation has the same behaviour as the Deterministic
Priority Scheduler but uses only one linked list to manage all ready
tasks. When a task is readied, a linear search of that linked list is
performed to determine where to insert the newly readied task.
This algorithm uses much less RAM than the Deterministic Priority
Scheduler but is *O(n)* where *n* is the number of ready tasks.
In a small system with a small number of tasks, this will not be a
performance issue. Reducing RAM consumption is often critical in small
systems which are incapable of supporting a large number of tasks.
This scheduler is only aware of a single core.
Simple SMP Priority Scheduler
-----------------------------
This scheduler is based upon the Simple Priority Scheduler and is designed
to have the same behaviour on a single core system. But this scheduler
is capable of scheduling threads across multiple cores in an SMP system.
When given a choice of replacing one of two threads at equal priority
on different cores, this algorithm favors replacing threads which are
preemptible and have executed the longest.
This algorithm is non-deterministic. When scheduling, it must consider
which tasks are to be executed on each core while avoiding superfluous
task migrations.
Earliest Deadline First Scheduler
---------------------------------
.. index:: earliest deadline first scheduling
This is an alternative scheduler in RTEMS for single core applications.
The primary EDF advantage is high total CPU utilization (theoretically
up to 100%). It assumes that tasks have priorities equal to deadlines.
This EDF is initially preemptive, however, individual tasks may be declared
not-preemptive. Deadlines are declared using only the Rate Monotonic manager,
whose goal is to handle periodic behavior. The period is always equal to the deadline. All
ready tasks reside in a single ready queue implemented using a red-black tree.
This implementation of EDF schedules two different types of task
priority types while each task may switch between the two types within
its execution. If a task does have a deadline declared using the Rate
Monotonic manager, the task is deadline-driven and its priority is equal
to its deadline. Conversely, if a task does not have any deadline, or if
the deadline is cancelled using the Rate Monotonic manager, the task is
considered a background task with priority equal to that assigned
upon initialization in the same manner as for priority scheduler. Each
background task is of a lower importance than each deadline-driven one
and is scheduled when no deadline-driven task and no higher priority
background task is ready to run.
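The two-level ordering described above (deadline-driven tasks always ahead of background tasks, ties broken by earliest deadline or by fixed priority) can be modeled with a single comparison function. This is an illustrative sketch with invented names, not RTEMS code:

.. code:: c

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        bool has_deadline;  /* declared via the Rate Monotonic manager */
        long deadline;      /* absolute deadline, valid if has_deadline */
        int  base_priority; /* background priority, lower number wins */
    } edf_task;

    /* Returns true if task a should run before task b. */
    static bool edf_before(const edf_task *a, const edf_task *b)
    {
        if (a->has_deadline != b->has_deadline)
            return a->has_deadline;           /* deadline beats background */
        if (a->has_deadline)
            return a->deadline < b->deadline; /* earliest deadline first */
        return a->base_priority < b->base_priority;
    }

    int main(void)
    {
        edf_task urgent     = { true,  40, 200 };
        edf_task periodic   = { true, 100,  50 };
        edf_task background = { false,  0,   1 };

        assert(edf_before(&urgent, &periodic));     /* earlier deadline */
        assert(edf_before(&periodic, &background)); /* deadline > background */
        assert(!edf_before(&background, &urgent));
        puts("ok");
        return 0;
    }

In the real scheduler this comparison defines the order of the red-black tree that holds the ready queue.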
Every deadline-driven scheduling algorithm requires means for tasks
to claim a deadline. The Rate Monotonic Manager is responsible for
handling periodic execution. In RTEMS periods are equal to deadlines,
thus if a task announces a period, it has to finish before the
end of that period. The call to ``rtems_rate_monotonic_period``
passes the scheduler the length of the oncoming deadline. Moreover, the
``rtems_rate_monotonic_cancel`` and ``rtems_rate_monotonic_delete``
calls clear the deadlines assigned to the task.
Constant Bandwidth Server Scheduling (CBS)
------------------------------------------
.. index:: constant bandwidth server scheduling
This is an alternative scheduler in RTEMS for single core applications.
The CBS is a budget aware extension of EDF scheduler. The main goal of this
scheduler is to ensure temporal isolation of tasks, meaning that a task's
execution in terms of meeting deadlines must not be influenced by other
tasks as if they were run on multiple independent processors.
Each task can be assigned a server (current implementation supports only
one task per server). The server is characterized by period (deadline)
and computation time (budget). The ratio budget/period yields bandwidth,
which is the fraction of CPU to be reserved by the scheduler for each
subsequent period.
The CBS is equipped with a set of rules applied to tasks attached to servers
ensuring that a deadline miss caused by another task cannot occur.
In case a task breaks one of the rules, its priority is pulled to background
until the end of its period and then restored again. The rules are:
- A task cannot exceed its registered budget.
- A task cannot be unblocked when the ratio between its remaining budget
and its remaining deadline is higher than its declared bandwidth.
The CBS provides an extensive API. Unlike plain EDF, the
``rtems_rate_monotonic_period`` call does not declare a deadline, because
that is done using the CBS API; this call only announces the next period.
Scheduling Modification Mechanisms
==================================
.. index:: scheduling mechanisms
RTEMS provides four mechanisms which allow the user to alter the task
scheduling decisions:
- user-selectable task priority level
- task preemption control
- task timeslicing control
- manual round-robin selection
Each of these methods provides a powerful capability to customize sets
of tasks to satisfy the unique and particular requirements encountered
in custom real-time applications. Although each mechanism operates
independently, there is a precedence relationship which governs the
effects of scheduling modifications. The evaluation order for scheduling
characteristics is always priority, preemption mode, and timeslicing.
When reading the descriptions of timeslicing and manual round-robin
it is important to keep in mind that preemption (if enabled) of a task
by higher priority tasks will occur as required, overriding the other
factors presented in the description.
Task Priority and Scheduling
----------------------------
.. index:: task priority
The most significant task scheduling modification mechanism is the ability
for the user to assign a priority level to each individual task when it
is created and to alter a task's priority at run-time. RTEMS supports
up to 255 priority levels. Level 255 is the lowest priority and level
1 is the highest.
Preemption
----------

.. index:: preemption
Another way the user can alter the basic scheduling algorithm is by
manipulating the preemption mode flag (``RTEMS.PREEMPT_MASK``)
of individual tasks. If preemption is disabled for a task
(``RTEMS.NO_PREEMPT``), then the task will not relinquish
control of the processor until it terminates, blocks, or re-enables
preemption. Even tasks which become ready to run and possess higher
priority levels will not be allowed to execute. Note that the preemption
setting has no effect on the manner in which a task is scheduled.
It only applies once a task has control of the processor.
Timeslicing
-----------

.. index:: timeslicing
.. index:: round robin scheduling
Timeslicing or round-robin scheduling is an additional method which
can be used to alter the basic scheduling algorithm. Like preemption,
timeslicing is specified on a task by task basis using the timeslicing
mode flag (``RTEMS.TIMESLICE_MASK``). If timeslicing is
enabled for a task (``RTEMS.TIMESLICE``), then RTEMS will
limit the amount of time the task can execute before the processor is
allocated to another task. Each tick of the real-time clock reduces
the currently running task's timeslice. When the execution time equals
the timeslice, RTEMS will dispatch another task of the same priority
to execute. If there are no other tasks of the same priority ready to
execute, then the current task is allocated an additional timeslice and
continues to run. Remember that a higher priority task will preempt
the task (unless preemption is disabled) as soon as it is ready to run,
even if the task has not used up its entire timeslice.
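The tick-by-tick rule described above can be simulated in a few lines. This is a model of the rule only, with an invented quantum and function names, not the RTEMS clock tick handler:

.. code:: c

    #include <assert.h>
    #include <stdio.h>

    #define QUANTUM 5   /* model: ticks per timeslice */

    /* One clock tick for the running task: decrements its remaining
     * slice and reports whether an equal-priority peer is dispatched. */
    static int tick(int *remaining, int peer_ready_at_same_priority)
    {
        if (--*remaining > 0)
            return 0;               /* slice not exhausted, keep running */
        *remaining = QUANTUM;       /* a fresh timeslice is allocated */
        return peer_ready_at_same_priority; /* 1 => round-robin to peer */
    }

    int main(void)
    {
        int slice = QUANTUM;
        for (int i = 0; i < QUANTUM - 1; i++)
            assert(tick(&slice, 1) == 0); /* runs through its quantum */
        assert(tick(&slice, 1) == 1);     /* then yields to the peer */
        assert(tick(&slice, 0) == 0);     /* with no peer it continues */
        puts("ok");
        return 0;
    }

Note that preemption by a higher priority task, if enabled, would still occur at any tick regardless of the remaining slice.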
Manual Round-Robin
------------------

.. index:: manual round robin
The final mechanism for altering the RTEMS scheduling algorithm is
called manual round-robin. Manual round-robin is invoked by using the
``rtems.task_wake_after`` directive with a time interval
of ``RTEMS.YIELD_PROCESSOR``. This allows a task to give
up the processor and be immediately returned to the ready chain at the
end of its priority group. If no other tasks of the same priority are
ready to run, then the task does not lose control of the processor.
Dispatching Tasks
=================

.. index:: dispatching
The dispatcher is the RTEMS component responsible for
allocating the processor to a ready task. In order to allocate
the processor to one task, it must be deallocated or retrieved
from the task currently using it. This involves a concept
called a context switch. To perform a context switch, the
dispatcher saves the context of the current task and restores
the context of the task which has been allocated to the
processor. Saving and restoring a task's context is the
storing/loading of all the essential information about a task to
enable it to continue execution without any effects of the
interruption. For example, the contents of a task's register
set must be the same when it is given the processor as they were
when it was taken away. All of the information that must be
saved or restored for a context switch is located either in the
TCB or on the task's stacks.
Tasks that utilize a numeric coprocessor and are created with the
``RTEMS.FLOATING_POINT`` attribute require additional
operations during a context switch. These additional operations
are necessary to save and restore the floating point context of
``RTEMS.FLOATING_POINT`` tasks. To avoid unnecessary save
and restore operations, the state of the numeric coprocessor is only
saved when a ``RTEMS.FLOATING_POINT`` task is dispatched
and that task was not the last task to utilize the coprocessor.
Task State Transitions
======================

.. index:: task state transitions
Tasks in an RTEMS system must always be in one of the
five allowable task states. These states are: executing, ready,
blocked, dormant, and non-existent.
A task occupies the non-existent state before
a ``rtems.task_create`` has been issued on its behalf.
A task enters the non-existent state from any other state in the system
when it is deleted with the ``rtems.task_delete`` directive.
While a task occupies this state it does not have a TCB or a task ID
assigned to it; therefore, no other tasks in the system may reference
this task.
When a task is created via the ``rtems.task_create``
directive it enters the dormant state. This state is not entered through
any other means. Although the task exists in the system, it cannot
actively compete for system resources. It will remain in the dormant
state until it is started via the ``rtems.task_start``
directive, at which time it enters the ready state. The task is now
permitted to be scheduled for the processor and to compete for other
system resources.
.. code:: c
+-------------------------------------------------------------+
| Non-existent |
| +-------------------------------------------------------+ |
| | | |
| | | |
| | Creating +---------+ Deleting | |
| | -------------------> | Dormant | -------------------> | |
| | +---------+ | |
| | | | |
| | Starting | | |
| | | | |
| | V Deleting | |
| | +-------> +-------+ -------------------> | |
| | Yielding / +----- | Ready | ------+ | |
| | / / +-------+ <--+ \\ | |
| | / / \\ \\ Blocking | |
| | / / Dispatching Readying \\ \\ | |
| | / V \\ V | |
| | +-----------+ Blocking +---------+ | |
| | | Executing | --------------> | Blocked | | |
| | +-----------+ +---------+ | |
| | | |
| | | |
| +-------------------------------------------------------+ |
| Non-existent |
+-------------------------------------------------------------+
A task occupies the blocked state whenever it is unable to be scheduled
to run. A running task may block itself or be blocked by other tasks in
the system. The running task blocks itself through voluntary operations
that cause the task to wait. The only way a task can block a task other
than itself is with the ``rtems.task_suspend`` directive.
A task enters the blocked state due to any of the following conditions:
- A task issues a ``rtems.task_suspend`` directive
which blocks either itself or another task in the system.
- The running task issues a ``rtems.barrier_wait``
directive.
- The running task issues a ``rtems.message_queue_receive``
directive with the wait option and the message queue is empty.
- The running task issues an ``rtems.event_receive``
directive with the wait option and the currently pending events do not
satisfy the request.
- The running task issues a ``rtems.semaphore_obtain``
directive with the wait option and the requested semaphore is unavailable.
- The running task issues a ``rtems.task_wake_after``
directive which blocks the task for the given time interval. If the time
interval specified is zero, the task yields the processor and remains
in the ready state.
- The running task issues a ``rtems.task_wake_when``
directive which blocks the task until the requested date and time arrives.
- The running task issues a ``rtems.rate_monotonic_period``
directive and must wait for the specified rate monotonic period
to conclude.
- The running task issues a ``rtems.region_get_segment``
directive with the wait option and there is not an available segment large
enough to satisfy the task's request.
A blocked task may also be suspended. Therefore, both the suspension
and the blocking condition must be removed before the task becomes ready
to run again.
A task occupies the ready state when it is able to be scheduled to run,
but currently does not have control of the processor. Tasks of the same
or higher priority will yield the processor by either becoming blocked,
completing their timeslice, or being deleted. All tasks with the same
priority will execute in FIFO order. A task enters the ready state due
to any of the following conditions:
- A running task issues a ``rtems.task_resume``
directive for a task that is suspended and the task is not blocked
waiting on any resource.
- A running task issues a ``rtems.message_queue_send``,
``rtems.message_queue_broadcast``, or ``rtems.message_queue_urgent`` directive
which posts a message to the queue on which the blocked task is
waiting.
- A running task issues an ``rtems.event_send``
directive which sends an event condition to a task which is blocked
waiting on that event condition.
- A running task issues a ``rtems.semaphore_release``
directive which releases the semaphore on which the blocked task is
waiting.
- A timeout interval expires for a task which was blocked
by a call to the ``rtems.task_wake_after`` directive.
- A timeout period expires for a task which was blocked by a
call to the ``rtems.task_wake_when`` directive.
- A running task issues a ``rtems.region_return_segment``
directive which releases a segment to the region on which the blocked task
is waiting and a resulting segment is large enough to satisfy
the task's request.
- A rate monotonic period expires for a task which was blocked
by a call to the ``rtems.rate_monotonic_period`` directive.
- A timeout interval expires for a task which was blocked
waiting on a message, event, semaphore, or segment with a
timeout specified.
- A running task issues a directive which deletes a
message queue, a semaphore, or a region on which the blocked
task is waiting.
- A running task issues a ``rtems.task_restart``
directive for the blocked task.
- The running task, with its preemption mode enabled, may
be made ready by issuing any of the directives that may unblock
a task with a higher priority. This directive may be issued
from the running task itself or from an ISR.
A ready task occupies the executing state when it has
control of the CPU. A task enters the executing state due to
any of the following conditions:
- The task is the highest priority ready task in the
system.
- The running task blocks and the task is next in the
scheduling queue. The task may be of equal priority as in
round-robin scheduling or the task may possess the highest
priority of the remaining ready tasks.
- The running task may reenable its preemption mode and a
task exists in the ready queue that has a higher priority than
the running task.
- The running task lowers its own priority and another
task is of higher priority as a result.
- The running task raises the priority of a task above its
own and the running task is in preemption mode.
.. COMMENT: COPYRIGHT (c) 1988-2013.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.
.. COMMENT: Open Issues
.. COMMENT: - nicen up the tables
.. COMMENT: - use math mode to print formulas
Semaphore Manager
#################
.. index:: semaphores
.. index:: binary semaphores
.. index:: counting semaphores
.. index:: mutual exclusion
Introduction
============
The semaphore manager utilizes standard Dijkstra
counting semaphores to provide synchronization and mutual
exclusion capabilities. The directives provided by the
semaphore manager are:
- ``rtems.semaphore_create`` - Create a semaphore
- ``rtems.semaphore_ident`` - Get ID of a semaphore
- ``rtems.semaphore_delete`` - Delete a semaphore
- ``rtems.semaphore_obtain`` - Acquire a semaphore
- ``rtems.semaphore_release`` - Release a semaphore
- ``rtems.semaphore_flush`` - Unblock all tasks waiting on a semaphore
- ``rtems.semaphore_set_priority`` - Set priority by
scheduler for a semaphore
Background
==========
A semaphore can be viewed as a protected variable
whose value can be modified only with the ``rtems.semaphore_create``,
``rtems.semaphore_obtain``, and ``rtems.semaphore_release`` directives. RTEMS
supports both binary and counting semaphores. A binary semaphore
is restricted to values of zero or one, while a counting
semaphore can assume any non-negative integer value.
A binary semaphore can be used to control access to a
single resource. In particular, it can be used to enforce
mutual exclusion for a critical section in user code. In this
instance, the semaphore would be created with an initial count
of one to indicate that no task is executing the critical
section of code. Upon entry to the critical section, a task
must issue the ``rtems.semaphore_obtain``
directive to prevent other tasks from entering the critical section.
Upon exit from the critical section, the task must issue the
``rtems.semaphore_release`` directive to
allow another task to execute the critical section.
A counting semaphore can be used to control access to
a pool of two or more resources. For example, access to three
printers could be administered by a semaphore created with an
initial count of three. When a task requires access to one of
the printers, it issues the ``rtems.semaphore_obtain``
directive to obtain access to a printer. If a printer is not currently
available, the task can wait for a printer to become available or return
immediately. When the task has completed printing, it should
issue the ``rtems.semaphore_release``
directive to allow other tasks access to the printer.
Task synchronization may be achieved by creating a
semaphore with an initial count of zero. One task waits for the
arrival of another task by issuing a ``rtems.semaphore_obtain``
directive when it reaches a synchronization point. The other task
performs a corresponding ``rtems.semaphore_release``
operation when it reaches its synchronization point, thus unblocking
the pending task.
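The obtain/release semantics described above can be modeled with a plain counter. This sketch captures the counting semantics only (names are invented); real RTEMS semaphores additionally manage blocking, wait-queue disciplines, and priority protocols:

.. code:: c

    #include <assert.h>
    #include <stdio.h>

    typedef struct { int count; } sem_model;

    /* Non-blocking model of semaphore obtain: returns 0 on success,
     * -1 if the semaphore is unavailable (a real task could wait). */
    static int sem_obtain(sem_model *s)
    {
        if (s->count == 0)
            return -1;
        s->count--;
        return 0;
    }

    static void sem_release(sem_model *s) { s->count++; }

    int main(void)
    {
        sem_model mutex    = { 1 }; /* binary: one critical section */
        sem_model printers = { 3 }; /* counting: pool of three printers */
        sem_model sync     = { 0 }; /* synchronization: starts unavailable */

        assert(sem_obtain(&mutex) == 0);  /* enter critical section */
        assert(sem_obtain(&mutex) == -1); /* a second task would wait */
        sem_release(&mutex);              /* leave critical section */

        assert(sem_obtain(&sync) == -1);  /* waiter arrives first ...  */
        sem_release(&sync);               /* ... peer hits sync point  */
        assert(sem_obtain(&sync) == 0);

        for (int i = 0; i < 3; i++)
            assert(sem_obtain(&printers) == 0);
        assert(sem_obtain(&printers) == -1); /* all printers busy */
        puts("ok");
        return 0;
    }

The three initial counts correspond to the three use cases above: mutual exclusion, resource pools, and task synchronization.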
Nested Resource Access
----------------------
Deadlock occurs when a task owning a binary semaphore
attempts to acquire that same semaphore and blocks as a result.
Since the semaphore is allocated to a task, it cannot be
deleted. Therefore, the task that currently holds the semaphore
and is also blocked waiting for that semaphore will never
execute again.
RTEMS addresses this problem by allowing the task
holding the binary semaphore to obtain the same binary semaphore
multiple times in a nested manner. Each ``rtems.semaphore_obtain``
must be accompanied by a corresponding ``rtems.semaphore_release``. The semaphore will
only be made available for acquisition by other tasks when the
outermost ``rtems.semaphore_obtain`` is matched with
a ``rtems.semaphore_release``.
Simple binary semaphores do not allow nested access and so can be used for task synchronization.
Priority Inversion
------------------
Priority inversion is a form of indefinite
postponement which is common in multitasking, preemptive
executives with shared resources. Priority inversion occurs
when a high priority task requests access to a shared resource
which is currently allocated to a low priority task. The high
priority task must block until the low priority task releases
the resource. This problem is exacerbated when the low priority
task is prevented from executing by one or more medium priority
tasks. Because the low priority task is not executing, it
cannot complete its interaction with the resource and release
that resource. The high priority task is effectively prevented
from executing by lower priority tasks.
Priority Inheritance
--------------------
Priority inheritance is an algorithm that calls for
the lower priority task holding a resource to have its priority
increased to that of the highest priority task blocked waiting
for that resource. Each time a task blocks attempting to obtain
the resource, the task holding the resource may have its
priority increased.
On SMP configurations, in case the task holding the resource and the task that
blocks attempting to obtain the resource are in different scheduler instances,
the priority of the holder is raised to the pseudo-interrupt priority (priority
boosting). The pseudo-interrupt priority is the highest priority.
RTEMS supports priority inheritance for local, binary
semaphores that use the priority task wait queue blocking
discipline. When a task of higher priority than the task
holding the semaphore blocks, the priority of the task holding
the semaphore is increased to that of the blocking task. When
the task holding the semaphore completely releases the binary
semaphore (i.e. not a nested release), the holder's priority
is restored to the value it had before any higher priority was
inherited.
The RTEMS implementation of the priority inheritance
algorithm takes into account the scenario in which a task holds
more than one binary semaphore.  The holding task will execute
at the higher of the highest ceiling priority of any semaphore it
holds and the priority of the highest priority task blocked
waiting for any of those semaphores.  Only when the task
releases ALL of the binary semaphores it holds will its priority
be restored to the normal value.
Priority Ceiling
----------------
Priority ceiling is an algorithm that calls for the
lower priority task holding a resource to have its priority
increased to that of the highest priority task which will EVER
block waiting for that resource. This algorithm addresses the
problem of priority inversion although it avoids the possibility
of changing the priority of the task holding the resource
multiple times. The priority ceiling algorithm will only change
the priority of the task holding the resource a maximum of one
time. The ceiling priority is set at creation time and must be
the priority of the highest priority task which will ever
attempt to acquire that semaphore.
RTEMS supports priority ceiling for local, binary
semaphores that use the priority task wait queue blocking
discipline. When a task of lower priority than the ceiling
priority successfully obtains the semaphore, its priority is
raised to the ceiling priority.  When the task holding the
semaphore completely releases the binary semaphore (i.e. not for a
nested release), the holder's priority is restored to the value it had
before any higher priority was put into effect.
The need to identify the highest priority task which
will attempt to obtain a particular semaphore can be a difficult
task in a large, complicated system. Although the priority
ceiling algorithm is more efficient than the priority
inheritance algorithm with respect to the maximum number of task
priority changes which may occur while a task holds a particular
semaphore, the priority inheritance algorithm is more forgiving
in that it does not require this a priori information.
The RTEMS implementation of the priority ceiling
algorithm takes into account the scenario in which a task holds
more than one binary semaphore.  The holding task will execute
at the higher of the highest ceiling priority of any semaphore it
holds and the priority of the highest priority task blocked
waiting for any of those semaphores.  Only when the task
releases ALL of the binary semaphores it holds will its priority
be restored to the normal value.
Multiprocessor Resource Sharing Protocol
----------------------------------------
The Multiprocessor Resource Sharing Protocol (MrsP) is defined in *A.
Burns and A.J. Wellings, A Schedulability Compatible Multiprocessor Resource
Sharing Protocol - MrsP, Proceedings of the 25th Euromicro Conference on
Real-Time Systems (ECRTS 2013), July 2013*. It is a generalization of the
Priority Ceiling Protocol to SMP systems. Each MrsP semaphore uses a ceiling
priority per scheduler instance. These ceiling priorities can be specified
with ``rtems_semaphore_set_priority()``. A task obtaining or owning a MrsP
semaphore will execute with the ceiling priority for its scheduler instance as
specified by the MrsP semaphore object. Tasks waiting to get ownership of a
MrsP semaphore will not relinquish the processor voluntarily. In case the
owner of a MrsP semaphore gets preempted it can ask all tasks waiting for this
semaphore to help out and temporarily borrow the right to execute on one of
their assigned processors.
Building a Semaphore Attribute Set
----------------------------------
In general, an attribute set is built by a bitwise OR
of the desired attribute components. The following table lists
the set of valid semaphore attributes:
- ``RTEMS.FIFO`` - tasks wait by FIFO (default)

- ``RTEMS.PRIORITY`` - tasks wait by priority

- ``RTEMS.BINARY_SEMAPHORE`` - restrict values to 0 and 1

- ``RTEMS.COUNTING_SEMAPHORE`` - no restriction on values (default)

- ``RTEMS.SIMPLE_BINARY_SEMAPHORE`` - restrict values to 0 and 1, do
  not allow nested access, allow deletion of locked semaphore.

- ``RTEMS.NO_INHERIT_PRIORITY`` - do not use priority inheritance
  (default)

- ``RTEMS.INHERIT_PRIORITY`` - use priority inheritance

- ``RTEMS.NO_PRIORITY_CEILING`` - do not use priority ceiling (default)

- ``RTEMS.PRIORITY_CEILING`` - use priority ceiling

- ``RTEMS.NO_MULTIPROCESSOR_RESOURCE_SHARING`` - do not use the
  Multiprocessor Resource Sharing Protocol (default)

- ``RTEMS.MULTIPROCESSOR_RESOURCE_SHARING`` - use the Multiprocessor
  Resource Sharing Protocol

- ``RTEMS.LOCAL`` - local semaphore (default)

- ``RTEMS.GLOBAL`` - global semaphore
Attribute values are specifically designed to be
mutually exclusive, therefore bitwise OR and addition operations
are equivalent as long as each attribute appears exactly once in
the component list. An attribute listed as a default is not
required to appear in the attribute list, although it is a good
programming practice to specify default attributes. If all
defaults are desired, the attribute ``RTEMS.DEFAULT_ATTRIBUTES``
should be specified on this call.
This example demonstrates the attribute_set parameter needed to create a
local semaphore with the task priority waiting queue discipline.  The
attribute_set parameter passed to the ``rtems.semaphore_create``
directive could be either ``RTEMS.PRIORITY`` or ``RTEMS.LOCAL or
RTEMS.PRIORITY``.  The attribute_set parameter can be set to
``RTEMS.PRIORITY`` because ``RTEMS.LOCAL`` is the default for all
created semaphores.  If a similar semaphore were to be known globally,
then the attribute_set parameter would be
``RTEMS.GLOBAL or RTEMS.PRIORITY``.
Some combinations of these attributes are invalid.  For example, priority
ordered blocking discipline must be applied to a binary semaphore in order
to use either the priority inheritance or priority ceiling functionality.
Building a SEMAPHORE_OBTAIN Option Set
--------------------------------------
In general, an option is built by a bitwise OR of the
desired option components.  The set of valid options for the
``rtems.semaphore_obtain`` directive are listed
in the following table:
- ``RTEMS.WAIT`` - task will wait for semaphore (default)
- ``RTEMS.NO_WAIT`` - task should not wait
Option values are specifically designed to be mutually exclusive,
therefore bitwise OR and addition operations are equivalent as long as
each attribute appears exactly once in the component list. An option
listed as a default is not required to appear in the list, although it is
a good programming practice to specify default options. If all defaults
are desired, the option ``RTEMS.DEFAULT_OPTIONS`` should be
specified on this call.
This example demonstrates the option parameter needed
to poll for a semaphore.  The option parameter passed to the
``rtems.semaphore_obtain`` directive should be ``RTEMS.NO_WAIT``.
Operations
==========
Creating a Semaphore
--------------------
The ``rtems.semaphore_create`` directive creates a binary or
counting semaphore with a user-specified name as well as an
initial count. If a binary semaphore is created with a count of
zero (0) to indicate that it has been allocated, then the task
creating the semaphore is considered the current holder of the
semaphore. At create time the method for ordering waiting tasks
in the semaphore's task wait queue (by FIFO or task priority) is
specified. Additionally, the priority inheritance or priority
ceiling algorithm may be selected for local, binary semaphores
that use the priority task wait queue blocking discipline. If
the priority ceiling algorithm is selected, then the highest
priority of any task which will attempt to obtain this semaphore
must be specified. RTEMS allocates a Semaphore Control Block
(SMCB) from the SMCB free list. This data structure is used by
RTEMS to manage the newly created semaphore. Also, a unique
semaphore ID is generated and returned to the calling task.
Obtaining Semaphore IDs
-----------------------
When a semaphore is created, RTEMS generates a unique
semaphore ID and assigns it to the created semaphore until it is
deleted.  The semaphore ID may be obtained by either of two
methods.  First, as the result of an invocation of the
``rtems.semaphore_create`` directive, the semaphore ID is stored in
a user provided location.  Second, the semaphore ID may be obtained
later using the ``rtems.semaphore_ident`` directive.  The semaphore
ID is used by other semaphore manager directives to access this
semaphore.
Acquiring a Semaphore
---------------------
The ``rtems.semaphore_obtain`` directive is used to acquire the
specified semaphore.  A simplified version of the
``rtems.semaphore_obtain`` directive can be described as follows:
.. code:: c

    if semaphore's count is greater than zero
       then decrement semaphore's count
       else wait for release of semaphore

    return SUCCESSFUL
When the semaphore cannot be immediately acquired,
one of the following situations applies:
- By default, the calling task will wait forever to
acquire the semaphore.
- Specifying ``RTEMS.NO_WAIT`` forces an immediate return
with an error status code.
- Specifying a timeout limits the interval the task will
wait before returning with an error status code.
If the task waits to acquire the semaphore, then it
is placed in the semaphore's task wait queue in either FIFO or
task priority order.  If the task blocked waiting for a binary
semaphore using priority inheritance and the task's priority is
greater than that of the task currently holding the semaphore,
then the holding task will inherit the priority of the blocking
task. All tasks waiting on a semaphore are returned an error
code when the semaphore is deleted.
When a task successfully obtains a semaphore using
priority ceiling and the priority ceiling for this semaphore is
greater than that of the holder, then the holder's priority will
be elevated.
Releasing a Semaphore
---------------------
The ``rtems.semaphore_release`` directive is used to release
the specified semaphore.  A simplified version of the
``rtems.semaphore_release`` directive can be described as
follows:
.. code:: c

    if no tasks are waiting on this semaphore
       then increment semaphore's count
       else assign semaphore to a waiting task

    return SUCCESSFUL
If this is the outermost release of a binary
semaphore that uses priority inheritance or priority ceiling and
the task does not currently hold any other binary semaphores,
then the task performing the ``rtems.semaphore_release``
will have its priority restored to its normal value.
Deleting a Semaphore
--------------------
The ``rtems.semaphore_delete`` directive removes a semaphore
from the system and frees its control block. A semaphore can be
deleted by any local task that knows the semaphore's ID.  As a
result of this directive, all tasks blocked waiting to acquire
the semaphore will be readied and returned a status code which
indicates that the semaphore was deleted. Any subsequent
references to the semaphore's name and ID are invalid.
Directives
==========
This section details the semaphore manager's
directives.  A subsection is dedicated to each of this manager's
directives and describes the calling sequence, related
constants, usage, and status codes.
SEMAPHORE_CREATE - Create a semaphore
-------------------------------------
.. index:: create a semaphore
**CALLING SEQUENCE:**
.. code:: ada

    procedure Semaphore_Create (
       Name             : in     RTEMS.Name;
       Count            : in     RTEMS.Unsigned32;
       Attribute_Set    : in     RTEMS.Attribute;
       Priority_Ceiling : in     RTEMS.Task_Priority;
       ID               :    out RTEMS.ID;
       Result           :    out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - semaphore created successfully
``RTEMS.INVALID_NAME`` - invalid semaphore name
``RTEMS.INVALID_ADDRESS`` - ``id`` is NULL
``RTEMS.TOO_MANY`` - too many semaphores created
``RTEMS.NOT_DEFINED`` - invalid attribute set
``RTEMS.INVALID_NUMBER`` - invalid starting count for binary semaphore
``RTEMS.MP_NOT_CONFIGURED`` - multiprocessing not configured
``RTEMS.TOO_MANY`` - too many global objects
**DESCRIPTION:**
This directive creates a semaphore which resides on
the local node. The created semaphore has the user-defined name
specified in name and the initial count specified in count. For
control and maintenance of the semaphore, RTEMS allocates and
initializes a SMCB. The RTEMS-assigned semaphore id is returned
in id. This semaphore id is used with other semaphore related
directives to access the semaphore.
Specifying PRIORITY in attribute_set causes tasks
waiting for a semaphore to be serviced according to task
priority. When FIFO is selected, tasks are serviced in First
In-First Out order.
**NOTES:**
This directive will not cause the calling task to be
preempted.
The priority inheritance and priority ceiling
algorithms are only supported for local, binary semaphores that
use the priority task wait queue blocking discipline.
The following semaphore attribute constants are
defined by RTEMS:
- ``RTEMS.FIFO`` - tasks wait by FIFO (default)

- ``RTEMS.PRIORITY`` - tasks wait by priority

- ``RTEMS.BINARY_SEMAPHORE`` - restrict values to 0 and 1

- ``RTEMS.COUNTING_SEMAPHORE`` - no restriction on values (default)

- ``RTEMS.SIMPLE_BINARY_SEMAPHORE`` - restrict values to 0 and 1, block
  on nested access, allow deletion of locked semaphore.

- ``RTEMS.NO_INHERIT_PRIORITY`` - do not use priority inheritance
  (default)

- ``RTEMS.INHERIT_PRIORITY`` - use priority inheritance

- ``RTEMS.NO_PRIORITY_CEILING`` - do not use priority ceiling (default)

- ``RTEMS.PRIORITY_CEILING`` - use priority ceiling

- ``RTEMS.NO_MULTIPROCESSOR_RESOURCE_SHARING`` - do not use the
  Multiprocessor Resource Sharing Protocol (default)

- ``RTEMS.MULTIPROCESSOR_RESOURCE_SHARING`` - use the Multiprocessor
  Resource Sharing Protocol

- ``RTEMS.LOCAL`` - local semaphore (default)

- ``RTEMS.GLOBAL`` - global semaphore
Semaphores should not be made global unless remote
tasks must interact with the created semaphore. This is to
avoid the system overhead incurred by the creation of a global
semaphore.  When a global semaphore is created, the semaphore's
name and id must be transmitted to every node in the system for
insertion in the local copy of the global object table.
Note that some combinations of attributes are not valid. See the
earlier discussion on this.
The total number of global objects, including semaphores, is limited by
the maximum_global_objects field in the Configuration Table.
It is not allowed to create an initially locked MrsP semaphore and the
``RTEMS.INVALID_NUMBER`` status code will be returned on SMP
configurations in this case. This prevents lock order reversal problems with
the allocator mutex.
SEMAPHORE_IDENT - Get ID of a semaphore
---------------------------------------
.. index:: get ID of a semaphore
.. index:: obtain ID of a semaphore
**CALLING SEQUENCE:**
.. code:: ada

    procedure Semaphore_Ident (
       Name   : in     RTEMS.Name;
       Node   : in     RTEMS.Unsigned32;
       ID     :    out RTEMS.ID;
       Result :    out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - semaphore identified successfully
``RTEMS.INVALID_NAME`` - semaphore name not found
``RTEMS.INVALID_NODE`` - invalid node id
**DESCRIPTION:**
This directive obtains the semaphore id associated
with the semaphore name. If the semaphore name is not unique,
then the semaphore id will match one of the semaphores with that
name. However, this semaphore id is not guaranteed to
correspond to the desired semaphore. The semaphore id is used
by other semaphore related directives to access the semaphore.
**NOTES:**
This directive will not cause the running task to be
preempted.
If node is ``RTEMS.SEARCH_ALL_NODES``, all nodes are searched
with the local node being searched first. All other nodes are
searched with the lowest numbered node searched first.
If node is a valid node number which does not
represent the local node, then only the semaphores exported by
the designated node are searched.
This directive does not generate activity on remote
nodes. It accesses only the local copy of the global object
table.
SEMAPHORE_DELETE - Delete a semaphore
-------------------------------------
.. index:: delete a semaphore
**CALLING SEQUENCE:**
.. code:: ada

    procedure Semaphore_Delete (
       ID     : in     RTEMS.ID;
       Result :    out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - semaphore deleted successfully
``RTEMS.INVALID_ID`` - invalid semaphore id
``RTEMS.RESOURCE_IN_USE`` - binary semaphore is in use
``RTEMS.ILLEGAL_ON_REMOTE_OBJECT`` - cannot delete remote semaphore
**DESCRIPTION:**
This directive deletes the semaphore specified by ``id``.
All tasks blocked waiting to acquire the semaphore will be
readied and returned a status code which indicates that the
semaphore was deleted. The SMCB for this semaphore is reclaimed
by RTEMS.
**NOTES:**
The calling task will be preempted if it is enabled
by the task's execution mode and a higher priority local task is
waiting on the deleted semaphore. The calling task will NOT be
preempted if all of the tasks that are waiting on the semaphore
are remote tasks.
The calling task does not have to be the task that
created the semaphore. Any local task that knows the semaphore
id can delete the semaphore.
When a global semaphore is deleted, the semaphore id
must be transmitted to every node in the system for deletion
from the local copy of the global object table.
The semaphore must reside on the local node, even if
the semaphore was created with the ``RTEMS.GLOBAL`` option.
Proxies, used to represent remote tasks, are
reclaimed when the semaphore is deleted.
SEMAPHORE_OBTAIN - Acquire a semaphore
--------------------------------------
.. index:: obtain a semaphore
.. index:: lock a semaphore
**CALLING SEQUENCE:**
.. code:: ada

    procedure Semaphore_Obtain (
       ID         : in     RTEMS.ID;
       Option_Set : in     RTEMS.Option;
       Timeout    : in     RTEMS.Interval;
       Result     :    out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - semaphore obtained successfully
``RTEMS.UNSATISFIED`` - semaphore not available
``RTEMS.TIMEOUT`` - timed out waiting for semaphore
``RTEMS.OBJECT_WAS_DELETED`` - semaphore deleted while waiting
``RTEMS.INVALID_ID`` - invalid semaphore id
**DESCRIPTION:**
This directive acquires the semaphore specified by
id. The ``RTEMS.WAIT`` and ``RTEMS.NO_WAIT`` components of the options parameter
indicate whether the calling task wants to wait for the
semaphore to become available or return immediately if the
semaphore is not currently available.  With either ``RTEMS.WAIT`` or
``RTEMS.NO_WAIT``, if the current semaphore count is positive, then it is
decremented by one and the semaphore is successfully acquired by
returning immediately with a successful return code.
If the calling task chooses to return immediately and the current
semaphore count is zero or negative, then a status code is returned
indicating that the semaphore is not available. If the calling task
chooses to wait for a semaphore and the current semaphore count is zero or
negative, then it is decremented by one and the calling task is placed on
the semaphore's wait queue and blocked.  If the semaphore was created with
the ``RTEMS.PRIORITY`` attribute, then the calling task is
inserted into the queue according to its priority. However, if the
semaphore was created with the ``RTEMS.FIFO`` attribute, then
the calling task is placed at the rear of the wait queue. If the binary
semaphore was created with the ``RTEMS.INHERIT_PRIORITY``
attribute, then the priority of the task currently holding the binary
semaphore is guaranteed to be greater than or equal to that of the
blocking task.  If the binary semaphore was created with the
``RTEMS.PRIORITY_CEILING`` attribute, a task successfully
obtains the semaphore, and the priority of that task is greater than the
ceiling priority for this semaphore, then the priority of the task
obtaining the semaphore is elevated to that of the ceiling.
The timeout parameter specifies the maximum interval the calling task is
willing to be blocked waiting for the semaphore.  If it is set to
``RTEMS.NO_TIMEOUT``, then the calling task will wait forever.
If the semaphore is available or the ``RTEMS.NO_WAIT`` option
component is set, then timeout is ignored.
Deadlock situations are detected for MrsP semaphores and the
``RTEMS.UNSATISFIED`` status code will be returned on SMP
configurations in this case.
**NOTES:**
The following semaphore acquisition option constants
are defined by RTEMS:
- ``RTEMS.WAIT`` - task will wait for semaphore (default)
- ``RTEMS.NO_WAIT`` - task should not wait
Attempting to obtain a global semaphore which does not reside on the local
node will generate a request to the remote node to access the semaphore.
If the semaphore is not available and ``RTEMS.NO_WAIT`` was
not specified, then the task must be blocked until the semaphore is
released. A proxy is allocated on the remote node to represent the task
until the semaphore is released.
A clock tick is required to support the timeout functionality of
this directive.
It is not allowed to obtain a MrsP semaphore more than once by one task at a
time (nested access) and the ``RTEMS.UNSATISFIED`` status code will
be returned on SMP configurations in this case.
SEMAPHORE_RELEASE - Release a semaphore
---------------------------------------
.. index:: release a semaphore
.. index:: unlock a semaphore
**CALLING SEQUENCE:**
.. code:: ada

    procedure Semaphore_Release (
       ID     : in     RTEMS.ID;
       Result :    out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - semaphore released successfully
``RTEMS.INVALID_ID`` - invalid semaphore id
``RTEMS.NOT_OWNER_OF_RESOURCE`` - calling task does not own semaphore
``RTEMS.INCORRECT_STATE`` - invalid unlock order
**DESCRIPTION:**
This directive releases the semaphore specified by
id. The semaphore count is incremented by one. If the count is
zero or negative, then the first task on this semaphore's wait
queue is removed and unblocked.  The unblocked task may preempt
the running task if the running task's preemption mode is
enabled and the unblocked task has a higher priority than the
running task.
**NOTES:**
The calling task may be preempted if it causes a
higher priority task to be made ready for execution.
Releasing a global semaphore which does not reside on
the local node will generate a request telling the remote node
to release the semaphore.
If the task to be unblocked resides on a different
node from the semaphore, then the semaphore allocation is
forwarded to the appropriate node, the waiting task is
unblocked, and the proxy used to represent the task is reclaimed.
The outermost release of a local, binary, priority
inheritance or priority ceiling semaphore may result in the
calling task having its priority lowered. This will occur if
the calling task holds no other binary semaphores and it has
inherited a higher priority.
MrsP semaphores must be released in the reverse of the obtain order,
otherwise the ``RTEMS.INCORRECT_STATE`` status code will be returned on
SMP configurations.
SEMAPHORE_FLUSH - Unblock all tasks waiting on a semaphore
----------------------------------------------------------
.. index:: flush a semaphore
.. index:: unblock all tasks waiting on a semaphore
**CALLING SEQUENCE:**
.. code:: ada

    procedure Semaphore_Flush (
       ID     : in     RTEMS.ID;
       Result :    out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - semaphore flushed successfully
``RTEMS.INVALID_ID`` - invalid semaphore id
``RTEMS.NOT_DEFINED`` - operation not defined for the protocol of
the semaphore
``RTEMS.ILLEGAL_ON_REMOTE_OBJECT`` - not supported for remote semaphores
**DESCRIPTION:**
This directive unblocks all tasks waiting on the semaphore specified by
id.  Since there are tasks blocked on the semaphore, the semaphore's
count is not changed by this directive and thus is zero before and
after this directive is executed.  Tasks which are unblocked as the
result of this directive will return from the
``rtems.semaphore_obtain`` directive with a status code of
``RTEMS.UNSATISFIED`` to indicate that the semaphore was not obtained.
This directive may unblock any number of tasks. Any of the unblocked
tasks may preempt the running task if the running task's preemption mode is
enabled and an unblocked task has a higher priority than the
running task.
**NOTES:**
The calling task may be preempted if it causes a
higher priority task to be made ready for execution.
If the task to be unblocked resides on a different
node from the semaphore, then the waiting task is
unblocked, and the proxy used to represent the task is reclaimed.
It is not allowed to flush a MrsP semaphore and the
``RTEMS.NOT_DEFINED`` status code will be returned on SMP
configurations in this case.
SEMAPHORE_SET_PRIORITY - Set priority by scheduler for a semaphore
------------------------------------------------------------------
.. index:: set priority by scheduler for a semaphore
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - successful operation
``RTEMS.INVALID_ID`` - invalid semaphore or scheduler id
``RTEMS.INVALID_ADDRESS`` - ``old_priority`` is NULL
``RTEMS.INVALID_PRIORITY`` - invalid new priority value
``RTEMS.NOT_DEFINED`` - operation not defined for the protocol of
the semaphore
``RTEMS.ILLEGAL_ON_REMOTE_OBJECT`` - not supported for remote semaphores
**DESCRIPTION:**
This directive sets the priority value of a semaphore with respect to
the specified scheduler instance.
The special priority value ``RTEMS_CURRENT_PRIORITY`` can be used to get the
current priority value without changing it.
The interpretation of the priority value depends on the protocol of the
semaphore object.
- The Multiprocessor Resource Sharing Protocol needs a ceiling priority per
scheduler instance. This operation can be used to specify these priority
values.
- For the Priority Ceiling Protocol the ceiling priority is used with this
operation.
- For other protocols this operation is not defined.
**EXAMPLE:**
.. code:: c

    #include <assert.h>
    #include <stdlib.h>

    #include <rtems.h>

    #define SCHED_A rtems_build_name(' ', ' ', ' ', 'A')
    #define SCHED_B rtems_build_name(' ', ' ', ' ', 'B')

    static void Init(rtems_task_argument arg)
    {
      rtems_status_code   sc;
      rtems_id            semaphore_id;
      rtems_id            scheduler_a_id;
      rtems_id            scheduler_b_id;
      rtems_task_priority prio;

      /* Get the scheduler identifiers */
      sc = rtems_scheduler_ident(SCHED_A, &scheduler_a_id);
      assert(sc == RTEMS_SUCCESSFUL);
      sc = rtems_scheduler_ident(SCHED_B, &scheduler_b_id);
      assert(sc == RTEMS_SUCCESSFUL);

      /* Create a MrsP semaphore object */
      sc = rtems_semaphore_create(
        rtems_build_name('M', 'R', 'S', 'P'),
        1,
        RTEMS_MULTIPROCESSOR_RESOURCE_SHARING
          | RTEMS_BINARY_SEMAPHORE,
        1,
        &semaphore_id
      );
      assert(sc == RTEMS_SUCCESSFUL);

      /*
       * The ceiling priority values per scheduler are equal to the value
       * specified for object creation.
       */
      prio = RTEMS_CURRENT_PRIORITY;
      sc = rtems_semaphore_set_priority(semaphore_id, scheduler_a_id, prio, &prio);
      assert(sc == RTEMS_SUCCESSFUL);
      assert(prio == 1);

      /* Check the old value and set a new ceiling priority for scheduler B */
      prio = 2;
      sc = rtems_semaphore_set_priority(semaphore_id, scheduler_b_id, prio, &prio);
      assert(sc == RTEMS_SUCCESSFUL);
      assert(prio == 1);

      /* Check the ceiling priority values */
      prio = RTEMS_CURRENT_PRIORITY;
      sc = rtems_semaphore_set_priority(semaphore_id, scheduler_a_id, prio, &prio);
      assert(sc == RTEMS_SUCCESSFUL);
      assert(prio == 1);

      prio = RTEMS_CURRENT_PRIORITY;
      sc = rtems_semaphore_set_priority(semaphore_id, scheduler_b_id, prio, &prio);
      assert(sc == RTEMS_SUCCESSFUL);
      assert(prio == 2);

      sc = rtems_semaphore_delete(semaphore_id);
      assert(sc == RTEMS_SUCCESSFUL);

      exit(0);
    }

    #define CONFIGURE_SMP_APPLICATION

    #define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
    #define CONFIGURE_APPLICATION_NEEDS_CONSOLE_DRIVER

    #define CONFIGURE_MAXIMUM_TASKS 1
    #define CONFIGURE_MAXIMUM_SEMAPHORES 1
    #define CONFIGURE_MAXIMUM_MRSP_SEMAPHORES 1

    #define CONFIGURE_SMP_MAXIMUM_PROCESSORS 2

    #define CONFIGURE_SCHEDULER_SIMPLE_SMP

    #include <rtems/scheduler.h>

    RTEMS_SCHEDULER_CONTEXT_SIMPLE_SMP(a);
    RTEMS_SCHEDULER_CONTEXT_SIMPLE_SMP(b);

    #define CONFIGURE_SCHEDULER_CONTROLS \
      RTEMS_SCHEDULER_CONTROL_SIMPLE_SMP(a, SCHED_A), \
      RTEMS_SCHEDULER_CONTROL_SIMPLE_SMP(b, SCHED_B)

    #define CONFIGURE_SMP_SCHEDULER_ASSIGNMENTS \
      RTEMS_SCHEDULER_ASSIGN(0, RTEMS_SCHEDULER_ASSIGN_PROCESSOR_MANDATORY), \
      RTEMS_SCHEDULER_ASSIGN(1, RTEMS_SCHEDULER_ASSIGN_PROCESSOR_MANDATORY)

    #define CONFIGURE_RTEMS_INIT_TASKS_TABLE

    #define CONFIGURE_INIT

    #include <rtems/confdefs.h>
.. COMMENT: COPYRIGHT (c) 1988-2002.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.
Signal Manager
##############
.. index:: signals
Introduction
============
The signal manager provides the capabilities required
for asynchronous communication. The directives provided by the
signal manager are:
- ``rtems.signal_catch`` - Establish an ASR
- ``rtems.signal_send`` - Send signal set to a task
Background
==========
Signal Manager Definitions
--------------------------
.. index:: asynchronous signal routine
.. index:: ASR
The signal manager allows a task to optionally define
an asynchronous signal routine (ASR).  An ASR is to a task what
an ISR is to an application's set of tasks.  When the processor
is interrupted, the execution of an application is also
interrupted and an ISR is given control.  Similarly, when a
signal is sent to a task, that task's execution path will be
"interrupted" by the ASR.  Sending a signal to a task has no
effect on the receiving task's current execution state.

.. index:: rtems_signal_set
A signal flag is used by a task (or ISR) to inform
another task of the occurrence of a significant situation.
Thirty-two signal flags are associated with each task. A
collection of one or more signals is referred to as a signal
set. The data type ``rtems.signal_set``
is used to manipulate signal sets.
A signal set is posted when it is directed (or sent) to a
task. A pending signal is a signal that has been sent to a task
with a valid ASR, but has not been processed by that task's ASR.
A Comparison of ASRs and ISRs
-----------------------------
.. index:: ASR vs. ISR
.. index:: ISR vs. ASR
The format of an ASR is similar to that of an ISR
with the following exceptions:
- ISRs are scheduled by the processor hardware. ASRs are
scheduled by RTEMS.
- ISRs do not execute in the context of a task and may
invoke only a subset of directives. ASRs execute in the context
of a task and may execute any directive.
- When an ISR is invoked, it is passed the vector number
as its argument. When an ASR is invoked, it is passed the
signal set as its argument.
- An ASR has a task mode which can be different from that
of the task. An ISR does not execute as a task and, as a
result, does not have a task mode.
Building a Signal Set
---------------------
.. index:: signal set, building
A signal set is built by a bitwise OR of the desired
signals.  The set of valid signals is ``RTEMS.SIGNAL_0`` through
``RTEMS.SIGNAL_31``.  If a signal is not explicitly specified in the
signal set, then it is not present. Signal values are
specifically designed to be mutually exclusive, therefore
bitwise OR and addition operations are equivalent as long as
each signal appears exactly once in the component list.
This example demonstrates the signal parameter used
when sending the signal set consisting of ``RTEMS.SIGNAL_6``,
``RTEMS.SIGNAL_15``, and ``RTEMS.SIGNAL_31``.  The signal parameter
provided to the ``rtems.signal_send`` directive should be
``RTEMS.SIGNAL_6 or RTEMS.SIGNAL_15 or RTEMS.SIGNAL_31``.
Building an ASR Mode
--------------------
.. index:: ASR mode, building
In general, an ASR's mode is built by a bitwise OR of
the desired mode components. The set of valid mode components
is the same as those allowed with the task_create and task_mode
directives. A complete list of mode options is provided in the
following table:
- ``RTEMS.PREEMPT`` is masked by ``RTEMS.PREEMPT_MASK`` and enables
  preemption

- ``RTEMS.NO_PREEMPT`` is masked by ``RTEMS.PREEMPT_MASK`` and
  disables preemption

- ``RTEMS.NO_TIMESLICE`` is masked by ``RTEMS.TIMESLICE_MASK`` and
  disables timeslicing

- ``RTEMS.TIMESLICE`` is masked by ``RTEMS.TIMESLICE_MASK`` and
  enables timeslicing

- ``RTEMS.ASR`` is masked by ``RTEMS.ASR_MASK`` and enables ASR
  processing

- ``RTEMS.NO_ASR`` is masked by ``RTEMS.ASR_MASK`` and disables ASR
  processing

- ``RTEMS.INTERRUPT_LEVEL(0)`` is masked by ``RTEMS.INTERRUPT_MASK``
  and enables all interrupts

- ``RTEMS.INTERRUPT_LEVEL(n)`` is masked by ``RTEMS.INTERRUPT_MASK``
  and sets interrupts to level n
Mode values are specifically designed to be mutually
exclusive, therefore bitwise OR and addition operations are
equivalent as long as each mode appears exactly once in the
component list. A mode component listed as a default is not
required to appear in the mode list, although it is a good
programming practice to specify default components. If all
defaults are desired, the mode ``DEFAULT_MODES`` should be specified
on this call.
This example demonstrates the mode parameter used
with the ``rtems.signal_catch`` directive
to establish an ASR which executes at
interrupt level three and is non-preemptible. The mode should
be set to ``RTEMS.INTERRUPT_LEVEL(3) or RTEMS.NO_PREEMPT``
to indicate the
desired processor mode and interrupt level.
Operations
==========
Establishing an ASR
-------------------
The ``rtems.signal_catch`` directive establishes an ASR for the
calling task. The address of the ASR and its execution mode are
specified to this directive. The ASR's mode is distinct from
the task's mode. For example, the task may allow preemption,
while that task's ASR may have preemption disabled. Until a
task calls ``rtems.signal_catch`` the first time,
its ASR is invalid, and no signal sets can be sent to the task.
A task may invalidate its ASR and discard all pending
signals by calling ``rtems.signal_catch``
with a value of NULL for the ASR's address. When a task's
ASR is invalid, new signal sets sent to this task are discarded.
A task may disable ASR processing (``RTEMS.NO_ASR``) via the
task_mode directive. When a task's ASR is disabled, the signals
sent to it are left pending to be processed later when the ASR
is enabled.
Any directive that can be called from a task can also
be called from an ASR. A task is only allowed one active ASR.
Thus, each call to ``rtems.signal_catch``
replaces the previous one.
Normally, signal processing is disabled for the ASR's
execution mode, but if signal processing is enabled for the ASR,
the ASR must be reentrant.
Sending a Signal Set
--------------------
The ``rtems.signal_send`` directive allows both
tasks and ISRs to send signals to a target task. The target task and
a set of signals are specified to the ``rtems.signal_send`` directive. The sending
of a signal to a task has no effect on the execution state of
that task. If the task is not the currently running task, then
the signals are left pending and processed by the task's ASR the
next time the task is dispatched to run. The ASR is executed
immediately before the task is dispatched. If the currently
running task sends a signal to itself or is sent a signal from
an ISR, its ASR is immediately dispatched to run provided signal
processing is enabled.
If an ASR with signals enabled is preempted by
another task or an ISR and a new signal set is sent, then a new
copy of the ASR will be invoked, nesting the preempted ASR.
Upon completion of processing the new signal set, control will
return to the preempted ASR. In this situation, the ASR must be
reentrant.
Like events, identical signals sent to a task are not
queued. In other words, sending the same signal multiple times
to a task (without any intermediate signal processing occurring
for the task), has the same result as sending that signal to
that task once.
Processing an ASR
-----------------
Asynchronous signals were designed to provide the
capability to generate software interrupts. The processing of
software interrupts parallels that of hardware interrupts. As a
result, the differences between the formats of ASRs and ISRs are
limited to the meaning of the single argument passed to an ASR.
The ASR should have the following calling sequence and adhere to
Ada calling conventions:
.. code:: c
procedure User_Routine (
Signals : in RTEMS.Signal_Set
);
When the ASR returns to RTEMS the mode and execution
path of the interrupted task (or ASR) is restored to the context
prior to entering the ASR.
Directives
==========
This section details the signal manager's directives.
A subsection is dedicated to each of this manager's directives
and describes the calling sequence, related constants, usage,
and status codes.
SIGNAL_CATCH - Establish an ASR
-------------------------------
.. index:: establish an ASR
.. index:: install an ASR
**CALLING SEQUENCE:**
.. code:: c
procedure Signal_Catch (
ASR_Handler : in RTEMS.ASR_Handler;
Mode_Set : in RTEMS.Mode;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - always successful
**DESCRIPTION:**
This directive establishes an asynchronous signal
routine (ASR) for the calling task. The asr_handler parameter
specifies the entry point of the ASR. If asr_handler is NULL,
the ASR for the calling task is invalidated and all pending
signals are cleared. Any signals sent to a task with an invalid
ASR are discarded. The mode parameter specifies the execution
mode for the ASR. This execution mode supersedes the task's
execution mode while the ASR is executing.
**NOTES:**
This directive will not cause the calling task to be
preempted.
The following task mode constants are defined by RTEMS:
- ``RTEMS.PREEMPT`` is masked by ``RTEMS.PREEMPT_MASK`` and enables preemption
- ``RTEMS.NO_PREEMPT`` is masked by ``RTEMS.PREEMPT_MASK`` and disables preemption
- ``RTEMS.NO_TIMESLICE`` is masked by ``RTEMS.TIMESLICE_MASK`` and disables timeslicing
- ``RTEMS.TIMESLICE`` is masked by ``RTEMS.TIMESLICE_MASK`` and enables timeslicing
- ``RTEMS.ASR`` is masked by ``RTEMS.ASR_MASK`` and enables ASR processing
- ``RTEMS.NO_ASR`` is masked by ``RTEMS.ASR_MASK`` and disables ASR processing
- ``RTEMS.INTERRUPT_LEVEL(0)`` is masked by ``RTEMS.INTERRUPT_MASK`` and enables all interrupts
- ``RTEMS.INTERRUPT_LEVEL(n)`` is masked by ``RTEMS.INTERRUPT_MASK`` and sets the interrupt level to n
SIGNAL_SEND - Send signal set to a task
---------------------------------------
.. index:: send signal set
**CALLING SEQUENCE:**
.. code:: c
procedure Signal_Send (
ID : in RTEMS.ID;
Signal_Set : in RTEMS.Signal_Set;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - signal sent successfully
``RTEMS.INVALID_ID`` - task id invalid
``RTEMS.INVALID_NUMBER`` - empty signal set
``RTEMS.NOT_DEFINED`` - ASR invalid
**DESCRIPTION:**
This directive sends a signal set to the task
specified in id. The signal_set parameter contains the signal
set to be sent to the task.
If a caller sends a signal set to a task with an
invalid ASR, then an error code is returned to the caller. If a
caller sends a signal set to a task whose ASR is valid but
disabled, then the signal set will be caught and left pending
for the ASR to process when it is enabled. If a caller sends a
signal set to a task with an ASR that is both valid and enabled,
then the signal set is caught and the ASR will execute the next
time the task is dispatched to run.
**NOTES:**
Sending a signal set to a task has no effect on that
task's state. If a signal set is sent to a blocked task, then
the task will remain blocked and the signals will be processed
when the task becomes the running task.
Sending a signal set to a global task which does not
reside on the local node will generate a request telling the
remote node to send the signal set to the specified task.
.. COMMENT: COPYRIGHT (c) 1988-2010.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.
Stack Bounds Checker
####################
Introduction
============
The stack bounds checker is an RTEMS support component that determines
if a task has overrun its run-time stack. The routines provided
by the stack bounds checker manager are:
- ``rtems.stack_checker_is_blown`` - Has the Current Task Blown its Stack
- ``rtems.stack_checker_report_usage`` - Report Task Stack Usage
Background
==========
Task Stack
----------
Each task in a system has a fixed size stack associated with it. This
stack is allocated when the task is created. As the task executes, the
stack is used to contain parameters, return addresses, saved registers,
and local variables. The amount of stack space required by a task
is dependent on the exact set of routines used. The peak stack usage
reflects the worst case of subroutine pushing information on the stack.
For example, if a subroutine allocates a local buffer of 1024 bytes, then
this data must be accounted for in the stack of every task that invokes that
routine.
Recursive routines make calculating peak stack usage difficult, if not
impossible. Each call to the recursive routine consumes *n* bytes
of stack space. If the routine recurses 1000 times, then ``1000 * n`` bytes of stack space are required.
Execution
---------
The stack bounds checker operates as a set of task extensions. At
task creation time, the task's stack is filled with a pattern to
indicate the stack is unused. As the task executes, it will overwrite
this pattern in memory. At each task switch, the stack bounds checker's
task switch extension is executed. This extension checks that:
- the last ``n`` bytes of the task's stack have
  not been overwritten. If this pattern has been damaged, it
  indicates that at some point since this task was context
  switched to the CPU, it has used too much stack space.
- the current stack pointer of the task is within
  the address range allocated for use as the task's stack.
If either of these checks fails, then a blown stack
error is reported using the ``printk`` routine.
The number of bytes checked for an overwrite is processor family dependent.
The minimum stack frame per subroutine call varies widely between processor
families. On CISC families like the Motorola MC68xxx and Intel ix86, all
that is needed is a return address. On more complex RISC processors,
the minimum stack frame per subroutine call may include space to save
a significant number of registers.
Another processor dependent feature that must be taken into account by
the stack bounds checker is the direction that the stack grows. On some
processor families, the stack grows up or to higher addresses as the
task executes. On other families, it grows down to lower addresses. The
stack bounds checker implementation uses the stack description definitions
provided by every RTEMS port to obtain this information.
Operations
==========
Initializing the Stack Bounds Checker
-------------------------------------
The stack checker is initialized automatically when its task
create extension runs for the first time.
The application must include the stack bounds checker extension set
in its set of Initial Extensions. This set of extensions is
defined as ``STACK_CHECKER_EXTENSION``. If using ``<rtems/confdefs.h>``
for Configuration Table generation, then all that is necessary is
to define the macro ``CONFIGURE_STACK_CHECKER_ENABLED`` before including ``<rtems/confdefs.h>`` as shown below:
.. code:: c
#define CONFIGURE_STACK_CHECKER_ENABLED
...
#include <rtems/confdefs.h>
Checking for Blown Task Stack
-----------------------------
The application may check whether the stack pointer of the currently
executing task is within proper bounds at any time by calling
the ``rtems.stack_checker_is_blown`` method. This
method returns ``FALSE`` if the task is operating within its
stack bounds and has not damaged its pattern area.
Reporting Task Stack Usage
--------------------------
The application may dynamically report the stack usage for every task
in the system by calling the ``rtems.stack_checker_report_usage`` routine.
This routine prints a table with the peak usage and stack size of
every task in the system. The following is an example of the
report generated:
.. code:: c
ID NAME LOW HIGH AVAILABLE USED
0x04010001 IDLE 0x003e8a60 0x003e9667 2952 200
0x08010002 TA1 0x003e5750 0x003e7b57 9096 1168
0x08010003 TA2 0x003e31c8 0x003e55cf 9096 1168
0x08010004 TA3 0x003e0c40 0x003e3047 9096 1104
0xffffffff INTR 0x003ecfc0 0x003effbf 12160 128
Notice the last line. The task id is 0xffffffff and its name is "INTR".
This is not actually a task; it is the interrupt stack.
When a Task Overflows the Stack
-------------------------------
When the stack bounds checker determines that a stack overflow has occurred,
it will attempt to print a message using ``printk`` identifying the
task and then shut the system down. If the stack overflow has caused
corruption, then it is possible that the message cannot be printed.
The following is an example of the output generated:
.. code:: c
BLOWN STACK!!! Offending task(0x3eb360): id=0x08010002; name=0x54413120
stack covers range 0x003e5750 - 0x003e7b57 (9224 bytes)
Damaged pattern begins at 0x003e5758 and is 128 bytes long
The above includes the task id and a pointer to the task control block as
well as enough information so one can look at the task's stack and
see what was happening.
Routines
========
This section details the stack bounds checker's routines.
A subsection is dedicated to each of these routines
and describes the calling sequence, related constants, usage,
and status codes.
.. COMMENT: rtems_stack_checker_is_blown
STACK_CHECKER_IS_BLOWN - Has Current Task Blown Its Stack
---------------------------------------------------------
**CALLING SEQUENCE:**
.. code:: c
function Stack_Checker_Is_Blown return RTEMS.Boolean;
**STATUS CODES:**
``TRUE`` - Current stack pointer is outside the allocated stack area or the pattern area has been damaged
``FALSE`` - Task is operating within its stack limits
**DESCRIPTION:**
This method is used to determine if the current stack pointer
of the currently executing task is within bounds.
**NOTES:**
This method checks the current stack pointer against
the high and low addresses of the stack memory allocated when
the task was created and it looks for damage to the high water
mark pattern for the worst case usage of the task being called.
STACK_CHECKER_REPORT_USAGE - Report Task Stack Usage
----------------------------------------------------
**CALLING SEQUENCE:**
.. code:: c
procedure Stack_Checker_Report_Usage;
**STATUS CODES: NONE**
**DESCRIPTION:**
This routine prints a table with the peak stack usage and stack space
allocation of every task in the system.
**NOTES:**
NONE
.. COMMENT: COPYRIGHT (c) 1988-2007.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.
Symmetric Multiprocessing Services
##################################
Introduction
============
The Symmetric Multiprocessing (SMP) support of RTEMS 4.10.99.0 is
available on
- ARM,
- PowerPC, and
- SPARC.
It must be explicitly enabled via the ``--enable-smp`` configure command
line option. To enable SMP in the application configuration see `Enable SMP Support for Applications`_. The default
scheduler for SMP applications supports up to 32 processors and is a global
fixed priority scheduler, see also `Configuring Clustered Schedulers`_. For example applications see :file:`testsuites/smptests`.
*WARNING: The SMP support in RTEMS is work in progress. Before you
start using this RTEMS version for SMP ask on the RTEMS mailing list.*
This chapter describes the services related to Symmetric Multiprocessing
provided by RTEMS.
The application level services currently provided are:
- ``rtems_get_processor_count`` - Get processor count
- ``rtems_get_current_processor`` - Get current processor index
- ``rtems_scheduler_ident`` - Get ID of a scheduler
- ``rtems_scheduler_get_processor_set`` - Get processor set of a scheduler
- ``rtems_task_get_scheduler`` - Get scheduler of a task
- ``rtems_task_set_scheduler`` - Set scheduler of a task
- ``rtems_task_get_affinity`` - Get task processor affinity
- ``rtems_task_set_affinity`` - Set task processor affinity
Background
==========
Uniprocessor versus SMP Parallelism
-----------------------------------
Uniprocessor systems have long been used in embedded systems. In this hardware
model, there are some system execution characteristics which have long been
taken for granted:
- one task executes at a time
- hardware events result in interrupts
There is no true parallelism. Even when interrupts appear to occur
at the same time, they are processed in largely a serial fashion.
This is true even when the interrupt service routines are allowed to
nest. From a tasking viewpoint, it is the responsibility of the real-time
operating system to simulate parallelism by switching between tasks.
These task switches occur in response to hardware interrupt events and explicit
application events such as blocking for a resource or delaying.
With symmetric multiprocessing, the presence of multiple processors
allows for true concurrency and provides for cost-effective performance
improvements. Uniprocessors tend to increase performance by increasing
clock speed and complexity. This tends to lead to hot, power hungry
microprocessors which are poorly suited for many embedded applications.
The true concurrency is in sharp contrast to the single task and
interrupt model of uniprocessor systems. This results in a fundamental
change to uniprocessor system characteristics listed above. Developers
are faced with a different set of characteristics which, in turn, break
some existing assumptions and result in new challenges. In an SMP system
with N processors, these are the new execution characteristics.
- N tasks execute in parallel
- hardware events result in interrupts
There is true parallelism with a task executing on each processor and
the possibility of interrupts occurring on each processor. Thus in contrast
to there being one task and one interrupt to consider on a uniprocessor,
there are N tasks and potentially N simultaneous interrupts to consider
on an SMP system.
This increase in hardware complexity and presence of true parallelism
results in the application developer needing to be even more cautious
about mutual exclusion and shared data access than in a uniprocessor
embedded system. Race conditions that never or rarely happened when an
application executed on a uniprocessor system, become much more likely
due to multiple threads executing in parallel. On a uniprocessor system,
these race conditions would only happen when a task switch occurred at
just the wrong moment. Now there are N-1 tasks executing in parallel
all the time and this results in many more opportunities for small
windows in critical sections to be hit.
Task Affinity
-------------
.. index:: task affinity
.. index:: thread affinity
RTEMS provides services to manipulate the affinity of a task. Affinity
is used to specify the subset of processors in an SMP system on which
a particular task can execute.
By default, tasks have an affinity which allows them to execute on any
available processor.
Task affinity is a possible feature to be supported by SMP-aware
schedulers. However, only a subset of the available schedulers support
affinity. Although the behavior is scheduler specific, if the scheduler
does not support affinity, it is likely to ignore all attempts to set
affinity.
The scheduler with support for arbitrary processor affinities uses a proof of
concept implementation. See https://devel.rtems.org/ticket/2510.
Task Migration
--------------
.. index:: task migration
.. index:: thread migration
With more than one processor in the system tasks can migrate from one processor
to another. There are three reasons why tasks migrate in RTEMS.
- The scheduler changes explicitly via ``rtems_task_set_scheduler()`` or
similar directives.
- The task resumes execution after a blocking operation. On a priority
based scheduler it will evict the lowest priority task currently assigned to a
processor in the processor set managed by the scheduler instance.
- The task moves temporarily to another scheduler instance due to locking
protocols like *Migratory Priority Inheritance* or the *Multiprocessor Resource Sharing Protocol*.
Task migration should be avoided so that the working set of a task can stay on
the most local cache level.
The current implementation of task migration in RTEMS has some implications
with respect to the interrupt latency. It is crucial to preserve the system
invariant that a task can execute on at most one processor in the system at a
time. This is accomplished with a boolean indicator in the task context. The
processor architecture specific low-level task context switch code will mark
that a task context is no longer executing and waits until the heir context
has stopped execution before it restores the heir context and resumes execution of
the heir task. So there is one point in time in which a processor is without a
task. This is essential to avoid cyclic dependencies in case multiple tasks
migrate at once. Otherwise some supervising entity is necessary to prevent
livelocks. Such a global supervisor would lead to scalability problems so
this approach is not used. Currently the thread dispatch is performed with
interrupts disabled. So in case the heir task is currently executing on
another processor then this prolongs the time of disabled interrupts since one
processor has to wait for another processor to make progress.
It is difficult to avoid this issue with the interrupt latency since interrupts
normally store the context of the interrupted task on its stack. In case a
task is marked as not executing we must not use its task stack to store such an
interrupt context. We cannot use the heir stack before it stopped execution on
another processor. So if we enable interrupts during this transition we have
to provide an alternative task independent stack for this time frame. This
issue needs further investigation.
Clustered Scheduling
--------------------
We have clustered scheduling in case the set of processors of a system is
partitioned into non-empty pairwise-disjoint subsets. These subsets are called
clusters. Clusters with a cardinality of one are partitions. Each cluster is
owned by exactly one scheduler instance.
Clustered scheduling helps to control the worst-case latencies in
multi-processor systems, see *Brandenburg, Björn B.: Scheduling and
Locking in Multiprocessor Real-Time Operating Systems. PhD thesis, 2011,
http://www.cs.unc.edu/~bbb/diss/brandenburg-diss.pdf*. The goal is to
reduce the amount of shared state in the system and thus prevent lock
contention. Modern multi-processor systems tend to have several layers of data
and instruction caches. With clustered scheduling it is possible to honour the
cache topology of a system and thus avoid expensive cache synchronization
traffic. It is easy to implement. The problem is to provide synchronization
primitives for inter-cluster synchronization (more than one cluster is involved
in the synchronization process). In RTEMS there are currently four means
available
- events,
- message queues,
- semaphores using the `Priority Inheritance`_
protocol (priority boosting), and
- semaphores using the `Multiprocessor Resource Sharing Protocol`_ (MrsP).
The clustered scheduling approach enables separation of functions with
real-time requirements and functions that profit from fairness and high
throughput provided the scheduler instances are fully decoupled and adequate
inter-cluster synchronization primitives are used. This is work in progress.
For the configuration of clustered schedulers see `Configuring Clustered Schedulers`_.
To set the scheduler of a task see `SCHEDULER_IDENT - Get ID of a scheduler`_
and `TASK_SET_SCHEDULER - Set scheduler of a task`_.
Task Priority Queues
--------------------
Due to the support for clustered scheduling the task priority queues need
special attention. It makes no sense to compare the priority values of two
different scheduler instances. Thus, it is not possible to simply use one
plain priority queue for tasks of different scheduler instances.
One solution to this problem is to use two levels of queues. The top level
queue provides FIFO ordering and contains priority queues. Each priority queue
is associated with a scheduler instance and contains only tasks of this
scheduler instance. Tasks are enqueued in the priority queue corresponding to
their scheduler instance. In case this priority queue was empty, then it is
appended to the FIFO. To dequeue a task the highest priority task of the first
priority queue in the FIFO is selected. Then the first priority queue is
removed from the FIFO. In case the previously first priority queue is not
empty, then it is appended to the FIFO. So there is FIFO fairness with respect
to the highest priority task of each scheduler instance. See also *Brandenburg,
Björn B.: A fully preemptive multiprocessor semaphore protocol for
latency-sensitive real-time applications. In Proceedings of the 25th Euromicro
Conference on Real-Time Systems (ECRTS 2013), pages 292–302, 2013,
http://www.mpi-sws.org/~bbb/papers/pdf/ecrts13b.pdf*.
Such a two level queue may need a considerable amount of memory if fast enqueue
and dequeue operations are desired (depends on the scheduler instance count).
To mitigate this problem an approach of the FreeBSD kernel was implemented in
RTEMS. We have the invariant that a task can be enqueued on at most one task
queue. Thus, we need only as many queues as we have tasks. Each task is
equipped with a spare task queue which it can give to an object on demand. The
task queue uses a dedicated memory space independent of the other memory used
for the task itself. In case a task needs to block, then there are two options
- the object already has a task queue, then the task enqueues itself to this
already present queue and the spare task queue of the task is added to a list
of free queues for this object, or
- otherwise, then the queue of the task is given to the object and the task
enqueues itself to this queue.
In case the task is dequeued, then there are two options
- the task is the last task on the queue, then it removes this queue from
the object and reclaims it for its own purpose, or
- otherwise, then the task removes one queue from the free list of the
object and reclaims it for its own purpose.
Since there are usually more objects than tasks, this actually reduces the
memory demands. In addition the objects contain only a pointer to the task
queue structure. This helps to hide implementation details and makes it
possible to use self-contained synchronization objects in Newlib and GCC (C++
and OpenMP run-time support).
Scheduler Helping Protocol
--------------------------
The scheduler provides a helping protocol to support locking protocols like *Migratory Priority Inheritance* or the *Multiprocessor Resource
Sharing Protocol*. Each ready task can use at least one scheduler node at a
time to gain access to a processor. Each scheduler node has an owner, a user
and an optional idle task. The owner of a scheduler node is determined at task
creation and never changes during the lifetime of a scheduler node. The user
of a scheduler node may change due to the scheduler helping protocol. A
scheduler node is in one of the four scheduler help states:
:dfn:`help yourself`
This scheduler node is solely used by the owner task. This task owns no
resources using a helping protocol and thus does not take part in the scheduler
helping protocol. No help will be provided for other tasks.
:dfn:`help active owner`
This scheduler node is owned by a task actively owning a resource and can be
used to help out tasks.
In case this scheduler node changes its state from ready to scheduled and the
task executes using another node, then an idle task will be provided as a user
of this node to temporarily execute on behalf of the owner task. Thus lower
priority tasks are denied access to the processors of this scheduler instance.
In case a task actively owning a resource performs a blocking operation, then
an idle task will be used also in case this node is in the scheduled state.
:dfn:`help active rival`
This scheduler node is owned by a task actively obtaining a resource currently
owned by another task and can be used to help out tasks.
The task owning this node is ready and will give away its processor in case the
task owning the resource asks for help.
:dfn:`help passive`
This scheduler node is owned by a task obtaining a resource currently owned by
another task and can be used to help out tasks.
The task owning this node is blocked.
The following scheduler operations return a task in need for help
- unblock,
- change priority,
- yield, and
- ask for help.
A task in need for help is a task that encounters a scheduler state change from
scheduled to ready (this is a pre-emption by a higher priority task) or a task
that cannot be scheduled in an unblock operation. Such a task can ask tasks
which depend on resources owned by this task for help.
In case it is not possible to schedule a task in need for help, then the
scheduler nodes available for the task will be placed into the set of ready
scheduler nodes of the corresponding scheduler instances. Once a state change
from ready to scheduled happens for one of scheduler nodes it will be used to
schedule the task in need for help.
The ask for help scheduler operation is used to help tasks in need for help
returned by the operations mentioned above. This operation is also used in
case the root of a resource sub-tree owned by a task changes.
The run-time of the ask for help procedures depends on the size of the resource
tree of the task needing help and other resource trees in case tasks in need
for help are produced during this operation. Thus the worst-case latency in
the system depends on the maximum resource tree size of the application.
Critical Section Techniques and SMP
-----------------------------------
As discussed earlier, SMP systems have opportunities for true parallelism
which was not possible on uniprocessor systems. Consequently, multiple
techniques that provided adequate critical sections on uniprocessor
systems are unsafe on SMP systems. In this section, some of these
unsafe techniques will be discussed.
In general, applications must use proper operating system provided mutual
exclusion mechanisms to ensure correct behavior. This primarily means
the use of binary semaphores or mutexes to implement critical sections.
Disable Interrupts and Interrupt Locks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A low overhead means to ensure mutual exclusion in uni-processor configurations
is to disable interrupts around a critical section. This is commonly used in
device driver code and throughout the operating system core. On SMP
configurations, however, disabling the interrupts on one processor has no
effect on other processors. So, this is insufficient to ensure system wide
mutual exclusion. The macros
- ``rtems_interrupt_disable()``,
- ``rtems_interrupt_enable()``, and
- ``rtems_interrupt_flush()``
are disabled on SMP configurations and their use will lead to compiler warnings
and linker errors. In the unlikely case that interrupts must be disabled on
the current processor, then the
- ``rtems_interrupt_local_disable()``, and
- ``rtems_interrupt_local_enable()``
macros are now available in all configurations.
Since disabling of interrupts is not enough to ensure system wide mutual
exclusion on SMP, a new low-level synchronization primitive was added - the
interrupt locks. They are a simple API layer on top of the SMP locks used for
low-level synchronization in the operating system core. Currently they are
implemented as a ticket lock. On uni-processor configurations they degenerate
to simple interrupt disable/enable sequences. It is disallowed to acquire a
single interrupt lock in a nested way. This will result in an infinite loop
with interrupts disabled. While converting legacy code to interrupt locks care
must be taken to avoid this situation.
.. code:: c
void legacy_code_with_interrupt_disable_enable( void )
{
rtems_interrupt_level level;
rtems_interrupt_disable( level );
/* Some critical stuff */
rtems_interrupt_enable( level );
}
RTEMS_INTERRUPT_LOCK_DEFINE( static, lock, "Name" )
void smp_ready_code_with_interrupt_lock( void )
{
rtems_interrupt_lock_context lock_context;
rtems_interrupt_lock_acquire( &lock, &lock_context );
/* Some critical stuff \*/
rtems_interrupt_lock_release( &lock, &lock_context );
}
The ``rtems_interrupt_lock`` structure is empty on uni-processor
configurations. Empty structures have a different size in C
(implementation-defined, zero in case of GCC) and C++ (implementation-defined
non-zero value, one in case of GCC). Thus the ``RTEMS_INTERRUPT_LOCK_DECLARE()``,
``RTEMS_INTERRUPT_LOCK_DEFINE()``, ``RTEMS_INTERRUPT_LOCK_MEMBER()``, and
``RTEMS_INTERRUPT_LOCK_REFERENCE()`` macros are provided to ensure ABI
compatibility.
Highest Priority Task Assumption
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
On a uniprocessor system, it is safe to assume that when the highest
priority task in an application executes, it will execute without being
preempted until it voluntarily blocks. Interrupts may occur while it is
executing, but there will be no context switch to another task unless
the highest priority task voluntarily initiates it.
Given the assumption that no other tasks will have their execution
interleaved with the highest priority task, it is possible for this
task to be constructed such that it does not need to acquire a binary
semaphore or mutex for protected access to shared data.
In an SMP system, it cannot be assumed that only a single task is
executing. It should be assumed that every other processor is executing another
application task. Further, those tasks will be ones which would not have
been executed in a uniprocessor configuration and should be assumed to
have data synchronization conflicts with what was formerly the highest
priority task which executed without conflict.
Disable Preemption
~~~~~~~~~~~~~~~~~~
On a uniprocessor system, disabling preemption in a task is very similar
to making the highest priority task assumption. While preemption is
disabled, no task context switches will occur unless the task initiates
them voluntarily. And, just as with the highest priority task assumption,
on an SMP system there are N-1 other processors also running tasks. Thus the
assumption that no other tasks will run while the task has preemption
disabled is violated.
Task Unique Data and SMP
------------------------
Per task variables are a service commonly provided by real-time operating
systems for application use. They work by allowing the application
to specify a location in memory (typically a ``void *``) which is
logically added to the context of a task. On each task switch, the
location in memory is stored and each task can have a unique value in
the same memory location. This memory location is directly accessed as a
variable in a program.
This works well in a uniprocessor environment because there is one task
executing and one memory location containing a task-specific value. But
it is fundamentally broken on an SMP system because there are always N
tasks executing. With only one location in memory, N-1 tasks will not
have the correct value.
This paradigm for providing task unique data values is fundamentally
broken on SMP systems.
Classic API Per Task Variables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Classic API provides three directives to support per task variables. These are:
- ``rtems.task_variable_add`` - Associate per task variable
- ``rtems.task_variable_get`` - Obtain value of a per task variable
- ``rtems.task_variable_delete`` - Remove per task variable
As task variables are unsafe for use on SMP systems, the use of these services
must be eliminated in all software that is to be used in an SMP environment.
The task variables API is disabled on SMP. Its use will lead to compile-time
and link-time errors. It is recommended that the application developer consider
the use of POSIX Keys or Thread Local Storage (TLS). POSIX Keys are available
in all RTEMS configurations. For the availability of TLS on a particular
architecture please consult the *RTEMS CPU Architecture Supplement*.
The only remaining user of task variables in the RTEMS code base is the Ada
support. Consequently, Ada is not available on RTEMS SMP.
OpenMP
------
OpenMP support for RTEMS is available via the GCC provided libgomp. There is
libgomp support for RTEMS in the POSIX configuration of libgomp since GCC 4.9
(requires a Newlib snapshot after 2015-03-12). In GCC 6.1 or later (requires a
Newlib snapshot after 2015-07-30 for <sys/lock.h> provided self-contained
synchronization objects) there is a specialized libgomp configuration for RTEMS
which offers a significantly better performance compared to the POSIX
configuration of libgomp. In addition application configurable thread pools
for each scheduler instance are available in GCC 6.1 or later.
The run-time configuration of libgomp is done via environment variables
documented in the `libgomp
manual <https://gcc.gnu.org/onlinedocs/libgomp/>`_. The environment variables are evaluated in a constructor function
which executes in the context of the first initialization task before the
actual initialization task function is called (just like a global C++
constructor). To set application specific values, a higher priority
constructor function must be used to set up the environment variables.
.. code:: c

    #include <stdlib.h>

    void __attribute__((constructor(1000))) config_libgomp( void )
    {
      setenv( "OMP_DISPLAY_ENV", "VERBOSE", 1 );
      setenv( "GOMP_SPINCOUNT", "30000", 1 );
      setenv( "GOMP_RTEMS_THREAD_POOLS", "1$2@SCHD", 1 );
    }
The environment variable ``GOMP_RTEMS_THREAD_POOLS`` is RTEMS-specific. It
determines the thread pools for each scheduler instance. The format for
``GOMP_RTEMS_THREAD_POOLS`` is a list of optional
``<thread-pool-count>[$<priority>]@<scheduler-name>`` configurations
separated by ``:`` where:
- ``<thread-pool-count>`` is the thread pool count for this scheduler
instance.
- ``$<priority>`` is an optional priority for the worker threads of a
thread pool according to ``pthread_setschedparam``. In case a priority
value is omitted, then a worker thread will inherit the priority of the OpenMP
master thread that created it. The priority of the worker thread is not
changed by libgomp after creation, even if a new OpenMP master thread using the
worker has a different priority.
- ``@<scheduler-name>`` is the scheduler instance name according to the
RTEMS application configuration.
In case no thread pool configuration is specified for a scheduler instance,
then each OpenMP master thread of this scheduler instance will use its own
dynamically allocated thread pool. To limit the worker thread count of the
thread pools, each OpenMP master thread must call ``omp_set_num_threads``.
Let us suppose we have three scheduler instances ``IO``, ``WRK0``, and
``WRK1`` with ``GOMP_RTEMS_THREAD_POOLS`` set to
``"1@WRK0:3$4@WRK1"``. Then there are no thread pool restrictions for
scheduler instance ``IO``. In the scheduler instance ``WRK0`` there is
one thread pool available. Since no priority is specified for this scheduler
instance, the worker thread inherits the priority of the OpenMP master thread
that created it. In the scheduler instance ``WRK1`` there are three thread
pools available and their worker threads run at priority four.
Thread Dispatch Details
-----------------------
This section gives background information to developers interested in the
interrupt latencies introduced by thread dispatching. A thread dispatch
consists of all work which must be done to stop the currently executing thread
on a processor and hand over this processor to an heir thread.
On SMP systems, scheduling decisions on one processor must be propagated to
other processors through inter-processor interrupts. So, a thread dispatch
which must be carried out on another processor does not happen
instantaneously. Thus several thread dispatch requests might be in flight at
once and some of them may be out of date before the corresponding processor
has time to
deal with them. The thread dispatch mechanism uses three per-processor
variables,
- the executing thread,
- the heir thread, and
- a boolean flag indicating if a thread dispatch is necessary or not.
Updates of the heir thread and the thread dispatch necessary indicator are
synchronized via explicit memory barriers without the use of locks. A thread
can be an heir thread on at most one processor in the system. The thread context
is protected by a TTAS lock embedded in the context to ensure that it is used
on at most one processor at a time. The thread post-switch actions use a
per-processor lock. This implementation turned out to be quite efficient and
no lock contention was observed in the test suite.
The current implementation of thread dispatching has some implications with
respect to the interrupt latency. It is crucial to preserve the system
invariant that a thread can execute on at most one processor in the system at a
time. This is accomplished with a boolean indicator in the thread context.
The processor architecture specific context switch code will mark that a thread
context is no longer executing and waits until the heir context has stopped
execution before it restores the heir context and resumes execution of the heir
thread (the boolean indicator is basically a TTAS lock). So, there is one
point in time in which a processor is without a thread. This is essential to
avoid cyclic dependencies in case multiple threads migrate at once. Otherwise
some supervising entity is necessary to prevent deadlocks. Such a global
supervisor would lead to scalability problems so this approach is not used.
Currently the context switch is performed with interrupts disabled. Thus in
case the heir thread is currently executing on another processor, the time of
disabled interrupts is prolonged since one processor has to wait for another
processor to make progress.
It is difficult to avoid this issue with the interrupt latency since interrupts
normally store the context of the interrupted thread on its stack. In case a
thread is marked as not executing, we must not use its thread stack to store
such an interrupt context. We cannot use the heir stack before it stopped
execution on another processor. If we enable interrupts during this
transition, then we have to provide an alternative thread independent stack for
interrupts in this time frame. This issue needs further investigation.
The problematic situation occurs in case we have a thread which executes with
thread dispatching disabled and should execute on another processor (e.g. it is
an heir thread on another processor). In this case the interrupts on this
other processor are disabled until the thread enables thread dispatching and
starts the thread dispatch sequence. The scheduler (an exception is the
scheduler with thread processor affinity support) tries to avoid such a
situation and checks if a new scheduled thread already executes on a processor.
In case the assigned processor differs from the processor on which the thread
already executes and this processor is a member of the processor set managed by
this scheduler instance, it will reassign the processors to keep the already
executing thread in place. Therefore normal scheduler requests will not lead
to such a situation. Explicit thread migration requests, however, can lead to
this situation. Explicit thread migrations may occur due to the scheduler
helping protocol or explicit scheduler instance changes. The situation can
also be provoked by interrupts which suspend and resume threads multiple times
and produce stale asynchronous thread dispatch requests in the system.
Operations
==========
Setting Affinity to a Single Processor
--------------------------------------
On some embedded applications targeting SMP systems, it may be beneficial to
lock individual tasks to specific processors. In this way, one can designate a
processor for I/O tasks, another for computation, etc. The following
illustrates the code sequence necessary to assign a task an affinity for
processor with index ``processor_index``.
.. code:: c

    #include <rtems.h>
    #include <assert.h>

    void pin_to_processor(rtems_id task_id, int processor_index)
    {
      rtems_status_code sc;
      cpu_set_t         cpuset;

      CPU_ZERO(&cpuset);
      CPU_SET(processor_index, &cpuset);

      sc = rtems_task_set_affinity(task_id, sizeof(cpuset), &cpuset);
      assert(sc == RTEMS_SUCCESSFUL);
    }
It is important to note that the ``cpuset`` is not validated until the
``rtems.task_set_affinity`` call is made. At that point,
it is validated against the current system configuration.
Directives
==========
This section details the symmetric multiprocessing services. A subsection
is dedicated to each of these services and describes the calling sequence,
related constants, usage, and status codes.
.. COMMENT: rtems_get_processor_count
GET_PROCESSOR_COUNT - Get processor count
-----------------------------------------
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
The count of processors in the system.
**DESCRIPTION:**
On uni-processor configurations a value of one will be returned.
On SMP configurations this returns the value of a global variable set during
system initialization to indicate the count of utilized processors. The
processor count depends on the physically or virtually available processors and
application configuration. The value will always be less than or equal to the
maximum count of application configured processors.
**NOTES:**
None.
.. COMMENT: rtems_get_current_processor
GET_CURRENT_PROCESSOR - Get current processor index
---------------------------------------------------
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
The index of the current processor.
**DESCRIPTION:**
On uni-processor configurations a value of zero will be returned.
On SMP configurations an architecture specific method is used to obtain the
index of the current processor in the system. The set of processor indices is
the range of integers starting with zero up to the processor count minus one.
Outside of sections with disabled thread dispatching the current processor
index may change after every instruction since the thread may migrate from one
processor to another. Sections with disabled interrupts are sections with
thread dispatching disabled.
**NOTES:**
None.
.. COMMENT: rtems_scheduler_ident
SCHEDULER_IDENT - Get ID of a scheduler
---------------------------------------
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - successful operation
``RTEMS.INVALID_ADDRESS`` - ``id`` is NULL
``RTEMS.INVALID_NAME`` - invalid scheduler name
``RTEMS.UNSATISFIED`` - a scheduler with this name exists, but
the processor set of this scheduler is empty
**DESCRIPTION:**
Identifies a scheduler by its name. The scheduler name is determined by the
scheduler configuration. See `Configuring Clustered Schedulers`_.
**NOTES:**
None.
.. COMMENT: rtems_scheduler_get_processor_set
SCHEDULER_GET_PROCESSOR_SET - Get processor set of a scheduler
--------------------------------------------------------------
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - successful operation
``RTEMS.INVALID_ADDRESS`` - ``cpuset`` is NULL
``RTEMS.INVALID_ID`` - invalid scheduler id
``RTEMS.INVALID_NUMBER`` - the affinity set buffer is too small for
set of processors owned by the scheduler
**DESCRIPTION:**
Returns the processor set owned by the scheduler in ``cpuset``. A set bit
in the processor set means that this processor is owned by the scheduler and a
cleared bit means the opposite.
**NOTES:**
None.
.. COMMENT: rtems_task_get_scheduler
TASK_GET_SCHEDULER - Get scheduler of a task
--------------------------------------------
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - successful operation
``RTEMS.INVALID_ADDRESS`` - ``scheduler_id`` is NULL
``RTEMS.INVALID_ID`` - invalid task id
**DESCRIPTION:**
Returns the scheduler identifier of a task identified by ``task_id`` in
``scheduler_id``.
**NOTES:**
None.
.. COMMENT: rtems_task_set_scheduler
TASK_SET_SCHEDULER - Set scheduler of a task
--------------------------------------------
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - successful operation
``RTEMS.INVALID_ID`` - invalid task or scheduler id
``RTEMS.INCORRECT_STATE`` - the task is in the wrong state to
perform a scheduler change
**DESCRIPTION:**
Sets the scheduler of a task identified by ``task_id`` to the scheduler
identified by ``scheduler_id``. The scheduler of a task is initialized to
the scheduler of the task that created it.
**NOTES:**
None.
**EXAMPLE:**
.. code:: c

    #include <rtems.h>
    #include <assert.h>

    void task(rtems_task_argument arg);

    void example(void)
    {
      rtems_status_code sc;
      rtems_id          task_id;
      rtems_id          scheduler_id;
      rtems_name        scheduler_name;

      scheduler_name = rtems_build_name('W', 'O', 'R', 'K');

      sc = rtems_scheduler_ident(scheduler_name, &scheduler_id);
      assert(sc == RTEMS_SUCCESSFUL);

      sc = rtems_task_create(
        rtems_build_name('T', 'A', 'S', 'K'),
        1,
        RTEMS_MINIMUM_STACK_SIZE,
        RTEMS_DEFAULT_MODES,
        RTEMS_DEFAULT_ATTRIBUTES,
        &task_id
      );
      assert(sc == RTEMS_SUCCESSFUL);

      sc = rtems_task_set_scheduler(task_id, scheduler_id);
      assert(sc == RTEMS_SUCCESSFUL);

      sc = rtems_task_start(task_id, task, 0);
      assert(sc == RTEMS_SUCCESSFUL);
    }
.. COMMENT: rtems_task_get_affinity
TASK_GET_AFFINITY - Get task processor affinity
-----------------------------------------------
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - successful operation
``RTEMS.INVALID_ADDRESS`` - ``cpuset`` is NULL
``RTEMS.INVALID_ID`` - invalid task id
``RTEMS.INVALID_NUMBER`` - the affinity set buffer is too small for
the current processor affinity set of the task
**DESCRIPTION:**
Returns the current processor affinity set of the task in ``cpuset``. A set
bit in the affinity set means that the task can execute on this processor and a
cleared bit means the opposite.
**NOTES:**
None.
.. COMMENT: rtems_task_set_affinity
TASK_SET_AFFINITY - Set task processor affinity
-----------------------------------------------
**CALLING SEQUENCE:**
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - successful operation
``RTEMS.INVALID_ADDRESS`` - ``cpuset`` is NULL
``RTEMS.INVALID_ID`` - invalid task id
``RTEMS.INVALID_NUMBER`` - invalid processor affinity set
**DESCRIPTION:**
Sets the processor affinity set for the task specified by ``cpuset``. A set
bit in the affinity set means that the task can execute on this processor and a
cleared bit means the opposite.
**NOTES:**
This function will not change the scheduler of the task. The intersection of
the processor affinity set and the set of processors owned by the scheduler of
the task must be non-empty. It is not an error if the processor affinity set
contains processors that are not part of the set of processors owned by the
scheduler instance of the task. A task will simply not run under normal
circumstances on these processors since the scheduler ignores them. Some
locking protocols may temporarily use processors that are not included in the
processor affinity set of the task. It is also not an error if the processor
affinity set contains processors that are not part of the system.
.. COMMENT: COPYRIGHT (c) 2011,2015
.. COMMENT: Aeroflex Gaisler AB
.. COMMENT: All rights reserved.
Timespec Helpers
################
Introduction
============
The Timespec helpers manager provides directives to assist in manipulating
instances of the POSIX ``struct timespec`` structure.
The directives provided by the timespec helpers manager are:
- ``rtems_timespec_set`` - Set timespecs value
- ``rtems_timespec_zero`` - Zero timespecs value
- ``rtems_timespec_is_valid`` - Check if timespec is valid
- ``rtems_timespec_add_to`` - Add two timespecs
- ``rtems_timespec_subtract`` - Subtract two timespecs
- ``rtems_timespec_divide`` - Divide two timespecs
- ``rtems_timespec_divide_by_integer`` - Divide timespec by integer
- ``rtems_timespec_less_than`` - Less than operator
- ``rtems_timespec_greater_than`` - Greater than operator
- ``rtems_timespec_equal_to`` - Check if two timespecs are equal
- ``rtems_timespec_get_seconds`` - Obtain seconds portion of timespec
- ``rtems_timespec_get_nanoseconds`` - Obtain nanoseconds portion of timespec
- ``rtems_timespec_to_ticks`` - Convert timespec to number of ticks
- ``rtems_timespec_from_ticks`` - Convert ticks to timespec
Background
==========
Time Storage Conventions
------------------------
Time can be stored in many ways. One of them is the ``struct timespec``
format which is a structure that consists of the fields ``tv_sec``
to represent seconds and ``tv_nsec`` to represent nanoseconds. The
``struct timeval`` structure is similar and consists of seconds (stored
in ``tv_sec``) and microseconds (stored in ``tv_usec``). Either
``struct timespec`` or ``struct timeval`` can be used to represent
elapsed time, time of executing some operations, or time of day.
Operations
==========
Set and Obtain Timespec Value
-----------------------------
A user may write a specific time by passing the desired seconds and
nanoseconds values and the destination ``struct timespec`` using the
``rtems_timespec_set`` directive.
The ``rtems_timespec_zero`` directive is used to zero the seconds
and nanoseconds portions of a ``struct timespec`` instance.
Users may obtain the seconds or nanoseconds portions of a ``struct
timespec`` instance with the ``rtems_timespec_get_seconds`` or
``rtems_timespec_get_nanoseconds`` methods, respectively.
Timespec Math
-------------
A user can perform multiple operations on ``struct timespec``
instances. The helpers in this manager assist in adding, subtracting, and
performing division on ``struct timespec`` instances.
- Adding two ``struct timespec`` instances can be done using the
  ``rtems_timespec_add_to`` directive. This directive is used mainly
  to calculate the total amount of time consumed by multiple operations.

- The ``rtems_timespec_subtract`` directive is used to subtract two
  ``struct timespec`` instances and determine the elapsed time between
  those two points in time.

- The ``rtems_timespec_divide`` directive is used to divide one
  ``struct timespec`` instance by another. This calculates the percentage
  with a precision of three decimal points.

- The ``rtems_timespec_divide_by_integer`` directive is used to divide a
  ``struct timespec`` instance by an integer. It is commonly used in
  benchmark calculations to divide a duration by the number of iterations
  performed.
Comparing struct timespec Instances
-----------------------------------
A user can compare two ``struct timespec`` instances using the
``rtems_timespec_less_than``, ``rtems_timespec_greater_than``
or ``rtems_timespec_equal_to`` routines.
Conversions and Validity Check
------------------------------
Conversion to and from clock ticks may be performed by using the
``rtems_timespec_to_ticks`` and ``rtems_timespec_from_ticks``
directives.

Users can also check the validity of a timespec with the
``rtems_timespec_is_valid`` routine.
Directives
==========
This section details the Timespec Helpers manager's directives.
A subsection is dedicated to each of this manager's directives
and describes the calling sequence, related constants, usage,
and status codes.
TIMESPEC_SET - Set struct timespec Instance
-------------------------------------------
**CALLING SEQUENCE:**
Not Currently Supported In Ada
**STATUS CODES:**
NONE
**DESCRIPTION:**
This directive sets the ``struct timespec`` ``time`` value to the
desired ``seconds`` and ``nanoseconds`` values.
**NOTES:**
This method does NOT check if ``nanoseconds`` is less than the
maximum number of nanoseconds in a second.
TIMESPEC_ZERO - Zero struct timespec Instance
---------------------------------------------
**CALLING SEQUENCE:**
Not Currently Supported In Ada
**STATUS CODES:**
NONE
**DESCRIPTION:**
This routine sets the contents of the ``struct timespec`` instance
``time`` to zero.
**NOTES:**
NONE
TIMESPEC_IS_VALID - Check validity of a struct timespec instance
----------------------------------------------------------------
**CALLING SEQUENCE:**
Not Currently Supported In Ada
**STATUS CODES:**
This method returns ``true`` if the instance is valid, and ``false``
otherwise.
**DESCRIPTION:**
This routine checks the validity of a ``struct timespec`` instance. It
checks if the nanoseconds portion of the ``struct timespec`` instance is
in the allowed range (less than the maximum number of nanoseconds per second).
**NOTES:**
TIMESPEC_ADD_TO - Add Two struct timespec Instances
---------------------------------------------------
**CALLING SEQUENCE:**
Not Currently Supported In Ada
**STATUS CODES:**
The method returns the number of seconds ``time`` increased by.
**DESCRIPTION:**
This routine adds two ``struct timespec`` instances. The second argument is added to the first. The parameter ``time`` is the base time to which the ``add`` parameter is added.
**NOTES:**
NONE
TIMESPEC_SUBTRACT - Subtract Two struct timespec Instances
----------------------------------------------------------
**CALLING SEQUENCE:**
Not Currently Supported In Ada
**STATUS CODES:**
NONE
**DESCRIPTION:**
This routine subtracts ``start`` from ``end`` and saves the difference
in ``result``. The primary use of this directive is to calculate
elapsed time.
**NOTES:**
It is possible to subtract when ``end`` is less than ``start``, which
produces a negative ``result``. When doing this you should be careful
and remember that only the seconds portion of a ``struct timespec``
instance is signed, which means that the nanoseconds portion always
increases the value. Due to that, a timespec with seconds = -1 and
nanoseconds = 500,000,000 represents a result of -0.5 second, NOT the
expected -1.5!
TIMESPEC_DIVIDE - Divide Two struct timespec Instances
------------------------------------------------------
**CALLING SEQUENCE:**
Not Currently Supported In Ada
**STATUS CODES:**
NONE
**DESCRIPTION:**
This routine divides the ``struct timespec`` instance ``lhs`` by
the ``struct timespec`` instance ``rhs``. The result is returned in
``ival_percentage`` and ``fval_percentage``, which hold the integer
and fractional parts of the calculated percentage, respectively.
**NOTES:**
The intended use is calculating percentages to three decimal points.
When dividing by zero, this routine returns both ``ival_percentage``
and ``fval_percentage`` equal to zero.
The division is performed using exclusively integer operations.
TIMESPEC_DIVIDE_BY_INTEGER - Divide a struct timespec Instance by an Integer
----------------------------------------------------------------------------
**CALLING SEQUENCE:**
Not Currently Supported In Ada
**STATUS CODES:**
NONE
**DESCRIPTION:**
This routine divides the ``struct timespec`` instance ``time`` by the integer value ``iterations``.
The result is saved in ``result``.
**NOTES:**
The expected use is to assist in benchmark calculations where you
typically divide a duration (``time``) by a number of iterations, which
gives the average time per iteration.
TIMESPEC_LESS_THAN - Less than operator
---------------------------------------
**CALLING SEQUENCE:**
Not Currently Supported In Ada
**STATUS CODES:**
This method returns ``true`` if ``lhs`` is less than ``rhs`` and
``false`` otherwise.
**DESCRIPTION:**
This method is the less than operator for ``struct timespec`` instances. The first parameter is the left hand side and the second is the right hand side of the comparison.
**NOTES:**
NONE
TIMESPEC_GREATER_THAN - Greater than operator
---------------------------------------------
**CALLING SEQUENCE:**
Not Currently Supported In Ada
**STATUS CODES:**
This method returns ``true`` if ``lhs`` is greater than ``rhs`` and
``false`` otherwise.
**DESCRIPTION:**
This method is greater than operator for ``struct timespec`` instances.
**NOTES:**
NONE
TIMESPEC_EQUAL_TO - Check equality of timespecs
-----------------------------------------------
**CALLING SEQUENCE:**
Not Currently Supported In Ada
**STATUS CODES:**
This method returns ``true`` if ``lhs`` is equal to ``rhs`` and
``false`` otherwise.
**DESCRIPTION:**
This method is equality operator for ``struct timespec`` instances.
**NOTES:**
NONE
TIMESPEC_GET_SECONDS - Get Seconds Portion of struct timespec Instance
----------------------------------------------------------------------
**CALLING SEQUENCE:**
Not Currently Supported In Ada
**STATUS CODES:**
This method returns the seconds portion of the specified ``struct
timespec`` instance.
**DESCRIPTION:**
This method returns the seconds portion of the specified ``struct timespec`` instance ``time``.
**NOTES:**
NONE
TIMESPEC_GET_NANOSECONDS - Get Nanoseconds Portion of the struct timespec Instance
----------------------------------------------------------------------------------
**CALLING SEQUENCE:**
Not Currently Supported In Ada
**STATUS CODES:**
This method returns the nanoseconds portion of the specified ``struct
timespec`` instance.
**DESCRIPTION:**
This method returns the nanoseconds portion of the specified timespec
pointed to by ``_time``.
**NOTES:**
TIMESPEC_TO_TICKS - Convert struct timespec Instance to Ticks
-------------------------------------------------------------
**CALLING SEQUENCE:**
Not Currently Supported In Ada
**STATUS CODES:**
This directive returns the number of ticks computed.
**DESCRIPTION:**
This directive converts the ``time`` timespec to the corresponding number of clock ticks.
**NOTES:**
NONE
TIMESPEC_FROM_TICKS - Convert Ticks to struct timespec Representation
---------------------------------------------------------------------
**CALLING SEQUENCE:**
Not Currently Supported In Ada
**STATUS CODES:**
NONE
**DESCRIPTION:**
This routine converts the ``ticks`` to the corresponding ``struct timespec`` representation and stores it in ``time``.
**NOTES:**
NONE
.. COMMENT: COPYRIGHT (c) 2011.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.
Timer Manager
#############
.. index:: timers
Introduction
============
The timer manager provides support for timer
facilities. The directives provided by the timer manager are:
- ``rtems.timer_create`` - Create a timer
- ``rtems.timer_ident`` - Get ID of a timer
- ``rtems.timer_cancel`` - Cancel a timer
- ``rtems.timer_delete`` - Delete a timer
- ``rtems.timer_fire_after`` - Fire timer after interval
- ``rtems.timer_fire_when`` - Fire timer when specified
- ``rtems.timer_initiate_server`` - Initiate server for task-based timers
- ``rtems.timer_server_fire_after`` - Fire task-based timer after interval
- ``rtems.timer_server_fire_when`` - Fire task-based timer when specified
- ``rtems.timer_reset`` - Reset an interval timer
Background
==========
Required Support
----------------
A clock tick is required to support the functionality provided by this manager.
Timers
------
A timer is an RTEMS object which allows the
application to schedule operations to occur at specific times in
the future. User supplied timer service routines are invoked by
either the ``rtems.clock_tick`` directive or
a special Timer Server task when the timer fires. Timer service
routines may perform any operations or directives which normally
would be performed by the application code which invoked the
``rtems.clock_tick`` directive.
The timer can be used to implement watchdog routines
which only fire to denote that an application error has
occurred. The timer is reset at specific points in the
application to ensure that the watchdog does not fire. Thus, if
the application does not reset the watchdog timer, then the
timer service routine will fire to indicate that the application
has failed to reach a reset point. This use of a timer is
sometimes referred to as a "keep alive" or a "deadman" timer.
Timer Server
------------
The Timer Server task is responsible for executing the timer
service routines associated with all task-based timers.
This task executes at a priority higher than any RTEMS application
task and is created non-preemptible; thus it can be viewed logically as
the lowest priority interrupt.
By providing a mechanism where timer service routines execute
in task rather than interrupt space, the application is
allowed a bit more flexibility in what operations a timer
service routine can perform. For example, the Timer Server
can be configured to have a floating point context in which case
it would be safe to perform floating point operations
from a task-based timer. Most of the time, executing floating
point instructions from an interrupt service routine
is not considered safe. However, since the Timer Server task
is non-preemptible, only directives allowed from an ISR can be
called in the timer service routine.
The Timer Server is designed to remain blocked until a
task-based timer fires. This reduces the execution overhead
of the Timer Server.
Timer Service Routines
----------------------
The timer service routine should adhere to Ada calling
conventions and have a prototype similar to the following:
.. code:: c
procedure User_Routine(
Timer_ID : in RTEMS.ID;
User_Data : in System.Address
);
where the ``Timer_ID`` parameter is the RTEMS object ID
of the timer which is being fired and ``User_Data`` is a pointer to
user-defined information which may be utilized by the timer
service routine. The ``User_Data`` argument may be NULL.
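For example, a minimal timer service routine which merely records that the timeout occurred might look like the following sketch; ``Timeout_Occurred`` is an application-defined flag, not part of RTEMS:

.. code:: c

    Timeout_Occurred : Boolean := False;  --  application-defined flag

    procedure Timeout_Handler (
       Timer_ID  : in RTEMS.ID;
       User_Data : in System.Address
    ) is
    begin
       --  Only directives legal from an ISR may be invoked here.
       Timeout_Occurred := True;
    end Timeout_Handler;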
Operations
==========
Creating a Timer
----------------
The ``rtems.timer_create`` directive creates a timer by
allocating a Timer Control Block (TMCB), assigning the timer a
user-specified name, and assigning it a timer ID. Newly created
timers do not have a timer service routine associated with them
and are not active.
Obtaining Timer IDs
-------------------
When a timer is created, RTEMS generates a unique
timer ID and assigns it to the created timer until it is
deleted. The timer ID may be obtained by either of two methods.
First, as the result of an invocation of the ``rtems.timer_create``
directive, the timer ID is stored in a user provided location.
Second, the timer ID may be obtained later using the ``rtems.timer_ident`` directive. The timer ID
is used by other directives to manipulate this timer.
Initiating an Interval Timer
----------------------------
The ``rtems.timer_fire_after``
and ``rtems.timer_server_fire_after``
directives initiate a timer to fire a user provided
timer service routine after the specified
number of clock ticks have elapsed. When the interval has
elapsed, the timer service routine will be invoked from the
``rtems.clock_tick`` directive if it was initiated
by the ``rtems.timer_fire_after`` directive
and from the Timer Server task if initiated by the
``rtems.timer_server_fire_after`` directive.
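Arming an interval timer might be sketched as follows; the timer name, tick count, and ``Timeout_Handler`` routine are assumptions for illustration:

.. code:: c

    RTEMS.Timer_Create (
       RTEMS.Build_Name ('T', 'M', 'R', '1'), Timer_ID, Result);

    --  Timeout_Handler will be invoked from the clock tick context
    --  after 100 ticks have elapsed.
    RTEMS.Timer_Fire_After (
       Timer_ID, 100, Timeout_Handler'Access,
       System.Null_Address, Result);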
Initiating a Time of Day Timer
------------------------------
The ``rtems.timer_fire_when``
and ``rtems.timer_server_fire_when``
directives initiate a timer to
fire a user provided timer service routine when the specified
time of day has been reached. When that time arrives,
the timer service routine will be invoked from the
``rtems.clock_tick`` directive if it was initiated
by the ``rtems.timer_fire_when`` directive
and from the Timer Server task if initiated by the
``rtems.timer_server_fire_when`` directive.
Canceling a Timer
-----------------
The ``rtems.timer_cancel`` directive is used to halt the
specified timer. Once canceled, the timer service routine will
not fire unless the timer is reinitiated. The timer can be
reinitiated using the ``rtems.timer_reset``, ``rtems.timer_fire_after``, and ``rtems.timer_fire_when`` directives.
Resetting a Timer
-----------------
The ``rtems.timer_reset`` directive is used to restore an
interval timer initiated by a previous invocation of ``rtems.timer_fire_after`` or ``rtems.timer_server_fire_after`` to
its original interval length. If the
timer has not been used or the last usage of this timer
was by the ``rtems.timer_fire_when``
or ``rtems.timer_server_fire_when``
directive, then an error is returned. The timer service routine
is not changed or fired by this directive.
Initiating the Timer Server
---------------------------
The ``rtems.timer_initiate_server`` directive is used to
allocate and start the execution of the Timer Server task. The
application can specify both the stack size and attributes of the
Timer Server. The Timer Server executes at a priority higher than
any application task and thus the user can expect to be preempted
as the result of executing the ``rtems.timer_initiate_server``
directive.
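As a sketch, starting the Timer Server and then scheduling a task-based timer might look like this; the priority and stack size values shown are only examples:

.. code:: c

    --  Priority 1 and the minimum stack size are illustrative choices.
    RTEMS.Timer_Initiate_Server (
       1, RTEMS.Minimum_Stack_Size, RTEMS.Default_Attributes, Result);

    --  This service routine executes in the Timer Server task,
    --  not in interrupt context.
    RTEMS.Timer_Server_Fire_After (
       Timer_ID, 100, Timeout_Handler'Access,
       System.Null_Address, Result);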
Deleting a Timer
----------------
The ``rtems.timer_delete`` directive is used to delete a timer.
If the timer is running and has not expired, the timer is
automatically canceled. The timer's control block is returned
to the TMCB free list when it is deleted. A timer can be
deleted by a task other than the task which created the timer.
Any subsequent references to the timer's name and ID are invalid.
Directives
==========
This section details the timer manager's directives.
A subsection is dedicated to each of this manager's directives
and describes the calling sequence, related constants, usage,
and status codes.
TIMER_CREATE - Create a timer
-----------------------------
.. index:: create a timer
**CALLING SEQUENCE:**
.. code:: c
procedure Timer_Create (
Name : in RTEMS.Name;
ID : out RTEMS.ID;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - timer created successfully
``RTEMS.INVALID_ADDRESS`` - ``id`` is NULL
``RTEMS.INVALID_NAME`` - invalid timer name
``RTEMS.TOO_MANY`` - too many timers created
**DESCRIPTION:**
This directive creates a timer. The assigned timer
id is returned in id. This id is used to access the timer with
other timer manager directives. For control and maintenance of
the timer, RTEMS allocates a TMCB from the local TMCB free pool
and initializes it.
**NOTES:**
This directive will not cause the calling task to be
preempted.
TIMER_IDENT - Get ID of a timer
-------------------------------
.. index:: obtain the ID of a timer
**CALLING SEQUENCE:**
.. code:: c
procedure Timer_Ident (
Name : in RTEMS.Name;
ID : out RTEMS.ID;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - timer identified successfully
``RTEMS.INVALID_ADDRESS`` - ``id`` is NULL
``RTEMS.INVALID_NAME`` - timer name not found
**DESCRIPTION:**
This directive obtains the timer id associated with
the timer name to be acquired. If the timer name is not unique,
then the timer id will match one of the timers with that name.
However, this timer id is not guaranteed to correspond to the
desired timer. The timer id is used to access this timer in
other timer related directives.
**NOTES:**
This directive will not cause the running task to be
preempted.
TIMER_CANCEL - Cancel a timer
-----------------------------
.. index:: cancel a timer
**CALLING SEQUENCE:**
.. code:: c
procedure Timer_Cancel (
ID : in RTEMS.ID;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - timer canceled successfully
``RTEMS.INVALID_ID`` - invalid timer id
**DESCRIPTION:**
This directive cancels the timer id. This timer will
be reinitiated by the next invocation of ``rtems.timer_reset``, ``rtems.timer_fire_after``, or ``rtems.timer_fire_when`` with this id.
**NOTES:**
This directive will not cause the running task to be preempted.
TIMER_DELETE - Delete a timer
-----------------------------
.. index:: delete a timer
**CALLING SEQUENCE:**
.. code:: c
procedure Timer_Delete (
ID : in RTEMS.ID;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - timer deleted successfully
``RTEMS.INVALID_ID`` - invalid timer id
**DESCRIPTION:**
This directive deletes the timer specified by id. If
the timer is running, it is automatically canceled. The TMCB
for the deleted timer is reclaimed by RTEMS.
**NOTES:**
This directive will not cause the running task to be
preempted.
A timer can be deleted by a task other than the task
which created the timer.
TIMER_FIRE_AFTER - Fire timer after interval
--------------------------------------------
.. index:: fire a timer after an interval
**CALLING SEQUENCE:**
.. code:: c
procedure Timer_Fire_After (
ID : in RTEMS.ID;
Ticks : in RTEMS.Interval;
Routine : in RTEMS.Timer_Service_Routine;
User_Data : in RTEMS.Address;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - timer initiated successfully
``RTEMS.INVALID_ADDRESS`` - ``routine`` is NULL
``RTEMS.INVALID_ID`` - invalid timer id
``RTEMS.INVALID_NUMBER`` - invalid interval
**DESCRIPTION:**
This directive initiates the timer specified by id.
If the timer is running, it is automatically canceled before
being initiated. The timer is scheduled to fire after ``Ticks``
clock ticks have passed. When the timer fires,
the timer service routine ``Routine`` will be invoked with the
argument ``User_Data``.
**NOTES:**
This directive will not cause the running task to be
preempted.
TIMER_FIRE_WHEN - Fire timer when specified
-------------------------------------------
.. index:: fire a timer at wall time
**CALLING SEQUENCE:**
.. code:: c
procedure Timer_Fire_When (
ID : in RTEMS.ID;
Wall_Time : in RTEMS.Time_Of_Day;
Routine : in RTEMS.Timer_Service_Routine;
User_Data : in RTEMS.Address;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - timer initiated successfully
``RTEMS.INVALID_ADDRESS`` - ``routine`` is NULL
``RTEMS.INVALID_ADDRESS`` - ``wall_time`` is NULL
``RTEMS.INVALID_ID`` - invalid timer id
``RTEMS.NOT_DEFINED`` - system date and time is not set
``RTEMS.INVALID_CLOCK`` - invalid time of day
**DESCRIPTION:**
This directive initiates the timer specified by id.
If the timer is running, it is automatically canceled before
being initiated. The timer is scheduled to fire at the time of
day specified by ``Wall_Time``. When the timer fires, the timer
service routine ``Routine`` will be invoked with the argument
``User_Data``.
**NOTES:**
This directive will not cause the running task to be
preempted.
TIMER_INITIATE_SERVER - Initiate server for task-based timers
-------------------------------------------------------------
.. index:: initiate the Timer Server
**CALLING SEQUENCE:**
.. code:: c
procedure Timer_Initiate_Server (
Server_Priority : in RTEMS.Task_Priority;
Stack_Size : in RTEMS.Unsigned32;
Attribute_Set : in RTEMS.Attribute;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - Timer Server initiated successfully
``RTEMS.TOO_MANY`` - too many tasks created
**DESCRIPTION:**
This directive initiates the Timer Server task. This task
is responsible for executing all timers initiated via the ``rtems.timer_server_fire_after`` or ``rtems.timer_server_fire_when`` directives.
**NOTES:**
This directive could cause the calling task to be preempted.
The Timer Server task is created using the ``rtems.task_create``
service and must be accounted for when configuring the system.
Even though this directive invokes the ``rtems.task_create``
and ``rtems.task_start`` directives, it should only fail
due to resource allocation problems.
TIMER_SERVER_FIRE_AFTER - Fire task-based timer after interval
--------------------------------------------------------------
.. index:: fire a task-based timer after an interval
**CALLING SEQUENCE:**
.. code:: c
procedure Timer_Server_Fire_After (
ID : in RTEMS.ID;
Ticks : in RTEMS.Interval;
Routine : in RTEMS.Timer_Service_Routine;
User_Data : in RTEMS.Address;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - timer initiated successfully
``RTEMS.INVALID_ADDRESS`` - ``routine`` is NULL
``RTEMS.INVALID_ID`` - invalid timer id
``RTEMS.INVALID_NUMBER`` - invalid interval
``RTEMS.INCORRECT_STATE`` - Timer Server not initiated
**DESCRIPTION:**
This directive initiates the timer specified by id and specifies
that when it fires it will be executed by the Timer Server.
If the timer is running, it is automatically canceled before
being initiated. The timer is scheduled to fire after ``Ticks``
clock ticks have passed. When the timer fires,
the timer service routine ``Routine`` will be invoked with the
argument ``User_Data``.
**NOTES:**
This directive will not cause the running task to be
preempted.
TIMER_SERVER_FIRE_WHEN - Fire task-based timer when specified
-------------------------------------------------------------
.. index:: fire a task-based timer at wall time
**CALLING SEQUENCE:**
.. code:: c
procedure Timer_Server_Fire_When (
ID : in RTEMS.ID;
Wall_Time : in RTEMS.Time_Of_Day;
Routine : in RTEMS.Timer_Service_Routine;
User_Data : in RTEMS.Address;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - timer initiated successfully
``RTEMS.INVALID_ADDRESS`` - ``routine`` is NULL
``RTEMS.INVALID_ADDRESS`` - ``wall_time`` is NULL
``RTEMS.INVALID_ID`` - invalid timer id
``RTEMS.NOT_DEFINED`` - system date and time is not set
``RTEMS.INVALID_CLOCK`` - invalid time of day
``RTEMS.INCORRECT_STATE`` - Timer Server not initiated
**DESCRIPTION:**
This directive initiates the timer specified by id and specifies
that when it fires it will be executed by the Timer Server.
If the timer is running, it is automatically canceled before
being initiated. The timer is scheduled to fire at the time of
day specified by ``Wall_Time``. When the timer fires, the timer
service routine ``Routine`` will be invoked with the argument
``User_Data``.
**NOTES:**
This directive will not cause the running task to be
preempted.
TIMER_RESET - Reset an interval timer
-------------------------------------
.. index:: reset a timer
**CALLING SEQUENCE:**
.. code:: c
procedure Timer_Reset (
ID : in RTEMS.ID;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - timer reset successfully
``RTEMS.INVALID_ID`` - invalid timer id
``RTEMS.NOT_DEFINED`` - attempted to reset a newly created timer or one initiated by a *when* directive
**DESCRIPTION:**
This directive resets the timer associated with id.
This timer must have been previously initiated with either the
``rtems.timer_fire_after`` or ``rtems.timer_server_fire_after``
directive. If active, the timer is canceled,
after which the timer is reinitiated using the same interval and
timer service routine which the original ``rtems.timer_fire_after``
or ``rtems.timer_server_fire_after`` directive used.
**NOTES:**
If the timer has not been used or the last usage of this timer
was by a ``rtems.timer_fire_when`` or ``rtems.timer_server_fire_when``
directive, then the ``RTEMS.NOT_DEFINED`` error is
returned.
Restarting a cancelled interval (*after*) timer results in the timer being
reinitiated with its previous timer service routine and interval.
This directive will not cause the running task to be preempted.
.. COMMENT: COPYRIGHT (c) 1988-2007.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.

User Extensions Manager
#######################
.. index:: user extensions
Introduction
============
The RTEMS User Extensions Manager allows the
application developer to augment the executive by allowing them
to supply extension routines which are invoked at critical
system events. The directives provided by the user extensions
manager are:
- ``rtems.extension_create`` - Create an extension set
- ``rtems.extension_ident`` - Get ID of an extension set
- ``rtems.extension_delete`` - Delete an extension set
Background
==========
User extension routines are invoked when the
following system events occur:
- Task creation
- Task initiation
- Task reinitiation
- Task deletion
- Task context switch
- Post task context switch
- Task begin
- Task exits
- Fatal error detection
These extensions are invoked as a function with
arguments that are appropriate to the system event.
Extension Sets
--------------
.. index:: extension set
An extension set is defined as a set of routines
which are invoked at each of the critical system events at which
user extension routines are invoked. Together a set of these
routines typically perform a specific functionality such as
performance monitoring or debugger support. RTEMS is informed of
the entry points which constitute an extension set via the
following record:

.. index:: rtems_extensions_table
.. code:: c
type Extensions_Table is
record
Task_Create : RTEMS.Task_Create_Extension;
Task_Start : RTEMS.Task_Start_Extension;
Task_Restart : RTEMS.Task_Restart_Extension;
Task_Delete : RTEMS.Task_Delete_Extension;
Task_Switch : RTEMS.Task_Switch_Extension;
Task_Post_Switch : RTEMS.Task_Post_Switch_Extension;
Task_Begin : RTEMS.Task_Begin_Extension;
Task_Exitted : RTEMS.Task_Exitted_Extension;
Fatal : RTEMS.Fatal_Error_Extension;
end record;
RTEMS allows the user to have multiple extension sets
active at the same time. First, a single static extension set
may be defined as the application's User Extension Table which
is included as part of the Configuration Table. This extension
set is active for the entire life of the system and may not be
deleted. This extension set is especially important because it
is the only way the application can provide a FATAL error
extension which is invoked if RTEMS fails during the
initialize_executive directive. The static extension set is
optional and may be configured as NULL if no static extension
set is required.
Second, the user can install dynamic extensions using
the ``rtems.extension_create``
directive. These extensions are RTEMS
objects in that they have a name, an ID, and can be dynamically
created and deleted. In contrast to the static extension set,
these extensions can only be created and installed after the
initialize_executive directive successfully completes execution.
Dynamic extensions are useful for encapsulating the
functionality of an extension set. For example, the application
could use extensions to manage a special coprocessor, do
performance monitoring, and to do stack bounds checking. Each
of these extension sets could be written and installed
independently of the others.
All user extensions are optional and RTEMS places no
naming restrictions on the user. The user extension entry points
are copied into an internal RTEMS structure. This means the user
does not need to keep the table after creating it, and changing the
handler entry points dynamically in a table once created has no
effect. Creating a table local to a function can save space in
space-limited applications.
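Installing a dynamic extension set might be sketched as follows; ``User_Task_Create`` and the extension set name are illustrative, and only the create hook is populated:

.. code:: c

    Table  : aliased RTEMS.Extensions_Table;  --  unused entries stay null
    Ext_ID : RTEMS.ID;
    Result : RTEMS.Status_Codes;

    --  RTEMS copies the table, so this declaration need not persist.
    Table.Task_Create := User_Task_Create'Access;
    RTEMS.Extension_Create (
       RTEMS.Build_Name ('M', 'O', 'N', ' '),
       Table'Access, Ext_ID, Result);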
Extension switches do not affect the context switch overhead if
no switch handler is installed.
TCB Extension Area
------------------
.. index:: TCB extension area
RTEMS provides for a pointer to a user-defined data
area for each extension set to be linked to each task's control
block. This set of pointers is an extension of the TCB and can
be used to store additional data required by the user's
extension functions.
The TCB extension is an array of pointers in the TCB. The
index into the table can be obtained from the extension id
returned when the extension is created:

.. index:: rtems extensions table index
.. code:: c
There is currently no example for Ada.
The number of pointers in the area is the same as the number of
user extension sets configured. This allows an application to
augment the TCB with user-defined information. For example, an
application could implement task profiling by storing timing
statistics in the TCB's extended memory area. When a task
context switch is being executed, the TASK_SWITCH extension
could read a real-time clock to calculate how long the task
being swapped out has run as well as timestamp the starting time
for the task being swapped in.
If used, the extended memory area for the TCB should
be allocated and the TCB extension pointer should be set at the
time the task is created or started by either the TASK_CREATE or
TASK_START extension. The application is responsible for
managing this extended memory area for the TCBs. The memory may
be reinitialized by the TASK_RESTART extension and should be
deallocated by the TASK_DELETE extension when the task is
deleted. Since the TCB extension buffers would most likely be
of a fixed size, the RTEMS partition manager could be used to
manage the application's extended memory area. The application
could create a partition of fixed size TCB extension buffers and
use the partition manager's allocation and deallocation
directives to obtain and release the extension buffers.
Extensions
----------
The sections that follow will contain a description
of each extension. Each section will contain a prototype of a
function with the appropriate calling sequence for the
corresponding extension. The names given for the Ada
subprogram and
its arguments are all defined by the user. The names used in
the examples were arbitrarily chosen and impose no naming
conventions on the user.
TASK_CREATE Extension
~~~~~~~~~~~~~~~~~~~~~
The TASK_CREATE extension directly corresponds to the ``rtems.task_create`` directive. If this extension
is defined in any
static or dynamic extension set and a task is being created,
then the extension routine will automatically be invoked by
RTEMS. The extension should have a prototype similar to the
following:

.. index:: rtems_task_create_extension
.. index:: rtems_extension
.. code:: c
function User_Task_Create (
Current_Task : in RTEMS.TCB_Pointer;
New_Task : in RTEMS.TCB_Pointer
) return Boolean;
where ``current_task`` can be used to access the TCB for
the currently executing task, and ``new_task`` can be used to access
the TCB for the new task being created. This extension is
invoked from the ``rtems.task_create``
directive after ``new_task`` has been
completely initialized, but before it is placed on a ready TCB
chain.
The user extension is expected to return the boolean
value ``true`` if it successfully executed and ``false``
otherwise. A task create user extension
will frequently attempt to allocate resources. If this
allocation fails, then the extension should return ``false``
and the entire task create operation will fail.
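A create extension which vetoes task creation when a resource cannot be obtained might be sketched as follows; ``Allocate_Trace_Buffer`` is an application-defined helper, not part of RTEMS:

.. code:: c

    function User_Task_Create (
       Current_Task : in RTEMS.TCB_Pointer;
       New_Task     : in RTEMS.TCB_Pointer
    ) return Boolean is
    begin
       --  Returning False causes the rtems.task_create call to fail.
       if Allocate_Trace_Buffer = System.Null_Address then
          return False;
       end if;
       return True;
    end User_Task_Create;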
TASK_START Extension
~~~~~~~~~~~~~~~~~~~~
The TASK_START extension directly corresponds to the
task_start directive. If this extension is defined in any
static or dynamic extension set and a task is being started,
then the extension routine will automatically be invoked by
RTEMS. The extension should have a prototype similar to the
following:

.. index:: rtems_task_start_extension
.. code:: c
procedure User_Task_Start (
Current_Task : in RTEMS.TCB_Pointer;
Started_Task : in RTEMS.TCB_Pointer
);
where current_task can be used to access the TCB for
the currently executing task, and started_task can be used to
access the TCB for the dormant task being started. This
extension is invoked from the task_start directive after
started_task has been made ready to start execution, but before
it is placed on a ready TCB chain.
TASK_RESTART Extension
~~~~~~~~~~~~~~~~~~~~~~
The TASK_RESTART extension directly corresponds to
the task_restart directive. If this extension is defined in any
static or dynamic extension set and a task is being restarted,
then the extension routine will automatically be invoked by
RTEMS. The extension should have a prototype similar to the
following:

.. index:: rtems_task_restart_extension
.. code:: c
procedure User_Task_Restart (
Current_Task : in RTEMS.TCB_Pointer;
Restarted_Task : in RTEMS.TCB_Pointer
);
where current_task can be used to access the TCB for
the currently executing task, and restarted_task can be used to
access the TCB for the task being restarted. This extension is
invoked from the task_restart directive after restarted_task has
been made ready to start execution, but before it is placed on a
ready TCB chain.
TASK_DELETE Extension
~~~~~~~~~~~~~~~~~~~~~
The TASK_DELETE extension is associated with the
task_delete directive. If this extension is defined in any
static or dynamic extension set and a task is being deleted,
then the extension routine will automatically be invoked by
RTEMS. The extension should have a prototype similar to the
following:

.. index:: rtems_task_delete_extension
.. code:: c
procedure User_Task_Delete (
Current_Task : in RTEMS.TCB_Pointer;
Deleted_Task : in RTEMS.TCB_Pointer
);
where current_task can be used to access the TCB for
the currently executing task, and deleted_task can be used to
access the TCB for the task being deleted. This extension is
invoked from the task_delete directive after the TCB has been
removed from a ready TCB chain, but before all its resources
including the TCB have been returned to their respective free
pools. This extension should not call any RTEMS directives if a
task is deleting itself (current_task is equal to deleted_task).
TASK_SWITCH Extension
~~~~~~~~~~~~~~~~~~~~~
The TASK_SWITCH extension corresponds to a task
context switch. If this extension is defined in any static or
dynamic extension set and a task context switch is in progress,
then the extension routine will automatically be invoked by
RTEMS. The extension should have a prototype similar to the
following:

.. index:: rtems_task_switch_extension
.. code:: c
procedure User_Task_Switch (
Current_Task : in RTEMS.TCB_Pointer;
Heir_Task : in RTEMS.TCB_Pointer
);
where current_task can be used to access the TCB for
the task that is being swapped out, and heir_task can be used to
access the TCB for the task being swapped in. This extension is
invoked from the RTEMS dispatcher routine after the current_task
context has been saved, but before the heir_task context has
been restored. This extension should not call any RTEMS
directives.
TASK_BEGIN Extension
~~~~~~~~~~~~~~~~~~~~
The TASK_BEGIN extension is invoked when a task
begins execution. It is invoked immediately before the body of
the starting procedure and executes in the context in the task.
This user extension have a prototype similar to the following:.. index:: rtems_task_begin_extension
.. code:: c
procedure User_Task_Begin (
Current_Task : in RTEMS.TCB_Pointer
);
where current_task can be used to access the TCB for
the currently executing task which has begun. The distinction
between the TASK_BEGIN and TASK_START extension is that the
TASK_BEGIN extension is executed in the context of the actual
task while the TASK_START extension is executed in the context
of the task performing the task_start directive. For most
extensions, this is not a critical distinction.
TASK_EXITTED Extension
~~~~~~~~~~~~~~~~~~~~~~
The TASK_EXITTED extension is invoked when a task
exits the body of the starting procedure by either an implicit
or explicit return statement. This user extension should have a
prototype similar to the following:

.. index:: rtems_task_exitted_extension
.. code:: c
procedure User_Task_Exitted (
Current_Task : in RTEMS.TCB_Pointer
);
where current_task can be used to access the TCB for
the currently executing task which has just exited.
Although the exiting of a task is often considered to be a
fatal error, this extension allows recovery by either restarting
or deleting the exiting task. If the user does not wish to
recover, then a fatal error may be reported. If the user does
not provide a TASK_EXITTED extension or the provided handler
returns control to RTEMS, then the RTEMS default handler will be
used. This default handler invokes the directive
fatal_error_occurred with the ``RTEMS.TASK_EXITTED`` directive status.
FATAL Error Extension
~~~~~~~~~~~~~~~~~~~~~
The FATAL error extension is associated with the
fatal_error_occurred directive. If this extension is defined in
any static or dynamic extension set and the fatal_error_occurred
directive has been invoked, then this extension will be called.
This extension should have a prototype similar to the following:

.. index:: rtems_fatal_extension
.. code:: c
procedure User_Fatal_Error (
Error : in RTEMS.Unsigned32
);
where ``Error`` is the error code passed to the
fatal_error_occurred directive. This extension is invoked from
the fatal_error_occurred directive.
If defined, the user's FATAL error extension is
invoked before the RTEMS default fatal error routine is invoked and
the processor is stopped. For example, this extension could be
used to pass control to a debugger when a fatal error occurs.
This extension should not call any RTEMS directives.
Order of Invocation
-------------------
When one of the critical system events occur, the
user extensions are invoked in either "forward" or "reverse"
order. Forward order indicates that the static extension set is
invoked followed by the dynamic extension sets in the order in
which they were created. Reverse order means that the dynamic
extension sets are invoked in the opposite of the order in which
they were created followed by the static extension set. By
invoking the extension sets in this order, extensions can be
built upon one another. At the following system events, the
extensions are invoked in forward order:
- Task creation
- Task initiation
- Task reinitiation
- Task deletion
- Task context switch
- Post task context switch
- Task begins to execute
At the following system events, the extensions are
invoked in reverse order:
- Task deletion
- Fatal error detection
At these system events, the extensions are invoked in
reverse order to ensure that if an extension set is built upon
another, the more complicated extension is invoked before the
extension set it is built upon. For example, by invoking the
static extension set last it is known that the "system" fatal
error extension will be the last fatal error extension executed.
Another example is use of the task delete extension by the
Standard C Library. Extension sets which are installed after
the Standard C Library will operate correctly even if they
utilize the C Library because the C Library's TASK_DELETE
extension is invoked after that of the other extensions.
Operations
==========
Creating an Extension Set
-------------------------
The ``rtems.extension_create`` directive creates and installs
an extension set by allocating an Extension Set Control Block
(ESCB), assigning the extension set a user-specified name, and
assigning it an extension set ID. Newly created extension sets
are immediately installed and are invoked upon the next system
event supporting an extension.
Obtaining Extension Set IDs
---------------------------
When an extension set is created, RTEMS generates a
unique extension set ID and assigns it to the created extension
set until it is deleted. The extension ID may be obtained by
either of two methods. First, as the result of an invocation of
the ``rtems.extension_create``
directive, the extension set ID is stored
in a user provided location. Second, the extension set ID may
be obtained later using the ``rtems.extension_ident``
directive. The extension set ID is used by other directives
to manipulate this extension set.
Deleting an Extension Set
-------------------------
The ``rtems.extension_delete`` directive is used to delete an
extension set. The extension set's control block is returned to
the ESCB free list when it is deleted. An extension set can be
deleted by a task other than the task which created the
extension set. Any subsequent references to the extension's
name and ID are invalid.
Directives
==========
This section details the user extension manager's
directives. A subsection is dedicated to each of this manager's
directives and describes the calling sequence, related
constants, usage, and status codes.
EXTENSION_CREATE - Create an extension set
------------------------------------------
.. index:: create an extension set
**CALLING SEQUENCE:**
.. code:: c
procedure Extension_Create (
Name : in RTEMS.Name;
Table : in RTEMS.Extensions_Table_Pointer;
ID : out RTEMS.ID;
Result : out RTEMS.Status_Codes
);
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - extension set created successfully
``RTEMS.INVALID_NAME`` - invalid extension set name
``RTEMS.TOO_MANY`` - too many extension sets created
**DESCRIPTION:**
This directive creates an extension set. The assigned
extension set id is returned in id. This id is used to access
the extension set with other user extension manager directives.
For control and maintenance of the extension set, RTEMS
allocates an ESCB from the local ESCB free pool and initializes
it.
**NOTES:**
This directive will not cause the calling task to be
preempted.
EXTENSION_IDENT - Get ID of an extension set
--------------------------------------------
.. index:: get ID of an extension set
.. index:: obtain ID of an extension set
**CALLING SEQUENCE:**
.. code:: ada

    procedure Extension_Ident (
       Name   : in     RTEMS.Name;
       ID     :    out RTEMS.ID;
       Result :    out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - extension set identified successfully
``RTEMS.INVALID_NAME`` - extension set name not found
**DESCRIPTION:**
This directive obtains the extension set id
associated with the extension set name to be acquired. If the
extension set name is not unique, then the extension set id will
match one of the extension sets with that name. However, this
extension set id is not guaranteed to correspond to the desired
extension set. The extension set id is used to access this
extension set in other extension set related directives.
**NOTES:**
This directive will not cause the running task to be
preempted.
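When the extension set id was not captured at creation time, it can be
looked up by name. A sketch, assuming a set was previously created
with the name "EXT1" (``Extension_ID`` and ``Status`` are hypothetical
local declarations):

.. code:: ada

    Extension_ID : RTEMS.ID;
    Status       : RTEMS.Status_Codes;

    RTEMS.Extension_Ident (
       RTEMS.Build_Name ('E', 'X', 'T', '1'),
       Extension_ID,
       Status
    );
    --  Status is RTEMS.INVALID_NAME if no extension set
    --  with this name exists.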
EXTENSION_DELETE - Delete an extension set
------------------------------------------
.. index:: delete an extension set
**CALLING SEQUENCE:**
.. code:: ada

    procedure Extension_Delete (
       ID     : in     RTEMS.ID;
       Result :    out RTEMS.Status_Codes
    );
**DIRECTIVE STATUS CODES:**
``RTEMS.SUCCESSFUL`` - extension set deleted successfully
``RTEMS.INVALID_ID`` - invalid extension set id
**DESCRIPTION:**
This directive deletes the extension set specified by
id. If the extension set is running, it is automatically
canceled. The ESCB for the deleted extension set is reclaimed
by RTEMS.
**NOTES:**
This directive will not cause the running task to be
preempted.
An extension set can be deleted by a task other than
the task which created the extension set.
.. COMMENT: COPYRIGHT (c) 1988-2015.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.
.. COMMENT: TODO:
.. COMMENT: + Ensure all macros are documented.
.. COMMENT: + Verify which structures may actually be defined by a user
.. COMMENT: + Add Go configuration.
.. COMMENT: Questions:
.. COMMENT: + Should there be examples of defining your own
.. COMMENT: Device Driver Table, Init task table, etc.?

from sys import path
from os.path import abspath
path.append(abspath('../common/'))

from waf import cmd_configure, cmd_build


def configure(ctx):
    cmd_configure(ctx)


def build(ctx):
    cmd_build(ctx)