This commit is contained in:
Chris Johns 2016-02-04 10:19:13 +13:00 committed by Amar Takhar
parent 8ef6ea80ef
commit c9aaf3145f
6 changed files with 1726 additions and 1683 deletions



@ -15,9 +15,9 @@ user-supplied device driver. In a multiprocessor configuration, this manager
also initializes the interprocessor communications layer. The directives
provided by the Initialization Manager are:

- rtems_initialize_executive_ - Initialize RTEMS

- rtems_shutdown_executive_ - Shutdown RTEMS

Background
==========
@ -104,7 +104,7 @@ The ``rtems_fatal_error_occurred`` directive will be invoked from
successfully.

A discussion of RTEMS actions when a fatal error occurs may be found in
:ref:`Announcing a Fatal Error`.

Operations
==========
@ -129,7 +129,7 @@ consists of
The ``rtems_initialize_executive`` directive uses a system initialization
linker set to initialize only those parts of the overall RTEMS feature set that
are necessary for a particular application. See :ref:`Linker Sets`. Each RTEMS
feature used by the application may optionally register an initialization
handler. The system initialization API is available via
``#include <rtems/sysinit.h>``.
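As a concrete illustration of the linker-set idea, the sketch below emulates
the registration-and-ordering behavior with an explicit table. The names here
(``sysinit_item``, ``run_sysinit``, the order values) are hypothetical
stand-ins; real code registers handlers with the ``RTEMS_SYSINIT_ITEM()``
macro from ``<rtems/sysinit.h>`` and lets the linker collect them, rather than
building an array by hand.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical stand-in for the sysinit linker set: each feature registers
 * a handler together with an ordering code, and system start-up walks the
 * registrations in ascending order. */
typedef struct {
    unsigned order;            /* e.g. BSP before drivers before file systems */
    void (*handler)(void);
} sysinit_item;

static int calls[3];
static int call_count;

static void init_bsp(void)     { calls[call_count++] = 0; }
static void init_drivers(void) { calls[call_count++] = 1; }
static void init_fs(void)      { calls[call_count++] = 2; }

static int by_order(const void *a, const void *b)
{
    return (int)((const sysinit_item *)a)->order
         - (int)((const sysinit_item *)b)->order;
}

/* Run all registered handlers in order, as rtems_initialize_executive()
 * conceptually does with its system initialization linker set. */
static void run_sysinit(sysinit_item *items, size_t n)
{
    qsort(items, n, sizeof(*items), by_order);
    for (size_t i = 0; i < n; ++i)
        items[i].handler();
}
```

Only the features an application actually registers contribute handlers, which
is how the linker set keeps unused parts of RTEMS out of the image.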
@ -184,7 +184,7 @@ initialization stack may be re-used for interrupt processing.
Many of the RTEMS actions during initialization are based upon the contents of
the Configuration Table. For more information regarding the format and contents
of this table, please refer to the chapter :ref:`Configuring a System`.

The final action in the initialization sequence is the initiation of
multitasking. When the scheduler and dispatcher are enabled, the highest
@ -205,6 +205,8 @@ This section details the Initialization Manager's directives. A subsection is
dedicated to each of this manager's directives and describes the calling
sequence, related constants, usage, and status codes.
.. _rtems_initialize_executive:
INITIALIZE_EXECUTIVE - Initialize RTEMS
---------------------------------------

.. index:: initialize RTEMS
@ -234,6 +236,8 @@ This directive should be called by ``boot_card`` only.
This directive *does not return* to the caller. Errors in the initialization
sequence are usually fatal and lead to a system termination.
.. _rtems_shutdown_executive:
SHUTDOWN_EXECUTIVE - Shutdown RTEMS
-----------------------------------

.. index:: shutdown RTEMS


@ -243,7 +243,9 @@ The development of responsive real-time applications requires an understanding
of how RTEMS maintains and supports time-related operations. The basic unit of
time in RTEMS is known as a tick. The frequency of clock ticks is completely
application dependent and determines the granularity and accuracy of all
interval and calendar time operations.
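The granularity point can be made concrete with a little arithmetic. The
helper below is a hypothetical stand-in for the conversion RTEMS performs
internally (the real API provides ``RTEMS_MILLISECONDS_TO_TICKS()`` and the
``microseconds_per_tick`` configuration parameter): intervals are measured in
whole ticks, so anything finer than one tick is lost.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper mirroring how an interval in milliseconds maps onto
 * clock ticks for a given tick length (microseconds per tick).  A request
 * shorter than one tick collapses to zero ticks. */
static uint32_t ms_to_ticks(uint32_t ms, uint32_t us_per_tick)
{
    return (ms * 1000u) / us_per_tick;
}
```

With a 10 millisecond tick, a 100 millisecond delay is exactly 10 ticks, while
a 5 millisecond delay rounds down to no delay at all, which is why the tick
frequency bounds the accuracy of every time-based service.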
.. index:: rtems_interval
By tracking time in units of ticks, RTEMS is capable of supporting interval
timing functions such as task delays, timeouts, timeslicing, the delayed


@ -1,3 +1,7 @@
.. COMMENT: COPYRIGHT (c) 1988-2008.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.
Multiprocessing Manager
#######################
@ -6,255 +10,233 @@ Multiprocessing Manager
Introduction
============
In multiprocessor real-time systems, new requirements, such as sharing data and
global resources between processors, are introduced. This requires an
efficient and reliable communications vehicle which allows all processors to
communicate with each other as necessary. In addition, the ramifications of
multiple processors affect each and every characteristic of a real-time system,
almost always making them more complicated.

RTEMS addresses these issues by providing simple and flexible real-time
multiprocessing capabilities. The executive easily lends itself to both
tightly-coupled and loosely-coupled configurations of the target system
hardware. In addition, RTEMS supports systems composed of both homogeneous and
heterogeneous mixtures of processors and target boards.

A major design goal of the RTEMS executive was to transcend the physical
boundaries of the target hardware configuration. This goal is achieved by
presenting the application software with a logical view of the target system
where the boundaries between processor nodes are transparent. As a result, the
application developer may designate objects such as tasks, queues, events,
signals, semaphores, and memory blocks as global objects. These global objects
may then be accessed by any task regardless of the physical location of the
object and the accessing task. RTEMS automatically determines that the object
being accessed resides on another processor and performs the actions required
to access the desired object. Simply stated, RTEMS allows the entire system,
both hardware and software, to be viewed logically as a single system.

The directives provided by the Multiprocessing Manager are:

- rtems_multiprocessing_announce_ - A multiprocessing communications packet has
  arrived
Background
==========

.. index:: multiprocessing topologies
RTEMS makes no assumptions regarding the connection media or topology of a
multiprocessor system. The tasks which compose a particular application can be
spread among as many processors as needed to satisfy the application's timing
requirements. The application tasks can interact using a subset of the RTEMS
directives as if they were on the same processor. These directives allow
application tasks to exchange data, communicate, and synchronize regardless of
which processor they reside upon.
The RTEMS multiprocessor execution model is multiple instruction streams with
multiple data streams (MIMD). This execution model has each of the processors
executing code independent of the other processors. Because of this
parallelism, the application designer can more easily guarantee deterministic
behavior.
By supporting heterogeneous environments, RTEMS allows the systems designer to
select the most efficient processor for each subsystem of the application.
Configuring RTEMS for a heterogeneous environment is no more difficult than for
a homogeneous one. In keeping with RTEMS philosophy of providing transparent
physical node boundaries, the minimal heterogeneous processing required is
isolated in the MPCI layer.
Nodes
-----

.. index:: nodes, definition
A processor in an RTEMS system is referred to as a node. Each node is assigned
a unique non-zero node number by the application designer. RTEMS assumes that
node numbers are assigned consecutively from one to the ``maximum_nodes``
configuration parameter. The node number, node, and the maximum number of
nodes, ``maximum_nodes``, in a system are found in the Multiprocessor
Configuration Table. The ``maximum_nodes`` field and the number of global
objects, ``maximum_global_objects``, are required to be the same on all nodes
in a system.
The node number is used by RTEMS to identify each node when performing remote
operations. Thus, the Multiprocessor Communications Interface Layer (MPCI)
must be able to route messages based on the node number.
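Because node numbers are consecutive and non-zero, an MPCI layer can route
with a simple array lookup. The sketch below uses hypothetical names
(``mpci_link``, ``route_for_node``, ``MAX_NODES``); a real MPCI layer would
map a node number onto whatever link descriptor its transport hardware uses.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical routing table: index 0 is unused because node numbers are
 * non-zero, and valid numbers run consecutively from 1 to MAX_NODES. */
#define MAX_NODES 4

typedef struct {
    int link_id;               /* stand-in for a hardware link descriptor */
} mpci_link;

static mpci_link links[MAX_NODES + 1];

/* Resolve a destination node number to its outgoing link, or NULL when the
 * node number is outside the configured range. */
static mpci_link *route_for_node(uint32_t node)
{
    if (node == 0 || node > MAX_NODES)
        return NULL;
    return &links[node];
}
```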
Global Objects
--------------

.. index:: global objects, definition
All RTEMS objects which are created with the GLOBAL attribute will be known on
all other nodes. Global objects can be referenced from any node in the system,
although certain directive specific restrictions (e.g. one cannot delete a
remote object) may apply. A task does not have to be global to perform
operations involving remote objects. The maximum number of global objects in
the system is user configurable and can be found in the
``maximum_global_objects`` field in the Multiprocessor Configuration Table.
The distribution of tasks to processors is performed during the application
design phase. Dynamic task relocation is not supported by RTEMS.
Global Object Table
-------------------

.. index:: global objects table
RTEMS maintains two tables containing object information on every node in a
multiprocessor system: a local object table and a global object table. The
local object table on each node is unique and contains information for all
objects created on this node whether those objects are local or global. The
global object table contains information regarding all global objects in the
system and, consequently, is the same on every node.
Since each node must maintain an identical copy of the global object table, the
maximum number of entries in each copy of the table must be the same. The
maximum number of entries in each copy is determined by the
``maximum_global_objects`` parameter in the Multiprocessor Configuration Table.
This parameter, as well as the ``maximum_nodes`` parameter, is required to be
the same on all nodes. To maintain consistency among the table copies, every
node in the system must be informed of the creation or deletion of a global
object.
Remote Operations
-----------------

.. index:: MPCI and remote operations
When an application performs an operation on a remote global object, RTEMS must
generate a Remote Request (RQ) message and send it to the appropriate node.
After completing the requested operation, the remote node will build a Remote
Response (RR) message and send it to the originating node. Messages generated
as a side-effect of a directive (such as deleting a global task) are known as
Remote Processes (RP) and do not require the receiving node to respond.
Other than taking slightly longer to execute directives on remote objects, the
application is unaware of the location of the objects it acts upon. The exact
amount of overhead required for a remote operation is dependent on the media
connecting the nodes and, to a lesser degree, on the efficiency of the
user-provided MPCI routines.
The following shows the typical transaction sequence during a remote
application:
#. The application issues a directive accessing a remote global object.

#. RTEMS determines the node on which the object resides.

#. RTEMS calls the user-provided MPCI routine ``GET_PACKET`` to obtain a packet
   in which to build an RQ message.

#. After building a message packet, RTEMS calls the user-provided MPCI routine
   ``SEND_PACKET`` to transmit the packet to the node on which the object
   resides (referred to as the destination node).

#. The calling task is blocked until the RR message arrives, and control of the
   processor is transferred to another task.

#. The MPCI layer on the destination node senses the arrival of a packet
   (commonly in an ISR), and calls the ``rtems_multiprocessing_announce``
   directive. This directive readies the Multiprocessing Server.

#. The Multiprocessing Server calls the user-provided MPCI routine
   ``RECEIVE_PACKET``, performs the requested operation, builds an RR message,
   and returns it to the originating node.

#. The MPCI layer on the originating node senses the arrival of a packet
   (typically via an interrupt), and calls the RTEMS
   ``rtems_multiprocessing_announce`` directive. This directive readies the
   Multiprocessing Server.

#. The Multiprocessing Server calls the user-provided MPCI routine
   ``RECEIVE_PACKET``, readies the original requesting task, and blocks until
   another packet arrives. Control is transferred to the original task which
   then completes processing of the directive.
If an uncorrectable error occurs in the user-provided MPCI layer, the fatal
error handler should be invoked. RTEMS assumes the reliable transmission and
reception of messages by the MPCI and makes no attempt to detect or correct
errors.
Proxies
-------

.. index:: proxy, definition
A proxy is an RTEMS data structure which resides on a remote node and is used
to represent a task which must block as part of a remote operation. This action
can occur as part of the ``rtems_semaphore_obtain`` and
``rtems_message_queue_receive`` directives. If the object were local, the
task's control block would be available for modification to indicate it was
blocking on a message queue or semaphore. However, the task's control block
resides only on the same node as the task. As a result, the remote node must
allocate a proxy to represent the task until it can be readied.
The maximum number of proxies is defined in the Multiprocessor Configuration
Table. Each node in a multiprocessor system may require a different number of
proxies to be configured. The distribution of proxy control blocks is
application dependent and is different from the distribution of tasks.
Multiprocessor Configuration Table
----------------------------------
The Multiprocessor Configuration Table contains information needed by RTEMS
when used in a multiprocessor system. This table is discussed in detail in the
section Multiprocessor Configuration Table of the Configuring a System chapter.
Multiprocessor Communications Interface Layer
=============================================
The Multiprocessor Communications Interface Layer (MPCI) is a set of
user-provided procedures which enable the nodes in a multiprocessor system to
communicate with one another. These routines are invoked by RTEMS at various
times in the preparation and processing of remote requests. Interrupts are
enabled when an MPCI procedure is invoked. It is assumed that if the execution
mode and/or interrupt level are altered by the MPCI layer, they will be
restored prior to returning to RTEMS.
.. index:: MPCI, definition

The MPCI layer is responsible for managing a pool of buffers called packets and
for sending these packets between system nodes. Packet buffers contain the
messages sent between the nodes. Typically, the MPCI layer will encapsulate
the packet within an envelope which contains the information needed by the MPCI
layer. The number of packets available is dependent on the MPCI layer
implementation.
.. index:: MPCI entry points

The entry points to the routines in the user's MPCI layer should be placed in
the Multiprocessor Communications Interface Table. The user must provide entry
points for each of the following table entries in a multiprocessor system:

.. list-table::
   :class: rtems-table

   * - initialization
     - initialize the MPCI
   * - get_packet
     - obtain a packet buffer
   * - return_packet
     - return a packet buffer
   * - send_packet
     - send a packet to another node
   * - receive_packet
     - called to get an arrived packet
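Wired together, the five entries might look like the following sketch. The
type and function names here are stand-ins (the actual table type,
``rtems_mpci_table``, is defined by the RTEMS headers and also carries fields
such as a default timeout and maximum packet size); the point is simply that
the user supplies one routine per table entry.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in declarations mirroring the five MPCI table entries listed
 * above.  In real code rtems_mpci_entry, rtems_packet_prefix, and
 * rtems_mpci_table come from the RTEMS headers. */
typedef struct packet_prefix packet_prefix;

typedef struct {
    void (*initialization)(void);
    void (*get_packet)(packet_prefix **);
    void (*return_packet)(packet_prefix *);
    void (*send_packet)(unsigned node, packet_prefix *);
    void (*receive_packet)(packet_prefix **);
} mpci_table;

/* Trivial user-provided entry points (bodies elided). */
static void my_init(void) {}
static void my_get(packet_prefix **p)  { *p = NULL; }
static void my_return(packet_prefix *p) { (void)p; }
static void my_send(unsigned node, packet_prefix *p) { (void)node; (void)p; }
static void my_receive(packet_prefix **p) { *p = NULL; }

/* All five entries must be provided in a multiprocessor system. */
static const mpci_table user_mpci = {
    my_init, my_get, my_return, my_send, my_receive
};
```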
A packet is sent by RTEMS in each of the following situations:
@ -270,153 +252,144 @@ A packet is sent by RTEMS in each of the following situations:
- during system initialization to check for system consistency.
If the target hardware supports it, the arrival of a packet at a node may
generate an interrupt. Otherwise, the real-time clock ISR can check for the
arrival of a packet. In any case, the ``rtems_multiprocessing_announce``
directive must be called to announce the arrival of a packet. After exiting
the ISR, control will be passed to the Multiprocessing Server to process the
packet. The Multiprocessing Server will call the ``get_packet`` entry to
obtain a packet buffer and the ``receive_packet`` entry to copy the message
into the buffer obtained.
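The division of labor described above, where the ISR only announces the
arrival and the Multiprocessing Server does the copying outside interrupt
context, can be modeled with a toy flag. All names below are hypothetical; in
real code the ISR would call ``rtems_multiprocessing_announce`` rather than
touch a flag, and the server would use the ``get_packet`` and
``receive_packet`` entries.

```c
#include <assert.h>
#include <stdbool.h>

static bool packet_pending;       /* stand-in for the announce mechanism */
static int packets_processed;

/* Hardware packet-arrival handler: announce, then exit the ISR quickly. */
static void packet_isr(void)
{
    packet_pending = true;
}

/* Multiprocessing Server: drain announced packets outside the ISR. */
static void multiprocessing_server(void)
{
    while (packet_pending) {
        packet_pending = false;
        ++packets_processed;      /* get_packet + receive_packet would go here */
    }
}
```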
INITIALIZATION
--------------
The INITIALIZATION component of the user-provided MPCI layer is called as part
of the ``rtems_initialize_executive`` directive to initialize the MPCI layer
and associated hardware. It is invoked immediately after all of the device
drivers have been initialized. This component should adhere to the following
prototype:

.. index:: rtems_mpci_entry
.. code:: c

    rtems_mpci_entry user_mpci_initialization(
        rtems_configuration_table *configuration
    );
where ``configuration`` is the address of the user's Configuration Table.
Operations on global objects cannot be performed until this component is
invoked. The INITIALIZATION component is invoked only once in the life of any
system. If the MPCI layer cannot be successfully initialized, the fatal error
manager should be invoked by this routine.
One of the primary functions of the MPCI layer is to provide the executive with
packet buffers. The INITIALIZATION routine must create and initialize a pool
of packet buffers. There must be enough packet buffers so RTEMS can obtain one
whenever needed.
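A common way to satisfy the "always able to obtain one" requirement is a
preallocated free list, sketched below with hypothetical names
(``packet``, ``pool_get``, ``pool_return``, ``POOL_SIZE``). Sizing the pool
for the worst-case number of outstanding packets is the application designer's
responsibility; ``pool_get`` returning NULL would correspond to a fatal error
in a real MPCI layer.

```c
#include <assert.h>
#include <stddef.h>

#define POOL_SIZE 8

/* Fixed-size packet buffers chained on a singly linked free list.  The
 * get/return operations correspond to the GET_PACKET and RETURN_PACKET
 * entries of the MPCI table. */
typedef struct packet {
    struct packet *next;
    unsigned char payload[64];
} packet;

static packet pool[POOL_SIZE];
static packet *free_list;

/* INITIALIZATION: build the free list from the preallocated buffers. */
static void pool_init(void)
{
    free_list = NULL;
    for (size_t i = 0; i < POOL_SIZE; ++i) {
        pool[i].next = free_list;
        free_list = &pool[i];
    }
}

static packet *pool_get(void)
{
    packet *p = free_list;
    if (p != NULL)
        free_list = p->next;
    return p;
}

static void pool_return(packet *p)
{
    p->next = free_list;
    free_list = p;
}
```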
GET_PACKET
----------
The GET_PACKET component of the user-provided MPCI layer is called when RTEMS
must obtain a packet buffer to send or broadcast a message. This component
should adhere to the following prototype:
.. code:: c

    rtems_mpci_entry user_mpci_get_packet(
        rtems_packet_prefix **packet
    );
where ``packet`` is the address of a pointer to a packet. This routine always
succeeds and, upon return, ``packet`` will contain the address of a packet.
If, for any reason, a packet cannot be successfully obtained, then the fatal
error manager should be invoked.
RTEMS has been optimized to avoid the need for obtaining a packet each time a
message is sent or broadcast. For example, RTEMS sends response messages (RR)
back to the originator in the same packet in which the request message (RQ)
arrived.
RETURN_PACKET
-------------
The RETURN_PACKET component of the user-provided MPCI layer is called when
RTEMS needs to release a packet to the free packet buffer pool. This component
should adhere to the following prototype:
.. code:: c

    rtems_mpci_entry user_mpci_return_packet(
        rtems_packet_prefix *packet
    );
where packet is the address of a packet. If the packet cannot be successfully
returned, the fatal error manager should be invoked.
RECEIVE_PACKET
--------------
The RECEIVE_PACKET component of the user-provided MPCI layer is called when
RTEMS needs to obtain a packet which has previously arrived. This component
should adhere to the following prototype:
.. code:: c

    rtems_mpci_entry user_mpci_receive_packet(
        rtems_packet_prefix **packet
    );
where packet is a pointer to the address of a packet in which to place the
message from another node. If a message is available, then packet will contain
the address of the message from another node. If no messages are available,
packet should contain NULL upon return.
SEND_PACKET
-----------
The SEND_PACKET component of the user-provided MPCI layer is called when RTEMS
needs to send a packet containing a message to another node. This component
should adhere to the following prototype:
.. code:: c

    rtems_mpci_entry user_mpci_send_packet(
        uint32_t             node,
        rtems_packet_prefix **packet
    );
where node is the node number of the destination and packet is the address of
a packet containing a message. If the packet cannot be successfully sent, the
fatal error manager should be invoked.
If node is set to zero, the packet is to be broadcast to all other nodes in
the system. Although some MPCI layers will be built upon hardware which
supports a broadcast mechanism, others may be required to generate a copy of
the packet for each node in the system.
.. COMMENT: XXX packet_prefix structure needs to be defined in this document
Many MPCI layers use the ``packet_length`` field of the ``rtems_packet_prefix``
portion of the packet to avoid sending unnecessary data. This is especially
useful if the media connecting the nodes is relatively slow.
The ``to_convert`` field of the ``rtems_packet_prefix`` portion of the packet
indicates how much of the packet in 32-bit units may require conversion in a
heterogeneous system.
Supporting Heterogeneous Environments
-------------------------------------
.. index:: heterogeneous multiprocessing
Developing an MPCI layer for a heterogeneous system requires a thorough
understanding of the differences between the processors which comprise the
system. One difficult problem is the varying data representation schemes used
by different processor types. The most pervasive data representation problem
is the order of the bytes which compose a data entity. Processors which place
the least significant byte at the smallest address are classified as little
endian processors. Little endian byte-ordering is shown below:
.. code:: c

    +---------------+----------------+---------------+----------------+
    |               |                |               |                |
    |    Byte 3     |     Byte 2     |     Byte 1    |     Byte 0     |
    |               |                |               |                |
    +---------------+----------------+---------------+----------------+
Conversely, processors which place the most significant byte at the smallest
address are classified as big endian processors. Big endian byte-ordering is
shown below:
.. code:: c

    +---------------+----------------+---------------+----------------+
    |               |                |               |                |
    |    Byte 0     |     Byte 1     |     Byte 2    |     Byte 3     |
    |               |                |               |                |
    +---------------+----------------+---------------+----------------+
Unfortunately, sharing a data structure between big endian and little endian
processors requires translation into a common endian format. An application
designer typically chooses the common endian format to minimize conversion
overhead.
Another issue in the design of shared data structures is the alignment of data
structure elements. Alignment is both processor and compiler implementation
dependent. For example, some processors allow data elements to begin on any
address boundary, while others impose restrictions. Common restrictions are
that data elements must begin on either an even address or on a long word
boundary. Violation of these restrictions may cause an exception or impose a
performance penalty.
Other issues which commonly impact the design of shared data structures
include the representation of floating point numbers, bit fields, decimal
data, and character strings. In addition, the representation method for
negative integers could be one's or two's complement. These factors combine to
increase the complexity of designing and manipulating data structures shared
between processors.
RTEMS addressed these issues in the design of the packets used to communicate
between nodes. The RTEMS packet format is designed to allow the MPCI layer to
perform all necessary conversion without burdening the developer with the
details of the RTEMS packet format. As a result, the MPCI layer must be aware
of the following:
- All packets must begin on a four byte boundary.
- Packets are composed of both RTEMS and application data. All RTEMS data is
  treated as 32-bit unsigned quantities and is in the first ``to_convert``
  32-bit quantities of the packet. The ``to_convert`` field is part of the
  ``rtems_packet_prefix`` portion of the packet.
- The RTEMS data component of the packet must be in native endian format.
  Endian conversion may be performed by either the sending or receiving MPCI
  layer.
- RTEMS makes no assumptions regarding the application data component of the
  packet.
Operations
==========
Announcing a Packet
-------------------
The ``rtems_multiprocessing_announce`` directive is called by the MPCI layer
to inform RTEMS that a packet has arrived from another node. This directive
can be called from an interrupt service routine or from within a polling
routine.
Directives
==========
This section details the additional directives required to support RTEMS in a
multiprocessor configuration. A subsection is dedicated to each of this
manager's directives and describes the calling sequence, related constants,
usage, and status codes.
.. _rtems_multiprocessing_announce:
MULTIPROCESSING_ANNOUNCE - Announce the arrival of a packet
-----------------------------------------------------------
**CALLING SEQUENCE:**

.. code:: c

    void rtems_multiprocessing_announce( void );

**DIRECTIVE STATUS CODES:**

NONE
**DESCRIPTION:**
This directive informs RTEMS that a multiprocessing communications packet has
arrived from another node. This directive is called by the user-provided MPCI,
and is only used in multiprocessor configurations.
**NOTES:**
This directive is typically called from an ISR.
This directive will almost certainly cause the calling task to be preempted.
This directive does not generate activity on remote nodes.
.. COMMENT: COPYRIGHT (c) 2014.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.
.. COMMENT: COPYRIGHT (c) 1988-2008.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.
PCI Library
###########
Introduction
============
The Peripheral Component Interconnect (PCI) bus is a very common computer bus
architecture that is found in almost every PC today. The PCI bus is normally
located at the motherboard where some PCI devices are soldered directly onto
the PCB and expansion slots allow the user to add custom devices easily. There
is a wide range of PCI hardware available implementing all sorts of interfaces
and functions.
This section describes the PCI Library available in RTEMS used to access the
PCI bus in a portable way across computer architectures supported by RTEMS.
The PCI Library aims to be compatible with PCI 2.3 with a couple of
limitations, for example there is no support for hot-plugging, 64-bit memory
space and cardbus bridges.
In order to support different architectures and with small foot-print embedded
systems in mind the PCI Library offers four different configuration options
listed below. It is selected during compile time by defining the appropriate
macros in confdefs.h. It is also possible to enable PCI_LIB_NONE (No
Configuration) which can be used for debugging PCI access functions.
- Auto Configuration (Plug & Play)
- Read Configuration (read BIOS or boot loader configuration)
Background
==========
The PCI bus is constructed in a way where on-board devices and devices in
expansion slots can be automatically found (probed) and configured using Plug
& Play completely implemented in software. The bus is set up once during boot
up. The Plug & Play information can be read and written from PCI configuration
space. A PCI device is identified in configuration space by a unique bus, slot
and function number. Each PCI slot can have up to 8 functions and interface to
another PCI sub-bus by implementing a PCI-to-PCI bridge according to the PCI
Bridge Architecture specification.
Using the unique [bus:slot:func] any device can be configured regardless of
how PCI is currently set up as long as all PCI buses are enumerated correctly.
The enumeration is done during probing; all bridges are given a bus number in
order for the bridges to respond to accesses from both directions. The PCI
library can assign address ranges to which a PCI device should respond using
the Plug & Play technique or a static user defined configuration. After the
configuration has been performed the PCI device drivers can find devices by
the read-only PCI Class type, Vendor ID and Device ID information found in
configuration space for each device.
In some systems there is a boot loader or BIOS which has already configured
all PCI devices, but on embedded targets it is quite common that there is no
If the target is not a host, but a peripheral, configuration space can not be
accessed; the peripheral is set up by the host during start up. In complex
embedded PCI systems the peripheral may need to access other PCI boards than
the host. In such systems a custom (static) configuration of both the host and
peripheral may be a convenient solution.
The PCI bus defines four interrupt signals INTA#..INTD#. The interrupt signals
must be mapped into a system interrupt/vector; it is up to the BSP or host
driver to know the mapping. However, the BIOS or boot loader may use the 8-bit
read/write "Interrupt Line" register to pass the knowledge along to the OS.
The PCI standard defines and recommends that the backplane route the interrupt
lines in a systematic way, however the standard imposes no such requirement.
PCI Configuration
-----------------
During start up the PCI bus must be configured in order for host and
peripherals to access one another using Memory or I/O accesses and that
interrupts are properly handled. Three different spaces are defined and mapped
separately:
# I/O space (IO)
# memory space (MEMIO)
# prefetchable Memory space (MEM)
Regions of the same type (I/O or Memory) may not overlap which is guaranteed
by the software. MEM regions may be mapped into MEMIO regions, but MEMIO
regions can not be mapped into MEM, for that could lead to prefetching of
registers. The interrupt pin which a board is driving can be read out from PCI
configuration space, however it is up to software to know how interrupt
signals are routed between PCI-to-PCI bridges and how PCI INT[A..D]# pins are
mapped to system IRQ. In systems where previous software (boot loader or BIOS)
has already set this up, the configuration is overwritten or simply read out.
In order to support different configuration methods the following
configuration libraries are selectable by the user:
A host driver can be made to support all three configuration methods, or any
combination. It may be defined by the BSP which approach is used.
The configuration software is called from the PCI driver
(``pci_config_init()``).
Regardless of configuration method a PCI device tree is created in RAM during
initialization; the tree can be accessed to find devices and resources without
RTEMS Configuration selection
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The active configuration method can be selected at compile time in the same
way as other project parameters by including rtems/confdefs.h and setting
- ``CONFIGURE_INIT``

- ``RTEMS_PCI_CONFIG_LIB``

- ``CONFIGURE_PCI_LIB`` = PCI_LIB_(AUTO,STATIC,READ,PERIPHERAL)
See the RTEMS configuration section for how to set up the PCI library.
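Putting the three settings together, an application configuration file might
contain a fragment like the following. This is a sketch only; all other
``CONFIGURE_*`` settings the application needs are omitted.

```c
/* Sketch of an application configuration selecting the PCI auto
 * configuration library.  The macro names are taken from the list
 * above; other CONFIGURE_* settings are omitted for brevity. */
#define CONFIGURE_INIT
#define RTEMS_PCI_CONFIG_LIB
#define CONFIGURE_PCI_LIB PCI_LIB_AUTO

#include <rtems/confdefs.h>
```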
Auto Configuration
~~~~~~~~~~~~~~~~~~
The auto configuration software enumerates PCI buses and initializes all PCI
devices found using Plug & Play. The auto configuration software requires that
a configuration setup has been registered by the driver or BSP in order to
setup the I/O and Memory regions at the correct address ranges. PCI interrupt
pins can optionally be routed over PCI-to-PCI bridges and mapped to a system
interrupt number. BAR resources are sorted by size and required alignment;
unused "dead" space may be created when PCI bridges are present because the
PCI bridge window size does not equal the alignment. To cope with that,
resources are reordered to fit smaller BARs into the dead space to minimize
the PCI space required. If a BAR or ROM register can not be allocated a PCI
address region (due to too few resources available) the register will be given
the value of ``pci_invalid_address`` which defaults to 0.
The auto configuration routines support:
- memory space (MEMIO)
- prefetchable memory space (MEM), if not present MEM will be mapped into
  MEMIO
- multiple PCI buses - PCI-to-PCI bridges
Static Configuration
~~~~~~~~~~~~~~~~~~~~
To support custom configurations and small-footprint PCI systems, the user may
provide the PCI device tree which contains the current configuration. The PCI
buses are enumerated and all resources are written to PCI devices during
initialization. When this approach is selected PCI boards must be located at
the same slots every time and devices can not be removed or added; Plug & Play
is not performed. Boards of the same type may of course be exchanged.
Peripheral Configuration
~~~~~~~~~~~~~~~~~~~~~~~~
On systems where a peripheral PCI device needs to access other PCI devices
than the host the peripheral configuration approach may be handy. Most PCI
devices answer the PCI host's requests and start DMA accesses into the host's
memory, however in some complex systems PCI devices may want to access other
devices on the same bus or at another PCI bus.
A PCI peripheral is not allowed to do PCI configuration cycles, which means
that it must either rely on the host to give it the addresses it needs, or
that the addresses are predefined.
This configuration approach is very similar to the static option, however the
configuration is never written to the PCI bus, instead it is only used for
drivers
PCI Access
----------
The PCI access routines are low-level routines provided for drivers,
configuration software, etc. in order to access different regions in a way not
dependent upon the host driver, BSP or platform.
- PCI configuration space
Some non-standard hardware may also define the PCI bus big-endian, for example
the LEON2 AT697 PCI host bridge and some LEON3 systems may be configured that
way. It is up to the BSP to set the appropriate PCI endianness at compile time
(``BSP_PCI_BIG_ENDIAN``) in order for inline macros to be correctly defined.
Another possibility is to use the function pointers defined by the access
layer to implement drivers that support "run-time endianness detection".
Configuration space
~~~~~~~~~~~~~~~~~~~
Configuration space is accessed using the routines listed below. The
``pci_dev_t`` type is used to specify a specific PCI bus, device and function.
It is up to the host driver or BSP to create a valid access to the requested
PCI slot. Requests made to slots that are not supported by hardware should
result in ``PCISTS_MSTABRT`` and/or data must be ignored (writes) or
``0xFFFFFFFF`` is always returned (reads).
.. code:: c

    /* Configuration Space Access Read Routines */
    extern int pci_cfg_r8(pci_dev_t dev, int ofs, uint8_t *data);
    extern int pci_cfg_r16(pci_dev_t dev, int ofs, uint16_t *data);
    extern int pci_cfg_r32(pci_dev_t dev, int ofs, uint32_t *data);

    /* Configuration Space Access Write Routines */
    extern int pci_cfg_w8(pci_dev_t dev, int ofs, uint8_t data);
    extern int pci_cfg_w16(pci_dev_t dev, int ofs, uint16_t data);
    extern int pci_cfg_w32(pci_dev_t dev, int ofs, uint32_t data);
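As an illustration, a driver probe might read the 16-bit vendor and device IDs
from configuration space offsets ``0x00`` and ``0x02``. The sketch below is
host-runnable only because it mocks ``pci_cfg_r16`` with a small in-memory
configuration space; in a real system the function is provided by the PCI host
driver, and ``pci_probe_ids`` and the ID values are hypothetical names for
illustration.

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t pci_dev_t; /* mock: the real type comes from the PCI library */

/* Mock configuration space of one device: vendor 0x10EE, device 0x0505,
 * stored little-endian as on a real PCI bus. */
static const uint8_t cfg_space[64] = { 0xEE, 0x10, 0x05, 0x05 };

/* Mock of the host driver's pci_cfg_r16(); 0 return means access succeeded */
static int pci_cfg_r16(pci_dev_t dev, int ofs, uint16_t *data)
{
    (void)dev;
    if (ofs < 0 || ofs + 2 > (int)sizeof(cfg_space) || (ofs & 1))
        return -1;                /* misaligned or out-of-range access */
    *data = (uint16_t)(cfg_space[ofs] | (cfg_space[ofs + 1] << 8));
    return 0;
}

/* Probe helper: non-zero if the device matches the given vendor/device IDs */
static int pci_probe_ids(pci_dev_t dev, uint16_t vendor, uint16_t device)
{
    uint16_t vid, did;
    if (pci_cfg_r16(dev, 0x00, &vid) != 0 || pci_cfg_r16(dev, 0x02, &did) != 0)
        return 0;
    return vid == vendor && did == device;
}
```

With a real host driver the same probe would be run against each bus, device
and function discovered during bus enumeration.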
I/O space
~~~~~~~~~

The BSP or driver provides special routines in order to access I/O space. Some
architectures have a special instruction for accessing I/O space, others have
it mapped into a "PCI I/O window" in the standard address space accessed by the
CPU. The window size may vary and must be taken into consideration by the host
driver. The routines below must be used to access I/O space. The address given
to the functions is not the PCI I/O address; the caller must have translated
PCI I/O addresses (available in the PCI BARs) into a BSP or host driver custom
address, see `Access functions`_ for how addresses are translated.
.. code:: c

    /* Read a register over PCI I/O Space */
    extern uint8_t pci_io_r8(uint32_t adr);
    extern uint16_t pci_io_r16(uint32_t adr);
    extern uint32_t pci_io_r32(uint32_t adr);

    /* Write a register over PCI I/O Space */
    extern void pci_io_w8(uint32_t adr, uint8_t data);
    extern void pci_io_w16(uint32_t adr, uint16_t data);
    extern void pci_io_w32(uint32_t adr, uint32_t data);
memory space. This leads to register content being swapped, which must be
swapped back. The routines below make it possible to access registers over PCI
memory space in a portable way on different architectures; the BSP or
architecture must provide the necessary functions in order to implement this.
.. code:: c

    static inline uint16_t pci_ld_le16(volatile uint16_t *addr);
    static inline void pci_st_le16(volatile uint16_t *addr, uint16_t val);
    static inline uint32_t pci_ld_le32(volatile uint32_t *addr);
    static inline void pci_st_le32(volatile uint32_t *addr, uint32_t val);
    static inline uint16_t pci_ld_be16(volatile uint16_t *addr);
    static inline void pci_st_be16(volatile uint16_t *addr, uint16_t val);
    static inline uint32_t pci_ld_be32(volatile uint32_t *addr);
    static inline void pci_st_be32(volatile uint32_t *addr, uint32_t val);
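On an architecture without hardware byte-swapping loads, such helpers can be
implemented with plain byte accesses that work regardless of host endianness.
The following hedged sketch operates on a byte buffer rather than a device
register (so it runs on a host, and the ``volatile`` qualifier of the real API
is omitted); the ``my_*`` names are hypothetical stand-ins for the
``pci_ld_le16``-style functions a BSP would provide.

```c
#include <assert.h>
#include <stdint.h>

/* Little-endian 16-bit load/store, independent of host endianness */
static inline uint16_t my_ld_le16(const uint8_t *addr)
{
    return (uint16_t)(addr[0] | ((uint16_t)addr[1] << 8));
}

static inline void my_st_le16(uint8_t *addr, uint16_t val)
{
    addr[0] = (uint8_t)(val & 0xFF);
    addr[1] = (uint8_t)(val >> 8);
}

/* Big-endian 32-bit load/store */
static inline uint32_t my_ld_be32(const uint8_t *addr)
{
    return ((uint32_t)addr[0] << 24) | ((uint32_t)addr[1] << 16) |
           ((uint32_t)addr[2] << 8)  |  (uint32_t)addr[3];
}

static inline void my_st_be32(uint8_t *addr, uint32_t val)
{
    addr[0] = (uint8_t)(val >> 24);
    addr[1] = (uint8_t)(val >> 16);
    addr[2] = (uint8_t)(val >> 8);
    addr[3] = (uint8_t)val;
}
```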
In order to support non-standard big-endian PCI buses the above ``pci_*``
functions are required; note that ``pci_ld_le16 != ld_le16`` on big-endian PCI
buses.
Access functions
~~~~~~~~~~~~~~~~

The PCI Access Library can provide device drivers with function pointers
executing the above Configuration, I/O and Memory space accesses. The functions
have the same arguments and return values as the above functions.

The ``pci_access_func()`` function defined below can be used to get a function
pointer of a specific access type.
.. code:: c

    /* Get Read/Write function for accessing a register over PCI Memory Space
     * (non-inline functions).
     *
     * Arguments
     *   wr      0(Read), 1(Write)
     *   size    1(Byte), 2(Word), 4(Double Word)
     *   func    Where function pointer will be stored
     *   endian  PCI_LITTLE_ENDIAN or PCI_BIG_ENDIAN
     *   type    1(I/O), 3(REG over MEM), 4(CFG)
     *
     * Return
     *   0       Found function
     *   others  No such function defined by host driver or BSP
     */
    int pci_access_func(int wr, int size, void **func, int endian, int type);
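A driver would typically request the access function once and cache the
returned pointer. The sketch below mocks a minimal registry behind
``pci_access_func()`` (only one access type is registered, backed by a host
buffer instead of a device) so the lookup-and-call pattern can be shown
self-contained; the constants and the mock body are assumptions, not the real
library internals.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PCI_LITTLE_ENDIAN 0  /* mock values; the real ones come from <pci.h> */
#define PCI_BIG_ENDIAN    1

typedef uint16_t (*pci_mem_r16_t)(uint32_t adr);

/* Mock 16-bit little-endian memory-space read backed by a host buffer */
static uint8_t mem_window[8] = { 0xCD, 0xAB };
static uint16_t mock_mem_r16(uint32_t adr)
{
    return (uint16_t)(mem_window[adr] | (mem_window[adr + 1] << 8));
}

/* Mock of pci_access_func(): only "read, 2 bytes, LE, REG over MEM" exists */
static int pci_access_func(int wr, int size, void **func, int endian, int type)
{
    if (wr == 0 && size == 2 && endian == PCI_LITTLE_ENDIAN && type == 3) {
        *func = (void *)mock_mem_r16;
        return 0;               /* found the requested function */
    }
    return -1;                  /* no such function defined by host driver/BSP */
}
```

The driver then casts the stored pointer back to the matching function-pointer
type before calling it.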
PCI device drivers may be written to support run-time detection of endianness;
this is mostly for debugging or for development systems. When the product is
finally deployed, macros switch to using the inline functions instead, which
have been configured for the correct endianness.
PCI address translation
~~~~~~~~~~~~~~~~~~~~~~~

using configuration space routines or in the device tree, the addresses given
are PCI addresses. The functions below can be used to translate PCI addresses
into CPU accessible addresses or vice versa; the translation may be different
for different PCI spaces/regions.
.. code:: c

    /* Translate PCI address into CPU accessible address */
    static inline int pci_pci2cpu(uint32_t *address, int type);

    /* Translate CPU accessible address into PCI address (for DMA) */
    static inline int pci_cpu2pci(uint32_t *address, int type);
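On many systems the translation is a fixed offset between a PCI window and the
CPU address map. The sketch below models exactly that case with made-up window
addresses and simplified ``my_*`` helpers (the ``type`` argument of the real
API is dropped, since only one memory window is modelled); it is an
illustration of the idea, not a real BSP's mapping.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical fixed-offset PCI memory window: PCI address 0x40000000
 * appears at CPU address 0xC0000000 (illustration only). */
#define PCI_MEM_BASE 0x40000000u   /* start of window on the PCI bus */
#define PCI_MEM_SIZE 0x10000000u
#define CPU_MEM_BASE 0xC0000000u   /* same window as seen by the CPU */

/* Translate PCI address into CPU accessible address; returns 0 on success */
static int my_pci2cpu(uint32_t *address)
{
    if (*address < PCI_MEM_BASE || *address - PCI_MEM_BASE >= PCI_MEM_SIZE)
        return -1;                  /* outside the translated window */
    *address = *address - PCI_MEM_BASE + CPU_MEM_BASE;
    return 0;
}

/* Translate CPU accessible address into PCI address (e.g. for DMA setup) */
static int my_cpu2pci(uint32_t *address)
{
    if (*address < CPU_MEM_BASE || *address - CPU_MEM_BASE >= PCI_MEM_SIZE)
        return -1;
    *address = *address - CPU_MEM_BASE + PCI_MEM_BASE;
    return 0;
}
```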
PCI Interrupt
-------------

The PCI specification defines four different interrupt lines, INTA#..INTD#.
The interrupts are low-level sensitive, which makes it possible to support
multiple interrupt sources on the same interrupt line. Since the lines are
level sensitive, the interrupt sources must be acknowledged before clearing the
interrupt controller, or the interrupt controller must be masked. The BSP must
provide a routine for clearing/acknowledging the interrupt controller; it is up
to the interrupt service routine to acknowledge the interrupt source.
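Shared level-sensitive dispatch can be sketched as a loop over the registered
handlers, each of which checks and acknowledges its own device. All names in
the sketch are hypothetical, and the "devices" are simulated with flags so the
code runs on a host; a real ISR would read device interrupt status registers.

```c
#include <assert.h>

#define MAX_SHARED 4

/* One entry per device sharing the interrupt line. A handler returns
 * non-zero if its device was asserting the interrupt (and acknowledges it). */
typedef int (*isr_t)(void *arg);
static isr_t handlers[MAX_SHARED];
static void *handler_args[MAX_SHARED];
static int handler_count;

static void shared_isr_register(isr_t isr, void *arg)
{
    handlers[handler_count] = isr;
    handler_args[handler_count] = arg;
    handler_count++;
}

/* Called while the line is asserted: every handler gets a chance to
 * acknowledge its own device before the interrupt controller is cleared. */
static int shared_isr_dispatch(void)
{
    int i, handled = 0;
    for (i = 0; i < handler_count; i++)
        handled += handlers[i](handler_args[i]);
    /* here the BSP would clear/acknowledge the interrupt controller */
    return handled;
}

/* Mock device: the "pending" flag plays the role of the interrupt status */
static int mock_dev_isr(void *arg)
{
    int *pending = arg;
    if (*pending) {
        *pending = 0;   /* acknowledge the interrupt source */
        return 1;
    }
    return 0;
}
```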
The PCI Library relies on the BSP for implementing shared interrupt handling
through the ``BSP_PCI_shared_interrupt_*`` functions/macros, they must be defined
PCI Shell command
-----------------

The RTEMS shell has a PCI command ``pci`` which makes it possible to read/write
configuration space, print the current PCI configuration and print out a
configuration C-file for the static or peripheral library.
.. COMMENT: COPYRIGHT (c) 2011,2015
.. COMMENT: Aeroflex Gaisler AB
.. COMMENT: All rights reserved.
Symmetric Multiprocessing Services
##################################

Introduction
============

The Symmetric Multiprocessing (SMP) support of RTEMS 4.11.0 and later is
available on
- ARM,
- SPARC.
It must be explicitly enabled via the ``--enable-smp`` configure command line
option. To enable SMP in the application configuration see `Enable SMP Support
for Applications`_. The default scheduler for SMP applications supports up to
32 processors and is a global fixed priority scheduler, see also
:ref:`Configuring Clustered Schedulers`. For example applications see
:file:`testsuites/smptests`.
.. warning::

    The SMP support in RTEMS is work in progress. Before you start using this
    RTEMS version for SMP ask on the RTEMS mailing list.
This chapter describes the services related to Symmetric Multiprocessing
provided by RTEMS.

The application level services currently provided are:
- rtems_get_processor_count_ - Get processor count

- rtems_get_current_processor_ - Get current processor index

- rtems_scheduler_ident_ - Get ID of a scheduler

- rtems_scheduler_get_processor_set_ - Get processor set of a scheduler

- rtems_task_get_scheduler_ - Get scheduler of a task

- rtems_task_set_scheduler_ - Set scheduler of a task

- rtems_task_get_affinity_ - Get task processor affinity

- rtems_task_set_affinity_ - Set task processor affinity
Background
==========

taken for granted:

- hardware events result in interrupts
There is no true parallelism. Even when interrupts appear to occur at the same
time, they are processed in largely a serial fashion. This is true even when
the interrupt service routines are allowed to nest. From a tasking viewpoint,
it is the responsibility of the real-time operating system to simulate
parallelism by switching between tasks. These task switches occur in response
to hardware interrupt events and explicit application events such as blocking
for a resource or delaying.
With symmetric multiprocessing, the presence of multiple processors allows for
true concurrency and provides for cost-effective performance improvements.
Uniprocessors tend to increase performance by increasing clock speed and
complexity. This tends to lead to hot, power hungry microprocessors which are
poorly suited for many embedded applications.
The true concurrency is in sharp contrast to the single task and interrupt
model of uniprocessor systems. This results in a fundamental change to the
uniprocessor system characteristics listed above. Developers are faced with a
different set of characteristics which, in turn, break some existing
assumptions and result in new challenges. In an SMP system with N processors,
these are the new execution characteristics.

- N tasks execute in parallel

- hardware events result in interrupts
There is true parallelism with a task executing on each processor and the
possibility of interrupts occurring on each processor. Thus in contrast to
there being one task and one interrupt to consider on a uniprocessor, there are
N tasks and potentially N simultaneous interrupts to consider on an SMP system.
This increase in hardware complexity and presence of true parallelism results
in the application developer needing to be even more cautious about mutual
exclusion and shared data access than in a uniprocessor embedded system. Race
conditions that never or rarely happened when an application executed on a
uniprocessor system become much more likely due to multiple threads executing
in parallel. On a uniprocessor system, these race conditions would only happen
when a task switch occurred at just the wrong moment. Now there are N-1 tasks
executing in parallel all the time and this results in many more opportunities
for small windows in critical sections to be hit.
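The kind of critical section at stake here can be sketched with a shared
counter incremented from two parallel threads. The sketch uses POSIX threads
and a POSIX mutex as a stand-in for RTEMS tasks and binary semaphores (an
assumption, purely to keep the example host-runnable); without the mutex, the
read-modify-write of ``counter`` could interleave and lose updates.

```c
#include <assert.h>
#include <pthread.h>

#define INCREMENTS 100000

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter;

/* Each worker increments the shared counter inside a mutex-protected
 * critical section, so no updates are lost even on a true SMP host. */
static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < INCREMENTS; i++) {
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

static long run_two_workers(void)
{
    pthread_t t1, t2;
    counter = 0;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return counter;
}
```

Removing the lock/unlock pair turns this into exactly the race described
above: the final count becomes nondeterministic on a multiprocessor.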
Task Affinity
-------------
.. index:: task affinity
.. index:: thread affinity

RTEMS provides services to manipulate the affinity of a task. Affinity is used
to specify the subset of processors in an SMP system on which a particular task
can execute.

By default, tasks have an affinity which allows them to execute on any
available processor.
Task affinity is a possible feature to be supported by SMP-aware schedulers.
However, only a subset of the available schedulers support affinity. Although
the behavior is scheduler specific, if the scheduler does not support affinity,
it is likely to ignore all attempts to set affinity.
The scheduler with support for arbitrary processor affinities uses a
proof-of-concept implementation. See https://devel.rtems.org/ticket/2510.
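Affinity is expressed as a ``cpu_set_t`` processor mask built with the
standard ``CPU_*`` macros from ``<sched.h>``. The sketch below builds a mask
allowing execution on processors 0 and 2 only; the ``rtems_task_set_affinity``
call that would consume it is shown as a comment, since it requires a running
RTEMS SMP system (the helper name is hypothetical).

```c
#define _GNU_SOURCE
#include <assert.h>
#include <sched.h>

/* Build an affinity mask allowing execution on processors 0 and 2 only */
static void build_affinity_mask(cpu_set_t *set)
{
    CPU_ZERO(set);          /* start with an empty processor set */
    CPU_SET(0, set);
    CPU_SET(2, set);
    /* On RTEMS one would then call:
     *   rtems_task_set_affinity( task_id, sizeof( *set ), set );
     */
}
```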
to another. There are three reasons why tasks migrate in RTEMS.

- The scheduler changes explicitly via ``rtems_task_set_scheduler()`` or
  similar directives.
- The task resumes execution after a blocking operation. On a priority based
  scheduler it will evict the lowest priority task currently assigned to a
  processor in the processor set managed by the scheduler instance.

- The task moves temporarily to another scheduler instance due to locking
  protocols like *Migratory Priority Inheritance* or the *Multiprocessor
  Resource Sharing Protocol*.
Task migration should be avoided so that the working set of a task can stay on
the most local cache level.
clusters. Clusters with a cardinality of one are partitions. Each cluster is
owned by exactly one scheduler instance.
Clustered scheduling helps to control the worst-case latencies in
multi-processor systems, see *Brandenburg, Bjorn B.: Scheduling and Locking in
Multiprocessor Real-Time Operating Systems. PhD thesis, 2011,
http://www.cs.unc.edu/~bbb/diss/brandenburg-diss.pdf*. The goal is to reduce
the amount of shared state in the system and thus prevent lock contention.
Modern multi-processor systems tend to have several layers of data and
instruction caches. With clustered scheduling it is possible to honour the
available

- message queues,

- semaphores using the `Priority Inheritance`_ protocol (priority boosting),
  and

- semaphores using the `Multiprocessor Resource Sharing Protocol`_ (MrsP).
real-time requirements and functions that profit from fairness and high
throughput provided the scheduler instances are fully decoupled and adequate
inter-cluster synchronization primitives are used. This is work in progress.
For the configuration of clustered schedulers see `Configuring Clustered
Schedulers`_.

To set the scheduler of a task see `SCHEDULER_IDENT - Get ID of a scheduler`_
and `TASK_SET_SCHEDULER - Set scheduler of a task`_.
Task Priority Queues
--------------------

appended to the FIFO. To dequeue a task the highest priority task of the first
priority queue in the FIFO is selected. Then the first priority queue is
removed from the FIFO. In case the previously first priority queue is not
empty, then it is appended to the FIFO. So there is FIFO fairness with respect
to the highest priority task of each scheduler instance. See also
*Brandenburg, Bjorn B.: A fully preemptive multiprocessor semaphore protocol
for latency-sensitive real-time applications. In Proceedings of the 25th
Euromicro Conference on Real-Time Systems (ECRTS 2013), pages 292-302, 2013,
http://www.mpi-sws.org/~bbb/papers/pdf/ecrts13b.pdf*.
Such a two level queue may need a considerable amount of memory if fast enqueue
and dequeue operations are desired (depends on the scheduler instance count).
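The two-level structure described above can be sketched as a FIFO of
per-scheduler-instance priority queues. The sketch below is a deliberately
simplified, host-runnable model (fixed-size arrays instead of RTEMS chains,
hypothetical names, lower priority value means higher priority); it is not the
RTEMS implementation, only an illustration of the enqueue/dequeue rotation.

```c
#include <assert.h>

#define MAX_INSTANCES 4
#define MAX_TASKS 8

/* First level: one priority queue per scheduler instance */
struct prio_queue {
    int prio[MAX_TASKS];
    int count;
};

/* Second level: FIFO of the non-empty per-instance priority queues */
static struct prio_queue queues[MAX_INSTANCES];
static int fifo[MAX_INSTANCES];
static int fifo_head, fifo_len;

static void enqueue(int instance, int prio)
{
    struct prio_queue *q = &queues[instance];
    if (q->count == 0) {            /* first waiter: append queue to FIFO */
        fifo[(fifo_head + fifo_len) % MAX_INSTANCES] = instance;
        fifo_len++;
    }
    q->prio[q->count++] = prio;
}

/* Dequeue: take the highest priority task of the FIFO's first queue,
 * then re-append that queue to the FIFO tail if it is still non-empty,
 * giving FIFO fairness across scheduler instances. */
static int dequeue(void)
{
    int instance = fifo[fifo_head];
    struct prio_queue *q = &queues[instance];
    int best = 0;
    for (int i = 1; i < q->count; i++)
        if (q->prio[i] < q->prio[best])
            best = i;
    int prio = q->prio[best];
    q->prio[best] = q->prio[--q->count];
    fifo_head = (fifo_head + 1) % MAX_INSTANCES;
    fifo_len--;
    if (q->count > 0) {
        fifo[(fifo_head + fifo_len) % MAX_INSTANCES] = instance;
        fifo_len++;
    }
    return prio;
}
```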
for the task itself. In case a task needs to block, then there are two options

In case the task is dequeued, then there are two options
- the task is the last task on the queue, then it removes this queue from the
  object and reclaims it for its own purpose, or

- otherwise, the task removes one queue from the free list of the object and
  reclaims it for its own purpose.
Since there are usually more objects than tasks, this actually reduces the
memory demands. In addition the objects contain only a pointer to the task
and OpenMP run-time support).

Scheduler Helping Protocol
--------------------------

The scheduler provides a helping protocol to support locking protocols like
*Migratory Priority Inheritance* or the *Multiprocessor Resource Sharing
Protocol*. Each ready task can use at least one scheduler node at a time to
gain access to a processor. Each scheduler node has an owner, a user and an
optional idle task. The owner of a scheduler node is determined at task
creation and never changes during the lifetime of a scheduler node. The user
of a scheduler node may change due to the scheduler helping protocol. A
scheduler node is in one of the four scheduler help states:
:dfn:`help yourself`
    This scheduler node is solely used by the owner task. This task owns no
    resources using a helping protocol and thus does not take part in the
    scheduler helping protocol. No help will be provided for other tasks.
:dfn:`help active owner`
    This scheduler node is owned by a task actively owning a resource and can
    be used to help out tasks. In case this scheduler node changes its state
    from ready to scheduled and the task executes using another node, then an
    idle task will be provided as a user of this node to temporarily execute on
    behalf of the owner task. Thus lower priority tasks are denied access to
    the processors of this scheduler instance. In case a task actively owning
    a resource performs a blocking operation, then an idle task will be used
    also in case this node is in the scheduled state.
:dfn:`help active rival`
    This scheduler node is owned by a task actively obtaining a resource
    currently owned by another task and can be used to help out tasks. The
    task owning this node is ready and will give away its processor in case the
    task owning the resource asks for help.
:dfn:`help passive`
    This scheduler node is owned by a task obtaining a resource currently owned
    by another task and can be used to help out tasks. The task owning this
    node is blocked.
The following scheduler operations return a task in need of help
the system depends on the maximum resource tree size of the application.

Critical Section Techniques and SMP
-----------------------------------

As discussed earlier, SMP systems have opportunities for true parallelism which
was not possible on uniprocessor systems. Consequently, multiple techniques
that provided adequate critical sections on uniprocessor systems are unsafe on
SMP systems. In this section, some of these unsafe techniques will be
discussed.

In general, applications must use proper operating system provided mutual
exclusion mechanisms to ensure correct behavior. This primarily means the use
of binary semaphores or mutexes to implement critical sections.
Disable Interrupts and Interrupt Locks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

to simple interrupt disable/enable sequences. It is disallowed to acquire a
single interrupt lock in a nested way. This will result in an infinite loop
with interrupts disabled. While converting legacy code to interrupt locks care
must be taken to avoid this situation.
.. code-block:: c
    :linenos:

    void legacy_code_with_interrupt_disable_enable( void )
    {
      rtems_interrupt_level level;

      rtems_interrupt_disable( level );
      /* Some critical stuff */
      rtems_interrupt_enable( level );
    }

    RTEMS_INTERRUPT_LOCK_DEFINE( static, lock, "Name" );

    void smp_ready_code_with_interrupt_lock( void )
    {
      rtems_interrupt_lock_context lock_context;

      rtems_interrupt_lock_acquire( &lock, &lock_context );
      /* Some critical stuff */
      rtems_interrupt_lock_release( &lock, &lock_context );
    }
The ``rtems_interrupt_lock`` structure is empty on uni-processor
configurations. Empty structures have a different size in C
(implementation-defined, zero in case of GCC) and C++ (implementation-defined
non-zero value, one in case of GCC). Thus the
``RTEMS_INTERRUPT_LOCK_DECLARE()``, ``RTEMS_INTERRUPT_LOCK_DEFINE()``,
``RTEMS_INTERRUPT_LOCK_MEMBER()``, and ``RTEMS_INTERRUPT_LOCK_REFERENCE()``
macros are provided to ensure ABI compatibility.

Highest Priority Task Assumption
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

On a uniprocessor system, it is safe to assume that when the highest priority
task in an application executes, it will execute without being preempted until
it voluntarily blocks. Interrupts may occur while it is executing, but there
will be no context switch to another task unless the highest priority task
voluntarily initiates it.

Given the assumption that no other tasks will have their execution interleaved
with the highest priority task, it is possible for this task to be constructed
such that it does not need to acquire a binary semaphore or mutex for protected
access to shared data.

In an SMP system, it cannot be assumed that only a single task is executing.
It should be assumed that every processor is executing another application
task. Further, those tasks will be ones which would not have been executed in
a uniprocessor configuration and should be assumed to have data
synchronization conflicts with what was formerly the highest priority task
which executed without conflict.

Disable Preemption
~~~~~~~~~~~~~~~~~~

On a uniprocessor system, disabling preemption in a task is very similar to
making the highest priority task assumption. While preemption is disabled, no
task context switches will occur unless the task initiates them voluntarily.
And, just as with the highest priority task assumption, there are N-1
processors also running tasks. Thus the assumption that no other tasks will
run while the task has preemption disabled is violated.

Task Unique Data and SMP
------------------------

Per task variables are a service commonly provided by real-time operating
systems for application use. They work by allowing the application to specify a
location in memory (typically a ``void *``) which is logically added to the
context of a task. On each task switch, the location in memory is stored and
each task can have a unique value in the same memory location. This memory
location is directly accessed as a variable in a program.

This works well in a uniprocessor environment because there is one task
executing and one memory location containing a task-specific value. But it is
fundamentally broken on an SMP system because there are always N tasks
executing. With only one location in memory, N-1 tasks will not have the
correct value.

This paradigm for providing task unique data values is fundamentally broken on
SMP systems.

Classic API Per Task Variables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

configuration of libgomp. In addition application configurable thread pools
for each scheduler instance are available in GCC 6.1 or later.

The run-time configuration of libgomp is done via environment variables
documented in the `libgomp manual <https://gcc.gnu.org/onlinedocs/libgomp/>`_.
The environment variables are evaluated in a constructor function which
executes in the context of the first initialization task before the actual
initialization task function is called (just like a global C++ constructor).
To set application specific values, a higher priority constructor function
must be used to set up the environment variables.

.. code:: c

    #include <stdlib.h>

    void __attribute__((constructor(1000))) config_libgomp( void )
    {
      setenv( "OMP_DISPLAY_ENV", "VERBOSE", 1 );
      setenv( "GOMP_SPINCOUNT", "30000", 1 );
      setenv( "GOMP_RTEMS_THREAD_POOLS", "1$2@SCHD", 1 );
    }

The environment variable ``GOMP_RTEMS_THREAD_POOLS`` is RTEMS-specific. It
determines the thread pools for each scheduler instance. The format for
``GOMP_RTEMS_THREAD_POOLS`` is a list of optional
``<thread-pool-count>[$<priority>]@<scheduler-name>`` configurations separated
by ``:`` where:

- ``<thread-pool-count>`` is the thread pool count for this scheduler
  instance.

- ``$<priority>`` is an optional priority for the worker threads of a thread
  pool according to ``pthread_setschedparam``. In case a priority value is
  omitted, then a worker thread will inherit the priority of the OpenMP master
  thread that created it. The priority of the worker thread is not changed by
  libgomp after creation, even if a new OpenMP master thread using the worker
  has a different priority.

- ``@<scheduler-name>`` is the scheduler instance name according to the RTEMS
  application configuration.

In case no thread pool configuration is specified for a scheduler instance,
then each OpenMP master thread of this scheduler instance will use its own
dynamically allocated thread pool. To limit the worker thread count of the
thread pools, each OpenMP master thread must call ``omp_set_num_threads``.

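For instance, a master thread can bound its team size before opening a
parallel region. This is a generic OpenMP sketch (not RTEMS-specific; the
limit of four is an arbitrary illustrative value):

```c
#include <omp.h>
#include <stdio.h>

int main( void )
{
  /* Bound the team size of this master thread's parallel regions, which
     limits how many workers of its thread pool it can occupy. */
  omp_set_num_threads( 4 );

  #pragma omp parallel
  {
    printf( "thread %d of %d\n", omp_get_thread_num(), omp_get_num_threads() );
  }

  return 0;
}
```
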
Let us suppose we have three scheduler instances ``IO``, ``WRK0``, and
``WRK1`` with ``GOMP_RTEMS_THREAD_POOLS`` set to ``"1@WRK0:3$4@WRK1"``. Then
there are no thread pool restrictions for scheduler instance ``IO``. In the
scheduler instance ``WRK0`` there is one thread pool available. Since no
priority is specified for this scheduler instance, the worker thread inherits
the priority of the OpenMP master thread that created it. In the scheduler
instance ``WRK1`` there are three thread pools available and their worker
threads run at priority four.

Thread Dispatch Details
-----------------------

variables,

Updates of the heir thread and the thread dispatch necessary indicator are
synchronized via explicit memory barriers without the use of locks. A thread
can be an heir thread on at most one processor in the system. The thread
context is protected by a TTAS lock embedded in the context to ensure that it
is used on at most one processor at a time. The thread post-switch actions use
a per-processor lock. This implementation turned out to be quite efficient and
no lock contention was observed in the test suite.

The current implementation of thread dispatching has some implications with

lock individual tasks to specific processors. In this way, one can designate a
processor for I/O tasks, another for computation, etc. The following
illustrates the code sequence necessary to assign a task an affinity for the
processor with index ``processor_index``.

.. code:: c

    #include <rtems.h>
    #include <assert.h>

    void pin_to_processor(rtems_id task_id, int processor_index)
    {
      rtems_status_code sc;
      cpu_set_t         cpuset;

      CPU_ZERO(&cpuset);
      CPU_SET(processor_index, &cpuset);

      sc = rtems_task_set_affinity(task_id, sizeof(cpuset), &cpuset);
      assert(sc == RTEMS_SUCCESSFUL);
    }

It is important to note that the ``cpuset`` is not validated until the
``rtems_task_set_affinity`` call is made. At that point, it is validated
against the current system configuration.

Directives
==========

This section details the symmetric multiprocessing services. A subsection is
dedicated to each of these services and describes the calling sequence,
related constants, usage, and status codes.

.. _rtems_get_processor_count:

GET_PROCESSOR_COUNT - Get processor count
-----------------------------------------

maximum count of application configured processors.

**NOTES:**

None.

.. _rtems_get_current_processor:

GET_CURRENT_PROCESSOR - Get current processor index
---------------------------------------------------

thread dispatching disabled.

**NOTES:**

None.

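As a brief sketch of how the two processor directives combine (the function
name and the message wording are illustrative, not from the manual):

```c
#include <rtems.h>
#include <inttypes.h>
#include <stdio.h>

/* Print which processor this code currently executes on. Note that unless
 * thread dispatching is disabled, the index may already be stale when it is
 * printed, per the notes above. */
void report_processor( void )
{
  uint32_t processor_count   = rtems_get_processor_count();
  uint32_t current_processor = rtems_get_current_processor();

  printf(
    "executing on processor %" PRIu32 " of %" PRIu32 "\n",
    current_processor,
    processor_count
  );
}
```
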
.. _rtems_scheduler_ident:

SCHEDULER_IDENT - Get ID of a scheduler
---------------------------------------

**CALLING SEQUENCE:**

.. code:: c

    rtems_status_code rtems_scheduler_ident(
      rtems_name  name,
      rtems_id   *id
    );

**DIRECTIVE STATUS CODES:**

.. list-table::
   :class: rtems-table

   * - ``RTEMS_SUCCESSFUL``
     - successful operation
   * - ``RTEMS_INVALID_ADDRESS``
     - ``id`` is NULL
   * - ``RTEMS_INVALID_NAME``
     - invalid scheduler name
   * - ``RTEMS_UNSATISFIED``
     - a scheduler with this name exists, but the processor set of this
       scheduler is empty

**DESCRIPTION:**

scheduler configuration. See `Configuring a System`_.

**NOTES:**

None.

.. _rtems_scheduler_get_processor_set:

SCHEDULER_GET_PROCESSOR_SET - Get processor set of a scheduler
--------------------------------------------------------------

**CALLING SEQUENCE:**

.. code:: c

    rtems_status_code rtems_scheduler_get_processor_set(
      rtems_id   scheduler_id,
      size_t     cpusetsize,
      cpu_set_t *cpuset
    );

**DIRECTIVE STATUS CODES:**

.. list-table::
   :class: rtems-table

   * - ``RTEMS_SUCCESSFUL``
     - successful operation
   * - ``RTEMS_INVALID_ADDRESS``
     - ``cpuset`` is NULL
   * - ``RTEMS_INVALID_ID``
     - invalid scheduler id
   * - ``RTEMS_INVALID_NUMBER``
     - the affinity set buffer is too small for the set of processors owned by
       the scheduler

**DESCRIPTION:**

Returns the processor set owned by the scheduler in ``cpuset``. A set bit in
the processor set means that this processor is owned by the scheduler and a
cleared bit means the opposite.

**NOTES:**

None.

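A sketch of querying a scheduler's processor set (the scheduler name ``WORK``
and the function name are assumptions for illustration):

```c
#include <rtems.h>
#include <assert.h>

void show_scheduler_processors( void )
{
  rtems_status_code sc;
  rtems_id          scheduler_id;
  cpu_set_t         cpuset;

  /* Look up the scheduler instance by its configured name */
  sc = rtems_scheduler_ident(
    rtems_build_name( 'W', 'O', 'R', 'K' ),
    &scheduler_id
  );
  assert( sc == RTEMS_SUCCESSFUL );

  /* Retrieve the set of processors owned by this scheduler */
  sc = rtems_scheduler_get_processor_set(
    scheduler_id,
    sizeof( cpuset ),
    &cpuset
  );
  assert( sc == RTEMS_SUCCESSFUL );

  /* CPU_ISSET() can now be used to test individual processor indices */
}
```
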
.. _rtems_task_get_scheduler:

TASK_GET_SCHEDULER - Get scheduler of a task
--------------------------------------------

**CALLING SEQUENCE:**

.. code:: c

    rtems_status_code rtems_task_get_scheduler(
      rtems_id  task_id,
      rtems_id *scheduler_id
    );

**DIRECTIVE STATUS CODES:**

.. list-table::
   :class: rtems-table

   * - ``RTEMS_SUCCESSFUL``
     - successful operation
   * - ``RTEMS_INVALID_ADDRESS``
     - ``scheduler_id`` is NULL
   * - ``RTEMS_INVALID_ID``
     - invalid task id

**DESCRIPTION:**

Returns the scheduler identifier of a task identified by ``task_id`` in
``scheduler_id``.

**NOTES:**

None.

.. _rtems_task_set_scheduler:

TASK_SET_SCHEDULER - Set scheduler of a task
--------------------------------------------

**CALLING SEQUENCE:**

.. code:: c

    rtems_status_code rtems_task_set_scheduler(
      rtems_id task_id,
      rtems_id scheduler_id
    );

**DIRECTIVE STATUS CODES:**

.. list-table::
   :class: rtems-table

   * - ``RTEMS_SUCCESSFUL``
     - successful operation
   * - ``RTEMS_INVALID_ID``
     - invalid task or scheduler id
   * - ``RTEMS_INCORRECT_STATE``
     - the task is in the wrong state to perform a scheduler change

**DESCRIPTION:**

Sets the scheduler of a task identified by ``task_id`` to the scheduler
identified by ``scheduler_id``. The scheduler of a task is initialized to the
scheduler of the task that created it.

**NOTES:**

None.

**EXAMPLE:**

.. code-block:: c
    :linenos:

    #include <rtems.h>
    #include <assert.h>

    void task(rtems_task_argument arg);

    void example(void)
    {
      rtems_status_code sc;
      rtems_id          task_id;
      rtems_id          scheduler_id;
      rtems_name        scheduler_name;

      scheduler_name = rtems_build_name('W', 'O', 'R', 'K');

      sc = rtems_scheduler_ident(scheduler_name, &scheduler_id);
      assert(sc == RTEMS_SUCCESSFUL);

      sc = rtems_task_create(
        rtems_build_name('T', 'A', 'S', 'K'),
        1,
        RTEMS_MINIMUM_STACK_SIZE,
        RTEMS_DEFAULT_MODES,
        RTEMS_DEFAULT_ATTRIBUTES,
        &task_id
      );
      assert(sc == RTEMS_SUCCESSFUL);

      sc = rtems_task_set_scheduler(task_id, scheduler_id);
      assert(sc == RTEMS_SUCCESSFUL);

      sc = rtems_task_start(task_id, task, 0);
      assert(sc == RTEMS_SUCCESSFUL);
    }

.. _rtems_task_get_affinity:

TASK_GET_AFFINITY - Get task processor affinity
-----------------------------------------------

**CALLING SEQUENCE:**

.. code:: c

    rtems_status_code rtems_task_get_affinity(
      rtems_id   id,
      size_t     cpusetsize,
      cpu_set_t *cpuset
    );

**DIRECTIVE STATUS CODES:**

.. list-table::
   :class: rtems-table

   * - ``RTEMS_SUCCESSFUL``
     - successful operation
   * - ``RTEMS_INVALID_ADDRESS``
     - ``cpuset`` is NULL
   * - ``RTEMS_INVALID_ID``
     - invalid task id
   * - ``RTEMS_INVALID_NUMBER``
     - the affinity set buffer is too small for the current processor affinity
       set of the task

**DESCRIPTION:**

cleared bit means the opposite.

**NOTES:**

None.

.. _rtems_task_set_affinity:

TASK_SET_AFFINITY - Set task processor affinity
-----------------------------------------------

**CALLING SEQUENCE:**

.. code:: c

    rtems_status_code rtems_task_set_affinity(
      rtems_id         id,
      size_t           cpusetsize,
      const cpu_set_t *cpuset
    );

**DIRECTIVE STATUS CODES:**

.. list-table::
   :class: rtems-table

   * - ``RTEMS_SUCCESSFUL``
     - successful operation
   * - ``RTEMS_INVALID_ADDRESS``
     - ``cpuset`` is NULL
   * - ``RTEMS_INVALID_ID``
     - invalid task id
   * - ``RTEMS_INVALID_NUMBER``
     - invalid processor affinity set

**DESCRIPTION:**

locking protocols may temporarily use processors that are not included in the
processor affinity set of the task. It is also not an error if the processor
affinity set contains processors that are not part of the system.