eng: Add formal verification chapter

Andrew Butterfield 2023-11-09 11:26:56 +00:00 committed by Sebastian Huber
parent 5da4bcda3b
commit 2c88912893
12 changed files with 3343 additions and 0 deletions


@@ -9,6 +9,23 @@
year = {1973},
pages = {46-61},
}
@article{Dijkstra:1975:GCL,
author = {Dijkstra, Edsger W.},
title = {Guarded Commands, Nondeterminacy and Formal Derivation of Programs},
year = {1975},
issue_date = {Aug. 1975},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {18},
number = {8},
issn = {0001-0782},
url = {https://doi.org/10.1145/360933.360975},
doi = {10.1145/360933.360975},
journal = {Commun. ACM},
month = {aug},
pages = {453--457},
numpages = {5},
}
@inproceedings{Varghese:1987:TimerWheel,
author = {Varghese, G. and Lauck, T.},
title = {{Hashed and Hierarchical Timing Wheels: Data Structures for the Efficient Implementation of a Timer Facility}},
@@ -159,6 +176,31 @@
year = {2007},
url = {http://www.akkadia.org/drepper/cpumemory.pdf},
}
@article{Hierons:2009:FMT,
author = {Robert M. Hierons and
Kirill Bogdanov and
Jonathan P. Bowen and
Rance Cleaveland and
John Derrick and
Jeremy Dick and
Marian Gheorghe and
Mark Harman and
Kalpesh Kapoor and
Paul J. Krause and
Gerald L{\"{u}}ttgen and
Anthony J. H. Simons and
Sergiy A. Vilkomir and
Martin R. Woodward and
Hussein Zedan},
title = {Using formal specifications to support testing},
journal = {{ACM} Comput. Surv.},
volume = {41},
number = {2},
pages = {9:1--9:76},
year = {2009},
url = {https://doi.org/10.1145/1459352.1459354},
doi = {10.1145/1459352.1459354},
}
@inproceedings{Mavin:2009:EARS,
author = {Mavin, Alistair and Wilkinson, Philip and Harwood, Adrian and Novak, Mark},
title = {{Easy approach to requirements syntax (EARS)}},
@@ -369,6 +411,39 @@
doi = {10.1109/RE.2016.38},
url = {https://www.researchgate.net/profile/Alistair_Mavin/publication/308970788_Listens_Learned_8_Lessons_Learned_Applying_EARS/links/5ab0d42caca2721710fe5017/Listens-Learned-8-Lessons-Learned-Applying-EARS.pdf},
}
@manual{Butterfield:2021:FV1-200,
author = {Andrew Butterfield and Mike Hinchey},
organization = {Lero -- the Irish Software Research Centre},
title = {FV1-200: Formal Verification Plan},
year = {2021}
}
@mastersthesis{Jennings:2021:FV,
author = {Robert Jennings},
title = {{Formal Verification of a Real-Time Multithreaded Operating System}},
school = {School of Computer Science and Statistics},
year = {2021},
type = {Master of Science in {Computer Science (MCS)}},
address = {Trinity College, Dublin 2, Ireland},
month = {August},
}
@mastersthesis{Jaskuc:2022:TESTGEN,
author = {Jerzy Ja{\'{s}}ku{\'{c}}},
title = {{SPIN/Promela-Based Test Generation Framework for RTEMS Barrier Manager}},
school = {School of Computer Science and Statistics},
year = {2022},
type = {Master of Science in {Computer Science (MCS)}},
address = {Trinity College, Dublin 2, Ireland},
month = {April},
}
@mastersthesis{Lynch:2022:TESTGEN,
author = {Eoin Lynch},
title = {{Using Promela/SPIN to do Test Generation for RTEMS-SMP}},
school = {School of Engineering},
year = {2022},
type = {Master of {Engineering (Computer Engineering)}},
address = {Trinity College, Dublin 2, Ireland},
month = {April},
}
@misc{RTEMS:CPU,
title = {{RTEMS CPU Architecture Supplement}},
url = {https://docs.rtems.org/branches/master/cpu-supplement.pdf},

eng/fv/appendix-fv.rst Executable file

File diff suppressed because it is too large (1749 lines)

eng/fv/approaches.rst Normal file

@@ -0,0 +1,178 @@
.. SPDX-License-Identifier: CC-BY-SA-4.0
.. Copyright (C) 2022 Trinity College Dublin
.. _FormalVerifApproaches:
Formal Verification Approaches
==============================
This is an overview of a range of formal methods and tools
that look feasible for use with RTEMS.
A key criterion for any proposed tool is the ability to deploy it in a highly
automated manner. This amounts to the tool having a command-line interface that
covers all the required features.
One such feature is that the tool generates output that can be
easily transformed into the formats useful for qualification.
Tools with GUI interfaces can be very helpful while developing
and deploying formal models, as long as the models/tests/proofs
can be re-run automatically via the command-line.
Other important criteria concern the support available
for test generation,
and how close the connection is between the formalism and actual C code.
The final key criterion is that whatever techniques are proposed should fit
in with the RTEMS Project Mission Statement,
found in the Software Engineering manual.
This requires, among other things,
that any tool added to the tool-chain needs to be open-source.
A more detailed report regarding this can be found in
:cite:`Butterfield:2021:FV1-200`.
What follows is a general overview of formal methods and testing,
which discusses a number of formalisms and tools against the criteria above.
Formal Methods Overview
-----------------------
Formal specification languages can be divided into the following groups:
Model-based: e.g., Z, VDM, B
These have a language that describes a system in terms of having an abstract
state and how it is modified by operations. Reasoning is typically based
around the notions of pre- and post-conditions and state invariants.
The usual method of reasoning is by using theorem-proving. The resulting
models often have an unbounded number of possible states, and are capable
of describing unbounded numbers of operation steps.
Finite State-based: e.g., finite-state machines (FSMs), SDL, Statecharts
These are a variant of model-based specification, with the added constraint
that the number of states is bounded. Desired model properties are often
expressed using some form of temporal logic. The languages used to describe
these are often more constrained than in more general model-based
approaches. The finiteness allows reasoning by searching the model,
including doing exhaustive searches, a.k.a. model-checking.
Process Algebras: e.g., CSP, CCS, pi-calculus, LOTOS
These model systems in terms of the sequence of externally observable
events that they perform. There is no explicit definition of the abstract
states, but their underlying semantics is given as a state machine,
where the states are deduced from the overall behavior of the system,
and events denote transitions between these states. In general both the
number of such states and length of observed event sequences are unbounded.
While temporal logics can be used to express properties, many process
algebras use their own notation to express desired properties as simpler
systems. A technique called bisimulation is used to reason about the
relationships between these.
Most of the methods above start with formal specifications/models. Also
needed is a way to bridge the gap to actual code. The relationship between
specification and code is often referred to as a :term:`refinement`
(some prefer the term :term:`reification`). Most model-based methods have refinement,
with the concept baked in as a key part of the methodology.
Theorem Provers: e.g., Coq, HOL4, PVS, Isabelle/HOL
Many modern theorem provers are not only useful to help reason about the
formalisms mentioned above, but are often powerful enough to be used to
describe formal models in their own terms and then apply their proof
systems directly to those.
Model Checkers: e.g., SPIN, FDR
Model checkers are tools that do exhaustive searches over models with a
finite number of states. These are most commonly used with the finite-state
methods, as well as the process algebras where some bound is put on the
state-space. As model-checking is basically exhaustive testing, these are
often the easiest way to get test generation from formal techniques.
Formal Development frameworks: e.g., TLA+, Frama-C, KeY
There are also a number of frameworks that support a close connection
between a programming language, a formalism to specify desired behavior
for programs in that language, as well as tools to support the reasoning
(proof, simulation, test).
Formal Methods actively considered
----------------------------------
Given the emphasis on verifying RTEMS C code,
the focus is on freely available tools that could easily connect to C.
These include: Frama-C, TLA+/PlusCal, Isabelle/HOL, and Promela/SPIN.
Further investigation ruled out TLA+/PlusCal because it is Java-based,
and requires installing a Java Runtime Environment.
Frama-C, Isabelle/HOL, and Promela/SPIN are discussed below in more detail.
Frama-C
^^^^^^^
Frama-C (frama-c.com) is a platform supporting a range of tools for analysing C
code, including static analysers, support for functional specifications (ANSI-C
Specification Language ACSL), and links to theorem provers. Some of its
analyses require code annotations, while others can extract useful information
from un-annotated code. It has a plug-in architecture, which makes it easy to
extend. It is used extensively by Airbus.
Frama-C, and its plugins, are implemented in OCaml,
and it is installed using the ``opam`` package manager.
An issue here was that Frama-C has many quite large dependencies.
There was support for test generation, but it was not freely available.
Another issue was that Frama-C only supported C99, and not C11
(the issue is how to handle C11 Atomics in terms of their semantics).
Isabelle/HOL
^^^^^^^^^^^^
Isabelle/HOL is a wide-spectrum theorem-prover, implemented as an embedding of
Higher-Order Logic (HOL) into the Isabelle generic proof assistant
(isabelle.in.tum.de). It has a high degree of automation, including an ability
to link to third-party verification tools, and a very large library of verified
mathematical theorems, covering number and set theory, algebra, and analysis. It is
based on the idea of a small trusted code kernel that defines an encapsulated
datatype representing a theorem, which can only be constructed using methods in
the kernel for that datatype, but which also scales effectively regardless of
how many theorems are so proven.
It is implemented using ``polyml``, with the IDE implemented using Scala,
is open-source, and is easy to install.
However, like Frama-C, it is also a very large software suite.
Formal Method actually used
---------------------------
A good survey of formal techniques and testing
is found in a 2009 ACM survey paper :cite:`Hierons:2009:FMT`.
Here they clearly state:
"The most important role for formal verification in testing
is in the automated generation of test cases.
In this context,
model checking is the formal verification technology of choice;
this is due to the ability of model checkers
to produce counterexamples
in case a temporal property does not hold for a system model."
Promela/SPIN
^^^^^^^^^^^^
The current use of formal methods in RTEMS is based on using the Promela
language to model key RTEMS features,
in such a way that tests can be generated using the SPIN model checker
(spinroot.com).
Promela is quite a low-level modelling language that makes it easy to get close
to code level, and is specifically targeted at modelling software. SPIN is one
of the most widely used model-checkers, both in industry and in education. It
uses assertions and :term:`Linear Temporal Logic` (LTL) to express properties
of interest.
Given a Promela model that checks key properties successfully,
tests can be generated for a property *P* by asking
SPIN to check the negation of that property.
There are ways to get SPIN to generate multiple/all possible counterexamples,
as well as getting it to find the shortest.

eng/fv/index.rst Executable file

@@ -0,0 +1,16 @@
.. SPDX-License-Identifier: CC-BY-SA-4.0
.. Copyright (C) 2022 Trinity College Dublin
.. _FormalVerif:
Formal Verification
*******************
.. toctree::
overview
approaches
methodology
promela-index

eng/fv/methodology.rst Executable file

@@ -0,0 +1,62 @@
.. SPDX-License-Identifier: CC-BY-SA-4.0
.. Copyright (C) 2022 Trinity College Dublin
.. _FormalVerifMethodology:
Test Generation Methodology
===========================
The general approach to using any model-checking technology for test generation
has three major steps:
Model desired behavior
----------------------
Construct a model that describes the desired properties (`P1`, ..., `PN`)
and use the model-checker to verify those properties.
Promela can specify properties using the ``assert()`` statement,
which requires its expression to be true at the point where it gets executed,
and can use :term:`Linear Temporal Logic`
(LTL) to specify more complex properties over execution sequences. SPIN will
also check generic correctness properties such as deadlock and
livelock freedom.
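As a small illustration (a sketch, not one of the RTEMS models), the following
model asserts a property at a specific point in its execution, and also states
an LTL property that must hold over all executions:

.. code-block:: C

   byte count = 0;

   active proctype counter() {
     do
     :: count < 3  -> count++
     :: count == 3 -> break
     od;
     assert( count == 3 )    /* property P1: the loop exits with count at 3 */
   }

   /* property P2: count never exceeds 3, in any execution */
   ltl P2 { [] ( count <= 3 ) }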
Make claims about undesired behavior
------------------------------------
Given a fully verified model, systematically negate each specified property.
Because each property was verified as true,
the negated properties will fail model-checking,
and counter-examples will be
generated. These counter-examples will in fact be scenarios describing correct
behavior of the system, demonstrating the truth of each property.
.. warning::
It is very important that the negations only apply to stated properties,
and do not alter the possible behaviors of the model in any way.
The behaviors of the model are determined by the control-flow constructs,
so any boolean-valued expression statements used in these,
or used in sequential code to wait for some condition,
should not be altered.
What can be altered are the expressions in ``assert()`` statements,
and any LTL properties.
With Promela, there are a number of different ways to do systematic
negation. The precise approach adopted depends on the nature of the models, and
more details can be found
in the RTEMS Formal Models Guide Appendix in this document.
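For example, one pattern (a sketch only; the appendix describes the mechanisms
actually used by each model, and ``sendrc`` and ``RC_OK`` are illustrative
names) guards a property and its negation with the same ``TEST_GEN`` macro
that the recommended SPIN options in :ref:`FormalToolSetup` define:

.. code-block:: C

   #ifdef TEST_GEN
     assert( !( sendrc == RC_OK ) )  /* negated: each failure is a correct scenario */
   #else
     assert( sendrc == RC_OK )       /* original property, verified to hold */
   #endif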
Map good behavior scenarios to tests
------------------------------------
Define a mapping from counter-example output to test code,
and use this in the process of constructing a test program.
A YAML file is used to define a mapping from SPIN output to
relevant fragments of RTEMS C test code, which uses the Test Framework
described in this document.
The process is automated by a Python script called ``testbuilder``.

eng/fv/overview.rst Executable file

@@ -0,0 +1,38 @@
.. SPDX-License-Identifier: CC-BY-SA-4.0
.. Copyright (C) 2022 Trinity College Dublin
.. _FormalVerifOverview:
Formal Verification Overview
============================
Formal Verification is a technique based on writing key design artifacts using
notations that have a well-defined mathematical :term:`semantics`. This means
that these descriptions can be rigorously analyzed using logic and other
mathematical tools. The term :term:`formal model` is used to refer to any such
description.
Having a formal model of a software engineering artifact (requirements,
specification, code) allows it to be analyzed to assess the behavior it
describes. This means checks can be done that the model has desired properties,
and that it lacks undesired ones. A key feature of having a formal description
is that tools can be developed that parse the notation and perform much,
if not most, of the analysis. An industrial-strength formalism is one that has
very good tool support.
Having two formal models of the same software object at different levels
of abstraction (specification and code, say) allows their comparison. In
particular, a formal analysis can establish if a lower level artifact like
code satisfies the properties described by a higher level,
such as a specification. This relationship is commonly referred to as a
:term:`refinement`.
Often it is quite difficult to get a useful formal model of real code. Some
formal modelling approaches are capable of generating machine-readable
:term:`scenarios` that describe possible correct behaviors of the system at the
relevant level of abstraction. A refinement for these can be defined by
using them to generate test code.
This is the technique that is used in :ref:`FormalVerifMethodology` to
verify parts of RTEMS. Formal models are constructed based on requirements
documentation, and are used as a basis for test generation.

eng/fv/promela-index.rst Normal file

@@ -0,0 +1,9 @@
.. SPDX-License-Identifier: CC-BY-SA-4.0
.. Copyright (C) 2022 Trinity College Dublin
.. toctree::
tool-setup
promela
refinement

eng/fv/promela.rst Executable file

@@ -0,0 +1,323 @@
.. SPDX-License-Identifier: CC-BY-SA-4.0
.. Copyright (C) 2022 Trinity College Dublin
.. _PromelaModelling:
Modelling with Promela
======================
Promela is a large language with many features,
but only a subset is used here for test generation.
This is a short overview of that subset.
The definitive documentation can be found at
https://spinroot.com/spin/Man/promela.html.
Promela Execution
-----------------
Promela is a *modelling* language, not a programming language. It is designed
to describe the kind of runtime behaviors that make reasoning about low-level
concurrency so difficult: namely shared mutable state and effectively
non-deterministic interleaving of concurrent threads. This means that there are
control constructs that specify non-deterministic outcomes,
and an execution model that allows the specification of when threads should
block.
The execution model is based on the following concepts:
Interleaving Concurrency
A running Promela system consists of one or more concurrent processes. Each
process is described by a segment of code that defines a sequence of
atomic steps. The scheduler looks at all the available next-steps and makes
a **non-deterministic choice** of which one will run. The scheduler is
invoked after every atomic step.
Executability
At any point in time, a Promela process is either able to perform a step,
and is considered executable, or is unable to do so, and is considered
blocked. Whether a statement is executable or blocked may depend on the
global state of the model. The scheduler will only select from among the
executable processes.
The Promela language is based loosely on C, and the SPIN model-checking tool
converts a Promela model into a C program that has the specific model
hard-coded and optimized for whatever analysis has been invoked. It also
supports the use of the C pre-processor.
Simulation vs. Verification
^^^^^^^^^^^^^^^^^^^^^^^^^^^
SPIN can run a model in several distinct modes:
Simulation
SPIN simply makes random choices for the scheduler to produce a possible
execution sequence (a.k.a. scenario) allowed by the model. A readable
transcript is written to ``stdout`` as the simulation runs.
The simplest SPIN invocation does simulation by default:
.. code:: shell
spin model.pml
Verification
SPIN does an analysis of the whole model by exploring all the possible
choices that the scheduler can make. This will continue until either all
possible choices have been covered, or some form of error is uncovered.
If verification ends successfully, then this is simply reported as ok.
If an error occurs, verification stops, and the sequence of steps that led
to that failure is output to a so-called trail file.
The simplest way to run a verification is to give the ``-run`` option:
.. code:: shell
spin -run model.pml
Replaying
A trail file is an uninformative list of number-triples, but can be replayed
in simulation mode to produce human-readable output.
.. code:: shell
spin -t model.pml
Promela Datatypes
-----------------
Promela supports a subset of C scalar types (``short``, ``int``), but also
adds some of its own (``bit``, ``bool``, ``byte``, ``unsigned``).
It has support for one-dimensional arrays,
and its own variation of the C struct concept
(confusingly called a ``typedef``!).
It has a single enumeration type called ``mtype``.
There are no pointers in Promela, which means that modelling pointer
usage requires the use of arrays with their indices acting as proxies for
pointers.
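For example (a sketch in the style of the Chains model fragment shown in the
Traceability section of :ref:`Promela2CRefinement`), a linked node structure
can hold an index into a node array, with index zero playing the role of NULL:

.. code-block:: C

   #define NULL_PTR 0

   typedef Node {
     byte next;     /* index into memory[], where 0 stands for NULL */
     short value
   }

   Node memory[8];  /* memory[0] is reserved so that index 0 means NULL */
   byte head = NULL_PTR;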
Promela Declarations
--------------------
Variables and one-dimensional arrays can be declared in Promela in much the
same way as they are done in C:
.. code-block:: C
int x, y[3] ;
All global variables and arrays are initialized to zero.
The identifier ``unsigned`` is the name of a type, rather than a modifier.
It is used to declare an unsigned number variable with a given bit-width:
.. code-block:: C
unsigned mask : 4 ;
Structure-like datatypes in Promela are defined using the ``typedef`` keyword
that associates a name with what is basically a C ``struct``:
.. code-block:: C
typedef CBuffer {
short count;
byte buffer[8]
}
CBuffer cbuf[6];
Note that we can have arrays of ``typedef``\ s that themselves contain arrays.
This is the only way to get multi-dimensional arrays in Promela.
There is only one enumeration type, which can be defined incrementally.
Consider the following sequence of four declarations that defines the values in
``mtype`` and declares two variables of that type:
.. code-block:: C
mtype = { up, down } ;
mtype dir1;
mtype = { left, right} ;
mtype dir2;
This gives the same outcome as the following two declarations:
.. code-block:: C
mtype = { left, right, up, down } ;
mtype dir1, dir2;
Special Identifiers
^^^^^^^^^^^^^^^^^^^
There are a number of variable identifiers that have a special meaning in Promela.
These all start with an underscore. We use the following:
Process Id
``_pid`` holds the process id of the currently active process
Process Count
``_nr_pr`` gives the number of currently active processes.
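A minimal sketch that uses both:

.. code-block:: C

   active [2] proctype greeter() {
     printf( "process %d of %d says hello\n", _pid, _nr_pr )
   }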
Promela Atomic Statements
-------------------------
Assignment
``x = e`` where ``x`` is a variable and ``e`` is an expression.
Expression ``e`` must have no side-effects. An assignment is always
executable. Its effect is to update the value of ``x`` with the current
value of ``e``.
Condition Statement
``e`` where ``e`` is an expression
Expression ``e``, used standalone as a statement, is executable if its
value in the current state is non-zero. If its current value is zero,
then it is blocked. It behaves like a NO-OP when executed.
Skip
``skip``, a keyword
``skip`` is always executable, and behaves like a NO-OP when executed.
Assertion
``assert(e)`` where ``e`` is an expression
An assertion is always executable. When executed, it evaluates its
expression. If the value is non-zero, then it behaves like a NO-OP. If the
value is zero, then it generates an assertion error and aborts further
simulation/verification of the model.
Printing
``printf(string,args)`` where ``string`` is a format-string and ``args``
are values and expressions.
A ``printf`` statement is completely ignored in verification mode.
In simulation mode, it is always executable,
and generates output to ``stdout`` in much the same way as in C.
This is used in a structured way to assist with test generation.
Goto
``goto lbl`` where ``lbl`` is a statement label.
Promela supports labels for statements, in the same manner as C.
The ``goto`` statement is always executable.
When executed, flow of control goes to the statement labelled by ``lbl:``.
Break
``break``, a keyword
Can only occur within a loop (``do ... od``, see below). It is always
executable, and when executed performs a ``goto`` to the statement just
after the end of the innermost enclosing loop.
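The condition statement is what gives Promela its basic synchronization
mechanism, as the following sketch (not from any RTEMS model) shows:

.. code-block:: C

   bit full = 0;

   active proctype producer() {
     full = 1               /* assignment: always executable */
   }

   active proctype consumer() {
     full == 1;             /* condition statement: blocks until full is set */
     printf( "consumed\n" )
   }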
Promela Composite Statements
----------------------------
Sequencing
``{ <stmt> ; <stmt> ; ... ; <stmt> }`` where each ``<stmt>`` can be any
kind of statement, atomic or composite. The sub-statements execute in
sequence in the usual way.
Selection
.. code-block:: none
if
:: <stmt>
:: <stmt>
...
:: <stmt>
fi
A selection construct is blocked if all the ``<stmt>`` are blocked. It is
executable if at least one ``<stmt>`` is executable. The scheduler will make
a non-deterministic choice from all of those ``<stmt>`` that are executable.
The construct terminates when/if the chosen ``<stmt>`` does.
Typically, each alternative ``<stmt>`` will be a sequence of the form
``g ; s1 ; ... ; sN`` where ``g`` is an expression acting as a guard,
and the rest of the statements are intended to run if ``g`` is non-zero.
The symbol ``->`` plays the same syntactic role as ``;``, so this allows
for a more intuitive look and feel: ``g -> s1 ; ... ; sN``.
If the last ``<stmt>`` has the form ``else -> ...``, then the ``else`` is
executable only when all the other alternatives are blocked.
Repetition
.. code-block:: none
do
:: <stmt>
:: <stmt>
...
:: <stmt>
od
The executability rules here are the same as for Selection above. The key
difference is that when/if a chosen ``<stmt>`` terminates, then the whole
construct is re-executed, making it basically an infinite loop. The only
way to exit this loop is for an active ``<stmt>`` to execute a ``break``
or ``goto`` statement.
A ``break`` statement only makes sense inside a Repetition, is always
executable, and its effect is to jump to the next statement after the
next ``od`` keyword in text order.
The selection and repetition syntax and semantics are based on Edsger
Dijkstra's Guarded Command Language :cite:`Dijkstra:1975:GCL`.
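The following sketch (not from any RTEMS model) combines guarded repetition
and selection; when more than one guard is true, the scheduler chooses
non-deterministically:

.. code-block:: C

   byte n;   /* global, so initialized to zero */

   active proctype updown() {
     do
     :: n < 5  -> n++             /* only executable while n < 5         */
     :: n > 0  -> n--             /* the scheduler chooses between these */
     :: n == 5 -> break           /* exits the loop                      */
     od;
     if
     :: n == 5 -> printf( "n reached 5\n" )
     :: else   -> assert( false ) /* unreachable: the loop only exits at 5 */
     fi
   }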
Atomic Composite
``atomic{stmt}`` where ``stmt`` is usually a (sequential) composite.
Wrapping the ``atomic`` keyword around a statement makes its entire
execution proceed without any interference from the scheduler. Once it is
executable, if the scheduler chooses it to run, then it runs to completion.
There is one very important exception: if it should block internally because
some sub-statement is blocked, then the atomicity gets broken, and the
scheduler is free to find some other executable process to run. When/if the
sub-statement eventually becomes executable, once it gets chosen to run by
the scheduler then it continues to run atomically.
Processes
``proctype name (args) { sequence }`` where ``args`` is a list of zero
or more typed parameter declarations,
and ``sequence`` is a list of local declarations and statements.
This defines a process type called ``name`` which takes parameters defined
by ``args`` and has the behavior defined by ``sequence``. When invoked, it
runs as a distinct concurrent process. Processes can be invoked explicitly
by an existing process using the ``run`` statement,
or can be set up to start automatically.
Run
``run name (exprs)`` where ``exprs`` is a list of expressions that match
the arguments defined in the ``proctype`` declaration for ``name``.
This is executable as long as the maximum process limit has not been reached,
and immediately starts the process running.
It is an atomic statement.
Inlining
``inline name (names) { sequence }`` where ``names`` is a list of zero or
more identifiers, and ``sequence`` is a list of declarations and statements.
Inlining does textual substitution, and does not represent some kind of
function call. An invocation ``name(texts)`` gets replaced by
``{ sequence }`` where each occurrence of an identifier in ``names`` is
replaced by the corresponding text in ``texts``. Such an invocation can
only appear in a context where a Promela statement can appear.
Promela Top-Level
-----------------
At the top-level, a Promela model is a list of declarations, much like a C
program. The Promela equivalent of ``main`` is a process called ``init`` that
has no arguments. There must be at least one Promela process set up to be running
at the start. This can be ``init``, or one or more ``proctype``\ s declared as
``active``.
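A minimal sketch, using ``init`` to start two instances of a parameterized
process:

.. code-block:: C

   proctype worker( byte id ) {
     printf( "worker %d running\n", id )
   }

   init {
     run worker( 1 );
     run worker( 2 )
   }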

eng/fv/refinement.rst Executable file

@@ -0,0 +1,559 @@
.. SPDX-License-Identifier: CC-BY-SA-4.0
.. Copyright (C) 2022 Trinity College Dublin
.. _Promela2CRefinement:
Promela to C Refinement
=======================
Promela models are more abstract than concrete C code.
A rigorous link, known as a :term:`refinement`, needs to be established
from Promela to C.
This is composed of two parts:
manual annotations in the Promela model to make its behavior easy
to identify and parse;
and a refinement defined as a YAML file that maps from
annotations to corresponding C code.
A single refinement YAML file is associated with each Promela model.
Model Annotations
-----------------
Promela ``printf`` statements are used in the models to output information that
is used by ``spin2test`` to generate test code. This information is used to
lookup keys in a YAML file that defines the mapping to C code. It uses a simple
format that makes it easy to parse and process, and is also designed to be
easily understood by a human reader. This is important because any V&V process
will require this information to be carefully assessed.
Annotation Syntax
^^^^^^^^^^^^^^^^^
Format, where :math:`N \geq 0`:
``@@@ <pid> <KEYWORD> <parameter1> ... <parameterN>``
The leading ``@@@`` is a marker that makes it easy to filter out this
information from other SPIN output.
Parameters take the following forms:
``<pid>`` Promela Process Id of ``proctype`` generating annotation
``<word>`` chunk of text containing no white space
``<name>`` Promela variable/structure/constant identifier
``<type>`` Promela type identifier
``<tid>`` OS Task/Thread/Process Id
``_`` unused argument (within containers)
Each ``<KEYWORD>`` is associated with specific forms of parameters:
.. code-block:: none
LOG <word1> ... <wordN>
NAME <name>
INIT
DEF <name> <value>
DECL <type> <name> [<value>]
DCLARRAY <type> <name> <value>
TASK <name>
SIGNAL <name> <value>
WAIT <name> <value>
STATE tid <name>
SCALAR (<name>|_) [<index>] <value>
PTR <name> <value>
STRUCT <name>
SEQ <name>
END <name>
CALL <name> <value1> ... <valueN>
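Within a model, these annotations are emitted by ordinary ``printf``
statements. A sketch, based on the Events Manager ``CALL`` annotation used in
the lookup example below (``self``, ``target``, and ``events`` are
illustrative variable names):

.. code-block:: C

   printf( "@@@ %d CALL event_send %d %d %d sendrc\n",
           _pid, self, target, events );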
Annotation Lookup
-----------------
The way that code is generated depends on the keyword in the annotation.
In particular, the keyword determines how, or if,
the YAML refinement file is looked up.
Direct Output - no lookup
(``LOG``, ``DEF``)
Keyword Refinement - lookup the ``<KEYWORD>``
(``NAME``, ``INIT``, ``SIGNAL``, ``WAIT``)
Name Refinement - lookup first parameter (considered as text)
(``TASK``, ``DECL``, ``DCLARRAY``, ``PTR``, ``CALL``, ``SCALAR``)
The same name may appear in different contexts,
and the name can be extended with a suffix of the form
``_XXX`` to lookup less frequent uses:
``_DCL`` - A variable declaration
``_PTR`` - The pointer value itself
``_FSCALAR`` - A scalar that is a struct field
``_FPTR`` - A pointer that is a struct field
Generally, the keyword or name is used to look up ``mymodel-rfn.yml`` to get a
string result. This string typically has substitution placeholders, as used by
the Python ``format()`` method for strings. The other parameters in the
annotation are then textually substituted in, to produce a segment of test code.
Specifying Refinement
---------------------
Using the terminology of the :ref:`RTEMSTestFramework`,
each Promela model is converted into a set of Test Cases,
one for each complete scenario produced by test generation.
There are a number of template files, tailored for each model,
that are used to assemble the test code sources,
along with code segments for each Promela process,
based on the annotations output for any given scenario.
The refinement mapping from annotations to code is defined in a YAML file that
describes a Python dictionary that maps a name to some C code, with placeholders
that are used to allow for substituting in actual test values.
The YAML file has entries of the following form:
.. code:: yaml
entity: |
C code line1{0}
...
C code lineM{2}
The entity is a reference to an annotation concept, which can refer to key
declarations, values, variables, types, API calls or model events. There can be
zero or more arguments in the annotations, and any occurrence of braces
enclosing a number in the C code means that the corresponding annotation
argument string is substituted in (using the Python string object ``format()``
method).
The general pattern is to work through all the annotations in order. The
code obtained by looking up the YAML file is collated into different
code-segments, one for each Promela process id (``<pid>``).
In addition to the explicit annotations generated by the Promela models, there
are two implicit annotations produced by the python refinement code. These
occur when the ``<pid>`` part of a given explicit annotation is different to the
one belonging to the immediately preceding annotation. This indicates a point in
a test generation scenario where one task is suspended and another resumes. This
generates internal annotations with keywords ``SUSPEND`` and ``WAKEUP`` which
should have entries in the refinement file to provide code to ensure that the
corresponding RTEMS tasks in the test behave accordingly.
The annotations can also be output as comments into the generated test-code, so
it is easy to check that parameters are correct, and the generated code is
correct.
If a lookup fails, a C comment line is output stating the lookup failed.
The translation continues in any case.
Lookup Example
^^^^^^^^^^^^^^
Consider the following annotation, from the Events Manager model:
``@@@ 1 CALL event_send 1 2 10 sendrc``
This uses Name refinement, so a lookup is done with ``event_send`` as the key,
which returns the following text:
.. code-block:: python3
T_log( T_NORMAL, "Calling Send(%d,%d)", mapid( ctx, {1}), {2} );
{3} = ( *ctx->send )( mapid( ctx, {1} ), {2} );
T_log( T_NORMAL, "Returned 0x%x from Send", {3} );
Arguments ``1``, ``2``, ``10``, and ``sendrc``
are then substituted to get the code:
.. code-block:: c
T_log( T_NORMAL, "Calling Send(%d,%d)", mapid( ctx, 2), 10 );
sendrc = ( *ctx->send )( mapid( ctx, 2 ), 10 );
T_log( T_NORMAL, "Returned 0x%x from Send", sendrc );
Given a Promela process id of ``1``, this code is put into a code segment
for the corresponding RTEMS task.
Annotation Refinement Guide
---------------------------
This guide describes how each annotation is processed
by the test generation software.
LOG
^^^
``LOG <word1> ... <wordN>`` (Direct Output)
Generate a call to ``T_log()`` with a message formed from the ``<word..>``
parameters.
This message will appear in the test output for certain verbosity settings.
NAME
^^^^
``NAME <name>`` (Keyword Refinement)
Looks up ``NAME`` (currently ignoring ``<name>``) and returns the resulting
text as-is, as part of the code. This code should define the name of the
testcase, if needed.
INIT
^^^^
``INIT`` (Keyword Refinement)
Lookup ``INIT`` and expect to obtain test initialisation code.
TASK
^^^^
``TASK <name>`` (Name Refinement)
Lookup ``<name>`` and return corresponding C code.
SIGNAL
^^^^^^
``SIGNAL <value>`` (Keyword Refinement)
Lookup ``SIGNAL`` and return code with ``<value>`` substituted in.
WAIT
^^^^
``WAIT <value>`` (Keyword Refinement)
Lookup ``WAIT`` and return code with ``<value>`` substituted in.
DEF
^^^
``DEF <name> <value>`` (Direct Output)
Output ``#define <name> <value>``.
DECL
^^^^
``DECL <type> <name> [<value>]`` (Name Refinement)
Lookup ``<name>_DCL`` and substitute ``<name>`` in. If ``<value>`` is
present, append ``=<value>``. Add a final semicolon. If the ``<pid>`` value
is zero, prepend ``static``. For example, the annotation
``@@@ 0 DECL unsigned nptr NULL`` in the Traceability section below refines
to ``static item * nptr = NULL;``.
DCLARRAY
^^^^^^^^
``DCLARRAY <type> <name> <value>`` (Name Refinement)
Lookup ``<name>_DCL`` and substitute ``<name>`` and ``<value>`` in. If the
``<pid>`` value is zero, prepend ``static``.
CALL
^^^^
``CALL <name> <value0> .. <valueN>`` (Name refinement, ``N`` < 6)
Lookup ``<name>`` and substitute all ``<value..>`` in.
STATE
^^^^^
``STATE <tid> <name>`` (Name Refinement)
Lookup ``<name>`` and substitute in ``<tid>``.
STRUCT
^^^^^^
``STRUCT <name>``
Declares the output of the contents of variable ``<name>``
that is itself a structure. The ``<name>`` is noted, as is the fact
that a structured value is being processed.
Should not occur if already processing a structure or a sequence.
SEQ
^^^
``SEQ <name>``
Declares the output of the contents of array variable ``<name>``.
The ``<name>`` is noted, as is the fact that an array is being processed.
Values are accumulated in a string, now initialised to empty.
Should not occur if already processing a structure or a sequence.
PTR
^^^
``PTR <name> <value>`` (Name Refinement)
If not inside a ``STRUCT``, lookup ``<name>_PTR``. Two lines of code should
be returned. If the ``<value>`` is zero, use the first line, otherwise use
the second with ``<value>`` substituted in. This is required to handle NULL
pointers.
If inside a ``STRUCT``, lookup ``<name>_FPTR``. Two lines of code should
be returned. If the ``<value>`` is zero, use the first line, otherwise use
the second with ``<value>`` substituted in. This is required to handle NULL
pointers.
SCALAR
^^^^^^
There are quite a few variations here.
``SCALAR _ <value>``
Should only be used inside a ``SEQ``. A space and ``<value>`` is appended
to the string being accumulated by this ``SEQ``.
``SCALAR <name> <value>`` (Name Refinement)
If not inside a ``STRUCT``, lookup ``<name>``, and substitute ``<value>``
into the resulting code.
If inside a ``STRUCT``, lookup ``<name>_FSCALAR`` and substitute the saved
``STRUCT`` name and ``<value>`` into the resulting code.
This should not be used inside a ``SEQ``.
``SCALAR <name> <index> <value>`` (Name Refinement)
If not inside a ``STRUCT``, lookup ``<name>``, and substitute ``<index>``
and ``<value>`` into the resulting code.
If inside a ``STRUCT``, lookup ``<name>_FSCALAR`` and substitute the saved
``STRUCT`` name and ``<value>`` into the resulting code.
This should not be used inside a ``SEQ``.
END
^^^
``END <name>``
If inside a ``STRUCT``, terminates processing a
structured variable.
If inside a ``SEQ``, lookup ``<name>_SEQ``. For each line of code obtained,
substitute in the saved sequence string.
Terminates processing a sequence/array variable.
This should not be encountered outside of a ``STRUCT`` or ``SEQ``.
SUSPEND and WAKEUP
^^^^^^^^^^^^^^^^^^
A change of Promela process id from ``oldid`` to ``newid`` has been found.
Increment a counter that tracks the number of such changes.
``SUSPEND`` (Keyword Refinement)
Lookup ``SUSPEND`` and substitute in the counter value, ``oldid`` and
``newid``.
``WAKEUP`` (Keyword Refinement)
Lookup ``WAKEUP`` and substitute in the counter value, ``newid`` and
``oldid``.
Annotation Ordering
-------------------
While most annotations occur in an order determined by any given test scenario,
there are some annotations that need to be issued first. These are, in order:
``NAME``, ``DEF``, ``DECL`` and ``DCLARRAY``, and finally, ``INIT``.
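For example, the Chains model annotations shown in the Traceability section
below begin, in order:

.. code-block:: none

   @@@ 0 NAME Chain_AutoGen
   @@@ 0 DEF MAX_SIZE 8
   @@@ 0 DCLARRAY Node memory MAX_SIZE
   @@@ 0 DECL unsigned nptr NULL
   @@@ 0 DECL Control chain
   @@@ 0 INIT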
Test Code Assembly
------------------
The code snippets produced by refining annotations are not enough.
We also need boilerplate code to set up, coordinate, and tear down the tests,
as well as providing useful C support functions.
For a model called ``mymodel`` the following files are required:
* ``mymodel.pml`` - the Promela model
* ``mymodel-rfn.yml`` - the model refinement to C test code
* ``tc-mymodel.c`` - the testcase setup C file
* ``tr-mymodel.h`` - the test-runner header file
* ``tr-mymodel.c`` - the test-runner setup C file
The following files are templates used to assemble
a single test-runner C file
for each scenario generated by the Promela model:
* ``mymodel-pre.h`` - preamble material at the start
* ``mymodel-run.h`` - test runner material
* ``mymodel-post.h`` - postamble material at the end.
The process is entirely automated:
.. code-block:: shell
tbuild all mymodel
The steps of the process are as follows:
Scenario Generation
^^^^^^^^^^^^^^^^^^^
When SPIN is invoked to find all scenarios,
it will produce a number (N) of counterexample files
with a ``.trail`` extension.
These files hold numeric data that refer to SPIN internals.
.. code-block:: none
mymodel.pml1.trail
...
mymodel.pmlN.trail
SPIN is then used to take each trail file and produce a human-readable
text file, using the same format as the SPIN simulation output.
SPIN numbers files from 1 up,
whereas the RTEMS convention is to number things,
including filenames, from zero.
SPIN is used to produce readable output in text files with a ``.spn`` extension,
with 1 subtracted from the trail file number.
This results in the following files:
.. code-block:: shell
mymodel-0.spn
...
mymodel-{N-1}.spn
Test Code Generation
^^^^^^^^^^^^^^^^^^^^
Each SPIN output file ``mymodel-I.spn``
is converted to a C test runner file ``tr-mymodel-I.c``
by concatenating the following components:
``mymodel-pre.h``
This is a fairly standard first part of an RTEMS test C file.
It is used unchanged.
refined test segments
The annotations in ``mymodel-I.spn`` are converted, in order,
into test code snippets using the refinement file ``mymodel-rfn.yml``.
Snippets are gathered into distinct code segments based on the Promela
process number reported in each annotation.
Each code segment is used to construct a C function called
``TestSegmentP()``, where ``P`` is the relevant process number.
``mymodel-post.h``
This is static code that declares the top-level RTEMS Tasks
used in the test.
These are where the code segments above get invoked.
``mymodel-run.h``
This defines the top-level C functions that implement a given test runner,
with a name like ``RtemsMyModel_Run``.
This file is not valid C, as it needs to produce a name parameterized by
the relevant scenario number. It contains Python string format substitution
placeholders that allow the relevant number to be added to the end of the
function name. The run function above actually appears in this file as
``RtemsMyModel_Run{0}``, and ``I`` is substituted in for ``{0}`` to produce
the name ``RtemsMyModel_RunI``.
In particular, it invokes ``TestSegment0()``, which contains code
generated from Promela process 0, the main model function.
This declares test variables, and initializes them.
These will generate test-runner test files as follows:
.. code-block:: none
tr-mymodel-0.c
...
tr-mymodel-{N-1}.c
In addition, the test case file ``tc-mymodel.c`` needs to have entries of the
form below added manually, for ``I`` in the range 0 to N-1:
.. code-block:: c
T_TEST_CASE( RtemsMyModelI )
{
RtemsMyModel_RunI(
...
);
}
These define the individual test cases in the model, each corresponding to precisely one SPIN scenario.
Test Code Deployment
^^^^^^^^^^^^^^^^^^^^
All files starting with ``tc-`` or ``tr-`` are copied to the
relevant testsuite directory.
At present, this is ``testsuites/validation`` at the top level in
the ``rtems`` repository.
All the names of the above files with a ``.c`` extension are added
into a YAML file that
defines the Promela generated-test sources.
At present, this
is ``spec/build/testsuites/validation/model-0.yml``
at the top-level in the ``rtems`` repository.
They appear in the YAML file under the ``source`` key:
.. code-block:: yaml
source:
- testsuites/validation/tc-mymodel.c
- testsuites/validation/tr-mymodel-0.c
...
- testsuites/validation/tr-mymodel-{N-1}.c
- testsuites/validation/ts-model-0.c
Performing Tests
^^^^^^^^^^^^^^^^
At this point, build RTEMS as normal, e.g., with ``waf``,
and the tests will get built.
The executable will be found in the designated build directory,
*(e.g.):*
``rtems/build/sparc/gr740/testsuites/validation/ts-model-0.exe``
This can be run using the RTEMS Tester
(RTEMS User Manual, Host Tools, RTEMS Tester and Run).
Both building the code and running on the tester is also automated
(see :ref:`FormalToolSetup`).
Traceability
------------
Traceability between requirements, specifications, designs, code, and tests
is a key part of any qualification/certification effort. The test generation
methodology developed here supports this in two ways, when refining an
annotation:
1. If the annotation is for a declaration of some sort, the annotation itself
is added as a comment to the output code, just before the refined declaration.
.. code-block:: c
// @@@ 0 NAME Chain_AutoGen
// @@@ 0 DEF MAX_SIZE 8
#define MAX_SIZE 8
// @@@ 0 DCLARRAY Node memory MAX_SIZE
static item memory[MAX_SIZE];
// @@@ 0 DECL unsigned nptr NULL
static item * nptr = NULL;
// @@@ 0 DECL Control chain
static rtems_chain_control chain;
2. If the annotation is for a test of some sort, a call to ``T_log()`` is
generated with the annotation as its text, just before the test code.
.. code-block:: c
T_log(T_NORMAL,"@@@ 0 INIT");
rtems_chain_initialize_empty( &chain );
T_log(T_NORMAL,"@@@ 0 SEQ chain");
T_log(T_NORMAL,"@@@ 0 END chain");
show_chain( &chain, ctx->buffer );
T_eq_str( ctx->buffer, " 0" );
In addition to traceability, these also help when debugging models, refinement
files, and the resulting test code.

eng/fv/tool-setup.rst Executable file

@@ -0,0 +1,317 @@
.. SPDX-License-Identifier: CC-BY-SA-4.0
.. Copyright (C) 2022 Trinity College Dublin
.. _FormalToolSetup:
Formal Tools Setup
==================
The required formal tools consist of
the model checking software (Promela/SPIN),
and the test generation software (spin2test/testbuilder).
Installing Tools
----------------
Installing Promela/SPIN
^^^^^^^^^^^^^^^^^^^^^^^
Follow the installation instructions for Promela/Spin
at https://spinroot.com/spin/Man/README.html.
There are references there to the Spin distribution, which is now on
GitHub (https://github.com/nimble-code/Spin).
Installing Test Generation Tools
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The test generation tools are found in ``formal/promela/src``, written in
Python3, and installed using a virtual environment.
To build the tools, enter ``formal/promela/src`` and issue the
commands:
.. code:: shell
make env
. env/bin/activate
make py
The test generation tools need to be used from within this Python virtual
environment. Use the ``deactivate`` command to exit from it.
Test generation is managed by the script ``testbuilder.py``,
located at the top level of ``formal/promela/src``.
To avoid using (long) absolute pathnames,
it helps to define a suitable alias
*(e.g.)*:
.. code-block:: shell
alias tbuild='python3 /..../formal/promela/src/testbuilder.py'
This alias is used subsequently in this documentation.
To check for a successful tool build, invoke the command without any
arguments, which should result in an extended help message being displayed:
.. code-block:: shell
(env) prompt % tbuild
USAGE:
help - more details about usage and commands below
all modelname - runs clean, spin, gentests, copy, compile and run
clean modelname - remove spin, test files
archive modelname - archives spin, test files
zero - remove all tesfiles from RTEMS
spin modelname - generate spin files
gentests modelname - generate test files
copy modelname - copy test files and configuration to RTEMS
compile - compiles RTEMS tests
run - runs RTEMS tests
The tool is not yet ready for use, as it needs to be configured.
Tool Configuration
------------------
Tool configuration involves setting up a new testsuite in RTEMS, and providing
information to ``tbuild`` that tells it where to find key locations, and some
command-line arguments for some of the tools.
A template file ``testbuilder-template.yml`` is included,
and contains the following entries:
.. code-block:: yaml
# This should be specialised for your setup, as testbuilder.yml,
# located in the same directory as testbuilder.py
# All pathnames should be absolute
spin2test: <spin2test_directory>/spin2test.py
rtems: <path-to-main-rtems-directory> # rtems.git, or ..../modules/rtems/
rsb: <rsb-build_directory>/rtems/6/bin/
simulator: <path-to>/sparc-rtems6-sis
testyamldir: <rtems>/spec/build/testsuites/validation/ # directory containing <modelname>.yml
testcode: <rtems>/testsuites/validation/
testexedir: <rtems>/build/.../testsuites/validation/ # directory containing ts-<modelname>.exe
testsuite: model-0
simulatorargs: -leon3 -r s -m 2 # run command executes "<simulator> <simargs> <testexedir>/ts-<testsuite>.exe"
spinallscenarios: -DTEST_GEN -run -E -c0 -e # trail generation "spin <spinallscenarios> <model>.pml"
This template should be copied/renamed to ``testbuilder.yml``
and each entry updated as follows:
* spin2test:
This should be the absolute path to ``spin2test.py``
in the Promela sources directory.
``/.../formal/promela/src/spin2test.py``
* rtems:
This should be the absolute path to your RTEMS source directory,
with the terminating ``/``.
From ``rtems-central`` this would be:
``/.../rtems-central/modules/rtems/``
For a separate ``rtems`` installation
it would be where ``rtems.git`` was cloned.
We refer to this path below as ``<rtems>``.
* rsb:
This should be the absolute path
to your RTEMS source-builder binaries directory,
with the terminating ``/``.
From ``rtems-central`` this would be (assuming RTEMS 6):
``/.../rtems-central/modules/rsb/6/bin/``
* simulator:
This should be the absolute path to the RTEMS Tester
(see Host Tools in the RTEMS User Manual).
It defaults at present to the ``sis`` simulator:
``/.../rtems-central/modules/rsb/6/bin/sparc-rtems6-sis``
* testsuite:
This is the name for the testsuite.
Default value: ``model-0``
* testyamldir:
This should be the absolute path to where validation tests are *specified*:
``<rtems>/spec/build/testsuites/validation/``
* testcode:
This should be the absolute path to where validation test sources
are found:
``<rtems>/testsuites/validation/``
* testexedir:
This should be the absolute path to where
the model-based validation test executable
will be found:
``<rtems>/build/.../testsuites/validation/``
This will contain ``ts-<testsuite>.exe`` (e.g. ``ts-model-0.exe``)
* simulatorargs:
These are the command line arguments for the RTEMS Tester.
It defaults at present to those for the ``sis`` simulator.
``-<bsp> -r s -m <cpus>``
The first argument should be the BSP used when building RTEMS sources.
BSPs ``leon3``, ``gr712rc`` and ``gr740`` have been used.
The argument to the ``-m`` flag is the number of cores.
Possible values are: 1, 2 and 4 (BSP dependent)
Default: ``-leon3 -r s -m 2``
* spinallscenarios:
These are command line arguments for SPIN,
that ensure that all counter-examples are generated.
Default: ``-DTEST_GEN -run -E -c0 -e`` (recommended)
Testsuite Setup
^^^^^^^^^^^^^^^
The C test code generated by these tools is installed into the main ``rtems``
repository at ``testsuites/validation`` in the exact same way as other RTEMS
test code.
This means that whenever ``waf`` is used at the top level to build and/or run
tests, the formally generated code is automatically included.
This requires adding and modifying some *Specification Items*
(See Software Requirements Engineering section in this document).
To create a testsuite called ``model-0`` (say), do the following, in the
``spec/build/testsuites/validation`` directory:
* Edit ``grp.yml`` and add the following two lines into the ``links`` entry:
.. code-block:: YAML
- role: build-dependency
uid: model-0
* Copy ``validation-0.yml`` (say) to ``model-0.yml``, and change the following
entries as shown:
.. code-block:: YAML
enabled-by: RTEMS_SMP
source:
- testsuites/validation/ts-model-0.c
target: testsuites/validation/ts-model-0.exe
Then, go to the ``testsuites/validation`` directory, and copy
``ts-validation-0.c`` to ``ts-model-0.c``, and edit as follows:
* Change all occurrences of ``Validation0`` in comments to ``Model0``.
* Change ``rtems_test_name`` to ``Model0``.
Running Test Generation
-----------------------
The testbuilder takes a command as its first command-line argument. Some of
these commands require the model-name as a second argument:
Usage: ``tbuild <command> [<modelname>]``
The commands provided are:
``clean <model>``
Removes generated files.
``spin <model>``
Runs SPIN to find all scenarios. The scenarios are found in numbered files
called ``<model>N.spn``.
``gentests <model>``
Convert SPIN scenarios to test sources. Each ``<model>N.spn`` produces a numbered
test source file.
``copy <model>``
Copies the generated test files to the relevant test source directory, and
updates the relevant test configuration files.
``archive <model>``
Copies generated spn, trail, source, and test log files to an archive
sub-directory of the model directory.
``compile``
Rebuilds the test executable.
``run``
Runs tests in a simulator.
``all <model>``
Does clean, spin, gentests, copy, compile, and run.
``zero``
Removes all generated test filenames from the test configuration files, but
does NOT remove the test sources from the test source directory.
In order to generate test files, the following input files are required:
``<model>.pml``,
``<model>-rfn.yml``,
``<model>-pre.h``,
``<model>-post.h``, and
``<model>-run.h``.
In addition there may be other files
whose names have ``<model>`` embedded in them. These are included in what is
transferred to the test source directory by the copy command.
The simplest way to check test generation is set up properly is to visit one of
the models, found under ``formal/promela/models`` and execute the following
command:
.. code-block:: shell
tbuild all mymodel
This should end by generating a file ``model-0-test.log``. The output is
identical to that generated by the regular RTEMS tests, using the
*Software Test Framework* described elsewhere in this document.
Output for the Events Manager model, highly redacted:
.. code-block::
SIS - SPARC/RISCV instruction simulator 2.29, copyright Jiri Gaisler 2020
Bug-reports to jiri@gaisler.se
GR740/LEON4 emulation enabled, 4 cpus online, delta 50 clocks
Loaded ts-model-0.exe, entry 0x00000000
*** BEGIN OF TEST Model0 ***
*** TEST VERSION: 6.0.0.03337dab21e961585d323a9974c8eea6106c803d
*** TEST STATE: EXPECTED_PASS
*** TEST BUILD: RTEMS_SMP
*** TEST TOOLS: 10.3.1 20210409 (RTEMS 6, RSB 889cf95db0122bd1a6b21598569620c40ff2069d, Newlib eb03ac1)
A:Model0
S:Platform:RTEMS
...
B:RtemsModelSystemEventsMgr8
...
L:@@@ 3 CALL event_send 1 2 10 sendrc
L:Calling Send(167837697,10)
L:Returned 0x0 from Send
...
E:RtemsModelEventsMgr0:N:21:F:0:D:0.005648
Z:Model0:C:18:N:430:F:0:D:0.130464
Y:ReportHash:SHA256:5EeLdWsRd25IE-ZsS6pduLDsrD_qzB59dMU-Mg2-BDA=
*** END OF TEST Model0 ***
cpu 0 in error mode (tt = 0x80)
6927700 0000d580: 91d02000 ta 0x0


@@ -63,6 +63,20 @@ Glossary
ISVV
This term is an acronym for Independent Software Verification and Validation.
Linear Temporal Logic
This is a logic that states properties about
(possibly infinite) sequences of states.
LTL
This term is an acronym for Linear Temporal Logic.
refinement
A *refinement* is a relationship between a specification
and its implementation as code.
reification
Another term used to denote :term:`refinement`.
ReqIF
This term is an acronym for
`Requirements Interchange Format <https://www.omg.org/spec/ReqIF/About-ReqIF/>`_.


@@ -11,6 +11,7 @@ RTEMS Software Engineering (|version|)
.. topic:: Copyrights and License
| |copy| 2022 Trinity College Dublin
| |copy| 2018, 2020 embedded brains GmbH & Co. KG
| |copy| 2018, 2020 Sebastian Huber
| |copy| 1988, 2015 On-Line Applications Research Corporation (OAR)
@@ -31,11 +32,13 @@ RTEMS Software Engineering (|version|)
management
test-plan
test-framework
fv/index
build-system
release-mgmt
users-manuals
license-requirements
appendix-a
fv/appendix-fv
function_and_variable
concept
glossary