Before the tests can be run, the Python packages pyyaml and psutil need to be installed.
This can be done using pip as follows:
pip install pyyaml psutil
Before executing tests, make sure that your build has been configured with MCRL2_ENABLE_TESTS set to YES (for example by passing -DMCRL2_ENABLE_TESTS=YES to cmake) and that the toolset has been compiled.
To conduct the tests specified in the build tree, execute:
ctest .
Specific tests can be run using ctest. Some examples are given below.
To display the list of tests, without running them, execute:
ctest -N .
To execute the 2nd up to and including the 5th test target, execute:
ctest -I2,5 .
To execute test targets whose names match “FOO”, execute:
ctest -R FOO .
For additional help, run:
ctest --help
The toolset contains several types of regression tests. These tests are automatically executed on a continuous integration service to maintain the quality of the toolset; see Regression testing with TeamCity for more information. First of all, each library contains a test directory consisting of several files in which unit and integration tests are defined using the unit testing framework of the Boost library. Existing tests give an idea of how such tests are defined.
There are also tests which execute the compiled tools themselves and check for errors in the results. Here, we have random tests that generate random (P)BES or mCRL2 specifications. See the section below for a detailed description of their setup and how to declare new tests. There are also regression tests with concrete specifications as input for issues that have been reported in the past or that have been observed during development. Regression tests are located in the tests/regression/ directory and use a similar setup as the random tests.
All tests are set up such that they can be executed locally using ctest. For example, by executing ctest -j8 in the build directory we can run all tests on eight cores; this can take more than an hour to complete. Furthermore, using the -R option we can execute only those tests that contain the given string in their name.
The random tests are implemented by the random_testing.py script. There are three types of random tests, modeled by the following classes:
ProcessTest - tests with a randomly generated process specification as input
PbesTest - tests with a randomly generated PBES as input
BesTest - tests with a randomly generated BES as input
In the constructor of each of these classes the parameters of the random generation are set.
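The structure described above can be sketched as follows. Only the class names ProcessTest, PbesTest and BesTest come from the text; the base class and all generation parameters shown here are illustrative assumptions, not the script's actual values:

```python
# Illustrative sketch of the random-test class hierarchy.  Only the names
# ProcessTest, PbesTest and BesTest appear in the documentation; every
# parameter below is a hypothetical placeholder.

class RandomTest:
    def __init__(self, name, settings):
        self.name = name          # name of the test
        self.settings = settings  # dictionary read from the YAML test file

class ProcessTest(RandomTest):
    def __init__(self, name, settings):
        super().__init__(name, settings)
        # parameters of the random process generation (hypothetical)
        self.actions = ['a', 'b', 'c']
        self.process_size = 13

class PbesTest(RandomTest):
    def __init__(self, name, settings):
        super().__init__(name, settings)
        # parameters of the random PBES generation (hypothetical)
        self.equation_count = 4

class BesTest(RandomTest):
    def __init__(self, name, settings):
        super().__init__(name, settings)
        # parameters of the random BES generation (hypothetical)
        self.equation_count = 8
```

A subclass can then adjust any of these parameters in its own constructor, which is what the steps below amount to.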
To add a random test, the following steps must be taken:
Define a new class that inherits from one of those three. There are many examples of this in the script.
Optionally change the parameters of the random generation in the constructor.
Create a test specification in the directory tests/specification. This is a graph specified in YAML format, with three sections.
The section ‘tools’ contains nodes that represent tool invocations. For example, the following specifies an invocation of the tool lps2pbes with inputs l2 (an LPS) and l5 (a modal formula), output l8 (a PBES) and with the additional command line flag --structured:
input: [l2, l5]
The section ‘nodes’ contains nodes of the types below. These nodes are the inputs and outputs of tools:
mcrl2 - a process specification in text format
lps - a linearized process specification in .lps format
lts - an LTS in .lts format
aut - an LTS in .aut format
pbesspec - a PBES in text format
pbes - a PBES in .pbes format
mcf - a modal formula in text format
text - a text file
The section ‘result’ contains a python expression that is used to evaluate the result, for example:
result = t4.value['solution'] == t6.value['solution'] == t8.value['solution'] == (not t2.value['has-deadlock'])
Note that after a tool node has been executed, the results of the execution are put in the mapping value. The command line output of the execution is stored in the attributes stdout and stderr, so if needed this can also be used to determine the result of a test. There are many examples available in the directory tests/specifications. Moreover, there is a Python script draw.py that generates a graphical representation of a test. This is useful to check whether the test has been specified correctly.
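As a rough end-to-end sketch, a specification with ‘tools’ and ‘result’ sections could be parsed with pyyaml (a stated prerequisite) and its result expression evaluated over the tool nodes. The section layout and the value/stdout/stderr attributes follow the text; the concrete keys, node names and evaluation mechanism here are assumptions:

```python
# Hypothetical sketch: parse a YAML test specification and evaluate its
# 'result' expression against the tool nodes.  The real implementation
# may differ; everything concrete below is illustrative.
import yaml

spec_text = """
tools:
  t2:
    input: [l1]
    output: [l2]
result: "t2.value['solution']"
"""

class ToolNode:
    """Holds the results of one tool invocation, cf. the text above."""
    def __init__(self, value, stdout='', stderr=''):
        self.value = value      # mapping with the results of the execution
        self.stdout = stdout    # command line output
        self.stderr = stderr

spec = yaml.safe_load(spec_text)

# After the tools have been executed, each tool node carries its results:
nodes = {'t2': ToolNode({'solution': True})}

# The 'result' section is a Python expression evaluated over the nodes.
result = eval(spec['result'], {}, nodes)
```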
Add an entry in the mapping available_tests. For example:
'lpsconstelm' : lambda name, settings: LpsConstelmTest(name, settings)
The parameter settings is the dictionary that corresponds to the YAML file of the random test. This file has to be specified during construction of the class LpsConstelmTest.
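Putting this last step together, the registration could look like the following minimal sketch; available_tests and LpsConstelmTest are the names used in the text, while the class body and sample settings are assumed stand-ins:

```python
# Minimal sketch of registering and constructing a random test by name.
# 'available_tests' and 'LpsConstelmTest' come from the documentation;
# the class body is a hypothetical stand-in for the real implementation.

class LpsConstelmTest:
    def __init__(self, name, settings):
        self.name = name
        self.settings = settings  # dictionary from the YAML test file

available_tests = {
    'lpsconstelm': lambda name, settings: LpsConstelmTest(name, settings),
}

# Constructing a test instance through the mapping:
test = available_tests['lpsconstelm']('lpsconstelm_0', {'verbose': False})
```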
The random tests can be run using commands like this:
--- run one repetition of all available tests ---
python3 random_testing.py -r1
--- run 5 repetitions of all tests that match the pattern parelm ---
python3 random_testing.py -pparelm -r5
--- run 10 repetitions of bessolve and keep the results ---
python3 random_testing.py -pbessolve -r10 -k
When a test results in Indeterminate, this means that either a timeout occurred or the memory limit was exceeded.