Defining what to run using recipe files

If you’ve followed the previous documentation sections, you should now be able to write exec-test tests and also avocado-instrumented tests. These tests should be found when you run avocado run /reference/to/a/test. Internally, though, these will be defined as an avocado.core.nrunner.runnable.Runnable.

This is interesting because it gives you a shortcut into what Avocado runs: you can define a Runnable yourself. Runnables can be defined using pure Python code, such as in the following Job example:

#!/usr/bin/env python3

import sys

from avocado.core.job import Job
from avocado.core.nrunner.runnable import Runnable
from avocado.core.suite import TestSuite

# an exec-test runnable consists of a runnable kind (exec-test),
# a URI (examples/tests/sleeptest.sh), followed by zero to n arguments
# and zero to m keyword arguments.
#
# During the execution, arguments are appended to the URI and keyword
# arguments are converted to environment variables.

# here, SLEEP_LENGTH becomes an environment variable
sleeptest = Runnable("exec-test", "examples/tests/sleeptest.sh", SLEEP_LENGTH="2")
# here, 'Hello World!' is appended to the uri (/usr/bin/echo)
echo = Runnable("exec-test", "/usr/bin/echo", "Hello World!")

# the execution of examples/tests/sleeptest.sh takes around 2 seconds
# and the output of the /usr/bin/echo test is available at the
# job-results/latest/test-results/exec-test-2-_usr_bin_echo/stdout file.
suite = TestSuite(name="exec-test", tests=[sleeptest, echo])

with Job(test_suites=[suite]) as j:
    sys.exit(j.run())

But they can also be defined in JSON files, which we call “runnable recipes”, such as:

{"kind": "exec-test", "uri": "/bin/sleep", "args": ["3"]}

Runnable recipe format

While it should be somewhat easy to see the similarities between the fields in the avocado.core.nrunner.runnable.Runnable structure and the runnable recipe JSON data, Avocado actually ships with a schema that defines the exact format of a runnable recipe:

{
    "$schema": "https://json-schema.org/draft/2020-12/schema",
    "$id": "https://avocado-project.org/runnable-recipe.schema.json",
    "title": "runnable-recipe",
    "description": "Runnable serialized in a JSON based recipe file",
    "type": "object",
    "properties": {
        "kind": {
            "description": "The kind of runnable, which should be matched to a capable runner",
            "type": "string"
        },
        "uri": {
            "description": "The main reference to what needs to be run. This is free form, but commonly set to the path to a file containing the test or being the test, or an actual URI with multiple parts",
            "type": "string"
        },
        "args": {
            "description": "Sequence of arguments to be interpreted by the runner.  For instance, exec-test turns these into positional command line arguments",
            "type": "array",
            "items": {
                "type": "string"
            },
            "uniqueItems": false
        },
        "kwargs": {
            "description": "Keyword based arguments, that is, a sequence of key and values.  The exec-test, for instance, will turn these into environment variables",
            "type": "object"
        },
        "config": {
            "description": "Avocado settings that should be applied to this runnable.  At least the ones declared as CONFIGURATION_USED in the runner specific for this kind should be present",
            "type": "object"
        }
    },
    "additionalProperties": false,
    "required": [ "kind" ]
}

Avocado will attempt to enforce the JSON schema any time a Runnable is loaded from such recipe files.
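
If you want to check a recipe against that schema yourself, you can do so with the third-party jsonschema package. The snippet below is only a minimal sketch: the schema file location is an assumption that may differ across Avocado versions and installation methods, and the recipe shown is simply the sleeptest Runnable from the Python example above expressed as recipe data.

#!/usr/bin/env python3

# Minimal sketch of validating a runnable recipe against the schema
# shown above, using the third-party "jsonschema" package.
# The schema path below is an assumption -- adjust it to wherever the
# schema file lives in your Avocado installation or source tree.

import json

from jsonschema import validate

SCHEMA_PATH = "avocado/schemas/runnable-recipe.schema.json"  # assumed location

# the sleeptest runnable from the Python example above, expressed as
# recipe data: for exec-test, "args" become positional command line
# arguments and "kwargs" become environment variables
recipe = {
    "kind": "exec-test",
    "uri": "examples/tests/sleeptest.sh",
    "kwargs": {"SLEEP_LENGTH": "2"},
}

with open(SCHEMA_PATH, encoding="utf-8") as schema_file:
    schema = json.load(schema_file)

# raises jsonschema.exceptions.ValidationError if the recipe does not conform
validate(instance=recipe, schema=schema)
print("recipe conforms to the runnable recipe schema")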

Using runnable recipes as references

Avocado ships with a runnable-recipe resolver plugin, which means that you can use a runnable recipe file as a reference and get something that Avocado can run (that is, a Runnable). Example:

avocado list examples/nrunner/recipes/runnable/python_unittest.json
python-unittest selftests/unit/test.py:TestClassTestUnit.test_long_name

And just as a runnable recipe’s resolution can be listed, it can also be executed:

avocado run examples/nrunner/recipes/runnable/python_unittest.json
JOB ID     : bca087e0e5f16e62f24430602f87df67ecf093f7
JOB LOG    : ~/avocado/job-results/job-2024-04-17T11.53-bca087e/job.log
 (1/1) selftests/unit/test.py:TestClassTestUnit.test_long_name: STARTED
 (1/1) selftests/unit/test.py:TestClassTestUnit.test_long_name: PASS (0.02 s)
RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB TIME   : 2.72 s

Tip

As a possible integration strategy with existing tests, you can keep one or more runnable recipe files describing those tests and pass them to Avocado to be executed.
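
For example, a small script could turn a list of pre-existing test commands into individual recipe files that are later passed to avocado run. This is only a hypothetical sketch of that strategy; all names below are illustrative.

#!/usr/bin/env python3

# Hypothetical integration sketch: turn a list of pre-existing test
# commands into individual runnable recipe files.  All names here are
# illustrative only.

import json
from pathlib import Path

existing_tests = [
    ("smoke", "/bin/true"),
    ("echo", "/usr/bin/echo"),
]

recipes_dir = Path("recipes")
recipes_dir.mkdir(exist_ok=True)

for name, command in existing_tests:
    recipe = {"kind": "exec-test", "uri": command}
    recipe_path = recipes_dir / f"{name}.json"
    recipe_path.write_text(json.dumps(recipe), encoding="utf-8")

# the generated files can then be used as references, for instance:
#   avocado run recipes/smoke.json recipes/echo.json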

Combining multiple recipes in a single file

Avocado also ships with a slightly different resolver, called runnables-recipe. It reads a recipe file that, instead of containing a single runnable, contains (potentially) many. The file should contain nothing more than a list of runnables.

For instance, to run both /bin/true and /bin/false, you can define a file like:

[
    {"kind": "exec-test",
     "uri": "/bin/true"},

    {"kind": "exec-test",
     "uri": "/bin/false"}
]

That will be parsed by the runnables-recipe resolver, as in avocado list examples/nrunner/recipes/runnables/true_false.json:

exec-test /bin/true
exec-test /bin/false
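
The same file can also be consumed from Python, reusing the Runnable, TestSuite and Job API from the first example. The snippet below is only an illustrative sketch of how the recipe fields map to the Runnable constructor; the avocado list and avocado run commands already do this for you through the runnables-recipe resolver.

#!/usr/bin/env python3

# Illustrative sketch: build Runnables from a runnables-recipe file by
# hand, mapping the recipe fields to the Runnable constructor used in
# the first example.  Avocado's runnables-recipe resolver already does
# this when the file is used as a test reference.

import json
import sys

from avocado.core.job import Job
from avocado.core.nrunner.runnable import Runnable
from avocado.core.suite import TestSuite

with open("examples/nrunner/recipes/runnables/true_false.json",
          encoding="utf-8") as recipe_file:
    recipes = json.load(recipe_file)

runnables = [
    Runnable(r["kind"], r.get("uri"), *r.get("args", ()), **r.get("kwargs", {}))
    for r in recipes
]

suite = TestSuite(name="true-false", tests=runnables)

with Job(test_suites=[suite]) as j:
    sys.exit(j.run())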