Internal (Core) APIs

Internal APIs that may be of interest to Avocado hackers.

Everything under avocado.core is part of the application’s infrastructure and should not be used by tests.

Extensions and Plugins can use the core libraries, but API stability is not guaranteed at any level.

Submodules

avocado.core.app module

The core Avocado application.

class avocado.core.app.AvocadoApp

Bases: object

Avocado application.

run()

avocado.core.data_dir module

Library used to let avocado tests find important paths in the system.

The general reasoning to find paths is:

  • When running in tree, don’t honor avocado.conf. Also, we get to run/display the example tests shipped in tree.
  • When avocado.conf is in /etc/avocado, or ~/.config/avocado, then honor the values there as much as possible. If they point to a location we can’t write to, use the next best location available.
  • The next best location is the default system wide one.
  • The next best location is the default user specific one.
avocado.core.data_dir.clean_tmp_files()

Try to clean the tmp directory by removing it.

This is a useful function for avocado entry points looking to clean after tests/jobs are done. If OSError is raised, silently ignore the error.
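The remove-and-ignore pattern described above can be sketched with the standard library. Note that the real function takes no arguments and operates on Avocado’s configured tmp directory; the tmp_dir parameter here is an assumption made to keep the example self-contained:

```python
import shutil

def clean_tmp_files(tmp_dir):
    """Remove tmp_dir, silently ignoring OSError (illustrative sketch).

    The tmp_dir parameter is hypothetical; the real function operates
    on Avocado's configured tmp directory instead.
    """
    try:
        shutil.rmtree(tmp_dir)
    except OSError:
        # e.g. the directory is already gone or not removable
        pass
```

Calling it twice on the same path is safe: the second removal raises OSError internally, which is swallowed.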

avocado.core.data_dir.create_job_logs_dir(base_dir=None, unique_id=None)

Create a log directory for a job, or a standalone execution of a test.

Parameters:
  • base_dir – Base log directory, if None, use value from configuration.
  • unique_id – The unique identification. If None, create one.
Return type:

str

avocado.core.data_dir.get_base_dir()

Get the most appropriate base dir.

The base dir is the parent location for most of Avocado’s other important directories.

Examples:
  • Log directory
  • Data directory
  • Tests directory
avocado.core.data_dir.get_cache_dirs()

Returns the list of cache dirs, according to configuration and convention

avocado.core.data_dir.get_data_dir()

Get the most appropriate data dir location.

The data dir is the location where any data necessary to job and test operations are located.

Examples:
  • ISO files
  • GPG files
  • VM images
  • Reference bitmaps
avocado.core.data_dir.get_datafile_path(*args)

Get a path relative to the data dir.

Parameters:args – Arguments passed to os.path.join. Ex (‘images’, ‘jeos.qcow2’)
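A minimal sketch of what such a helper typically does, assuming a hypothetical DATA_DIR constant in place of the configured data dir:

```python
import os

# Hypothetical data dir; the real value comes from get_data_dir().
DATA_DIR = "/var/lib/avocado/data"

def get_datafile_path(*args):
    """Join path components under the data dir, mirroring os.path.join."""
    return os.path.join(DATA_DIR, *args)
```

For example, `get_datafile_path('images', 'jeos.qcow2')` resolves to the `images/jeos.qcow2` file under the data dir.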
avocado.core.data_dir.get_job_results_dir(job_ref, logs_dir=None)

Get the job results directory from a job reference.

Parameters:
  • job_ref – job reference, which can be:
    • a valid path to the job results directory (in this case, it is checked that an ‘id’ file exists)
    • the path to an ‘id’ file
    • the job id, which can be ‘latest’
    • a partial job id
  • logs_dir – path to the base logs directory (optional); otherwise the value from settings is used.
avocado.core.data_dir.get_logs_dir()

Get the most appropriate log dir location.

The log dir is where we store job/test logs in general.

avocado.core.data_dir.get_test_dir()

Get the most appropriate test location.

The test location is where we store tests written with the avocado API.

The heuristics used to determine the test dir are:

  1. If an explicit test dir is set in the configuration system, it is used.
  2. If the user is running Avocado from its source code tree, the example test dir is used.
  3. The system wide test dir is used.
  4. The user default test dir (~/avocado/tests) is used.

avocado.core.data_dir.get_tmp_dir(basedir=None)

Get the most appropriate tmp dir location.

The tmp dir is where artifacts produced by the test are kept.

Examples:
  • Copies of a test suite source code
  • Compiled test suite source code

avocado.core.decorators module

avocado.core.decorators.cancel_on(exceptions=None)

Cancel the test when the decorated function raises an exception of the specified type.

Parameters:exceptions – Tuple or single exception to be assumed as test CANCEL [Exception].
Note:self.error, self.cancel and self.fail remain intact.
Note:to allow simple usage param ‘exceptions’ must not be callable.
avocado.core.decorators.deco_factory(behavior, signal)

Decorator factory.

Returns a decorator used to signal the test when the specified exception is raised.

Parameters:
  • behavior – expected test result behavior.
  • signal – delegating exception.

avocado.core.decorators.fail_on(exceptions=None)

Fail the test when the decorated function raises an exception of the specified type.

Parameters:exceptions – Tuple or single exception to be assumed as test FAIL [Exception].
Note:self.error, self.cancel and self.fail remain intact.
Note:to allow simple usage param ‘exceptions’ must not be callable.
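As an illustration of how this kind of decorator can translate exceptions into a test status, here is a self-contained sketch. TestFail is a stand-in for avocado.core.exceptions.TestFail, and unlike the real decorator this sketch always requires the parenthesized form @fail_on(...):

```python
import functools

class TestFail(AssertionError):
    """Stand-in for avocado.core.exceptions.TestFail (sketch only)."""

def fail_on(exceptions=None):
    """Turn the listed exceptions raised by the wrapped function into TestFail."""
    if exceptions is None:
        exceptions = Exception  # default: any exception fails the test
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except exceptions as detail:
                # Re-raise as the FAIL-status exception, keeping the cause
                raise TestFail(str(detail)) from detail
        return wrapper
    return decorator
```

The real implementation additionally checks whether the argument is callable so that the bare @fail_on form works, which is why the docstring notes that ‘exceptions’ must not be callable.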
avocado.core.decorators.skip(message=None)

Decorator to skip a test.

avocado.core.decorators.skipIf(condition, message=None)

Decorator to skip a test if a condition is True.

avocado.core.decorators.skipUnless(condition, message=None)

Decorator to skip a test if a condition is False.

avocado.core.dispatcher module

Extensions/plugins dispatchers

Besides the dispatchers listed here, there’s also a lower level dispatcher that these depend upon: avocado.core.settings_dispatcher.SettingsDispatcher

class avocado.core.dispatcher.CLICmdDispatcher

Bases: avocado.core.enabled_extension_manager.EnabledExtensionManager

Calls extensions on configure/run

Automatically adds all the extensions with entry points registered under ‘avocado.plugins.cli.cmd’

class avocado.core.dispatcher.CLIDispatcher

Bases: avocado.core.enabled_extension_manager.EnabledExtensionManager

Calls extensions on configure/run

Automatically adds all the extensions with entry points registered under ‘avocado.plugins.cli’

class avocado.core.dispatcher.InitDispatcher

Bases: avocado.core.enabled_extension_manager.EnabledExtensionManager

class avocado.core.dispatcher.JobPrePostDispatcher

Bases: avocado.core.enabled_extension_manager.EnabledExtensionManager

Calls extensions before Job execution

Automatically adds all the extensions with entry points registered under ‘avocado.plugins.job.prepost’

class avocado.core.dispatcher.ResultDispatcher

Bases: avocado.core.enabled_extension_manager.EnabledExtensionManager

class avocado.core.dispatcher.ResultEventsDispatcher(config)

Bases: avocado.core.enabled_extension_manager.EnabledExtensionManager

class avocado.core.dispatcher.RunnerDispatcher

Bases: avocado.core.enabled_extension_manager.EnabledExtensionManager

class avocado.core.dispatcher.SpawnerDispatcher(config=None)

Bases: avocado.core.enabled_extension_manager.EnabledExtensionManager

class avocado.core.dispatcher.VarianterDispatcher

Bases: avocado.core.enabled_extension_manager.EnabledExtensionManager

map_method_with_return(method_name, *args, **kwargs)

The same as map_method but additionally reports the list of returned values and optionally deepcopies the passed arguments

Parameters:
  • method_name – Name of the method to be called on each ext
  • args – Arguments to be passed to all called functions
  • kwargs – Keyword arguments to be passed to all called functions. If “deepcopy” is set to True in kwargs, the args and kwargs are deepcopied before being passed to each called function.
map_method_with_return_copy(method_name, *args, **kwargs)

The same as map_method_with_return, but use copy.deepcopy on each passed arg

avocado.core.enabled_extension_manager module

Extension manager with disable/ordering support

class avocado.core.enabled_extension_manager.EnabledExtensionManager(namespace, invoke_kwds=None)

Bases: avocado.core.extension_manager.ExtensionManager

enabled(extension)

Checks configuration for explicit mention of plugin in a disable list

If the configuration section or key doesn’t exist, it means no plugin is disabled.

avocado.core.exceptions module

Exception classes, useful for tests, and other parts of the framework code.

exception avocado.core.exceptions.JobBaseException

Bases: Exception

The parent of all job exceptions.

You should never raise this directly, but just in case, its status is set to FAIL.

status = 'FAIL'
exception avocado.core.exceptions.JobError

Bases: avocado.core.exceptions.JobBaseException

A generic error happened during a job execution.

status = 'ERROR'
exception avocado.core.exceptions.JobTestSuiteEmptyError

Bases: avocado.core.exceptions.JobTestSuiteError

Error raised when the creation of a test suite results in an empty suite

status = 'ERROR'
exception avocado.core.exceptions.JobTestSuiteError

Bases: avocado.core.exceptions.JobBaseException

Generic error happened during the creation of a job’s test suite

status = 'ERROR'
exception avocado.core.exceptions.JobTestSuiteReferenceResolutionError

Bases: avocado.core.exceptions.JobTestSuiteError

Test References did not produce a valid reference by any resolver

status = 'ERROR'
exception avocado.core.exceptions.OptionValidationError

Bases: Exception

An invalid option was passed to the test runner

status = 'ERROR'
exception avocado.core.exceptions.TestAbortError

Bases: avocado.core.exceptions.TestBaseException

Indicates that the test was prematurely aborted.

status = 'ERROR'
exception avocado.core.exceptions.TestBaseException

Bases: Exception

The parent of all test exceptions.

You should never raise this directly, but just in case, its status is set to FAIL.

status = 'FAIL'
exception avocado.core.exceptions.TestCancel

Bases: avocado.core.exceptions.TestBaseException

Indicates that a test was canceled.

Should be thrown when the cancel() test method is used.

status = 'CANCEL'
exception avocado.core.exceptions.TestError

Bases: avocado.core.exceptions.TestBaseException

Indicates that the test was not fully executed and an error happened.

This is the sort of exception you raise if the test was partially executed and could not complete due to a setup, configuration, or another fatal condition.

status = 'ERROR'
exception avocado.core.exceptions.TestFail

Bases: avocado.core.exceptions.TestBaseException, AssertionError

Indicates that the test failed.

TestFail inherits from AssertionError in order to keep compatibility with vanilla python unittests (they only consider failures the ones deriving from AssertionError).

status = 'FAIL'
exception avocado.core.exceptions.TestInterruptedError

Bases: avocado.core.exceptions.TestBaseException

Indicates that the test was interrupted by the user (Ctrl+C)

status = 'INTERRUPTED'
exception avocado.core.exceptions.TestNotFoundError

Bases: avocado.core.exceptions.TestBaseException

Indicates that the test was not found in the test directory.

status = 'ERROR'
exception avocado.core.exceptions.TestSetupFail

Bases: avocado.core.exceptions.TestBaseException

Indicates an error during a setup or cleanup procedure.

status = 'ERROR'
exception avocado.core.exceptions.TestSkipError

Bases: avocado.core.exceptions.TestBaseException

Indicates that the test is skipped.

Should be thrown when various conditions are such that the test is inappropriate. For example, inappropriate architecture, wrong OS version, program being tested does not have the expected capability (older version).

status = 'SKIP'
exception avocado.core.exceptions.TestTimeoutInterrupted

Bases: avocado.core.exceptions.TestBaseException

Indicates that the test did not finish before the timeout specified.

status = 'INTERRUPTED'
exception avocado.core.exceptions.TestWarn

Bases: avocado.core.exceptions.TestBaseException

Indicates that bad things (may) have happened, but not an explicit failure.

status = 'WARN'

avocado.core.exit_codes module

Avocado exit codes.

These codes are returned on the command line and may be used by applications that interface with (that is, run) the Avocado command line application.

Besides the main status of the command line application’s execution, these exit statuses may also give extra, although limited, information about test statuses.

avocado.core.exit_codes.AVOCADO_ALL_OK = 0

Both job and tests PASSed

avocado.core.exit_codes.AVOCADO_FAIL = 4

Something else went wrong and avocado failed (or crashed). Commonly used on command line validation errors.

avocado.core.exit_codes.AVOCADO_GENERIC_CRASH = -1

Avocado generic crash

avocado.core.exit_codes.AVOCADO_JOB_FAIL = 2

Something went wrong with an Avocado Job execution, usually by an explicit avocado.core.exceptions.JobError exception.

avocado.core.exit_codes.AVOCADO_JOB_INTERRUPTED = 8

The job was explicitly interrupted. Usually this means that a user hit CTRL+C while the job was still running.

avocado.core.exit_codes.AVOCADO_TESTS_FAIL = 1

Job went fine, but some tests FAILed or ERRORed
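Since the values above are powers of two, a caller can treat a non-zero exit status as a set of bit flags. The helper below is a sketch of how a wrapping script might decode them; it is not part of Avocado itself:

```python
# Constants copied from avocado.core.exit_codes
AVOCADO_ALL_OK = 0
AVOCADO_TESTS_FAIL = 1
AVOCADO_JOB_FAIL = 2
AVOCADO_FAIL = 4
AVOCADO_JOB_INTERRUPTED = 8

def describe_exit(rc):
    """Map an Avocado exit status to human-readable flags (sketch)."""
    if rc == AVOCADO_ALL_OK:
        return ["all ok"]
    flags = []
    if rc & AVOCADO_TESTS_FAIL:
        flags.append("tests failed")
    if rc & AVOCADO_JOB_FAIL:
        flags.append("job failed")
    if rc & AVOCADO_FAIL:
        flags.append("avocado failed")
    if rc & AVOCADO_JOB_INTERRUPTED:
        flags.append("job interrupted")
    return flags
```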

avocado.core.extension_manager module

Base extension manager

This is a mix of stevedore-like APIs and behavior, with Avocado’s own look and feel.

class avocado.core.extension_manager.Extension(name, entry_point, plugin, obj)

Bases: object

This is a verbatim copy from the stevedore.extension class with the same name

class avocado.core.extension_manager.ExtensionManager(namespace, invoke_kwds=None)

Bases: object

NAMESPACE_PREFIX = 'avocado.plugins.'

Default namespace prefix for Avocado extensions

enabled(extension)

Checks if a plugin is enabled

Sub classes can change this implementation to determine their own criteria.

fully_qualified_name(extension)

Returns the Avocado fully qualified plugin name

Parameters:extension (Extension) – an Extension instance
map_method(method_name, *args)

Maps method_name on each extension in case the extension has the attr

Parameters:
  • method_name – Name of the method to be called on each ext
  • args – Arguments to be passed to all called functions
map_method_with_return(method_name, *args, **kwargs)

The same as map_method but additionally reports the list of returned values and optionally deepcopies the passed arguments

Parameters:
  • method_name – Name of the method to be called on each ext
  • args – Arguments to be passed to all called functions
  • kwargs – Keyword arguments to be passed to all called functions. If “deepcopy” is set to True in kwargs, the args and kwargs are deepcopied before being passed to each called function.
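The behavior described above can be sketched with a toy manager. MiniExtensionManager is a hypothetical stand-in; the real class dispatches to loaded plugin extensions rather than plain objects:

```python
import copy

class MiniExtensionManager:
    """Toy version of the map_method_with_return behavior (sketch)."""

    def __init__(self, extensions):
        # Plain objects stand in for loaded plugin extensions
        self.extensions = extensions

    def map_method_with_return(self, method_name, *args, **kwargs):
        use_copy = kwargs.pop("deepcopy", False)
        results = []
        for ext in self.extensions:
            method = getattr(ext, method_name, None)
            # Only call extensions that actually have the attribute
            if callable(method):
                if use_copy:
                    # Each extension gets its own deep copy of the arguments
                    results.append(method(*copy.deepcopy(args),
                                          **copy.deepcopy(kwargs)))
                else:
                    results.append(method(*args, **kwargs))
        return results
```

Extensions lacking the method are silently skipped, and with deepcopy=True no extension can mutate the arguments seen by the others.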
names()

Returns the names of the discovered extensions

This differs from stevedore.extension.ExtensionManager.names() in that it returns names in a predictable order, by using standard sorted().

plugin_type()

Subset of entry points namespace for this dispatcher

Given an entry point avocado.plugins.foo, the plugin type is foo. If the entry point does not conform to the Avocado standard prefix, it’s returned unchanged.
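The prefix-stripping rule can be sketched as:

```python
# Matches ExtensionManager.NAMESPACE_PREFIX documented above
NAMESPACE_PREFIX = "avocado.plugins."

def plugin_type(namespace):
    """Return the entry point namespace with the Avocado prefix stripped."""
    if namespace.startswith(NAMESPACE_PREFIX):
        return namespace[len(NAMESPACE_PREFIX):]
    # Non-conforming namespaces are returned unchanged
    return namespace
```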

settings_section()

Returns the config section name for the plugin type handled by itself

avocado.core.job module

Job module - describes a sequence of automated test operations.

class avocado.core.job.Job(config=None, test_suites=None)

Bases: object

A Job is a set of operations performed on a test machine.

Most of the time, we are interested in simply running tests, along with setup operations and event recording.

A job has multiple test suites attached to it. Please keep in mind that when creating jobs from the constructor (Job()), we are assuming that you would like to have control of the test suites and you are going to build your own TestSuites.

If you would like help creating the job’s test_suites from the config provided, please use the Job.from_config() method, and we will do our best to create the test suites.

So, basically, as described we have two “main ways” to create a job:

  1. Automatic discovery, using from_config() method:

    job = Job.from_config(job_config=job_config,
                          suites_configs=[suite_cfg1, suite_cfg2])
    
  2. Manual or Custom discovery, using the constructor:

    job = Job(config=config,
              test_suites=[suite1, suite2, suite3])
    

Creates an instance of Job class.

Note that config and test_suites are optional; if not passed, you need to set them before running your tests, otherwise nothing will run. If you need any help to create the test_suites from the config, then use the Job.from_config() method.

Parameters:
  • config (dict) – the job configuration, usually set by command line options and argument parsing
  • test_suites (list) – A list with TestSuite objects. If None, the job will have an empty list, and you can add suites after init by accessing job.test_suites.
cleanup()

Cleanup the temporary job handlers (dirs, global setting, …)

create_test_suite()
classmethod from_config(job_config, suites_configs=None)

Helper method to create a job from config dicts.

This is different from the Job() initialization because here we are assuming that you need some help to build the test suites. Avocado will try to resolve tests based on the configuration information instead of assuming pre-populated test suites.

Keep in mind that here we are going to replace the suite.name with a counter.

If you need to create a custom Job with your own TestSuites, please use the Job() constructor instead of this method.

Parameters:
  • job_config (dict) – A config dict to be used on this job and also as a ‘global’ config for each test suite.
  • suites_configs (list) – A list of specific config dict to be used on each test suite. Each suite config will be merged with the job_config dict. If None is passed then this job will have only one test_suite with the same config as job_config.
logdir = None

The log directory for this job, also known as the job results directory. If it’s set to None, it means that the job results directory has not yet been created.

post_tests()

Run the post tests execution hooks

By default this runs the plugins that implement the avocado.core.plugin_interfaces.JobPostTests interface.

pre_tests()

Run the pre tests execution hooks

By default this runs the plugins that implement the avocado.core.plugin_interfaces.JobPreTests interface.

render_results()

Render test results that depend on all tests having finished.

By default this runs the plugins that implement the avocado.core.plugin_interfaces.Result interface.

result_events_dispatcher
run()

Runs all job phases, returning the test execution results.

This method is supposed to be the simplified interface for jobs, that is, they run all phases of a job.

Returns:Integer with overall job status. See avocado.core.exit_codes for more information.
run_tests()

The actual test execution phase

setup()

Setup the temporary job handlers (dirs, global setting, …)

size

Job size is the sum of all test suites sizes.

test_suite

This is the first test suite of this job (deprecated).

Please, use test_suites instead.

time_elapsed = None

The total amount of time the job took from start to finish, or -1 if it has not been started by means of the run() method

time_end = None

The time at which the job has finished or -1 if it has not been started by means of the run() method.

time_start = None

The time at which the job has started or -1 if it has not been started by means of the run() method.

timeout
unique_id
avocado.core.job.register_job_options()

Register the few core options that support the job operation.

avocado.core.job_id module

avocado.core.job_id.create_unique_job_id()

Create a 40 digit hex number to be used as a job ID string. (similar to SHA1)

Returns:40 digit hex number string
Return type:str
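One plausible way to produce such an ID with the standard library (a sketch; the real implementation may derive the hash differently):

```python
import hashlib
import uuid

def create_unique_job_id():
    """Return a 40-digit hex string, like a SHA1 digest (sketch)."""
    # Hash random bytes so the ID is both unique and SHA1-shaped
    return hashlib.sha1(uuid.uuid4().bytes).hexdigest()
```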

avocado.core.jobdata module

Record/retrieve job information

avocado.core.jobdata.get_variants_path(resultsdir)

Retrieves the variants path from the results directory.

avocado.core.jobdata.record(job, cmdline=None)

Records all required job information.

avocado.core.jobdata.retrieve_cmdline(resultsdir)

Retrieves the job command line from the results directory.

avocado.core.jobdata.retrieve_config(resultsdir)

Retrieves the job settings from the results directory.

avocado.core.jobdata.retrieve_job_config(resultsdir)

Retrieves the job config from the results directory.

avocado.core.jobdata.retrieve_pwd(resultsdir)

Retrieves the job pwd from the results directory.

avocado.core.jobdata.retrieve_references(resultsdir)

Retrieves the job test references from the results directory.

avocado.core.loader module

Test loader module.

class avocado.core.loader.AccessDeniedPath

Bases: object

Dummy object to represent a reference pointing to an inaccessible path

class avocado.core.loader.BrokenSymlink

Bases: object

Dummy object to represent a reference pointing to a BrokenSymlink path

class avocado.core.loader.DiscoverMode

Bases: enum.Enum

An enumeration.

ALL = <object object>

All tests (including broken ones)

AVAILABLE = <object object>

Available tests (for listing purposes)

DEFAULT = <object object>

Show default tests (for execution)

class avocado.core.loader.ExternalLoader(config, extra_params)

Bases: avocado.core.loader.TestLoader

External-runner loader class

discover(reference, which_tests=<DiscoverMode.DEFAULT: <object object>>)
Parameters:
  • reference – arguments passed to the external_runner
  • which_tests (DiscoverMode) – Limit tests to be displayed
Returns:

list of matching tests

static get_decorator_mapping()

Get label mapping for display in test listing.

Returns:Dict {TestClass: decorator function}
static get_type_label_mapping()

Get label mapping for display in test listing.

Returns:Dict {TestClass: ‘TEST_LABEL_STRING’}
name = 'external'
class avocado.core.loader.FileLoader(config, extra_params)

Bases: avocado.core.loader.SimpleFileLoader

Test loader class.

NOT_TEST_STR = 'Not an INSTRUMENTED (avocado.Test based), PyUNITTEST (unittest.TestCase based) or SIMPLE (executable) test'
static get_decorator_mapping()

Get label mapping for display in test listing.

Returns:Dict {TestClass: decorator function}
static get_type_label_mapping()

Get label mapping for display in test listing.

Returns:Dict {TestClass: ‘TEST_LABEL_STRING’}
name = 'file'
exception avocado.core.loader.InvalidLoaderPlugin

Bases: avocado.core.loader.LoaderError

Invalid loader plugin

exception avocado.core.loader.LoaderError

Bases: Exception

Loader exception

exception avocado.core.loader.LoaderUnhandledReferenceError(unhandled_references, plugins)

Bases: avocado.core.loader.LoaderError

Test References not handled by any resolver

class avocado.core.loader.MissingTest

Bases: object

Class representing reference which failed to be discovered

class avocado.core.loader.NotATest

Bases: object

Class representing something that is not a test

class avocado.core.loader.SimpleFileLoader(config, extra_params)

Bases: avocado.core.loader.TestLoader

Test loader class.

NOT_TEST_STR = 'Not a supported test'
discover(reference, which_tests=<DiscoverMode.DEFAULT: <object object>>)

Discover (possible) tests from a directory.

Recursively walk in a directory and find tests params. The tests are returned in alphabetic order.

Afterwards, when “allowed_test_types” is supplied, it verifies that all found tests are of the allowed type; if not, it returns None (even on a partial match).

Parameters:
  • reference – the directory path to inspect.
  • which_tests (DiscoverMode) – Limit tests to be displayed
Returns:

list of matching tests

static get_decorator_mapping()

Get label mapping for display in test listing.

Returns:Dict {TestClass: decorator function}
static get_type_label_mapping()

Get label mapping for display in test listing.

Returns:Dict {TestClass: ‘TEST_LABEL_STRING’}
name = 'file'
class avocado.core.loader.TapLoader(config, extra_params)

Bases: avocado.core.loader.SimpleFileLoader

Test Anything Protocol loader class

static get_decorator_mapping()

Get label mapping for display in test listing.

Returns:Dict {TestClass: decorator function}
static get_type_label_mapping()

Get label mapping for display in test listing.

Returns:Dict {TestClass: ‘TEST_LABEL_STRING’}
name = 'tap'
class avocado.core.loader.TestLoader(config, extra_params)

Bases: object

Base for test loader classes

discover(reference, which_tests=<DiscoverMode.DEFAULT: <object object>>)

Discover (possible) tests from a reference.

Parameters:
  • reference (str) – the reference to be inspected.
  • which_tests (DiscoverMode) – Limit tests to be displayed
Returns:

a list of tests matching the reference, as params.

static get_decorator_mapping()

Get label mapping for display in test listing.

Returns:Dict {TestClass: decorator function}
get_extra_listing()
get_full_decorator_mapping()

Allows extending the decorator-mapping after the object is initialized

get_full_type_label_mapping()

Allows extending the type-label-mapping after the object is initialized

static get_type_label_mapping()

Get label mapping for display in test listing.

Returns:Dict {TestClass: ‘TEST_LABEL_STRING’}
name = None
class avocado.core.loader.TestLoaderProxy

Bases: object

clear_plugins()
discover(references, which_tests=<DiscoverMode.DEFAULT: <object object>>, force=None)

Discover (possible) tests from test references.

Parameters:
  • references (builtin.list) – a list of test references; if [] use plugin defaults
  • which_tests (DiscoverMode) – Limit tests to be displayed
  • force – don’t raise an exception when some test references are not resolved to tests.
Returns:

A list of test factories (tuples (TestClass, test_params))

get_base_keywords()
get_decorator_mapping()
get_extra_listing()
get_type_label_mapping()
load_plugins(config)
load_test(test_factory)

Load test from the test factory.

Parameters:test_factory (tuple) – a pair of test class and parameters.
Returns:an instance of avocado.core.test.Test.
register_plugin(plugin)
avocado.core.loader.add_loader_options(parser, section='run')

avocado.core.main module

avocado.core.main.get_crash_dir()
avocado.core.main.handle_exception(*exc_info)
avocado.core.main.main()

avocado.core.nrunner module

class avocado.core.nrunner.BaseRunner(runnable)

Bases: object

Base interface for a Runner

prepare_status(status_type, additional_info=None)

Prepare a status dict with some basic information.

This will add the keyword ‘status’ and ‘time’ to all status.

Parameters:
  • status_type – The type of event (‘started’, ‘running’, ‘finished’)
  • additional_info – Any additional information that you would like to add to the dict. This must be a dict.
Return type:dict
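A sketch of the dict such a method plausibly builds; the exact keys beyond ‘status’ and ‘time’, and the time source, are assumptions:

```python
import time

def prepare_status(status_type, additional_info=None):
    """Build a status dict with 'status' and 'time' keys (sketch)."""
    status = {}
    if isinstance(additional_info, dict):
        status.update(additional_info)
    # The basic keys are always set, overriding any clashes
    status["status"] = status_type
    status["time"] = time.monotonic()
    return status
```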
run()
class avocado.core.nrunner.BaseRunnerApp(echo=<built-in function print>, prog=None, description=None)

Bases: object

Helper base class for common runner application behavior

CMD_RUNNABLE_RUN_ARGS = ((('-k', '--kind'), {'type': <class 'str'>, 'required': True, 'help': 'Kind of runnable'}), (('-u', '--uri'), {'type': <class 'str'>, 'default': None, 'help': 'URI of runnable'}), (('-a', '--arg'), {'action': 'append', 'default': [], 'help': 'Simple arguments to runnable'}), (('kwargs',), {'default': [], 'type': <function _parse_key_val>, 'nargs': '*', 'metavar': 'KEY_VAL', 'help': 'Keyword (key=val) arguments to runnable'}))

The command line arguments to the “runnable-run” command

CMD_RUNNABLE_RUN_RECIPE_ARGS = ((('recipe',), {'type': <class 'str'>, 'help': 'Path to the runnable recipe file'}),)
CMD_STATUS_SERVER_ARGS = ((('uri',), {'type': <class 'str'>, 'help': 'URI to bind a status server to'}),)
CMD_TASK_RUN_ARGS = ((('-i', '--identifier'), {'type': <class 'str'>, 'required': True, 'help': 'Task unique identifier'}), (('-s', '--status-uri'), {'action': 'append', 'default': None, 'help': 'URIs of status services to report to'}), (('-k', '--kind'), {'type': <class 'str'>, 'required': True, 'help': 'Kind of runnable'}), (('-u', '--uri'), {'type': <class 'str'>, 'default': None, 'help': 'URI of runnable'}), (('-a', '--arg'), {'action': 'append', 'default': [], 'help': 'Simple arguments to runnable'}), (('kwargs',), {'default': [], 'type': <function _parse_key_val>, 'nargs': '*', 'metavar': 'KEY_VAL', 'help': 'Keyword (key=val) arguments to runnable'}))
CMD_TASK_RUN_RECIPE_ARGS = ((('recipe',), {'type': <class 'str'>, 'help': 'Path to the task recipe file'}),)
PROG_DESCRIPTION = ''

The description of the command line application given to the command line parser

PROG_NAME = ''

The name of the command line application given to the command line parser

RUNNABLE_KINDS_CAPABLE = {}

The types of runnables that this runner can handle. Dictionary key is a name, and value is a class that inherits from BaseRunner

command_capabilities(_)

Outputs capabilities, including runnables and commands

The output is intended to be consumed by upper layers of Avocado, such as the Job layer selecting the right runner script to handle a runnable of a given kind, or identifying if a runner script has a given feature (as implemented by a command).

command_runnable_run(args)

Runs a runnable definition from arguments

This defines a Runnable instance purely from the command line arguments, then selects a suitable Runner, and runs it.

Parameters:args (dict) – parsed command line arguments turned into a dictionary
command_runnable_run_recipe(args)

Runs a runnable definition from a recipe

Parameters:args (dict) – parsed command line arguments turned into a dictionary
command_task_run(args)

Runs a task from arguments

Parameters:args (dict) – parsed command line arguments turned into a dictionary
command_task_run_recipe(args)

Runs a task from a recipe

Parameters:args (dict) – parsed command line arguments turned into a dictionary
get_capabilities()

Returns the runner capabilities, including runnables and commands

This can be used by higher level tools, such as the entity spawning runners, to know which runner can be used to handle each runnable type.

Return type:dict
get_commands()

Return the command names, as seen on the command line application

A command is created for every method whose name starts with “command_”; the rest of the method name, with underscores replaced by dashes, becomes the command name. So, a method named “command_foo_bar” will be a command available on the command line as “foo-bar”.

Return type:list
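The name-mangling rule can be sketched with a toy application class (MiniRunnerApp and its command methods are hypothetical):

```python
class MiniRunnerApp:
    """Toy app showing the command_* to command-name mapping (sketch)."""

    def command_runnable_run(self, args):
        pass

    def command_task_run_recipe(self, args):
        pass

    def get_commands(self):
        prefix = "command_"
        # Strip the prefix and turn underscores into dashes
        return [name[len(prefix):].replace("_", "-")
                for name in dir(self)
                if name.startswith(prefix) and callable(getattr(self, name))]
```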
get_runner_from_runnable(runnable)

Returns a runner that is suitable to run the given runnable

Return type:instance of class inheriting from BaseRunner
Raises:ValueError if the runnable is not supported
run()

Runs the application by finding a suitable command method to call

class avocado.core.nrunner.ExecRunner(runnable)

Bases: avocado.core.nrunner.BaseRunner

Runner for standalone executables with or without arguments

Runnable attributes usage:

  • uri: path to a binary to be executed as another process
  • args: arguments to be given on the command line to the binary given by path
  • kwargs: key=val to be set as environment variables to the process
run()
class avocado.core.nrunner.ExecTestRunner(runnable)

Bases: avocado.core.nrunner.ExecRunner

Runner for standalone executables treated as tests

This is similar in concept to the Avocado “SIMPLE” test type, in which an executable returning 0 means that a test passed, and anything else means that a test failed.

Runnable attributes usage is identical to ExecRunner

run()
class avocado.core.nrunner.NoOpRunner(runnable)

Bases: avocado.core.nrunner.BaseRunner

Sample runner that performs no action before reporting FINISHED status

Runnable attributes usage:

  • uri: not used
  • args: not used
run()
class avocado.core.nrunner.PythonUnittestRunner(runnable)

Bases: avocado.core.nrunner.BaseRunner

Runner for Python unittests

The runnable uri is used as the test name that the native unittest TestLoader will use to find the test. A native unittest test runner (TextTestRunner) will be used to execute the test.

Runnable attributes usage:

  • uri: a “dotted name” that can be given to Python standard
    library’s unittest.TestLoader.loadTestsFromName() method. While it’s not enforced, it’s highly recommended that this is “a test method within a test case class” within a test module. Example is: “module.Class.test_method”.
  • args: not used
  • kwargs: not used
run()
avocado.core.nrunner.RUNNERS_REGISTRY_PYTHON_CLASS = {'exec': <class 'avocado.core.nrunner.ExecRunner'>, 'exec-test': <class 'avocado.core.nrunner.ExecTestRunner'>, 'noop': <class 'avocado.core.nrunner.NoOpRunner'>, 'python-unittest': <class 'avocado.core.nrunner.PythonUnittestRunner'>}

All known runner Python classes. This is a dictionary keyed by a runnable kind, and value is a class that inherits from BaseRunner. Suitable for spawners compatible with SpawnMethod.PYTHON_CLASS

avocado.core.nrunner.RUNNERS_REGISTRY_STANDALONE_EXECUTABLE = {}

All known runner commands, capable of being used by SpawnMethod.STANDALONE_EXECUTABLE compatible spawners

avocado.core.nrunner.RUNNER_RUN_CHECK_INTERVAL = 0.01

The amount of time (in seconds) between each internal status check

avocado.core.nrunner.RUNNER_RUN_STATUS_INTERVAL = 0.5

The amount of time (in seconds) between a status report from a runner that performs its work asynchronously

class avocado.core.nrunner.Runnable(kind, uri, *args, **kwargs)

Bases: object

Describes an entity that can be executed in the context of a task

An instance of BaseRunner is the entity that will actually execute a runnable.

classmethod from_args(args)

Returns a runnable from arguments

classmethod from_recipe(recipe_path)

Returns a runnable from a runnable recipe file

Parameters:recipe_path – Path to a recipe file
Return type:instance of Runnable
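A recipe is a JSON file describing the runnable. The field names below mirror the get_dict()/get_json() representation but are an assumption here, not verified against a specific Avocado version:

```python
import json
import tempfile

# Hypothetical recipe contents (field names assumed, mirroring get_dict()):
recipe = {"kind": "exec",
          "uri": "/bin/sh",
          "args": ["-c", "exit 0"],
          "kwargs": {}}

# Write the recipe to disk, the way write_json() would:
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(recipe, f)
    recipe_path = f.name

# Runnable.from_recipe(recipe_path) would then rebuild the runnable from
# this file.
```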
get_command_args()

Returns the command arguments that adhere to the runner interface

This is useful for building ‘runnable-run’ and ‘task-run’ commands that can be executed on a command line interface.

Returns:the arguments that can be used on an avocado-runner command
Return type:list
get_dict()

Returns a dictionary representation for the current runnable

This is usually the format that will be converted to a format that can be serialized to disk, such as JSON.

Return type:collections.OrderedDict
get_json()

Returns a JSON representation

Return type:str
get_serializable_tags()
is_kind_supported_by_runner_command(runner_command)

Checks if a runner command that seems to be a good fit declares support for this runnable's kind.

pick_runner_class(runners_registry=None)

Selects a runner class from the registry based on kind.

This is related to the SpawnMethod.PYTHON_CLASS

Parameters:runners_registry (dict) – a registry with previously registered runner classes, keyed by runnable kind
Returns:

a class that inherits from BaseRunner

Raises:

ValueError if there is no runner for the runnable's kind

pick_runner_class_from_entry_point()

Selects a runner class from entry points based on kind.

This is related to the SpawnMethod.PYTHON_CLASS. This complements the RUNNERS_REGISTRY_PYTHON_CLASS on systems that have setuptools available.

Returns:a class that inherits from BaseRunner or None
pick_runner_command(runners_registry=None)

Selects a runner command based on the runnable's kind.

When a suitable runner is found, it is kept in the registry.

This utility function will look at the given task and try to find a matching runner. The matching runner probe results are kept in a registry (which is modified by this function) so that further executions take advantage of previous probes.

This is related to the SpawnMethod.STANDALONE_EXECUTABLE

Parameters:
  • runners_registry – a registry with previously found (and not found) runners keyed by runnable kind
  • runners_registry – dict
Returns:

command line arguments to execute the runner

Return type:

list of str or None

write_json(recipe_path)

Writes a file with a JSON representation (also known as a recipe)

class avocado.core.nrunner.RunnerApp(echo=<built-in function print>, prog=None, description=None)

Bases: avocado.core.nrunner.BaseRunnerApp

PROG_DESCRIPTION = 'nrunner base application'
PROG_NAME = 'avocado-runner'
RUNNABLE_KINDS_CAPABLE = {'exec': <class 'avocado.core.nrunner.ExecRunner'>, 'exec-test': <class 'avocado.core.nrunner.ExecTestRunner'>, 'noop': <class 'avocado.core.nrunner.NoOpRunner'>, 'python-unittest': <class 'avocado.core.nrunner.PythonUnittestRunner'>}
class avocado.core.nrunner.StatusEncoder(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, sort_keys=False, indent=None, separators=None, default=None)

Bases: json.encoder.JSONEncoder

Constructor for JSONEncoder, with sensible defaults.

If skipkeys is false, then it is a TypeError to attempt encoding of keys that are not str, int, float or None. If skipkeys is True, such items are simply skipped.

If ensure_ascii is true, the output is guaranteed to be str objects with all incoming non-ASCII characters escaped. If ensure_ascii is false, the output can contain non-ASCII characters.

If check_circular is true, then lists, dicts, and custom encoded objects will be checked for circular references during encoding to prevent an infinite recursion (which would cause an OverflowError). Otherwise, no such check takes place.

If allow_nan is true, then NaN, Infinity, and -Infinity will be encoded as such. This behavior is not JSON specification compliant, but is consistent with most JavaScript based encoders and decoders. Otherwise, it will be a ValueError to encode such floats.

If sort_keys is true, then the output of dictionaries will be sorted by key; this is useful for regression tests to ensure that JSON serializations can be compared on a day-to-day basis.

If indent is a non-negative integer, then JSON array elements and object members will be pretty-printed with that indent level. An indent level of 0 will only insert newlines. None is the most compact representation.

If specified, separators should be an (item_separator, key_separator) tuple. The default is (’, ‘, ‘: ‘) if indent is None and (‘,’, ‘: ‘) otherwise. To get the most compact JSON representation, you should specify (‘,’, ‘:’) to eliminate whitespace.

If specified, default is a function that gets called for objects that can’t otherwise be serialized. It should return a JSON encodable version of the object or raise a TypeError.

default(o)

Implement this method in a subclass such that it returns a serializable object for o, or calls the base implementation (to raise a TypeError).

For example, to support arbitrary iterators, you could implement default like this:

def default(self, o):
    try:
        iterable = iter(o)
    except TypeError:
        pass
    else:
        return list(iterable)
    # Let the base class default method raise the TypeError
    return JSONEncoder.default(self, o)
class avocado.core.nrunner.Task(identifier, runnable, status_uris=None, known_runners=None)

Bases: object

Wraps the execution of a runnable

While a runnable describes what to be run, and gets run by a runner, a task should be a unique entity to track its state, that is, whether it is pending, is running or has finished.

Parameters:
  • identifier
  • runnable
are_requirements_available(runners_registry=None)

Verifies if requirements needed to run this task are available.

This currently checks the runner command only, but can be expanded once the handling of other types of requirements is implemented. See BP002.

classmethod from_recipe(task_path, known_runners)

Creates a task (which contains a runnable) from a task recipe file

Parameters:
  • task_path – Path to a recipe file
  • known_runners – Dictionary with runner names and implementations
Return type:

instance of Task

get_command_args()

Returns the command arguments that adhere to the runner interface

This is useful for building ‘task-run’ commands that can be executed on a command line interface.

Returns:the arguments that can be used on an avocado-runner command
Return type:list
run()
setup_output_dir()
class avocado.core.nrunner.TaskStatusService(uri)

Bases: object

Implementation of interface that a task can use to post status updates

TODO: make the interface generic and this just one of the implementations

close()
post(status)
avocado.core.nrunner.check_tasks_requirements(tasks, runners_registry=None)

Checks if tasks have runner requirements fulfilled

Parameters:
  • tasks (list of Task) – the tasks whose runner requirements will be checked
  • runners_registry (dict) – a registry with previously found (and not found) runners keyed by a task’s runnable kind. Defaults to RUNNERS_REGISTRY_STANDALONE_EXECUTABLE
Returns:

two lists of tasks in a tuple, with the first being the tasks that pass the requirements check and the second those that fail it

Return type:

tuple of (list, list)
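The ok/missing split this function performs can be sketched with plain dictionaries (illustrative, not the actual implementation): a task passes when its runnable kind has a runner registered, and fails otherwise.

```python
# Illustrative sketch of the check_tasks_requirements split (not Avocado's
# code): tasks whose runnable kind has a known runner command pass.
def split_by_requirements(tasks, registry):
    ok, missing = [], []
    for task in tasks:
        (ok if registry.get(task["kind"]) else missing).append(task)
    return ok, missing

tasks = [{"kind": "exec"}, {"kind": "tap"}]
registry = {"exec": ["avocado-runner"]}  # hypothetical probe results
ok, missing = split_by_requirements(tasks, registry)
```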

avocado.core.nrunner.json_dumps(data)
avocado.core.nrunner.main(app_class=<class 'avocado.core.nrunner.RunnerApp'>)

avocado.core.nrunner_avocado_instrumented module

class avocado.core.nrunner_avocado_instrumented.AvocadoInstrumentedTestRunner(runnable)

Bases: avocado.core.nrunner.BaseRunner

Runner for Avocado INSTRUMENTED tests

Runnable attributes usage:

  • uri: path to a test file, combined with an Avocado.Test inherited class name and method. The test file path and class and method names should be separated by a “:”. One example of a valid uri is “mytest.py:Class.test_method”.
  • args: not used
run()
class avocado.core.nrunner_avocado_instrumented.RunnerApp(echo=<built-in function print>, prog=None, description=None)

Bases: avocado.core.nrunner.BaseRunnerApp

PROG_DESCRIPTION = 'nrunner application for avocado-instrumented tests'
PROG_NAME = 'avocado-runner-avocado-instrumented'
RUNNABLE_KINDS_CAPABLE = {'avocado-instrumented': <class 'avocado.core.nrunner_avocado_instrumented.AvocadoInstrumentedTestRunner'>}
avocado.core.nrunner_avocado_instrumented.main()

avocado.core.nrunner_tap module

class avocado.core.nrunner_tap.RunnerApp(echo=<built-in function print>, prog=None, description=None)

Bases: avocado.core.nrunner.BaseRunnerApp

PROG_DESCRIPTION = 'nrunner application for executable tests that produce TAP'
PROG_NAME = 'avocado-runner-tap'
RUNNABLE_KINDS_CAPABLE = {'tap': <class 'avocado.core.nrunner_tap.TAPRunner'>}
class avocado.core.nrunner_tap.TAPRunner(runnable)

Bases: avocado.core.nrunner.BaseRunner

Runner for standalone executables treated as TAP

When creating the Runnable, use the following attributes:

  • kind: should be ‘tap’;
  • uri: path to a binary to be executed as another process. This binary must produce TAP output.
  • args: any runnable argument will be given on the command line to the binary given by path
  • kwargs: you can specify multiple key=val pairs as kwargs. These will be set as environment variables for the process.

Example:

runnable = Runnable('tap',
                    'tests/foo.sh',
                    'bar',            # arg 1
                    DEBUG='false')    # kwargs (environment)
run()
avocado.core.nrunner_tap.main()

avocado.core.output module

Manages output and logging in avocado applications.

class avocado.core.output.FilterInfoAndLess(name='')

Bases: logging.Filter

Initialize a filter.

Initialize with the name of the logger which, together with its children, will have its events allowed through the filter. If no name is specified, allow every event.

filter(record)

Determine if the specified record is to be logged.

Is the specified record to be logged? Returns 0 for no, nonzero for yes. If deemed appropriate, the record may be modified in-place.

class avocado.core.output.FilterWarnAndMore(name='')

Bases: logging.Filter

Initialize a filter.

Initialize with the name of the logger which, together with its children, will have its events allowed through the filter. If no name is specified, allow every event.

filter(record)

Determine if the specified record is to be logged.

Is the specified record to be logged? Returns 0 for no, nonzero for yes. If deemed appropriate, the record may be modified in-place.

avocado.core.output.LOG_JOB = <Logger avocado.test (WARNING)>

Pre-defined Avocado job/test logger

avocado.core.output.LOG_UI = <Logger avocado.app (WARNING)>

Pre-defined Avocado human UI logger

class avocado.core.output.LoggingFile(prefixes=None, level=10, loggers=None)

Bases: object

File-like object that will receive messages and pass them to logging.

Constructor. Sets prefixes and which loggers are going to be used.

Parameters:
  • prefixes – Prefix per logger to be prefixed to each line.
  • level – Log level to be used when writing messages.
  • loggers – Loggers into which write should be issued. (list)
add_logger(logger, prefix='')
flush()
isatty()
rm_logger(logger)
write(data)

Splits the data into individual lines and forwards them into loggers with the expected prefixes. It includes the trailing newline <lf> as well as the last partial message. Configure your logging not to add a newline <lf> automatically.

Parameters:data – Raw data (a string) that will be processed.
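A minimal stand-in for this idea (assumed behavior, not the actual LoggingFile class): complete lines are forwarded to a logger, while a trailing partial message is buffered until flushed.

```python
import logging

class LineToLogger:
    """File-like stand-in: forwards complete lines to a logger."""
    def __init__(self, logger, level=logging.INFO):
        self._logger, self._level, self._buf = logger, level, ""

    def write(self, data):
        self._buf += data
        while "\n" in self._buf:
            line, self._buf = self._buf.split("\n", 1)
            self._logger.log(self._level, line)

    def flush(self):
        if self._buf:
            self._logger.log(self._level, self._buf)
            self._buf = ""

# Capture records in-memory to show the line splitting:
records = []
logger = logging.getLogger("demo")  # hypothetical logger name
logger.setLevel(logging.INFO)
handler = logging.Handler()
handler.emit = lambda record: records.append(record.getMessage())
logger.addHandler(handler)

stream = LineToLogger(logger)
stream.write("first\nsec")   # "sec" stays buffered as a partial message
stream.write("ond\n")        # completes the second line
```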

class avocado.core.output.MemStreamHandler(stream=None)

Bases: logging.StreamHandler

Handler that stores all records in self.log (shared in all instances)

Initialize the handler.

If stream is not specified, sys.stderr is used.

emit(record)

Emit a record.

If a formatter is specified, it is used to format the record. The record is then written to the stream with a trailing newline. If exception information is present, it is formatted using traceback.print_exception and appended to the stream. If the stream has an ‘encoding’ attribute, it is used to determine how to do the output to the stream.

flush()

This is in-mem object, it does not require flushing

log = []
class avocado.core.output.Paginator

Bases: object

Paginator that uses less to display contents on the terminal.

Contains cleanup handling for when user presses ‘q’ (to quit less).

close()
flush()
write(msg)
class avocado.core.output.ProgressStreamHandler(stream=None)

Bases: logging.StreamHandler

Handler class that allows users to skip new lines on each emission.

Initialize the handler.

If stream is not specified, sys.stderr is used.

emit(record)

Emit a record.

If a formatter is specified, it is used to format the record. The record is then written to the stream with a trailing newline. If exception information is present, it is formatted using traceback.print_exception and appended to the stream. If the stream has an ‘encoding’ attribute, it is used to determine how to do the output to the stream.

avocado.core.output.STD_OUTPUT = <avocado.core.output.StdOutput object>

Allows modifying the sys.stdout/sys.stderr

class avocado.core.output.StdOutput

Bases: object

Class to modify sys.stdout/sys.stderr

close()

Enable original sys.stdout/sys.stderr and cleanup

configured

Determines if a configuration of any sort has been performed

enable_outputs()

Enable sys.stdout/sys.stderr (either with 2 streams or with paginator)

enable_paginator()

Enable paginator

enable_stderr()

Enable sys.stderr and disable sys.stdout

fake_outputs()

Replace sys.stdout/sys.stderr with in-memory-objects

print_records()

Prints all stored messages as they occurred into streams they were produced for.

records = []

List of records of stored output when stdout/stderr is disabled

avocado.core.output.TERM_SUPPORT = <avocado.core.output.TermSupport object>

Transparently handles colored terminal, when one is used

avocado.core.output.TEST_STATUS_DECORATOR_MAPPING = {'CANCEL': <bound method TermSupport.skip_str of <avocado.core.output.TermSupport object>>, 'ERROR': <bound method TermSupport.error_str of <avocado.core.output.TermSupport object>>, 'FAIL': <bound method TermSupport.fail_str of <avocado.core.output.TermSupport object>>, 'INTERRUPTED': <bound method TermSupport.interrupt_str of <avocado.core.output.TermSupport object>>, 'PASS': <bound method TermSupport.pass_str of <avocado.core.output.TermSupport object>>, 'SKIP': <bound method TermSupport.skip_str of <avocado.core.output.TermSupport object>>, 'WARN': <bound method TermSupport.warn_str of <avocado.core.output.TermSupport object>>}

A collection of mapping from test status to formatting functions to be used consistently across the various plugins

avocado.core.output.TEST_STATUS_MAPPING = {'CANCEL': '', 'ERROR': '', 'FAIL': '', 'INTERRUPTED': '', 'PASS': '', 'SKIP': '', 'WARN': ''}

A collection of mapping from test statuses to colors to be used consistently across the various plugins

class avocado.core.output.TermSupport

Bases: object

COLOR_BLUE = '\x1b[94m'
COLOR_DARKGREY = '\x1b[90m'
COLOR_GREEN = '\x1b[92m'
COLOR_RED = '\x1b[91m'
COLOR_YELLOW = '\x1b[93m'
CONTROL_END = '\x1b[0m'
ESCAPE_CODES = ['\x1b[94m', '\x1b[92m', '\x1b[93m', '\x1b[91m', '\x1b[90m', '\x1b[0m', '\x1b[1D', '\x1b[1C']

Class to help applications to colorize their outputs for terminals.

This will probe the current terminal and colorize output only if the stdout is in a tty or the terminal type is recognized.

MOVE_BACK = '\x1b[1D'
MOVE_FORWARD = '\x1b[1C'
disable()

Disable colors from the strings output by this class.

error_str(msg='ERROR', move='\x1b[1D')

Print a error string (red colored).

If the output does not support colors, just return the original string.

fail_header_str(msg)

Print a fail header string (red colored).

If the output does not support colors, just return the original string.

fail_str(msg='FAIL', move='\x1b[1D')

Print a fail string (red colored).

If the output does not support colors, just return the original string.

header_str(msg)

Print a header string (blue colored).

If the output does not support colors, just return the original string.

healthy_str(msg)

Print a healthy string (green colored).

If the output does not support colors, just return the original string.

interrupt_str(msg='INTERRUPT', move='\x1b[1D')

Print an interrupt string (red colored).

If the output does not support colors, just return the original string.

partial_str(msg)

Print a string that denotes partial progress (yellow colored).

If the output does not support colors, just return the original string.

pass_str(msg='PASS', move='\x1b[1D')

Print a pass string (green colored).

If the output does not support colors, just return the original string.

skip_str(msg='SKIP', move='\x1b[1D')

Print a skip string (yellow colored).

If the output does not support colors, just return the original string.

warn_header_str(msg)

Print a warning header string (yellow colored).

If the output does not support colors, just return the original string.

warn_str(msg='WARN', move='\x1b[1D')

Print a warning string (yellow colored).

If the output does not support colors, just return the original string.

class avocado.core.output.Throbber

Bases: object

Produces a spinner used to notify progress in the application UI.

MOVES = ['', '', '', '']
STEPS = ['-', '\\', '|', '/']
render()
avocado.core.output.add_log_handler(logger, klass=<class 'logging.StreamHandler'>, stream=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>, level=20, fmt='%(name)s: %(message)s')

Add handler to a logger.

Parameters:
  • logger – the name of a logging.Logger instance, that is, the parameter to logging.getLogger()
  • klass – Handler class (defaults to logging.StreamHandler)
  • stream – Logging stream, to be passed as an argument to klass (defaults to sys.stdout)
  • level – Log level (defaults to logging.INFO)
  • fmt – Logging format (defaults to %(name)s: %(message)s)
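What this helper does can be reproduced with standard library calls (a sketch of the equivalent steps, not Avocado's exact code; the logger name is hypothetical):

```python
import logging
import sys

# Equivalent stdlib steps: attach a stream handler with the
# "%(name)s: %(message)s" format at INFO level.
logger = logging.getLogger("avocado.example")  # hypothetical logger name
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(name)s: %(message)s"))
handler.setLevel(logging.INFO)
logger.addHandler(handler)
```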
avocado.core.output.del_last_configuration()
avocado.core.output.disable_log_handler(logger)
avocado.core.output.early_start()

Replace all outputs with in-memory handlers

avocado.core.output.log_plugin_failures(failures)

Log in the application UI failures to load a set of plugins

Parameters:failures – a list of load failures, usually coming from a avocado.core.dispatcher.Dispatcher attribute load_failures
avocado.core.output.reconfigure(args)

Adjust logging handlers accordingly to app args and re-log messages.

avocado.core.parameters module

Module related to test parameters

class avocado.core.parameters.AvocadoParam(leaves, name)

Bases: object

This is a single slice of params. It can contain multiple leaves and tries to find matching results.

Parameters:
  • leaves – this slice’s leaves
  • name – this slice’s name (identifier used in exceptions)
get_or_die(path, key)

Get a value or raise an exception if not present.

Raises:
  • NoMatchError – when there are no matches
  • KeyError – when the value is not certain (multiple matches)

iteritems()

Very basic implementation which iterates through __ALL__ params, which generates lots of duplicate entries due to inherited values.

str_leaves_variant

String with identifier and all params

class avocado.core.parameters.AvocadoParams(leaves, paths, logger_name=None)

Bases: object

Params object used to retrieve params from given path. It supports absolute and relative paths. For relative paths one can define multiple paths to search for the value. It contains compatibility wrapper to act as the original avocado Params, but by special usage you can utilize the new API. See get() docstring for details.

You can also iterate through all keys, but this can generate quite a lot of duplicate entries inherited from ancestor nodes. It shouldn’t produce false values, though.

Parameters:
  • leaves – List of TreeNode leaves defining current variant
  • paths – list of entry points
  • logger_name (str) – the name of a logger to use to record attempts to get parameters
get(key, path=None, default=None)

Retrieve the value associated with key from params.

Parameters:
  • key – Key you’re looking for
  • path – namespace (default: [‘*’])
  • default – default value when not found
Raises:KeyError – in case of multiple different values (params clash)
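The clash behavior can be shown with a small stand-in (illustrative, not Avocado's implementation): multiple different matches for the same key raise KeyError, while a single consistent value is returned.

```python
# Illustrative stand-in for the params-clash rule (not Avocado's code):
def get_param(matches, key, default=None):
    values = {m[key] for m in matches if key in m}
    if len(values) > 1:
        raise KeyError("multiple values for %r: %r" % (key, values))
    return values.pop() if values else default

# Hypothetical leaves of a variant; identical values do not clash:
leaves = [{"timeout": 60}, {"timeout": 60}, {"arch": "x86_64"}]
```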

iteritems()

Iterate through all available params and yield origin, key and value of each unique value.

objects(key, path=None)

Return the names of objects defined using a given key.

Parameters:key – The name of the key whose value lists the objects (e.g. ‘nics’).
exception avocado.core.parameters.NoMatchError

Bases: KeyError

avocado.core.parser module

Avocado application command line parsing.

class avocado.core.parser.ArgumentParser(prog=None, usage=None, description=None, epilog=None, parents=[], formatter_class=<class 'argparse.HelpFormatter'>, prefix_chars='-', fromfile_prefix_chars=None, argument_default=None, conflict_handler='error', add_help=True, allow_abbrev=True)

Bases: argparse.ArgumentParser

Class to override argparse functions

error(message: string)

Prints a usage message incorporating the message to stderr and exits.

If you override this in a subclass, it should not return – it should either exit or raise an exception.

class avocado.core.parser.FileOrStdoutAction(option_strings, dest, nargs=None, const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)

Bases: argparse.Action

Controls claiming the right to write to the application standard output

class avocado.core.parser.HintParser(filename)

Bases: object

get_resolutions()

Return a list of resolutions based on the file definitions.

validate_kind_section(kind)

Validates a specific “kind section”.

This method will raise a settings.SettingsError if any problem is found on the file.

Parameters:kind – a string with the specific section.
class avocado.core.parser.Parser

Bases: object

Class to Parse the command line arguments.

finish()

Finish the process of parsing arguments.

Side effect: set the final value on attribute config.

start()

Start parsing arguments.

At the end of this method, the support for subparsers is activated. Side effect: update attribute args (the namespace).

avocado.core.parser_common_args module

avocado.core.parser_common_args.add_tag_filter_args(parser)

avocado.core.plugin_interfaces module

class avocado.core.plugin_interfaces.CLI

Bases: avocado.core.plugin_interfaces.Plugin

Base plugin interface for adding options (non-commands) to the command line.

Plugins that want to add extra options to the core command line application or to sub commands should use the ‘avocado.plugins.cli’ namespace.

configure(parser)

Configures the command line parser with options specific to this plugin.

run(config)

Execute any action the plugin intends.

Examples of actions include activating special features upon finding that the requested command line options were set by the user.

Note: this plugin class is not intended for adding new commands, for that please use CLICmd.

class avocado.core.plugin_interfaces.CLICmd

Bases: avocado.core.plugin_interfaces.Plugin

Base plugin interface for adding new commands to the command line app.

Plugins that want to add extensions to the run command should use the ‘avocado.plugins.cli.cmd’ namespace.

configure(parser)

Lets the extension add command line options and do early configuration.

By default it will register its name as the command name and give its description as the help message.

description = None
name = None
run(config)

Entry point for actually running the command.

class avocado.core.plugin_interfaces.Init

Bases: avocado.core.plugin_interfaces.Plugin

Base plugin interface for plugins that need to initialize themselves.

initialize()

Entry point for the plugin to perform its initialization.

class avocado.core.plugin_interfaces.JobPost

Bases: avocado.core.plugin_interfaces.Plugin

Base plugin interface for adding actions after a job runs.

Plugins that want to add actions to be run after a job runs, should use the ‘avocado.plugins.job.prepost’ namespace and implement the defined interface.

post(job)

Entry point for actually running the post job action.

class avocado.core.plugin_interfaces.JobPostTests

Bases: avocado.core.plugin_interfaces.Plugin

Base plugin interface for adding actions after a job runs tests.

Plugins using this interface will run at a time equivalent to plugins using the JobPost interface, that is, at avocado.core.job.Job.post_tests(). This is because JobPost based plugins will eventually be modified to really run after the job has finished, and not just after it has run tests.

post_tests(job)

Entry point for job running actions after the tests execution.

class avocado.core.plugin_interfaces.JobPre

Bases: avocado.core.plugin_interfaces.Plugin

Base plugin interface for adding actions before a job runs.

Plugins that want to add actions to be run before a job runs, should use the ‘avocado.plugins.job.prepost’ namespace and implement the defined interface.

pre(job)

Entry point for actually running the pre job action.

class avocado.core.plugin_interfaces.JobPreTests

Bases: avocado.core.plugin_interfaces.Plugin

Base plugin interface for adding actions before a job runs tests.

This interface looks similar to JobPre, but it’s intended to be called at a very specific place, that is, between avocado.core.job.Job.create_test_suite() and avocado.core.job.Job.run_tests().

pre_tests(job)

Entry point for job running actions before tests execution.

class avocado.core.plugin_interfaces.Plugin

Bases: object

Base for all plugins.

class avocado.core.plugin_interfaces.Resolver

Bases: avocado.core.plugin_interfaces.Plugin

Base plugin interface for resolving test references into resolutions.

resolve(reference)

Resolves the given reference into a reference resolution.

Parameters:reference (str) – a specification that can eventually be resolved into a test (in the form of a avocado.core.nrunner.Runnable)
Returns:the result of the resolution process, containing the success, failure or error, along with zero or more avocado.core.nrunner.Runnable objects
Return type:avocado.core.resolver.ReferenceResolution
class avocado.core.plugin_interfaces.Result

Bases: avocado.core.plugin_interfaces.Plugin

render(result, job)

Entry point with method that renders the result.

This will usually be used to write the result to a file or directory.

Parameters:
  • result – the job result to be rendered
  • job – the job whose results will be rendered
class avocado.core.plugin_interfaces.ResultEvents

Bases: avocado.core.plugin_interfaces.JobPreTests, avocado.core.plugin_interfaces.JobPostTests

Base plugin interface for event based (stream-able) results.

Plugins that want to add actions to be run after a job runs, should use the ‘avocado.plugins.result_events’ namespace and implement the defined interface.

end_test(result, state)

Event triggered when a test finishes running.

start_test(result, state)

Event triggered when a test starts running.

test_progress(progress=False)

Interface to notify progress (or not) of the running test.

class avocado.core.plugin_interfaces.Runner

Bases: avocado.core.plugin_interfaces.Plugin

Base plugin interface for test runners.

This is the interface a job uses to drive the tests execution via compliant test runners.

NOTE: This interface is not to be confused with the internal interfaces or idiosyncrasies of the “nrunner” and “runner” test runners.

run_suite(job, test_suite)

Run one or more tests and report with test result.

Parameters:
  • job – an instance of avocado.core.job.Job.
  • test_suite – an instance of TestSuite with some tests to run.
Returns:

a set with types of test failures.

class avocado.core.plugin_interfaces.Settings

Bases: avocado.core.plugin_interfaces.Plugin

Base plugin to allow modifying settings.

Currently it only supports to extend/modify the default list of paths to config files.

adjust_settings_paths(paths)

Entry point where plugin can modify the list of configuration paths.

class avocado.core.plugin_interfaces.Spawner

Bases: avocado.core.plugin_interfaces.Plugin

Base plugin interface for spawners of tasks.

A spawner implementation will spawn a runner in its intended location and isolation model. It’s supposed to be generic enough that it can do that on the local machine, using a process as the isolation model, or in a virtual machine, using the virtual machine itself as the isolation model.

static check_task_requirements(runtime_task)

Checks if the requirements described within a task are available.

Parameters:runtime_task (avocado.core.task.runtime.RuntimeTask) – wrapper for a Task with additional runtime information
static is_task_alive(runtime_task)

Determines if a task is alive or not.

Parameters:runtime_task (avocado.core.task.runtime.RuntimeTask) – wrapper for a Task with additional runtime information
spawn_task(runtime_task)

Spawns a task and returns whether the spawning was successful.

Parameters:runtime_task (avocado.core.task.runtime.RuntimeTask) – wrapper for a Task with additional runtime information
wait_task(runtime_task)

Waits for a task to finish.

Parameters:runtime_task (avocado.core.task.runtime.RuntimeTask) – wrapper for a Task with additional runtime information
class avocado.core.plugin_interfaces.Varianter

Bases: avocado.core.plugin_interfaces.Plugin

Base plugin interface for producing test variants.

to_str(summary, variants, **kwargs)

Return human readable representation.

The summary/variants arguments accept verbosity levels where 0 means silent and the maximum is up to the plugin.

Parameters:
  • summary – How verbose summary to output (int)
  • variants – How verbose list of variants to output (int)
  • kwargs – Other free-form arguments
Return type:

str

avocado.core.references module

Test loader module.

avocado.core.references.reference_split(reference)

Splits a test reference into a path and additional info

Whether this should be used depends on the specific type of resolver: if a resolver is not expected to support multiple test references inside a given file, then this is not suitable.

Returns:(path, additional_info)
Type:(str, str or None)
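A simplified version of the split can be written as follows. This is an illustrative reimplementation, not the actual function, which may differ in details: the reference is divided at the last ":" into a path and the additional info.

```python
# Illustrative reimplementation of the reference-splitting idea
# (the real avocado.core.references.reference_split may differ):
def split_reference(reference):
    if ":" in reference:
        path, info = reference.rsplit(":", 1)
        return path, info
    return reference, None
```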

avocado.core.resolver module

Test resolver module.

class avocado.core.resolver.ReferenceResolution(reference, result, resolutions=None, info=None, origin=None)

Bases: object

Represents one complete reference resolution

Note that the reference itself may result in many resolutions, or none.

Parameters:
  • reference (str) – a specification that can eventually be resolved into a test (in the form of a avocado.core.nrunner.Runnable)
  • result (ReferenceResolutionResult) – if the complete resolution was a success, failure or error
  • resolutions (list of avocado.core.nrunner.Runnable) – the runnable definitions resulting from the resolution
  • info (str) – free form information the resolver may add
  • origin (str) – the name of the resolver that performed the resolution
class avocado.core.resolver.ReferenceResolutionAction

Bases: enum.Enum

An enumeration.

CONTINUE = <object object>

Continue to resolve the given reference

RETURN = <object object>

Stop trying to resolve the reference

class avocado.core.resolver.ReferenceResolutionResult

Bases: enum.Enum

An enumeration.

ERROR = <object object>

Internal error in the resolution process

NOTFOUND = <object object>

Given test reference was not properly resolved

SUCCESS = <object object>

Given test reference was properly resolved

class avocado.core.resolver.Resolver

Bases: avocado.core.enabled_extension_manager.EnabledExtensionManager

Main test reference resolution utility.

This performs the actual resolution according to the active resolver plugins and a resolution policy.

DEFAULT_POLICY = {<ReferenceResolutionResult.SUCCESS: <object object>>: <ReferenceResolutionAction.RETURN: <object object>>, <ReferenceResolutionResult.NOTFOUND: <object object>>: <ReferenceResolutionAction.CONTINUE: <object object>>, <ReferenceResolutionResult.ERROR: <object object>>: <ReferenceResolutionAction.CONTINUE: <object object>>}
resolve(reference)
avocado.core.resolver.check_file(path, reference, suffix='.py', type_check=<function isfile>, type_name='regular file', access_check=4, access_name='readable')
avocado.core.resolver.resolve(references, hint=None, ignore_missing=True)
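The DEFAULT_POLICY above can be read as: stop at the first resolver that reports SUCCESS, and keep trying on NOTFOUND or ERROR. A minimal sketch of that policy loop, using toy enums and resolver callables rather than Avocado's actual classes:

```python
from enum import Enum

class Result(Enum):
    SUCCESS = "success"
    NOTFOUND = "notfound"
    ERROR = "error"

class Action(Enum):
    CONTINUE = "continue"
    RETURN = "return"

# Mirrors the documented DEFAULT_POLICY: stop on the first SUCCESS,
# keep trying other resolvers on NOTFOUND or ERROR.
POLICY = {Result.SUCCESS: Action.RETURN,
          Result.NOTFOUND: Action.CONTINUE,
          Result.ERROR: Action.CONTINUE}

def resolve(reference, resolvers):
    # Ask each resolver in turn; the policy decides whether to keep going.
    outcomes = []
    for resolver in resolvers:
        result, payload = resolver(reference)
        outcomes.append((result, payload))
        if POLICY[result] is Action.RETURN:
            break
    return outcomes

# Two toy resolvers: the first never matches, the second always does.
miss = lambda ref: (Result.NOTFOUND, None)
hit = lambda ref: (Result.SUCCESS, ref)
print(resolve("examples/tests/passtest.py", [miss, hit]))
```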

avocado.core.result module

Contains the Result class, used for result accounting.

class avocado.core.result.Result(job_unique_id, job_logfile)

Bases: object

Result class, holder for job (and its tests) result information.

Creates an instance of Result.

Parameters:
  • job_unique_id – the job’s unique ID, usually from avocado.core.job.Job.unique_id
  • job_logfile – the job’s unique ID, usually from avocado.core.job.Job.logfile
check_test(state)

Called once for a test to check status and report.

Parameters:test – A dict with test internal state
end_test(state)

Called when the given test has been run.

Parameters:state (dict) – result of avocado.core.test.Test.get_state.
end_tests()

Called once after all tests are executed.

rate
start_test(state)

Called when the given test is about to run.

Parameters:state (dict) – result of avocado.core.test.Test.get_state.
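The lifecycle above (start_test and end_test called per test, end_tests called once at the end) can be sketched with a toy accounting object; the class and the state dict layout here are illustrative stand-ins, not Avocado's actual Result internals:

```python
class MiniResult:
    # Toy accounting object; the real Result tracks much more state.
    def __init__(self):
        self.tests_run = 0
        self.passed = 0

    def start_test(self, state):
        # Called when the given test is about to run.
        self.tests_run += 1

    def end_test(self, state):
        # Called when the given test has been run.
        if state.get("status") == "PASS":
            self.passed += 1

    def end_tests(self):
        # Called once after all tests are executed.
        return {"run": self.tests_run, "pass": self.passed}

result = MiniResult()
for status in ("PASS", "FAIL", "PASS"):
    state = {"status": status}  # stand-in for Test.get_state()
    result.start_test(state)
    result.end_test(state)
print(result.end_tests())
```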

avocado.core.runner module

Test runner module.

class avocado.core.runner.TestStatus(job, queue)

Bases: object

Test status handler

Parameters:
  • job – Associated job
  • queue – test message queue
early_status

Get early status

finish(proc, started, step, deadline, result_dispatcher)

Wait for the test process to finish and report its status, or an error status if the status could not be obtained by the deadline.

Parameters:
  • proc – The test’s process
  • started – Time when the test started
  • first – Delay before first check
  • step – Step between checks for the status
  • deadline – Test execution deadline
  • result_dispatcher – Result dispatcher (for test_progress notifications)
wait_for_early_status(proc, timeout)

Wait until early_status is obtained

Parameters:
  • proc – test process
  • timeout – timeout for early_state
Raises:exceptions.TestError – On timeout/error

avocado.core.runner.add_runner_failure(test_state, new_status, message)

Append runner failure to the overall test status.

Parameters:
  • test_state – Original test state (dict)
  • new_status – New test status (PASS/FAIL/ERROR/INTERRUPTED/…)
  • message – The error message

avocado.core.safeloader module

Safe (AST based) test loader module utilities

avocado.core.safeloader.DOCSTRING_DIRECTIVE_RE_RAW = '\\s*:avocado:[ \\t]+(([a-zA-Z0-9]+?[a-zA-Z0-9_:,\\=\\-\\.]*)|(r[a-zA-Z0-9]+?[a-zA-Z0-9_:,\\=\\{\\}\\"\\-\\.\\/ ]*))\\s*$'

Gets the docstring directive value from a string. Used to tweak test behavior in various ways

class avocado.core.safeloader.PythonModule(path, module='avocado', klass='Test')

Bases: object

Representation of a Python module that might contain interesting classes

By default, it uses module and class names that match Avocado instrumented tests, but it’s supposed to be agnostic enough to be used for, say, Python unittests.

Instantiates a new PythonModule representation

Parameters:
  • path (str) – path to a Python source code file
  • module (str) – the original module name from where the possibly interesting class must have been imported from
  • klass (str) – the possibly interesting class original name
add_imported_object(statement)

Keeps track of objects names and importable entities

imported_objects
is_matching_klass(klass)

Detect whether given class directly defines itself as <module>.<klass>

It can either be a <klass> that inherits from a test “symbol”, like:

`class FooTest(Test)`

Or from an <module>.<klass> symbol, like in:

`class FooTest(avocado.Test)`

Return type:bool
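The direct-inheritance check can be sketched with the ast module; this is a simplified stand-in for is_matching_klass, not its actual implementation:

```python
import ast

def inherits_avocado_test(class_node, module="avocado", klass="Test"):
    # Check each base: either a bare name ("Test") or an attribute
    # access ("avocado.Test"). Indirect inheritance is not followed.
    for base in class_node.bases:
        if isinstance(base, ast.Name) and base.id == klass:
            return True
        if (isinstance(base, ast.Attribute)
                and isinstance(base.value, ast.Name)
                and base.value.id == module
                and base.attr == klass):
            return True
    return False

tree = ast.parse("import avocado\nclass FooTest(avocado.Test):\n    pass")
cls = next(n for n in tree.body if isinstance(n, ast.ClassDef))
print(inherits_avocado_test(cls))
```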
iter_classes()

Iterate through classes and keep track of imported avocado statements

klass
klass_imports
mod
mod_imports
module
path
avocado.core.safeloader.check_docstring_directive(docstring, directive)

Checks if there’s a given directive in a given docstring

Return type:bool
avocado.core.safeloader.find_avocado_tests(path)
avocado.core.safeloader.find_class_and_methods(path, method_pattern=None, base_class=None)

Attempts to find method names in a given Python source file

Parameters:
  • path (str) – path to a Python source code file
  • method_pattern – compiled regex to match against method name
  • base_class (str or None) – only consider classes that inherit from a given base class (or classes that inherit from any class if None is given)
Returns:

an ordered dictionary with classes as keys and methods as values

Return type:

collections.OrderedDict

avocado.core.safeloader.find_python_tests(module_name, class_name, determine_match, path)

Attempts to find Python tests from source files

A Python test in this context is a method within a specific type of class (or that inherits from a specific class).

Parameters:
  • module_name (str) – the name of the module from which a class should have come from
  • class_name (str) – the name of the class that is considered to contain test methods
  • path (str) – path to a Python source code file
Returns:

tuple where the first item is a dict with class names and additional info such as method names and tags; the second item is a set of class names which look like Python tests but have been forcefully disabled.

Return type:

tuple

avocado.core.safeloader.find_python_unittests(path)
avocado.core.safeloader.get_docstring_directives(docstring)

Returns the values of the avocado docstring directives

Parameters:docstring (str) – the complete text used as documentation
Return type:builtin.list
avocado.core.safeloader.get_docstring_directives_requirements(docstring)

Returns the test requirements from docstring patterns like :avocado: requirement={}.

Return type:list
avocado.core.safeloader.get_docstring_directives_tags(docstring)

Returns the test categories based on a :avocado: tags=category docstring

Return type:dict
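A hypothetical, simplified parser gives an idea of how tag categories and key:val tags map to a dict; the real safeloader uses DOCSTRING_DIRECTIVE_RE_RAW and richer rules:

```python
import re

# Simplified ":avocado: tags=..." parser, for illustration only.
def tags_from_docstring(docstring):
    tags = {}
    for match in re.finditer(r":avocado:\s+tags=(\S+)", docstring or ""):
        for tag in match.group(1).split(","):
            # "arch:x86_64" becomes key "arch" with value "x86_64";
            # a bare tag like "net" becomes a key with no values.
            key, _, val = tag.partition(":")
            tags.setdefault(key, set())
            if val:
                tags[key].add(val)
    return tags

doc = """Test something.

:avocado: tags=net,arch:x86_64
"""
print(tags_from_docstring(doc))
```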
avocado.core.safeloader.get_methods_info(statement_body, class_tags, class_requirements)

Returns information on an Avocado instrumented test method

avocado.core.safeloader.modules_imported_as(module)

Returns a mapping of imported module names, whether aliases are used or not

The goal of this utility function is to return the name of the import as used in the rest of the module, whether an aliased import was used or not.

For code such as:

>>> import foo as bar

This function should return {“foo”: “bar”}

And for code such as:

>>> import foo

It should return {“foo”: “foo”}

Please note that only global level imports are looked at. If there are imports defined, say, inside functions or class definitions, they will not be seen by this function.

Parameters:module (_ast.Module) – module, as parsed by ast.parse()
Returns:a mapping of names {<realname>: <alias>} of modules imported
Return type:dict
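A simplified re-implementation shows the idea; like the real function, it walks only the module's top-level statements, so imports inside functions or classes are not seen:

```python
import ast

def modules_imported_as_sketch(module):
    # Record "import x as y" as {"x": "y"} and "import x" as {"x": "x"},
    # looking only at the module's top-level statements.
    mapping = {}
    for node in module.body:
        if isinstance(node, ast.Import):
            for alias in node.names:
                mapping[alias.name] = alias.asname or alias.name
    return mapping

tree = ast.parse("import foo as bar\nimport os")
print(modules_imported_as_sketch(tree))
```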
avocado.core.safeloader.statement_import_as(statement)

Returns a mapping of imported module names, whether aliases are used or not

Parameters:statement (ast.Import) – an AST import statement
Returns:a mapping of names {<realname>: <alias>} of modules imported
Return type:dict

avocado.core.settings module

This module is a new and experimental configuration handler.

This will handle both command-line args and configuration files. Settings() = configparser + argparser

Settings() is an attempt to implement part of BP001 and concentrate all default values in one place. This module will read the Avocado configuration options from many sources, in the following order:

  1. Default values: these are defined in source code. When a plugin or the core needs a setting, it calls settings.register_option() with the default value as an argument. Developers only need to register the default value once, when calling this method.
  2. User/System configuration files (/etc/avocado or ~/.avocado/): these are configured by the user, in a more “permanent” way.
  3. Command-line options, parsed at runtime: these are configured by the user, in a more “temporary” way.
exception avocado.core.settings.ConfigFileNotFound(path_list)

Bases: avocado.core.settings.SettingsError

Error thrown when the main settings file could not be found.

class avocado.core.settings.ConfigOption(namespace, help_msg, key_type=<class 'str'>, default=None, parser=None, short_arg=None, long_arg=None, positional_arg=False, choices=None, nargs=None, metavar=None, required=None, action=None)

Bases: object

action
add_argparser(parser, long_arg, short_arg=None, positional_arg=False, choices=None, nargs=None, metavar=None, required=None, action=None)

Add a command-line argparser to this option.

arg_parse_args
argparse_type
key
metavar
name_or_tags
section
set_value(value, convert=False)
value
exception avocado.core.settings.DuplicatedNamespace

Bases: avocado.core.settings.SettingsError

Raised when a namespace is already registered.

exception avocado.core.settings.NamespaceNotRegistered

Bases: avocado.core.settings.SettingsError

Raised when a namespace is not registered.

class avocado.core.settings.Settings

Bases: object

Settings is the Avocado configuration handler.

It is a simple wrapper around configparser and argparse.

Also, one object of this class could be passed as config to plugins and modules.

Basically, if you are going to have configuration options, either via config file or via command line, you should use this class. You don’t need to instantiate a new Settings object; just import it and use register_option():

    from avocado.core.settings import settings
    settings.register_option(...)

And when you need to get the current value, check your configuration for the namespace (section.key) that you registered, i.e.:

    value = config.get('a.section.with.subsections.key')

Note

Please do not pass a default value when calling get() here. If you are using an existing namespace, get() will always return a value: either the default value, or the value set by the user.

Please note that most of the methods and attributes here are private. Only public methods and attributes should be used outside this module.
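The register-then-read pattern described above can be mimicked with a minimal stand-in class; this is an illustration of the documented flow, not Avocado's actual Settings implementation:

```python
class MiniSettings:
    # Toy stand-in: register a default once, then read it back through
    # a flat "section.key" namespace, as the documented pattern does.
    def __init__(self):
        self._options = {}

    def register_option(self, section, key, default, help_msg, key_type=str):
        namespace = f"{section}.{key}"
        if namespace in self._options:
            raise ValueError(f"namespace {namespace} already registered")
        self._options[namespace] = {"default": default,
                                    "help": help_msg,
                                    "type": key_type,
                                    "value": default}

    def as_dict(self):
        return {ns: opt["value"] for ns, opt in self._options.items()}

settings = MiniSettings()
settings.register_option(section="run", key="dry_run", default=False,
                         help_msg="Only simulate the execution")
config = settings.as_dict()
print(config.get("run.dry_run"))
```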

Constructor. Tries to find the main settings files and load them.

add_argparser_to_option(namespace, parser, long_arg, short_arg=None, positional_arg=False, choices=None, nargs=None, metavar=None, required=None, action=None, allow_multiple=False)

Add a command-line argument parser to an existing option.

This method is useful to add a parser when the option is registered without any command-line argument options. You should call the “register_option()” method for the namespace before calling this method.

Arguments

namespace : str
What is the namespace of the option (section.key)
parser : argparse parser
Since you want a command-line option, you must specify the parser or parser group to which this option should be added.
long_arg : str
A long option for the command-line. i.e: --debug for debug.
short_arg : str
A short option for the command-line. i.e: -d for debug.
positional_arg : bool
Whether this option is a positional argument. Default is False.
choices : tuple
If you would like to limit the option to a few choices. i.e: (‘foo’, ‘bar’)
nargs : int or str
The number of command-line arguments that should be consumed. Can be an int, ‘?’, ‘*’ or ‘+’. For more information, see the argparse documentation.
metavar : str
String presented as available sub-commands in help; if None, section+key is used as the metavar.
required : bool
Whether this option is required on the command-line. Default is False.
action :
The basic type of action to be taken when this argument is encountered on the command line. For more information, see the argparse documentation.
allow_multiple :
Whether the same option may be available on different parsers. This is useful when the same option is available on different commands, such as “avocado run” or “avocado list”.
as_dict()

Return a dictionary with the current active settings.

This will return a dict with all parsed options (either via config file or via command-line).

as_full_dict()
as_json()

Return a JSON with the current active settings.

This will return a JSON with all parsed options (either via config file or via command-line).

merge_with_arguments(arg_parse_config)

Merge the current settings with the command-line args.

After parsing argument options, this method should be executed to produce unified settings.

Parameters:arg_parse_config – argparse.config dictionary with all command-line parsed arguments.
merge_with_configs()

Merge the current settings with the config file options.

After parsing config file options, this method should be executed to produce unified settings.

process_config_path(path)

Update list of config paths and process the given path.

register_option(section, key, default, help_msg, key_type=<class 'str'>, parser=None, positional_arg=False, short_arg=None, long_arg=None, choices=None, nargs=None, metavar=None, required=False, action=None, allow_multiple=False)

Method used to register a configuration option inside Avocado.

This should be used to register a settings option (either config file option or command-line option). This is the central point that plugins and core should use to register a new configuration option.

This method will take care of the ‘under the hood dirt’: registering the configparser option and, if desired, the argparse one too. Instead of using argparse and/or configparser directly, Avocado’s contributors should use this method.

Using this method, you always need to specify a “section”, “key”, “default” value and a “help_msg”. This will create the corresponding configuration file option for you.

For instance:

    settings.register_option(section='foo', key='bar', default='hello',
                             help_msg='this is just a test')

This will register a ‘foo.bar’ namespace inside Avocado’s internal settings, which can then be changed by the user or system configuration:

    [foo]
    bar = a different message replacing 'hello'

If you would also like to give the user the flexibility to change the value via the command-line, you should pass the other arguments.

Arguments

section : str
The configuration file section in which your option should be present. You can specify subsections with dots. i.e: run.output.json
key : str
The key name of your option inside that section.
default : typeof(key_type)
The default value of the option. It sets the option value when the key is not defined in any configuration file or via command-line. The default value should already be “processed”, i.e. it should match the type of key_type. Due to some internal limitations, the Settings module will not apply key_type to the default value.
help_msg : str
The help message that will be displayed on the command-line (-h) and in the configuration file template.
key_type : any method
The type of your option. Currently supported: int, list, str or a custom method. Default is str.
parser : argparse parser
Since you want a command-line option, you must specify the parser or parser group to which this option should be added.
positional_arg : bool
Whether this option is a positional argument. Default is False.
short_arg : str
A short option for the command-line. i.e: -d for debug.
long_arg : str
A long option for the command-line. i.e: --debug for debug.
choices : tuple
If you would like to limit the option to a few choices. i.e: (‘foo’, ‘bar’)
nargs : int or str
The number of command-line arguments that should be consumed. Can be an int, ‘?’, ‘*’ or ‘+’. For more information, see the argparse documentation.
metavar : str
String presented as available sub-commands in help; if None, section+key is used as the metavar.
required : bool
Whether this option is required on the command-line. Default is False.
action :
The basic type of action to be taken when this argument is encountered on the command line. For more information, see the argparse documentation.
allow_multiple :
Whether the same option may be available on different parsers. This is useful when the same option is available on different commands, such as “avocado run” or “avocado list”.

Note

Most of the arguments here (like parser, positional_arg, short_arg, long_arg, choices, nargs, metavar, required and action) are only necessary if you would like to add a command-line option.

update_option(namespace, value, convert=False)

Convenient method to change the option’s value.

This will update the value on Avocado internals and, if necessary, the type conversion will be performed.

For instance, if an option was registered as bool and you call:

    settings.update_option(namespace='foo.bar', value='true',
                           convert=True)

the value will be stored as True, because Avocado will look up the registered ‘key_type’ and apply it for the conversion.

This method is useful when getting values from config files, where everything is stored as strings and a conversion is needed.

Arguments

namespace : str
Your section plus your key, separated by dots. The last part of the namespace is your key. i.e: run.outputs.json.enabled (section is run.outputs.json and key is enabled)
value : any type
This is the new value to update.
convert : bool
If Avocado should try to convert the value and store it as the ‘key_type’ specified during the register. Default is False.
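The convert=True behavior can be sketched as follows; the converter table here is an assumption for illustration, not Avocado's actual coercion rules:

```python
# Assumed converter table for illustration; Avocado's real coercion
# rules may differ.
CONVERTERS = {
    bool: lambda v: str(v).lower() in ("true", "on", "1", "yes"),
    int: int,
    list: lambda v: [item.strip() for item in str(v).split(",")],
}

def convert_value(value, key_type):
    # Values already of the registered type pass through untouched;
    # everything else (typically a string from a config file) is coerced.
    if isinstance(value, key_type):
        return value
    return CONVERTERS.get(key_type, key_type)(value)

print(convert_value("true", bool), convert_value("1, 2", list))
```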
exception avocado.core.settings.SettingsError

Bases: Exception

Base settings error.

avocado.core.settings.sorted_dict(dict_object)

avocado.core.settings_dispatcher module

Settings Dispatcher

This is a special case for the dispatchers that can be found in avocado.core.dispatcher. This one deals with settings that will be read by the other dispatchers, while still being a dispatcher for configuration sources.

class avocado.core.settings_dispatcher.SettingsDispatcher

Bases: avocado.core.extension_manager.ExtensionManager

Dispatcher that allows plugins to modify settings

It’s not the standard “avocado.core.dispatcher” because that one depends on settings. This dispatcher is the bare-stevedore dispatcher, which is executed before settings are parsed.

avocado.core.streams module

avocado.core.streams.BUILTIN_STREAMS = {'app': 'application output', 'debug': 'tracebacks and other debugging info', 'early': 'early logging of other streams, including test (very verbose)', 'test': 'test output'}

Builtin special keywords to enable set of logging streams

avocado.core.streams.BUILTIN_STREAM_SETS = {'all': 'all builtin streams', 'none': 'disables regular output (leaving only errors enabled)'}

Groups of builtin streams

avocado.core.suite module

class avocado.core.suite.TestSuite(name, config=None, tests=None, job_config=None, resolutions=None)

Bases: object

classmethod from_config(config, name=None, job_config=None)

Helper method to create a TestSuite from config dicts.

This is different from the TestSuite() initialization because here we are assuming that you need some help to build the test suite. Avocado will try to resolve tests based on the configuration information instead of assuming pre-populated tests.

If you need to create a custom TestSuite, please use the TestSuite() constructor instead of this method.

Parameters:
  • config (dict) – A config dict to be used on the desired test suite.
  • name (str) – The name of the test suite. This is optional and default is a random uuid.
  • job_config (dict) – The job config dict (a global config). Use this to avoid huge configs per test suite. This is also optional.
references
run(job)

Run this test suite with the job context in mind.

Parameters:job – A avocado.core.job.Job instance.
Return type:set
runner
size

The overall length/size of this test suite.

stats

Return a statistics dict with the current tests.

status
tags_stats

Return a statistics dict with the current tests tags.
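A sketch of how per-tag statistics might be computed over a suite's tests; the dict-based test representation here is a hypothetical stand-in, not Avocado's actual test objects:

```python
from collections import Counter

def tags_stats(tests):
    # Count how many tests carry each tag.
    counter = Counter()
    for test in tests:
        for tag in test.get("tags", ()):
            counter[tag] += 1
    return dict(counter)

tests = [{"tags": {"net", "fast"}}, {"tags": {"fast"}}, {"tags": set()}]
print(sorted(tags_stats(tests).items()))
```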

test_parameters

Placeholder for test parameters.

This is related to –test-parameters command line option or (run.test_parameters).

variants
exception avocado.core.suite.TestSuiteError

Bases: Exception

class avocado.core.suite.TestSuiteStatus

Bases: enum.Enum

An enumeration.

RESOLUTION_NOT_STARTED = <object object>
TESTS_FOUND = <object object>
TESTS_NOT_FOUND = <object object>
UNKNOWN = <object object>

avocado.core.sysinfo module

class avocado.core.sysinfo.Collectible(logf)

Bases: object

Abstract class for representing collectibles by sysinfo.

readline(logdir)

Read one line of the collectible object.

Parameters:logdir – Path to a log directory.
class avocado.core.sysinfo.Command(cmd, logf=None, compress_log=False)

Bases: avocado.core.sysinfo.Collectible

Collectible command.

Parameters:
  • cmd – String with the command.
  • logf – Basename of the file where output is logged (optional).
  • compress_log – Whether to compress the output of the command.
run(logdir)

Execute the command as a subprocess and log its output in logdir.

Parameters:logdir – Path to a log directory.
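What a Command collectible does can be sketched as: run a command and store its output under the log directory. The names below are illustrative, not Avocado's internals, and compression is omitted:

```python
import pathlib
import subprocess
import tempfile

def collect_command(cmd, logdir, logf):
    # Run the command in a shell and write its stdout to logdir/logf.
    pathlib.Path(logdir).mkdir(parents=True, exist_ok=True)
    out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    target = pathlib.Path(logdir) / logf
    target.write_text(out.stdout)
    return target

logdir = tempfile.mkdtemp()
path = collect_command("echo sysinfo-sample", logdir, "echo.log")
print(path.read_text().strip())
```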
class avocado.core.sysinfo.Daemon(*args, **kwargs)

Bases: avocado.core.sysinfo.Command

Collectible daemon.

Parameters:
  • cmd – String with the daemon command.
  • logf – Basename of the file where output is logged (optional).
  • compress_log – Whether to compress the output of the command.
run(logdir)

Execute the daemon as a subprocess and log its output in logdir.

Parameters:logdir – Path to a log directory.
stop()

Stop daemon execution.

class avocado.core.sysinfo.JournalctlWatcher(logf=None)

Bases: avocado.core.sysinfo.Collectible

Track the content of systemd journal into a compressed file.

Parameters:logf – Basename of the file where output is logged (optional).
run(logdir)
class avocado.core.sysinfo.LogWatcher(path, logf=None)

Bases: avocado.core.sysinfo.Collectible

Keep track of the contents of a log file in another compressed file.

This object is normally used to track contents of the system log (/var/log/messages), and the outputs are gzipped since they can be potentially large, helping to save space.

Parameters:
  • path – Path to the log file.
  • logf – Basename of the file where output is logged (optional).
run(logdir)

Log all of the new data present in the log file.

class avocado.core.sysinfo.Logfile(path, logf=None)

Bases: avocado.core.sysinfo.Collectible

Collectible system file.

Parameters:
  • path – Path to the log file.
  • logf – Basename of the file where output is logged (optional).
run(logdir)

Copy the log file to the appropriate log dir.

Parameters:logdir – Log directory which the file is going to be copied to.
class avocado.core.sysinfo.SysInfo(basedir=None, log_packages=None, profiler=None)

Bases: object

Log different system properties at some key control points.

Includes support for a start and stop event, with daemons running in between. An event may be a job, a test, or any other event with a beginning and end.

Set sysinfo collectibles.

Parameters:
  • basedir – Base log dir where sysinfo files will be located.
  • log_packages – Whether to log system packages (optional because logging packages is a costly operation). If not given explicitly, tries to look in the config files, and if not found, defaults to False.
  • profiler – Whether to use the profiler. If not given explicitly, tries to look in the config files.
end(status='')

Logging hook called whenever a job finishes.

start()

Log all collectibles at the start of the event.

avocado.core.sysinfo.collect_sysinfo(basedir)

Collect sysinfo to a base directory.

avocado.core.tags module

Test tags utilities module

avocado.core.tags.filter_test_tags(test_suite, filter_by_tags, include_empty=False, include_empty_key=False)

Filter the existing (unfiltered) test suite based on tags

The filtering mechanism is agnostic to test type. It means that if users request filtering by tag and the specific test type does not populate the test tags, it will be considered to have empty tags.

Parameters:
  • test_suite (dict) – the unfiltered test suite
  • filter_by_tags (list of comma separated tags (['foo,bar', 'fast'])) – the list of tag sets to use as filters
  • include_empty (bool) – if true tests without tags will not be filtered out
  • include_empty_key (bool) – if true tests “keys” on key:val tags will be included in the filtered results
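The filter semantics can be illustrated with a hedged sketch: each entry in filter_by_tags is a comma-separated AND group, and a test passes if any group matches. The key:val handling described above is simplified away here:

```python
def matches_tags(test_tags, filter_by_tags, include_empty=False):
    # A test with no tags only passes when include_empty is set.
    if not test_tags:
        return include_empty
    # Each group is AND within itself; groups are OR-ed together.
    for tag_group in filter_by_tags:
        required = set(tag_group.split(","))
        if required.issubset(test_tags):
            return True
    return False

print(matches_tags({"net", "fast"}, ["net,slow", "fast"]))  # matches via "fast"
print(matches_tags(set(), ["fast"]))
```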
avocado.core.tags.filter_test_tags_runnable(runnable, filter_by_tags, include_empty=False, include_empty_key=False)

Filter the existing (unfiltered) test suite based on tags

The filtering mechanism is agnostic to test type. It means that if users request filtering by tag and the specific test type does not populate the test tags, it will be considered to have empty tags.

Parameters:
  • test_suite (dict) – the unfiltered test suite
  • filter_by_tags (list of comma separated tags (['foo,bar', 'fast'])) – the list of tag sets to use as filters
  • include_empty (bool) – if true tests without tags will not be filtered out
  • include_empty_key (bool) – if true tests “keys” on key:val tags will be included in the filtered results

avocado.core.tapparser module

class avocado.core.tapparser.TapParser(tap_io)

Bases: object

class Bailout(message)

Bases: tuple

Create new instance of Bailout(message,)

message

Alias for field number 0

class Error(message)

Bases: tuple

Create new instance of Error(message,)

message

Alias for field number 0

class Plan(count, late, skipped, explanation)

Bases: tuple

Create new instance of Plan(count, late, skipped, explanation)

count

Alias for field number 0

explanation

Alias for field number 3

late

Alias for field number 1

skipped

Alias for field number 2

class Test(number, name, result, explanation)

Bases: tuple

Create new instance of Test(number, name, result, explanation)

explanation

Alias for field number 3

name

Alias for field number 1

number

Alias for field number 0

result

Alias for field number 2

class Version(version)

Bases: tuple

Create new instance of Version(version,)

version

Alias for field number 0

parse()
parse_test(ok, num, name, directive, explanation)
class avocado.core.tapparser.TestResult

Bases: enum.Enum

An enumeration.

FAIL = 'FAIL'
PASS = 'PASS'
SKIP = 'SKIP'
XFAIL = 'XFAIL'
XPASS = 'XPASS'
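A toy parser for single TAP test lines illustrates how the Test namedtuple fields and the SKIP directive map onto a stream; the real TapParser also handles plans, bailouts, versions and more directive forms:

```python
import re

def parse_tap_line(line):
    # Illustrative parser for "ok N - name [# SKIP]" style lines only;
    # returns (number, name, result) or None for non-test lines.
    match = re.match(r"(not )?ok\s+(\d+)\s*-?\s*(.*)", line)
    if not match:
        return None
    result = "FAIL" if match.group(1) else "PASS"
    directive = re.search(r"#\s*(SKIP|TODO)", match.group(3), re.IGNORECASE)
    if directive and directive.group(1).upper() == "SKIP":
        result = "SKIP"
    name = match.group(3).split("#")[0].strip()
    return int(match.group(2)), name, result

stream = ["1..3", "ok 1 - first", "not ok 2 - second", "ok 3 - third # SKIP"]
print([parse_tap_line(line) for line in stream])
```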

avocado.core.test module

Contains the base test implementation, used as a base for the actual framework tests.

avocado.core.test.COMMON_TMPDIR_NAME = 'AVOCADO_TESTS_COMMON_TMPDIR'

Environment variable used to store the location of a temporary directory which is preserved across all tests execution (usually in one job)

class avocado.core.test.DryRunTest(*args, **kwargs)

Bases: avocado.core.test.MockingTest

Fake test which logs itself and reports as CANCEL

filename

Returns the name of the file (path) that holds the current test

setUp()

Hook method for setting up the test fixture before exercising it.

class avocado.core.test.ExternalRunnerSpec(runner, chdir=None, test_dir=None)

Bases: object

Defines the basic options used by ExternalRunner

class avocado.core.test.ExternalRunnerTest(name, params=None, base_logdir=None, job=None, external_runner=None, external_runner_argument=None)

Bases: avocado.core.test.SimpleTest

filename

Returns the name of the file (path) that holds the current test

test()

Run the test and postprocess the results

class avocado.core.test.MockingTest(*args, **kwargs)

Bases: avocado.core.test.Test

Class intended as generic substitute for avocado tests which will not be executed for some reason. This class is expected to be overridden by specific reason-oriented sub-classes.

This class substitutes other classes. Let’s just ignore the remaining arguments and only set the ones supported by avocado.Test

test()
class avocado.core.test.PythonUnittest(name, params=None, base_logdir=None, job=None, test_dir=None, python_unittest_module=None, tags=None)

Bases: avocado.core.test.ExternalRunnerTest

Python unittest test

test()

Run the test and postprocess the results

class avocado.core.test.RawFileHandler(filename, mode='a', encoding=None, delay=False)

Bases: logging.FileHandler

File Handler that doesn’t add arbitrary characters to the logged stream but still respects the formatter.

Open the specified file and use it as the stream for logging.

emit(record)

Modifying the original emit() to avoid including a new line in streams that should be logged in its purest form, like in stdout/stderr recordings.

class avocado.core.test.ReplaySkipTest(*args, **kwargs)

Bases: avocado.core.test.MockingTest

Skip test due to job replay filter.

This test is skipped due to a job replay filter. It will never have a chance to execute.

This class substitutes other classes. Let’s just ignore the remaining arguments and only set the ones supported by avocado.Test

test()
class avocado.core.test.SimpleTest(name, params=None, base_logdir=None, job=None, executable=None)

Bases: avocado.core.test.Test

Run an arbitrary command that returns either 0 (PASS) or !=0 (FAIL).

DATA_SOURCES = ['variant', 'file']
filename

Returns the name of the file (path) that holds the current test

test()

Run the test and postprocess the results

avocado.core.test.TEST_STATE_ATTRIBUTES = ('name', 'logdir', 'logfile', 'status', 'running', 'paused', 'time_start', 'time_elapsed', 'time_end', 'fail_reason', 'fail_class', 'traceback', 'timeout', 'whiteboard', 'phase')

The list of test attributes that are used as the test state, which is given to the test runner via the queue they share

class avocado.core.test.TapTest(name, params=None, base_logdir=None, job=None, executable=None)

Bases: avocado.core.test.SimpleTest

Run a test command as a TAP test.

class avocado.core.test.Test(methodName='test', name=None, params=None, base_logdir=None, job=None, runner_queue=None, tags=None)

Bases: unittest.case.TestCase, avocado.core.test.TestData

Base implementation for the test class.

You’ll inherit from this to write your own tests. Typically you’ll want to implement setUp(), test*() and tearDown() methods on your own tests.

Initializes the test.

Parameters:
  • methodName – Name of the main method to run. For the sake of compatibility with the original unittest class, you should not set this.
  • name (avocado.core.test.TestID) – Pretty name of the test name. For normal tests, written with the avocado API, this should not be set. This is reserved for internal Avocado use, such as when running random executables as tests.
  • base_logdir – Directory where test logs should go. If None provided a temporary directory will be created.
  • job – The job that this test is part of.
basedir

The directory where this test (when backed by a file) is located at

cache_dirs

Returns a list of cache directories as set in config file.

cancel(message=None)

Cancels the test.

This method is expected to be called from the test method, not anywhere else, since by definition, we can only cancel a test that is currently under execution. If you call this method outside the test method, avocado will mark your test status as ERROR, and instruct you to fix your test in the error message.

Parameters:message (str) – an optional message that will be recorded in the logs
Warning message:
 This parameter will be renamed to “msg” in the next LTS release because of lint W0221
error(message=None)

Errors the currently running test.

After calling this method a test will be terminated and have its status as ERROR.

Parameters:message (str) – an optional message that will be recorded in the logs
Warning message:
 This parameter will be renamed to “msg” in the next LTS release because of lint W0221
fail(message=None)

Fails the currently running test.

After calling this method a test will be terminated and have its status as FAIL.

Parameters:message (str) – an optional message that will be recorded in the logs
Warning message:
 This parameter will be renamed to “msg” in the next LTS release because of lint W0221
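The cancel(), error() and fail() methods above all end the test immediately with a specific status. A common way to implement this pattern is with dedicated exceptions that a runner catches and maps to a final status; the sketch below is a self-contained illustration of that idea (the exception and function names are hypothetical, not Avocado's actual internals):

```python
# Illustrative sketch of exception-driven test statuses; the names below
# are hypothetical, not Avocado's real internal classes.

class TestCancelled(Exception):
    """Raised by cancel(); the runner maps it to status CANCEL."""

class TestErrored(Exception):
    """Raised by error(); the runner maps it to status ERROR."""

class TestFailed(AssertionError):
    """Raised by fail(); the runner maps it to status FAIL."""

def run_test(test_method):
    """Run a test method and translate raised exceptions into a status."""
    try:
        test_method()
    except TestCancelled as details:
        return ("CANCEL", str(details))
    except TestFailed as details:
        return ("FAIL", str(details))
    except TestErrored as details:
        return ("ERROR", str(details))
    return ("PASS", None)

def my_test():
    raise TestCancelled("missing optional dependency")

print(run_test(my_test))  # -> ('CANCEL', 'missing optional dependency')
```

This also explains the warning that cancel() must be called from inside the test method: an exception raised elsewhere cannot be caught by the harness that wraps the test method.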
fail_class
fail_reason
fetch_asset(name, asset_hash=None, algorithm=None, locations=None, expire=None, find_only=False, cancel_on_missing=False)

Method to call utils.asset in order to fetch an asset file, supporting hash check, caching and multiple locations.

Parameters:
  • name – the asset filename or URL
  • asset_hash – asset hash (optional)
  • algorithm – hash algorithm (optional, defaults to avocado.utils.asset.DEFAULT_HASH_ALGORITHM)
  • locations – list of URLs from where the asset can be fetched (optional)
  • expire – time for the asset to expire
  • find_only – When True, fetch_asset only looks for the asset in the cache, avoiding the download/move action. Defaults to False.
  • cancel_on_missing – whether the test should be canceled if the asset was not found in the cache or if fetch could not add the asset to the cache. Defaults to False.
Raises:

OSError – when it fails to fetch the asset or file is not in the cache and cancel_on_missing is False.

Returns:

asset file local path.
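fetch_asset combines a hash-verified cache lookup with an optional download step. The sketch below illustrates only the cache-lookup half (the behavior of find_only=True); it is a self-contained illustration, not the real avocado.utils.asset implementation, and the helper name is hypothetical:

```python
import hashlib
import os

def find_asset_in_cache(name, cache_dir, asset_hash=None, algorithm="sha1"):
    """Illustrative sketch of a find_only=True style lookup: return the
    cached file path if present (and, when a hash is given, matching),
    otherwise None."""
    path = os.path.join(cache_dir, os.path.basename(name))
    if not os.path.isfile(path):
        return None
    if asset_hash is not None:
        digest = hashlib.new(algorithm)
        with open(path, "rb") as cached:
            digest.update(cached.read())
        if digest.hexdigest() != asset_hash:
            return None  # stale or corrupted cache entry
    return path
```

A real implementation would additionally try each of the configured cache directories in order and, when the asset is missing, either download it (find_only=False) or signal the caller so the test can be canceled (cancel_on_missing=True).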

filename

Returns the name of the file (path) that holds the current test

get_state()

Serialize selected attributes representing the test state

Returns:a dictionary containing relevant test state data
Return type:dict
job

The job this test is associated with

log

The enhanced test log

logdir

Path to this test’s logging dir

logfile

Path to this test’s main debug.log file

name

Returns the Test ID, which includes the test name

Return type:TestID
outputdir

Directory available to test writers to attach files to the results

params

Parameters of this test (AvocadoParam instance)

phase

The current phase of the test execution

Possible (string) values are: INIT, SETUP, TEST, TEARDOWN and FINISHED

report_state()

Send the current test state to the test runner process

run_avocado()

Wraps the run method, for execution inside the avocado runner.

Result:Unused param, compatibility with unittest.TestCase.
runner_queue

The communication channel between test and test runner

running

Whether this test is currently being executed

set_runner_queue(runner_queue)

Override the runner_queue

status

The result status of this test

tags

The tags associated with this test

tearDown()

Hook method for deconstructing the test fixture after testing it.

teststmpdir

Returns the path of the temporary directory that will stay the same for all tests in a given Job.

time_elapsed = -1

duration of the test execution (always recalculated from time_end - time_start)

time_end = -1

(unix) time when the test finished (could be forced from test)

time_start = -1

(unix) time when the test started (could be forced from test)

timeout = None

Test timeout (the timeout from params takes precedence)

traceback
whiteboard = ''

Arbitrary string which will be stored in $logdir/whiteboard location when the test finishes.

workdir

This property returns a writable directory that exists during the entire test execution, but will be cleaned up once the test finishes.

It can be used on tasks such as decompressing source tarballs, building software, etc.

class avocado.core.test.TestData

Bases: object

Class that adds the ability for tests to have access to data files

Writers of new test types can completely change the behavior and still be compatible by providing a DATA_SOURCES attribute and a get_data() method.

DATA_SOURCES = ['variant', 'test', 'file']

Defines the names of the data sources that this implementation makes available. Users may choose to pick a data file from a specific source.

get_data(filename, source=None, must_exist=True)

Retrieves the path to a given data file.

This implementation looks for data file in one of the sources defined by the DATA_SOURCES attribute.

Parameters:
  • filename (str) – the name of the data file to be retrieved
  • source (str) – one of the defined data sources. If not set, all of the DATA_SOURCES will be attempted in the order they are defined
  • must_exist (bool) – whether the existence of a file is checked for
Return type:

str or None
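The source-ordering behavior described above can be sketched with a small self-contained helper. The directory-per-source layout used here is an assumption for illustration only, not the real lookup paths Avocado uses:

```python
import os

# Illustrative DATA_SOURCES lookup order, from most to least specific.
DATA_SOURCES = ["variant", "test", "file"]

def get_data(filename, source_dirs, source=None, must_exist=True):
    """Return the path of `filename` under the first source that has it.

    `source_dirs` is a hypothetical mapping of source name -> directory,
    standing in for the per-variant/per-test/per-file data directories.
    """
    sources = [source] if source is not None else DATA_SOURCES
    for candidate in sources:
        path = os.path.join(source_dirs[candidate], filename)
        if os.path.exists(path):
            return path
    if must_exist:
        return None
    # When existence is not required, fall back to the path in the most
    # specific source tried (an assumption made for this sketch).
    return os.path.join(source_dirs[sources[0]], filename)
```

Because more specific sources are tried first, a variant-level data file shadows a test-level or file-level one with the same name.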

class avocado.core.test.TestError(*args, **kwargs)

Bases: avocado.core.test.Test

Generic test error.

test()
class avocado.core.test.TimeOutSkipTest(*args, **kwargs)

Bases: avocado.core.test.MockingTest

Skip test due to a job timeout.

This test is skipped due to a job timeout. It will never have a chance to execute.

This class substitutes other test classes, so the remaining arguments are ignored and only the ones supported by avocado.Test are set.

test()

avocado.core.test_id module

class avocado.core.test_id.TestID(uid, name, variant=None, no_digits=None)

Bases: object

Test ID construction and representation according to specification

This class wraps the representation of both Avocado’s Test ID specification and Avocado’s Test Name, which is part of a Test ID.

Constructs a TestID instance

Parameters:
  • uid – unique test id (within the job)
  • name – test name, as returned by the Avocado test resolver (AKA the test loader)
  • variant (dict) – the variant applied to this Test ID
  • no_digits – number of digits of the test uid
str_filesystem

Test ID in a format suitable for use in file systems

The string returned should be safe to be used as a file or directory name. This file system version of the test ID may have to shorten either the Test Name or the Variant ID.

The first component of a Test ID, the numeric unique test id, AKA “uid”, will be used as a stable identifier between the Test ID and the file or directory created based on the return value of this method. If the filesystem can not even represent the “uid”, then an exception will be raised.

For Test ID “001-mytest;foo”, examples of shortened file system versions include “001-mytest;f” or “001-myte;foo”.

Raises:RuntimeError if the test ID cannot be converted to a filesystem representation.
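The shortening strategy described above (always keep the uid, shrink the name or variant) can be sketched as a standalone function. This is an illustration of the idea, not the real str_filesystem implementation, and the truncation choices here are assumptions:

```python
def str_filesystem(uid, name, variant_id=None, max_len=255):
    """Illustrative sketch: build a filesystem-safe test ID, shortening
    the name (and variant) but never the stable numeric uid."""
    ident = f"{uid}-{name}"
    if variant_id:
        ident = f"{ident};{variant_id}"
    if len(ident) <= max_len:
        return ident
    # Budget left after the mandatory "<uid>-" prefix.
    budget = max_len - len(uid) - 1
    if budget <= 0:
        raise RuntimeError("not enough space even for the uid")
    if variant_id:
        tail = ";" + variant_id
        if len(tail) < budget:
            # Keep the variant tail intact and shorten the name.
            return f"{uid}-{name[:budget - len(tail)]}{tail}"
        return f"{uid}-{(name + tail)[:budget]}"
    return f"{uid}-{name[:budget]}"

print(str_filesystem("001", "mytest", "foo", max_len=12))  # 001-myte;foo
```

Note how this reproduces one of the documented shortened forms for Test ID “001-mytest;foo”.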

avocado.core.teststatus module

Maps the different status strings in avocado to booleans.

This is used by methods and functions to return a cut-and-dried answer to whether a test or a job in avocado PASSed or FAILed.
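The mapping can be pictured as a simple status-to-boolean table. The sketch below is illustrative: the exact set of statuses and which ones count as a pass is an assumption, not a copy of Avocado's real table:

```python
# Hypothetical status -> "did it pass?" table in the spirit of
# avocado.core.teststatus; treat the exact entries as assumptions.
STATUSES_MAPPING = {
    "PASS": True,
    "WARN": True,
    "SKIP": True,
    "CANCEL": True,
    "FAIL": False,
    "ERROR": False,
    "INTERRUPTED": False,
}

def job_succeeded(test_statuses):
    """A job PASSes only when every test status maps to True."""
    return all(STATUSES_MAPPING[status] for status in test_statuses)
```

With such a table, skipped or canceled tests do not fail a job, while any FAIL, ERROR or interruption does.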

avocado.core.tree module

Tree data structure with nodes.

This tree structure (Tree drawing code) was inspired by the base tree data structure of the ETE 2 project:

http://pythonhosted.org/ete2/

A library for analysis of phylogenetics trees.

Explicit permission has been given by the copyright owner of ETE 2 Jaime Huerta-Cepas <jhcepas@gmail.com> to take ideas/use snippets from his original base tree code and re-license under GPLv2+, given that GPLv3 and GPLv2 (used in some avocado files) are incompatible.

class avocado.core.tree.FilterSet

Bases: set

Set of filters in standardized form

add(item)

Add an element to a set.

This has no effect if the element is already present.

update(items)

Update a set with the union of itself and others.

class avocado.core.tree.TreeEnvironment

Bases: dict

TreeNode environment with values, origins and filters

copy() → a shallow copy of D
to_text(sort=False)

Human readable representation

Parameters:sort – Sorted to provide stable output
Return type:str
class avocado.core.tree.TreeNode(name='', value=None, parent=None, children=None)

Bases: object

Class for binding nodes into a tree structure.

Parameters:
  • name (str) – a name for this node that will be used to define its path according to the name of its parents
  • value (dict) – a collection of keys and values that will be made into this node environment.
  • parent (TreeNode) – the node that is directly above this one in the tree structure
  • children (builtin.list) – the nodes that are directly beneath this one in the tree structure
add_child(node)

Append node as a child. Nodes with the same name get merged into the existing position.

detach()

Detach this node from parent

environment

Node environment (values + preceding envs)

fingerprint()

Reports string which represents the value of this node.

get_environment()

Get node environment (values + preceding envs)

get_leaves()

Get list of leaf nodes

get_node(path, create=False)
Parameters:
  • path – Path of the desired node (relative to this node)
  • create – Create the node (and intermediary ones) when not present
Returns:

the node associated with this path

Raises:

ValueError – When path doesn’t exist and create not set

get_parents()

Get list of parent nodes

get_path(sep='/')

Get node path

get_root()

Get root of this tree

is_leaf

Is this a leaf node?

iter_children_preorder()

Iterate through children

iter_leaves()

Iterate through leaf nodes

iter_parents()

Iterate through parent nodes to root

merge(other)

Merges the other node into this one without checking the name of the other node. New values are appended, existing values are overwritten and unaffected ones are kept. All of the other node’s children are then added as children (recursively, they either get appended at the end or merged into an existing node in the previous position).

parents

List of parent nodes

path

Node path

root

Root of this tree

set_environment_dirty()

Set the environment cache dirty. You should always call this when you query the environment and then change its value or structure; otherwise you’ll get the old environment instead.
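The core TreeNode ideas above (named nodes with parent links, a path built from ancestor names, and an environment that merges values from the root down) can be sketched in a few lines. This is a self-contained illustration of the concepts, not the real class:

```python
# Minimal TreeNode-style sketch: names, parent links, path construction
# and an environment that merges values from the root down, with closer
# nodes overriding their ancestors.
class Node:
    def __init__(self, name="", value=None, parent=None):
        self.name = name
        self.value = dict(value or {})
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    @property
    def path(self):
        if self.parent is None:
            return ""  # the root contributes no path component
        return f"{self.parent.path}/{self.name}"

    @property
    def environment(self):
        env = dict(self.parent.environment) if self.parent else {}
        env.update(self.value)  # closer nodes override ancestors
        return env

    @property
    def is_leaf(self):
        return not self.children

root = Node()
run = Node("run", {"timeout": 60}, parent=root)
fast = Node("fast", {"timeout": 5}, parent=run)
print(fast.path)                    # /run/fast
print(fast.environment["timeout"])  # 5
```

The real class adds merging, filters, origin tracking and an environment cache (hence set_environment_dirty()), but the path and value-inheritance behavior follow this shape.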

class avocado.core.tree.TreeNodeEnvOnly(path, environment=None)

Bases: object

Minimal TreeNode-like class providing interface for AvocadoParams

Parameters:
  • path – Path of this node (must not end with ‘/’)
  • environment – List of pair/key/value items
fingerprint()
get_environment()
get_path()
avocado.core.tree.tree_view(root, verbose=None, use_utf8=None)

Generate a tree-view of the given node

Parameters:
  • root – root node
  • verbose – verbosity (0, 1, 2, 3)
  • use_utf8 – use utf-8 encoding (None=autodetect)
Returns:string representing this node’s tree structure

avocado.core.utils module

avocado.core.utils.get_avocado_git_version()
avocado.core.utils.prepend_base_path(value)
avocado.core.utils.resolutions_to_tasks(resolutions, config)

Transforms resolver resolutions into tasks

A resolver resolution (avocado.core.resolver.ReferenceResolution) contains information about the resolution process (whether it was successful or not) and, in case of successful resolutions, a list of resolutions. It’s expected that the resolutions are avocado.core.nrunner.Runnable instances.

This method transforms those runnables into Tasks (avocado.core.nrunner.Task), which will include a status reporting URI. It also performs tag based filtering on the runnables for possibly excluding some of the Runnables.

Parameters:
  • resolutions – list of resolver resolutions (avocado.core.resolver.ReferenceResolution)
  • config – job configuration (dict)
Returns:

the resolutions converted to tasks

Return type:

list of avocado.core.nrunner.Task

avocado.core.varianter module

Base classes for implementing the varianter interface

class avocado.core.varianter.FakeVariantDispatcher(state)

Bases: object

This object can act in place of VarianterDispatcher to report loaded variants.

map_method_with_return(method, *args, **kwargs)

Reports list containing one result of map_method on self

to_str(summary=0, variants=0, **kwargs)
class avocado.core.varianter.Varianter(debug=False, state=None)

Bases: object

This object takes care of producing test variants

Parameters:
  • debug – Store whether this instance should debug varianter
  • state – Force-varianter state
Note:

it’s necessary to check whether variants debug is enabled in order to provide the right results.

dump()

Dump the variants in loadable-state

This is a lossy representation which takes all yielded variants and replaces the list of nodes with TreeNodeEnvOnly representations:

[{'path': path,
  'variant_id': variant_id,
  'variant': dump_tree_nodes(original_variant)},
 {'path': [str, str, ...],
  'variant_id': str,
  'variant': [(str, [(str, str, object), ...])]},
 {'path': ['/run/*'],
  'variant_id': 'cat-26c0',
  'variant': [('/pig/cat',
               [('/pig', 'ant', 'fox'),
                ('/pig/cat', 'dog', 'bee')])]},
 ...]

where dump_tree_nodes looks like:

[(node.path, environment_representation),
 (node.path, [(path1, key1, value1), (path2, key2, value2), ...]),
 ('/pig/cat', [('/pig', 'ant', 'fox')])]
Returns:loadable Varianter representation
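The dump_tree_nodes shape shown above can be sketched with a small helper that flattens TreeNode-like objects into (path, [(origin_path, key, value), ...]) pairs. The node representation used here (plain dicts with path and an environment mapping key to (origin, value)) is an assumption for illustration, not the real internal structure:

```python
def dump_tree_nodes(nodes):
    """Illustrative sketch: flatten TreeNode-like objects into the
    loadable representation (path, [(origin_path, key, value), ...]).
    Each node here is a plain dict standing in for a real TreeNode."""
    dumped = []
    for node in nodes:
        items = [(origin, key, value)
                 for (key, (origin, value))
                 in sorted(node["environment"].items())]
        dumped.append((node["path"], items))
    return dumped

nodes = [{"path": "/pig/cat",
          "environment": {"ant": ("/pig", "fox"),
                          "dog": ("/pig/cat", "bee")}}]
print(dump_tree_nodes(nodes))
# [('/pig/cat', [('/pig', 'ant', 'fox'), ('/pig/cat', 'dog', 'bee')])]
```

Recording the origin path alongside each key/value pair is what keeps the dump loadable: a TreeNodeEnvOnly instance can be rebuilt from these triples without the original tree.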
classmethod from_resultsdir(resultsdir)

Retrieves the job variants objects from the results directory.

This will return a list of variants, since a Job can have multiple suites and the variants are per suite.

get_number_of_tests(test_suite)
Returns:overall number of tests * number of variants
is_parsed()

Reports whether the varianter was already parsed

itertests()

Yields all variants of all plugins

The variant is defined as dictionary with at least:
  • variant_id - name of the current variant
  • variant - AvocadoParams-compatible variant (usually a list of
    TreeNodes but dict or simply None are also possible values)
  • paths - default path(s)

Yields:variant

load(state)

Load the variants state

Current implementation supports loading from a list of loadable variants. It replaces the VariantDispatcher with a fake implementation which reports the loaded (and initialized) variants.

Parameters:state – loadable Varianter representation
parse(config)

Apply options defined on the cmdline and initialize the plugins.

Parameters:config (dict) – Configuration received from configuration files, command line parser, etc.
to_str(summary=0, variants=0, **kwargs)

Return human readable representation

The summary/variants accepts verbosity where 0 means do not display at all and maximum is up to the plugin.

Parameters:
  • summary – How verbose summary to output (int)
  • variants – How verbose list of variants to output (int)
  • kwargs – Other free-form arguments
Return type:

str

avocado.core.varianter.dump_ivariants(ivariants)

Walks the iterable variants and dumps them into json-serializable object

avocado.core.varianter.generate_variant_id(variant)

Basic function to generate variant-id from a variant

Parameters:variant – Avocado test variant (list of TreeNode-like objects)
Returns:String composed of ordered node names and a hash of all values.
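A variant-id of this shape (ordered node names plus a short hash of all values) can be sketched as follows. The node representation (plain dicts) and the exact hash format are assumptions for illustration, not the real helper's behavior:

```python
import hashlib

def generate_variant_id(variant):
    """Illustrative sketch: compose node names with a short hash of all
    (path, key, value) items, in the spirit of the real helper."""
    names = [node["name"] for node in variant]
    fingerprint = hashlib.sha1()
    for node in variant:
        for key in sorted(node["value"]):
            item = f"{node['path']}:{key}:{node['value'][key]}"
            fingerprint.update(item.encode())
    return "-".join(names) + "-" + fingerprint.hexdigest()[:4]
```

Hashing the values (not just the names) means two variants with the same leaf names but different parameters still get distinct, stable ids, e.g. ids of the “cat-26c0” form seen in the dump() example.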
avocado.core.varianter.is_empty_variant(variant)

Reports whether the variant contains any data

Parameters:variant – Avocado test variant (list of TreeNode-like objects)
Returns:True when the variant does not contain (any useful) data
avocado.core.varianter.variant_to_str(variant, verbosity, out_args=None, debug=False)

Reports human readable representation of a variant

Parameters:
  • variant – Valid variant (list of TreeNode-like objects)
  • verbosity – Output verbosity where 0 means brief
  • out_args – Extra output arguments (currently unused)
  • debug – Whether the variant contains debug info and should report it
Returns:

Human readable representation

avocado.core.version module

Module contents

avocado.core.initialize_plugin_infrastructure()
avocado.core.initialize_plugins()
avocado.core.register_core_options()