Test APIs

This is the bare minimum set of APIs that users should use, and can rely on, while writing tests.

Module contents


avocado.main

alias of avocado.core.job.TestProgram

class avocado.Test(methodName='test', name=None, params=None, base_logdir=None, job=None, runner_queue=None)

Bases: unittest.case.TestCase, avocado.core.test.TestData

Base implementation for the test class.

You’ll inherit from this to write your own tests. Typically you’ll want to implement setUp(), test*() and tearDown() methods.

Initializes the test.

  • methodName – Name of the main method to run. For the sake of compatibility with the original unittest class, you should not set this.
  • name (avocado.core.test.TestID) – Pretty name of the test. For normal tests, written with the avocado API, this should not be set. This is reserved for internal Avocado use, such as when running random executables as tests.
  • base_logdir – Directory where test logs should go. If not provided, avocado.data_dir.create_job_logs_dir() will be used.
  • job – The job that this test is part of.
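Since avocado.Test derives from unittest.case.TestCase, the setUp()/test*()/tearDown() lifecycle is the familiar unittest one. A minimal sketch of the pattern, using a stand-in base class so the example runs without Avocado installed:

```python
import time
import unittest

# Stand-in for avocado.Test, which itself derives from
# unittest.case.TestCase; used here so the sketch runs without Avocado.
class Test(unittest.TestCase):
    pass

class SleepTest(Test):
    def setUp(self):
        # In a real avocado test, values like this often come from
        # self.params instead of being hard-coded.
        self.delay = 0.0

    def test_sleep(self):
        time.sleep(self.delay)

    def tearDown(self):
        pass  # clean up any resources created in setUp()

suite = unittest.defaultTestLoader.loadTestsFromTestCase(SleepTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Only methods whose names start with test are collected and run; setUp() and tearDown() bracket each one.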

basedir

The directory where this test (when backed by a file) is located


cache_dirs

Returns a list of cache directories as set in the configuration file.


cancel(message=None)

Cancels the test.

This method is expected to be called from the test method, not anywhere else, since by definition, we can only cancel a test that is currently under execution. If you call this method outside the test method, avocado will mark your test status as ERROR, and instruct you to fix your test in the error message.

Parameters:message (str) – an optional message that will be recorded in the logs

error(message=None)

Errors the currently running test.

After calling this method a test will be terminated and have its status as ERROR.

Parameters:message (str) – an optional message that will be recorded in the logs

fail(message=None)

Fails the currently running test.

After calling this method a test will be terminated and have its status as FAIL.

Parameters:message (str) – an optional message that will be recorded in the logs
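The cancel()/error()/fail() trio can be pictured as raising status-carrying exceptions that the runner maps to a final test status. The runner below is a simplified, hypothetical stand-in; only the exception names and status strings mirror the real API:

```python
# Hypothetical status-mapping runner; only the exception names and
# status strings mirror avocado's TestError/TestFail/TestCancel.
class TestError(Exception):
    status = 'ERROR'

class TestFail(AssertionError):
    status = 'FAIL'

class TestCancel(Exception):
    status = 'CANCEL'

def run_test(test_method):
    """Run a test method and map its outcome to a status string."""
    try:
        test_method()
    except (TestError, TestFail, TestCancel) as detail:
        return detail.status
    except Exception:
        return 'ERROR'  # any unexpected exception also errors the test
    return 'PASS'

def needs_optional_dependency():
    # a test calling self.cancel() effectively raises TestCancel
    raise TestCancel('missing optional dependency')

status = run_test(needs_optional_dependency)
```

This is also why cancel() only makes sense from inside the test method: the raised exception must propagate out of the running test body for the runner to see it.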
fetch_asset(name, asset_hash=None, algorithm=None, locations=None, expire=None)

Method to call utils.asset in order to fetch an asset file, supporting hash check, caching and multiple locations.

  • name – the asset filename or URL
  • asset_hash – asset hash (optional)
  • algorithm – hash algorithm (optional, defaults to avocado.utils.asset.DEFAULT_HASH_ALGORITHM)
  • locations – list of URLs from where the asset can be fetched (optional)
  • expire – time for the asset to expire

Raises:EnvironmentError – when it fails to fetch the asset


Returns:asset file local path
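The hash-checked caching behavior can be sketched with a hypothetical helper (not the real avocado.utils.asset implementation): look the asset up in a cache directory first, and only copy it from one of its locations on a miss:

```python
import hashlib
import os
import shutil
import tempfile

# Hypothetical helper illustrating the caching idea; the real logic
# lives in avocado.utils.asset and is more elaborate.
def fetch_asset(name, asset_hash, cache_dir, locations):
    cached = os.path.join(cache_dir, name)
    if os.path.exists(cached):
        with open(cached, 'rb') as asset:
            if hashlib.sha1(asset.read()).hexdigest() == asset_hash:
                return cached  # cache hit: hash matches, reuse the file
    for src in locations:  # on a miss, try each location in order
        if os.path.exists(src):
            shutil.copy(src, cached)
            return cached
    raise EnvironmentError('failed to fetch asset %s' % name)

cache_dir = tempfile.mkdtemp()
src_dir = tempfile.mkdtemp()
src = os.path.join(src_dir, 'data.bin')
with open(src, 'wb') as out:
    out.write(b'payload')
digest = hashlib.sha1(b'payload').hexdigest()

first = fetch_asset('data.bin', digest, cache_dir, [src])
# the second call needs no locations at all: the cache satisfies it
second = fetch_asset('data.bin', digest, cache_dir, [])
```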


filename

Returns the name of the file (path) that holds the current test


get_state()

Serialize selected attributes representing the test state

Returns:a dictionary containing relevant test state data
Return type:dict

job

The job this test is associated with


log

The enhanced test log


logdir

Path to this test’s logging dir


logfile

Path to this test’s main debug.log file


name

Returns the Test ID, which includes the test name

Return type:TestID

outputdir

Directory available to test writers to attach files to the results


params

Parameters of this test (AvocadoParam instance)


report_state()

Send the current test state to the test runner process


run_avocado()

Wraps the run method, for execution inside the avocado runner.

Parameters:result – unused parameter, kept for compatibility with unittest.TestCase

runner_queue

The communication channel between test and test runner


running

Whether this test is currently being executed


set_runner_queue(runner_queue)

Override the runner_queue


status

The result status of this test


teststmpdir

Returns the path of the temporary directory that will stay the same for all tests in a given Job.

time_elapsed = -1

duration of the test execution (always recalculated from time_end - time_start)

time_end = -1

(unix) time when the test finished (could be forced from test)

time_start = -1

(unix) time when the test started (could be forced from test)

timeout = None

Test timeout (the timeout from params takes precedence)

whiteboard = ''

Arbitrary string which will be stored in $logdir/whiteboard location when the test finishes.


workdir

This property returns a writable directory that exists during the entire test execution, but will be cleaned up once the test finishes.

It can be used for tasks such as decompressing source tarballs, building software, etc.
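The workdir lifecycle resembles a context-managed temporary directory: writable while the test body runs, removed afterwards. A sketch using tempfile.TemporaryDirectory as a stand-in, unpacking a small tarball the way a test might unpack a source archive:

```python
import os
import tarfile
import tempfile

# tempfile.TemporaryDirectory stands in for the workdir property: the
# directory exists for the duration of the "test body" below and is
# removed once the with-block (the test) finishes.
with tempfile.TemporaryDirectory() as workdir:
    # build a small tarball to stand in for a source archive
    src = os.path.join(workdir, 'hello.txt')
    with open(src, 'w') as out:
        out.write('hello')
    tarball = os.path.join(workdir, 'src.tar')
    with tarfile.open(tarball, 'w') as tar:
        tar.add(src, arcname='hello.txt')

    # "test body": decompress the archive into the work directory
    extract_dir = os.path.join(workdir, 'extracted')
    with tarfile.open(tarball) as tar:
        tar.extractall(extract_dir)
    extracted = os.path.join(extract_dir, 'hello.txt')
    existed_during_test = os.path.exists(extracted)

cleaned_up = not os.path.exists(workdir)  # gone after the test ends
```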


avocado.fail_on(exceptions=None)

Fail the test when the decorated function produces an exception of the specified type.

(For example, our method may raise IndexError on tested software failure. We can either try/catch it or use this decorator instead.)

Parameters:exceptions – a single exception class, or a tuple of exception classes, to be treated as test failure [Exception]
Note:self.error and self.cancel behavior remains intact
Note:To allow simple usage, the “exceptions” parameter must not be callable
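A minimal sketch of the idea behind fail_on (not the real Avocado implementation): listed exception types raised by the wrapped function are converted into a TestFail. Bare usage (@fail_on without parentheses) is detected by the argument being a function rather than an exception class, which is why the exceptions parameter must not itself be callable:

```python
import functools
import inspect

class TestFail(AssertionError):
    status = 'FAIL'  # stand-in for avocado.TestFail

# Sketch only: the real decorator lives in Avocado and differs in
# detail. Exceptions of the listed types become a test failure.
def fail_on(exceptions=None):
    if inspect.isfunction(exceptions):  # used bare, as @fail_on
        return fail_on(None)(exceptions)
    if exceptions is None:
        exceptions = Exception
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except exceptions as detail:
                raise TestFail('%s: %r' % (func.__name__, detail))
        return wrapper
    return decorate

@fail_on(IndexError)
def lookup():
    return [][0]  # IndexError in tested code becomes a test failure

try:
    lookup()
    outcome = 'PASS'
except TestFail as detail:
    outcome = detail.status
```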

avocado.skip(message=None)

Decorator to skip a test.

avocado.skipIf(condition, message=None)

Decorator to skip a test if a condition is True.

avocado.skipUnless(condition, message=None)

Decorator to skip a test if a condition is False.
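Since avocado.Test derives from unittest.TestCase, the skip decorators behave like unittest's. A sketch with simplified stand-ins built on the unittest skip machinery (these are not the real avocado decorators):

```python
import unittest

# Simplified stand-ins for avocado.skip/skipIf/skipUnless, built on the
# unittest skip machinery that avocado.Test inherits.
def skip(message=None):
    return unittest.skip(message or '')

def skipIf(condition, message=None):
    return unittest.skipIf(condition, message or '')

def skipUnless(condition, message=None):
    return unittest.skipUnless(condition, message or '')

class ExampleTest(unittest.TestCase):
    @skipIf(True, 'condition is True, so this test is skipped')
    def test_skipped(self):
        self.fail('never reached')

    @skipUnless(True, 'condition is True, so this test runs')
    def test_runs(self):
        pass

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ExampleTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

A skipped test is reported as skipped, never as a failure or an error.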

exception avocado.TestError

Bases: avocado.core.exceptions.TestBaseException

Indicates that the test was not fully executed and an error happened.

This is the sort of exception you raise if the test was partially executed and could not complete due to a setup, configuration, or another fatal condition.

status = 'ERROR'
exception avocado.TestFail

Bases: avocado.core.exceptions.TestBaseException, exceptions.AssertionError

Indicates that the test failed.

TestFail inherits from AssertionError in order to keep compatibility with vanilla Python unittest, which only considers exceptions deriving from AssertionError to be failures.

status = 'FAIL'
exception avocado.TestCancel

Bases: avocado.core.exceptions.TestBaseException

Indicates that a test was canceled.

Should be raised when the cancel() test method is used.

status = 'CANCEL'