# Test APIs

This is the bare minimum set of APIs that users should use, and can rely on, while writing tests.

## Module contents

avocado.main
class avocado.Test(methodName='test', name=None, params=None, base_logdir=None, job=None, runner_queue=None)

Bases: unittest.case.TestCase, avocado.core.test.TestData

Base implementation for the test class.

You’ll inherit from this to write your own tests. Typically you’ll want to implement setUp(), test*() and tearDown() methods in your own tests.

Initializes the test.

Parameters:

- methodName – Name of the main method to run. For the sake of compatibility with the original unittest class, you should not set this.
- name (avocado.core.test.TestID) – Pretty name of the test. For normal tests, written with the avocado API, this should not be set; it is reserved for internal Avocado use, such as when running random executables as tests.
- base_logdir – Directory where test logs should go. If None is provided, avocado.data_dir.create_job_logs_dir() will be used.
- job – The job that this test is part of.
basedir

The directory where this test (when backed by a file) is located

cache_dirs

Returns a list of cache directories as set in the config file.

cancel(message=None)

Cancels the test.

This method is expected to be called from the test method, not anywhere else, since by definition, we can only cancel a test that is currently under execution. If you call this method outside the test method, avocado will mark your test status as ERROR, and the error message will instruct you to fix your test.

Parameters: message (str) – an optional message that will be recorded in the logs
error(message=None)

Errors the currently running test.

After calling this method a test will be terminated and have its status as ERROR.

Parameters: message (str) – an optional message that will be recorded in the logs
fail(message=None)

Fails the currently running test.

After calling this method a test will be terminated and have its status as FAIL.

Parameters: message (str) – an optional message that will be recorded in the logs
fail_class
fail_reason
fetch_asset(name, asset_hash=None, algorithm=None, locations=None, expire=None)

Method to call utils.asset in order to fetch an asset file, supporting hash check, caching and multiple locations.

Parameters:

- name – the asset filename or URL
- asset_hash – asset hash (optional)
- algorithm – hash algorithm (optional, defaults to avocado.utils.asset.DEFAULT_HASH_ALGORITHM)
- locations – list of URLs from where the asset can be fetched (optional)
- expire – time for the asset to expire

Raises: EnvironmentError – when it fails to fetch the asset
Returns: asset file local path
filename

Returns the name of the file (path) that holds the current test

get_state()

Serialize selected attributes representing the test state

Returns: a dictionary containing relevant test state data
Return type: dict
job

The job this test is associated with

log

The enhanced test log

logdir

Path to this test’s logging dir

logfile

Path to this test’s main debug.log file

name

Returns the Test ID, which includes the test name

Return type: TestID
outputdir

Directory available to test writers to attach files to the results

params

Parameters of this test (AvocadoParam instance)

report_state()

Send the current test state to the test runner process

run_avocado()

Wraps the run method, for execution inside the avocado runner.

Parameters: result – unused param, kept for compatibility with unittest.TestCase.
runner_queue

The communication channel between test and test runner

running

Whether this test is currently being executed

set_runner_queue(runner_queue)

Override the runner_queue

status

The result status of this test

teststmpdir

Returns the path of the temporary directory that will stay the same for all tests in a given Job.

time_elapsed = -1

duration of the test execution (always recalculated from time_end - time_start)

time_end = -1

(unix) time when the test finished (could be forced from test)

time_start = -1

(unix) time when the test started (could be forced from test)

timeout = None

Test timeout (the timeout from params takes precedence)

traceback
whiteboard = ''

Arbitrary string which will be stored in the $logdir/whiteboard location when the test finishes.

workdir

This property returns a writable directory that exists during the entire test execution, but will be cleaned up once the test finishes.

It can be used on tasks such as decompressing source tarballs, building software, etc.

avocado.fail_on(exceptions=None)

Fail the test when the decorated function raises an exception of the specified type.

(For example, a method may raise IndexError on tested software failure. We can either try/catch it or use this decorator instead.)

Parameters: exceptions – tuple of exception classes, or a single exception class, to be treated as a test failure (defaults to [Exception]). The self.error and self.cancel behavior remains intact. To allow simple usage, the “exceptions” param must not be callable.
avocado.skip(message=None)

Decorator to skip a test.

avocado.skipIf(condition, message=None)

Decorator to skip a test if a condition is True.

avocado.skipUnless(condition, message=None)

Decorator to skip a test if a condition is False.

exception avocado.TestError

Indicates that the test was not fully executed and an error happened.

This is the sort of exception you raise if the test was partially executed and could not complete due to a setup, configuration, or another fatal condition.

status = 'ERROR'
exception avocado.TestFail

Bases: avocado.core.exceptions.TestBaseException, exceptions.AssertionError

Indicates that the test failed.

TestFail inherits from AssertionError in order to keep compatibility with vanilla Python unittest (it only considers as failures the exceptions deriving from AssertionError).

status = 'FAIL'
exception avocado.TestCancel

Indicates that a test was canceled.

Should be raised when the cancel() test method is used.

status = 'CANCEL'