Parametrizing fixtures and test functions

pytest supports test parametrization in several well-integrated ways:

  • @pytest.mark.parametrize allows one to define parametrization at the function or class level, providing multiple argument/fixture sets for a particular test function or class.
  • pytest_generate_tests enables implementing your own custom dynamic parametrization scheme or extensions.

@pytest.mark.parametrize: parametrizing test functions

New in version 2.2; improved in version 2.4.

The builtin pytest.mark.parametrize decorator enables parametrization of arguments for a test function. Here is a typical example of a test function that implements checking that a certain input leads to an expected output:

# content of test_expectation.py
import pytest
@pytest.mark.parametrize("input,expected", [
    ("3+5", 8),
    ("2+4", 6),
    ("6*9", 42),
])
def test_eval(input, expected):
    assert eval(input) == expected

Here, the @parametrize decorator defines three different (input, expected) tuples so that the test_eval function will run three times, using them in turn:

$ py.test
=========================== test session starts ============================
platform linux -- Python 3.4.0 -- py-1.4.25.dev2 -- pytest-2.6.3.dev3
collected 3 items

test_expectation.py ..F

================================= FAILURES =================================
____________________________ test_eval[6*9-42] _____________________________

input = '6*9', expected = 42

    @pytest.mark.parametrize("input,expected", [
        ("3+5", 8),
        ("2+4", 6),
        ("6*9", 42),
    ])
    def test_eval(input, expected):
>       assert eval(input) == expected
E       assert 54 == 42
E        +  where 54 = eval('6*9')

test_expectation.py:8: AssertionError
==================== 1 failed, 2 passed in 0.01 seconds ====================

As designed in this example, only one pair of input/output values fails the simple test function. And as usual with test function arguments, you can see the input and output values in the traceback.

Note that you could also use the parametrize marker on a class or a module (see Marking test functions with attributes) which would invoke several functions with the argument sets.
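The class-level form can be sketched like this; the test names and values below are illustrative, not from the example above:

```python
# content of test_class_param.py
# applying parametrize at class level runs every test method in the
# class once per argument set
import pytest

@pytest.mark.parametrize("n,expected", [
    (1, 2),
    (3, 4),
])
class TestClass:
    def test_simple_case(self, n, expected):
        assert n + 1 == expected

    def test_weird_simple_case(self, n, expected):
        assert (n * 1) + 1 == expected
```

Running this collects four tests: each of the two methods is invoked once per parameter set.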

It is also possible to mark individual test instances within parametrize, for example with the builtin mark.xfail:

# content of test_expectation.py
import pytest
@pytest.mark.parametrize("input,expected", [
    ("3+5", 8),
    ("2+4", 6),
    pytest.mark.xfail(("6*9", 42)),
])
def test_eval(input, expected):
    assert eval(input) == expected

Let’s run this:

$ py.test
=========================== test session starts ============================
platform linux -- Python 3.4.0 -- py-1.4.25.dev2 -- pytest-2.6.3.dev3
collected 3 items

test_expectation.py ..x

=================== 2 passed, 1 xfailed in 0.01 seconds ====================

The one parameter set which caused a failure previously now shows up as an “xfailed (expected to fail)” test.

Note

In versions prior to 2.4 one needed to specify the argument names as a tuple. This remains valid but the simpler "name1,name2,..." comma-separated-string syntax is now advertised first because it’s easier to write and produces less line noise.
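For reference, the two spellings are equivalent; a minimal sketch of both side by side:

```python
# the pre-2.4 tuple form and the comma-separated string form of the
# argument names behave identically
import pytest

@pytest.mark.parametrize(("input", "expected"), [   # tuple of names
    ("3+5", 8),
])
def test_eval_tuple_form(input, expected):
    assert eval(input) == expected

@pytest.mark.parametrize("input,expected", [        # string form
    ("3+5", 8),
])
def test_eval_string_form(input, expected):
    assert eval(input) == expected
```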

Basic pytest_generate_tests example

Sometimes you may want to implement your own parametrization scheme or implement some dynamism for determining the parameters or scope of a fixture. For this, you can use the pytest_generate_tests hook, which is called when collecting a test function. Through the passed-in metafunc object you can inspect the requesting test context and, most importantly, you can call metafunc.parametrize() to cause parametrization.

For example, let’s say we want to run a test taking string inputs which we want to set via a new pytest command line option. Let’s first write a simple test accepting a stringinput fixture function argument:

# content of test_strings.py

def test_valid_string(stringinput):
    assert stringinput.isalpha()

Now we add a conftest.py file containing the addition of a command line option and the parametrization of our test function:

# content of conftest.py

def pytest_addoption(parser):
    parser.addoption("--stringinput", action="append", default=[],
        help="list of stringinputs to pass to test functions")

def pytest_generate_tests(metafunc):
    if 'stringinput' in metafunc.fixturenames:
        metafunc.parametrize("stringinput",
                             metafunc.config.option.stringinput)

If we now pass two stringinput values, our test will run twice:

$ py.test -q --stringinput="hello" --stringinput="world" test_strings.py
..
2 passed in 0.01 seconds

Let’s also run with a stringinput that will lead to a failing test:

$ py.test -q --stringinput="!" test_strings.py
F
================================= FAILURES =================================
___________________________ test_valid_string[!] ___________________________

stringinput = '!'

    def test_valid_string(stringinput):
>       assert stringinput.isalpha()
E       assert <built-in method isalpha of str object at 0x2b10f6eb4b20>()
E        +  where <built-in method isalpha of str object at 0x2b10f6eb4b20> = '!'.isalpha

test_strings.py:3: AssertionError
1 failed in 0.01 seconds

As expected our test function fails.

If you don’t specify a stringinput, the test will be skipped because metafunc.parametrize() will be called with an empty parameter list:

$ py.test -q -rs test_strings.py
s
========================= short test summary info ==========================
SKIP [1] /home/hpk/p/pytest/.tox/regen/lib/python3.4/site-packages/_pytest/python.py:1139: got empty parameter set, function test_valid_string at /tmp/doc-exec-120/test_strings.py:1
1 skipped in 0.01 seconds

For further examples, you might want to look at more parametrization examples.

The metafunc object

metafunc objects are passed to the pytest_generate_tests hook. They help to inspect a test function and to generate tests according to test configuration or values specified in the class or module where a test function is defined:

metafunc.fixturenames: set of required function arguments for the given function

metafunc.function: underlying python test function

metafunc.cls: the class object in which the test function is defined, or None.

metafunc.module: the module object in which the test function is defined.

metafunc.config: access to command line opts and general config

metafunc.funcargnames: alias for fixturenames, for pre-2.3 compatibility

Metafunc.parametrize(argnames, argvalues, indirect=False, ids=None, scope=None)

Add new invocations to the underlying test function using the list of argvalues for the given argnames. Parametrization is performed during the collection phase. If you need to set up expensive resources, consider setting indirect=True to defer that work to test setup time instead.

Parameters:
  • argnames – a comma-separated string denoting one or more argument names, or a list/tuple of argument strings.
  • argvalues – The list of argvalues determines how often a test is invoked with different argument values. If only one argname was specified argvalues is a list of simple values. If N argnames were specified, argvalues must be a list of N-tuples, where each tuple-element specifies a value for its respective argname.
  • indirect – if True each argvalue corresponding to an argname will be passed as request.param to its respective argname fixture function so that it can perform more expensive setups during the setup phase of a test rather than at collection time.
  • ids – list of string ids each corresponding to the argvalues so that they are part of the test id. If no ids are provided they will be generated automatically from the argvalues.
  • scope – if specified it denotes the scope of the parameters. The scope is used for grouping tests by parameter instances. It will also override any fixture-function defined scope, allowing one to set a dynamic scope using test context or configuration.
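As a sketch of indirect=True, the fixture name and the "expensive" setup step below are illustrative, not part of pytest's API:

```python
# content of test_indirect.py
import pytest

def make_connection(dsn):
    # stands in for an expensive setup step (illustrative)
    return {"dsn": dsn, "connected": True}

@pytest.fixture
def db_connection(request):
    # with indirect=True, the parametrized value arrives here as
    # request.param at test setup time, not at collection time
    return make_connection(request.param)

@pytest.mark.parametrize("db_connection", ["sqlite://", "postgres://"],
                         indirect=True)
def test_connection(db_connection):
    assert db_connection["connected"]
```

Without indirect=True, the raw strings would be passed straight to the test function; with it, each value is routed through the db_connection fixture first.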
Metafunc.addcall(funcargs=None, id=_notexists, param=_notexists)

(deprecated, use parametrize) Add a new call to the underlying test function during the collection phase of a test run. Note that request.addcall() is called during the test collection phase, prior to and independently of actual test execution. You should only use addcall() if you need to specify multiple arguments of a test function.

Parameters:
  • funcargs – argument keyword dictionary used when invoking the test function.
  • id – used for reporting and identification purposes. If you don’t supply an id an automatic unique id will be generated.
  • param – a parameter which will be exposed to a later fixture function invocation through the request.param attribute.
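A minimal sketch of the deprecated style (pytest 2.x only; the argument name arg1 and the ids are illustrative):

```python
# content of conftest.py (deprecated style -- prefer metafunc.parametrize)
def pytest_generate_tests(metafunc):
    if "arg1" in metafunc.fixturenames:
        # each addcall() adds one invocation of the test function;
        # addcall() was removed from the Metafunc API in later
        # pytest versions
        metafunc.addcall(funcargs=dict(arg1=10), id="ten")
        metafunc.addcall(funcargs=dict(arg1=20), id="twenty")
```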