
Let's check it out!
Frameworks
When developing code in Python, five testing frameworks typically stand out as favorites:
| NAME | MONIKER | DESCRIPTION |
|------|---------|-------------|
| unittest | PyUnit | The default Python testing framework, built into the Python Standard Library |
| pytest | Pytest | Popular testing framework known for simplicity, flexibility, and powerful features |
| nose2 | Nose2 | Enhanced unittest version offering additional plugins to support test execution |
| doctest | DocTest | Python Standard Library module that runs tests embedded in source code docstrings |
| Robot | Robot | Keyword-driven acceptance testing framework that simplifies test case automation |
Here are some reasons why pytest is currently the most popular Python unit testing framework out there:
1. Simple and Readable Syntax: you write plain Python functions instead of large, verbose classes, and assertions use plain assert statements, which provide more detailed failure output.
2. Rich Plugin Ecosystem: plugins like pytest-mock, pytest-asyncio, pytest-cov, and more; it is easy to extend existing pytest plugins or write your own.
3. Powerful Fixtures: fixtures allow clean and reusable setup and teardown, and support various test-level scopes, autouse, and parametrization.
4. Test Discovery: pytest automatically discovers tests in files named test_*.py, with no need to manually register tests or use loader classes.
5. Great Reporting: colored output, diffs for failing assertions, and optional verbosity; integrates easily with tools like coverage, tox, and CI/CD systems.
6. Supports Complex Testing Needs: parametrized tests (@pytest.mark.parametrize), parallel test execution (pytest-xdist), and hooks.
pytest
```bash
pip install pytest
```
Setup
Depending on your stack, there is great documentation for setting up pytest with PyCharm, VS Code, or Poetry.
Configuration
In pytest, pytest.ini is the main configuration file used to customize and control pytest behavior across the unit test suite. pytest.ini hosts pytest options, test paths, plugin settings, and markers that can be attached to test functions to categorize, filter, or modify their behavior. Here is a sample pytest.ini configuration file to use as a base:
```ini
[pytest]
addopts = -ra -q
testpaths = tests
markers =
    slow: marks tests as slow (deselect with '-m "not slow"')
    db: marks tests requiring database
```
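For example, a test tagged with one of the markers declared above can then be selected or deselected from the command line; the test body here is a hypothetical placeholder:

```python
import pytest


@pytest.mark.slow
def test_generate_full_report():
    # Hypothetical slow test; deselect it with: pytest -m "not slow"
    assert sum(range(1_000_000)) >= 0
```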
Fixtures
Fixtures are functions in pytest that provide a fixed baseline for tests to run against. Via the @pytest.fixture decorator, fixtures can be used to set up all preconditions for tests, provide data, or perform teardown after tests finish.
Scope
Fixtures have a scope: function, class, module, or session, which defines how long the fixture is available during a test run:
| SCOPE | DESCRIPTION |
|-------|-------------|
| Function | Fixture is created once per test function and destroyed at the end of the test function |
| Class | Fixture is created once per test class and destroyed at the end of the test class |
| Module | Fixture is created once per test module and destroyed at the end of the test module |
| Session | Fixture is created once per test session and destroyed at the end of the test session |
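A minimal sketch of declaring a scope; the connection dictionary is an illustrative stand-in for a real resource:

```python
import pytest


@pytest.fixture(scope="session")
def db_connection():
    # Created once for the entire test session and shared by all tests.
    return {"dsn": "sqlite://:memory:"}  # stand-in for a real connection


@pytest.fixture  # default scope="function": rebuilt for every test
def fresh_payload():
    return {"items": []}
```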
conftest
In pytest, the conftest.py file is used to share fixtures across multiple test files. All fixtures in conftest.py are automatically detected without needing to be imported. A conftest.py is typically placed at the root of the tests directory structure.
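For example, a fixture defined in a root-level conftest.py (the fixture name and data here are illustrative) is available to every test module beneath it without an import:

```python
# tests/conftest.py
import pytest


@pytest.fixture
def sample_user():
    # Automatically discovered by any test under tests/.
    return {"name": "John", "email": "x@abc.com"}
```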
Dependencies
Dependency injection: fixtures can be requested by other fixtures, which keeps setup modular, although this can add complexity to tests!
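A small sketch of one fixture requesting another; the fixture names are assumptions for illustration:

```python
import pytest


@pytest.fixture
def config():
    return {"host": "localhost", "port": 5432}


@pytest.fixture
def connection_string(config):
    # The config fixture is injected into this fixture by pytest.
    return f"postgres://{config['host']}:{config['port']}"


def test_connection_string(connection_string):
    assert connection_string == "postgres://localhost:5432"
```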
autouse
A simple trick to avoid requesting a fixture in every test: use the autouse=True flag to apply the fixture to all tests automatically, as sketched below.
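A minimal sketch, assuming a module-level list that every test needs reset:

```python
import pytest

logs = []


@pytest.fixture(autouse=True)
def reset_logs():
    # Applied to every test in scope without being requested by name.
    logs.clear()


def test_starts_clean():
    assert logs == []
```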
yield
When you use yield in a fixture function, the setup code executes before the yield and the teardown code executes after it:
```python
import pytest


@pytest.fixture
def my_fixture():
    # setup code
    yield "fixture value"
    # teardown code
```
Arguments
Use pytest fixtures with arguments to write reusable fixtures that can easily be shared across tests, also known as parametrized fixtures, using the @pytest.fixture(params=[0, 1, 2]) syntax. Note: these fixtures should not be confused with the @pytest.mark.parametrize decorator, which is used to specify inputs and expected outputs on the test function itself!
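A minimal sketch of a parametrized fixture; every test that requests it runs once per parameter:

```python
import pytest


@pytest.fixture(params=[0, 1, 2])
def number(request):
    # request.param holds the current parameter value.
    return request.param


def test_is_non_negative(number):
    # This test runs three times: once each for 0, 1, and 2.
    assert number >= 0
```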
Factories
Factories, in the context of pytest fixtures, are fixtures that return functions for creating instances of objects, letting tests generate data or objects with specific configuration in a reusable manner:
conftest.py

```python
import pytest


@pytest.fixture
def user_creds():
    def _user_creds(name: str, email: str):
        return {"name": name, "email": email}

    return _user_creds
```

test_app.py

```python
def test_user_creds(user_creds):
    assert user_creds("John", "x@abc.com") == {
        "name": "John",
        "email": "x@abc.com",
    }
```
Best practices for organizing tests include: organize tests by the testing pyramid, mirror the application code structure, group and organize fixtures, and keep tests outside the application code for scalability.
Mocking
Mocking is a technique that allows you to isolate the piece of code being tested from its dependencies so the test can focus on the code under test in isolation. The unittest.mock package offers Mock and MagicMock objects:
Mock
A mock object simulates the behavior of the object it replaces by creating attributes and methods on-the-fly.
MagicMock
Subclass of Mock with default implementations for most magic methods (__len__, __getitem__, etc.). Useful when mocking objects that rely on Python's dunder methods to enable custom behavior for common operations.
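A short sketch of why this matters; a plain Mock would raise a TypeError here because it does not implement __len__:

```python
from unittest.mock import MagicMock

mock_items = MagicMock()
mock_items.__len__.return_value = 3

# MagicMock pre-configures dunder methods, so len() works out of the box.
assert len(mock_items) == 3
```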
Patching
Patching is a technique that temporarily replaces real objects in code with mock objects during test execution. Patching helps ensure external systems do not affect test outcomes, so tests remain consistent and repeatable.
IMPORTANT - Mocks are NOT stubs!
When we combine the @patch decorator with return_value or side_effect, what we get is effectively a stub, albeit one built from the mock package!
| METHOD | DESCRIPTION |
|--------|-------------|
| return_value | Specify the single value the mock returns every time it is called |
| side_effect | Specify an iterable of values returned across successive calls, or an exception to raise |
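A minimal sketch contrasting the two; the method name fetch is an illustrative assumption:

```python
from unittest.mock import Mock

stub = Mock()

stub.fetch.return_value = 42
assert stub.fetch() == 42  # same value on every call
assert stub.fetch() == 42

stub.fetch.side_effect = [1, 2]
assert stub.fetch() == 1   # successive calls return successive values
assert stub.fetch() == 2

stub.fetch.side_effect = TimeoutError  # side_effect can also raise exceptions
```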
Difference
In pytest, Mock and patch are both tools for simulating or replacing parts of your code during testing. Mock creates mock objects, while patch temporarily replaces real objects with mocks during tests to isolate the code under test:
Mock

```python
from unittest.mock import Mock

mock_obj = Mock()
mock_obj.some_method.return_value = 42
result = mock_obj.some_method()
assert result == 42
```

patch

```python
from unittest.mock import patch

import module_name  # hypothetical module that defines external_function


@patch('module_name.external_function')
def test_function(mock_external):
    mock_external.return_value = "Mock data"
    # Call through the module so the patched attribute is used.
    result = module_name.external_function()
    assert result == "Mock data"
```
When creating mocks, it is critical to ensure mock objects accurately reflect the objects they are replacing. Thus, it is best practice to use autospec=True so that mock objects respect the signatures of the functions being replaced!
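A short sketch using a real standard-library function to show the effect of autospec=True:

```python
import os
from unittest.mock import patch


def test_autospec_respects_signature():
    with patch("os.getcwd", autospec=True) as mock_getcwd:
        mock_getcwd.return_value = "/tmp"
        assert os.getcwd() == "/tmp"
        # os.getcwd("extra") would raise TypeError here, because the
        # spec'd mock rejects arguments the real function does not accept.
```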
Assertions
For completeness, here is a list of assertion methods to verify that a method on a mock object was called during tests:
| METHOD | DESCRIPTION |
|--------|-------------|
| assert_called | verify the method on the mock object was called at least once during the test |
| assert_called_once | verify the method on the mock object was called exactly one time |
| assert_called_once_with | verify the method on the mock object was called exactly once, with specific args |
| assert_called_with | verify the most recent call to the method on the mock object used specific args |
| assert_not_called | verify the method on the mock object was not called during the test |
| assert_has_calls | verify the method on the mock object was called with the given calls, in order |
| assert_any_call | verify the method on the mock object was called with specific args at least once |
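A quick sketch exercising a few of these; the notifier mock and its send method are illustrative:

```python
from unittest.mock import Mock, call

notifier = Mock()
notifier.send("alice")
notifier.send("bob")

notifier.send.assert_called_with("bob")   # checks the most recent call only
notifier.send.assert_any_call("alice")    # matches any call made so far
notifier.send.assert_has_calls([call("alice"), call("bob")])  # checks order
```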
Monkeypatch
Monkeypatching is a technique used to modify code behavior at runtime, especially where certain dependencies or settings, for example environment variables or system paths, make it challenging to isolate functionality:
app.py

```python
import os


def get_app_mode() -> str:
    app_mode = os.getenv("APP_MODE")
    return app_mode.lower()
```

test_app.py

```python
from app import get_app_mode


def test_get_app_mode(monkeypatch):
    """Test behavior when APP_MODE is set."""
    monkeypatch.setenv("APP_MODE", "Testing")
    assert get_app_mode() == "testing"
```
pytest-mock
pytest-mock is a pytest plugin built on top of unittest.mock that provides an easy-to-use mocker fixture, which can be used to create mock objects and patch functions. When you use the mocker.patch() method provided by pytest-mock, the default behavior is to replace the target object with a MagicMock().
```bash
pip install pytest-mock
```
app.py

```python
import requests
from http import HTTPStatus


def get_user_name(user_id: int) -> str:
    response = requests.get(f"https://api.example.com/users/{user_id}")
    return response.json()['name'] if response.status_code == HTTPStatus.OK else None
```
test_app.py

```python
from http import HTTPStatus
from app import get_user_name


def test_get_user_name(mocker):
    mock_response = mocker.Mock()
    mock_response.status_code = HTTPStatus.OK
    mock_response.json.return_value = {'name': 'Test'}
    mocker.patch('app.requests.get', return_value=mock_response)
    result = get_user_name(1)
    assert result == 'Test'
```
Legacy
In many legacy Python codebases you may find references to Mock(), MagicMock(), and the @patch decorator from unittest.mock used alongside pytest. Teams often keep the old style unless there is a compelling reason to refactor it.
Recommendation
However, here are some recommendations to prefer pytest-mock and the mocker fixture for future unit testing:
1. Prefer pytest-mock and the mocker fixture: cleaner syntax than unittest.mock.patch, automatic cleanup after each test, plays well with other pytest fixtures, and centralizes all patching into one fixture (mocker).
2. Use monkeypatch for patching environment variables, system paths, etc.: prefer monkeypatch for clarity and idiomatic pytest style, e.g. os.environ, system paths, or patching open().
3. Avoid @patch decorators unless migrating old tests: they can be harder to read and to stack with multiple patches; inline mocker.patch() is cleaner.
4. Use autospec=True when mocking complex or external APIs: it ensures mocks behave like the real objects and catches bad call signatures.
5. Use fixtures to share mocks across tests: when a mock is used by multiple tests, define it as a fixture.
Prefer pytest-mock (mocker fixture) for readability and less boilerplate. Import tools like MagicMock, Mock, call, ANY from unittest.mock when needed. Avoid @patch unless needed — inline mocker.patch() is usually cleaner. Keep everything in one style within a test module for consistency.
pytest-asyncio
Concurrency allows a program to execute its tasks efficiently and asynchronously, i.e. making progress on some tasks while others are waiting. pytest-asyncio simplifies handling event loops and managing async fixtures throughout unit testing.
```bash
pip install pytest-asyncio
```
app.py

```python
import asyncio


async def fetch_data():
    # Simulate I/O operation.
    await asyncio.sleep(1)
    return {"status": "OK", "data": [42]}
```

test_app.py

```python
import pytest
from app import fetch_data


@pytest.mark.asyncio
async def test_fetch_data():
    result = await fetch_data()
    assert result["status"] == "OK"
    assert result["data"] == [42]
```
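pytest-asyncio also ships an async-aware fixture decorator; here is a minimal sketch, with the client payload as an illustrative stand-in for a real async resource:

```python
import asyncio

import pytest
import pytest_asyncio


@pytest_asyncio.fixture
async def api_client():
    # Async setup runs inside the event loop.
    await asyncio.sleep(0)  # stand-in for opening a connection
    yield {"connected": True}
    # Async teardown runs after the test completes.
    await asyncio.sleep(0)


@pytest.mark.asyncio
async def test_client_connected(api_client):
    assert api_client["connected"]
```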
CI/CD
GitHub Actions is a feature-rich CI/CD platform that offers an easy and flexible way to automate your testing processes. GitHub Actions mainly consists of files called workflows. A workflow file contains a job, or several jobs, each consisting of a sequence of steps. Here is a sample YAML file that will trigger the workflow on a git push:
.github/workflows/run_test.yml
```yaml
name: Run Unit Test via Pytest

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.10"]

    steps:
      - uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          # pytest and coverage are needed below even without a requirements.txt
          pip install pytest coverage
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: Lint with Ruff
        run: |
          pip install ruff
          ruff check --output-format=github --target-version=py310 .
        continue-on-error: true
      - name: Test with pytest
        run: |
          coverage run -m pytest -v -s
      - name: Generate Coverage Report
        run: |
          coverage report -m
```
Summary
To summarize, we have set up pytest for more robust unit testing with mocks and stubs via patching. Looking forward, there are additional ways to improve the unit test development experience with pytest, as per the article:
1. Use Markers To Prioritise Tests: organize tests so that key functionality is exercised first; running tests for critical functionality first provides faster feedback.
2. Do More With Less (Parametrized Testing): parametrized testing lets you test multiple scenarios in a single test function, feeding different parameters into the same test logic to cover more scenarios with less code (see the sketch after this list).
3. Profiling Tests: identify slow-running unit tests using the --durations=XXX flag; use the pytest-profiling plugin to generate tabular and heat graphs.
4. Run Tests In Parallel (Use pytest-xdist): use the pytest-xdist plugin to distribute tests across multiple CPUs; tests run in parallel, use resources better, and provide faster feedback!
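As referenced in item 2, here is a minimal sketch of a parametrized test; the double function is a hypothetical stand-in for your code under test:

```python
import pytest


def double(x: int) -> int:
    # Hypothetical function under test.
    return x * 2


@pytest.mark.parametrize(
    ("value", "expected"),
    [(0, 0), (2, 4), (-3, -6)],
)
def test_double(value, expected):
    # One test function runs once per parameter tuple.
    assert double(value) == expected
```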