Running tests

At least Clang 7.0 and Clang-Tidy 7.0 are required to run the tests.

Before running the tests, the CodeChecker package needs to be built:

# Build the package
make package

Every test target has an *_in_env version (e.g. make test_in_env) which automatically creates and sources a virtualenv for the tests.

  • make test: run all tests (unit and functional, for the analyzer and the web)
  • make test_in_env: run all tests (unit and functional), automatically setting up and sourcing a virtualenv
  • make test_unit: unit tests for the analyzer and the web
  • make test_functional: functional tests for the analyzer and the web (SQLite and PostgreSQL)
  • make test_web_sqlite: functional tests for the web part (SQLite)
  • make TEST="tests/functional/cmdline" test_web_feature: run only a specific web test (SQLite)
  • make TEST="tests/functional/analyze" test_analyzer_feature: run only a specific analyzer test

Change test workspace root

The root directory for the tests can be changed with the CC_TEST_WORKSPACE environment variable. This makes it possible to run multiple test types (SQLite, PostgreSQL) in parallel. Every test should create its temporary directories under the given root directory.

SQLite tests with a changed workspace root can be run like this:

CC_TEST_WORKSPACE=/tmp/sqlite_test_workspace make -C web test_sqlite

Clean test workspace

make test_clean

Run tests with PostgreSQL

At least one of the database drivers needs to be available to use the PostgreSQL database backend. psycopg2 is used by default; if it is not found, pg8000 is used.
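The fallback described above can be sketched as follows. This is an illustrative re-implementation of the driver selection, not CodeChecker's actual code, and the function name is hypothetical:

```python
# Sketch of the driver fallback described above: prefer psycopg2 and
# fall back to pg8000 when psycopg2 is not installed.
# Illustrative only; this is not CodeChecker's actual code.
def select_pg_driver():
    try:
        import psycopg2  # noqa: F401
        return 'psycopg2'
    except ImportError:
        try:
            import pg8000  # noqa: F401
            return 'pg8000'
        except ImportError:
            # Neither driver is available; PostgreSQL tests cannot run.
            return None
```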

Pytest configuration

The pytest.ini configuration file in the repository root is used to configure the test runs. Further configuration options can be found in the pytest configuration documentation.

You can pass additional pytest arguments to the make targets by using the environment variable EXTRA_PYTEST_ARGS:

EXTRA_PYTEST_ARGS='-k test_source_suppress_export'  TEST=tests/functional/suppress make test_analyzer_feature

The above example shows how to select a specific test case via the -k option passed through EXTRA_PYTEST_ARGS, within a specific test file given by TEST, in the analyzer tests via the test_analyzer_feature target.

Virtual environment to run tests

The virtual environment used to run the tests is automatically created and sourced by the make test* commands. To create a Python virtualenv for development:

make venv_dev

Create new unit test

Use the generator script to create the new test files for module_name.

Add new tests to the created file under tests/unit/ and run the unit tests:

make test_unit
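For illustration, a unit test module under tests/unit/ is a plain Python unittest module. The file, class, and method names below are hypothetical, not output of the generator script:

```python
# Hypothetical skeleton of a unit test module,
# e.g. tests/unit/test_module_name.py.
import unittest


class TestModuleName(unittest.TestCase):
    """Unit tests for the hypothetical module_name module."""

    def test_example(self):
        """A trivial test case demonstrating the structure."""
        self.assertEqual(1 + 1, 2)
```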

Create new functional test

Use the generator script to create the new test files for newfeature.

From the repository root, run the new test template (it should pass):

make TEST="tests/functional/mynewfeature" run_test

Create a C/C++ project and generate test reports

If possible, generate the report files used during the test and add them to the repository. This speeds up the tests, because no analysis is needed during each test run, and the results will not depend on the analyzer version in the test environment.

Make sure to generate plist reports in which the source file entries contain only the source file name, not the full source file path, because the reports are moved to temporary directories during the tests.

A simple C/C++ test project should look like this:

Makefile              # build and analyze targets for the test source files
main.cpp              # cpp file with errors in it
reports/base/*.plist  # first report set used for the tests
reports/new/*.plist   # second report set used for the tests


In each functional test, the setup code in mynewfeature/ is responsible for:

  • exporting test-related data to the test directories
  • analyzing a test project (multiple times if needed)
  • starting a server connected to a specific database (it should be stopped in the teardown)
  • preparing and cleaning up the test workspace
  • ...
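The workspace handling steps above can be sketched as below. The function names and the CC_TEST_WORKSPACE handling are illustrative assumptions, not CodeChecker's actual setup code:

```python
# Hypothetical sketch of functional test setup/teardown for the
# workspace. Names are illustrative, not CodeChecker's actual code.
import os
import shutil
import tempfile


def setup_workspace(prefix='mynewfeature-'):
    """Create a per-test workspace under the configured root directory."""
    root = os.environ.get('CC_TEST_WORKSPACE', tempfile.gettempdir())
    os.makedirs(root, exist_ok=True)
    return tempfile.mkdtemp(prefix=prefix, dir=root)


def teardown_workspace(workspace):
    """Remove the temporary workspace created for the test."""
    shutil.rmtree(workspace, ignore_errors=True)
```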

Add new functional testcase

The actual test cases go into the mynewfeature/ file. The setup part of these tests reads the generated test configuration file. Test cases should take test-related configuration values ONLY from the generated configuration file, and should modify ONLY the files in the workspace provided for them.
Test cases can:

  • rerun the analysis
  • connect to the server started during the test setup
  • run command line commands
  • modify configuration files
  • ...