|
All USD code changes should have proper test coverage to ensure code quality. USD uses CMake's built-in CTest program for testing. Use the following guidelines to understand how to write and run USD tests.
Unit tests are required for all changes to a USD module. Unit tests can be written in C++ and/or Python. USD unit test source files and asset files are located in a testenv directory in the appropriate USD module's directory, and are added to the USD module's CMakeLists.txt, as described below.
Tests should be named using the format "test<USD module><Thing Being Tested>". For example, "testUsdFlatten", or "testUsdGeomExtentTransform".
Within the test code, test methods should be named using the format "test_<Some Behavior>". For example, "test_Flatten", "test_Export", etc.
Test source files and resources (such as USD files, images, etc.) should be located as follows:

- Test source files go in the testenv directory for a given USD module. For example, all test source files for usdLux are in pxr/usd/usdLux/testenv/.
- Test resources go in a testenv subdirectory named for the test. For example, USD files needed for the testUsdLuxLightListAPI test are in pxr/usd/usdLux/testenv/testUsdLuxLightListAPI/.
- You may see some existing resource directories with .testenv appended to the test name. Appending .testenv is an older convention that is no longer needed.

Register your test in the USD module's CMakeLists.txt to ensure the CTest harness runs your test.
Tests are registered using the pxr_register_test wrapper function, which also specifies the list of steps the harness will take to run your test. Common options for pxr_register_test follow (note this is not a complete list; see cmake/macros/Public.cmake for full details on pxr_register_test):
Option | Meaning |
---|---|
STDOUT_REDIRECT <file-name> | Redirect stdout from a test to the file name specified. |
STDERR_REDIRECT <file-name> | Redirect stderr from a test to the file name specified. |
DIFF_COMPARE <file-name>+ | After the test has completed, diff file-name(s) against the corresponding file(s) in <pxr_install_test_dir dir>/baseline/ |
ENV <key=value>+ | A set of key-value pairs specifying environment configuration to set. |
EXPECTED_RETURN_CODE <num> | An exit code to match against after running the test. |
PYTHON | Indicates that the test is Python based. |
REQUIRES_DISPLAY | Indicates that the test requires a connected display to run. If there is no display at the time of running the test, the wrapper will skip it. |
COMMAND <cmd> | The main command to run. |
PRE_COMMAND <cmd> / POST_COMMAND <cmd> | Commands to run before or after the main command. These currently don't provide output redirection and expect an exit code of 0. |
Example
The test registration for the testUsdFlatten Python test looks as follows:
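A representative registration, reconstructed here from the options described above (the exact entry in the module's CMakeLists.txt may differ slightly):

```cmake
pxr_register_test(testUsdFlatten
    PYTHON
    COMMAND "${CMAKE_INSTALL_PREFIX}/tests/testUsdFlatten"
    EXPECTED_RETURN_CODE 0
)
```

The PYTHON option marks the test as Python-based, and the harness expects the command to exit with code 0.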
Use pxr_install_test_dir to specify where test assets should be installed when the test harness runs. Note that the name used for DEST must exactly match the name used in the test registration. If your test does not use any additional assets, you don't need to add a pxr_install_test_dir call or add a "<your test name>" directory to testenv/.
Example
The install for testUsdFlatten assets looks like:
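A sketch of that install call, following the convention that assets live in testenv/<test name> and that DEST matches the registered test name:

```cmake
pxr_install_test_dir(
    SRC testenv/testUsdFlatten
    DEST testUsdFlatten
)
```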
Python tests need to be added to CMakeLists.txt using the pxr_test_scripts function. Python unit tests should use Python's unittest module.
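For example, a minimal sketch (the real pxr_test_scripts call in a module's CMakeLists.txt typically lists many scripts):

```cmake
pxr_test_scripts(
    testenv/testUsdFlatten.py
)
```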
Add a pxr_build_test call to CMakeLists.txt to specify how your C++ test is built. Test binaries are installed to ${CMAKE_INSTALL_PREFIX}/tests/. Any libraries needed by the test itself must be explicitly listed in the LIBRARIES section.
Example
The pxr_build_test entry for the C++ testUsdAttributeBlockingCpp test looks like:
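A sketch of the build rule, assuming the test source lives at testenv/testUsdAttributeBlocking.cpp and depends on the usd library (the exact CPPFILES and LIBRARIES entries in the module's CMakeLists.txt may differ):

```cmake
pxr_build_test(testUsdAttributeBlockingCpp
    LIBRARIES
        usd
    CPPFILES
        testenv/testUsdAttributeBlocking.cpp
)
```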
To build the USD tests, enable the PXR_BUILD_TESTS CMake build configuration (this defaults to ON if not set), or use the --tests command line argument if you are building with the build_usd.py script.
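For example (paths are placeholders):

```bash
# Configuring directly with CMake:
cmake -DPXR_BUILD_TESTS=ON [other options] /path/to/USD/source

# Or with the helper build script:
python build_scripts/build_usd.py --tests /path/to/install
```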
To run tests, use CMake's ctest command, running it from the build directory.
To run all tests:
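For example, assuming /path/to/build is your CMake build directory:

```bash
cd /path/to/build
ctest
```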
To run an individual test:
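ctest's -R option selects tests by a regular expression matched against test names, e.g.:

```bash
ctest -R testUsdFlatten
```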
An example individual test run will look something like the following:
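Illustrative output (project path, test number, and timings are placeholders):

```
$ ctest -R testUsdFlatten
Test project /path/to/build
    Start 42: testUsdFlatten
1/1 Test #42: testUsdFlatten ...................   Passed    1.02 sec

100% tests passed, 0 tests failed out of 1

Total Test time (real) =   1.05 sec
```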
The following simple walkthrough describes the basic steps for adding a new "testUsdGeomMyFeature" Python unit test to the usdGeom module.
We add our Python test code by creating a new Python script, testUsdGeomMyFeature.py, in pxr/usd/usdGeom/testenv/.
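A minimal sketch of the script, using the unittest module as described above (the feature being tested is hypothetical, so the assertion here is just a stand-in):

```python
import unittest
from pxr import Usd, UsdGeom

class TestUsdGeomMyFeature(unittest.TestCase):
    def test_MyFeature(self):
        # Build a simple in-memory stage to exercise the feature.
        stage = Usd.Stage.CreateInMemory()
        sphere = UsdGeom.Sphere.Define(stage, '/MySphere')
        self.assertTrue(sphere)

if __name__ == '__main__':
    unittest.main()
```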
We create a testUsdGeomMyFeature directory in testenv/ for any test resources, such as USD files, images, etc. We configure CTest to copy test resources from this directory by adding the following to pxr/usd/usdGeom/CMakeLists.txt:
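A sketch of that addition:

```cmake
pxr_install_test_dir(
    SRC testenv/testUsdGeomMyFeature
    DEST testUsdGeomMyFeature
)
```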
In CMakeLists.txt we register our new test by adding the following registration steps:
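A sketch of the registration, following the pattern shown earlier:

```cmake
pxr_register_test(testUsdGeomMyFeature
    PYTHON
    COMMAND "${CMAKE_INSTALL_PREFIX}/tests/testUsdGeomMyFeature"
    EXPECTED_RETURN_CODE 0
)
```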
We then add our test to pxr_test_scripts:
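A sketch of that call (alongside the module's existing entries):

```cmake
pxr_test_scripts(
    # ...existing test scripts...
    testenv/testUsdGeomMyFeature.py
)
```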
At this point, we can now build USD with our new test:
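For example, using the helper build script (the install path is a placeholder):

```bash
python build_scripts/build_usd.py --tests /path/to/install
```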
Once the build is complete and we've set the appropriate environment variables (PYTHONPATH and PATH) for our build, we can cd to our build directory and run our test:
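For example (the build path is a placeholder):

```bash
cd /path/to/build
ctest -R testUsdGeomMyFeature
```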
The following simple walkthrough describes the basic steps for adding a new "testUsdGeomMyFeature" C++ unit test to the usdGeom module.
We start by adding our C++ test code to a new testUsdGeomMyFeature.cpp file in pxr/usd/usdGeom/testenv/.
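A minimal sketch of the test file (the behavior being exercised is hypothetical; a C++ test signals failure via a nonzero exit code, and TF_AXIOM is a convenient way to assert):

```cpp
#include "pxr/pxr.h"
#include "pxr/base/tf/diagnostic.h"
#include "pxr/usd/sdf/path.h"
#include "pxr/usd/usd/stage.h"
#include "pxr/usd/usdGeom/sphere.h"

PXR_NAMESPACE_USING_DIRECTIVE

int main()
{
    // Build a simple in-memory stage to exercise the feature.
    UsdStageRefPtr stage = UsdStage::CreateInMemory();
    UsdGeomSphere sphere = UsdGeomSphere::Define(stage, SdfPath("/MySphere"));
    TF_AXIOM(sphere);
    return 0;
}
```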
We create a testUsdGeomMyFeature directory in testenv/ for any test resources, such as USD files, images, etc. We configure CTest to copy test resources from this directory by adding the following to pxr/usd/usdGeom/CMakeLists.txt:
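A sketch of that addition (the same pattern as in the Python walkthrough):

```cmake
pxr_install_test_dir(
    SRC testenv/testUsdGeomMyFeature
    DEST testUsdGeomMyFeature
)
```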
In CMakeLists.txt we register our new test by adding the following registration steps:
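A sketch of the registration (no PYTHON option this time, since the test is a compiled binary):

```cmake
pxr_register_test(testUsdGeomMyFeature
    COMMAND "${CMAKE_INSTALL_PREFIX}/tests/testUsdGeomMyFeature"
    EXPECTED_RETURN_CODE 0
)
```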
We add the build steps for our test to CMakeLists.txt using pxr_build_test:
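A sketch of the build rule (the exact LIBRARIES your test needs may differ):

```cmake
pxr_build_test(testUsdGeomMyFeature
    LIBRARIES
        tf
        usd
        usdGeom
    CPPFILES
        testenv/testUsdGeomMyFeature.cpp
)
```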
At this point, we can now build USD with our new test:
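For example, using the helper build script (the install path is a placeholder):

```bash
python build_scripts/build_usd.py --tests /path/to/install
```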
Once the build is complete and we've set the appropriate environment variables (PYTHONPATH and PATH) for our build, we can cd to our build directory and run our test:
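For example (the build path is a placeholder):

```bash
cd /path/to/build
ctest -R testUsdGeomMyFeature
```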
Please include unit tests for your changes in your submission, and don't forget to include any additional resources (USD files, image files, etc.) that your tests use.
Please make sure all unit tests are passing (including any tests you have added) before submitting a PR. Note that the USD GitHub CI/CD for PRs currently only does verification builds and does not run tests, so please run and verify tests on all platforms (Linux, macOS, Windows) if possible.
If you do any additional testing, using your own test integrations, please note the types of tests run and their results in the comments submitted with your commits/PRs. This helps inform code reviewers and helps with troubleshooting down the line.
For submissions, Pixar will run an internal suite of tests to try to find correctness or performance regressions before code is checked into the central code repository. These tests often take several hours to complete. If test issues crop up, a Pixar developer will do initial research into the errors and reach out for help if needed.