Performance Metrics
Starting with release 24.11, performance metrics are generated for each USD release using a specific set of assets and specific hardware and software configurations. This page describes which metrics are collected, which hardware and software configurations are used, the actual metrics results, and how to generate the metrics locally.
What We Measure
For a given asset, our performance script captures the following metrics by default (items in bold are reported on this page):
Time to load and configure USD plugins for usdview
**Time to open the stage**
Time to reset the usdview prim browser
Time to initialize the usdview UI
**Time to render first image (in usdview)**
**Time to shut down Hydra**
**Time to close the stage**
Time to tear down the usdview UI
**Total time to start and quit usdview**
Time to traverse the prims in the stage
We run 10 iterations for each asset, and capture the minimum and maximum times for that set of iterations. We also calculate the mean time across the 10 iterations.
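The aggregation across iterations can be sketched as follows (a minimal illustration of the min/max/mean reduction, not the actual measurement script; the timing values are hypothetical):

```python
import statistics

def aggregateTimings(times):
    """Reduce per-iteration timings (in seconds) to the reported statistics."""
    return {
        "min": min(times),
        "max": max(times),
        "mean": statistics.mean(times),
    }

# e.g. ten hypothetical "open stage" timings from ten iterations
timings = [0.144, 0.150, 0.148, 0.146, 0.147, 0.149, 0.145, 0.148, 0.147, 0.146]
stats = aggregateTimings(timings)
```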
For each asset, we first warm the filesystem cache by loading the asset in usdview, to ensure we’re not including cache performance issues in our metrics.
All assets used to measure performance are assumed to be available locally. Time to download assets is not measured or included as part of the gathered performance metrics.
What Environment Is Used
This section describes the computing environment used to generate the published performance metrics. The following operating systems and hardware are currently used.
Note
Machine specifications are subject to change. If specifications do change for a given release, historical performance measurements will be re-run to backfill any outdated data.
Linux
OS: CentOS Linux 7
CPU: 23 cores of Intel(R) Xeon(R) Platinum 8268 CPU @ 2.90GHz
RAM: 117GB
GPU: NVIDIA TU102GL (Quadro RTX 6000/8000)
macOS
OS: macOS 14.3
CPU: Apple M2 Ultra (20 Core)
RAM: 192GB
GPU: Apple M2 Ultra GPU (76 Core)
Windows
OS: Microsoft Windows 10 Enterprise
CPU: AMD EPYC 7763 64-Core Processor, 2450 Mhz, 31 Core(s), 31 Logical Processor(s)
RAM: 128GB
GPU: NVIDIA RTXA6000-24Q
USD Build
For each of the operating systems and hardware platforms listed previously, we build USD with the same build configuration. We use a stock invocation of build_usd.py with the default options (release build, Python components, imaging and USD imaging components, usdview, etc.).
Metrics
All metrics are measured in seconds.
Performance Graphs Per Platform
The following graphs show the time (in seconds) to open and close usdview for each asset. Graphs are provided for Linux, macOS, and Windows platforms (as described in What Environment Is Used).
Note
For the 24.11 release, there are known issues with obtaining metrics for the Moore Lane asset on Windows, and the create_first_image metric on macOS. We are actively investigating these issues and will update published metrics when these issues are resolved.
Standard Shader Ball
This asset is designed to be a comprehensive test of a broad array of material properties in a single render. Geometry is expressed in USD; materials are defined using MaterialX; texture maps are provided in OpenEXR format and encoded in the ACEScg color space of the Academy Color Encoding System (ACES).
The shader ball asset can be downloaded here.
Linux

Metric | 24.11 | 25.02
---|---|---
Open stage | min: 0.144294, max: 0.150255, mean: 0.1480813 | TBD
Render first image | min: 2.870519, max: 2.904303, mean: 2.8882178 | TBD
Close stage | min: 0.000499, max: 0.000754, mean: 0.0006054 | TBD
Shut down Hydra | min: 0.022082, max: 0.022668, mean: 0.0223569 | TBD
macOS

Metric | 24.11 | 25.02
---|---|---
Open stage | min: 0.088529, max: 0.098122, mean: 0.0901067 | TBD
Render first image | min: N/A, max: N/A, mean: N/A | TBD
Close stage | min: 0.000241, max: 0.000296, mean: 0.0002641 | TBD
Shut down Hydra | min: 0.009064, max: 0.014153, mean: 0.0116556 | TBD
Windows

Metric | 24.11 | 25.02
---|---|---
Open stage | min: 0.284833, max: 0.320872, mean: 0.2995894 | TBD
Render first image | min: 4.546317, max: 4.963428, mean: 4.7824023 | TBD
Close stage | min: 0.001371, max: 0.002616, mean: 0.0017503 | TBD
Shut down Hydra | min: 0.040305, max: 0.04744, mean: 0.0423858 | TBD
Kitchen Set
This asset provides a complex kitchen scene.
The Kitchen Set asset can be downloaded here.
Linux

Metric | 24.11 | 25.02
---|---|---
Open stage | min: 0.069993, max: 0.107452, mean: 0.0842476 | TBD
Render first image | min: 0.279229, max: 0.3028, mean: 0.2903682 | TBD
Close stage | min: 0.007057, max: 0.007835, mean: 0.0074928 | TBD
Shut down Hydra | min: 0.009247, max: 0.009617, mean: 0.0093791 | TBD
macOS

Metric | 24.11 | 25.02
---|---|---
Open stage | min: 0.06026, max: 0.09047, mean: 0.0732717 | TBD
Render first image | min: N/A, max: N/A, mean: N/A | TBD
Close stage | min: 0.002246, max: 0.002519, mean: 0.0023587 | TBD
Shut down Hydra | min: 0.006481, max: 0.013024, mean: 0.0102488 | TBD
Windows

Metric | 24.11 | 25.02
---|---|---
Open stage | min: 0.13587, max: 0.177268, mean: 0.1491912 | TBD
Render first image | min: 2.862769, max: 3.086248, mean: 2.9800833 | TBD
Close stage | min: 0.019616, max: 0.024004, mean: 0.0215864 | TBD
Shut down Hydra | min: 0.020476, max: 0.023532, mean: 0.0217044 | TBD
ALab
ALab is a full production scene created by Animal Logic. It contains over 300 assets, complete with high-quality textures and two characters with looping animation in shot context. It is supplied as four separate downloads: the full production scene, high-quality textures, shot cameras matching the ALab trailer, and baked procedural fur and fabric for the animated characters.
The metrics have been measured with the base asset merged with the additional “techvars” info.
The ALab asset can be downloaded here.
Linux

Metric | 24.11 | 25.02
---|---|---
Open stage | min: 0.39004, max: 0.516364, mean: 0.4642872 | TBD
Render first image | min: 7.897517, max: 8.556198, mean: 8.1684536 | TBD
Close stage | min: 0.084709, max: 0.090772, mean: 0.0874594 | TBD
Shut down Hydra | min: 0.12083, max: 0.165165, mean: 0.1328269 | TBD
macOS

Metric | 24.11 | 25.02
---|---|---
Open stage | min: 0.419115, max: 0.44761, mean: 0.4308989 | TBD
Render first image | min: N/A, max: N/A, mean: N/A | TBD
Close stage | min: 0.034878, max: 0.041104, mean: 0.037702 | TBD
Shut down Hydra | min: 0.100348, max: 0.120492, mean: 0.1080563 | TBD
Windows

Metric | 24.11 | 25.02
---|---|---
Open stage | min: 0.453708, max: 0.465223, mean: 0.4604347 | TBD
Render first image | min: 11.070572, max: 11.738015, mean: 11.290158 | TBD
Close stage | min: 0.337716, max: 0.364083, mean: 0.35308 | TBD
Shut down Hydra | min: 0.219807, max: 0.384717, mean: 0.2734629 | TBD
Moore Lane
4004 Moore Lane is a fully composed, high-quality scene for testing various visual computing issues. The house itself wraps around a number of typical problem areas for light transport and noise sampling, such as thin openings in exterior walls, recessed area light sources, deeply shadowed corners, and high-frequency details. The exterior landscape surrounding the house consists of a relatively simple ecosystem of instanced plants that adds further complexity. In addition to the geometry itself being designed to exacerbate typical rendering issues, the USD structure was created to support several layers of testing.
The metrics have been measured using the contained MooreLane_ASWF_0623.usda file.
The Moore Lane asset can be downloaded here.
Linux

Metric | 24.11 | 25.02
---|---|---
Open stage | min: 0.075535, max: 0.094023, mean: 0.0791025 | TBD
Render first image | min: 11.484618, max: 11.776089, mean: 11.6368226 | TBD
Close stage | min: 0.020613, max: 0.02107, mean: 0.0207992 | TBD
Shut down Hydra | min: 0.103945, max: 0.138991, mean: 0.1194904 | TBD
macOS

Metric | 24.11 | 25.02
---|---|---
Open stage | min: 0.085466, max: 0.086856, mean: 0.0859231 | TBD
Render first image | min: N/A, max: N/A, mean: N/A | TBD
Close stage | min: 0.003298, max: 0.00522, mean: 0.0036797 | TBD
Shut down Hydra | min: 0.222393, max: 0.28412, mean: 0.2741197 | TBD
Windows

Metric | 24.11 | 25.02
---|---|---
Open stage | min: N/A, max: N/A, mean: N/A | TBD
Render first image | min: N/A, max: N/A, mean: N/A | TBD
Close stage | min: N/A, max: N/A, mean: N/A | TBD
Shut down Hydra | min: N/A, max: N/A, mean: N/A | TBD
Running Performance Metrics Locally
We encourage developers to run the USD performance metrics to measure performance impacts of OpenUSD code contributions. Performance metrics can also be run to validate local runtime environments and hardware configurations.
Performance metrics are generated using the usdmeasureperformance.py script found in pxr/extras/performance. See the usdmeasureperformance tool docs for more information on the available parameters.
usdmeasureperformance.py uses usdview and testusdview, so make sure both are on your current path or aliased properly.
For gathering the metrics published on this page, the following parameters are used (for each asset):
```shell
python usdmeasureperformance.py <asset.usda> -i 10 -a min -o <metrics output filename.yaml>
```
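To gather the same statistics across several assets in one pass, the invocation above can be scripted. A minimal sketch (the asset paths are placeholders; the output naming convention here is an assumption, not part of the tool):

```python
from pathlib import Path

def buildCommand(asset, iterations=10):
    """Build the usdmeasureperformance.py invocation used for the published
    metrics: 10 iterations, "min" aggregation, YAML output named after the asset.
    The output filename pattern is a hypothetical choice for this sketch."""
    out = Path(asset).stem + "_metrics.yaml"
    return ["python", "usdmeasureperformance.py", asset,
            "-i", str(iterations), "-a", "min", "-o", out]

# Placeholder asset path -- substitute your local download.
cmd = buildCommand("KitchenSet/Kitchen_set.usd")
# Run with: subprocess.run(cmd, check=True)
```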
Adding Custom Metrics
You can add your own custom metrics and have usdmeasureperformance.py include them as part of the set of metrics that it measures.
To define a custom metric, create a script file that defines a testUsdviewInputFunction() function that will be passed to testusdview.
For example, if you wanted to add a metric named "process prims" that traverses the stage and processes each prim in some way, you might have a processPrimsMetric.py script that looks something like:
```python
from pxr import Usd, UsdUtils, Usdviewq

def testUsdviewInputFunction(appController):
    with Usdviewq.Timer("process prims", True):
        stage = appController._dataModel.stage
        for prim in stage.Traverse():
            # process prim as needed, etc.
            pass
```
See also the "traverse stage" example in pxr/extras/performance/explicitMetrics/stageTraversalMetric.py.
To include your custom metrics when running usdmeasureperformance.py, add your metrics script name and metric name via the --custom-metrics parameter. For example, to include the "process prims" metric for "MyTestAsset.usda", you would use a command line similar to:

```shell
python usdmeasureperformance.py MyTestAsset.usda --custom-metrics processPrimsMetric.py:'process prims'
```
usdmeasureperformance.py will look for your custom metric script relative to the directory from which the usdmeasureperformance.py script is run.
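The --custom-metrics value pairs a script filename with a quoted metric name, separated by a colon. A hypothetical parser illustrating that spec format (the actual parsing inside usdmeasureperformance.py may differ):

```python
def parseCustomMetricSpec(spec):
    """Split a spec like "processPrimsMetric.py:'process prims'" into
    (scriptName, metricName). Hypothetical helper for illustration only;
    not the tool's actual implementation."""
    script, _, metric = spec.partition(":")
    return script, metric.strip("'\"")

# e.g. the spec from the command line above
script, metric = parseCustomMetricSpec("processPrimsMetric.py:'process prims'")
```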