UsdLux provides a representation for lights and related components that are common to many graphics environments and therefore suitable for interchange.
The goal of UsdLux is to serve as a basis for lighting interchange and for renderer- and pipeline-specific extension. The UsdLux core includes common light types, light filters, and the API schemas that define shared light behaviors.
This core can be extended to add more types of lights, filters, or other features.
For a comprehensive list of the types, see the full class hierarchy.
UsdLuxLightAPI defines all the common attributes and behaviors that are considered to be core to all lights. Thus, we define a UsdLux light as any prim that has a UsdLuxLightAPI applied. In other words, applying a UsdLuxLightAPI imparts the quality of "being a light" to a prim.
Most light types, such as UsdLuxRectLight or any type that inherits from UsdLuxBoundableLightBase or UsdLuxNonboundableLightBase, have an applied UsdLuxLightAPI automatically built in to their type and therefore will always be lights. However, because the quality of being a light is carried by an applied API schema, any geometry can be turned into a light, either by applying a UsdLuxLightAPI to it directly, or by applying an API schema that itself has a built-in UsdLuxLightAPI, as we would by applying UsdLuxMeshLightAPI to a UsdGeomMesh.
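For example, a mesh light authored in usda might look like the following fragment (prim name and intensity value are illustrative):

```usda
def Mesh "EmissiveShade" (
    prepend apiSchemas = ["MeshLightAPI"]
)
{
    # MeshLightAPI builds in LightAPI, so this mesh now "is a light"
    float inputs:intensity = 5
}
```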
By convention, lights with a primary axis emit along -Z. Area lights are centered in the XY plane and are 1 unit in diameter.
UsdLux objects are UsdGeomXformable, so they use its set of transform operators and inherit transforms hierarchically. It is recommended that only translations and rotations be used with lights, to avoid introducing difficulties for light sampling and integration. Lights have explicit attributes to adjust their size.
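For instance, a rect light might be positioned with translate and rotate ops only, using its width and height attributes for sizing (prim name and values are illustrative):

```usda
def RectLight "FillLight"
{
    float inputs:width = 2
    float inputs:height = 1
    double3 xformOp:translate = (0, 3, 2)
    float xformOp:rotateX = -30
    uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:rotateX"]
}
```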
UsdLux provides the UsdLuxLightAPI, which can be applied to arbitrary user-supplied geometry, as well as concrete lights for specific shapes: spheres, rectangles, disks, and cylinders. The specific shapes exist because they can support superior, analytic sampling methods.
UsdLux is designed primarily for physically-based cinematic lighting with area lights. Accordingly, it does not include zero-area point or line lights. However, some backends benefit from a point or line approximation. This intent can be indicated using the treatAsPoint and treatAsLine attributes on UsdLuxSphereLight and UsdLuxCylinderLight. These attributes are hints that avoid the need for backends to use heuristics (such as an arbitrary epsilon threshold) to determine the approximation. These hints can be set at the time of creation (or export), where the most information about the visual intent exists.
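As a sketch, the hint might be authored like this in a layer (prim name and radius are illustrative):

```usda
def SphereLight "Bulb"
{
    float inputs:radius = 0.05
    # Hint for backends that benefit from a zero-area approximation
    bool treatAsPoint = true
}
```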
UsdLux does not provide support for expressing live transform constraints, such as to attach lights to moving geometry. This is consistent with USD's policy of storing the computed result of rigging, not the rigging itself.
Most renderers consuming OpenUSD today are RGB renderers, rather than spectral. Units in RGB renderers are tricky to define as each of the red, green and blue channels transported by the renderer represents the convolution of a spectral exposure distribution, e.g. CIE Illuminant D65, with a sensor response function, e.g. CIE 1931 x. Thus the main quantity in an RGB renderer is neither radiance nor luminance, but "integrated radiance" or "tristimulus weight".
The emission of a default light with intensity 1 and color [1, 1, 1] is an Illuminant D spectral distribution with chromaticity matching the rendering color space white point, normalized such that a ray normally incident upon the sensor with EV0 exposure settings will generate a pixel value of [1, 1, 1] in the rendering color space.
Given the above definition, the luminance of such a default light will be 1 nit (cd∕m²), and its emission spectral radiance distribution is easily computed by appropriate normalization.
For brevity, the term emission will be used in the documentation to mean "emitted spectral radiance" or "emitted integrated radiance/tristimulus weight", as appropriate.
The method of "uplifting" an RGB color to a spectral distribution is unspecified other than that it should round-trip under the rendering illuminant to the limits of numerical accuracy.
Note that some color spaces, most notably ACES, define their white points by chromaticity coordinates that do not exactly line up to any value of a standard illuminant. Because we do not define the method of uplift beyond the round-tripping requirement, we discourage the use of such color spaces as the rendering color space, and instead encourage the use of color spaces whose white point has a well-defined spectral representation, such as D65.
In order to generate an image, a camera effectively integrates irradiance at a patch on the sensor over time to get exposure, which is converted to an electrical signal. The default behaviour of many renderers is to ignore this, which as we have seen means that 1 nit incident on the sensor results in an output signal of 1.0.
1 nit is a tiny amount of light, equivalent by definition to a single candle viewed at a distance. Thus, using physically plausible values for light brightness will result in blown-out images unless exposure is taken into account. There is already a means of communicating sensor exposure via UsdGeomCamera::GetExposureAttr(), and it is expected that all renderers implementing UsdLux respect the value of that attribute by scaling the rendered image by pow(2, exposure).
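Concretely, that scaling can be sketched in pure Python (an illustration of the convention, not USD API code):

```python
def apply_camera_exposure(pixel, exposure):
    """Scale a linear rendered pixel by the camera's exposure attribute.

    `exposure` is expressed in stops, as on UsdGeomCamera; EV0
    (exposure = 0) leaves the image unchanged, so a 1 nit signal
    still produces a pixel value of 1.0.
    """
    scale = 2.0 ** exposure
    return [channel * scale for channel in pixel]
```

Opening up by one stop (exposure = 1) doubles every channel; stopping down by one stop halves it.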
Colors specified in attributes are in energy-linear terms, and follow the USD conventions for authoring attribute color spaces on attributes and prims.
UsdLux presumes a physically-based lighting model where falloff with distance is a consequence of reduced visible solid angle. Environments that do not measure the visible solid angle are expected to provide an approximation, such as inverse-square falloff. Further artistic control over attenuation can be modelled as light filters.
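An environment that cannot measure the visible solid angle might fall back to the classic approximation sketched below (a hypothetical helper, not part of UsdLux):

```python
def inverse_square_falloff(intensity, distance):
    """Approximate attenuation for a light at the given distance.

    Stands in for the physically-based falloff that emerges naturally
    from the shrinking visible solid angle of an area light; distances
    at or below zero are clamped to full intensity.
    """
    if distance <= 0.0:
        return intensity
    return intensity / (distance * distance)
```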
The behavior of all attributes in UsdLuxLightAPI is documented on their Get() methods. All attributes are expected to be supported by all light types, with the effect of each attribute applying multiplicatively in sequence.
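As a sketch of that multiplicative sequencing, consider how intensity and exposure (both UsdLuxLightAPI attributes) combine into a single linear scale on the light color; the helper below is illustrative, not USD API code:

```python
def combined_emission(color, intensity, exposure):
    """Apply UsdLux-style multiplicative light attributes in sequence.

    `intensity` scales emission linearly, while `exposure` scales it
    by a power of two; the two compose multiplicatively.
    """
    scale = intensity * (2.0 ** exposure)
    return [channel * scale for channel in color]
```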
The example hdEmbree delegate provides a reference implementation, as well as a series of unit tests and reference images for renderers to check their own implementation against.
More complex behaviors are provided via a set of API classes. These behaviors are common and well-defined but may not be supported in all rendering environments. These API classes provide functions to specify the relevant semantic behavior.
Like other USD schemas, the UsdLux core may be extended to address features specific to certain environments; renderer- or pipeline-specific capabilities can be added as extensions.
We provide a utility to aide in extending the UsdLux core schemas, namely the schema generation script usdgenschemafromsdr. This script can be used to generate new light schemas from the shader nodes in the SdrRegistry. These light schemas can be typed schemas that describe a completely new light type or can be applied API schemas that extend an existing light type automatically by being specified to auto apply to an existing light (such as UsdLuxDiskLight) or light API (such as UsdLuxMeshLightAPI).
We expect "published, pipeline citizen" render delegates to provide codeless schema libraries that contain typed schemas and/or applied API schemas that define or extend all of the renderer's core light types. But some render delegates may be developed as part of a proprietary application package that is only using USD as a mechanism to communicate to that render delegate. In other words, the application doesn't really want to participate in the open USD ecosystem, but needs to use it for rendering scenes imported from USD and augmented using application/renderer-specific lighting features, or finds it useful to use USD as an archiving format to send jobs to a render-farm so that the full application need not be run there.
In order to lower the barrier somewhat for such applications, we provide UsdLuxPluginLight and UsdLuxPluginLightFilter, concrete schema types that specify a "node identification" encoding just like UsdShadeShader, so that the application need only plug its extensions into Sdr (by providing node definitions that will be consumed by render delegates), and not be required to take the extra step of generating USD schema definitions. Because they provide less easily accessible information to users, we do not advocate using these types in pipeline-persistent "user-level" USD files.