Integrators#

In Mitsuba 3, the different rendering techniques are collectively referred to as integrators, since they perform integration over a high-dimensional space. Each integrator represents a specific approach for solving the light transport equation—usually favored in certain scenarios, but at the same time affected by its own set of intrinsic limitations. Therefore, it is important to carefully select an integrator based on user-specified accuracy requirements and properties of the scene to be rendered.

In the XML description language, a single integrator is usually instantiated by declaring it at the top level within the scene, e.g.

<scene version="3.0.0">
    <!-- Instantiate a unidirectional path tracer,
        which renders paths up to a depth of 5 -->
    <integrator type="path">
        <integer name="max_depth" value="5"/>
    </integrator>

    <!-- Some geometry to be rendered -->
    <shape type="sphere">
        <bsdf type="diffuse"/>
    </shape>
</scene>

This section gives an overview of the available choices along with their parameters.

Almost all integrators use the concept of path depth. Here, a path refers to a chain of scattering events that starts at the light source and ends at the camera. It is often useful to limit the path depth when rendering scenes for preview purposes, since this reduces the amount of computation that is necessary per pixel. Furthermore, such renderings usually converge faster and therefore need fewer samples per pixel. When reference quality is desired, one should always leave the path depth unlimited.

The Cornell box renderings below demonstrate the visual effect of a maximum path depth. As the paths are allowed to grow longer, the color saturation increases due to multiple scattering interactions with the colored surfaces. At the same time, the computation time increases.

../../_images/integrator_depth_1.jpg

max. depth = 1#

../../_images/integrator_depth_2.jpg

max. depth = 2#

../../_images/integrator_depth_3.jpg

max. depth = 3#

../../_images/integrator_depth_inf.jpg

max. depth = \(\infty\)#

Mitsuba counts depths starting at 1, which corresponds to visible light sources (i.e. a path that starts at the light source and ends at the camera without any scattering interaction in between). A depth-2 path (also known as “direct illumination”) includes a single scattering event, as shown here:

../../_images/path_explanation.jpg

Direct illumination integrator (direct)#

  • shading_samples (integer): This convenience parameter can be used to set both emitter_samples and bsdf_samples at the same time.

  • emitter_samples (integer): Optional, more fine-grained parameter: specifies the number of samples that should be generated using the direct illumination strategies implemented by the scene’s emitters. (Default: set to the value of shading_samples)

  • bsdf_samples (integer): Optional, more fine-grained parameter: specifies the number of samples that should be generated using the BSDF sampling strategies implemented by the scene’s surfaces. (Default: set to the value of shading_samples)

  • hide_emitters (boolean): Hide directly visible emitters. (Default: no, i.e. false)

../../_images/integrator_direct_bsdf.jpg

(a) BSDF sampling only#

../../_images/integrator_direct_lum.jpg

(b) Emitter sampling only#

../../_images/integrator_direct_both.jpg

(c) MIS between both sampling strategies#

This integrator implements a direct illumination technique that makes use of multiple importance sampling: for each pixel sample, the integrator generates a user-specifiable number of BSDF and emitter samples and combines them using the power heuristic. Usually, the BSDF sampling technique works very well on glossy objects but does badly everywhere else (a), while the opposite is true for the emitter sampling technique (b). By combining these approaches, one can obtain a rendering technique that works well in both cases (c).
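The power heuristic mentioned above is easy to state concretely. The following standalone Python sketch (not Mitsuba code) shows how the sampling densities of two strategies are combined into an MIS weight for a single sample:

```python
def power_heuristic(pdf_a: float, pdf_b: float) -> float:
    """MIS weight (power heuristic, exponent 2) for a sample drawn from
    strategy 'a' when strategy 'b' could have produced the same sample."""
    a2, b2 = pdf_a * pdf_a, pdf_b * pdf_b
    return a2 / (a2 + b2) if a2 + b2 > 0.0 else 0.0

# The weights assigned to the two strategies for the same sample sum to one,
# which is why combining BSDF and emitter samples this way remains unbiased.
w_bsdf = power_heuristic(0.9, 0.1)     # BSDF pdf dominates: weight near 1
w_emitter = power_heuristic(0.1, 0.9)  # emitter sample with the pdfs swapped
```

Samples from the strategy whose density better matches the integrand receive a weight close to one, which is why the combined estimator (c) inherits the strengths of both (a) and (b).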

The number of samples spent on either technique is configurable, hence it is also possible to turn this plugin into an emitter sampling-only or BSDF sampling-only integrator.
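For example, an emitter-sampling-only variant could be configured as follows (a sketch using the parameters documented above; the sample counts are arbitrary):

```xml
<integrator type="direct">
    <integer name="emitter_samples" value="4"/>
    <integer name="bsdf_samples" value="0"/>
</integrator>
```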

Note

This integrator does not handle participating media or indirect illumination.

<integrator type="direct"/>

Path tracer (path)#

  • max_depth (integer): Specifies the longest path depth in the generated output image (where -1 corresponds to \(\infty\)). A value of 1 will only render directly visible light sources. 2 will lead to single-bounce (direct-only) illumination, and so on. (Default: -1)

  • rr_depth (integer): Specifies the path depth, at which the implementation will begin to use the Russian roulette path termination criterion. For example, if set to 1, then path generation may randomly cease after encountering directly visible surfaces. (Default: 5)

  • hide_emitters (boolean): Hide directly visible emitters. (Default: no, i.e. false)

This integrator implements a basic path tracer and is a good default choice when there is no strong reason to prefer another method.

To use the path tracer appropriately, it is instructive to know roughly how it works: its main operation is to trace many light paths using random walks starting from the sensor. A single random walk is shown below, which entails casting a ray associated with a pixel in the output image and searching for the first visible intersection. A new direction is then chosen at the intersection, and the ray-casting step repeats over and over again (until one of several stopping criteria applies).
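The interplay of the stopping criteria can be sketched in isolation. The toy Python loop below is not Mitsuba code: it omits all ray tracing and uses a constant 0.7 as a stand-in for the per-bounce BSDF-times-cosine-over-pdf factor, but it shows how max_depth and the Russian roulette criterion (controlled by rr_depth) terminate a random walk:

```python
import random

def random_walk_throughput(max_depth=-1, rr_depth=5, rr_prob=0.95, seed=0):
    """Toy skeleton of a path-tracer random walk: tracks only the path
    throughput and the two stopping criteria (max_depth, Russian roulette)."""
    rng = random.Random(seed)
    throughput = 1.0
    depth = 1                     # depth 1 = directly visible emitter
    while max_depth < 0 or depth < max_depth:
        throughput *= 0.7         # stand-in for BSDF * cosine / pdf
        depth += 1
        if depth > rr_depth:      # Russian roulette beyond rr_depth bounces
            if rng.random() > rr_prob:
                break             # terminate the walk probabilistically
            throughput /= rr_prob # compensate to keep the estimator unbiased
    return depth, throughput
```

With a finite max_depth the walk stops deterministically; with max_depth=-1 only Russian roulette terminates it, and the 1/rr_prob compensation preserves the expected value.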

../../_images/integrator_path_figure.png

At every intersection, the path tracer tries to create a connection to the light source in an attempt to find a complete path along which light can flow from the emitter to the sensor. This of course only works when there is no occluding object between the intersection and the emitter.

This directly translates into a category of scenes where a path tracer can be expected to produce reasonable results: this is the case when the emitters are easily “accessible” by the contents of the scene. For instance, an interior scene that is lit by an area light will be considerably harder to render when this area light is inside a glass enclosure (which effectively counts as an occluder).

Like the direct plugin, the path tracer internally relies on multiple importance sampling to combine BSDF and emitter samples. The main difference in comparison to the former plugin is that it considers light paths of arbitrary length to compute both direct and indirect illumination.

Note

This integrator does not handle participating media

<integrator type="path">
    <integer name="max_depth" value="8"/>
</integrator>

Arbitrary Output Variables integrator (aov)#

  • aovs (string): List of <name>:<type> pairs denoting the enabled AOVs.

  • (Nested plugin) (integrator): Sub-integrators (more than one can be specified) which will be sampled alongside the AOV integrator. Their respective outputs will be written into distinct images.

This integrator returns one or more AOVs (Arbitrary Output Variables) describing the visible surfaces.

../../_images/bsdf_diffuse_plain.jpg

Scene rendered with a path tracer#

../../_images/integrator_aov_depth.y.jpg

Depth AOV#

../../_images/integrator_aov_nn.jpg

Normal AOV#

../../_images/integrator_aov_position.jpg

Position AOV#

Here is an example of how to enable the depth and shading normal AOVs while still rendering the image with a path tracer. The RGBA image produced by the path tracer will be stored in the [my_image.R, my_image.G, my_image.B, my_image.A] channels of the EXR output file.

<integrator type="aov">
    <string name="aovs" value="dd.y:depth,nn:sh_normal"/>
    <integrator type="path" name="my_image"/>
</integrator>

Currently, the following AOV types are available:

  • depth: Distance from the pinhole.

  • position: World space position value.

  • uv: UV coordinates.

  • geo_normal: Geometric normal.

  • sh_normal: Shading normal.

  • dp_du, dp_dv: Position partials wrt. the UV parameterization.

  • duv_dx, duv_dy: UV partials wrt. changes in screen-space.

  • prim_index: Primitive index (e.g. triangle index in the mesh).

  • shape_index: Shape index.

  • boundary_test: Boundary test.

Note that integer-valued AOVs (e.g. prim_index, shape_index) are meaningless whenever there is only partial pixel coverage or when using a wide pixel reconstruction filter as it will result in fractional values.
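If such integer AOVs are needed, one possible workaround (a sketch using the standard hdrfilm and reconstruction filter plugins) is to render with a box filter so that no cross-pixel blending occurs:

```xml
<film type="hdrfilm">
    <rfilter type="box"/>
</film>
```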

Volumetric path tracer (volpath)#

  • max_depth (integer): Specifies the longest path depth in the generated output image (where -1 corresponds to \(\infty\)). A value of 1 will only render directly visible light sources. 2 will lead to single-bounce (direct-only) illumination, and so on. (Default: -1)

  • rr_depth (integer): Specifies the minimum path depth, after which the implementation will start to use the Russian roulette path termination criterion. (Default: 5)

  • hide_emitters (boolean): Hide directly visible emitters. (Default: no, i.e. false)

This plugin provides a volumetric path tracer that can be used to compute approximate solutions of the radiative transfer equation. Its implementation makes use of multiple importance sampling to combine BSDF and phase function sampling with direct illumination sampling strategies. On surfaces, it behaves exactly like the standard path tracer.

This integrator has special support for index-matched transmission events (i.e. surface scattering events that do not change the direction of light). As a consequence, participating media enclosed by a stencil shape are rendered considerably more efficiently when this shape has a null or thin dielectric BSDF assigned to it (as compared to, say, a dielectric or roughdielectric BSDF).
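A typical configuration that exploits this is a participating medium bounded by a shape with a null BSDF (a sketch; the nested homogeneous medium and its parameters are assumptions here, see the medium plugin documentation for the exact parameter set):

```xml
<shape type="sphere">
    <bsdf type="null"/>
    <medium type="homogeneous" name="interior">
        <float name="scale" value="5"/>
    </medium>
</shape>
```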

Note

This integrator does not implement good sampling strategies to render participating media with a spectrally varying extinction coefficient. For these cases, it is better to use the more advanced volumetric path tracer with spectral MIS, which will produce a significantly less noisy rendered image.

Warning

This integrator does not support forward-mode differentiation.

<integrator type="volpath">
    <integer name="max_depth" value="8"/>
</integrator>

Volumetric path tracer with spectral MIS (volpathmis)#

  • max_depth (integer): Specifies the longest path depth in the generated output image (where -1 corresponds to \(\infty\)). A value of 1 will only render directly visible light sources. 2 will lead to single-bounce (direct-only) illumination, and so on. (Default: -1)

  • rr_depth (integer): Specifies the minimum path depth, after which the implementation will start to use the Russian roulette path termination criterion. (Default: 5)

  • hide_emitters (boolean): Hide directly visible emitters. (Default: no, i.e. false)

This plugin provides a volumetric path tracer that can be used to compute approximate solutions of the radiative transfer equation. Its implementation performs MIS both for directional sampling as well as free-flight distance sampling. In particular, this integrator is well suited to render media with a spectrally varying extinction coefficient. The implementation is based on the method proposed by Miller et al. [MGJ19] and is only marginally slower than the simple volumetric path tracer.

Similar to the simple volumetric path tracer, this integrator has special support for index-matched transmission events.

Warning

This integrator does not support forward-mode differentiation.

<integrator type="volpathmis">
    <integer name="max_depth" value="8"/>
</integrator>

Path Replay Backpropagation (prb)#

  • max_depth (integer): Specifies the longest path depth in the generated output image (where -1 corresponds to \(\infty\)). A value of 1 will only render directly visible light sources. 2 will lead to single-bounce (direct-only) illumination, and so on. (Default: 6)

  • rr_depth (integer): Specifies the path depth, at which the implementation will begin to use the Russian roulette path termination criterion. For example, if set to 1, then path generation may randomly cease after encountering directly visible surfaces. (Default: 5)

This plugin implements a basic Path Replay Backpropagation (PRB) integrator with the following properties:

  • Emitter sampling (a.k.a. next event estimation).

  • Russian Roulette stopping criterion.

  • No reparameterization. This means that the integrator cannot be used for shape optimization (it will return incorrect/biased gradients for geometric parameters like vertex positions.)

  • Detached sampling. This means that the properties of ideal specular objects (e.g., the IOR of a glass vase) cannot be optimized.

See prb_basic.py for an even more reduced implementation that removes the first two features.

See the papers [VSJ21] and [ZSGJ21] for details on PRB, attached/detached sampling, and reparameterizations.

'type': 'prb',
'max_depth': 8

Basic Path Replay Backpropagation (prb_basic)#

  • max_depth (integer): Specifies the longest path depth in the generated output image (where -1 corresponds to \(\infty\)). A value of 1 will only render directly visible light sources. 2 will lead to single-bounce (direct-only) illumination, and so on. (Default: 6)

Basic Path Replay Backpropagation-style integrator without next event estimation, multiple importance sampling, Russian Roulette, and reparameterization. The lack of all of these features means that gradients are noisy and don’t correctly account for visibility discontinuities. The lack of a Russian Roulette stopping criterion means that generated light paths may be unnecessarily long and costly to generate.

This class is not meant to be used in practice, but merely exists to illustrate how a very basic rendering algorithm can be implemented in Python along with efficient forward/reverse-mode derivatives. See the file prb.py for a more feature-complete Path Replay Backpropagation integrator, and prb_reparam.py for one that also handles visibility.

'type': 'prb_basic',
'max_depth': 8

Reparameterized Direct Integrator (direct_reparam)#

  • reparam_max_depth (integer): Specifies the longest path depth for which the reparameterization should be enabled (maximum 2 for this integrator). A value of 1 will only produce visibility gradients for directly visible shapes, while a value of 2 will also account for shadows. (Default: 2)

  • reparam_rays (integer): Specifies the number of auxiliary rays to be traced when performing the reparameterization. (Default: 16)

  • reparam_kappa (float): Specifies the kappa parameter of the von Mises-Fisher distribution used to sample auxiliary rays. (Default: 1e5)

  • reparam_exp (float): Power exponent applied on the computed harmonic weights in the reparameterization. (Default: 3.0)

  • reparam_antithetic (boolean): Should antithetic sampling be enabled to improve convergence when sampling the auxiliary rays? (Default: False)

This plugin implements a reparameterized direct illumination integrator.

It is functionally equivalent to prb_reparam when max_depth and reparam_max_depth are both set to 2. However, since direct illumination tasks only involve two ray segments, the overhead of relying on radiative backpropagation is non-negligible. This implementation instead builds on the traditional ADIntegrator, which does not require two passes during gradient traversal.

'type': 'direct_reparam',
'reparam_rays': 8

Reparameterized Emission Integrator (emission_reparam)#

  • reparam_max_depth (integer): Specifies the longest path depth for which the reparameterization should be enabled (maximum 1 for this integrator). (Default: 1)

  • reparam_rays (integer): Specifies the number of auxiliary rays to be traced when performing the reparameterization. (Default: 16)

  • reparam_kappa (float): Specifies the kappa parameter of the von Mises-Fisher distribution used to sample auxiliary rays. (Default: 1e5)

  • reparam_exp (float): Power exponent applied on the computed harmonic weights in the reparameterization. (Default: 3.0)

  • reparam_antithetic (boolean): Should antithetic sampling be enabled to improve convergence when sampling the auxiliary rays? (Default: False)

This class implements a reparameterized emission integrator.

It reparameterizes the camera ray to handle discontinuity issues caused by moving emitters. It is mainly used for learning and debugging purposes.

'type': 'emission_reparam',
'reparam_rays': 8

Reparameterized Path Replay Backpropagation Integrator (prb_reparam)#

  • reparam_max_depth (integer): Specifies the longest path depth for which the reparameterization should be enabled. For instance, a value of 1 will only produce visibility gradients for directly visible shapes. (Default: 2)

  • reparam_rays (integer): Specifies the number of auxiliary rays to be traced when performing the reparameterization. (Default: 16)

  • reparam_kappa (float): Specifies the kappa parameter of the von Mises-Fisher distribution used to sample auxiliary rays. (Default: 1e5)

  • reparam_exp (float): Power exponent applied on the computed harmonic weights in the reparameterization. (Default: 3.0)

  • reparam_antithetic (boolean): Should antithetic sampling be enabled to improve convergence when sampling the auxiliary rays? (Default: False)

  • reparam_unroll (boolean): Unroll the loop tracing auxiliary rays in the reparameterization? (Default: False)

This class implements a reparameterized Path Replay Backpropagation (PRB) integrator with the following properties:

  • Emitter sampling (a.k.a. next event estimation).

  • Russian Roulette stopping criterion.

  • The integrator reparameterizes the incident hemisphere to handle visibility-induced discontinuities. This makes it possible to optimize geometric parameters like vertex positions. Discontinuities observed through ideal specular reflection/refraction are not supported and produce biased gradients (see also the next point).

  • Detached sampling. This means that the properties of ideal specular objects (e.g., the IOR of a glass vase) cannot be optimized.

See prb.py and prb_basic.py for simplified implementations that remove some of these features.

See the papers [VSJ21] and [ZSGJ21] for details on PRB, attached/detached sampling. Reparameterizations for differentiable rendering were proposed in [LHJ19]. The specific change of variables used in Mitsuba is described in [BLD20].

A few more technical details regarding the implementation: this integrator uses detached sampling, hence the sampling process that generates the path vertices (e.g., v₀, v₁, ..) and the computation of Monte Carlo sampling densities is excluded from the derivative computation. However, the path throughput that is then evaluated on top of these vertices does track derivatives, and it also uses reparameterizations.

Consider the contribution L of a path (v₀, v₁, v₂, v₃, v₄), where v₀ is the sensor position, the fᵢ capture BSDFs and cosine factors, and the Eᵢ represent emission.

L(v₀, v₁, v₂, v₃, v₄) = E₁(v₀, v₁) + f₁(v₀, v₁, v₂) *
                        (E₂(v₁, v₂) + f₂(v₁, v₂, v₃) *
                            (E₃(v₂, v₃) + f₃(v₂, v₃, v₄) * E₄(v₃, v₄)))

The derivative of this function with respect to a scene parameter π expands into a long sequence of terms via the product and chain rules.

∂L =                       ______ Derivative of emission terms ______
                          (∂E₁/∂π + ∂E₁/∂v₀ ∂v₀/∂π + ∂E₁/∂v₁ ∂v₁/∂π)
+ (f₁   )                 (∂E₂/∂π + ∂E₂/∂v₁ ∂v₁/∂π + ∂E₂/∂v₂ ∂v₂/∂π)
+ (f₁ f₂)                 (∂E₃/∂π + ∂E₃/∂v₂ ∂v₂/∂π + ∂E₃/∂v₃ ∂v₃/∂π)
+ (f₁ f₂ f₃)              (∂E₄/∂π + ∂E₄/∂v₃ ∂v₃/∂π + ∂E₄/∂v₄ ∂v₄/∂π)

                        ______________ Derivative of reflection terms _____________
+ (E₂ + f₂ E₃ + f₂ f₃ E₄) (∂f₁/∂π + ∂f₁/∂v₀ ∂v₀/∂π + ∂f₁/∂v₁ ∂v₁/∂π + ∂f₁/∂v₂ ∂v₂/∂π)
+ (f₁ E₃ + f₁ f₃ E₄     ) (∂f₂/∂π + ∂f₂/∂v₁ ∂v₁/∂π + ∂f₂/∂v₂ ∂v₂/∂π + ∂f₂/∂v₃ ∂v₃/∂π)
+ (f₁ f₂ E₄             ) (∂f₃/∂π + ∂f₃/∂v₂ ∂v₂/∂π + ∂f₃/∂v₃ ∂v₃/∂π + ∂f₃/∂v₄ ∂v₄/∂π)

This expression sums over essentially the same terms, but it must account for how each one could change as a consequence of internal dependencies (e.g., ∂f₁/∂π) or due to the reparameterization (e.g., ∂f₁/∂v₁ ∂v₁/∂π). It’s tedious to do this derivative calculation manually, especially once additional complications like direct illumination sampling and MIS are taken into account. We prefer using automatic differentiation for this, which will evaluate the chain/product rule automatically.

However, a nontrivial technical challenge here is that path tracing-style integrators perform a loop over path vertices, while DrJit’s loop recording facilities do not permit the use of AD across loop iterations. The loop must thus be designed so that the use of AD is self-contained in each iteration, while generating all the terms of ∂L iteratively, without omission or duplication.

We have chosen to implement this loop so that iteration i computes all derivative terms associated with the current vertex, meaning: ∂Eᵢ/∂π, ∂fᵢ/∂π, as well as the reparameterization-induced terms involving ∂vᵢ/∂π. To give a more concrete example, filtering the previous example derivative ∂L to only include terms for the interior vertex v₂ leaves:

∂L₂ =                      __ Derivative of emission terms __
    + (f₁   )                 (∂E₂/∂π + ∂E₂/∂v₂ ∂v₂/∂π)
    + (f₁ f₂)                 (∂E₃/∂v₂ ∂v₂/∂π)

                           __ Derivative of reflection terms __
    + (E₂ + f₂ E₃ + f₂ f₃ E₄) (∂f₁/∂v₂ ∂v₂/∂π)
    + (f₁ E₃ + f₁ f₃ E₄     ) (∂f₂/∂π + ∂f₂/∂v₂ ∂v₂/∂π)
    + (f₁ f₂ E₄             ) (∂f₃/∂v₂ ∂v₂/∂π)

Let’s go through these one by one, starting with the easier ones:

∂L₂ = (f₁              ) (∂E₂/∂π + ∂E₂/∂v₂ ∂v₂/∂π)
    + (f₁ E₃ + f₁ f₃ E₄) (∂f₂/∂π + ∂f₂/∂v₂ ∂v₂/∂π)
    + ...

These are the derivatives of local emission and reflection terms. They both account for changes in the parameterization (∂v₂/∂π) and a potential dependence of the reflection/emission model on the scene parameter being differentiated (∂E₂/∂π, ∂f₂/∂π).

In the first line, the f₁ term corresponds to the path throughput (labeled β in this class) in the general case, and the (f₁ E₃ + f₁ f₃ E₄) term is the incident illumination (Lᵢ) at the current vertex.

Evaluating these terms using AD will look something like this:

with dr.resume_grad():
    v₂' = reparameterize(v₂)
    L₂ = β * E₂(v₁, v₂') + Lᵢ * f₂(v₁, v₂', v₃) + ...

with a later call to dr.backward(L₂). In practice, E and f are directional functions, which means that these directions need to be recomputed from positions using AD.

However, that leaves a few more terms in ∂L₂ that unfortunately add some complications.

∂L₂ = ...  + (f₁ f₂)                 (∂E₃/∂v₂ ∂v₂/∂π)
           + (E₂ + f₂ E₃ + f₂ f₃ E₄) (∂f₁/∂v₂ ∂v₂/∂π)
           + (f₁ f₂ E₄             ) (∂f₃/∂v₂ ∂v₂/∂π)

These are changes in emission or reflection terms at neighboring vertices (v₁ and v₃) that arise due to the reparameterization at v₂. It’s important that the influence of the scene parameters on the emission or reflection terms at these vertices is excluded: we are only interested in directional derivatives that arise due to the reparameterization, which can be accomplished using a more targeted version of dr.resume_grad().

with dr.resume_grad(v₂'):
    L₂ += β * f₂ * E₃(v₂', v₃) # Emission at next vertex
    L₂ += Lᵢ_prev * f₁(v₀, v₁, v₂') # BSDF at previous vertex
    L₂ += Lᵢ_next * f₃(v₂', v₃, v₄) # BSDF at next vertex

To get the quantities for the next vertex, the path tracer must “run ahead” by one bounce.

Each loop iteration begins with the previous and current intersections already available in the form of a PreliminaryIntersection3f. This must still be turned into a full SurfaceInteraction3f, which, however, only involves a virtual function call and no ray tracing. The iteration then computes the next intersection and passes it along to the subsequent iteration. In this way, each iteration reconstructs three surface interaction records (prev, cur, next), of which only cur tracks positional derivatives.

'type': 'prb_reparam',
'max_depth': 8,
'reparam_rays': 8

Path Replay Backpropagation Volumetric Integrator (prbvolpath)#

  • max_depth (integer): Specifies the longest path depth in the generated output image (where -1 corresponds to \(\infty\)). A value of 1 will only render directly visible light sources. 2 will lead to single-bounce (direct-only) illumination, and so on. (Default: 6)

  • rr_depth (integer): Specifies the path depth, at which the implementation will begin to use the Russian roulette path termination criterion. For example, if set to 1, then path generation may randomly cease after encountering directly visible surfaces. (Default: 5)

  • hide_emitters (boolean): Hide directly visible emitters. (Default: no, i.e. false)

This class implements a volumetric Path Replay Backpropagation (PRB) integrator with the following properties:

  • Differentiable delta tracking for free-flight distance sampling

  • Emitter sampling (a.k.a. next event estimation).

  • Russian Roulette stopping criterion.

  • No reparameterization. This means that the integrator cannot be used for shape optimization (it will return incorrect/biased gradients for geometric parameters like vertex positions.)

  • Detached sampling. This means that the properties of ideal specular objects (e.g., the IOR of a glass vase) cannot be optimized.

See the paper [VSJ21] for details on PRB and differentiable delta tracking.

'type': 'prbvolpath',
'max_depth': 8

Particle tracer (ptracer)#

  • max_depth (integer): Specifies the longest path depth in the generated output image (where -1 corresponds to \(\infty\)). A value of 1 will only render directly visible light sources. 2 will lead to single-bounce (direct-only) illumination, and so on. (Default: -1)

  • rr_depth (integer): Specifies the minimum path depth, after which the implementation will start to use the Russian roulette path termination criterion. (Default: 5)

  • hide_emitters (boolean): Hide directly visible emitters. (Default: no, i.e. false)

  • samples_per_pass (integer): If specified, divides the workload into successive passes with samples_per_pass samples per pixel.

This integrator traces rays starting from light sources and attempts to connect them to the sensor at each bounce. It does not support media (volumes).

Usually, this is a relatively useless rendering technique due to its high variance, but there are some cases where it excels. In particular, it does a good job on scenes where most scattering events are directly visible to the camera.

Note that unlike sensor-based integrators such as path, it is not possible to divide the workload in image-space tiles. The samples_per_pass parameter allows splitting work in successive passes of the given sample count per pixel. It is particularly useful in wavefront mode.

<integrator type="ptracer">
    <integer name="max_depth" value="8"/>
</integrator>
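For instance, the workload of the example above could additionally be split into passes of 16 samples per pixel each (a sketch; the total sample count itself is still set on the sampler as usual):

```xml
<integrator type="ptracer">
    <integer name="max_depth" value="8"/>
    <integer name="samples_per_pass" value="16"/>
</integrator>
```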

Stokes vector integrator (stokes)#

  • (Nested plugin) (integrator): Sub-integrator (only one can be specified) which will be sampled alongside the Stokes integrator. In polarized rendering modes, its output Stokes vector is written into distinct images.

This integrator returns a multi-channel image describing the complete measured polarization state at the sensor, represented as a Stokes vector \(\mathbf{s}\).

Here we show example monochrome output in a scene with two dielectric spheres and one conductive sphere, all of which affect the polarization state of the (initially unpolarized) light.

The first entry corresponds to usual radiance, whereas the remaining three entries describe the polarization of light shown as false color images (green: positive, red: negative).
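As a post-processing example, the four Stokes channels can be combined into the degree of polarization using the standard formula (a sketch in plain Python, operating on one pixel's values; reading the channels from the EXR file is left out):

```python
import math

def degree_of_polarization(s0: float, s1: float, s2: float, s3: float) -> float:
    """Fraction of the light that is polarized: |(s1, s2, s3)| / s0."""
    if s0 <= 0.0:
        return 0.0  # no radiance, no meaningful polarization state
    return math.sqrt(s1 * s1 + s2 * s2 + s3 * s3) / s0

dop_unpolarized = degree_of_polarization(1.0, 0.0, 0.0, 0.0)  # -> 0.0
dop_circular = degree_of_polarization(2.0, 0.0, 0.0, 2.0)     # -> 1.0
```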

../../_images/integrator_stokes_cbox.jpg

\(\mathbf{s}_0\)”: radiance#

../../_images/integrator_stokes_cbox_s1.jpg

\(\mathbf{s}_1\)”: horizontal vs. vertical polarization#

../../_images/integrator_stokes_cbox_s2.jpg

\(\mathbf{s}_2\)”: positive vs. negative diagonal polarization#

../../_images/integrator_stokes_cbox_s3.jpg

\(\mathbf{s}_3\)”: right vs. left circular polarization#

In the following example, a normal path tracer is nested inside the Stokes vector integrator:

<integrator type="stokes">
    <integrator type="path">
        <!-- path tracer parameters -->
    </integrator>
</integrator>

Depth integrator (depth)#

Example of an extremely simple integrator that is also helpful for debugging: it returns the distance from the camera to the closest intersected object, or 0 if no intersection was found.

<integrator type="depth"/>

Moment integrator (moment)#

  • (Nested plugin) (integrator): Sub-integrators (more than one can be specified) which will be sampled alongside the moment integrator. Their respective XYZ output will be written into distinct images.

This integrator returns an AOV recording the second moment of the samples produced by the nested integrator.

<integrator type="moment">
    <integrator type="path"/>
</integrator>
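The second moment is typically used together with the mean image to estimate per-pixel variance, e.g. for error reporting or adaptive sampling. A minimal sketch of that computation in plain Python (per scalar value; extracting the channels from the rendered images is left out):

```python
def variance_from_moments(mean: float, second_moment: float) -> float:
    """Population variance Var[X] = E[X^2] - E[X]^2, clamped against
    small negative values caused by floating-point round-off."""
    return max(second_moment - mean * mean, 0.0)

samples = [1.0, 2.0, 3.0]
m1 = sum(samples) / len(samples)                 # first moment (mean)
m2 = sum(x * x for x in samples) / len(samples)  # second moment
var = variance_from_moments(m1, m2)              # -> 2/3
```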