Scripting a renderer#

Overview#

Mitsuba provides a flexible API for developing custom rendering pipelines that largely bypass the high-level interfaces and machinery used in the previous tutorials. This makes it possible to use Mitsuba to rapidly prototype new and unconventional applications, while still leveraging the high-performance JIT compiler and the integrated ray acceleration data structures.

In this example, we are going to implement an ambient occlusion renderer that mostly avoids using built-in plugins (e.g., does not use existing sensor and film interfaces). This tutorial can serve as a starting point for more advanced custom rendering methods.

🚀 You will learn how to:

  • Use Mitsuba as a framework to write your own rendering pipeline

  • Spawn a wavefront of rays in Python

  • Generate random numbers with the PCG32 class

  • Work with the Loop construct

Setup#

As in the previous tutorials, we start by importing Mitsuba and Dr.Jit and loading a scene.

[1]:
import mitsuba as mi
import drjit as dr

mi.set_variant('llvm_ad_rgb')

scene = mi.load_file('../scenes/cbox.xml')

While it is possible to use Mitsuba in scalar mode, it is highly recommended to stick to the JIT-compiled variants of the system (i.e., llvm or cuda) for performance-critical applications implemented using the Python API. In scalar mode, we would pay the overhead of the Python binding layer for every function call on every individual sample/ray in our simulation. Using a JIT-compiled variant largely eliminates Python-related overhead and allows the system to use the available hardware efficiently.
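For example, the following sketch (the exact variant names depend on how your copy of Mitsuba was compiled) queries the variants available in the current build and prefers the CUDA backend when it is present:

# Hypothetical variant selection: prefer the GPU backend if this build provides it
available = mi.variants()
print(available)

if 'cuda_ad_rgb' in available:
    mi.set_variant('cuda_ad_rgb')
else:
    mi.set_variant('llvm_ad_rgb')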

Spawning rays#

In this tutorial, we replace large parts of Mitsuba’s high-level rendering pipeline with relatively low-level Python code to demonstrate the system’s flexibility. We start by implementing a camera model and a corresponding ray generation routine; in this experiment, this will be a simple orthographic camera, defined by the following parameters:

[2]:
# Camera origin in world space
cam_origin = mi.Point3f(0, 1, 3)

# Camera view direction in world space
cam_dir = dr.normalize(mi.Vector3f(0, -0.5, -1))

# Camera width and height in world space
cam_width  = 2.0
cam_height = 2.0

# Image pixel resolution
image_res = [256, 256]

We will now spawn a whole wavefront of camera rays that can be processed all at once in a vectorized way. We first generate ray origins in the camera’s local coordinate frame using dr.meshgrid and dr.linspace. These functions behave similarly to their equivalents in NumPy. We construct a 2D grid of ray origins based on the camera’s physical dimensions (cam_height, cam_width) and image resolution (image_res).
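To make the behavior of these functions more concrete, here is a tiny standalone example (the grid sizes are arbitrary and unrelated to the renderer): dr.linspace(dtype, start, stop, num) produces num evenly spaced values, and dr.meshgrid flattens the resulting 2D grid into two 1D arrays, just like NumPy’s meshgrid:

# Illustrative 3x2 grid, not part of the renderer itself
u = dr.linspace(mi.Float, 0, 1, 3)  # 3 samples along the first axis
v = dr.linspace(mi.Float, 0, 1, 2)  # 2 samples along the second axis
U, V = dr.meshgrid(u, v)
print(U)  # 6 entries: the values of u, tiled once per entry of v
print(V)  # 6 entries: each value of v repeated once per entry of u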

The ray origins in local coordinates then need to be transformed into world space to account for the camera’s viewing direction and 3D position. We first construct a coordinate frame (mi.Frame3f) that is oriented in the camera’s viewing direction. Using its to_world() method we rotate our ray origins into world space and finally add the camera’s world space position.

[3]:
# Construct a grid of 2D coordinates
x, y = dr.meshgrid(
    dr.linspace(mi.Float, -cam_width  / 2,   cam_width / 2, image_res[0]),
    dr.linspace(mi.Float, -cam_height / 2,  cam_height / 2, image_res[1])
)

# Ray origin in local coordinates
ray_origin_local = mi.Vector3f(x, y, 0)

# Ray origin in world coordinates
ray_origin = mi.Frame3f(cam_dir).to_world(ray_origin_local) + cam_origin
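As a quick sanity check (purely illustrative), the local +z axis of this frame should map back to the camera’s viewing direction:

# The frame constructed from cam_dir uses it as the local +z (normal) axis
assert dr.allclose(mi.Frame3f(cam_dir).to_world(mi.Vector3f(0, 0, 1)), cam_dir)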

We can now assemble a wavefront of world space rays that will later be traced in our rendering algorithm.

[4]:
ray = mi.Ray3f(o=ray_origin, d=cam_dir)

We then intersect those primary rays against the scene geometry to compute the corresponding surface interactions (of type SurfaceInteraction3f).

[5]:
si = scene.ray_intersect(ray)
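The fields of this record can be inspected directly. For instance (a small, optional sketch), si.is_valid() is a per-ray boolean mask and si.t holds the hit distances, which are infinite for rays that missed the scene:

valid = si.is_valid()     # True for rays that hit some geometry
print(dr.count(valid))    # number of primary rays that found an intersection
print(si.t)               # hit distances (infinite where no surface was hit)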

Ambient occlusion#

Ambient occlusion is a rendering technique that calculates the average local occlusion of surfaces. For a point on the surface, we trace a set of rays (ambient_ray_count) in random directions on the hemisphere and compute the fraction of those rays that can travel a given maximum distance (ambient_range) without hitting another surface, so unoccluded points appear bright and heavily occluded points appear dark.
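Concretely, the quantity we estimate below for a surface point \(\mathbf{p}\) is

\[
A(\mathbf{p}) \;\approx\; \frac{1}{N} \sum_{i=1}^{N} V(\mathbf{p}, \omega_i),
\]

where the directions \(\omega_i\) are drawn from the hemisphere around the surface normal, \(N\) corresponds to ambient_ray_count, and \(V(\mathbf{p}, \omega_i)\) is 1 if no surface is hit within ambient_range along \(\omega_i\) (and 0 otherwise).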

[6]:
ambient_range = 0.75
ambient_ray_count = 256

To sample random directions on the hemisphere, we need to instantiate a random number generator. Instead of using an existing Sampler plugin, we directly use the PCG32 class provided by Dr.Jit. This random number generator is initialized with the size of our wavefront of rays. We can then call rng.next_float32() to sample uniformly distributed random numbers in \([0, 1)\).
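As a small standalone illustration (the array size here is arbitrary), every call to next_float32() advances the generator state and returns one sample per entry:

# Illustrative only: a tiny generator producing 4 parallel random streams
rng_demo = mi.PCG32(size=4)
print(rng_demo.next_float32())  # 4 uniform samples in [0, 1)
print(rng_demo.next_float32())  # 4 new samples from the advanced state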

[7]:
# Initialize the random number generator
rng = mi.PCG32(size=dr.prod(image_res))

In the following code, we loop over the ambient occlusion samples using the mi.Loop construct for performance reasons (see the Dr.Jit documentation for details on recorded loops).
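To illustrate the construct in isolation, here is a minimal recorded loop (unrelated to the renderer) that sums the integers 0 through 9; note that every variable modified inside the loop body must be registered in the state lambda:

# Toy example of mi.Loop: compute 0 + 1 + ... + 9 = 45
count = mi.UInt32(0)
total = mi.Float(0)
toy_loop = mi.Loop(name="toy", state=lambda: (count, total))
while toy_loop(count < 10):
    total += mi.Float(count)
    count += 1
print(total)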

The loop body of this algorithm is fairly simple:

  1. We first draw two random numbers from the PCG32 instance.

  2. We use those random numbers to sample directions on the hemisphere (in local coordinates, where the z-axis is aligned to the surface normal).

  3. Those directions then need to be transformed to world space using the local coordinate frame of the surface. The surface interaction record si (SurfaceInteraction3f) stores this local coordinate frame (Frame3f) which can be used for this transformation.

  4. We then spawn probe rays into the sampled world space direction using si.spawn_ray(...). This method implements some logic to prevent self-intersection with the surface at si. It should always be preferred over constructing the ray manually in such situations.

  5. We set the ambient occlusion ray’s maxt value to only find occluders in the provided maximum range.

  6. We accumulate a value of 1.0 if the ray did not intersect any scene geometry.

  7. Finally we increment the loop iteration counter and move on to the next iteration.

After the loop, we divide the result by the number of ambient occlusion samples to get the average occlusion.

[8]:
# Loop iteration counter
i = mi.UInt32(0)

# Accumulated result
result = mi.Float(0)

# Initialize the loop state (listing all variables that are modified inside the loop)
loop = mi.Loop(name="", state=lambda: (rng, i, result))

while loop(si.is_valid() & (i < ambient_ray_count)):
    # 1. Draw some random numbers
    sample_1, sample_2 = rng.next_float32(), rng.next_float32()

    # 2. Compute directions on the hemisphere using the random numbers
    wo_local = mi.warp.square_to_uniform_hemisphere([sample_1, sample_2])

    # Alternatively, we could also sample a cosine-weighted hemisphere
    # wo_local = mi.warp.square_to_cosine_hemisphere([sample_1, sample_2])

    # 3. Transform the sampled directions to world space
    wo_world = si.sh_frame.to_world(wo_local)

    # 4. Spawn a new ray starting at the surface interaction
    ray_2 = si.spawn_ray(wo_world)

    # 5. Set a maximum intersection distance to only account for nearby geometry
    ray_2.maxt = ambient_range

    # 6. Accumulate a value of 1 if not occluded (0 otherwise)
    result[~scene.ray_test(ray_2)] += 1.0

    # 7. Increase loop iteration counter
    i += 1

# Divide the result by the number of samples
result = result / ambient_ray_count

Displaying the result#

The algorithm above accumulated ambient occlusion samples in a 1-dimensional array result. To work with this result as an image, we construct a TensorXf using the image resolution specified earlier.

[9]:
image = mi.TensorXf(result, shape=image_res)
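We can verify that the tensor now carries the expected two-dimensional shape (purely illustrative):

print(image.shape)  # (256, 256)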

Now let’s visualize our ambient occlusion rendering!

[10]:
import matplotlib.pyplot as plt

plt.imshow(image, cmap='gray'); plt.axis('off');
[Output image: grayscale ambient occlusion rendering of the Cornell box scene]

See also#