Path-Space Differentiable Rendering of Participating Media

CHENG ZHANG∗ , ZIHAN YU∗ , and SHUANG ZHAO, University of California, Irvine, USA
[Teaser figure: ordinary image (left) and derivative image with respect to the horizontal position of the light (right); the derivative color scale spans −0.5 to 0.5.]

Fig. 1. In this paper, we introduce the theoretical framework of generalized differential path integrals. Our technique enjoys the generality of differentiating both surface and volumetric light transport with respect to arbitrary scene parameters (such as material optical properties and object geometries). Further, it offers the flexibility of designing new unbiased Monte Carlo methods capable of efficiently estimating derivatives in virtual scenes with complex geometry and light transport effects. This example shows a transparent knot inside a homogeneous participating medium lit by a spotlight. On the right, we show the corresponding derivative image with respect to the horizontal location of the light.

Physics-based differentiable rendering—which focuses on estimating derivatives of radiometric detector responses with respect to arbitrary scene parameters—has a diverse array of applications, from solving analysis-by-synthesis problems to training machine-learning pipelines that incorporate forward-rendering processes. Unfortunately, existing general-purpose differentiable rendering techniques lack either the generality to handle volumetric light transport or the flexibility to devise Monte Carlo estimators capable of handling complex geometries and light transport effects.

In this paper, we bridge this gap by showing how generalized path integrals can be differentiated with respect to arbitrary scene parameters. Specifically, we establish the mathematical formulation of generalized differential path integrals that capture both interfacial and volumetric light transport. Our formulation allows the development of advanced differentiable rendering algorithms capable of efficiently handling challenging geometric discontinuities and light transport phenomena such as volumetric caustics.

We validate our method by comparing our derivative estimates to those generated using finite differences. Further, to demonstrate the effectiveness of our technique, we compare both differentiable-rendering and inverse-rendering performance with state-of-the-art methods.

CCS Concepts: • Computing methodologies → Rendering.

Additional Key Words and Phrases: Differentiable rendering, generalized path integral, Monte Carlo rendering

ACM Reference Format:
Cheng Zhang, Zihan Yu, and Shuang Zhao. 2021. Path-Space Differentiable Rendering of Participating Media. ACM Trans. Graph. 40, 4, Article 76 (August 2021), 15 pages. https://doi.org/10.1145/3450626.3459782

∗Both authors contributed equally to the paper.

Authors' address: Cheng Zhang, chengz20@uci.edu; Zihan Yu, zihay19@uci.edu; Shuang Zhao, shz@ics.uci.edu, University of California, Irvine, USA.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
© 2021 Copyright held by the owner/author(s).
0730-0301/2021/8-ART76
https://doi.org/10.1145/3450626.3459782

1 INTRODUCTION
Physics-based forward rendering—a core research topic in computer graphics—focuses on numerically estimating radiometric sensor responses in fully specified virtual scenes. Previous research efforts have led to mature algorithms capable of efficiently and accurately simulating light transport in virtual environments with high complexity.

Differentiable rendering, on the other hand, is concerned with estimating derivatives of radiometric measurements with respect to differential changes of a scene. These techniques have a wide range of applications by facilitating, for instance, the use of gradient-based optimization for solving inverse-rendering problems, and the integration of physics-based rendering in machine-learning and probabilistic-inference pipelines.

Despite the great progress made by recent works, unique theoretical and practical challenges remain for differentiable rendering. One of them is the lack of support for participating media and translucent materials, which are ubiquitous in the real world and crucial to many applications such as computational fabrication, remote sensing, and biomedical imaging. Most existing general-purpose differentiable rendering techniques [Li et al. 2018; Loubet et al. 2019; Zhang et al. 2020; Bangaru et al. 2020]—which offer the generality to differentiate with respect to scene geometry—consider only interfacial reflection and refraction of light.
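The finite-difference baseline used for validation throughout the paper can be sketched as follows. Here `render` is a toy stand-in for a forward renderer returning a scalar detector response (the function and parameter names are illustrative, not part of the paper's implementation):

```python
import math

def render(theta):
    """Toy stand-in for a forward renderer: a smooth detector response
    as a function of a scene parameter theta (e.g., a light position)."""
    return math.exp(-theta) * math.sin(3.0 * theta)

def finite_difference(f, theta, eps=1e-4):
    """Central finite difference, the baseline derivative estimate."""
    return (f(theta + eps) - f(theta - eps)) / (2.0 * eps)

def analytic_derivative(theta):
    # d/dtheta [exp(-t) sin(3t)] = exp(-t) (3 cos(3t) - sin(3t))
    return math.exp(-theta) * (3.0 * math.cos(3.0 * theta) - math.sin(3.0 * theta))

theta0 = 0.7
assert abs(finite_difference(render, theta0) - analytic_derivative(theta0)) < 1e-6
```

Finite differences require re-rendering the scene per parameter and suffer from step-size sensitivity, which is why they serve only as a reference, not a practical gradient estimator.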

 ACM Trans. Graph., Vol. 40, No. 4, Article 76. Publication date: August 2021.

Another challenge is the lack of robust Monte Carlo estimation techniques. Unlike the forward-rendering case, where many sophisticated algorithms such as bidirectional path tracing and Markov-Chain Monte Carlo (MCMC) methods have been developed (e.g., [Veach and Guibas 1997; Pauly et al. 2000; Jakob and Marschner 2012a]), most recent differentiable rendering techniques [Li et al. 2018; Zhang et al. 2019; Loubet et al. 2019; Bangaru et al. 2020] rely on unidirectional path tracing and have difficulties handling complex light transport effects such as caustics.

To overcome the first challenge, the differential radiative transfer framework [Zhang et al. 2019] has been introduced, which offers the same level of generality as the radiative transfer theory [Chandrasekhar 1960] for forward rendering by differentiating the radiative transfer equation (RTE).

To address the second problem, the differential path integral formulation [Zhang et al. 2020] has been derived. Similar to Veach's path-integral formulation [1997] for forward rendering, this technique has enabled the development of differentiable rendering algorithms beyond unidirectional path tracing.

Unfortunately, each of these methods has its limitations. The differential radiative transfer method [Zhang et al. 2019] is limited to unidirectional volumetric path tracing. Further, it requires detecting object silhouettes at each light-scattering event, which can become extremely expensive for scenes with complex geometries. The differential path integral formulation [Zhang et al. 2020], on the other hand, neglects volumetric scattering of light.

In this paper, we bridge the gap by introducing the mathematical formulation of generalized differential path integrals that offers the same level of generality as differential radiative transfer [Zhang et al. 2019] and the same flexibility as differential path integrals [Zhang et al. 2020]. In other words, our technique allows differentiating volumetric light transport with respect to arbitrary scene parameters. Meanwhile, it enables the development of Monte Carlo methods that (i) go beyond unidirectional path tracing, and (ii) avoid expensive silhouette detections.

Similar to most prior works in physics-based differentiable rendering [Li et al. 2018; Loubet et al. 2019; Zhang et al. 2019, 2020; Bangaru et al. 2020], we focus on scene geometries depicted using polygonal meshes and assume the absence of perfectly specular interfaces.

Concretely, our contributions include:
• Establishing the formulation of generalized differential path integrals (§4) by differentiating generalized path integrals [Pauly et al. 2000].
• Discussing how zero-measure detectors (e.g., pinhole cameras) and sources (e.g., point lights), which are largely neglected by previous differentiable rendering techniques, can be handled in our framework (§4.3).
• Developing new Monte Carlo methods that estimate our generalized differential path integrals in an unbiased fashion (§5). Our estimators greatly outperform the volumetric path tracing method developed by Zhang et al. [2019] for complex scene geometries and light transport effects.

To validate our theory and algorithms, we compare our derivative estimates with those produced using finite differences (Figure 10). To demonstrate the effectiveness of our method, we compare (i) derivative images generated with our technique and differential radiative transfer (Figure 10); and (ii) inverse-rendering performance using gradients estimated with these methods (Figure 11).

2 RELATED WORK
Forward volume rendering. Monte Carlo methods have been the "gold standard" for accurately simulating photon and neutron transport in complex environments [Spanier and Gelbard 1969]. In computer graphics, volumetric path tracing and its variations (e.g., [Kajiya and Von Herzen 1984; Cerezo et al. 2005]) produce unbiased and consistent estimates of radiometric measures. Later, based on the path-integral formulation [Veach 1997] and its generalization to volumetric light transport [Pauly et al. 2000], bidirectional path tracing [Lafortune and Willems 1996] and Markov-Chain Monte Carlo (MCMC) methods (e.g., [Pauly et al. 2000; Kelemen et al. 2002; Jakob and Marschner 2012b]) have been introduced to enable efficient simulation of challenging effects such as caustics. For a comprehensive survey of Monte Carlo volume rendering techniques, we refer to Novák et al. [2018].

Differentiable surface-only rendering. A main challenge toward developing general-purpose differentiable rendering engines has been differentiation with respect to scene geometry, which generally requires calculating additional boundary integrals. To address this problem, Li et al. [2018] introduced a Monte Carlo edge-sampling method that provides unbiased estimates of these boundary integrals but requires detection of object silhouettes, which can be computationally expensive for complex scenes. Later, reparameterization-based methods [Loubet et al. 2019; Bangaru et al. 2020] have been introduced to avoid computing boundary integrals altogether. Despite their ability to differentiate with respect to arbitrary scene parameterizations, all these methods are obtained by differentiating the rendering equation [Kajiya 1986] and rely on unidirectional path tracing for derivative estimation, which can be inefficient when handling complex light transport effects.

By differentiating Veach's path integrals [1997], Zhang et al. [2020] derived the formulation of differential path integrals, enabling Monte Carlo differentiable rendering beyond unidirectional path tracing. Despite its flexibility, this formulation still neglects all volumetric light transport effects. Our theory subsumes this work by showing how to differentiate full generalized path integrals.

Differentiable volume rendering. Specialized differentiable volume rendering has been used to solve analysis-by-synthesis problems in volumetric scattering [Gkioulekas et al. 2013], prefiltering of high-resolution volumes [Zhao et al. 2016], and fabrication of translucent materials [Sumin et al. 2019]. All these methods compute radiance derivatives with respect to specific material properties like optical density.

For general-purpose differentiable volume rendering, Che et al. [2020] developed a system capable of computing derivatives with respect to optical material and local normal properties. Nimier-David et al. [2019] introduced the Mitsuba 2 system that enables

differentiable volume rendering with millions of parameters. Unfortunately, these methods cannot differentiate with respect to scene geometry.

By differentiating both the rendering equation [Kajiya 1986] and the radiative transfer equation [Chandrasekhar 1960], the theory of differential radiative transfer introduced recently by Zhang et al. [2019] offers the most general differentiable rendering theory to date. However, this technique still relies on unidirectional path tracing with Monte Carlo edge sampling [Li et al. 2018] and cannot efficiently handle complex geometry or light transport effects. Our technique overcomes these limitations by leveraging more advanced Monte Carlo estimators while offering the same level of generality as differential radiative transfer.

3 PRELIMINARIES
We now briefly review the mathematical preliminaries on forward rendering of participating media using the path-space formulation [Veach 1997; Pauly et al. 2000].

The response I ∈ ℝ of a radiometric detector can be expressed as a path integral of the form:
$$I = \int_{\Omega} f(\bar{x}) \,\mathrm{d}\mu(\bar{x}),$$  (1)
where x̄ = (x_0, ..., x_N) denotes a light transport path (with x_0 on a light source and x_N on the detector); Ω is the path space; f is the measurement contribution function; and μ is the Lebesgue measure on Ω.

Veach [1997] has shown that, for surface-only light transport (that considers only interfacial reflection and refraction of light), the path space¹ is given by Ω = ∪_{N=1}^∞ M^{N+1}, where M is the union of all object surfaces, and μ is the area-product measure. This formulation has been the theoretical foundation of many sophisticated Monte Carlo rendering methods such as bidirectional path tracing [Veach and Guibas 1995] and a few Markov-Chain Monte Carlo rendering methods [Veach 1997; Jakob and Marschner 2012a].

Veach's formulation has been extended by Pauly et al. [2000] to handle volumetric light transport based on the radiative transfer theory [Chandrasekhar 1960]. In what follows, we briefly review this generalized formulation.

Path space and measure. Let V ⊂ ℝ³ be a 3D volume that encapsulates the virtual scene, M ⊂ V be the union of all object surfaces in the scene, and V_0 := V \ M. A light transport path x̄ = (x_0, x_1, ..., x_N) with (N + 1) vertices and N segments is classified with its path characteristic h, an (N + 1)-bit integer that encodes the type of individual vertices. Specifically, the n-th bit of the binary representation of h, which we denote as h(n), equals one if x_n is a surface vertex (i.e., x_n ∈ M) and zero if it is a volume vertex (i.e., x_n ∈ V_0). For all N ≥ 1 and 0 ≤ h < 2^{N+1}, the set of all paths with N segments and characteristic h is
$$\Omega_N^h := \left\{ (x_0, \ldots, x_N) : x_n \in \begin{cases} \mathcal{M}, & h(n) = 1 \\ \mathcal{V}_0, & h(n) = 0 \end{cases} \right\},$$  (2)
and the Lebesgue measure on Ω_N^h is defined by
$$\mathrm{d}\mu_N^h(\bar{x}) := \prod_{n=0}^{N} \mathrm{d}\mu_{h(n)}(x_n),$$  (3)
where
$$\mathrm{d}\mu_{h(n)} := \begin{cases} \mathrm{d}A, & h(n) = 1 \\ \mathrm{d}V, & h(n) = 0 \end{cases}$$  (4)
with dA and dV being the surface-area and volume measures, respectively. It follows that the path space Ω in Eq. (1) becomes
$$\Omega := \bigcup_{N \geq 1} \bigcup_{h=0}^{2^{N+1}-1} \Omega_N^h,$$  (5)
associated with the measure
$$\mu(D) := \sum_{N \geq 1} \sum_{h=0}^{2^{N+1}-1} \mu_N^h\!\left( D \cap \Omega_N^h \right).$$  (6)

Measurement contribution. For a given light path x̄ = (x_0, ..., x_N), its measurement contribution is the product of per-vertex and per-segment contributions:
$$f(\bar{x}) := \left( \prod_{n=0}^{N} f_{\mathrm{v}}(x_{n-1} \to x_n \to x_{n+1}) \right) \left( \prod_{n=1}^{N} G(x_{n-1} \leftrightarrow x_n) \right).$$  (7)
In this equation, the per-vertex contribution equals
$$f_{\mathrm{v}}(x_{n-1} \to x_n \to x_{n+1}) := \begin{cases} f_{\mathrm{s}}(x_{n-1} \to x_n \to x_{n+1}), & 0 < n < N \text{ and } x_n \in \mathcal{M} \\ \sigma_{\mathrm{s}}(x_n)\, f_{\mathrm{p}}(x_{n-1} \to x_n \to x_{n+1}), & 0 < n < N \text{ and } x_n \in \mathcal{V}_0 \\ L_{\mathrm{e}}(x_0 \to x_1), & n = 0 \\ W_{\mathrm{e}}(x_{N-1} \to x_N), & n = N \end{cases}$$  (8)
where f_s is the bidirectional scattering distribution function (BSDF); f_p denotes the single-scattering phase function; and σ_s is the scattering coefficient. Further, L_e and W_e capture, respectively, the source emission and detector importance (or response). In this paper, we focus on surface sources (e.g., area lights) and detectors (e.g., virtual camera sensors), which are used almost exclusively in computer graphics and vision, yielding x_0 ∈ M and x_N ∈ M.

In Eq. (7), the per-segment contribution G is given by the generalized geometric term defined as
$$G(x \leftrightarrow y) := V(x \leftrightarrow y)\, \tau(x \leftrightarrow y)\, \frac{D_x(y)\, D_y(x)}{\|x - y\|^2},$$  (9)
where V is the mutual visibility function and, for any x, y ∈ V,
$$D_x(y) := \begin{cases} |n(x) \cdot \overrightarrow{xy}|, & x \in \mathcal{M} \\ 1, & x \in \mathcal{V}_0 \end{cases}$$  (10)
with n(x) being the (unit-length) surface normal at x, and the unit direction defined as $\overrightarrow{xy} := (y - x)/\|y - x\|$. Further, τ(x ↔ y) indicates the transmittance between x and y that equals
$$\tau(x \leftrightarrow y) = \exp\left[ -\int_{\overline{xy}} \sigma_{\mathrm{t}}(x') \,\mathrm{d}\ell(x') \right],$$  (11)
where σ_t is the extinction coefficient; $\overline{xy}$ denotes the line segment connecting x and y; and ℓ is the curve-length measure.

¹We hyperlink keywords to their definitions.
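To make this bookkeeping concrete, the sketch below (our own illustration, not code from the paper) packs per-vertex surface/volume flags into the path characteristic h of Eq. (2) and evaluates the transmittance of Eq. (11) for a homogeneous medium, where the line integral reduces to σ_t times the segment length:

```python
import math

def characteristic(vertex_types):
    """Pack per-vertex surface(True)/volume(False) flags into the
    integer h of Eq. (2); bit n of h corresponds to vertex x_n."""
    h = 0
    for n, is_surface in enumerate(vertex_types):
        if is_surface:
            h |= 1 << n
    return h

def is_surface_vertex(h, n):
    """Read bit n of the path characteristic h."""
    return (h >> n) & 1 == 1

def transmittance_homogeneous(x, y, sigma_t):
    """Eq. (11) for a homogeneous medium: the line integral of sigma_t
    along the segment xy reduces to sigma_t * ||x - y||."""
    return math.exp(-sigma_t * math.dist(x, y))

# A 4-vertex path: source on a surface, two volume scattering events,
# detector on a surface -> characteristic 0b1001 = 9.
h = characteristic([True, False, False, True])
assert h == 9
assert is_surface_vertex(h, 0) and not is_surface_vertex(h, 1)

tau = transmittance_homogeneous((0.0, 0.0, 0.0), (0.0, 0.0, 2.0), sigma_t=0.5)
assert abs(tau - math.exp(-1.0)) < 1e-12
```

For heterogeneous media, the closed-form exponential above would be replaced by a line integral of σ_t (estimated, e.g., by quadrature or tracking methods).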


Table 1. List of symbols commonly used in this paper.

Symbol — Definition
θ — abstract scene parameter
M(θ) — object surfaces
V(θ) — volume encapsulating the scene
V_0(θ) — V(θ) \ M(θ)
f — measurement contribution
Ω(θ) — path space
∂Ω(θ) — boundary path space
h — path characteristic
X — motion
B_M — reference surface
B_V — reference volume
B_{V_0} — B_V \ B_M
f̂ — material measurement contribution
Ω̂ — material path space
∂Ω̂(θ) — material boundary path space

4 GENERALIZED DIFFERENTIAL PATH INTEGRAL
We now derive derivatives of the generalized path integral of Eq. (1) with respect to arbitrary scene parameters. To this end, we generalize the formulation of differential path integrals introduced by Zhang et al. [2020] for differentiable rendering of surfaces. Specifically, we first introduce a material-form reparameterization of Eq. (1) in §4.1. Then, we introduce the generalized differential path integral in §4.2 and discuss how perspective pinhole cameras and point lights can be supported using our formulation in §4.3. Lastly, we discuss the relation between our technique and some closely related prior works in §4.4. Table 1 summarizes the symbols commonly used in this paper.

4.1 Material-Form Parameterization
Assume the (optical and/or geometric) properties of a scene to be controlled by some abstract parameter θ ∈ ℝ. In general, the volume V encapsulating the scene and the union of object surfaces M ⊂ V can both depend on the parameter θ. This causes the corresponding path space Ω given by Eq. (5) to also depend on θ, making the differentiation of Eq. (1) more challenging.

To address this problem, Zhang et al. [2020] propose to apply a change of variable to the ordinary path integral so that the new integration domain is independent of the scene parameter θ. They have also shown how this can be achieved for surface-only light transport (using Veach's formulation [1997]). In what follows, we demonstrate how this idea can be realized for generalized path integrals.

Material path space. Let X be a differentiable mapping, or a motion,² such that X(·, θ) is a smooth bijection that transforms (i) some reference surface B_M to M(θ) and (ii) some reference volume B_V ⊃ B_M to V(θ) ⊃ M(θ). We note that both B_V and B_M are independent of the scene parameter θ. Additionally, we call any x ∈ V(θ) a spatial point and any p ∈ B_V a material point.

Let B_{V_0} := B_V \ B_M. Following Eq. (2), we define the set of material light paths p̄ of length N (i.e., with N segments) and characteristic h as
$$\hat{\Omega}_N^h := \left\{ (p_0, p_1, \ldots, p_N) : p_n \in \begin{cases} \mathcal{B}_{\mathcal{M}}, & h(n) = 1 \\ \mathcal{B}_{\mathcal{V}_0}, & h(n) = 0 \end{cases} \right\},$$  (12)
associated with the measure defined in Eq. (3) (where the surface area and volume are with respect to the reference surface B_M and volume B_V, respectively). Then, similar to Eq. (5), we define the material path space Ω̂ as
$$\hat{\Omega} := \bigcup_{N \geq 1} \bigcup_{h=0}^{2^{N+1}-1} \hat{\Omega}_N^h,$$  (13)
with an associated measure identical to the one expressed in Eq. (6).

Choices of reference configurations. In practice, we make the reference configurations coincide with the actual scene geometry with the parameter fixed at some θ_0. Precisely, we set the reference surface B_M to the object surfaces M at θ = θ_0—that is, B_M = M(θ_0)—and the reference volume to B_V = V(θ_0), as illustrated in Figure 2. Under this setting, the mapping X(·, θ_0) : B_V ↦ V(θ_0) reduces to the identity map (that is, X(p, θ_0) = p for all p), causing the material path space Ω̂ and the ordinary path space to coincide: Ω̂ = Ω(θ_0). We note that X(·, θ) generally does not equal the identity map for θ ≠ θ_0. We will discuss how X can be expressed and stored in §6.1.

Change of variable. The motion X induces a differentiable mapping X̄ such that X̄(·, θ) transforms the material path space Ω̂ to the ordinary one Ω(θ) for any θ. Precisely, X̄(·, θ) maps a material path p̄ = (p_0, ..., p_N) ∈ Ω̂ to an ordinary one x̄ ∈ Ω(θ):
$$\bar{x} = \bar{\mathcal{X}}(\bar{p}, \theta) = (x_0, \ldots, x_N), \quad \text{where } x_n = \mathcal{X}(p_n, \theta) \text{ for all } n.$$  (14)
By applying to Eq. (1) this change of variable from x̄ to p̄, we obtain the material-form generalized path integral:
$$I = \int_{\hat{\Omega}} \hat{f}(\bar{p}) \,\mathrm{d}\mu(\bar{p}),$$  (15)
where f̂ is the material measurement contribution function defined as
$$\hat{f}(\bar{p}) := f(\bar{x})\, \frac{\mathrm{d}\mu(\bar{x})}{\mathrm{d}\mu(\bar{p})} = f(\bar{x}) \prod_{n=0}^{N} J(p_n),$$  (16)
where x̄ is the ordinary light path corresponding to p̄ given by x̄ = X̄(p̄, θ), and
$$J(p) := \begin{cases} \dfrac{\mathrm{d}A(x)}{\mathrm{d}A(p)}, & p \in \mathcal{B}_{\mathcal{M}} \\ \dfrac{\mathrm{d}V(x)}{\mathrm{d}V(p)}, & p \in \mathcal{B}_{\mathcal{V}_0} \end{cases}$$  (17)
Since x = X(p, θ), J generally depends on the scene parameter θ. With the aforementioned choices of reference configurations with X(·, θ_0) reducing to the identity map for some fixed θ_0, J|_{θ=θ_0} always has unit value (with potentially nonzero derivative). We will discuss how this term can be computed in practice in §6.2.

²We follow the terminology used by Zhang et al. [2020], originated in continuum and fluid mechanics.
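To make the reference-configuration convention concrete, here is a minimal sketch under an illustrative choice of motion (a uniform scaling about the origin, not an example from the paper): X(·, θ_0) is the identity at θ_0 = 0, and the volume Jacobian of Eq. (17) equals one there while its θ-derivative does not vanish:

```python
def motion(p, theta):
    """An illustrative motion X(p, theta): uniform scaling about the
    origin. X(., 0) is the identity, so the reference configuration
    (theta_0 = 0) coincides with the scene at theta_0, as in Sec. 4.1."""
    s = 1.0 + theta
    return (s * p[0], s * p[1], s * p[2])

def volume_jacobian(theta):
    """dV(x)/dV(p) for uniform scaling: determinant of the 3x3
    Jacobian (1 + theta) * Identity, i.e., (1 + theta)^3."""
    return (1.0 + theta) ** 3

# At the reference parameter the mapping is the identity and J = 1 ...
assert motion((1.0, 2.0, 3.0), 0.0) == (1.0, 2.0, 3.0)
assert abs(volume_jacobian(0.0) - 1.0) < 1e-12

# ... but dJ/dtheta at theta_0 is nonzero (here 3), matching the remark
# that J has unit value at theta_0 with a potentially nonzero derivative.
eps = 1e-6
dJ = (volume_jacobian(eps) - volume_jacobian(-eps)) / (2.0 * eps)
assert abs(dJ - 3.0) < 1e-6
```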


Fig. 2. Material-form parameterization of a block whose horizontal location is controlled by a parameter θ. In this example, the reference surface B_M and volume B_V are selected as the block at some fixed θ = θ_0 (illustrated in orange). Then, the motion X captures the motion of the block (hence the name) by mapping each point in the reference volume to the corresponding one in the "moving" block via X(·, θ) for any θ. We show the images V(θ), x_0(θ), and x_1(θ) of the reference volume B_V, of an interior point p_0 ∈ B_{V_0}, and of a boundary point p_1 ∈ B_M, respectively, at θ = θ_1 and θ = θ_2.

Fig. 3. A boundary segment x_{K−1} x_K has the property that its interior intersects with another surface in the scene at exactly one point. This causes the endpoint x_K to lie on the discontinuity boundary of the visibility function V(x_{K−1} ↔ ·) with x_{K−1} fixed (and vice versa). With the presence of participating media, a boundary segment can connect two surface vertices (a), two volume vertices (d), or one surface and one volume vertex (b, c).

4.2 Generalized Differential Path Integral
With the formulation of the material-form generalized path integral established, differentiable rendering of participating media boils down to differentiating Eq. (15).

Continuity assumptions. To facilitate the derivation of the derivative, we make a few assumptions:
A.1 There is no ideal specular surface (e.g., perfect mirror or smooth glass);
A.2 The source emission L_e, sensor importance W_e, BSDFs f_s, and phase functions f_p are C⁰-continuous spatially and directionally;
A.3 The scattering coefficient σ_s and extinction coefficient σ_t are continuous in the interior of each medium;
A.4 Discontinuities of the Jacobian determinants of Eq. (17), if they exist, are independent of the parameter θ.
We note that Assumption A.2 can be relaxed and will discuss this aspect in §4.3.

Boundary segment, path, and path space. We define a boundary light path x̄ = (x_0, x_1, ..., x_N) to be a light path containing exactly one boundary segment x_{K−1} x_K (for some 0 < K ≤ N) such that the interior of this segment intersects the object surfaces M(θ) at exactly one point (see Figure 3). We further denote the set of all boundary paths (with finite length) as the boundary path space ∂Ω(θ).

The measure μ̇ associated with the boundary path space satisfies that, for any boundary path x̄ = (x_0, x_1, ..., x_N) with characteristic h and boundary segment x_{K−1} x_K:
$$\mathrm{d}\dot{\mu}(\bar{x}) = \left( \prod_{n \neq K} \mathrm{d}\mu_{h(n)}(x_n) \right) \begin{cases} \mathrm{d}\ell(x_K), & h(K) = 1 \\ \mathrm{d}A(x_K), & h(K) = 0 \end{cases}$$  (18)
where dμ_{h(n)} is defined in Eq. (4).

Under the material-form parameterization described in §4.1, for each boundary path x̄ ∈ ∂Ω(θ) with boundary segment x_{K−1} x_K, we call p̄ = X̄^{−1}(x̄, θ) a material boundary path, and the segment p_{K−1} p_K on p̄ a material boundary segment. Additionally, we define the material boundary path space ∂Ω̂(θ) as the set of all material boundary paths. We note that, unlike the material path space Ω̂ that is independent of the scene parameter θ, the material boundary path space ∂Ω̂ typically does depend on θ. This is because, with p_{K−1} fixed, for p_{K−1} p_K to be a material boundary segment, p_K will need to depend on θ. We demonstrate this in Figure 4.

Generalized differential path integral. Based on Assumptions A.1–A.4, discontinuities of the material measurement contribution f̂ fully emerge from the mutual visibility function V in the geometric terms G. It follows that differentiating the material-form generalized path integral of Eq. (15) produces the following (material-form) generalized differential path integral:
$$\frac{\mathrm{d}I}{\mathrm{d}\theta} = \underbrace{\int_{\hat{\Omega}} \frac{\mathrm{d}\hat{f}(\bar{p})}{\mathrm{d}\theta} \,\mathrm{d}\mu(\bar{p})}_{\text{interior}} + \underbrace{\int_{\partial\hat{\Omega}} \Delta\hat{f}(\bar{p})\, v(p_K) \,\mathrm{d}\dot{\mu}(\bar{p})}_{\text{boundary}},$$  (19)
where the definitions of individual terms will be discussed in the following. For more details on the derivation of this result, please see Appendix A.
• In the interior integral, df̂/dθ indicates the scene derivative—a type of material derivative—of the material measurement contribution f̂ given by Eq. (16). This derivative is calculated based on the relation x̄ = X̄(p̄, θ).
• In the boundary integral, ∂Ω̂ is the material boundary path space, and the measure μ̇ is defined in Eq. (18).
• For each material boundary path p̄ with material boundary segment p_{K−1} p_K, the term v(p_K) is a scalar that captures how "fast" (with respect to θ) the discontinuity boundary evolves at p_K along the normal direction (see Figure 4). Precisely,
$$v(p_K) := \frac{\mathrm{d}p_K}{\mathrm{d}\theta} \cdot n(p_K),$$  (20)
where p_K = X^{−1}(x_K, θ), and "·" denotes the dot-product operator. Additionally, when p_K is a surface vertex, n(p_K) is a unit vector perpendicular to the discontinuity curve within the reference

surface containing p_K. When p_K is a volume vertex, on the other hand, n(p_K) is a unit vector perpendicular to the corresponding discontinuity surface in the reference volume. Further, evaluating dp_K/dθ in Eq. (20) requires parameterizing locally the discontinuity curve or surface near p_K. We will discuss how this can be done in practice in §6.3.
• Δf̂(p̄) denotes the difference in material measurement contribution f̂ across the discontinuity boundary. Based on our continuity assumptions (A.1–A.4), it holds that
$$\Delta\hat{f}(\bar{p}) = \hat{f}(\bar{p})\, \frac{\Delta V(x_{K-1} \leftrightarrow x_K)}{V(x_{K-1} \leftrightarrow x_K)},$$  (21)
where ΔV(x_{K−1} ↔ x_K) equals −V(x_{K−1} ↔ x_K) if the normal n (of the discontinuity boundary at x_K) points toward a region visible to x_{K−1}, or V(x_{K−1} ↔ x_K) otherwise.

Fig. 4. Evolution of discontinuity boundaries: when the scene geometry varies with the parameter θ, so will the visibility boundaries. In this example, θ controls the width of the surface M_0. Then, to form a boundary segment x_{K−1} x_K with one endpoint x_{K−1} fixed (on a surface or inside a medium), the other endpoint x_K can lie (a) on a curve within an object surface (M_top in this example); and (b) within a surface determined by x_{K−1} and the left edge (in solid black lines) of M_0. The discontinuity curves and surfaces (illustrated in orange) generally depend on the parameter θ. After transforming these curves and surfaces back to the reference configuration (using the inverse of X(·, θ) that maps X(p, θ) to p for all p), v(p_K) in Eq. (19) captures the change rate (with respect to θ) of p_K along the normal direction of the discontinuity curve or surface (under reference configurations).

Fig. 5. Perspective pinhole camera: (a) To support this camera model, we encode the contribution of the segment x_N x_cam (illustrated as gray dashed arrows) in the detector importance function via Eq. (23). (b) We also allow x_N x_cam to be a boundary segment to capture the additional discontinuities introduced by this segment.
 reference volume BV0 that represent, respectively, the camera’s cen-
4.3 Supporting Pinhole Cameras and Point Lights ter of projection and a point along the center axis. Then, for all ,
In the following, we discuss how perspective pinhole cameras and we have
point lights—which are commonly used in computer graphics and (0)
 cam ( ) := X( cam, ), (25)
vision—can be incorporated in our material path integral framework (1) (0)
established in §4.1 and §4.2. X( cam, ) − X( cam, )
 cam ( ) := . (26)
 With the detector being a pinhole camera located at cam ∈ V0 , (1) (0)
 X( cam, ) − X( cam, )
any light transport path must terminate at cam to have a nonzero
measurement contribution. For a light path ( 0, . . . , , cam ), the The position src of a point light can be parameterized in a similar
detector importance of the pinhole camera equals fashion as Eq. (25).

 pinhole P ( ) Handling discontinuities. The inclusion of the generalized geo-
 e (  cam ) := −−−−−− −→) 3 , (22) metric term ( ↔ cam ) in Eq. (23) and ( src  0 ) in Eq. (24)
 ( cam · cam 
 can violate the assumption (A.2) of e and e being continuous.
where cam is the camera’s axis of projection, −−−−−− −→ denotes the Fortunately, this can be handled easily by including a new set of
 cam 
unit vector pointing from cam toward , and P ( ) indicates the material boundary paths in the boundary term of Eq. (19).
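To make Eqs. (25) and (26) concrete, the sketch below evaluates x_cam(θ) and n_cam(θ) for a hypothetical motion X (a rigid translation along the x-axis, chosen purely for illustration) and differentiates them at θ_0 = 0 with central finite differences; an actual implementation would instead differentiate X via automatic differentiation.

```python
import math

def sub(a, b):
    return [ai - bi for ai, bi in zip(a, b)]

def normalize(v):
    n = math.sqrt(sum(vi * vi for vi in v))
    return [vi / n for vi in v]

# Hypothetical motion X(p, theta): rigid translation along x by theta.
# X(., 0) is the identity, matching the choice of reference configuration.
def X(p, theta):
    return [p[0] + theta, p[1], p[2]]

# Fixed material points: center of projection and a point on the axis.
p_cam0 = [0.0, 0.0, 0.0]
p_cam1 = [0.0, 0.0, 1.0]

def x_cam(theta):  # Eq. (25)
    return X(p_cam0, theta)

def n_cam(theta):  # Eq. (26)
    return normalize(sub(X(p_cam1, theta), X(p_cam0, theta)))

# Central finite differences at theta0 = 0 (an autodiff system would
# differentiate X directly instead).
eps = 1e-4
dx_cam = [(a - b) / (2 * eps) for a, b in zip(x_cam(eps), x_cam(-eps))]
dn_cam = [(a - b) / (2 * eps) for a, b in zip(n_cam(eps), n_cam(-eps))]

print(dx_cam)  # [1.0, 0.0, 0.0]: the camera moves at unit speed along x
print(dn_cam)  # [0.0, 0.0, 0.0]: a pure translation leaves the axis fixed
```

Because both p_cam^(0) and p_cam^(1) undergo the same translation here, the axis n_cam stays constant and its derivative vanishes while the camera position moves at unit speed.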

ACM Trans. Graph., Vol. 40, No. 4, Article 76. Publication date: August 2021.

Specifically, for pinhole cameras, we consider p̄ = (p_0, ..., p_N) such that p_N is a discontinuity point of Ĝ(p_N ↔ p_cam). In other words, we allow p_N p_cam to be a boundary segment (see Figure 5-b). Similarly, when handling point lights, we track discontinuities of p_0 such that p_src p_0 is effectively a boundary segment.

Fig. 6. Pixel-level antithetic sampling: As a function of x_N^⊥, pixel reconstruction filters usually exhibit point symmetry (a), causing their spatial derivatives to be odd functions that can produce high variance when estimating the interior integral (b1). With our pixel-level antithetic sampling, significant variance reduction can be achieved (b2). In this example, the derivatives in (b1) and (b2) are computed with respect to the bunny's vertical position, and the ordinary image is shown as an inset in (b1).

Other zero-measure detectors and sources. Using the formulations outlined in Eqs. (23) and (24), other zero-measure detectors (e.g., orthographic cameras) and sources (e.g., directional lights) can be handled in a similar manner. In the case of a directional light with incident direction ω_src, we can encode its contributions in the source emission by letting

    L_e(x_0 → x_1) := f_v(ω_src → x_0 → x_1) V(x_0, ω_src) I,    (27)

where V(x_0, ω_src), which can be discontinuous with respect to x_0, indicates whether a ray with origin x_0 and direction ω_src can reach infinity without being occluded.

4.4 Relation to Prior Works

Relation to differential radiative transfer. Theoretically, our generalized differential path integral of Eq. (19) offers the same level of generality as the differential theory of radiative transfer (DTRT) [Zhang et al. 2019], since both formulations allow differentiating volumetric light transport with respect to arbitrary scene parameters.

On the other hand, our mathematical framework enjoys several significant advantages in practice.

Firstly, thanks to the material-form parameterization (§4.1), our formulation requires tracking fewer types of discontinuities. For example, DTRT involves boundary terms emerging from differentiating the line integral in the (integral-form) radiative transfer equation. Our formulation, on the other hand, only requires handling discontinuities resulting from the mutual visibility V.

Secondly, being a path-space formulation, our technique allows the design of sophisticated Monte Carlo estimators (which we will discuss in §5) that are capable of handling complex light transport effects efficiently without the need of explicit silhouette detection.

Relation to path-space differentiable rendering. The differential path integral formulation introduced by Zhang et al. [2020] is limited to surface-only light transport and is essentially a special case of Eq. (19). Further, Zhang et al. have assumed the absence of zero-measure sources, while we show how this can be relaxed in §4.3.

5 MONTE CARLO ESTIMATORS

We now derive new unbiased and consistent Monte Carlo estimators for our generalized differential path integral of Eq. (19). We focus on the problem of estimating (dI/dθ)(θ_0) for some user-specified θ_0. Further, we set the reference configurations (that is, the reference volume and surface) as the scene geometry at θ = θ_0 (as discussed in §4.1).

Thanks to the full separation between the interior and boundary terms in the generalized differential path integral, we estimate these terms separately. In the rest of this section, we discuss the estimation of the interior integral in §5.1 and that of the boundary integral in §5.2. We keep discussions in this section high-level and provide some implementation details in §6.

5.1 Estimating the Interior Integral

With our choice of reference configurations, the material path space Ω̂ coincides with the ordinary one Ω(θ_0). This allows estimating the interior integral in Eq. (19) using path sampling methods developed for forward rendering.

Specifically, we sample a material path p̄ with probability density prob(p̄) using standard techniques (such as volumetric path tracing or bidirectional path tracing). This path sampling process does not need to be differentiated since the material path space Ω̂ and all material paths p̄ ∈ Ω̂ are independent of the scene parameter θ (and thus have zero derivatives).

With the material path p̄ constructed, we compute the corresponding ordinary path x̄ ∈ Ω(θ_0) by setting x_i = X(p_i, θ_0) for each vertex p_i of p̄. We note that, although x_i takes the same value as p_i for all i (as X(·, θ_0) reduces to the identity map with our choice of references), the derivative (dx_i/dθ)(θ_0)—which can be obtained by differentiating the motion X—is typically nonzero.

Lastly, we compute the material measurement contribution f̂ of Eq. (16) in a differentiable fashion to obtain (df̂(p̄)/dθ)(θ_0). Returning this derivative divided by the path sampling probability prob(p̄) completes our Monte Carlo estimation of the interior term in Eq. (19).

Pixel-level antithetic sampling. Antithetic sampling is a classic variance-reduction framework for Monte Carlo estimation [Hammersley and Mauldon 1956]. When estimating integrals of the form ∫ h(x) dx where h is an approximately odd function, it is desired to use correlated pairs of samples x and −x so that h(x) + h(−x) ≈ 0. Recently, Zhang et al. [2021] have introduced this idea to differentiable rendering of glossy and near-specular materials.

We apply antithetic sampling at the pixel level. Specifically, when using perspective pinhole cameras given by Eqs. (22) and (23), the interior component of the generalized differential path integral involves the derivative of the pixel reconstruction filter P satisfying

    dP/dθ = (∂P/∂x_N^⊥) · (dx_N^⊥/dθ),    (28)

where x_N^⊥ denotes the projection of x_N on the image plane (see Figure 6-a), and the derivative dx_N^⊥/dθ can be calculated based on dx_N/dθ given by our material-form parameterization x_N = X(p_N, θ).
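The benefit of pairing point-symmetric samples can be demonstrated on a 1D stand-in for ∂P/∂x_N^⊥: the derivative of a tent-shaped reconstruction filter, an odd function. This toy is not the full path-space estimator; it only illustrates the cancellation that pixel-level antithetic sampling exploits.

```python
import random

# Spatial derivative of a 1D tent filter P(u) = max(0, 1 - |u|):
# dP/du = -1 on (0, 1), +1 on (-1, 0), and 0 elsewhere (an odd function).
def dP(u):
    if 0.0 < u < 1.0:
        return -1.0
    if -1.0 < u < 0.0:
        return 1.0
    return 0.0

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

random.seed(7)
n = 10000

# Naive estimate of the mean of dP over the pixel footprint (-1, 1).
naive = [dP(random.uniform(-1.0, 1.0)) for _ in range(n)]

# Antithetic: evaluate each draw together with its point-symmetric
# partner -u; for an odd integrand the pair cancels exactly.
anti = [0.5 * (dP(u) + dP(-u))
        for u in (random.uniform(-1.0, 1.0) for _ in range(n // 2))]

print(var(naive))  # close to 1 (every sample is +1 or -1)
print(var(anti))   # 0.0 (exact cancellation)
```

In the actual estimator, the paired samples are full light paths whose final vertices project to point-symmetric positions within the pixel, so the cancellation is only approximate; the toy's perfect cancellation is the idealized limit.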

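The interior-integral procedure of §5.1 differentiates the measurement contribution, not the sampling process. The sketch below illustrates this with a minimal forward-mode dual number and a toy one-segment contribution (transmittance over squared distance in a homogeneous medium); the vertices, motion, and contribution are simplified stand-ins, not the actual f̂ of Eq. (16).

```python
import math

# Minimal forward-mode dual number: value plus derivative, with d/dtheta
# propagated through arithmetic.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        return Dual(self.val + o.val, self.dot + o.dot)
    def __sub__(self, o):
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    def __truediv__(self, o):
        return Dual(self.val / o.val,
                    (self.dot * o.val - self.val * o.dot) / (o.val * o.val))

def dexp(d):
    e = math.exp(d.val)
    return Dual(e, e * d.dot)

def dsqrt(d):
    s = math.sqrt(d.val)
    return Dual(s, d.dot / (2.0 * s))

# Toy motion X(p, theta) = p + theta * v, evaluated as a dual number at
# theta0 = 0; the material vertices p carry no theta dependence.
def motion(p, v):
    theta = Dual(0.0, 1.0)  # seed: differentiate w.r.t. theta
    return [Dual(pi) + theta * Dual(vi) for pi, vi in zip(p, v)]

x0 = motion([0.0, 0.0, 0.0], [0.0, 0.0, 0.0])  # static vertex
x1 = motion([2.0, 0.0, 0.0], [1.0, 0.0, 0.0])  # vertex moving along +x

# Toy segment contribution: transmittance / r^2 in a homogeneous medium.
sigma_t = 0.5
d = [a - b for a, b in zip(x1, x0)]
r = dsqrt(d[0] * d[0] + d[1] * d[1] + d[2] * d[2])
f = dexp(Dual(0.0) - Dual(sigma_t) * r) / (r * r)

print(f.val)  # exp(-1)/4: the ordinary contribution
print(f.dot)  # ~ -0.138: its derivative, obtained without re-sampling
```

The path is sampled once (here, hard-coded); only the arithmetic of the contribution is differentiated, which is exactly why the estimator needs no differentiation of the sampling process.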

In practice, the pixel reconstruction filter P, as a function of x_N^⊥, usually exhibits point symmetry with respect to the pixel center. This causes the (vector-valued) spatial derivative ∂P/∂x_N^⊥ to be an odd function. To reduce the variance introduced by this derivative, we generate pairs of samples that are point-symmetric with respect to the center of the pixel. Figure 6-b demonstrates the effectiveness of our pixel-level antithetic sampling.

Fig. 7. We rename the vertices of a boundary path such that x_0^S x_0^D is the boundary segment (illustrated in red). The source and detector subpaths are shown in yellow and green, respectively. The arrows in this figure illustrate the physical flow of light (that is, from the source to the detector) and do not indicate how the subpaths are constructed.

5.2 Estimating the Boundary Integral

To estimate the boundary integral, Zhang et al. [2020] introduced a multi-directional sampling technique that starts with drawing the material boundary segment p_{K−1} p_K (such that x_K is a jump discontinuity point of V(x_{K−1} ↔ x_K) when x_{K−1} is fixed). This avoids explicit silhouette detections that can be expensive for complex scenes. Their technique, unfortunately, is derived for the surface-only case. In what follows, we generalize this technique to also support participating media.

Multi-directional boundary integral. To facilitate the development of our Monte Carlo estimator of the boundary integral, we first rewrite this integral in a multi-directional form. The derivations of Eqs. (29)–(35) are mostly identical to those from the work by Zhang et al. [2020], and we show them nonetheless for completeness.

We start with renaming the vertices of a material boundary path p̄ ∈ ∂Ω̂:

    p̄ = (p_s^S, p_{s−1}^S, ..., p_0^S, p_0^D, p_1^D, ..., p_t^D),    (29)

such that p_s^S and p_t^D are located, respectively, on the source and the detector;³ and p_0^S p_0^D is the material boundary segment. Similarly, we rename the vertices of the corresponding boundary path as x̄ = (x_s^S, x_{s−1}^S, ..., x_0^S, x_0^D, x_1^D, ..., x_t^D), as illustrated in Figure 7. Similar to the estimation of the interior integral (discussed in §5.1), under our choice of reference configurations, a material boundary path and its ordinary counterpart coincide.

This allows us to factorize the integrand of the boundary component of Eq. (19) into contributions of the boundary segment x_0^S x_0^D, the source subpath x̄^S = (x_s^S, ..., x_0^S), and the detector subpath x̄^D := (x_0^D, ..., x_t^D), respectively:

    Δf̂(p̄) V(p_0^D) = f̂^B(p_0^S, p_0^D) f̂^S(p̄^S; p_0^D) f̂^D(p̄^D; p_0^S),    (30)

with the three factors capturing, respectively, the boundary segment, the source subpath, and the detector subpath, where

    f̂^B(p_0^S, p_0^D) := ΔĜ(p_0^S ↔ p_0^D) V(p_0^D),    (31)

    f̂^S(p̄^S; p_0^D) := f̂_v(p_1^S → p_0^S → p_0^D) ∏_{i=1}^{s} f̂_v(p_{i+1}^S → p_i^S → p_{i−1}^S) Ĝ(p_i^S ↔ p_{i−1}^S),    (32)

    f̂^D(p̄^D; p_0^S) := f̂_v(p_0^S → p_0^D → p_1^D) ∏_{j=1}^{t} f̂_v(p_{j−1}^D → p_j^D → p_{j+1}^D) Ĝ(p_{j−1}^D ↔ p_j^D).    (33)

In Eqs. (32) and (33), Ĝ is the geometric term, and f̂_v captures both the per-vertex contribution f_v of Eq. (8) and the Jacobian determinant J of Eq. (17). That is, for any material points p_1, p_2, p_3 and x_i = X(p_i, θ) for i = 1, 2, 3:

    f̂_v(p_1 → p_2 → p_3) := f_v(x_1 → x_2 → x_3) J(p_2).    (34)

With its integrand expressed using Eq. (30), we can rewrite the boundary integral in its multi-directional form as:⁴

    ∬ f̂^B(p_0^S, p_0^D) ( ∫ f̂^S(p̄^S; p_0^D) d p̄_0^S ) ( ∫ f̂^D(p̄^D; p_0^S) d p̄_0^D ) d p_0^D d p_0^S,    (35)

where the outer integral is over the material boundary segment p_0^S p_0^D. Additionally, p̄_0^S and p̄_0^D denote the source and detector subpaths with the endpoints p_0^S and p_0^D of the boundary segment excluded, respectively. That is, p̄_0^S := (p_s^S, ..., p_1^S) and p̄_0^D := (p_1^D, ..., p_t^D).

In the case of surface-only rendering, both x_0^S and x_0^D would always be surface vertices (Figure 3-a). With the presence of participating media, on the other hand, they both can be either a surface or a volume vertex, leading to three extra combinations (Figure 3-bcd).

Change of variable. To facilitate efficient sampling of the material boundary segment p_0^S p_0^D, we apply a series of changes of variables to Eq. (35) as follows. First, we use the predetermined differentiable mapping X(·, θ) to make the outer integral be over the corresponding boundary segment x_0^S x_0^D. In principle, this requires computing the Jacobian determinant ∥(dx_0^S dx_0^D)/(dp_0^S dp_0^D)∥ based on the mapping X(·, θ). In practice, because of our choice of reference configurations, the Jacobian determinant reduces to one (i.e., ∥(dx_0^S dx_0^D)/(dp_0^S dp_0^D)∥ ≡ 1).

Then, let x^B be the intersection point between x_0^S x_0^D and the union of all object surfaces, and let ω^B be the direction of this boundary segment (i.e., a unit vector pointing from x_0^S to x_0^D). We apply another change of variable to make the outer integral be on x^B and ω^B. We note that the point x^B is not a vertex of the resulting boundary path—we use it only for sampling purposes.

³ When the detector is a pinhole camera, as discussed in §4.3, x_t^D is further connected to the camera's center of projection p_cam^(0) (instead of being on the sensor).
⁴ We omit the integral domains and measures in Eq. (35) for notational simplicity.


Fig. 8. Illustrations for deriving the change-of-variable Jacobian determinants expressed in Eqs. (40)–(43).

In what follows, we derive the Jacobian determinants corresponding to this change of variable. We base our derivations on the assumption that all surfaces in the scene are represented using polygonal meshes. In this case, x^B will always belong to an edge of some polygonal face.

To facilitate the derivation of the Jacobian determinants, we first relate dx_0^S to dσ(ω^B) and dx_0^D to dℓ(x^B) as follows.

• When x_0^S is a surface vertex, as illustrated in Figure 8-a1, we have

    dA(x_0^S) |cos θ^S| / (r^S)² = dσ(ω^B),    (36)

where r^S := ∥x^B − x_0^S∥ is the distance between x^B and x_0^S; θ^S is the angle between the segment x_0^S x^B and the surface normal at x_0^S; and σ is the solid-angle measure.

• When x_0^S is a volume vertex, as shown in Figure 8-a2, we have

    dV(x_0^S) / (r^S)² = dσ(ω^B) dr^S.    (37)

• When x_0^D is a surface vertex, for x_0^S x_0^D to be a boundary segment with x_0^S fixed, x_0^D belongs to a curve (see Figure 8-b1). In this case, we have

    dℓ(x_0^D) sin θ^D / r^D = dℓ(x^B) sin θ^B / r^S,    (38)

where r^D := ∥x_0^D − x_0^S∥ is the distance between x_0^D and x_0^S, and θ^D is the angle between ω^B and the curve's tangent at x_0^D.

• When x_0^D is a volume vertex, it resides on a surface determined by the point x^B and direction ω^B (see Figure 8-b2). It follows that

    dV(x_0^D) / r^D = (dℓ(x^B) sin θ^B / r^S) dr^D.    (39)

Based on the relations given by Eqs. (36)–(39), we derive the Jacobian determinants for changes of variables from x_0^S and x_0^D to x^B and ω^B (as well as r^S and r^D, when needed). Specifically:

• When both x_0^S and x_0^D are surface vertices, according to Eqs. (36) and (38), we have

    (dA(x_0^S) dℓ(x_0^D)) / (dℓ(x^B) dσ(ω^B)) = r^S r^D sin θ^B / (sin θ^D |cos θ^S|).    (40)

• With x_0^S being a surface vertex and x_0^D a volume vertex, multiplying Eqs. (36) and (39) yields

    (dA(x_0^S) dV(x_0^D)) / (dℓ(x^B) dσ(ω^B) dr^D) = r^S r^D sin θ^B / |cos θ^S|.    (41)

• With x_0^S being a volume vertex and x_0^D a surface vertex, multiplying Eqs. (37) and (38) gives

    (dV(x_0^S) dℓ(x_0^D)) / (dℓ(x^B) dσ(ω^B) dr^S) = r^S r^D sin θ^B / sin θ^D.    (42)

• Lastly, when both x_0^S and x_0^D are volume vertices, according to Eqs. (37) and (39), we have

    (dV(x_0^S) dV(x_0^D)) / (dℓ(x^B) dσ(ω^B) dr^S dr^D) = r^S r^D sin θ^B.    (43)

We note that, among the four cases discussed above, Zhang et al. [2020] only derived the first one—namely Eq. (40)—as Eq. (48) of their paper.

Using these relations, we can rewrite the multi-directional boundary integral of Eq. (35). When x_0^S and x_0^D are both volume vertices, for instance, we have⁵

    ∫_E ∫_{S²} ∫_0^∞ ∫_0^∞ [ f̂^B(p_0^S, p_0^D) r^S r^D sin θ^B ( ∫ f̂^S d p̄_0^S ) ( ∫ f̂^D d p̄_0^D ) ] dr^D dr^S dσ(ω^B) dℓ(x^B),    (44)

where E denotes the union of all face edges.

Boundary integrals of the other three cases (when at least one of x_0^S and x_0^D is a surface vertex) can be expressed in a similar fashion.

Sampling boundary segments. Based on the reparameterized boundary integrals like Eq. (44), we develop a generalized multi-directional sampling algorithm (Algorithm 1) to estimate the boundary component of the generalized differential path integral.

Our algorithm starts with sampling an interior point x^B and direction ω^B of the boundary segment from some predetermined probability density P (Line 3).

Then, we obtain the two endpoints x_0^S and x_0^D of the boundary segment, as well as the corresponding probabilities prob^S and prob^D, using the sampleInteraction(x, ω) function (Lines 4, 5). For any given x and ω, this function returns a randomly sampled volume or surface vertex along the ray (x, ω), accompanied by the corresponding probability.

⁵ Strictly, the integral of ω^B in Eq. (44) should be over a subset of S² so that the resulting boundary segment does not penetrate any surface. Please refer to the work by Zhang et al. [2020] for more details.
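Since the four cases of Eqs. (40)–(43) differ only by endpoint-dependent factors, they can be folded into a single helper, mirroring Lines 7–12 of Algorithm 1. The function below is an illustrative sketch of that computation, not code from the authors' implementation.

```python
import math

def boundary_jacobian(r_s, r_d, theta_b, s_is_surface, d_is_surface,
                      theta_s=0.0, theta_d=0.0):
    """Change-of-variable factor of Eqs. (40)-(43).

    r_s, r_d : distances ||x^B - x_0^S|| and ||x_0^D - x_0^S||
    theta_b  : angle between omega^B and the mesh edge at x^B
    theta_s  : angle to the surface normal at x_0^S (surface case only)
    theta_d  : angle between omega^B and the curve tangent at x_0^D
               (surface case only)
    """
    j = r_s * r_d * math.sin(theta_b)   # Eq. (43): volume-volume base case
    if s_is_surface:
        j /= abs(math.cos(theta_s))     # extra factor in Eqs. (40), (41)
    if d_is_surface:
        j /= math.sin(theta_d)          # extra factor in Eqs. (40), (42)
    return j

# Volume-volume case (Eq. 43): plain r^S r^D sin(theta^B).
print(boundary_jacobian(1.0, 2.0, math.pi / 2, False, False))  # 2.0

# Surface-surface case (Eq. 40): additionally divided by
# sin(theta^D) |cos(theta^S)|, which is 1 for the angles chosen here.
print(boundary_jacobian(1.0, 2.0, math.pi / 2, True, True,
                        theta_s=0.0, theta_d=math.pi / 2))  # 2.0
```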


ALGORITHM 1: Monte Carlo estimator of the boundary integral (35)

 1  EstimateBoundaryIntegral()
 2  begin
        /* Sample boundary segment */
 3      Draw (x^B, ω^B) ∼ P;
 4      (x_0^S, prob^S) ← sampleInteraction(x^B, −ω^B);
 5      (x_0^D, prob^D) ← sampleInteraction(x^B, ω^B);
        /* Compute Jacobian determinant */
 6      r^S ← ∥x^B − x_0^S∥;  r^D ← ∥x_0^D − x_0^S∥;
 7      J^B ← r^S r^D sin θ^B;
 8      if x_0^S is a surface vertex then
 9          J^B ← J^B / |cos θ^S|;
10      end
11      if x_0^D is a surface vertex then
12          J^B ← J^B / sin θ^D;
13      end
        /* Evaluate boundary segment */
14      p_0^S ← x_0^S;  p_0^D ← x_0^D;
15      F^B ← f̂^B(p_0^S, p_0^D) J^B / (P(x^B, ω^B) prob^S prob^D);
        /* Sample and evaluate source & detector subpaths */
16      F^S ← EstimateSourcePath(p_0^S; p_0^D);
17      F^D ← EstimateDetectorPath(p_0^D; p_0^S);
18      return F^B F^S F^D;
19  end

Specifically, we follow the standard procedure in volumetric path tracing by first drawing a free-flight distance t ≥ 0 from an exponential distribution with the pdf p(t) = σ_t(x + t ω) exp(−∫_0^t σ_t(x + s ω) ds), with σ_t being the extinction coefficient. For heterogeneous media, this can be achieved using techniques like delta tracking [Woodcock et al. 1965]. If the line segment connecting x and (x + t ω) does not intersect any object surface, the function returns a volume vertex at (x + t ω). Otherwise, let (x + t₀ ω) be the first intersection along the ray (x, ω); sampleInteraction returns this point as a surface vertex.

Sampling subpaths. With the boundary segment x_0^S x_0^D drawn, we compute, based on Eqs. (40)–(43), the corresponding Jacobian determinant J^B (Lines 6–12). This allows the contribution F^B of the boundary segment x_0^S x_0^D to be computed (Line 15).

Lastly, we estimate the contributions of the source and detector subpaths p̄^S and p̄^D, respectively, using standard techniques such as volumetric path tracing (Lines 16 and 17), completing our estimation of the boundary integral.

Fig. 9. Representing the motion: This example consists of a bunny inside a Cornell box filled with a homogeneous medium (a). To estimate the derivative with respect to the position of the bunny along the x-axis (b), we set the derivative dX(p, θ)/dθ of the motion to [1, 0, 0] for all p on the surface of the bunny, and to zero for p on the box. To smoothly interpolate these derivatives in the interior of the medium, we tetrahedralize its volume (with boundaries given by the bunny and the box) and apply trilinear interpolation in each tetrahedron. We visualize a 2D slice of the resulting dX(·, θ)/dθ in (c), where the intensity of each pixel indicates the value of the derivative's x-component. (a) Ordinary; (b) Derivative; (c) dX(·, θ)/dθ.

6 IMPLEMENTATION DETAILS

We now discuss some important aspects of implementing the Monte Carlo estimators presented in §5.

6.1 Representing the Motion

A key ingredient of our material-form parameterization presented in §4.1 is the motion X that, for each θ, gives a differentiable bijection X(·, θ) that maps the reference surface and volume to the object surfaces M(θ) and volume V(θ), respectively.

As aforementioned, we focus on the problem of estimating dI/dθ at some user-specified θ = θ_0 based on the generalized differential path integral of Eq. (19). When using the reference configurations discussed in §4.1, X(·, θ_0) reduces to the identity map and does not have to be explicitly stored. Thus, it suffices to specify the derivative [dX(p, θ)/dθ] at θ = θ_0 for each point p of the reference volume as a vector field.

Affine deformation. If the deformation of an object is affine, the corresponding motion can be expressed as X(p, θ) = A(θ) p + b(θ), where A(θ) is an invertible matrix and b(θ) is a vector. Assuming that Ȧ := dA/dθ and ḃ := db/dθ can be calculated analytically, we have [dX(p, θ)/dθ] at θ = θ_0 equal to Ȧ(θ_0) p + ḃ(θ_0), which can be computed easily on the fly.

Non-affine deformation. To express general non-affine deformations, we use a tetrahedral mesh with the derivative values defined at each vertex. For the i-th vertex of the tetrahedral mesh, we store its position p_i and the derivative ẋ_i := [dX(p_i, θ)/dθ] at θ = θ_0. In practice, this can be implemented by representing vertex positions as automatic-differentiation-enabled vectors x_i. In this way, it holds that p_i = detach(x_i), and ẋ_i can be obtained via automatic differentiation.

In the interior of each tetrahedron, we perform a trilinear interpolation to obtain the derivative. Although higher-order interpolations are possible, we find a linear one to suffice for our purpose.

In practice, non-affine deformations such as stretching of a medium (as a continuum) are typically pre-specified using tetrahedral meshes. In this case, we reuse these meshes to represent our motion X. When no tetrahedral mesh is provided as input, we tetrahedralize the input polygonal mesh using the TetGen library [Si 2015] so that the constructed tetrahedral mesh shares the same set of vertices as the input boundary mesh (see Figure 9 for an example). In our experiments, the computational overhead of this step is negligible.
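A minimal sketch of the per-tetrahedron interpolation described above: barycentric weights (the linear interpolant that trilinear interpolation reduces to on a tetrahedron) blend the per-vertex derivatives at a query point. All names here are illustrative, not from the authors' implementation.

```python
def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def barycentric(p, v):
    """Weights w with p = sum_i w_i v_i and sum_i w_i = 1 (tet v[0..3])."""
    cols = [[v[j][i] - v[3][i] for j in range(3)] for i in range(3)]
    rhs = [p[i] - v[3][i] for i in range(3)]
    d = det3(cols)
    w = []
    for j in range(3):  # Cramer's rule for w_0, w_1, w_2
        m = [row[:] for row in cols]
        for i in range(3):
            m[i][j] = rhs[i]
        w.append(det3(m) / d)
    w.append(1.0 - sum(w))
    return w

def interp_derivative(p, verts, xdots):
    """Linearly blend per-vertex derivatives xdots at point p."""
    w = barycentric(p, verts)
    return [sum(wi * xd[k] for wi, xd in zip(w, xdots)) for k in range(3)]

# Unit tetrahedron; only vertex 0 moves (along +x), e.g. a vertex on the
# bunny, while the remaining vertices (e.g. on the box) stay fixed.
verts = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
xdots = [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]

print(interp_derivative([0.25, 0.25, 0.25], verts, xdots))  # [0.25, 0.0, 0.0]
```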

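The sampleInteraction routine used by Algorithm 1 (§5.2, §6) can be sketched for a homogeneous medium as follows. The `surface_hit` function is a hypothetical stand-in for ray casting, and heterogeneous media would require delta tracking instead of the closed-form exponential sampling used here.

```python
import math
import random

SIGMA_T = 1.0  # extinction coefficient of the (homogeneous) medium

def surface_hit(x, omega):
    """Hypothetical ray-cast: distance to the first surface, or inf."""
    return 3.0

def sample_interaction(x, omega):
    # Free-flight distance with pdf sigma_t * exp(-sigma_t * t).
    t = -math.log(1.0 - random.random()) / SIGMA_T
    t0 = surface_hit(x, omega)
    if t < t0:
        # Medium interaction first: return a volume vertex and its pdf.
        pos = [xi + t * wi for xi, wi in zip(x, omega)]
        return pos, 'volume', SIGMA_T * math.exp(-SIGMA_T * t)
    # Surface reached first: return the hit point as a surface vertex;
    # its discrete probability is the transmittance up to the hit.
    pos = [xi + t0 * wi for xi, wi in zip(x, omega)]
    return pos, 'surface', math.exp(-SIGMA_T * t0)

random.seed(3)
kinds = [sample_interaction([0.0] * 3, [0.0, 0.0, 1.0])[1]
         for _ in range(10000)]
print(kinds.count('surface') / len(kinds))  # close to exp(-3) ~ 0.0498
```

With a surface 3 mean free paths away, the fraction of surface vertices matches the transmittance exp(−σ_t · 3), which is what makes the pair of probabilities prob^S and prob^D in Algorithm 1 well defined for both vertex types.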