Chapter 18
Relaxed Cone Stepping for Relief Mapping
Fabio Policarpo
Perpetual Entertainment
Manuel M. Oliveira
Instituto de Informática—UFRGS
18.1 Introduction
The presence of geometric details on object surfaces dramatically changes the way light
interacts with these surfaces. Although synthesizing realistic pictures requires simulating
this interaction as faithfully as possible, explicitly modeling all the small details tends to
be impractical. To address these issues, an image-based technique called relief mapping
has recently been introduced for adding per-fragment details onto arbitrary polygonal
models (Policarpo et al. 2005). The technique has been further extended to render
correct silhouettes (Oliveira and Policarpo 2005) and to handle non-height-field surface
details (Policarpo and Oliveira 2006). In all its variations, the ray-height-field intersection is performed using a binary search, which refines the result produced by some
linear search procedure. While the binary search converges very fast, the linear search
(required to avoid missing large structures) is prone to aliasing because it may miss
thin structures, as is evident in Figure 18-1a. Several space-leaping techniques
have since been proposed to accelerate the ray-height-field intersection and to minimize
the occurrence of aliasing (Donnelly 2005, Dummer 2006, Baboud and Décoret 2006).
Cone step mapping (CSM) (Dummer 2006) provides a clever solution to accelerate the
intersection calculation for the average case and avoids skipping height-field structures
by using some precomputed data (a cone map). However, because CSM uses a conservative
approach, the rays tend to stop before the actual surface, which introduces different
kinds of artifacts, highlighted in Figure 18-1b. Using an extension of CSM that employs
four different radii for each fragment (in the directions north, south, east, and west),
one can reduce the occurrence of these artifacts, but only slightly. We call this
approach quad-directional cone step mapping (QDCSM). Its results are shown in Figure
18-1c, which also highlights the technique's artifacts.
(a) Linear + binary search
(b) Cone step mapping
(c) Quad-directional cone step mapping
(d) Relaxed cone stepping + binary search
Figure 18-1. Comparison of Four Different Ray-Height-Field Intersection Techniques Used to Render
a Relief-Mapped Surface from a 256×256 Relief Texture
(a) Fifteen steps of linear search followed by six steps of binary search. Note the highlighted
aliasing artifacts due to the step size used for the linear search. (b) Fifteen steps of the cone step
mapping technique. Note the many artifacts caused by the fact that the technique is conservative
and many rays will never hit the surface. (c) Fifteen steps of the quad-directional cone step
mapping technique. The artifacts in (b) have been reduced but not eliminated. (d) Fifteen steps of
relaxed cone stepping followed by six steps of binary search. Note that the artifacts have been
essentially eliminated.
In this chapter, we describe a new ray-height-field intersection strategy for per-fragment
displacement mapping that combines the strengths of both cone step mapping and
binary search. We call the new space-leaping algorithm relaxed cone stepping (RCS), as it
relaxes the restriction used to define the radii of the cones in CSM. The idea for the
ray-height-field intersection is to replace the linear search with an aggressive
space-leaping approach, which is immediately followed by a binary search. While CSM
conservatively defines the radii of the cones in such a way that a ray never pierces the
surface, RCS allows the rays to pierce the surface at most once. This produces much
wider cones, accelerating convergence. Once we know a ray is inside the surface, we can
safely apply a binary search to refine the position of the intersection. The combination
of RCS and binary search produces renderings of significantly higher quality, as shown
in Figure 18-1d. Note that both the aliasing visible in Figure 18-1a and the distortions
noticeable in Figures 18-1b and 18-1c have been removed. As a space-leaping
technique, RCS can be used with other strategies for refining ray-height-field intersections,
such as the one used by interval mapping (Risser et al. 2005).
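
To make the idea concrete, the following Cg-style fragment sketches one way the relaxed cone stepping loop and the subsequent binary refinement could be written. It is only a sketch under assumed conventions: the cone map stores depth in its alpha channel and the relaxed cone ratio (cone radius per unit of depth) in its blue channel, the ray is given in tangent space with its z component pointing into the surface, and the step counts match those used for Figure 18-1d. The actual shaders are developed later in the chapter.

// Sketch of relaxed cone stepping followed by binary refinement.
// Assumptions (ours, for illustration): pos = (texcoord, depth) with depth
// in [0, 1]; dir points into the surface (dir.z > 0); the cone map stores
// the relaxed cone ratio in .z and the depth in .w.
float3 intersect_relaxed_cone(sampler2D cone_map, float3 pos, float3 dir)
{
   const int CONE_STEPS   = 15;
   const int BINARY_STEPS = 6;

   dir /= dir.z;                      // one unit of the ray parameter = one unit of depth
   float ray_ratio = length(dir.xy);  // horizontal distance traveled per unit of depth

   // Space leaping: each step advances the ray to the boundary of the cone
   // stored at the current texel. With relaxed cones the ray may end up
   // under the surface, but it crosses the surface at most once.
   float d = 0.0;
   for (int i = 0; i < CONE_STEPS; i++)
   {
      float4 t = tex2D(cone_map, pos.xy);
      float cone_ratio = t.z;                  // relaxed cone ratio at this texel
      float height = saturate(t.w - pos.z);    // distance down to the surface
      d = cone_ratio * height / (ray_ratio + cone_ratio);
      pos += dir * d;
   }

   // Binary search over the last cone step to refine the intersection.
   float3 delta = dir * d * 0.5;
   pos -= delta;                               // midpoint of the last interval
   for (int i = 0; i < BINARY_STEPS; i++)
   {
      delta *= 0.5;
      if (pos.z < tex2D(cone_map, pos.xy).w)
         pos += delta;                         // still above the surface: move forward
      else
         pos -= delta;                         // under the surface: move backward
   }
   return pos;   // (pos.xy, pos.z): texture coordinates and depth of the hit point
}

Because each relaxed cone is as wide as possible while still guaranteeing at most one surface crossing, this loop converges in far fewer iterations than a fixed-step linear search of comparable robustness.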
18.2 A Brief Review of Relief Mapping
Relief mapping (Policarpo et al. 2005) simulates the appearance of geometric surface
details by shading individual fragments in accordance with depth and surface normal
information that is mapped onto polygonal models. A depth map¹ (scaled to the
[0, 1] range) represents geometric details assumed to be under the polygonal surface.
Depth and normal maps can be stored as a single RGBA texture (32 bits per texel)
called a relief texture (Oliveira et al. 2000). For better results, we recommend separating
the depth and normal components into two different textures. This way texture compression
will work better, because a specialized normal compression can be used independently
of the depth map compression, resulting in higher compression ratios and
fewer artifacts. It also provides better performance, because during the relief-mapping
iterations only the depth information is needed, and a one-channel texture is more
cache friendly (the normal information is needed only at the end, for lighting).
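
For instance, the two-texture layout recommended above might be declared as in the following Cg-style sketch, in which only the one-channel depth texture is touched inside the search iterations and the normal map is sampled once at shading time. The names and the suggested formats are illustrative, not prescribed by the technique.

uniform sampler2D depth_map;    // single-channel depth texture (e.g., 8- or 16-bit)
uniform sampler2D normal_map;   // separately compressed normal texture

// Sampled many times per fragment, inside the search loop.
float sample_depth(float2 uv)
{
   return tex2D(depth_map, uv).x;
}

// Sampled once per fragment, only for lighting.
float3 sample_normal(float2 uv)
{
   return normalize(tex2D(normal_map, uv).xyz * 2.0 - 1.0);
}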
Figure 18-2 shows the normal and depth maps of a relief texture whose cross section is
shown in Figure 18-3. The mapping of relief details to a polygonal model is done in the
conventional way, by assigning a pair of texture coordinates to each vertex of the
model. During rendering, the depth map can be dynamically rescaled to achieve different
effects, and correct occlusion is achieved by properly updating the depth buffer.
1. We use the term depth map instead of height map because the stored values represent depth measured
under a reference plane, as opposed to height (measured above it). The reader should not confuse the
expression “depth map” used here with shadow buffers.
Figure 18-2. Example of a Relief Texture
Left: The normal map is stored in the RGB channels of the texture. Right: The depth map is stored
in the alpha channel. Brighter pixels represent deeper geometry.
[Figure 18-3 diagram: a light source, a viewing ray, and a light ray intersecting the relief below fragment f; labeled points include the fragment's texture coordinates (s, t), the intersection coordinates (k, l), the exit coordinates (u, v), and the depth range 0.0 to 1.0.]
Figure 18-3. Relief Rendering
The viewing ray is transformed to the tangent space of fragment f and then intersected with the
relief at point P, with texture coordinates (k, l). Shading is performed using the normal and color
stored at the corresponding textures at (k, l). Self-shadowing is computed by checking if the light
ray hits P before any other surface point.
Relief rendering is performed entirely on the GPU and can be conceptually divided
into three steps. For each fragment f with texture coordinates (s, t), first transform the
view direction V to the tangent space of f. Then, find the intersection P of the transformed
viewing ray with the depth map. Let (k, l) be the texture coordinates of that
intersection point (see Figure 18-3). Finally, use the corresponding position of P, expressed
in camera space, and the normal stored at (k, l) to shade f. Self-shadowing can
be applied by checking whether the light ray reaches P before reaching any other point
on the relief. Figure 18-3 illustrates the entire process. Proper occlusion between
relief-mapped objects and other scene objects is achieved simply by updating the z-buffer with the z
coordinate of P (expressed in camera space, after projection and division by w).
This updated z-buffer also supports the combined use of shadow mapping (Williams
1978) with relief-mapped surfaces.
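
The skeleton below sketches how these three steps might be organized in a Cg fragment program. The helper find_intersection() stands for whichever search strategy is used (sketches of the linear and binary searches follow later in this section), and every name here is an illustrative assumption rather than the chapter's listing; Step 1, the transformation of V into f's tangent space, is assumed to have been done in the vertex shader.

// Sketch of the per-fragment relief-rendering steps. All names are
// illustrative; find_intersection() is a hypothetical placeholder for
// the ray-height-field search.
float4 relief_fragment(
   float2 st       : TEXCOORD0,   // texture coordinates (s, t) of fragment f
   float3 view_ts  : TEXCOORD1,   // viewing direction in f's tangent space (Step 1)
   float3 light_ts : TEXCOORD2,   // direction from the surface toward the light, in tangent space
   uniform sampler2D depth_map,
   uniform sampler2D normal_map,
   uniform sampler2D color_map,
   uniform float depth_scale) : COLOR
{
   // Step 2: intersect the tangent-space viewing ray with the depth map.
   // Returns (k, l, d): the texture coordinates and depth of P.
   float3 p = find_intersection(depth_map, st, view_ts, depth_scale);

   // Step 3: shade f using the normal (and color) stored at (k, l).
   float3 n = normalize(tex2D(normal_map, p.xy).xyz * 2.0 - 1.0);
   float3 color = tex2D(color_map, p.xy).rgb * saturate(dot(n, normalize(light_ts)));

   // Self-shadowing (omitted): run the same search along the light ray toward P.
   // Occlusion (omitted): write P's camera-space z, after projection and
   // division by w, to the depth output of the fragment program.

   return float4(color, 1.0);
}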
In practice, finding the intersection point P can be performed entirely in 2D texture
space. Thus, let (u, v) be the 2D texture coordinates corresponding to the point where
the viewing ray reaches depth = 1.0 (Figure 18-3). We compute (u, v) based on (s, t),
on the transformed viewing direction, and on the scaling factor applied to the depth
map. We then perform the search for P by sampling the depth map, stepping from (s, t)
to (u, v), and checking whether the viewing ray has pierced the relief (that is, whether the
depth along the viewing ray is greater than the stored depth) before reaching (u, v). Once
we have found a point where the viewing ray is under the relief, the intersection P is
refined using a binary search.
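
A minimal Cg-style sketch of this linear search follows; the function name, the step count, and the single-channel depth texture are our assumptions, and the binary refinement it feeds is sketched at the end of this section.

// Sketch of the linear search in 2D texture space. Assumes view_ts points
// into the surface (view_ts.z > 0) and that depths lie in [0, 1].
float3 linear_search(sampler2D depth_map, float2 st, float3 view_ts, float depth_scale)
{
   const int LINEAR_STEPS = 15;

   // Ray from (s, t, 0) to (u, v, 1): dividing by the z component makes one
   // unit of the ray parameter span the whole depth range; xy is scaled by
   // the factor applied to the depth map.
   float3 dir = view_ts / view_ts.z;
   dir.xy *= depth_scale;

   float3 delta = dir / LINEAR_STEPS;
   float3 pos = float3(st, 0.0);

   // Step from (s, t) toward (u, v); stop advancing at the first sample
   // whose stored depth is smaller than the depth along the ray, that is,
   // where the ray has pierced the relief.
   for (int i = 0; i < LINEAR_STEPS; i++)
   {
      if (pos.z < tex2D(depth_map, pos.xy).x)
         pos += delta;
   }
   return pos;   // first sample under the relief, used as input to the binary refinement
}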
Although the binary search quickly converges to the intersection point and takes advantage
of texture filtering, it cannot be used at the beginning of the search process because
it may miss large structures. This situation is depicted in Figure 18-4a, where
the depth value stored at the texture coordinates halfway between (s, t) and (u, v) is greater
than the depth value along the viewing ray at point 1, even though the ray has already
pierced the surface. In this case, the binary search would incorrectly converge to point
Q . To minimize such aliasing artifacts, Policarpo et al. (2005) used a linear search to
restrict the binary search space. This is illustrated in Figure 18-4b, where the use of small
steps leads to finding point 3 under the surface. Subsequently, points 2 and 3 are used as
input to find the desired intersection using a binary search refinement. The linear search
itself, however, is also prone to aliasing in the presence of thin structures, as can be seen
in Figure 18-1a. This has motivated some researchers to propose the use of additional
preprocessed data to avoid missing such thin structures (Donnelly 2005, Dummer 2006,
Baboud and Décoret 2006). The technique described in this chapter was inspired by the
cone step mapping work of Dummer, which is briefly described next.
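
For completeness, the binary search refinement between points 2 and 3 might look like the following sketch, which repeatedly halves the last linear-search step; as before, the names and the iteration count are illustrative assumptions.

// Sketch of the binary refinement. pos is the first sample found under the
// relief (point 3) and delta is the linear-search step that produced it, so
// the surface crossing lies in the interval [pos - delta, pos].
float3 binary_search(sampler2D depth_map, float3 pos, float3 delta)
{
   const int BINARY_STEPS = 6;
   for (int i = 0; i < BINARY_STEPS; i++)
   {
      delta *= 0.5;
      if (pos.z < tex2D(depth_map, pos.xy).x)
         pos += delta;   // still above the relief: move forward
      else
         pos -= delta;   // under the relief: move backward
   }
   return pos;           // refined texture coordinates and depth of P
}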