## Approximate Ambient Occlusion for Dynamic Scenes

- By shailen
- 25 September, 2013

- Occlusion estimation for dynamic meshes in real time
- Visibility from the surface of the mesh is calculated on the GPU and used to determine occlusion values
- Quick estimation of occlusion for fully dynamic meshes, combined with high-quality ambient occlusion results, yields a novel algorithm
- Advantage is taken of the correlation between consecutive frames of an animation in the spatial relationships of meshes: the calculations made at each frame to determine occlusion are reduced by reusing values calculated in the previous frame
- Current work (in progress):
    - Implementation of an intelligent update scheme to account for scene correlation between frames
    - Reduction in the number of visibility queries, and hence pre-processing time
    - Current emphasis is on an LOD-based scheme to reduce the number of occlusion queries issued for each frame
- More results will be posted here soon

**Dynamic Ambient Occlusion and Indirect Lighting**

The technique developed in this project extends one introduced by Bunnell. The relevant parts of that technique are described here, and their shortcomings are highlighted at the end of this section; these shortcomings motivate the development of our new dynamic ambient occlusion technique.

As a preprocessing step, a representation of the model is computed in which a surfel is placed at each vertex. A surfel is a surface element with a direction and an area. The direction is that of the averaged vertex normal, and the area is defined as one third of the sum of the areas of the triangles that share the vertex. Only the area needs to be calculated, since the mesh is assumed to already store averaged normals for each vertex.
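The per-vertex surfel area described above can be sketched as follows (a minimal CPU-side helper with hypothetical names, not code from the project):

```python
import numpy as np

def surfel_areas(vertices, triangles):
    """Per-vertex surfel area: one third of the summed areas of the
    triangles sharing each vertex."""
    areas = np.zeros(len(vertices))
    for i, j, k in triangles:
        a, b, c = vertices[i], vertices[j], vertices[k]
        # Triangle area = half the magnitude of the edge cross product.
        tri_area = 0.5 * np.linalg.norm(np.cross(b - a, c - a))
        for v in (i, j, k):
            areas[v] += tri_area / 3.0
    return areas

# Unit right triangle: area 0.5, so each vertex receives 0.5 / 3.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(surfel_areas(verts, [(0, 1, 2)]))  # each entry ≈ 0.1667
```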

For a pair of elements — an emitter disc shadowing a receiver — occlusion is estimated using Bunnell's element-to-element form-factor approximation (see the bibliography).
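As a rough sketch of that per-element shadow term (the exact constants follow my reading of Bunnell's GPU Gems 2 chapter and should be checked against it; function and parameter names are hypothetical):

```python
import math

def element_occlusion(emitter_area, d, cos_theta_e, cos_theta_r):
    """Approximate occlusion cast on a receiver by an emitter disc of area
    `emitter_area` at distance `d`. cos_theta_e / cos_theta_r are cosines of
    the angles between the connecting vector and the emitter / receiver
    normals. Constants are indicative, per Bunnell's disc approximation."""
    # Solid-angle-like falloff with distance.
    solid = 1.0 - 1.0 / math.sqrt(emitter_area / (d * d) + 1.0)
    # Clamp the orientation terms to [0, 1].
    return (solid
            * max(0.0, min(1.0, cos_theta_e))
            * max(0.0, min(1.0, 4.0 * cos_theta_r)))
```

The value falls off with distance and vanishes when either element faces away, which is the qualitative behaviour the technique relies on.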

To animate efficiently, all position and normal data are stored in texture maps. Fragment programs perform all ambient occlusion calculations, as well as all transformations from object space to eye space.

Each element is assigned a role: emitter or receiver. Receivers are shadowed; emitters cast shadows. A fragment program calculates element-to-element occlusion.

Due to the nature of the occlusion calculation involved, a multipass algorithm is required to factor out occlusion caused by occluders that are themselves in shadow and should therefore cause less occlusion. This is wasteful: an algorithm that avoids such calculations in the initial step could save this work.
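The multipass correction can be illustrated as follows (a minimal CPU-side sketch with hypothetical names; the actual method runs in fragment programs over element textures). A second pass recomputes each receiver's occlusion with every emitter's contribution scaled down by how shadowed that emitter was in the first pass:

```python
def occlusion_pass(n, form_factor, emitter_weight):
    """One pass: a receiver's occlusion is the clamped sum of emitter
    contributions, each scaled by the emitter's weight."""
    return [
        min(1.0, sum(form_factor(r, e) * emitter_weight[e]
                     for e in range(n) if e != r))
        for r in range(n)
    ]

def two_pass_occlusion(n, form_factor):
    # Pass 1: every emitter treated as fully lit.
    first = occlusion_pass(n, form_factor, [1.0] * n)
    # Pass 2: emitters that are themselves shadowed cast weaker shadows.
    return occlusion_pass(n, form_factor, [1.0 - o for o in first])
```

With a constant form factor of 0.3 between any pair of three elements, the first pass gives each element occlusion 0.6, and the second pass reduces it to 0.24 because every occluder is itself partly shadowed.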

To improve performance, a level-of-detail hierarchy is used to store surfels; surfels at a finer level of detail are accessed only if the interacting elements are within some distance bound of each other.
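The distance-bounded traversal of that hierarchy can be sketched as follows (hypothetical names and node layout; a sketch of the idea, not the project's implementation):

```python
import math

def gather_occlusion(receiver_pos, node, form_factor, dist_bound=1.0):
    """Traverse the surfel LOD tree: use a coarse node's aggregate surfel
    unless the receiver is within dist_bound of it, in which case descend
    to its children for a more accurate estimate."""
    d = math.dist(receiver_pos, node["center"])
    if node["children"] and d < dist_bound:
        return sum(gather_occlusion(receiver_pos, c, form_factor, dist_bound)
                   for c in node["children"])
    return form_factor(receiver_pos, node)
```

Distant receivers touch only one coarse node, while nearby receivers pay for the finer surfels — which is where the accuracy actually matters.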

**The Technique (Approximate Ambient Occlusion for Dynamic Scenes)**

The proposed technique hinges on using the GPU to determine visibility at each vertex location, which is then used to determine occlusion.

I. Preprocess (performed once per run of the application)

1. Determine the LOD hierarchy
2. Determine the children index arrays for each element
3. Assign each vertex to a cell in the 3D grid
4. Determine the neighbourhood of each vertex in the grid

II. For each surfel, perform steps 1-4

1. Set up the rendering context at the surface element
2. Point the camera along the average vertex normal
3. Determine the neighbourhood at the vertex
4. If |set difference of the two sets of neighbours| < 2, reuse the occlusion value; otherwise perform steps 5-7
5. Render the scene at a coarse level
6. Determine what was drawn and draw all children of the same
7. Determine the samples passed, which are used to approximate occlusion

III. Render the final scene, using the occlusion values calculated in II to modulate lighting
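The frame-to-frame reuse test in step II can be sketched as follows (hypothetical helper names; the cell size is an assumption, while the reuse threshold of 2 comes from the listing above):

```python
def grid_cell(p, cell_size=1.0):
    """Map a 3D point to its integer grid cell."""
    return tuple(int(c // cell_size) for c in p)

def neighbourhood(idx, positions, cell_size=1.0):
    """Set of vertices whose grid cells touch this vertex's cell
    (Chebyshev distance <= 1 in cell coordinates)."""
    cx, cy, cz = grid_cell(positions[idx], cell_size)
    near = set()
    for j, q in enumerate(positions):
        qx, qy, qz = grid_cell(q, cell_size)
        if max(abs(qx - cx), abs(qy - cy), abs(qz - cz)) <= 1 and j != idx:
            near.add(j)
    return near

def can_reuse(prev_nbrs, curr_nbrs, threshold=2):
    """Reuse last frame's occlusion value if the neighbourhood barely
    changed (small symmetric difference between the two sets)."""
    return len(prev_nbrs ^ curr_nbrs) < threshold
```

When the neighbourhood is stable, the expensive render-and-count visibility queries (steps 5-7) are skipped entirely for that vertex.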

**Improvements**

**Light-direction-based occlusion**

The act of computing occlusion values by shooting rendering queries from a vertex can be put to better use than just computing the occlusion at that vertex. Instead of storing only the number of samples passed for a rendering query, a grid of such values is stored for each vertex.

This is a proposed method to factor the light direction into the occlusion displayed at a vertex. The implementation is in progress.

Once we have this grid of values for each vertex, while rendering the final scene we weight the occlusion values stored in each grid cell of a particular vertex according to the direction of the light.

Thus, for a given light position, while rendering from the final camera we weight the occlusion value stored in each cell location by the inverse of its distance from the light, i.e. the length of the light vector. This calculation is performed in a fragment program for efficiency reasons.
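A minimal CPU-side sketch of this inverse-distance weighting (hypothetical names; the post performs the equivalent in a fragment program):

```python
import math

def weighted_occlusion(cell_centers, cell_occlusion, light_pos):
    """Blend a vertex's per-cell occlusion values, weighting each cell
    by the inverse of its distance to the light position."""
    weights = []
    for c in cell_centers:
        d = math.dist(c, light_pos)
        weights.append(1.0 / max(d, 1e-6))  # guard against division by zero
    total = sum(weights)
    return sum(w * o for w, o in zip(weights, cell_occlusion)) / total
```

Cells lying toward the light dominate the blend, so the displayed occlusion shifts with the light direction rather than staying view- and light-independent.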

The occlusion algorithm now becomes:

I. Preprocess (performed once per run of the application)

1. Determine the LOD hierarchy
2. Determine the children index arrays for each element
3. Assign each vertex to a cell in the 3D grid
4. Determine the neighbourhood of each vertex in the grid

II. For each surfel, perform steps 1-4

1. Set up the rendering context at the surface element
2. Point the camera along the average vertex normal
3. Determine the neighbourhood at the vertex
4. If |set difference of the two sets of neighbours| < 2, reuse the occlusion value; otherwise perform steps 5-7
5. Render the scene at a coarse level
6. Determine what was drawn and draw all children of the same
7. Determine the samples passed, which are used to approximate occlusion for each grid cell

III. Render the final scene, using the occlusion values in the grid cells, weighted by the inverse light distances calculated in II, to modulate lighting

**Bibliography**

- Bunnell, M. (2005). Dynamic Ambient Occlusion and Indirect Lighting. In *GPU Gems 2*.
- Kirk, A. G., & Arikan, O. (n.d.). Real-Time Ambient Occlusion for Dynamic Character Skins.
- Sloan, P.-P., Kautz, J., & Snyder, J. (2002). Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low-Frequency Lighting Environments. *SIGGRAPH 2002*.