Ambient Occlusion (AO) is a vital factor that contributes to global illumination. To see why AO is important, take a look at the following two images (from the GPU Gems book).
Diffuse only
The right leg of the dinosaur is lit with the same amount of light as the left leg, which is unlikely to happen in the real world. In reality, light comes from almost every direction, so it is very unlikely that the under-parts of the dinosaur (e.g. the chest and the underside of the tail) would be completely dark, yet that is exactly what plain diffuse shading produces.

The following picture is shaded with ambient occlusion, and you can notice a dramatic improvement (note the soft shadows).
Dinosaur shaded with Ambient Occlusion

To shade a point using AO, we need to compute the reflected radiance at that point.
Reflected radiance:

$$L_o(p, \omega_o) = \int_{\Omega} f(p, \omega_i, \omega_o)\, L_i(p, \omega_i)\, \cos\theta_i \, d\omega_i$$

where $f$ is the BRDF of the material and $L_i$ is the incident radiance. $L_i$ is estimated by shooting shadow rays from the point being shaded in directions distributed over the hemisphere above it. I use Kevin Suffern's skeleton renderer; the original code is commented out in my version below.
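Assuming (as Suffern's framework does, if I read it correctly) that the ambient light contributes a constant radiance $l_s\,c$ from every unoccluded direction, and that the sample directions $\omega_j$ are drawn from a cosine-weighted pdf $p(\omega) = \cos\theta / \pi$, the Monte Carlo estimate of this integral collapses to a simple fraction. Writing $V(\omega_j) = 1$ when the shadow ray in direction $\omega_j$ is unoccluded and 0 otherwise:

$$L_o \approx \frac{1}{N}\sum_{j=1}^{N} \frac{f\, L_i(\omega_j)\,\cos\theta_j}{p(\omega_j)} = \pi f \; l_s\, c \;\, \frac{1}{N}\sum_{j=1}^{N} V(\omega_j)$$

The constant $\pi f$ is folded into the material's ambient reflectance, so the light itself only has to return $l_s\,c$ scaled by the fraction of unoccluded rays; that fraction is exactly the accessibility value computed in the code below.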

My modified code:

RGBColor AmbientOccluder::L(ShadeRec& sr) {
	// build an orthonormal basis (u, v, w) around the surface normal
	w = sr.normal;
	// jitter the up vector in case the normal is vertical
	v = w ^ Vector3D(0.0072, 1.0, 0.0034);
	v.normalize();
	u = v ^ w;

	/* original single-ray version:
	Ray shadow_ray;
	shadow_ray.o = sr.hit_point;
	shadow_ray.d = get_direction(sr);

	if (!in_shadow(shadow_ray, sr))
		return (ls * color) * min_amount;
	else
		return (black);
	*/

	// shoot num_ray_shots shadow rays over the hemisphere and count
	// how many of them escape without hitting any object
	Ray shadow_ray;
	int num_unoccluded = 0;
	for (int i = 0; i < num_ray_shots; ++i) {
		shadow_ray.o = sr.hit_point;
		shadow_ray.d = get_direction(sr);

		if (!in_shadow(shadow_ray, sr))
			num_unoccluded++;
	}
	// the 1.0 factor forces floating-point division (see note below)
	float accessibility = 1.0 * num_unoccluded / num_ray_shots;
	return (ls * color) * min_amount * accessibility;
}
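For context, get_direction(sr) is part of Suffern's AmbientOccluder. From memory it looks roughly like the sketch below: it takes the next (cosine-weighted) hemisphere sample from the sampler and rotates it into the (u, v, w) frame built above. Treat the exact names (sampler_ptr, sample_hemisphere) as assumptions about the framework rather than the definitive code.

Vector3D AmbientOccluder::get_direction(ShadeRec& sr) {
	// next hemisphere sample, expressed in local coordinates
	Point3D sp = sampler_ptr->sample_hemisphere();
	// rotate it into world space using the basis built around the normal
	return (sp.x * u + sp.y * v + sp.z * w);
}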

You need the multiplication by 1.0 in the accessibility line to promote the division to floating point: num_unoccluded / num_ray_shots is otherwise evaluated as an integer division, so accessibility easily ends up as 0, which results in the following image:

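A tiny illustration of the pitfall, with made-up numbers:

	int num_unoccluded = 5;
	int num_ray_shots  = 16;
	float wrong = num_unoccluded / num_ray_shots;        // integer division: 0
	float right = 1.0 * num_unoccluded / num_ray_shots;  // promoted first: 0.3125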
I used Multijittered sampling to generate the directions over the hemisphere (the mapping from unit-square samples to hemisphere directions is sketched after the image below). Say we create 256 samples for the hemisphere; we don't need to fire rays in all 256 directions, however. The following image was rendered with 256 pixel samples and 256 AO samples, but with only 1 shadow ray shot per hit point.
256_256_1_1min
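As an aside on where those hemisphere directions come from: the multijittered samples live in the unit square and are mapped onto the hemisphere with a density proportional to cos^e(theta), where e = 1 gives the cosine-weighted distribution used here. A rough sketch of that mapping, in the spirit of Suffern's map_samples_to_hemisphere (Point2D, Point3D and PI follow the framework's conventions; take the names as assumptions):

Point3D map_to_hemisphere(const Point2D& s, const float e) {
	// map a 2D sample (x, y) in [0,1)^2 to a hemisphere direction
	float cos_phi   = cos(2.0 * PI * s.x);
	float sin_phi   = sin(2.0 * PI * s.x);
	float cos_theta = pow(1.0 - s.y, 1.0 / (e + 1.0));
	float sin_theta = sqrt(1.0 - cos_theta * cos_theta);
	return Point3D(sin_theta * cos_phi, sin_theta * sin_phi, cos_theta);
}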

Even though we shoot only 1 AO ray per hit point, the pixel samples can still do the trick. But since pixel samples are expensive, we will try to decrease them; fortunately, 16 samples per pixel eliminates almost all of the jaggy artifacts, so we will use 16 samples per pixel from now on.

The following two images are rendered with 16/16/16 and 16/256/16 (pixel samples / AO samples / number of ray shots) respectively; the render times are 45 and 46 seconds.

You can see that the first image (16/16/16) is noisier due to the small number of AO samples, whereas the second image (16/256/16) is less noisy while the render time stays about the same, since both configurations still shoot only 16 rays per hit point.

The next image, rendered with (16/256/64), is almost noise-free; the render time is 2:50 minutes, which is about the same as (256/256/1), where noise is still very visible.

Conclusion: we should rely on ray sampling at the hit point as much as possible and use pixel sampling only for anti-aliasing, e.g. jaggy edges. However, shooting rays at the hit-point level can sometimes be much more expensive. Consider the path tracing algorithm, where extra rays have to be shot every time a ray hits a diffuse surface; the number of rays grows exponentially with depth. In that case, pixel sampling is the preferred solution.
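To put a rough number on that "exponentially": if every diffuse hit spawns $N$ secondary rays and paths are traced to depth $d$, a single primary ray costs on the order of

$$1 + N + N^2 + \dots + N^d = \frac{N^{d+1} - 1}{N - 1} \approx N^d \quad \text{rays,}$$

so even $N = 16$ and $d = 5$ already gives about a million rays per primary ray. Shooting a single ray per bounce keeps the cost linear in the depth, and the variance is then reduced by taking more pixel samples instead.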
