Category: rendering

K-d Tree Build

The first version of the K-d tree I wrote was a trivial median-split build, and it ran quite slowly. Then I found a technique called the Surface Area Heuristic (SAH). The method works under the assumption that the cost of intersecting a ray with a voxel is proportional to the voxel's surface area and the number of triangles inside it. To find the split plane, it considers the bounding faces of every triangle in the scene as candidates. A benefit of this approach is that axis-aligned (planar) triangles are preferred, making the algorithm well suited to scenes with many flat, parallel surfaces.
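As a minimal sketch of the cost model (the function names and the constant values here are my own illustrations, not taken from the paper), the SAH scores a candidate split of a voxel into two children by weighting each child's triangle count by its surface-area ratio:

```cpp
#include <cassert>
#include <cmath>

// Surface area of an axis-aligned box with extents (dx, dy, dz).
double surface_area(double dx, double dy, double dz) {
    return 2.0 * (dx * dy + dy * dz + dz * dx);
}

// SAH cost of a candidate split: a fixed traversal cost plus the expected
// intersection cost. Each child's triangle count is weighted by the
// probability (surface-area ratio) that a ray passing through the parent
// voxel also passes through that child. k_traverse and k_intersect are
// tunable constants; the defaults here are illustrative only.
double sah_cost(double sa_left, double sa_right, double sa_parent,
                int n_left, int n_right,
                double k_traverse = 1.0, double k_intersect = 1.5) {
    double p_left  = sa_left  / sa_parent;
    double p_right = sa_right / sa_parent;
    return k_traverse + k_intersect * (p_left * n_left + p_right * n_right);
}
```

The build then evaluates this cost at each candidate plane and keeps the cheapest; splitting stops when the best split costs more than intersecting all triangles in the leaf.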

So I spent most of the last two weeks reading about and implementing that build. Although the build is now O(N log N), the code is still very poorly optimized.

The code is made available at:

My implementation is based on this paper:
On building fast kd-Trees for Ray Tracing, and on doing that in O(NlogN)

The SAH is described by Vlastimil Havran in his dissertation, which is considered a bible for K-d trees:
Heuristic Ray Shooting Algorithms

It took me almost 3 days to understand the method, because I’m rather new to this acceleration scheme. I think I will write down some of my thoughts on this wonderful paper when I have time.


A point light usually mimics the illuminance of a light bulb, in which light is emitted in all directions from the source. As the name suggests, computer graphics often treats the light source as a single point in space, and as a result the shadow generated by a point light is always hard-edged. But sometimes we want a point light to give a soft-edged shadow, like an area light does. This article explains a trick to implement a basic point light that gives soft-edged shadows.

Here is how the render process happens:
1. A primary ray is shot from the camera through each pixel's coordinate on the view plane.
2. The renderer computes the intersection point of the ray with the objects in the scene and returns a ShadeRec (shading record), which stores the information of the hit point (location, normal, material pointer, ...).
3. From the hit point, a shadow ray is shot to each light in the scene to determine whether that point is in shadow. If not, the incoming radiance from that light is computed and added to the final color.
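The steps above can be sketched in miniature. The types and names below (Vec3, hit_sphere, and so on) are simplified stand-ins I made up for illustration, not the renderer's actual API; a sphere stands in for the scene geometry:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };
Vec3   sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Step 2 in miniature: intersect a ray o + t*d with a sphere and return
// the nearest positive hit distance t, or -1.0 on a miss (solves the
// quadratic |o + t*d - center|^2 = radius^2).
double hit_sphere(Vec3 o, Vec3 d, Vec3 center, double radius) {
    Vec3 oc = sub(o, center);
    double a = dot(d, d);
    double b = 2.0 * dot(oc, d);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return -1.0;               // ray misses the sphere
    double t = (-b - std::sqrt(disc)) / (2.0 * a);
    return t > 0.0 ? t : -1.0;
}

// Step 3 in miniature: a point is in shadow if the shadow ray toward the
// light hits an occluder closer than the light itself.
bool in_shadow(Vec3 hit, Vec3 to_light_dir, double dist_to_light,
               Vec3 occluder_center, double occluder_radius) {
    double t = hit_sphere(hit, to_light_dir, occluder_center, occluder_radius);
    return t > 0.0 && t < dist_to_light;
}
```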

Let’s start the implementation by first looking at the Point light in Maya
Maya's Pointlight
The most important attributes are intensity, color, and shadow; every kind of light must have these three attributes. Under shadow, look for "raytrace shadow", expand it, and you will notice three sub-attributes: shadow radius, number of shadow rays, and shadow depth. In this tutorial, I will implement the two attributes that are (in my opinion) most important: shadow radius and the number of shadow rays. Shadow depth is less important and will default to 1.

First, we need to define the Light class:

class Light {
public:
	virtual Vector3D	get_direction (const ShadeRec& sr) = 0;
	virtual RGBColor	L (const ShadeRec& sr);
	virtual bool		in_shadow (const Ray& ray, const ShadeRec& sr) const;	// shadow test

	void set_num_shadow_rays (int num_sr);
	int  get_num_shadow_rays () const;

	bool isArea;		// true if this light is an area light
	int  num_shadow_rays;	// number of shadow rays being sampled
};

And the PointLight class, which is a subclass of Light:

class PointLight : public Light {
public:
	void set_ls (float _ls);
	void set_shadow_radius (float _sr);
	virtual Vector3D get_direction (const ShadeRec& sr);
	virtual RGBColor L (const ShadeRec& sr);
	virtual bool in_shadow (const Ray& ray, const ShadeRec& sr) const;

	float ls, shadow_radius;
	RGBColor color;
	Vector3D source;	// the light's position
};

The method that contributes most to the shadow generated by a point light is get_direction ():

Vector3D PointLight::get_direction (const ShadeRec& sr) {
	return ((sr.hit_point - source).hat());	// normalize the vector
}

By default, this implementation generates a hard-edged shadow, because the light source is fixed. To fix this, we just need to jitter the source's location. The shadow_radius attribute controls the spread of the penumbra and is the same as the "light radius" attribute in Maya:

Vector3D PointLight::get_direction (const ShadeRec& sr) {
	Point3D sample_source;

	// jitter each coordinate by up to shadow_radius;
	// pow (-1, rand () % 2) picks a random sign
	sample_source.x = source.x + pow (-1, (float)(rand () % 2)) * shadow_radius * rand_float ();
	sample_source.y = source.y + pow (-1, (float)(rand () % 2)) * shadow_radius * rand_float ();
	sample_source.z = source.z + pow (-1, (float)(rand () % 2)) * shadow_radius * rand_float ();

	Vector3D temp = sr.hit_point - sample_source;
	return (temp.hat());
}

Now, when you jitter the light source's location, a point near the edge of the old shadow may reach the light without hitting any object, and is therefore not in shadow. This gives us the image below. Look at the edge: there is noise, indicating that many shadow rays did not hit anything because of the varying light source position. To eliminate the noise, we just need to increase the number of samples per pixel. The image below is rendered with 1024 samples per pixel.

Pointlight 1024samples

By default, we use Monte Carlo integration to estimate the color at each pixel. This is done by sampling rays and averaging the color returned by each ray. However, reducing the noise to an acceptable level this way can be costly. Instead of firing many rays per pixel from the camera, we can also shoot multiple shadow rays at each hit point, which gives the same result with less computation, since the extra ray shooting is pushed inside the pixel loop. The "Shadow Rays" attribute in Maya does exactly this.
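As a generic illustration of the Monte Carlo idea (my own example, not the renderer's code), averaging random evaluations of an integrand is an unbiased estimate of its integral, and the noise shrinks as 1/sqrt(N), which is why halving the noise costs four times the samples, whether they are pixel samples or shadow rays:

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>

// Monte Carlo estimate of the integral of f over [0, 1]: the average of f
// at N uniformly random points converges to the true integral, with
// standard error proportional to 1/sqrt(N).
double mc_integrate(double (*f)(double), int num_samples) {
    double sum = 0.0;
    for (int i = 0; i < num_samples; ++i) {
        double x = std::rand() / (RAND_MAX + 1.0);   // uniform in [0, 1)
        sum += f(x);
    }
    return sum / num_samples;
}

double square(double x) { return x * x; }   // integral over [0,1] is 1/3
```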

This image below is rendered with 16 pixel samples and 16 shadow rays.

16 pixel samples, 16 shadow rays

Ambient occlusion is a vital factor contributing to global illumination. To see why AO is important, take a look at the following two images (from the GPU Gems book).
Diffuse only
The right leg of the dinosaur is lit with the same amount of light as the left leg, which is unlikely to happen in the real world. In the real world, light comes from almost everywhere, and because of this it is very unlikely that the under-parts of the dinosaur (e.g. the chest and the underside of the tail) would be completely dark, as they are with diffuse shading alone.

The following picture is shaded with ambient occlusion, and you can notice a dramatic improvement (note the soft shadows).
Dinosaur shaded with ambient occlusion

To shade a point using AO, we need to compute the reflected radiance at that point:

L_o(p, w_o) = ∫ f(p, w_i, w_o) L_i(p, w_i) cos(theta_i) dw_i

where f is the BRDF of the material and L_i is the incident radiance; the integral is taken over the hemisphere above the point being shaded. L_i is computed by shooting shadow rays in many directions over that hemisphere. I use Kevin Suffern's skeleton renderer. The original code is commented out.

My modified code:

RGBColor AmbientOccluder::L (ShadeRec& sr) {
	// build an orthonormal basis (u, v, w) around the normal
	w = sr.normal;
	// jitter the up vector in case the normal is vertical
	v = w ^ Vector3D (0.0072, 1.0, 0.0034);
	v.normalize ();
	u = v ^ w;

	// original single-ray version:
	// Ray shadow_ray;
	// shadow_ray.o = sr.hit_point;
	// shadow_ray.d = get_direction (sr);
	// if (!in_shadow (shadow_ray, sr))
	//	return ((ls * color) * min_amount);
	// else
	//	return (black);

	// modified version: average several shadow rays over the hemisphere
	Ray shadow_ray;
	int num_unoccluded = 0;
	for (int i = 0; i < num_ray_shots; ++i) {
		shadow_ray.o = sr.hit_point;
		shadow_ray.d = get_direction (sr);
		if (!in_shadow (shadow_ray, sr))
			num_unoccluded++;
	}
	float accessibility = 1.0 * num_unoccluded / num_ray_shots;
	return ((ls * color) * min_amount * accessibility);
}

You need the multiplication by 1.0 in the accessibility line to promote the right-hand side to floating point; otherwise the integer division makes accessibility 0 almost everywhere, which results in the following image:
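The pitfall is plain C++ integer division, which can be demonstrated in isolation (my own minimal example, not the renderer's code):

```cpp
#include <cassert>

// With two int operands, C++ performs integer division and truncates:
// unoccluded / total is 0 whenever unoccluded < total, so every partially
// occluded point would come out black.
double accessibility_wrong(int unoccluded, int total) {
    return unoccluded / total;          // integer division, then converted
}

// Multiplying by 1.0 (or casting one operand) promotes the division to
// floating point first, giving the intended fraction.
double accessibility_right(int unoccluded, int total) {
    return 1.0 * unoccluded / total;    // floating-point division
}
```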

I used multijittered sampling to sample the directions over the hemisphere. Say we create 256 samples for the hemisphere; we don't need to fire rays in all 256 directions. The following image was rendered with 256 pixel samples and 256 AO samples, but shooting only 1 ray per hit point.
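To show the stratification idea, here is a plain jittered (stratified) sampler on the unit square. This is simpler than full multijittered sampling, which additionally stratifies the row and column projections, but it illustrates the same principle: one sample per grid cell means the samples cannot all clump in one region, which lowers variance versus purely random sampling. The names here are my own:

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

struct Sample2D { double x, y; };

// Jittered sampling: split [0,1)^2 into an n x n grid and place one
// uniformly random sample inside each cell. Sample (row, col) lands in
// [col/n, (col+1)/n) x [row/n, (row+1)/n).
std::vector<Sample2D> jittered_samples(int n) {
    std::vector<Sample2D> samples;
    samples.reserve(n * n);
    for (int row = 0; row < n; ++row)
        for (int col = 0; col < n; ++col) {
            double jx = std::rand() / (RAND_MAX + 1.0);  // jitter in [0, 1)
            double jy = std::rand() / (RAND_MAX + 1.0);
            samples.push_back({(col + jx) / n, (row + jy) / n});
        }
    return samples;
}
```

These 2D samples would then be mapped onto the hemisphere to get shadow-ray directions.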

Although we shoot only 1 ray per hit point, the pixel samples can still do the trick. But since pixel sampling is expensive, we will try to decrease it; fortunately, 16 samples per pixel eliminates almost all of the jaggy artifacts, so we will use 16 samples per pixel from now on.

The following two images are rendered with 16/16/16 and 16/256/16 (pixel samples / AO samples / number of ray shots) respectively; the render times are 45 and 46 seconds respectively.

You can see that the first image (16/16/16) is noisier due to the small number of AO samples, whereas the second image (16/256/16) is less noisy and the render time is just about the same.

The next image, rendered with 16/256/64, is almost noise-free; the render time is 2 minutes 50 seconds, which is about the same as 256/256/1, in which noise is very visible.

Conclusion: we should use ray sampling as much as possible and rely on pixel sampling only for anti-aliasing, e.g. jagged edges. However, shooting rays at the hit-point level is sometimes much more expensive. Consider the path tracing algorithm, where you need to shoot extra rays every time a ray hits a diffuse surface: the number of rays grows exponentially. In that case, pixel sampling is the preferred solution.
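The exponential growth is easy to quantify (the branching factor and depth below are illustrative numbers, not measurements from my renderer):

```cpp
#include <cassert>

// Total secondary rays spawned by one primary ray if every diffuse bounce
// spawns `branch` new rays, up to `depth` bounces:
// branch^1 + branch^2 + ... + branch^depth.
// With branch = 16 and depth = 3, a single primary ray already costs
// 16 + 256 + 4096 = 4368 secondary rays, while branching once per bounce
// (branch = 1) costs only `depth` rays -- which is why path tracers shoot
// one ray per bounce and rely on pixel sampling for the averaging.
long long rays_per_primary(int branch, int depth) {
    long long total = 0, level = 1;
    for (int d = 0; d < depth; ++d) {
        level *= branch;
        total += level;
    }
    return total;
}
```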