Sean West
Geometric Construction and Lighting
The scene interpreter I have built is a modified version of the one presented in my final project progress presentation. Like the previous scene interpreter, it allows one to create scene files (with the extension .test) that build a scene much as one builds a scene in OpenGL. The modifications I have made allow the scene writer to use additional features, which are discussed below.
All objects are stored in camera coordinates, and the camera is placed at the origin facing the -z direction, with up being +y and right being +x, just as in OpenGL. Each object is also passed the current matrix on the matrix stack at the moment it is built. This allows, for instance, the ellipse to transform the ray during the ray-object test so that the ray-sphere intersection code can simply be reused. Below are the pictures from the file scene1.test.
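The ray-transformation trick can be sketched as follows. This is a minimal Python illustration of the idea, not the tracer's actual code (which is not shown here); the function names are hypothetical, and `M_inv` stands for the inverse of the matrix that was on the stack when the object was built.

```python
import numpy as np

def intersect_unit_sphere(origin, direction):
    """Smallest non-negative t where origin + t*direction hits the unit
    sphere centered at the origin, or None if the ray misses."""
    a = np.dot(direction, direction)
    b = 2.0 * np.dot(origin, direction)
    c = np.dot(origin, origin) - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    t = (-b - np.sqrt(disc)) / (2.0 * a)
    if t < 0.0:
        t = (-b + np.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else None

def intersect_ellipsoid(origin, direction, M_inv):
    """Transform the ray into the object's coordinates, where the
    ellipsoid is the unit sphere, and reuse the sphere test.  Because
    the transformed direction is NOT renormalized, the returned t is
    valid in the original coordinates as well."""
    # Points transform with w = 1, directions with w = 0 (no translation).
    o = (M_inv @ np.append(origin, 1.0))[:3]
    d = (M_inv @ np.append(direction, 0.0))[:3]
    return intersect_unit_sphere(o, d)
```

Note that leaving the transformed direction unnormalized is what makes the parameter t carry back to camera coordinates unchanged.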
The pictures generated from scene1.test.
Light attenuation and the ambient properties of light were chosen to be global properties.
The ray-ellipse intersection is a simple transformation of the ray to object coordinates followed by the ray-sphere intersection. I found online lecture notes describing a 'simple' ray-triangle intersection algorithm. It was indeed simple; unfortunately, it also seems to be considerably slower than one would expect.
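For reference, one common compact ray-triangle test is the Möller-Trumbore algorithm; this is a Python sketch of that method, which may or may not be the algorithm from the lecture notes mentioned above. It solves for the barycentric coordinates (u, v) and the ray parameter t in one pass.

```python
import numpy as np

def intersect_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray-triangle intersection.
    Returns the ray parameter t, or None on a miss."""
    e1 = v1 - v0
    e2 = v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:           # ray is parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det   # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det   # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t >= eps else None
```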
On the left is a picture generated from scene2_cam1.test (here) using camera1. This has two point lights and one directional light, all producing light of color rgb(.5, .5, .5). The cube has only a diffuse component, whereas the spheres also have a specular component with shininess 200. On the right is a picture generated from scene2_cam3.test (here) using camera3. Here there are two directional lights, one shining red light and the other blue. The cube and spheres reflect all colors equally (i.e., the diffuse and specular components are all rgb(c,c,c) for some constant c). The spheres and cube share the same diffuse and specular components, with diffuse = rgb(.3,.3,.3) and specular = rgb(1,1,1).
This scene has two point lights: a magenta one placed outward from the middle of the top edge of the left face that we see, and a green one placed outward from the right face. The spheres have a brown diffuse color. On the left is the scene produced (scene file here). One can clearly see where the point lights shine brightest on the middle of two of the cube's edges. One can also see the intersection of the colored light rays, as well as the reflection of only one light's rays where the other is blocked by a shadow. On the right, attenuation of c=1, l=0.6, q=0.3 is used (scene file here). One can see how the light fades as the distance from the light increases.
I experimented with automatic full reflectivity, but that made everything look too mirror-like (of course), so what I ended up doing was allowing full mirror reflectivity to be requested by specifying a shininess of -1. This design decision was made mainly because it doesn't (in my opinion) look good for a sphere with, say, shininess 3, which gives a 'blurred' reflection of the light, to also show a dimmed exact reflection of the scene. Since one cannot get a blurred reflection of the scene on a shiny object with one-pass ray tracing, I only allowed for perfect mirrors.
On the left is a picture generated from scene3_metallic.test (here). One can see that there is no reflectivity (but it has an awesome metallic look!). On the right, the top of the table is a perfect mirror and reflectivity ensues (scene file here). Note: the lack of color in the first picture isn't due to any problem; I just thought it looked awesome.
I have added to my ray tracer the capability to perform motion blur and anti-aliasing. In addition, computation is sped up considerably by a simple, intuitive acceleration structure.
Motion blur can be accomplished by sending out multiple rays for each pixel, each assuming a different time, and averaging the results. I implemented two methods of achieving this: randomized sampling and discrete sampling. In randomized sampling, each ray sent out is assigned a randomly chosen time (within a range) that the environment assumes. In discrete sampling, the time range is uniformly divided into segments and one ray is sent out for each segment. The user can specify the method with the command motion_blur_method method, where method is 0 for randomized sampling and 1 for discrete sampling. The number of samples can be specified with the command motion_blur_samples numsamples. To give an object a direction of motion, the command velocity vx vy vz attaches a velocity component to each subsequently defined object.
These two pictures use motion blur to show the metallic spheres shooting up and the two colored ellipses traveling to the right. On the left (scene file here) one can see that randomizing the samples produces typical Monte Carlo noise (keep in mind that only 6 samples were used per pixel). If one does not increase the number of samples, the discrete version certainly looks better. On the right is the same scene rendered using discrete samples (scene file here). Looking closely, one can see each object repeated in transition; each repetition represents the object at one of the (non-randomly) chosen times. I suspect the best combination would be to split the time range into equally sized sections as in the discrete method, but then randomize the time variable within each section. This would reduce the obvious repetition of the objects (by adding a little noise to the time variable) while avoiding the heavy noise of the fully randomized version.
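The three time-sampling strategies discussed above can be sketched compactly in Python. This is an illustration only; the function names are hypothetical, and the third (stratified) method is the suggested combination, which is not a command the tracer actually supports.

```python
import random

def sample_times(t0, t1, n, method):
    """Pick n shutter times in [t0, t1].
    method 0: fully randomized  (as with motion_blur_method 0)
    method 1: discrete segment midpoints  (motion_blur_method 1)
    method 2: stratified - one jittered time per segment
              (hypothetical combination suggested above)"""
    seg = (t1 - t0) / n
    if method == 0:
        return [random.uniform(t0, t1) for _ in range(n)]
    if method == 1:
        return [t0 + (i + 0.5) * seg for i in range(n)]
    return [t0 + (i + random.random()) * seg for i in range(n)]

def position_at(p0, velocity, t):
    """Object position at time t, given the per-object velocity command."""
    return tuple(p + v * t for p, v in zip(p0, velocity))
```

Stratification keeps one sample per time segment (so the spacing of the discrete method is preserved) while the jitter breaks up the visible repetitions.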
Ray tracing has its own unique form of aliasing. For instance, if a ray just barely grazes a red sphere at a point that is poorly illuminated, it will produce a dark pixel. If there is a very bright background behind it, this pixel will stand out as too dark, since in reality the pixel's area is probably a combination of the red sphere and the bright background. Aside from this, problems encountered in typical OpenGL graphics, such as jagged edges along a diagonal line, will also occur. Anti-aliasing can be accomplished in a manner that is much more effective than simply raising the resolution of the image and then scaling it down with post-ray-tracing image processing: one simply shoots more rays per pixel and averages them together. In the example of the red sphere in front of a bright background, some of the rays will hit the sphere and some will hit the background, making the averaged pixel look more realistic.
How one distributes the rays within a pixel is another issue. The most intuitive way is to divide the pixel into a grid of equally spaced subpixels. This works fine; however, some aliasing will still show up due to the regularity of this technique. I implemented a more robust technique: as in the intuitive solution, the pixel is divided into equally spaced subpixels, but the ray's sample point is then chosen at a random location within each subpixel (rather than at its center).
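The jittered-grid scheme can be sketched as follows; this is a minimal Python illustration with a hypothetical function name, working in pixel units where pixel (px, py) covers the unit square starting at (px, py).

```python
import random

def jittered_samples(px, py, n):
    """Return n*n sample points inside pixel (px, py): the pixel is
    divided into an n-by-n grid of subpixels and one random point is
    chosen in each cell.  The grid keeps samples well spread, while the
    jitter trades regular aliasing artifacts for less objectionable noise."""
    cell = 1.0 / n
    points = []
    for i in range(n):
        for j in range(n):
            points.append((px + (i + random.random()) * cell,
                           py + (j + random.random()) * cell))
    return points
```

The final pixel color is then the average of the colors returned by the rays shot through these points.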
One can specify the number of samples per pixel by using the command AA_samples numsamples which will shoot numsamples*numsamples rays per pixel.
At the top left is a picture generated from AA_off.test. At the top right is the same picture using 4x4 samples per pixel (scene file here). One can especially see the improvement on the legs and around the edges of objects (e.g., at the edge of the red ellipse near the green ellipse). To aid in this inspection, the bottom left shows a zoom-in of the non-anti-aliased version and the bottom right a zoom-in of the anti-aliased version.
I also added an acceleration structure to significantly speed up the intersection tests. I used a bounding-box structure in which one can specify groupings of objects. For instance, if five objects are all clumped together, one can enclose them in a bounding box; then, if a ray does not intersect this box, the ray-intersection tests are not performed on the objects contained within. In theory this can reduce the number of ray-intersection tests from O(n) to O(log n) if used widely. The scene interpreter allows one to specify a bounding box with the command bt::start_container, which signifies that all objects following should be contained within a bounding box until the bt::end_container command is used. One does not need to specify the size of the bounding box (or container); the code automatically sizes the container to the minimal bounding box containing all objects within.
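The two pieces of this scheme, auto-sizing the container and the ray-box rejection test, can be sketched as follows. This is an illustrative Python sketch with hypothetical names; boxes are represented as (lo, hi) corner pairs, and the ray-box test is the standard slab method.

```python
def merge_boxes(boxes):
    """Minimal axis-aligned box enclosing the given child boxes, as would
    be computed between bt::start_container and bt::end_container."""
    lo = tuple(min(b[0][k] for b in boxes) for k in range(3))
    hi = tuple(max(b[1][k] for b in boxes) for k in range(3))
    return lo, hi

def ray_hits_box(origin, direction, box, eps=1e-12):
    """Slab test: intersect the ray with the three pairs of axis-aligned
    planes and check that the parameter intervals overlap.  If this
    returns False, every object inside the container can be skipped."""
    lo, hi = box
    tmin, tmax = -float("inf"), float("inf")
    for k in range(3):
        if abs(direction[k]) < eps:
            # Ray parallel to this slab: must start inside it.
            if origin[k] < lo[k] or origin[k] > hi[k]:
                return False
            continue
        t0 = (lo[k] - origin[k]) / direction[k]
        t1 = (hi[k] - origin[k]) / direction[k]
        if t0 > t1:
            t0, t1 = t1, t0
        tmin, tmax = max(tmin, t0), min(tmax, t1)
    return tmax >= max(tmin, 0.0)
```

Nesting containers inside containers is what yields the O(log n) behavior in the best case, since each missed box prunes a whole subtree of objects at once.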