Sunday, May 13, 2012

Renderman Assignment

Another assignment for Computer Graphics II; grab a version of Renderman (I use Pixie) and mess around in it.

I threw what I did up on my GitHub account in case I did something groundbreaking and it needs to immediately be shared with the masses.

Mission accomplished.






Thursday, May 10, 2012

Ray Tracer Checkpoint 7 - Tone Reproduction

Reinhard, Lmax = 1

Reinhard, Lmax = 1000

Reinhard, Lmax = 10000

Ward, Lmax = 1

Ward, Lmax = 1000

Ward, Lmax = 10000
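
For anyone curious, here's roughly what those two operators boil down to. This is just a sketch of the standard global formulations of Reinhard and Ward; the variable names and constants here are my own shorthand, not necessarily exactly what my ray tracer does:

    // Sketch of the two global tone reproduction operators shown above.
    // 'lum' holds the luminance of every pixel; results end up in [0, 1].
    #include <cmath>
    #include <vector>

    // Log-average (adaptation) luminance of the scene.
    double logAverage(const std::vector<double>& lum)
    {
        const double delta = 1e-6;                     // avoid log(0)
        double sum = 0.0;
        for (double L : lum)
            sum += std::log(delta + L);
        return std::exp(sum / lum.size());
    }

    // Reinhard: scale by the key value 'a', then compress with L / (1 + L).
    void reinhard(std::vector<double>& lum, double a = 0.18)
    {
        double Lwa = logAverage(lum);
        for (double& L : lum) {
            double Ls = (a / Lwa) * L;
            L = Ls / (1.0 + Ls);
        }
    }

    // Ward: one global scale factor based on the maximum display luminance Ldmax.
    void ward(std::vector<double>& lum, double Ldmax)
    {
        double Lwa = logAverage(lum);
        double sf = std::pow((1.219 + std::pow(Ldmax / 2.0, 0.4)) /
                             (1.219 + std::pow(Lwa, 0.4)), 2.5);
        for (double& L : lum)
            L = sf * L / Ldmax;
    }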

Thursday, May 3, 2012

Lighting and Shadowing: Status Update - Week 8

Recap

Basically, I wanted to create a scene with random terrain and light the scene with some different illumination models. Then I wanted to throw in dynamic shadow casting and, if time permitted, some fancier effects like soft shadows and volumetric shadows. The best way to go about all of this was with the use of shaders; specifically GLSL.

Status

According to the timeline in my previous post I expected to be further along. In all honesty, I knew I would get distracted and fall behind. Progress has actually been made, though; I'm only a week behind!
  • Random scene generation is done
  • Basic shader understanding/implementation is done
  • Phong illumination model is implemented

Terrain Generation

I began the project with the basic terrain generation algorithm I started a few months ago. It uses the simple recursive Diamond-Square Algorithm to generate a 2-dimensional height map. From there I can create a basic polygonal mesh and render the scene using OpenGL. It gets some pretty nice results and isn't too difficult to implement, but it can be a bit intensive when generating larger maps.
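
For reference, the core of the algorithm looks something like this. It's a simplified sketch (my corner seeding, random ranges, and edge handling differ a bit), but the diamond and square steps are the important part:

    // Rough sketch of Diamond-Square on a (2^n + 1) x (2^n + 1) height map.
    // The four corners would normally be seeded with random values first.
    #include <cstdlib>
    #include <vector>

    double randOffset(double scale)
    {
        return scale * (2.0 * std::rand() / RAND_MAX - 1.0);   // [-scale, scale]
    }

    std::vector<std::vector<double>> diamondSquare(int n, double roughness)
    {
        const int size = (1 << n) + 1;
        std::vector<std::vector<double>> h(size, std::vector<double>(size, 0.0));

        double scale = roughness;
        for (int step = size - 1; step > 1; step /= 2, scale *= 0.5) {
            int half = step / 2;

            // Diamond step: center of each square = average of its 4 corners.
            for (int y = 0; y < size - 1; y += step)
                for (int x = 0; x < size - 1; x += step)
                    h[y + half][x + half] = (h[y][x] + h[y][x + step] +
                                             h[y + step][x] + h[y + step][x + step]) / 4.0
                                            + randOffset(scale);

            // Square step: midpoint of each edge = average of its (up to 4) neighbors.
            for (int y = 0; y < size; y += half)
                for (int x = (y + half) % step; x < size; x += step) {
                    double sum = 0.0; int count = 0;
                    if (y >= half)       { sum += h[y - half][x]; ++count; }
                    if (y + half < size) { sum += h[y + half][x]; ++count; }
                    if (x >= half)       { sum += h[y][x - half]; ++count; }
                    if (x + half < size) { sum += h[y][x + half]; ++count; }
                    h[y][x] = sum / count + randOffset(scale);
                }
        }
        return h;
    }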

Diamond-Square Terrain
Distraction 1: Minecraftify it!

Random Planet Generation

I got distracted on a bit of a terrain generation kick and felt like taking it a step further. I cannot for the life of me find the paper or website where I got this algorithm, but it is awesome:
  • Generate a tessellated sphere (I use recursive icosahedron subdivision)
  • For an arbitrary number of iterations:
    • Generate a random plane normal
    • For every point on the sphere:
      • Determine which side of the plane the point is on
      • If in front, raise it a random amount
      • If behind, lower it a random amount
It's fast (for reasonably selected tessellation and iteration factors), simple to understand, straightforward to implement, and makes some pretty interesting-looking planets. All of these points on the sphere are then loaded into a vertex buffer object and passed over to the GPU (with an accompanying index buffer), where my shaders take over.
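
In rough C++ terms, the displacement pass looks something like this. Consider it a sketch: the vector type, the random plane generation, and the displacement amounts are simplified stand-ins for what I actually use.

    // Sketch of the random-plane displacement over the sphere's vertices.
    // 'vertices' would come from the icosahedron subdivision.
    #include <cmath>
    #include <cstdlib>
    #include <vector>

    struct Vec3 { double x, y, z; };

    double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Quick-and-dirty random direction (a proper version would sample the
    // sphere uniformly and reject near-zero vectors).
    Vec3 randomNormal()
    {
        Vec3 v = { std::rand() / (double)RAND_MAX - 0.5,
                   std::rand() / (double)RAND_MAX - 0.5,
                   std::rand() / (double)RAND_MAX - 0.5 };
        double len = std::sqrt(dot(v, v));
        return { v.x / len, v.y / len, v.z / len };
    }

    void displace(std::vector<Vec3>& vertices, int iterations, double maxOffset)
    {
        for (int i = 0; i < iterations; ++i) {
            Vec3 n = randomNormal();              // random plane through the origin
            for (Vec3& p : vertices) {
                // Which side of the plane is this vertex on?
                double side = (dot(n, p) >= 0.0) ? 1.0 : -1.0;
                // Raise or lower it along its radial direction by a random amount.
                double delta = side * maxOffset * (std::rand() / (double)RAND_MAX);
                p.x += delta * p.x;
                p.y += delta * p.y;
                p.z += delta * p.z;
            }
        }
    }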

 

Shaders

I ended up following an object-oriented approach similar to what's outlined on the Swiftless Tutorials website, but modified to make things a bit more logical for me. This made loading, compiling and linking shaders extremely easy and let me focus on actually writing the shaders.

My shaders are still a tad basic, but they do have a few noteworthy characteristics:
  • Implement all the basics of GLSL usage, such as varying, uniform, and attribute types
  • The vertex shader adjusts vertices based on a distance calculation (any points below sea-level are adjusted to preserve a more rounded planet)
  • The fragment shader determines each pixel's color based on its position, and implements the Phong illumination model (per-pixel lighting, yeah!)
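
The shader sources themselves are GLSL, but the C++ side that loads, compiles, and links them boils down to something like this. It's a minimal sketch with most of the error handling stripped out (and GLEW pulled in just for the OpenGL declarations) — not the Swiftless code or my exact class:

    // Minimal sketch: compile a vertex and fragment shader and link them
    // into a program. Real code should check the info logs on failure.
    #include <GL/glew.h>
    #include <string>

    GLuint compileShader(GLenum type, const std::string& source)
    {
        GLuint shader = glCreateShader(type);
        const char* src = source.c_str();
        glShaderSource(shader, 1, &src, nullptr);
        glCompileShader(shader);

        GLint ok = GL_FALSE;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
        return ok ? shader : 0;      // glGetShaderInfoLog explains failures
    }

    GLuint buildProgram(const std::string& vertSrc, const std::string& fragSrc)
    {
        GLuint program = glCreateProgram();
        glAttachShader(program, compileShader(GL_VERTEX_SHADER, vertSrc));
        glAttachShader(program, compileShader(GL_FRAGMENT_SHADER, fragSrc));
        glLinkProgram(program);
        return program;              // glUseProgram(program) before drawing
    }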

 

Results

Originally written in C++, all of this has been ported over to Java and runs on Android 4.0.3 (it should work on 2.2+) with OpenGL ES 2.0. I got a tablet and figured this would be a good way to break it in.
Point Cloud - 0 offsets

Point Cloud - 1 offset

Wire-frame Mesh - 1 offset
Solid - 1 offset
Solid - 100 offsets
Solid - 1000 offsets


Wednesday, May 2, 2012

Ray Tracer Checkpoint 6 - Transmission

I'll type up a better explanation/description later; currently in "the coding zone."

Results

Basic - Transmissive Surface

Wednesday, April 25, 2012

Ray Tracer Checkpoint 5 - Reflection

Before this checkpoint the ray tracer only calculated local illumination on the surface of an object; with the exception of shadows, the illumination of a point on a surface was completely independent of the rest of the scene. The purpose of this checkpoint is to change that a bit.

The Plan

The goal was to make surfaces reflective based on a coefficient of reflection that each surface has. The easiest way to do this was to have each surface's material (or texture, as of the last checkpoint) store the coefficient of reflection.

The actual reflection is then implemented using recursion. The basic algorithm is as follows:
  • Cast out a ray
  • Check for surface collision
    • If no collision, use background color (black)
    • If collision
      • Calculate the local illumination color as usual
      • Determine the reflection ray
      • Start over again, but with the reflection ray
To prevent this from looping infinitely, we break out of the recursion whenever a surface isn't reflective (coefficient of 0) or after reaching a certain level of recursion depth.
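
In (heavily simplified) code, the recursion looks something like this. The types and helper functions here are stand-ins for my actual classes, so treat it as a sketch rather than my exact trace function:

    // Sketch of the recursive reflection step. The structs and the declared
    // helpers (intersect, shade, reflect) are placeholders for my real code.
    struct Vec3     { double x, y, z; };
    struct Colour   { double r, g, b; };
    struct Material { double reflectivity; };
    struct Hit      { Vec3 point, normal; Material material; };
    struct Ray      { Vec3 origin, direction; };

    // Provided elsewhere: scene intersection, local Phong shading (with
    // shadows), and the standard reflection formula r = d - 2(d . n)n.
    struct World { bool intersect(const Ray& ray, Hit& hit) const; };
    Colour shade(const World& world, const Hit& hit);
    Vec3   reflect(const Vec3& d, const Vec3& n);

    const int MAX_DEPTH = 5;

    Colour trace(const World& world, const Ray& ray, int depth)
    {
        Hit hit;
        if (!world.intersect(ray, hit))
            return {0, 0, 0};                    // background colour (black)

        Colour c = shade(world, hit);            // local illumination + shadows

        double kr = hit.material.reflectivity;
        if (kr > 0.0 && depth < MAX_DEPTH) {
            Ray reflected{hit.point, reflect(ray.direction, hit.normal)};
            Colour rc = trace(world, reflected, depth + 1);
            c.r += kr * rc.r;
            c.g += kr * rc.g;
            c.b += kr * rc.b;
        }
        return c;
    }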

Results

Basic - Reflective surface

 

The actual algorithm is extremely straightforward, simple to implement, and looks awesome. However, I still struggled for way too long with the blue sphere being in the reflection; turns out my sphere-ray collision detection was broken.

Also, it may be difficult to see, but please note that the specular highlight no longer shows up in the shadowed areas. This, of course, is a triumph. I have to make a note here; huge success. It's hard to overstate my satisfaction.

Wednesday, April 18, 2012

Ray Tracer Checkpoint 4 - Procedural Shading

The fourth checkpoint is all about procedural shading; basically, applying a pattern or texture that is generated on the fly (as opposed to loaded in from an external image) to a surface or object.

The Plan

I decided to treat the procedural pattern generation as the generation of a "texture," and thus created a Texture class. A Texture object is created with a width and height and stores an array of unsigned chars, its length being width * height * 3 (it stores the RGB values for every "pixel" of the "texture").

When the ray is cast out from the camera and intersects with a surface, we translate that point of intersection into local coordinates (u, v) relative to some origin on the surface. The easiest case for this is a Polygon which is why it was the focus of this checkpoint. Determining relative (u, v) coordinates shouldn't be too difficult for other objects, however, and I plan on doing so in a later release.

The local (u, v) coordinates can then be used to easily determine the corresponding color from the Texture object. Coordinates lying outside the dimensions of the array can be handled in a manner similar to actual texture mapping: either repeat the image or stretch the nearest legitimate pixel (clamp to the edge).
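
To make that concrete, the lookup (with the repeat behavior) ends up looking roughly like this. It's a stripped-down sketch, not my full Texture class:

    // Stripped-down sketch of the Texture lookup. 'data' is the
    // width * height * 3 array of RGB bytes described above.
    struct Texture {
        int width, height;
        unsigned char* data;

        // Example of "procedural" generation: fill with a checkerboard.
        void fillChecker(int squares)
        {
            for (int y = 0; y < height; ++y)
                for (int x = 0; x < width; ++x) {
                    bool on = ((x * squares / width) + (y * squares / height)) % 2 == 0;
                    unsigned char c = on ? 255 : 0;
                    unsigned char* texel = data + (y * width + x) * 3;
                    texel[0] = texel[1] = texel[2] = c;
                }
        }

        // (u, v) are in texel units here; out-of-range values repeat the image.
        void sample(int u, int v, unsigned char& r, unsigned char& g, unsigned char& b) const
        {
            int x = ((u % width)  + width)  % width;
            int y = ((v % height) + height) % height;
            const unsigned char* texel = data + (y * width + x) * 3;
            r = texel[0];
            g = texel[1];
            b = texel[2];
        }
    };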

The biggest benefit of using this setup, however, is that it can easily be extended to allow for actual textures to be loaded in. All you have to do is convert a texture into an array of RGB values, which the Simple OpenGL Image Library can actually do for us.
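
For the record, that conversion with SOIL looks something like this (the filename is just an example, and error handling is omitted):

    // Load an image into the same width * height * 3 RGB layout using SOIL.
    #include "SOIL.h"

    unsigned char* loadRGB(const char* filename, int& width, int& height)
    {
        int channels = 0;
        unsigned char* pixels = SOIL_load_image(filename, &width, &height,
                                                &channels, SOIL_LOAD_RGB);
        // ... copy into a Texture object, then SOIL_free_image_data(pixels) ...
        return pixels;
    }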

Known Problems

  • Shadows are finally casting correctly (WOO!), but ambient light does not work as expected. Illuminating the scene fully with ambient light should eliminate all shadows; currently it does not.
  • Objects are still one-sided and I'm still not sure if I like this or not.
  • Loading in textures and converting to RGB values is surprisingly slow using SOIL. Need to figure out why or find a better method.
  • There's no real concept of "texture mapping" yet, so when textures are loaded in they're stretched almost unrecognizably.

Results

Basic - Procedural Shaded Polygon

 

Wednesday, April 11, 2012

Ray Tracer Checkpoint 4? - Not Good

WHELP, things are spiraling out of control. I'm still stuck somewhere in the last checkpoint; the restructuring of all my code went a lot worse than I had planned and not much of anything seems to be working.

I now realize the importance of backups.

Hopefully I'll be done by Friday. That's pretty acceptable, right? Yeah!

Wednesday, April 4, 2012

Ray Tracer Checkpoint 3 - Basic Shading

I'm restructuring everything I've done because when I implemented Phong shading everything kind of turned into an ugly blob mess. This post will be better when I finish doing that.

EDIT: I have determined this post is wonderful and needs no updating.

Known Problems

  • The lighting on the floor (Polygon3D) looks off and I'm not sure if shadows are correctly cast onto it. 
  • The entire code structure is a complete mess; needs restructuring.
  • Polygons and Spheres are one-sided; if a light source is placed inside a Sphere or behind a Polygon, its light still reaches the other objects. I'm not sure if this is desirable or not.

Results

Basic Phong Shading.
Extra. Two light sources.

Wednesday, March 28, 2012

Ray Tracer Checkpoint 2 – Camera

For the second checkpoint we focus on the core functionality of the ray tracer; the actual ray-casting and collision detection.

I took an object-oriented design approach, keeping different components as separate as possible from one another. I ended up with the following design:
  • Camera - represents the camera. Stores its position, up, and forward vectors. Doesn't bother storing the right vector, as that can be quickly determined with a cross product. Also contains the "render" method; it takes in a World object and does the ray-casting, returning a 2D array of Colour objects (the final image to be displayed).
  • World - stores a list of 3D objects to be rendered. That's pretty much it. Also contains a method to transform all those objects based on a transformation matrix, but it isn't used yet.
  • Object3D - represents an object to be rendered. It's an abstract class and contains one method that must be implemented by all child classes: intersect(Ray).
  • Vector3 - represents a 3D vector, and implements all the standard vector math functions.
  • Point3 - represents a 3D point. Essentially a vector, but without the vector math functions.
  • Ray - a point (Point3) with direction (Vector3).
  • Colour - stores a red, green, and blue value.
Designing it this way makes it really easy to add things in, like additional object types (besides the basic sphere and polygon) and improved camera functionality. This should be pretty awesome for future checkpoints.
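
To give a feel for how it all fits together, a heavily trimmed skeleton of the design looks something like this (member lists and signatures are simplified, not copied from my actual headers):

    // Trimmed-down skeleton of the design described above.
    #include <vector>

    class Vector3 { public: double x, y, z; /* dot, cross, normalize, ... */ };
    class Point3  { public: double x, y, z; };
    class Colour  { public: double r, g, b; };

    class Ray {
    public:
        Point3  origin;
        Vector3 direction;
    };

    class Object3D {
    public:
        virtual ~Object3D() {}
        // Every renderable object must know how to intersect a ray.
        virtual bool intersect(const Ray& ray, double& distance) const = 0;
    };

    class World {
    public:
        std::vector<Object3D*> objects;
    };

    class Camera {
    public:
        Point3  position;
        Vector3 up, forward;     // right = forward x up, computed when needed

        // Casts a ray through every pixel of the "screen" and returns the image.
        std::vector<std::vector<Colour>> render(const World& world,
                                                int width, int height) const;
    };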

Known Problems

I have a variable that sets how far away the "screen" that the rays shoot through is. Unfortunately, if I try to position the screen correctly (in between the camera position and the scene), everything that is rendered is too small to see. Instead, I position it on the opposite side of the scene. I get desirable results, but it's such a hack that it makes me upset. It'll hopefully be fixed by next release.

Additionally, the actual drawing to the screen is currently done with OpenGL ( glRect ). I feel like using OpenGL for just that is a bit overkill, so I'll be looking into alternatives.

Results

Ray Casting, no extras
Ray Casting with Super Sampling
I rendered the super sampled image with a rather large offset value to show the results. In practice the value would probably be much smaller.

Lighting and Shadowing: Project Proposal

Proposal Document

The Presentation

For a previous class project, I tried to implement a dynamic lighting and shadowing system in OpenGL. The main reasons for this were that OpenGL 2.0 lights don't cast shadows and that OpenGL 3.0 and 4.0 don't give you any lights at all.

After some research I found three ways for me to go about doing such a thing:
  • Per-pixel lighting
  • Per-vertex lighting
  • Light mapping
Per-pixel and per-vertex lighting require the use of shaders and GLSL, which at the time I had no desire to learn. I went with light mapping. Although it was fairly easy to implement, it was unbelievably slow and I spent the majority of my time trying to speed it up. So I've scrapped that project and moved on to better things.

The Proposal

I will create a dynamic illumination and shadowing system in OpenGL 4.0, using shaders and GLSL. I'm going to generate some random terrain, illuminate the whole thing using different models (Phong, Gouraud, and Lambertian), then implement shadow volumes. It will be glorious.

Timeline

I'm never good about keeping personal deadlines, but here's the game plan:
  • Now: Begin shader research and experimentation.
  • Week 4: Have random scene generation completely finished.
  • Week 5: Be familiar enough with OpenGL 4.0 and GLSL that actual work can begin.
  • Week 7: Have at least one illumination model completely finished.
  • Week 8: Have all illumination models completely finished. Begin work on shadow volumes.
  • Week 10: Have shadow volumes implemented.
I'm going to aim for a post on each of these milestones outlining what I did, how I did it, and numerous papers, articles, and websites that I use.

Monday, March 19, 2012

Ray Tracer Checkpoint 1 – Setting the Scene

The ray tracer assignment is a quarter-long project for Computer Graphics II. Since building a ray tracer is a pretty ambitious project as a whole, it's been divided into more manageable milestones.

The first and easiest milestone is to just create the scene geometry to be ray traced. In this case, the scene is a recreation of Turner Whitted's first ray traced scene. It was recreated using OpenGL; as such, not everything is completely accurate.

Scene Geometry

  • Camera
    • Position: (0, 0, 0)
    • Forward: (0, 0, -1)
    • Up: (0, 1, 0)
  • Floor Corners
    • (7, -5, -38)
    • (-13, -5, -38)
    • (7, -5, -8)
    • (-13, -5, -8)
  • Big Sphere
    • Position: (0, 1, -13)
    • Width: 3.125
  • Small Sphere
    • Position: (-3, -5/3, -18)
    • Width: 3.125
I was pretty adamant about keeping the spheres the same size (as I believe they are in the original, but one is further away) and the camera at the origin. This led to an acceptable recreation, but some pretty unusual values.

 Results

 

Original - Whitted, 1980
Recreation