The complexity of objects represented by polygonal meshes has risen to the point where multiple polygons in a mesh may occupy less than a single pixel on screen when the object is rendered. Point-sampled representations of surfaces have recently gained popularity due to their high level of flexibility in representing surfaces. Much of this flexibility comes from the fact that point-based representations of surfaces do not possess any explicit form of connectivity between primitives. This property makes points ideal for representing complex or dynamically changing objects.
In the area of tiled projector displays, the geometry of complex display surfaces must be estimated in order to determine the warping to apply to user images so that they appear undistorted from the user's viewpoint. The warping can be accomplished by rendering a projectively textured model of the display surface using the position of the projector as the viewpoint. The stereo reconstruction step used in estimating the geometry of the surface produces a number of point samples, which are typically triangulated into a polygonal display model in an up-front calibration process. This triangulation is prone to artifacts that result from outliers being connected to the rest of the surface geometry. These artifacts are very disturbing to the viewer and must be removed before the system is used. In continuously calibrated systems, the visible artifacts become harder to control since the model of the display surface is constantly being updated and re-triangulated.
Point-based rendering methods have the potential to reduce apparent visual artifacts since point primitives have no inherent connectivity to one another. In continuously calibrated displays, the use of points also has the potential to ease the update of the surface geometry since re-triangulation is no longer necessary.
Mark Levoy and Turner Whitted [6] first proposed the use of points as a display primitive in 1985. Since then, their use in volume and surface rendering has been explored. Methods have been devised to efficiently reconstruct point-sampled surfaces, filling in the holes between points with a continuous surface. Surveys of point-based representations and rendering methods can be found in [1] and [3]. Recently, a method called surface splatting was developed [2] which allows point-sampled surfaces to be rendered efficiently with anti-aliasing. Botsch et al. [4] showed how this method of point-based rendering could be fully accelerated using the programmability of the newest generation of graphics hardware.
To my knowledge, no tiled display systems have been built that use a variant of point-based graphics to represent the display surface.
As my semester project, I propose to implement the hardware-accelerated point-based renderer described in [4]. The renderer will take objects represented as a set of surface splats (center, normal, and 2D extent) and render them with both object-space and screen-space filtering in a three-pass rendering process. I will also investigate how projective texturing may be applied to this rendering process, since this will be necessary for incorporating the method into a tiled display system.
Goals accomplished by March 15:
-Background research completed (hardware-accelerated surface splatting, multiple render targets in OpenGL, etc.)
-Initial progress made on the first pass of the rendering algorithm.
Goals accomplished by April 15:
-Passes one and two of the surface splatting algorithm implemented in hardware. These two passes accomplish visibility and object-space filtering.
-Depending on time, pass three also completed. This pass accomplishes screen-space filtering and lighting.
Goals accomplished by May 10:
-Investigation of how to add projective texturing to the surface splatting pipeline.
-Depending on time, incorporate the renderer into the tiled display system of the WAV project and observe results.
Although implementing a hardware-accelerated point renderer is not in itself novel, it will require a significant amount of time and research since the literature on this subject is new and somewhat sparse. The application area for the renderer, however, is novel: I am not aware of any tiled display systems that make use of point-based graphics for representing display surface geometry.
[1] Kobbelt L., Botsch M., "A Survey of Point-Based Techniques in Computer Graphics", Computers & Graphics 28, 6 (2004), 801-814.
[2] Zwicker M., Pfister H., van Baar J., Gross M., "Surface Splatting", Proc. of ACM SIGGRAPH 2001 (2001), pp. 371-378.
[3] Sainz M., Pajarola R., "Point-Based Rendering Techniques", Computers & Graphics 28, 6 (2004), 869-879.
[4] Botsch M., Hornung A., Zwicker M., Kobbelt L., "High-Quality Surface Splatting on Today's GPUs", Proc. of Eurographics Symposium on Point-Based Graphics (2005).
[5] Pfister H., Zwicker M., van Baar J., Gross M., "Surfels: Surface Elements as Rendering Primitives", Proc. of SIGGRAPH 2000, July 2000.
[6] Levoy M., Whitted T., "The Use of Points as a Display Primitive", Technical Report 85-022, Computer Science Department, UNC-Chapel Hill, January 1985.
[7] Wu J., Kobbelt L., "Optimized Sub-Sampling of Point Sets for Surface Splatting", Proc. of Eurographics 2004, Vol. 23, No. 3.
This section provides a brief introduction to the three-pass algorithm described in reference [4] for rendering objects represented as a set of surface splats.
Splats are rendered using custom vertex and fragment shaders. The splats enter the pipeline as points along with their other attributes. A vertex shader computes the size of the splat onscreen by projecting the eye-space extent of the splat onto the view plane and setting the size s of the square point to be rasterized. Each splat is then rasterized by the hardware into an s x s pixel square in the framebuffer, centered at the projected position of the splat.
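To make the size computation concrete, here is a minimal Python sketch of the projection the vertex shader performs, assuming a symmetric perspective projection with a given vertical field of view; the function and parameter names are illustrative, not the actual shader code:

```python
import math

def splat_point_size(radius, z_eye, fovy_deg, viewport_height):
    """Conservative screen-space size (in pixels) of a splat with the
    given eye-space radius at eye-space depth z_eye (negative, since
    the camera looks down -z)."""
    # Pixels per eye-space unit at distance 1 from the viewpoint.
    pixels_per_unit = viewport_height / (2.0 * math.tan(math.radians(fovy_deg) / 2.0))
    # Projected diameter of the splat, scaled by the perspective division.
    return 2.0 * radius * pixels_per_unit / -z_eye
```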
Each fragment of the splat is then processed by a custom fragment shader. Since the vertex shader only computes a conservative estimate of the splat's size onscreen, the fragment shader first determines whether the fragment lies in the interior of the splat: it inverts the viewport transformation and back-projects the fragment position into a ray, which is intersected with the supporting plane of the splat. If the fragment is determined to be within the interior of the splat in eye space, it is accepted; otherwise it is discarded. For accepted fragments, a perspectively correct depth is computed and passed on to the framebuffer for depth testing.
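The per-fragment computation can be summarized in NumPy as follows; this is a sketch assuming a standard OpenGL perspective projection and a [0, 1] depth range, with illustrative names:

```python
import numpy as np

def splat_fragment_depth(frag_xy, inv_proj, viewport, center, normal,
                         radius, near, far):
    """Back-project a fragment into an eye-space ray, intersect it with
    the splat's supporting plane, and return a perspectively correct
    window-space depth, or None if the fragment misses the splat."""
    x, y, w, h = viewport
    # Invert the viewport transform: window coords -> NDC on the near plane.
    ndc = np.array([2.0 * (frag_xy[0] - x) / w - 1.0,
                    2.0 * (frag_xy[1] - y) / h - 1.0,
                    -1.0, 1.0])
    p = inv_proj @ ndc
    ray = p[:3] / p[3]                   # eye-space ray through the fragment
    denom = np.dot(ray, normal)
    if abs(denom) < 1e-8:
        return None                      # ray parallel to the splat plane
    t = np.dot(center, normal) / denom
    q = t * ray                          # eye-space intersection point
    if np.linalg.norm(q - center) > radius:
        return None                      # fragment outside the splat
    # Perspectively correct depth mapped to window space [0, 1].
    z_ndc = (far + near + 2.0 * far * near / q[2]) / (far - near)
    return 0.5 * z_ndc + 0.5
```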
Using multiple render targets, it is possible to accumulate not only blended material properties but blended normals as well. In the third pass, each pixel's accumulated attributes are first normalized by dividing by the accumulated kernel weight; the normalized normals and material properties are then combined to perform Phong shading, and rendering is complete.
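A minimal per-pixel sketch of this final pass (the ambient, diffuse, and specular coefficients and the shininess exponent are assumptions chosen for illustration):

```python
import numpy as np

def resolve_pixel(color_acc, normal_acc, weight, light_dir, view_dir):
    """Normalize the blended attributes by the accumulated kernel
    weight, then apply Phong shading with a single light source."""
    if weight <= 0.0:
        return np.zeros(3)               # background pixel
    albedo = color_acc / weight          # normalized material color
    n = normal_acc / np.linalg.norm(normal_acc)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    diffuse = max(np.dot(n, l), 0.0)
    r = 2.0 * np.dot(n, l) * n - l       # light reflected about the normal
    specular = max(np.dot(r, v), 0.0) ** 32
    return albedo * (0.1 + 0.8 * diffuse) + 0.1 * specular
```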
As of this project update, I have completed passes one and two of the three-pass algorithm outlined above, as well as the normalization component of the third pass. Some images of the rendered results are shown below.
Since I was unable to find any surface splat models online, I attempted to generate my own to get things going. To get a (very bad) approximation of a surface splat model, I took a polygonal mesh and created a splat for each vertex, using the vertex normal as the splat normal. For the radius of the splat, I took the distance to the farthest vertex among all triangles the splat's parent vertex belongs to. As can be seen in the images below, the result is a poorly sampled model with certain splats grossly over-proportioned. It is, however, sufficient for testing the renderer.
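A minimal Python sketch of this crude construction, assuming vertices and normals are given as N x 3 arrays and triangles as index triples:

```python
import numpy as np

def splats_from_mesh(vertices, normals, triangles):
    """One splat per mesh vertex: the vertex normal becomes the splat
    normal, and the radius is the distance to the farthest vertex of
    any triangle incident on the parent vertex."""
    radius = np.zeros(len(vertices))
    for tri in triangles:                # tri = (i, j, k) vertex indices
        for a in tri:
            for b in tri:
                d = np.linalg.norm(vertices[a] - vertices[b])
                radius[a] = max(radius[a], d)
    return [(vertices[i], normals[i], radius[i])
            for i in range(len(vertices))]
```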
The following two images are of an eagle and cow model rendered using the surface splatting algorithm described above. At this time no shading calculations are performed. Each splat was assigned a random grayscale color value.
The next two images are the same as the two above, except that the color at each pixel has not been normalized by dividing by the accumulated weight. The high dynamic range of the results is due to the use of 16-bit floating-point render targets.
Using the Multiple Render Target (MRT) functionality of the latest graphics cards, it is possible to output both blended color and normals for each rendered pixel in pass two. The two images below show normal textures with normal values (x,y,z) interpreted as (r,g,b) values.
Due to the division in the normalization step, the use of 8-bit color channels can lead to visible artifacts from lack of precision. The following two images of a solid-color cow model were rendered without any shading calculations. The image on the left was generated using 8 bits of precision for rendering, and the image on the right using 16-bit floating-point precision. The bright spots in the image on the left are numerical inaccuracies that result from dividing by a very small number.
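A tiny Python illustration of the failure mode, using made-up accumulated values of the kind that occur near a splat boundary where the kernel weight is small:

```python
# Accumulated color and kernel weight at a pixel near a splat boundary.
color, weight = 0.004, 0.005                # true normalized color: 0.8

quantize = lambda v: round(v * 255) / 255   # simulate an 8-bit channel

print(color / weight)                       # float buffer: 0.8
print(quantize(color) / quantize(weight))   # 8-bit buffer: 1.0 (bright spot)
```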
Due to the unavailability of surface splat models, I will have to create my own. I plan to do this by taking a dense point set (possibly obtained by sampling a polygonal mesh) and using the algorithm described in [7] to turn the point set into an optimized set of circular surface splats. The following are updated goals for the next two project milestones:
Goals accomplished by April 15:
-Point-based renderer of circular surface splats completed.
-Algorithm for computing an optimized set of circular surface splats from a dense point set completed.
Goals accomplished by May 10:
-Investigation of how to add projective texturing to the surface splatting pipeline.
-Depending on time, incorporate the renderer into the tiled display system of the WAV project and observe results.
In order to get some splat-based models to test my renderer, I wrote a partial implementation of the technique in [7] for sub-sampling a point cloud into a set of surface splats. To generate the point cloud, I wrote code to sample a triangulated surface mesh along uniformly spaced rays parallel to a coordinate axis, for all three coordinate axes, generating a point sample at each ray-mesh intersection, as sketched below. Prior to the sampling, the meshes were scaled to fit in a 2-cube centered at the origin.
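The sampler amounts to standard ray-triangle intersection over a uniform grid of rays; a Python sketch for the z-axis pass is below (interpolation of the mesh normals is omitted for brevity, and names are illustrative):

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moeller-Trumbore ray/triangle intersection; returns the ray
    parameter t of the hit, or None if the ray misses the triangle."""
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direction, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:
        return None                      # ray parallel to the triangle
    f = 1.0 / a
    s = origin - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = f * np.dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * np.dot(e2, q)
    return t if t > eps else None

def sample_mesh_z(vertices, triangles, res=128):
    """Point-sample a mesh (pre-scaled into the 2-cube [-1,1]^3) along
    a res x res grid of rays parallel to the z axis; permuting the
    coordinates gives the x- and y-axis passes."""
    points, direction = [], np.array([0.0, 0.0, 1.0])
    for gx in np.linspace(-1.0, 1.0, res):
        for gy in np.linspace(-1.0, 1.0, res):
            origin = np.array([gx, gy, -2.0])
            for tri in triangles:
                t = ray_triangle(origin, direction, *vertices[list(tri)])
                if t is not None:
                    points.append(origin + t * direction)
    return np.array(points)
```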
Both the cow and eagle model from the last update were sampled at resolutions of 128x128 and 256x256 rays per coordinate axis. With each point sample I also stored an interpolated normal direction from the mesh at the location of the point sample for use in the sub-sampling algorithm.
I then wrote Matlab code to take the point cloud and sub-sample it into a set of circular surface splats with a given global error tolerance. To begin with, a splat is created for each point sample. The normal for the splat is taken from a least-squares plane fit to the k nearest neighbors of the parent point. The generated normal is then compared to the normal interpolated from the mesh and is reversed if it points in an inconsistent direction (i.e., the dot product is negative). This ensures that all splat normals point outwards from the surface and not inwards.
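In Python (rather than the Matlab I used), the normal estimation step looks roughly like this; the plane fit is done via an SVD of the centered neighborhood, whose direction of least variance is the least-squares plane normal:

```python
import numpy as np

def estimate_normal(points, i, mesh_normal, k=10):
    """Least-squares plane normal at points[i] from its k nearest
    neighbors, oriented to agree with the normal interpolated from
    the source mesh."""
    d = np.linalg.norm(points - points[i], axis=1)
    nbrs = points[np.argsort(d)[:k]]
    centered = nbrs - nbrs.mean(axis=0)
    # The fitted plane's normal is the direction of least variance.
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1]
    # Flip if inconsistent with the mesh normal so normals point outward.
    return -n if np.dot(n, mesh_normal) < 0.0 else n
```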
Once the normal for each splat has been computed, a radius for the splat is required. The radius is found by considering the k nearest neighbors of each splat's parent point in order of their 2D distance from the center of the splat when projected onto the plane of the splat. Neighbors are incorporated into the splat until the user-specified error tolerance is exceeded, which occurs when the distance along the splat normal to an included point is greater than the tolerance. The radius is then set to the in-plane distance to the farthest included neighbor.
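A sketch of the radius computation, under the same illustrative conventions:

```python
import numpy as np

def splat_radius(points, i, normal, eps, k=20):
    """Grow the splat at points[i] over its k nearest neighbors,
    visited in order of their in-plane distance, until a neighbor
    deviates more than eps along the splat normal."""
    c = points[i]
    d = np.linalg.norm(points - c, axis=1)
    nbrs = points[np.argsort(d)[1:k + 1]]     # skip the point itself
    offsets = nbrs - c
    height = offsets @ normal                 # deviation along the normal
    in_plane = np.linalg.norm(offsets - np.outer(height, normal), axis=1)
    radius = 0.0
    for j in np.argsort(in_plane):            # nearest in the plane first
        if abs(height[j]) > eps:
            break                             # error tolerance exceeded
        radius = in_plane[j]
    return radius
```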
The current set of splats can now be used to render the surface, but it contains many overlapping and redundant splats. To greatly reduce the number of splats, a greedy selection method is employed which selects splats in order of their surface area contribution until all sample points are covered. The results of the sub-sampling method are depicted below.
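The greedy step can be sketched as follows, using the number of newly covered sample points as a stand-in for the surface area contribution; the coverage sets are assumed to be precomputed:

```python
def greedy_select(splats, coverage):
    """Repeatedly pick the splat covering the most still-uncovered
    sample points until every point is covered; coverage[s] is the
    set of point indices that splat s covers."""
    uncovered = set().union(*coverage.values())
    selected = []
    while uncovered:
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        if not coverage[best] & uncovered:
            break                        # nothing covers the remainder
        selected.append(best)
        uncovered -= coverage[best]
    return [splats[s] for s in selected]
```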
Also as part of this update, I have completed the point-based renderer. In the last update I had completed passes one and two, and with this update, pass three has also been completed. After pass two, two textures have been created that hold the material properties and normals of the object at each pixel. In the shading pass, it is then possible to use the normals in conjunction with the material properties and information about the light source to perform per-pixel shading. As mentioned in the previous update, the material properties and normals must first be normalized by dividing their values by the accumulated kernel weight at each pixel.
Here are some images I was able to generate using the point-based renderer and models obtained by sub-sampling surface meshes of the cow and eagle from the previous update.
These first images are of the cow and eagle sampled into a point cloud using 128x128 rays per coordinate axis. For both models, the point cloud was sub-sampled into surface splats while enforcing an error tolerance of 2.4% of the bounding box diagonal (both models were pre-scaled to fit in a 2-cube before sampling).
Eagle model - 24066 points reduced to 8657 splats. | Cow model - 24066 points reduced to 2238 splats.
Eagle model - 24066 points reduced to 8782 splats. | Cow model - 24066 points reduced to 2278 splats.
There are some obvious problems with the models. There are holes in the legs of the cow, and some splats in the eagle model appear out of place, especially on the talons. The point clouds used to generate these models were sparsely sampled to begin with, and a denser sampling should alleviate both problems to some extent. For the out-of-place splats on the eagle's talons, a more optimal distribution of splats could help as well. The algorithm used to generate the models uses a greedy selection of splats that minimally covers the model and does not ensure a good distribution of splats.
The cow and eagle models were also sampled using 256x256 rays per coordinate axis. The following images show the results of the sub-sampling method on the generated point clouds. The models generated with an error tolerance of 2.4% of the bounding box diagonal are displayed first, followed by the models generated using a tolerance of 0.24%.
Eagle model - 94015 points reduced to 34480 splats. | Cow model - 93414 points reduced to 40913 splats.
Eagle model - 94015 points reduced to 34481 splats. | Cow model - 93414 points reduced to 40911 splats.
The models generated using the denser initial sampling are of much better quality than the previous ones. The rendering of the cow model actually appears to contain triangles, even though it is composed entirely of splats, because the point samples were generated from a triangle-based surface mesh. The following image reduces the size of the splats so that the model is no longer closed, showing that the model is indeed composed of circular surface splats.
As part of the calibration process in the WAV project, we produce a point-cloud estimate of the display surface geometry. I ran one of the point clouds generated in this way through the sub-sampler I implemented for Project Update 2 to produce a circular surface splat representation. The room geometry used consisted of two complex room corners. The images below show the point cloud obtained during calibration and the resulting surface splat model. The model appears blotched due to inconsistent normal directions used in lighting; this inconsistency results from the ambiguity during sub-sampling as to which side of the least-squares plane the splat normal should lie on. It will not affect image correction in a multi-projector display environment, however, since lighting is not performed there.
As with the models from the previous update, this model suffers from a few holes. This is due to the sampling density used to generate the point-cloud relative to the global error tolerance used. Some splats also extend over the corners of the model, which can lead to visible artifacts when the model is used to perform image correction. This could be remedied by clipping splats which lie on the corner, forcing them to have a hard boundary.
In the WAV project, we use a projective texturing approach to correct for the image distortions that occur when projectors project onto display surfaces more complex than a simple plane. If a surface splat display model is to be used for image correction, some method of adding projective texturing to the rendering pipeline is needed. To accomplish this, I incorporated a projective texturing step into the attribute pass of the renderer. Since the attribute pass already computes a point in eye space for each pixel in the screen-space extent of each splat, I added a step that also inverts the viewing transform to produce a point in world space. This point is then multiplied by the projection matrix representing the projector to produce a texture coordinate in the texture being projected. The texture is sampled and the resulting color is accumulated in the material properties texture of the attribute pass, replacing the color of the splat.
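A minimal sketch of the per-fragment mapping, assuming the projector is described by a single combined projection-times-view matrix and that the eye-space point comes from the attribute pass:

```python
import numpy as np

def projective_tex_coord(p_eye, inv_view, projector_matrix):
    """Undo the viewing transform to get a world-space point, then
    project it with the projector's matrix to get a texture coordinate
    in the projected image."""
    p_world = inv_view @ np.append(p_eye, 1.0)
    clip = projector_matrix @ p_world
    ndc = clip[:2] / clip[3]             # perspective divide
    return 0.5 * ndc + 0.5               # NDC [-1, 1] -> texcoords [0, 1]
```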
The following two images show the results of projective texturing. The first image is a texture being projected into one of the complex corners of the display surface model shown above. Note how the image "wraps" around the corner and is distorted. The texture has also been projected onto the cow model in the second image.
I also created a couple videos which show the texture moving on the model as it is translated back and forth while the projector remains static.
Display Surface Video | Cow Video
Over the course of this project I learned a lot about an alternative way of representing surfaces: surface splats. Surface splats have the same linear surface approximation order as a triangle mesh but lack connectivity among primitives. This is advantageous in that each primitive is independent of the others, which may simplify changes to the surface geometry if it is being continuously estimated. Lack of connectivity can also be a disadvantage when it comes to determining which splats neighbor a given splat. I found that point-based models also suffer from many of the same problems as triangulated surface meshes: it is not uncommon for surfaces to contain holes or to blur sharp surface features due to undersampling.
I was successful in taking a point-cloud representation of a display model and converting it into a circular surface splat representation. I also showed it was possible to incorporate projective texturing into the surface splat rendering pipeline, which will allow the point-based model to be used in a multi-projector display to perform image correction in real-time. As future work, I plan to experiment with point-based representations for performing updates to surface geometry as it is continuously estimated.