
Constructing the 3D Model

When 3D digitization is used for analytical applications, a single range image sometimes suffices to perform the task, for example, detecting and monitoring cracks or documenting tool marks on specific areas of an object.

However, most objects and environments require more than one range image to achieve sufficient coverage of the surface of interest. The number of images needed depends on the shape of the object and its degree of self-occlusion, on any obstacles that constrain sensor positioning, and on the size of the object if it exceeds the sensor's field of view. A benefit of merging different views is that the unavoidable noise in the original data is filtered out during the integration process, provided the data are free of bias.
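To give a feel for this noise-filtering benefit, here is a minimal numerical sketch (hypothetical values, not NRC data): assuming unbiased, independent per-image noise, averaging the overlapping measurements of a surface point reduces the error roughly by the square root of the number of views.

    # Minimal sketch (hypothetical values, not NRC data): averaging overlapping,
    # unbiased range measurements of the same surface point reduces the noise.
    import numpy as np

    rng = np.random.default_rng(0)
    true_depth = 1.250   # metres; hypothetical distance to a surface point
    sigma = 0.0005       # 0.5 mm per-image noise, an assumed sensor figure
    n_views = 8          # number of overlapping range images

    samples = true_depth + rng.normal(0.0, sigma, size=n_views)

    print(f"single-image error : {abs(samples[0] - true_depth) * 1000:.3f} mm")
    print(f"merged-view error  : {abs(samples.mean() - true_depth) * 1000:.3f} mm")
    # For unbiased noise, the averaged estimate's standard deviation is
    # sigma / sqrt(n_views); a bias in the data would not be removed this way.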

A 3D modeling methodology for the construction of models from a set of range images has been developed by NRC researchers in collaboration with InnovMetric Software Inc. and the Canadian Conservation Institute. This approach is now commercially implemented in InnovMetric’s Polyworks™ software suite (http://www.innovmetric.com).

Principle:
Figure 1 outlines the methodology for constructing a model from a set of range images. The first three steps form the acquisition loop: range images are acquired one at a time until the desired surface coverage is obtained, and a user-guided tool sequentially aligns (or registers) each new image with the previous ones, making it possible to rapidly detect surface areas not yet measured. The following steps constitute the modeling sequence: these completely automated steps globally refine the alignment between images and integrate them into a unified model that can be used directly or, alternatively, geometrically compressed and texture-mapped. If required, the models can be manually edited at any step of the modeling sequence.

Figure 1: From multiple range images to shape and color models
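As a rough illustration of this workflow, the sketch below shows the structure of the acquisition loop and the automated modeling sequence. Every callable and parameter name is a hypothetical placeholder supplied for illustration; this is not the Polyworks™ API.

    # Rough sketch of the Figure 1 workflow; every callable passed in is a
    # hypothetical placeholder, not the Polyworks(TM) API.
    from typing import Any, Callable, List

    def build_model(
        acquire: Callable[[], Any],                  # grab one range image from the sensor
        register: Callable[[Any, List[Any]], None],  # user-guided alignment with previous images
        coverage: Callable[[List[Any]], float],      # fraction of the surface measured so far
        refine: Callable[[List[Any]], None],         # automated global alignment refinement
        integrate: Callable[[List[Any]], Any],       # merge all images into one unified mesh
        target_coverage: float = 0.95,
    ) -> Any:
        # Acquisition loop: acquire, register, check coverage, repeat.
        images: List[Any] = []
        while coverage(images) < target_coverage:
            image = acquire()
            register(image, images)
            images.append(image)

        # Modeling sequence: fully automated once acquisition is complete.
        refine(images)
        mesh = integrate(images)
        # The unified mesh can then be geometrically compressed and
        # texture-mapped, and manually edited at any step if required.
        return mesh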

The texture-mapped compressed mesh model: A texture can be computed and applied to a compressed version of the original model in order to approximate the appearance of the full-resolution colored model (Figure 2). The algorithm for the automatic generation of this map is coupled with the vertex-removal mesh compression method: during geometric compression, the algorithm keeps track of where each removed vertex projects on the compressed version of the model. When the desired level of compression is reached, the original vertices are projected and transposed into their associated triangles in a tessellated texture map. The main challenge is to tessellate the rectangular texture map efficiently, given a set of triangles of varying proportions and sizes, while preserving as much of the color information as possible and avoiding discontinuities between adjacent model triangles. This is a very important aspect for virtual display applications: the 3D models must represent the shape, subtle color variations, material characteristics (ivory, bone, stone, metal, wood) and features such as tool mark details as closely as possible to the actual object. In short, the fidelity of the 3D models to the actual objects is a priority.
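A minimal sketch of the texture-generation idea follows, under simplifying assumptions: each compressed triangle is assigned one right-triangle cell in a square atlas grid, and each removed vertex's color is written at its recorded barycentric position inside the triangle it projects onto. The data layout and the cell-based tessellation are illustrative assumptions, not the published algorithm, which must also fill and blend the map to avoid seams between adjacent triangles.

    # Minimal sketch, under simplifying assumptions, of baking removed-vertex
    # colours into a tessellated texture map (not the published algorithm).
    import numpy as np

    def bake_texture(removed, n_triangles, cell=32):
        """removed: iterable of (tri_id, (b0, b1, b2), rgb) records collected
        during vertex-removal compression: the compressed triangle each removed
        vertex projects onto, its barycentric coordinates there, and its colour."""
        per_row = int(np.ceil(np.sqrt(n_triangles)))     # square grid of cells
        atlas = np.zeros((per_row * cell, per_row * cell, 3), dtype=np.uint8)

        # Each compressed triangle owns the lower-left right triangle of one
        # cell, with corners at (0, 0), (1, 0) and (0, 1) in cell coordinates.
        corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

        for tri_id, bary, rgb in removed:
            u, v = np.asarray(bary) @ corners            # barycentric -> cell coords
            col, row = tri_id % per_row, tri_id // per_row
            px = min(int((col + u) * cell), atlas.shape[1] - 1)
            py = min(int((row + v) * cell), atlas.shape[0] - 1)
            atlas[py, px] = rgb                          # splat the vertex colour
        return atlas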

Alternatively, the vertices of the high-resolution model can be treated as a cloud of points and fed into a point-based rendering pipeline. Compared with the simple union of the points from the original images, the vertices of the integrated model have been filtered through the weighted-averaging surface reconstruction and are organized in a more regular sampling pattern on the surface.
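This filtering can be pictured as a confidence-weighted average of the overlapping samples that map to the same surface location. The sketch below illustrates the idea with assumed weights and data; it is not the actual reconstruction code.

    # Illustrative sketch only: a confidence-weighted average of overlapping
    # range samples of one surface spot; weights and data are assumptions.
    import numpy as np

    def fuse_samples(samples, weights):
        """samples: (N, 3) measurements of the same surface location from several
        range images; weights: (N,) per-sample confidences. Returns the weighted
        mean position used for the corresponding vertex of the integrated model."""
        p = np.asarray(samples, dtype=float)
        w = np.asarray(weights, dtype=float)
        return (p * w[:, None]).sum(axis=0) / w.sum()

    # Example: three overlapping measurements, the last one taken at a grazing
    # angle and therefore given a lower confidence.
    points = [[0.101, 0.200, 0.352], [0.099, 0.201, 0.348], [0.104, 0.198, 0.355]]
    print(fuse_samples(points, weights=[1.0, 1.0, 0.4]))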

Figure 2: The high-resolution model of a Haida mask from the Canadian Museum of Civilization (VII-B-136a) is illustrated in (a2) and contains 600,000 polygons of shape information (a1) in a 16.5 MB file. This is the archival-quality model used for research applications and to prepare lower-resolution models for other uses. Images (b1) and (b2) illustrate a 10,000-polygon model (b1) with a 1000 x 1000 texture map (b2) in a 6 MB file; this model is suitable for interactive display in a museum kiosk. Images (c1) and (c2) illustrate a 1,000-polygon model (c1) with a 512 x 512 texture map (c2) in a 0.17 MB file; this model is suitable for interactive web display. When the texture maps are applied to the compressed models, they produce a 3D appearance that approximates that of the full-resolution colored model, so a close approximation to the fidelity of the high-resolution model is retained in the models used for museum visualization and web applications.


Date Published: 2006-02-17