



Introduction

With improvements in the processing speed and storage capacity of computers, it is becoming feasible to synchronize a virtual space inside a computer with a real 3D space [5]. Our final goal is to construct a real-time virtual space that displays human activities in a certain real space. Once such a virtual space is constructed, anyone outside the real space can observe the human activities in it from any viewpoint with only a small delay.

To synchronize the virtual space with the real space, the real space must be reconstructed in real time.

Slit light projection methods and structured light projection methods achieve real-time 3D reconstruction, but they require active sensing, which disturbs human activities in the real space. In contrast, passive vision based approaches [3,4] do not affect the activities. Stereo vision methods achieve real-time 3D reconstruction, though they cannot reconstruct back-facing surfaces that are invisible to the stereo cameras. Therefore, the cameras have to be placed so as to surround the real space. Realistic 3D reconstruction methods [1,2] that use more than ten cameras have been proposed, but they need a considerable period to reconstruct a single scene and are thus not suitable for real-time applications.
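With cameras surrounding the space, a voxel can be tested by projecting it into every view and keeping it only if it falls inside each camera's foreground silhouette (volume intersection). The following is a minimal sketch of that test; the function names, camera matrices, and silhouette images are illustrative, not taken from the paper:

```python
import numpy as np

def project(P, X):
    """Project a 3D point X through a 3x4 camera matrix P; return pixel (u, v)."""
    x = P @ np.append(X, 1.0)
    return x[0] / x[2], x[1] / x[2]

def voxel_occupied(X, cameras, silhouettes):
    """Volume intersection: a voxel survives only if it projects inside
    the foreground silhouette of every surrounding camera."""
    for P, sil in zip(cameras, silhouettes):
        u, v = project(P, X)
        iu, iv = int(round(u)), int(round(v))
        h, w = sil.shape
        if not (0 <= iv < h and 0 <= iu < w) or not sil[iv, iu]:
            return False
    return True
```

Because each voxel is tested against each view independently, the per-camera silhouette extraction and the per-voxel intersection test parallelize naturally, which is what the distributed layout below exploits.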

The main problem of 3D reconstruction with such a surrounding camera layout is that it requires a large amount of computation, because many images must be processed at each frame. In our approach, this problem is resolved by distributed computing. We reconstruct the real space by assigning one computer to each camera for image processing, and additional computers for the 3D reconstruction itself. All the computers are connected to one another by 100baseT Ethernet and a 155 Mbps ATM LAN.
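The one-computer-per-camera split can be sketched as follows, with threads standing in for the networked hosts of the real system; the thresholding and the counting stage are toy stand-ins for the actual image processing and reconstruction:

```python
from concurrent.futures import ThreadPoolExecutor

def extract_silhouette(frame, threshold=128):
    """Per-camera image processing stage: threshold the frame into a
    foreground mask (in the real system, one host per camera)."""
    return [pixel > threshold for pixel in frame]

def count_foreground(silhouettes):
    """Toy stand-in for the 3D reconstruction stage: count positions
    that are foreground in every view."""
    return sum(all(view[i] for view in silhouettes)
               for i in range(len(silhouettes[0])))

def process_frame(frames):
    """One worker per camera, mirroring the one-computer-per-camera
    layout; results are gathered by the reconstruction stage."""
    with ThreadPoolExecutor(max_workers=len(frames)) as pool:
        silhouettes = list(pool.map(extract_silhouette, frames))
    return count_foreground(silhouettes)
```

The point of the split is that silhouette extraction touches only one camera's pixels, so it needs no communication until the compact masks are handed to the reconstruction hosts.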

We describe the reconstructed space by a voxel representation. In our method, we improve throughput by dividing the video processing into several stages that form a pipeline, and we decrease latency by dividing the real 3D space into subspaces and reconstructing each subspace simultaneously on several distributed computers. We can also trade off throughput against latency by changing the pipeline formation in the system, so as to satisfy the requirements of the application.

In the following sections, Section 2 describes how our method reconstructs a 3D scene, Section 3 explains the prototype system, named SCRAPER, and shows experimental results, and Section 4 concludes this paper.






Yoshinari Kameda
Mon Sep 21 11:42:41 JST 1998