



Application to a Real-Life Object

 

Our proposed method demonstrated its applicability in the experiment with computer-generated images. However, when a real-life object is taken as the target, a problem arises: the geometric features of the model may not be precisely the same as those of the real-life object. Since the previously proposed method refers only to the contour information and is therefore sensitive to noise on the contour, it is essential to extend the method to overcome this problem. The method is modified on two points.

1. In Step 1, generated candidates must not project Part i in such a way that more than a certain amount of the projected region strays from the silhouette. This amount should be determined according to how much the geometry of Part i differs from that of the target real-life object.

2. In Step 2, the system first makes a ``gnawed image.'' This is an image in which the silhouette contour is removed from the given image and in which the regions projected by the already estimated parts are also removed. The system then measures the exclusive OR area between the silhouette of the gnawed image and the projected region of each candidate. The candidate that yields the smallest exclusive OR area is selected as the estimate of the rotation angles of Part i. (A code sketch of both modifications follows this list.)
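
The following sketch (in Python, for illustration only) outlines the two modifications. It assumes the silhouette and the projected regions are available as binary image masks; the helper project_part, the threshold stray_limit, and the masks gnawed_silhouette and evaluate_mask are assumed names, not part of the original implementation.

import numpy as np

def filter_candidates(candidates, silhouette, project_part, stray_limit):
    # Step 1 (modified): discard candidates whose projection of Part i
    # strays from the silhouette by more than stray_limit pixels.
    # stray_limit reflects how far the model geometry of Part i may
    # differ from the real-life object (an assumed tuning parameter).
    kept = []
    for angles in candidates:
        proj = project_part(angles)            # boolean mask of the projected region
        strayed = np.logical_and(proj, ~silhouette).sum()
        if strayed <= stray_limit:
            kept.append(angles)
    return kept

def select_pose(candidates, gnawed_silhouette, evaluate_mask, project_part):
    # Step 2 (modified): pick the candidate with the smallest exclusive OR
    # area against the silhouette of the gnawed image, counted only over
    # the white/black pixels (evaluate_mask) of Figure 5.
    best_angles, best_area = None, None
    for angles in candidates:
        proj = project_part(angles)
        xor_area = (np.logical_xor(proj, gnawed_silhouette) & evaluate_mask).sum()
        if best_area is None or xor_area < best_area:
            best_angles, best_area = angles, xor_area
    return best_angles
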

As an example, suppose the model is the same as that used in the previous section, and consider the situation in which the set of already estimated parts contains the head, neck, and breast parts. The gnawed image at this point is shown in Figure 5. The bright gray contour corresponds to the removed silhouette contour, and the dark gray region corresponds to the removed projection regions. The exclusive OR calculation is carried out only in the white and black regions. The dark gray region plays the same role as Step 2 of the previously proposed method, attracting the edge of the projected region toward the silhouette contour. In addition, since the bright gray contour is excluded when counting the exclusive OR area, the system tends to rotate the parts so as to cover as much of the black region as possible.

  
Figure 5: Gnawed Image
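
To make the construction of the gnawed image concrete, the sketch below builds it as a label map from a silhouette mask, a contour band, and the projections of the already estimated parts. The label encoding and the function name make_gnawed_image are illustrative assumptions; the returned gnawed_silhouette and evaluate_mask correspond to the inputs of select_pose in the previous sketch.

import numpy as np

# Illustrative label encoding mirroring Figure 5.
BLACK, WHITE, BRIGHT_GRAY, DARK_GRAY = 0, 1, 2, 3

def make_gnawed_image(silhouette, contour_band, estimated_projections):
    # Silhouette interior is black, background is white.
    gnawed = np.where(silhouette, BLACK, WHITE)
    # Regions projected by the already estimated parts are removed (dark gray).
    for proj in estimated_projections:
        gnawed[proj] = DARK_GRAY
    # The silhouette contour itself is removed (bright gray).
    gnawed[contour_band] = BRIGHT_GRAY
    # Exclusive OR is evaluated only over the remaining black/white pixels.
    evaluate_mask = (gnawed == BLACK) | (gnawed == WHITE)
    gnawed_silhouette = gnawed == BLACK
    return gnawed, gnawed_silhouette, evaluate_mask
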

We have implemented the modified method and tested it on several cases for an image of a woman. In this experiment, the resolution of the pose estimation is set to 20 degrees, and the silhouette images are 320 by 360 pixels. One of the results is shown in Figure 6. The estimated pose is quite similar to the woman's pose in the original image. Another case is shown in Figure 7, where the breast part is posed in a slightly wrong way; as a result, the estimation of the right arm failed. We attribute this type of failure to the fact that the method does not examine uncovered silhouette regions during the estimation process. However, it would be very expensive to compute, for the part to be processed next, a prediction that would reduce the regions left uncovered in Step 1. This problem is left for further study.

  
Figure 6: A Case of Real-Life Human

  
Figure 7: Another Case of Real-Life Human





