Journal of Sensor Science and Technology Vol. 22, No. 6 (2013) pp. 387-392
http://dx.doi.org/10.5369/JSST.2013.22.6.387
pISSN 1225-5475/eISSN 2093-7563

Capturing Distance Parameters Using a Laser Sensor in a Stereoscopic 3D Camera Rig System

Wan-Young Chung, Julian Ilham, and Jong-Jin Kim+

Abstract

Camera rigs for shooting 3D video are classified as manual, motorized, or fully automatic. Even with an automatic camera rig, the process of stereoscopic 3D (S3D) video capture is complex and time-consuming. One of the key time-consuming operations is capturing the distance parameters: the near distance, far distance, and convergence distance. Traditionally these distances are measured with a tape measure or by triangular indirect measurement; both methods consume considerable time for every scene shot. In this study, a compact laser distance sensing system with long-range sensitivity is developed. The system is small enough to be installed on top of a camera, and its measuring accuracy is within 2% even at a range of 50 m. With the laser distance sensing system, the shooting time of an automatic camera rig can be reduced significantly, to less than a minute.

Keywords: Automatic camera rig, Stereoscopic 3D (S3D), Distance parameters, Laser sensor

1. INTRODUCTION

Three-dimensional (3D) video is becoming popular again. Between 1952 and 1954, more than 65 3D movies were produced by Hollywood before other cinema formats took over [1]. Recently the number of 3D video productions has started increasing; however, the impractical production process remains a challenge. To make 3D video using the stereoscopic method, two cameras are used. They should be attached to a single holder, widely known as a 3D camera rig, so that the two pictures have the same height and size, which reduces picture distortion. There are three types of 3D camera rigs. The first type is a manual rig, which needs full human control to adjust the camera positions.
The second is a motorized camera rig. This still needs human control, operated through a human interface panel, and costs more since it contains a microprocessor for calculation and for driving the positioning motors. The last is an automatic camera rig. This type can reduce human dependence because it can automatically move the cameras to the desired positions. The system structure of this type of camera rig is obviously more complex and costly. Several parameters needed to determine the camera positions are usually acquired from information devices or sensors attached to the camera rig.

Department of Electronic Engineering, Pukyong National University, Nam-gu, Busan 608-737, Korea
+Corresponding author: kimjj@pknu.ac.kr
(Received: Aug. 7, 2013, Revised: Oct. 1, 2013, Accepted: Oct. 7, 2013)
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Efforts are needed to simplify the 3D shooting operation with camera rigs, and several researchers have proposed their own methods. Liu et al. proposed motorized zoom lenses for stereoscopic 3D (S3D) reconstruction with two digital image sensors [2]. Heinzle et al. proposed a computational camera system for their automatic camera rig by embedding image-processing capability inside it [3]. In this study, an automatic camera rig is developed using a low-cost laser sensor to capture several distance parameters and reduce the operation time required.

2. AUTOMATIC 3D CAMERA RIG SYSTEM

As seen in Fig. 1 and Fig. 2, the camera rig system can be divided into two main movement parts. The horizontal movement part adjusts the distance between the two cameras on the rig.
The distance between the two cameras is known as the interaxial distance (T). The cameras are moved simultaneously to the desired interaxial distance. The convergence movement part adjusts the convergence angle of the two cameras; both cameras are converged simultaneously to create a symmetric stereoscopic image. Proper control of the interaxial distance and convergence angle is crucial to produce a better stereoscopic image [4].

Fig. 1. Relation between interaxial distance and convergence angle [4].
Fig. 2. The two main movement parts of a horizontal parallel S3D camera rig.

Making a 3D video takes much longer than traditional 2D video, even when an automatic method is used. This long operation time is due to the high complexity of the 3D camera rig system. Several parameters are needed to determine the correct interaxial distance and convergence angle for proper 3D video shooting [4]; this prevents cyber sickness and visual fatigue in viewers. Based on equation (1), the interaxial distance (T) is determined by the focal length of the cameras (Lens), the desired disparity of the two images (Parallax), and the nearest (Near) and farthest (Far) object distances from the cameras:

T = Parallax / ( Lens × ( 1/Near − 1/Far ) )    (1)

This paper presents a low-cost laser sensor for S3D camera systems to simplify the shooting operation and reduce the operation time.

3. THEORETICAL DESCRIPTIONS

3.1 Laser distance sensing system

The three main distance parameters to be measured are the near distance, the far distance, and the convergence distance. In our previous study [5], the three distance parameters were captured using indirect methods such as triangulation along with a tape measure on the camera rig, or using the tape measure alone.
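The dependence of T on the four quantities can be sketched in a few lines of Python. This is only a sketch: the exact form of equation (1) did not survive extraction, so the relation used below is the commonly cited parallel-rig formula built from exactly the four quantities the text names, and should be treated as an assumption rather than the paper's own equation.

```python
def interaxial_distance(lens, parallax, near, far):
    """Interaxial distance T for a parallel S3D rig (assumed form of eq. (1)).

    T = Parallax / (Lens * (1/Near - 1/Far)); all lengths in the same unit.
    """
    if near >= far:
        raise ValueError("near distance must be smaller than far distance")
    # (1/Near - 1/Far) shrinks as the scene gets deeper, so for a fixed
    # parallax budget a deeper scene allows a wider camera separation.
    return parallax / (lens * (1.0 / near - 1.0 / far))

# Example (hypothetical values, all in mm): 50 mm lens, 1 mm parallax budget,
# nearest object at 3 m, farthest at 6 m.
T = interaxial_distance(lens=50.0, parallax=1.0, near=3000.0, far=6000.0)
```

Changing Near or Far and recomputing T is exactly the per-scene adjustment the distance parameters drive on the automatic rig.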
However, these methods are inefficient because of the large amount of time required and the number of people needed to perform the measurements; for example, one person is needed just to hold the end of the tape measure. A laser range finder provides a direct method. A range finder can operate in several ways, such as rotational scanning, triangulation, and the single-point method. A rotational scanning laser sensor is generally used for 2D mapping [6]. Most triangulation laser sensors are used to obtain more precise distance measurements. Konolige et al. proposed a low-cost laser sensor using the triangulation method, but its maximum range was 6 m [7]; it therefore cannot be used for S3D systems, which sometimes capture video from distances greater than 10 m. The stereoscopic 3D camera rig system, which only needs to obtain object distances one at a time, is better suited to the single-point method. However, the common barrier to using this kind of sensor is its high price. Several low-cost single-point sensors, such as the Leica and Bosch products [8, 9], do not provide an interface for integrating the sensor into the main system and cannot easily be attached to the camera. A laser sensor suitable for our system should be smaller than the camera itself, measure distances greater than 30 m, be low-cost with an embedded circuit interface, be easy to install, and be lightweight. These requirements led us to design a custom laser sensor. A red laser diode with a wavelength of approximately 650 nm is used as the transmitter. A Hamamatsu S4282-51 photodiode is used as the light receiver, as it is quite sensitive at the 650 nm wavelength, as shown in Fig. 3(a). When the photodiode receives a low-intensity reflected laser beam, or no laser beam at all, the logic-level output is high because the transistor is an open circuit.
If a reflected laser beam of sufficient intensity is detected by the photodiode, the transistor shorts the output to ground [10].

Fig. 3. Hamamatsu S4282-51 photo sensor [10]; (a) characteristic of the photo sensor and (b) principle of the photo sensor.

A single-point scanning sensor system using the time-of-flight method was designed to obtain a low-cost device with long-distance measurement. The sensor measures the time taken from transmitting the light until it is detected by the sensor. The sequential process can be seen in Fig. 4. As shown in Fig. 6, the laser transmitter is triggered periodically by the microcontroller. A plano-convex lens collects the light reflected by an object and focuses it onto the photodiode. The microcontroller takes the measured time and determines the distance:

d = c × t / 2    (2)

where c is the velocity of light in the atmosphere and t is the measured time.

The red laser used in this work has a wavelength of approximately 650 nm, which classifies it as a class 2 laser. To make it safe for general use and to follow international standards [11], the average maximum output power must be limited to 1 mW.

E = P_PEAK × t    (3)

P_AVG = E × f    (4)

The laser energy E typically refers to the output of the pulsed laser and is related to the power output. The energy E is the product of the laser peak power output P_PEAK and the laser pulse duration t. Since the average power output P_AVG is the product of the energy E and the repetition rate f, the output power can be reduced by minimizing the repetition rate or by shortening the laser pulse.

Fig. 4. Sequential process of a laser sensor measurement.

3.2 System structure and tests

A laser sensor using the time-of-flight technique deals with nanosecond timing. Therefore, to achieve the required accuracy, one dedicated high-speed microcontroller is used to handle the laser sensor exclusively, particularly when using a polling technique, as shown in Fig. 5.
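Equations (2)-(4) can be checked with a short Python sketch. The value used for c and the example pulse parameters in the usage line are assumptions for illustration, not figures from the paper.

```python
C_ATM = 3.0e8  # assumed approximate speed of light in the atmosphere, m/s

def tof_distance(t_seconds):
    """Equation (2): d = c * t / 2.

    The measured time t covers the round trip to the object and back,
    hence the division by two.
    """
    return C_ATM * t_seconds / 2.0

def average_power(p_peak_w, pulse_s, rep_rate_hz):
    """Equations (3) and (4): E = P_PEAK * t, then P_AVG = E * f."""
    energy = p_peak_w * pulse_s          # joules per pulse
    return energy * rep_rate_hz          # watts, averaged over time

# Usage (hypothetical pulse parameters): a 1 W peak, 10 ns pulse repeated at
# 100 kHz averages exactly 1 mW, the class 2 limit mentioned in the text.
p_avg = average_power(p_peak_w=1.0, pulse_s=10e-9, rep_rate_hz=1e5)
```

The last two lines illustrate the design lever the text describes: the same peak power stays within the 1 mW average limit if either the repetition rate or the pulse length is reduced.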
Serial Peripheral Interface (SPI) communication, which is faster than Inter-Integrated Circuit (I2C) or Universal Asynchronous Receiver Transmitter (UART) communication, is used to send data from the laser sensor's microcontroller to the general-purpose microcontroller. The laser sensor's microcontroller sends the data out continuously after the time-of-flight-to-distance conversion is finished. Finally, the distance data can be sent from the general-purpose microcontroller to the Programmable Logic Control (PLC) board through UART.

Fig. 5. Diagram of the laser sensor process system.
Fig. 6. The principle of a single point scanning laser sensor system.

Fig. 7 shows the laser sensor module in use on the 3D camera rig. To check the measurement resolution of the sensor module, the distance between the camera rig and the object was also determined with a tape measure. Both were aimed at the same point, and a wall was used as the object reflecting the laser beam. This experiment was repeated at several distances, and the results are summarized in Table 1. Fig. 8 shows the process of capturing the distance parameters in a simulated situation; the direct and indirect methods are shown for comparison.

Fig. 7. An accuracy test of the fabricated laser sensor module on the 3D camera rig; (a) laser beam and (b) tape measure.

Table 1. Distance comparison between tape measure and laser sensor

No | Tape measure (m) | Laser sensor (m)
 1 |  5               |  5.006
 2 |  7.5             |  7.509
 3 | 10               | 10.013
 4 | 20               | 20.03
 5 | 30               | 30.051
 6 | 40               | 40.073
 7 | 50               | 50.103

Table 2. Shortest distance comparison between theory and experiment

Theoretical (m) | Experiment (m)
0.15            | 0.17

Fig. 9. Laser sensor error rate.

For an accuracy test on non-flat surfaces, which are harder to measure, a mannequin was used.
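The Table 1 readings can be reduced to the relative-error figure directly; the following sketch simply recomputes the percentage error for each row of the paper's own data.

```python
# (tape measure, laser sensor) pairs in metres, taken from Table 1.
pairs = [(5.0, 5.006), (7.5, 7.509), (10.0, 10.013), (20.0, 20.03),
         (30.0, 30.051), (40.0, 40.073), (50.0, 50.103)]

# Percentage error of each laser reading relative to the tape measure.
errors_pct = [abs(laser - tape) / tape * 100.0 for tape, laser in pairs]

worst = max(errors_pct)  # the laser consistently reads slightly long
```

Recomputing the errors this way shows the worst case (at 50 m) staying just under 0.21%, consistent with both the "less than 1% error" claim in Section 4 and the 0.21% figure quoted in the conclusion.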
The commercial product Disto™ D2 (Leica Geosystems Ltd.) was also tested alongside the tape measure and our developed laser sensor module. The commercial laser sensor guarantees a typical accuracy of 1.5 mm [8]. Some colors can absorb the source light and affect the resolution; however, no colors that could introduce such an error were present in this study.

Fig. 8. Simulated situation for capturing distance on the 3D camera rig; (a) indirect method and (b) direct method.

4. RESULT AND DISCUSSION

The maximum range the laser sensor module reached was approximately 50 m. At greater distances, the intensity of the reflected light was too low to trigger the photodiode. The distances measured using the tape measure and the laser sensor are compared in Table 1. The results indicate that the laser sensor has less than 1% error, as seen in Fig. 9. The shortest distance that can be measured experimentally using the laser sensor is 0.17 m, which is equivalent to a travel time of 1.13 ns, as shown in Table 2. Based on equation (2), when a 1 GHz clock source is used, the shortest distance that can theoretically be measured is approximately 0.15 m, equivalent to a travel time of 1 ns. The additional time delay is attributed to the internal processes of the microcontroller.

Table 3. Distance measurement comparison during practice

Measurement          | Tape measure (m) | Laser sensor (m) | Laser sensor Disto™ D2 (m)
Near distance        | 2.96             | 3.036            | 3.035
Convergence distance | 4.67             | 4.699            | 4.700
Far distance         | 6.80             | 6.825            | 6.826

The indirect measurement method using the tape measure needs considerable effort and time, usually minutes. In contrast, the laser sensor takes only seconds, even with a single operator. The accuracy of the distance measurement by the developed laser sensor module is compared with that of the tape measure and the commercial laser sensor module (Disto™ D2 laser sensor).
The errors in the 3 m to 7 m range, for both the designed laser sensor module and the commercial laser sensor module measuring the non-flat object, are 2.5% at the 3 m range and 0.4% at the 7 m range.

5. CONCLUSION

A laser sensor module was designed and fabricated as a single-point scanning sensor system with the time-of-flight measurement technique, to be applied to fast 3D video shooting. The maximum distance range obtainable from this laser sensor module is approximately 50 m. Thus, this laser sensor can be very useful for simplifying the shooting process of an S3D camera rig system within a 50 m range. The error rate of the designed laser distance module for a vertical concrete surface is within 0.21% in the distance range between 5 and 50 m. The measuring error is within 2.6% in the range between 3 and 7 m for a mannequin with clothes as the reflecting object. In addition, besides the good accuracy, this low-cost laser sensor module reduces the shooting operation time needed to measure the three main distance parameters in a scene to a few seconds. This processing time is much shorter than that of the previous triangulation method. The number of individuals involved is also reduced.

ACKNOWLEDGMENT

This work was supported by a Research Grant of Pukyong National University (2013).

REFERENCES

[1] M. T. M. Lambooij, W. A. IJsselsteijn, and I. Heynderickx, "Visual discomfort in stereoscopic displays: a review", Proc. SPIE 6490, Stereoscopic Displays and Virtual Reality Systems XIV, pp. 64900I-1-64900I-13, California, USA, 2007.
[2] P. Liu, A. Willis, and Y. Sui, "Stereoscopic 3D reconstruction using motorized zoom lenses within an embedded system", Proc. SPIE 7251, Image Processing: Machine Vision Applications II, pp. 72510W-1-72510W-12, California, USA, 2009.
[3] S. Heinzle, P. Greisen, D. Gallup, C. Chen, D. Saner, A. Smolic, A. Burg, W. Matusik, and M. Gross, "Computational stereo camera system with programmable control loop", ACM Trans. on Graphics, Vol. 30, No. 4, p. 1, 2011.
[4] S. M. An, R. Ramesh, Y. S. Lee, and W. Y. Chung, "Interaxial distance and convergence control for efficient stereoscopic shooting using horizontal moving 3D camera rig", ICMVIPPA, Vol. 59, No. 408, pp. 2176-2181, 2011.
[5] S. M. An, "Automatic stereoscopic camera rig based on PLC control", M. Eng. Thesis, Pukyong National University, Busan, pp. 34-39, 2013.
[6] M. Alwan, M. B. Wagner, G. Wasson, and P. Sheth, "Characterization of infrared range-finder PBS-03JN for 2-D mapping", IEEE Int. Conf. on Robotics and Automation, pp. 3936-3941, Barcelona, Spain, 2005.
[7] K. Konolige, J. Augenbraun, N. Donaldson, C. Fiebig, and P. Shah, "A low-cost laser distance sensor", IEEE Int. Conf. on Robotics and Automation, pp. 3002-3008, California, USA, 2008.
[8] http://www.leica-geosystems.com/en/Leica-DISTO D2_69656.htm (retrieved on Oct. 8, 2013).
[9] http://www.bosch-professional.com/za/en/glm-50-26319-ocs-p/ (retrieved on Oct. 21, 2013).
[10] Hamamatsu, "Light Modulation Photo IC", S4282-51 datasheet, Mar. 2009.
[11] http://www.rli.com/resources/articles/classification.aspx (retrieved on May 8, 2013).