Abstract
Projector-camera systems have numerous applications across diverse domains. Accurate calibration of both intrinsic and extrinsic parameters is crucial for these systems. Intrinsic parameters include the focal length, distortion parameters, and principal point, while extrinsic parameters encompass the position and orientation of the projector and camera. In certain scenarios, a non-overlapping projector-camera system is required due to practical limitations, physical arrangements, or specific application requirements. Such systems pose a more complex calibration challenge because the devices share no direct correspondences or overlapping fields of view, necessitating intermediate objects or methods. This paper proposes a calibration method for non-overlapping projector-camera systems using a planar mirror. The method involves a straightforward process that requires a calibrated camera and a separate mirror calibration step. In this setup, the projector displays a pattern on a planar calibration board, and the camera views this calibration board indirectly through the mirror. Using homography, 3D-2D correspondences of the projector are established, enabling the calibration of the system. The method is evaluated empirically on real-world setups and quantitatively in a synthetic environment. The results demonstrate that the proposed method achieves precise calibration across various setups, proving its effectiveness. The result is an easy-to-use and accessible calibration process for non-overlapping projector-camera systems, made possible by a planar mirror.
1 Introduction
Projector-camera systems have numerous applications, ranging from Augmented Reality and computer vision to interactive display environments. Applications span various domains, including entertainment, art, architecture, and manufacturing. For example, projector-camera systems can be used in the manufacturing industry for operator guidance. Additionally, they can be utilized to create interactive art installations, projection mapping, AR-guided museum tours, and architectural visualization of designs.
These systems rely on accurately calibrating the intrinsic and extrinsic parameters of the camera and projector. The intrinsic parameters refer to the internal characteristics of the projector, such as focal length, center of projection, and lens distortion. The extrinsic parameters define the position and orientation of the device in space. Knowing these parameters is essential for accurate projection and alignment of content. A small deviation in the pose can cause a significant inconsistency when projecting over a relatively long distance.
Various methods exist for calibrating stereo camera setups and camera-projector pairs. Both the camera and projector can be described by the pinhole model. Calibration typically requires overlapping fields of view (FOVs). While a checkerboard pattern can effectively calibrate a stereo camera setup because direct correspondences can be observed between the cameras, calibrating camera-projector pairs poses a distinct challenge due to the lack of direct correspondences. It requires projecting an image or pattern onto a surface within the shared FOV to establish correspondences between the projector and the camera. Camera-projector systems can be calibrated using structured light patterns as described by Moreno and Taubin (2012), or by projecting a pattern such as a chessboard pattern on a planar surface (Falcao et al. 2008; Fernandez and Salvi 2011).
In certain scenarios, a projector-camera system with non-overlapping FOVs becomes necessary. This necessity may arise from operational constraints, spatial configurations, or specific application requirements. Non-overlapping FOVs can be beneficial when the system needs to capture features for tracking and localization, for example. If the projection falls within the camera’s FOV, it can interfere with feature capturing, making it difficult to track camera location or objects. A concrete example of this is a mobile projector-camera setup for Augmented Reality applications. Such applications require tracking of the system’s pose. When projecting content in front of the camera, many good features used for tracking will deteriorate. A projector-camera system without overlapping fields of view solves this issue since the projection is outside of the camera’s field of view, and thus there is no interference.
This paper proposes a practical method for projector-camera calibration that uses a planar mirror to resolve non-overlapping FOVs. In our approach, the camera observes the projector's output reflected by the mirror. Figure 1 displays a schematic overview of the calibration setup. Through a separate mirror pose calibration step, we show that the entire system can be calibrated. Utilizing a planar mirror makes our approach accessible and flexible. An alternative approach would require an additional camera, which may not be readily available, can be expensive to purchase solely for calibration purposes, and is less flexible: a single additional camera cannot properly calibrate a projector-camera setup rotated by 180 degrees. While mirrors have been employed before to calibrate camera setups without overlapping FOVs, no methods exist for projector-camera setups. We evaluate our approach empirically in practical scenarios and quantitatively in a simulated environment. By employing our method, we can accurately estimate both the intrinsic and extrinsic parameters of the projector, which are essential for applications demanding accurate spatial projection.
Schematic overview of the calibration setup. \(\mathrm {C_R}\) represents the real camera, while P is the real projector. Camera C is looking through mirror M at the calibration board B, which is being projected on by projector P. \(\mathrm {C_V}\) corresponds to the virtual camera, the camera behind mirror M. \(\mathrm {T_{Reflection}}\) is the reflection transformation of mirror M and transforms the virtual camera to the real camera and vice versa. \(\mathrm {P2C_R}\) is the desired transformation
2 Related work
Numerous methods are available for calibrating both projector-camera and stereo-camera setups. This overview begins by examining projector-camera calibration with overlapping fields of view. Then, non-overlapping stereo camera configurations are further inspected. This section ends with other related work involving a planar mirror. To the best of our knowledge, no research has been done on calibrating projector-camera systems with non-overlapping fields of view, which may be desired in some scenarios. This paper offers a solution to this problem.
2.1 Projector-camera calibration with overlapping FOVs
Projectors can be effectively calibrated using a camera with overlapping fields of view. Techniques for projector-camera calibration fall into several categories, including utilizing a pre-calibrated camera, prewarping, multishot, and single-shot calibration methods.
2.1.1 Pre-calibrated camera
Several techniques rely on a pre-calibrated camera (Kimura et al. 2007; Gao et al. 2008; Griesser and Van Gool 2006; Fernandez and Salvi 2011; Falcao et al. 2008) to determine the world coordinates of a calibration target, which are subsequently used for projector correspondences. While these methods for projector calibration may be less precise due to their reliance on camera calibration, they are relatively straightforward to execute. Approaches by Kimura et al. (2007) and Anwar et al. (2012) utilize homographies to compute the projector points using the camera points. However, homographies, being linear operators, cannot model the non-linear distortions caused by the lens, decreasing the calibration accuracy.
2.1.2 Prewarping
Another approach to establishing projector-world correspondences involves iteratively adjusting a projector pattern until it aligns perfectly with a calibration target (Mosnier et al. 2009; Park and Park 2010; Audet and Okutomi 2009; Martynov et al. 2011; Yang et al. 2016). The projection error is evaluated using an uncalibrated camera. To ensure clear detection and identification of the calibration patterns, colored patterns may be used instead of black-and-white ones. However, for this method to function reliably in practice, camera color calibration is necessary. Additionally, since the projected pattern is adjusted in real-time, there is no way to separate the calibration and capture stages, which might be desired for ease of use.
2.1.3 Multishot calibration
Multishot calibration methods require multiple images for each pose of the calibration pattern: a sequence of patterns is projected and used for calibration. Consequently, these methods are slow and cumbersome, requiring tens of minutes for recalibration. Moreover, the calibration target must remain stationary until the entire pattern sequence has been projected, which means the calibration board must be placed securely and cannot be held by a user. Prewarp methods (Mosnier et al. 2009; Park and Park 2010; Audet and Okutomi 2009; Martynov et al. 2011; Yang et al. 2016) belong to this category, as projection adjustments are made iteratively. Another example of a multishot calibration method is the approach by Willi and Grundhofer (2017). Additionally, Moreno and Taubin (2012) utilize Gray code patterns to encode each pixel, while Zhang and Huang (2006) employ phase-shifted coding.
2.1.4 Single-shot calibration
In contrast to multishot techniques, single-shot calibration methods necessitate only one image per pose, typically resulting in faster calibration. While structured-light patterns in multishot methods are time-encoded (such as Gray codes), single-shot methods utilize spatially encoded structured-light patterns, such as De Bruijn sequences (Huang et al. 2021), dot patterns (Ben-Hamadou et al. 2013) or checkerboard patterns (Anwar et al. 2012). The methods discussed by Fernandez and Salvi (2011) and Falcao et al. (2008) also fall under the category of single-shot calibration methods.
2.2 Non-overlapping FOV stereo camera configurations
Significant research has also focused on calibrating cameras with non-overlapping FOVs. The most common methods tackle the issue of non-overlapping FOVs by using a large calibration target, employing structure from motion, utilizing a mirror, or adding an additional camera to the setup for calibration only.
2.2.1 Using a large calibration target
Employing a large calibration target can overcome the issue of non-overlapping FOVs. For instance, a long stick with predefined markings can serve as a 1D calibration target (Liu et al. 2011b; De França et al. 2012). Similarly, a large 2D calibration target might consist of two chessboard patterns attached to both ends of a long stick (Liu et al. 2011a). However, these methods introduce additional transformations that must be solved, such as the transformation of the stick, and they may not scale well to large setups. The approach is viable for non-overlapping camera configurations but cannot be employed for projector-camera setups, since the projector itself cannot capture any information.
2.2.2 Using structure from motion
Another method for calibrating multiple cameras involves utilizing structure from motion (SfM). SfM is a technique that reconstructs three-dimensional structures from a set of two-dimensional images. For instance, SfM can be applied in the calibration process of two non-overlapping cameras on a robot. If both cameras have observed the same sections of the environment as the robot navigates through it, SfM can determine the position and orientation of each camera based on the three-dimensional model of the environment. Several examples are provided by Lébraly et al. (2010b) and Esquivel et al. (2007). Alternatively, some methods use SLAM (Simultaneous Localization And Mapping) instead of structure from motion, as demonstrated by Ataer-Cansizoglu et al. (2014) and Carrera et al. (2011). These methods require the entire camera system to move, which may not be feasible for many configurations. This approach cannot be used directly for projector calibration, since the projector cannot capture any information and the camera must therefore have a view of the projections. An alternative is to first calibrate the projector and camera with overlapping FOVs; afterward, the camera can be moved to the desired position and orientation, and its pose can be tracked using SLAM or SfM. In practice, however, this is hard to achieve since the projector must remain stationary in the meantime. Also, drift, caused by small errors accumulating over time, may impact the accuracy.
2.2.3 Using a mirror
Both spherical (Agrawal 2013) and planar mirrors (Sturm and Bonfort 2006; Lébraly et al. 2010a; Kumar et al. 2008; Hutchison et al. 2010; Takahashi et al. 2012) can be employed to calibrate non-overlapping camera pairs. Mirrors address the issue by providing the camera with an indirect view of the calibration targets. While these methods are straightforward for small systems, they do not scale efficiently to larger setups with multiple cameras. As the distance between the camera and the mirror increases, the image of the calibration target becomes smaller, rendering pattern detection more challenging and introducing inaccuracies. This problem could be reduced by enlarging the pattern or employing another kind of pattern that is easier to detect.
2.2.4 Using an additional camera
An additional camera can serve as a bridge between the fields of view of multiple cameras. Miyata et al. (2018) utilize an omnidirectional camera as a reference point, calibrating each camera to this reference. Zhao et al. (2018) employ an extra support camera to determine the fixed relation between a camera and a marker. By leveraging markers and the supporting camera, they gather sufficient information to determine the relative poses of multiple cameras. This approach is strongly related to outside-in tracking. Using an additional camera can be a viable option, but it reduces the flexibility of camera and projector placement. For instance, a 180-degree setup cannot be calibrated with just one additional camera. Omnidirectional cameras, moreover, are considerably more expensive to acquire than a mirror.
2.3 Other related work
Mirrors have been used for calibrating monocular camera setups, as shown by Feng and Pan (2018). They calibrated the camera's intrinsic parameters using the vanishing point constraint, which requires four points. A standard planar calibration board only provides two vanishing points, but by using a mirror to capture two different perspectives of the board, they achieved one-shot camera calibration. This approach, however, does not account for distortion parameters.
Our method focuses on determining the camera’s pose relative to the planar mirror (see Sect. 4.1). Chunhai and Bin (2010) also estimated the camera’s pose relative to the mirror using an octagonal marker and the vanishing point constraint. This allowed them to perform online calibration of the camera’s extrinsic parameters. The accuracy of this method may be limited, as the octagonal marker’s corners are manually placed on the mirror, and the detection of their centers may not be precise.
3 Methodology overview
This paper proposes a method to calibrate a projector-camera system without overlapping fields of view (FOVs) using a planar mirror surface. The method proposed is inspired by the projector calibration methods of Falcao et al. (2008) and Fernandez and Salvi (2011). We extended their method and included a mirror in the process to overcome the lack of overlapping FOVs. This means that we require an intrinsically pre-calibrated camera and accordingly accept the dependency on the camera calibration for accuracy. Although a calibrated camera is necessary, it can be calibrated using the same input images used to calibrate the projector.
Figure 2 illustrates an overview of the projector-camera calibration setup, in which the camera looks at the calibration board through the mirror. The projector directly projects a calibration pattern on the board, such as a circle grid or a chessboard pattern. In the process of calibrating a camera indirectly via a mirror, common methods involve keeping the calibration board stationary while adjusting the position of the mirror. This movement allows the camera to capture multiple perspectives of the board through the mirror. However, this approach is unsuitable for our needs as we aim to determine a projector’s intrinsic and extrinsic parameters. A projection onto a fixed calibration board from a stationary projector does not provide the necessary variations to calibrate the intrinsic parameters accurately. Instead, we keep the mirror static and move the calibration board to address this issue. This setup allows for the necessary variations in observations for accurate calibration. Consequently, we also need to calibrate the mirror with respect to the camera to ensure its position and orientation are correctly accounted for in the calibration process.
This paper proposes the following guideline to successfully calibrate a non-overlapping projector-camera pair:
1. Take pictures for mirror pose calibration. Two methods for mirror calibration are described in Sect. 4.1.
2. Take pictures for projector calibration by moving the calibration board and projecting calibration patterns onto the board, as displayed in Fig. 3. Ensure that patterns are projected over the entire FOV of the projector. If a single pattern covering the complete FOV would require an impractically large calibration board, multiple smaller patterns can offer a solution and improve accuracy.
3. Pre-calibrate the camera, or use the calibration pictures of step 2 to calibrate the camera using Zhang (2000)'s method.
4. Calibrate the mirror using the calibrated camera and the images of step 1, as described in Sect. 4.1. In this step, \(\mathrm {T_{Reflection}}\), shown in Fig. 1, is determined.
5. Calibrate the projector using the calibrated mirror, the calibrated camera, and the pictures of step 2 (Sect. 4.2). This step finds the intrinsic parameters of the projector and the transformation \(\mathrm {P2C_R}\) from Fig. 1, which encompasses the extrinsic parameters of the projector with respect to the camera.
Top-down overview of the setup for taking pictures for projector calibration. Each plane is represented by a single line. Projector P displays a calibration pattern on the calibration boards. Each number represents a new pose of the calibration board. Camera C observes the calibration board with the projected pattern via the mirror. It is important to keep the calibration board within the fields of view of both the projector and camera, as shown in the figure
4 Calibration process and formulation
This section details the transformation and calibration steps required for the projector-camera system. The process involves properly calibrating the mirror and projector to obtain the projector's spatial relation (extrinsic parameters) to the camera. The transformation of a 3D point \(x_P\) in the projector's coordinate system to the real camera's coordinate system is given by Eq. 1:

$$x_{C_R} = \mathrm{T_{Reflection}} \cdot \mathrm{C_V2B}^{-1} \cdot \mathrm{P2B} \cdot x_P \qquad (1)$$

where \(\mathrm {T_{Reflection}}\) is the transformation of a reflection through an arbitrary plane (see Eq. 4), \(\mathrm {C_V2B}\) denotes the transformation of the virtual camera's coordinate system to the calibration board's coordinate system and \(\textrm{P2B}\) is the transformation of the projector's coordinate system to the calibration board's coordinate system. The transformation \(\mathrm {P2C_R}\) can be found by solving the following equation:

$$\mathrm{P2C_R} = \mathrm{T_{Reflection}} \cdot \mathrm{C_V2B}^{-1} \cdot \mathrm{P2B} \qquad (2)$$
For successful calibration, \(\mathrm {T_{Reflection}}\), \(\mathrm {C_V2B}\) and \(\textrm{P2B}\) need to be determined. \(\mathrm {C_V2B}\) can be computed using the images obtained in step 2 of the guideline, applying the Perspective-n-Point algorithm (Lepetit et al. 2009). This algorithm utilizes 3D-2D correspondences between the ChArUco pattern on the calibration board and the camera. Specifically, it solves the following equation for \(\mathrm {C_V2B}\) using a minimum of four 3D-2D correspondences:

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \mathrm{K_{cam}} \left[\, R \;|\; t \,\right] \begin{bmatrix} X_B \\ Y_B \\ Z_B \\ 1 \end{bmatrix} \qquad (3)$$

where \(\mathrm {K_{cam}}\) is the known camera intrinsic matrix, s is a projective scale factor, \((u, v)\) are the image coordinates of the board point \((X_B, Y_B, Z_B)\), and the rotation R and translation t determine \(\mathrm {C_V2B}\), the transformation between the virtual camera coordinate system and the calibration board coordinate system. This is essentially a camera pose estimation for a virtual, reflected perspective.
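For illustration, the following minimal sketch performs this pose estimation with OpenCV's EPnP solver. The function and variable names are ours, not from the paper, and the handedness flip the mirror introduces in the observed pattern is assumed to be handled during pattern detection.

```python
import cv2
import numpy as np

def estimate_board_to_virtual_camera(board_pts_3d, image_pts_2d, K_cam, dist_cam):
    """Solve the PnP problem of Eq. 3 for one view of the ChArUco board.

    board_pts_3d : (N, 3) float32, ChArUco corners in board coordinates (z = 0).
    image_pts_2d : (N, 2) float32, their detections in the mirrored camera image.
    Returns the 4x4 transformation mapping board points into the virtual camera
    frame; its inverse is C_V2B as defined above.
    """
    ok, rvec, tvec = cv2.solvePnP(board_pts_3d, image_pts_2d, K_cam, dist_cam,
                                  flags=cv2.SOLVEPNP_EPNP)  # EPnP (Lepetit et al. 2009)
    if not ok:
        raise RuntimeError("PnP failed; at least 4 correspondences are required")
    T = np.eye(4)
    T[:3, :3] = cv2.Rodrigues(rvec)[0]  # rotation vector -> 3x3 rotation matrix
    T[:3, 3] = tvec.ravel()
    return T
```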
The transformation of the projector’s coordinate system to the calibration board’s coordinate system \(\textrm{P2B}\) can be found by first calibrating the intrinsic parameters of the projector, which will be discussed in Sect. 4.2.
The reflection transformation \(\mathrm {T_{Reflection}}\) cannot be determined without a separate mirror calibration step, because there are insufficient constraints to compute the mirror's pose from only the input images used for projector calibration. The only constraint that can be calculated in this step is \(\mathrm {C_V2B}\). The mirror could be placed anywhere as long as this constraint is satisfied, which means there are infinitely many possible real camera positions. Section 4.1 describes the process to obtain the reflection transformation \(\mathrm {T_{Reflection}}\), which consists of a mirror calibration step.
4.1 Reflection transformation
\(\mathrm {T_{Reflection}}\) is the transformation of a reflection through an arbitrary plane and is given by Eq. 4:

$$\mathrm{T_{Reflection}} = \begin{bmatrix} H & (I - H)\,P_0 \\ \mathbf{0}^T & 1 \end{bmatrix} \qquad (4)$$

where \(P_0\) is a point on the mirror plane and H is the Householder matrix (Householder 1958) defined by Eq. 5:

$$H = I - 2\,v\,v^T \qquad (5)$$
I is the identity matrix and v is the unit normal vector of the mirror plane. This normal vector is determined in the coordinate system of the real camera.

This step is called mirror calibration, for which two approaches were researched: the real-virtual method and the real-mirror method. Both methods employ a ChArUco calibration pattern as shown in Fig. 4. Figure 5 displays a top-down view of each method and Fig. 6 shows examples of input images for both methods. The real-virtual method requires a real and virtual (mirrored) view of the calibration pattern in the same frame and computes 3D points on the mirror plane using both patterns. The real-mirror approach applies a pattern to the mirror itself, so the 3D points on the plane are computed directly. Both methods have advantages and disadvantages. The first approach is more flexible because nothing needs to be applied to the mirror; however, it may be more difficult to position the mirror and camera so that both the real and virtual patterns are completely visible while keeping a view of the calibration board and projection. The second technique requires a pattern applied to the mirror, which must afterwards be removed without moving the mirror or the camera. Additionally, because the pattern sits on top of the mirror surface, it could introduce inaccuracies. The following sections explain each method in more detail. We only consider first-surface mirrors, which means refraction does not need to be taken into account.
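To make Eqs. 4 and 5 concrete, the following sketch (ours, not the paper's implementation) builds the homogeneous reflection matrix from a fitted mirror plane. A point x reflects as \(x' = Hx + (I - H)P_0\), which gives the translation part below.

```python
import numpy as np

def reflection_transform(v, P0):
    """Build T_Reflection (Eq. 4) from the mirror plane's unit normal v and a
    point P0 on the plane, both expressed in the real camera's coordinates."""
    v = v / np.linalg.norm(v)                 # enforce a unit normal
    H = np.eye(3) - 2.0 * np.outer(v, v)      # Eq. 5: Householder matrix
    T = np.eye(4)
    T[:3, :3] = H
    T[:3, 3] = (np.eye(3) - H) @ P0           # equals 2 * (v . P0) * v
    return T                                  # an involution: T @ T == I

# Sanity check: reflecting any point twice must return the original point.
T = reflection_transform(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 2.0]))
assert np.allclose(T @ T, np.eye(4))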
Mirror calibration using a pre-calibrated camera. Left: real-virtual method. Right: real-mirror method. Both methods employ a ChArUco board to obtain the mirror’s pose. For illustration, only one line of the pattern is shown. The left method uses the real and virtual (mirrored) pattern to calculate 3D points that lie on the plane. With the right method, the pattern is applied to the mirror, directly resulting in 3D points on the plane. Having obtained these 3D points, a plane can be fitted to estimate the mirror’s pose
4.1.1 Mirror calibration: real-virtual method
The first approach requires a real and virtual (mirrored) view of the calibration pattern in the same frame. Additionally, the intrinsic parameters, such as focal length and principal point, must be known. Using a pose estimation method, the 3D coordinates of the points on the calibration boards (real and virtual) are computed in the camera's coordinate system. The midpoints between corresponding 3D points are then computed: these midpoints lie on the mirror plane. Then, the mirror plane is estimated using Singular Value Decomposition (SVD), as shown by Brown (1976), combined with RANSAC (Fischler and Bolles 1981). This is achieved by first calculating the centroid of the points and translating the points so that this centroid is at the origin; the centroid always lies on the mirror plane. Next, SVD is applied to these points to find the normal v of the plane. The Cartesian plane equation follows from the centroid and the normal. This procedure is repeated on random subsets of points, counting the inliers each time, and the plane equation with the most inliers is retained.
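The following sketch illustrates this plane-fitting step; the iteration count and inlier threshold are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def fit_plane_svd(points):
    """Least-squares plane through 3D points: returns (unit normal, centroid)."""
    centroid = points.mean(axis=0)
    # After centering, the right-singular vector belonging to the smallest
    # singular value is the direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid

def fit_plane_ransac(points, n_iters=500, inlier_thresh=1e-3, seed=0):
    """RANSAC (Fischler and Bolles 1981): keep the plane with the most inliers."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), size=3, replace=False)]
        normal, centroid = fit_plane_svd(sample)
        inliers = np.abs((points - centroid) @ normal) < inlier_thresh
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return fit_plane_svd(points[best])  # refit on all inliers of the best model

# Real-virtual method: midpoints of corresponding real/virtual corners lie on
# the mirror plane, e.g. plane_points = 0.5 * (real_pts_3d + virtual_pts_3d).
```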
4.1.2 Mirror calibration: real-mirror method
The second approach also relies on a calibrated camera and applies a printed calibration pattern to the mirror. The 3D coordinates of the points on the calibration board are computed in the camera’s coordinate system. These points directly represent points on the mirror plane, allowing us to employ the SVD method with RANSAC to determine the plane equation, as described above.
4.2 Projector calibration
To calibrate the projector, we need 3D-2D correspondences of the projector so the method of Zhang (2000) can be employed. For cameras, obtaining these correspondences is trivial. For projectors, we were inspired by the methods of Fernandez and Salvi (2011) and Falcao et al. (2008), using a planar calibration board. This process is displayed in Fig. 7. Figure 8 shows the calibration board with a projected pattern and is one of the input images to our algorithm.
First, the camera is calibrated using Zhang (2000)'s method. This can be done either beforehand or using the same input images that will be used for projector calibration, as described in the guideline in Sect. 3. The local 3D points of the ChArUco calibration pattern and the 2D detections are used to compute the homography from the 2D image space to the 3D space of the calibration board. It is important to note that these calculations are performed in the virtual camera space. The homography relates the transformation between two planes up to a scale and is given by the following equation:

$$s \begin{bmatrix} X_B \\ Y_B \\ 1 \end{bmatrix} = H \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} \qquad (6)$$

where H is the computed homography, s a scale factor, \((u, v)\) is the 2D location on the image plane and \((X_B,Y_B)\) are the x and y coordinates of the corresponding 3D point. The z coordinate of the 3D point is 0 since these points are in the board's local coordinate system and lie in the same plane. Using Eq. 6, the 3D points of the projected pattern can be found because these points lie in the same plane as the ChArUco pattern. Since the projected pattern is known, this results in 3D-2D correspondences, which serve as input to Zhang's calibration method. This method also includes the calibration of the distortion parameters.
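A sketch of this correspondence step is given below, under the assumption that the image points have been undistorted with the known camera intrinsics first (a homography cannot model lens distortion, as noted in Sect. 2.1.1); all names are illustrative.

```python
import cv2
import numpy as np

def lift_projected_points_to_board(charuco_img_pts, charuco_board_xy, pattern_img_pts):
    """Map detected projected-pattern points into board coordinates via Eq. 6.

    charuco_img_pts  : (N, 2) float32 ChArUco corner detections in the image.
    charuco_board_xy : (N, 2) float32 known (X_B, Y_B) board coordinates.
    pattern_img_pts  : (M, 2) float32 detected points of the projected pattern.
    Returns (M, 3) 3D points of the projected pattern in board coordinates.
    """
    H, _ = cv2.findHomography(charuco_img_pts, charuco_board_xy, cv2.RANSAC)
    board_xy = cv2.perspectiveTransform(
        pattern_img_pts.reshape(-1, 1, 2), H).reshape(-1, 2)
    # The projected points lie in the board plane, so their z coordinate is 0.
    return np.hstack([board_xy, np.zeros((len(board_xy), 1), dtype=np.float32)])
```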
After determining the intrinsic parameters using the approach described above, the same 3D-2D correspondences can be utilized to determine the transformation from the projector’s coordinate system to the calibration board’s coordinate system (\(\textrm{P2B}\) in Fig. 1). This is done using the Perspective-n-Point algorithm (Lepetit et al. 2009). Together, these approaches enable the calibration of both the intrinsic and extrinsic parameters of the projector and, thus, the complete projector-camera system.
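Putting the pieces together, a minimal sketch of this final step might look as follows. It uses OpenCV's calibrateCamera and solvePnP with our own helper names, assumes per-view float32 point lists, and chains Eq. 2 for a single view; it is an illustration of the procedure, not the paper's code.

```python
import cv2
import numpy as np

def to_homogeneous(rvec, tvec):
    T = np.eye(4)
    T[:3, :3] = cv2.Rodrigues(rvec)[0]
    T[:3, 3] = tvec.ravel()
    return T

def calibrate_projector(board_pts, proj_pts, proj_size, T_reflection, board_to_cv):
    """board_pts / proj_pts: per-view lists of 3D board points (from Eq. 6) and
    the matching 2D projector pixels; board_to_cv: the board-to-virtual-camera
    pose of the first view (the inverse of C_V2B)."""
    # Zhang (2000): projector intrinsics and distortion from the correspondences.
    rms, K_proj, dist_proj, _, _ = cv2.calibrateCamera(
        board_pts, proj_pts, proj_size, None, None)
    # Extrinsics of the first view: solvePnP yields board -> projector, i.e. P2B^-1.
    _, rvec, tvec = cv2.solvePnP(board_pts[0], proj_pts[0], K_proj, dist_proj)
    P2B = np.linalg.inv(to_homogeneous(rvec, tvec))
    # Eq. 2: projector -> board -> virtual camera -> (mirror reflection) -> real camera.
    P2C_R = T_reflection @ board_to_cv @ P2B
    return K_proj, dist_proj, P2C_R
```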
5 Results
The non-overlapping calibration method is implemented with OpenCV 5.0 (Bradski 2000). It was evaluated in two ways: empirically on a real projector-camera system and quantitatively using synthetic data.
5.1 Empirical evaluation of a real projector-camera system
To empirically evaluate whether the calibration was successful, we use a technique based on epipolar geometry. Epipolar geometry describes the relationship between two views of the same scene. It relates points in one image to lines in the other. The process is shown in Fig. 9 and described as follows:
1. Choose a pixel: select a pixel on the camera's image plane.
2. Calculate and draw the epipolar line: using the calibration data, calculate the epipolar line corresponding to this pixel on the projector's image plane (a sketch of this computation follows the list). The epipolar line represents all possible locations where the corresponding 3D point could appear in the projector's view.
3. Verify alignment: to check whether the calibration is accurate, look at the camera image and see if the projected epipolar line crosses through the originally selected pixel. If the line and point intersect, the intrinsic and extrinsic parameters have been accurately estimated.
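A minimal sketch of step 2 follows, assuming the calibrated intrinsics \(\mathrm{K_{cam}}\) and \(\mathrm{K_{proj}}\) and the relative pose (R, t) from the camera frame to the projector frame are given; this is the standard fundamental-matrix construction, not code taken from the paper.

```python
import numpy as np

def fundamental_matrix(K_cam, K_proj, R, t):
    """F mapping a camera pixel to its epipolar line in the projector image.
    (R, t) is the relative pose from the camera frame to the projector frame."""
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])            # skew-symmetric matrix [t]_x
    E = tx @ R                                     # essential matrix
    return np.linalg.inv(K_proj).T @ E @ np.linalg.inv(K_cam)

def epipolar_line(F, cam_pixel):
    """Line (a, b, c) in the projector image satisfying a*u + b*v + c = 0."""
    line = F @ np.array([cam_pixel[0], cam_pixel[1], 1.0])
    return line / np.linalg.norm(line[:2])         # so |a*u + b*v + c| is in pixels
```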
Additionally, the reprojection error, as reported by OpenCV’s CalibrateCamera and StereoCalibrate functions, indicates the accuracy of the calibration.
The calibration was executed with a RealSense D455 camera and a Kodak Luma 450 Portable projector. We use an asymmetric grid of \(4\times 9\) circles. The grid is shifted across the entire FOV of the projector over 13 positions, and at each position, 4 images were taken while varying the calibration board's pose. Figure 10 shows empirical results of the projector-camera calibration, as well as the reprojection errors for each of these setups. These results show the system can be accurately calibrated with sub-pixel precision. Table 1 shows the estimated intrinsic parameters for each of the real camera-projector setups.
The empirical results of three different non-overlapping projector-camera setups using epipolar geometry. Left: The projector-camera setup. The yellow arrow represents the camera’s viewing direction, while the red arrow indicates the viewing direction of the projector. Right: The projection of the epipolar lines for each setup. The setup is accurately calibrated if the epipolar line passes through the corresponding 2D point on the image plane. This demonstrates the accuracy of the calibration process. For each setup, the reprojection errors are also given. Cam RMS and Proj RMS are the reprojection errors of the camera and projector respectively, as announced by OpenCV’s CalibrateCamera function. Stereo RMS represents the reprojection error as reported by OpenCV’s StereoCalibrate function
5.2 Quantitative evaluation using synthetic data
To measure the accuracy of this approach, Unreal Engine 5.3 was used to simulate the calibration process. The scene consists of a calibration board, a projector, a camera, and a mirror. Figure 11 illustrates the setup in Unreal Engine used to gather synthetic data, while Fig. 12 presents synthetic images from the camera’s perspective through the mirror. The projector projects a circle grid onto the calibration board, and these images are employed for calibration.
Multiple simulations were conducted, varying the movement of the calibration board, the intrinsic and extrinsic parameters of the camera and projector, and using different calibration patterns such as a circle grid and a chessboard pattern. Using the ground truth information of the setup, the system can be assessed thoroughly. The following metrics were used:
- the Frobenius/Euclidean norm of the difference between the ground truth and the estimated intrinsic matrices (CamK/ProjK),
- the Frobenius/Euclidean norm of the difference between the ground truth and the estimated distortion parameters (CamD/ProjD),
- the distance between the estimation and the ground truth of the relative spatial relation of camera and projector (Distance),
- the difference in angle between the estimation and the ground truth of the relative spatial relation of camera and projector (Angle),
- the reprojection error of the calibration process as reported by OpenCV's CalibrateCamera function (CamRMS/ProjRMS),
- the stereo reprojection error as reported by OpenCV's StereoCalibrate function (StereoRMS).
The Frobenius/Euclidean norm is calculated using Eq. 7 for an \(\textrm{m} \times \textrm{n}\) matrix A:

$$\Vert A \Vert_F = \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2} \qquad (7)$$
The results are shown in Tables 2 and 3. Table 2 shows the calibration results with a projected chessboard pattern, while Table 3 shows the results with a projected circle grid pattern. These results include 12 sets of images with no mirror involved (overlapping fields of view) and 13 sets with non-overlapping fields of view. For each set of images, the calibration was performed with the ground truth camera calibration and an estimated camera calibration. This estimation was computed using the ChArUco board in the same input images used for the projector calibration. If applicable, the mirror was calibrated using both described methods. The patterns, a \(4\times 9\) asymmetric circle grid and a \(5\times 6\) chessboard pattern, are shifted across the entire FOV of the projector over 13 positions, and at each position, 4 images were taken while varying the calibration board's pose. The ground truth variations are displayed in Table 5.
As shown in Tables 2 and 3, the circle grid pattern yields better results for both the intrinsic and extrinsic parameters of the projector-camera setup. This may be because some chessboard patterns are not accurately detected, and the corners might be too small. For this paper, only a single chessboard pattern size was tested. Larger patterns with fewer columns and rows could potentially improve accuracy. The following discussion applies to the results of the circle grid pattern (Table 3). The projector’s intrinsic parameters are accurate, with an average Euclidean norm of 7.42 pixels. The results with ideal camera intrinsics outperform those with the calibrated camera. However, the impact is minimal: 1.3 mm and 0.08°. Projector-camera calibration with overlapping fields of view yields better results than the non-overlapping setup, as the introduction of the mirror introduces additional errors. The accuracy of the extrinsic parameters depends on the mirror calibration’s accuracy. As shown in Table 4, the mirror pose can be estimated with an average error of 0.6 mm and 0.09° for the real-virtual method. Though both are comparable, in a simulated environment the ‘real-mirror’ calibration achieves better results than the ‘real-virtual’ approach.
6 Conclusion
This paper presents an easy-to-use approach to calibrating a projector-camera system without overlapping fields of view. We showed that the presented method works for various setups, including scenarios where the camera is rotated 180° relative to the projector. We validated the method both empirically and quantitatively. In a simulated environment and without using any prior knowledge, the system achieves an average accuracy of 3.2 mm and 0.18° and accurately estimates the projector's intrinsic and distortion parameters with a Euclidean norm of 7.59 pixels and 0.02, respectively. The proposed method is accessible thanks to using a simple planar mirror to give the camera an indirect view of the calibration board and projection. Future research could include expanding the method's compatibility with different mirror types and investigating the impact of refraction on the calibration accuracy. Large mirrors might suffer from internal bending and deviate from the planar assumption; future work could research how to adjust for these deviations and calibrate the mirror surface as well, possibly by projecting a pattern and measuring its distortion. Additionally, exploring alternative calibration approaches for non-overlapping projector-camera systems, such as using an additional camera or an omnidirectional camera, presents interesting opportunities.
Data availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
References
Agrawal A (2013) Extrinsic camera calibration without a direct view using spherical mirror. In: 2013 IEEE international conference on computer vision. IEEE, Sydney, Australia, pp 2368–2375, https://doi.org/10.1109/ICCV.2013.294
Anwar H, Din I, Park K (2012) Projector calibration for 3D scanning using virtual target images. Int J Precis Eng Manuf 13(1):125–131. https://doi.org/10.1007/s12541-012-0017-3
Ataer-Cansizoglu E, Taguchi Y, Ramalingam S et al (2014) Calibration of non-overlapping cameras using an external SLAM system. In: 2014 2nd international conference on 3D vision. IEEE, Tokyo, pp 509–516, https://doi.org/10.1109/3DV.2014.106
Audet S, Okutomi M (2009) A user-friendly method to geometrically calibrate projector-camera systems. In: 2009 IEEE computer society conference on computer vision and pattern recognition workshops, pp 47–54, https://doi.org/10.1109/CVPRW.2009.5204319, ISSN: 2160-7516
Ben-Hamadou A, Soussen C, Daul C et al (2013) Flexible calibration of structured-light systems projecting point patterns. Comput Vis Image Underst 117(10):1468–1481. https://doi.org/10.1016/j.cviu.2013.06.002
Bradski G (2000) The OpenCV library. Dr Dobb’s Journal of Software Tools
Brown C (1976) Principal axes and best-fit planes, with applications. Tech. rep., http://hdl.handle.net/1802/13747
Carrera G, Angeli A, Davison AJ (2011) SLAM-based automatic extrinsic calibration of a multi-camera rig. In: 2011 IEEE international conference on robotics and automation. IEEE, Shanghai, China, pp 2652–2659, https://doi.org/10.1109/ICRA.2011.5980294
Chunhai H, Bin L (2010) Three-dimensional reconstruction method using single CCD based on geometric constraint of bilateral symmetry. Chin J Lasers 37(10):2576. https://doi.org/10.3788/cjl20103710.2576
De França J, Stemmer M, França M et al (2012) A new robust algorithmic for multi-camera calibration with a 1D object under general motions without prior knowledge of any camera intrinsic parameter. Pattern Recogn 45(10):3636–3647. https://doi.org/10.1016/j.patcog.2012.04.006
Esquivel S, Woelk F, Koch R (2007) Calibration of a multi-camera rig from non-overlapping views. In: Hamprecht FA, Schnörr C, Jähne B (eds) Pattern recognition. Springer, Heidelberg
Falcao G, Hurtos N, Massich J (2008) Plane-based calibration of a projector-camera system. VIBOT Master 9
Feng Xf, Pan Df (2018) A camera calibration method based on plane mirror and vanishing point constraint. Optik 154:558–565. https://doi.org/10.1016/j.ijleo.2017.10.086
Fernandez S, Salvi J (2011) Planar-based camera-projector calibration. In: Seventh international symposium on image and signal processing and analysis (ISPA)
Fischler MA, Bolles RC (1981) Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun ACM 24(6):381–395. https://doi.org/10.1145/358669.358692
Gao W, Wang L, Hu ZY (2008) Flexible calibration of a portable structured light system through surface plane. Acta Autom Sin 34(11):1358–1362. https://doi.org/10.1016/S1874-1029(08)60060-9
Griesser A, Van Gool L (2006) Automatic interactive calibration of multi-projector-camera systems. In: 2006 conference on computer vision and pattern recognition workshop (CVPRW’06). IEEE, New York, NY, USA, pp 8–8, https://doi.org/10.1109/CVPRW.2006.37
Householder AS (1958) Unitary triangularization of a nonsymmetric matrix. J ACM 5(4):339–342. https://doi.org/10.1145/320941.320947
Huang B, Tang Y, Ozdemir S et al (2021) A fast and flexible projector-camera calibration system. IEEE Trans Autom Sci Eng 18(3):1049–1063. https://doi.org/10.1109/TASE.2020.2994223
Hutchison D, Kanade T, Kittler J et al (2010) Camera pose estimation using images of planar mirror reflections. In: Daniilidis K, Maragos P, Paragios N (eds) Computer vision—ECCV 2010, vol 6314. Springer Berlin Heidelberg, Berlin, Heidelberg, p 382–395, https://doi.org/10.1007/978-3-642-15561-1_28, series Title: Lecture Notes in Computer Science
Kimura M, Mochimaru M, Kanade T (2007) Projector calibration using arbitrary planes and calibrated camera. In: 2007 IEEE conference on computer vision and pattern recognition, pp 1–2, https://doi.org/10.1109/CVPR.2007.383477, ISSN: 1063-6919
Kumar RK, Ilie A, Frahm JM, et al (2008) Simple calibration of non-overlapping cameras with a mirror. In: 2008 IEEE conference on computer vision and pattern recognition. IEEE, Anchorage, AK, USA, pp 1–7, https://doi.org/10.1109/CVPR.2008.4587676
Lepetit V, Moreno-Noguer F, Fua P (2009) EPnP: an accurate o(n) solution to the PnP problem. Int J Comput Vision 81(2):155–166. https://doi.org/10.1007/s11263-008-0152-6
Liu Z, Zhang G, Wei Z et al (2011a) A global calibration method for multiple vision sensors based on multiple targets. Meas Sci Technol 22(12):125102. https://doi.org/10.1088/0957-0233/22/12/125102
Liu Z, Zhang G, Wei Z et al (2011b) Novel calibration method for non-overlapping multiple vision sensors based on 1D target. Opt Lasers Eng 49(4):570–577. https://doi.org/10.1016/j.optlaseng.2010.11.002
Lébraly P, Deymier C, Ait-Aider O, et al (2010a) Flexible extrinsic calibration of non-overlapping cameras using a planar mirror: application to vision-based robotics. In: 2010 IEEE/RSJ international conference on intelligent robots and systems. IEEE, Taipei, pp 5640–5647, https://doi.org/10.1109/IROS.2010.5651552
Lébraly P, Royer E, Ait-Aider O et al (2010b) Calibration of non-overlapping cameras—application to vision-based robotics. In: Procedings of the British machine vision conference 2010. British machine vision association, Aberystwyth, pp 10.1–10.12, https://doi.org/10.5244/C.24.10
Martynov I, Kamarainen JK, Lensu L (2011) Projector calibration by "inverse camera calibration". In: Heyden A, Kahl F (eds) Image analysis. Lecture notes in computer science. Springer, Berlin, Heidelberg, pp 536–544
Miyata S, Saito H, Takahashi K et al (2018) Extrinsic camera calibration without visible corresponding points using omnidirectional cameras. IEEE Trans Circuits Syst Video Technol 28(9):2210–2219. https://doi.org/10.1109/TCSVT.2017.2731792
Moreno D, Taubin G (2012) Simple, accurate, and robust projector-camera calibration. In: 2012 second international conference on 3D imaging, modeling, processing, visualization and transmission, pp 464–471, https://doi.org/10.1109/3DIMPVT.2012.77, ISSN: 1550-6185
Mosnier J, Berry F, Ait-Aider O (2009) A new method for projector calibration based on visual servoing. In: IAPR international workshop on machine vision applications, https://api.semanticscholar.org/CorpusID:8840067
Park SY, Park GG (2010) Active calibration of camera-projector systems based on planar homography. In: 2010 20th international conference on pattern recognition, pp 320–323, https://doi.org/10.1109/ICPR.2010.87, ISSN: 1051-4651
Sturm P, Bonfort T (2006) How to compute the pose of an object without a direct view? Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics) 3852 LNCS:21–31. https://doi.org/10.1007/11612704_3
Takahashi K, Nobuhara S, Matsuyama T (2012) A new mirror-based extrinsic camera calibration using an orthogonality constraint. In: 2012 IEEE conference on computer vision and pattern recognition. IEEE, Providence, RI, pp 1051–1058, https://doi.org/10.1109/CVPR.2012.6247783
Willi S, Grundhofer A (2017) Robust geometric self-calibration of generic multi-projector camera systems. In: 2017 IEEE international symposium on mixed and augmented reality (ISMAR). IEEE, Nantes, pp 42–51, https://doi.org/10.1109/ISMAR.2017.21
Yang L, Normand JM, Moreau G (2016) Practical and precise projector-camera calibration. In: 2016 IEEE international symposium on mixed and augmented reality (ISMAR), pp 63–70, https://doi.org/10.1109/ISMAR.2016.22
Zhang S, Huang P (2006) Novel method for structured light system calibration. Opt Eng. https://doi.org/10.1117/1.2336196
Zhang Z (2000) A flexible new technique for camera calibration. IEEE Trans Pattern Anal Mach Intell 22(11):1330–1334. https://doi.org/10.1109/34.888718
Zhao F, Tamaki T, Kurita T et al (2018) Marker-based non-overlapping camera calibration methods with additional support camera views. Image Vis Comput 70:46–54. https://doi.org/10.1016/j.imavis.2017.12.006
Acknowledgements
This work was made possible with support from MAXVR-INFRA, a scalable and flexible infrastructure that facilitates the transition to digital-physical work environments. This project is subsidized by the Flemish Government and the European Union.
Funding
This research was partly funded by the European Union (HORIZON MAX-R, Mixed Augmented and Extended Reality Media Pipeline, 101070072), the Flanders Make’s XRTwin SBO project (R-12528), the Special Research Fund (BOF) of Hasselt University (R-14360) and the specialized FWO fellowship grant (1SHDZ24N).
Author information
Contributions
J.V. is the lead submission author and has worked on preparing the manuscript text and all the technical components and evaluating the submission. L.J. contributed to all technical components of the submission and is co-supervisor of J.V. B.Z. contributed to the writing of the manuscript text. N.M. is the lead submission author’s supervisor and contributed to the polishing of the manuscript text and the technical components of the submission. All authors reviewed the manuscript.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest as defined by Springer, or other interests that might be perceived to influence the results and/or discussion reported in this paper.
Ethics approval
Not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix 1
1.1 Ground truth parameters of the synthetic variations
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.