Automatic Rectification of the Hybrid Stereo Vision System
Figure 1. Block diagram of the proposed automatic rectification approach. (a) Acquisition of the virtual perspective image; (b) calculation of the fundamental matrix; (c) stereo rectification.
Figure 2. Configuration of the hybrid vision system: a perspective camera and a catadioptric camera with a hyperboloidal mirror.
Figure 3. The unit sphere model for the catadioptric camera.
Figure 4. (a) The effective viewpoint of the virtual perspective image; (b) the coordinate frame of the omnidirectional image.
Figure 5. Overview of virtual perspective image generation.
Figure 6. Epipolar geometry of the virtual perspective image and the conventional image. $\pi_p'$ and $\pi_v'$ are the rectified images; note that they are column aligned.
Figure 7. Experiment platform. The upper camera is omnidirectional; the lower camera is conventional.
Figure 8. Three examples of the image pairs used for the rectification accuracy comparison.
Figure 9. Stereo rectification results. In each image, the first row is from the conventional camera and the second row is from the omnidirectional camera. (a) Image pairs rectified with the method in [27]; (b) image pairs rectified with the proposed method.
Figure 10. A simulated environment with one omnidirectional image and one conventional image.
Figure 11. Sample omnidirectional (a) and perspective (b) images captured in the simulated environment.
Figure 12. Mean errors of ten experiments with different orientation angles.
Figure 13. Tracking and cooperation results of the two cameras. From left to right: the 17th, 26th, 35th, 43rd, and 85th frames.
Figure 14. Rectification result of the image pairs in Figure 13.
Figure 15. Comparison of odometry results against ground truth.
Abstract
1. Introduction
- A perspective projection model is proposed for the omnidirectional image, which significantly reduces the computational complexity of the 3D formulation for mixed-view pairs (a minimal sketch of this virtual-view generation follows this list).
- A well-defined cost function is introduced for optimizing the normalization matrix, yielding a more accurate rectification transformation.
- To evaluate the proposed automatic rectification method and to demonstrate a direct application, a hybrid vision system for target tracking and odometry is built on the rectification approach.
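To make the first contribution concrete, here is a minimal sketch of virtual perspective view generation under the unit sphere model of Figure 3, assuming a Mei-style mapping parameter ξ (given as 0.82 in the hardware table below). The function name, parameter layout, and the OpenCV remap-based lookup are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import cv2

def virtual_perspective(omni, K_omni, xi, K_virt, R, size):
    """Render a virtual perspective view from an omnidirectional image
    via the unit sphere model (illustrative sketch, not the paper's code)."""
    w, h = size
    # Homogeneous pixel grid of the virtual perspective image.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    # Back-project virtual pixels to rays, rotate into the catadioptric
    # frame, and normalize onto the unit viewing sphere.
    rays = R @ np.linalg.inv(K_virt) @ pix.astype(np.float64)
    rays /= np.linalg.norm(rays, axis=0)
    x, y, z = rays
    # Unified-sphere projection: re-project from a center shifted by the
    # mapping parameter xi along the z-axis (xi = 0.82 for this mirror).
    # Rays with z + xi <= 0 fall outside the mirror's field of view and
    # would need to be masked in a robust implementation.
    m = np.stack([x / (z + xi), y / (z + xi), np.ones_like(x)])
    uv = (K_omni @ m)[:2].astype(np.float32)
    # Sample the omnidirectional image at the computed locations.
    return cv2.remap(omni, uv[0].reshape(h, w), uv[1].reshape(h, w),
                     cv2.INTER_LINEAR)
```

In the pipeline of Figure 1, this virtual view is what gets matched against the conventional image before the fundamental matrix is estimated.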
2. Proposed Automatic Rectification Approach
3. Hybrid Omnidirectional and Conventional Imaging System
4. Methodology
4.1. Virtual Image Generation
4.2. Automatic Stereo Rectification
4.2.1. Epipolar Geometry Between Image Pairs
4.2.2. Optimization Method of the Normalization Matrix
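The derivation itself is not reproduced in this outline. For orientation, the sketch below implements the classical normalized eight-point algorithm, in which the normalization matrix is the fixed isotropic scaling of Hartley's method; the paper's contribution is to replace that fixed choice with a matrix optimized under a well-defined cost. Treat this as a baseline sketch under that assumption, not the authors' optimized method.

```python
import numpy as np

def normalization_matrix(pts):
    """Classical isotropic normalization: translate points to their
    centroid and scale so the mean distance is sqrt(2). The paper
    optimizes this matrix instead; this is only the fixed baseline."""
    c = pts.mean(axis=0)
    d = np.linalg.norm(pts - c, axis=1).mean()
    s = np.sqrt(2) / d
    return np.array([[s, 0, -s * c[0]],
                     [0, s, -s * c[1]],
                     [0, 0, 1.0]])

def eight_point(p1, p2):
    """Estimate the fundamental matrix from N >= 8 correspondences
    (p1, p2 are Nx2 arrays of matched pixel coordinates)."""
    T1, T2 = normalization_matrix(p1), normalization_matrix(p2)
    h1 = (T1 @ np.column_stack([p1, np.ones(len(p1))]).T).T
    h2 = (T2 @ np.column_stack([p2, np.ones(len(p2))]).T).T
    # Each correspondence contributes one row of the system A f = 0.
    A = np.column_stack([h2[:, 0:1] * h1, h2[:, 1:2] * h1, h1])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2 by zeroing the smallest singular value.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    # Undo the normalization.
    F = T2.T @ F @ T1
    return F / F[2, 2]
```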
5. Experimental Results and Analysis
5.1. Hybrid Stereo Vision System
5.2. Stereo Rectification Experiment with Real Image Pairs
5.3. Odometry in a Simulated Environment
5.4. Real-Time Target Tracking and Odometry Experiment
6. Discussion
7. Conclusions
Author Contributions
Acknowledgments
Conflicts of Interest
References
- Klitzke, L.; Koch, C. Robust Object Detection for Video Surveillance Using Stereo Vision and Gaussian Mixture Model. J. WSCG 2016, 24, 9–17. [Google Scholar]
- Barry, A.J.; Tedrake, R. Pushbroom Stereo for High-Speed Navigation in Cluttered Environments. arXiv, 2014; arXiv:1407.7091. [Google Scholar]
- De Wagter, C.; Tijmons, S.; Remes, B.D.W.; de Croon, G.C.H.E. Autonomous flight of a 20-gram Flapping Wing MAV with a 4-gram onboard stereo vision system. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 4982–4987. [Google Scholar]
- Marín-Plaza, P.; Beltrán, J.; Hussein, A.; Musleh, B.; Martín, D.; de la Escalera, A.; Armingol, J.M. Stereo Vision-Based Local Occupancy Grid Map for Autonomous Navigation in ROS. In Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Rome, Italy, 27–29 February 2016; pp. 701–706. [Google Scholar]
- Fu, C.; Carrio, A.; Campoy, P. Efficient visual odometry and mapping for unmanned aerial vehicle using ARM-based stereo vision pre-processing system. In Proceedings of the 2015 International Conference on Unmanned Aircraft Systems (ICUAS), Denver, CO, USA, 9–12 June 2015; pp. 957–962. [Google Scholar]
- De La Cruz, C.; Carelli, R. Dynamic model based formation control and obstacle avoidance of multi-robot systems. Robotica 2008, 26, 345–356. [Google Scholar] [CrossRef]
- Micusik, B.; Pajdla, T. Estimation of omnidirectional camera model from epipolar geometry. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, WI, USA, 18–20 June 2003; Volume 1. [Google Scholar]
- Wang, Y.; Gong, X.; Lin, Y.; Liu, J. Stereo calibration and rectification for omnidirectional multi-camera systems. Int. J. Adv. Robot. Syst. 2012, 9, 143. [Google Scholar] [CrossRef]
- Ramalingam, S.; Sturm, P. A Unifying Model for Camera Calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1309–1319. [Google Scholar] [CrossRef] [PubMed]
- Yu, M.-S.; Wu, H.; Lin, H.-Y. A visual surveillance system for mobile robot using omnidirectional and PTZ cameras. In Proceedings of the SICE Annual Conference, Taipei, Taiwan, 18–21 August 2010; pp. 37–42. [Google Scholar]
- Cagnoni, S.; Mordonini, M.; Mussi, L.; Adorni, G. Hybrid Stereo Sensor with Omnidirectional Vision Capabilities: Overview and Calibration Procedures. In Proceedings of the ICIAP 2007 14th International Conference on Image Analysis and Processing, Modena, Italy, 10–14 September 2007; pp. 99–104. [Google Scholar]
- Bastanlar, Y. A simplified two-view geometry based external calibration method for omnidirectional and PTZ camera pairs. Pattern Recognit. Lett. 2016, 71, 1–7. [Google Scholar] [CrossRef] [Green Version]
- Sturm, P. Pinhole Camera Model. In Computer Vision: A Reference Guide; Ikeuchi, K., Ed.; Springer US: Boston, MA, USA, 2014; pp. 610–613. ISBN 978-0-387-31439-6. [Google Scholar]
- Lui, W.L.D.; Jarvis, R. Eye-Full Tower: A GPU-based variable multibaseline omnidirectional stereovision system with automatic baseline selection for outdoor mobile robot navigation. Robot. Auton. Syst. 2010, 58, 747–761. [Google Scholar] [CrossRef]
- Schraml, S.; Belbachir, A.N.; Bischof, H. An Event-Driven Stereo System for Real-Time 3-D 360° Panoramic Vision. IEEE Trans. Ind. Electron. 2016, 63, 418–428. [Google Scholar] [CrossRef]
- Barone, S.; Neri, P.; Paoli, A.; Razionale, A.V. Catadioptric stereo-vision system using a spherical mirror. Procedia Struct. Integr. 2018, 8, 83–91. [Google Scholar] [CrossRef]
- Chen, D.; Yang, J. Image registration with uncalibrated cameras in hybrid vision systems. In Proceedings of the Seventh IEEE Workshops on Application of Computer Vision, WACV/MOTIONS’05, Breckenridge, CO, USA, 5–7 January 2005; Volume 1, pp. 427–432. [Google Scholar]
- Rathnayaka, P.; Baek, S.-H.; Park, S.-Y. An Efficient Calibration Method for a Stereo Camera System with Heterogeneous Lenses Using an Embedded Checkerboard Pattern. J. Sens. 2017, 2017, 6742615. [Google Scholar] [CrossRef]
- Chen, X.; Yang, J.; Waibel, A. Calibration of a hybrid camera network. In Proceedings of the Ninth IEEE International Conference on Computer Vision, Nice, France, 13–16 October 2003. [Google Scholar]
- Deng, X.; Wu, F.; Wu, Y.; Duan, F.; Chang, L.; Wang, H. Self-calibration of hybrid central catadioptric and perspective cameras. Comput. Vis. Image Underst. 2012, 116, 715–729. [Google Scholar] [CrossRef]
- Puig, L.; Guerrero, J.; Sturm, P. Matching of omnidirectional and perspective images using the hybrid fundamental matrix. In Proceedings of the OMNIVIS 2008-8th Workshop on Omnidirectional Vision, Camera Networks and Non-Classical Cameras, Marseille, France, 17 October 2008. [Google Scholar]
- Chen, C.; Yao, Y.; Page, D.; Abidi, B.; Koschan, A.; Abidi, M. Heterogeneous Fusion of Omnidirectional and PTZ Cameras for Multiple Object Tracking. IEEE Trans. Circuits Syst. Video Technol. 2008, 18, 1052–1063. [Google Scholar] [CrossRef] [Green Version]
- Liu, Y.; Shi, H.; Lai, S.; Zuo, C.; Zhang, M. A spatial calibration method for master-slave surveillance system. Optik 2014, 125, 2479–2483. [Google Scholar] [CrossRef]
- Tan, S.; Xia, Q.; Basu, A.; Lou, J.; Zhang, M. A two-point spatial mapping method for hybrid vision systems. J. Mod. Opt. 2014, 61, 910–922. [Google Scholar] [CrossRef]
- Baris, I.; Bastanlar, Y. Classification and tracking of traffic scene objects with hybrid camera systems. In Proceedings of the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan, 16–19 October 2017; pp. 1–6. [Google Scholar]
- Scotti, G.; Marcenaro, L.; Coelho, C.; Selvaggi, F.; Regazzoni, C.S. Dual camera intelligent sensor for high definition 360 degrees surveillance. IEE Proc. Vis. Image Signal Process. 2005, 152, 250–257. [Google Scholar] [CrossRef]
- Lin, H.-Y.; Wang, M.-L. HOPIS: Hybrid omnidirectional and perspective imaging system for mobile robots. Sensors 2014, 14, 16508–16531. [Google Scholar] [CrossRef] [PubMed]
- Yu, G.; Morel, J.-M. ASIFT: An Algorithm for Fully Affine Invariant Comparison. Image Process. On Line 2011, 1, 11–38. [Google Scholar] [CrossRef] [Green Version]
- Hartley, R.I. In defence of the 8-point algorithm. In Proceedings of the Fifth International Conference on Computer Vision, Cambridge, MA, USA, 20–23 June 1995; pp. 1064–1070. [Google Scholar]
- Goncalves, N.; Nogueira, A.C.; Miguel, A.L. Forward projection model of non-central catadioptric cameras with spherical mirrors. Robotica 2017, 35, 1378–1396. [Google Scholar] [CrossRef]
- Simoncini, V. Computational Methods for Linear Matrix Equations. SIAM Rev. 2016, 58, 377–441. [Google Scholar] [CrossRef]
- Toldo, R.; Gherardi, R.; Farenzena, M.; Fusiello, A. Hierarchical structure-and-motion recovery from uncalibrated images. Comput. Vis. Image Underst. 2015, 140, 127–143. [Google Scholar] [CrossRef] [Green Version]
- Albl, C.; Kukelova, Z.; Fitzgibbon, A.; Heller, J.; Smid, M.; Pajdla, T. On the Two-View Geometry of Unsynchronized Cameras. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5593–5602. [Google Scholar]
- Longuet-Higgins, H.C. A computer algorithm for reconstructing a scene from two projections. Nature 1981, 293, 133–135. [Google Scholar] [CrossRef]
- Cai, C.; Fan, B.; Weng, X.; Zhu, Q.; Su, L. A target tracking and location robot system based on omnistereo vision. Ind. Robot. 2017, 44, 741–753. [Google Scholar] [CrossRef]
- Moon, T.K. The expectation-maximization algorithm. IEEE Signal Process. Mag. 1996, 13, 47–60. [Google Scholar] [CrossRef]
- Cai, C.; Weng, X.; Fan, B.; Zhu, Q. Target-tracking algorithm for omnidirectional vision. J. Electron. Imaging 2017, 26, 033014. [Google Scholar] [CrossRef]
| Hyperbolic Mirror Parameters | | Omnidirectional Camera Parameters | | Conventional Camera Parameters | |
|---|---|---|---|---|---|
| *a* (major axis) | 31.2888 mm | Part number | FL2G-50S5C-C | Part number | FL2G-50S5C-C |
| *b* (minor axis) | 51.1958 mm | Resolution | 1360 × 1360 pixels | Resolution | 2448 × 2048 pixels |
| Mapping parameter | 0.82 | Frame rate | 10 frames/s | Frame rate | 10 frames/s |
| Vertical viewing angle | 120° | Interface | IEEE 1394b | Interface | IEEE 1394b |
| | Method in [27] | Our Proposed Method |
|---|---|---|
| Set 1 | 2.457 | 1.401 |
| Set 2 | 2.374 | 1.645 |
| Set 3 | 2.621 | 1.831 |
| Set 4 | 1.987 | 1.176 |
| Average error | 2.360 | 1.513 |
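The error metric behind these numbers is not spelled out in this excerpt. One plausible reconstruction, an assumption motivated by Figure 6's note that the rectified pairs are column aligned, is the mean absolute column offset (in pixels) of matched feature points after applying the rectifying homographies:

```python
import numpy as np

def mean_rectification_error(pts1, pts2, H1, H2):
    """Hypothetical metric: after rectification, corresponding points
    should share an image column (Figure 6), so the residual is the
    mean absolute difference of their u-coordinates in pixels."""
    def warp(H, pts):
        q = (H @ np.column_stack([pts, np.ones(len(pts))]).T).T
        return q[:, :2] / q[:, 2:]          # de-homogenize
    r1, r2 = warp(H1, pts1), warp(H2, pts2)
    return np.abs(r1[:, 0] - r2[:, 0]).mean()
```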
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).