Real-Time Monocular Visual Odometry for Turbid and Dynamic Underwater Environments
Figure 1. Remotely Operated Vehicle Dumbo performing archaeological sampling on an ancient shipwreck in the Mediterranean Sea (460 m deep). This archaeological operation was performed under the supervision of the French Department of Underwater Archaeology (DRASSM), in accordance with the UNESCO Convention on the Protection of the Underwater Cultural Heritage. Credit: F. Osada, DRASSM - Images Explorations.

Figure 2. Images used to evaluate the tracking of features. (a) Images from the TURBID dataset [24]. (b) Images acquired on a deep ancient shipwreck (depth: 500 m, Corsica, France). Credit: DRASSM (French Department of Underwater Archaeological Research).

Figure 3. Evaluation of feature tracking methods on the TURBID dataset [24] (presented in Figure 2a). Graphs (a) and (b) show the number of features respectively detected and tracked with different detectors, while (c) and (d) show the number of features detected with the Harris corner detector and tracked as before (the SURF and SIFT curves coincide with the Harris-KLT one in (c)).

Figure 4. Evaluation of feature tracking methods on a real underwater sequence (presented in Figure 2b). Graphs (a) and (b) show the number of features respectively detected and tracked with different detectors, while (c) and (d) show the number of features detected with the Harris corner detector and tracked as before (the SIFT curve coincides with the SURF one in (c)).

Figure 5. Pipeline of the proposed visual odometry algorithm.

Figure 6. The four turbidity levels of the simulated dataset.

Figure 7. Drift of ORB-SLAM (green), V.O. ORB-SLAM (blue), and UW-VO (red) on the simulated underwater dataset.

Figure 8. Trajectories estimated with (a) our method on the sequence with the highest noise level and with (b) V.O. ORB-SLAM on the sequence with noise level 3.

Figure 9. Trajectories of ORB-SLAM, SVO, and UW-VO over the five underwater sequences. (a) Sequence 1, (b) Sequence 2, (c) Sequence 3, (d) Sequence 4, (e) Sequence 5. Ground truths (GT) are extracted from Colmap trajectories.
Abstract
1. Introduction
- A thorough evaluation of visual feature tracking methods on underwater images.
- The development of UW-VO: a monocular VO method robust to turbidity and to short occlusions caused by the dynamism of the environment (animals, algae, etc.).
- An evaluation of state-of-the-art open-source monocular VO and VSLAM algorithms on underwater datasets and a comparison to UW-VO, highlighting its robustness to underwater visual degradation.
- The release of a dataset consisting of video sequences taken on an underwater archaeological site (https://seafile.lirmm.fr/d/aa84057dc29a4af8ae4a/).
2. Related Work
2.1. Underwater Feature Tracking
2.2. Underwater Visual Localization
2.3. Aerial and Terrestrial Visual Localization
3. Feature Tracking Methods Evaluation
3.1. Underwater Sets of Images
3.2. Feature Tracking Methods
3.3. Evaluation Protocol
- we divide each image into 500 cells and try to extract one feature per cell (see the extraction sketch after this list)
- we track the features extracted in one image into the following one (i.e., the image captured right after milk has been added)
- before each tracking, the second image is virtually translated by 10 pixels to avoid initializing the KLT tracker at the true feature locations
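To make this protocol concrete, here is a minimal sketch of the grid-based extraction step, assuming OpenCV, a grayscale input image, and a 25 × 20 grid (500 cells); the grid shape and detector parameters are illustrative choices, not the authors' exact settings.

```python
import cv2
import numpy as np

def detect_one_feature_per_cell(img_gray, grid_cols=25, grid_rows=20):
    """Divide the image into grid_cols * grid_rows cells (500 by default)
    and keep at most one Harris corner per cell."""
    h, w = img_gray.shape
    cell_h, cell_w = h // grid_rows, w // grid_cols
    features = []
    for r in range(grid_rows):
        for c in range(grid_cols):
            y0, x0 = r * cell_h, c * cell_w
            cell = img_gray[y0:y0 + cell_h, x0:x0 + cell_w]
            # keep the single best Harris-scored corner of the cell, if any
            pts = cv2.goodFeaturesToTrack(cell, maxCorners=1, qualityLevel=0.01,
                                          minDistance=1, useHarrisDetector=True)
            if pts is not None:
                x, y = pts[0].ravel()
                features.append((x0 + x, y0 + y))
    return np.array(features, dtype=np.float32)
```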
- we divide each image into 500 cells and try to extract one feature per cell
- we try to match the features extracted in the first image to all the following ones (see the matching sketch after this list)
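As an illustration of this matching protocol, here is a sketch using ORB descriptors and brute-force Hamming matching with a ratio test; the descriptor choice and the 0.75 ratio are assumptions made for the example (the evaluation itself compares several detector/descriptor combinations).

```python
import cv2

def match_descriptors(img_ref, img_cur, ratio=0.75):
    """Match ORB features from a reference image to a later image of the
    series (both assumed to be grayscale numpy arrays)."""
    orb = cv2.ORB_create(nfeatures=500)
    kp_ref, des_ref = orb.detectAndCompute(img_ref, None)
    kp_cur, des_cur = orb.detectAndCompute(img_cur, None)
    # brute-force Hamming matching, keeping only unambiguous matches (ratio test)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = bf.knnMatch(des_ref, des_cur, k=2)
    return [m[0] for m in matches
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
```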
- we divide the first image into 500 cells and try to extract one feature per cell
- we try to track these features sequentially (image-to-image) by computing optical flow in a forward-backward fashion and remove features whose forward-backward deviation exceeds 2 pixels (see the tracking sketch after this list)
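A minimal sketch of this forward-backward consistency check, built on OpenCV's pyramidal KLT implementation [Bouguet]; the window size and pyramid depth are assumed values, not necessarily those used in the paper.

```python
import cv2
import numpy as np

def track_forward_backward(img0, img1, pts0, max_fb_error=2.0):
    """Track pts0 (N x 1 x 2 float32) from img0 to img1, track the result back,
    and reject points whose round trip deviates by more than max_fb_error px."""
    lk = dict(winSize=(21, 21), maxLevel=3,
              criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    pts1, st1, _ = cv2.calcOpticalFlowPyrLK(img0, img1, pts0, None, **lk)
    pts0_back, st2, _ = cv2.calcOpticalFlowPyrLK(img1, img0, pts1, None, **lk)
    # forward-backward deviation of each point, in pixels
    fb_error = np.linalg.norm(pts0 - pts0_back, axis=-1).reshape(-1)
    good = (st1.reshape(-1) == 1) & (st2.reshape(-1) == 1) & (fb_error < max_fb_error)
    return pts1[good], good
```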
3.4. Results
4. The Visual Odometry Framework
4.1. Frame-to-Frame Feature Tracking
4.2. Feature Retracking
4.3. Pose Estimation
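The solvers cited in the references (Nistér's five-point relative pose algorithm, Kneip's P3P, RANSAC) point to the standard two-view estimation pipeline. The following is a minimal sketch of the 2D-2D case with OpenCV, not the authors' exact implementation; the matched pixel coordinates `pts1`, `pts2` and the intrinsic matrix `K` are assumed inputs.

```python
import cv2

def relative_pose(pts1, pts2, K):
    """Estimate the relative pose between two frames from matched pixel
    coordinates (N x 2 float32 arrays) and the 3 x 3 intrinsic matrix K."""
    # five-point essential matrix estimation inside a RANSAC loop
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    # decompose E into rotation R and translation t, using a cheirality
    # check on the inliers to pick the right one of the four solutions
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t, mask
```

In the monocular setting, `t` is recovered only up to an unknown scale factor.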
4.4. Keyframe Selection and Mapping
4.5. Windowed Local Bundle Adjustment
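The references indicate g2o [Kümmerle et al.] as the optimization back-end. As a compact illustration of the cost a windowed local bundle adjustment minimizes (the total reprojection error over the poses and map points of a sliding window), here is a dense SciPy sketch; `x0`, `n_cams`, `n_pts`, `K`, and `observations` are assumed inputs, and a real implementation would exploit the sparse structure of the Jacobian.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, n_cams, n_pts, K, observations):
    """observations: iterable of (cam_idx, pt_idx, u, v) pixel measurements."""
    cams = params[:n_cams * 6].reshape(n_cams, 6)    # rotation vector + translation
    points = params[n_cams * 6:].reshape(n_pts, 3)   # 3D map points
    res = []
    for cam_idx, pt_idx, u, v in observations:
        rvec, t = cams[cam_idx, :3], cams[cam_idx, 3:]
        p_cam = Rotation.from_rotvec(rvec).apply(points[pt_idx]) + t
        uv = K @ p_cam                               # pinhole projection
        res.extend([uv[0] / uv[2] - u, uv[1] / uv[2] - v])
    return np.array(res)

def local_bundle_adjustment(x0, n_cams, n_pts, K, observations):
    """x0 stacks the window's 6-DoF poses and map points; the Huber loss
    down-weights outlier observations during the optimization."""
    return least_squares(reprojection_residuals, x0, loss='huber', f_scale=1.0,
                         args=(n_cams, n_pts, K, observations))
```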
4.6. Initialization
5. Experimental Results
5.1. Results on a Simulated Underwater Dataset
5.2. Results on Real Underwater Video Sequences
- Sequence 1: low level of turbidity and almost no fish.
- Sequence 2: medium level of turbidity and some fish.
- Sequence 3: high level of turbidity and many fish.
- Sequence 4: low level of turbidity and many fish.
- Sequence 5: medium level of turbidity and many fish.
6. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Paull, L.; Saeedi, S.; Seto, M.; Li, H. AUV navigation and localization: A review. IEEE J. Oceanic Eng. 2014, 39, 131–149.
- Eustice, R.M.; Pizarro, O.; Singh, H. Visually Augmented Navigation for Autonomous Underwater Vehicles. IEEE J. Oceanic Eng. 2008, 33, 103–122.
- Johnson-Roberson, M.; Pizarro, O.; Williams, S.B.; Mahon, I. Generation and visualization of large-scale three-dimensional reconstructions from underwater robotic surveys. J. Field Rob. 2010, 27, 21–51.
- Mahon, I.; Williams, S.B.; Pizarro, O.; Johnson-Roberson, M. Efficient View-Based SLAM Using Visual Loop Closures. IEEE Trans. Rob. 2008, 24, 1002–1014.
- Beall, C.; Lawrence, B.J.; Ila, V.; Dellaert, F. 3D reconstruction of underwater structures. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Taipei, Taiwan, 18–22 October 2010; pp. 4418–4423.
- Warren, M.; Corke, P.; Pizarro, O.; Williams, S.; Upcroft, B. Visual sea-floor mapping from low overlap imagery using bi-objective bundle adjustment and constrained motion. In Proceedings of the Australasian Conference on Robotics and Automation, Wellington, New Zealand, 3–5 December 2012.
- Carrasco, P.L.N.; Bonin-Font, F.; Campos, M.M.; Codina, G.O. Stereo-Vision Graph-SLAM for Robust Navigation of the AUV SPARUS II. IFAC-PapersOnLine 2015, 48, 200–205.
- Cadena, C.; Carlone, L.; Carrillo, H.; Latif, Y.; Scaramuzza, D.; Neira, J.; Reid, I.; Leonard, J.J. Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age. IEEE Trans. Rob. 2016, 32, 1309–1332.
- Carrasco, P.L.N.; Bonin-Font, F.; Oliver, G. Cluster-based loop closing detection for underwater SLAM in feature-poor regions. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 2589–2595.
- Weidner, N.; Rahman, S.; Li, A.Q.; Rekleitis, I. Underwater cave mapping using stereo vision. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 5709–5715.
- Ribas, D.; Ridao, P.; Tardos, J.D.; Neira, J. Underwater SLAM in a marina environment. In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Diego, CA, USA, 29 October–2 November 2007; pp. 1455–1460.
- White, C.; Hiranandani, D.; Olstad, C.S.; Buhagiar, K.; Gambin, T.; Clark, C.M. The Malta cistern mapping project: Underwater robot mapping and localization within ancient tunnel systems. J. Field Rob. 2010, 27, 399–411.
- Yuan, X.; Martínez-Ortega, J.; Fernández, J.A.S.; Eckert, M. AEKF-SLAM: A New Algorithm for Robotic Underwater Navigation. Sensors 2017, 17, 1174.
- Bonin-Font, F.; Oliver, G.; Wirth, S.; Massot, M.; Negre, P.L.; Beltran, J.P. Visual sensing for autonomous underwater exploration and intervention tasks. Ocean Eng. 2015, 93, 25–44.
- Palomeras, N.; Vallicrosa, G.; Mallios, A.; Bosch, J.; Vidal, E.; Hurtos, N.; Carreras, M.; Ridao, P. AUV homing and docking for remote operations. Ocean Eng. 2018, 154, 106–120.
- Triggs, B.; McLauchlan, P.F.; Hartley, R.I.; Fitzgibbon, A.W. Bundle Adjustment—A Modern Synthesis. In Vision Algorithms: Theory and Practice; Springer: Berlin/Heidelberg, Germany, 2000; pp. 298–372.
- Aulinas, J.; Carreras, M.; Llado, X.; Salvi, J.; Garcia, R.; Prados, R.; Petillot, Y.R. Feature extraction for underwater visual SLAM. In Proceedings of the OCEANS 2011, Santander, Spain, 6–9 June 2011.
- Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded Up Robust Features. In Proceedings of the European Conference on Computer Vision (ECCV), Graz, Austria, 7–13 May 2006; pp. 404–417.
- Shkurti, F.; Rekleitis, I.; Dudek, G. Feature Tracking Evaluation for Pose Estimation in Underwater Environments. In Proceedings of the 2011 Canadian Conference on Computer and Robot Vision, St. John's, NL, Canada, 25–27 May 2011; pp. 160–167.
- Shi, J.; Tomasi, C. Good features to track. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 21–23 June 1994; pp. 593–600.
- Calonder, M.; Lepetit, V.; Ozuysal, M.; Trzcinski, T.; Strecha, C.; Fua, P. BRIEF: Computing a Local Binary Descriptor Very Fast. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 1281–1298.
- Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011; pp. 2564–2571.
- Garcia, R.; Gracias, N. Detection of interest points in turbid underwater images. In Proceedings of the OCEANS 2011, Santander, Spain, 6–9 June 2011.
- Codevilla, F.; Gaya, J.D.O.; Filho, N.D.; Botelho, S.S.C.C. Achieving Turbidity Robustness on Underwater Images Local Feature Detection. In Proceedings of the British Machine Vision Conference (BMVC), Swansea, UK, 7–10 September 2015.
- Mikolajczyk, K.; Tuytelaars, T.; Schmid, C.; Zisserman, A.; Matas, J.; Schaffalitzky, F.; Kadir, T.; Gool, L.V. A Comparison of Affine Region Detectors. Int. J. Comput. Vision 2005, 65, 43–72.
- Pfingsthorn, M.; Rathnam, R.; Luczynski, T.; Birk, A. Full 3D navigation correction using low frequency visual tracking with a stereo camera. In Proceedings of the OCEANS 2016, Shanghai, China, 10–13 April 2016.
- Kim, A.; Eustice, R.M. Real-Time Visual SLAM for Autonomous Underwater Hull Inspection Using Visual Saliency. IEEE Trans. Rob. 2013, 29, 719–733.
- Corke, P.; Detweiler, C.; Dunbabin, M.; Hamilton, M.; Rus, D.; Vasilescu, I. Experiments with Underwater Robot Localization and Tracking. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation (ICRA), Rome, Italy, 10–14 April 2007; pp. 4556–4561.
- Drap, P.; Merad, D.; Hijazi, B.; Gaoua, L.; Nawaf, M.M.; Saccone, M.; Chemisky, B.; Seinturier, J.; Sourisseau, J.C.; Gambin, T.; et al. Underwater Photogrammetry and Object Modeling: A Case Study of Xlendi Wreck in Malta. Sensors 2015, 15, 30351–30384.
- Bellavia, F.; Fanfani, M.; Colombo, C. Selective visual odometry for accurate AUV localization. Auton. Robots 2017, 41, 133–143.
- Garcia, R.; Cufi, X.; Carreras, M. Estimating the motion of an underwater robot from a monocular image sequence. In Proceedings of the 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Maui, HI, USA, 29 October–3 November 2001; pp. 1682–1687.
- Gracias, N.R.; van der Zwaan, S.; Bernardino, A.; Santos-Victor, J. Mosaic-based navigation for autonomous underwater vehicles. IEEE J. Oceanic Eng. 2003, 28, 609–624.
- Negahdaripour, S.; Barufaldi, C.; Khamene, A. Integrated System for Robust 6-DOF Positioning Utilizing New Closed-Form Visual Motion Estimation Methods in Planar Terrains. IEEE J. Oceanic Eng. 2006, 31, 533–550.
- Nicosevici, T.; Gracias, N.; Negahdaripour, S.; Garcia, R. Efficient three-dimensional scene modeling and mosaicing. J. Field Rob. 2009, 26, 759–788.
- Shkurti, F.; Rekleitis, I.; Scaccia, M.; Dudek, G. State estimation of an underwater robot using visual and inertial information. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Francisco, CA, USA, 25–30 September 2011.
- Burguera, A.; Bonin-Font, F.; Oliver, G. Trajectory-Based Visual Localization in Underwater Surveying Missions. Sensors 2015, 15, 1708–1735.
- Palomeras, N.; Nagappa, S.; Ribas, D.; Gracias, N.; Carreras, M. Vision-based localization and mapping system for AUV intervention. In Proceedings of the 2013 MTS/IEEE OCEANS, Bergen, Norway, 10–14 June 2013.
- Creuze, V. Monocular Odometry for Underwater Vehicles with Online Estimation of the Scale Factor. In Proceedings of the IFAC 2017 World Congress, Toulouse, France, 9–14 July 2017.
- Strasdat, H.; Montiel, J.; Davison, A.J. Visual SLAM: Why filter? Image Vision Comput. 2012, 30, 65–77.
- Klein, G.; Murray, D. Parallel Tracking and Mapping for Small AR Workspaces. In Proceedings of the IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR), Nara, Japan, 13–16 November 2007; pp. 225–234.
- Mouragnon, E.; Lhuillier, M.; Dhome, M.; Dekeyser, F.; Sayd, P. Real Time Localization and 3D Reconstruction. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), New York, NY, USA, 17–22 June 2006; pp. 363–370.
- Strasdat, H.; Davison, A.J.; Montiel, J.M.M.; Konolige, K. Double window optimisation for constant time visual SLAM. In Proceedings of the International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011; pp. 2352–2359.
- Mur-Artal, R.; Montiel, J.M.M.; Tardós, J.D. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Trans. Rob. 2015, 31, 1147–1163.
- Forster, C.; Zhang, Z.; Gassner, M.; Werlberger, M.; Scaramuzza, D. SVO: Semidirect Visual Odometry for Monocular and Multicamera Systems. IEEE Trans. Rob. 2017, 33, 249–265.
- Engel, J.; Schops, T.; Cremers, D. LSD-SLAM: Large-Scale Direct Monocular SLAM. In Proceedings of the European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014; pp. 834–849.
- Leutenegger, S.; Lynen, S.; Bosse, M.; Siegwart, R.; Furgale, P. Keyframe-Based Visual-Inertial Odometry using Nonlinear Optimization. Int. J. Rob. Res. 2015, 34, 314–334.
- Bloesch, M.; Omari, S.; Hutter, M.; Siegwart, R. Robust visual inertial odometry using a direct EKF-based approach. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 298–304.
- Lin, Y.; Gao, F.; Qin, T.; Gao, W.; Liu, T.; Wu, W.; Yang, Z.; Shen, S. Autonomous aerial navigation using monocular visual-inertial fusion. J. Field Rob. 2018, 35, 23–51.
- Bouguet, J.Y. Pyramidal Implementation of the Lucas Kanade Feature Tracker; Intel: Santa Clara, CA, USA, 2000.
- Leutenegger, S.; Chli, M.; Siegwart, R.Y. BRISK: Binary Robust Invariant Scalable Keypoints. In Proceedings of the International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011; pp. 2548–2555.
- Rosten, E.; Porter, R.; Drummond, T. Faster and Better: A Machine Learning Approach to Corner Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 105–119.
- Alahi, A.; Ortiz, R.; Vandergheynst, P. FREAK: Fast Retina Keypoint. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012; pp. 510–517.
- Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vision 2004, 60, 91–110.
- Baker, S.; Matthews, I. Lucas-Kanade 20 Years On: A Unifying Framework. Int. J. Comput. Vision 2004, 56, 221–255.
- Fischler, M.A.; Bolles, R.C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Commun. ACM 1981, 24, 381–395.
- Nister, D. An efficient solution to the five-point relative pose problem. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 756–770.
- Kneip, L.; Scaramuzza, D.; Siegwart, R. A novel parametrization of the perspective-three-point problem for a direct computation of absolute camera position and orientation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; pp. 2969–2976.
- Hartley, R.I.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: Cambridge, UK, 2004.
- Quigley, M.; Conley, K.; Gerkey, B.P.; Faust, J.; Foote, T.; Leibs, J.; Wheeler, R.; Ng, A.Y. ROS: An open-source Robot Operating System. In Proceedings of the ICRA Workshop on Open Source Software, Kobe, Japan, 12–17 May 2009.
- Kneip, L.; Furgale, P. OpenGV: A unified and generalized approach to real-time calibrated geometric vision. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 1–8.
- Kümmerle, R.; Grisetti, G.; Strasdat, H.; Konolige, K.; Burgard, W. g2o: A general framework for graph optimization. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011; pp. 3607–3613.
- Duarte, A.C.; Zaffari, G.B.; da Rosa, R.T.S.; Longaray, L.M.; Drews, P.; Botelho, S.S.C. Towards comparison of underwater SLAM methods: An open dataset collection. In Proceedings of the 2016 MTS/IEEE OCEANS, Monterey, CA, USA, 19–23 September 2016.
- Furgale, P.; Rehder, J.; Siegwart, R. Unified temporal and spatial calibration for multi-sensor systems. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Tokyo, Japan, 3–7 November 2013; pp. 1280–1286.
- Łuczyński, T.; Pfingsthorn, M.; Birk, A. The Pinax-model for accurate and efficient refraction correction of underwater cameras in flat-pane housings. Ocean Eng. 2017, 133, 9–22.
- Schönberger, J.L.; Frahm, J.M. Structure-from-Motion Revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 4104–4113.
- Umeyama, S. Least-Squares Estimation of Transformation Parameters Between Two Point Patterns. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 376–380.
- Sturm, J.; Engelhard, N.; Endres, F.; Burgard, W.; Cremers, D. A Benchmark for the Evaluation of RGB-D SLAM Systems. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vilamoura, Portugal, 7–12 October 2012; pp. 573–580.
- Galvez-Lopez, D.; Tardos, J.D. Bags of Binary Words for Fast Place Recognition in Image Sequences. IEEE Trans. Rob. 2012, 28, 1188–1197.
Table 1. Drift (in %) of ORB-SLAM, V.O. ORB-SLAM, and UW-VO on the simulated underwater dataset at four noise (turbidity) levels.

| Seq. | Noise Level | ORB-SLAM | V.O. ORB-SLAM | UW-VO |
|---|---|---|---|---|
| 1 | None | 0.18 | 0.97 | 0.78 |
| 2 | Low | 0.18 | 0.93 | 0.81 |
| 3 | Medium | 0.17 * | 1.21 * | 0.85 |
| 4 | High | X | X | 0.89 |
Table 2. Absolute Trajectory Error RMSE (in %) of LSD-SLAM, ORB-SLAM, SVO, UW-VO*, and UW-VO on the five real underwater sequences.

| Seq. # | Duration | Turbidity Level | Short Occlusions | LSD-SLAM | ORB-SLAM | SVO | UW-VO* | UW-VO |
|---|---|---|---|---|---|---|---|---|
| 1 | 4' | Low | Few | X | 1.67 | 1.63 | 1.78 | 1.76 |
| 2 | 2'30" | Medium | Some | X | 1.91 | 2.45 | 1.78 | 1.73 |
| 3 | 22" | High | Many | X | X | 1.57 | 1.10 | 1.04 |
| 4 | 4'30" | Low | Many | X | 1.13 | X | 1.61 | 1.58 |
| 5 | 3'15" | Medium | Many | X | 1.94 | X | 2.08 | 1.88 |
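For reference, the ATE metric used above follows the usual trajectory-evaluation methodology: a least-squares similarity alignment [Umeyama] followed by the RMSE over aligned positions [Sturm et al.]. Below is a minimal sketch, assuming `gt` and `est` are synchronized N × 3 arrays of ground-truth and estimated positions; the normalization by trajectory length that yields the percentages in the table is omitted.

```python
import numpy as np

def ate_rmse(gt, est):
    """RMSE of the absolute trajectory error after Umeyama similarity alignment."""
    mu_gt, mu_est = gt.mean(axis=0), est.mean(axis=0)
    gt_c, est_c = gt - mu_gt, est - mu_est
    # closed-form similarity transform (rotation, scale, translation)
    U, D, Vt = np.linalg.svd(gt_c.T @ est_c / len(gt))
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                      # keep R a proper rotation
    R = U @ S @ Vt
    scale = np.trace(np.diag(D) @ S) / est_c.var(axis=0).sum()
    t = mu_gt - scale * R @ mu_est
    aligned = (scale * (R @ est.T)).T + t   # align estimate onto ground truth
    return np.sqrt(np.mean(np.sum((gt - aligned) ** 2, axis=1)))
```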
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).