Analysis and Improvements in AprilTag Based State Estimation
<p><b>Figure 1.</b> Four steps of the AprilTag detection algorithm, shown for an input image of an AprilTag from the 36H10 family.</p>
<p><b>Figure 2.</b> Trajectory using AprilTag detections. The trail of the transformation-frame centers that constitutes the trajectory is depicted in blue for various time instances. Here, <math display="inline"><semantics> <msub> <mi>p</mi> <msub> <mi>i</mi> <mi>x</mi> </msub> </msub> </semantics></math>, <math display="inline"><semantics> <msub> <mi>p</mi> <msub> <mi>i</mi> <mi>y</mi> </msub> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>p</mi> <msub> <mi>i</mi> <mi>z</mi> </msub> </msub> </semantics></math> of Equation (<a href="#FD1-sensors-19-05480" class="html-disp-formula">1</a>) (although not shown in the figure) give the position of the AprilTag along the <math display="inline"><semantics> <msub> <mi>x</mi> <mi>i</mi> </msub> </semantics></math>-axis, <math display="inline"><semantics> <msub> <mi>y</mi> <mi>i</mi> </msub> </semantics></math>-axis and <math display="inline"><semantics> <msub> <mi>z</mi> <mi>i</mi> </msub> </semantics></math>-axis in the respective camera frame of reference.</p>
<p><b>Figure 3.</b> Motion Capture (MoCap) setup at the LUMS Biomechanics Lab for AprilTag comparison.</p>
<p><b>Figure 4.</b> Accuracy plot for Motion Capture (MoCap).</p>
<p><b>Figure 5.</b> Photographs of the AprilTag error measurement setup from different views. (<b>Left</b>): top-down view of the setup. (<b>Middle</b>): placement of the camera in front of the AprilTag on the setup. (<b>Right</b>): side view of the measurement recording process.</p>
<p><b>Figure 6.</b> Error measurement setup showing the measurement positions and yaw angles of the camera relative to the AprilTag placed at the origin.</p>
<p><b>Figure 7.</b> Multiple raw AprilTag readings plotted for the ideal (green) and worst (blue) scenarios. Mean ground-truth (MoCap) readings are plotted as red crosses.</p>
<p><b>Figure 8.</b> Error plot with the camera’s <span class="html-italic">z</span>-axis pointed towards the center of the AprilTag. (<b>Left</b>): error plot for the <math display="inline"><semantics> <mover accent="true"> <mi>x</mi> <mo>¯</mo> </mover> </semantics></math>-axis. (<b>Right</b>): error plot for the <math display="inline"><semantics> <mover accent="true"> <mi>y</mi> <mo>¯</mo> </mover> </semantics></math>-axis.</p>
<p><b>Figure 9.</b> Plot of measurements with changing camera yaw angle ‘<math display="inline"><semantics> <mi>ϕ</mi> </semantics></math>’ for <math display="inline"><semantics> <mrow> <msup> <mn>70</mn> <mo>∘</mo> </msup> <mo>≤</mo> <mi>ϕ</mi> <mo>≤</mo> <msup> <mn>110</mn> <mo>∘</mo> </msup> </mrow> </semantics></math>.</p>
<p><b>Figure 10.</b> Error plot with the camera yaw angle ‘<math display="inline"><semantics> <mi>ϕ</mi> </semantics></math>’ fixed at <math display="inline"><semantics> <msup> <mn>110</mn> <mo>∘</mo> </msup> </semantics></math>. (<b>Left</b>): error plot for the <math display="inline"><semantics> <mover accent="true"> <mi>x</mi> <mo>¯</mo> </mover> </semantics></math>-axis. (<b>Right</b>): error plot for the <math display="inline"><semantics> <mover accent="true"> <mi>y</mi> <mo>¯</mo> </mover> </semantics></math>-axis.</p>
<p><b>Figure 11.</b> Geometrically aligning subsequent frames.</p>
<p><b>Figure 12.</b> Comparison of raw AprilTag readings and improved SYAC measurements with changing camera yaw angle ‘<math display="inline"><semantics> <mi>ϕ</mi> </semantics></math>’ for <math display="inline"><semantics> <mrow> <msup> <mn>70</mn> <mo>∘</mo> </msup> <mo>≤</mo> <mi>ϕ</mi> <mo>≤</mo> <msup> <mn>110</mn> <mo>∘</mo> </msup> </mrow> </semantics></math>. Blue circles show the clustering of the plotted data around a ground-truth point.</p>
<p><b>Figure 13.</b> Angle-wise comparison of raw AprilTag readings and improved SYAC measurements with changing camera yaw angle ‘<math display="inline"><semantics> <mi>ϕ</mi> </semantics></math>’ for <math display="inline"><semantics> <mrow> <msup> <mn>70</mn> <mo>∘</mo> </msup> <mo>≤</mo> <mi>ϕ</mi> <mo>≤</mo> <msup> <mn>110</mn> <mo>∘</mo> </msup> </mrow> </semantics></math>. The plot shows that the proposed technique significantly improves raw AprilTag measurements.</p>
<p><b>Figure 14.</b> Yaw-axis gimbal hardware setup developed by the authors. A monocular camera is mounted on a Dynamixel stepper motor, controlled by an Arduino Mega 2560. The controller runs as a slave ROS process in the localization application. The housing is a 3D-printed retrofit.</p>
<p><b>Figure 15.</b> Data scatter plot for geometrically consistent (SYAC) and inconsistent (raw AprilTag) frames with the custom-built yaw-axis gimbal.</p>
<p><b>Figure 16.</b> Comparison of the resulting data spread (precision) of the different approaches against the ground truth (MoCap) at the nominal reference point directly in front of the AprilTag, i.e., <math display="inline"><semantics> <mrow> <mrow> <mo>(</mo> <mover accent="true"> <mi>x</mi> <mo>¯</mo> </mover> <mo>,</mo> <mover accent="true"> <mi>y</mi> <mo>¯</mo> </mover> <mo>)</mo> </mrow> <mo>=</mo> <mrow> <mo>(</mo> <mn>0</mn> <mo>,</mo> <mn>70</mn> <mo>)</mo> </mrow> </mrow> </semantics></math>.</p>
<p><b>Figure 17.</b> Comparison of the resulting data spread (precision) of the different approaches against the ground truth (MoCap) at an oblique viewing angle, i.e., <math display="inline"><semantics> <mrow> <mrow> <mo>(</mo> <mover accent="true"> <mi>x</mi> <mo>¯</mo> </mover> <mo>,</mo> <mover accent="true"> <mi>y</mi> <mo>¯</mo> </mover> <mo>)</mo> </mrow> <mo>=</mo> <mrow> <mo>(</mo> <mn>20</mn> <mo>,</mo> <mn>70</mn> <mo>)</mo> </mrow> </mrow> </semantics></math>.</p>
<p><b>Figure 18.</b> Root Mean Square Error (RMSE) comparison of raw AprilTag against the proposed approaches and MoCap.</p>
<p><b>Figure 19.</b> Incremental motion model used between two configuration points <math display="inline"><semantics> <msub> <mi>c</mi> <mi>i</mi> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>c</mi> <mi>f</mi> </msub> </semantics></math>, encoded by the three parameters <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>θ</mi> <mi>i</mi> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>δ</mi> <mi>d</mi> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>θ</mi> <mi>f</mi> </msub> </mrow> </semantics></math> for the Monte Carlo simulation.</p>
<p><b>Figure 20.</b> Trajectory comparison between MoCap and the trajectory generated by the Monte Carlo simulation using the proposed AprilTag sensor model.</p>
<p><b>Figure 21.</b> (<b>Left</b>): camera view of a detected AprilTag (red polygon) outdoors. (<b>Right</b>): camera view of a detected AprilTag (red polygon) indoors. Both images show the detection polygons along with the detected tag IDs based on the embedded code.</p>
<p><b>Figure 22.</b> Trajectory generated using the Monte Carlo simulation in an outdoor environment.</p>
<p><b>Figure 23.</b> Comparison of raw AprilTag data and the proposed generalized-sensor-model-based particle filter output along the <math display="inline"><semantics> <mover accent="true"> <mi>x</mi> <mo>¯</mo> </mover> </semantics></math>-axis and <math display="inline"><semantics> <mover accent="true"> <mi>y</mi> <mo>¯</mo> </mover> </semantics></math>-axis. The dotted line shows the initialization of the yaw-axis gimbal for active correction.</p>
Abstract
1. Introduction
2. Related Work
3. Problem Setup and System Evaluation
3.1. AprilTag Working Principle
3.2. Trajectory Generation
3.3. Error Measurement Setup
Distance from Tag:
Viewing Angle:
Yaw Angle of the Viewing Camera:
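The first two of these factors can be derived from a single tag detection; below is a minimal sketch, assuming the tag position (p_ix, p_iy, p_iz) of Equation (1) is available in the camera frame (the helper name `tag_observation` is hypothetical, and the camera yaw angle would come from the estimated tag orientation rather than from this helper):

```python
import math

def tag_observation(px, py, pz):
    """Distance to the tag and its in-plane bearing relative to the
    camera's optical (z) axis, from the tag position in the camera frame."""
    distance = math.sqrt(px * px + py * py + pz * pz)
    bearing = math.degrees(math.atan2(px, pz))  # 0 deg = tag dead ahead
    return distance, bearing
```

`atan2` is used instead of a plain ratio so the bearing sign is correct on both sides of the optical axis.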
4. Improvement Techniques
4.1. Passive Correction for Frame Consistency
4.2. Active Correction with a Yaw Axis Gimbal
Algorithm 1 Active Camera Tracking of AprilTag Center
Input: camera yaw angle ‘ϕ’. Output: servo angle.
The listing defines a servo stopping threshold and a smoothing factor, and updates the servo angle so as to keep the camera facing the AprilTag center.
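The extracted listing above is incomplete, so the following is only a plausible reconstruction of one control step: an exponential-smoothing update of the servo angle toward the measured camera yaw angle ϕ, gated by a stopping threshold. The function name and default values are assumptions, not the paper's implementation:

```python
def track_tag_center(phi, servo_angle, alpha=0.2, eps_deg=1.0):
    """One control step that turns the servo toward the measured camera
    yaw angle phi so the camera keeps facing the AprilTag center.
    alpha is the smoothing factor, eps_deg the servo stopping threshold."""
    error = phi - servo_angle
    if abs(error) < eps_deg:            # within threshold: hold position
        return servo_angle
    return servo_angle + alpha * error  # smoothed step toward the tag
```

Iterating this update converges the servo to within `eps_deg` of ϕ, while the smoothing factor damps jitter in the yaw measurements.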
4.3. Comparative Results
4.4. Probabilistic Sensor Model for AprilTag
Experimental Verification of Sensor Model
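For reproducing the Monte Carlo trajectories, the incremental motion model from the figures (parameters δθ_i, δd and δθ_f between configurations c_i and c_f) can be sketched as below. The noise magnitudes, function names, and the choice of Gaussian noise are illustrative assumptions, not values from the paper:

```python
import math
import random

def propagate(pose, d_theta_i, d_d, d_theta_f):
    """Three-parameter incremental motion model: rotate by d_theta_i,
    translate d_d along the new heading, then rotate by d_theta_f."""
    x, y, theta = pose
    theta_mid = theta + d_theta_i
    x += d_d * math.cos(theta_mid)
    y += d_d * math.sin(theta_mid)
    return (x, y, theta_mid + d_theta_f)

def sample_motion(pose, d_theta_i, d_d, d_theta_f,
                  sigma_rot=0.02, sigma_trans=0.5, rng=random):
    """Draw one noisy sample of the motion, as a Monte Carlo /
    particle-filter propagation step would."""
    return propagate(pose,
                     d_theta_i + rng.gauss(0.0, sigma_rot),
                     d_d + rng.gauss(0.0, sigma_trans),
                     d_theta_f + rng.gauss(0.0, sigma_rot))
```

Applying `sample_motion` to every particle, then weighting particles by the AprilTag sensor model, yields a trajectory estimate of the kind compared against MoCap in the figures.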
5. Conclusions
6. Code & Dataset
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
Abbreviations
MAV | Micro Aerial Vehicle |
UAV | Unmanned Aerial Vehicle |
2D | Two Dimensional |
3D | Three Dimensional |
DOF | Degree of Freedom |
MoCap | Motion Capture System (Vicon MX F-49) |
PID | Proportional-Integral-Derivative |
GP | Gaussian Processes |
InC | Inconsistent |
C | Consistent |
G+InC | Gimbal with inconsistent frames |
G+Con | Gimbal with consistent frames |
References
- Leonard, J.J.; Durrant-Whyte, H.F. Mobile robot localization by tracking geometric beacons. IEEE Trans. Robot. Autom. 1991, 7, 376–382. [Google Scholar] [CrossRef]
- Thrun, S.; Burgard, W.; Fox, D. Probabilistic Robotics; MIT Press: Cambridge, MA, USA, 2005. [Google Scholar]
- Moeslund, T.B.; Hilton, A.; Krüger, V. A survey of advances in vision-based human motion capture and analysis. Comput. Vis. Image Underst. 2006, 104, 90–126. [Google Scholar] [CrossRef]
- Fiala, M. Comparing ARTag and ARToolkit Plus fiducial marker systems. In Proceedings of the IEEE International Workshop on Haptic Audio Visual Environments and their Applications. IEEE, Ottawa, ON, Canada, 1–2 October 2005; pp. 148–153. [Google Scholar]
- Reina, G.; Vargas, A.; Nagatani, K.; Yoshida, K. Adaptive kalman filtering for gps-based mobile robot localization. In Proceedings of the 2007 IEEE International Workshop on Safety, Security and Rescue Robotics. IEEE, Rome, Italy, 27–29 September 2007; pp. 1–6. [Google Scholar]
- Olson, E. AprilTag: A robust and flexible visual fiducial system. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation. IEEE, Shanghai, China, 9–13 May 2011; pp. 3400–3407. [Google Scholar]
- Owen, C.B.; Xiao, F.; Middlin, P. What is the best fiducial? In Proceedings of the First IEEE International Workshop on Augmented Reality Toolkit, Darmstadt, Germany, 29 September 2002; p. 8. [Google Scholar]
- Kato, H.; Billinghurst, M. Marker tracking and hmd calibration for a video-based augmented reality conferencing system. In Proceedings of the 2nd IEEE and ACM International Workshop on Augmented Reality (IWAR’99). IEEE, San Francisco, CA, USA, 20–21 October 1999; pp. 85–94. [Google Scholar]
- Cho, Y.; Lee, J.; Neumann, U. A multi-ring color fiducial system and an intensity-invariant detection method for scalable fiducial-tracking augmented reality. In Proceedings of the International Workshop on Augmented Reality (IWAR), San Francisco, CA, USA, 1 November 1998. [Google Scholar]
- López de Ipiña, D.; Mendonça, P.R.; Hopper, A. TRIP: A low-cost vision-based location system for ubiquitous computing. Pers. Ubiquitous Comput. 2002, 6, 206–219. [Google Scholar] [CrossRef]
- Fiala, M. ARTag, a fiducial marker system using digital techniques. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; Volume 2, pp. 590–596. [Google Scholar]
- Wagner, D.; Schmalstieg, D. Artoolkitplus for Pose Tracking on Mobile Devices. 2007. Available online: www.researchgate.net/publication/216813818_ARToolKitPlus_for_Pose_Tracking_on_Mobile_Devices (accessed on 30 October 2019).
- Xu, A.; Dudek, G. Fourier tag: A smoothly degradable fiducial marker system with configurable payload capacity. In Proceedings of the 2011 Canadian Conference on Computer and Robot Vision. IEEE, St. Johns, NL, Canada, 25–27 May 2011; pp. 40–47. [Google Scholar]
- Bergamasco, F.; Albarelli, A.; Rodola, E.; Torsello, A. Rune-tag: A high accuracy fiducial marker with strong occlusion resilience. In Proceedings of the CVPR 2011. IEEE, Providence, RI, USA, 20–25 June 2011; pp. 113–120. [Google Scholar]
- Edwards, M.J.; Hayes, M.P.; Green, R.D. High-accuracy fiducial markers for ground truth. In Proceedings of the 2016 International Conference on Image and Vision Computing New Zealand (IVCNZ). IEEE, Palmerston North, New Zealand, 21–22 November 2016; pp. 1–6. [Google Scholar]
- Sagitov, A.; Shabalina, K.; Lavrenov, R.; Magid, E. Comparing fiducial marker systems in the presence of occlusion. In Proceedings of the 2017 International Conference on Mechanical, System and Control Engineering (ICMSC). IEEE, St. Petersburg, Russia, 19–21 May 2017; pp. 377–382. [Google Scholar]
- Feng, C.; Kamat, V.R. Augmented reality markers as spatial indices for indoor mobile AECFM applications. In Proceedings of the 12th International Conference on Construction Applications of Virtual Reality (CONVR 2012), Taipei, Taiwan, 1–2 November 2012. [Google Scholar]
- Li, J.; Slembrouck, M.; Deboeverie, F.; Bernardos, A.M.; Besada, J.A.; Veelaert, P.; Aghajan, H.; Philips, W.; Casar, J.R. A hybrid pose tracking approach for handheld augmented reality. In Proceedings of the 9th International Conference on Distributed Smart Cameras. ACM, Seville, Spain, 8–11 September 2015; pp. 7–12. [Google Scholar]
- Wang, J.; Sadler, C.; Montoya, C.F.; Liu, J.C. Optimizing ground vehicle tracking using unmanned aerial vehicle and embedded apriltag design. In Proceedings of the 2016 International Conference on Computational Science and Computational Intelligence (CSCI). IEEE, Las Vegas, NV, USA, 15–17 December 2016; pp. 739–744. [Google Scholar]
- Wang, K.; Phang, S.K.; Ke, Y.; Chen, X.; Gong, K.; Chen, B.M. Vision-aided tracking of a moving ground vehicle with a hybrid uav. In Proceedings of the 2017 13th IEEE International Conference on Control & Automation (ICCA). IEEE, Ohrid, Macedonia, 3–6 July 2017; pp. 28–33. [Google Scholar]
- Ling, K.; Chow, D.; Das, A.; Waslander, S.L. Autonomous maritime landings for low-cost vtol aerial vehicles. In Proceedings of the 2014 Canadian Conference on Computer and Robot Vision. IEEE, Montreal, QC, Canada, 6–9 May 2014; pp. 32–39. [Google Scholar]
- Zhang, Y.; Yu, Y.; Jia, S.; Wang, X. Autonomous landing on ground target of UAV by using image-based visual servo control. In Proceedings of the 2017 36th Chinese Control Conference (CCC). IEEE, Dalian, China, 26–28 July 2017; pp. 11204–11209. [Google Scholar]
- Jiaxin, H.; Yanning, G.; Zhen, F.; Yuqing, G. Vision-based autonomous landing of unmanned aerial vehicles. In Proceedings of the 2017 Chinese Automation Congress (CAC). IEEE, Jinan, China, 20–22 October 2017; pp. 3464–3469. [Google Scholar]
- Tang, D.; Hu, T.; Shen, L.; Ma, Z.; Pan, C. AprilTag array-aided extrinsic calibration of camera–laser multi-sensor system. Robot. Biomim. 2016, 3, 13. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Ramirez, E.A. An Experimental Study of Mobile Device Localization. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2015. Available online: https://dspace.mit.edu/handle/1721.1/98770 (accessed on 30 October 2019).
- Parkison, S.A.; Psota, E.T.; Pérez, L.C. Automated indoor RFID inventorying using a self-guided micro-aerial vehicle. In Proceedings of the IEEE International Conference on Electro/Information Technology. IEEE, Milwaukee, WI, USA, 5–7 June 2014; pp. 335–340. [Google Scholar]
- Raina, S.; Chang, H.Y.; Sarkar, S.; Chen, M.N.; Cai, Y. An Integrated System for 3D Pose Estimation in Cluttered Environments. Available online: https://mrsd.ri.cmu.edu/wp-content/uploads/2017/07/Team8Report.pdf (accessed on 30 October 2019).
- Maragh, J.M. Dynamic Tracking With AprilTags for Robotic Education. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2013. Available online: https://dspace.mit.edu/handle/1721.1/83725 (accessed on 30 October 2019).
- Zake, Z.; Caro, S.; Roos, A.S.; Chaumette, F.; Pedemonte, N. Stability Analysis of Pose-Based Visual Servoing Control of Cable-Driven Parallel Robots. In Proceedings of the International Conference on Cable-Driven Parallel Robots, Krakow, Poland, 30 June–4 July 2019; Springer; pp. 73–84. [Google Scholar]
- Florea, A.G.; Buiu, C. Sensor Fusion for Autonomous Drone Waypoint Navigation Using ROS and Numerical P Systems: A Critical Analysis of Its Advantages and Limitations. In Proceedings of the 2019 22nd International Conference on Control Systems and Computer Science (CSCS). IEEE, Bucharest, Romania, 28–30 May 2019; pp. 112–117. [Google Scholar]
- Britto, J.; Cesar, D.; Saback, R.; Arnold, S.; Gaudig, C.; Albiez, J. Model identification of an unmanned underwater vehicle via an adaptive technique and artificial fiducial markers. In Proceedings of the OCEANS 2015-MTS/IEEE Washington. IEEE, Washington, DC, USA, 19–22 October 2015; pp. 1–6. [Google Scholar]
- Fuchs, C.; Neuhaus, F.; Paulus, D. 3D pose estimation for articulated vehicles using Kalman-filter based tracking. Pattern Recognit. Image Anal. 2016, 26, 109–113. [Google Scholar] [CrossRef]
- Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef] [Green Version]
- Mueggler, E.; Faessler, M.; Fontana, F.; Scaramuzza, D. Aerial-guided navigation of a ground robot among movable obstacles. In Proceedings of the 2014 IEEE International Symposium on Safety, Security, and Rescue Robotics (2014). IEEE, Hokkaido, Japan, 27–30 October 2014; pp. 1–8. [Google Scholar]
- Xie, Y.; Shao, R.; Guli, P.; Li, B.; Wang, L. Infrastructure based calibration of a multi-camera and multi-lidar system using apriltags. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV). IEEE, Changshu, China, 26–30 June 2018; pp. 605–610. [Google Scholar]
- Nissler, C.; Marton, Z.C. Robot-to-Camera Calibration: A Generic Approach Using 6D Detections. In Proceedings of the 2017 First IEEE International Conference on Robotic Computing (IRC). IEEE, Taichung, Taiwan, 10–12 April 2017; pp. 299–302. [Google Scholar]
- de Almeida Barbosa, J.P.; Dias, S.S.; dos Santos, D.A. A Visual-Inertial Navigation System Using AprilTag for Real-Time MAV Applications. In Proceedings of the 2018 25th International Conference on Mechatronics and Machine Vision in Practice (M2VIP). IEEE, Stuttgart, Germany, 20–22 November 2018; pp. 1–7. [Google Scholar]
- Abawi, D.F.; Bienwald, J.; Dorner, R. Accuracy in optical tracking with fiducial markers: An accuracy function for ARToolKit. In Proceedings of the 3rd IEEE/ACM International Symposium on Mixed and Augmented Reality. IEEE Computer Society, Arlington, VA, USA, 2–5 November 2004; pp. 260–261. [Google Scholar]
- Wang, J.; Olson, E. AprilTag 2: Efficient and robust fiducial detection. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, Daejeon, Korea, 9–14 October 2016; pp. 4193–4198. [Google Scholar]
- Jin, P.; Matikainen, P.; Srinivasa, S.S. Sensor fusion for fiducial tags: Highly robust pose estimation from single frame RGBD. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, Vancouver, BC, Canada, 24–28 September 2017; pp. 5770–5776. [Google Scholar]
- Zhenglong, G.; Qiang, F.; Quan, Q. Pose Estimation for Multicopters Based on Monocular Vision and AprilTag. In Proceedings of the 2018 37th Chinese Control Conference (CCC). IEEE, Wuhan, China, 26–27 July 2018; pp. 4717–4722. [Google Scholar]
- Kayhani, N.; Heins, A.; Zhao, W.; Nahangi, M.; McCabe, B.; Schoelligb, A.P. Improved Tag-based Indoor Localization of UAVs Using Extended Kalman Filter. In Proceedings of the ISARC. International Symposium on Automation and Robotics in Construction, Banff, AB, Canada, 21–24 May 2019; Volume 36, pp. 624–631. [Google Scholar]
- Plungis, J. Self-driving cars: Driving into the future. Consum. Rep. 2017. Available online: https://velodynelidar.com/docs/news/Self-Driving%20Cars_%20Driving%20Into%20the%20Future%20-%20Consumer%20Reports.pdf (accessed on 30 October 2019).
- Rasmussen, C.E. Gaussian processes in machine learning. In Summer School on Machine Learning; Springer: Tübingen, Germany, 2003; pp. 63–71. [Google Scholar]
- Choset, H.M.; Hutchinson, S.; Lynch, K.M.; Kantor, G.; Burgard, W.; Kavraki, L.E.; Thrun, S. Principles of Robot Motion: Theory, Algorithms, and Implementation; MIT Press: Cambridge, MA, USA, 2005. [Google Scholar]
- Thrun, S.; Fox, D.; Burgard, W.; Dellaert, F. Robust Monte Carlo localization for mobile robots. Artif. Intell. 2001, 128, 99–141. [Google Scholar] [CrossRef] [Green Version]
- Abbas, S.M. AprilTag Code & Datasets: For Analysis & Improvement 2019. Available online: http://cyphynets.lums.edu.pk/index.php/Apriltag (accessed on 31 October 2019).
Tag Names | Key Features |
---|---|
ARToolkit [8] | Uses a solid black outline for quick and robust detection. |
Multi-ring Marker [9] | Uses colored rings instead of a black marker for more robust detection. |
TRIP [10] | Uses a 2D circular mark for location identification. |
ARTag [11] | Robust marker detection under different lighting conditions. |
ARToolKitPlus [12] | ARToolKit algorithm optimized for embedded devices. |
Fourier-Tag [13] | Uses a robust tag-encoding scheme based on the phase spectrum of a 1-D (gray-scale) signal. |
RUNE-Tag [14] | Uses perspective properties of circular dots for high accuracy and robustness. |
CircularTag [15] | Uses circular features and non-linear optimization to further increase accuracy. |
AprilTag [6] | Uses a stronger digital encoding; robust to different lighting conditions and occlusion. |
Nominal Reference Points | Motion Capture (MoCap) Readings | |||||
---|---|---|---|---|---|---|
x̄ (cm) | ȳ (cm) | N | mean x̄ (cm) | mean ȳ (cm) | std x̄ (cm) | std ȳ (cm)
0 | 30 | 217 | 0.7062 | 30.3437 | ||
6 | 30 | 195 | 6.9367 | 29.5193 | ||
−6 | 30 | 193 | −3.9230 | 30.1210 | ||
0 | 50 | 204 | 2.9902 | 50.0462 | ||
15 | 50 | 202 | 17.7118 | 49.6042 | ||
−15 | 50 | 205 | −12.0469 | 50.8171 | ||
0 | 70 | 217 | 3.2683 | 70.0810 | ||
20 | 70 | 199 | 23.3681 | 69.4768 | ||
−20 | 70 | 212 | −16.8097 | 70.9505 |
Nominal Reference Points | Ground Truth (MoCap) | AprilTag Readings | ||||||
---|---|---|---|---|---|---|---|---|
x̄ (cm) | ȳ (cm) | x̄ (cm) | ȳ (cm) | N | mean x̄ (cm) | mean ȳ (cm) | std x̄ (cm) | std ȳ (cm)
0 | 30 | 0.0166 | 30.020 | 110 | −0.2420 | 30.1510 | ||
6 | 30 | 7.0161 | 30.010 | 115 | 7.0161 | 29.8573 | ||
−6 | 30 | −5.9798 | 29.92 | 80 | −5.9798 | 30.4293 | ||
0 | 50 | 0.102 | 49.960 | 107 | 0.7571 | 50.0819 | ||
15 | 50 | 14.952 | 50.69 | 113 | 16.9141 | 49.4206 | ||
−15 | 50 | −14.98 | 49.90 | 103 | −16.9185 | 49.4014 | ||
0 | 70 | 0.003 | 70.05 | 134 | 1.3080 | 70.0264 | ||
20 | 70 | 20.06 | 70.06 | 144 | 22.3574 | 69.0718 | ||
−20 | 70 | −20.01 | 70.02 | 151 | −21.7560 | 69.5979 |
Nominal Reference Points | Ground Truth (MoCap) | AprilTag Readings | ||||||
---|---|---|---|---|---|---|---|---|
x̄ (cm) | ȳ (cm) | x̄ (cm) | ȳ (cm) | N | mean x̄ (cm) | mean ȳ (cm) | std x̄ (cm) | std ȳ (cm)
0 | 30 | 0.7062 | 30.3437 | 152 | −1.1162 | 30.1501 | ||
6 | 30 | 6.9367 | 29.5193 | 182 | 5.4598 | 29.4149 | ||
−6 | 30 | −3.9230 | 30.1210 | 162 | −6.4092 | 29.6787 | ||
0 | 50 | 2.9902 | 50.0462 | 133 | −1.3508 | 49.1515 | ||
15 | 50 | 17.7118 | 49.6042 | 144 | 13.8889 | 48.8854 | ||
−15 | 50 | −12.0469 | 50.8171 | 186 | −15.5709 | 48.3046 | ||
0 | 70 | 3.2683 | 70.0810 | 163 | −0.0285 | 68.3890 | ||
20 | 70 | 23.3681 | 69.4768 | 149 | 23.4214 | 67.4216 | ||
−20 | 70 | −16.8097 | 70.9505 | 140 | −24.4433 | 67.4337 |
Nominal Reference Points | Ground Truth (MoCap) | AprilTag Readings | |||||||
---|---|---|---|---|---|---|---|---|---|
x̄ (cm) | ȳ (cm) | ϕ (deg) | x̄ (cm) | ȳ (cm) | N | mean x̄ (cm) | mean ȳ (cm) | std x̄ (cm) | std ȳ (cm)
0 | 30 | 110 | 0.020 | 30.001 | 113 | −1.2057 | 30.5027 | ||
10 | 30 | 110 | 10.07 | 30.01 | 154 | −0.3694 | 30.7558 | ||
−10 | 30 | 110 | −9.67 | 30.06 | 178 | −0.5791 | 31.1269 | ||
0 | 50 | 110 | 0.100 | 50.071 | 147 | −0.8202 | 50.5234 | ||
15 | 50 | 110 | 14.960 | 50.100 | 120 | −1.9371 | 51.8464 | ||
−15 | 50 | 110 | −14.91 | 49.97 | 117 | −2.2290 | 52.4599 | ||
0 | 70 | 110 | 0.03 | 70.01 | 184 | 1.1841 | 70.4607 | ||
20 | 70 | 110 | 20.10 | 69.98 | 102 | 0.3113 | 71.4872 | ||
−20 | 70 | 110 | −20.08 | 70.05 | 128 | −1.8619 | 72.1167 |
Nominal Reference Points | Ground Truth (MoCap) | AprilTag Readings | ||||||
---|---|---|---|---|---|---|---|---|
x̄ (cm) | ȳ (cm) | x̄ (cm) | ȳ (cm) | N | mean x̄ (cm) | mean ȳ (cm) | std x̄ (cm) | std ȳ (cm)
0 | 30 | 0.7062 | 30.3437 | 113 | −0.5813 | 30.6166 | ||
6 | 30 | 6.9367 | 29.5193 | 154 | 6.4232 | 29.7444 | ||
−6 | 30 | −3.9230 | 30.1210 | 178 | −6.2826 | 30.5416 | ||
0 | 50 | 2.9902 | 50.0462 | 147 | 0.1930 | 51.3185 | ||
15 | 50 | 17.7118 | 49.6042 | 120 | 17.3150 | 49.5959 | ||
−15 | 50 | −12.0469 | 50.8171 | 117 | −16.2733 | 50.1972 | ||
0 | 70 | 3.2683 | 70.0810 | 184 | 1.8551 | 71.8400 | ||
20 | 70 | 23.3681 | 69.4768 | 102 | 24.8826 | 70.5867 | ||
−20 | 70 | −16.8097 | 70.9505 | 128 | −26.4522 | 69.1570 |
Nominal Reference Points | Ground Truth (MoCap) | AprilTag Readings | ||||||
---|---|---|---|---|---|---|---|---|
x̄ (cm) | ȳ (cm) | x̄ (cm) | ȳ (cm) | N | mean x̄ (cm) | mean ȳ (cm) | std x̄ (cm) | std ȳ (cm)
0 | 30 | 0.642 | 31.006 | 156 | −2.7569 | 30.3124 | ||
6 | 30 | 6.71 | 29.820 | 144 | 5.8448 | 30.0716 | ||
−6 | 30 | −5.89 | 29.851 | 126 | −6.6391 | 29.8201 | ||
0 | 50 | 2.017 | 50.02 | 124 | −2.9412 | 49.8801 | ||
15 | 50 | 16.90 | 51.13 | 115 | 15.9652 | 49.6746 | ||
−15 | 50 | −14.42 | 49.63 | 144 | −15.6127 | 49.4842 | ||
0 | 70 | −2.10 | 68.90 | 149 | −5.5017 | 70.0076 | ||
20 | 70 | 22.70 | 71.16 | 152 | 21.0146 | 69.8425 | ||
−20 | 70 | −21.10 | 71.23 | 138 | −21.7858 | 69.7444 |
Nominal Reference Points | Ground Truth (MoCap) | AprilTag Readings | ||||||
---|---|---|---|---|---|---|---|---|
x̄ (cm) | ȳ (cm) | x̄ (cm) | ȳ (cm) | N | mean x̄ (cm) | mean ȳ (cm) | std x̄ (cm) | std ȳ (cm)
0 | 30 | 0.642 | 31.006 | 156 | −1.4932 | 30.6753 | ||
6 | 30 | 6.71 | 29.820 | 144 | 6.7595 | 29.7747 | ||
−6 | 30 | −5.89 | 29.851 | 126 | −5.5643 | 30.4668 | ||
0 | 50 | 2.017 | 50.02 | 124 | −2.1262 | 50.1620 | ||
15 | 50 | 16.90 | 51.13 | 115 | 16.8541 | 49.2115 | ||
−15 | 50 | −14.42 | 49.63 | 144 | −14.8491 | 50.1751 | ||
0 | 70 | −2.10 | 68.90 | 149 | −4.3831 | 70.2868 | ||
20 | 70 | 22.70 | 71.16 | 152 | 21.9349 | 69.3702 | ||
−20 | 70 | −21.10 | 71.23 | 138 | −20.8403 | 70.5489 |
Ground-Truth (cm) | Error in Mean Using Different Approaches for AprilTag. (cm) | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
Motion Capture System (MoCap) | Raw AprilTag Readings (Camera Pointing Towards Tag’s Center) | Raw AprilTag Readings (Camera Pointing Away from Tag’s Center) | Applying Soft Yaw Angle Correction (SYAC) on Raw AprilTag Readings | Applying Active Correction with Yaw Axis Gimbal on Raw AprilTag Readings | Applying (SYAC + Active Yaw Axis Gimbal Correction) on Raw AprilTag Readings | ||||||
0.642 | 31.00 | 0.884 | 0.855 | 1.758 | 0.855 | 1.223 | 0.389 | 3.3989 | 0.693 | 2.135 | 0.330 |
6.71 | 29.82 | 0.3061 | 0.037 | 1.250 | 0.405 | 0.286 | 0.075 | 0.8652 | 0.251 | 0.049 | 0.045 |
−5.89 | 29.85 | 0.0898 | 0.578 | 0.519 | 0.172 | 0.392 | 0.690 | 0.7491 | 0.030 | 0.325 | 0.615 |
2.017 | 50.02 | 1.2599 | 0.0618 | 3.367 | 0.868 | 1.824 | 1.298 | 4.958 | 0.139 | 4.143 | 0.141 |
16.90 | 51.13 | 0.0141 | 1.709 | 3.011 | 2.244 | 0.415 | 1.534 | 0.934 | 1.455 | 0.045 | 1.918 |
−14.42 | 49.63 | 2.4985 | 0.228 | 1.150 | 1.325 | 1.853 | 0.567 | 1.192 | 0.145 | 0.429 | 0.545 |
−2.10 | 68.90 | 3.408 | 1.126 | 2.071 | 0.511 | 3.955 | 2.940 | 3.401 | 1.107 | 2.283 | 1.386 |
22.70 | 71.16 | 0.3426 | 2.088 | 0.721 | 3.738 | 2.182 | 0.573 | 1.685 | 1.317 | 0.765 | 1.789 |
−21.10 | 71.23 | 0.6559 | 0.432 | 3.343 | 2.596 | 5.352 | 0.873 | 0.685 | 0.285 | 0.259 | 0.518 |
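The per-axis error-in-mean columns above can be aggregated into a single RMSE figure per approach, as done in the comparison plots. A minimal stdlib sketch, where the `rmse` helper is illustrative rather than the paper's implementation, applied to the x̄-axis errors of the raw AprilTag readings (camera pointing towards the tag's center) from the table:

```python
import math

def rmse(errors):
    """Root Mean Square Error over a sequence of scalar errors."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# x-axis error-in-mean values (cm) for raw AprilTag readings with the
# camera pointing towards the tag's center, taken from the table above.
raw_x_errors = [0.884, 0.3061, 0.0898, 1.2599, 0.0141,
                2.4985, 3.408, 0.3426, 0.6559]
print(rmse(raw_x_errors))
```

Repeating this per column gives one RMSE bar per approach and axis for the comparison.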
Approaches | Average Execution Time Per Input Image |
---|---|
Raw AprilTag implementation | 90 ms |
Passive yaw-axis correction (SYAC) | 130 ms |
Active correction with yaw-axis gimbal | 125 ms |
(Passive yaw-axis + active gimbal) correction | 255 ms |
Unknown Points | Predictive Distribution | Experimental Distribution | ||
---|---|---|---|---|
(x̄, ȳ, ϕ) (cm, cm, deg) | mean (cm, cm, deg) | std (cm, cm, deg) | mean (cm, cm, deg) | std (cm, cm, deg)
(0,30,90) | (0.4,30.9,95.3) | () | (0.1,30.6,89.77) | () |
(10,30,100) | (11.9,30.9,92.2) | () | (10.4,29.5,70.57) | () |
(0,50,80) | (0.4,49.9,88.56) | () | (1.0,51.8,88.84) | () |
(−15,50,100) | (−16.01,49.9,92.2) | () | (−18.8,52.7,109.6) | () |
(20,70,100) | (20.7,70.01,92.2) | () | (22.1,67.0,72.1) | () |
(−20,70,100) | (−21.3,70.01,92.2) | () | (−26.1,78.0,109.8) | () |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Abbas, S.M.; Aslam, S.; Berns, K.; Muhammad, A. Analysis and Improvements in AprilTag Based State Estimation. Sensors 2019, 19, 5480. https://doi.org/10.3390/s19245480