Vacant Parking Slot Detection in the Around View Image Based on Deep Learning
Figure 1. Three typical kinds of parking slots: (a) perpendicular parking slots; (b) parallel parking slots; (c) slanted parking slots. A parking slot consists of four vertices, of which the paired marking points of the entrance line are marked with red dots and the other two invisible vertices with yellow dots. The entrance lines and the viewing range of an AVM system are also marked.
Figure 2. Overview of VPS-Net, which contains two modules: parking slot detection and occupancy classification. It takes the around view image as input and outputs the position of the vacant parking slot to the decision module of the PAS.
Figure 3. Marking points and parking slot heads. (a) The geometric relationship between the paired marking points (green dots) and the parking slot head (red rectangle); (b) various deformations of "T-shaped" and "L-shaped" marking points; (c) three kinds of parking slot head belonging to the classes "right-angled head", "obtuse-angled head", and "acute-angled head", respectively.
Figure 4. The bounding boxes of the parking slot head and marking points. Each bounding box consists of three parts: the coordinates of the center point, the width, and the height.
Figure 5. The relationship between two marking points p1, p2 and the bounding box B of the parking slot head: (a) p1 ⊆ B and p2 ⊆ B; (b) p1 ⊆ B and p2 ⊄ B; (c) p1 ⊄ B and p2 ⊆ B; (d) p1 ⊄ B and p2 ⊄ B.
Figure 6. Complete parking slot inference. (a–d) show the perpendicular parking slot, the parallel parking slot, the slanted parking slot with an acute angle, and the slanted parking slot with an obtuse angle, with depths d1, d2, and d3 and parking angles α1, α2, and α3, respectively. p1, p2 are the two visible paired marking points, and p3, p4 are the two invisible vertices.
Figure 7. The orientation of the parking slot when the vehicle is near it. Two rectangular boxes formed by the entrance line with a depth d are marked with red and orange dotted lines; the rectangular box formed by the car model is marked with green dotted lines. The red arrow indicates the orientation of the parking slot.
Figure 8. The orientation of the parking slot when the vehicle is parking into it: (a) the vertical parking slot; (b) the parallel parking slot. The red arrow indicates the orientation of the parking slot, and the yellow dotted line indicates the entrance line.
Figure 9. Training samples for vacant parking slot classification: (a) a negative sample (a non-vacant regularized parking slot); (b) a positive sample (a vacant regularized parking slot).
Figure 10. Examples from the datasets used in evaluation. Rows 1 and 2 show the annotation information labeled for the ps2.0 and PSV datasets, where green indicates a vacant parking slot and red a non-vacant one. Rows 3 and 4 show parking slot samples cut and warped according to the annotation information.
Figure 11. AP50 histograms of three kinds of DCNN-based detectors.
Figure 12. Detection results of the YOLOv3-based detector. The green, blue, and yellow bounding boxes indicate the "right-angled head", "acute-angled head", and "obtuse-angled head", respectively; the red dots indicate marking points.
Figure 13. (a,b) Representative images in the ps2.0 test dataset in which the vehicle is across parking slots.
Figure 14. Precision-recall curves of different methods for parking slot occupancy classification.
Figure 15. VPS-Net detection results. Green indicates a vacant parking slot and red a non-vacant one. The rows show three kinds of parking slots under various imaging conditions: indoor, outdoor daylight, outdoor rainy, outdoor shadow, outdoor slanted, and outdoor street light.
Figure 16. Representative images with degraded marking-point image quality in the PSV dataset: (a) the marking point is far from the cameras; (b) the marking point lies on the stitching lines. The green bounding box indicates the parking slot head; the red dot indicates the detected marking point, and the purple dot the marking point inferred from the parking slot head.
Abstract
1. Introduction
- A new vacant parking slot detection method for around view images, named VPS-Net, is proposed, which combines the advantages of a multi-object detection network with those of a classification network. Compared with semantic segmentation-based methods, which need a series of complex post-processing steps to obtain the position of the parking slot, VPS-Net directly yields the coordinates of the marking points, so a more accurate position of the parking slot can be achieved. To facilitate future research, the related code and the vacant parking slot annotations for the ps2.0 and PSV datasets have been made publicly available at https://github.com/weili1457355863/VPS-Net.
- A parking slot detection method based on YOLOv3 is proposed, which combines the classification of the parking slot with the localization of its marking points. Compared with previous marking point-based methods, in which cumbersome steps are required to match the paired marking points of a parking slot, VPS-Net simplifies the detection process, so various kinds of parking slots can be detected quickly and robustly.
- A customized DCNN model is designed to distinguish whether a detected parking slot is vacant. To evaluate its performance, we update both the ps2.0 and PSV datasets by marking the type of each parking slot. Compared with some state-of-the-art (SOTA) DCNN models, our customized DCNN model not only achieves comparable accuracy but also processes an image in less time and has fewer parameters.
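The two-module design described in these contributions can be sketched as a pipeline. This is a minimal, hedged illustration of the data flow only: the stage functions are injected as parameters so the sketch stays runnable, and all names and the toy stand-ins below are hypothetical, not the paper's actual implementation.

```python
def vps_net_pipeline(image, detect, confirm_pair, infer_slot, classify):
    """VPS-Net as a two-module pipeline: a detector for parking slot heads
    and marking points, followed by an occupancy classifier applied to
    each inferred slot."""
    heads, points = detect(image)                  # YOLOv3-based detector
    vacant = []
    for head in heads:
        pair = confirm_pair(head, points)          # Algorithm 1: entrance-line points
        if pair is None:
            continue                               # no valid paired marking points
        slot = infer_slot(head, pair)              # complete parking slot inference
        if classify(image, slot) == "vacant":      # customized occupancy DCNN
            vacant.append(slot)
    return vacant

# Toy stand-ins that only demonstrate the data flow, not the real networks:
def detect(img):
    return [{"cls": "right-angled"}], [(0, 0), (2, 0)]

def confirm_pair(head, points):
    return (points[0], points[1])

def infer_slot(head, pair):
    return {"entrance": pair, "head": head["cls"]}

def classify(img, slot):
    return "vacant"

vacant_slots = vps_net_pipeline(None, detect, confirm_pair, infer_slot, classify)
```

Injecting the stages keeps the control flow (detect, pair, infer, classify) visible without committing to any particular network implementation.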
2. Related Works
2.1. Vision-Based Parking Slot Detection in the Around View Image
2.2. Parking Slot Occupancy Classification
3. Proposed Method
3.1. Detection of the Parking Slot Head and Marking Points
3.2. Confirmation of the Paired Marking Points of the Entrance Line
Algorithm 1: Rules for paired marking points confirmation
Input: Two sets comprising, respectively, all bounding boxes and all marking points detected in an around view image.
Output: Paired marking points.
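The core test behind the pairing rules is containment of marking points in a head bounding box (the four cases of Figure 5). The following is a hedged sketch of that containment test under the assumption that the simplest case, both points inside the box, is accepted directly; the geometric completion rules for the other cases are part of Algorithm 1 and are not reproduced here. All function names are illustrative.

```python
def point_in_box(p, box):
    """True if point p = (x, y) lies inside the axis-aligned box (x1, y1, x2, y2)."""
    x, y = p
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2

def pair_marking_points(head_boxes, points):
    """For each detected parking slot head, collect the marking points that
    fall inside its bounding box; exactly two such points are taken as the
    paired marking points of the entrance line (Figure 5a)."""
    pairs = []
    for box in head_boxes:
        inside = [p for p in points if point_in_box(p, box)]
        if len(inside) == 2:
            pairs.append((inside[0], inside[1]))
        # Figure 5b-d (one or both points outside the box) require the
        # additional inference rules of Algorithm 1, omitted in this sketch.
    return pairs
```

For example, with one head box (0, 0, 10, 10) and points (2, 2), (8, 8), (20, 20), only the first two points fall inside, so they form the pair.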
3.3. Complete Parking Slot Inference
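Following the geometry of Figure 6, the two invisible vertices can be inferred from the paired marking points of the entrance line, the slot depth d, and the parking angle α (90° for perpendicular and parallel slots). This is a sketch under assumptions: the rotation sign chosen here is arbitrary (Figure 6 fixes it per slot orientation), and the function name is hypothetical.

```python
import math

def infer_invisible_vertices(p1, p2, depth, alpha_deg):
    """Infer the invisible vertices p3, p4 of a parking slot from the paired
    marking points p1, p2, the slot depth, and the parking angle alpha.
    The inward direction is taken as the unit entrance vector rotated by
    alpha (counterclockwise here; the actual sign depends on orientation)."""
    ex, ey = p2[0] - p1[0], p2[1] - p1[1]
    norm = math.hypot(ex, ey)
    ex, ey = ex / norm, ey / norm            # unit vector along the entrance line
    a = math.radians(alpha_deg)
    dx = ex * math.cos(a) - ey * math.sin(a)  # rotate entrance vector by alpha
    dy = ex * math.sin(a) + ey * math.cos(a)
    p4 = (p1[0] + depth * dx, p1[1] + depth * dy)
    p3 = (p2[0] + depth * dx, p2[1] + depth * dy)
    return p3, p4
```

For a perpendicular slot with p1 = (0, 0), p2 = (2, 0), depth 3, and α = 90°, the inferred vertices are (2, 3) and (0, 3).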
3.4. Parking Slot Occupancy Classification
4. Experiments and Results
4.1. Experiments Setup
4.1.1. Datasets
4.1.2. Experiment Settings
4.2. Heads and Marking Points Detection Performance
4.3. Parking Slot Detection and Occupancy Classification Performance
4.4. Overall Performance and Generalizability of VPS-Net
5. Discussion
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
Abbreviations
PAS | Park assist system
AVM | Around view monitor
DCNN | Deep convolutional neural network
SOTA | State of the art
References
- Gallivan, S. IBM Global Parking Survey: Drivers Share Worldwide Parking Woes Technical Report. Technical Report; IBM: Armonk, NY, USA, 2011; Available online: https://www-03.ibm.com/press/us/en/pressrelease/35515.wss (accessed on 6 April 2020).
- Matthew, A. Automated Driving: The Technology and Implications for Insurance. Technical Report; Thatcham Research: Thatcham, UK, 2016. [Google Scholar]
- Jeong, S.H.; Choi, C.G.; Oh, J.N.; Yoon, P.J.; Kim, B.S.; Kim, M.; Lee, K.H. Low cost design of parallel parking assist system based on an ultrasonic sensor. Int. J. Automot. Technol. 2010, 11, 409–416. [Google Scholar] [CrossRef]
- Jung, H.G.; Cho, Y.H.; Yoon, P.J.; Kim, J. Scanning Laser Radar-Based Target Position Designation for Parking Aid System. IEEE Trans. Intell. Transport. Syst. 2008, 9, 406–424. [Google Scholar] [CrossRef]
- Ullrich, S.; Basel, F.; Norman, M.; Gerd, W. Free space determination for parking slots using a 3D PMD sensor. In Proceedings of the 2007 IEEE Intelligent Vehicles Symposium (IV), Istanbul, Turkey, 13–15 June 2007; pp. 154–159. [Google Scholar]
- Loeffler, A.; Ronczka, J.; Fechner, T. Parking lot measurement with 24 GHz short range automotive radar. In Proceedings of the 16th International Radar Symposium (IRS), Dresden, Germany, 24–26 June 2015; pp. 137–142. [Google Scholar]
- Kaempchen, N.; Franke, U.; Ott, R. Stereo vision based pose estimation of parking lots using 3D vehicle models. In Proceedings of the 2002 IEEE Intelligent Vehicle Symposium (IV), Versailles, France, 17–21 June 2002; pp. 459–464. [Google Scholar]
- Lee, S.; Hyeon, D.; Park, G.; Baek, I.-J.; Kim, S.-W.; Seo, S.-W. Directional-DBSCAN: Parking-slot detection using a clustering method in around-view monitoring system. In Proceedings of the 2016 IEEE Intelligent Vehicle Symposium (IV), Gotenburg, Sweden, 19–22 June 2016; pp. 349–354. [Google Scholar]
- Li, Q.; Lin, C.; Zhao, Y. Geometric Features-Based Parking Slot Detection. Sensors 2018, 18, 2821. [Google Scholar] [CrossRef] [Green Version]
- Zhang, L.; Li, X.; Huang, J.; Shen, Y.; Wang, D. Vision-Based Parking-Slot Detection: A Benchmark and A Learning-Based Approach. Symmetry 2018, 10, 64. [Google Scholar] [CrossRef] [Green Version]
- Zhang, L.; Huang, J.; Li, X.; Xiong, L. Vision-based Parking-slot Detection: A DCNN-based Approach and A Large-scale Benchmark Dataset. IEEE Trans. Image Process. 2018, 27, 5350–5364. [Google Scholar] [CrossRef] [PubMed]
- Jung, H.G.; Kim, D.S.; Yoon, P.J.; Kim, J. Structure Analysis Based Parking Slot Marking Recognition for Semi-automatic Parking System. In Proceedings of the Joint IAPR International WorkShops on Statistical Techniques in Pattern Recognition and Structural and Syntactic Pattern Recognition, Hong Kong, China, 17–19 August 2006; pp. 384–393. [Google Scholar]
- Jung, H.G.; Kim, D.S.; Yoon, P.J.; Kim, J. Two-Touch Type Parking Slot Marking Recognition for Target Parking Position Designation. In Proceedings of the 2008 IEEE Intelligent Vehicles Symposium (IV), Eindhoven, The Netherlands, 4–6 June 2008; pp. 1161–1166. [Google Scholar]
- Jung, H.G.; Lee, Y.H.; Kim, J. Uniform User Interface for Semiautomatic Parking Slot Marking Recognition. IEEE Trans. Veh. Technol. 2010, 59, 616–626. [Google Scholar] [CrossRef]
- Yang, J.; Portilla, J.; Riesgo, T. Smart parking service based on wireless sensor networks. In Proceedings of the IECON 2012—38th Annual Conference on IEEE Industrial Electronics Society, Montreal, QC, Canada, 25–28 October 2012; pp. 6029–6034. [Google Scholar]
- Lee, C.; Han, Y.; Jeon, S.; Seo, D.; Jung, I. Smart parking system for Internet of Things. In Proceedings of the 2016 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 7–11 January 2016; pp. 6029–6034. [Google Scholar]
- Balzano, W.; Vitale, F. DiG-Park: A Smart Parking Availability Searching Method Using V2V/V2I and DGP-Class Problem. In Proceedings of the 31st International Conference on Advanced Information Networking and Applications Workshops (WAINA), Taipei, Taiwan, 27–29 March 2017; pp. 698–703. [Google Scholar]
- Su, C.-L.; Lee, C.-J.; Li, M.-S.; Chen, K.-P. 3D AVM system for automotive applications. In Proceedings of the 10th International Conference on Information, Communications and Signal Processing (ICICS), Singapore, 2–4 December 2015; pp. 1–5. [Google Scholar]
- Suhr, J.K.; Jung, H.G. Fully-automatic Recognition of Various Parking Slot Markings in Around View Monitor (AVM) Image Sequences. In Proceedings of the 15th International IEEE Conference on Intelligent Transportation Systems, Anchorage, AK, USA, 16–19 September 2012; pp. 1294–1299. [Google Scholar]
- Suhr, J.K.; Jung, H.G. Full-automatic recognition of various parking slot markings using a hierarchical tree structure. Opt. Eng. 2013, 52, 037203. [Google Scholar]
- Suhr, J.K.; Jung, H.G. Sensor Fusion-Based Vacant Parking Slot Detection and Tracking. IEEE Trans. Intell. Transport. Syst. 2014, 15, 21–36. [Google Scholar] [CrossRef]
- Suhr, J.K.; Jung, H.G. A Universal Vacant Parking Slot Recognition System Using Sensors Mounted on Off-the-Shelf Vehicles. Sensors 2018, 18, 1213. [Google Scholar] [CrossRef] [Green Version]
- Zinelli, A.; Musto, L.; Pizzati, F. A Deep-Learning Approach for Parking Slot Detection on Surround-View Images. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 683–688. [Google Scholar]
- Wu, Y.; Yang, T.; Zhao, J.; Guan, L.; Jiang, W. VH-HFCN based Parking Slot and Lane Markings Segmentation on Panoramic Surround View. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 1767–1772. [Google Scholar]
- Jiang, W.; Wu, Y.; Guan, L.; Zhao, J. DFNet: Semantic Segmentation on Panoramic Images with Dynamic Loss Weights and Residual Fusion Block. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 5887–5892. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
- Hamada, K.; Hu, Z.; Fan, M.; Chen, H. Surround view based parking lot detection and tracking. In Proceedings of the 2015 IEEE Intelligent Vehicles Symposium (IV), Seoul, Korea, 28 June–1 July 2015; pp. 1106–1111. [Google Scholar]
- Wang, C.; Zhang, H.; Yang, M.; Wang, X.; Ye, L.; Guo, C. Automatic Parking Based on a Bird’s Eye View Vision System. Adv. Mech. Eng. 2014, 6, 847406. [Google Scholar] [CrossRef]
- Jang, C.; Sunwoo, M. Semantic segmentation-based parking space detection with standalone around view monitoring system. Mach. Vis. Appl. 2019, 30, 309–319. [Google Scholar] [CrossRef]
- Harris, C.J.; Stephens, M. A Combined Corner and Edge Detector. In Proceeding of the 4th Alvey Vision Conference, Manchester, UK, 31 August–2 September 1988; pp. 147–151. [Google Scholar]
- Freund, Y.; Schapire, R.E. A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. J. Comput. Syst. Sci. 1997, 55, 119–139. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6517–6525. [Google Scholar]
- Suhr, J.K.; Jung, H.G. Automatic Parking Space Detection and Tracking for Underground and Indoor Environments. IEEE Trans. Ind. Electron. 2016, 63, 5687–5698. [Google Scholar] [CrossRef]
- Li, L.; Li, C.; Zhang, Q.; Guo, T.; Miao, Z. Automatic Parking Slot Detection Based on Around View Monitor (AVM) Systems. In Proceedings of the 9th International Conference on Wireless Communications and Signal Processing (WCSP), Nanjing, China, 11–13 October 2017; pp. 1–6. [Google Scholar]
- Lee, S.; Seo, S.W. Available Parking Slot Recognition based on Slot Context Analysis. IET Intell. Transp. Syst. 2016, 10, 594–604. [Google Scholar] [CrossRef]
- Dalal, N.; Triggs, B. Histograms of Oriented Gradients for Human Detection. In Proceedings of the 2005 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, 20–25 June 2005; pp. 886–893. [Google Scholar]
- Amarappa, S.; Sathyanarayana, S.V. Data classification using Support vector Machine (SVM), a simplified approach. Int. J. Electron. Comput. Sci. Eng. 2010, 3, 435–445. [Google Scholar]
- Rianto, D.; Erwin, I.; Prakasa, E.; Herlan, H. Parking Slot Identification using Local Binary Pattern and Support Vector Machine. In Proceedings of the 2018 International Conference on Computer, Control, Informatics and its Applications (IC3INA), Tangerang, Indonesia, 1–2 November 2018; pp. 129–133. [Google Scholar]
- Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef]
- Amato, G.; Carrara, F.; Falchi, F.; Gennaro, C.; Meghini, C.; Vairo, C. Deep Learning for Decentralized Parking Lot Occupancy Detection. Expert Syst. Appl. 2017, 72, 327–334. [Google Scholar] [CrossRef]
- Nurullayev, S.; Lee, S.-W. Generalized Parking Occupancy Analysis Based on Dilated Convolutional Neural Network. Sensors 2019, 19, 277. [Google Scholar] [CrossRef] [Green Version]
- Paidi, V.; Fleyeh, H. Parking Occupancy Detection Using Thermal Camera. In Proceedings of the Vehicle Technology and Intelligent Transport Systems (VEHITS 2019), Heraklion, Greece, 3–5 May 2019; pp. 483–490. [Google Scholar]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. arXiv 2015, arXiv:1512.02325. [Google Scholar]
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef] [Green Version]
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the Advances in Neural Information Processing Systems 25 (NIPS 2012), Lake Tahoe, NV, USA, 3–8 December 2012; pp. 1097–1105. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556v6. [Google Scholar]
- He, K.M.; Zhang, X.Y.; Ren, S.Q.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
- Howard, A.; Sandler, M.; Chu, G.; Chen, L.-C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. Searching for MobileNetV3. arXiv 2019, arXiv:1905.02244. [Google Scholar]
- Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; Lerer, A. Automatic differentiation in PyTorch. In Proceedings of the NIPS 2017 Workshop on Autodiff, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
Layer Name | Kernel | Padding | Stride | Output (C×H×W)
---|---|---|---|---
Input | - | - | - | 3 × 46 × 120
Conv1 | [3, 9] | [0, 0] | [1, 2] | 40 × 44 × 56
Maxpool1 | [3, 3] | [0, 0] | [2, 2] | 40 × 21 × 27
Conv2 | [3, 5] | [1, 0] | [1, 1] | 80 × 21 × 23
Maxpool2 | [3, 3] | [1, 0] | [2, 2] | 80 × 11 × 11
Conv3 | [3, 3] | [1, 1] | [1, 1] | 120 × 11 × 11
Conv4 | [3, 3] | [1, 1] | [1, 1] | 160 × 11 × 11
Maxpool3 | [3, 3] | [0, 0] | [2, 2] | 160 × 5 × 5
Fc1 | - | - | - | 512 × 1 × 1
Fc2 | - | - | - | 2 × 1 × 1
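The spatial sizes in the table can be verified with the standard convolution/pooling output formula, floor((n + 2p − k) / s) + 1, applied per axis (kernels, paddings, and strides are listed as [H, W] pairs). A quick check:

```python
def out_size(n, k, p, s):
    """Output length along one axis for a conv/pool layer:
    floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

def layer(h, w, kernel, pad, stride):
    """Apply the formula to both spatial axes."""
    return (out_size(h, kernel[0], pad[0], stride[0]),
            out_size(w, kernel[1], pad[1], stride[1]))

h, w = 46, 120                                # input: 3 x 46 x 120
h, w = layer(h, w, (3, 9), (0, 0), (1, 2))    # Conv1    -> 44 x 56
h, w = layer(h, w, (3, 3), (0, 0), (2, 2))    # Maxpool1 -> 21 x 27
h, w = layer(h, w, (3, 5), (1, 0), (1, 1))    # Conv2    -> 21 x 23
h, w = layer(h, w, (3, 3), (1, 0), (2, 2))    # Maxpool2 -> 11 x 11
h, w = layer(h, w, (3, 3), (1, 1), (1, 1))    # Conv3    -> 11 x 11
h, w = layer(h, w, (3, 3), (1, 1), (1, 1))    # Conv4    -> 11 x 11
h, w = layer(h, w, (3, 3), (0, 0), (2, 2))    # Maxpool3 -> 5 x 5
assert (h, w) == (5, 5)                       # Fc1 input: 160 * 5 * 5 = 4000
```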
Parameter | Value (pixels) | Parameter | Value (pixels)
---|---|---|---
 | 48 |  | 67
 | 44 |  | 129
 | 40 |  | 250
 | 60 |  | 125
t | 190 |  | 240
 | 90 | d | 250
Method | Localization Error (pixels) | Localization Error (cm) | Running Time (ms)
---|---|---|---|
Faster-RCNN [46] | 3.67 ± 2.32 | 6.12 ± 3.87 | 45 |
SSD [43] | 1.51 ± 1.17 | 2.52 ± 1.95 | 26 |
YOLOv3-based | 1.03 ± 0.65 | 1.72 ± 1.09 | 18 |
Method | #GT | #TP | #FP | Precision Rate | Recall Rate |
---|---|---|---|---|---|
PSD_L [10] | 2173 | 1845 | 27 | 98.55% | 84.89% |
DeepPS [11] | 2173 | 2143 | 5 | 99.77% | 98.62% |
VPS-Net | 2173 | 2157 | 9 | 99.58% | 99.26% |
DeepPS (no across) | 2166 | 2137 | 5 | 99.77% | 98.66% |
VPS-Net (no across) | 2166 | 2157 | 2 | 99.91% | 99.58% |
Sub-Test Set | DeepPS [11] | VPS-Net | VPS-Net (No Across)
---|---|---|---
Indoor | p: 100.00%; r: 97.67% | p: 99.71%; r: 98.54% | p: 99.71%; r: 98.54%
Outdoor normal | p: 99.87%; r: 98.85% | p: 100.00%; r: 99.74% | p: 100.00%; r: 99.74%
Street light | p: 100.00%; r: 100.00% | p: 100.00%; r: 100.00% | p: 100.00%; r: 100.00%
Outdoor shadow | p: 99.86%; r: 99.14% | p: 100.00%; r: 99.86% | p: 100.00%; r: 99.86%
Outdoor rainy | p: 100.00%; r: 99.42% | p: 100.00%; r: 100.00% | p: 100.00%; r: 100.00%
Slanted | p: 96.15%; r: 92.59% | p: 90.12%; r: 90.12% | p: 98.65%; r: 98.65%
Method | Localization Error (pixels) | Localization Error (cm)
---|---|---|
PSD_L [10] | 3.64 ± 1.85 | 6.07 ± 3.09 |
DeepPS [11] | 1.55 ± 1.04 | 2.58 ± 1.74 |
VPS-Net | 1.03 ± 0.64 | 1.72 ± 1.07 |
Model | Accuracy | Running Time (ms) | Model Size (MB)
---|---|---|---|
HOG+SVM [37] | 92.54% | 2.13 | 0.04 |
AlexNet [49] | 99.67% | 1.75 | 228.1 |
VGG-16 [50] | 99.62% | 2.15 | 537.1 |
ResNet-50 [51] | 98.55% | 5.10 | 44.8 |
MobileNetV3-Small [52] | 98.55% | 6.21 | 5.1 |
Customized DCNN | 99.48% | 0.81 | 9.4 |
Step | Running Time (ms) | Precision Rate | Recall Rate |
---|---|---|---|
marking points and heads detection | 18 | - | - |
complete parking slot inference | 0.5 | 99.91% | 99.58% |
parking slot occupancy classification | 2 | 99.86% | 99.62% |
total | 20.5 | 99.63% | 99.31% |
Method | #GT | #TP | #FP | Precision Rate | Recall Rate |
---|---|---|---|---|---|
DeepPS [11] | 1593 | 1396 | 63 | 95.68% | 87.63% |
VPS-Net (no Algorithm 1) | 1593 | 1483 | 50 | 96.73% | 93.09% |
VPS-Net | 1593 | 1507 | 54 | 96.54% | 94.60% |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Li, W.; Cao, L.; Yan, L.; Li, C.; Feng, X.; Zhao, P. Vacant Parking Slot Detection in the Around View Image Based on Deep Learning. Sensors 2020, 20, 2138. https://doi.org/10.3390/s20072138