CN109697426B - Aircraft parking stand detection method based on multi-detector fusion - Google Patents
Aircraft parking stand detection method based on multi-detector fusion
- Publication number
- CN109697426B (Application CN201811584147.4A)
- Authority
- CN
- China
- Prior art keywords
- detection
- berth
- ranging
- airplane
- confidence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
Abstract
The present invention relates to an aircraft parking stand detection method based on multi-detector fusion, comprising: ranging the object at the parking stand and computing from the ranging result the confidence that an aircraft is on the stand under ranging detection; shooting the parking stand on site and analyzing the captured image to obtain the confidence that an aircraft is on the stand under image detection; computing a target confidence that an aircraft is on the stand from the prior accuracy of image detection, the prior accuracy of ranging detection, the ranging confidence, and the image confidence; and, when the target confidence is greater than a set judgment threshold, confirming that an aircraft is on the stand, or, when it is less than the threshold, confirming that no aircraft is on the stand. The invention uses probabilistic modeling to fuse the results of ranging detection and image detection, avoiding the shortcomings of either single method and thereby obtaining a better detection effect.
Description
Technical Field
The invention relates to an aircraft parking stand detection method based on multi-detector fusion.
Background
The state of aircraft parking stands is an important item in airport control: it affects the utilization rate of the stands and thus directly affects the throughput of the airport. The main current approaches are manual detection, detection based on laser ranging, and detection based on video analysis.
At present, most airports still compile aircraft parking-time statistics manually, which is costly and inefficient. A few airports install laser ranging devices to detect stand occupancy: whether an aircraft is present is judged by measuring the distance between the light source and the object in front of it. This method has strong real-time performance, is unaffected by weather changes, and uses cheap equipment, but its false-alarm rate is high and its requirements on mounting position are strict; many airports that installed such equipment early on use it only as an auxiliary system that still needs manual intervention. On the other hand, almost every stand is covered by a monitoring camera, through which the parking condition of aircraft can be observed. Recently such video has also been used for automatic detection of stand occupancy, for example with algorithms based on moving-target detection, which are fast and perform well under good imaging conditions but have the following defects: they cannot judge whether the detected target is an aircraft, so other moving targets (such as large vehicles) may cause false alarms; they detect only moving targets, so the parking state of a stationary aircraft cannot be judged; and their parameter settings are sensitive to weather, illumination, and similar conditions. The accuracy of such aircraft parking identification therefore still falls short of the intelligence requirement, and manual auxiliary inspection is still needed.
Disclosure of Invention
In order to solve the above technical problem, the invention provides an aircraft parking stand detection method based on multi-detector fusion, so as to obtain a target detection result with higher accuracy.
The technical scheme of the invention is as follows. An aircraft parking stand detection method based on multi-detector fusion comprises:
ranging the object at the parking stand and computing from the ranging result the confidence that an aircraft is on the stand under ranging detection,
shooting the parking stand on site and analyzing the captured image to obtain the confidence that an aircraft is on the stand under image detection,
calculating the target confidence that an aircraft is on the parking stand according to the following formula:
when the target confidence is greater than the set judgment threshold, determining that an aircraft is on the stand; when the target confidence is less than the set judgment threshold, determining that no aircraft is on the stand,
wherein,
P_fuse is the target confidence that an aircraft is on the parking stand, Pri_dl is the prior accuracy of image detection, Pri_la is the prior accuracy of ranging detection, P_la is the confidence that an aircraft is on the stand under ranging detection, and P_dl is the confidence that an aircraft is on the stand under image detection.
A laser ranging device can be used for ranging the object at the parking stand.
The confidence that an aircraft is at the stand under ranging detection can be calculated according to the following formula:
where L is the object distance obtained by the current ranging detection, Ls is the standard detection distance when an aircraft is parked on the stand, and Lm is the standard detection distance when no aircraft is parked on the stand.
The Ls and Lm may be predetermined.
Ls and Lm can be obtained experimentally under good ranging conditions. An aircraft is ranged in several states — parked at its usual position on the stand, at the position nearest the ranging device, and at the position farthest from it — to obtain the corresponding object distances, and the average (or minimum) of these distances is taken as Ls. Ranging is then performed with no aircraft, and no other object that would influence the result, on the stand, and the average (or maximum) of the measured object distances is taken as Lm.
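The calibration procedure above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name and the `conservative` flag are assumptions for the example.

```python
def calibrate_thresholds(distances_with_aircraft, distances_empty,
                         conservative=False):
    """Derive the standard detection distances Ls and Lm from calibration
    measurements taken under good ranging conditions.

    distances_with_aircraft: object distances measured with an aircraft
        parked at several positions on the stand (usual spot, nearest and
        farthest positions relative to the ranging device).
    distances_empty: object distances measured with the stand empty.
    conservative: if True, take min/max instead of the averages, as the
        text permits either choice.
    """
    if conservative:
        ls = min(distances_with_aircraft)  # closest reading with aircraft
        lm = max(distances_empty)          # farthest reading when empty
    else:
        ls = sum(distances_with_aircraft) / len(distances_with_aircraft)
        lm = sum(distances_empty) / len(distances_empty)
    return ls, lm
```
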
Ls and Lm may also be determined from the actual distance between the ranging device and the parking stand and the actual distance to the relevant object in front (the object detected when no aircraft is parked).
The prior accuracy of the ranging detection can be determined experimentally.
For example, multiple trials are performed with the stand in both states (with and without an aircraft), ranging detection is carried out in the same way as in actual operation, the correctness of each result is checked manually, and the number of correct results divided by the total number of trials gives the prior accuracy of ranging detection.
Preferably, the captured image is analyzed with the YOLO algorithm to obtain the confidence that an aircraft is present on the stand under image detection.
When the target aircraft is detected by several cells and/or bounding boxes, a non-maximum-suppression algorithm is preferably used to obtain the single best bounding box, and the confidence of that box is taken as the confidence that an aircraft is on the stand under image detection.
The a priori accuracy of the image detection can be determined experimentally.
For example, multiple trials are performed with the stand in both states (with and without an aircraft), image detection is carried out in the same way as in actual operation, the correctness of each result is checked manually, and the number of correct results divided by the total number of trials gives the prior accuracy of image detection.
The beneficial effects of the invention are as follows. By probabilistic modeling, a target-presence confidence is obtained by each of the two methods, and the two are then fused with a probability formula to yield the final judgment, avoiding the defects of either single method. For the visual method, in view of the defects of moving-target detection algorithms, a current mainstream deep-learning method is applied to aircraft parking detection: it robustly detects the specific target of interest (an aircraft), whether the aircraft is moving or stationary. Combined with the laser ranging method, which is unaffected by factors such as weather and illumination, a more satisfactory detection effect is finally obtained.
Drawings
FIG. 1 is a schematic flow diagram of the present invention.
Detailed Description
Referring to FIG. 1, the invention integrates a physical method with a visual method in order to avoid the disadvantages of either single method. For the visual method, in view of the defects of moving-target detection algorithms, a current mainstream deep-learning method is applied to aircraft parking detection: on the one hand it robustly detects the specific target of interest — an aircraft — whether the aircraft is moving or stationary; on the other hand it is combined with the laser ranging method, which is unaffected by factors such as weather and illumination, so that a more satisfactory detection effect is finally obtained.
The main problem the invention solves is the fusion of the two methods. Using probabilistic modeling, a target confidence is obtained by each method; the two are then fused with a probability formula into a single target confidence used to finally judge whether an aircraft is on the stand.
The invention comprises the following steps:
1) laser detection
The laser ranging equipment consists of an optical transmitter, an optical receiver, and a timer. In operation the transmitter emits a laser pulse; when its reflection from an object is received by the receiver, the distance from target to transmitter is obtained as half the product of the timer reading and the speed of light, since the pulse traverses the path twice.
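The time-of-flight conversion can be sketched as follows; the division by two accounts for the round trip of the pulse. The function name is illustrative, not from the patent.

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(elapsed_s):
    """Convert the timer reading of a pulsed laser rangefinder into a
    one-way target distance in meters.  The light travels to the target
    and back, so the one-way distance is half of (speed x time)."""
    return SPEED_OF_LIGHT_M_S * elapsed_s / 2.0
```

For example, a 200 ns round trip corresponds to roughly 30 m.
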
The outdoor laser ranging involved in the invention differs from cooperative satellite laser ranging: the measured aircraft carries no corner reflector to return the laser along its original path, so the incident light reaches the receiver only through the target's diffuse reflection, and the accuracy of the laser equipment is accordingly reduced.
The invention designs a probability model for laser detection using the characteristics of laser ranging. Ls is the distance threshold between aircraft and laser receiver when an aircraft is parked at the stand, and Lm is the distance threshold detected by the laser when no aircraft is parked; both values are set in advance. With L the object distance obtained by the current laser ranging, the confidence P_la that an aircraft is at the stand under laser detection is defined as follows:
P_la takes a value between 0 and 1; the larger the value, the more credible it is that an aircraft is present on the stand.
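The patent's exact formula for P_la is given as an image that is not reproduced in this text. A sketch consistent with the stated properties (P_la in [0, 1], rising as the measured distance approaches Ls) is a linear interpolation between Lm and Ls; this is an assumption for illustration, not the claimed formula.

```python
def ranging_confidence(l, ls, lm):
    """Confidence P_la that an aircraft occupies the stand, given the
    measured object distance l.  Assumed model: linear interpolation
    between the empty-stand distance Lm (confidence 0) and the
    occupied-stand distance Ls (confidence 1), clamped to [0, 1]."""
    if lm == ls:
        raise ValueError("Ls and Lm must differ")
    p = (lm - l) / (lm - ls)
    return max(0.0, min(1.0, p))
```
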
2) Image recognition with a deep-learning algorithm
Deep learning has risen in recent years, yet most airports still use traditional methods, and research on airport target recognition based on deep learning is scarce. Deep learning uses multi-level models whose parameters are adjusted automatically by back-propagation, learning abstract representations of data from large amounts of data; a trained model can automatically locate the aircraft in an input image, making the method more stable than traditional ones.
The invention adopts the YOLOv3 algorithm proposed by Joseph Redmon and Ali Farhadi [1] as its basic framework, which offers high speed, multi-scale prediction, a low background false-detection rate, and strong generality. The target detector is trained on the Microsoft COCO data set [2] and can detect common targets such as airplanes and pedestrians.
YOLO divides the input image into cells, and each cell is responsible for detecting the objects whose center points fall within it. Each cell predicts several bounding boxes together with a confidence for each box. The confidence covers two aspects: first, the probability that the box contains a target, denoted Pr(Object), which is 0 when the box is background (contains no target) and 1 when it contains a target; second, the accuracy of the box, characterized by the intersection over union (IOU) of the predicted box and the ground-truth box, denoted IOU_pred^truth. The confidence can therefore be defined as Pr(Object) x IOU_pred^truth [1].
Since YOLO divides the image into cells, a detected object may span many cells, several of which detect it, so one object can yield multiple bounding boxes and hence multiple confidences. The invention adopts the non-maximum suppression algorithm (NMS) mainly to resolve such repeated detections of one target. The highest-confidence box is selected first; its intersection over union with each remaining box is computed, and any box whose overlap exceeds a threshold (i.e. coincides too strongly) is rejected. The process is repeated over the remaining boxes until a single best box is obtained, and the confidence P_dl of that box is taken as the confidence that the target is present.
Likewise, P_dl takes a value between 0 and 1; the larger the value, the more credible it is that an aircraft is present.
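The greedy NMS procedure described above can be sketched as follows. This is a generic textbook implementation, not the patent's code; boxes are assumed to be (x1, y1, x2, y2) tuples.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thr=0.5):
    """Greedy non-maximum suppression: repeatedly keep the highest-scoring
    remaining box and discard boxes overlapping it above iou_thr.
    Returns the indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_thr]
    return keep
```

In the document's setting, P_dl is then the score of the single surviving aircraft box.
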
3) Multi-detector fusion
The multi-detector fusion model fuses the target confidences obtained by the laser ranging method and the deep-learning method: first the prior accuracies of the two methods are estimated, then the two target confidences are fused with a mixture-model formula to obtain the final aircraft-parking judgment.
(1) Prior accuracy estimation
(a) To estimate the prior accuracy of the laser ranging method, N trials can be carried out and the number M of trials in which the target is detected correctly is checked manually, giving the prior accuracy Pri_la = M/N. N should be large enough to guarantee the accuracy of this estimate.
(b) The prior accuracy Pri_dl of the deep-learning algorithm is estimated in the same way.
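The estimate Pri = M/N is a one-liner, sketched here for completeness (illustrative function name):

```python
def prior_accuracy(num_correct, num_trials):
    """Estimate a detector's prior accuracy Pri = M / N, where M of the
    N manually checked trials produced a correct detection."""
    if num_trials <= 0:
        raise ValueError("need at least one trial")
    return num_correct / num_trials
```
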
(2) Multi-detector fusion detection
(a) For the current detection, the target confidences P_la and P_dl are computed by the laser ranging method and the deep-learning method respectively.
(b) The fused target confidence is obtained using the following formula.
(c) Judge the aircraft parking state: when P_fuse > Thr, the stand is occupied by an aircraft; otherwise no aircraft is present and the stand is idle. The threshold Thr is normally set to 0.5.
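The fusion formula itself appears in the patent only as an image not reproduced in this text. One plausible reading, consistent with the described mixture of two detectors weighted by their prior accuracies, is an independent-evidence (naive-Bayes) combination; the sketch below is an assumption for illustration, not the claimed formula.

```python
def fuse(p_la, p_dl, pri_la, pri_dl):
    """Fuse the ranging and image confidences into a single target
    confidence P_fuse.  Assumed model: each detector's confidence is
    discounted by its prior accuracy, and the two pieces of evidence are
    combined as if independent."""
    a = pri_la * p_la  # discounted evidence from laser ranging
    b = pri_dl * p_dl  # discounted evidence from image detection
    occupied = a * b
    empty = (1.0 - a) * (1.0 - b)
    return occupied / (occupied + empty)

def stand_occupied(p_la, p_dl, pri_la, pri_dl, thr=0.5):
    """Final decision: the stand is occupied when P_fuse exceeds Thr."""
    return fuse(p_la, p_dl, pri_la, pri_dl) > thr
```

With both detectors fully confident (P = 1.0) and prior accuracies of 0.9, the fused confidence is 0.81 / 0.82, comfortably above the 0.5 threshold.
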
The terms aircraft and flight in the invention refer broadly to all kinds of aircraft, including what are commonly called airplanes and flights.
Unless otherwise specified, or unless two technical means contradict each other or one further limits the other, the technical means disclosed in the invention may be combined arbitrarily to form different technical schemes.
References
[1]Joseph Redmon and Ali Farhadi,“YOLOv3:An Incremental Improvement”, Technical report 2018.
[2] Tsung-Yi Lin et al., "Microsoft COCO: Common Objects in Context", ECCV 2014.
Claims (9)
1. An aircraft parking stand detection method based on multi-detector fusion, comprising:
ranging the object at the parking stand and computing from the ranging result the confidence that an aircraft is on the stand under ranging detection,
shooting the parking stand on site and analyzing the captured image to obtain the confidence that an aircraft is on the stand under image detection,
calculating the target confidence that an aircraft is on the parking stand according to the following formula:
when the target confidence is greater than the set judgment threshold, determining that an aircraft is on the stand; when the target confidence is less than the set judgment threshold, determining that no aircraft is on the stand,
wherein,
P_fuse is the target confidence that an aircraft is on the parking stand, Pri_dl is the prior accuracy of image detection, Pri_la is the prior accuracy of ranging detection, P_la is the confidence that an aircraft is on the stand under ranging detection, and P_dl is the confidence that an aircraft is on the stand under image detection,
calculating the confidence that an aircraft is on the parking stand under ranging detection according to the following formula:
where L is the object distance obtained by the current ranging detection, Ls is the standard detection distance when an aircraft is parked on the stand, and Lm is the standard detection distance when no aircraft is parked on the stand.
2. The method of claim 1, wherein the ranging of the object at the parking stand is performed using a laser ranging device.
3. The method of claim 2, wherein said Ls and Lm are obtained experimentally under good ranging conditions, or are determined from the actual distance between the ranging device and the parking stand and the actual distance to the relevant object in front.
4. The method of claim 1, wherein the a priori accuracy of the ranging detection is determined experimentally.
5. The method of claim 4, wherein multiple trials are conducted with the stand in both states (with and without an aircraft), ranging detection is performed in the same way as actual ranging detection, the correctness of each result is checked manually, and the number of correct results divided by the total number of trials gives the prior accuracy of ranging detection.
6. The method of any one of claims 1-5, wherein the captured images are analyzed using a YOLO algorithm to obtain the confidence that an aircraft is present at the stand under image detection.
7. The method of claim 6, wherein, when the target aircraft is detected by several cells and/or bounding boxes, a non-maximum-suppression algorithm is used to obtain the single best bounding box, whose confidence is taken as the confidence that an aircraft is present at the stand under image detection.
8. The method of claim 6, wherein the a priori accuracy of the image detection is determined experimentally.
9. The method of claim 8, wherein multiple trials are conducted with the stand in both states (with and without an aircraft), image detection is performed in the same way as actual image detection, the correctness of each result is checked manually, and the number of correct results divided by the total number of trials gives the prior accuracy of image detection.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811584147.4A CN109697426B (en) | 2018-12-24 | 2018-12-24 | Flight based on multi-detector fusion shuts down berth detection method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN109697426A CN109697426A (en) | 2019-04-30 |
| CN109697426B true CN109697426B (en) | 2019-10-18 |
Family
ID=66232766
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201811584147.4A Active CN109697426B (en) | 2018-12-24 | 2018-12-24 | Flight based on multi-detector fusion shuts down berth detection method |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN109697426B (en) |
Families Citing this family (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110309735A (en) * | 2019-06-14 | 2019-10-08 | 平安科技(深圳)有限公司 | Abnormality detection method, device, server and storage medium |
| CN110544396A (en) * | 2019-08-12 | 2019-12-06 | 南京莱斯信息技术股份有限公司 | An aircraft parking guidance device |
| CN111427374B (en) * | 2020-02-25 | 2023-03-28 | 深圳市镭神智能系统有限公司 | Airplane berth guiding method, device and equipment |
| CN114419478B (en) * | 2021-12-07 | 2025-02-28 | 优刻得科技股份有限公司 | Flight on/off block time identification method, device, equipment and storage medium |
| CN114220297A (en) * | 2021-12-09 | 2022-03-22 | 飞友科技有限公司 | Airport parking place safety monitoring method based on video |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103076877A (en) * | 2011-12-16 | 2013-05-01 | 微软公司 | Interacting with a mobile device within a vehicle using gestures |
| CN103390173A (en) * | 2013-07-24 | 2013-11-13 | 佳都新太科技股份有限公司 | Plate number character vote algorithm based on SVM (support vector machine) confidence |
| CN104290730A (en) * | 2014-06-20 | 2015-01-21 | 郑州宇通客车股份有限公司 | Radar and video information fusing method applied to advanced emergency brake system |
| CN107850895A (en) * | 2015-05-13 | 2018-03-27 | 优步技术公司 | Autonomous vehicle with guidance assistance |
| CN108009494A (en) * | 2017-11-30 | 2018-05-08 | 中山大学 | A kind of intersection wireless vehicle tracking based on unmanned plane |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8447112B2 (en) * | 2010-12-17 | 2013-05-21 | Xerox Corporation | Method for automatic license plate recognition using adaptive feature set |
-
2018
- 2018-12-24 CN CN201811584147.4A patent/CN109697426B/en active Active
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |