
CN105335985B - Machine-vision-based real-time capture method and system for a docking aircraft - Google Patents


Info

Publication number
CN105335985B
CN105335985B (application CN201410377269.1A; also published as CN105335985A)
Authority
CN
China
Prior art keywords
aircraft, background, image, area, real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410377269.1A
Other languages
Chinese (zh)
Other versions
CN105335985A (en)
Inventor
邓览
程建
王帅
李鸿升
习友宝
王海彬
王龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen CIMC Tianda Airport Support Ltd
Original Assignee
China International Marine Containers Group Co Ltd
Shenzhen CIMC Tianda Airport Support Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China International Marine Containers Group Co Ltd and Shenzhen CIMC Tianda Airport Support Ltd
Priority to CN201410377269.1A
Publication of CN105335985A
Application granted
Publication of CN105335985B
Legal status: Active
Anticipated expiration


Landscapes

  • Image Analysis (AREA)

Abstract

A machine-vision-based method and system for capturing a docking aircraft in real time. The method partitions the monitored scene into separate information-processing zones to raise processing efficiency; simulates the dynamic distribution of the scene background and models it with a median-filtering background model, a mixture-of-Gaussians background model, or a kernel-density-estimation background model, then differences the current frame against the background model to eliminate the background; removes shadows by analyzing the gray values of the foreground region extracted by background elimination; builds a standard front-view aircraft region template, detects and extracts the target region, computes the region's vertical projection curve, and then computes the correlation coefficient between that curve and the vertical projection curve of the standard template to decide whether the target is an aircraft; and further verifies the decision by detecting the engines and front wheel of the captured aircraft. The invention also provides a corresponding docking-aircraft real-time capture system implementing the method.

Description

Machine-vision-based real-time capture method and system for a docking aircraft
Technical field
The present invention relates to an aircraft parking-stand localization and guidance technique, and in particular to a machine-vision-based method and system for capturing a docking aircraft in real time through moving-object detection, feature recognition, and verification during aircraft docking guidance.
Background art
Aircraft docking guidance is the process of leading an arriving aircraft from the end of the taxiway to its stop position on the apron and parking it accurately. Its purpose is to ensure that the docking aircraft berths safely and precisely, to allow convenient and accurate mating of the aircraft with the various ground service interfaces, and to let the boarding bridge abut the aircraft door effectively, thereby improving airport operating efficiency and safety. Automated docking guidance systems fall into three classes by sensor type: (1) buried induction coils; (2) laser scanning and ranging; (3) visual perception. Because laser-ranging and vision-based systems can effectively acquire visual information about the docking aircraft, these two classes are also called visualized docking guidance systems. Buried-coil systems determine the position of the docking aircraft by detecting whether a metal object passes over or stops on the coil. Their advantages are fast response, low cost, and no requirements on weather or illumination, but their error is large and their interference immunity is low; moreover, the buried leads and electronic components are easily crushed, so reliability and measurement accuracy are low, the aircraft type cannot be identified, and scalability and maintainability are poor. Laser scanning and ranging systems determine the aircraft's position, speed, type, and other information by laser ranging and laser scanning; they are unaffected by ambient illumination and only slightly affected by weather, their precision is high, and their scalability and maintainability are good, but their cost is high and the limited laser scanning frequency constrains the real-time performance and stability of the guidance. Visual perception systems acquire images of the aircraft docking process by optical imaging and then determine the position, speed, type, and other information of the docking aircraft by intelligent information processing; their architecture is simple, their cost is low, their intelligence level is high, and their scalability and maintainability are good, but they are demanding of weather and illumination and their adaptability is poor.
With the continuing development of visual-perception imaging, intelligent information processing, and computer technology, visualized aircraft docking guidance can acquire the docking information of a docking aircraft accurately and quickly, and has been applied in airport docking guidance systems. The Visual Docking Guidance System (VDGS) developed by Honeywell (USA) and the Video Docking System (VDOCKS) developed by Siemens, both world-leading vision guidance equipment, have been deployed at a number of airports worldwide, but these systems demand good weather and illumination, adapt poorly otherwise, and lack intelligent information-processing capability. Throughout the docking guidance process, aircraft tracking, localization, type recognition, and identity verification are all performed only after the docking aircraft has been captured. If the guidance system fails to capture the docking aircraft, none of the subsequent operations can be executed. Fast, accurate capture of the docking aircraft is therefore the basis and precondition for the docking guidance system to complete its guidance task, and a fast, accurate capture method can provide more accurate information and more processing time for subsequent aircraft type recognition, tracking, and guidance.
Summary of the invention
The technical problem to be solved by the present invention is to provide a machine-vision-based method and system capable of capturing a docking aircraft in real time, quickly and accurately.
To achieve this goal, the present invention provides a machine-vision-based real-time capture method for a docking aircraft, comprising the following steps:
S1, aircraft docking scene configuration: partition the monitored scene into separate information-processing zones so as to narrow the image region to be processed and raise processing efficiency;
S2, aircraft capture, comprising:
S21, background elimination: simulate the dynamic distribution of the background in the scene and model the background with a median-filtering background model, a mixture-of-Gaussians background model, or a kernel-density-estimation background model, then difference the current frame against the background model to eliminate the background;
S22, shadow elimination: collect statistics of the gray values in the foreground region extracted by background elimination, find the maximum gray value gmax and the minimum gray value gmin, and then remove shadows within the region whose gray value is less than T = gmin + (gmax - gmin) * 0.5;
S23, region classification: build a standard front-view aircraft region template, extract the target region by change detection, compute the region's vertical projection curve, and then compute the correlation coefficient between that curve and the vertical projection curve of the standard front-view aircraft template; if the correlation coefficient is greater than or equal to 0.9, the target is an aircraft;
S24, feature verification: further verify whether the target is an aircraft by detecting the engines and front wheel of the captured aircraft.
In the above machine-vision-based real-time capture method for a docking aircraft, the following step may also be included after step S1:
S10, video image preprocessing: apply gamma correction and denoising to the image so as to improve its visual effect and clarity.
In the above machine-vision-based real-time capture method for a docking aircraft, in the background elimination step S21, the single-Gaussian background model is built as follows:
S211, building the background model: for the initial background image, compute over a period of the video sequence image f(x, y) the average gray value μ0 of each pixel and the variance σ0² of its gray values; μ0 and σ0² form the initial background image B0 with Gaussian distribution η(x, μ0, σ0),
where, with N the number of training frames:
μ0(x, y) = (1/N)·Σ f_i(x, y),  σ0²(x, y) = (1/N)·Σ (f_i(x, y) - μ0(x, y))²
A Gaussian model η(x_i, μ_i, σ_i) is then built for each pixel of every frame image,
where i is the frame index, x_i is the current pixel value, μ_i is the mean of the pixel's Gaussian model, and σ_i is its standard deviation; if η(x_i, μ_i, σ_i) ≤ Tp, where Tp is a probability threshold, the point is judged a foreground point, otherwise a background point;
S212, updating the background model:
If the scene changes, the background model is updated using the real-time information provided by the consecutive images shot by the photographic device, according to the following formula:
μ_{i+1} = (1 - α)·μ_i + α·x_i
where α is the update rate, taking a value between 0 and 1.
In the above machine-vision-based real-time capture method for a docking aircraft, α takes 0.05 if the pixel is background and 0.0025 if the pixel is foreground.
In the above machine-vision-based real-time capture method for a docking aircraft, the feature verification step S24 comprises:
S241, extremely-black region extraction: compute the histogram of the image and, within the middle 1%-99% range of gray levels, find the ratio of the maximum gray value to the minimum gray value whose pixel counts are non-zero; extract the blackest part of the image with a preset extremely-black decision threshold, yielding an extremely-black region map;
S242, quasi-circle detection: extract all outer boundaries of the extremely-black region and, for each boundary, compute its barycentric coordinates from the boundary moments; the (j, i)-order moment of a boundary is defined as
m_ji = Σ_{(x,y) on boundary} x^j · y^i
and the barycentric coordinates are x̄ = m10/m00, ȳ = m01/m00;
for all pixels of the current boundary, compute the distance to this barycenter; if the ratio of the maximum to the minimum computed distance exceeds a circle decision threshold, the region is considered non-circular and the next region is judged; the barycentric coordinates and radius of each region judged quasi-circular are recorded;
S243, detect the aircraft engines among the quasi-circular regions by judging their similarity;
S244, detect the aircraft front wheel; once the engines and the front wheel are confirmed, the capture succeeds.
In the above machine-vision-based real-time capture method for a docking aircraft, in the engine-detection step S243, suppose M quasi-circular regions are detected in total; the similarity of the i-th and the j-th is computed as:
Similarity_ij = |Height_i - Height_j| * |Radius_i - Radius_j|
where Height is the barycenter height and Radius is the radius; when the similarity Similarity_ij is less than a preset similarity threshold, regions i and j are taken to be the aircraft engines.
In the above machine-vision-based real-time capture method for a docking aircraft, in step S243, if no engine is detected, detection is iterated: the extremely-black decision threshold, the circle decision threshold, and the similarity threshold are each increased and steps S241-S243 are repeated; if the engines are still not detected, an opening operation with a 7×7 circular template is applied to all extremely-black regions and steps S242-S243 are repeated;
if the engines are still not detected, the above iterative detection is performed twice more;
if the engines are still not detected, the image is judged to contain no engine.
In the above machine-vision-based real-time capture method for a docking aircraft, the increments of the extremely-black decision threshold, the circle decision threshold, and the similarity threshold are 0.05, 0.5, and 20 respectively.
In the above machine-vision-based real-time capture method for a docking aircraft, in step S244, the region centered between the detected engines with a height of 4 engine radii is taken as the search region for the aircraft front wheel; within the search region, the 256 gray levels are quantized to 64, the first peak and the first valley of the quantized 64-level gray histogram are searched for, and, with peak and valley denoting their positions, the optimal peak position BestPeak and the optimal valley position BestValley in the gray histogram of the original 256 gray levels are defined as:
BestPeak = argmax{ hist256(i) : peak·4 - 4 ≤ i ≤ peak·4 + 3 }
BestValley = argmin{ hist256(i) : BestPeak < i ≤ valley·4 + 3 }
where hist256(i) is the total number of pixels with gray level i in the 256-level gray histogram;
the gray levels are split at this optimal valley BestValley; in the part below BestValley, small spurious regions are removed and a closing operation is applied to the image with a flat elliptical structuring element;
the 7 Hu moment invariants are then computed for the boundaries of all resulting shapes and compared with the moment invariants of a preset standard front-wheel model; when the similarity is below a given threshold, the middle shape is judged to be the front wheel.
To better achieve the above purpose, the present invention also provides a docking-aircraft capture system implementing the above machine-vision-based real-time capture method for a docking aircraft.
The technical effects of the invention are as follows:
The present invention has effective intelligent visual information-processing capability and can effectively realize aircraft capture, tracking, localization, type recognition, and identity verification during the aircraft docking process, together with intelligent visualized apron monitoring, effectively raising the automation, intelligence, and operational management level of civil aviation airports.
The present invention is described in detail below in conjunction with the drawings and specific embodiments, which are not to be taken as limiting the invention.
Brief description of the drawings
Fig. 1 is a schematic diagram of the docking-aircraft real-time capture system of an embodiment of the invention;
Fig. 2 is a schematic diagram of the real-time capture operation for a docking aircraft of an embodiment of the invention;
Fig. 3 is the flow chart of the docking-aircraft real-time capture method of an embodiment of the invention;
Fig. 4 is the scene definition schematic diagram of an embodiment of the invention;
Fig. 5 is the background elimination flow chart of an embodiment of the invention;
Fig. 6 is the front-view vertical projection curve of an aircraft of an embodiment of the invention;
Fig. 7 is the typical extremely-black region schematic diagram of an embodiment of the invention;
Fig. 8 is the flow chart of the similarity judgment of an embodiment of the invention;
Fig. 9 is the gray histogram of 256 gray levels of an embodiment of the invention (abscissa: gray level; ordinate: number of pixels at that gray level);
Fig. 10 is the gray histogram of 64 gray levels of an embodiment of the invention (abscissa: gray level; ordinate: number of pixels at that gray level);
Fig. 11 is an example of the effect of the closing operation of an embodiment of the invention.
Wherein, the reference numerals are:
1 photographic device
2 central processing device
3 display device
4 aircraft parking apron
41 stop line
42 guide line
5 aircraft
6 capture zone
7 tracking and localization zone
8 ground service equipment zone
9 mark points
91 first mark point
10 first peak
11 first valley
S1-S24 steps
Detailed description of the embodiments
The structure and working principle of the invention are described in detail below with reference to the drawings:
Referring to Fig. 1 and Fig. 2: Fig. 1 is a schematic diagram of the docking-aircraft real-time capture system of an embodiment of the invention, and Fig. 2 is a schematic diagram of its real-time capture operation. The machine-vision-based docking-aircraft tracking and localization system of the invention consists mainly of a photographic device 1, a central processing device 2, and a display device 3. The photographic device 1 is connected to the central processing device 2, and the central processing device 2 to the display device 3; the photographic device 1 sends the shot images to the central processing device 2, and the central processing device 2 sends display content containing guidance information to the display device 3. The photographic device 1 is mounted behind the stop line 41 of the aircraft parking apron 4, preferably directly facing the guide line 42, at a height above the fuselage of the aircraft 5, about 8 m being preferable. The central processing device 2 may be a computer capable of receiving, processing, and storing data, generating display image data, and sending data; it includes functional modules for aircraft docking scene configuration, video image preprocessing, aircraft capture, aircraft tracking, aircraft localization, aircraft type recognition, and identity verification, as well as a module for information display, all installed as software in the central processing device 2. The display device 3 is preferably a large information display screen installed in the airport for the aircraft crew to watch; in addition, airport staff may be equipped with hand-held display devices to observe the aircraft's condition.
Referring to Fig. 3, the flow chart of the docking-aircraft real-time capture method of an embodiment of the invention, the machine-vision-based real-time capture method of the invention comprises the following steps:
Step S1, aircraft docking scene configuration: partition the monitored scene into separate information-processing zones so as to narrow the image region to be processed and raise processing efficiency;
Scene definition must first be carried out on the actual scene: the monitored scene is partitioned on the computer into different information-processing zones to narrow the image region to be processed and raise processing efficiency, and the guide line, stop line, and similar information closely related to aircraft localization are marked. First, a ruler of alternating black and white intervals is laid out next to the guide line in the actual scene, with black and white intervals of equal length and a maximum interval of 1 m; depending on the resolution of the photographic device, finer intervals such as 0.5 m or 0.25 m may be used. The total length of the ruler need not exceed the range over which the aircraft's position is resolved by distance, usually 50 m. The remaining work is done by pre-written software, which opens and displays the picture shot by the photographic device and lets the operator manually draw lines, boxes, and points to mark the relevant regions, which are then recorded.
An image of the aircraft docking scene is shot with no aircraft present and displayed; Fig. 4 is the scene definition schematic diagram of an embodiment of the invention. The picture frame indicates the picture shown during the calibration operation, and the dashed boxes indicate positions that can be marked manually. On the displayed image, lines are drawn by hand to mark the guide line 42 and the stop line 41, and their positions in the picture are recorded; boxes are then drawn by hand to mark the capture zone 6, the tracking and localization zone 7, and the related ground service zone 8, and the positions of the capture zone 6 and the tracking and localization zone 7 in the picture are recorded; finally, according to the ruler laid out in the scene, points are drawn by hand next to the guide line to mark all the mark points 9 at intervals of at most 1 m, and the position of each mark point 9 in the picture and its distance in the actual scene from the first mark point 91 are recorded. When marking the guide line 42, the stop line 41, and the mark points 9, the relevant image portion can be magnified until it is tens of pixels wide and marked manually at its middle, to improve marking precision. The positions of the marked capture zone 6 and tracking and localization zone 7 need not be very strict: the top edge of the capture zone 6 corresponds to about 100 m from the stop line 41 in the actual scene and its bottom edge to about 50 m; the top edge of the tracking and localization zone 7 corresponds to about 50 m from the stop line 41, and its bottom edge lies below the stop line 41.
After step S1, a video image preprocessing step S10 may also be included, in which gamma correction and denoising are applied to the image to improve its visual effect and clarity. Common image processing methods, including brightness correction and denoising, improve the visual effect of the image and the clarity of its elements, or make the image more amenable to computer processing.
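The preprocessing of step S10 can be sketched as follows in NumPy. This is an illustrative sketch, not the patent's implementation: the gamma value and the median kernel size are assumptions, and the function name is my own.

```python
import numpy as np

def preprocess(gray, gamma=0.8, ksize=3):
    """Step S10 sketch: gamma correction followed by median denoising.

    `gamma` and `ksize` are illustrative values not specified by the patent.
    """
    # Gamma correction on a [0, 255] grayscale image.
    norm = gray.astype(np.float64) / 255.0
    corrected = np.clip(np.rint(255.0 * norm ** gamma), 0, 255).astype(np.uint8)
    # 3x3 median filter via a sliding window (no OpenCV/SciPy dependency).
    pad = ksize // 2
    padded = np.pad(corrected, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (ksize, ksize))
    return np.median(windows, axis=(2, 3)).astype(np.uint8)
```

The median filter here is written with `sliding_window_view` purely to stay self-contained; in practice a library call such as a median-blur routine would be used.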
Step S2, aircraft capture, comprising:
Step S21, background elimination: simulate the dynamic distribution of the background in the scene and model the background with a median-filtering background model, a mixture-of-Gaussians background model, or a kernel-density-estimation background model, then difference the current frame against the background model to eliminate the background;
Referring to Fig. 5, the background elimination flow chart of an embodiment of the invention. For the single-Gaussian background model, the model is used to simulate the dynamic distribution of the background in the scene and to model the background, after which the current frame is differenced against the background model to eliminate the background. In a scene without aircraft, i.e. the desired background, the camera continuously collects N frames, and the background model is trained on these N background frames to determine the mean and variance of the Gaussian distribution. This comprises the following steps:
Step S211, building the background model: for the initial background image, compute over a period of the video sequence image f(x, y) the average gray value μ0 of each pixel and the variance σ0² of its gray values; μ0 and σ0² form the initial background image B0 with Gaussian distribution η(x, μ0, σ0),
where, with N the number of training frames:
μ0(x, y) = (1/N)·Σ f_i(x, y),  σ0²(x, y) = (1/N)·Σ (f_i(x, y) - μ0(x, y))²
A Gaussian model η(x_i, μ_i, σ_i) is then built for each pixel of every frame image,
where the subscript i is the frame index, x_i is the current pixel value, μ_i is the mean of the pixel's Gaussian model, and σ_i is its standard deviation. If η(x_i, μ_i, σ_i) ≤ Tp, where Tp is a probability threshold, the point is judged a foreground point; otherwise it is a background point (x_i is then also said to match η(x_i, μ_i, σ_i)). In use, an equivalent threshold may replace the probability threshold. Let d_i = |x_i - μ_i|; in the common one-dimensional case, the foreground detection threshold is often set on d_i/σ_i: if d_i/σ_i > T (T taking a value between 2 and 3), the point is judged a foreground point, otherwise a background point.
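Step S211 and the d_i/σ_i foreground test can be sketched as follows. This is a minimal NumPy sketch under stated assumptions: the function names are my own, and the standard deviation is floored at 1 to avoid division by zero on perfectly constant training pixels (a detail the patent does not address).

```python
import numpy as np

def train_background(frames):
    """Step S211: per-pixel mean and standard deviation over N background frames."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    mu0 = stack.mean(axis=0)
    sigma0 = stack.std(axis=0)
    return mu0, np.maximum(sigma0, 1.0)  # floor sigma to avoid division by zero

def foreground_mask(frame, mu, sigma, T=2.5):
    """A pixel is foreground when d_i / sigma_i = |x_i - mu_i| / sigma_i > T,
    with T between 2 and 3 as stated in the text."""
    d = np.abs(frame.astype(np.float64) - mu)
    return d / sigma > T
```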
Other background elimination methods use other background models, such as the median-filtering background model, the mixture-of-Gaussians model, and the kernel-density-estimation background model. The median-filtering model takes the median of N frames as the background; the algorithm is simple but the effect is mediocre. The mixture-of-Gaussians model simulates the changes of a dynamic scene by maintaining several Gaussian models; the algorithm is complex and real-time performance is poor. The kernel-density-estimation background model is a powerful non-parametric estimation method that can finely simulate the distribution of a dynamic scene and adapt to some illumination change, but the algorithm is very complex, its memory demand is very high, and its real-time performance is very poor.
Step S212, updating the background model
If the scene changes, the background model needs to respond to these changes and is therefore updated, using the real-time information provided by the consecutive images shot by the photographic device, according to the following formula:
μ_{i+1} = (1 - α)·μ_i + α·x_i
where α is the update rate, taking a value between 0 and 1. If the pixel is background, the update rate α preferably takes 0.05; if the pixel is foreground, α preferably takes 0.0025.
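The update rule of step S212 with its two preferred rates can be sketched as follows; the function name and the use of a per-pixel rate array are my own phrasing of what the text describes.

```python
import numpy as np

def update_background(mu, frame, fg_mask, alpha_bg=0.05, alpha_fg=0.0025):
    """Step S212: mu_{i+1} = (1 - alpha) * mu_i + alpha * x_i, with the slower
    rate on foreground pixels so a stopped aircraft is not absorbed into the
    background model."""
    alpha = np.where(fg_mask, alpha_fg, alpha_bg)
    return (1.0 - alpha) * mu + alpha * frame.astype(np.float64)
```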
Step S22, shadow elimination: collect statistics of the gray values in the foreground region extracted by background elimination, find the maximum gray value gmax and the minimum gray value gmin, and then remove shadows within the region whose gray value is less than T = gmin + (gmax - gmin) * 0.5.
In this low-gray region, the gray-level ratio between each pixel and the corresponding background pixel is computed; a pixel is considered a shadow point preferably when this ratio lies between 0.3 and 0.9. Morphological processing is then applied: erosion followed by dilation eliminates small regions. Morphological processing usually slides a structuring element over the image in a convolution-like manner, applying a specific logical operation to the image pixels under the element at each pixel position; it removes noise and interference and improves the signal-to-noise ratio of the image, with dilation and erosion as its basic operations. Thus the small non-shadow regions among the shadow points are removed by repeated morphological erosion and dilation, and the detected shadow region is eliminated; finally, repeated morphological dilation and erosion fill the holes in the required target region and connect its parts.
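The shadow-candidate test of step S22 (low-gray foreground pixels whose ratio to the background lies in [0.3, 0.9]) can be sketched as follows; the morphological clean-up is omitted here, and the function name is my own.

```python
import numpy as np

def shadow_mask(frame, background, fg_mask, lo=0.3, hi=0.9):
    """Step S22 sketch: candidate shadow pixels are foreground pixels darker
    than T = gmin + (gmax - gmin) * 0.5 whose gray-level ratio to the
    background lies in [lo, hi]."""
    vals = frame[fg_mask].astype(np.float64)
    gmin, gmax = vals.min(), vals.max()
    T = gmin + (gmax - gmin) * 0.5
    ratio = frame.astype(np.float64) / np.maximum(background.astype(np.float64), 1.0)
    return fg_mask & (frame < T) & (ratio >= lo) & (ratio <= hi)
```

In a full pipeline the resulting mask would then be cleaned by the erosion/dilation passes the text describes.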
Step S23, region classification: build a standard front-view aircraft region template; since an aircraft region is narrow at both sides and wide in the middle, this template distinguishes aircraft from non-aircraft well. Extract the target region by change detection and compute the region's vertical projection curve (see Fig. 6, the front-view vertical projection curve of an aircraft of an embodiment of the invention), then compute the correlation coefficient between that curve and the vertical projection curve of the standard front-view aircraft region template; if the correlation coefficient is large, i.e. greater than or equal to 0.9, the target is an aircraft; otherwise it is not.
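The projection-correlation test of step S23 can be sketched as follows. The patent does not say how curves of different widths are aligned before correlation; resampling both to a common length is an assumption made here, and the function names are my own.

```python
import numpy as np

def vertical_projection(mask, n_points=64):
    """Column-wise foreground pixel counts, resampled to a fixed length so
    regions of different widths can be compared (the resampling is assumed)."""
    p = mask.sum(axis=0).astype(np.float64)
    return np.interp(np.linspace(0, len(p) - 1, n_points), np.arange(len(p)), p)

def is_aircraft(region_mask, template_mask, thresh=0.9):
    """Step S23: the target is judged an aircraft when the correlation
    coefficient of the two vertical projection curves is >= 0.9."""
    r = np.corrcoef(vertical_projection(region_mask),
                    vertical_projection(template_mask))[0, 1]
    return r >= thresh
```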
Step S24, feature verification: further verify whether the target is an aircraft by detecting the engines and front wheel of the captured aircraft.
The feature verification step S24 further comprises:
Step S241, extremely-black region extraction: compute the histogram of the image and, within the middle 1%-99% range of gray levels (usually gray levels 2-253), find the ratio of the maximum gray value (gmax) to the minimum gray value (gmin) whose pixel counts are non-zero; extract the blackest part of the image with a preset threshold, yielding an extremely-black region map.
In this embodiment, an extremely-black decision threshold (BlackestJudge) preset to 0.05 is used; it means the blackest 5% of the image and should be adjusted to the actual scene until exactly the two engine outer contours are segmented out. The region of the image whose gray values lie between gmin and (gmax - gmin) * BlackestJudge + gmin, i.e. the blackest part of the image, is extracted as an extremely-black region map. A typical extremely-black region map is shown in Fig. 7, the typical extremely-black region schematic diagram of an embodiment of the invention; the interior of each shape in the figure is an extremely-black region.
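Step S241 can be sketched as follows; the function name is my own, and the sketch assumes at least one non-zero histogram bin exists in the central 1%-99% range.

```python
import numpy as np

def blackest_region(gray, blackest_judge=0.05):
    """Step S241: extract the darkest fraction of the image. gmin/gmax are the
    lowest/highest gray levels with non-zero histogram counts, searched within
    the central range (levels 1..254)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    nonzero = np.nonzero(hist[1:255])[0] + 1
    gmin, gmax = int(nonzero.min()), int(nonzero.max())
    upper = (gmax - gmin) * blackest_judge + gmin
    return (gray >= gmin) & (gray <= upper)
```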
Step S242, quasi-circle detection: extract all outer boundaries of the extremely-black region and, for each boundary, compute its barycentric coordinates from the boundary moments; the (j, i)-order moment of a boundary is defined as
m_ji = Σ_{(x,y) on boundary} x^j · y^i
and the barycentric coordinates are x̄ = m10/m00, ȳ = m01/m00.
For all pixels of the current boundary, compute the distance to this barycenter; if the ratio of the maximum to the minimum computed distance exceeds a preset value (for example a circle decision threshold circleJudge preset to 1.5), the region is considered non-circular and the next region is judged; for each region that passes the judgment, the barycentric coordinates and the radius (the average distance from the boundary to the barycenter) of the quasi-circular region are recorded for the similarity judgment.
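The circle test of step S242 can be sketched as follows, taking the boundary as an (N, 2) array of (x, y) points; the function name is my own.

```python
import numpy as np

def circle_test(boundary, circle_judge=1.5):
    """Step S242: barycenter from the boundary moments (m10/m00, m01/m00);
    a boundary is quasi-circular when the max/min barycenter distance ratio
    does not exceed circle_judge. Returns (is_circle, barycenter, radius)."""
    pts = np.asarray(boundary, dtype=np.float64)
    m00 = len(pts)                                      # zeroth-order moment
    cx, cy = pts[:, 0].sum() / m00, pts[:, 1].sum() / m00
    d = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
    radius = d.mean()  # recorded as the region's radius (mean boundary distance)
    return d.max() / d.min() <= circle_judge, (cx, cy), radius
```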
Step S243, detect the aircraft engines among the quasi-circular regions by judging their similarity;
The flow chart determined referring to the similarity that Fig. 8, Fig. 8 are one embodiment of the invention.In the present embodiment, it is assumed that Yi Gongjian M similar round region is measured, wherein the calculating of i-th and j-th of similarity are as follows:
Similarityij=| Heighti-Heightj|*|Radiusi-Radiusj|
where Height is the centroid height and Radius is the radius (i.e. the average distance from the boundary to the centroid). When the similarity Similarity_ij is less than the threshold similarThresh, preset to 40, regions i and j are taken to be the aircraft engines.
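A sketch of the pairwise engine matching, assuming each quasi-circular candidate has been reduced to a (centroid height, radius) pair; the default threshold of 40 follows the embodiment, and all names are illustrative:

```python
from itertools import combinations

def find_engine_pair(candidates, similar_thresh=40.0):
    """Return indices (i, j) of the first pair of candidates whose
    Similarity_ij = |Height_i - Height_j| * |Radius_i - Radius_j|
    is below similar_thresh, or None when no pair qualifies.
    Each candidate is a (centroid_height, radius) tuple."""
    for i, j in combinations(range(len(candidates)), 2):
        hi, ri = candidates[i]
        hj, rj = candidates[j]
        if abs(hi - hj) * abs(ri - rj) < similar_thresh:
            return i, j
    return None

# Two engines at nearly equal height and radius, plus a spurious blob.
regions = [(120.0, 18.0), (300.0, 4.0), (122.0, 17.0)]
pair = find_engine_pair(regions)
```

The two engine-like candidates (indices 0 and 2) give |120 - 122| * |18 - 17| = 2 < 40, so they are matched; the spurious blob pairs off with neither.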
If no aircraft engine is detected, the detection is iterated: the extremely-dark decision threshold (BlackestJudge), the circle decision threshold (circleJudge) and the similarity threshold (similarThresh) are each increased; in the present embodiment the respective increments are preferably 0.05, 0.5 and 20. Steps S241-S243 are then repeated. If an aircraft engine is still not detected, an opening operation with a 7*7 circular template is applied to all extremely dark regions, and steps S242-S243 are repeated.
If an aircraft engine is still not detected, the above iterative detection is carried out twice more.
If an aircraft engine is still not detected, it is determined that no engine is present in the image. When a subsequent frame is processed, if its previous frame required n iteration steps, the iteration starts directly from step n-1.
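The escalation schedule described above can be sketched as a simple retry loop; the detect callback stands in for steps S241-S243, and the stub below is purely illustrative:

```python
def iterative_detect(detect, max_rounds=3,
                     black0=0.05, circle0=1.5, similar0=40.0,
                     d_black=0.05, d_circle=0.5, d_similar=20.0):
    """Retry detection, relaxing all three thresholds each round by
    the embodiment's increments (0.05, 0.5, 20).

    detect(black, circle, similar) returns a truthy result when the
    engines are found. Returns (result, rounds_used); per the text,
    a subsequent frame may start its iteration near rounds_used.
    """
    for n in range(max_rounds):
        result = detect(black0 + n * d_black,
                        circle0 + n * d_circle,
                        similar0 + n * d_similar)
        if result:
            return result, n + 1
    return None, max_rounds

# Stub detector that only succeeds once the dark threshold reaches 0.15.
calls = []
def stub(black, circle, similar):
    calls.append((round(black, 2), circle, similar))
    return "engines" if black >= 0.15 else None

found, rounds = iterative_detect(stub)
```

The stub is probed three times, with all three thresholds relaxed in lock-step on each round.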
Step S244, detecting the aircraft nose wheel; when the aircraft engines and the nose wheel are both confirmed, the capture succeeds.
The region centered on the engines detected in step S243, with a height of four engine radii, is taken as the search region for the aircraft nose wheel. Within the search region, the 256 gray levels are quantized to 64 levels; see Fig. 9 and Fig. 10. Fig. 9 is the 256-level gray histogram of one embodiment of the invention, where the abscissa is the gray level and the ordinate is the number of pixels at that level; Fig. 10 is the corresponding 64-level gray histogram. The first peak 10 and the first valley 11 of the quantized 64-level histogram are searched for. If the first peak position after quantization is peak and the valley position is valley, the optimal peak position BestPeak and the optimal valley position BestValley in the original 256-level gray histogram are defined as follows:

BestPeak = argmax { hist256(i) }, peak*4-4 ≤ i ≤ peak*4+3
BestValley = argmin { hist256(i) }, BestPeak < i ≤ valley*4+3

where hist256(i) is the total number of pixels with gray level i in the 256-level gray histogram.
The gray levels are segmented at this optimal valley BestValley. For the part below BestValley, small-area noise points are removed, and a closing operation is applied to the image with a flat elliptical structuring element; see Fig. 11, which is an example of the effect of the closing operation in one embodiment of the invention.
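The BestPeak/BestValley formulas are rendered as images in the original publication, so the sketch below is one plausible reconstruction: each 64-level bin is assumed to cover four of the original 256 levels, and the coarse peak/valley positions are refined inside the corresponding fine-level windows (the exact window bounds are an assumption):

```python
def best_peak_valley(hist256):
    """Locate BestPeak/BestValley in a 256-level histogram: quantize
    to 64 levels (each coarse bin sums four fine levels), find the
    first peak and first valley of the coarse histogram, then refine
    inside the matching windows of the fine histogram."""
    hist64 = [sum(hist256[4 * k:4 * k + 4]) for k in range(64)]
    peak = next(k for k in range(1, 63)
                if hist64[k - 1] < hist64[k] >= hist64[k + 1])
    valley = next(k for k in range(peak + 1, 63)
                  if hist64[k - 1] > hist64[k] <= hist64[k + 1])
    lo = max(0, peak * 4 - 4)
    best_peak = max(range(lo, min(255, peak * 4 + 3) + 1),
                    key=lambda i: hist256[i])
    best_valley = min(range(best_peak + 1, min(255, valley * 4 + 3) + 1),
                      key=lambda i: hist256[i])
    return best_peak, best_valley

# Synthetic 256-level histogram with two modes and a valley between.
hist = [0] * 256
hist[8:12] = [5, 9, 20, 7]     # rise to the first peak
hist[12:16] = [6, 5, 4, 3]
hist[16:20] = [2, 1, 1, 2]     # the first valley
hist[20:24] = [3, 5, 8, 9]     # second mode
bp, bv = best_peak_valley(hist)
```

Quantization smooths small fluctuations, so the coarse search is robust; the refinement step then recovers the exact 256-level peak (here level 10) and valley (here level 17).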
The seven Hu moment features of the boundaries of all remaining shapes are then computed and compared with the Hu moment features of a preset standard nose-wheel model. (On Hu moment features: geometric moment invariants were proposed by Hu in "Visual pattern recognition by moment invariants" in 1962 and are invariant to translation, rotation and scale; Hu constructed seven invariants from the second- and third-order central moments, so the seven Hu moments are uniquely determined.) When the similarity is below a given threshold (preferably 1), the shape is judged to be a wheel. In this way the positions of three wheels are obtained, and the lower middle one is the nose wheel.
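A self-contained sketch of the Hu-moment comparison (pure Python over a point set; in practice a library routine such as OpenCV's cv2.HuMoments would be used). For a point set, the seven invariants below are exactly translation invariant; full scale invariance additionally requires computing them over a filled region:

```python
def hu_moments(points):
    """Seven Hu invariants computed from normalized central moments of
    a 2-D point set (e.g. the pixels of a shape boundary)."""
    n = len(points)
    xbar = sum(x for x, _ in points) / n
    ybar = sum(y for _, y in points) / n

    def mu(p, q):  # central moment
        return sum((x - xbar) ** p * (y - ybar) ** q for x, y in points)

    def eta(p, q):  # normalized central moment
        return mu(p, q) / mu(0, 0) ** (1 + (p + q) / 2)

    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    e30, e03, e21, e12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return [
        e20 + e02,
        (e20 - e02) ** 2 + 4 * e11 ** 2,
        (e30 - 3 * e12) ** 2 + (3 * e21 - e03) ** 2,
        (e30 + e12) ** 2 + (e21 + e03) ** 2,
        (e30 - 3 * e12) * (e30 + e12)
        * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
        + (3 * e21 - e03) * (e21 + e03)
        * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2),
        (e20 - e02) * ((e30 + e12) ** 2 - (e21 + e03) ** 2)
        + 4 * e11 * (e30 + e12) * (e21 + e03),
        (3 * e21 - e03) * (e30 + e12)
        * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
        - (e30 - 3 * e12) * (e21 + e03)
        * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2),
    ]

def wheel_score(hu_a, hu_b):
    """L1 distance between Hu feature vectors; the patent accepts a
    shape as a wheel when such a score is below a threshold
    (preferably 1)."""
    return sum(abs(a - b) for a, b in zip(hu_a, hu_b))

# Translation leaves the features unchanged.
shape = [(0, 0), (4, 0), (4, 1), (3, 2), (1, 2), (0, 1)]
shifted = [(x + 7, y + 3) for x, y in shape]
score = wheel_score(hu_moments(shape), hu_moments(shifted))
```

Since central moments subtract the centroid, shifting the whole shape produces bit-identical features, so the score is zero.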
The aircraft capture method and system for an intelligent aircraft docking guidance system of the invention acquires video image information of the aircraft berthing process through a visual imaging subsystem, transmits the acquired video images to a central processing device for real-time processing and analysis, and finally displays the guidance information on a display device. To capture the berthing aircraft quickly and accurately, a stable target region is obtained: the entire capture procedure is carried out only within the berthing-aircraft capture region defined in the scene, which narrows the processed image region, improves processing efficiency, and facilitates fast aircraft capture. Within the capture region, change detection, comprising background elimination, shadow elimination and region classification, first extracts the moving-object regions; the extracted regions are then classified to decide whether they are a berthing aircraft, thereby achieving accurate capture of the berthing aircraft.
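The capture stage described above can be wired together as in the following illustrative sketch, where every stage is a stand-in callback rather than the patent's concrete algorithm:

```python
def capture_berthing_aircraft(frame, background, template_profile,
                              subtract, remove_shadows, classify,
                              verify_features):
    """Illustrative pipeline wiring for the capture stage: change
    detection (background subtraction + shadow removal), region
    classification against a frontal-aircraft template, then feature
    verification (engines + nose wheel). All callbacks are stand-ins
    for the patent's concrete steps S21-S24."""
    foreground = subtract(frame, background)          # S21 background elimination
    foreground = remove_shadows(foreground)           # S22 shadow elimination
    if not classify(foreground, template_profile):    # S23 region classification
        return None                                   # not a berthing aircraft
    return verify_features(foreground)                # S24 engines + nose wheel

# Stub stages: the classifier accepts, and verification returns a record.
result = capture_berthing_aircraft(
    frame="frame", background="bg", template_profile="tpl",
    subtract=lambda f, b: "fg",
    remove_shadows=lambda f: f,
    classify=lambda f, t: True,
    verify_features=lambda f: {"engines": 2, "nose_wheel": True})

rejected = capture_berthing_aircraft(
    "frame", "bg", "tpl",
    lambda f, b: "fg", lambda f: f, lambda f, t: False, lambda f: {})
```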
Of course, the present invention may also have various other embodiments, and those skilled in the art may make corresponding changes and modifications without departing from the spirit and substance of the invention; all such corresponding changes and modifications shall fall within the protection scope of the appended claims of the present invention.

Claims (10)

1. A machine-vision-based method for real-time capture of a docking aircraft, characterized by comprising the following steps:

S1, aircraft berth scene setting: the monitored scene is divided into different information-processing functional areas, so as to narrow the image processing region and improve processing efficiency;

S2, aircraft capture, comprising:

S21, background elimination: the dynamic distribution of the background in the scene is modeled with a median-filter background model, a Gaussian mixture background model, or a background model based on kernel density estimation, and the current frame is then differenced against the background model to eliminate the background;

S22, shadow elimination: the gray values in the foreground region extracted by background elimination are analyzed to find the maximum gray value gmax and the minimum gray value gmin, and shadow elimination is performed in the region whose gray values are less than T = gmin + (gmax - gmin) * 0.5;

S23, region classification: a standard frontal aircraft region template is established; the target region is extracted from the foreground region by change detection and its vertical projection curve is computed; the correlation coefficient between this vertical projection curve and that of the standard frontal aircraft region template is then computed, and if the correlation coefficient is greater than or equal to 0.9, the target is an aircraft;

S24, feature verification: whether the target is an aircraft is further verified by detecting the engines and the nose wheel of the captured aircraft;

wherein step S24 comprises:

S241, extraction of the extremely dark image region;

S242, quasi-circle detection;

S243, detecting the aircraft engines in the quasi-circular regions by judging similarity;

S244, detecting the aircraft nose wheel; when the aircraft engines and the nose wheel are confirmed, the capture succeeds.

2. The machine-vision-based method for real-time capture of a docking aircraft according to claim 1, characterized by further comprising, after step S1, the following step:

S10, video image preprocessing: brightness correction and denoising are performed on the image, so as to improve the visual effect and clarity of the image.

3. The machine-vision-based method for real-time capture of a docking aircraft according to claim 1 or 2, characterized in that in the background elimination step S21, the Gaussian mixture background model is established by the following steps:

S211, building the background model: the background image is initialized by computing, over a period of time, the average gray value μ0 of each pixel of the video sequence image f(x, y) and the variance σ0² of the pixel gray values, and composing from μ0 and σ0 an initial background image B0 with Gaussian distribution η(x, μ0, σ0), where:

μ0(x, y) = (1/N) Σ_(i=0..N-1) f_i(x, y)
σ0²(x, y) = (1/N) Σ_(i=0..N-1) [f_i(x, y) - μ0(x, y)]²

a Gaussian model η(x_i, μ_i, σ_i) is then built for every pixel of each frame, where i is the frame index, x_i is the current pixel value, μ_i is the mean of the current pixel's Gaussian model, and σ_i is the standard deviation of the current pixel's Gaussian model; if η(x_i, μ_i, σ_i) ≤ Tp, where Tp is a probability threshold, the point is judged a foreground point, otherwise a background point;

S212, updating the background model: if the scene changes, the background model is updated using the real-time information provided by the consecutive images captured by the camera, as follows:

μ_(i+1) = (1 - α) μ_i + α x_i

where α is the update rate, taking a value between 0 and 1.

4. The machine-vision-based method for real-time capture of a docking aircraft according to claim 3, characterized in that α is 0.05 if the pixel is background and 0.0025 if the pixel is foreground.

5. The machine-vision-based method for real-time capture of a docking aircraft according to claim 1, characterized in that:

the extraction of the extremely dark image region in step S241 consists of computing a gray-level histogram of the image, obtaining within the middle 1%~99% of the gray levels the ratio of the maximum gray value to the minimum gray value whose pixel counts are non-zero, and extracting the darkest part of the image with a preset extremely-dark decision threshold to obtain an extremely dark region;

the quasi-circle detection in step S242 consists of extracting all outer boundaries of the extremely dark region and computing, for each boundary, the centroid coordinates from the boundary moments, the (j, i)-th order moment of the boundary being defined as:

m_ji = Σ_(x,y) x^j · y^i, summed over the pixels (x, y) of the boundary,

with centroid coordinates (x̄, ȳ) = (m_10 / m_00, m_01 / m_00);

for all pixels of the current boundary, their distance to the centroid is computed; if the ratio of the maximum distance to the minimum distance is greater than a circle decision threshold, the region is judged non-circular and the next region is examined; the centroid coordinates and radius of each region judged quasi-circular are recorded.

6. The machine-vision-based method for real-time capture of a docking aircraft according to claim 5, characterized in that in the step S243 of detecting the aircraft engines in the quasi-circular regions, assuming that M quasi-circular regions have been detected in total, the similarity of the e-th and the f-th regions is computed as:

Similarity_ef = |Height_e - Height_f| * |Radius_e - Radius_f|

where Height is the centroid height and Radius is the radius; when the similarity Similarity_ef is less than a preset similarity threshold, regions e and f are taken to be the aircraft engines.

7. The machine-vision-based method for real-time capture of a docking aircraft according to claim 6, characterized in that in step S243, if no aircraft engine is detected, iterative detection is performed: the extremely-dark decision threshold, the circle decision threshold and the similarity threshold are each increased, and the following steps are repeated:

S241, extraction of the extremely dark image region;
S242, quasi-circle detection;
S243, detecting the aircraft engines in the quasi-circular regions by judging similarity;

if still no aircraft engine is detected, an opening operation with a 7*7 circular template is applied to all extremely dark regions, and the following steps are repeated:

S242, quasi-circle detection;
S243, detecting the aircraft engines in the quasi-circular regions by judging similarity;

if still no aircraft engine is detected, the above iterative detection is performed twice more;

if still no aircraft engine is detected, it is determined that no engine is present in the image.

8. The machine-vision-based method for real-time capture of a docking aircraft according to claim 7, characterized in that the increments of the extremely-dark decision threshold, the circle decision threshold and the similarity threshold are 0.05, 0.5 and 20 respectively.

9. The machine-vision-based method for real-time capture of a docking aircraft according to claim 5, characterized in that in step S244, the region centered on the detected aircraft engines, with a height of four engine radii, is taken as the search region for the aircraft nose wheel; within the search region, the 256 gray levels are quantized to 64 levels, and the first peak and the first valley of the 64-level gray histogram are searched for; the optimal peak position BestPeak and the optimal valley position BestValley in the original 256-level gray histogram are defined as follows:

BestPeak = argmax { hist256(i) }, peak*4-4 ≤ i ≤ peak*4+3
BestValley = argmin { hist256(i) }, BestPeak < i ≤ valley*4+3

where hist256(i) is the total number of pixels with gray level i in the 256-level gray histogram;

the gray levels are segmented at the optimal valley BestValley; for the part below BestValley, small-area noise points are removed, and a closing operation is applied to the image with a flat elliptical structuring element; the seven Hu moment features of the boundaries of all shapes are then computed and compared with the moment features of a preset standard nose-wheel model, and when the similarity is below a set threshold the middle one is judged to be the nose wheel.

10. A docking-aircraft real-time capture system for use with the machine-vision-based method for real-time capture of a docking aircraft according to any one of claims 1 to 9.
CN201410377269.1A 2014-08-01 2014-08-01 A kind of real-time capturing method and system of docking aircraft based on machine vision Active CN105335985B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410377269.1A CN105335985B (en) 2014-08-01 2014-08-01 A kind of real-time capturing method and system of docking aircraft based on machine vision


Publications (2)

Publication Number Publication Date
CN105335985A CN105335985A (en) 2016-02-17
CN105335985B true CN105335985B (en) 2019-03-01

Family

ID=55286490

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410377269.1A Active CN105335985B (en) 2014-08-01 2014-08-01 A kind of real-time capturing method and system of docking aircraft based on machine vision

Country Status (1)

Country Link
CN (1) CN105335985B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108683865A (en) * 2018-04-24 2018-10-19 长沙全度影像科技有限公司 A kind of background replacement system and method for bullet time special efficacy
CN108921891A (en) * 2018-06-21 2018-11-30 南通西塔自动化科技有限公司 A kind of machine vision method for rapidly positioning that can arbitrarily rotate
CN109785357B (en) * 2019-01-28 2020-10-27 北京晶品特装科技有限责任公司 Robot intelligent panoramic photoelectric reconnaissance method suitable for battlefield environment
CN109887343B (en) * 2019-04-04 2020-08-25 中国民航科学技术研究院 Automatic acquisition and monitoring system and method for flight ground service support nodes

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1399767A (en) * 1999-10-29 2003-02-26 安全门国际股份公司 Aircraft identification and docking guidance systems
CN101739694A (en) * 2010-01-07 2010-06-16 北京智安邦科技有限公司 Image analysis-based method and device for ultrahigh detection of high voltage transmission line
CN102509101A (en) * 2011-11-30 2012-06-20 昆山市工业技术研究院有限责任公司 Background updating method and vehicle target extracting method in traffic video monitoring
CN103049788A (en) * 2012-12-24 2013-04-17 南京航空航天大学 Computer-vision-based system and method for detecting number of pedestrians waiting to cross crosswalk
CN103177586A (en) * 2013-03-05 2013-06-26 天津工业大学 Machine-vision-based urban intersection multilane traffic flow detection method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100933483B1 (en) * 2008-01-28 2009-12-23 국방과학연구소 Target recognition method in the image


Also Published As

Publication number Publication date
CN105335985A (en) 2016-02-17

Similar Documents

Publication Publication Date Title
CN105373135B (en) A kind of method and system of aircraft docking guidance and plane type recognition based on machine vision
Yun et al. An automatic hand gesture recognition system based on Viola-Jones method and SVMs
CN108122247B (en) A kind of video object detection method based on saliency and feature prior model
CN109977782B (en) Cross-store operation behavior detection method based on target position information reasoning
CN102542289B (en) Pedestrian volume statistical method based on plurality of Gaussian counting models
CN107133973B (en) A ship detection method in a bridge anti-collision system
CN105893946B (en) A detection method for frontal face images
CN106875424A (en) A kind of urban environment driving vehicle Activity recognition method based on machine vision
CN108776974B (en) A kind of real-time modeling method method suitable for public transport scene
CN102156983A (en) Pattern recognition and target tracking based method for detecting abnormal pedestrian positions
CN102945554A (en) Target tracking method based on learning and speeded-up robust features (SURFs)
CN105335985B (en) A kind of real-time capturing method and system of docking aircraft based on machine vision
CN107798691B (en) A vision-based real-time detection and tracking method for autonomous landing landmarks of unmanned aerial vehicles
CN105335688B (en) A kind of aircraft model recognition methods of view-based access control model image
CN110276371A (en) A recognition method for container corner fittings based on deep learning
CN112258490A (en) Low-emissivity coating intelligent damage detection method based on optical and infrared image fusion
CN105447431B (en) A kind of docking aircraft method for tracking and positioning and system based on machine vision
CN105184301A (en) Method for distinguishing vehicle azimuth by utilizing quadcopter
CN106384089A (en) Human body reliable detection method based on lifelong learning
CN111950357A (en) A fast identification method of marine debris based on multi-feature YOLOV3
CN111259736A (en) Real-time pedestrian detection method based on deep learning in complex environment
CN103996207A (en) Object tracking method
CN106997670A (en) Real-time sampling of traffic information system based on video
CN108985216B (en) Pedestrian head detection method based on multivariate logistic regression feature fusion
CN114882074B (en) Target motion state identification method, device, equipment and medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20210618

Address after: 518103 No.9, Fuyuan 2nd Road, Fuyong street, Bao'an District, Shenzhen City, Guangdong Province

Patentee after: SHENZHEN CIMC-TIANDA AIRPORT SUPPORT Co.,Ltd.

Address before: Four No. four industrial road, Shekou Industrial Zone, Guangdong, Shenzhen 518067, China

Patentee before: SHENZHEN CIMC-TIANDA AIRPORT SUPPORT Co.,Ltd.

Patentee before: China International Marine Containers (Group) Co.,Ltd.