CN105335985B - A kind of real-time capturing method and system of docking aircraft based on machine vision - Google Patents
- Publication number
- CN105335985B CN105335985B CN201410377269.1A CN201410377269A CN105335985B CN 105335985 B CN105335985 B CN 105335985B CN 201410377269 A CN201410377269 A CN 201410377269A CN 105335985 B CN105335985 B CN 105335985B
- Authority
- CN
- China
- Prior art keywords
- aircraft
- background
- image
- area
- real
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Analysis (AREA)
Abstract
A real-time capture method and system for a docking aircraft based on machine vision. The method includes: partitioning the monitored scene into different information-processing function areas to improve processing efficiency; modeling the dynamic distribution of the scene background, and performing background modeling, with a background model based on median filtering, a Gaussian mixture background model, or a background model based on kernel-density probability estimation, then differencing the current frame against the background model to eliminate the background; computing gray-value statistics over the foreground region extracted by background elimination and performing shadow elimination; establishing a standard front-view aircraft region template, detecting and extracting the target region, computing the vertical projection curve of that region, then computing the correlation coefficient between this curve and the vertical projection curve of the standard front-view aircraft region template to determine whether the target is an aircraft; and further verifying the result by detecting the engines and nose wheel of the captured aircraft. The present invention also provides a corresponding real-time docking-aircraft capture system for implementing the above method.
Description
Technical field
The present invention relates to a parking aircraft positioning and guidance technique, and in particular to a machine-vision-based method and system for real-time capture of a docking aircraft, involving moving-object detection, feature identification, and verification for aircraft docking guidance.
Background art
Aircraft docking guidance refers to the process of directing an arriving aircraft from the end of the taxiway to its stand and parking it accurately. The purpose of aircraft docking guidance is to ensure that the docking aircraft parks safely and precisely, to facilitate accurate connection of the aircraft with the various ground service interfaces, and to enable the boarding bridge to mate effectively with the aircraft door, thereby improving airport operating efficiency and safety. Automated docking guidance systems are classified by sensor type into: (1) buried-coil systems; (2) laser scanning and ranging systems; (3) visual perception systems. Because laser scanning and ranging systems and visual perception systems can effectively acquire visual information about the docking aircraft, these two classes of automated docking guidance systems are also called visual docking guidance systems.
Buried induction-coil systems determine the position of a docking aircraft by detecting whether a metal object passes over or stops above the coil. The advantages of buried induction coils are fast response, low cost, and no requirements on weather or illumination; their drawbacks are large error and low immunity to interference. Moreover, the leads and electronic components buried underground are easily crushed, reliability and measurement accuracy are low, the aircraft type cannot be identified, and maintainability is poor. Laser scanning and ranging systems determine the position, speed, type, and other information of the aircraft by laser ranging and laser scanning; they are unaffected by ambient illumination and only slightly affected by weather, achieve high accuracy, and are easy to maintain, but their cost is high and the limited laser scanning frequency constrains the real-time performance and stability of the guidance. Visual perception systems acquire image information of the aircraft docking process through optical imaging and then determine the position, speed, type, and other information of the docking aircraft through intelligent information processing; their architecture is simple, their cost low, their level of intelligence high, and their maintainability good, but they place requirements on weather and illumination and their adaptability is limited.
With the continued development of visual perception imaging, intelligent information processing, and computer technology, visual aircraft docking guidance can acquire the docking information of a docking aircraft accurately and quickly, and has been applied in airport docking guidance systems. The visual docking guidance system (VDGS) developed by Honeywell in the United States and the video docking guidance system (VDOCKS) developed by Siemens, as world-leading visual guidance equipment, have been deployed at a number of airports worldwide; however, these systems place high demands on weather and illumination, adapt poorly, and lack intelligent information-processing capability.
Throughout the docking guidance process, aircraft tracking and positioning, type recognition, and identity verification are all performed after the docking aircraft has been captured. If the guidance system fails to capture the docking aircraft, none of the subsequent operations will be executed. Fast, accurate capture of the docking aircraft is therefore the basis and precondition for the docking guidance system to complete its guidance task. A fast and accurate capture method for a docking aircraft can provide more accurate information and more processing time for subsequent aircraft type recognition, tracking, and guidance.
Summary of the invention
The technical problem to be solved by the present invention is to provide a machine-vision-based method and system for the fast, accurate, real-time capture of a docking aircraft.
To achieve the above object, the present invention provides a machine-vision-based real-time capture method for a docking aircraft, comprising the following steps:
S1, aircraft parking scene configuration: partition the monitored scene into different information-processing function areas, so as to narrow the image region to be processed and improve processing efficiency;
S2, aircraft capture, comprising:
S21, background elimination: model the dynamic distribution of the background in the scene, and perform background modeling, with a background model based on median filtering, a Gaussian mixture background model, or a background model based on kernel-density probability estimation; then difference the current frame against the background model to eliminate the background;
S22, shadow elimination: compute gray-value statistics over the foreground region extracted by background elimination, find the maximum gray value gmax and the minimum gray value gmin, and then perform shadow elimination in the region whose gray values are less than T = gmin + (gmax - gmin) * 0.5;
S23, region classification: establish a standard front-view aircraft region template, detect and extract the target region by change detection, compute the vertical projection curve of that region, and then compute the correlation coefficient between this curve and the vertical projection curve of the standard front-view aircraft region template; if the correlation coefficient is greater than or equal to 0.9, the target is an aircraft;
S24, feature verification: further verify whether the target is an aircraft by detecting the engines and nose wheel of the captured aircraft.
In the above machine-vision-based real-time capture method for a docking aircraft, after step S1 the method may further comprise the following step:
S10, video image preprocessing: apply gamma correction and denoising to the image, so as to improve the visual effect of the image and its clarity.
In the above machine-vision-based real-time capture method for a docking aircraft, in background elimination step S21, the establishment of the single Gaussian background model comprises the following steps:
S211, establishment of the background model: for the initial background image, compute the average gray value μ0 and the gray variance σ0² of each pixel over a period of the video sequence image f(x, y), and let μ0 and σ0² form the initial background image B0 with Gaussian distribution η(x, μ0, σ0),
where, with N the number of training frames:
μ0(x, y) = (1/N) Σ_{i=0}^{N-1} f_i(x, y)
σ0²(x, y) = (1/N) Σ_{i=0}^{N-1} [f_i(x, y) - μ0(x, y)]²
B0 = [μ0, σ0²]
A Gaussian model η(xi, μi, σi) is then established for each pixel of every frame,
where i is the frame index, xi the current pixel value, μi the mean of the pixel's Gaussian model, and σi its standard deviation. If η(xi, μi, σi) ≤ Tp, where Tp is a probability threshold, the point is judged to be a foreground point; otherwise it is a background point.
S212, update of the background model:
If the scene changes, the background model is updated, using the real-time information provided by the successive images captured by the camera device, according to the formula:
μi+1 = (1 - α)μi + αxi
where α is the update rate, with a value between 0 and 1.
In the above machine-vision-based real-time capture method for a docking aircraft, if the pixel is background, α takes 0.05; if the pixel is foreground, α takes 0.0025.
In the above machine-vision-based real-time capture method for a docking aircraft, the feature verification step S24 comprises:
S241, image blackest-region extraction: compute the histogram of the image, find, within the middle 1%-99% range of gray levels, the maximum and minimum gray values whose pixel counts are nonzero, and extract the darkest part of the image with a preset blackest-judgment threshold, obtaining a blackest-region image;
S242, quasi-circle detection: extract all outer boundaries of the blackest region and, for each boundary, compute the barycentric coordinates of the boundary from the boundary moments. The (j, i)-th order moment of a boundary is defined as:
mji = Σ x^j · y^i, summed over the boundary pixels (x, y),
with barycentric coordinates:
x̄ = m10/m00, ȳ = m01/m00
For every pixel on the current boundary, compute its distance to the center of gravity; if the ratio of the maximum computed distance to the minimum distance exceeds a circle-judgment threshold, the region is considered non-circular and judgment proceeds to the next region. Record the barycentric coordinates and radius of each region judged to be quasi-circular;
S243, detect the aircraft engines among the quasi-circular regions by similarity judgment;
S244, detect the aircraft nose wheel; once the aircraft engines and nose wheel are confirmed, the capture succeeds.
In the above machine-vision-based real-time capture method for a docking aircraft, in the engine-detection step S243 performed among the quasi-circular regions, suppose M quasi-circular regions are detected in total; the similarity of the i-th and j-th regions is computed as:
Similarityij = |Heighti - Heightj| * |Radiusi - Radiusj|
where Height is the height of the center of gravity and Radius is the radius. When the similarity Similarityij is less than a preset similarity threshold, regions i and j are considered to be the aircraft engines.
In the above machine-vision-based real-time capture method for a docking aircraft, in step S243, if no aircraft engine is detected, the detection is iterated: the blackest-judgment threshold, the circle-judgment threshold, and the similarity threshold are each increased, and steps S241-S243 are performed again. If the engines are still not detected, an opening operation is applied to all blackest regions with a 7*7 circular template, and steps S242-S243 are performed again.
If the engines are still not detected, the above iterative detection is carried out twice more.
If the engines are still not detected, it is judged that no engine is present in the image.
In the above machine-vision-based real-time capture method for a docking aircraft, the increments of the blackest-judgment threshold, the circle-judgment threshold, and the similarity threshold are 0.05, 0.5, and 20 respectively.
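The threshold-relaxation retry loop described above can be sketched as below. This simplified version omits the intermediate 7*7 morphological opening step, and `detect`, which stands in for steps S241-S243, is a placeholder assumed for illustration:

```python
def iterative_engine_detection(detect, blackest=0.05, circle=1.5, similar=40.0,
                               d_blackest=0.05, d_circle=0.5, d_similar=20.0,
                               extra_rounds=2):
    """Sketch of the retry loop of step S243: if no engine is found, raise
    the blackest-, circle-, and similarity-judgment thresholds by their
    fixed increments (0.05, 0.5, 20) and try again, up to the first retry
    plus `extra_rounds` further iterations.  `detect(blackest, circle,
    similar)` returns an engine pair, or None when nothing is found."""
    result = detect(blackest, circle, similar)
    rounds = 0
    while result is None and rounds < 1 + extra_rounds:
        blackest += d_blackest
        circle += d_circle
        similar += d_similar
        result = detect(blackest, circle, similar)
        rounds += 1
    # result is None here means "no engine present in the image"
    return result, (blackest, circle, similar)
```

Relaxing all three thresholds together widens the search: more pixels count as "blackest", less perfect circles pass, and looser pairs qualify as engines.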
In the above machine-vision-based real-time capture method for a docking aircraft, in step S244, the region between the detected aircraft engines, with a height of 4 engine radii, is taken as the search region for the aircraft nose wheel. Within the search region, the 256 gray levels are quantized to 64 levels, the first wave crest and first trough of the 64-level grayscale histogram are searched for, and the optimal crest position BestPeak and optimal trough position BestValley in the original 256-level grayscale histogram are defined as follows:
BestPeak = argmax{hist256(i)}, for peak*4-4 ≤ i ≤ peak*4+3
BestValley = argmin{hist256(i)}, for BestPeak < i ≤ valley*4+3
where hist256(i) is the number of pixels with gray level i in the 256-level grayscale histogram, and peak and valley are the positions of the first wave crest and first trough found in the 64-level histogram.
The gray levels are segmented at the optimal trough BestValley. For the part below BestValley, the small spurious regions are removed, and a closing operation is applied to the image with a flattened elliptical structuring element;
the 7 Hu moment features of the boundaries of all resulting shapes are then computed and compared with the moment features of a preset standard nose-wheel model, and when the similarity is below a given threshold, the middle one is judged to be the nose wheel.
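A minimal sketch of the 256-to-64-level quantization and first crest/trough search of step S244 follows. The exact windows used to refine BestPeak and BestValley back in the 256-level histogram are reconstructed assumptions (the original formulas are images in the patent), and the function name is invented for illustration:

```python
import numpy as np

def best_peak_valley(gray_values):
    """Quantize 256 gray levels down to 64 (bins of 4), find the first wave
    crest and the first trough of the 64-level histogram, then refine both
    positions in the original 256-level histogram."""
    hist256, _ = np.histogram(gray_values, bins=256, range=(0, 256))
    hist64 = hist256.reshape(64, 4).sum(axis=1)
    # first local maximum (crest), then first local minimum after it (trough)
    peak = next(k for k in range(1, 63)
                if hist64[k] > hist64[k - 1] and hist64[k] >= hist64[k + 1])
    valley = next(k for k in range(peak + 1, 63)
                  if hist64[k] < hist64[k - 1] and hist64[k] <= hist64[k + 1])
    # refine in the 256-level histogram around the coarse positions
    lo = max(peak * 4 - 4, 0)
    best_peak = lo + int(np.argmax(hist256[lo: peak * 4 + 4]))
    best_valley = best_peak + 1 + int(
        np.argmin(hist256[best_peak + 1: valley * 4 + 4]))
    return best_peak, best_valley
```

Searching the coarse 64-level histogram first suppresses small fluctuations, and the refinement recovers precise 256-level positions for segmenting the dark wheel from the brighter ground.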
To better achieve the above object, the present invention also provides a docking-aircraft capture system for implementing the above machine-vision-based real-time capture method for a docking aircraft.
The technical effects of the invention are as follows:
The present invention has effective intelligent visual information-processing capability; it can effectively realize aircraft capture, tracking and positioning, type recognition, and identity verification during the aircraft docking process; and it provides intelligent visual monitoring of the apron, effectively improving the automation, intelligence, and operational management level of civil aviation airports.
The present invention is described in detail below in conjunction with the drawings and specific embodiments, which, however, are not to be taken as limiting the invention.
Brief description of the drawings
Fig. 1 is a schematic diagram of the real-time docking-aircraft capture system of an embodiment of the invention;
Fig. 2 is a schematic diagram of the real-time docking-aircraft capture operation of an embodiment of the invention;
Fig. 3 is a flow chart of the real-time docking-aircraft capture method of an embodiment of the invention;
Fig. 4 is a scene-definition schematic diagram of an embodiment of the invention;
Fig. 5 is a background-elimination flow chart of an embodiment of the invention;
Fig. 6 is the front-view vertical projection curve of an aircraft of an embodiment of the invention;
Fig. 7 is a schematic diagram of typical blackest regions of an embodiment of the invention;
Fig. 8 is a flow chart of the similarity judgment of an embodiment of the invention;
Fig. 9 is the 256-level grayscale histogram of an embodiment of the invention (abscissa: gray level; ordinate: number of pixels at that gray level);
Fig. 10 is the 64-level grayscale histogram of an embodiment of the invention (abscissa: gray level; ordinate: number of pixels at that gray level);
Fig. 11 is an example of the effect of the closing operation of an embodiment of the invention.
Reference numerals:
1 camera device
2 central processing device
3 display device
4 aircraft stand
41 stop line
42 guide line
5 aircraft
6 capture area
7 tracking and positioning area
8 ground service equipment area
9 mark points
91 first mark point
10 first wave crest
11 first trough
S1-S24 steps
Specific embodiment
The structural principle and working principle of the invention are described in detail below with reference to the drawings.
Referring to Figs. 1 and 2: Fig. 1 is a schematic diagram of the real-time docking-aircraft capture system of an embodiment of the invention, and Fig. 2 is a schematic diagram of the real-time docking-aircraft capture operation of an embodiment of the invention. The machine-vision-based docking-aircraft tracking and positioning system of the invention mainly consists of a camera device 1, a central processing device 2, and a display device 3. The camera device 1 is connected to the central processing device 2, and the central processing device 2 is connected to the display device 3; the camera device 1 sends the captured images to the central processing device 2, and the central processing device 2 sends display content containing guidance information to the display device 3. The camera device 1 is mounted behind the stop line 41 of the aircraft stand 4, preferably facing the guide line 42, at a height above the fuselage of the aircraft 5, preferably about 8 m. The central processing device 2 may be a computer capable of receiving, processing, and storing data, generating display image data, and sending data; it includes functional modules for aircraft parking scene configuration, video image preprocessing, aircraft capture, aircraft tracking, aircraft positioning, aircraft type recognition, and identity verification, together with a module for information display, all installed as software in the central processing device 2. The display device 3 is preferably a large information display screen installed in the airport for viewing by aircraft pilots; in addition, airport staff may be provided with hand-held display devices to observe the aircraft status.
Referring to Fig. 3, a flow chart of the real-time docking-aircraft capture method of an embodiment of the invention, the machine-vision-based real-time capture method for a docking aircraft of the invention comprises the following steps:
Step S1, aircraft parking scene configuration: partition the monitored scene into different information-processing function areas, so as to narrow the image region to be processed and improve processing efficiency.
Scene definition must first be performed in the actual scene: on a computer, the monitored scene is partitioned into different information-processing function areas, narrowing the processing region of the image and improving processing efficiency; in addition, information closely related to aircraft positioning, such as the guide line and stop line, is marked. First, a scale with alternating black and white segments is laid out next to the guide line in the actual scene; the black and white segments have equal length, at most 1 m, and depending on the resolution of the camera device a finer scale, with segments of 0.5 m or 0.25 m, may be used. The total length of the scale does not exceed the range over which the aircraft position is resolved, typically 50 m. The remaining work is carried out by software written in advance: the software opens and displays the picture captured by the camera device, and the relevant areas are marked, and the marks recorded, by manually drawing lines, boxes, and points.
An image of the aircraft stand is captured with no aircraft present and displayed; the scene-definition schematic is shown in Fig. 4, a scene-definition schematic diagram of an embodiment of the invention. In the figure, the solid frame indicates the picture displayed during the marking operation together with the area available for marking, and the dashed frames indicate positions that can be marked manually. Lines are drawn by hand on the displayed image to mark the guide line 42 and the stop line 41, and the positions of the guide line 42 and the stop line 41 in the image are recorded. Boxes are then drawn by hand to mark the capture area 6, the tracking and positioning area 7, and the relevant ground service equipment area 8, and the positions of the capture area 6 and the tracking and positioning area 7 in the image are recorded. Finally, according to the scale laid out in the scene, points are drawn by hand next to the guide line to mark all mark points 9, spaced at most 1 m apart, and the position of each mark point 9 in the image, as well as its distance in the actual scene from the first mark point 91, are recorded. When marking the guide line 42, the stop line 41, and the mark points 9, the image portion to be marked may be magnified until it is tens of pixels wide and then marked by hand within it, to improve marking precision. The positions of the marked capture area 6 and tracking and positioning area 7 need not be very strict: in the actual scene, the upper edge of the capture area 6 is about 100 m from the stop line 41, the lower edge of the capture area 6 about 50 m from the stop line 41, the upper edge of the tracking and positioning area 7 about 50 m from the stop line 41, and the lower edge of the tracking and positioning area 7 lies below the stop line 41.
After step S1, the method may further include a video image preprocessing step S10, in which gamma correction and denoising are applied to the image to improve its visual effect and clarity. Common image-processing methods, including brightness correction and denoising, are used to improve the visual effect of the image, sharpen its content, or make the image more amenable to computer processing.
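A minimal sketch of preprocessing step S10 follows, assuming gamma correction as the brightness correction and a plain 3x3 mean filter as the denoising step (the patent does not fix particular algorithms, so both choices and the function name are assumptions):

```python
import numpy as np

def preprocess(img, gamma=0.7):
    """Gamma correction followed by 3x3 mean-filter denoising, a minimal
    stand-in for step S10.  `img` is an 8-bit grayscale 2-D numpy array."""
    # gamma correction: out = 255 * (in/255)^gamma  (gamma < 1 brightens)
    corrected = (255.0 * (img / 255.0) ** gamma).astype(np.uint8)
    # 3x3 mean filter via edge-padding and neighbourhood summation
    padded = np.pad(corrected, 1, mode="edge").astype(np.float64)
    h, w = corrected.shape
    denoised = np.zeros((h, w), dtype=np.float64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            denoised += padded[1 + dy: 1 + dy + h, 1 + dx: 1 + dx + w]
    return (denoised / 9.0).astype(np.uint8)
```

Gamma correction compensates for dim apron lighting, and the mean filter suppresses sensor noise before the background model is applied.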
Step S2, aircraft capture, comprising:
Step S21, background elimination: model the dynamic distribution of the background in the scene, and perform background modeling, with a background model based on median filtering, a Gaussian mixture background model, or a background model based on kernel-density probability estimation; then difference the current frame against the background model to eliminate the background.
Referring to Fig. 5, a background-elimination flow chart of an embodiment of the invention. Establishment of the single Gaussian background model: a single Gaussian background model is used to simulate the dynamic distribution of the background in the scene and to perform background modeling, after which the current frame is differenced against the background model to eliminate the background. In a scene without aircraft, that is, containing only the required background, N frames are continuously acquired by the camera, and the background model is trained on these N background images to determine the mean and variance of the Gaussian distribution. This comprises the following steps:
Step S211, establishment of the background model: for the initial background image, compute the average gray value μ0 and the gray variance σ0² of each pixel over a period of the video sequence image f(x, y), and let μ0 and σ0² form the initial background image B0 with Gaussian distribution η(x, μ0, σ0),
where:
μ0(x, y) = (1/N) Σ_{i=0}^{N-1} f_i(x, y)
σ0²(x, y) = (1/N) Σ_{i=0}^{N-1} [f_i(x, y) - μ0(x, y)]²
B0 = [μ0, σ0²]
A Gaussian model η(xi, μi, σi) is then established for each pixel of every frame,
where the subscript i is the frame index, xi the current pixel value, μi the mean of the pixel's Gaussian model, and σi its standard deviation. If η(xi, μi, σi) ≤ Tp, where Tp is a probability threshold, the point is judged to be a foreground point; otherwise it is a background point (in which case xi is said to match η(xi, μi, σi)). In practice, the probability threshold may be replaced by an equivalent threshold: let di = |xi - μi|; in the common one-dimensional case, the foreground detection threshold is set on di/σi: if di/σi > T (with T between 2 and 3), the point is judged to be a foreground point, otherwise a background point.
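The single Gaussian training of step S211 and the equivalent di/σi threshold test can be sketched as follows (the function names and the per-pixel vectorization over whole frames are assumptions of this sketch):

```python
import numpy as np

def train_background(frames):
    """Train the single Gaussian background model of step S211:
    per-pixel mean and standard deviation over N background frames."""
    stack = np.stack(frames).astype(np.float64)
    mu = stack.mean(axis=0)
    sigma = stack.std(axis=0)
    return mu, np.maximum(sigma, 1e-6)  # guard against zero variance

def foreground_mask(frame, mu, sigma, T=2.5):
    """Equivalent-threshold test of step S211: a pixel is foreground
    where |x - mu| / sigma > T, with T between 2 and 3."""
    d = np.abs(frame.astype(np.float64) - mu)
    return d / sigma > T
```

Training on N aircraft-free frames fixes the per-pixel Gaussian; at run time, each new frame is differenced against the model and only statistically unlikely pixels survive as foreground.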
Other background-elimination methods perform background elimination with other background models, such as the background model based on median filtering, the Gaussian mixture background model, and the background model based on kernel-density probability estimation. The background model based on median filtering takes the median of N frames of images as the background; the algorithm is relatively simple, but the result is poor. The Gaussian mixture background model simulates the variation of a dynamic scene by maintaining several Gaussian models; the algorithm is complex, and real-time performance is poor. The background model based on kernel-density probability estimation is a powerful non-parametric estimation method that can finely simulate the distribution of a dynamic scene and adapt to some illumination variation, but the algorithm is very complex, the memory requirements are very high, and real-time performance is very poor.
Step S212, update of the background model:
If the scene changes, the background model must respond to these changes, and is therefore updated with the real-time information provided by the successive images captured by the camera device, according to the formula:
μi+1 = (1 - α)μi + αxi
where α is the update rate, with a value between 0 and 1. If the pixel is background, the update rate α preferably takes 0.05; if the pixel is foreground, α preferably takes 0.0025.
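The selective update rule of step S212, with α = 0.05 for background pixels and α = 0.0025 for foreground pixels, can be sketched as (the function name is an assumption of this sketch):

```python
import numpy as np

def update_background(mu, frame, fg_mask):
    """Step S212 update rule mu_{i+1} = (1 - alpha)*mu_i + alpha*x_i with
    the selective rates from the description: alpha = 0.05 where the pixel
    is background, alpha = 0.0025 where it is foreground."""
    alpha = np.where(fg_mask, 0.0025, 0.05)
    return (1.0 - alpha) * mu + alpha * frame.astype(np.float64)
```

The much smaller foreground rate keeps a slowly taxiing aircraft from being absorbed into the background while the model still tracks gradual illumination change.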
Step S22, shadow elimination: compute gray-value statistics over the foreground region extracted by background elimination, find the maximum gray value gmax and the minimum gray value gmin, and then perform shadow elimination in the region whose gray values are less than T = gmin + (gmax - gmin) * 0.5.
In this low-gray region, the gray-level ratio between each pixel and the corresponding background pixel is computed; if this ratio preferably lies between 0.3 and 0.9, the pixel is regarded as a shadow point. Morphological image processing is then applied: an erosion followed by a dilation eliminates the small regions. Morphological processing generally moves a structuring element over the image in a convolution-like operation, applying a specific logical operation to the image pixels covered by the structuring element at each pixel position; it removes noise and interference and improves the signal-to-noise ratio of the image, with dilation and erosion as its basic operations. Thus, the non-shadow areas (small regions) wrongly detected among the shadow points are removed, and the shadow regions eliminated, by repeated morphological erosion and dilation; finally, repeated morphological dilation and erosion fill the holes in the required target region and connect its parts together.
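The shadow-ratio test and the basic morphological operations of step S22 can be sketched with plain NumPy as follows (the 3x3 square structuring element and the function names are assumptions of this sketch; the patent does not fix the element's shape):

```python
import numpy as np

def shadow_mask(frame, background, fg_mask, low=0.3, high=0.9):
    """Step S22 shadow test: inside the foreground, a pixel whose gray
    ratio to the background pixel lies in (0.3, 0.9) is taken as shadow."""
    # guard against division by zero in dark background pixels
    ratio = frame.astype(np.float64) / np.maximum(background.astype(np.float64), 1.0)
    return fg_mask & (ratio > low) & (ratio < high)

def erode(mask):
    """3x3 binary erosion: a pixel survives only if its whole neighbourhood is set."""
    p = np.pad(mask, 1, mode="constant")
    out = np.ones_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy: 1 + dy + mask.shape[0], 1 + dx: 1 + dx + mask.shape[1]]
    return out

def dilate(mask):
    """3x3 binary dilation, the dual of erosion."""
    p = np.pad(mask, 1, mode="constant")
    out = np.zeros_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy: 1 + dy + mask.shape[0], 1 + dx: 1 + dx + mask.shape[1]]
    return out
```

An erosion followed by a dilation (an opening) removes isolated mis-detected shadow specks; a dilation followed by an erosion (a closing) fills holes in the target region, matching the sequence described above.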
Step S23, region classification: establish a standard front-view aircraft region template. Since the aircraft region is narrow at both sides and wide in the middle, this template can distinguish aircraft from non-aircraft well. The target region is detected and extracted by change detection and its vertical projection curve is computed (see Fig. 6, the front-view vertical projection curve of an aircraft of an embodiment of the invention); the correlation coefficient between this vertical projection curve and the vertical projection curve of the standard front-view aircraft region template is then computed. If the correlation coefficient is large, i.e. greater than or equal to 0.9, the target is an aircraft; otherwise, it is not an aircraft.
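The vertical-projection comparison of step S23 can be sketched as follows; the resampling of the target projection to the template length is an assumption added so that curves of different widths can be compared, and `column_mask` is a small helper invented for illustration:

```python
import numpy as np

def vertical_projection(mask):
    """Column-wise pixel counts of a binary region mask (the projection curve)."""
    return mask.sum(axis=0).astype(np.float64)

def is_aircraft(target_mask, template_mask, threshold=0.9):
    """Step S23 sketch: correlation coefficient between the target's vertical
    projection curve and the standard front-view template's curve; a value
    greater than or equal to 0.9 classifies the target as an aircraft."""
    proj_t = vertical_projection(target_mask)
    proj_s = vertical_projection(template_mask)
    # resample the target curve to the template length so widths may differ
    x = np.linspace(0.0, 1.0, proj_s.size)
    proj_t = np.interp(x, np.linspace(0.0, 1.0, proj_t.size), proj_t)
    r = float(np.corrcoef(proj_t, proj_s)[0, 1])
    return r >= threshold, r

def column_mask(heights, rows=5):
    """Build a small test mask whose c-th column has heights[c] set pixels."""
    m = np.zeros((rows, len(heights)), dtype=bool)
    for c, h in enumerate(heights):
        m[:h, c] = True
    return m
```

A front-view aircraft projects a curve that rises toward the fuselage and falls toward the wing tips; the correlation coefficient is insensitive to overall scale, which is why a single template suffices.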
Step S24, feature verification: the engines and front wheel of the captured aircraft are detected to further verify whether the target is an aircraft.
The feature verification step S24 further comprises:
Step S241, extraction of the extremely black region of the image: a histogram of the image is computed and, within the middle 1%-99% of the gray levels (typically gray levels 2 to 253), the maximum (gmax) and minimum (gmin) gray values whose pixel counts are non-zero are obtained; the blackest part of the image is then extracted with a preset threshold, yielding an extremely black region.
In the present embodiment, an extremely black judgment threshold (BlackestJudge) preset to 0.05 is used, which means the blackest 5% of the image; it should be adjusted according to the actual scene until exactly the outer contours of the two engines are segmented out. The region of the image whose gray values lie between gmin and (gmax - gmin) * BlackestJudge + gmin, that is, the blackest part of the image, is extracted to obtain an extremely black region. A typical extremely black region is shown in Fig. 7, which is a typical extremely black region schematic diagram of one embodiment of the invention; the interior of each shape in the figure is the extremely black region.
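Step S241 can be sketched as follows (an illustrative sketch for 8-bit images; the function name is an assumption and error handling for empty histogram ranges is omitted):

```python
import numpy as np

def extremely_black_region(gray, blackest_judge=0.05):
    """Step S241 sketch: within the middle 1%-99% of the gray levels (2..253
    for 8-bit images), find the lowest (gmin) and highest (gmax) gray levels
    with non-zero pixel counts, then keep the pixels whose gray value lies
    between gmin and gmin + (gmax - gmin) * blackest_judge."""
    hist = np.bincount(gray.ravel(), minlength=256)
    levels = np.nonzero(hist[2:254])[0] + 2      # gray levels 2..253 that occur
    gmin, gmax = int(levels.min()), int(levels.max())
    thresh = gmin + (gmax - gmin) * blackest_judge
    return (gray >= gmin) & (gray <= thresh)
```

As the text notes, blackest_judge would be raised scene by scene until exactly the two engine outer contours are segmented out.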
Step S242, quasi-circle detection: all outer boundaries of the extremely black region are extracted, and for each boundary the barycentric coordinates are calculated from the boundary moments. The (j+i)-order moment mji of a boundary is defined as:

mji = Σ(x,y) x^j * y^i, where the sum runs over all points (x, y) of the boundary,

and the barycentric coordinates are:

x̄ = m10 / m00, ȳ = m01 / m00

For all pixel points of the current boundary, their distances to the center of gravity are calculated. If the ratio of the maximum distance to the minimum distance is greater than a preset value (for example, a circle judgment threshold circleJudge preset to 1.5), the region is considered non-circular and the next region is judged. For each region that passes the judgment, the barycentric coordinates and the radius (the average distance from the boundary to the center of gravity) of the quasi-circular region are recorded for the subsequent similarity determination.
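The circularity test of step S242 can be sketched directly from those definitions (an illustrative sketch; boundary points are assumed as an (N, 2) array of (x, y) coordinates):

```python
import numpy as np

def quasi_circle_check(boundary, circle_judge=1.5):
    """Step S242 sketch: compute the centroid (m10/m00, m01/m00) of the
    boundary points and their distances to it.  The region counts as
    quasi-circular when max_distance / min_distance <= circle_judge; the
    radius is the average boundary-to-centroid distance."""
    pts = np.asarray(boundary, dtype=np.float64)
    m00 = len(pts)                                   # m00 = number of points
    cx, cy = pts[:, 0].sum() / m00, pts[:, 1].sum() / m00
    d = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)     # distances to centroid
    is_circle = d.max() / d.min() <= circle_judge
    return is_circle, (cx, cy), d.mean()
```

A true circle gives a distance ratio of 1, well under the 1.5 threshold, while an elongated contour fails the test.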
Step S243, aircraft engines are detected by judging the similarity of the quasi-circular regions.
See Fig. 8, which is a flow chart of the similarity determination of one embodiment of the invention. In the present embodiment, suppose that M quasi-circular regions have been detected in total; the similarity of the i-th and j-th regions is calculated as:

Similarity_ij = |Height_i - Height_j| * |Radius_i - Radius_j|

where Height is the height of the center of gravity and Radius is the radius (i.e., the average distance from the boundary to the center of gravity). When the similarity Similarity_ij is less than a threshold similarThresh preset to 40, regions i and j are considered to be aircraft engines.
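The pairwise similarity test can be sketched as follows (a minimal sketch; the function name and the (height, radius) tuple layout are assumptions):

```python
from itertools import combinations

def find_engine_pair(regions, similar_thresh=40.0):
    """Step S243 sketch: among the detected quasi-circular regions, each
    given as (height_of_centroid, radius), return the first pair (i, j)
    with Similarity_ij = |H_i - H_j| * |R_i - R_j| below similarThresh,
    or None when no pair qualifies."""
    for i, j in combinations(range(len(regions)), 2):
        hi, ri = regions[i]
        hj, rj = regions[j]
        if abs(hi - hj) * abs(ri - rj) < similar_thresh:
            return i, j
    return None
```

Note a quirk of the product form: two regions with identical radii score 0 regardless of height difference, so in practice the candidate set is already restricted to quasi-circles at plausible engine positions.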
If no aircraft engine is detected, the detection is iterated: the extremely black judgment threshold (BlackestJudge), the circle judgment threshold (circleJudge) and the similarity threshold (similarThresh) are each increased, the increments of BlackestJudge, circleJudge and similarThresh in the present embodiment being preferably 0.05, 0.5 and 20 respectively, and steps S241-S243 are then performed again. If no aircraft engine is detected, an opening operation with a 7*7 circular template is applied to all extremely black regions, and steps S242-S243 are performed again;
if still no aircraft engine is detected, the above two iterative detections are performed again;
if still no aircraft engine is detected, it is determined that no engine exists in the image. When a subsequent frame is detected, if its previous frame used n iterative steps, the iteration starts directly from step n-1.
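The threshold-relaxation part of this retry scheme can be sketched generically (a sketch covering only the relaxation loop; the detect callback, names and return convention are assumptions):

```python
def detect_with_relaxation(detect, rounds=3, increments=(0.05, 0.5, 20.0),
                           start=(0.05, 1.5, 40.0)):
    """Iterative retry loop of step S243 (sketch): `detect` is any callable
    taking (blackest_judge, circle_judge, similar_thresh) and returning the
    engine pair or None; thresholds are relaxed between attempts.  Returns
    (result, attempt_index) so a later frame can resume near the step its
    previous frame needed."""
    b, c, s = start
    db, dc, ds = increments
    for k in range(rounds):
        result = detect(b + k * db, c + k * dc, s + k * ds)
        if result is not None:
            return result, k
    return None, rounds
```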
Step S244, aircraft front-wheel detection: when the aircraft engines and the front wheel are confirmed, the capture succeeds.
The region between the centers of the detected aircraft engines, with a height of 4 engine radii, is taken as the search region for the aircraft front wheel. Within the search region, the 256 gray levels are quantized to 64 levels; see Fig. 9 and Fig. 10, where Fig. 9 is the 256-level gray histogram of one embodiment of the invention, with the abscissa being the gray level and the ordinate the number of points at that gray level, and Fig. 10 is the 64-level gray histogram of one embodiment of the invention, with the abscissa being the gray level and the ordinate the number of points at that gray level. The first peak 10 and the first trough 11 in the quantized 64-level gray histogram are searched for. If the first peak position after quantization is peak and the trough position is valley, then the optimal peak position BestPeak and the optimal trough position BestValley in the original 256-level gray histogram are defined as:

BestPeak = argmax{hist256(i) : peak*4-4 ≤ i ≤ peak*4+3}
BestValley = argmin{hist256(i) : BestPeak < i ≤ valley*4+3}

where hist256(i) is the total number of pixels with gray level i in the 256-level gray histogram.
The gray scale is segmented at this optimal trough BestValley. For the part below BestValley, noise points of small area are removed, and a closing operation is then applied to the image with a flat elliptical structuring element; see Fig. 11 for an example of the effect, Fig. 11 being an effect example of the closing operation of one embodiment of the invention.
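The peak/trough refinement can be sketched as below (a sketch whose search-window bounds follow the reconstruction in the text and are an assumption; the factor 4 comes from quantizing 256 levels to 64):

```python
import numpy as np

def best_peak_and_valley(hist256, peak, valley):
    """Step S244 sketch: refine the first peak/trough found in the 64-level
    quantized histogram back to the original 256-level histogram.  BestPeak
    is the most populated gray level near peak*4; BestValley is the least
    populated gray level between BestPeak and valley*4."""
    lo, hi = max(peak * 4 - 4, 0), min(peak * 4 + 3, 255)
    best_peak = lo + int(np.argmax(hist256[lo:hi + 1]))
    v_hi = min(valley * 4 + 3, 255)
    best_valley = best_peak + 1 + int(np.argmin(hist256[best_peak + 1:v_hi + 1]))
    return best_peak, best_valley
```

The image is then thresholded at best_valley to isolate the dark wheel blobs, as the surrounding text describes.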
Then the 7 Hu moment features of the boundaries of all shapes are calculated and compared with the Hu moment features of a preset standard front-wheel model (regarding Hu moment features: moment invariants were proposed by Hu in "Visual pattern recognition by moment invariants", 1962, and possess translation, rotation and scale invariance; Hu constructed 7 invariant moments from the second- and third-order central moments, so a shape is characterized by its 7 Hu moment features). When the similarity to the model is below a given threshold (preferably 1), the shape is determined to be a wheel. In this way the positions of all three wheels can be obtained, and the lower middle wheel is the front wheel.
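The 7 Hu invariants can be computed from boundary points as sketched below (a generic reimplementation of Hu's 1962 construction, not the patent's exact feature code; for a fixed set of boundary points the values are invariant to translation and rotation):

```python
import numpy as np

def hu_moments(points):
    """Compute Hu's 7 invariant moments for a set of boundary points
    (step S244 sketch), built from normalized second- and third-order
    central moments as in Hu (1962)."""
    pts = np.asarray(points, dtype=np.float64)
    x, y = pts[:, 0], pts[:, 1]
    dx, dy = x - x.mean(), y - y.mean()

    def eta(p, q):
        mu = np.sum(dx ** p * dy ** q)               # central moment mu_pq
        return mu / len(pts) ** (1 + (p + q) / 2.0)  # normalized (mu_00 = N)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    h5 = ((n30 - 3 * n12) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    h6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3 * n21 - n03) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])
```

Matching against the standard front-wheel model would then reduce to a small distance between the two 7-element feature vectors, consistent with the "similarity below a given threshold" rule in the text.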
In the aircraft capture method and system for an intelligent aircraft docking guidance system of the present invention, the visual imaging subsystem acquires video image information of the aircraft docking process, the acquired video images are transmitted to the central processing device for real-time processing and analysis, and the guidance information is finally shown on a display device. In order to capture the docking aircraft quickly and accurately and obtain a stable target region, the entire docking-aircraft capture procedure is performed only within the aircraft capture region defined in the scene, which reduces the processing area of the picture, improves processing efficiency, and facilitates fast aircraft capture. Within the aircraft capture region, change detection, including background elimination, shadow elimination and region classification, is first performed to extract the moving target region; the extracted moving target region is then classified to judge whether it is the docking aircraft, thereby realizing accurate capture of the docking aircraft.
Certainly, the present invention may also have various other embodiments. Without departing from the spirit and substance of the present invention, those skilled in the art may make various corresponding changes and modifications in accordance with the present invention, but all such corresponding changes and modifications shall fall within the protection scope of the appended claims of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410377269.1A CN105335985B (en) | 2014-08-01 | 2014-08-01 | A kind of real-time capturing method and system of docking aircraft based on machine vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105335985A CN105335985A (en) | 2016-02-17 |
CN105335985B true CN105335985B (en) | 2019-03-01 |
Family
ID=55286490
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108683865A (en) * | 2018-04-24 | 2018-10-19 | 长沙全度影像科技有限公司 | A kind of background replacement system and method for bullet time special efficacy |
CN108921891A (en) * | 2018-06-21 | 2018-11-30 | 南通西塔自动化科技有限公司 | A kind of machine vision method for rapidly positioning that can arbitrarily rotate |
CN109785357B (en) * | 2019-01-28 | 2020-10-27 | 北京晶品特装科技有限责任公司 | Robot intelligent panoramic photoelectric reconnaissance method suitable for battlefield environment |
CN109887343B (en) * | 2019-04-04 | 2020-08-25 | 中国民航科学技术研究院 | Automatic acquisition and monitoring system and method for flight ground service support nodes |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1399767A (en) * | 1999-10-29 | 2003-02-26 | 安全门国际股份公司 | Aircraft identification and docking guidance systems |
CN101739694A (en) * | 2010-01-07 | 2010-06-16 | 北京智安邦科技有限公司 | Image analysis-based method and device for ultrahigh detection of high voltage transmission line |
CN102509101A (en) * | 2011-11-30 | 2012-06-20 | 昆山市工业技术研究院有限责任公司 | Background updating method and vehicle target extracting method in traffic video monitoring |
CN103049788A (en) * | 2012-12-24 | 2013-04-17 | 南京航空航天大学 | Computer-vision-based system and method for detecting number of pedestrians waiting to cross crosswalk |
CN103177586A (en) * | 2013-03-05 | 2013-06-26 | 天津工业大学 | Machine-vision-based urban intersection multilane traffic flow detection method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100933483B1 (en) * | 2008-01-28 | 2009-12-23 | 국방과학연구소 | Target recognition method in the image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 20210618 Address after: 518103 No.9, Fuyuan 2nd Road, Fuyong street, Bao'an District, Shenzhen City, Guangdong Province Patentee after: SHENZHEN CIMC-TIANDA AIRPORT SUPPORT Co.,Ltd. Address before: Four No. four industrial road, Shekou Industrial Zone, Guangdong, Shenzhen 518067, China Patentee before: SHENZHEN CIMC-TIANDA AIRPORT SUPPORT Co.,Ltd. Patentee before: China International Marine Containers (Group) Co.,Ltd. |