
CN109883400A - Automatic target detection and space positioning method for fixed station based on YOLO-SITCOL - Google Patents

Automatic target detection and space positioning method for fixed station based on YOLO-SITCOL

Info

Publication number
CN109883400A
Authority
CN
China
Prior art keywords
yolo
camera
space
binocular
sitcol
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811616997.8A
Other languages
Chinese (zh)
Other versions
CN109883400B (en)
Inventor
陈磊
鞠彪
沈周
刘笑笑
卜磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing State Map Information Industry Co Ltd
Original Assignee
Nanjing State Map Information Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing State Map Information Industry Co Ltd
Priority to CN201811616997.8A
Publication of CN109883400A
Application granted
Publication of CN109883400B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an automatic target detection and space positioning method for a fixed station based on YOLO-SITCOL, comprising the following steps: configuring the fixed-station acquisition hardware; collecting images; automatically detecting geographic objects based on the YOLO algorithm; inversely computing geographic coordinates from image pixel boxes based on the SITCOL algorithm; and fitting the multiple spatial position points of a single object to an optimal position point. The invention effectively reduces the workload of manual target inspection, improves inspection efficiency, and reduces the consumption of manpower and financial resources; it has good application value in practice.

Description

Automatic target detection and space positioning method for fixed station based on YOLO-SITCOL
Technical field
The invention belongs to the technical field of digital close-range photogrammetry, and in particular relates to an automatic target detection and space positioning method for a fixed station based on YOLO-SITCOL.
Background technique
During patrol supervision, targets usually need to be checked manually, and inspection efficiency is low. A fixed-base-station measuring system uses a communication steel tower as the data acquisition platform, on which are mounted a high-precision binocular camera, a GPS antenna as the camera spatial-position acquisition device, and an electronic pan-tilt platform as the camera spatial-attitude acquisition device. Integrated GPS/electronic-platform positioning and attitude determination gives the fixed-base-station measuring system the capability of direct georeferencing (Direct Georeferencing, DG).
Combining the theory of fixed-base-station mapping systems with engineering applications can reduce the consumption of manpower and financial resources, and has good application value in practice.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the above shortcomings of the prior art and to provide an automatic target detection and space positioning method for a fixed station based on YOLO-SITCOL.
To achieve the above technical purpose, the technical scheme adopted by the invention is as follows:
An automatic target detection and space positioning method for a fixed station based on YOLO-SITCOL comprises the following steps:
Step 1: configure the fixed-station acquisition hardware. The acquisition hardware uses a fixed base station as the acquisition platform and integrates two varifocal gun-type cameras, a binocular connecting rod, an electric pan-tilt platform, a fixed rod, power supply lines, and computer equipment to complete the automatic detection and positioning of geographic objects;
Step 2: collect images. With the fixed base station as the carrier platform, the integrated binocular industrial-grade panoramic camera rapidly acquires real-scene images and generates panoramic images; the attitude measurement system obtains camera pose information in real time, the camera pose information including the real-time position of the binocular cameras obtained by GPS and the real-time exterior-orientation parameters of the binocular cameras obtained from the electronic pan-tilt platform;
Step 3: realize automatic detection of geographic objects based on a YOLO (You Only Look Once) model, comprising the following steps:
Step 3.1: classify and label the calibrated samples;
Step 3.2: improve the existing YOLO base model as required, and construct and train a deep-learning target detection model suited to the target requirements;
Step 3.3: apply the target detection model to the monitoring of target geographic objects;
Step 4: realize the inverse computation from image pixel boxes to geographic coordinates based on the Space Intersection of Two Cameras and One Location (SITCOL) algorithm, comprising the following steps:
Step 4.1: obtain the pixel coordinates of the detection boxes from the geographic objects identified by the YOLO algorithm, and use the scale-invariant feature transform (SIFT) to extract and match feature points between the two photos taken by the binocular cameras at the same exposure moment;
Step 4.2: eliminate mismatches with the random sample consensus (RANSAC) algorithm to reduce error;
Step 4.3: for the bounding rectangle of the same object on the two matched images, realize the geo-location of the object with the forward intersection method of digital photogrammetry and compute its space coordinates;
Step 5: fit the multiple spatial position points of a single object to an optimal position point.
To optimize the above technical scheme, the concrete measures taken further include:
In step 1, the two varifocal gun-type cameras are rigidly connected on the binocular connecting rod and keep synchronized zoom; the electric pan-tilt platform is installed at the end of the fixed rod and connected to the center of the binocular connecting rod, and is used to rotate the two cameras; from the geographic coordinates of the fixed-rod end, calibrated by a measuring device, the precise geographic coordinates of the two cameras can be inversely computed via the rotation angle and the binocular baseline length.
In step 2, the panoramic images are shot by binocular cameras installed on the fixed base station; the fixed-base-station camera integrates a positioning and orientation system, a panoramic information acquisition system, a power-supply system, and a computer data processing system.
Step 3.2 comprises the following steps:
Step 3.2.1: resize the input image to 448x448 and feed it into the CNN network;
Step 3.2.2: the convolutional neural network (CNN) divides the input image into an S × S grid, and each cell is responsible for detecting the targets whose center points fall inside that cell;
Step 3.2.3: YOLO extracts features with the convolutional network, obtains predictions with the fully connected layers, and processes the network predictions into detected targets.
In step 4.2, the RANSAC algorithm iteratively searches for the optimal parameter model in a data set containing outliers.
Step 4.3 specifically:
For sequential images of multiple adjacent sites with stereo overlap, the position and orientation system (POS) provides high-precision exterior-orientation elements for every stereo image sequence. To obtain high-precision ground-point coordinates within the stereo-overlap range, the exterior-orientation elements of the photos and the pixel coordinates of homologous points on the two images are used to resolve the ground-point coordinates with the forward intersection formulas.
The invention has the following advantages:
The present invention draws on domestic and foreign fixed-base-station measuring technologies and their latest research results, and analyzes the working mechanism of existing fixed-base-station measuring systems. It mainly studies the key technologies of fixed-base-station mapping-system data processing, including the fixed-station acquisition-hardware configuration design, the image collection method, the automatic geographic-object detection algorithm, the algorithm for inversely computing geographic coordinates from image pixel boxes, and the algorithm for fitting the multiple spatial position points of a single object to an optimal position point.
The present invention combines the theory of fixed-base-station mapping systems with engineering applications, probes the basic theory of fixed-base-station measuring systems, studies the calibration method of the fixed-base-station system, and solves the matching and stereo positioning of sequential images acquired from tower-mounted video, thereby quickly obtaining the geographic coordinates of geographic objects and providing methodological support for inspection work. The invention can be widely applied in the field of inspection supervision, effectively reducing the workload of manual target inspection, improving inspection efficiency, and reducing the consumption of manpower and financial resources; it has good application value in practice.
Detailed description of the invention
Fig. 1 is a flow chart of the present invention;
Fig. 2 is a diagram of the integrated hardware configuration of the fixed base station of the present invention;
Fig. 3 is an overall system view of YOLO according to an embodiment of the present invention;
Fig. 4 is a grid-division diagram according to an embodiment of the present invention;
Fig. 5 is a network-structure diagram according to an embodiment of the present invention;
Fig. 6 is a diagram of stereo-pair matching of a spatial object according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of SITCOL forward intersection according to an embodiment of the present invention;
Fig. 8 is a diagram of multi-image fusion to find the optimal position point according to an embodiment of the present invention.
Specific embodiment
Embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Referring to Fig. 1, an automatic target detection and space positioning method for a fixed station based on YOLO-SITCOL of the invention comprises the following steps:
Step 1: configure the fixed-station acquisition hardware. As shown in Fig. 2, the acquisition hardware uses a fixed base station as the acquisition platform and integrates two varifocal gun-type cameras, a binocular connecting rod, an electric pan-tilt platform, a fixed rod, power supply lines, and computer equipment to complete the automatic detection and positioning of geographic objects;
In the embodiment, the two varifocal gun-type cameras are rigidly connected on the binocular connecting rod and keep synchronized zoom; the electric pan-tilt platform is installed at the end of the fixed rod and connected to the center of the binocular connecting rod, and is used to rotate the two cameras; from the geographic coordinates of the fixed-rod end, calibrated by a measuring device, the precise geographic coordinates of the two cameras are inversely computed via the rotation angle and the binocular baseline length.
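As a rough illustration of this inverse computation (not the patent's own implementation), the following sketch works in a local east-north-up frame and assumes the connecting rod is horizontal and perpendicular to the pan-tilt heading; the function name and coordinate conventions are hypothetical:

```python
import math

def camera_positions(rod_end, heading_deg, half_baseline):
    """Inversely compute the two camera positions from the surveyed
    coordinates of the fixed-rod end (east, north, up in metres), the
    pan-tilt rotation angle (0 deg = facing north) and half the
    binocular baseline length. The rod is assumed horizontal and
    perpendicular to the viewing direction."""
    e, n, u = rod_end
    rad = math.radians(heading_deg)
    # Unit vector along the connecting rod (perpendicular to the heading).
    rod_e, rod_n = math.cos(rad), -math.sin(rad)
    left = (e - half_baseline * rod_e, n - half_baseline * rod_n, u)
    right = (e + half_baseline * rod_e, n + half_baseline * rod_n, u)
    return left, right

# Heading due north: the rod runs east-west, cameras 0.5 m apart.
L, R = camera_positions((100.0, 200.0, 30.0), 0.0, 0.25)
print(L, R)  # (99.75, 200.0, 30.0) (100.25, 200.0, 30.0)
```

Rotating the pan-tilt simply changes `heading_deg`; the baseline length stays fixed, which is why the two camera positions can always be recovered from the calibrated rod-end coordinates.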
Step 2: collect images. With the fixed base station as the carrier platform, the integrated binocular industrial-grade panoramic camera rapidly acquires real-scene images and generates panoramic images; the attitude measurement system obtains camera pose information in real time, the camera pose information including the real-time position of the binocular cameras obtained by GPS and the real-time exterior-orientation parameters of the binocular cameras obtained from the electronic pan-tilt platform;
In the embodiment, the panoramic images are shot by binocular cameras installed on the fixed base station; the fixed-base-station camera integrates a positioning and orientation system, a panoramic information acquisition system, a power-supply system, and a computer data processing system. This system can rapidly acquire real-scene images, generate panoramic images, and collect GPS position information, fully exploiting the spatial information contained in the panoramic images.
Step 3: realize automatic detection of geographic objects based on the YOLO algorithm, comprising the following steps:
Step 3.1: classify and label the calibrated samples;
Step 3.2: improve the existing YOLO base model as required, and construct and train a deep-learning target detection model suited to the target requirements;
Step 3.3: apply the target detection model to the monitoring of target geographic objects.
In the embodiment, to meet the sample-size requirements of the target detection algorithm, besides collecting data with the fixed-base-station binocular cameras, the sample set is also enlarged by means such as internet crawlers and sample flipping and rotation;
In step 3.2 the YOLO algorithm model is constructed and trained. On the whole, as shown in Figs. 3 and 4, the YOLO algorithm uses a single CNN model to realize end-to-end target detection; the working principle of the whole system is to first resize the input image to 448x448, then feed it into the CNN network, and finally process the network predictions into detected targets.
Specifically, the CNN network of YOLO divides the input image into an S × S grid, and each cell is responsible for detecting the targets whose center points fall inside that cell. For example, the center of a scout car falls inside a middle cell, so that cell is responsible for predicting the scout car. Each cell predicts B bounding boxes and a confidence score for each bounding box. The confidence actually covers two aspects: first, how likely the bounding box is to contain a target, and second, how accurate the bounding box is. The former is denoted Pr(object); when the bounding box is background (contains no target), Pr(object) = 0, and when the bounding box contains a target, Pr(object) = 1. The accuracy of the bounding box is characterized by the IOU (intersection over union) of the predicted box and the ground-truth box, denoted IOU^truth_pred. The confidence is therefore defined as c = Pr(object) · IOU^truth_pred. The size and position of a bounding box are characterized by four values (x, y, w, h), where (x, y) is the center coordinate of the box and w and h are its width and height. The predicted center (x, y) is the offset relative to the top-left corner of the cell, in units of the cell size. Thus the prediction of each bounding box actually contains five elements (x, y, w, h, c), where the first four characterize the size and position of the bounding box and the last is the confidence.
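The IOU and the cell-relative box encoding described above can be sketched as follows; the helper names and the 7 × 7 grid are illustrative assumptions, not taken from the patent:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def decode_box(cell_col, cell_row, S, x, y, w, h, img_size=448):
    """Turn a YOLO cell prediction (x, y offsets in cell units, w, h
    relative to the whole image) back into pixel corner coordinates."""
    cell = img_size / S
    cx = (cell_col + x) * cell
    cy = (cell_row + y) * cell
    bw, bh = w * img_size, h * img_size
    return (cx - bw / 2, cy - bh / 2, cx + bw / 2, cy + bh / 2)

# A box predicted at the centre of the middle cell of a 7x7 grid,
# covering half the image in each dimension.
pred = decode_box(3, 3, 7, 0.5, 0.5, 0.5, 0.5)
truth = (112.0, 112.0, 336.0, 336.0)
confidence = 1.0 * iou(pred, truth)  # Pr(object) = 1 for a cell with a target
print(pred, confidence)  # (112.0, 112.0, 336.0, 336.0) 1.0
```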
Referring to Fig. 5, YOLO extracts features with a convolutional network and then obtains predictions with fully connected layers. The network structure follows the GoogLeNet model and contains 24 convolutional layers and 2 fully connected layers. The convolutional layers mainly use 1x1 convolutions for channel reduction, each followed closely by a 3x3 convolution. The convolutional and fully connected layers use the Leaky ReLU activation function max(x, 0.1x); the last layer uses a linear activation function.
After training, the final YOLO model is saved as a .pb file and can be called through Flask. A front-end page is constructed; an input image to be detected is annotated with targets and JSON data is returned.
Step 4: realize the inverse computation from image pixel boxes to geographic coordinates based on the SITCOL algorithm, comprising the following steps:
Step 4.1: obtain the pixel coordinates of the detection boxes from the geographic objects identified by the YOLO algorithm, and use SIFT to extract and match feature points between the two photos taken by the binocular cameras at the same exposure moment;
SIFT is an algorithm for extracting local features: it finds extreme points in scale space and extracts invariants of position, scale, and rotation. The resulting feature descriptor is invariant to image rotation, scale change, and brightness change, and remains stable under changes of shooting viewpoint, affine transformation, and noise.
First, an image pyramid is built. The pyramid has O octaves in total, each with S layers; the image of octave o (o ≥ 2) is obtained by downsampling the image of octave o−1 by 1/2, and the images within each octave are obtained by Gaussian filtering from bottom to top. After the image pyramid is obtained, in order to detect stable feature points, the difference-of-Gaussian scale space is built:

D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ)

where G(x, y, σ) is the Gaussian filter and I(x, y) is the gray value of the image at point (x, y).
To find the extreme points of the image in scale space, each sample point is compared with all of its neighbors (the 8 neighbors in the same scale and the 2 × 9 = 18 corresponding points in the adjacent scales, 26 points in total); if it is the maximum or the minimum among all these points, it is taken as a candidate feature point of the image at that scale.
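A minimal sketch of this 26-neighbour extremum test; the function name and the toy 3 × 3 layers are hypothetical:

```python
def is_scale_space_extremum(dog, s, i, j):
    """Check whether sample (i, j) in DoG layer s is an extremum among
    its 26 neighbours: 8 in the same layer plus 9 in each adjacent
    layer. `dog` is a list of 2-D grids (lists of lists) of equal size."""
    centre = dog[s][i][j]
    neighbours = [
        dog[ds][i + di][j + dj]
        for ds in (s - 1, s, s + 1)
        for di in (-1, 0, 1)
        for dj in (-1, 0, 1)
        if not (ds == s and di == 0 and dj == 0)
    ]
    return centre > max(neighbours) or centre < min(neighbours)

# Three 3x3 DoG layers with a clear peak in the middle of the middle layer.
layer = [[0.1, 0.1, 0.1], [0.1, 0.1, 0.1], [0.1, 0.1, 0.1]]
mid = [[0.1, 0.1, 0.1], [0.1, 0.9, 0.1], [0.1, 0.1, 0.1]]
print(is_scale_space_extremum([layer, mid, layer], 1, 1, 1))  # True
```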
The extreme points in scale space are only preliminarily screened feature points; they are affected by noise and have strong edge responses. Lowe accurately locates the position and scale of a keypoint by fitting a three-dimensional quadratic function, and removes edge responses through the Hessian matrix: a spurious feature point on an edge has a large principal curvature across the edge and a small principal curvature perpendicular to it. Writing r for the ratio of the largest to the smallest eigenvalue of the Hessian, a point is kept as a feature point when

Tr(H)² / Det(H) < (r + 1)² / r

and discarded otherwise, where H is the Hessian matrix at point (x, y),

H = [ D_xx  D_xy ; D_xy  D_yy ]

and the D values are obtained from differences of neighboring pixel values.
As shown in Fig. 6, the left photo of the binocular pair is detected by YOLO and the right photo is matched pixel-wise using SIFT, yielding the first position point of the spatial object; and so on, until multiple position points of the same spatial object are finally obtained.
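The left-right matching step can be illustrated with Lowe's nearest-neighbour ratio test. The sketch below works on plain descriptor lists rather than a real SIFT extractor; the 2-D toy descriptors and the 0.8 ratio are assumptions for illustration (real SIFT descriptors are 128-D):

```python
import math

def match_descriptors(desc_left, desc_right, ratio=0.8):
    """Lowe's ratio test: for every descriptor of the left photo, find
    its two nearest neighbours among the right photo's descriptors and
    accept the match only when the closest is clearly better than the
    runner-up. Returns (left_index, right_index) pairs."""
    matches = []
    for i, d in enumerate(desc_left):
        dists = sorted(
            (math.dist(d, e), j) for j, e in enumerate(desc_right)
        )
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

left = [[0.0, 1.0], [5.0, 5.0]]
right = [[0.1, 1.0], [9.0, 9.0], [5.0, 5.1]]
print(match_descriptors(left, right))  # [(0, 0), (1, 2)]
```

Ambiguous features, whose two nearest neighbours are nearly equidistant, produce no match at all, which is exactly why the ratio test suppresses many of the mismatches that RANSAC must otherwise remove in step 4.2.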
Step 4.2: eliminate mismatches with the RANSAC algorithm to reduce error;
In a data set containing outliers, the RANSAC algorithm iteratively searches for the optimal parameter model; points that do not fit the optimal model are defined as outliers. Several sample data are drawn at random from the data set (the samples must not be collinear) and an optimal 3 × 3 homography matrix H, denoted model M, is computed so that the number of data points satisfying the matrix is maximized:

s · (x', y', 1)ᵀ = H · (x, y, 1)ᵀ

where (x, y) is a corner position in the target image, (x', y') is the corresponding corner position in the scene image, and s is a scale parameter. The projection error of every data point against the matrix is computed; if the error is below a threshold, the point is added to the inlier set I. If the current inlier set I has more elements than the best inlier set I_best, then I_best is updated to I and the iteration count k is updated. If the number of iterations exceeds k, the algorithm exits; otherwise the iteration count is incremented and the above steps are repeated. The iteration count k is computed as

k = log(1 − p) / log(1 − wᵐ)

where p is the confidence, generally taken as 0.995; w is the inlier ratio; and m is the minimum number of samples needed to compute the model;
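A sketch of the iteration-count formula and of the hypothesise-and-verify loop above; for brevity the model is reduced from a full homography to a pure 2-D translation (m = 1), an illustrative simplification rather than the patent's model:

```python
import math
import random

def ransac_iterations(p=0.995, w=0.5, m=4):
    """k = log(1-p) / log(1-w**m): iterations needed to draw at least
    one all-inlier sample of size m with confidence p."""
    return math.ceil(math.log(1 - p) / math.log(1 - w ** m))

def ransac_translation(pairs, threshold=1.0, k=100, seed=0):
    """Toy RANSAC with a 2-D translation model: repeatedly hypothesise
    the offset from one random correspondence and keep the hypothesis
    with the most inliers. `pairs` holds ((x, y), (x', y')) tuples."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(k):
        (x, y), (xp, yp) = rng.choice(pairs)
        dx, dy = xp - x, yp - y
        inliers = [
            pair for pair in pairs
            if math.hypot(pair[1][0] - pair[0][0] - dx,
                          pair[1][1] - pair[0][1] - dy) < threshold
        ]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

print(ransac_iterations())  # 83 iterations for p=0.995, w=0.5, m=4
pairs = [((i, i), (i + 3.0, i + 2.0)) for i in range(8)]
pairs.append(((0.0, 0.0), (40.0, 40.0)))  # one gross mismatch
print(len(ransac_translation(pairs)))     # 8: the mismatch is rejected
```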
Step 4.3: for the bounding rectangle of the same object on the two matched images, realize the geo-location of the object with the SITCOL forward intersection method of digital photogrammetry and compute its space coordinates;
As shown in Fig. 7, step 4.3 uses the SITCOL method, based on direct georeferencing, to rapidly process the fixed-base-station camera measurement data. The present invention takes the two photos shot at the same exposure moment by the binocular cameras of the fixed base station and computes the space coordinates of a ground point from the stereo pair with the forward intersection method of digital close-range photogrammetry, specifically as follows:
For sequential images of multiple adjacent sites with stereo overlap, the POS system provides high-precision exterior-orientation elements for every stereo image sequence. To obtain high-precision ground-point coordinates within the stereo-overlap range, the exterior-orientation elements of the photos and the pixel coordinates of homologous points on the two images are used to resolve the ground-point coordinates with the forward intersection formulas. Let D-XYZ be the terrestrial photogrammetric coordinate system, s1-xyz the image-space coordinate system of the first site, and s2-xyz that of the second site; let s-uvw be the image-space auxiliary coordinate system, with s1 and s2 the two ends of the photographic baseline of the binocular cameras; let o-xy be the image-plane coordinate system and f the focal length. Taking the computation of the space coordinates of a spatial point P as an example, with photo resolution w*h and pixel size px, the process is as follows: let point P image at point a in the image plane with pixel coordinates (i, j); convert the pixel coordinates of a into image-plane coordinates (x, y) with the principal point as origin:

x = (i − w/2) · px,  y = (h/2 − j) · px
The image-space coordinate system s-xyz is constructed with s at (0, 0, 0), giving a the coordinates (x, y, −f) in image space. The image-space auxiliary coordinate system s-uvw is constructed with s at (0, 0, 0), and the coordinates of a in the auxiliary system are computed. From the camera exterior-orientation elements of photography sites s1 and s2 resolved by the POS, the rotation matrix R between the image-space coordinate system and the image-space auxiliary coordinate system is computed. Let ψ be the azimuth rotated about the y-axis, ω the roll angle rotated about the x-axis, and κ the pitch angle rotated about the z-axis:

(u, v, w)ᵀ = R · (x, y, −f)ᵀ

which gives the position (u, v, w) of point a in the image-space auxiliary coordinate system.
The system s-uvw is translated to D-XYZ; the baseline between the two camera positions has components (B_x, B_y, B_z). At photography sites s1 and s2, let N₁ and N₂ be the scale factors between the image-space auxiliary coordinate system and the terrestrial photogrammetric coordinate system:

N₁ = (B_x·w₂ − B_z·u₂) / (u₁·w₂ − w₁·u₂)
N₂ = (B_x·w₁ − B_z·u₁) / (u₁·w₂ − w₁·u₂)

Using N₁ and N₂, the geographic coordinates (X_P, Y_P, Z_P) of point P are computed.
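The N₁ formula and intersection above can be checked numerically. The sketch below is a simplified illustration, not the patent's implementation: it assumes identity camera rotations and a focal length of 1, so the auxiliary coordinates equal the image-space coordinates:

```python
def forward_intersection(s1, s2, ray1, ray2):
    """SITCOL-style forward intersection. s1, s2 are the geographic
    positions of the two camera stations; ray1 = (u1, v1, w1) and
    ray2 = (u2, v2, w2) are the image-space auxiliary coordinates of
    the homologous image points. Returns the ground point P as seen
    from station 1."""
    bx = s2[0] - s1[0]
    bz = s2[2] - s1[2]
    u1, v1, w1 = ray1
    u2, v2, w2 = ray2
    denom = u1 * w2 - w1 * u2
    n1 = (bx * w2 - bz * u2) / denom  # scale factor N1 for station 1
    return (s1[0] + n1 * u1, s1[1] + n1 * v1, s1[2] + n1 * w1)

# Synthetic check: identity rotations, focal length 1, stations 1 m
# apart along X; each ray is the station-to-point vector scaled so its
# third component equals -f.
f = 1.0
s1, s2, p = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 1.0, -5.0)
scale = lambda s: tuple((pc - sc) * f / (s[2] - p[2]) for pc, sc in zip(p, s))
print(forward_intersection(s1, s2, scale(s1), scale(s2)))  # (2.0, 1.0, -5.0)
```

In practice the symmetric estimate from station 2 (via N₂) would typically be computed as well and the two results averaged, which is one common way to damp measurement noise.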
Step 5: fit the multiple spatial position points of a single object to an optimal position point. During shooting, the fixed-base-station camera captures the same ground object many times; after YOLO identification, differences in image orientation, scale, and so on among the images containing the same target cause that target to be positioned in space at multiple, not exactly identical, coordinate points. As shown in Fig. 8, the multiple acquired position points of the same spatial object are processed with the RANSAC algorithm to obtain the optimal position point of the spatial object.
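One hedged way to realize this fitting step is a RANSAC-style consensus followed by a centroid; the 0.5 m threshold and the consensus-then-centroid strategy are assumptions for illustration (the patent only names RANSAC):

```python
import math

def best_position(points, threshold=0.5):
    """Consensus over the repeated space positions of one object: try
    each point as a hypothesis, collect the points within `threshold`
    of it, and return the centroid of the largest inlier set as the
    optimal position point."""
    best = []
    for cand in points:
        inliers = [pt for pt in points if math.dist(pt, cand) < threshold]
        if len(inliers) > len(best):
            best = inliers
    n = len(best)
    return tuple(sum(c) / n for c in zip(*best))

obs = [(10.0, 20.0, 5.0), (10.2, 20.0, 5.0), (10.0, 20.2, 5.0),
       (13.0, 24.0, 5.0)]  # last fix is a gross outlier
print(best_position(obs))  # centroid of the three consistent fixes
```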
In summary, with the automatic target detection and space positioning method for a fixed station based on YOLO-SITCOL of the invention, the fixed-station acquisition of images meets the hardware configuration standard for automatic detection and positioning of geographic objects and realizes automatic detection of geographic objects. The three-dimensional forward intersection algorithm based on the interior-orientation elements, the exterior-orientation elements, and SITCOL of the fixed station realizes the inverse computation of the geographic coordinates corresponding to matched pixel-point groups, and its mapping accuracy is evaluated. When a geographic object is photographed and detected on multiple images, the multiple geographic position points of the same target are fitted, based on their semantics, to an optimal position point, yielding the most accurate position of the geographic object. This effectively reduces the workload of manual target inspection, improves inspection efficiency, and reduces the consumption of manpower and financial resources; it has good application value in practice.
The above are only preferred embodiments of the present invention, and the protection scope of the present invention is not limited to the above embodiments; all technical solutions under the idea of the present invention belong to the protection scope of the present invention. It should be pointed out that, for those of ordinary skill in the art, several improvements and modifications made without departing from the principles of the present invention should also be regarded as within the protection scope of the present invention.

Claims (5)

1. An automatic target detection and space positioning method for a fixed station based on YOLO-SITCOL, characterized by comprising the following steps:
Step 1: configuring the fixed-station acquisition hardware, the acquisition hardware using a fixed base station as the acquisition platform and integrating two varifocal gun-type cameras, a binocular connecting rod, an electric pan-tilt platform, a fixed rod, power supply lines, and computer equipment to complete the automatic detection and positioning of geographic objects;
Step 2: collecting images, including, with the fixed base station as the carrier platform, rapidly acquiring real-scene images by the integrated binocular industrial-grade panoramic camera and generating panoramic images; obtaining camera pose information in real time by the attitude measurement system, the camera pose information including the real-time position of the binocular cameras obtained by GPS and the real-time exterior-orientation parameters of the binocular cameras obtained from the electronic pan-tilt platform;
Step 3: realizing automatic detection of geographic objects based on the YOLO algorithm, comprising the following steps:
Step 3.1: classifying and labeling the calibrated samples;
Step 3.2: improving the existing YOLO base model as required, and constructing and training a deep-learning target detection model suited to the target requirements;
Step 3.3: applying the target detection model to the monitoring of target geographic objects;
Step 4: realizing the inverse computation from image pixel boxes to geographic coordinates based on the SITCOL algorithm, comprising the following steps:
Step 4.1: obtaining the pixel coordinates of the detection boxes from the geographic objects identified by the YOLO algorithm, and using SIFT to extract and match feature points between the two photos taken by the binocular cameras at the same exposure moment;
Step 4.2: eliminating mismatches with the RANSAC algorithm to reduce error;
Step 4.3: for the bounding rectangle of the same object on the two matched images, realizing the geo-location of the object with the forward intersection method of digital photogrammetry and computing its space coordinates;
Step 5: fitting the multiple spatial position points of a single object to an optimal position point.
2. The automatic target detection and space positioning method for a fixed station based on YOLO-SITCOL according to claim 1, characterized in that: in step 1, the two varifocal gun-type cameras are rigidly connected on the binocular connecting rod and keep synchronized zoom; the electric pan-tilt platform is installed at the end of the fixed rod and connected to the center of the binocular connecting rod, and is used to rotate the two cameras; from the geographic coordinates of the fixed-rod end, calibrated by a measuring device, the precise geographic coordinates of the two cameras can be inversely computed via the rotation angle and the binocular baseline length.
3. The automatic target detection and space positioning method for a fixed station based on YOLO-SITCOL according to claim 1, characterized in that: in step 2, the panoramic images are shot by binocular cameras installed on the fixed base station; the fixed-base-station camera integrates a positioning and orientation system, a panoramic information acquisition system, a power-supply system, and a computer data processing system.
4. The automatic target detection and space positioning method for a fixed station based on YOLO-SITCOL according to claim 1, characterized in that step 3.2 comprises the following steps:
Step 3.2.1: resizing the input image to 448x448 and feeding it into the CNN network;
Step 3.2.2: dividing the input image into an S × S grid by the CNN network, each cell being responsible for detecting the targets whose center points fall inside that cell;
Step 3.2.3: extracting features by YOLO with the convolutional network, obtaining predictions with the fully connected layers, and processing the network predictions into detected targets.
5. The automatic target detection and space positioning method for a fixed station based on YOLO-SITCOL according to claim 1, characterized in that: in step 4.2, the RANSAC algorithm iteratively searches for the optimal parameter model in a data set containing outliers.
CN201811616997.8A 2018-12-27 2018-12-27 Automatic target detection and space positioning method for fixed station based on YOLO-SITCOL Active CN109883400B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811616997.8A CN109883400B (en) 2018-12-27 2018-12-27 Automatic target detection and space positioning method for fixed station based on YOLO-SITCOL


Publications (2)

Publication Number Publication Date
CN109883400A (en) 2019-06-14
CN109883400B (en) 2021-12-10

Family

ID=66925214

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811616997.8A Active CN109883400B (en) 2018-12-27 2018-12-27 Automatic target detection and space positioning method for fixed station based on YOLO-SITCOL

Country Status (1)

Country Link
CN (1) CN109883400B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101334276A (en) * 2007-06-27 2008-12-31 中国科学院自动化研究所 A visual measurement method and device
CN105262946A (en) * 2015-09-23 2016-01-20 上海大学 Three-dimensional binocular camera platform experimental device
CN106525004A (en) * 2016-11-09 2017-03-22 人加智能机器人技术(北京)有限公司 Binocular stereo vision system and depth measuring method
EP3159651A1 (en) * 2015-10-20 2017-04-26 MBDA UK Limited Improvements in and relating to missile targeting
CN106871878A (en) * 2015-12-14 2017-06-20 莱卡地球系统公开股份有限公司 The method that spatial model is created using hand-held range unit
CN206400640U (en) * 2017-01-17 2017-08-11 湖南优象科技有限公司 A kind of caliberating device for binocular panoramic camera
CN107240126A (en) * 2016-03-28 2017-10-10 华天科技(昆山)电子有限公司 The calibration method of array image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Du Junping: "Research on Cross-Scale Fusion of Multi-Source Moving Images", 30 June 2018, Beijing University of Posts and Telecommunications Press *
Jiang Qiangwei: "Research on Object Recognition and Localization Based on Multi-Feature-Point Fusion Matching of Binocular Images", Radio Engineering *
Chen Huiyan: "Theory and Application of Intelligent Vehicles", 31 July 2018 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110796002A (en) * 2019-09-23 2020-02-14 苏州光格设备有限公司 Method and system for automatically generating panoramic image containing hot spot information
CN112711972A (en) * 2019-10-26 2021-04-27 上海海思技术有限公司 Target detection method and device
CN113221823A (en) * 2021-05-31 2021-08-06 南通大学 Traffic signal lamp countdown identification method based on improved lightweight YOLOv3
CN113221823B (en) * 2021-05-31 2024-06-07 南通大学 Traffic signal lamp countdown identification method based on improved lightweight YOLOv3
CN116482731A (en) * 2023-04-25 2023-07-25 长春理工大学 A Geographic Information Acquisition Method Based on Satellite Positioning and Distance Measurement

Also Published As

Publication number Publication date
CN109883400B (en) 2021-12-10

Similar Documents

Publication Publication Date Title
CN101894366B (en) Method and device for acquiring calibration parameters and video monitoring system
CN115439424A (en) Intelligent detection method for aerial video image of unmanned aerial vehicle
EP3140613B1 (en) Surveying system
CN114936971A (en) A UAV remote sensing multispectral image stitching method and system for water areas
CN109520500B (en) A precise positioning and street view library collection method based on terminal shooting image matching
CN109871739B (en) Automatic target detection and space positioning method for mobile station based on YOLO-SIOCTL
CN109596121B (en) A method for automatic target detection and spatial positioning of a mobile station
WO2021004416A1 (en) Method and apparatus for establishing beacon map on basis of visual beacons
CN111126304A (en) Augmented reality navigation method based on indoor natural scene image deep learning
CN106373088B (en) A fast stitching method for large-tilt, low-overlap aerial images
CN108765298A (en) Unmanned plane image split-joint method based on three-dimensional reconstruction and system
CN109883400A (en) Automatic target detection and space positioning method for fixed station based on YOLO-SITCOL
CN112613397B (en) Construction method of training sample set for multi-view optical satellite remote sensing image target recognition
Li et al. A study on automatic UAV image mosaic method for paroxysmal disaster
CN110246177A (en) A Vision-Based Automatic Wave Measurement Method
CN108107462A (en) The traffic sign bar gesture monitoring device and method that RTK is combined with high speed camera
CN113313659A (en) High-precision image splicing method under multi-machine cooperative constraint
Cao Applying image registration algorithm combined with CNN model to video image stitching
Liu et al. A novel adjustment model for mosaicking low-overlap sweeping images
CN109671109A (en) Dense point cloud generation method and system
CN110245566A (en) A long-distance tracking method for infrared targets based on background features
CN115597592B (en) A comprehensive positioning method applied to UAV inspection
CN119223292A (en) Real estate surveying and mapping area management method and system based on remote sensing images
CN116109956A (en) Unmanned aerial vehicle self-adaptive zooming high-precision target detection intelligent inspection method
CN119992140A (en) A UAV visual positioning method and system based on satellite image map matching

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant