CN111222514B - Local map optimization method based on visual positioning - Google Patents
- Publication number: CN111222514B (application CN201911422992.6A)
- Authority
- CN
- China
- Prior art keywords: current layer; feature points; image
- Prior art date: 2019-12-31
- Legal status: Active (assumed; not a legal conclusion)
Classifications
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- G06F18/22—Matching criteria, e.g. proximity measures
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; pyramid transform
- Y02T10/40—Engine management systems
Abstract
The local map optimization method based on visual positioning provided by the embodiments of the invention can adaptively adjust parameters according to the characteristics of the images and extract feature points of higher quality, and the feature points are distributed uniformly over the image area, which benefits the subsequent matching step. Conventional approaches apply random sample consensus (RANSAC) directly to screen out false matches and become error-prone when there are too many sample points; the method instead performs a first screening with the Jaccard similarity and runs RANSAC only on the reduced sample set, which improves matching accuracy. After matching succeeds, the image with updated features replaces the original map image, so the local map is optimized and map accuracy is preserved.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a local map optimization method based on visual positioning.
Background
Computer vision is developing rapidly and is applied in many fields, including industry, medicine, transportation, and the military. Feature extraction and matching is a major research topic in computer vision, especially in visual navigation, where high-precision image feature extraction and matching determine the performance of the visual-navigation front end.
In recent years the ORB algorithm has been widely used in research and performs well, but on the one hand the quantity and quality of the extracted keypoints depend on the FAST threshold, and a fixed threshold can hardly supply the number and quality of feature points required by different images; on the other hand, the distribution of the feature points strongly influences subsequent matching and should be as uniform as possible, while traditional feature-point matching methods still produce many mismatches.
Disclosure of Invention
In order to solve the problem that conventional feature-point matching still produces many mismatches, the embodiments of the invention provide a local map optimization method based on visual positioning. The specific technical scheme is as follows:
the embodiment of the invention provides a local map optimization method based on visual positioning, which comprises the following steps:
acquiring a first picture and a second picture of a local map, and constructing a first image pyramid of the first picture; constructing a second image pyramid of a second picture;
taking a first layer in the first image pyramid as a first current layer; taking a first layer of the second image pyramid as a second current layer;
performing feature extraction on the first current layer and the second current layer by adopting an ORB technique, and performing descriptor calculation on the features so as to determine the directions of the feature points;
taking each layer in the first image pyramid as a first current layer, taking each layer in the second image pyramid as a second current layer, extracting features of each layer in the first image pyramid and the second image pyramid, and carrying out descriptor calculation on the features;
performing feature matching on the first picture and the second picture;
performing motion estimation on the first image and the second image after matching to obtain a motion estimation result;
and optimizing the local map according to the motion estimation result.
Optionally, performing feature extraction on the first current layer and the second current layer by using the ORB technique, and performing descriptor calculation on the features respectively includes:
respectively performing coarse extraction on the first current layer and the second current layer, to obtain a plurality of feature points of the first current layer and a plurality of feature points of the second current layer;
creating a decision tree model by using the ID3 algorithm, and placing the feature points of the first current layer into the decision tree model to obtain a plurality of FAST feature points of the first current layer; placing the feature points of the second current layer into the decision tree model to obtain a plurality of FAST feature points of the second current layer;
removing local dense characteristic points from a plurality of FAST characteristic points of the first current layer by using non-maximum suppression; removing local dense characteristic points from a plurality of FAST characteristic points of the second current layer by using non-maximum suppression;
scaling the first image pyramid after the locally dense feature points are removed, to form a first set of nlevels images; scaling the second image pyramid after the locally dense feature points are removed, to form a second set of nlevels images; taking all the feature points in the first set of nlevels images as the oFAST feature points of the first picture, and all the feature points in the second set of nlevels images as the oFAST feature points of the second picture;
the direction of the OFAST feature points is determined using a moment method.
Optionally, determining the direction of the OFAST feature point using a moment method includes:
The centroid of the neighbourhood of a feature point within radius r is calculated using image moments, and the vector from the feature point to the centroid gives the direction of the feature point. The moments of an image block B are defined as follows:

m_pq = Σ_{(x,y)∈B} x^p · y^q · I(x, y),  p, q ∈ {0, 1}

where (x, y) are the selected pixels and I(x, y) is the image gray value. The centroid of the image block is found from the moments as:

C = (m_10 / m_00, m_01 / m_00)

Assuming that the geometric center of the image block is O, connecting O with the centroid C yields a direction vector, which is the direction of the feature point. The calculation formula is as follows:

θ = atan(m_01 / m_10)
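As a minimal illustration (not the patent's code), the moment-based orientation above can be computed directly. `patch_orientation` is a hypothetical helper that accumulates m_10 and m_01 over a circular patch and returns the angle via atan2, which equals atan(m_01/m_10) with the correct quadrant:

```python
import math

def patch_orientation(img, cx, cy, r=3):
    """Orientation of a keypoint at (cx, cy) from intensity moments
    m_pq = sum of x^p * y^q * I(x, y) over a patch of radius r."""
    m10 = m01 = 0.0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dx * dx + dy * dy > r * r:
                continue  # keep the patch circular
            i = img[cy + dy][cx + dx]
            m10 += dx * i
            m01 += dy * i
    # direction from the patch centre toward the intensity centroid
    return math.atan2(m01, m10)
```

For a patch whose intensity increases left to right the direction is 0 (along +x); for a top-to-bottom gradient it is π/2.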
For the image whose feature points have been extracted, a quadtree is constructed. For each node, if it contains exactly one feature point it is not split further; if it contains more than one, the quadtree continues splitting downward until every node contains only one feature point or the number of nodes meets the required number of feature points. In the latter case, the feature point with the largest response value in each node is kept and the redundant feature points are discarded, so that the feature points are distributed uniformly.
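The quadtree homogenization can be sketched as follows; this is an illustrative reconstruction (not the patent's implementation), assuming each point carries (x, y, response) and splitting stops once the leaf count reaches the desired number of feature points:

```python
def quadtree_filter(points, bounds, max_points):
    """points: list of (x, y, response); bounds: (x0, y0, x1, y1).
    Split the region into a quadtree until each leaf holds one point or
    the leaf count reaches max_points, then keep the strongest point
    per leaf so the surviving features are spread uniformly."""
    leaves = [(bounds, points)]
    while len(leaves) < max_points and any(len(p) > 1 for _, p in leaves):
        nxt = []
        for (x0, y0, x1, y1), pts in leaves:
            if len(pts) <= 1:
                nxt.append(((x0, y0, x1, y1), pts))
                continue
            mx, my = (x0 + x1) / 2, (y0 + y1) / 2
            quads = {0: [], 1: [], 2: [], 3: []}
            for p in pts:
                # quadrant index: bit 0 = right half, bit 1 = bottom half
                quads[(p[0] >= mx) + 2 * (p[1] >= my)].append(p)
            boxes = [(x0, y0, mx, my), (mx, y0, x1, my),
                     (x0, my, mx, y1), (mx, my, x1, y1)]
            for q, box in zip(quads.values(), boxes):
                if q:
                    nxt.append((box, q))
        leaves = nxt
    # one point per leaf: the one with the largest response value
    return [max(pts, key=lambda p: p[2]) for _, pts in leaves]
```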
The local map optimization method based on visual positioning provided by the embodiments of the invention can adaptively adjust parameters according to the characteristics of the images and extract feature points of higher quality, and the feature points are distributed uniformly over the image area, which benefits the subsequent matching step. Conventional approaches apply random sample consensus (RANSAC) directly to screen out false matches and become error-prone when there are too many sample points; the method instead performs a first screening with the Jaccard similarity and runs RANSAC only on the reduced sample set, which improves matching accuracy. After matching succeeds, the image with updated features replaces the original map image, so the local map is optimized and map accuracy is preserved.
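As a hedged sketch of the two-stage screening: the patent does not specify how the Jaccard similarity is computed for a match, so one plausible reading is shown here, treating each binary descriptor as the set of indices of its set bits and discarding matches below a similarity threshold before handing the survivors to RANSAC (the threshold value is illustrative):

```python
def jaccard(a, b):
    """Jaccard similarity of two descriptors, treated as sets of set-bit indices."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def prescreen(matches, descs1, descs2, thresh=0.5):
    """First screening pass: keep only matches whose descriptors are
    sufficiently similar, shrinking the sample set handed to RANSAC."""
    return [(i, j) for i, j in matches
            if jaccard(descs1[i], descs2[j]) >= thresh]
```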
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a flow chart of the visual positioning-based local map optimization method provided by an embodiment of the invention;
FIG. 2 is a schematic diagram of a FAST feature point according to an embodiment of the invention;
FIG. 3 is a schematic diagram of the ORB visual odometer module employed in an embodiment of the invention;
FIG. 4 is a schematic diagram of the visual odometer between two adjacent frames according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the accompanying drawings in the embodiments of the present invention.
In order to solve the problem that traditional feature-point matching methods still produce many mismatches, a local map optimization method based on visual positioning is provided.
It should be noted that visual odometry and its implementation are the key and difficult parts of the local map optimization method based on visual positioning; the ORB visual odometer module adopted here operates according to the flow shown in fig. 3.
The visual odometer computes the camera motion from two adjacent frames of images: the earlier frame is called the reference frame and the later frame the current frame. The reference frame always serves as the coordinate system, the current frame is matched against it, and the motion between them is estimated. The visual odometer between two adjacent frames is illustrated by fig. 4.
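The reference-frame/current-frame bookkeeping described above can be sketched as a small class; `estimate_motion` is a placeholder for whatever relative-pose solver is used, and the class name is illustrative:

```python
class TwoFrameVO:
    """Bookkeeping for a two-frame visual odometer: the earlier frame is
    the reference frame (its pose fixes the coordinate system) and the
    later frame is the current frame, matched against the reference."""

    def __init__(self):
        self.reference = None

    def process(self, frame, estimate_motion):
        # The very first frame has nothing to match against.
        if self.reference is None:
            self.reference = frame
            return "identity"
        pose = estimate_motion(self.reference, frame)
        # The current frame becomes the reference for the next pair.
        self.reference = frame
        return pose
```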
Referring to fig. 1, an embodiment of the present invention provides a local map optimization method based on visual localization, including:
s110, acquiring a first picture and a second picture of a local map, and constructing a first image pyramid of the first picture; constructing a second image pyramid of a second picture;
s120, taking a first layer in the first image pyramid as a first current layer; the first layer of the second image pyramid is used as a second current layer;
S130, performing feature extraction on the first current layer and the second current layer by adopting an ORB technique, and performing descriptor calculation on the features so as to determine the directions of the feature points;
specifically, ORB feature extraction extracts ORB feature points from an input picture for later feature matching. It can be summarized as two processes: oFAST keypoint extraction and rBRIEF descriptor computation.
S140, taking each layer in the first image pyramid as a first current layer, taking each layer in the second image pyramid as a second current layer, extracting features of each layer in the first image pyramid and the second image pyramid, and carrying out descriptor calculation on the features;
s150, performing feature matching on the first picture and the second picture;
s160, performing motion estimation on the first image and the second image after matching to obtain a motion estimation result;
s170, optimizing the local map according to the motion estimation result.
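Step S150 matches binary descriptors, which is conventionally done by Hamming distance; a minimal brute-force nearest-neighbour sketch follows (the `max_dist` threshold is an illustrative assumption, not from the patent):

```python
def hamming(d1, d2):
    """Hamming distance between two equal-length binary descriptors."""
    return sum(b1 != b2 for b1, b2 in zip(d1, d2))

def match_features(descs1, descs2, max_dist=64):
    """Brute-force nearest-neighbour matching: for each descriptor in the
    first picture, find the closest descriptor in the second picture and
    keep the pair if the distance is below the threshold."""
    matches = []
    for i, d1 in enumerate(descs1):
        j, dist = min(((j, hamming(d1, d2)) for j, d2 in enumerate(descs2)),
                      key=lambda t: t[1])
        if dist <= max_dist:
            matches.append((i, j))
    return matches
```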
Further, performing feature extraction on the first current layer and the second current layer by using the ORB technique, and performing descriptor calculation on the features respectively includes:
respectively performing coarse extraction on the first current layer and the second current layer, to obtain a plurality of feature points of the first current layer and a plurality of feature points of the second current layer;
Specifically, this step extracts a large number of feature points, but many of them are of low quality. If the gray values of n consecutive pixels on the circle are all larger, or all smaller, than the gray value of the candidate point P, then P is considered a feature point; typically n is set to 12. To speed up extraction and quickly discard non-feature points, the gray values at positions 1, 9, 5, and 13 are examined first: if P is a feature point, 3 or more of the pixel values at these four positions must all be larger, or all smaller, than the gray value of P. If not, the point is discarded directly.
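The segment test above can be sketched as follows, operating on the 16 gray values of the Bresenham circle around a candidate; the quick rejection uses positions 1, 5, 9, 13 (indices 0, 4, 8, 12) as in the text, and the threshold values are illustrative:

```python
def is_fast_corner(center, circle, t=20, n=12):
    """FAST segment test: the candidate is a corner if n contiguous pixels
    on its 16-pixel circle are all brighter than center + t or all darker
    than center - t. A quick test on 4 spread-out positions rejects most
    non-corners first."""
    quick = [circle[k] for k in (0, 4, 8, 12)]
    if (sum(v > center + t for v in quick) < 3 and
            sum(v < center - t for v in quick) < 3):
        return False  # cannot have 12 contiguous qualifying pixels
    for brighter in (True, False):
        ok = [(v > center + t) if brighter else (v < center - t)
              for v in circle]
        run = 0
        for v in ok * 2:  # doubled list handles wrap-around on the circle
            run = run + 1 if v else 0
            if run >= n:
                return True
    return False
```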
Creating a decision tree model by using the ID3 algorithm, and placing the feature points of the first current layer into the decision tree model to obtain a plurality of FAST feature points of the first current layer; placing the feature points of the second current layer into the decision tree model to obtain a plurality of FAST feature points of the second current layer. Referring to fig. 2, fig. 2 is a schematic diagram of a FAST feature point provided by an embodiment of the present invention.
Specifically, a machine-learning step screens out the optimal feature points: the ID3 algorithm trains a decision tree, the 16 pixels on the circle around each candidate are fed into the tree, and the optimal FAST feature points are screened out.
Removing local dense characteristic points from a plurality of FAST characteristic points of the first current layer by using non-maximum suppression; removing local dense characteristic points from a plurality of FAST characteristic points of the second current layer by using non-maximum suppression;
Specifically, non-maximum suppression removes locally dense feature points, solving the problem of multiple feature points crowded close together. A response value is computed for each feature point as the sum of absolute differences between the gray value of the feature point P and those of the 16 pixels on its surrounding circle. Among neighbouring feature points, the one with the larger response value is kept and the rest are deleted.
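A greedy version of this non-maximum suppression, assuming each point already carries its response value, might look like:

```python
def nms(points, radius=3):
    """points: list of (x, y, response). Sort by response, then keep a
    point only if no stronger kept point lies within the given radius."""
    kept = []
    for x, y, resp in sorted(points, key=lambda p: -p[2]):
        if all((x - kx) ** 2 + (y - ky) ** 2 > radius ** 2
               for kx, ky, _ in kept):
            kept.append((x, y, resp))
    return kept
```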
Scaling the first image pyramid after the locally dense feature points are removed, to form a first set of nlevels images; scaling the second image pyramid after the locally dense feature points are removed, to form a second set of nlevels images; taking all the feature points in the first set of nlevels images as the oFAST feature points of the first picture, and all the feature points in the second set of nlevels images as the oFAST feature points of the second picture;
Specifically, this step makes the feature points scale-invariant: an image pyramid is built to achieve multi-scale invariance. A scaling factor scaleFactor (OpenCV default 1.2) and a number of pyramid layers nlevels (OpenCV default 8) are set, and the original image I is scaled down by the factor into nlevels images: I_k = I / scaleFactor^k (k = 1, 2, …, nlevels). The union of the feature points extracted from the nlevels images of different scales is taken as the oFAST feature points of the image.
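The pyramid scaling I_k = I / scaleFactor^k, and the mapping of a keypoint detected at level k back to full resolution, reduce to two short helpers (a sketch using OpenCV's default parameters; level 0 is taken as the original image):

```python
def pyramid_scales(scale_factor=1.2, nlevels=8):
    """Relative image size at each pyramid level: 1 / scale_factor**k."""
    return [1.0 / scale_factor ** k for k in range(nlevels)]

def to_level0(x, y, level, scale_factor=1.2):
    """Map a keypoint detected at a pyramid level back to full resolution."""
    s = scale_factor ** level
    return x * s, y * s
```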
The direction of the OFAST feature points is determined using a moment method.
Further, determining the direction of the OFAST feature point by a moment method includes:
The centroid of the neighbourhood of a feature point within radius r is calculated using image moments, and the vector from the feature point to the centroid gives the direction of the feature point. The moments of an image block B are defined as follows:

m_pq = Σ_{(x,y)∈B} x^p · y^q · I(x, y),  p, q ∈ {0, 1}

where (x, y) are the selected pixels and I(x, y) is the image gray value. The centroid of the image block is found from the moments as:

C = (m_10 / m_00, m_01 / m_00)

Assuming that the geometric center of the image block is O, connecting O with the centroid C yields a direction vector, which is the direction of the feature point. The calculation formula is as follows:

θ = atan(m_01 / m_10)
For the image whose feature points have been extracted, a quadtree is constructed. For each node, if it contains exactly one feature point it is not split further; if it contains more than one, the quadtree continues splitting downward until every node contains only one feature point or the number of nodes meets the required number of feature points. In the latter case, the feature point with the largest response value in each node is kept and the redundant feature points are discarded, so that the feature points are distributed uniformly.
The local map optimization method based on visual positioning provided by the embodiments of the invention can adaptively adjust parameters according to the characteristics of the images and extract feature points of higher quality, and the feature points are distributed uniformly over the image area, which benefits the subsequent matching step. Conventional approaches apply random sample consensus (RANSAC) directly to screen out false matches and become error-prone when there are too many sample points; the method instead performs a first screening with the Jaccard similarity and runs RANSAC only on the reduced sample set, which improves matching accuracy. After matching succeeds, the image with updated features replaces the original map image, so the local map is optimized and map accuracy is preserved.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
In this specification, each embodiment is described in a related manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.
Claims (2)
1. A method for optimizing a local map based on visual localization, comprising:
acquiring a first picture and a second picture of a local map, and constructing a first image pyramid of the first picture; constructing a second image pyramid of a second picture;
taking a first layer in a first image pyramid as a first current layer; the first layer of the second image pyramid is used as a second current layer;
performing feature extraction on the first current layer and the second current layer by adopting an ORB technology, and performing descriptor calculation on the features so as to determine the direction of feature points;
taking each layer in the first image pyramid as a first current layer, taking each layer in the second image pyramid as a second current layer, extracting features of each layer in the first image pyramid and the second image pyramid, and carrying out descriptor calculation on the features;
performing feature matching on the first picture and the second picture;
performing motion estimation on the first picture and the second picture after matching to obtain a motion estimation result;
optimizing the local map according to the motion estimation result;
the performing feature extraction on the first current layer and the second current layer by using the ORB technique and performing descriptor calculation on the features respectively includes:
respectively performing crude extraction on the first current layer and the second current layer; obtaining a plurality of characteristic points of the first current layer and a plurality of characteristic points of the second current layer;
creating a decision tree model by using an ID3 algorithm, and respectively placing a plurality of feature points of the first current layer into the decision tree model to obtain a plurality of FAST feature points of the first current layer;
respectively placing a plurality of feature points of the second current layer in the decision tree model to obtain a plurality of FAST feature points of the second current layer;
removing local dense characteristic points from a plurality of FAST characteristic points of the first current layer by using non-maximum suppression; removing local dense characteristic points from a plurality of FAST characteristic points of the second current layer by using non-maximum suppression;
scaling the first image pyramid after the locally dense feature points are removed, to form a first set of nlevels images;
scaling the second image pyramid after the locally dense feature points are removed, to form a second set of nlevels images; taking all the feature points in the first set of nlevels images as the oFAST feature points of the first picture, and all the feature points in the second set of nlevels images as the oFAST feature points of the second picture;
the direction of the oFAST feature points is determined using a moment method.
2. The method of claim 1, wherein determining the direction of the oFAST feature points using a moment method comprises:
calculating the centroid of the neighbourhood of the feature point within radius r by using moments, the vector from the feature point to the centroid being taken as the direction of the feature point; the moments of the image block B are defined as follows:

m_pq = Σ_{(x,y)∈B} x^p · y^q · I(x, y), p, q ∈ {0, 1};

wherein (x, y) are the selected pixels and I(x, y) is the image gray value; the centroid of the image block is found from the moments as:

C = (m_10 / m_00, m_01 / m_00);

assuming that the geometric center of the image block is O, connecting O with the centroid C yields a direction vector, which is the direction of the feature point; the calculation formula is:

θ = atan(m_01 / m_10);

and for the image whose feature points have been extracted, constructing a quadtree; for each node, when the number of feature points in the node equals 1, the node is not split further; if the number is greater than 1, the quadtree continues splitting downward until every node contains only one feature point or the number of nodes meets the required number of feature points; in the latter case, the feature point with the largest response value in each node is kept and the redundant feature points are discarded, so that the feature points are distributed uniformly.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911422992.6A | 2019-12-31 | 2019-12-31 | Local map optimization method based on visual positioning |

Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN111222514A | 2020-06-02 |
| CN111222514B | 2023-06-27 |