
CN112507778A - Loop detection method of improved bag-of-words model based on line characteristics - Google Patents


Info

Publication number: CN112507778A
Application number: CN202011111454.8A
Authority: CN (China)
Other versions: CN112507778B (granted)
Prior art keywords: bag, visual, loop, words, loopback
Inventors: 孟庆浩 (Meng Qinghao), 史佳豪 (Shi Jiahao), 戴旭阳 (Dai Xuyang)
Assignee (original and current): Tianjin University
Filing / priority date: 2020-10-16
Legal status: Active (granted)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; scene-specific elements
    • G06V 20/40: Scenes; scene-specific elements in video content
    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411: Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components


Abstract

The invention relates to a loop detection method of an improved bag-of-words model based on line features, comprising the following steps. LSD (Line Segment Detector) features are extracted from an offline image dataset and the corresponding LBD (Line Band Descriptor) descriptors are computed; the descriptors serve as the raw data from which the dictionary is generated by clustering. An LSD-feature bag-of-words model is then constructed with an improved construction method, yielding a visual dictionary tree with adaptive branching. Images are converted into bag-of-words vectors, and the visual word weights are optimized. Similarity calculation: the appearance similarity score between the current frame and each historical keyframe is computed from their visual bag-of-words vectors using the L1 norm. Loop candidate frames are acquired and grouped, and isolated candidates of similar appearance are removed. Continuity verification retains a candidate only if the loop is detected continuously, in which case it is considered a reliable loop candidate. Finally, geometric consistency is verified.

Description

Loop detection method of improved bag-of-words model based on line characteristics
Technical Field
The invention relates to the field of visual SLAM (Simultaneous Localization and Mapping), and in particular to a visual SLAM loop detection method based on line features and an improved bag-of-words model.
Background
Loop detection is an indispensable part of visual SLAM: it eliminates the accumulated error produced by the visual odometry front end, allowing a globally consistent map to be built. Loop detection based on the bag-of-words model is currently the mainstream approach; it judges whether a loop exists by constructing a bag-of-words model and comparing the similarity between images. The bag-of-words model originated in text analysis, where the similarity of two texts is judged by comparing the frequencies with which words occur in them. Correspondingly, the visual bag-of-words model measures the similarity between two images by comparing the frequencies with which "visual words" appear in them.
In 2008, Cummins et al. (Cummins M, Newman P. FAB-MAP: Probabilistic Localization and Mapping in the Space of Appearance [J]. The International Journal of Robotics Research, Sage Publications, 2008) proposed a bag-of-words model based on SURF (Speeded-Up Robust Features) and a Chow-Liu tree, achieving good appearance-based camera place recognition. However, their bag-of-words vector is binary: it records only whether a visual word appears in the image, not the different frequencies with which different words appear.
In 2011, Galvez-Lopez et al. (Galvez-Lopez D, Tardos J D. Real-time loop detection with bags of binary words [C]. International Conference on Intelligent Robots and Systems, 2011: 25-30) adopted FAST (Features from Accelerated Segment Test) keypoints and BRIEF (Binary Robust Independent Elementary Features) binary descriptors to extract and describe point features, and introduced the k-d tree data structure for dictionary construction, building a binary-descriptor visual bag-of-words model of point features with hierarchical k-means clustering. The k-d tree dictionary structure, however, forces the k-means clustering used throughout dictionary construction to use the same parameter k at every node, and clustering arbitrary data with one fixed k does not give the best clustering result.
Subsequently, in ORB-SLAM, proposed in 2015 by Mur-Artal et al. (Mur-Artal R, Montiel J M M, Tardos J D. ORB-SLAM: a versatile and accurate monocular SLAM system [J]. IEEE Transactions on Robotics, 2015, 31(5): 1147-1163), a visual bag-of-words model based on ORB (Oriented FAST and Rotated BRIEF) point features was constructed. ORB point features solve the rotation- and scale-invariance problems of FAST keypoints and performed well in experiments. However, the visual dictionary still uses k-means clustering and the k-d tree dictionary structure; the bag-of-words construction process itself is not improved.
The detection performance of point-feature bag-of-words models depends on the number of point features that can be extracted from the environment. When not enough point features can be extracted, or when the extracted points tend to cluster together, the bag-of-words vector of a video frame cannot reliably reflect the appearance similarity between frames.
In a structured low-texture environment, although enough point features often cannot be extracted, such scenes contain abundant line features that can be exploited.
Lee et al. (Lee J H, Zhang G, Lim J, et al. Place recognition using straight lines for vision-based SLAM [C]. 2013 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2013: 3799-3806) proposed a bag-of-words model based on the MSLD (Mean-Standard deviation Line Descriptor) line feature descriptor and obtained good experimental results. However, the MSLD descriptor is not scale-invariant and is computationally expensive, which hinders real-time operation.
Lin et al. (A binocular vision SLAM algorithm with improved point-line features [J]. Computer Measurement and Control, 2019(9): 156-) likewise constructed a bag-of-words model using line features.
Patent 201811250049.7 (a tightly-coupled binocular visual-inertial SLAM method with point-line feature fusion) constructs a point-feature bag-of-words model and a line-feature bag-of-words model separately, computes a point-feature similarity score and a line-feature similarity score between two frames, and takes their weighted sum as the final inter-frame similarity score. Both of these methods use line features to construct a bag-of-words model, but they still rely on k-means clustering and the k-d tree dictionary structure during construction; there is no essential difference from the construction processes above, and a better clustering of visual words cannot be obtained. Moreover, the word weights in the bag-of-words model are computed with the TF-IDF (Term Frequency-Inverse Document Frequency) method, which considers the frequency of a visual word in the current image and its importance on the training dataset, but not its importance on the loop detection query dataset.
In summary, line features are local features that can replace point features in structured environments, and building a real-time visual SLAM loop detection algorithm on a line-feature bag-of-words model can effectively solve the problem that point-feature loop detection fails to detect loops in structured low-texture environments. The proposed visual SLAM loop detection algorithm further improves both the bag-of-words construction process and the visual word weighting.
Disclosure of Invention
The invention provides a loop detection method based on a line-feature improved bag-of-words model, aimed at the problem that in structured low-texture environments it is difficult to extract enough point features for visual SLAM loop detection. The algorithm uses the abundant line features of structured environments as local visual features to realize vision-based loop detection, and improves the precision and recall of loop detection through an improved bag-of-words construction method and an improved visual word weight calculation method. The technical scheme is as follows.
A loop detection method based on a line-feature improved bag-of-words model comprises the following steps:
Step 1: extract LSD (Line Segment Detector) features from an offline image dataset and compute the corresponding LBD (Line Band Descriptor) descriptors; the descriptors serve as the raw data from which the dictionary is generated by clustering.
Step 2: construct the LSD-feature bag-of-words model with the improved construction method: before each clustering step of the dictionary tree, determine the optimal cluster count k' for the current data, then cluster the current data into k' classes. This is repeated until a visual dictionary tree with adaptive branching is finally constructed.
Step 3: bag-of-words vector conversion: extract LSD-LBD line features from the image and, using the constructed LBD-descriptor bag-of-words model and the Hamming distance between line feature descriptors and visual words, quantize every line feature in the image into its corresponding visual word, thereby converting the whole image into a numerical vector.
Step 4: visual word weight optimization: a weight optimization parameter, denoted here ρ_i, is introduced in loop detection. According to the distribution of visual words over the historical keyframe dataset, the visual word weights in the bag-of-words vector are optimized: the weight optimization parameter of each visual word is calculated and combined with the word weight computed by the TF-IDF method to obtain the weight-optimized visual bag-of-words vector.
Step 5: similarity calculation: compute the similarity between the visual bag-of-words vectors of the current frame and each historical keyframe using the L1 norm, obtaining an appearance similarity score between images.
Step 6: acquire and group loop candidate frames: historical keyframes that meet the similarity threshold are set as loop candidate frames; the candidates are grouped so that temporally close candidates form one group, and isolated candidates of similar appearance are then rejected according to the whole group's similarity score and a given threshold.
Step 7: continuity verification: at this stage, check whether a loop can be detected continuously over a period of time among the loop candidate frames. Only a continuously detected loop is considered a reliable loop candidate and retained.
Step 8: geometric consistency verification: to guarantee loop accuracy, the visual word distributions of the current frame and the loop candidate frame are verified; only if the line features corresponding to the visual words are distributed consistently are the two frames considered to form a loop.
Current visual SLAM systems mainly adopt point features as visual features; compared with point-feature loop detection, the method adopts line features, which are more abundant in structured environments, as the local visual features for loop detection. The key points of the invention are: 1) a visual dictionary tree with an adaptive number of branches is constructed from line features, improving the discriminability of visual words and reducing the quantization error of converting local features into visual words; 2) the weight optimization parameter of each word is computed from the distribution of visual words in the loop detection query dataset, and the visual bag-of-words vector is optimized with these parameters, so that the bag-of-words similarity scores are more discriminative. Compared with an unoptimized visual bag-of-words model, the method obtains a higher recall rate at 100% precision, showing that it detects loops more accurately and effectively.
Drawings
FIG. 1 is a flow chart of the improved bag-of-words model construction
FIG. 2 shows the LSD line feature extraction result in a structured low-texture environment
FIG. 3 shows the ORB point feature extraction result in a structured low-texture environment
FIG. 4 is a schematic diagram of the LBD-descriptor visual dictionary construction in an embodiment of the invention
Detailed Description
The invention is further illustrated below with reference to specific examples. It should be noted that the described embodiments are only intended to facilitate the understanding of the invention and do not have any limiting effect thereon.
Step 1: for a large amount of image data collected offline, extract line segment features with the LSD algorithm and compute their descriptors with the LBD algorithm. The LSD algorithm detects line features quickly, and the LBD algorithm produces binary descriptors that support fast matching; this combination of line feature extraction and description meets the real-time requirement of loop detection.
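A minimal sketch of this extraction stage, assuming OpenCV is available (note that cv2.createLineSegmentDetector is absent from some OpenCV builds for licensing reasons, the LBD descriptor lives in the opencv-contrib line_descriptor module, and the dataset path below is illustrative):

```python
import glob

import cv2
import numpy as np

def extract_lsd_lines(image_path):
    """Detect LSD line segments in one grayscale image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    lsd = cv2.createLineSegmentDetector()   # OpenCV's LSD implementation
    lines = lsd.detect(gray)[0]             # N x 1 x 4 array of (x1, y1, x2, y2)
    return lines if lines is not None else np.empty((0, 1, 4), np.float32)

# Gather segments over the offline dataset; LBD descriptors would then be
# computed for each segment (e.g. with opencv-contrib's line_descriptor).
all_lines = [extract_lsd_lines(p) for p in sorted(glob.glob("dataset/*.png"))]
```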
Step 2: construct the line-feature visual bag-of-words model with an adaptive number of branches. The traditional construction method uses the k-means clustering algorithm with a k-d tree data structure, inheriting the drawback that k-means requires a manually specified k. Here, before each clustering step of the dictionary tree, a stage is added that determines the optimal k for the current data by introducing the silhouette coefficient, a clustering-quality index. First, the silhouette coefficient of the current data is computed for different values of k (k = 5-15); the most reasonable k under the current data, i.e. the k' corresponding to the maximum silhouette coefficient, is selected and used as the cluster count of the current node of the visual dictionary tree. These steps are repeated until the 5th level of the bag-of-words model is built.
The silhouette coefficient combines two factors: the intra-class cohesion a(i) and the inter-class separation b(i). Assuming the current data has been grouped into k classes, the current silhouette coefficient S is obtained as follows.
First, the silhouette coefficient of each element after clustering is calculated:
s(i) = (b(i) - a(i)) / max{a(i), b(i)} (1)
where the intra-class cohesion a(i) is the average distance from the current element m_i to the other elements m_j of its class, and the inter-class separation b(i) is the minimum, over the other clusters, of the average distance from m_i to the elements of that cluster. Clearly s(i) ∈ [-1, 1]: s(i) close to 1 indicates that sample element m_i is reasonably clustered; s(i) close to -1 indicates that m_i should be assigned to another cluster; s(i) close to 0 indicates that m_i lies on the boundary between two clusters.
After the silhouette coefficient of each element is calculated, the average of the silhouette coefficients of all elements is taken as the silhouette coefficient of the current clustering result, i.e. S = (1/k) Σ_{0<i≤k} s(i).
Finally, according to the silhouette coefficients S computed for the different k values, the k corresponding to the maximum silhouette coefficient is selected as the final number of cluster branches of the current node in the visual dictionary tree. In the same way, the optimal number of cluster branches of every intermediate node is computed during dictionary construction, giving a comparatively good clustering and more discriminative visual words.
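The adaptive-k clustering at one node of the dictionary tree can be sketched as follows (a simplification assuming Euclidean k-means from scikit-learn; real LBD descriptors are binary, so a Hamming-space variant such as k-majority would be substituted in practice):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_adaptive_k(descriptors, k_min=5, k_max=15):
    """Cluster one node's descriptors with the k in [k_min, k_max] that
    maximizes the silhouette coefficient S, per equation (1)."""
    best = (-1.0, None, None)                  # (S, k, fitted model)
    for k in range(k_min, k_max + 1):
        if k >= len(descriptors):              # need more samples than clusters
            break
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(descriptors)
        s = silhouette_score(descriptors, km.labels_)
        if s > best[0]:
            best = (s, k, km)
    return best[1], best[2]

def build_dictionary_tree(descriptors, depth=0, max_depth=5):
    """Recursively build the adaptive-branch vocabulary tree down to level 5;
    each leaf becomes one visual word."""
    if depth == max_depth or len(descriptors) <= 5:
        return {"leaf": True}
    k, km = cluster_adaptive_k(descriptors)
    if km is None:
        return {"leaf": True}
    children = [build_dictionary_tree(descriptors[km.labels_ == i],
                                      depth + 1, max_depth)
                for i in range(k)]
    return {"leaf": False, "centers": km.cluster_centers_, "children": children}
```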
Step 3: bag-of-words vector conversion. Each line feature in the image is quantized into its corresponding visual word according to the constructed LBD-descriptor bag-of-words model and the Hamming distance between the line feature descriptor and the visual words, converting the whole image into a numerical vector:
v_a = {(w_1, η_1), (w_2, η_2), ..., (w_N, η_N)} (2)
where w_i denotes the i-th visual word and η_i its corresponding weight, computed with TF-IDF weighting, i.e. η_i = TF_i · IDF_i. In practice each image contains only a small fraction of the visual words in the dictionary, so most η_i = 0 in the vector, i.e. v_a is a sparse vector.
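A sketch of this quantization, assuming a tree shaped like the one above whose leaves have been assigned integer word ids, binary descriptors and centers packed as uint8 arrays, and the common term-frequency definition TF_i = count_i / total:

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary descriptors (uint8 arrays)."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def quantize(descriptor, node):
    """Descend the dictionary tree, following at each level the child whose
    center is nearest in Hamming distance; return the leaf's word id."""
    while not node["leaf"]:
        dists = [hamming(descriptor, c) for c in node["centers"]]
        node = node["children"][int(np.argmin(dists))]
    return node["word_id"]

def bow_vector(descriptors, tree, n_words, idf):
    """Convert one image's LBD descriptors into the sparse TF-IDF
    weighted vector v_a of equation (2)."""
    counts = np.zeros(n_words)
    for d in descriptors:
        counts[quantize(d, tree)] += 1
    tf = counts / max(counts.sum(), 1.0)   # TF_i
    return tf * idf                        # eta_i = TF_i * IDF_i
```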
Step 4: visual word weight optimization. The TF-IDF weighting accounts for the frequency of a visual word in the current image and for its importance on the training dataset, but not for its importance on the historical keyframe dataset queried during loop detection. TF-IDF assumes that the lower a word's document frequency (the number of texts containing it), the greater its power to distinguish different classes of text. By the same reasoning, the less frequently a visual word occurs across the historical keyframe dataset, the greater its power to distinguish different images in that dataset.
Therefore a repetition factor, denoted here ρ_i, is introduced into the weight calculation. For each visual word, the number I_i of historical keyframes in which it appears is counted, and ρ_i is made to decrease as the keyframe count I_i increases, thereby reducing the weight of visual words that recur throughout the dataset. The specific steps are as follows:
1) During loop detection, while the mutual index between visual words and keyframes is being built, count the number of keyframes in which each visual word appears, and compute the repetition factor ρ_i of each visual word from n, the number of keyframes in the historical keyframe dataset, and I_i, the number of keyframes in which visual word w_i appears.
2) Combining the repetition factor ρ_i with TF-IDF yields the new weight η'_i = ρ_i · TF_i · IDF_i of visual word w_i, from which a new bag-of-words model vector v'_a is generated:
v'_a = {(w_1, η'_1), (w_2, η'_2), ..., (w_N, η'_N)} (3)
where η'_i is the optimized weight of visual word i, ρ_i is the weight optimization parameter of word i, TF_i is the term frequency of word i in the current image, and IDF_i is the inverse document frequency of word i on the training dataset.
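A sketch of the reweighting; the patent constrains the repetition factor only implicitly (it must shrink as I_i grows), so the IDF-style log(n / I_i) form below and the multiplicative combination with TF-IDF are assumptions:

```python
import numpy as np

def repetition_factors(word_to_keyframes, n_keyframes, n_words):
    """rho_i per visual word. The log(n / I_i) form is an assumption; the
    stated requirement is only that rho_i decrease as the word appears in
    more historical keyframes."""
    counts = np.ones(n_words)                  # I_i, floored at 1
    for word_id, frames in word_to_keyframes.items():
        counts[word_id] = max(len(frames), 1)
    return np.log(n_keyframes / counts)

def optimized_bow(tf, idf, rho):
    """eta'_i = rho_i * TF_i * IDF_i, giving the vector v'_a of equation (3)."""
    return rho * tf * idf
```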
Step 5: image similarity calculation. The image similarity is computed from the new bag-of-words vectors of the current image and each historical keyframe. For the bag-of-words vectors of any two images, the similarity is evaluated with the L1 norm:
s(v_1, v_2) = 1 - (1/2) · || v_1/||v_1|| - v_2/||v_2|| ||_1 (4)
The similarity lies between 0 and 1: when the two images are completely unrelated the score is 0, and when they are identical the score is 1.
Step 6: acquire and group loop candidate frames. Among the historical keyframes, any keyframe whose similarity to the current keyframe meets a threshold α is set as a loop candidate frame. Once all candidates are obtained they are grouped: candidates close in time are placed in one group, and a group similarity score is computed. For each candidate group, let I_1, I_2, I_3, ..., I_n denote its keyframes and s_1, s_2, s_3, ..., s_n their similarities to the current keyframe; the group similarity score is the sum of these similarities:
S_group = Σ_{k=1}^{n} s_k = Σ_{k=1}^{n} s(v_k, v_c) (5)
where v_k is the bag-of-words vector of the k-th keyframe in the group and v_c is the bag-of-words vector of the current keyframe.
After the candidates are grouped and the group similarity scores obtained, candidate keyframes with low group scores are rejected according to a given group-score threshold β. Because a correct loop keyframe and the keyframes temporally close to it all have high similarity to the current keyframe and also qualify as loop candidates, this step excludes isolated, incorrect loop candidates.
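A sketch of the grouping and group-score filtering; the temporal gap that splits groups is an illustrative parameter not fixed by the text:

```python
def filter_candidate_groups(candidates, beta, max_gap=3):
    """candidates: list of (keyframe_id, similarity) sorted by keyframe_id.
    Temporally close candidates (id gap <= max_gap) form one group; groups
    whose summed similarity falls below the threshold beta are discarded."""
    groups, current = [], []
    for frame_id, score in candidates:
        if current and frame_id - current[-1][0] > max_gap:
            groups.append(current)      # close the previous group
            current = []
        current.append((frame_id, score))
    if current:
        groups.append(current)
    surviving = [g for g in groups if sum(s for _, s in g) >= beta]
    return [frame for group in surviving for frame in group]
```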
Step 7: continuity verification. At this stage, a loop candidate is retained only if the loop is detected simultaneously over several consecutive frames; only then is the loop considered reliable.
Step 8: geometric consistency verification. Because the visual bag-of-words model ignores the spatial information of visual features, in this final stage the loop candidate frame and the current keyframe must pass a geometric consistency check to guarantee the accuracy of loop detection.
Line features are matched between the current frame and the loop candidate frame, the line-feature reprojection error is computed, and the pose transformation between the two frames is solved by local BA (bundle adjustment) optimization. Whether this pose transformation is reasonable is judged by counting the line-feature inliers under it, which decides whether the loop candidate passes geometric consistency verification.
In summary, loop candidate frames are extracted once the image appearance similarity reaches the threshold α; after the series of verification stages that guarantee loop accuracy, a loop is judged to have occurred, and the global map is corrected and updated according to the detected loop.

Claims (1)

1. A loop detection method based on a line-feature improved bag-of-words model, comprising the following steps:
Step 1: extract LSD (Line Segment Detector) features from an offline image dataset and compute the corresponding LBD (Line Band Descriptor) descriptors; use the descriptors as the raw data for clustering to generate the dictionary;
Step 2: construct the LSD-feature bag-of-words model with the improved construction method: before each clustering step of the dictionary tree, first determine the optimal cluster count k' for the current data, then cluster the current data into k' classes; repeat until a visual dictionary tree with adaptive branches is finally constructed;
Step 3: bag-of-words vector conversion: extract LSD-LBD line features from the image and, according to the constructed LBD-descriptor bag-of-words model and the Hamming distance between line feature descriptors and visual words, quantize every line feature in the image into its corresponding visual word, thereby converting the whole image into a numerical vector;
Step 4: visual word weight optimization: introduce a weight optimization parameter in loop detection; according to the distribution of visual words over the historical keyframe dataset, optimize the visual word weights in the bag-of-words vector, compute the weight optimization parameter of each visual word, and combine it with the word weight computed by the TF-IDF method to obtain the weight-optimized visual bag-of-words vector;
Step 5: similarity calculation: compute the similarity between the visual bag-of-words vectors of the current frame and each historical keyframe using the L1 norm, obtaining an appearance similarity score between images;
Step 6: acquire and group loop candidate frames: set the historical keyframes that meet the similarity threshold as loop candidate frames, group the candidates so that temporally close candidates form one group, then reject isolated candidates of similar appearance according to the whole group's similarity score and a given threshold;
Step 7: continuity verification: at this stage, check whether a loop can be detected continuously over a period of time among the loop candidate frames; only a continuously detected loop is considered a reliable loop candidate and retained;
Step 8: geometric consistency verification: to guarantee loop accuracy, verify the visual word distributions of the current frame and the loop candidate frame; only if the line features corresponding to these visual words are distributed identically are the two frames considered to form a loop.
Priority Applications (1)

Application: CN202011111454.8A, filed 2020-10-16 by Tianjin University
Title: Loop detection method of improved bag-of-words model based on line characteristics
Family ID: 74953814

Publications (2)

CN112507778A, published 2021-03-16
CN112507778B, granted 2022-10-04



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant