
CN115018709A - Image splicing rock wall reconstruction method - Google Patents

Image splicing rock wall reconstruction method

Info

Publication number
CN115018709A
CN115018709A (application CN202210737778.5A; granted as CN115018709B)
Authority
CN
China
Prior art keywords
image
points
rock
point
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210737778.5A
Other languages
Chinese (zh)
Other versions
CN115018709B (en)
Inventor
黄方沂
黄品高
白雁力
王鸿强
王辉
张元康
黄博俊
高超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology filed Critical Guilin University of Electronic Technology
Priority to CN202210737778.5A priority Critical patent/CN115018709B/en
Publication of CN115018709A publication Critical patent/CN115018709A/en
Application granted granted Critical
Publication of CN115018709B publication Critical patent/CN115018709B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS — G06 COMPUTING; CALCULATING OR COUNTING
    • G06T 3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/13 — Edge detection
    • G06V 10/757 — Matching configurations of points or features
    • G06V 10/82 — Image or video recognition or understanding using neural networks
    • G06V 20/46 — Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06T 2200/32 — Indexing scheme involving image mosaicing
    • G06T 2207/10016 — Video; image sequence
    • G06T 2207/20112 — Image segmentation details
    • G06T 2207/20164 — Salient point detection; corner detection
    • G06T 2207/20212 — Image combination
    • G06T 2207/20221 — Image fusion; image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for rock wall reconstruction by image stitching, comprising a video frame acquisition step, a feature point acquisition step, a feature point matching step and an image stitching step. In the video frame acquisition step, several video frames are extracted from a video and the corresponding images are obtained. The feature point acquisition step comprises an image preprocessing step, a rock point calibration step and a feature point extraction step. In the feature point matching step, the feature points extracted from two images are matched by a matching algorithm until the feature points of every image have been matched with those of its neighbouring images. In the image stitching step, the overlap region is determined by perspective transformation, and the original images are warped onto the target plane and stitched until a complete rock wall reconstruction image is obtained. The invention improves detection speed while requiring little computation and achieving high accuracy.

Description

A Method for Rock Wall Reconstruction by Image Stitching

Technical Field

The invention relates to the technical field of image stitching, and more particularly to a method for rock wall reconstruction by image stitching.

Background Art

Image stitching is a technique that combines multiple images with overlapping content into a single smooth image of wide viewing angle and high resolution. In aviation, multiple photographs of the ground taken from different angles must be merged into one clear overall image; in marine surveying, full-coverage mapping and three-dimensional reconstruction of the surveyed scene are required; in medicine, image data obtained from computed tomography (CT), magnetic resonance imaging (MRI) and digital subtraction angiography are combined to give complete information about the patient, aiding diagnosis and improving the accuracy of pathological examination. Image stitching technology therefore plays an important role in many fields of modern science.

In the traditional image stitching pipeline, SIFT features are first extracted from the input images and matched, yielding a large number of feature points. The SIFT algorithm detects and describes local features in an image: it searches for extrema across spatial scales and extracts their position, scale, and rotation-invariant descriptors. Feature matching is then performed with the KNN algorithm: a distance ratio test is applied to each candidate pair returned by KNN, keeping the pair only if the distance to the nearest neighbour is within a set ratio of the distance to the second-nearest neighbour and discarding it otherwise. Next, a homography matrix is estimated from the matched points to stitch the images, filtering out outliers in the process. Finally, a perspective transformation, which can combine one or more images, takes the images and the homography as input and warps the source image onto the destination plane to produce a panorama.

In feature extraction, the SIFT method builds a 128-dimensional vector for every feature point and then matches these vectors, so the image must contain sufficient texture; otherwise the descriptors are not distinctive enough and mismatches occur easily. In rock wall reconstruction there are many rock points, people and other irrelevant objects; extracting features everywhere creates a heavy workload, and the useful information is drowned out by an enormous number of keypoints, many of which are outliers. The RANSAC algorithm can remove the inconsistent data, but with so many samples the iteration time is long and the efficiency is low.

Summary of the Invention

The purpose of the present invention is to provide a method for rock wall reconstruction by image stitching that raises detection speed and yields feature points with strong, distinctive characteristics. Moreover, because fewer feature points are involved, the number of iterations in computing the homography matrix is reduced, achieving low computational cost and high accuracy.

To achieve the above purpose, a method for rock wall reconstruction by image stitching is provided, comprising a video frame acquisition step, a feature point acquisition step, a feature point matching step and an image stitching step;

the video frame acquisition step extracts several video frames from the video and obtains the images corresponding to those frames;

the feature point acquisition step comprises an image preprocessing step, a rock point calibration step and a feature point extraction step; the image preprocessing step performs preprocessing operations on the images; the rock point calibration step calibrates the rock points in each image; the feature point extraction step extracts feature points on the edges of the rock points by an edge feature extraction method; a rock point (climbing hold) is an irregularly shaped, easily grasped object in the image;

the feature point matching step matches the feature points extracted from two images with a matching algorithm, until the feature points of every image have been matched with those of the neighbouring images;

the image stitching step determines the overlap region by perspective transformation and warps the original images onto the target plane for stitching, until a complete rock wall reconstruction image is obtained.

In particular, in the video frame acquisition step, the frames are extracted as follows: set a unit value for frame extraction and extract several video frames at that interval.

In particular, the preprocessing operation in the image preprocessing step is mean filtering of all video frame images to remove noise.

In particular, in the rock point calibration step the rock points are calibrated image by image, yielding the labels in each image; a label comprises the coordinates of a rock point and the rock point's name.

In particular, the edge feature extraction method comprises the following steps:

train on the calibrated labels with the yolo network to obtain a bounding box for every rock point, and convert the image inside each bounding box to grayscale;

detect edges inside each bounding box with the Canny edge algorithm to obtain an image of each rock point's edge;

obtain the corners on the rock point edges with the Shi-Tomasi corner detection method, and select the best corners through the adaptive matrix (the structure matrix of the image gradients); these best corners are the feature points;

obtain the coordinates and scale of each feature point, together with its direction, size and angle information.

In particular, the specific method of obtaining the corners on the rock point edges with the Shi-Tomasi corner detection method and selecting the best corners through the adaptive matrix is:

A1. Observe the image patch containing the corner through a local window, shift the window by (u, v) in each direction, and sum the squared grayscale differences:

E(u, v) = ∑x,y w(x, y) [I(x + u, y + v) − I(x, y)]²

where (x, y) ranges over the pixels in the local window, I(x, y) is the image grayscale, I(x + u, y + v) is the grayscale after the shift, and w(x, y) is the window function;

A2. Corner detection seeks the shift that maximizes E(u, v). A first-order Taylor expansion simplifies it to the quadratic form

E(u, v) ≈ [u, v] M [u, v]ᵀ

with the adaptive matrix

M = ∑x,y w(x, y) [Ix², IxIy; IxIy, Iy²]

that is, the windowed sum of products of the image gradients Ix and Iy;

A3. Use the adaptive matrix M to find corners. Let λ1 and λ2 be the eigenvalues of M; if the smaller of the two exceeds a preset threshold, the corner response is R = min{λ1, λ2};

A4. Compute the quality of each corner, set the threshold relative to the maximum corner quality, and set a minimum Euclidean distance allowed between two corners;

A5. Discard corners whose quality is below the threshold;

A6. Sort the remaining corners by quality in descending order;

A7. Take the first corner in the sorted list as a best corner and delete every corner whose Euclidean distance to it is less than the minimum distance;

A8. Repeat steps A6-A7 until no more corners can be deleted, then return all best corners.

In particular, in the feature point matching step, the feature points extracted from two images are matched as follows: the KNN matching algorithm matches the feature points between the two images, and a ratio test is applied, i.e. the distance to the nearest neighbour is compared with the distance to the second-nearest neighbour; when the distance ratio is below a predetermined threshold, the two feature points are considered correctly matched.

In particular, a false match removal step is performed before the image stitching step:

B1. Using the RANSAC algorithm, randomly select 4 pairs from the matched feature points as inliers; the remaining feature points are treated as outliers;

B2. Compute the homography matrix from the inliers;

B3. Test all outliers with this homography under a preset threshold condition; every outlier that satisfies the threshold condition after the homography test is reclassified as an inlier;

B4. Repeat steps B2-B3 until the number of inliers no longer changes, which yields the final homography matrix.

In particular, the stitching operation in the image stitching step uses weighted fusion: the pixel values in the overlap region of the images are combined as a weighted sum.

Beneficial effects of the invention:

Based on the YOLO object detection network, the invention extracts features from the rock points alone. This raises detection speed and makes better use of the useful feature information: the YOLO network is trained to produce a bounding box for each rock point, the Canny operator then detects the rock point edges, and feature points on the rock points are extracted with the Shi-Tomasi algorithm for feature matching, so the resulting feature points have strong characteristics. Moreover, with fewer feature points, the number of iterations in computing the homography matrix decreases, achieving low computational cost and high accuracy.

Brief Description of the Drawings

To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in their description are introduced briefly below. The drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.

Fig. 1 is an overall flow chart of the method of an embodiment of the present invention;

Fig. 2 illustrates the weighted fusion method of an embodiment of the present invention.

Detailed Description of the Embodiments

Preferred embodiments of the present invention are described in detail below with reference to the drawings, so that the advantages and features of the invention can be understood more easily by those skilled in the art and its scope of protection can be delimited more clearly.

Note that similar reference numerals and letters denote similar items in the figures below; once an item is defined in one figure, it needs no further definition or explanation in subsequent figures.

Note that terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner" and "outer" indicate orientations or positional relationships based on the drawings, or the orientation in which the product of the invention is usually placed in use; they merely simplify the description and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and are not to be construed as limiting the invention. Terms such as "first", "second" and "third" only distinguish the descriptions and are not to be understood as indicating or implying relative importance.

Furthermore, terms such as "horizontal" and "vertical" do not require a component to be absolutely horizontal or vertical; it may be slightly inclined. "Horizontal" only means that a direction is more horizontal than "vertical", not that the structure must be perfectly level.

As shown in Fig. 1, the method for rock wall reconstruction by image stitching of this embodiment operates on a speed-climbing video. The method of this embodiment is as follows:

(1) Video frame acquisition step: input the speed-climbing video and set a value Z (Z a positive integer) as the extraction unit, so that several scene images Si (i = 1, 2, ..., N) are obtained as the images of this embodiment. Si is the image corresponding to the i-th video frame extracted at intervals of Z. The unit may be a time interval or a number of video frames.
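The fixed-interval sampling of step (1) can be sketched as follows; the function name and interface are illustrative and not part of the patent (decoding an actual video would additionally require a video-reading library):

```python
def frame_indices(total_frames: int, z: int) -> list[int]:
    """Indices of the frames kept when sampling every z-th frame.

    z is the unit value Z from the description; a time interval can be
    converted to a frame count via the video's frame rate beforehand.
    """
    if z <= 0:
        raise ValueError("sampling unit z must be positive")
    return list(range(0, total_frames, z))
```

For example, a 10-frame clip sampled every 3 frames keeps frames 0, 3, 6 and 9, giving the scene images S1..S4.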

(2) Image preprocessing step: apply mean filtering to the images corresponding to all video frames to remove noise.
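A minimal sketch of the mean (box) filtering of step (2) in plain NumPy; the 3×3 kernel size and edge-replicated border handling are illustrative choices that the patent does not specify:

```python
import numpy as np

def mean_filter(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Box (mean) filter with a k x k window and edge-replicated borders."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    # Sum the k*k shifted copies of the image, then normalise.
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)
```

A constant image passes through unchanged, while isolated noise spikes are spread over the window and attenuated by a factor of k².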

(3) Rock point calibration step: calibrate the N images one by one, obtaining for every rock point on the left and right lanes of each image a label that contains the rock point's position coordinates and scale information, together with the ID of each rock point.
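The per-image calibration labels described above can be represented by a simple record; the field names and example values below are hypothetical, chosen only to mirror the description (position coordinates, scale information, and an ID per rock point):

```python
from dataclasses import dataclass

@dataclass
class HoldLabel:
    """One calibrated rock point in a frame; field names are illustrative."""
    hold_id: str   # the rock point's ID, e.g. "L01" for the 1st hold on the left lane
    x: float       # position coordinates in the image
    y: float
    scale: float   # size/scale information of the rock point

# Hypothetical labels for one frame, one hold per lane.
labels_frame_0 = [
    HoldLabel("L01", 120.0, 860.0, 34.0),
    HoldLabel("R01", 540.0, 845.0, 31.0),
]
```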

(4) Feature point extraction step: extract the feature points on the rock point edges by the edge feature extraction method.

The edge feature extraction method comprises the following steps:

train on the calibrated labels with the yolo network to obtain a bounding box for every rock point, and convert the image inside each bounding box to grayscale;
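Cropping a detected bounding box and converting it to grayscale can be sketched as follows; the (x0, y0, x1, y1) box convention and the Rec. 601 luminance weights are common choices, not values specified by the patent:

```python
import numpy as np

def gray_crop(img_rgb: np.ndarray, box: tuple[int, int, int, int]) -> np.ndarray:
    """Crop an (x0, y0, x1, y1) bounding box from an RGB image and
    convert the crop to grayscale with standard luminance weights."""
    x0, y0, x1, y1 = box
    crop = img_rgb[y0:y1, x0:x1, :].astype(float)
    return crop @ np.array([0.299, 0.587, 0.114])
```

The result is a 2-D grayscale patch per rock point, ready for edge detection.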

detect edges inside each bounding box with the Canny edge algorithm to obtain an image of each rock point's edge;

The standard Canny edge detection algorithm comprises the following steps: denoise the image; compute the gradient magnitude and direction; keep only pixels that are local maxima along the gradient direction; decide potential edges with a double threshold (pixels above the upper threshold are strong edges); and track the edges by hysteresis.
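The gradient and double-threshold stages of the Canny pipeline above can be sketched in NumPy (Sobel kernels are the usual gradient choice; non-maximum suppression and the hysteresis tracking stage are deliberately omitted from this sketch):

```python
import numpy as np

def sobel_gradients(gray: np.ndarray) -> np.ndarray:
    """Gradient magnitude via 3x3 Sobel kernels (interior pixels only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            patch = gray[dy:dy + h - 2, dx:dx + w - 2]
            gx += kx[dy, dx] * patch
            gy += ky[dy, dx] * patch
    return np.hypot(gx, gy)

def double_threshold(mag: np.ndarray, low: float, high: float) -> np.ndarray:
    """Classify pixels: 2 = strong edge, 1 = weak edge (kept only if
    linked to a strong edge by hysteresis, omitted here), 0 = suppressed."""
    return np.where(mag >= high, 2, np.where(mag >= low, 1, 0))
```

On a vertical step edge the magnitude peaks along the transition column and is zero in flat regions, which is what the double threshold then classifies.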

Obtain the corners on the rock point edges with the Shi-Tomasi corner detection method and select the best corners through the adaptive matrix; these are the feature points. The specific method is:

A1. Observe the image patch containing the corner through a local window, shift the window by (u, v) in each direction, and sum the squared grayscale differences:

E(u, v) = ∑x,y w(x, y) [I(x + u, y + v) − I(x, y)]²

where (x, y) ranges over the pixels in the local window, I(x, y) is the image grayscale, I(x + u, y + v) is the grayscale after the shift, and w(x, y) is the window function;

A2. Corner detection seeks the shift that maximizes E(u, v). A first-order Taylor expansion simplifies it to the quadratic form

E(u, v) ≈ [u, v] M [u, v]ᵀ

with the adaptive matrix

M = ∑x,y w(x, y) [Ix², IxIy; IxIy, Iy²]

that is, the windowed sum of products of the image gradients Ix and Iy;

A3. Use the adaptive matrix M to find corners. Let λ1 and λ2 be the eigenvalues of M; if the smaller of the two exceeds a preset threshold, the corner response is R = min{λ1, λ2};

A4. Compute the quality of each corner, set the threshold relative to the maximum corner quality, and set a minimum Euclidean distance allowed between two corners;

A5. Discard corners whose quality is below the threshold;

A6. Sort the remaining corners by quality in descending order;

A7. Take the first corner in the sorted list as a best corner and delete every corner whose Euclidean distance to it is less than the minimum distance;

A8. Repeat steps A6-A7 until no more corners can be deleted, then return all best corners.
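The greedy selection of steps A6-A8 (sort by quality, keep the best, suppress everything closer than the minimum distance) can be sketched as:

```python
import numpy as np

def select_best_corners(pts, quality, min_dist):
    """Steps A6-A8: sort corners by quality in descending order, then
    repeatedly accept the best remaining corner and drop every corner
    whose Euclidean distance to an accepted one is below min_dist."""
    order = np.argsort(-np.asarray(quality))
    pts = np.asarray(pts, float)[order]
    kept = []
    for p in pts:
        if all(np.hypot(*(p - q)) >= min_dist for q in kept):
            kept.append(p)
    return np.array(kept)
```

This is the same suppression idea used by common good-features-to-track implementations; the interface here is an illustrative sketch, not the patent's code.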

Then obtain the coordinates and scale of each feature point, together with its direction, size and angle information.

(5) Feature matching step: match the feature points extracted from two images with a matching algorithm, until the feature points of every image have been matched with those of the neighbouring images.

The KNN matching algorithm matches the feature points between two images, and a ratio test is applied: the nearest-neighbour distance is compared with the second-nearest-neighbour distance, and when the ratio is below a threshold the two feature points are considered correctly matched. Once the last image has been matched, the feature points of every image are matched with those of the surrounding images.
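The 2-nearest-neighbour matching with ratio test can be sketched by brute force in NumPy; the 0.75 default ratio is a common choice, not a value given by the patent:

```python
import numpy as np

def ratio_test_matches(desc1, desc2, ratio=0.75):
    """Brute-force 2-NN matching with the distance ratio test: a match
    (i, j) is kept only when the best distance is below `ratio` times
    the second-best distance."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches
```

Ambiguous descriptors, whose two nearest neighbours are almost equally far away, fail the test and are discarded rather than risked as mismatches.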

(6) False match removal step: in the feature matching of step (5), not every feature point pairing is correct; wrong matching pairs may remain. The method of this embodiment for removing false matches comprises the following steps:

B1. Using the RANSAC algorithm, randomly select 4 pairs from the matched feature points as inliers; the remaining feature points are treated as outliers;

B2. Compute the homography matrix from the inliers;

B3. Test all outliers with this homography under a preset threshold condition; every outlier that satisfies the threshold condition after the homography test is reclassified as an inlier;

B4. Repeat steps B2-B3 until the number of inliers no longer changes, which yields the final homography matrix.

(7) Image stitching step: apply a perspective transformation that takes the images and the homography matrix as input and warps the original image onto the target plane according to the homography. At this point the images overlap over a large region that must be fused, and stitching seams can appear because of differences in illumination and colour. This embodiment uses weighted fusion, shown in Fig. 2: across the overlap the result transitions gradually from the first image to the second, i.e. the pixel values P of the overlap region are combined as a weighted sum,

P = ((d − h) / d) · P1 + (h / d) · P2

where h is the distance from a pixel in the overlap region to the uppermost boundary line of the stitching region, d is the total extent of the overlap, and P1 and P2 are the RGB components of Picture 1 and Picture 2 respectively. This yields a seam-free stitched rock wall reconstruction image.
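The weighted fusion over the overlap region can be sketched as follows, with the weight of the second image growing linearly with the distance h from the top boundary of the overlap (the linear ramp matches the description; the exact weighting in the original formula image is assumed):

```python
import numpy as np

def blend_overlap(p1: np.ndarray, p2: np.ndarray) -> np.ndarray:
    """Linear weighted fusion of two aligned overlap regions of shape
    (H, W, 3): row 0 is pure image 1, the last row pure image 2, and
    rows in between are weighted sums of the two."""
    d = max(p1.shape[0] - 1, 1)       # total extent of the overlap
    h = np.arange(p1.shape[0], dtype=float).reshape(-1, 1, 1)
    w = h / d                         # 0 at the top boundary, 1 at the bottom
    return (1.0 - w) * p1 + w * p2
```

Because the weights vary smoothly from 0 to 1 across the overlap, differences in illumination between the two frames fade into each other instead of forming a visible seam.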

Beneficial effects of this embodiment:

Based on the YOLO object detection network, this embodiment extracts features from the rock points alone. This raises detection speed and makes better use of the useful feature information: the YOLO network is trained to produce a bounding box for each rock point, the Canny operator then detects the rock point edges, and feature points on the rock points are extracted with the Shi-Tomasi algorithm for feature matching, so the resulting feature points have strong characteristics. Moreover, with fewer feature points, the number of iterations in computing the homography matrix decreases, achieving low computational cost and high accuracy.

Although the embodiments of the present invention have been described with reference to the accompanying drawings, the patentee may make various changes or modifications within the scope of the appended claims; as long as they do not exceed the protection scope described by the claims of the present invention, they shall all fall within the protection scope of the present invention.

Claims (9)

1. A method for rock wall reconstruction by image stitching, characterized in that it comprises a video frame acquisition step, a feature point acquisition step, a feature point matching step and an image stitching step;

the video frame acquisition step: intercept several video frames from a video to obtain the images corresponding to the video frames;

the feature point acquisition step comprises an image preprocessing step, a rock point calibration step and a feature point extraction step; the image preprocessing step performs a preprocessing operation on the images; the rock point calibration step calibrates the rock points of the images; the feature point extraction step extracts feature points on the edges of the rock points by an edge feature extraction method; a rock point is an object in the image that has an irregular shape and is easy to grasp;

the feature point matching step: match the feature points extracted from two images by a matching algorithm, until the feature points of every image have been matched with the feature points of the surrounding images;

the image stitching step: determine the overlapping area by perspective transformation, and transform the original images onto the target plane by a stitching method, repeating the stitching operation until a complete rock wall reconstruction map is obtained.

2. The method for rock wall reconstruction by image stitching according to claim 1, characterized in that the specific method of intercepting several video frames in the video frame acquisition step is: set a unit value for intercepting video frames, and intercept several video frames at intervals of that unit value.

3. The method for rock wall reconstruction by image stitching according to claim 1, characterized in that the specific method of preprocessing the images in the image preprocessing step is to apply mean filtering to all video frame images to remove noise.

4. The method for rock wall reconstruction by image stitching according to claim 1, characterized in that the specific method of calibrating the rock points of the images in the rock point calibration step is to calibrate each image one by one to obtain the labels in each image; a label comprises the coordinates of the rock point and the name of the rock point.

5. The method for rock wall reconstruction by image stitching according to claim 1, characterized in that the edge feature extraction method comprises the following steps:

train the calibrated features with the YOLO network to obtain the bounding box of each rock point, and convert the image inside each bounding box to grayscale;

detect each bounding box with the Canny edge algorithm to obtain an image of the edges of each rock point;

obtain the corner points on the edges of the rock points with the Shi-Tomasi corner detection method, and obtain the best corner points, namely the feature points, through the adaptive matrix;

obtain the coordinates of the feature points, the scale at which they lie, and the direction, size and angle information of the feature points.

6. The method for rock wall reconstruction by image stitching according to claim 5, characterized in that the specific method of obtaining the corner points on the edges of the rock points with the Shi-Tomasi corner detection method and obtaining the best corner points through the adaptive matrix is:

A1. observe the image containing corner points through a local window, move the local window by (u, v) in each direction and compute the sum of all grayscale differences:

E(u, v) = Σx,y w(x, y)[I(x+u, y+v) − I(x, y)]²,

where x and y are the pixel coordinates within the window, I(x, y) is the image grayscale of the local window, I(x+u, y+v) is the image grayscale after the move, and w(x, y) is the window function;

A2. perform corner detection so as to maximize the value of E(u, v), which is simplified to

E(u, v) ≈ [u v] (Σx,y w(x, y) [Ix² IxIy; IxIy Iy²]) [u v]ᵀ;

let the adaptive matrix be

M = Σx,y w(x, y) [Ix² IxIy; IxIy Iy²],

that is,

E(u, v) ≈ [u v] M [u v]ᵀ,

where Ix and Iy are the image gradients in the x and y directions;

A3. use the adaptive matrix M to find the corner points: let the eigenvalues of M be λ1 and λ2; if the smaller of the two eigenvalues of M is greater than a preset threshold, the point is a corner, with corner response R = min{λ1, λ2};

A4. compute the quality of the corner points, set the threshold according to the maximum eigenvalue of the corner quality, and set the shortest Euclidean distance allowed between two corner points;

A5. remove the corner points whose quality is below the threshold;

A6. sort the remaining corner points in descending order of quality;

A7. take the first corner point of the descending order as a best corner point and delete all corner points whose Euclidean distance to that corner point is smaller than the shortest Euclidean distance;

A8. repeat steps A6-A7 until no corner point can be deleted any more, and return all the best corner points.
7. The method for rock wall reconstruction by image stitching according to claim 1, characterized in that the specific method of matching, in the feature point matching step, the feature points extracted from two images by a matching algorithm is: match the feature points between the two images with the KNN matching algorithm and apply a ratio test, i.e. compare the nearest-neighbour distance with the second-nearest-neighbour distance; when the distance ratio is smaller than a predetermined threshold, the two feature points can be considered correctly matched.

8. The method for rock wall reconstruction by image stitching according to claim 1, characterized in that a step of removing wrong matches is further required before the image stitching step:

B1. using the RANSAC algorithm, randomly select 4 pairs of the matched feature points as inliers and set the other feature points as outliers;

B2. obtain the homography matrix of the inliers;

B3. test all outliers with the homography matrix, and, under a preset threshold condition, reclassify as new inliers all outliers that satisfy the threshold condition after the homography test;

B4. repeat steps B2-B3 until the number of inliers no longer changes, at which point the final homography matrix is obtained.

9. The method for rock wall reconstruction by image stitching according to claim 1, characterized in that the stitching operation in the image stitching step uses a weighted fusion method; the weighted fusion method adds and synthesizes the pixel values of the overlapping areas of the images according to certain weights.
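The KNN matching with ratio test of claim 7 can be sketched as a brute-force 2-nearest-neighbour search. This is a minimal sketch: the descriptor format, the 0.75 default ratio and the function name are illustrative assumptions, not the patent's exact choices:

```python
import math

def ratio_test_matches(desc1, desc2, ratio=0.75):
    """Lowe-style ratio test over brute-force nearest neighbours.

    desc1, desc2: lists of equal-length feature descriptors (sequences of
    floats).  For each descriptor in desc1 its two nearest neighbours in
    desc2 are found; the match is kept only when the nearest distance is
    smaller than `ratio` times the second-nearest distance, i.e. the best
    match is clearly better than any alternative.
    Returns a list of (index_in_desc1, index_in_desc2) pairs.
    """
    matches = []
    for i, d1 in enumerate(desc1):
        # all distances to desc2, sorted so dists[0]/dists[1] are the
        # nearest and second-nearest neighbours
        dists = sorted((math.dist(d1, d2), j) for j, d2 in enumerate(desc2))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

Ambiguous features (two near-equal nearest neighbours) are discarded rather than matched, which is what keeps the mismatch rate low before RANSAC refinement.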
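The mismatch-removal loop of claim 8 (steps B1-B4) can be sketched with a direct-linear-transform homography fit. The trial count, reprojection threshold and helper names are illustrative assumptions; a production pipeline would typically call OpenCV's findHomography instead:

```python
import numpy as np

def fit_homography(src, dst):
    """Least-squares homography via the direct linear transform (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)          # null-space vector = homography
    return H / H[2, 2]

def inliers_under(H, src, dst, thresh):
    """Indices of correspondences whose reprojection error is below thresh."""
    pts = np.hstack([src, np.ones((len(src), 1))])
    with np.errstate(all="ignore"):   # degenerate H may divide by ~0
        proj = (H @ pts.T).T
        proj = proj[:, :2] / proj[:, 2:3]
        err = np.linalg.norm(proj - dst, axis=1)
    return np.flatnonzero(err < thresh)

def ransac_homography(src, dst, thresh=2.0, trials=200, seed=0):
    """B1: random 4-pair samples; B2: fit H to the inlier set;
    B3: re-test all points against H; B4: refit until the set stabilises."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best = np.empty(0, dtype=int)
    for _ in range(trials):                              # B1
        sample = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[sample], dst[sample])     # B2
        if not np.isfinite(H).all():
            continue
        inl = inliers_under(H, src, dst, thresh)         # B3
        if len(inl) > len(best):
            best = inl
    while True:                                          # B4
        H = fit_homography(src[best], dst[best])
        new = inliers_under(H, src, dst, thresh)
        if np.array_equal(new, best):
            return H, best
        best = new
```

Because the final homography is refit on the full consensus set, gross mismatches have no influence on the transform used in the stitching step.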
CN202210737778.5A 2022-06-27 2022-06-27 A method for rock wall reconstruction using image stitching Active CN115018709B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210737778.5A CN115018709B (en) 2022-06-27 2022-06-27 A method for rock wall reconstruction using image stitching

Publications (2)

Publication Number Publication Date
CN115018709A true CN115018709A (en) 2022-09-06
CN115018709B CN115018709B (en) 2025-03-07

Family

ID=83077042

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210737778.5A Active CN115018709B (en) 2022-06-27 2022-06-27 A method for rock wall reconstruction using image stitching

Country Status (1)

Country Link
CN (1) CN115018709B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102006425A (en) * 2010-12-13 2011-04-06 交通运输部公路科学研究所 Method for splicing video in real time based on multiple cameras
CN104376548A (en) * 2014-11-07 2015-02-25 中国电子科技集团公司第二十八研究所 Fast image splicing method based on improved SURF algorithm
WO2016086754A1 (en) * 2014-12-03 2016-06-09 中国矿业大学 Large-scale scene video image stitching method
CN109736894A (en) * 2018-11-27 2019-05-10 中国矿业大学 A monitoring system, monitoring method and early warning method for surrounding rock disaster of coal mine roadway
CN110660023A (en) * 2019-09-12 2020-01-07 中国测绘科学研究院 Video stitching method based on image semantic segmentation

Non-Patent Citations (1)

Title
HUANG Yuanzheng; YIN Chunli; LIU Bo; HAN Mingxiao; CHEN Min; ZHANG Dingrui; LIANG Qinghua: "Image stitching algorithm based on an improved fast Shi-Tomasi feature point detection algorithm", Mechanical Design and Research, no. 04, 20 August 2018 (2018-08-20) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant