CN115909025A - Terrain vision autonomous detection and identification method for small celestial body surface sampling point - Google Patents
- Publication number
- CN115909025A (application number CN202211211528.4A)
- Authority
- CN
- China
- Prior art keywords
- point
- image
- terrain
- pixel
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
Description
Technical Field
The present invention belongs to the technical field of small-celestial-body surface exploration, and in particular relates to a method for autonomous visual detection and identification of terrain at sampling points on the surface of a small celestial body.
Background Art
By collecting rock samples from the surface of small celestial bodies and analyzing their composition, clues to the formation of the solar system, and even to the origin of life, can be explored. Carrying a robotic arm on a small-celestial-body probe for autonomous surface sampling and return is an important means to this end. Owing to limits on energy, propulsion, and other resources, the time a probe can remain attached to a small celestial body is short. Moreover, small celestial bodies are very far from Earth, so communication delays are long and real-time telemetry and control from the ground are impossible. The robotic-arm sampling process must therefore rely on fully autonomous on-orbit visual image processing to detect and identify the terrain at the sampling point and complete the sampling operation.
The detection and identification of non-cooperative targets is one of the difficult problems in space vision, and the rocks on the surface of a small celestial body, as the sampling objects, are weakly textured and highly similar, which makes their image processing even harder. The pixel values of images of sample rocks differ very little, so features are extremely hard to extract; identifying and measuring such unstructured, highly similar rock targets is therefore very challenging. The literature (Wang Yalin et al., "Analysis and Simulation Verification of the Surface Topography of Rubble-Pile Asteroids," Journal of Deep Space Exploration, 2019, 6(5)) studied the surface terrain characteristics of rubble-pile asteroids, proposed a method for generating a simulation model of asteroid surface terrain, and experimentally simulated the power-law size distribution of the stones that shape the terrain, but gave no method for detecting and identifying robotic-arm sampling points. Invention patent CN111721302A discloses a method for identifying and perceiving complex terrain features on irregular asteroid surfaces: using optical images taken by a deep-space probe, it detects and distinguishes crater and rock features from the geometric characteristics of the asteroid surface terrain. That method is mainly intended for on-orbit navigation and obstacle avoidance of deep-space probes and is not suitable for the fine terrain identification of sampling points based on close-range imaging after landing.
The present invention conducts research on the selection of robotic-arm sampling points in small-celestial-body surface exploration missions and proposes a method, based on binocular stereo vision, for autonomous detection and identification of sampling-point terrain, which has broad application prospects in such missions.
Summary of the Invention
The technical problem solved by the present invention is: in view of the weak texture and high similarity of the rocks on the surface of small celestial bodies that serve as sampling objects, a method based on binocular stereo vision is provided for autonomous visual detection and identification of the terrain of sampling points on such surfaces, solving the problem, in a class of small-celestial-body surface exploration missions, of the sampling robotic arm autonomously detecting and identifying the terrain of surface sampling points.
The object of the present invention is achieved through the following technical solution:
The robotic-arm system is equipped with a binocular stereo vision camera pair whose common field of view covers the sample-collection area, enabling visual imaging of the sampling area and identification and measurement of the terrain at the sampling point. Robotic-arm systems themselves are widely used prior art and are not elaborated here. The binocular cameras are equipped with an active illumination source to guarantee good imaging. The two cameras image the sampling area synchronously. First, a preprocessing algorithm filters noise from the images; then distortion correction is applied to reduce the imaging error introduced by the camera optics. After binocular epipolar rectification, the left-eye and right-eye images are matched to obtain a dense disparity map. For each point of the two-dimensional disparity map, its three-dimensional coordinates in the left-camera coordinate system are computed, yielding a three-dimensional point cloud. The point-cloud data are used to perceive the overall terrain situation. Finally, the plane region containing the ground is divided into a grid, the information within a sliding window is computed, and the terrain is judged, so that operable positions for the sampling device on the robotic arm can be screened out.
A method for autonomous visual detection and identification of sampling points on the surface of a small celestial body mainly comprises the following steps:
(1) Image the sampling area with a binocular stereo vision camera pair, ensuring that the common field of view of the left and right cameras covers the sampling area;
(2) Preprocess the raw images acquired by the left and right cameras, using a suitable filtering algorithm to reduce the influence of noise;
(3) Apply distortion correction to the denoised images obtained in step (2) to reduce the error caused by distortion of the camera optics;
(4) Apply binocular epipolar rectification to the distortion-corrected images obtained in step (3), generating rectified left-eye and right-eye images;
(5) Search the right-eye image for the match of each left-eye point and compute the disparity, obtaining a dense disparity map;
(6) For each point of the two-dimensional disparity map obtained in step (5), compute its three-dimensional coordinates in the left-camera coordinate system, obtaining a three-dimensional point cloud;
(7) Perform ground detection on the visible area using the point-cloud data to determine the overall terrain situation within the sampling area;
(8) Divide the plane region containing the ground into a grid, compute the information within a sliding window, and judge the terrain.
Furthermore, the preprocessing in step (2) applies median filtering to the left-eye and right-eye images with a filter window of size m×m; the calculation comprises the following steps:
A1) Let the image width and height be W and H. Pad the image boundary so that the width and height become W+2×[m/2] and H+2×[m/2], with the padded pixels set to 0;
A2) For a point (u_ori, v_ori) of the original image with gray value I(u_ori, v_ori), the median filter is computed as
G(u_f, v_f) = median{ I(u_f + i, v_f + j) : (i, j) ∈ W }
where G(u_f, v_f) is the filtered pixel gray value, W is the m×m filter template, and i, j are the coordinates of pixels within the template W.
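Steps A1)-A2) can be sketched as a short NumPy routine; the function name and the use of `np.median` over the zero-padded window are illustrative assumptions, not part of the patent:

```python
import numpy as np

def median_filter(img, m=3):
    """Median-filter a grayscale image with an m x m window,
    zero-padding the border as in step A1)."""
    r = m // 2
    padded = np.pad(img, r, mode="constant", constant_values=0)
    out = np.empty_like(img)
    H, W = img.shape
    for v in range(H):
        for u in range(W):
            # m x m neighbourhood centered on (v, u) of the original image
            window = padded[v:v + m, u:u + m]
            out[v, u] = np.median(window)
    return out
```

For flight software the double loop would be replaced by a vectorized or hardware implementation; the sketch only mirrors the calculation order of the text.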
Furthermore, the distortion correction in step (3) comprises the following calculation steps:
Two-dimensional pixel homogeneous coordinates of the k-th pixel before distortion correction: [u_k, v_k, 1]^T.
Two-dimensional physical homogeneous coordinates of the k-th pixel before distortion correction: [x_k, y_k, 1]^T = A^(-1) [u_k, v_k, 1]^T,
where the matrix A is the intrinsic-parameter matrix of the camera and A^(-1) denotes the inverse of A.
Horizontal component of the lens distortion, with r_k^2 = x_k^2 + y_k^2:
dx_k = x_k (k1 r_k^2 + k2 r_k^4 + k3 r_k^6) + 2 p1 x_k y_k + p2 (r_k^2 + 2 x_k^2)
Vertical component of the lens distortion:
dy_k = y_k (k1 r_k^2 + k2 r_k^4 + k3 r_k^6) + p1 (r_k^2 + 2 y_k^2) + 2 p2 x_k y_k
where k1, k2, k3 are the first-, second-, and third-order radial distortion coefficients and p1, p2 are the first- and second-order tangential distortion coefficients.
The two-dimensional physical homogeneous coordinates of the k-th pixel after distortion correction are then
[x'_k, y'_k, 1]^T = [x_k - dx_k, y_k - dy_k, 1]^T
which are converted back to two-dimensional pixel homogeneous coordinates:
[u'_k, v'_k, 1]^T = A [x'_k, y'_k, 1]^T
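As a sketch of the calculation above, the following NumPy routine normalizes pixels with A^(-1), subtracts the radial and tangential distortion components, and maps back through A. The one-step subtraction (rather than an iterative inversion of the distortion model) and all names are illustrative assumptions:

```python
import numpy as np

def undistort_points(pixels, A, k1, k2, k3, p1, p2):
    """One-step undistortion of pixel coordinates (N x 2 array)."""
    pts = np.asarray(pixels, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    hom = np.hstack([pts, ones])                   # [u, v, 1] per row
    xy = (np.linalg.inv(A) @ hom.T).T[:, :2]       # physical coords [x, y]
    x, y = xy[:, 0], xy[:, 1]
    r2 = x * x + y * y
    radial = k1 * r2 + k2 * r2**2 + k3 * r2**3
    dx = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    dy = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    xc, yc = x - dx, y - dy                        # corrected physical coords
    hom_c = np.stack([xc, yc, np.ones_like(xc)], axis=1)
    return (A @ hom_c.T).T[:, :2]                  # back to pixel coords
```

With all five coefficients zero the routine reduces to the identity, which is a convenient sanity check.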
Furthermore, in step (4), the correction formula for the epipolar rectification is:
λ [u, v, 1]^T = M' R_rec R^(-1) M^(-1) [u_c, v_c, 1]^T
where [u_c v_c 1]^T are the homogeneous pixel coordinates of a spatial point in the image before epipolar rectification, [u v 1]^T are its homogeneous pixel coordinates in the rectified image, M is the intrinsic-parameter matrix of the camera, R is the camera coordinate-system rotation matrix, M' is the rectified intrinsic-parameter matrix, R_rec is the rectified camera coordinate-system rotation matrix, and λ ≠ 0 is a constant.
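Under the formula above, the rectifying map is a single homography applied to homogeneous pixel coordinates, with λ removed by dividing out the third component. A minimal sketch, with illustrative names:

```python
import numpy as np

def rectifying_homography(M, R, M_rec, R_rec):
    """Homography H mapping original pixels to rectified pixels:
    lambda * [u, v, 1]^T = H @ [uc, vc, 1]^T."""
    return M_rec @ R_rec @ np.linalg.inv(R) @ np.linalg.inv(M)

def apply_homography(H, uc, vc):
    """Warp one pixel through H and divide out the scale lambda."""
    p = H @ np.array([uc, vc, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

In practice the homography is applied to both cameras (each with its own M, R) so that corresponding points end up on the same image row.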
Furthermore, in step (5), a block-matching method is used to search the right-eye image for the match of each left-eye pixel and compute the disparity, comprising the following steps:
B1) Compute the difference between two pixels, i.e., measure the gray-level similarity at each candidate disparity:
e(u, v, d) = |G_L(u, v) - G_R(u - d, v)|  ⑦
where G(u, v) is the gray value of the pixel with coordinates (u, v) in the pixel coordinate system and d is the disparity.
B2) Take a window around the candidate match as the similarity-measurement region, with the pixel under consideration at its center. Within the window, sum the matching costs of the corresponding pixels; the result is the matching-similarity measure of that point:
E(u, v, d) = Σ_{(i,j)∈S} e(u + i, v + j, d)
where S is the similarity-measurement region, generally an n×n rectangle.
B3) Within the search range, take the point whose accumulated matching cost is smallest as the final match.
B4) Compute the disparity pixel by pixel over the whole image, finally obtaining a dense disparity map.
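Steps B1)-B4) amount to sum-of-absolute-differences (SAD) block matching along the rectified rows. A minimal, unoptimized sketch; window size, border handling, and names are assumptions:

```python
import numpy as np

def sad_disparity(left, right, max_disp, n=5):
    """Dense disparity by SAD block matching (steps B1-B4).
    left/right: rectified grayscale images (H x W float arrays)."""
    H, W = left.shape
    r = n // 2
    disp = np.zeros((H, W), dtype=int)
    for v in range(r, H - r):
        for u in range(r, W - r):
            patch_l = left[v - r:v + r + 1, u - r:u + r + 1]
            best, best_d = np.inf, 0
            # search range limited so the right-image window stays in bounds
            for d in range(0, min(max_disp, u - r) + 1):
                patch_r = right[v - r:v + r + 1, u - d - r:u - d + r + 1]
                cost = np.abs(patch_l - patch_r).sum()   # SAD over window S
                if cost < best:
                    best, best_d = cost, d
            disp[v, u] = best_d
    return disp
```

The triple loop is O(H·W·max_disp·n²); a flight implementation would use incremental cost aggregation, but the logic per pixel is exactly B1)-B3).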
Furthermore, in step (6), for any point (u, v) of the two-dimensional disparity map obtained by stereo matching, its three-dimensional coordinates (X, Y, Z) in the left-camera coordinate system are computed as
Z = fx · B / d,  X = (u - u0) · Z / fx,  Y = (v - v0) · Z / fy
where fx, fy are the equivalent focal lengths of the camera, (u0, v0) are the pixel coordinates of the principal point, and B is the baseline length.
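The triangulation formula can be sketched directly (names are illustrative):

```python
def reproject(u, v, d, fx, fy, u0, v0, B):
    """Disparity -> 3-D point in the left-camera frame (step 6)."""
    Z = fx * B / d            # depth from disparity and baseline
    X = (u - u0) * Z / fx     # lateral offset from the principal point
    Y = (v - v0) * Z / fy
    return X, Y, Z
```

Applying this to every valid pixel of the disparity map yields the dense point cloud used in steps (7) and (8).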
Furthermore, in step (7), ground detection over the visible area comprises the following steps:
C1) Randomly select K points and fit a plane. Let the plane equation be Z = AX + BY + C; the plane parameters then satisfy, in the least-squares sense,
Z_l = A·X_l + B·Y_l + C, l = 1, …, K
where (X_l, Y_l, Z_l) are the three-dimensional coordinates of the l-th point.
C2) Verify the quality of the plane fit with the remaining points by computing the distance of each point to the plane:
D_l = |A·X_l + B·Y_l - Z_l + C| / sqrt(A² + B² + 1)
Set a flatness threshold T, count the number N_c of points whose distance to the plane is less than T for the given plane parameters, loop several times to obtain the maximum N_c, and take the corresponding plane as the ground of the visible area.
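Steps C1)-C2) describe a RANSAC-style plane fit. A minimal sketch under that reading; the sample size, iteration count, and names are assumptions:

```python
import numpy as np

def ransac_ground_plane(points, K=3, T=0.01, iters=200, seed=0):
    """RANSAC-style ground detection (steps C1-C2).
    points: N x 3 array of (X, Y, Z); plane model Z = A*X + B*Y + C."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best_params, best_nc = None, -1
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), size=K, replace=False)]
        # least-squares fit of Z = A*X + B*Y + C on the K sampled points
        M = np.column_stack([sample[:, 0], sample[:, 1], np.ones(K)])
        A, B, C = np.linalg.lstsq(M, sample[:, 2], rcond=None)[0]
        # point-to-plane distances for all points
        dist = np.abs(A * pts[:, 0] + B * pts[:, 1] - pts[:, 2] + C)
        dist /= np.sqrt(A * A + B * B + 1.0)
        nc = int((dist < T).sum())        # inlier count N_c
        if nc > best_nc:
            best_nc, best_params = nc, (A, B, C)
    return best_params, best_nc
```

The plane with the largest inlier count is taken as the visible-area ground, matching the loop described in the text.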
Furthermore, in step (8), the process of judging the terrain region by region comprises the following steps:
S1) Determine the grid-cell area with reference to the operating area of the sampling device, and partition the point-cloud set into p×q grid cells.
S2) In each grid cell, retrieve the distance D_i of each point in the cell to the ground and compute the roughness of the current window, expressed as a histogram descriptor: the histogram H is divided into several equal bins from 0 to the maximum distance D_max, and the distances are tallied.
S3) Set thresholds and decision rules, and judge from the roughness histogram whether the terrain of the cell permits the task to be executed.
S4) Set a sliding window and a sliding step, perform the above roughness calculation for the sliding window, and judge whether the sliding window permits the task.
S5) Mark the grid cells that satisfy the task requirements, record the terrain parameters within each cell, and form a list of operable-region information.
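Step S2)'s histogram descriptor and step S3)'s threshold test can be sketched as follows; the specific decision rule used here (no point protrudes beyond an allowed height above the fitted ground plane) is an illustrative assumption, since the patent leaves the rule open:

```python
import numpy as np

def roughness_histogram(distances, d_max, bins=15):
    """Histogram descriptor of point-to-ground distances in one cell (step S2)."""
    hist, _ = np.histogram(distances, bins=bins, range=(0.0, d_max))
    return hist

def cell_operable(distances, d_max, max_allowed, bins=15):
    """Step S3 sketch: a cell is operable when no point lies
    higher than max_allowed above the fitted ground plane."""
    hist = roughness_histogram(distances, d_max, bins)
    edges = np.linspace(0.0, d_max, bins + 1)
    # bins whose lower edge exceeds the allowed height must be empty
    high_bins = hist[edges[:-1] >= max_allowed]
    return bool(high_bins.sum() == 0)
```

The same routine is reused for the sliding windows of step S4); cells or windows returning True are collected into the operable-region list of step S5).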
Compared with the prior art, the present invention has the following beneficial effects:
1. Addressing the problem of detecting and identifying the terrain of sampling points on the surface of small celestial bodies, the present invention proposes an autonomous detection and identification method based on binocular stereo vision, effectively solving the problem of autonomously detecting and identifying the weakly textured, highly similar rocks on such surfaces;
2. The method for detecting and identifying sampling-point terrain provided by the present invention enables fully autonomous on-orbit data processing, effectively solving the problem that, in small-celestial-body exploration missions, long communication delays prevent real-time processing by the ground system;
3. The visual detection and identification method adopted by the present invention is simple, efficient, and reliable, and is suited to space environments where computing resources are scarce.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the binocular stereo camera configuration of the present invention;
FIG. 2 is a flow chart of the method of the present invention for autonomous detection and identification of sampling-point terrain based on binocular stereo vision;
FIG. 3 is a schematic diagram of the three-dimensional planar grid division of the present invention.
Detailed Description
To make the objects and advantages of the present invention clearer, the present invention is specifically described below in conjunction with embodiments. It should be understood that the following text merely describes one or several specific embodiments of the present invention and does not strictly limit the scope of protection specifically claimed.
Embodiment
As shown in FIG. 1, the binocular stereo cameras are mounted on the ground-facing surface of the small-celestial-body probe and image the sampling area so as to detect and identify the terrain within it. The position and attitude of the cameras on the probe are designed to guarantee the best observation range. The cameras are equipped with an active light source so that the scene within their common field of view is imaged well, and they acquire synchronously so that the left and right cameras expose at the same instant, enabling stereo vision computation.
As shown in FIG. 2, the method for autonomous detection and identification of sampling-point terrain based on binocular stereo vision mainly comprises the following steps:
(1) Preprocess the binocular camera images to reduce the influence of noise. Given the imaging characteristics of the sample rocks, the preprocessing applies median filtering to the left-eye and right-eye images, which smooths the data while preserving small sharp details. The filter window size is 3×3, and the calculation proceeds as follows:
A1) Let the image width and height be W and H. Pad the image boundary so that the width and height become W+2 and H+2, with the padded pixels set to 0.
A2) For a point (u_ori, v_ori) of the original image with gray value I(u_ori, v_ori), the 3×3 median filter is computed as
G(u_f, v_f) = median{ I(u_f + i, v_f + j) : (i, j) ∈ W }
where G(u_f, v_f) is the filtered pixel gray value and W is the 3×3 filter template, so that i and j are both integers in the interval [-1, 1].
(2) Apply distortion correction to the filtered left-eye and right-eye images; the calculation proceeds as follows (taking the image of one camera as an example):
Two-dimensional pixel homogeneous coordinates of the k-th pixel before distortion correction: [u_k, v_k, 1]^T.
Two-dimensional physical homogeneous coordinates of the k-th pixel before distortion correction: [x_k, y_k, 1]^T = A^(-1) [u_k, v_k, 1]^T,
where the matrix A is the intrinsic-parameter matrix of the camera and A^(-1) denotes the inverse of A.
Horizontal component of the lens distortion, with r_k^2 = x_k^2 + y_k^2:
dx_k = x_k (k1 r_k^2 + k2 r_k^4 + k3 r_k^6) + 2 p1 x_k y_k + p2 (r_k^2 + 2 x_k^2)
Vertical component of the lens distortion:
dy_k = y_k (k1 r_k^2 + k2 r_k^4 + k3 r_k^6) + p1 (r_k^2 + 2 y_k^2) + 2 p2 x_k y_k
where k1, k2, k3 are the first-, second-, and third-order radial distortion coefficients and p1, p2 are the first- and second-order tangential distortion coefficients.
The two-dimensional physical homogeneous coordinates of the k-th pixel after distortion correction are then
[x'_k, y'_k, 1]^T = [x_k - dx_k, y_k - dy_k, 1]^T
which are converted back to two-dimensional pixel homogeneous coordinates:
[u'_k, v'_k, 1]^T = A [x'_k, y'_k, 1]^T
(3) Apply epipolar rectification to the distortion-corrected images; the calculation proceeds as follows (taking the image of one camera as an example):
λ [u, v, 1]^T = M' R_rec R^(-1) M^(-1) [u_c, v_c, 1]^T
where [u_c v_c 1]^T are the homogeneous pixel coordinates of a spatial point in the image before epipolar rectification, [u v 1]^T are its homogeneous pixel coordinates in the rectified image, M is the intrinsic-parameter matrix of the camera, R is the camera coordinate-system rotation matrix, M' is the rectified intrinsic-parameter matrix, R_rec is the rectified camera coordinate-system rotation matrix, and λ ≠ 0 is a constant.
(4) Search for and match the corresponding points of the left-eye and right-eye images; the calculation proceeds as follows:
B1) First compute the pixel-wise difference between two pixels:
e(u, v, d) = |G_L(u, v) - G_R(u - d, v)|  ⑦
where G(u, v) is the gray value of the pixel with coordinates (u, v) in the pixel coordinate system.
B2) Take a window around the candidate match as the similarity-measurement region, with the pixel under consideration at its center. Within the window, sum the matching costs of the corresponding pixels; the result is the matching-similarity measure of that point:
E(u, v, d) = Σ_{(i,j)∈S} e(u + i, v + j, d)
where S is the similarity-measurement region, generally an n×n rectangle; in this example n = 21, chosen according to the resolution of the test images.
B3) Within the search range, take the point whose accumulated matching cost is smallest as the final match;
B4) Compute the disparity pixel by pixel over the whole image, finally obtaining a dense disparity map.
(5) For any point (u, v) of the two-dimensional disparity map obtained by stereo matching, compute its three-dimensional coordinates (X, Y, Z) in the left-camera coordinate system:
Z = fx · B / d,  X = (u - u0) · Z / fx,  Y = (v - v0) · Z / fy
where fx, fy are the equivalent focal lengths of the camera, (u0, v0) are the pixel coordinates of the principal point, and B is the baseline length.
(6) Perform ground detection over the visible area using the point-cloud data; the calculation proceeds as follows:
C1) Randomly select K points and fit a plane. Let the plane equation be Z = AX + BY + C; the plane parameters then satisfy, in the least-squares sense,
Z_l = A·X_l + B·Y_l + C, l = 1, …, K
where (X_l, Y_l, Z_l) are the three-dimensional coordinates of the l-th point.
C2) Verify the quality of the plane fit with the remaining points by computing the distance of each point to the plane:
D_l = |A·X_l + B·Y_l - Z_l + C| / sqrt(A² + B² + 1)
Set a flatness threshold T, count the number N_c of points whose distance to the plane is less than T for the given plane parameters, loop several times to obtain the maximum N_c, and take the corresponding plane as the ground of the visible area.
(7) Divide the plane region containing the ground into a grid and judge its terrain; the calculation proceeds as follows:
S1) Partition the detected plane into regions and preset the sliding window, as shown in FIG. 3. The grid-cell size can be determined from the single-point operating area of the sampling device. For example, if the single-point operating area of the sampling device is 100 mm × 100 mm and the common field of view of the binocular stereo cameras is 1.2 m × 1.1 m, 12×11 grid cells can be set, each 100 mm × 100 mm.
S2) In each grid cell, retrieve the distance D_i of each point in the cell to the ground and compute the roughness of the current window, expressed as a histogram descriptor: the histogram H is divided into 15 equal bins from 0 to the maximum distance D_max, and the distances are tallied.
S3) Set histogram bin thresholds and corresponding decision rules, and judge from the roughness histogram whether the cell permits a given class of task.
S4) Set the sliding window and sliding step; with reference to the grid-cell size, the sliding window can be set to 100 mm × 100 mm with a 50 mm step. Perform the above roughness calculation for the sliding window and judge whether it permits the task.
S5) Mark the grid cells that satisfy the task requirements, record the terrain parameters within each cell, and form a list of operable-region information.
In the description of the present invention, it should be noted that, unless otherwise expressly specified and limited, the terms "mounted," "connected," and "arranged" are to be understood broadly: for example, a connection may be fixed, detachable, or integral; it may be mechanical; and it may be direct or indirect through an intermediate medium. Those of ordinary skill in the art will understand the specific meanings of these terms in the present invention according to the specific circumstances. In the description of this specification, reference to the terms "one embodiment," "example," "specific example," and the like means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example, and the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above are merely preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present invention, and such improvements and refinements shall also be regarded as falling within the scope of protection of the present invention. Structures, devices, and operating methods not specifically described and explained herein are, unless otherwise specified and limited, implemented by conventional means in the art.
Claims (8)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211211528.4A | 2022-09-30 | 2022-09-30 | Terrain vision autonomous detection and identification method for small celestial body surface sampling point |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211211528.4A | 2022-09-30 | 2022-09-30 | Terrain vision autonomous detection and identification method for small celestial body surface sampling point |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN115909025A (en) | 2023-04-04 |
Family ID: 86488746
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202211211528.4A (Pending) | Terrain vision autonomous detection and identification method for small celestial body surface sampling point | 2022-09-30 | 2022-09-30 |
Country Status (1)

| Country | Link |
|---|---|
| CN | CN115909025A (en) |
Family Application Events

- 2022-09-30: application CN202211211528.4A filed in China (CN); published as CN115909025A, status active, Pending

Cited By (5)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116758026A (en) | 2023-06-13 | 2023-09-15 | 河海大学 | Dam seepage area measurement method based on binocular remote sensing image significance analysis |
| CN116758026B (en) | 2023-06-13 | 2024-03-08 | 河海大学 | A dam water seepage area measurement method based on saliency analysis of binocular remote sensing images |
| CN116524031A (en) | 2023-07-03 | 2023-08-01 | 盐城数智科技有限公司 | YOLOv8-based large-range lunar rover positioning and mapping method |
| CN116524031B (en) | 2023-07-03 | 2023-09-22 | 盐城数智科技有限公司 | YOLOv8-based large-range lunar rover positioning and mapping method |
| CN117414578A (en) | 2023-10-19 | 2024-01-19 | 广州市容大计算机科技有限公司 | Game resource distribution system and method based on cloud computing |
Similar Documents

| Publication | Title |
|---|---|
| CN111260773B (en) | Three-dimensional reconstruction method, detection method and detection system for small obstacle |
| CN103868460B (en) | Binocular stereo vision method for automatic measurement based on parallax optimized algorithm |
| CN105866790B (en) | Laser radar obstacle recognition method and system considering lasing intensity |
| CN105225482B (en) | Vehicle detecting system and method based on binocular stereo vision |
| CN115909025A (en) | Terrain vision autonomous detection and identification method for small celestial body surface sampling point |
| JP6858415B2 (en) | Sea level measurement system, sea level measurement method and sea level measurement program |
| CN108416791A (en) | Binocular vision-based pose monitoring and tracking method for parallel mechanism maneuvering platform |
| CN113393439A (en) | Forging defect detection method based on deep learning |
| CN106709950A (en) | Binocular-vision-based cross-obstacle lead positioning method of line patrol robot |
| CN106969706A (en) | Workpiece sensing and three-dimension measuring system and detection method based on binocular stereo vision |
| WO2018028103A1 (en) | Unmanned aerial vehicle power line inspection method based on characteristics of human vision |
| CN107560592B (en) | Precise distance measurement method for photoelectric tracker linkage target |
| CN109934230A (en) | Radar point cloud segmentation method based on visual aid |
| CN115375842A (en) | Plant three-dimensional reconstruction method, terminal and storage medium |
| CN105716539A (en) | Rapid high-precision 3D shape measuring method |
| CN110334701A (en) | Data acquisition method based on deep learning and multi-eye vision in digital twin environment |
| CN111260715B (en) | Depth map processing method, small obstacle detection method and system |
| CN105913013A (en) | Binocular vision face recognition algorithm |
| CN107677274A (en) | Unmanned plane independent landing navigation information real-time resolving method based on binocular vision |
| CN106996748A (en) | Wheel diameter measuring method based on binocular vision |
| CN117274573A (en) | Airplane surface hole positioning measurement method based on two-dimensional image and three-dimensional point cloud |
| CN117710588A (en) | Three-dimensional target detection method based on visual ranging prior information |
| CN106709432B (en) | Human head detection counting method based on binocular stereo vision |
| CN116755123A (en) | Non-contact RTK acquisition and measurement method, system and measurement equipment |
| CN113884017B (en) | Non-contact deformation detection method and system for insulator based on three-eye vision |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |