
CN118746257A - High-precision extraction method and system for optical slice center based on weighted dimensionality reduction decomposition - Google Patents


Info

Publication number
CN118746257A
CN118746257A (application CN202410729634.4A)
Authority
CN
China
Prior art keywords: image, layer space, image layer, pixel, light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410729634.4A
Other languages
Chinese (zh)
Inventor
陈晓辉
杨广
王生怀
骆晓文
周齐齐
张艺衡
王艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei University of Automotive Technology
Original Assignee
Hubei University of Automotive Technology
Application filed by Hubei University of Automotive Technology
Priority: CN202410729634.4A
Publication: CN118746257A
Legal status: Pending


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/002 Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/7715 Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods


Abstract


The present invention discloses a high-precision method and system for extracting the center of a light slice based on weighted dimensionality reduction decomposition, relating to the field of calibrating line structured light vision measurement systems in two-/three-dimensional vision measurement. The method mainly includes: decomposing the "image layer space" by dimensionality reduction to obtain grayscale vectors; analyzing and computing each grayscale vector to obtain its sub-pixel coordinate along the decomposition direction of the "image layer space"; matching the sequence number of the current grayscale vector in the direction perpendicular to the decomposition direction to obtain the light slice center coordinate corresponding to that grayscale vector; and traversing the light slice center coordinates of all grayscale vectors to obtain the light slice center of the pixel-level light strip image. Implementing the method and system provided by the present invention improves both the real-time performance and the precision of light slice center extraction in single-line laser scenes.

Description

High-precision extraction method and system for optical slice center based on weighted dimensionality reduction decomposition

Technical Field

The present invention relates to the field of calibrating line structured light vision measurement systems in two-/three-dimensional vision measurement, and more specifically to a high-precision method and system for extracting the center of a light slice based on weighted dimensionality reduction decomposition.

Background Art

Light-slice laser scanning 3D measurement is an important active vision technique. Based on the principle of 3D laser scanning, it offers a wide measurement range, high measurement efficiency, and compact hardware, and is widely used in key national development fields such as aerospace, intelligent automotive manufacturing, and intelligent shipbuilding.

The principle of light-slice laser scanning 3D measurement is to project a single-line light-slice laser onto a heterogeneous machined surface, producing a light-slice contour that represents the corresponding cross section of that surface. After the imaging system captures the light-slice contour, the sub-pixel center coordinates of the contour in image space are first extracted; calibration then maps these image-space center coordinates into physical space to obtain the true height data of the entire contour. Specifically, by one-dimensional translation, unidirectional scanning perpendicular to the projection plane of the single-line laser yields high-precision 3D point cloud data of the machined surface. Extracting the sub-pixel center coordinates of the light-slice contour in image space is therefore an essential prerequisite of light-slice laser scanning 3D measurement, and computing those coordinates with high efficiency, high precision, and high robustness is the key to ensuring the real-time performance, precision, and robustness of a line structured light vision measurement system.

Traditional light slice center extraction methods include, most notably, the Steger algorithm. The Steger algorithm uses the Hessian matrix to obtain the normal direction of the light-slice contour in image space, and then locates the center point along that normal to obtain the sub-pixel position of the contour center. The method is robust and accurate, but its heavy computational load makes fast extraction of the contour centerline difficult, so it struggles to satisfy applications with strict real-time requirements such as rapid measurement and 3D reconstruction.
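The Hessian step of the Steger algorithm can be illustrated at a single pixel. The sketch below is a simplified finite-difference version (the original algorithm uses Gaussian-derivative convolution over the whole image); the function name and the difference stencils are assumptions for illustration:

```python
import numpy as np

def steger_subpixel(img, y, x):
    """One Steger step at pixel (y, x): estimate gradient g and Hessian H
    by central finite differences, take the eigenvector of H with the
    largest |eigenvalue| as the stripe normal n, and move along n by
    t = -(n . g) / (n^T H n) to the sub-pixel center."""
    gy = (img[y + 1, x] - img[y - 1, x]) / 2.0
    gx = (img[y, x + 1] - img[y, x - 1]) / 2.0
    hyy = img[y + 1, x] - 2 * img[y, x] + img[y - 1, x]
    hxx = img[y, x + 1] - 2 * img[y, x] + img[y, x - 1]
    hxy = (img[y + 1, x + 1] - img[y + 1, x - 1]
           - img[y - 1, x + 1] + img[y - 1, x - 1]) / 4.0
    H = np.array([[hyy, hxy], [hxy, hxx]])
    w, V = np.linalg.eigh(H)
    n = V[:, np.argmax(np.abs(w))]        # normal direction of the stripe
    t = -(n @ np.array([gy, gx])) / (n @ H @ n)
    return y + t * n[0], x + t * n[1]
```

Note that this second-derivative machinery must run at every candidate pixel, which is exactly the computational load the passage above describes.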

J. Zhang proposed a high-precision method for extracting the centerline of structured light stripes in 3D reconstruction. The method processes the image and extracts its sub-pixel centers through a secondary optimization combining threshold segmentation with the grayscale centroid method. It supports multi-line laser measurement and extracts stripe centerlines with high real-time performance. However, it requires Gaussian filtering of the image, which is both time-consuming and introduces a detrimental edge-blurring effect; moreover, the edge pixels change frequently, so it is difficult to extract the stripe center coordinates with high precision and high stability.
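At its core, the grayscale centroid method computes a first moment of each intensity profile. A generic one-dimensional sketch (not J. Zhang's exact pipeline; the function name is invented):

```python
def gray_centroid(values, start=0):
    """Grayscale centroid of a 1-D intensity profile:
    sum(i * g_i) / sum(g_i), the intensity-weighted mean position."""
    total = sum(values)
    if total == 0:
        raise ValueError("empty profile")
    return start + sum(i * g for i, g in enumerate(values)) / total
```

Because every retained pixel contributes to the sums, a change in a single edge pixel shifts the result, which is precisely the instability the passage above criticizes.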

The patent "Method and device for extracting the center of line structured light based on Gaussian super-Laplacian distribution" (patent number ZL202311135784.4) proposes a line structured light center extraction method based on a Gaussian super-Laplacian distribution. It applies Gaussian filtering to the structured light image, removing noise that degrades fitting accuracy without destroying the original shape and structure of the stripe; the method has some noise resistance and meets certain accuracy requirements. However, its procedure for determining the fitting region within the stripe boundary expands outward from the maximum-grayscale point of each column by twice the stripe width on both sides. Because in structured light center extraction each column may contain multiple, discontinuous maximum-grayscale points, this easily produces large errors, so the method has poor interference resistance and low accuracy.

Y. Gong, G. Liu, H. Huang, T. Song and co-authors proposed an improved laser centerline extraction algorithm. The method first fits a tangent equation at the center points to determine the normal vector of the laser line, then computes sub-pixel coordinates along the normal, and finally applies linear smoothing to make the laser center points smoother. Compared with the grayscale centroid method, its root mean square error improves by 0.337 pixels, increasing the robustness of centerline extraction. However, the threshold-based stripe center search they propose obtains center coordinates by a geometric center method, which lowers the accuracy; the computational load also remains large, so the time-consumption problem is still not well solved.

In summary, traditional centerline extraction methods fall into two categories. The first is the geometric center class, represented by the threshold method; these methods have strong anti-interference ability and relatively high accuracy, but are time-consuming. Although later work applied image stereo rectification and Gaussian pre-filtering to reduce errors, rapid detection of complex objects remains out of reach. The second is the energy center class, represented by the grayscale centroid method; these methods suit multi-line laser measurement scenes, locating the structured light center from the stripe's contour information, and are therefore mostly used for matching multi-line laser centers. They offer high real-time performance but are strongly affected by noise and have low accuracy. Moreover, the grayscale centroid method is conventionally applied to the two-dimensional scene, whereas the multi-threshold domain-weighted method described in this embodiment operates on the one-dimensional scene obtained by dimensionality reduction decomposition of the two-dimensional scene, improving the real-time performance of structured light center measurement. A later secondary optimization of the grayscale centroid method improved the accuracy somewhat, but still cannot satisfy high-precision structured light center measurement. Therefore, current line structured light vision measurement systems suffer from limited real-time performance, low accuracy, and low robustness.

Summary of the Invention

It is an object of the present invention to provide a high-precision method and system for extracting the center of a light slice by weighted dimensionality reduction decomposition, capable of improving the real-time performance and precision of light slice center extraction in single-line laser scenes.

The present invention provides a high-precision method for extracting the center of a light slice based on weighted dimensionality reduction decomposition, comprising: S1: acquiring a pixel-level light strip image and, from it, obtaining the dimensionality reduction decomposition direction of the image's "image layer space" and the direction perpendicular to that decomposition direction; S2: converting the pixel-level light strip image to grayscale and decomposing it along the decomposition direction of the "image layer space" to obtain the grayscale vectors of the "image layer space"; S3: obtaining, from each grayscale vector of the "image layer space", its sub-pixel coordinate along the decomposition direction of the "image layer space"; S4: matching, according to that sub-pixel coordinate, the sequence number of the current grayscale vector in the direction perpendicular to the decomposition direction to obtain the light slice center coordinate corresponding to that grayscale vector; S5: traversing the light slice center coordinates of all grayscale vectors to obtain the light slice center of the pixel-level light strip image.

Further, step S1 of the above high-precision light slice center extraction method based on weighted dimensionality reduction decomposition specifically includes: S11: acquiring a pixel-level light strip image and, from it, obtaining the main direction of the light slice and the row and column directions of the image's "image layer space"; S12: from the main direction of the light slice and the "image layer space" row and column directions, obtaining the angle between the main direction and the "image layer space" row direction and the angle between the main direction and the "image layer space" column direction; S13: when the angle between the main direction of the light slice and the "image layer space" row direction is not smaller than the angle between the main direction and the "image layer space" column direction, taking the column direction of the image as the dimensionality reduction decomposition direction of the "image layer space" and the row direction as the direction perpendicular to it; when the angle to the row direction is smaller than the angle to the column direction, taking the row direction of the image as the dimensionality reduction decomposition direction of the "image layer space" and the column direction as the direction perpendicular to it.
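The S13 angle comparison can be sketched as a small helper; the function name and the degrees-from-row-axis convention are illustrative assumptions, not from the patent:

```python
def choose_decomposition_direction(main_dir_deg):
    """Pick the dimensionality-reduction decomposition axis (step S13).

    main_dir_deg: angle of the light-slice main direction, in degrees
    measured from the image row axis.  Returns a pair
    (decomposition direction, perpendicular direction).
    """
    # Acute angle between the main direction and the row axis.
    angle_to_row = abs(main_dir_deg) % 180
    angle_to_row = min(angle_to_row, 180 - angle_to_row)
    # The column axis sits 90 degrees from the row axis.
    angle_to_col = 90 - angle_to_row
    if angle_to_row >= angle_to_col:
        # "not smaller than": decompose along the column direction.
        return "column", "row"
    return "row", "column"
```

For example, a stripe at 80 degrees to the row axis (nearly aligned with the columns) is decomposed along the column direction, while a stripe at 10 degrees is decomposed along the row direction.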

Further, step S3 of the above high-precision light slice center extraction method based on weighted dimensionality reduction decomposition specifically includes: obtaining, from the grayscale vectors of the "image layer space" of the pixel-level light strip image, the sub-pixel coordinate of each grayscale vector along the decomposition direction of the "image layer space", as in the formula:

Here, Pk is the sub-pixel coordinate of the k-th grayscale vector along the decomposition direction of the "image layer space"; n is the number of sub-pixel coordinates in the retained region; Pki is the i-th sub-pixel coordinate point of the k-th grayscale vector along the "image layer space" decomposition direction; Aki is the weight corresponding to the grayscale value of that point; rki is the grayscale value of that point; a, b, and c are the first, second, and third preset coefficients for computing grayscale-value weights; Th is the grayscale baseline constant; T is the preset grayscale threshold; σ²(·) is the variance function; Max(·) is the maximum-value function; VG is the total grayscale value of the whole image; RA(T) is the probability that a pixel is assigned to class A; VA(T) is the average grayscale of the pixels assigned to class A; RB(T) is the probability that a pixel is assigned to class B; VB(T) is the average grayscale of the pixels assigned to class B; Ri is the probability that a pixel has grayscale i; and gi is the number of pixels with grayscale i in the image.
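A formula consistent with the variable definitions above can be written as follows. This is a hedged reconstruction: the between-class variance portion is standard Otsu thresholding, while the quadratic weight in a, b, c and the clipping at Th are assumptions rather than the patent's exact expressions:

```latex
P_k = \frac{\sum_{i=1}^{n} A_{ki}\, P_{ki}}{\sum_{i=1}^{n} A_{ki}},
\qquad
A_{ki} =
\begin{cases}
a\, r_{ki}^{2} + b\, r_{ki} + c, & r_{ki} \ge T_h,\\
0, & r_{ki} < T_h,
\end{cases}
\\[6pt]
T = \arg\max_{T}\, \sigma^{2}(T),
\qquad
\sigma^{2}(T) = R_A(T)\, R_B(T)\, \bigl(V_A(T) - V_B(T)\bigr)^{2},
\\[6pt]
R_A(T) = \sum_{i=0}^{T} R_i, \quad
R_B(T) = \sum_{i=T+1}^{255} R_i, \quad
V_A(T) = \frac{1}{R_A(T)} \sum_{i=0}^{T} i\, R_i, \quad
V_B(T) = \frac{1}{R_B(T)} \sum_{i=T+1}^{255} i\, R_i, \quad
R_i = \frac{g_i}{\sum_j g_j}.
```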

Further, step S4 of the above high-precision light slice center extraction method based on weighted dimensionality reduction decomposition specifically includes: according to the sub-pixel coordinate of the grayscale vector along the decomposition direction of the "image layer space", matching the sequence number of the current grayscale vector in the direction perpendicular to the decomposition direction to obtain the light slice center coordinate corresponding to that grayscale vector, as in the formula:

Pk(Rk, Ck) = (Pk, k)

Here, Pk(Rk, Ck) is the light slice center coordinate corresponding to the k-th grayscale vector; Rk is the coordinate of the light slice center along the decomposition direction of the "image layer space"; Ck is its coordinate in the direction perpendicular to the decomposition direction; Pk is the sub-pixel coordinate of the k-th grayscale vector along the decomposition direction of the "image layer space"; and k is the sequence number of the current grayscale vector in the direction perpendicular to the decomposition direction.

Further, step S5 of the above high-precision light slice center extraction method based on weighted dimensionality reduction decomposition specifically includes: traversing the light slice center coordinates corresponding to all grayscale vectors to obtain the light slice center of the pixel-level light strip image, as in the formula:

{m} = {Pk(Rk, Ck) | k = 1, 2, ..., H}

Here, {m} is the light slice center of the pixel-level light strip image, and H is the number of grayscale vectors.
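Steps S2 through S5 can be sketched end to end as a single pass over column vectors. This is a minimal illustration under stated assumptions: Otsu's method stands in for the σ²/Max threshold machinery defined above, the retained region is simply the pixels above the threshold, raw intensities serve as the weights, and all function names are invented for the sketch:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick T maximizing the between-class variance
    sigma^2(T) = R_A(T) * R_B(T) * (V_A(T) - V_B(T))^2."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    R = hist / hist.sum()                  # R_i: probability of gray level i
    levels = np.arange(256)
    best_T, best_var = 0, -1.0
    for T in range(1, 255):
        RA, RB = R[:T].sum(), R[T:].sum()
        if RA == 0 or RB == 0:
            continue
        VA = (levels[:T] * R[:T]).sum() / RA
        VB = (levels[T:] * R[T:]).sum() / RB
        var = RA * RB * (VA - VB) ** 2
        if var > best_var:
            best_var, best_T = var, T
    return best_T

def light_slice_centers(img):
    """Decompose the grayscale image into column vectors (S2) and return
    the weighted-centroid center (row, col) for each column (S3-S5)."""
    T = otsu_threshold(img)
    centers = []
    for k in range(img.shape[1]):          # k: perpendicular-direction index
        v = img[:, k].astype(float)
        keep = v > T                       # retained region of the vector
        if not keep.any():
            continue
        rows = np.nonzero(keep)[0]
        w = v[rows]                        # simple intensity weights (assumed)
        centers.append(((rows * w).sum() / w.sum(), k))
    return centers
```

Because the vectors are processed in order, the perpendicular-direction sequence number k is matched by the loop index itself, which corresponds to the traversal described in steps S4 and S5.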

The present invention further provides a system comprising the following modules: an image acquisition module configured to acquire a pixel-level light strip image and, from it, obtain the dimensionality reduction decomposition direction of the image's "image layer space" and the direction perpendicular to that decomposition direction; a vector acquisition module configured to convert the pixel-level light strip image to grayscale and decompose it along the decomposition direction of the "image layer space" to obtain the grayscale vectors of the "image layer space"; a decomposition coordinate module configured to obtain, from the grayscale vectors of the "image layer space", the sub-pixel coordinate of each grayscale vector along the decomposition direction of the "image layer space"; a coordinate matching module configured to match, according to that sub-pixel coordinate, the sequence number of the current grayscale vector in the direction perpendicular to the decomposition direction to obtain the light slice center coordinate corresponding to that grayscale vector; and an image light slice center module configured to traverse the light slice center coordinates of all grayscale vectors to obtain the light slice center of the pixel-level light strip image.

Further, the image acquisition module of the above system is specifically configured to: acquire a pixel-level light strip image and, from it, obtain the main direction of the light slice and the row and column directions of the image's "image layer space"; obtain, from these, the angle between the main direction of the light slice and the "image layer space" row direction and the angle between the main direction and the "image layer space" column direction; when the angle between the main direction and the row direction is not smaller than the angle between the main direction and the column direction, take the column direction of the image as the dimensionality reduction decomposition direction of the "image layer space" and the row direction as the direction perpendicular to it; and when the angle to the row direction is smaller than the angle to the column direction, take the row direction as the dimensionality reduction decomposition direction of the "image layer space" and the column direction as the direction perpendicular to it.

Further, the decomposition coordinate module of the above system is specifically configured to obtain, from the grayscale vectors of the "image layer space" of the pixel-level light strip image, the sub-pixel coordinate of each grayscale vector along the decomposition direction of the "image layer space", as in the formula:

Here, Pk is the sub-pixel coordinate of the k-th grayscale vector along the decomposition direction of the "image layer space"; n is the number of sub-pixel coordinates in the retained region; Pki is the i-th sub-pixel coordinate point of the k-th grayscale vector along the "image layer space" decomposition direction; Aki is the weight corresponding to the grayscale value of that point; rki is the grayscale value of that point; a, b, and c are the first, second, and third preset coefficients for computing grayscale-value weights; Th is the grayscale baseline constant; T is the preset grayscale threshold; σ²(·) is the variance function; Max(·) is the maximum-value function; VG is the total grayscale value of the whole image; RA(T) is the probability that a pixel is assigned to class A; VA(T) is the average grayscale of the pixels assigned to class A; RB(T) is the probability that a pixel is assigned to class B; VB(T) is the average grayscale of the pixels assigned to class B; Ri is the probability that a pixel has grayscale i; and gi is the number of pixels with grayscale i in the image.

Further, the coordinate matching module of the above system is specifically configured to match, according to the sub-pixel coordinate of the grayscale vector along the decomposition direction of the "image layer space", the sequence number of the current grayscale vector in the direction perpendicular to the decomposition direction, obtaining the light slice center coordinate corresponding to that grayscale vector, as in the formula:

Pk(Rk, Ck) = (Pk, k)

Here, Pk(Rk, Ck) is the light slice center coordinate corresponding to the k-th grayscale vector; Rk is the coordinate of the light slice center along the decomposition direction of the "image layer space"; Ck is its coordinate in the direction perpendicular to the decomposition direction; Pk is the sub-pixel coordinate of the k-th grayscale vector along the decomposition direction of the "image layer space"; and k is the sequence number of the current grayscale vector in the direction perpendicular to the decomposition direction.

Further, the image light-slice center module of the above system is specifically configured to: traverse the light-slice center coordinates corresponding to all gray vectors to obtain the light-slice center of the pixel-level light-stripe image, as in the formula:

{m} = {Pk(Rk, Ck) | k = 1, 2, ..., H}

where {m} is the light-slice center of the pixel-level light-stripe image, and H is the number of gray vectors.

Implementing the method and system for high-precision extraction of the light-slice center based on weighted dimensionality-reduction decomposition provided by the present invention has the following beneficial effects:

A dimensionality-reduction decomposition model of the two-dimensional gray matrix of the "image layer space" is constructed. By decomposing this matrix into an ordered "gray vector pool", the problem of extracting the two-dimensional coordinates of the light-slice center in the "image layer space" is transformed into the high-precision computation of the center points of the vectors in the "gray vector pool" and the matching of vector serial numbers, which reduces the difficulty of light-slice center extraction.

The gray-distribution characteristics of each gray vector in the "gray vector pool" are studied, and a high-precision computation model of the gray center point of each vector is constructed, improving the accuracy of light-slice center extraction.

Based on the orderliness of the "gray vector pool", a single loop traversal of the pool suffices to match each vector's serial number with its gray center point. Compared with two-dimensional matrix operations (convolution, thinning, morphological algorithms), this reduces the computational load of light-slice center extraction and thus improves its real-time performance.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will be further described below with reference to the accompanying drawings and embodiments, in which:

FIG. 1 is a flow chart of the method for high-precision extraction of the light-slice center based on weighted dimensionality-reduction decomposition provided by the present invention;

FIG. 2 is a schematic diagram of selecting the dimensionality-reduction decomposition direction of the "image layer space" provided by the present invention;

FIG. 3 is a schematic diagram of the dimensionality-reduction decomposition of the "image layer space" in the column direction (perpendicular to the main direction of the light slice) provided by the present invention;

FIG. 4 is a schematic diagram of the gray distribution of gray vectors provided by the present invention;

wherein (a) is a schematic diagram of the gray distribution of gray vector V1, and (b) is a schematic diagram of the gray distribution of gray vector VK;

FIG. 5 is a schematic diagram of the "gray-weight mapping curve" provided by the present invention;

FIG. 6 is a schematic diagram of the implementation of the gray-vector serial-number matching method in the "gray vector pool", taking VK as an example, provided by the present invention.

DETAILED DESCRIPTION

In order to provide a clearer understanding of the technical features, purposes and effects of the present invention, specific embodiments of the present invention are now described in detail with reference to the accompanying drawings.

FIG. 1 shows the method for high-precision extraction of the light-slice center based on weighted dimensionality-reduction decomposition of this embodiment. In this embodiment, the method includes:

S1: acquiring a pixel-level light-stripe image, and obtaining from it the dimensionality-reduction decomposition direction of the "image layer space" of the image and the direction perpendicular to that decomposition direction;

Specifically, step S1 includes: S11: acquiring a pixel-level light-stripe image, and obtaining from it the main direction of the light slice and the row and column directions of the "image layer space" of the image; S12: obtaining, from these, the angle between the main direction of the light slice and the "image layer space" row direction and the angle between the main direction of the light slice and the "image layer space" column direction; S13: when the angle to the row direction is not smaller than the angle to the column direction, taking the column direction of the image as the dimensionality-reduction decomposition direction of its "image layer space" and the row direction as the direction perpendicular to it; when the angle to the row direction is smaller than the angle to the column direction, taking the row direction of the image as the dimensionality-reduction decomposition direction of its "image layer space" and the column direction as the direction perpendicular to it;
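As an illustration only (not part of the claimed method), the direction selection of S11–S13 can be sketched as follows, assuming the stripe's main direction is already available as an angle measured from the image row axis; the function name and this input representation are hypothetical:

```python
def choose_decomposition_direction(stripe_angle_deg):
    """Select the dimensionality-reduction direction of the 'image layer space'.

    stripe_angle_deg: angle of the light-stripe main direction from the image
    row axis, in degrees (assumed input representation).
    Per step S13: alpha >= beta -> decompose by columns, else by rows.
    """
    alpha = abs(stripe_angle_deg) % 180
    alpha = min(alpha, 180 - alpha)        # angle to the row direction
    beta = 90 - alpha                      # angle to the column direction
    return "column" if alpha >= beta else "row"
```

For a stripe almost aligned with the column axis (say 85° from the row axis), alpha ≥ beta holds and the column direction is selected, matching the rule stated above.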

S2: graying the pixel-level light-stripe image, and decomposing it along the dimensionality-reduction decomposition direction of its "image layer space" to obtain the gray vectors of the "image layer space" of the image;

S3: obtaining, from the gray vectors of the "image layer space" of the pixel-level light-stripe image, the sub-pixel coordinate of each gray vector along the decomposition direction of the "image layer space";

Specifically, step S3 includes: obtaining, from the gray vectors of the "image layer space" of the pixel-level light-stripe image, the sub-pixel coordinate of each gray vector along the decomposition direction of the "image layer space", as in the formula:

Pk = (Ak1·Pk1 + Ak2·Pk2 + ... + Akn·Pkn) / (Ak1 + Ak2 + ... + Akn), σ2(Th) = Max(σ2(T)), σ2(T) = RA(T)(VA(T)-VG)2 + RB(T)(VB(T)-VG)2

where Pk is the sub-pixel coordinate, along the decomposition direction of the "image layer space", of the k-th gray vector; n is the number of sub-pixel coordinates in the retained region; Pki is the i-th sub-pixel coordinate point of the k-th gray vector along the "image layer space" decomposition direction; Aki is the weight corresponding to the gray value of the i-th sub-pixel coordinate point of the k-th gray vector along the "image layer space" decomposition direction; rki is the gray value of that point; a, b and c are the first, second and third preset coefficients for computing the gray-value weights; Th is the gray baseline constant; T is the preset gray threshold; σ2(·) is the variance function; Max(·) is the maximum-value function; VG is the total gray value of the entire image; RA(T) is the probability that a pixel is assigned to class A; VA(T) is the average gray value of the pixels assigned to class A; RB(T) is the probability that a pixel is assigned to class B; VB(T) is the average gray value of the pixels assigned to class B; Ri is the probability that a pixel has gray level i; and gi is the number of pixels with gray level i in the image;

S4: according to the sub-pixel coordinate of each gray vector along the decomposition direction of the "image layer space", matching the serial number, in the direction perpendicular to the decomposition direction, of the current gray vector to obtain the light-slice center coordinate corresponding to that gray vector;

Specifically, step S4 includes: according to the sub-pixel coordinate of each gray vector along the decomposition direction of the "image layer space", matching the serial number, in the direction perpendicular to the decomposition direction, of the current gray vector to obtain the light-slice center coordinate corresponding to that gray vector, as in the formula:

Pk(Rk, Ck) = (Pk, k)

where Pk(Rk, Ck) is the light-slice center coordinate corresponding to the k-th gray vector; Rk is the "image layer space" light-slice center coordinate along the decomposition direction; Ck is the "image layer space" light-slice center coordinate perpendicular to the decomposition direction; Pk is the sub-pixel coordinate of the k-th gray vector along the decomposition direction of the "image layer space"; and k is the serial number of the current gray vector in the direction perpendicular to the decomposition direction;

S5: traversing the light-slice center coordinates corresponding to all gray vectors to obtain the light-slice center of the pixel-level light-stripe image;

Specifically, step S5 includes: traversing the light-slice center coordinates corresponding to all gray vectors to obtain the light-slice center of the pixel-level light-stripe image, as in the formula:

{m} = {Pk(Rk, Ck) | k = 1, 2, ..., H}

where {m} is the light-slice center of the pixel-level light-stripe image, and H is the number of gray vectors.

This embodiment provides a system, comprising the following modules:

an image acquisition module, configured to: acquire a pixel-level light-stripe image, and obtain from it the dimensionality-reduction decomposition direction of the "image layer space" of the image and the direction perpendicular to that decomposition direction;

Specifically, the image acquisition module of the above system is configured to: acquire a pixel-level light-stripe image, and obtain from it the main direction of the light slice and the row and column directions of the "image layer space" of the image; obtain, from these, the angle between the main direction of the light slice and the "image layer space" row direction and the angle between the main direction of the light slice and the "image layer space" column direction; when the angle to the row direction is not smaller than the angle to the column direction, take the column direction of the image as the dimensionality-reduction decomposition direction of its "image layer space" and the row direction as the direction perpendicular to it; when the angle to the row direction is smaller than the angle to the column direction, take the row direction of the image as the dimensionality-reduction decomposition direction of its "image layer space" and the column direction as the direction perpendicular to it;

a vector acquisition module, configured to: gray the pixel-level light-stripe image, and decompose it along the dimensionality-reduction decomposition direction of its "image layer space" to obtain the gray vectors of the "image layer space" of the image;

a decomposition coordinate module, configured to: obtain, from the gray vectors of the "image layer space" of the pixel-level light-stripe image, the sub-pixel coordinate of each gray vector along the decomposition direction of the "image layer space";

Specifically, the decomposition coordinate module of the above system is configured to: obtain, from the gray vectors of the "image layer space" of the pixel-level light-stripe image, the sub-pixel coordinate of each gray vector along the decomposition direction of the "image layer space", as in the formula:

Pk = (Ak1·Pk1 + Ak2·Pk2 + ... + Akn·Pkn) / (Ak1 + Ak2 + ... + Akn), σ2(Th) = Max(σ2(T)), σ2(T) = RA(T)(VA(T)-VG)2 + RB(T)(VB(T)-VG)2

where Pk is the sub-pixel coordinate, along the decomposition direction of the "image layer space", of the k-th gray vector; n is the number of sub-pixel coordinates in the retained region; Pki is the i-th sub-pixel coordinate point of the k-th gray vector along the "image layer space" decomposition direction; Aki is the weight corresponding to the gray value of the i-th sub-pixel coordinate point of the k-th gray vector along the "image layer space" decomposition direction; rki is the gray value of that point; a, b and c are the first, second and third preset coefficients for computing the gray-value weights; Th is the gray baseline constant; T is the preset gray threshold; σ2(·) is the variance function; Max(·) is the maximum-value function; VG is the total gray value of the entire image; RA(T) is the probability that a pixel is assigned to class A; VA(T) is the average gray value of the pixels assigned to class A; RB(T) is the probability that a pixel is assigned to class B; VB(T) is the average gray value of the pixels assigned to class B; Ri is the probability that a pixel has gray level i; and gi is the number of pixels with gray level i in the image;

a coordinate matching module, configured to: according to the sub-pixel coordinate of each gray vector along the decomposition direction of the "image layer space", match the serial number, in the direction perpendicular to the decomposition direction, of the current gray vector to obtain the light-slice center coordinate corresponding to that gray vector;

Specifically, the coordinate matching module of the above system is configured to: according to the sub-pixel coordinate of each gray vector along the decomposition direction of the "image layer space", match the serial number, in the direction perpendicular to the decomposition direction, of the current gray vector to obtain the light-slice center coordinate corresponding to that gray vector, as in the formula:

Pk(Rk, Ck) = (Pk, k)

where Pk(Rk, Ck) is the light-slice center coordinate corresponding to the k-th gray vector; Rk is the "image layer space" light-slice center coordinate along the decomposition direction; Ck is the "image layer space" light-slice center coordinate perpendicular to the decomposition direction; Pk is the sub-pixel coordinate of the k-th gray vector along the decomposition direction of the "image layer space"; and k is the serial number of the current gray vector in the direction perpendicular to the decomposition direction;

an image light-slice center module, configured to: traverse the light-slice center coordinates corresponding to all gray vectors to obtain the light-slice center of the pixel-level light-stripe image;

Specifically, the image light-slice center module of the above system is configured to: traverse the light-slice center coordinates corresponding to all gray vectors to obtain the light-slice center of the pixel-level light-stripe image, as in the formula:

{m} = {Pk(Rk, Ck) | k = 1, 2, ..., H}

where {m} is the light-slice center of the pixel-level light-stripe image, and H is the number of gray vectors.

In some embodiments, the above method for high-precision extraction of the light-slice center based on weighted dimensionality-reduction decomposition can also be implemented through the following steps:

Step 1. Selection of the dimensionality-reduction decomposition direction of the "image layer space", assuming the resolution of the "image layer space" is W×H:

1) determine the main direction of the light slice;

2) analyze the angle α between the main direction of the light slice and the "image layer space" row direction;

3) analyze the angle β between the main direction of the light slice and the "image layer space" column direction;

4) compare α and β: when α ≥ β, decompose the "image layer space" along the column direction; when α < β, decompose it along the row direction, and gray the current image. The following takes the column-wise decomposition of the "image layer space" as an example; the row-wise decomposition is analogous. A schematic diagram of the selection of the decomposition direction of the "image layer space" is shown in FIG. 2;

Step 2. Column-wise dimensionality-reduction decomposition of the "image layer space":

Assuming the resolution of the "image layer space" is W×H, the "image layer space" can be decomposed by column vectors into a "gray vector pool" consisting of H gray vectors {Vk | k = 1, 2, ..., H}, as shown in FIG. 3. Taking gray vector Vk as an example, the light-slice center coordinate on this vector is:

PAk(Rk, Ck) = (Pk, k)

where Rk is the column-direction coordinate of the "image layer space" light-slice center; Ck is the row-direction coordinate of the "image layer space" light-slice center; Pk is the column-direction sub-pixel coordinate, in the "image layer space", of gray vector Vk in the "gray vector pool"; and k is the row-direction serial number of gray vector Vk;

Step 3. High-precision computation of the center points of the gray vectors in the "gray vector pool":

After the dimensionality-reduction segmentation of the "image layer space", the extraction of the "image layer space" light-slice center is transformed from a two-dimensional matrix operation into the computation of the centers of the gray curves of H gray vectors. Considering that the gray distribution of a light slice is highest at its center and falls off toward both sides, and in order to suppress gray interference from ambient light, this embodiment proposes a vector-adaptive threshold algorithm that sets a gray baseline constant Th for each vector. Th is computed as follows:

Let gi be the number of pixels with gray level i in the image, giving g0, g1, g2, ..., g255. The probability that any point in the gray diagram has gray level i is then:

Ri = gi / (g0 + g1 + ... + g255)

and:

R0 + R1 + ... + R255 = 1

Let the threshold be T; the points of the image are divided by gray level into two classes A and B, where A ∈ (0, T) and B ∈ (T+1, 255). For an arbitrarily chosen point, the probability that it is assigned to class A is RA(T), and the average gray value of the points assigned to class A is VA(T):

RA(T) = R0 + R1 + ... + RT, VA(T) = (0·R0 + 1·R1 + ... + T·RT) / RA(T)

Similarly, the probability that a point is assigned to class B is RB(T), and the average gray value of the points assigned to class B is VB(T):

RB(T) = RT+1 + ... + R255 = 1 - RA(T), VB(T) = ((T+1)·RT+1 + ... + 255·R255) / RB(T)

The total gray value VG of the entire image is:

VG = 0·R0 + 1·R1 + ... + 255·R255 = RA(T)·VA(T) + RB(T)·VB(T)

The between-class variance is therefore:

σ2(T) = RA(T)(VA(T)-VG)2 + RB(T)(VB(T)-VG)2

The gray baseline constant Th can then be determined as the threshold that maximizes this variance:

σ2(Th) = Max(σ2(T)), T ∈ (0, 255)
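As an illustration only, the baseline Th of step 3 can be sketched in code by scanning all candidate thresholds and keeping the one with the largest between-class variance σ2(T), exactly as derived above; 8-bit gray values are assumed and the function name is hypothetical:

```python
def gray_baseline(pixels):
    """Return Th: the T in [0, 255] maximizing the between-class variance
    sigma^2(T) = RA(T)(VA(T)-VG)^2 + RB(T)(VB(T)-VG)^2."""
    n = len(pixels)
    hist = [0] * 256
    for v in pixels:
        hist[v] += 1
    R = [g / n for g in hist]                # R_i: probability of gray level i
    VG = sum(i * R[i] for i in range(256))   # total (mean) gray value
    best_T, best_var = 0, -1.0
    for T in range(256):
        RA = sum(R[:T + 1])                  # probability of class A
        RB = sum(R[T + 1:])                  # probability of class B
        if RA == 0.0 or RB == 0.0:
            continue                         # one class empty: variance undefined
        VA = sum(i * R[i] for i in range(T + 1)) / RA        # mean of class A
        VB = sum(i * R[i] for i in range(T + 1, 256)) / RB   # mean of class B
        var = RA * (VA - VG) ** 2 + RB * (VB - VG) ** 2      # sigma^2(T)
        if var > best_var:
            best_T, best_var = T, var
    return best_T
```

For a bimodal input the returned Th falls at the lower mode, separating stripe pixels from background, which is the role the baseline plays in the two cases discussed next.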

Further, two cases are discussed:

1) In the first case, no light-slice distribution exists on the gray vector:

Taking gray vector V1 in FIG. 3 as an example, FIG. 4(a) is a schematic diagram of the gray distribution of V1. Since V1 carries no light-slice gray distribution, its entire gray distribution lies below the gray baseline constant Th. In this case, it is considered that no valid center point exists on V1, i.e., no light-slice distribution exists on gray vector V1;

2) In the second case, a light-slice distribution exists on the gray vector:

Taking gray vector VK in FIG. 3 as an example, FIG. 4(b) is a schematic diagram of the gray distribution of VK. Since a light-slice distribution exists on VK, a valid center point is considered to exist on VK. Therefore, the parts of the gray distribution of VK greater than the gray baseline constant Th are retained and the parts smaller than Th are removed, processed by the following rule: each gray value rki is kept when rki > Th and discarded when rki ≤ Th;

where rki is the gray value of the i-th "image layer space" column-direction sub-pixel coordinate point in the gray distribution diagram of gray vector VK;

According to the gray-mapping principle, different weights Aki are set for different gray values rki. A higher gray value indicates a more reliable "image layer space" column-direction sub-pixel coordinate point, so the higher the gray value, the larger the assigned Aki. To increase the weight given to high brightness, this embodiment proposes a "gray-weight mapping curve", as shown in FIG. 5; this curve is only one embodiment of the mapping and may be changed according to the scene and conditions. The function is defined on rki ∈ (Th, 255), with Th = 200 in this example,

where Aki is the weight corresponding to the gray value of the i-th "image layer space" column-direction sub-pixel coordinate point in the gray distribution diagram of gray vector VK;

Further, the column-direction sub-pixel coordinate Pk of gray vector VK in the "image layer space" can be computed by:

Pk = (Ak1·Pk1 + Ak2·Pk2 + ... + Akn·Pkn) / (Ak1 + Ak2 + ... + Akn)

where n is the number of sub-pixel coordinates in the retained region, Pki is the i-th "image layer space" column-direction sub-pixel coordinate point in the gray distribution diagram of gray vector VK, and Pk is the column-direction sub-pixel coordinate RK, in the "image layer space", of the light-slice center of VK;
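A minimal sketch of this per-vector center computation, assuming a simple linear gray-to-weight mapping as a stand-in for the FIG. 5 curve (the patent's actual curve and its coefficients a, b, c are not reproduced here); both function names are illustrative:

```python
def weight_of(gray, Th=200):
    """Placeholder for the 'gray-weight mapping curve': zero at or below Th,
    growing linearly with gray value on (Th, 255]."""
    if gray <= Th:
        return 0.0
    return (gray - Th) / (255 - Th)          # A_ki in (0, 1]

def subpixel_center(gray_vector, Th=200):
    """Weighted centroid P_k of one gray vector: keep points with gray > Th,
    then P_k = sum(A_ki * P_ki) / sum(A_ki) over the retained coordinates."""
    num = den = 0.0
    for p, r in enumerate(gray_vector):      # p: coordinate P_ki, r: gray r_ki
        a = weight_of(r, Th)
        num += a * p
        den += a
    return num / den if den > 0 else None    # None: case 1, no stripe on V_k
```

A symmetric bright profile yields its geometric center; a vector entirely below Th returns None, matching case 1 above.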

Step 4. Serial-number matching of gray vectors in the "gray vector pool":

Based on steps 1, 2 and 3, the center points of all gray vectors in the "gray vector pool" can be computed by loop traversal. Taking Vk as an example, when the light-slice center sub-pixel coordinate Rk of Vk is obtained, the row-direction index of the current gray vector is matched at the same time, and the light-slice center coordinate (Pk, k) is obtained using the following formula. FIG. 6 is a schematic diagram of the serial-number matching method in the "gray vector pool", taking Vk as an example:

Pk(Rk, Ck) = (Pk, k)

After the traversal of the H gray vectors of the "gray vector pool", the set {m} is obtained, finally constructing a highly real-time extraction model of the "image layer space" light-slice center coordinates:

{m} = {Pk(Rk, Ck) | k = 1, 2, ..., H}

Meanwhile, since the gray baseline constant Th is used in computing the center points, the influence of noise such as ambient light on the accuracy of light-slice center extraction is suppressed.
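The traversal of step 4 over a column-wise "gray vector pool" can be sketched as follows; a plain gray-minus-baseline weight stands in for the weighted model described above, and the function name and fixed Th are illustrative:

```python
def stripe_centers(image, Th=200):
    """Traverse the column-wise 'gray vector pool' of a grayscale image
    (given as a list of rows) and collect {m} = {(P_k, k) | k = 1..H},
    skipping vectors that carry no stripe (all gray <= Th)."""
    n_cols = len(image[0])
    centers = []
    for k in range(n_cols):                  # one gray vector V_k per column
        vk = [row[k] for row in image]
        num = den = 0.0
        for p, r in enumerate(vk):
            if r > Th:                       # retain only gray values above Th
                w = r - Th                   # placeholder weight A_ki
                num += w * p
                den += w
        if den > 0:                          # a valid center exists on V_k
            centers.append((num / den, k))   # P_k(R_k, C_k) = (P_k, k)
    return centers
```

Because each column is visited exactly once, the cost is linear in the number of pixels, which is the real-time advantage over two-dimensional convolution or thinning claimed above.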

The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the specific embodiments described; those embodiments are merely illustrative, not restrictive. Guided by the present invention, a person of ordinary skill in the art may devise many further forms without departing from the purpose of the present invention and the scope protected by the claims, and all of these fall within the protection of the present invention.

Claims (10)

1. A method for extracting the optical slice center with high precision based on weighted dimension reduction decomposition, characterized by comprising the following steps:
S1: acquiring a light bar pixel level image, and obtaining, according to the light bar pixel level image, the dimension reduction decomposition direction of the image layer space of the light bar pixel level image and the vertical direction of that decomposition direction;
S2: graying the light bar pixel level image, and decomposing the light bar pixel level image along the dimension reduction decomposition direction of the image layer space to obtain the gray vectors of the image layer space of the light bar pixel level image;
S3: obtaining, according to each gray vector of the image layer space of the light bar pixel level image, the sub-pixel coordinate of the gray vector in the decomposition direction of the image layer space;
S4: matching, according to the sub-pixel coordinate of the gray vector in the decomposition direction of the image layer space, the serial number of the current gray vector in the vertical direction of the decomposition direction, to obtain the optical slice center coordinate corresponding to the gray vector;
S5: traversing the optical slice center coordinates corresponding to all the gray vectors to obtain the optical slice center of the light bar pixel level image.
2. The method for extracting the optical slice center with high precision based on weighted dimension reduction decomposition according to claim 1, wherein step S1 specifically comprises:
S11: acquiring a light bar pixel level image, and acquiring a light slice main direction, an image layer space row direction and an image layer space column direction of the light bar pixel level image according to the light bar pixel level image;
S12: obtaining an included angle between the main direction of the light slice and the row direction of the image layer space and an included angle between the main direction of the light slice and the column direction of the image layer space according to the main direction of the light slice, the row direction of the image layer space of the light strip pixel-level image and the column direction of the image layer space;
S13: when the included angle between the main direction of the light slice and the row direction of the image layer space is not smaller than the included angle between the main direction of the light slice and the column direction of the image layer space, the column direction of the light bar pixel level image is taken as the dimension-reducing decomposition direction of the image layer space of the light bar pixel level image, and the row direction of the light bar pixel level image is taken as the vertical direction of the dimension-reducing decomposition direction of the image layer space of the light bar pixel level image; when the included angle between the main direction of the light slice and the row direction of the image layer space is smaller than the included angle between the main direction of the light slice and the column direction of the image layer space, the row direction of the light bar pixel level image is taken as the dimension reduction decomposition direction of the image layer space of the light bar pixel level image, and the column direction of the light bar pixel level image is taken as the vertical direction of the dimension reduction decomposition direction of the image layer space of the light bar pixel level image.
3. The method for extracting the optical slice center with high precision based on weighted dimension reduction decomposition according to claim 1, wherein step S3 specifically comprises: obtaining, according to the gray vector of the image layer space of the light bar pixel level image, the sub-pixel coordinate of the gray vector in the decomposition direction of the image layer space, as given by the formula:
σ²(T)=RA(T)(VA(T)−VG)²+RB(T)(VB(T)−VG)²
Wherein Pk is the sub-pixel coordinate of the kth gray vector in the decomposition direction of the "image layer space", n is the number of sub-pixel coordinates existing in the reserved area, Pki is the i-th sub-pixel coordinate point of the kth gray vector in the decomposition direction of the "image layer space", Aki is the weight corresponding to the gray value of that i-th sub-pixel coordinate point, Rki is the gray value of that i-th sub-pixel coordinate point, a is the 1st preset coefficient for calculating the gray value weight, b is the 2nd preset coefficient for calculating the gray value weight, c is the 3rd preset coefficient for calculating the gray value weight, Th is the gray baseline constant, T is the preset gray threshold, σ²(·) is the variance function, max(·) is the maximum function, VG is the global average gray value of the whole image, RA(T) is the probability that a pixel is classified into class A, VA(T) is the average gray level of the pixels assigned to class A, RB(T) is the probability that a pixel is classified into class B, VB(T) is the average gray level of the pixels assigned to class B, Ri is the probability that the gray level of a pixel is i, and gi is the number of pixels in the image with gray level i.
4. The method for extracting the optical slice center with high precision based on weighted dimension reduction decomposition according to claim 1, wherein step S4 specifically comprises: matching, according to the sub-pixel coordinate of the gray vector in the decomposition direction of the image layer space, the serial number of the current gray vector in the vertical direction of the decomposition direction to obtain the optical slice center coordinate corresponding to the gray vector, with the formula:
Pk(Rk,Ck)=(Pk,k)
Wherein Pk(Rk,Ck) is the optical slice center coordinate corresponding to the kth gray vector, Rk is the coordinate of the optical slice center in the decomposition direction of the "image layer space", Ck is the coordinate of the optical slice center in the vertical direction of the decomposition direction of the "image layer space", Pk is the sub-pixel coordinate of the kth gray vector in the decomposition direction of the "image layer space", and k is the serial number of the current gray vector in the vertical direction of the decomposition direction.
5. The method for extracting the optical slice center with high precision based on weighted dimension reduction decomposition according to claim 1, wherein step S5 specifically comprises: traversing the center coordinates of the optical slice corresponding to all the gray vectors to obtain the center of the optical slice of the optical stripe pixel level image, wherein the formula is as follows:
{m}={Pk(Rk,Ck)|k=1,2,...,H}
Wherein { m } is the center of the optical slice of the optical stripe pixel level image, and H is the number of gray vectors.
6. A system, the system comprising the following modules:
An image acquisition module configured to: acquiring a light bar pixel level image, and obtaining, according to the light bar pixel level image, the dimension reduction decomposition direction of the image layer space of the light bar pixel level image and the vertical direction of that decomposition direction;
A vector acquisition module configured to: graying the light bar pixel level image, and decomposing the light bar pixel level image according to the dimension-reducing decomposition direction of the image layer space of the light bar pixel level image to obtain a gray vector of the image layer space of the light bar pixel level image;
A decomposition coordinates module configured to: according to the gray vector of the image layer space of the light bar pixel level image, sub-pixel coordinates of the gray vector in the decomposition direction of the image layer space are obtained;
The coordinate matching module is configured to: according to the sub-pixel coordinates of the gray level vector in the decomposition direction of the image layer space, matching the sequence number of the current gray level vector in the vertical direction of the decomposition direction to obtain the center coordinates of the optical slice corresponding to the gray level vector;
An image light slice center module configured to: traversing the optical slice center coordinates corresponding to all the gray vectors to obtain the optical slice center of the optical stripe pixel level image.
7. The system of claim 6, wherein the image acquisition module is specifically configured to:
acquiring a light bar pixel level image, and acquiring a light slice main direction, an image layer space row direction and an image layer space column direction of the light bar pixel level image according to the light bar pixel level image;
Obtaining an included angle between the main direction of the light slice and the row direction of the image layer space and an included angle between the main direction of the light slice and the column direction of the image layer space according to the main direction of the light slice, the row direction of the image layer space of the light strip pixel-level image and the column direction of the image layer space;
When the included angle between the main direction of the light slice and the row direction of the image layer space is not smaller than the included angle between the main direction of the light slice and the column direction of the image layer space, the column direction of the light bar pixel level image is taken as the dimension-reducing decomposition direction of the image layer space of the light bar pixel level image, and the row direction of the light bar pixel level image is taken as the vertical direction of the dimension-reducing decomposition direction of the image layer space of the light bar pixel level image; when the included angle between the main direction of the light slice and the row direction of the image layer space is smaller than the included angle between the main direction of the light slice and the column direction of the image layer space, the row direction of the light bar pixel level image is taken as the dimension reduction decomposition direction of the image layer space of the light bar pixel level image, and the column direction of the light bar pixel level image is taken as the vertical direction of the dimension reduction decomposition direction of the image layer space of the light bar pixel level image.
8. The system of claim 6, wherein the decomposition coordinate module is specifically configured to: obtain, according to the gray vector of the image layer space of the light bar pixel level image, the sub-pixel coordinate of the gray vector in the decomposition direction of the image layer space, as given by the formula:
σ²(T)=RA(T)(VA(T)−VG)²+RB(T)(VB(T)−VG)²
Wherein Pk is the sub-pixel coordinate of the kth gray vector in the decomposition direction of the "image layer space", n is the number of sub-pixel coordinates existing in the reserved area, Pki is the i-th sub-pixel coordinate point of the kth gray vector in the decomposition direction of the "image layer space", Aki is the weight corresponding to the gray value of that i-th sub-pixel coordinate point, Rki is the gray value of that i-th sub-pixel coordinate point, a is the 1st preset coefficient for calculating the gray value weight, b is the 2nd preset coefficient for calculating the gray value weight, c is the 3rd preset coefficient for calculating the gray value weight, Th is the gray baseline constant, T is the preset gray threshold, σ²(·) is the variance function, max(·) is the maximum function, VG is the global average gray value of the whole image, RA(T) is the probability that a pixel is classified into class A, VA(T) is the average gray level of the pixels assigned to class A, RB(T) is the probability that a pixel is classified into class B, VB(T) is the average gray level of the pixels assigned to class B, Ri is the probability that the gray level of a pixel is i, and gi is the number of pixels in the image with gray level i.
9. The system of claim 6, wherein the coordinate matching module is specifically configured to: match, according to the sub-pixel coordinate of the gray vector in the decomposition direction of the image layer space, the serial number of the current gray vector in the vertical direction of the decomposition direction to obtain the optical slice center coordinate corresponding to the gray vector, with the formula:
Pk(Rk,Ck)=(Pk,k)
Wherein Pk(Rk,Ck) is the optical slice center coordinate corresponding to the kth gray vector, Rk is the coordinate of the optical slice center in the decomposition direction of the "image layer space", Ck is the coordinate of the optical slice center in the vertical direction of the decomposition direction of the "image layer space", Pk is the sub-pixel coordinate of the kth gray vector in the decomposition direction of the "image layer space", and k is the serial number of the current gray vector in the vertical direction of the decomposition direction.
10. The system of claim 6, wherein the image light slice center module is specifically configured to: traversing the center coordinates of the optical slice corresponding to all the gray vectors to obtain the center of the optical slice of the optical stripe pixel level image, wherein the formula is as follows:
{m}={Pk(Rk,Ck)|k=1,2,...,H}
Wherein { m } is the center of the optical slice of the optical stripe pixel level image, and H is the number of gray vectors.
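The threshold-selection formula recited in claims 3 and 8 maximizes the between-class variance σ²(T), i.e. the classical Otsu criterion; a hedged sketch of that selection step (hypothetical function name) follows:

```python
import numpy as np

def otsu_threshold(gray_image):
    """Select the gray threshold T maximizing the between-class variance
    sigma^2(T) = RA(T)(VA(T)-VG)^2 + RB(T)(VB(T)-VG)^2, as recited in
    claims 3 and 8 (the classical Otsu criterion)."""
    hist = np.bincount(gray_image.ravel(), minlength=256).astype(float)
    r = hist / hist.sum()                 # Ri: probability of gray level i
    levels = np.arange(256, dtype=float)
    v_g = np.sum(r * levels)              # VG: global average gray value
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        r_a = r[:t].sum()                 # RA(T): probability of class A
        r_b = 1.0 - r_a                   # RB(T): probability of class B
        if r_a == 0.0 or r_b == 0.0:
            continue                      # one class empty: variance undefined
        v_a = np.sum(r[:t] * levels[:t]) / r_a   # VA(T): class-A mean gray
        v_b = np.sum(r[t:] * levels[t:]) / r_b   # VB(T): class-B mean gray
        var = r_a * (v_a - v_g) ** 2 + r_b * (v_b - v_g) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

OpenCV's cv2.threshold with the THRESH_OTSU flag implements the same criterion and would normally be used in practice.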
CN202410729634.4A 2024-06-06 2024-06-06 High-precision extraction method and system for optical slice center based on weighted dimensionality reduction decomposition Pending CN118746257A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410729634.4A CN118746257A (en) 2024-06-06 2024-06-06 High-precision extraction method and system for optical slice center based on weighted dimensionality reduction decomposition

Publications (1)

Publication Number Publication Date
CN118746257A true CN118746257A (en) 2024-10-08

Family

ID=92922455


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119540243A (en) * 2025-01-23 2025-02-28 合肥埃科光电科技股份有限公司 Pole piece burr detection method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10105716A (en) * 1996-09-30 1998-04-24 Honda Motor Co Ltd Detection of position of light-sectioned picture
CN104658015A (en) * 2015-01-21 2015-05-27 沈阳理工大学 Visual locating method for automatic laser slicing in corn breeding
US20150153161A1 * 2012-10-12 2015-06-04 Nireco Corporation Shape measuring method and shape measuring device
CN205209443U (en) * 2015-11-16 2016-05-04 上海汽车集团股份有限公司 Electricity drives three -dimensional flame shape measurement system in sliced optics engine of laser
CN112633197A (en) * 2020-12-28 2021-04-09 宁波江丰生物信息技术有限公司 Method and system for tissue region identification of fluorescence section
CN115272504A (en) * 2022-07-13 2022-11-01 暨南大学 Signal extraction and image reconstruction method for wide-field illumination confocal microscopic imaging system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jia Qianqian; Wang Boxiong; Luo Xiuzhi: "Sub-pixel extraction of the light stripe center in light-sectioning topography measurement", Optics and Precision Engineering, No. 02, 15 February 2010 (2010-02-15) *


Similar Documents

Publication Publication Date Title
CN111640157B (en) Checkerboard corner detection method based on neural network and application thereof
CN105956582B (en) A kind of face identification system based on three-dimensional data
CN107203973B (en) Sub-pixel positioning method for center line laser of three-dimensional laser scanning system
CN112116619B (en) Multi-line structured light system stripe center line extraction method based on structural constraint
CN106683173A (en) A Method of Improving the Density of 3D Reconstruction Point Cloud Based on Neighborhood Block Matching
CN115187676A (en) High-precision line laser three-dimensional reconstruction calibration method
CN103136525A (en) High-precision positioning method for special-shaped extended target by utilizing generalized Hough transformation
CN106570887A (en) Adaptive Mean Shift target tracking method based on LBP features
CN116763295B (en) Livestock scale measuring method, electronic equipment and storage medium
CN115267724B (en) Position re-identification method of mobile robot capable of estimating pose based on laser radar
CN118746257A (en) High-precision extraction method and system for optical slice center based on weighted dimensionality reduction decomposition
CN115239882A (en) A 3D reconstruction method of crops based on low-light image enhancement
CN112946679B (en) Unmanned aerial vehicle mapping jelly effect detection method and system based on artificial intelligence
CN116188544A (en) Point cloud registration method combining edge features
CN114580559A (en) Speed measuring method based on monocular vision system
CN112950562A (en) Fastener detection algorithm based on line structured light
CN112241964A (en) Light strip center extraction method for line structured light non-contact measurement
CN113628170B (en) Laser line extraction method and system based on deep learning
CN113409334B (en) Centroid-based structured light angle point detection method
CN115100446A (en) A Similarity Measurement Method for Matching SAR and Visible Light Remote Sensing Images
CN117710603B (en) Unmanned aerial vehicle image three-dimensional building modeling method under constraint of linear geometry
CN114283199B (en) Dynamic scene-oriented dotted line fusion semantic SLAM method
CN113674360B (en) A covariant-based method for line structured light and light plane calibration
CN115717887A (en) Fast star point extraction method based on gray distribution histogram
CN117314989A (en) Line structure light measurement method and system and line structure light camera device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination