
CN115773720A - A device and method for measuring underwater target size based on optical vision


Info

Publication number
CN115773720A
CN115773720A (application CN202211663782.8A)
Authority
CN
China
Prior art keywords
underwater
spot
underwater target
target
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211663782.8A
Other languages
Chinese (zh)
Inventor
李小斌
韩彪
杨景川
谢彬
杨梦宁
陈开润
李亚涛
汪涵
何鑫
向刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Mihong Technology Co ltd
Chongqing University
Xidian University
Institute of Deep Sea Science and Engineering of CAS
Original Assignee
Chongqing Mihong Technology Co ltd
Chongqing University
Xidian University
Institute of Deep Sea Science and Engineering of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Mihong Technology Co ltd, Chongqing University, Xidian University, Institute of Deep Sea Science and Engineering of CAS filed Critical Chongqing Mihong Technology Co ltd
Priority to CN202211663782.8A priority Critical patent/CN115773720A/en
Publication of CN115773720A publication Critical patent/CN115773720A/en
Pending legal-status Critical Current


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30: Assessment of water resources

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a device and method for measuring the size of an underwater target based on optical vision. The device comprises an underwater camera, a plurality of underwater lasers, a turntable and a data processing module. The method comprises: establishing a mapping relationship between the distance D from the spot image to the underwater camera and the perimeter L of the spot pattern; rotating the turntable from the starting position D1 of the underwater target to its end position D2, recording the rotation angle α of the turntable, and capturing spot images on the surface of the underwater target at D1 and D2; processing the spot images corresponding to D1 and D2 with the data processing module to obtain the spot-image perimeters L1 and L2; obtaining D1 and D2 from the mapping relationship between D and L; and calculating the actual size M of the underwater target. The invention can be used to measure the size of underwater targets and features simple operation, good adaptability to varying water quality, high precision, a simple structure and low complexity, making it easy to popularize and apply.

Description

Device and method for measuring underwater target size based on optical vision

Technical Field

The invention relates to the field of underwater optical measurement, and in particular to a device and method for measuring the size of an underwater target based on optical vision.

Background Art

Measuring the size of underwater targets is essential for applications such as subsea pipeline inspection, deep-sea engineering operations, and lake and dam inspection. Because radio waves attenuate severely in water, traditional electromagnetic detection methods are not suited to the underwater environment. At present, underwater target size measurement relies mainly on acoustic and optical methods. Acoustic measurement determines the size of a target in water by means of sound waves and has the advantage of a long working range; however, because of the comparatively long wavelength of sound waves, it is limited in measurement accuracy. Optical methods, by contrast, offer high precision and have attracted wide attention in recent years.

Optics-based methods for measuring underwater target size mainly include time-of-flight (TOF) measurement, underwater binocular stereo vision, underwater laser scanning imaging, and underwater active laser dot-matrix projection imaging. The TOF method obtains distance information by measuring the time difference between laser emission and echo detection, then builds a point cloud by scanning to measure the size of the target; however, this method is easily affected by seawater backscattering, which limits its range of application.

Underwater binocular measurement establishes the three-dimensional structure of the target through stereo vision and then derives its size information; see Zhang Honglong, Chen Tao, Zhuang Peiqin, et al., "Research on an Underwater Three-dimensional Measurement System Based on Stereo Vision". The drawback of this method is that the measurement result is strongly affected by water turbidity and illumination, and the measurement distance and stability are limited.

Underwater laser scanning imaging uses a camera to capture the image of a line laser projected onto the target surface and then obtains three-dimensional information of the surface by triangulation; its drawbacks are that the working distance is generally short and the result is strongly affected by water quality. See Xie Xiaomeng, "Analysis of Depth Detection Accuracy of Underwater Target Laser Scanning".

An underwater laser dot-matrix projection imaging measurement system consists mainly of an array of mutually parallel lasers and a camera. In operation, the laser array projects a dot matrix of laser spots into the water, the camera extracts the dot-matrix features, and the distance and size information of the target is then obtained; see Yang Mengning, Han Biao, Zhang Bing, et al., "An Underwater Measurement Device and Underwater Measurement Method". This method helps to measure the size of distant targets, but the size of target that can be measured is limited by the camera's field of view.

Summary of the Invention

In view of the above problems in the prior art, the first technical problem to be solved by the invention is to provide a device for measuring underwater targets.

The second technical problem to be solved is to provide a measurement method for underwater targets with high accuracy and a wide range of application.

To solve the first technical problem, the invention adopts the following technical solution: a device for measuring the size of an underwater target based on optical vision, comprising an underwater camera, J underwater lasers, a turntable and a data processing module, where J ≥ 2;

the J underwater lasers are used to form a spot array on the surface of the underwater target; the J beams emitted by the J underwater lasers are parallel to one another, and the beam direction of the underwater lasers is parallel to the optical axis of the underwater camera;

the underwater camera is used to photograph the spot pattern formed by the beams on the surface of the underwater target;

the turntable is used to support the underwater camera and the J underwater lasers;

the data processing module is used to identify the spot pattern captured by the underwater camera, obtain from it the distance between the underwater camera and the underwater target by means of the imaging principle that nearer objects appear larger and farther objects appear smaller, and calculate the size of the underwater target in combination with the rotation angle of the turntable.

Preferably, the data processing module comprises a region segmentation module and a measurement calculation module connected in sequence;

the region segmentation module is a Mask R-CNN segmentation model, and the measurement calculation module uses the region information obtained by the region segmentation module to calculate the size of the underwater target.

To solve the second technical problem, the invention adopts the following technical solution: a method for measuring the size of an underwater target based on optical vision, characterized in that it uses the device for measuring the size of an underwater target based on optical vision according to claim 1, the measurement method comprising the following steps:

S1: Let D be the distance between the spot image and the underwater camera and L the perimeter of the spot pattern, and establish the mapping relationship between D and L as follows:

D = a × L^b + c;  (1)

where a, b and c are constants;

S2: the turntable rotates from the starting position D1 of the underwater target to the end position D2 of the underwater target, and the rotation angle α of the turntable is recorded; at the same time, the underwater camera captures spot images on the surface of the underwater target at D1 and D2;

S3: the data processing module processes the spot image corresponding to D1 to obtain the spot-image perimeter L1, and processes the spot image corresponding to D2 to obtain the spot-image perimeter L2;

S4: L1 and L2 are substituted into formula (1) to obtain D1 and D2 respectively;

S5: calculate the actual size M of the underwater target as follows:

M = √(D1² + D2² − 2 × D1 × D2 × cos α);  (2)

where α is the rotation angle of the turntable.
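Formula (2) is the law of cosines applied to the triangle formed by the turntable's rotation centre and the two ends of the target. A minimal sketch of this step, assuming D1, D2 and M share the same unit (the function name and signature are illustrative, not part of the patent):

```python
import math

def target_size(d1: float, d2: float, alpha_deg: float) -> float:
    """Chord length between the two surface points seen at distances d1 and d2
    while the turntable sweeps through alpha_deg degrees (formula (2))."""
    alpha = math.radians(alpha_deg)
    return math.sqrt(d1 ** 2 + d2 ** 2 - 2.0 * d1 * d2 * math.cos(alpha))
```

With the values reported in the embodiment below, target_size(168.91, 591.54, 45) evaluates to about 486.98 cm.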

Preferably, the specific steps for establishing the mapping relationship between D and L in S1 are as follows:

S11: acquire W spot images with distance labels;

S12: take the spot images as input to the data processing module, obtain the pixel coordinates of the spots, and use the pixel coordinates to calculate the perimeter L of each spot image, where L is given by:

L = Σ_{i=1}^{r} √((x_{i+1} − x_i)² + (y_{i+1} − y_i)²), with (x_{r+1}, y_{r+1}) = (x_1, y_1);  (3)

where (x_i, y_i) denotes the pixel coordinates of the i-th spot and r denotes the number of spots;

S13: form a data set from the perimeters L of all spot images with distance labels D, with D and L in one-to-one correspondence;

S14: fit D and L in the data set by the least-squares method to obtain the functional mapping relationship between D and L.
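S14 does not name a fitting routine; one straightforward realisation of the least-squares fit of formula (1) is SciPy's curve_fit, sketched below. The initial guess p0 and the assumption that perimeters are in pixels and distances in centimetres are illustrative, not requirements of the patent.

```python
import numpy as np
from scipy.optimize import curve_fit

def distance_model(L, a, b, c):
    # Formula (1): D = a * L**b + c
    return a * np.power(L, b) + c

def fit_distance_mapping(perimeters_px, distances_cm):
    """Least-squares fit of the constants a, b and c from the labelled
    calibration pairs (L, D) assembled in S11-S13."""
    L = np.asarray(perimeters_px, dtype=float)
    D = np.asarray(distances_cm, dtype=float)
    p0 = (1e5, -1.0, 0.0)  # rough starting point; D is expected to fall as L grows
    (a, b, c), _ = curve_fit(distance_model, L, D, p0=p0, maxfev=10000)
    return a, b, c
```

In the embodiment described later the fitted mapping comes out as D = 99556.57 × L^(−0.9753) (formula (4)), in which the constant term does not appear.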

Preferably, the specific steps in S12 for obtaining the pixel coordinates of the spots and calculating the perimeter L of the spot image are as follows:

S121: take the W spot images acquired in S11 as the training set;

S122: annotate each of the W spot images with a geometric figure to obtain W spot images with graphic annotation labels, each annotated spot image being one training sample;

S123: let t = 1;

S124: select the t-th training sample from W as the input of the Mask R-CNN neural network model, and output the mask segmentation image t' corresponding to the t-th training sample;

S125: perform contour fitting and edge detection on t' to obtain the predicted image t'' of the t-th training sample;

S126: set a loss threshold and calculate the loss between t'' and t;

when the loss is smaller than the loss threshold, the trained Mask R-CNN neural network model is obtained and the next step is executed; otherwise, the parameters of the Mask R-CNN neural network model are updated by back-propagation, t = t + 1, and the procedure returns to S124;

S127: annotate the spot image to be predicted with a geometric figure, use it as the input of the trained Mask R-CNN neural network model, and output the pixel coordinates of the spots enclosing the spot image to be predicted;

S128: use the spot pixel coordinates obtained in S127 to calculate the perimeter L of the spot image to be predicted.
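S127 and S128 stop at "spot pixel coordinates"; the sketch below shows one plausible post-processing path, assumed rather than prescribed by the patent: reduce each binary mask produced by the segmentation model to its centroid, then sum the side lengths of the closed polygon through the centroids (formula (3)). Ordering the centroids by angle around their mean point is an assumption made here so that the polygon follows the outline of the spot array rather than crossing itself.

```python
import numpy as np

def spot_centroids(masks: np.ndarray) -> np.ndarray:
    """Centroids (x, y) of binary spot masks of shape (r, H, W), one mask per spot."""
    pts = []
    for m in masks:
        ys, xs = np.nonzero(m)
        pts.append((xs.mean(), ys.mean()))
    return np.asarray(pts)

def spot_pattern_perimeter(centroids: np.ndarray) -> float:
    """Perimeter L of the closed polygon through the spot centroids (formula (3)),
    visiting the centroids in angular order around their mean point."""
    mean = centroids.mean(axis=0)
    order = np.argsort(np.arctan2(centroids[:, 1] - mean[1], centroids[:, 0] - mean[0]))
    ring = centroids[order]
    steps = np.diff(np.vstack([ring, ring[:1]]), axis=0)  # close the polygon
    return float(np.hypot(steps[:, 0], steps[:, 1]).sum())
```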

Compared with the prior art, the invention has at least the following advantages:

1. The existing Mask R-CNN neural network is used to segment the spot regions on the underwater target precisely, which gives well-defined regions with accurate pixel coordinates; the perimeter of the spot image is then obtained from the pixel coordinates, and the mapping relationship between the camera-to-target distance D and the spot-image perimeter L yields accurate starting and end positions D1 and D2 of the beam spots, from which the size of the underwater target is calculated. Such a measurement process adapts better to different water-quality environments and makes the measurement results more accurate.

2. Mounting the camera and lasers on a turntable so that they can rotate, combined with triangulation, enlarges the measurable range of target sizes and makes the system better suited to measuring large underwater targets.

3. Using deep learning to detect the position of the spots effectively suppresses the influence of water-column backscattering on spot detection and helps improve the detection accuracy of the spot positions.

Brief Description of the Drawings

Fig. 1 is a schematic diagram of the underwater measurement device of the invention.

Fig. 2 is a schematic diagram of the underwater measurement method of the invention.

Fig. 3 is a schematic diagram of the measurement system in an embodiment of the invention.

Fig. 4 is a schematic diagram of the underwater distance-measurement arrangement adopted in an embodiment of the invention.

Fig. 5 shows the test equipment composed of an underwater camera and four underwater lasers in an embodiment of the invention.

Fig. 6 shows the recognition of underwater light spots in an embodiment of the invention.

Fig. 7 shows the measured distance of test point A in an embodiment of the invention.

Fig. 8 shows the measured distance of test point B in an embodiment of the invention.

Detailed Description of the Embodiments

The invention is described in further detail below.

Referring to Figs. 1 and 2, a device for measuring the size of an underwater target based on optical vision comprises an underwater camera, J underwater lasers, a turntable and a data processing module, where J ≥ 2;

the J underwater lasers are used to form a spot array on the surface of the underwater target; the J beams emitted by the J underwater lasers are parallel to one another, and the beam direction of the underwater lasers is parallel to the optical axis of the underwater camera;

the underwater camera is used to photograph the spot pattern formed by the beams on the surface of the underwater target;

the turntable is used to support the underwater camera and the J underwater lasers, so that the J underwater lasers and the underwater camera can rotate as a whole;

the data processing module is used to identify the spot pattern captured by the underwater camera, obtain from it the distance between the underwater camera and the underwater target by means of the imaging principle that nearer objects appear larger and farther objects appear smaller, and calculate the size of the underwater target in combination with the rotation angle of the turntable.

The data processing module comprises a region segmentation module and a measurement calculation module connected in sequence. The data processing module analyses the laser-spot-array images captured by the underwater camera: using the imaging principle that nearer objects appear larger and farther objects appear smaller, it obtains the distance between the camera and the target and, combined with the rotation angle of the turntable, calculates the size of the target.

The region segmentation module is a Mask R-CNN segmentation model; the Mask R-CNN segmentation model is prior art. The measurement calculation module uses the region information obtained by the region segmentation module to calculate the size of the underwater target.
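Since the Mask R-CNN model itself is treated as prior art, one common way to instantiate and run such a model is through torchvision, sketched below. The checkpoint name spot_maskrcnn.pth, the image file name, the 0.5 thresholds and the two-class setup (background plus laser spot) are assumptions made for illustration, not details taken from the patent.

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Two classes: background and "laser spot"; weights come from the training loop of S121-S126.
model = maskrcnn_resnet50_fpn(weights=None, num_classes=2)
model.load_state_dict(torch.load("spot_maskrcnn.pth", map_location="cpu"))
model.eval()

img = read_image("spot_image.png").float() / 255.0  # (3, H, W), values in [0, 1]
with torch.no_grad():
    pred = model([img])[0]

keep = pred["scores"] > 0.5                      # drop low-confidence detections
masks = (pred["masks"][keep, 0] > 0.5).numpy()   # (r, H, W) binary masks, one per spot
```

The resulting masks can then be passed to the centroid and perimeter routines sketched earlier to obtain the spot-image perimeter L.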

In a specific implementation, referring to Fig. 3, the measurement system of this embodiment consists mainly of a four-laser array, a camera, a turntable, a computer, test point A and test point B. The four lasers and the camera are placed in a watertight housing, as shown in Fig. 4; the laser emission directions and the optical axis of the camera are parallel to one another, so that when the lasers illuminate the target surface, the laser spots can be observed by the camera. The turntable is placed in the water and can rotate by a commanded angle, so that the assembly of the laser array and the camera rotates in the water as a whole. The computer is connected to the turntable and to the laser-and-camera assembly by watertight cables; it reads the rotation angle of the turntable and the laser-spot images acquired by the camera, and calculates the size of the target. Test point A and test point B are used to simulate the starting position and the end position of the target to be measured, respectively.

As shown in Fig. 5, the underwater camera, the underwater lasers and the corresponding mounting bracket are placed in water for calibration, and spot images are collected at distances from 1.5 m to 11 m at intervals of 0.05 m; these spot images form the data set.

S1: Let D be the distance between the spot image and the underwater camera and L the perimeter of the spot pattern, and establish the mapping relationship between D and L as follows:

D = a × L^b + c;  (1)

where a, b and c are constants;

the specific steps for establishing the mapping relationship between D and L in S1 are as follows:

S11: acquire W spot images with distance labels;

S12: take the spot images as input to the data processing module, obtain the pixel coordinates of the spots, and use the pixel coordinates to calculate the perimeter L of each spot image, where L is given by:

L = Σ_{i=1}^{r} √((x_{i+1} − x_i)² + (y_{i+1} − y_i)²), with (x_{r+1}, y_{r+1}) = (x_1, y_1);  (3)

where (x_i, y_i) denotes the pixel coordinates of the i-th spot and r denotes the number of spots;

S121: take the W spot images acquired in S11 as the training set;

S122: annotate each of the W spot images with a geometric figure to obtain W spot images with graphic annotation labels, each annotated spot image being one training sample;

S123: let t = 1;

S124: select the t-th training sample from W as the input of the Mask R-CNN neural network model, and output the mask segmentation image t' corresponding to the t-th training sample;

S125: perform contour fitting and edge detection on t' (both contour fitting and edge detection are prior art) to obtain the predicted image t'' of the t-th training sample;

S126: set a loss threshold and calculate the loss between t'' and t;

when the loss is smaller than the loss threshold, the trained Mask R-CNN neural network model is obtained and the next step is executed; otherwise, the parameters of the Mask R-CNN neural network model are updated by back-propagation, t = t + 1, and the procedure returns to S124;

S127: annotate the spot image to be predicted with a geometric figure, use it as the input of the trained Mask R-CNN neural network model, and output the pixel coordinates of the spots enclosing the spot image to be predicted;

S128: use the spot pixel coordinates obtained in S127 to calculate the perimeter L of the spot image to be predicted.

S13: form a data set from the perimeters L of all spot images with distance labels D, with D and L in one-to-one correspondence;

S14: fit D and L in the data set by the least-squares method (the least-squares method is prior art) to obtain the functional mapping relationship between D and L;

S2: the turntable rotates from the starting position D1 of the underwater target to the end position D2 of the underwater target, and the rotation angle α of the turntable is recorded; at the same time, the underwater camera captures spot images on the surface of the underwater target at D1 and D2;

S3: the data processing module processes the spot image corresponding to D1 to obtain the spot-image perimeter L1, and processes the spot image corresponding to D2 to obtain the spot-image perimeter L2;

S4: L1 and L2 are substituted into formula (1) to obtain D1 and D2 respectively;

S5: calculate the actual size M of the underwater target as follows:

M = √(D1² + D2² − 2 × D1 × D2 × cos α);  (2)

where α is the rotation angle of the turntable.

In a specific implementation, as shown in Fig. 6, the weights of the model are trained with the Mask R-CNN deep learning algorithm to identify and segment the spot regions accurately, and the relationship between the distance and the perimeter of the figure formed by the spot centres in the camera image is fitted as in formula (4), where L is the perimeter of the measured spot region in pixels and D is the predicted distance between the target plane and the camera in cm:

D = 99556.57 × L^(−0.9753)  (4)

Finally, as shown in Fig. 3, a test system is set up in water, with test point A and test point B simulating the starting and end positions of the target to be measured. The distance between them is about 490 cm, i.e. the target size simulated by test points A and B is about 490 cm. The turntable rotates from test point A to test point B through an angle of 45°. As shown in Figs. 7 and 8, the measured distance from the camera to test point A is 168.91 cm and that to test point B is 591.54 cm. The simulated target size between test point A and test point B is then calculated to be 486.98 cm, a relative error of about 0.62%, which verifies the feasibility of the method.
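The figures reported in this embodiment can be checked directly against formula (2); a short verification sketch, taking the 490 cm nominal spacing of the test points as the reference value:

```python
import math

d_a, d_b = 168.91, 591.54          # measured distances to test points A and B, in cm
alpha = math.radians(45.0)         # turntable rotation from A to B

size = math.sqrt(d_a ** 2 + d_b ** 2 - 2 * d_a * d_b * math.cos(alpha))
print(round(size, 2))                               # ~486.98 cm
print(round(abs(490.0 - size) / 490.0 * 100, 2))    # relative error ~0.62 %
```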

Finally, it is noted that the above embodiments are intended only to illustrate the technical solution of the invention and not to limit it. Although the invention has been described in detail with reference to preferred embodiments, those of ordinary skill in the art should understand that modifications or equivalent replacements may be made to the technical solution of the invention without departing from its spirit and scope, and all such modifications and replacements shall fall within the scope of the claims of the invention.

Claims (5)

1. A device for measuring the size of an underwater target based on optical vision, characterized by comprising an underwater camera, J underwater lasers, a turntable and a data processing module, where J ≥ 2;
the J underwater lasers are used to form a spot array on the surface of the underwater target; the J beams emitted by the J underwater lasers are parallel to one another, and the beam direction of the underwater lasers is parallel to the optical axis of the underwater camera;
the underwater camera is used to photograph the spot pattern formed by the beams on the surface of the underwater target;
the turntable is used to support the underwater camera and the J underwater lasers;
the data processing module is used to identify the spot pattern captured by the underwater camera, obtain from it the distance between the underwater camera and the underwater target by means of the imaging principle that nearer objects appear larger and farther objects appear smaller, and calculate the size of the underwater target in combination with the rotation angle of the turntable.

2. The device for measuring the size of an underwater target based on optical vision according to claim 1, characterized in that the data processing module comprises a region segmentation module and a measurement calculation module connected in sequence;
the region segmentation module is a Mask R-CNN segmentation model, and the measurement calculation module uses the region information obtained by the region segmentation module to calculate the size of the underwater target.

3. A method for measuring the size of an underwater target based on optical vision, characterized in that it uses the device for measuring the size of an underwater target based on optical vision according to claim 1, the measurement method comprising the following steps:
S1: let D be the distance between the spot image and the underwater camera and L the perimeter of the spot pattern, and establish the mapping relationship between D and L as follows:
D = a × L^b + c;  (1)
where a, b and c are constants;
S2: the turntable rotates from the starting position D1 of the underwater target to the end position D2 of the underwater target, and the rotation angle α of the turntable is recorded; at the same time, the underwater camera captures spot images on the surface of the underwater target at D1 and D2;
S3: the data processing module processes the spot image corresponding to D1 to obtain the spot-image perimeter L1, and processes the spot image corresponding to D2 to obtain the spot-image perimeter L2;
S4: L1 and L2 are substituted into formula (1) to obtain D1 and D2 respectively;
S5: calculate the actual size M of the underwater target as follows:
M = √(D1² + D2² − 2 × D1 × D2 × cos α);
where α is the rotation angle of the turntable.

4. The method for measuring the size of an underwater target based on optical vision according to claim 3, characterized in that the specific steps for establishing the mapping relationship between D and L in S1 are as follows:
S11: acquire W spot images with distance labels;
S12: take the spot images as input to the data processing module, obtain the pixel coordinates of the spots, and use the pixel coordinates to calculate the perimeter L of each spot image, where L is given by:
L = Σ_{i=1}^{r} √((x_{i+1} − x_i)² + (y_{i+1} − y_i)²), with (x_{r+1}, y_{r+1}) = (x_1, y_1);
where (x_i, y_i) denotes the pixel coordinates of the i-th spot and r denotes the number of spots;
S13: form a data set from the perimeters L of all spot images with distance labels D, with D and L in one-to-one correspondence;
S14: fit D and L in the data set by the least-squares method to obtain the functional mapping relationship between D and L.

5. The method for measuring the size of an underwater target based on optical vision according to claim 4, characterized in that the specific steps in S12 for obtaining the pixel coordinates of the spots and calculating the spot-image perimeter L are as follows:
S121: take the W spot images acquired in S11 as the training set;
S122: annotate each of the W spot images with a geometric figure to obtain W spot images with graphic annotation labels, each annotated spot image being one training sample;
S123: let t = 1;
S124: select the t-th training sample from W as the input of the Mask R-CNN neural network model, and output the mask segmentation image t' corresponding to the t-th training sample;
S125: perform contour fitting and edge detection on t' to obtain the predicted image t'' of the t-th training sample;
S126: set a loss threshold and calculate the loss between t'' and t;
when the loss is smaller than the loss threshold, the trained Mask R-CNN neural network model is obtained and the next step is executed; otherwise, the parameters of the Mask R-CNN neural network model are updated by back-propagation, t = t + 1, and the procedure returns to S124;
S127: annotate the spot image to be predicted with a geometric figure, use it as the input of the trained Mask R-CNN neural network model, and output the pixel coordinates of the spots enclosing the spot image to be predicted;
S128: use the spot pixel coordinates obtained in S127 to calculate the perimeter L of the spot image to be predicted.
CN202211663782.8A 2022-12-23 2022-12-23 A device and method for measuring underwater target size based on optical vision Pending CN115773720A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211663782.8A CN115773720A (en) 2022-12-23 2022-12-23 A device and method for measuring underwater target size based on optical vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211663782.8A CN115773720A (en) 2022-12-23 2022-12-23 A device and method for measuring underwater target size based on optical vision

Publications (1)

Publication Number Publication Date
CN115773720A (en) 2023-03-10

Family

ID=85392872

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211663782.8A Pending CN115773720A (en) 2022-12-23 2022-12-23 A device and method for measuring underwater target size based on optical vision

Country Status (1)

Country Link
CN (1) CN115773720A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117011688A (en) * 2023-07-11 2023-11-07 广州大学 Method, system and storage medium for identifying diseases of underwater structure
CN117011688B (en) * 2023-07-11 2024-03-08 广州大学 Method, system and storage medium for identifying diseases of underwater structure
CN117970305A (en) * 2024-03-18 2024-05-03 国家海洋局南海调查技术中心(国家海洋局南海浮标中心) Underwater target distance measuring device and distance measuring method based on power adjustable laser projection

Similar Documents

Publication Publication Date Title
CN111815716B (en) Parameter calibration method and related device
CN103591939B (en) Simulated Seabed Topographic Measurement Method and Measurement Device Based on Active Stereo Vision Technology
CN115773720A (en) A device and method for measuring underwater target size based on optical vision
CN106596736B (en) A kind of real-time ultrasound phased array total focus imaging method
CN115077414B (en) Device and method for measuring bottom contour of sea surface target by underwater vehicle
CN114152935B (en) Method, device and equipment for evaluating radar external parameter calibration precision
CN112215903A (en) Method and device for detecting river flow velocity based on ultrasonic wave and optical flow method
JP6333396B2 (en) Method and apparatus for measuring displacement of mobile platform
CN110135396A (en) Recognition methods, device, equipment and the medium of surface mark
CN116990830B (en) Distance positioning method and device based on binocular and TOF, electronic equipment and medium
CN113534161B (en) Beam mirror image focusing method for remotely positioning underwater sound source
CN115824170A (en) A Method of Fusion of Photogrammetry and LiDAR to Measure Ocean Waves
KR101772220B1 (en) Calibration method to estimate relative position between a multi-beam sonar and a camera
CN116338628B (en) Laser radar sounding method and device based on learning architecture and electronic equipment
CN107132524A (en) Submarine target locus computational methods based on two identification sonars
CN110426675A (en) A kind of sound phase instrument auditory localization result evaluation method based on image procossing
Holak et al. A vision system for pose estimation of an underwater robot
CN114782556B (en) Registration method, system and storage medium of camera and lidar
Iscar et al. Towards distortion based underwater domed viewport camera calibration
WO2024160337A1 (en) Depth map generation for 2d panoramic images
CN117095038A (en) Point cloud filtering method and system for laser scanner
CN116399227A (en) A Calibration Method of Structured Light 3D Scanning System Based on MEMS Galvanometer
CN111260674A (en) Method, system and storage medium for extracting target contour from sonar image
CN114019519B (en) Trajectory recording method and equipment of a leveling ranging fish finder
Marburg et al. Extrinsic calibration of an RGB camera to a 3D imaging sonar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination