
CN111413698A - A target localization method for underwater robot search and exploration - Google Patents

A target localization method for underwater robot search and exploration

Info

Publication number: CN111413698A
Application number: CN202010143341.XA
Authority: CN (China)
Prior art keywords: underwater, target, underwater robot, sonar, layer
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 马杰, 尉浩然, 余逸飞, 刘克中, 张煜
Current Assignee: Wuhan University of Technology WUT (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Wuhan University of Technology WUT
Application filed by Wuhan University of Technology WUT
Priority applications: CN202010143341.XA (published as CN111413698A); CN202011065672.2A (published as CN111983620B)

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 15/00 - Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S 15/02 - Systems using reflection of acoustic waves
    • G01S 15/06 - Systems determining the position data of a target
    • G01S 15/88 - Sonar systems specially adapted for specific applications
    • G01S 15/89 - Sonar systems specially adapted for mapping or imaging
    • G01S 7/00 - Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00
    • G01S 7/52 - Details of systems according to group G01S 15/00
    • G01S 7/539 - Using analysis of echo signal for target characterisation; target signature; target cross-section
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G06N 3/08 - Learning methods
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; salient regional features
    • G06V 10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V 20/00 - Scenes; scene-specific elements
    • G06V 20/10 - Terrestrial scenes
    • Y02A 90/30 - Assessment of water resources (technologies having an indirect contribution to adaptation to climate change)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The invention discloses a target localization method for underwater robot search and exploration, comprising the following steps: extracting and identifying A-KAZE feature points in sonar images of an underwater target; calculating the two-dimensional bearing of the underwater target relative to the underwater robot; calculating the elevation angle of the underwater robot; calculating the three-dimensional position of the underwater target relative to the underwater robot; and correcting the elevation angle of the underwater robot and, with it, the three-dimensional position of the underwater target point relative to the underwater robot. Based on forward-looking sonar data, the method uses a deep convolutional neural network to automatically extract and identify feature points of the underwater target and combines them with the attitude of the underwater robot, achieving precise localization of the underwater target. This enables search personnel to finely probe the position of underwater targets and makes underwater search and exploration operations reliable, efficient, and intelligent. The invention belongs to the technical field of underwater target search and exploration.

Description

A Target Localization Method for Underwater Robot Search and Exploration

Technical Field

The invention relates to the technical field of underwater target search and exploration, and in particular to a target localization method for underwater robot search and exploration.

Background Art

Underwater robots are among the most important tools for scientific research on the ocean, standing in for humans during long-duration underwater operations and work in harsh underwater environments. In complex underwater environments, underwater acoustic detection is the most reliable and effective sensing method, and it is also the underwater detection method most widely used by underwater robots. Modern sonar detection technology can be applied to search the waters of an accident site, acquire the key feature points of the underwater search target, and, by combining those feature points with the attitude information of the exploration robot, accurately localize the underwater target.

The underwater target positions obtained by existing underwater search and exploration methods lack precision and accuracy, so methods for underwater target search, exploration, and localization will remain a focal problem of scientific research now and for a long time to come.

Summary of the Invention

The purpose of the present invention is to solve at least one of the technical problems in the prior art by providing a target localization method for underwater robot search and exploration that can accurately localize underwater targets.

According to an embodiment of the present invention, a target localization method for underwater robot search and exploration comprises the following steps:

S1. Extract the A-KAZE feature points of the underwater target from sonar images collected by the forward-looking sonar of the underwater robot.

S2. Input the sonar images carrying A-KAZE features into a convolutional neural network to identify the A-KAZE feature points of the target in the sonar images.

S3. Use the geometric relationship between the target feature points and the forward-looking sonar to calculate the two-dimensional bearing of the underwater target relative to the underwater robot.

S4. Combine the two-dimensional bearing of the underwater target feature points with the attitude of the underwater robot to estimate the elevation angle θ, and use the resulting θ to calculate the three-dimensional position of the underwater target relative to the underwater robot.

S5. Use under-constrained or well-constrained feature points to correct the elevation angle θ, and thereby correct the three-dimensional position of the underwater target point relative to the underwater robot.

According to an embodiment of the present invention, extracting the A-KAZE feature points of the underwater target in step S1 comprises the following steps:

S101. Define a set of evolution times to construct a nonlinear scale space.

S102. Convert the discrete set of scales in pixel units into time units.

S103. Given the input image and a contrast factor, apply the fast explicit diffusion method.

S104. Embed the fast explicit diffusion method in a coarse-to-fine pyramid scheme.

S105. Compute the determinant of the Hessian for each sonar image.

S106. Compute the second-order derivatives with cascaded Scharr filters.

According to an embodiment of the present invention, step S2 comprises the following sub-step:

S201. Train a convolutional neural network on a sonar image dataset using the GoogLeNet architecture.

According to an embodiment of the present invention, the GoogLeNet architecture comprises five layers: the first and second layers are convolution and max-pooling layers; the third layer is an inception layer; the fourth layer is the feature layer, a fully connected layer that maps the previous output to a Dim×1 vector; and the fifth layer is a fully connected layer that maps the preceding feature layer to a 3×1 vector, which is compared with the position label using a Euclidean loss.

According to an embodiment of the present invention, step S3 comprises the following sub-step:

S301. Convert between the local Cartesian sonar coordinate system and the spherical parameter coordinate system.

According to an embodiment of the present invention, step S4 comprises the following sub-steps:

S401. Formulate the underwater target feature points and the attitudes of the underwater robot as a nonlinear least-squares factor graph optimization, where each attitude X_t carries six parameters (x, y, z, yaw, pitch, roll) and each feature point carries three parameters (x, y, z).

S402. Solve the factor graph as a nonlinear least-squares optimization.

S403. Transform the feature point l_j = (x, y, z) into the sonar frame and obtain the azimuth and range of the local coordinates (x_s, y_s, z_s).

S404. Using the monotonicity of the logarithmic function, find an initial estimate of the feature point by back-projecting the sonar measurements.

S405. Set the unknown elevation angle θ to 0, then use the underwater robot attitude X_t to transform the point from sonar Cartesian coordinates (x_s, y_s, z_s) to world Cartesian coordinates (x, y, z), which serve as the initial guess for the three-dimensional position of the feature point.

S406. Transform the predicted three-dimensional feature point position into the sonar coordinate system of attitude X_t.

According to an embodiment of the present invention, step S5 comprises the following sub-steps:

S501. Observe the elevation angle of the target feature points from different attitudes.

S502. Classify the observed feature points as under-constrained or well-constrained.

S503. To determine whether a point feature is sufficiently constrained, use a three-degree-of-freedom spherical parameterization.

S504. Taking the initial estimate l_0 of the feature point as the linearization point, expand the measurement function in a Taylor series.

S505. Reduce the optimization to a linear least-squares problem.

S506. Determine whether the optimization is constrained by the measurements.

S507. Remove under-constrained feature points entirely from the state vector.

S508. Alternatively, remove only the elevation angle of an under-constrained feature point from the state vector, and model the under-constrained feature point as a two-dimensional bearing-range point in the factor graph.

Beneficial effects: this target localization method for underwater robot search and exploration is based on forward-looking sonar data and uses a deep convolutional neural network to automatically extract and identify underwater target feature points, combining them with the attitude of the underwater robot to achieve precise localization of the underwater target. This enables search personnel to finely probe the position of underwater targets and makes underwater search and exploration operations reliable, efficient, and intelligent. The invention is applicable to the technical field of underwater target search and exploration.

Description of the Drawings

The present invention is further described below in conjunction with the accompanying drawings:

Figure 1 is a block diagram of the steps of an embodiment of the present invention;

Figure 2 shows the geometric relationship between target feature points and the forward-looking sonar in an embodiment of the present invention;

Figure 3 is the factor graph model of an embodiment of the present invention;

Figure 4 is a three-dimensional position map of the underwater target in an embodiment of the present invention;

Figure 5 is a schematic diagram of the underwater robot rotating about the z-axis in an embodiment of the present invention;

Figure 6 is the modified factor graph of an embodiment of the present invention in which under-constrained feature points are removed from the state vector;

Figure 7 is the modified factor graph of an embodiment of the present invention in which only the elevation angles of under-constrained feature points are removed from the state vector.

Detailed Description

This part describes specific embodiments of the present invention in detail; preferred embodiments are shown in the accompanying drawings. The drawings supplement the textual description so that each technical feature and the overall technical solution of the invention can be understood intuitively, but they shall not be construed as limiting the scope of protection of the invention.

In the description of the present invention, it should be understood that orientation descriptions such as up, down, front, rear, left, and right indicate orientations or positional relationships based on the drawings. They serve only to facilitate and simplify the description and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore shall not be construed as limiting the invention.

In the description of the present invention, "several" means one or more, "multiple" means two or more, and "greater than", "less than", "exceeding", etc. are understood as excluding the stated number, while "above", "below", "within", etc. are understood as including it. Where "first" and "second" appear, they serve only to distinguish technical features and shall not be understood as indicating or implying relative importance, the number of the indicated technical features, or their order.

In the description of the present invention, unless otherwise clearly defined, words such as "arrange", "install", and "connect" shall be understood in a broad sense; those skilled in the art can reasonably determine their specific meanings in combination with the specific content of the technical solution.

Referring to Figure 1, an embodiment of the present invention provides a target localization method for underwater robot search and exploration, comprising the following steps:

S1. Extract the A-KAZE feature points of the underwater target from the sonar images collected by the forward-looking sonar of the underwater robot. The extraction comprises the following sub-steps:

S101. Define a set of evolution times to construct a nonlinear scale space,

σ_i(o, s) = 2^(o + s/S), o ∈ [0 … O-1], s ∈ [0 … S-1], i ∈ [0 … M],

where o indexes the O octaves (sets of images blurred by different Gaussian kernels), s indexes the S discrete sub-levels, σ is measured in pixels, and M is the total number of filtered sonar images.

S102. Convert the discrete set of scales σ_i, given in pixel units, into time units; in the standard A-KAZE formulation this mapping is

t_i = σ_i² / 2.

S103. Given the input image and a contrast factor, apply the fast explicit diffusion (FED) method, using M-1 outer FED cycles and computing the minimum number of inner steps n for each cycle.

S104. To speed up the computation of the nonlinear scale space, embed the fast explicit diffusion method in a coarse-to-fine pyramid scheme.

The FED scheme is embedded in a coarse-to-fine pyramid decomposition. To reach the steady state as quickly as possible, cascaded FED resolves the diffusion from the coarse level to the fine level. At each octave boundary the image is smoothed and downsampled by a factor of 2, and the downsampled image is used as the starting image for the next FED cycle in the next octave.

S105. Compute the determinant of the Hessian for each filtered sonar image L_i,

L_Hessian^i = σ_i,norm² (L_xx L_yy - L_xy L_xy),

using a normalized scale factor σ_i,norm = σ_i / 2^(o_i) that accounts for the octave of each particular image in the nonlinear scale space.

S106. To compute the second-order derivatives, use cascaded Scharr filters with step size σ_i,norm.

At each evolution level i, check whether the detector response is above a predetermined threshold and is the maximum in a 3×3 pixel window. Then, for each potential maximum, check in a window of σ_i × σ_i pixels that the response exceeds the maxima of the other keypoints at levels i+1 and i-1 (directly above and below, respectively). Finally, by fitting a two-dimensional quadratic function to the determinant-of-Hessian response in a 3×3 pixel neighborhood and finding its maximum, the two-dimensional position of each keypoint can be estimated with sub-pixel accuracy.
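By way of illustration, steps S101-S106 correspond to what OpenCV's built-in AKAZE detector computes internally; the sketch below, with an assumed file name and an assumed threshold value, shows how the feature points might be extracted in practice. It is a minimal sketch, not the invention's own implementation.

```python
import cv2

# Load one forward-looking sonar frame as a grayscale image
# ("sonar_frame.png" is a placeholder file name).
img = cv2.imread("sonar_frame.png", cv2.IMREAD_GRAYSCALE)

# OpenCV's AKAZE builds the FED-based nonlinear scale space (S101-S104)
# and applies the determinant-of-Hessian detector (S105-S106).
akaze = cv2.AKAZE_create(threshold=0.001)  # detector response threshold (assumed value)

# Keypoints carry sub-pixel (x, y) positions; descriptors are binary M-LDB vectors.
keypoints, descriptors = akaze.detectAndCompute(img, None)
print(f"detected {len(keypoints)} A-KAZE feature points")
```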

S2. Input the sonar images carrying A-KAZE features into a convolutional neural network to identify the A-KAZE feature points of the target in the sonar images. The specific implementation comprises the following sub-step:

S201. Train a convolutional neural network (CNN) on a sonar image dataset using the GoogLeNet architecture.

The original GoogLeNet architecture is divided into five layers: the first and second are convolution and max-pooling layers, and the third, fourth, and fifth are inception layers.

Two modifications adapt the original network to the present invention:

(1) The penultimate (fourth) layer is a fully connected layer that maps the previous output to a Dim×1 vector, called the feature layer.

(2) The last (fifth) layer is a fully connected layer that maps the preceding feature layer to a 3×1 vector, which is compared with the position label using a Euclidean loss.
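A minimal sketch of these two modifications in PyTorch, assuming the torchvision GoogLeNet backbone; the feature dimension Dim = 256 and the single-channel input stem are illustrative assumptions, not values fixed by the invention.

```python
import torch
import torch.nn as nn
from torchvision.models import googlenet

DIM = 256  # size of the Dim x 1 feature layer (assumed value)

backbone = googlenet(weights=None, aux_logits=False)
# Adapt the stem to single-channel sonar images (assumption: grayscale input).
backbone.conv1.conv = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
# (1) Fourth layer: fully connected feature layer mapping to a Dim x 1 vector.
backbone.fc = nn.Linear(1024, DIM)

model = nn.Sequential(
    backbone,
    nn.ReLU(inplace=True),
    # (2) Fifth layer: fully connected layer mapping the feature layer to a 3 x 1 vector.
    nn.Linear(DIM, 3),
)

loss_fn = nn.MSELoss()  # Euclidean (squared L2) loss against the 3-D position label
pred = model(torch.randn(4, 1, 224, 224))  # batch of four sonar crops
loss = loss_fn(pred, torch.zeros(4, 3))    # zero labels as stand-ins
```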

S3. Use the geometric relationship between the target feature points and the forward-looking sonar to calculate the two-dimensional bearing of the underwater target relative to the underwater robot. The specific implementation comprises the following sub-step:

S301. A point parameterized in the local Cartesian sonar coordinate system is C = [x_s y_s z_s]^T. The same point can also be expressed with a spherical parameterization as Q, and the two representations convert into each other as

C = [x_s y_s z_s]^T = [r cos θ cos φ, r cos θ sin φ, r sin θ]^T,

Q = [φ r θ]^T = [atan2(y_s, x_s), sqrt(x_s² + y_s² + z_s²), atan2(z_s, sqrt(x_s² + y_s²))]^T,

where φ is the azimuth, r is the range, and θ is the elevation angle.

The range r is determined by the time of flight and the speed of sound in water. The transducer array allows the azimuth φ of a received reflection to be computed with an accuracy of better than 1°; these measurements, however, provide no information about the elevation angle θ.

Detected sonar returns reflected from surface patches lying on the same elevation arc project to the same pixel in the final sonar image, as shown in Figure 2.

Compiling all measurements within the sensor's field of view yields a grayscale polar image in which the columns correspond to the discretized azimuth space and the rows correspond to the discretized range space.

For a unit pixel σ, a mapping from pixel space to azimuth-range space can be defined; the intensity of a pixel corresponds to the intensity of the sound reflected from the elevation arc at the specified azimuth and range.

S4. Combine the two-dimensional bearing of the underwater target feature points with the attitude of the underwater robot to estimate the elevation angle θ, and use the resulting θ to calculate the three-dimensional position of the underwater target relative to the underwater robot. The specific implementation comprises the following sub-steps:

S401. Formulate the underwater target feature points and the attitudes of the underwater robot as a nonlinear least-squares factor graph optimization, where each attitude X_t carries six parameters (x, y, z, yaw, pitch, roll) and each feature point carries three parameters (x, y, z).

A factor graph is a bipartite graph in which the variable nodes of the unknowns to be optimized are connected to the factor nodes of the measurements, as shown in Figure 3.

At each time t, the attitude X_t is added to the factor graph as a new node together with the odometry measurement u_{t-1}, which provides an estimate of the motion between X_{t-1} and X_t. The bearing-range measurement m_k of the j-th feature point is added to the graph, connecting the feature point l_j to the attitude from which it was observed. Using the spherical coordinates of the frame of the "base attitude" X_b (the first attitude from which the feature point was observed), an initial estimate of the feature point's three-dimensional position is generated by first assuming a 0° elevation angle.

S402. Solve the factor graph as a nonlinear least-squares optimization,

X* = argmin_X Σ_i ||h_i(X) - z_i||²_{Σ_i},

where the state vector X = [X_0, X_1, …, L_0, L_1, …]^T contains all unknown variables: attitudes and feature points.

The i-th factor specifies a prediction function h_i(X), a measurement z_i ∈ {u_i, m_k}, and a measurement uncertainty Σ_i.
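For intuition, the sketch below shows a generic Gauss-Newton solver for problems of this form, with a numerically differentiated Jacobian. Real factor graph back ends exploit sparsity and analytic Jacobians; this is a generic illustration, not the invention's specific implementation.

```python
import numpy as np

def gauss_newton(h, z, x0, sigmas, iters=20, eps=1e-6):
    """Minimize sum_i ||h_i(x) - z_i||^2 weighted by 1/sigma_i^2.

    h(x) returns the stacked predictions of all factors; z stacks all measurements.
    """
    x = x0.astype(float).copy()
    w = 1.0 / np.asarray(sigmas)          # whitening weights Sigma^(-1/2)
    for _ in range(iters):
        r = w * (h(x) - z)                # whitened residual
        # Numerical Jacobian of the whitened residual
        J = np.zeros((r.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x); dx[j] = eps
            J[:, j] = (w * (h(x + dx) - z) - r) / eps
        delta = np.linalg.lstsq(J, -r, rcond=None)[0]
        x += delta
        if np.linalg.norm(delta) < 1e-9:
            break
    return x
```

For the invention's problem, h would stack the odometry and bearing-range prediction functions over all factors.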

For the measurement of the bearing and range m_k = (φ, r) of feature point j from attitude X_t, the prediction function is

h_i(X) = π(X_t, l_j).

S403. π first transforms the feature point l_j = (x, y, z) into the sonar frame according to attitude X_t, then obtains the azimuth φ and range r of the local coordinates (x_s, y_s, z_s) from

φ = atan2(y_s, x_s),

r = sqrt(x_s² + y_s² + z_s²).
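A sketch of the projection function π(X_t, l_j) under an assumed ZYX (yaw-pitch-roll) Euler convention; the patent does not fix a convention, so this is illustrative.

```python
import numpy as np

def rotation_zyx(yaw, pitch, roll):
    """World-from-sonar rotation matrix, ZYX Euler convention (assumed)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def project(pose, landmark):
    """pi(X_t, l_j): world landmark -> (azimuth, range) in the sonar frame."""
    x, y, z, yaw, pitch, roll = pose
    R = rotation_zyx(yaw, pitch, roll)
    ps = R.T @ (np.asarray(landmark) - np.array([x, y, z]))  # into sonar frame
    phi = np.arctan2(ps[1], ps[0])
    r = np.linalg.norm(ps)
    return phi, r
```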

S404. Using the monotonicity of the logarithmic function, find an initial estimate of each feature point by back-projecting the sonar measurements. Using the first observation of the feature, consisting of the range r and azimuth φ measurements, the back-projected sonar-frame coordinates at zero elevation are

x_s = r cos φ, y_s = r sin φ, z_s = 0.

S405. With the unknown elevation angle θ set to 0, use the underwater robot attitude X_t to transform the point from sonar Cartesian coordinates (x_s, y_s, z_s) to world Cartesian coordinates (x, y, z), which serve as the initial guess for the three-dimensional position of the feature point.

S406. Transform the predicted three-dimensional feature point position into the sonar coordinate system of attitude X_t. The corresponding back-projection function π^{-1}(X_b, m_b, θ) computes the three-dimensional position of the target feature point from the base attitude X_b, the corresponding bearing-range measurement m_b, and a supplied elevation angle θ (the elevation angle relative to the underwater robot), as shown in Figure 4.
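A companion sketch of the back-projection π^{-1}(X_b, m_b, θ) used for the zero-elevation initial guess in S404-S405; rotation_zyx is the helper defined in the previous sketch, and the example pose and measurement are illustrative.

```python
import numpy as np
# rotation_zyx() is the helper defined in the previous sketch.

def backproject(pose, phi, r, theta=0.0):
    """pi^-1(X_b, m_b, theta): (azimuth, range, elevation) -> world coordinates."""
    x, y, z, yaw, pitch, roll = pose
    ps = np.array([r * np.cos(theta) * np.cos(phi),
                   r * np.cos(theta) * np.sin(phi),
                   r * np.sin(theta)])  # point in the sonar frame
    return rotation_zyx(yaw, pitch, roll) @ ps + np.array([x, y, z])

# Initial guess for a landmark first seen at 12 m range, 20 deg bearing, theta = 0.
l0 = backproject((0.0, 0.0, 0.0, 0.0, 0.0, 0.0), np.deg2rad(20), 12.0)
```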

S5. Use under-constrained or well-constrained feature points to correct the elevation angle θ, and thereby correct the three-dimensional position of the underwater target point relative to the underwater robot. The specific implementation comprises the following sub-steps:

S501. Observe the elevation angle of the target feature points from different attitudes.

The underwater robot is commanded to perform a pure yaw rotation about the z-axis by an angle yaw, which shifts the azimuth φ of the reflections received by the forward-looking sonar. The feature point parameters (x, y, z) transform under the corresponding rotation about the z-axis:

x' = x cos(yaw) - y sin(yaw), y' = x sin(yaw) + y cos(yaw), z' = z.

As shown in Figure 5, when the attitudes are separated by pure yaw rotations, the elevation arcs have minimal overlap.

Once the feature point has been measured from multiple attitudes, its elevation angle is corrected through the following steps.

S502. Classify the observed feature points as under-constrained or well-constrained. Check whether the measurements are sufficient to constrain a point's elevation angle; if so, add the point to the factor graph as a well-constrained feature point using the standard parameterization.

S503. To determine whether a point feature is sufficiently constrained, use a three-degree-of-freedom spherical parameterization in which the state consists only of the feature point l_j.

Since the sensor attitudes are not state variables here, they are treated as constants, and the prediction function h_i(l_j) uses the latest estimates available from the overall factor graph state estimate.

S504. Taking the initial estimate l_0 of the feature point as the linearization point, expand the measurement function in a Taylor series,

h_i(l_0 + δl) ≈ h_i(l_0) + H_i δl,

where H_i is the Jacobian of h_i evaluated at l_0.

S505. Reduce the optimization to a linear least-squares problem,

δl* = argmin_{δl} Σ_i ||A_i δl - b_i||²,

where

A_i = Σ_i^{-1/2} H_i,

b_i = Σ_i^{-1/2} (z_i - h_i(l_0)).

The linearization point l_0 is taken as the first bearing-range measurement back-projected at zero elevation.

S506. Determine whether the optimization is constrained by the measurements; inspecting A^T A is the key to this determination.

If the elevation angle is completely unconstrained, the 3×3 matrix A^T A will be rank-deficient. As the elevation angle becomes more constrained, the magnitude of the smallest eigenvalue λ_3 of A^T A grows relative to the first two eigenvalues λ_1 and λ_2. A feature point must therefore satisfy the criterion λ_3/λ_1 > ρ to be considered sufficiently constrained, where ρ is a user-defined adjustable threshold. If the criterion is not met, the feature point is classified as under-constrained.

S507. Remove under-constrained feature points from the state vector so that their positions are not explicitly modeled in the optimization. As shown in Figure 6, the measurements corresponding to a feature point l_j are collected into a single non-parametric factor f_j. This factor treats the first bearing-range measurement m_b, obtained from the point's base attitude X_b, as constant, which fixes two of the point's spherical coordinates.

In each iteration of the optimization, the factor searches over the feasible elevation range by sampling elevation angles in uniform increments and selects the elevation with the lowest total reprojection error as the current prediction:

θ* = argmin_{θ∈Θ} Σ_k ||π(X_k, π^{-1}(X_b, m_b, θ)) - m_k||²_{Σ_k},

where Θ = {θ_min, θ_min+Δθ, …, θ_max-Δθ, θ_max}.

Using the measurement uncertainty Σ_k, the reprojection error is computed as a distance function between the projection of the feature point into attitude X_k and the measurement m_k.

The cost function of this factor is then the total reprojection error evaluated at the optimal elevation angle:

f_j = Σ_k ||π(X_k, π^{-1}(X_b, m_b, θ*)) - m_k||²_{Σ_k}.
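A sketch of the uniform elevation sampling performed by the non-parametric factor f_j, reusing project() and backproject() from the earlier sketches; the elevation bounds and sample count are assumed values.

```python
import numpy as np
# project() and backproject() are the sketches defined earlier.

def best_elevation(base_pose, m_b, poses, measurements, sigmas,
                   theta_min=-0.6, theta_max=0.6, n=121):
    """Grid-search the elevation minimizing the total whitened reprojection error."""
    phi_b, r_b = m_b
    best = (np.inf, 0.0)
    for theta in np.linspace(theta_min, theta_max, n):
        l = backproject(base_pose, phi_b, r_b, theta)       # candidate 3-D point
        err = 0.0
        for pose, (phi_k, r_k), (s_phi, s_r) in zip(poses, measurements, sigmas):
            phi, r = project(pose, l)
            err += ((phi - phi_k) / s_phi) ** 2 + ((r - r_k) / s_r) ** 2
        if err < best[0]:
            best = (err, theta)
    return best[1], best[0]   # (theta*, total reprojection error f_j)
```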

S508. Alternatively, remove only the elevation angle of an under-constrained feature point from the state vector, and model the point as a two-dimensional bearing-range point in the factor graph.

As shown in Figure 7, all measurements of an under-constrained feature point l_j are combined into a single joint measurement factor s_j. This joint factor is similar to the non-parametric factor f_j, except that when computing the reprojection error it uses the point's bearing and range estimates rather than the measurements from the base attitude.

It should be understood that the parts not described in detail in this specification belong to the prior art.

The embodiments of the present invention have been described in detail above in conjunction with the accompanying drawings, but the present invention is not limited to these embodiments. Various changes can be made within the scope of knowledge possessed by those of ordinary skill in the technical field without departing from the purpose of the present invention.

Claims (7)

1. A target localization method for underwater robot search and exploration, characterized by comprising the following steps:

S1. extracting the A-KAZE feature points of the underwater target from sonar images collected by the forward-looking sonar of the underwater robot;

S2. inputting the sonar images carrying A-KAZE features into a convolutional neural network to identify the A-KAZE feature points of the target in the sonar images;

S3. using the geometric relationship between the target feature points and the forward-looking sonar to calculate the two-dimensional bearing of the underwater target relative to the underwater robot;

S4. combining the two-dimensional bearing of the underwater target feature points with the attitude of the underwater robot to estimate the elevation angle θ, and using the resulting θ to calculate the three-dimensional position of the underwater target relative to the underwater robot;

S5. using under-constrained or well-constrained feature points to correct the elevation angle θ, and thereby correcting the three-dimensional position of the underwater target point relative to the underwater robot.

2. The target localization method for underwater robot search and exploration according to claim 1, characterized in that extracting the A-KAZE feature points of the underwater target in step S1 comprises the following steps:

S101. defining a set of evolution times to construct a nonlinear scale space;

S102. converting the discrete set of scales in pixel units into time units;

S103. given the input image and a contrast factor, applying the fast explicit diffusion method;

S104. embedding the fast explicit diffusion method in a coarse-to-fine pyramid scheme;

S105. computing the determinant of the Hessian for each sonar image;

S106. computing the second-order derivatives with cascaded Scharr filters.

3. The target localization method for underwater robot search and exploration according to claim 1, characterized in that step S2 comprises the following sub-step:

S201. training a convolutional neural network on a sonar image dataset using the GoogLeNet architecture.

4. The target localization method for underwater robot search and exploration according to claim 3, characterized in that the GoogLeNet architecture comprises five layers: the first and second layers are convolution and max-pooling layers; the third layer is an inception layer; the fourth layer is the feature layer, a fully connected layer that maps the previous output to a Dim×1 vector; and the fifth layer is a fully connected layer that maps the preceding feature layer to a 3×1 vector, which is compared with the position label using a Euclidean loss.

5. The target localization method for underwater robot search and exploration according to claim 1, characterized in that step S3 comprises the following sub-step:

S301. converting between the local Cartesian sonar coordinate system and the spherical parameter coordinate system.

6. The target localization method for underwater robot search and exploration according to claim 1, characterized in that step S4 comprises the following sub-steps:

S401. formulating the underwater target feature points and the attitudes of the underwater robot as a nonlinear least-squares factor graph optimization, where each attitude X_t carries six parameters (x, y, z, yaw, pitch, roll) and each feature point carries three parameters (x, y, z);

S402. solving the factor graph as a nonlinear least-squares optimization;

S403. transforming the feature point l_j = (x, y, z) into the sonar frame and obtaining the azimuth and range of the local coordinates (x_s, y_s, z_s);

S404. using the monotonicity of the logarithmic function, finding an initial estimate of the feature point by back-projecting the sonar measurements;

S405. setting the unknown elevation angle θ to 0, then using the underwater robot attitude X_t to transform the point from sonar Cartesian coordinates (x_s, y_s, z_s) to world Cartesian coordinates (x, y, z), which serve as the initial guess for the three-dimensional position of the feature point;

S406. transforming the predicted three-dimensional feature point position into the sonar coordinate system of attitude X_t.

7. The target localization method for underwater robot search and exploration according to claim 1, characterized in that step S5 comprises the following sub-steps:

S501. observing the elevation angle of the target feature points from different attitudes;

S502. classifying the observed feature points as under-constrained or well-constrained;

S503. using a three-degree-of-freedom spherical parameterization to determine whether a point feature is sufficiently constrained;

S504. taking the initial estimate l_0 of the feature point as the linearization point and expanding the measurement function in a Taylor series;

S505. reducing the optimization to a linear least-squares problem;

S506. determining whether the optimization is constrained by the measurements;

S507. removing under-constrained feature points entirely from the state vector;

S508. alternatively, removing only the elevation angle of an under-constrained feature point from the state vector and modeling the point as a two-dimensional bearing-range point in the factor graph.
Application CN202010143341.XA, priority date 2020-03-04, filing date 2020-03-04: A target localization method for underwater robot search and exploration. Status: Pending. Publication: CN111413698A (en).

Priority Applications (2)

• CN202010143341.XA: A target localization method for underwater robot search and exploration (published as CN111413698A)
• CN202011065672.2A: A target positioning method for underwater robot search and exploration (published as CN111983620B)

Applications Claiming Priority (1)

• CN202010143341.XA: A target localization method for underwater robot search and exploration (published as CN111413698A)

Publications (1)

Publication Number Publication Date
CN111413698A 2020-07-14

Family

ID=71489211

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202010143341.XA Pending CN111413698A (en) 2020-03-04 2020-03-04 A target localization method for underwater robot search and exploration
CN202011065672.2A Active CN111983620B (en) 2020-03-04 2020-09-30 A target positioning method for underwater robot search and exploration

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202011065672.2A Active CN111983620B (en) 2020-03-04 2020-09-30 A target positioning method for underwater robot search and exploration

Country Status (1)

Country Link
CN (2) CN111413698A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801191A (en) * 2021-02-02 2021-05-14 中国石油大学(北京) Intelligent recommendation method, device and equipment for pipeline accident handling
CN114283327A (en) * 2021-12-24 2022-04-05 杭州电子科技大学 Target searching and approaching method based on underwater searching robot
CN116243720A (en) * 2023-04-25 2023-06-09 广东工业大学 AUV underwater object searching method and system based on 5G networking

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112529072B (en) * 2020-12-07 2024-08-09 中国船舶重工集团公司七五0试验场 Submerged object identification and positioning method based on sonar image processing
CN112859807B (en) * 2021-01-10 2022-03-22 西北工业大学 Evaluation method of underwater vehicle cooperative search effectiveness based on situational simulation and Monte Carlo
CN113379710B (en) * 2021-06-18 2024-02-02 上海大学 An underwater target sonar accurate measurement system and method

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103869824B (en) * 2014-03-05 2017-02-08 河海大学常州校区 Biological antenna model-based multi-robot underwater target searching method and device
KR20160073462A (en) * 2014-12-16 2016-06-27 아진산업(주) A method for monitoring underwater exploration robot
WO2017136014A2 (en) * 2015-11-13 2017-08-10 Flir Systems, Inc. Video sensor fusion and model based virtual and augmented reality systems and methods
RU2625349C1 (en) * 2016-06-28 2017-07-13 Акционерное общество "Научно-исследовательский институт "Вектор" Method for determination of spatial angular coordinates of radio signal in amplitude monopulse pelengage systems
US10528147B2 (en) * 2017-03-06 2020-01-07 Microsoft Technology Licensing, Llc Ultrasonic based gesture recognition
CN109676604B (en) * 2018-12-26 2020-09-22 清华大学 Robot curved surface motion positioning method and motion positioning system thereof
CN110246151B (en) * 2019-06-03 2023-09-15 南京工程学院 A method for underwater robot target tracking based on deep learning and monocular vision
CN110275169B (en) * 2019-06-12 2023-05-16 上海大学 A near-field detection and perception system for underwater robots
CN110568407B (en) * 2019-09-05 2023-06-27 武汉理工大学 Underwater navigation positioning method based on ultra-short baseline and dead reckoning
KR20190121275A (en) * 2019-10-07 2019-10-25 엘지전자 주식회사 System, apparatus and method for indoor positioning

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801191A (en) * 2021-02-02 2021-05-14 中国石油大学(北京) Intelligent recommendation method, device and equipment for pipeline accident handling
CN112801191B (en) * 2021-02-02 2023-11-21 中国石油大学(北京) Intelligent recommendation methods, devices and equipment for pipeline accident disposal
CN114283327A (en) * 2021-12-24 2022-04-05 杭州电子科技大学 Target searching and approaching method based on underwater searching robot
CN114283327B (en) * 2021-12-24 2024-04-05 杭州电子科技大学 Target searching and approaching method based on underwater searching robot
CN116243720A (en) * 2023-04-25 2023-06-09 广东工业大学 AUV underwater object searching method and system based on 5G networking
CN116243720B (en) * 2023-04-25 2023-08-22 广东工业大学 A 5G network-based AUV underwater object-finding method and system

Also Published As

Publication number Publication date
CN111983620B (en) 2024-02-20
CN111983620A (en) 2020-11-24

Similar Documents

Publication Publication Date Title
CN111413698A (en) A target localization method for underwater robot search and exploration
CN110782483B (en) Multi-view and multi-target tracking method and system based on distributed camera network
CN112183171B (en) A method and device for establishing a beacon map based on visual beacons
US9377310B2 (en) Mapping and positioning system
CN103649680B (en) Sensor location for 3D scanning
CN110285806A (en) Fast and precise positioning algorithm for mobile robot based on multiple pose correction
CN110118556A (en) A kind of robot localization method and device based on covariance mixing together SLAM
Tomono 3-D localization and mapping using a single camera based on structure-from-motion with automatic baseline selection
CN110490933A (en) Non-linear state space Central Difference Filter method based on single point R ANSAC
CN114118181B (en) High-dimensional regression point cloud registration method, system, computer equipment and application
CN112444246A (en) Laser fusion positioning method in high-precision digital twin scene
Bai et al. A survey of image-based indoor localization using deep learning
CN111964680A (en) A real-time positioning method of inspection robot
Li et al. Indoor multi-sensor fusion positioning based on federated filtering
CN112750161A (en) Map updating method for mobile robot and mobile robot positioning method
CN111739066A (en) A visual positioning method, system and storage medium based on Gaussian process
CN114723811A (en) Stereo vision positioning and mapping method for quadruped robot in unstructured environment
Olson Subpixel localization and uncertainty estimation using occupancy grids
Bai et al. SIO-UV: Rapid and robust sonar intertial odometry for underwater vehicles
Shekhar et al. Passive ranging using a moving camera
CN108416811A (en) A camera self-calibration method and device
Yang et al. A novel spatial pyramid-enhanced indoor visual positioning method
Zalama et al. Concurrent mapping and localization for mobile robots with segmented local maps
Butt et al. Multi-task learning for camera calibration
CN117455968B (en) Coordinate system transformation method and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200714