
CN114998444B - A high-precision robot posture measurement system based on two-channel network - Google Patents

A high-precision robot posture measurement system based on two-channel network

Info

Publication number
CN114998444B
CN114998444B (application CN202210554626.1A)
Authority
CN
China
Prior art keywords
precision
posture
channel network
image
monocular camera
Prior art date
2022-05-20
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210554626.1A
Other languages
Chinese (zh)
Other versions
CN114998444A (en)
Inventor
丁伟利
刘国庆
华长春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yanshan University
Original Assignee
Yanshan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yanshan University
Priority to CN202210554626.1A
Publication of CN114998444A
Application granted
Publication of CN114998444B
Active legal status
Anticipated expiration of legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a high-precision robot posture measurement system based on a two-channel network, belonging to the field of computer vision. The system comprises a monocular camera, a high-precision posture measurement module, a high-precision posture calibration mechanism, a static marker, and an edge processor. While the training data set is collected, the monocular camera is fixed at the end of the high-precision posture calibration mechanism; during online testing, it is packaged together with the edge processor as a single module and fixed at the end of a low-precision robotic arm. The static marker is placed within the field of view in front of the monocular camera. A two-channel network model for posture measurement is embedded in the high-precision posture measurement module; by taking captured static-marker images as input and performing two steps, posture training and posture measurement, the model realizes high-precision posture measurement of the robot. The invention only needs to capture one image and pair it with a reference image as input to the model to complete high-precision posture measurement, eliminating the need to design complex markers to obtain three-dimensional distance information of the object and greatly reducing industrial cost.

Description

A high-precision robot posture measurement system based on a two-channel network

Technical Field

The invention relates to a high-precision robot posture measurement system based on a two-channel network and belongs to the technical field of computer vision.

Background Art

As robots are used more and more widely, enterprises place ever higher demands on robot performance. Positioning accuracy is an important indicator of robot performance. The repeatability of industrial robots on the market is generally very high (up to 0.01 mm), but their absolute positioning accuracy is usually low (on the order of a few millimeters). Therefore, to improve robot performance, the absolute positioning accuracy must be improved. At present there are many approaches to absolute positioning, such as improving the manufacturing accuracy of components, kinematic calibration, and error compensation using lasers, ultrasonic sensors, or vision sensors. Compared with other measurement technologies, vision sensors are low-cost and, like the human eye, can acquire rich information, which is conducive to achieving intelligent control while improving positioning accuracy. However, existing monocular vision positioning methods mainly obtain the three-dimensional information of objects by designing complex markers, while binocular/multi-view positioning methods require complex feature matching to obtain the target's three-dimensional coordinates. The marker-based approach can obtain the three-dimensional information of a workpiece in the world coordinate system without any auxiliary equipment, but the marker design is complicated and the marker must be manufactured to high accuracy. The binocular/multi-view approach of computing target coordinates from matched feature points is simple. For example, patent CN108388854A proposes a positioning method based on an improved FAST-SURF algorithm, which matches feature-point pairs with the SURF algorithm, filters them with RANSAC, and applies the triangulation principle to the matched pairs to accurately locate the three-dimensional coordinates of the object. The main problem of this type of method is that sufficiently accurate matches must be established; in scenes with repetitive texture or no texture the robustness is poor and positioning accuracy suffers. In addition, because matching is time-consuming, the real-time performance is also poor. Another class of methods is deep-learning-based camera pose estimation, which trains on image pairs and achieves high-precision positioning by regressing the relative pose between two images.
However, such methods are based on photometric information and pure-color backgrounds, are not robust to changing lighting conditions and irrelevant background content, and use heavyweight models that are difficult to deploy industrially.

Summary of the Invention

The purpose of the present invention is to provide a high-precision robot posture measurement system based on a two-channel network, which solves the problems of the prior art: the need to design complex markers, low positioning accuracy, poor robustness to changing lighting conditions and irrelevant backgrounds, and large models that are difficult to deploy.

In order to achieve the above object, the technical solution adopted by the present invention is as follows:

A high-precision robot posture measurement system based on a two-channel network comprises a monocular camera, a high-precision posture measurement module, a high-precision posture calibration mechanism, a static marker, and an edge processor. The monocular camera is fixed at the end of the high-precision posture calibration mechanism while the training data set is collected; during online testing, it is connected to the edge processor via a cable and packaged with it as a single module using integrated packaging, and the module is fixed at the end of a low-precision robotic arm. The static marker is placed within the field of view in front of the monocular camera. A two-channel network model for posture measurement is embedded in the high-precision posture measurement module; the model realizes high-precision posture measurement of the robot by taking captured static-marker images as input and performing two steps, posture training and posture measurement.

A further improvement of the technical solution of the present invention is that the specific workflow of the high-precision robot posture measurement system based on the two-channel network is as follows:

Step 1: Fix the monocular camera at the end of the high-precision posture calibration mechanism and adjust the position of the static marker so that it is clearly imaged within the camera's field of view.

Step 2: Control the high-precision posture calibration mechanism to collect N static-marker images and label each image with a six-dimensional posture label, forming a training data set and a validation data set. Input the labeled images into the two-channel network and train it on a deep learning server. After training, perform an accuracy test; when the accuracy requirement is met, deploy the trained two-channel network model to the edge processor.

Step 3: Mount the monocular camera on the edge processor, package the camera and the edge processor as one unit, and fix the unit at the end of the low-precision robotic arm.

Step 4: Posture measurement. Based on the trained two-channel network model, perform online posture measurement of the low-precision robotic arm.

A further improvement of the technical solution of the present invention is that step 2 specifically includes the following steps:

Step 2.1: Dataset construction and sample label generation

Control the high-precision posture calibration mechanism to uniformly sample N images within a vertical cylinder with a radius of 10 mm and a height of 20 mm. Use the six-dimensional posture coordinates of the calibration mechanism as sample labels to label each image, then use a random sampling algorithm to select a portion of the N images as the validation data set and keep the rest as the training data set.
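
As an illustration of the sampling scheme described above, the sketch below draws positions uniformly from a vertical cylinder of radius 10 mm and height 20 mm and then randomly holds out a validation split. The patent gives no code; the function names, the fixed random seed, and the zero rotation placeholders are illustrative assumptions (the embodiment later sets the two extra rotation variables to 0).

```python
import numpy as np

def sample_cylinder_poses(n, radius_mm=10.0, height_mm=20.0, seed=0):
    """Uniformly sample n camera positions inside a vertical cylinder.

    Area-uniform sampling in the disk uses r = R * sqrt(u), u ~ U(0, 1).
    Returns an (n, 6) array of [x, y, z, rx, ry, rz] pose labels; the
    rotation components are left at 0 here as placeholders.
    """
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 1.0, n)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    r = radius_mm * np.sqrt(u)
    x, y = r * np.cos(theta), r * np.sin(theta)
    z = rng.uniform(0.0, height_mm, n)
    zeros = np.zeros(n)  # orientation labels come from the calibration rig in practice
    return np.stack([x, y, z, zeros, zeros, zeros], axis=1)

def split_dataset(indices, n_val, seed=0):
    """Randomly hold out n_val samples for validation, keep the rest for training."""
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(indices)
    return shuffled[n_val:], shuffled[:n_val]  # train indices, validation indices

poses = sample_cylinder_poses(1800)
train_idx, val_idx = split_dataset(np.arange(len(poses)), n_val=100)
```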

Step 2.2: Build the network architecture

In the data loading module, images are randomly paired to form image pairs, and the image pairs are preprocessed: each pair is stacked along the channel axis, i.e., two single-channel images are stacked into one two-channel image, thereby constructing the two-channel network architecture. Depthwise separable convolution is used instead of conventional convolution, and fully convolutional layers replace fully connected layers, so as to reduce the number of parameters and make the two-channel network model lightweight. In addition, a lightweight attention mechanism is added to the two-channel network so that it focuses more on the static marker.
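
A minimal sketch of the pair-stacking preprocessing described above, assuming grayscale images loaded with OpenCV; the resize resolution and the simple 0-1 normalization are illustrative assumptions, not values given in the patent.

```python
import numpy as np
import cv2  # OpenCV, assumed here for image I/O and resizing

def make_two_channel_input(img_a_path, img_b_path, size=(224, 224)):
    """Stack two single-channel images along the channel axis into one 2-channel tensor."""
    a = cv2.imread(img_a_path, cv2.IMREAD_GRAYSCALE)
    b = cv2.imread(img_b_path, cv2.IMREAD_GRAYSCALE)
    a = cv2.resize(a, size).astype(np.float32) / 255.0
    b = cv2.resize(b, size).astype(np.float32) / 255.0
    pair = np.stack([a, b], axis=0)   # shape (2, H, W), channels-first
    return pair[np.newaxis]           # add batch dimension -> (1, 2, H, W)
```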

Step 2.3: Select the loss function. The main purpose of the two-channel network is to estimate the relative six-dimensional posture between any two images, and it is used as the final refinement step after rough positioning of the low-precision robotic arm, so the orientation is always within 90°. Euler angles are therefore used to represent orientation, and the transformation label is decomposed into a translation and a rotation in Euler-angle form. The final loss is calculated as:

where t_i and t̂_i are the ground-truth and estimated relative translation of the image pair, and r_i and r̂_i are the ground-truth and estimated relative rotation, respectively; w (0 < w < 1) is a weight that balances the translation metric (millimeters) against the rotation metric (degrees). Rotation has a greater effect on the static-marker image posture than translation, so w should be set relatively large so that the network learns the translation variables better. The root-mean-square (RMS) error is defined as:

where m is the number of parameters: m = 3 for translation, and m = 3 for rotation expressed in Euler angles.
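
The two formulas referenced above are rendered as images in the original publication and do not survive in this text. Based only on the surrounding definitions (a weight w, 0 < w < 1, trading an RMS translation term in millimeters against an RMS rotation term in degrees, with a larger w emphasizing translation), one plausible reconstruction is:

```latex
\mathrm{Loss} = w \,\mathrm{RMS}\!\left(t_i, \hat{t}_i\right) + (1 - w)\,\mathrm{RMS}\!\left(r_i, \hat{r}_i\right), \qquad 0 < w < 1,

\mathrm{RMS}\!\left(x, \hat{x}\right) = \sqrt{\frac{1}{m} \sum_{j=1}^{m} \left(x_j - \hat{x}_j\right)^{2}}
```

The RMS definition follows directly from the statement that m is the number of parameters; the exact weighting of the two loss terms is an assumption and may differ from the formula in the granted claims.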

Step 2.4: Model training and deployment. Input the images preprocessed in step 2.2 into the two-channel network and train it on the deep learning server, saving the two-channel network model in static mode. After training, perform an accuracy test: preprocess the images of the test set following the preprocessing steps of step 2.2 and feed them into the saved static-mode model. When the accuracy requirement is met, deploy the trained static model on the edge processor.
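
The patent does not name the deep learning framework, so "saving the model in static mode" is framework-dependent. As a hedged sketch assuming PyTorch, the trained network could be frozen into a static TorchScript graph suitable for loading on the edge processor; the function and file names here are illustrative.

```python
import torch

def export_static(model, example_input, path="two_channel_net.pt"):
    """Trace the trained network into a static graph and save it for edge deployment."""
    model.eval()
    with torch.no_grad():
        scripted = torch.jit.trace(model, example_input)  # fixes the graph for a (1, 2, H, W) input
    scripted.save(path)
    return path

# Usage (illustrative): example = torch.randn(1, 2, 224, 224); export_static(net, example)
```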

A further improvement of the technical solution of the present invention is that step 4 specifically comprises: after the low-precision robotic arm reaches the designated position from any direction, the monocular camera fixed on the arm's end-effector captures an image of the static marker near the target position and its surrounding scene. This image and the previously captured reference-posture image of the target position form an input image pair, which is fed into the trained two-channel network model. The model outputs the relative posture between the two images, and the low-precision robotic arm is controlled to adjust its posture accordingly, thereby realizing high-precision six-degree-of-freedom posture measurement of the robot.

Due to the adoption of the above technical solution, the present invention achieves the following technical effects:

The invention is robust to changing lighting conditions and to irrelevant background content in the image that would otherwise affect measurement accuracy. Because depthwise separable convolution and full convolution are applied, the model is lightweight and convenient to deploy on an edge processor. At the same time, the invention only needs to capture one image and pair it with a reference image as input to the model to complete high-precision posture measurement of the robot.

The invention simplifies monocular-camera positioning: when the robot reaches the commanded position, a single frame is captured and paired with the reference image as input to the trained model; the network model outputs the relative posture of the two images and the robot is controlled to adjust its posture, achieving high-precision posture measurement without designing complex markers to obtain three-dimensional distance information of the object.

The invention uses a lightweight model with a small memory footprint that can be deployed on edge devices, and incorporates an attention mechanism that is robust to changing lighting conditions and irrelevant backgrounds, making it more universally applicable in industrial scenarios.

The invention can greatly reduce industrial cost: the model trained with the high-precision posture calibration mechanism is deployed on a low-precision robot, and high-precision posture measurement of the low-precision robot is achieved with only an industrial monocular camera and a static marker.

Brief Description of the Drawings

Fig. 1 is a flowchart of the training process of the system of the present invention;

Fig. 2 is a flowchart of the online testing process of the system of the present invention;

Fig. 3 is a schematic structural diagram of the system of the present invention;

Fig. 4 is a hardware system structure diagram of the training process of the present invention;

Fig. 5 is a hardware system structure diagram of the online testing process of the present invention;

Reference numerals: 1. high-precision posture calibration mechanism; 2. monocular camera; 3. static marker; 4. deep learning server; 5. low-precision robotic arm; 6. edge processor.

Detailed Description

The present invention is further described in detail below with reference to the accompanying drawings and specific embodiments.

A high-precision robot posture measurement system based on a two-channel network comprises a monocular camera 2, a high-precision posture measurement module, a high-precision posture calibration mechanism 1, a static marker 3, and an edge processor 6. The monocular camera 2 is connected to the edge processor 6 via a cable and packaged with it as a single module using integrated packaging; this module is fixed at the end of the high-precision posture calibration mechanism 1 or of the low-precision robotic arm 5. The static marker 3 is placed within the field of view in front of the monocular camera 2. A two-channel network model for posture measurement is embedded in the high-precision posture measurement module; the model realizes high-precision posture measurement of the robot by taking captured static-marker images as input and performing two steps, posture training and posture measurement.

The monocular camera 2 is fixed at a position at the end of the high-precision posture calibration mechanism 1, with the static marker 3 located below the camera. The calibration mechanism 1 is controlled to uniformly collect 1800 static-marker images within a vertical cylinder of radius 10 mm and height 20 mm, and each image is given a six-dimensional posture label. In the data loading module the images are randomly paired to form image pairs, which are fed into the two-channel network and trained on the deep learning server 4. After training, the accuracy is tested on the test set; when the accuracy requirement is met, the trained two-channel model is deployed to the edge processor 6, and the monocular camera 2 and edge processor 6 are packaged as one unit and mounted at the end of the low-precision robotic arm 5. Online testing then proceeds: after the low-precision robotic arm 5 reaches the commanded position from any direction, a static-marker image is captured and paired with the reference-posture static-marker image to form the input image pair, which is fed into the trained two-channel network model. The model outputs the relative posture of the two images and the low-precision robotic arm 5 is controlled to adjust its posture, thereby realizing high-precision six-degree-of-freedom posture measurement of the robot.

Preferably, the monocular camera 2 consists of a camera body and a lens and is mainly used to capture images of the static marker 3. In this embodiment the monocular camera is a Daheng MER-503-20GM-P with a resolution of 2448 (H) x 2048 (V), a frame rate of 20 fps, and a GigE data interface.

Preferably, the high-precision posture measurement module mainly consists of the two-channel network model. The model is lightweight, robust to changing lighting conditions and to irrelevant background content that would affect measurement accuracy, and can output the relative posture between any two images with an accuracy better than 0.1 mm.

Preferably, the high-precision posture calibration mechanism 1 is mainly used for collecting and labeling the data set; its six-dimensional posture coordinates serve as the posture labels of the training data set. In this embodiment the mechanism is a high-precision four-degree-of-freedom serial-parallel hybrid mechanism with a positioning accuracy of plus or minus 0.03 mm.

Preferably, the static marker 3 mainly provides the images used by the two-channel network model for posture measurement: a captured static-marker image and a reference-posture image form an input pair that is fed into the model to carry out the measurement. The static marker 3 can be a purpose-designed marker or a fixed object that already exists in the scene. In this embodiment the static marker 3 is a checkerboard calibration board.

Preferably, the edge processor 6 is mainly used to deploy the high-precision posture measurement module. In this embodiment an NVIDIA Jetson Nano is used.

The specific workflow of the high-precision robot posture measurement system based on the two-channel network is as follows:

Step 1: Fix the monocular camera 2 at the end of the four-degree-of-freedom serial-parallel hybrid mechanism and adjust the position of the checkerboard calibration board so that it is clearly imaged within the camera's field of view. In this embodiment the checkerboard calibration board is placed about 20 cm directly below the monocular camera.

Step 2: Control the four-degree-of-freedom serial-parallel hybrid mechanism to collect N (N >= 1800) checkerboard images and label each image with a six-dimensional posture label, forming a training data set and a validation data set. Input the labeled images into the two-channel network and train it on the deep learning server 4; after training, perform an accuracy test, and when the accuracy requirement is met, deploy the trained two-channel network model to the edge processor 6. In this embodiment 1800 checkerboard images were collected, of which 1700 were used as the training data set and the rest as the validation data set, with the posture coordinates of the four-degree-of-freedom serial-parallel hybrid mechanism used as the posture labels of both sets. The flowchart of the two-channel network training process is shown in Fig. 1, and the hardware structure of the training setup in Fig. 4.

Step 2 achieves its function through the following sub-steps:

Step 2.1: Dataset construction and sample label generation

Control the four-degree-of-freedom serial-parallel hybrid mechanism to uniformly sample 1800 images within a vertical cylinder of radius 10 mm and height 20 mm, using its six-dimensional posture coordinates (with the other two rotation variables set to 0) as sample labels for each image. A random sampling algorithm selects 100 of the 1800 images as the validation data set, and the rest form the training data set. In this embodiment the images are single-channel grayscale images whose background is not clean: it contains the support bracket as well as reflections from the floor, and these conditions change as the posture of the mechanism changes.

Step 2.2: Build the network architecture

In the data loading module, images are randomly paired to form image pairs and preprocessed: each pair is stacked along the channel axis, i.e., two single-channel images are stacked into one two-channel image, thereby constructing the two-channel network architecture. Depthwise separable convolution replaces conventional convolution, and fully convolutional layers replace fully connected layers, reducing the number of parameters and making the model lightweight. In addition, a lightweight attention mechanism is added so that the network focuses more on the checkerboard calibration board. In this embodiment the lightweight attention mechanism is Squeeze-and-Excitation, which automatically learns the importance of each feature channel and suppresses less useful features. The network framework is shown in Fig. 3.
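
As a rough sketch of the building blocks named above (depthwise separable convolution plus a Squeeze-and-Excitation channel-attention block), assuming a PyTorch implementation; the layer widths, reduction ratio, and overall depth are illustrative assumptions rather than values disclosed in the patent.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: learn per-channel weights and rescale the feature map."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w

class TwoChannelNet(nn.Module):
    """Two stacked grayscale images in, 6-DoF relative pose (tx, ty, tz, rx, ry, rz) out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            DepthwiseSeparableConv(2, 32, stride=2), SEBlock(32),
            DepthwiseSeparableConv(32, 64, stride=2), SEBlock(64),
            DepthwiseSeparableConv(64, 128, stride=2), SEBlock(128),
        )
        # "full convolution instead of fully connected": a 1x1 conv regresses the 6 pose values
        self.head = nn.Conv2d(128, 6, kernel_size=1)

    def forward(self, x):
        f = self.features(x)
        out = self.head(f)              # (B, 6, h, w)
        return out.mean(dim=(2, 3))     # global average -> (B, 6) relative pose
```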

Step 2.3: Select the loss function. The main purpose of the two-channel network is to estimate the relative six-dimensional posture between any two images, and it is used as the final refinement step after rough positioning of the low-precision robotic arm 5, so the orientation is always within 90°. Euler angles are therefore used to represent orientation, and the transformation label is decomposed into a translation and a rotation in Euler-angle form. The final loss is calculated as:

where t_i and t̂_i are the ground-truth and estimated relative translation of the image pair, and r_i and r̂_i are the ground-truth and estimated relative rotation, respectively; w (0 < w < 1) is a weight that balances the translation metric (millimeters) against the rotation metric (degrees). Rotation has a greater effect on the checkerboard image posture than translation, so w should be set relatively large so that the network learns the translation variables better. The root-mean-square (RMS) error is defined as:

where m is the number of parameters: m = 3 for translation, and m = 3 for rotation expressed in Euler angles. In this embodiment the weight parameter is w = 0.99.
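
A small sketch of how such a weighted RMS loss could be written, again assuming PyTorch and the weighting form inferred earlier (w multiplying the translation term); the patent itself only states that w = 0.99 and that the metric balances millimeters against degrees.

```python
import torch

def rms(pred, target):
    """Root-mean-square error over the parameter dimension (m = 3)."""
    return torch.sqrt(torch.mean((pred - target) ** 2, dim=-1))

def pose_loss(t_pred, t_true, r_pred, r_true, w=0.99):
    """Weighted combination of translation (mm) and Euler-angle rotation (deg) RMS errors."""
    return (w * rms(t_pred, t_true) + (1.0 - w) * rms(r_pred, r_true)).mean()
```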

Step 2.4: Model training and deployment. Input the images preprocessed in step 2.2 into the two-channel network and train it on the deep learning server 4, saving the model in static mode. After training, perform an accuracy test: preprocess the test-set images following the preprocessing steps of step 2.2 and feed them into the saved static-mode model, which outputs the relative translation and rotation errors of each image pair. The accuracy requirement is a translation error below 0.2 mm and a rotation error below 0.2°; when it is met, the trained static model is deployed on the NVIDIA Jetson Nano processor. In this embodiment the translation and rotation errors of the model are both below 0.1 mm / 0.1°, meeting the requirement.

Step 3: Mount the monocular camera 2 on the NVIDIA Jetson Nano, package the camera and the Jetson Nano as one unit, and fix the unit at the end of the low-precision robotic arm 5.

Step 4: Posture measurement. Based on the trained two-channel network model, perform online posture measurement of the low-precision robotic arm 5: after the arm reaches the designated position from any direction, the monocular camera 2 fixed on its end-effector captures an image of the checkerboard calibration board near the target position and its surrounding scene, which forms an input image pair with the previously captured reference-posture image of the target position. The pair is fed into the trained two-channel network model, which outputs the relative posture of the two images, and the arm is controlled to adjust its posture, realizing high-precision six-degree-of-freedom posture measurement of the robot. The low-precision robotic arm 5 in this embodiment is a low-precision robot; the two-channel network performs the final refinement after the robot's rough positioning and keeps the posture error within 0.1 mm / 0.1°. The online measurement flowchart is shown in Fig. 2, and the hardware structure of the online test setup in Fig. 5.

Step 4 achieves its function through the following sub-steps:

Step 4.1: Define the target positions that the low-precision robotic arm 5 is expected to reach as D_i (i = 1, 2, ..., n) and capture the checkerboard image at each corresponding position, denoted I_i (i = 1, 2, ..., n). In this embodiment there are 5 target positions, the 5 corresponding checkerboard images are captured, and the images are saved to a database.

Step 4.2: When the low-precision robotic arm 5 reaches a target position D_i from any direction, the monocular camera 2 is switched on and the current checkerboard image is acquired. It forms an input image pair with the reference image I_i; after preprocessing, the two-channel image is fed into the trained two-channel network model, which outputs the relative posture of the two images, and the arm is controlled to adjust its posture, realizing high-precision six-degree-of-freedom posture measurement of the robot.
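
The online measurement loop in step 4.2 could look roughly like the sketch below. Assumptions: the TorchScript export sketched earlier, the `make_two_channel_input` preprocessing from the step 2.2 sketch, and hypothetical `capture_image` and `move_arm_relative` helpers standing in for the camera SDK and robot controller, none of which are specified in the patent.

```python
import torch

model = torch.jit.load("two_channel_net.pt").eval()  # static model deployed on the edge processor

def refine_pose_at_target(reference_image_path, capture_image, move_arm_relative):
    """Capture the current marker image, pair it with the stored reference, and correct the arm."""
    current_path = capture_image()                                      # hypothetical camera helper
    pair = make_two_channel_input(current_path, reference_image_path)   # from the earlier sketch
    with torch.no_grad():
        rel_pose = model(torch.from_numpy(pair))[0]                     # [tx, ty, tz, rx, ry, rz]
    move_arm_relative(rel_pose.tolist())                                # hypothetical controller helper
    return rel_pose
```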

The present invention is robust to changing lighting conditions and to irrelevant background content in the image that would affect measurement accuracy. Because depthwise separable convolution and full convolution are applied, the model is lightweight and convenient to deploy on the edge processor 6. At the same time, the method only needs to capture one static-marker image and pair it with a reference image as input to the two-channel network model to complete high-precision posture measurement of the robot.

The present invention solves the problems of the prior art: the need to design complex markers, low positioning accuracy, poor robustness to changing lighting conditions and irrelevant backgrounds, and large models that are difficult to deploy. The system controls the high-precision posture calibration mechanism 1 to collect static-marker images and form a training set, which is fed into the network to train its parameters; accuracy is tested after training, and when the trained model meets the accuracy requirement it is deployed to the edge processor 6 for online robot posture measurement. That is, after the low-precision robotic arm 5 reaches the designated position from any direction, a static-marker image is captured and paired with the reference-posture image to form the input image pair; the pair is fed into the trained two-channel network model, which automatically outputs the relative posture of the two images, and the posture of the low-precision robotic arm 5 is adjusted accordingly, thereby realizing high-precision six-degree-of-freedom posture measurement of the robot.

Claims (2)

1. A high-precision robot posture measurement system based on a two-channel network, characterized in that it comprises a monocular camera (2), a high-precision posture measurement module, a high-precision posture calibration mechanism (1), a static marker (3), and an edge processor (6); the monocular camera (2) is fixed at the end of the high-precision posture calibration mechanism (1) while the training data set is collected; during online testing, the monocular camera (2) is connected to the edge processor (6) via a cable and packaged with it as a single module using integrated packaging, and the module is fixed at the end of a low-precision robotic arm (5); the static marker (3) is placed within the field of view in front of the monocular camera (2); a two-channel network model for posture measurement is embedded in the high-precision posture measurement module, and the two-channel network model realizes high-precision posture measurement of the robot by taking captured static-marker images as input and performing two steps, posture training and posture measurement;

the specific workflow of the high-precision robot posture measurement system based on the two-channel network is as follows:

Step 1: fix the monocular camera (2) at the end of the high-precision posture calibration mechanism (1) and adjust the position of the static marker (3) so that it is clearly imaged within the field of view of the monocular camera (2);

Step 2: control the high-precision posture calibration mechanism (1) to collect N static-marker images and label each image with a six-dimensional posture label, forming a training data set and a validation data set; input the labeled images into the two-channel network and train it on a deep learning server (4); after training, perform an accuracy test, and when the accuracy requirement is met, deploy the trained two-channel network model to the edge processor (6);

this step specifically comprises:

Step 2.1: dataset construction and sample label generation. Control the high-precision posture calibration mechanism (1) to uniformly sample N images within a vertical cylinder of radius 10 mm and height 20 mm, use the six-dimensional posture coordinates of the high-precision posture calibration mechanism (1) as sample labels for each image, and use a random sampling algorithm to select a portion of the N images as the validation data set and the rest as the training data set;

Step 2.2: build the network architecture. In the data loading module, images are randomly paired to form image pairs and preprocessed; each pair is stacked along the channel axis, that is, two single-channel images are stacked into one two-channel image, thereby constructing the two-channel network architecture; depthwise separable convolution replaces conventional convolution and fully convolutional layers replace fully connected layers, reducing the number of parameters and making the two-channel network model lightweight; in addition, a lightweight attention mechanism is added to the two-channel network so that it focuses more on the static marker (3);

Step 2.3: select the loss function. The purpose of the two-channel network is to estimate the relative six-dimensional posture between any two images, and it serves as the final refinement step after rough positioning of the low-precision robotic arm (5), so the orientation is always within 90°; Euler angles are therefore used to represent orientation, the transformation label is decomposed into a translation and a rotation in Euler-angle form, and the final loss is calculated as:

where t_i and t̂_i are the ground-truth and estimated relative translation of the image pair and r_i and r̂_i are the ground-truth and estimated relative rotation, respectively; w is a weight balancing the translation and rotation metrics; rotation has a greater effect on the static-marker image posture than translation, so w should be set relatively large so that the network learns the translation variables better; the root-mean-square (RMS) error is defined as:

where m is the number of parameters, with m = 3 for translation and m = 3 for rotation expressed in Euler angles;

Step 2.4: model training and deployment. Input the images preprocessed in step 2.2 into the two-channel network and train it on the deep learning server (4), saving the two-channel network model in static mode; after training, perform an accuracy test by preprocessing the test-set images following the preprocessing steps of step 2.2 and feeding them into the saved static-mode model; when the accuracy requirement is met, deploy the trained static model on the edge processor (6);

Step 3: mount the monocular camera (2) on the edge processor (6), package the monocular camera (2) and the edge processor (6) as one unit, and fix the unit at the end of the low-precision robotic arm (5);

Step 4: posture measurement; based on the trained two-channel network model, perform online posture measurement of the low-precision robotic arm (5).

2. The high-precision robot posture measurement system based on a two-channel network according to claim 1, characterized in that step 4 specifically comprises: after the low-precision robotic arm (5) reaches the designated position from any direction, the monocular camera (2) fixed on the end-effector of the low-precision robotic arm (5) captures an image of the static marker (3) near the target position and its surrounding scene, which forms an input image pair with the previously captured reference-posture image of the target position; the image pair is fed into the trained two-channel network model, which outputs the relative posture of the two images, and the low-precision robotic arm (5) is controlled to adjust its posture, thereby realizing high-precision six-degree-of-freedom posture measurement of the robot.
CN202210554626.1A 2022-05-20 2022-05-20 A high-precision robot posture measurement system based on two-channel network Active CN114998444B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210554626.1A CN114998444B (en) 2022-05-20 2022-05-20 A high-precision robot posture measurement system based on two-channel network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210554626.1A CN114998444B (en) 2022-05-20 2022-05-20 A high-precision robot posture measurement system based on two-channel network

Publications (2)

Publication Number Publication Date
CN114998444A CN114998444A (en) 2022-09-02
CN114998444B true CN114998444B (en) 2024-08-16

Family

ID=83027558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210554626.1A Active CN114998444B (en) 2022-05-20 2022-05-20 A high-precision robot posture measurement system based on two-channel network

Country Status (1)

Country Link
CN (1) CN114998444B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115436881B (en) * 2022-10-18 2023-07-07 兰州大学 Positioning method, positioning system, computer equipment and readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109816725A (en) * 2019-01-17 2019-05-28 哈工大机器人(合肥)国际创新研究院 A kind of monocular camera object pose estimation method and device based on deep learning
CN112232214A (en) * 2020-10-16 2021-01-15 天津大学 A real-time object detection method based on deep feature fusion and attention mechanism

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381879B (en) * 2020-11-16 2024-09-06 跨维(深圳)智能数字科技有限公司 Object posture estimation method, system and medium based on image and three-dimensional model
CN113034581B (en) * 2021-03-15 2024-09-06 中国空间技术研究院 Relative pose estimation method of space targets based on deep learning
CN113927597B (en) * 2021-10-21 2023-04-07 燕山大学 Robot connecting piece six-degree-of-freedom pose estimation system based on deep learning

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109816725A (en) * 2019-01-17 2019-05-28 哈工大机器人(合肥)国际创新研究院 A kind of monocular camera object pose estimation method and device based on deep learning
CN112232214A (en) * 2020-10-16 2021-01-15 天津大学 A real-time object detection method based on deep feature fusion and attention mechanism

Also Published As

Publication number Publication date
CN114998444A (en) 2022-09-02

Similar Documents

Publication Publication Date Title
CN109255813B (en) Man-machine cooperation oriented hand-held object pose real-time detection method
WO2021023315A1 (en) Hand-eye-coordinated grasping method based on fixation point of person&#39;s eye
CN111089569B (en) Large box body measuring method based on monocular vision
CN109297413B (en) Visual measurement method for large-scale cylinder structure
CN105225269B (en) Object modelling system based on motion
CN103759716B (en) The dynamic target position of mechanically-based arm end monocular vision and attitude measurement method
CN105818167B (en) The method that hinged end effector is calibrated using long distance digital camera
CN108629831B (en) 3D Human Body Reconstruction Method and System Based on Parametric Human Template and Inertial Measurement
CN101419055A (en) Space target position and pose measuring device and method based on vision
CN107367229B (en) Free binocular stereo vision rotating shaft parameter calibration method
CN106625673A (en) Narrow space assembly system and assembly method
CN103099623B (en) Extraction method of kinesiology parameters
CN103198477B (en) Apple fruitlet bagging robot visual positioning method
CN110728715A (en) Camera angle self-adaptive adjusting method of intelligent inspection robot
CN105318838B (en) Single-plane calibration method for relation between laser range finder and tail end of mechanical arm
CN106960591B (en) A high-precision vehicle positioning device and method based on road surface fingerprints
CN110648362B (en) A Binocular Stereo Vision Badminton Positioning Recognition and Attitude Calculation Method
CN115546289A (en) Robot-based three-dimensional shape measurement method for complex structural part
CN115972093B (en) Workpiece surface measuring method and device and wing wallboard soft mold polishing method
CN106485207A (en) A kind of Fingertip Detection based on binocular vision image and system
CN111583342A (en) Target rapid positioning method and device based on binocular vision
CN110675453A (en) Self-positioning method for moving target in known scene
CN113208731B (en) Hand-eye calibration method of surgical puncture robot based on binocular vision system
CN115661862A (en) Pressure vision convolution model-based sitting posture sample set automatic labeling method
CN114998444B (en) A high-precision robot posture measurement system based on two-channel network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant