Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a high-precision three-dimensional positioning device and method for a nuclear radiation environment robot.
The purpose of the invention is realized by the following technical scheme:
a high-precision three-dimensional positioning method for a nuclear radiation environment robot comprises the following steps:
S1, the mobile robot obtains a three-dimensional environment point cloud map of the operation target, and step S2 is executed;
S2, based on the three-dimensional environment point cloud map information, the robot completes the preliminary positioning of the operation target, and step S3 is executed;
S3, the mobile robot drives the mechanical arm to move so that the operation target enters the positioning range of the robot's structured light camera, and step S4 is executed;
S4, the mobile robot calculates the three-dimensional data information within the positioning range of the structured light camera, and step S5 is executed;
S5, high-precision three-dimensional positioning of the operation target is performed according to the three-dimensional data information within the positioning range of the structured light camera.
Further, in step S1, the three-dimensional environment point cloud map is obtained by performing point cloud fusion through the movement of the mobile robot in combination with the laser radar scanning.
Further, in step S2, the preliminary positioning is performed by combining the movement of the robot and the mechanical arm with the laser radar scanning.
Further, in step S2, the preliminary positioning includes point cloud acquisition, point cloud data preprocessing, similarity measurement, and repositioning.
Further, in step S3, the mobile robot drives the mechanical arm to move based on the relative pose relationship between the laser radar and the structured light camera, which is obtained by calibrating the laser radar with the structured light camera; the path of the mechanical arm is then planned according to the relative pose relationship between the operation target and the laser radar.
Further, in step S4, the three-dimensional data information within the positioning range of the structured light camera is obtained by a structured light decoding method, where the structured light decoding method includes: projecting coded structured light with the structured light camera, capturing the structured light pattern modulated by the operation target, decoding the captured coded pattern, and obtaining from the decoded information the three-dimensional data of the operation target within the positioning range of the structured light camera.
Further, in step S5, the high-precision three-dimensional positioning comprises the following steps: first, the structured light camera and the tail end of the mechanical arm are calibrated to obtain their relative pose relationship; next, a positioning point is selected according to the three-dimensional data information of the operation target within the measurement range of the structured light camera; the path of the mechanical arm is then planned according to the calibrated pose relationship; and finally the mechanical arm is driven so that its tail end reaches the positioning point.
A high-precision three-dimensional positioning device for a nuclear radiation environment robot comprises a mobile robot, a laser radar and a structured light camera, wherein the mobile robot comprises a mobile platform and a mechanical arm, the mechanical arm is fixedly connected with the mobile platform, the laser radar is fixedly arranged at the end of the mechanical arm connected with the mobile platform, and the structured light camera is fixedly arranged at the other end of the mechanical arm. The mobile platform can be wheeled or rail-mounted according to the actual scene. The laser radar is a 3D laser radar mounted on the mechanical arm; its specific position can be selected according to the actual nuclear radiation scene, and it is calibrated with the structured light camera once its position is fixed. The projection light source of the structured light camera can be white light, LED light, laser or infrared light; the projection mode can be point-structured, line-structured or area-structured projection; and the acquisition mode can be monocular or binocular.
The invention has the beneficial effects that:
according to the invention, the preliminary positioning by the 3D laser radar accurately reconstructs the working scene, the structured light vision method then obtains the three-dimensional information of the target more accurately, and the combination of preliminary positioning and high-precision three-dimensional positioning yields the millimeter-level spatial position of the target object.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to figs. 1 to 6. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present invention; based on these embodiments, a person skilled in the art can obtain all other embodiments without creative effort.
In the description of the present invention, it is to be understood that the terms "counterclockwise", "clockwise", "longitudinal", "lateral", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc., indicate orientations or positional relationships based on those shown in the drawings, and are used for convenience of description only, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be considered as limiting.
A binocular structured light system is preferably adopted herein only for convenience of describing the present invention; this does not indicate or imply that the present invention can use only a binocular structured light system.
A high-precision three-dimensional positioning method for a nuclear radiation environment robot comprises the following steps:
S1, the mobile robot obtains a three-dimensional environment point cloud map of the operation target, and step S2 is executed;
S2, based on the three-dimensional environment point cloud map information, the robot completes the preliminary positioning of the operation target, and step S3 is executed;
S3, the mobile robot drives the mechanical arm to move so that the operation target enters the positioning range of the robot's structured light camera, and step S4 is executed;
S4, the mobile robot calculates the three-dimensional data information within the positioning range of the structured light camera, and step S5 is executed;
S5, high-precision three-dimensional positioning of the operation target is performed according to the three-dimensional data information within the positioning range of the structured light camera.
The working principle of the scheme is briefly described as follows:
in the invention, a 3D laser radar emits laser beams to realize the initial positioning of the working scene; a structured light camera is arranged at the tail end of the mechanical arm, and the structured light camera system realizes the precise positioning of the target by using a structured light vision measurement algorithm, so as to guide the mechanical arm to perform the specified operation;
according to fig. 1, the calibration work of the system is carried out first; secondly, the path of the mobile robot is planned in combination with a given global three-dimensional map; the mobile robot then brings the 3D laser radar into the operation scene for preliminary positioning; next, accurate positioning is performed by combining the target position with the structured light camera to obtain the three-dimensional coordinates of the operation target in the structured light camera coordinate system; the position of the target relative to the tail end of the mechanical arm is then calculated from the extrinsic parameters calibrated between the camera and the tail end of the mechanical arm, a path is planned, and finally the mechanical arm is controlled to carry out and complete the specified operation. The key technologies involved have three major parts: system calibration, which determines the pose relationships among the structured light camera, the 3D laser radar and the robot; preliminary sensing and positioning based on the 3D laser radar; and accurate sensing and positioning based on structured light vision measurement. The three key technologies are described below.
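For illustration only, the following minimal Python sketch mirrors the data flow of fig. 1 described above. Every function in it is a hypothetical stub standing in for a subsystem of the invention; none of the names belong to a specific library.

```python
import numpy as np

# Hypothetical stubs mirroring the workflow of fig. 1; only the data flow
# corresponds to the description above.

def load_calibration():
    # offline system calibration results (identity placeholders here)
    return {"T_lidar_cam": np.eye(4), "T_base_cam": np.eye(4)}

def relocate(scan, global_map):
    # coarse lidar pose in the global map (3D-NDT or similar in practice)
    return np.eye(4)

def structured_light_measure():
    # decoded target point cloud in the camera frame (dummy data here)
    return np.random.rand(100, 3)

def run_pipeline(scan, global_map):
    calib = load_calibration()
    T_map_lidar = relocate(scan, global_map)       # preliminary positioning;
    # in practice T_map_lidar would first drive the platform and arm so the
    # target enters the structured light camera's range (omitted here)
    cloud = structured_light_measure()             # precise measurement
    centroid = np.append(cloud.mean(axis=0), 1.0)  # homogeneous centroid
    p_base = calib["T_base_cam"] @ centroid        # into the arm-base frame
    return p_base[:3]                              # point the arm moves to

print(run_pipeline(scan=None, global_map=None))
```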
the system calibration mainly completes the determination of the pose relationship among the 3D laser radar, the structured light camera and the mobile robot, and is used for conveying the tail end of the mechanical arm to a position needing to be operated through the motion of the robot after identifying an operation target. Mainly comprises the calibration of a structured light camera; calibrating between the mechanical arm and the structured light camera and calibrating between the laser radar and the structured light camera;
in the method, images are collected by a binocular camera: the two cameras simultaneously photograph a calibration plate at different positions in space, and each camera is then calibrated as a monocular camera from the captured sequence to obtain its intrinsic and extrinsic parameters. The extrinsic parameters here refer to the rotation-translation relationship between the camera coordinate system and the world coordinate system established in each calibration step. The pose relations of the two cameras are then unified into one camera coordinate system (generally the left camera's) by combining the calibration plate images with binocular vision constraints between the left and right cameras, such as the epipolar constraint and the consistency constraint. The computed binocular calibration parameters need to be further optimized to obtain more accurate values: considering that the calibration plate has non-negligible geometric errors from its manufacturing process, the calibration parameters are optimized a second time after the binocular calibration (the first optimization being the one that solves for the camera's distortion parameters);
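By way of a hedged example, this binocular calibration step can be sketched with OpenCV's standard chessboard routines as follows; the board size, square size and file paths are assumptions made purely for illustration.

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)   # inner corners per row/column of the board (assumed)
SQUARE = 0.02      # square edge length in metres (assumed)

# 3D corner coordinates in the calibration-plate frame, z = 0
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_pts, pts_l, pts_r = [], [], []
img_size = None
for fl, fr in zip(sorted(glob.glob("left/*.png")), sorted(glob.glob("right/*.png"))):
    gl = cv2.imread(fl, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(fr, cv2.IMREAD_GRAYSCALE)
    ok_l, c_l = cv2.findChessboardCorners(gl, PATTERN)
    ok_r, c_r = cv2.findChessboardCorners(gr, PATTERN)
    if ok_l and ok_r:   # keep only views where both cameras see the board
        obj_pts.append(objp); pts_l.append(c_l); pts_r.append(c_r)
        img_size = gl.shape[::-1]

# monocular calibration of each camera: intrinsics + distortion parameters
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, pts_l, img_size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, pts_r, img_size, None, None)

# stereo calibration: R, T express the right camera in the left camera frame,
# i.e. the two poses are unified to the left camera coordinate system
_, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, pts_l, pts_r, K1, d1, K2, d2, img_size,
    flags=cv2.CALIB_FIX_INTRINSIC)
print("baseline (m):", np.linalg.norm(T))
```

The secondary optimization of the calibration parameters mentioned above (compensating the geometric error of the plate) would follow this step and is not shown.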
before the mechanical arm performs the specified operation on a target, the position of the target object relative to the tail end of the mechanical arm must be obtained. Since the pose relationship between the target and the binocular camera is determined by the binocular vision measurement system, the relative pose between the mechanical arm (hand) and the binocular camera (eye) needs to be calibrated so that the coordinates of both are unified under the same world coordinate system;
the hand-eye calibration is to obtain a conversion relationship between a coordinate system of a camera mounted on the robot arm and a coordinate system of the robot arm base so that the robot arm can use information acquired by the camera. For ease of operation, the coordinate system is normalized to the robot arm base point. The conversion matrix between the coordinate systems of the two tools before and after the mechanical arm moves can be calculated through the parameters of the mechanical arm sub-band. In order to solve the rotation and translation matrix, the mechanical arm needs to be moved for multiple times in the experiment, and the position coordinates of three mechanical arm gripping tools are obtained, so that multiple groups of equations for solving the hand-eye relationship are obtained. And (4) carrying out simultaneous solution on the equations to obtain the parameters of the hand-eye calibration. The target coordinates of the grabbing position can be converted to the mechanical arm base coordinates through parameters calibrated by hands and eyes, so that the mechanical arm can operate the target;
after the 3D laser radar-based initial positioning is completed, the pose relationship between the laser radar and the operation target is determined. In the transition from initial positioning to accurate positioning, the 3D laser radar and the binocular camera need to be calibrated to acquire the pose relationship between the laser radar and the camera; path planning is then carried out using this pose relationship, and the binocular structured light vision measurement system is moved to the optimal position for measuring the operation target. The calibration process is as follows: a calibration plate is placed in front of the binocular camera and images are collected while the 3D laser radar scans toward the calibration plate; the point cloud of each pose and the image of the corresponding camera are captured, the points lying on the calibration plate are selected from the point cloud, and the poses of the calibration plate in the laser radar data and the matrix of plane normal vectors are estimated. The pixel coordinates of the inner corner points in each calibration image are computed by corner detection and similar methods, and the pose of the calibration plate in the camera coordinate system and the matrix of its normal vectors are calculated from the corresponding corner coordinates. The rotation matrix is computed from the normal vector matrices in the two coordinate systems, and the translation vector is optimized by minimizing the distance from the point cloud to the plane, completing the extrinsic calibration of the laser radar and the camera;
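The rotation-from-normals and point-to-plane translation steps just described can be sketched as follows; the variable layout (3xN normal matrices, per-plane offsets and lidar points) is an assumption for illustration.

```python
import numpy as np

def rotation_from_normals(n_lidar, n_cam):
    """Kabsch/SVD fit of R so that n_cam ~ R @ n_lidar (both 3xN unit normals)."""
    U, _, Vt = np.linalg.svd(n_lidar @ n_cam.T)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R

def translation_from_planes(R, n_cam, d_cam, lidar_plane_pts):
    """Least-squares t minimising the distance of lidar points to the camera
    planes: for each lidar point p on plane i, n_i . (R p + t) + d_i ~ 0."""
    A, b = [], []
    for i, pts in enumerate(lidar_plane_pts):
        for p in pts:
            A.append(n_cam[:, i])
            b.append(-d_cam[i] - n_cam[:, i] @ (R @ p))
    t, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return t
```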
3D laser radar positioning process: first, point cloud data are obtained by scanning the working scene with the 3D laser radar. Second, the scanned point cloud data are preprocessed: unreasonable outliers are filtered out to ensure the accuracy of subsequent processing, and the amount of point cloud data is reduced at the same time, lowering the computational load and improving efficiency. The preprocessing mainly comprises three parts: outlier removal, removal of points on the mechanical arm, and point cloud down-sampling. Next, similarity measurement is carried out against the provided global three-dimensional map: before the measurement, geometric correspondences between the two cluster sets are established and the cluster sets to be matched are constructed according to the number of points in each cluster; the cosine similarity between the set histograms is then computed, and the candidate scene with the highest similarity is selected, at which point scene matching is considered complete. Finally, relocation of the laser radar is realized with algorithms such as 3D-NDT. After matching is completed, the initial position of the laser radar on the global three-dimensional map is obtained, i.e. its coordinates on the map; combining this with the three-dimensional coordinates of the operation target in the global three-dimensional map then guides the motion planning of the robot. When a given global three-dimensional map is used, the corresponding initial scanning position, moving track, index information of the point clouds and the like should be provided in addition to the map, to increase the success rate of relocation and reduce the amount of computation;
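The preprocessing and similarity-measurement steps can be illustrated with the short sketch below, using Open3D for the point cloud operations; the height histogram is a simplified stand-in for whatever descriptor the cluster-set histograms actually use, and the parameter values are assumptions.

```python
import numpy as np
import open3d as o3d

def preprocess(pcd, voxel=0.05):
    # statistical outlier removal; points on the mechanical arm itself would
    # also be cropped out here using the arm's known geometry (omitted)
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    return pcd.voxel_down_sample(voxel)   # down-sampling to cut computation

def height_histogram(pcd, bins=32, z_range=(-2.0, 4.0)):
    z = np.asarray(pcd.points)[:, 2]
    h, _ = np.histogram(z, bins=bins, range=z_range)
    return h / max(h.sum(), 1)            # normalised scene descriptor

def cosine_similarity(h1, h2):
    return float(h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2) + 1e-12))

def best_match(scan_hist, candidate_hists):
    # the candidate scene with the highest similarity is taken as the match
    return int(np.argmax([cosine_similarity(scan_hist, h) for h in candidate_hists]))
```

The final 3D-NDT relocation is not shown; an implementation such as PCL's NormalDistributionsTransform would refine the pose after this coarse match.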
accurate positioning process of the binocular structured light vision system: the binocular structured light camera system is composed of two cameras and a projector, the projector being placed between the two cameras to project structured light grating images. First, the projector fixed on the mechanical arm projects a structured light image with a certain code onto the operation target, and the binocular camera then acquires the projected image on the target object. Next, the phase of the captured projection image is unwrapped according to the decoding algorithm corresponding to the structured light code, giving a continuous phase map of the operation target. A depth map of the operation target is then obtained through a binocular stereo matching algorithm; based on the obtained depth information and the calibrated intrinsic and extrinsic parameters of the binocular camera, the distance, relative position and orientation between the operation target and the binocular camera are calculated, and the target is reconstructed in three dimensions. Finally, the pose of the operation target is estimated and located. Since the operation target is approximately regular in shape, its spatial location may be represented by the centroid of the target point cloud: the centroid coordinates and normal information are computed from the target point cloud obtained by the structured light algorithm, and the centroid coordinates are then transformed into the coordinate system of the mechanical arm by combining them with the rotation-translation matrix obtained from hand-eye calibration, so that the mechanical arm can be controlled to operate on the target.
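As a hedged illustration of the decoding and localisation steps, the sketch below assumes a standard four-step phase-shift code (the description above does not fix a particular coding scheme) and represents the target by its point cloud centroid as stated.

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Wrapped phase from four images with projector phase shifts of
    0, pi/2, pi, 3*pi/2, assuming I_k = A + B*cos(phi + k*pi/2)."""
    # phase unwrapping and binocular stereo matching would follow (omitted)
    return np.arctan2(i3 - i1, i0 - i2)

def target_point_in_base(cloud_cam, T_base_cam):
    """Centroid of the target cloud (camera frame), moved to the arm-base
    frame via the hand-eye rotation-translation matrix."""
    c = np.append(np.asarray(cloud_cam).mean(axis=0), 1.0)
    return (T_base_cam @ c)[:3]
```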
Further, in step S1, the three-dimensional environment point cloud map is obtained by performing point cloud fusion through the movement of the mobile robot in combination with the laser radar scanning.
Further, in step S2, the preliminary positioning is performed by combining the movement of the robot and the mechanical arm with the laser radar scanning.
Further, in step S2, the preliminary positioning includes point cloud acquisition, point cloud data preprocessing, similarity measurement, and repositioning.
Further, in step S3, the mobile robot drives the mechanical arm to move based on the relative pose relationship between the laser radar and the structured light camera, which is obtained by calibrating the laser radar with the structured light camera; the path of the mechanical arm is then planned according to the relative pose relationship between the operation target and the laser radar.
Further, in step S4, the three-dimensional data information within the positioning range of the structured light camera is obtained by a structured light decoding method, where the structured light decoding method includes: projecting coded structured light with the structured light camera, capturing the structured light pattern modulated by the operation target, decoding the captured coded pattern, and obtaining from the decoded information the three-dimensional data of the operation target within the positioning range of the structured light camera.
Further, in step S5, the high-precision three-dimensional positioning comprises the following steps: first, the structured light camera and the tail end of the mechanical arm are calibrated to obtain their relative pose relationship; next, a positioning point is selected according to the three-dimensional data information of the operation target within the measurement range of the structured light camera; the path of the mechanical arm is then planned according to the calibrated pose relationship; and finally the mechanical arm is driven so that its tail end reaches the positioning point.
A high-precision three-dimensional positioning device for a nuclear radiation environment robot comprises a mobile robot, a laser radar 3 and a structured light camera 4, wherein the mobile robot comprises a mobile platform 1 and a mechanical arm 2, the mechanical arm 2 is fixedly connected with the mobile platform 1, the laser radar 3 is fixedly arranged at the end of the mechanical arm 2 connected with the mobile platform 1, and the structured light camera 4 is fixedly arranged at the other end of the mechanical arm 2. The mobile platform 1 can be wheeled or rail-mounted according to the actual scene. The laser radar 3 is a 3D laser radar mounted on the mechanical arm 2; its specific position can be selected according to the actual nuclear radiation scene, and it is calibrated with the structured light camera 4 once its position is fixed. The projection light source of the structured light camera 4 can be white light, LED light, laser or infrared light; the projection mode can be point-structured, line-structured or area-structured projection; and the acquisition mode can be monocular or binocular.
The foregoing is merely a preferred embodiment of the invention; it should be understood that the described embodiments are only some, rather than all, of the embodiments of the invention. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort shall fall within the protection scope of the present invention. The invention is not intended to be limited to the forms disclosed herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein; modifications and variations may be effected by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.