Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a robot teleoperation auxiliary method and system based on binocular stereo vision.
In order to achieve the purpose, the invention adopts the following technical scheme:
the robot teleoperation auxiliary method based on binocular stereo vision comprises the following steps:
an image data output step: outputting the digital image data to an image measurement processor through a measurement camera, and simultaneously outputting the digital image data to a stereoscopic display processor through a monitoring camera;
an image data processing step: the image measurement processor receives the image data from the measurement cameras, calculates the relative pose of the designated target, and transmits the pose to the stereo display processor and the robot master control system; the stereo display processor receives the image data from the monitoring cameras, performs epipolar rectification, and transmits the rectified images to the stereo display equipment;
an image data display step: and the stereo display equipment displays the image data output by the stereo display processor.
The image data output step includes:
the measuring cameras comprise a left measuring camera and a right measuring camera, and the left monitoring camera and the right monitoring camera are arranged between the left measuring camera and the right measuring camera;
digital image data is output to an image measurement processor through the left and right measurement cameras, and simultaneously digital image data is output to a stereoscopic display processor through the left and right monitoring cameras.
The image data output step includes:
the measuring camera outputs digital image data to the image measuring processor according to a sensor driving time sequence output by the image measuring processor; the monitoring camera outputs digital image data to the stereoscopic display processor according to a sensor driving timing sequence output by the stereoscopic display processor.
The stereoscopic display equipment comprises a stereoscopic display helmet and a stereoscopic display television, and image data output by the stereoscopic display processor is displayed on the stereoscopic display helmet or the stereoscopic display television according to actual requirements.
The display helmet combines the images of the left and right monitoring cameras into a single 1920 × 1080 image, which is sent over an HDMI interface to the LED-backlit liquid crystal display inside the helmet for display.
The image data processing step comprises:
the image measurement processor performs two-threaded computation on the digital image data output by the left and right measurement cameras, with the left and right image sensors configured to output image data at 4 frames/second and a resolution of 1920 × 1200;
storing the digital image data in synchronous dynamic random access memory (SDRAM) for use in pose measurement;
the image measurement processor performs edge extraction, edge matching, three-dimensional reconstruction of the target point cloud, spatial plane fitting, and pose measurement on the received measurement-camera image data;
transmitting the epipolar rectification look-up table to the stereo display processor through an asynchronous serial port;
and transmitting the calculated pose information to the stereo display processor through an asynchronous serial port for the operator to view.
In order to achieve the purpose, the invention also adopts the following technical scheme:
provided is a robot teleoperation auxiliary system based on binocular stereo vision, including:
an image data output unit: outputting the digital image data to an image measurement processor through a measurement camera, and simultaneously outputting the digital image data to a stereoscopic display processor through a monitoring camera;
an image data processing unit: the image measurement processor receives the image data from the measurement cameras, calculates the relative pose of the designated target, and transmits the pose to the stereo display processor and the robot master control system; the stereo display processor receives the image data from the monitoring cameras, performs epipolar rectification, and transmits the rectified images to the stereo display equipment;
an image data display unit: and the stereo display equipment displays the image data output by the stereo display processor for an operator to watch.
In the image data output unit:
the measuring cameras comprise a left measuring camera and a right measuring camera, and the left monitoring camera and the right monitoring camera are arranged between the left measuring camera and the right measuring camera;
digital image data is output to an image measurement processor through the left and right measurement cameras, and simultaneously digital image data is output to a stereoscopic display processor through the left and right monitoring cameras.
In the image data output unit:
the measuring camera outputs digital image data to the image measuring processor according to a sensor driving time sequence output by the image measuring processor; the monitoring camera outputs digital image data to the stereoscopic display processor according to a sensor driving timing sequence output by the stereoscopic display processor.
The stereoscopic display equipment comprises a stereoscopic display helmet and a stereoscopic display television, and image data output by the stereoscopic display processor is displayed on the stereoscopic display helmet or the stereoscopic display television according to actual requirements for an operator to watch.
The display helmet combines the images of the left and right monitoring cameras into a single 1920 × 1080 image, which is sent over an HDMI interface to the LED-backlit liquid crystal display inside the helmet for display.
In the image data processing unit:
the image measurement processor performs two-threaded computation on the digital image data output by the left and right measurement cameras, with the left and right image sensors configured to output image data at 4 frames/second and a resolution of 1920 × 1200;
storing the digital image data in synchronous dynamic random access memory (SDRAM) for use in pose measurement;
the image measurement processor performs edge extraction, edge matching, three-dimensional reconstruction of the target point cloud, spatial plane fitting, and pose measurement on the received measurement-camera image data;
transmitting the epipolar rectification look-up table to the stereo display processor through an asynchronous serial port;
and transmitting the calculated pose information to the stereo display processor through an asynchronous serial port for the operator to view.
The invention has the beneficial effects that: the invention discloses a robot teleoperation auxiliary method and system based on binocular stereo vision. Based on a binocular camera erection scheme that combines measurement and monitoring functions, the measuring and monitoring cameras are adjusted and calibrated with high precision; the binocular images are rectified and displayed on a stereo helmet; and targets of interest are manually marked for pose measurement and calculation. The invention enables real-time, large-field-of-view immersive monitoring that gives the operator a strong sense of presence, while pose measurement of the target of interest achieves a measured accuracy better than 0.95 mm and 0.58°, effectively assisting the operator in teleoperating the robot.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides a robot teleoperation assisting method based on binocular stereo vision, including the following steps:
step 1 is an image data output step: outputting the digital image data to an image measurement processor through a measurement camera, and simultaneously outputting the digital image data to a stereoscopic display processor through a monitoring camera;
step 2 is an image data processing step: the image measurement processor receives the image data from the measurement cameras, calculates the relative pose of the designated target, and transmits the pose to the stereo display processor and the robot master control system; the stereo display processor receives the image data from the monitoring cameras, performs epipolar rectification, and transmits the rectified images to the stereo display equipment;
and step 3 is an image data display step: and the stereo display equipment displays the image data output by the stereo display processor for an operator to watch.
In practical application, the robot teleoperation auxiliary method based on binocular stereo vision uses four cameras arranged in two groups: the middle two are monitoring cameras, the outer two are measuring cameras, and each camera has an alignment cube prism mounted on top. The imaging-plane coordinate system of each camera is adjusted parallel to its cube-prism coordinate system using an auto-collimating theodolite; a reference prism is placed between the two monitoring cameras, and the angle between each camera prism and the reference prism is adjusted so that the optical axes of the monitoring cameras remain parallel. The measured included angle between each camera's optical axis and the horizontal plane is 79°. The intrinsic and extrinsic parameters of the four cameras are calibrated with a checkerboard calibration board using Zhang Zhengyou's planar calibration method.
In one embodiment, in the image data output step:
the measuring cameras comprise a left measuring camera and a right measuring camera, and the left monitoring camera and the right monitoring camera are arranged between the left measuring camera and the right measuring camera;
digital image data is output to an image measurement processor through the left and right measurement cameras, and simultaneously digital image data is output to a stereoscopic display processor through the left and right monitoring cameras.
As shown in fig. 2, to ensure the measurement accuracy and working range of the measuring cameras, the optical system is designed with a 12 mm focal length and a 42° × 27° field angle; the baseline B is fixed at 183 mm by the volume limits of the system, and the measurement distance ranges from 350 mm to 800 mm. An analytical relationship is established between the working range d1–d2 of the measuring cameras and the camera arrangement angle α and baseline B, from which the binocular camera arrangement is determined.
As shown in fig. 3, when the optical axes of the measuring cameras form a 79° angle with the horizontal, the horizontal field coverage is 254.4 mm at a distance of 350 mm (a horizontal field angle of 40°) and 465.1 mm at 800 mm (a field angle of 32°). The optical axes of the two measuring cameras are therefore arranged symmetrically at a 79° included angle with the horizontal.
As shown in fig. 4, when the optical axes of the left and right monitoring cameras form a 79° angle with the horizontal, the horizontal field coverage is 254.4 mm at a distance of 350 mm (a horizontal field angle of 40°) and 465.1 mm at 800 mm (a field angle of 32°). The optical axes of the two monitoring cameras are therefore arranged symmetrically at a 79° included angle with the horizontal.
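The coverage figures above can be reproduced, to a first approximation, from simple pinhole field-of-view geometry. The sketch below is an illustration only, not the patent's exact derivation (which also accounts for the 79° convergence of the tilted optical axes, hence the small discrepancy at the far distance):

```python
import math

def field_coverage(distance_mm: float, h_fov_deg: float) -> float:
    """Width covered by a horizontal field angle at a given working distance,
    assuming a fronto-parallel scene plane (flat pinhole approximation)."""
    return 2.0 * distance_mm * math.tan(math.radians(h_fov_deg) / 2.0)

# ~254.8 mm at 350 mm with a 40 deg field angle, close to the quoted 254.4 mm;
# ~458.8 mm at 800 mm with 32 deg, versus the quoted 465.1 mm (the difference
# comes from the tilted optical axes, which this approximation ignores).
print(round(field_coverage(350, 40), 1))
print(round(field_coverage(800, 32), 1))
```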
In one embodiment, the image data output step comprises:
the measuring camera outputs digital image data to the image measurement processor according to a sensor driving timing sequence output by the image measurement processor; the monitoring camera outputs digital image data to the stereoscopic display processor according to a sensor driving timing sequence output by the stereoscopic display processor. The electronics of the monitoring camera are designed identically to those of the measuring camera, consisting of an ON Semiconductor VITA-series area-array CMOS image sensor and peripheral circuitry, and output digital image data according to the driving timing provided by the corresponding processor. A schematic circuit block diagram of the image measurement processor is shown in FIG. 6.
In one embodiment, the stereoscopic display device comprises a stereoscopic display helmet and a stereoscopic display television, and image data output by the stereoscopic display processor is displayed on the stereoscopic display helmet or the stereoscopic display television according to actual needs for an operator to watch.
In one embodiment, the display helmet combines the images of the left and right monitoring cameras into a single 1920 × 1080 image, which is sent over an HDMI interface to the LED-backlit liquid crystal display inside the helmet for display.
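The helmet's 1920 × 1080 composite can be sketched with NumPy: each monitoring-camera frame is reduced to half width and the two halves are packed side by side. The side-by-side packing is an assumption for illustration; the patent does not specify the exact stereo frame format:

```python
import numpy as np

def compose_side_by_side(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Pack two half-width views into one 1920x1080 frame (side-by-side stereo)."""
    assert left.shape == right.shape == (1080, 960, 3)
    return np.hstack([left, right])  # shape (1080, 1920, 3)

# Dummy half-width frames standing in for the resized camera images.
left = np.zeros((1080, 960, 3), dtype=np.uint8)
right = np.full((1080, 960, 3), 255, dtype=np.uint8)
frame = compose_side_by_side(left, right)
print(frame.shape)  # (1080, 1920, 3)
```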
In one embodiment, the image data processing step comprises:
the image measurement processor performs two-threaded computation on the digital image data output by the left and right measurement cameras, with the left and right image sensors configured to output image data at 4 frames/second and a resolution of 1920 × 1200;
storing the digital image data in synchronous dynamic random access memory (SDRAM) for use in pose measurement;
the image measurement processor performs edge extraction, edge matching, three-dimensional reconstruction of the target point cloud, spatial plane fitting, and pose measurement on the received measurement-camera image data;
transmitting the epipolar rectification look-up table to the stereo display processor through an asynchronous serial port;
and transmitting the calculated pose information to the stereo display processor through an asynchronous serial port for the operator to view.
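The rectification table transmitted over the serial port can be applied as a simple remap: for each rectified pixel, the table stores the source coordinates in the raw image. The sketch below uses a hypothetical identity-plus-shift table on a tiny image to show the indexing step only; the patent's actual table format is not detailed:

```python
import numpy as np

def remap(image: np.ndarray, map_y: np.ndarray, map_x: np.ndarray) -> np.ndarray:
    """Rectify via look-up table: out[i, j] = image[map_y[i, j], map_x[i, j]].
    Nearest-neighbour sampling; a real system would interpolate."""
    return image[map_y, map_x]

h, w = 4, 6
img = np.arange(h * w).reshape(h, w)
# Hypothetical table: shift every row one pixel left, clamped at the border.
ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
xs = np.clip(xs + 1, 0, w - 1)
out = remap(img, ys, xs)
print(out[0])  # first row shifted left by one, edge value repeated
```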
It should be noted that edges are among the most basic features of an image; they greatly reduce the amount of information to be processed while retaining the shape information of objects in the image. The invention extracts image edges as the main feature to complete pose measurement of the three-dimensional target. As shown in fig. 7, the image measurement processor mainly performs edge extraction, edge matching, three-dimensional reconstruction, spatial plane fitting, and pose measurement on the left and right camera images.
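The spatial plane fitting step can be sketched as a linear least-squares problem: writing the plane as ax + by + cz = 1 (the same form used for the tabulated plane equations), the coefficients follow directly from the reconstructed 3-D points. A minimal NumPy sketch with synthetic points; the patent does not detail its actual fitting procedure:

```python
import numpy as np

def fit_plane(points: np.ndarray) -> np.ndarray:
    """Least-squares fit of a*x + b*y + c*z = 1 to an (N, 3) point cloud."""
    coeffs, *_ = np.linalg.lstsq(points, np.ones(len(points)), rcond=None)
    return coeffs  # (a, b, c)

# Synthetic planar patch: z = 50 everywhere, i.e. 0*x + 0*y + (1/50)*z = 1.
rng = np.random.default_rng(0)
xy = rng.uniform(-10, 10, size=(200, 2))
pts = np.column_stack([xy, np.full(200, 50.0)])
a, b, c = fit_plane(pts)
print(round(c, 4))  # recovers c = 1/50 = 0.02
```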
It should be noted that, as shown in fig. 8a and 8b, two sets of target depth maps are generated by the binocular measuring cameras; the algorithm automatically extracts the salient plane of the target, performs three-dimensional reconstruction and spatial plane fitting, and calculates the pose of the target. The target pose coordinates are shown in fig. 9; the salient-plane equations, position coordinates, and normal angles for the two target positions are listed in the table below:

| Position | Target salient plane equation | Position coordinates | Normal angle |
| Position 1 | -0.00192144x + 0.00111288y + 0.0212844z = 1 | (3.39, -5.572, 47.43) | (84.84, 87.01, 5.95) |
| Position 2 | 0.000629517x + 0.000639321y + 0.0208609z = 1 | (1.60, -5.926, 48.16) | (88.27, 88.24, 3.00) |
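The normal-angle entries can be reproduced from the plane coefficients: a plane ax + by + cz = 1 has normal (a, b, c), and the three tabulated values match the angles between that normal and the coordinate axes (taking absolute values of the components; this interpretation is inferred from the numbers, not stated in the text). A short check against the Position 1 row:

```python
import math

def normal_angles(a: float, b: float, c: float) -> tuple:
    """Angles (degrees) between the plane normal (a, b, c) and the x, y, z axes."""
    n = math.sqrt(a * a + b * b + c * c)
    return tuple(math.degrees(math.acos(abs(v) / n)) for v in (a, b, c))

ax, ay, az = normal_angles(-0.00192144, 0.00111288, 0.0212844)
print(round(ax, 2), round(ay, 2), round(az, 2))  # close to (84.84, 87.01, 5.95)
```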
In order to achieve the purpose, the invention also adopts the following technical scheme:
as shown in fig. 5, there is provided a robot teleoperation assistance system based on binocular stereo vision, including:
an image data output unit: outputting the digital image data to an image measurement processor through a measurement camera, and simultaneously outputting the digital image data to a stereoscopic display processor through a monitoring camera;
an image data processing unit: the image measurement processor receives the image data from the measurement cameras, calculates the relative pose of the designated target, and transmits the pose to the stereo display processor and the robot master control system; the stereo display processor receives the image data from the monitoring cameras, performs epipolar rectification, and transmits the rectified images to the stereo display equipment;
an image data display unit: and the stereo display equipment displays the image data output by the stereo display processor for an operator to watch.
In practical application, the robot teleoperation auxiliary system based on binocular stereo vision consists of four cameras arranged in two groups: the middle two are monitoring cameras, the outer two are measuring cameras, and each camera has an alignment cube prism mounted on top. The imaging-plane coordinate system of each camera is adjusted parallel to its cube-prism coordinate system using an auto-collimating theodolite; a reference prism is placed between the two monitoring cameras, and the angle between each camera prism and the reference prism is adjusted so that the optical axes of the monitoring cameras remain parallel. The measured included angle between each camera's optical axis and the horizontal plane is 79°. The intrinsic and extrinsic parameters of the four cameras are calibrated with a checkerboard calibration board using Zhang Zhengyou's planar calibration method.
In one embodiment, in the image data output unit:
the measuring cameras comprise a left measuring camera and a right measuring camera, and the left monitoring camera and the right monitoring camera are arranged between the left measuring camera and the right measuring camera;
digital image data is output to an image measurement processor through the left and right measurement cameras, and simultaneously digital image data is output to a stereoscopic display processor through the left and right monitoring cameras.
As shown in fig. 2, to ensure the measurement accuracy and working range of the measuring cameras, the optical system is designed with a 12 mm focal length and a 42° × 27° field angle; the baseline B is fixed at 183 mm by the volume limits of the system, and the measurement distance ranges from 350 mm to 800 mm. An analytical relationship is established between the working range d1–d2 of the measuring cameras and the camera arrangement angle α and baseline B, from which the binocular camera arrangement is determined.
As shown in fig. 3, when the optical axes of the measuring cameras form a 79° angle with the horizontal, the horizontal field coverage is 254.4 mm at a distance of 350 mm (a horizontal field angle of 40°) and 465.1 mm at 800 mm (a field angle of 32°). The optical axes of the two measuring cameras are therefore arranged symmetrically at a 79° included angle with the horizontal.
As shown in fig. 4, when the optical axes of the left and right monitoring cameras form a 79° angle with the horizontal, the horizontal field coverage is 254.4 mm at a distance of 350 mm (a horizontal field angle of 40°) and 465.1 mm at 800 mm (a field angle of 32°). The optical axes of the two monitoring cameras are therefore arranged symmetrically at a 79° included angle with the horizontal.
In one embodiment, in the image data output unit:
the measuring camera outputs digital image data to the image measurement processor according to a sensor driving timing sequence output by the image measurement processor; the monitoring camera outputs digital image data to the stereoscopic display processor according to a sensor driving timing sequence output by the stereoscopic display processor. The electronics of the monitoring camera are designed identically to those of the measuring camera, consisting of an ON Semiconductor VITA-series area-array CMOS image sensor and peripheral circuitry, and output digital image data according to the driving timing provided by the corresponding processor.
In one embodiment, the stereoscopic display device comprises a stereoscopic display helmet and a stereoscopic display television, and image data output by the stereoscopic display processor is displayed on the stereoscopic display helmet or the stereoscopic display television according to actual needs for an operator to watch.
In one embodiment, the display helmet combines the images of the left and right monitoring cameras into a single 1920 × 1080 image, which is sent over an HDMI interface to the LED-backlit liquid crystal display inside the helmet for display.
In one embodiment, the image data processing unit operates as follows:
the image measurement processor performs two-threaded computation on the digital image data output by the left and right measurement cameras, with the left and right image sensors configured to output image data at 4 frames/second and a resolution of 1920 × 1200;
storing the digital image data in synchronous dynamic random access memory (SDRAM) for use in pose measurement;
the image measurement processor performs edge extraction, edge matching, three-dimensional reconstruction of the target point cloud, spatial plane fitting, and pose measurement on the received measurement-camera image data;
transmitting the epipolar rectification look-up table to the stereo display processor through an asynchronous serial port;
and transmitting the calculated pose information to the stereo display processor through an asynchronous serial port for the operator to view.
It should be noted that edges are among the most basic features of an image; they greatly reduce the amount of information to be processed while retaining the shape information of objects in the image. The invention extracts image edges as the main feature to complete pose measurement of the three-dimensional target. As shown in fig. 7, the image measurement processor mainly performs edge extraction, edge matching, three-dimensional reconstruction, spatial plane fitting, and pose measurement on the left and right camera images.
It should be noted that, as shown in fig. 8a and 8b, two sets of target depth maps are generated by the binocular measuring cameras; the algorithm automatically extracts the salient plane of the target, performs three-dimensional reconstruction and spatial plane fitting, and calculates the pose of the target. The target pose coordinates are shown in fig. 9; the salient-plane equations, position coordinates, and normal angles for the two target positions are listed in the table below:

| Position | Target salient plane equation | Position coordinates | Normal angle |
| Position 1 | -0.00192144x + 0.00111288y + 0.0212844z = 1 | (3.39, -5.572, 47.43) | (84.84, 87.01, 5.95) |
| Position 2 | 0.000629517x + 0.000639321y + 0.0208609z = 1 | (1.60, -5.926, 48.16) | (88.27, 88.24, 3.00) |
The above-described embodiments of the present invention should not be construed as limiting the scope of the present invention. Any other corresponding changes and modifications made according to the technical idea of the present invention should be included in the protection scope of the claims of the present invention.