Disclosure of Invention
In order to overcome the defects of the background art, the invention provides an indoor multi-mobile-robot positioning method based on camera splicing and area matching.
The invention adopts the following technical means:
The indoor multi-mobile-robot positioning method based on camera splicing and area matching comprises the following steps:
step 1, calibrating a monocular camera to obtain the camera parameters, calibrating the ground map, and establishing a one-to-one correspondence between the ground-map area in the video image acquired by the monocular camera and the real ground-map area;
step 2, tracking a plurality of robots through a positioning and tracking algorithm: markers are fixed on the mobile robots and simultaneously serve as labels, the different areas of the labels are matched with the corresponding robots, and the plurality of robots are tracked by tracking the markers of different areas;
step 3, positioning the centroid point: the centroid coordinate of the marker replaces the robot position coordinate, the vertex coordinates and the three side lengths of the marker are acquired, the three side lengths are sorted, and the shortest of the three side lengths is obtained;
step 4, fixing the cameras and obtaining the moving-track image of each mobile robot by splicing the camera pictures, so as to identify and position the robots.
Compared with the prior art, the invention has the following advantages:
The invention uses labels of different areas for matching, so that different mobile robots are distinguished; the method can position a plurality of mobile robots simultaneously, which shortens the positioning time, improves efficiency, and solves the problem of low robot positioning precision.
In the invention, the bottom edge of each label is the shortest side, so the head and the tail of each mobile robot can be reliably determined, and thus the attitude angle in any direction can be obtained.
In the invention, the pictures obtained by a plurality of fixed cameras are spliced together, which effectively solves the problem that a single camera has a limited display picture.
Detailed Description
In order that those skilled in the art may better understand the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
As shown in figs. 1-5, the invention provides an indoor multi-mobile-robot positioning method based on camera splicing and area matching, which comprises the following steps:
Step 1: a monocular camera is calibrated to obtain the camera parameters, the ground map is calibrated, and a one-to-one correspondence is established between the ground-map area in the video image acquired by the monocular camera and the real ground-map area. A geometric model of camera imaging is established to determine the correlation between the position of a space object and the corresponding point in the image, and the coordinate conversion relations are established according to the camera imaging model. In the coordinate-system conversion, the conversion relation between the world coordinate system and the pixel coordinate system is:
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}

wherein [X_w Y_w Z_w 1]^T represents the homogeneous coordinates of the corresponding point in the world coordinate system, [X_c Y_c Z_c 1]^T represents the homogeneous coordinates of an arbitrary point in the camera coordinate system, and [u v 1]^T represents the corresponding point in the pixel coordinate system, in pixels; f represents the camera focal length, in millimeters; d_x and d_y represent the physical size of each pixel on the x axis and y axis, in millimeters; the product of the first two matrices represents the internal parameters of the camera, and the third matrix represents the external parameters of the camera, where R is the rotation matrix and t is the translation vector.
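The world-to-pixel conversion above can be sketched numerically; the focal length, pixel size, principal point, and extrinsic pose below are hypothetical values chosen for illustration, not parameters from this method:

```python
import numpy as np

# Hypothetical intrinsics: f in millimeters, d_x/d_y in millimeters per pixel,
# (u0, v0) principal point in pixels.
f, dx, dy = 4.0, 0.01, 0.01
u0, v0 = 320.0, 240.0
K = np.array([[f / dx, 0.0,    u0],
              [0.0,    f / dy, v0],
              [0.0,    0.0,    1.0]])

# Hypothetical extrinsics: identity rotation, camera 2 m from the ground plane.
R = np.eye(3)
t = np.array([[0.0], [0.0], [2.0]])

def world_to_pixel(Pw):
    """Project a world point (meters) to pixel coordinates (u, v)."""
    Pc = R @ Pw.reshape(3, 1) + t      # world frame -> camera frame
    uvw = K @ Pc                       # camera frame -> homogeneous pixels
    return uvw[:2, 0] / uvw[2, 0]      # perspective division by Z_c

uv = world_to_pixel(np.array([0.0, 0.0, 0.0]))  # a point on the optical axis
```

A point on the optical axis projects to the principal point (320, 240), which is a quick sanity check on the matrices.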
Step 2: a plurality of robots are tracked through the positioning and tracking algorithm. Markers are fixed on the mobile robots and simultaneously serve as labels; the different areas of the labels are matched with the corresponding robots, and the plurality of robots are tracked by tracking markers of different areas. For each frame acquired by the camera, the color space is converted from RGB to LAB, the image is grayed, Gaussian filtering for noise reduction and image erosion are performed, red is filtered out in the LAB color space, and the outline of each label is found and output. Finally, each outline is traversed, the area of each label is obtained through an area function and placed into a list for sorting, thereby realizing matching and tracking.
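The area-based matching at the end of step 2 can be sketched in plain Python; the contour extraction itself (e.g. with OpenCV) is assumed to run upstream, and the contour areas and robot IDs below are hypothetical:

```python
def match_robots(contour_areas, robot_ids):
    """Pair each detected label contour with a robot ID.

    robot_ids must be ordered from smallest to largest expected label area,
    so sorting the measured areas aligns the two lists.
    """
    ranked = sorted(range(len(contour_areas)), key=lambda i: contour_areas[i])
    return {robot_ids[rank]: contour_areas[i] for rank, i in enumerate(ranked)}

# Three hypothetical contour areas (pixels) detected in one frame.
matches = match_robots([950.0, 410.0, 1720.0], ["R1", "R2", "R3"])
```

Because each robot carries a label of a distinct area, sorting by area is enough to keep identities consistent from frame to frame.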
Step 3: the centroid point is positioned, and the centroid coordinate of the marker replaces the robot position coordinate. The recognition algorithm obtains the vertex coordinates and the three side lengths of the marker and sorts the side lengths; the shortest side is taken as the base, the two vertex coordinates of the base are determined, the center coordinate of the base is obtained from these two vertex coordinates, and the attitude angle of the robot is obtained by combining the base center with the centroid coordinate;
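Step 3 can be sketched as follows; the triangular-marker vertices are hypothetical, and the heading convention (from the base midpoint toward the centroid) is one reasonable reading of the text, since the base marks the tail of the robot:

```python
import math

def marker_pose(vertices):
    """Given the three vertex coordinates of a triangular marker, return the
    centroid (used as the robot position) and the attitude angle: the shortest
    side is the base (tail), and the heading points from the base midpoint
    toward the centroid."""
    (ax, ay), (bx, by), (cx, cy) = vertices
    sides = [((ax, ay), (bx, by)), ((bx, by), (cx, cy)), ((cx, cy), (ax, ay))]
    lengths = [math.dist(p, q) for p, q in sides]
    p, q = sides[lengths.index(min(lengths))]        # shortest side = base
    mx, my = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2    # center of the base
    gx, gy = (ax + bx + cx) / 3, (ay + by + cy) / 3  # centroid of the marker
    return (gx, gy), math.atan2(gy - my, gx - mx)

# Isosceles marker whose short base lies on the x axis: heading is +90 degrees.
centroid, theta = marker_pose([(0.0, 0.0), (1.0, 0.0), (0.5, 3.0)])
```

Because atan2 covers the full circle, this yields an attitude angle in any direction, as the disclosure claims.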
Step 4: a plurality of cameras are fixed around the field, and the moving-track image of each mobile robot is obtained by splicing the camera pictures. Each frame of the video image is converted into the required color space, grayed, and processed with Gaussian filtering for noise reduction and image erosion so that the label outlines become more distinct; the outlines of all labels are traversed and their areas are calculated, and finally, combined with the method in step 3, identification and positioning are performed.
In a preferred embodiment of the present application, step 1 further includes the following steps:
Step 1-1, camera calibration and de-distortion: the world coordinate system is converted into the pixel coordinate system so that the positions of the corresponding points on the surface of an object in space correspond one by one to the positions of the points in the image, and the camera is calibrated by the Zhang Zhengyou calibration method to obtain the internal parameters and external parameters of the camera.
The distortion process comprises radial distortion and tangential distortion. Radial distortion is distortion distributed along the radial direction of the lens; tangential distortion is generated because the lens is not parallel to the plane of the camera sensor or the image plane, and its influence is small, so only radial distortion is considered. The distortion parameters k_1 and k_2 are obtained by combining the internal parameters, the external parameters, and the distortion model of the camera, and the distortion model is:
\hat{x} = x\,(1 + k_1 r^2 + k_2 r^4), \qquad \hat{y} = y\,(1 + k_1 r^2 + k_2 r^4), \qquad r^2 = x^2 + y^2

where (u, v) denotes the ideal undistorted pixel coordinates and (\hat{u}, \hat{v}) the pixel coordinates after distortion, in pixels; (x, y) denotes the ideal undistorted continuous image coordinates and (\hat{x}, \hat{y}) the continuous image coordinates after distortion, in millimeters; and k_1, k_2 denote the distortion parameters.
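The radial distortion model can be sketched directly; the coefficients k_1 and k_2 below are hypothetical, since the calibrated values depend on the specific lens:

```python
def distort(x, y, k1, k2):
    """Apply the two-parameter radial distortion model to ideal continuous
    image coordinates (x, y) in millimeters."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2   # 1 + k1*r^2 + k2*r^4
    return x * factor, y * factor

# Hypothetical coefficients: a point 1 mm off-axis is pushed slightly outward.
xd, yd = distort(1.0, 0.0, k1=0.02, k2=0.001)
```

De-distortion inverts this mapping (typically by iteration), which is what the calibration toolchain does before the inverse perspective transformation of step 1-3.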
Step 1-2, selecting an initial calibration point and an expected calibration point;
Step 1-3, multi-picture inverse perspective transformation and splicing are carried out to obtain an image whose shape is consistent with the ground map. In this process, after each frame of each camera is de-distorted, the inverse perspective transformation method is applied and the multiple pictures of the experimental field are spliced; the position coordinates of each robot on the real ground map are then obtained by utilizing the scaling relationship between the spliced pixel coordinates and the real ground map. The inverse perspective transformation formula of the i-th camera is as follows:
\begin{bmatrix} X_i \\ Y_i \\ Z_i \end{bmatrix} = M_i \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}

wherein [x_i y_i 1]^T represents the point coordinates of the original image of the i-th camera in the pixel coordinate system, [X_i Y_i Z_i]^T represents the coordinates of the corresponding point after the transformation of the i-th camera, and M_i represents the 3×3 transformation matrix of the i-th camera. Since the pixel coordinate system is two-dimensional, the transformation is from two dimensions to three dimensions, and the transformed coordinates are normalized to the corresponding point [X_i' Y_i' Z_i']^T on the two-dimensional coordinate system:

X_i' = X_i / Z_i, \qquad Y_i' = Y_i / Z_i, \qquad Z_i' = 1
From the above, four pairs of coordinate values are needed to solve all the unknowns in the matrix M_i, where (l_i, h_i), (0, h_i), (0, 0), (l_i, 0) represent the initial coordinate points of the original image of the i-th camera and (L_i, H_i), (0, H_i), (0, 0), (L_i, 0) represent the desired coordinate points after the inverse perspective transformation of the i-th camera, all in pixels. The scaling ratio between the spliced pixel coordinates and the real ground map in length is \mu_1 = L / L_s, and the scaling ratio in width is \mu_2 = H / H_s, wherein L and H respectively represent the length and width of the real ground map, and L_s and H_s represent the length and width of the spliced image in pixels;
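Solving M_i from four point pairs can be sketched with a direct linear transform; the corner coordinates below are hypothetical and stand in for the initial and desired calibration points of one camera:

```python
import numpy as np

def solve_homography(src, dst):
    """Solve the 3x3 matrix M_i from four point pairs (direct linear
    transform with M[2,2] fixed to 1): eight equations, eight unknowns."""
    A, b = [], []
    for (x, y), (X, Y) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * X, -y * X]); b.append(X)
        A.append([0, 0, 0, x, y, 1, -x * Y, -y * Y]); b.append(Y)
    m = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(m, 1.0).reshape(3, 3)

def transform(M, x, y):
    """Apply M_i and normalize: X' = X/Z, Y' = Y/Z."""
    X, Y, Z = M @ np.array([x, y, 1.0])
    return X / Z, Y / Z

# Hypothetical corner pairs: a trapezoid in the raw frame maps to the
# rectangular region it should occupy after the inverse perspective transform.
src = [(100, 400), (540, 400), (600, 80), (40, 80)]
dst = [(0, 0), (640, 0), (640, 480), (0, 480)]
M = solve_homography(src, dst)
```

By construction, each source corner lands exactly on its desired corner, which is why four pairs suffice to determine all eight unknowns of M_i.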
Therefore, after the pictures of the n cameras are spliced, the position coordinates of each robot on the real ground map are X = \mu_1 x and Y = \mu_2 y, wherein (x, y) are the pixel coordinates of the robot in the spliced image, X represents the abscissa and Y the ordinate of the robot on the real ground map, and the units are meters.
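The pixel-to-map scaling can be sketched in one step; the field and image dimensions below are hypothetical:

```python
def pixel_to_world(x, y, L, H, Ls, Hs):
    """Scale spliced-image pixel coordinates (x, y) to real ground-map
    coordinates in meters, given the map's real length L and width H (meters)
    and the spliced image's length Ls and width Hs (pixels)."""
    return x * L / Ls, y * H / Hs

# Hypothetical dimensions: a 12 m x 9 m field imaged at 1920 x 1080 pixels.
X, Y = pixel_to_world(960, 540, L=12.0, H=9.0, Ls=1920, Hs=1080)
```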
In order to verify the effectiveness of the indoor multi-mobile-robot positioning method based on camera splicing and area matching in practical engineering applications, the positioning effect was tested in terms of static positioning precision on a self-built experimental platform.
In the positioning precision experiment, nine positions shown in the figure were selected within the scene, the mobile robot was placed at each of the nine points in turn for positioning, the coordinate data output by the algorithm were recorded, and they were compared with the data measured in the actual experiment. The average static errors obtained by the test are all within 5 mm, indicating high positioning accuracy.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments. In the foregoing embodiments of the present application, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments. In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners.
It should be noted that the above embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the above embodiments, it should be understood by those skilled in the art that the technical solution described in the above embodiments may be modified or some or all of the technical features may be equivalently replaced, and these modifications or substitutions do not make the essence of the corresponding technical solution deviate from the scope of the technical solution of the embodiments of the present invention.