
CN116124150B - A method for indoor multi-mobile robot positioning based on camera splicing and area matching - Google Patents


Info

Publication number
CN116124150B
CN116124150B (application CN202310195421.3A)
Authority
CN
China
Prior art keywords
camera
coordinates
mobile robot
positioning
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310195421.3A
Other languages
Chinese (zh)
Other versions
CN116124150A (en)
Inventor
梁忠超
王艳锋
黄茁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN202310195421.3A priority Critical patent/CN116124150B/en
Publication of CN116124150A publication Critical patent/CN116124150A/en
Application granted granted Critical
Publication of CN116124150B publication Critical patent/CN116124150B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • G01C21/206Instruments for performing navigational calculations specially adapted for indoor navigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract


The present invention provides an indoor multi-mobile-robot positioning method based on camera splicing and area matching, comprising the following steps: first, a coordinate conversion model is established and camera parameters are obtained through calibration, so that the pixel coordinates of each mobile robot in the pixel coordinate system can be converted into world coordinates in the world coordinate system; a label is fixed on each mobile robot, and the outline of the label in each frame within the field of view is extracted by a combination of image-processing algorithms; finally, the minimum bounding box of each outline is computed, giving the area of the label on each mobile robot, and the differing areas enable multi-target matching; the centroid position is obtained at the same time, from which the heading and attitude angle of each mobile robot are derived. For indoor positioning, multiple fixed cameras combined with a picture-splicing algorithm overcome the limited field of view of a single monocular camera. The invention solves the difficulty of multi-robot positioning, achieves high positioning accuracy, and reduces the complexity of visual positioning.

Description

Indoor multi-mobile robot positioning method with camera splicing and area matching functions
Technical Field
The invention relates to an indoor multi-mobile robot vision positioning method, belongs to the technical field of vision robots, and particularly relates to a multi-mobile robot vision positioning method with multiple fixed cameras.
Background
The positioning technology is one of key technologies for mobile robot research, and plays a decisive role in normal operation of the robot. The positioning problem of the robot has very important significance in the intelligent research of the mobile robot, and is a key for realizing autonomous navigation and completing complex intelligent tasks in a specific environment.
Indoor positioning of a mobile robot means that the robot senses and analyzes information about the surrounding environment through sensors, and solves for its position and posture with a specific positioning algorithm. At present, visual positioning is applied ever more widely in the field of robot positioning; with its advantages of rich information acquisition, strong environmental understanding, lasting stability, and low cost, it has become the most active direction in current indoor mobile robot positioning research.
Existing indoor visual positioning methods still have shortcomings in practice: they are easily limited by the field of view of a single camera, and the positioning procedures tend to be complex. For example, patent CN210119230U discloses an indoor visual positioning system that locates a target mainly with an image acquisition device and a monocular camera; however, it relies on a monochromatic illumination module to acquire the image position, making the method relatively complex. Patent CN209280914U discloses an indoor visual positioning system that acquires images with optical signals to achieve indoor positioning; however, it uses LED lamps to transmit signals for image acquisition, which is likewise relatively complex.
Disclosure of Invention
In order to overcome the defects of the background art, the invention provides a camera-splicing and area-matching indoor multi-mobile-robot positioning method.
The invention adopts the following technical means:
The camera-splicing and area-matching indoor multi-mobile-robot positioning method comprises the following steps:
Step 1: calibrating a monocular camera to obtain camera parameters, calibrating the ground map, and establishing a one-to-one correspondence between the ground-map area in the video image acquired by the monocular camera and the real ground-map area;
Step 2: tracking multiple robots with a positioning-and-tracking algorithm; fixing a marker on each mobile robot and using it as a label, the distinct area of each label being matched to its robot, so that multiple robots are tracked by tracking markers of different areas;
Step 3: locating the centroid point and using the marker's centroid coordinates in place of the robot's position coordinates; determining the pose from two points, obtaining the marker's vertex coordinates and three side lengths, and sorting the side lengths to find the shortest;
Step 4: fixing multiple cameras and splicing their pictures to obtain the mobile robots' trajectory image, on which identification and positioning are then performed.
Compared with the prior art, the invention has the following advantages:
The invention uses labels with different areas to perform matching, so different mobile robots are reliably distinguished; the method can position multiple mobile robots simultaneously, reducing positioning time, improving efficiency, and addressing the problem of low robot positioning accuracy.
According to the invention, because the bottom edge of each label is the shortest side, the head and tail of each mobile robot can be reliably determined, so the attitude angle in any direction can be obtained.
According to the invention, the pictures obtained from multiple fixed cameras are spliced together, effectively solving the problem of a single camera's limited field of view.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to the drawings without inventive effort to a person skilled in the art.
FIG. 1 is a flow chart of the overall algorithm of the present invention.
FIG. 2 is a schematic diagram of a filtering algorithm according to the present invention.
FIG. 3 is a schematic diagram of the world, camera, image and pixel coordinate systems of the present invention.
Fig. 4 is a schematic diagram of the multi-camera fixing of the present invention (taking an experimental field covered by the camera 1 as an example).
FIG. 5 is a diagram of the experimental position of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
As shown in fig. 1-5, the invention provides a positioning method of an indoor multi-mobile robot with camera splicing and area matching, which comprises the following steps:
Step 1: calibrating a monocular camera to obtain camera parameters, calibrating the ground map, and establishing a one-to-one correspondence between the ground-map area in the video image acquired by the monocular camera and the real ground-map area. To determine the correspondence between the position of a spatial object and its corresponding point in the image, a geometric model of camera imaging is established, and the coordinate conversion relation is derived from that model.
In the conversion between coordinate systems, the relation between the world coordinate system and the pixel coordinate system is:

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{f}{d_x} & 0 & u_0 \\ 0 & \frac{f}{d_y} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$

where $[X_w\ Y_w\ Z_w\ 1]^T$ is the homogeneous coordinate of the corresponding point in the world coordinate system; $[X_c\ Y_c\ Z_c\ 1]^T$ is the homogeneous coordinate of any point in the camera coordinate system; $[u\ v\ 1]^T$ is the corresponding point in the pixel coordinate system; $f/d_x$ and $f/d_y$ are the normalized focal lengths along the camera's x and y axes, in pixels; $f$ is the camera focal length, in millimeters; $d_x$ and $d_y$ are the physical size of each pixel along the x and y axes, in millimeters; the $3\times 3$ matrix is the camera intrinsic matrix, $[R\ t]$ is the camera extrinsic matrix, $R$ is the rotation matrix, and $t$ is the translation vector.
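As a concrete illustration of this world-to-pixel chain, the sketch below projects one world point through the intrinsic and extrinsic matrices. All numeric values (focal length, pixel pitch, principal point, camera pose) are assumptions for illustration, not values from the patent.

```python
# Project a world point into pixel coordinates via Z_c [u, v, 1]^T = K [R | t] X_w.

def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def world_to_pixel(K, R, t, Xw):
    """Map a world point Xw = [X, Y, Z] to pixel coordinates (u, v)."""
    # Camera coordinates: Xc = R * Xw + t
    Xc = [a + b for a, b in zip(matvec(R, Xw), t)]
    # Apply intrinsics, then divide by the depth Z_c (homogeneous normalization)
    uvw = matvec(K, Xc)
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

f, dx, dy = 4.0, 0.002, 0.002          # focal length (mm), pixel pitch (mm) - illustrative
u0, v0 = 640.0, 360.0                  # principal point (pixels) - illustrative
K = [[f / dx, 0, u0], [0, f / dy, v0], [0, 0, 1]]
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity rotation for the sketch
t = [0.0, 0.0, 2.0]                    # camera 2 m from the ground plane

u, v = world_to_pixel(K, R, t, [0.1, 0.05, 0.0])
print(u, v)  # → 740.0 410.0
```

The same helper inverts naturally for positioning: once $K$, $R$, $t$ are calibrated and the robots lie on the known ground plane, each detected pixel maps back to a unique ground-map point.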
Step 2: tracking multiple robots with a positioning-and-tracking algorithm. A marker is fixed on each mobile robot and used as a label; the distinct area of each label is matched to its robot, so multiple robots are tracked by tracking markers of different areas. For each frame captured by a camera, the color space is converted from RGB to LAB and the image is converted to grayscale; Gaussian-filter denoising and image erosion are then applied; red is filtered in the LAB color space to find and output the outline of each label; finally, each outline is traversed, its area is computed with a function and placed in a list for sorting, achieving matched tracking.
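The area-based matching at the end of this step can be sketched as follows. The robot identifiers and contour areas below are hypothetical; in a real pipeline each area would come from contour extraction (e.g. OpenCV's `findContours`/`contourArea` after LAB filtering, blurring, and erosion) rather than a hard-coded list.

```python
# Match detected label contours to robots by area, as in step 2:
# each robot carries a label of a distinct, known size rank.

# Robot IDs ordered by nominal label area, largest first (illustrative).
robot_by_rank = ["robot_large", "robot_medium", "robot_small"]

def match_robots(contour_areas):
    """Sort measured areas descending and pair them with the ranked robot IDs."""
    ranked = sorted(contour_areas, reverse=True)
    return {robot: area for robot, area in zip(robot_by_rank, ranked)}

# Areas in square pixels, in arbitrary detection order (illustrative values).
matches = match_robots([812.5, 2400.0, 1510.0])
print(matches)  # → {'robot_large': 2400.0, 'robot_medium': 1510.0, 'robot_small': 812.5}
```

The design choice here is that matching depends only on the *ordering* of areas, so moderate per-frame area noise does not break the assignment as long as the label sizes are well separated.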
Step 3: locating the centroid point and using the marker's centroid coordinates in place of the robot's position coordinates. The pose is determined from two points: the marker's centroid and the midpoint of its shortest side. A recognition algorithm obtains the marker's three vertex coordinates, from which the three side lengths are computed and sorted; the shortest side is taken as the bottom edge, and its two vertex coordinates are determined. The center coordinates of the bottom edge are then obtained from its two vertices, and combining them with the centroid coordinates gives the robot's attitude angle;
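The two-point pose computation of this step can be sketched as follows, assuming a triangular marker whose shortest side is the bottom edge (tail). The vertex coordinates are illustrative.

```python
import math

# Step 3 sketch: from the marker's three vertices, find the shortest side
# (the bottom edge), take its midpoint, and compute the heading as the
# angle of the vector from that midpoint to the centroid.

def side_len(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pose_from_triangle(vertices):
    a, b, c = vertices
    sides = [(side_len(a, b), a, b), (side_len(b, c), b, c), (side_len(c, a), c, a)]
    _, p, q = min(sides)                          # shortest side = bottom edge
    mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)  # bottom-edge midpoint (tail)
    cx = (a[0] + b[0] + c[0]) / 3                 # centroid = robot position
    cy = (a[1] + b[1] + c[1]) / 3
    heading = math.degrees(math.atan2(cy - mid[1], cx - mid[0]))
    return (cx, cy), heading

# Illustrative marker: short base from (0,0) to (2,0), apex at (1,6).
centroid, heading = pose_from_triangle([(0, 0), (2, 0), (1, 6)])
print(centroid, heading)  # → (1.0, 2.0) 90.0
```

Because the bottom edge is always the shortest side, the tail-to-centroid vector is unambiguous, which is what lets the method recover an attitude angle in any direction.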
Step 4: fixing multiple cameras around the field and obtaining the mobile robots' trajectory image by splicing the camera pictures. Each frame of the video image is converted to the required color space and grayscaled; Gaussian-filter denoising and image erosion are applied so the label outlines become more distinct; the outlines of all labels are traversed and their areas computed; finally, combined with the method of step 3, identification and positioning are performed.
In a preferred embodiment of the present application, step 1 further includes the following steps:
Step 1-1, camera calibration and de-distortion: the conversion from the world coordinate system to the pixel coordinate system, so that the position of each point on an object's surface in space corresponds one-to-one with a point in the image. The camera is calibrated with the Zhang Zhengyou calibration method to obtain the camera's intrinsic parameters $K$ and extrinsic parameters $[R\ t]$.
The distortion comprises radial distortion and tangential distortion. Radial distortion is distributed along the lens radius; tangential distortion arises because the lens is not parallel to the camera sensor or image plane, and its influence is small, so only radial distortion is considered. Combining the camera's intrinsic and extrinsic parameters with the distortion model yields the distortion parameters $k_1, k_2$; the distortion model is:

$$\hat{x} = x\,(1 + k_1 r^2 + k_2 r^4), \qquad \hat{y} = y\,(1 + k_1 r^2 + k_2 r^4), \qquad r^2 = x^2 + y^2$$

$$\hat{u} = u + (u - u_0)(k_1 r^2 + k_2 r^4), \qquad \hat{v} = v + (v - v_0)(k_1 r^2 + k_2 r^4)$$

where $(u, v)$ are the ideal undistorted pixel coordinates and $(\hat{u}, \hat{v})$ the distorted pixel coordinates, in pixels; $(x, y)$ are the ideal undistorted continuous image coordinates and $(\hat{x}, \hat{y})$ the distorted continuous image coordinates, in millimeters; $k_1, k_2$ are the distortion parameters.
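A minimal sketch of applying this radial model to normalized image coordinates, with assumed (not calibrated) values of $k_1$ and $k_2$:

```python
# Apply the two-parameter radial distortion model:
#   x_d = x * (1 + k1*r^2 + k2*r^4),  y_d likewise, with r^2 = x^2 + y^2.

def distort(x, y, k1, k2):
    r2 = x * x + y * y
    factor = 1 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# k1, k2 here are illustrative; calibration would estimate them.
xd, yd = distort(0.5, 0.0, k1=-0.1, k2=0.01)
print(xd, yd)  # → 0.4878125 0.0
```

De-distortion goes the other way, from $(\hat{x}, \hat{y})$ back to $(x, y)$; in practice this inverse is solved numerically (or delegated to a library routine such as OpenCV's `undistort`), since the forward model has no closed-form inverse.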
Step 1-2, selecting an initial calibration point and an expected calibration point;
Step 1-3, multi-picture inverse perspective transformation and splicing, which yields a spliced image whose shape is consistent with the ground map. After each frame from each camera has been de-distorted, inverse perspective transformation is applied to splice the multiple pictures of the experimental field; the scaling relationship between the spliced pixel coordinates and the real ground map then gives each robot's position coordinates on the real ground map. The inverse perspective transformation of the i-th camera is:

$$\begin{bmatrix} X_i \\ Y_i \\ Z_i \end{bmatrix} = M_i \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}$$

where $[x_i\ y_i\ 1]^T$ is a point of the i-th camera's original image in pixel coordinates, $[X_i\ Y_i\ Z_i]^T$ is the transformed homogeneous coordinate of the corresponding point, and $M_i$ is the $3\times 3$ transformation matrix of the i-th camera. Since the pixel coordinate system is two-dimensional and this transformation passes through three dimensions, the transformed coordinates are normalized to the corresponding point $[X_i'\ Y_i'\ Z_i']^T$ on the two-dimensional coordinate system:

$$X_i' = \frac{X_i}{Z_i}, \qquad Y_i' = \frac{Y_i}{Z_i}, \qquad Z_i' = 1$$
From the above, four pairs of coordinate values are needed to solve all unknowns in the matrix $M_i$, where $(l_i, h_i)$, $(0, h_i)$, $(0, 0)$, $(l_i, 0)$ are the initial calibration points of the i-th camera's original image and $(L_i, H_i)$, $(0, H_i)$, $(0, 0)$, $(L_i, 0)$ are the desired calibration points after the i-th camera's inverse perspective transformation, all in pixels. The scaling ratios of the spliced pixel coordinates to the real ground map in length and in width are formed with $L$ and $H$, the length and width of the real ground map;
Therefore, after the n cameras are spliced, the position coordinates of each robot on the real ground map are obtained by applying these scaling ratios, where $X$ is each robot's abscissa and $Y$ its ordinate on the real ground map, both in meters.
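The steps above — solving $M_i$ from four point pairs, normalizing by $Z_i$, and scaling stitched pixels to map meters — can be sketched as follows. Every coordinate and field dimension here is illustrative.

```python
# Solve a 3x3 inverse-perspective matrix M_i from four point correspondences
# (direct linear transform with M[2][2] fixed to 1), apply it with
# homogeneous normalization, and convert stitched pixels to map meters.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """Four correspondences give 8 equations for the 8 free entries of M_i."""
    A, b = [], []
    for (x, y), (X, Y) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y]); b.append(X)
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y]); b.append(Y)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply(Mi, x, y):
    X = Mi[0][0] * x + Mi[0][1] * y + Mi[0][2]
    Y = Mi[1][0] * x + Mi[1][1] * y + Mi[1][2]
    Z = Mi[2][0] * x + Mi[2][1] * y + Mi[2][2]
    return X / Z, Y / Z   # normalization: (X_i', Y_i') = (X_i/Z_i, Y_i/Z_i)

# Trapezoid corners in the raw image -> rectangle corners after rectification.
src = [(100, 50), (540, 60), (600, 400), (40, 390)]
dst = [(0, 0), (640, 0), (640, 480), (0, 480)]
Mi = homography(src, dst)
u, v = apply(Mi, *src[1])     # a calibration point maps onto its desired point

# Pixel -> meters: scale by real map size over stitched-mosaic pixel size.
L_m, H_m, L_px, H_px = 8.0, 6.0, 1280, 960   # illustrative field and mosaic sizes
X_m, Y_m = 640 * L_m / L_px, 480 * H_m / H_px
print(u, v, X_m, Y_m)
```

In production one would typically let `cv2.getPerspectiveTransform` and `cv2.warpPerspective` do this work per camera; the pure-Python version above only makes the linear algebra of the patent's formulation explicit.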
In order to verify the effectiveness of the indoor multi-mobile robot positioning method with camera splicing and area matching in practical engineering application, the positioning effect is tested from the aspect of static positioning precision based on a self-built experimental platform.
In the positioning-accuracy experiment, nine positions shown in the figure within the scene were selected; the mobile robot was placed at each of the nine points in turn, the coordinates output by the algorithm were recorded, and they were compared with the data measured in the actual experiment. The average static errors obtained in the test are all within 5 mm, indicating high positioning accuracy.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments. In the foregoing embodiments of the present application, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments. In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners.
It should be noted that the above embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the above embodiments, it should be understood by those skilled in the art that the technical solution described in the above embodiments may be modified or some or all of the technical features may be equivalently replaced, and these modifications or substitutions do not make the essence of the corresponding technical solution deviate from the scope of the technical solution of the embodiments of the present invention.

Claims (7)

1. A camera-splicing and area-matching indoor multi-mobile-robot positioning method, characterized in that it comprises the following steps:
Step 1: calibrate the monocular camera to obtain the camera parameters; at the same time calibrate the ground map, establishing a one-to-one correspondence between the ground-map area in the video image captured by the monocular camera and the real ground-map area;
Step 2: track multiple robots through a positioning-and-tracking algorithm; fix a marker on each mobile robot and use the marker as a label, the distinct area of each label matching the corresponding robot, so that multiple robots are tracked by tracking markers of different areas;
Step 3: locate the centroid point, using the marker's centroid coordinates in place of the robot's position coordinates; determine the pose from two points, which represent the marker's centroid and the center of its shortest side; obtain the marker's vertex coordinates and three side lengths and sort the side lengths to find the shortest; through a recognition algorithm obtain the marker's three vertex coordinates, compute and sort the length of each side so that the shortest side is the bottom edge, and determine the two vertex coordinates of the bottom edge; then obtain the center coordinates of the bottom edge from its two vertices and, combined with the centroid coordinates, obtain the robot's attitude angle;
Step 4: fix multiple cameras around the field and obtain the mobile robots' trajectory image by splicing the pictures of the multiple cameras; convert each frame of the video image into the required color space and grayscale it; then apply Gaussian-filter denoising and image erosion so that the label outlines become more distinct; traverse the outlines of all labels and compute their areas; finally, combine the method of step 3 to perform identification and positioning.
2. The method according to claim 1, characterized in that step 1 further comprises the following steps:
Step 1-1: camera calibration and de-distortion;
Step 1-2: selection of the initial calibration points and the desired calibration points;
Step 1-3: multi-picture inverse perspective transformation and splicing, obtaining a spliced image whose shape is consistent with the ground map.
3. The method according to claim 1, characterized in that, to determine the one-to-one correspondence mentioned in step 1 between the position of a spatial object and its corresponding point in the image, a geometric model of camera imaging is established, and the coordinate conversion relation is established from the camera imaging model.
4. The method according to claim 3, characterized in that, in the coordinate-system conversion of the camera imaging geometric model, the conversion relation between the world coordinate system and the pixel coordinate system is:

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{f}{d_x} & 0 & u_0 \\ 0 & \frac{f}{d_y} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$

where $[X_w\ Y_w\ Z_w\ 1]^T$ is the homogeneous coordinate of the corresponding point in the world coordinate system; $[X_c\ Y_c\ Z_c\ 1]^T$ is the homogeneous coordinate of any point in the camera coordinate system; $[u\ v\ 1]^T$ is the corresponding point in the pixel coordinate system; $f/d_x$ and $f/d_y$ are the normalized focal lengths on the camera's x and y axes, in pixels; $f$ is the camera focal length, in millimeters; $d_x$ and $d_y$ are the physical size of each pixel on the x and y axes, in millimeters; the $3\times 3$ matrix is the camera intrinsic matrix, $[R\ t]$ is the camera extrinsic matrix, $R$ is the rotation matrix, and $t$ is the translation vector.
5. The method according to claim 2, characterized in that the camera calibration and de-distortion process, i.e. the conversion from the world coordinate system to the pixel coordinate system, makes the position of each corresponding point on an object's surface in space correspond one-to-one with the position of each point in the image; the camera is calibrated with the Zhang Zhengyou calibration method to obtain the camera's intrinsic and extrinsic parameters. The distortion comprises radial distortion and tangential distortion; radial distortion is distributed along the lens radius, while tangential distortion, caused by the lens not being parallel to the camera sensor or image plane, has little influence, so only radial distortion is considered. The distortion parameters $k_1, k_2$ are obtained by combining the camera's intrinsic and extrinsic parameters with the distortion model:

$$\hat{x} = x\,(1 + k_1 r^2 + k_2 r^4), \qquad \hat{y} = y\,(1 + k_1 r^2 + k_2 r^4), \qquad r^2 = x^2 + y^2$$

$$\hat{u} = u + (u - u_0)(k_1 r^2 + k_2 r^4), \qquad \hat{v} = v + (v - v_0)(k_1 r^2 + k_2 r^4)$$

where $(u, v)$ are the ideal undistorted pixel coordinates and $(\hat{u}, \hat{v})$ the distorted pixel coordinates, in pixels; $(x, y)$ are the ideal undistorted continuous image coordinates and $(\hat{x}, \hat{y})$ the distorted continuous image coordinates, in millimeters; $k_1, k_2$ are the distortion parameters.
6. The method according to claim 2, characterized in that, in the multi-picture inverse perspective transformation and splicing, after each frame of each camera is de-distorted, inverse perspective transformation is applied to splice the multiple pictures of the experimental field, and the scaling relation between the spliced pixel coordinates and the real ground map then gives each robot's position coordinates on the real ground map; the inverse perspective transformation of the i-th camera is:

$$\begin{bmatrix} X_i \\ Y_i \\ Z_i \end{bmatrix} = M_i \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}$$

where $[x_i\ y_i\ 1]^T$ is a point of the i-th camera's original image in pixel coordinates, $[X_i\ Y_i\ Z_i]^T$ is the transformed homogeneous coordinate of the corresponding point, and $M_i$ is the $3\times 3$ transformation matrix of the i-th camera; since the pixel coordinate system is two-dimensional and this transformation passes through three dimensions, the transformed coordinates are normalized to the corresponding point $[X_i'\ Y_i'\ Z_i']^T$ on the two-dimensional coordinate system:

$$X_i' = \frac{X_i}{Z_i}, \qquad Y_i' = \frac{Y_i}{Z_i}, \qquad Z_i' = 1$$

From the above, four pairs of coordinate values are needed to solve all unknowns in the matrix $M_i$, where $(l_i, h_i)$, $(0, h_i)$, $(0, 0)$, $(l_i, 0)$ are the initial calibration points of the i-th camera's original image and $(L_i, H_i)$, $(0, H_i)$, $(0, 0)$, $(L_i, 0)$ are the desired calibration points after the i-th camera's inverse perspective transformation, all in pixels; the scaling ratios of the spliced pixel coordinates to the real ground map in length and in width are formed with $L$ and $H$, the length and width of the real ground map; hence, after the n cameras are spliced, each robot's position coordinates on the real ground map follow from these ratios, where $X$ is each robot's abscissa and $Y$ its ordinate on the real ground map, both in meters.
7. The method according to claim 1, characterized in that, in step 2, the color space of each frame obtained by the camera is converted from RGB to LAB and grayscaled; Gaussian-filter denoising and image erosion are then performed; red is filtered in the LAB color space to find and output the outline of each label; finally, after traversing each outline, the area of each label is obtained with a function and placed in a list for sorting, achieving matched tracking.
CN202310195421.3A 2023-03-02 2023-03-02 A method for indoor multi-mobile robot positioning based on camera splicing and area matching Active CN116124150B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310195421.3A CN116124150B (en) 2023-03-02 2023-03-02 A method for indoor multi-mobile robot positioning based on camera splicing and area matching

Publications (2)

Publication Number Publication Date
CN116124150A CN116124150A (en) 2023-05-16
CN116124150B true CN116124150B (en) 2025-06-06

Family

ID=86297530

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310195421.3A Active CN116124150B (en) 2023-03-02 2023-03-02 A method for indoor multi-mobile robot positioning based on camera splicing and area matching

Country Status (1)

Country Link
CN (1) CN116124150B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110362096A (en) * 2019-08-13 2019-10-22 东北大学 A kind of automatic driving vehicle dynamic trajectory planing method based on local optimality

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9250081B2 (en) * 2005-03-25 2016-02-02 Irobot Corporation Management of resources for SLAM in large environments
CN109842756A (en) * 2017-11-28 2019-06-04 东莞市普灵思智能电子有限公司 A method and system for lens distortion correction and feature extraction
US11181925B2 (en) * 2018-02-23 2021-11-23 Crown Equipment Corporation Systems and methods for optical target based indoor vehicle navigation
CN111968177B (en) * 2020-07-22 2022-11-18 东南大学 Mobile robot positioning method based on fixed camera vision


Also Published As

Publication number Publication date
CN116124150A (en) 2023-05-16

Similar Documents

Publication Publication Date Title
Yan et al. Joint camera intrinsic and LiDAR-camera extrinsic calibration
JP4245963B2 (en) Method and system for calibrating multiple cameras using a calibration object
CN103759670B (en) A kind of object dimensional information getting method based on numeral up short
US7769205B2 (en) Fast three dimensional recovery method and apparatus
CN102155923B (en) Splicing measuring method and system based on three-dimensional target
EP4396779B1 (en) Methods and systems of generating camera models for camera calibration
CN114413958A (en) Monocular vision distance and speed measurement method of unmanned logistics vehicle
JPH10253322A (en) Method and apparatus for designating position of object in space
CN110763204B (en) Planar coding target and pose measurement method thereof
JP2003130621A (en) Method and system for measuring three-dimensional shape
CN110009682A (en) A kind of object recognition and detection method based on monocular vision
CN104976968A (en) Three-dimensional geometrical measurement method and three-dimensional geometrical measurement system based on LED tag tracking
CN112254670A (en) 3D information acquisition equipment based on optical scanning and intelligent vision integration
CN118642121B (en) Monocular vision ranging and laser point cloud fusion space positioning method and system
CN113191388A (en) Image acquisition system for target detection model training and sample generation method
CN112415010A (en) An imaging detection method and system
JP4825971B2 (en) Distance calculation device, distance calculation method, structure analysis device, and structure analysis method.
CN109410272A (en) A kind of identification of transformer nut and positioning device and method
CN113012238B (en) Method for quick calibration and data fusion of multi-depth camera
CN116124150B (en) A method for indoor multi-mobile robot positioning based on camera splicing and area matching
CN112304250B (en) Three-dimensional matching equipment and method between moving objects
CN112257535B (en) Three-dimensional matching equipment and method for avoiding object
Schönbein omnidirectional Stereo Vision for autonomous Vehicles
CN118135033B (en) RGBD sensor-assisted lidar and camera joint calibration method
Uyanik et al. A method for determining 3D surface points of objects by a single camera and rotary stage

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant