Disclosure of Invention
The invention aims to provide a navigation method based on a road boundary so as to alleviate the above technical problems.
In order to achieve the above object, the embodiment of the present invention provides the following technical solutions:
The navigation method based on the road boundary comprises the following steps:
S1, capturing images of a road in real time through a camera arranged on a vehicle body to obtain a road image;
S2, acquiring a yaw angle and a lateral deviation distance of the vehicle body relative to the road based on a vehicle body coordinate system, a camera coordinate system and the road image;
S3, acquiring wheel type odometer information of the robot through a wheel type odometer, and constructing a factor graph model;
S4, acquiring a motion trajectory of the robot through the factor graph model, and moving the robot based on the motion trajectory to complete the navigation of the robot.
Further, the acquiring of the yaw angle and the lateral deviation distance of the vehicle body with respect to the road based on the vehicle body coordinate system, the camera coordinate system and the road image includes:
S2-1, acquiring a road width, setting an interval division value, and setting first road virtual edge line coordinate information under the vehicle body coordinate system based on the road width and the interval division value, wherein the first road virtual edge line coordinate information comprises coordinate information of a plurality of road virtual edge lines;
S2-2, converting the first road virtual edge line coordinate information based on the transformation matrix of the vehicle body coordinate system and the camera coordinate system to obtain second road virtual edge line coordinate information under the camera coordinate system;
S2-3, acquiring third road virtual edge lines on the corresponding camera image plane based on the second road virtual edge line coordinate information, wherein the camera image plane is the image plane corresponding to the image captured by the camera in real time;
S2-4, segmenting the road image by the SAM method to obtain corresponding road detection edge lines;
S2-5, determining actual road edge lines based on the third road virtual edge lines and the road detection edge lines;
S2-6, calculating the yaw angle and the lateral deviation distance of the vehicle body relative to the road based on the actual road edge lines.
Further, the transformation matrix of the vehicle body coordinate system and the camera coordinate system is given by:

$T = \begin{bmatrix} R & t \\ \mathbf{0}^{T} & 1 \end{bmatrix}$

wherein $T$ represents the transformation matrix of the vehicle body coordinate system and the camera coordinate system, $R$ represents the rotation transformation matrix from the vehicle body coordinate system to the camera coordinate system, and $t$ represents the translation transformation matrix from the vehicle body coordinate system to the camera coordinate system.
Further, based on the second road virtual edge coordinate information, obtaining a third road virtual edge of the corresponding camera image plane includes:
S2-3-1, projecting the second road virtual sideline coordinate information to the camera image plane, and calculating pixel point coordinates of the second road virtual sideline coordinate information on the camera image plane;
S2-3-2, connecting the pixel point coordinates corresponding to all the second road virtual side lines in the second road virtual side line coordinate information and projecting them onto the camera image plane to obtain the third road virtual side lines.
Further, segmenting the road image by the SAM method to obtain the corresponding road detection edge lines includes:
S2-4-1, segmenting the road image by using the SAM2 model and determining each segmented region;
S2-4-2, extracting the external contour of each segmented region and processing it by the Hough line transform to obtain initial road detection edge lines and their corresponding length information;
S2-4-3, judging whether the length information corresponding to each initial road detection edge line exceeds a threshold value; if so, retaining the initial road detection edge line, otherwise removing it, to obtain the road detection edge lines.
Further, determining an actual road edge based on the third road virtual edge and the road detection edge, comprising:
S2-5-1, selecting a road detection edge, sequentially matching with the third road virtual edges, calculating the matching degree of the road detection edge and each third road virtual edge, and selecting the highest matching degree as the road edge probability of the road detection edge;
S2-5-2, repeating step S2-5-1 until all road detection edge lines are matched, obtaining the corresponding road edge probabilities;
S2-5-3, sorting all road edge probabilities from largest to smallest, and selecting the road detection edge lines corresponding to the first five road edge probabilities to obtain selected road detection edge lines;
S2-5-4, calculating the distances between the selected road detection edge lines and the center lines corresponding to the left half and the right half of the road image, respectively, and selecting, for each half, the selected road detection edge line with the smallest distance as one of the two road edge lines to obtain the actual road edge lines.
Further, calculating the yaw angle and the lateral deviation distance of the vehicle body with respect to the road based on the actual road edge lines includes:
S2-6-1, projecting the actual road edge lines into the vehicle body coordinate system, and determining the relationship between the actual road edge lines and the vehicle body in a top view;
S2-6-2, determining the middle points corresponding to the actual road edge lines based on the relationship between the actual road edge lines and the vehicle body, and calculating the direction angle of the vehicle body relative to the road;
S2-6-3, calculating the yaw angle of the vehicle body relative to the road based on the direction angle of the vehicle body relative to the road;
S2-6-4, calculating the lateral deviation distance based on the middle points corresponding to the actual road edge lines.
Further, acquiring the wheel type odometer information of the robot through the wheel type odometer includes:
S3-1, acquiring a yaw angle and a lateral deviation distance of the robot relative to a world coordinate system by adopting the same method as in S1 to S2;
S3-2, taking the yaw angle and the lateral deviation distance of S3-1 as observation factors;
S3-3, acquiring the continuous relative movement of the robot by using the wheel type odometer, and accumulating the X-direction displacement, the Y-direction displacement and the relative rotation angle of each movement to obtain the corresponding wheel type odometer information;
S3-4, taking the wheel type odometer information as a motion factor;
S3-5, acquiring historical state nodes of the robot;
S3-6, constructing an initial factor graph model based on the motion factors and the observation factors;
S3-7, inputting the historical state nodes into the initial factor graph model to obtain the corresponding historical motion trajectory, and constructing a corresponding objective function;
S3-8, optimizing the initial factor graph model based on the objective function to obtain an optimized factor graph model, thereby completing the construction of the factor graph model.
The invention has the following beneficial effects:
Based on the observation of the arranged forward-looking camera, the invention accurately extracts the road edge lines without marking in advance, calculates the yaw angle and the lateral deviation distance relative to the road surface, and obtains the motion trajectory of the robot by using the factor graph model and the wheel type odometer. The method is simple, the navigation of the robot can be completed without redundant data, the required computing resources are few, the operation speed is high, and the method is suitable for various scenes.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
Referring to fig. 1, a navigation method based on a road boundary provided in this embodiment includes:
S1, capturing images of a road in real time through a camera arranged on a vehicle body to obtain a road image, and determining a vehicle body coordinate system and a camera coordinate system, as shown in fig. 2, wherein the AGV is an automated guided vehicle;
S2, acquiring a yaw angle and a lateral deviation distance of the vehicle body relative to a road based on the vehicle body coordinate system, the camera coordinate system and the road image;
The acquiring of the yaw angle and the lateral deviation distance of the vehicle body relative to the road based on the vehicle body coordinate system, the camera coordinate system and the road image includes:
S2-1, acquiring road width, setting interval division values, and setting first road virtual boundary line coordinate information under a vehicle body coordinate system based on the road width and the interval division values;
The first road virtual side line coordinate information comprises coordinate information of a plurality of road virtual side lines, wherein the coordinate information of each road virtual side line comprises two corresponding coordinate points, and the two corresponding coordinate points are connected to obtain corresponding road virtual side lines;
As shown in fig. 3, assuming that the road width is 2 m and the interval division value is set to 10 cm, one virtual road edge line is set every 10 cm, giving 21 virtual road edge lines in total. Each virtual road edge line is represented by any two different points on the line, is parallel to the $x$ direction of the vehicle body coordinate system, and the lines are distributed at equal intervals along the $y$ direction. $L_1^b$ represents the first road virtual edge line under the vehicle body coordinate system, and its expression is as follows:

$L_1^b = \left\{ P_1^b = (x_1, y_1, z_1),\; P_2^b = (x_2, y_2, z_2) \right\}$

wherein $P_1^b$ and $P_2^b$ respectively represent any two different points on the first road virtual edge line $L_1^b$, $(x_1, y_1, z_1)$ represents the three-dimensional coordinates of point $P_1^b$ in the vehicle body coordinate system, and $(x_2, y_2, z_2)$ represents the three-dimensional coordinates of point $P_2^b$ in the vehicle body coordinate system.
In the view of fig. 3, $L_2^b$ represents the second of these road virtual edge lines under the vehicle body coordinate system.
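A minimal Python sketch of step S2-1, assuming the 2 m road width and 10 cm interval of fig. 3; the two points per line are placed at illustrative distances (x_near, x_far) ahead of the vehicle, and all names are illustrative rather than part of the invention:

```python
import numpy as np

def make_virtual_edge_lines(road_width=2.0, interval=0.1, x_near=1.0, x_far=5.0):
    """Generate the first road virtual edge line coordinate information in the
    vehicle body frame: one line every `interval` metres across `road_width`,
    each line given by two distinct points and parallel to the body x axis."""
    ys = np.arange(-road_width / 2.0, road_width / 2.0 + 1e-9, interval)
    lines = []
    for y in ys:
        p1 = np.array([x_near, y, 0.0])  # first point on the virtual edge line
        p2 = np.array([x_far, y, 0.0])   # a second, different point on the same line
        lines.append((p1, p2))
    return lines

virtual_lines_body = make_virtual_edge_lines()
print(len(virtual_lines_body))  # 21 virtual edge lines for a 2 m road divided every 10 cm
```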
S2-2, converting the first road virtual sideline coordinate information based on the conversion matrix of the vehicle body coordinate system and the camera coordinate system to obtain second road virtual sideline coordinate information under the camera coordinate system;
The second road virtual side line coordinate information is similar to the first road virtual side line coordinate information and also comprises coordinate information of a plurality of second road virtual side lines, wherein the coordinate information of each second road virtual side line comprises two corresponding coordinate points, and the two corresponding coordinate points are connected to obtain corresponding second road virtual side lines;
The transformation matrix of the vehicle body coordinate system and the camera coordinate system is given by:

$T = \begin{bmatrix} R & t \\ \mathbf{0}^{T} & 1 \end{bmatrix}$

wherein $T$ represents the transformation matrix of the vehicle body coordinate system and the camera coordinate system, $R$ represents the rotation transformation matrix from the vehicle body coordinate system to the camera coordinate system and is a 3×3 matrix, and $t$ represents the translation transformation matrix from the vehicle body coordinate system to the camera coordinate system and is a 3×1 matrix.
The transformation matrix of the vehicle body coordinate system and the camera coordinate system is obtained from the actual vehicle body and camera; the transformation matrices corresponding to different vehicle bodies and different camera installation positions are different, and are specifically determined according to the 3D structure diagram when the vehicle body is designed.
For the first road virtual edge line $L_1^b$, the corresponding points $P_1^b$ and $P_2^b$ are converted to obtain:

$P_1^c = R\,P_1^b + t,\qquad P_2^c = R\,P_2^b + t$

wherein $P_1^c$ represents the point corresponding to point $P_1^b$ in the camera coordinate system, and $P_2^c$ represents the point corresponding to point $P_2^b$ in the camera coordinate system.
Connecting point $P_1^c$ and point $P_2^c$ gives the edge line of the first road virtual edge line $L_1^b$ under the camera coordinate system, i.e. part of the second road virtual edge line coordinate information.
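A minimal sketch of step S2-2, assuming placeholder values for $R$ and $t$ (identity rotation and a 0.7 m offset); the real matrices are determined by the vehicle and camera installation as described above:

```python
import numpy as np

# Placeholder extrinsics: in practice the 3x3 rotation R and 3x1 translation t
# come from the vehicle's 3D structure diagram or an extrinsic calibration.
R = np.eye(3)
t = np.array([0.0, 0.0, 0.7])

# Homogeneous transformation matrix T combining R and t.
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = t

def body_to_camera(p_body):
    """Convert a 3D point from the vehicle body frame to the camera frame."""
    p_h = np.append(p_body, 1.0)   # homogeneous coordinates
    return (T @ p_h)[:3]           # equivalent to R @ p_body + t

# Example: convert both points of one first-road virtual edge line.
p1_body = np.array([1.0, -1.0, 0.0])
p2_body = np.array([5.0, -1.0, 0.0])
p1_cam, p2_cam = body_to_camera(p1_body), body_to_camera(p2_body)
```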
S2-3, acquiring a third road virtual boundary of a corresponding camera image plane based on the second road virtual boundary coordinate information, wherein the camera image plane is an image plane corresponding to an image shot by a camera in real time;
Based on the second road virtual sideline coordinate information, obtaining a third road virtual sideline of the corresponding camera image plane comprises the following steps:
S2-3-1, projecting the second road virtual sideline coordinate information to the camera image plane, and calculating pixel point coordinates of the second road virtual sideline coordinate information on the camera image plane;
As shown in fig. 4, the camera intrinsic matrix $K$ is obtained through camera calibration, the points $P_1^c$ and $P_2^c$ corresponding to the second road virtual edge line coordinate information are projected onto the camera image plane, and the corresponding pixel points are calculated according to the following formulas:

$p_1 = (u_1, v_1, 1)^T = \dfrac{1}{z_1^c} K\,P_1^c$

$p_2 = (u_2, v_2, 1)^T = \dfrac{1}{z_2^c} K\,P_2^c$

wherein $K$ represents the camera intrinsic matrix, $p_1$ represents the pixel point corresponding to point $P_1^c$ on the camera image plane, $p_2$ represents the pixel point corresponding to point $P_2^c$ on the camera image plane, $z_1^c$ and $z_2^c$ represent the depths (third coordinates) of $P_1^c$ and $P_2^c$, $u_1$ and $v_1$ respectively represent the column coordinate and row coordinate of pixel point $p_1$, and $u_2$ and $v_2$ respectively represent the column coordinate and row coordinate of pixel point $p_2$.
S2-3-2, connecting pixel point coordinates corresponding to all the second road virtual side lines in the second road virtual side line coordinate information, and projecting the pixel point coordinates to the camera image plane to obtain the third road virtual side line.
Pixel point $p_1$ and pixel point $p_2$ are connected to obtain the first edge line among the third road virtual edge lines. Repeating the above calculation and connecting the pixel point coordinates corresponding to all the second road virtual edge lines in the second road virtual edge line coordinate information gives the third road virtual edge lines, all of which are projected onto the camera image plane, as shown in fig. 5.
The third road virtual side line is similar to the first road virtual side line coordinate information and also comprises a plurality of road virtual side lines and coordinate information of two coordinate points corresponding to the road virtual side lines;
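A sketch of the projection in step S2-3, assuming a placeholder intrinsic matrix $K$ (real values come from camera calibration) and illustrative camera-frame points:

```python
import numpy as np

# Placeholder intrinsic matrix; the real K comes from camera calibration.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

def project_to_pixel(p_cam):
    """Project a camera-frame 3D point onto the image plane (pinhole model)."""
    uvw = K @ p_cam
    return uvw[0] / uvw[2], uvw[1] / uvw[2]   # (column coordinate u, row coordinate v)

# Project both endpoints of one camera-frame virtual edge line and keep the
# resulting segment as one of the third road virtual edge lines.
p1_cam = np.array([-1.0, 0.7, 3.0])
p2_cam = np.array([-1.0, 0.7, 8.0])
third_virtual_edge = (project_to_pixel(p1_cam), project_to_pixel(p2_cam))
```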
S2-4, segmenting the road image by the SAM method to obtain corresponding road detection edge lines;
Segmenting the road image by the SAM method to obtain the corresponding road detection edge lines includes:
S2-4-1, segmenting the road image by using the SAM2 model and determining each segmented region;
S2-4-2, extracting the external contour of each segmented region and processing it by the Hough line transform to obtain initial road detection edge lines and their corresponding length information;
S2-4-3, judging whether the length information corresponding to each initial road detection edge line exceeds a threshold value; if so, retaining the initial road detection edge line, otherwise removing it, to obtain the road detection edge lines. Here, the threshold is 50 pixels.
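A sketch of step S2-4, assuming `masks` is a list of binary region masks produced by a SAM2 automatic mask generator (model loading and mask generation are not shown); OpenCV is used here for the external contours and the probabilistic Hough line transform, and the 50-pixel length threshold follows the text:

```python
import cv2
import numpy as np

def detect_road_edges(masks, min_length=50):
    """Extract candidate road detection edge lines from segmentation masks.

    `masks` is assumed to be a list of binary (H x W) uint8 masks, e.g. the
    output of a SAM2 automatic mask generator; each result is (x1, y1, x2, y2).
    """
    edges = []
    for mask in masks:
        # External contour of the segmented region, drawn as a thin edge map.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        contour_img = np.zeros_like(mask)
        cv2.drawContours(contour_img, contours, -1, 255, 1)

        # Probabilistic Hough line transform on the contour image.
        lines = cv2.HoughLinesP(contour_img, rho=1, theta=np.pi / 180,
                                threshold=50, minLineLength=min_length, maxLineGap=10)
        if lines is None:
            continue
        for x1, y1, x2, y2 in lines[:, 0]:
            # Keep only initial detection edges longer than the threshold.
            if np.hypot(x2 - x1, y2 - y1) >= min_length:
                edges.append((x1, y1, x2, y2))
    return edges
```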
S2-5, determining an actual edge line of the road based on the third road virtual edge line and the road detection edge line;
Determining an actual road edge based on the third road virtual edge and the road detection edge, comprising:
S2-5-1, selecting a road detection edge, sequentially matching with the third road virtual edges, calculating the matching degree of the road detection edge and each third road virtual edge, and selecting the highest matching degree as the road edge probability of the road detection edge;
The matching degree of two image straight lines is defined as the reciprocal of the sum of the distances between their intersection points with the upper border and with the lower border of the image, and the corresponding formula is:

$S_{ij} = \dfrac{1}{\left\| A_i - C_j \right\| + \left\| B_i - D_j \right\|}$

wherein $S_{ij}$ represents the matching degree between the $i$-th road detection edge line and the $j$-th third road virtual edge line, $A_i$ represents the intersection point of the $i$-th road detection edge line and the upper border, $B_i$ represents the intersection point of the $i$-th road detection edge line and the lower border, $C_j$ represents the intersection point of the $j$-th third road virtual edge line and the upper border, and $D_j$ represents the intersection point of the $j$-th third road virtual edge line and the lower border.
As shown in fig. 6, taking the road detection edge line $l_1$ and the first third road virtual edge line as an example, the corresponding formula is:

$S_{11} = \dfrac{1}{\left\| A_1 - C_1 \right\| + \left\| B_1 - D_1 \right\|}$

wherein $S_{11}$ represents the matching degree between the road detection edge line $l_1$ and the first third road virtual edge line, $A_1$ represents the intersection point of the road detection edge line $l_1$ and the upper border, and $B_1$ represents the intersection point of the road detection edge line $l_1$ and the lower border.
S2-5-2, repeating step S2-5-1 until all road detection edge lines are matched, obtaining the corresponding road edge probabilities;
S2-5-3, sorting all road edge probabilities from largest to smallest, and selecting the road detection edge lines corresponding to the first five road edge probabilities to obtain selected road detection edge lines;
S2-5-4, calculating the distances between the selected road detection edge lines and the center lines corresponding to the left half and the right half of the road image, respectively, and selecting, for each half, the selected road detection edge line with the smallest distance as one of the two road edge lines to obtain the actual road edge lines, as shown in fig. 7.
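A sketch of step S2-5, with lines given as (x1, y1, x2, y2) image segments and the matching degree computed as the reciprocal of the summed top/bottom border intersection distances defined above; interpreting "the center lines of the left half and the right half of the image" as the vertical lines at one quarter and three quarters of the image width is an assumption, and all names are illustrative:

```python
import numpy as np

def border_intersections(line, height):
    """Intersect an image line (x1, y1, x2, y2) with the top (y = 0) and
    bottom (y = height - 1) borders, returning the two x coordinates."""
    x1, y1, x2, y2 = map(float, line)
    if y2 == y1:                      # horizontal line: no proper intersection
        return None
    slope_inv = (x2 - x1) / (y2 - y1)
    x_top = x1 + (0 - y1) * slope_inv
    x_bot = x1 + (height - 1 - y1) * slope_inv
    return x_top, x_bot

def matching_degree(det_line, virt_line, height):
    """Reciprocal of the summed top/bottom intersection distances."""
    a = border_intersections(det_line, height)
    b = border_intersections(virt_line, height)
    if a is None or b is None:
        return 0.0
    dist = abs(a[0] - b[0]) + abs(a[1] - b[1])
    return 1.0 / (dist + 1e-6)

def road_edge_probability(det_line, virtual_lines, height):
    """Highest matching degree of a detection edge over all virtual edge lines."""
    return max(matching_degree(det_line, v, height) for v in virtual_lines)

def select_actual_edges(det_lines, virtual_lines, height, width):
    """Keep the five most probable detection edges, then pick the one closest to
    the centre line of each half image as the two actual road edge lines."""
    probs = [road_edge_probability(d, virtual_lines, height) for d in det_lines]
    top5 = [det_lines[i] for i in np.argsort(probs)[::-1][:5]]
    x_mid = lambda l: 0.5 * (l[0] + l[2])
    left_centre, right_centre = width * 0.25, width * 0.75
    left = min(top5, key=lambda l: abs(x_mid(l) - left_centre))
    right = min(top5, key=lambda l: abs(x_mid(l) - right_centre))
    return left, right
```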
S2-6, calculating the yaw angle and the lateral deviation distance of the vehicle body relative to the road based on the actual side line of the road.
Based on the actual road edge lines, calculating the yaw angle and the lateral deviation distance of the vehicle body relative to the road includes:
S2-6-1, projecting the actual road edge lines into the vehicle body coordinate system, and determining the relationship between the actual road edge lines and the vehicle body in a top view;
The actual road edge lines are converted into the vehicle body coordinate system; as shown in fig. 8, the camera mounting height $H$ is set to 0.7 m.
An actual road edge line is selected and a pixel point $p = (u, v)$ on the line is taken, such as the pixel point shown in fig. 7, wherein $u$ and $v$ are the column coordinate and row coordinate of pixel point $p$.
Based on the formula:

$Z = \dfrac{H f_y}{v - c_y}$

the depth $Z$ of pixel point $p$ is obtained, wherein $f_y$ and $c_y$ are the row-direction focal length and the row coordinate of the principal point in the camera intrinsic matrix $K$.
According to the formula:

$P_n = (x_n, y_n, 1)^T = K^{-1} (u, v, 1)^T$

the pixel point is projected into the camera depth-normalized coordinate system to obtain the corresponding point $P_n$, wherein $K^{-1}$ represents the inverse matrix of the camera intrinsic matrix, and $x_n$, $y_n$, $1$ represent the three-dimensional coordinates of the corresponding point $P_n$ in the camera depth-normalized coordinate system.
According to the formula:

$P^c = Z \cdot P_n$

the point is projected onto the ground in three-dimensional space, i.e. the point $P^c$ in the camera coordinate system.
The rotation transformation matrix $R$ and the translation transformation matrix $t$ are then used to convert the point $P^c$ into the vehicle body coordinate system, i.e. $P^b = R^{-1}(P^c - t)$.
Through the above conversion process, four points on the two actual road edge lines under the vehicle body coordinate system are obtained.
Considering only the $x$ direction and the $y$ direction of the vehicle body coordinate system, based on the four points $Q_1$, $Q_2$ (on one actual road edge line) and $Q_3$, $Q_4$ (on the other actual road edge line) under the vehicle body coordinate system, the relationship between the actual road edge lines and the vehicle body in a top view is obtained, as shown in fig. 9.
S2-6-2, determining the middle points $M_1$ and $M_2$ corresponding to the actual road edge lines based on the relationship between the actual road edge lines and the vehicle body, and calculating the direction angle of the vehicle body relative to the road;
The middle point $M_1$ is the middle point between point $Q_1$ and point $Q_3$ in fig. 9, and the middle point $M_2$ is the middle point between point $Q_2$ and point $Q_4$ in fig. 9.
According to the formula:

$\alpha = \arctan\!\left(\dfrac{y_d}{x_d}\right)$

the direction angle $\alpha$ of the vehicle body relative to the road is calculated, wherein $(x_d, y_d) = M_2 - M_1$ represents the coordinates of the road direction vector relative to the vehicle body.
S2-6-3, calculating the yaw angle $\psi$ of the vehicle body relative to the road based on the direction angle of the vehicle body relative to the road, with the corresponding formula:

$\psi = \arccos\!\left(\dfrac{d \cdot v}{\left\| d \right\| \left\| v \right\|}\right)$

wherein $v$ represents the vehicle body forward direction vector, $v = (1, 0)^T$, and $d = (x_d, y_d)^T$;
S2-6-4, calculating the lateral deviation distance $e$ of the vehicle body relative to the center line of the road surface based on the middle points corresponding to the actual road edge lines, with the corresponding formula:

$e = \dfrac{y_{M_1} + y_{M_2}}{2}$

wherein $(x_{M_1}, y_{M_1})$ represents the coordinates of the middle point $M_1$ and $(x_{M_2}, y_{M_2})$ represents the coordinates of the middle point $M_2$.
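A sketch of step S2-6, assuming the camera optical axis is parallel to the ground at height H (so the depth of a ground pixel equals H divided by its normalised y coordinate, consistent with the projection chain above); K, R and t are placeholders, and the sign handling of the yaw angle is an illustrative choice:

```python
import numpy as np

H = 0.7                                    # camera mounting height (metres)
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])            # placeholder intrinsics
K_inv = np.linalg.inv(K)
R = np.eye(3)                              # body -> camera rotation (placeholder)
t = np.array([0.0, 0.0, 0.7])              # body -> camera translation (placeholder)

def pixel_to_body(u, v):
    """Back-project a ground pixel (u, v) into the vehicle body frame."""
    pn = K_inv @ np.array([u, v, 1.0])     # depth-normalised point (xn, yn, 1)
    z = H / pn[1]                          # depth of a ground point for a level camera
    p_cam = z * pn                         # 3D ground point in the camera frame
    return np.linalg.inv(R) @ (p_cam - t)  # convert to the vehicle body frame

def yaw_and_lateral(m1, m2):
    """Yaw angle and lateral deviation from the middle points M1, M2 (body frame)."""
    d = m2[:2] - m1[:2]                    # road direction vector (xd, yd)
    v = np.array([1.0, 0.0])               # vehicle body forward direction vector
    yaw = np.arccos(d @ v / (np.linalg.norm(d) * np.linalg.norm(v)))
    if d[1] < 0:                           # restore the sign lost by arccos
        yaw = -yaw
    lateral = 0.5 * (m1[1] + m2[1])        # lateral deviation from the road centre line
    return yaw, lateral
```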
S3, acquiring wheel type odometer information of the robot through a wheel type odometer, and constructing a factor graph model;
Because the lateral deviation distance and the yaw angle obtained through vision cannot be directly used for robot navigation, they are fused with the wheel type odometer information so as to obtain accurate robot plane coordinates for robot navigation.
Acquiring wheel odometer information of the robot through the wheel odometer, comprising:
S3-1, acquiring a yaw angle and a lateral deviation distance of the robot relative to a world coordinate system by adopting the same method as that of S1 to S2;
The origin of the world coordinate system is defined as the transverse center point of the road surface at the position where the robot starts. The direction along the road is the X axis of the world coordinate system, the direction perpendicular to the road direction in the road plane is the Y axis, and the direction perpendicular to the road surface is the Z axis.
S3-2, taking the yaw angle and the lateral deviation distance of the S3-1 as observation factors;
The yaw angle of the robot relative to the world coordinate system is $\theta_k^{vis}$, i.e. the included angle of the robot relative to the X axis of the world coordinate system at time $k$; the lateral deviation distance of the robot relative to the world coordinate system is $y_k^{vis}$, i.e. the lateral deviation distance of the robot at time $k$.
S3-3, utilizing a wheel type odometer to acquire continuous relative movement of the robot, and accumulating X-direction displacement, Y-direction displacement and relative rotation angle during each movement to acquire corresponding wheel type odometer information;
The wheel type odometer can obtain continuous relative movement of the robot, and the position of the robot obtained by accumulation of the wheel type odometer gradually drifts due to measurement errors, wheel slipping and the like. Defining wheel type odometer information:
$\Delta x_k$ is the displacement in the X direction from time $k-1$ to time $k$;
$\Delta y_k$ is the displacement in the Y direction from time $k-1$ to time $k$;
$\Delta \theta_k$ is the relative rotation angle from time $k-1$ to time $k$.
S3-4, the wheel type odometer information $\Delta x_k$, $\Delta y_k$, $\Delta \theta_k$ provides the relative motion information of the robot between adjacent time steps and is taken as the motion factor;
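A small sketch of steps S3-3 and S3-4, assuming the odometer driver already reports per-step displacements and rotation (encoder handling is not shown); whether the increments are accumulated directly or first rotated into the world frame depends on the odometer convention and is simplified here:

```python
class WheelOdometry:
    """Accumulates the wheel type odometer increments (dx, dy, dtheta) per step."""

    def __init__(self):
        self.x = 0.0          # accumulated X-direction displacement
        self.y = 0.0          # accumulated Y-direction displacement
        self.theta = 0.0      # accumulated relative rotation angle
        self.increments = []  # (dx_k, dy_k, dtheta_k), used later as motion factors

    def update(self, dx, dy, dtheta):
        """Record one relative movement measured from time k-1 to time k."""
        self.x += dx
        self.y += dy
        self.theta += dtheta
        self.increments.append((dx, dy, dtheta))
        return self.x, self.y, self.theta
```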
S3-5, acquiring historical state nodes of the robot;
S3-6, determining constraint conditions based on the motion factors and the observation factors, and constructing an initial factor graph model;
The factor graph model is a probability graph model and is suitable for handling the information fusion problem of multi-source sensors; its main idea is to solve the optimal state estimation by constructing a graph structure consisting of state variables and factors (constraint conditions) and combining the observation results of different sensors. A state variable is a state node, i.e. the robot state $X_k$ at time $k$; the robot state $X_k$ includes the pose $(x_k, y_k, \theta_k)$ of the robot in the world coordinate system.
The constraint condition is determined based on a motion factor and an observation factor, and comprises a motion factor constraint and an observation factor constraint which respectively correspond to a motion constraint of the wheel type odometer and a measurement constraint of visual observation;
The relative movement measured by the odometer is $(\Delta x_k, \Delta y_k, \Delta \theta_k)$, and the motion factor constraint $f_{odom}$ is:

$f_{odom}(X_{k-1}, X_k) = X_k - X_{k-1} - \begin{bmatrix} \Delta x_k \\ \Delta y_k \\ \Delta \theta_k \end{bmatrix}$

wherein $X_{k-1}$ and $X_k$ respectively represent the robot state at time $k-1$ and the robot state at time $k$.
At time $k$, the lateral deviation distance and the yaw angle obtained by visual measurement are $y_k^{vis}$ and $\theta_k^{vis}$, and the observation factor constraint $f_{obs}$ is:

$f_{obs}(X_k) = \begin{bmatrix} y_k - y_0 - y_k^{vis} \\ \theta_k - \theta_0 - \theta_k^{vis} \end{bmatrix}$

wherein $y_0$ represents an initial value of the lateral distance of the robot, and $\theta_0$ represents an initial value of the robot heading angle.
S3-7, inputting the historical state nodes into the initial factor graph model to obtain the corresponding historical motion trajectory, and constructing the corresponding objective function $F(X)$, with the corresponding formula:

$F(X) = \sum_{k} \left\| f_{odom}(X_{k-1}, X_k) \right\|^2 + \sum_{k} \left\| f_{obs}(X_k) \right\|^2$

wherein $f_{odom}$ represents the motion constraint factor and $f_{obs}$ represents the measurement constraint factor;
The optimization of the objective function $F(X)$ can be solved by a common nonlinear optimization algorithm (such as Gauss-Newton or Levenberg-Marquardt) so as to obtain the optimal robot motion trajectory.
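A minimal sketch of the factor graph optimisation in steps S3-5 to S3-8, stacking simple motion-factor and observation-factor residuals of the forms given above and solving the nonlinear least-squares objective with scipy.optimize.least_squares; the residual forms, the unit weighting and the prior on the first state are illustrative assumptions rather than the exact formulation of the invention:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(states_flat, odom, vis, x0_state):
    """Stack motion-factor and observation-factor residuals for all time steps.

    states_flat : flattened [x_k, y_k, theta_k] for k = 0..N-1
    odom        : list of (dx_k, dy_k, dtheta_k) wheel odometer increments
    vis         : list of (y_vis_k, theta_vis_k) visual lateral offset / yaw
    x0_state    : known initial state (x_0, y_0, theta_0), used as a prior
    """
    X = states_flat.reshape(-1, 3)
    res = list(X[0] - np.asarray(x0_state))                      # prior on the first state
    for k in range(1, len(X)):
        dx, dy, dth = odom[k - 1]
        res.extend(X[k] - X[k - 1] - np.array([dx, dy, dth]))    # motion factor
    for k, (y_vis, th_vis) in enumerate(vis):
        res.append(X[k, 1] - y_vis)                              # lateral deviation observation
        res.append(X[k, 2] - th_vis)                             # yaw angle observation
    return np.asarray(res)

def optimise_trajectory(odom, vis, x0_state=(0.0, 0.0, 0.0)):
    n = len(odom) + 1
    init = np.zeros((n, 3))
    init[0] = x0_state
    for k in range(1, n):                                        # odometry-only initial guess
        init[k] = init[k - 1] + np.array(odom[k - 1])
    sol = least_squares(residuals, init.ravel(), args=(odom, vis, x0_state))
    return sol.x.reshape(-1, 3)                                  # optimised (x, y, theta) per step

# Example usage with toy data: five forward steps and zero-offset visual measurements.
odom = [(0.1, 0.0, 0.0)] * 5
vis = [(0.0, 0.0)] * 6
trajectory = optimise_trajectory(odom, vis)
```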
S3-8, optimizing the initial factor graph model based on the objective function to obtain an optimized factor graph model, thereby completing the construction of the factor graph model.
S4, acquiring the motion trajectory of the robot through the factor graph model, and moving the robot based on the motion trajectory to complete the navigation of the robot.
The trajectory output after optimization by the factor graph model is used for robot navigation instead of the raw wheel type odometer.
In summary, based on the observation of the arranged forward-looking camera, the invention accurately extracts the road edge lines without marking in advance, calculates the yaw angle and the lateral deviation distance relative to the road surface, and obtains the motion trajectory of the robot by using the factor graph model and the wheel type odometer. The method is simple, the navigation of the robot can be completed without redundant data, the required computing resources are few, the operation speed is high, and the method is suitable for various scenes.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.