
CN119197577B - A navigation method based on road edge - Google Patents

A navigation method based on road edge

Info

Publication number
CN119197577B
CN119197577B (application CN202411679432.XA)
Authority
CN
China
Prior art keywords
road
coordinate system
vehicle body
edge
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411679432.XA
Other languages
Chinese (zh)
Other versions
CN119197577A (en)
Inventor
梁文斌
武文奇
杨月欣
贾霜
李丹丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Guoruan Technology Group Co Ltd
Original Assignee
Sichuan Guoruan Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Guoruan Technology Group Co Ltd filed Critical Sichuan Guoruan Technology Group Co Ltd
Priority to CN202411679432.XA priority Critical patent/CN119197577B/en
Publication of CN119197577A publication Critical patent/CN119197577A/en
Application granted granted Critical
Publication of CN119197577B publication Critical patent/CN119197577B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/3407 Route searching; Route guidance specially adapted for specific applications
    • G01C21/3415 Dynamic re-routing, e.g. recalculating the route when the user deviates from calculated route or after detecting real-time traffic data or accidents
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/3446 Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags or using precalculated routes
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract


The present invention discloses a navigation method based on road edge lines, relating to the technical field of robot navigation. Based on observations from a forward-looking camera mounted on the vehicle, the invention accurately extracts the road edge lines without prior annotation, calculates the yaw angle and lateral deviation distance relative to the road surface, and then uses a factor graph model together with a wheel odometer to obtain the motion trajectory of the robot. The method is simple, completes robot navigation without redundant data, requires few computing resources, runs fast, and is applicable to a variety of scenarios.

Description

Navigation method based on road boundary
Technical Field
The invention relates to the technical field of robot navigation, in particular to a navigation method based on road edges.
Background
High-precision positioning of robots in open areas, whether outdoors or indoors, has long been a problem. Outdoors, the usual approaches are GNSS positioning or differential positioning, but GNSS positioning accuracy is low and differential positioning is very expensive. Indoors, lidar positioning is typically used; however, in large, open indoor scenes the lidar often cannot localize because too few features can be collected.
Disclosure of Invention
The invention aims to provide a navigation method based on road edge lines so as to address the technical problems described above.
In order to achieve the above object, the embodiment of the present invention provides the following technical solutions:
the navigation method based on the road boundary comprises the following steps:
s1, shooting a road in real time through a camera arranged on a vehicle body to obtain a road image;
S2, acquiring a yaw angle and a lateral deviation distance of the vehicle body relative to a road based on the vehicle body coordinate system, the camera coordinate system and the road image;
S3, acquiring wheel type odometer information of the robot through a wheel type odometer, and constructing a factor graph model;
And S4, acquiring a motion trail of the robot through the factor graph model, and moving the robot based on the motion trail to finish the navigation of the robot.
Further, the acquiring a yaw angle and a lateral offset distance of the vehicle body with respect to the road based on the vehicle body coordinate system, the camera coordinate system, and the road image includes:
s2-1, acquiring road width, setting interval division values, setting first road virtual sideline coordinate information under a vehicle body coordinate system based on the road width and the interval division values, wherein the first road virtual sideline coordinate information comprises coordinate information of a plurality of road virtual sidelines;
S2-2, converting the first road virtual sideline coordinate information based on the conversion matrix of the vehicle body coordinate system and the camera coordinate system to obtain second road virtual sideline coordinate information under the camera coordinate system;
S2-3, acquiring a third road virtual boundary of a corresponding camera image plane based on the second road virtual boundary coordinate information, wherein the camera image plane is an image plane corresponding to an image shot by a camera in real time;
S2-4, dividing the road image by a SAM method to obtain a corresponding road detection boundary;
s2-5, determining an actual edge line of the road based on the third road virtual edge line and the road detection edge line;
S2-6, calculating the yaw angle and the lateral deviation distance of the vehicle body relative to the road based on the actual side line of the road.
Further, the transformation matrix between the vehicle body coordinate system and the camera coordinate system is given by:

$$P^{c} = T_{cb} P^{b}$$

$$T_{cb} = \begin{bmatrix} R_{cb} & t_{cb} \\ \mathbf{0}^{T} & 1 \end{bmatrix}$$

where $P^{b}$ and $P^{c}$ are a point in the vehicle body coordinate system and the camera coordinate system respectively (in homogeneous coordinates), $T_{cb}$ denotes the transformation matrix between the vehicle body coordinate system and the camera coordinate system, $R_{cb}$ denotes the rotation transformation matrix from the vehicle body coordinate system to the camera coordinate system, and $t_{cb}$ denotes the translation transformation matrix from the vehicle body coordinate system to the camera coordinate system.
Further, based on the second road virtual edge coordinate information, obtaining a third road virtual edge of the corresponding camera image plane includes:
S2-3-1, projecting the second road virtual sideline coordinate information to the camera image plane, and calculating pixel point coordinates of the second road virtual sideline coordinate information on the camera image plane;
s2-3-2, connecting pixel point coordinates corresponding to all the second road virtual side lines in the second road virtual side line coordinate information, and projecting the pixel point coordinates to the camera image plane to obtain the third road virtual side line.
Further, the road image is segmented by a SAM method to obtain a corresponding road detection boundary, which includes:
S2-4-1, dividing the road image by adopting a SAM2 type method, and determining each divided area;
S2-4-2, extracting the external contour in each segmented area, and processing it with the Hough line transform to obtain initial road detection edge lines and their corresponding length information;
s2-4-3, judging whether the length information corresponding to each initial road detection edge exceeds a threshold value, if so, reserving the initial road detection edge, otherwise, removing the initial road detection edge to obtain the road detection edge.
Further, determining an actual road edge based on the third road virtual edge and the road detection edge, comprising:
S2-5-1, selecting a road detection edge, sequentially matching with the third road virtual edges, calculating the matching degree of the road detection edge and each third road virtual edge, and selecting the highest matching degree as the road edge probability of the road detection edge;
s2-5-2, repeating the step S2-5-1 until all road detection edges are matched, and obtaining corresponding road edge probability;
S2-5-3, sorting all road surface edge line probabilities from largest to smallest, and selecting the road detection edge lines corresponding to the top five road surface edge line probabilities to obtain the selected road detection edge lines;
S2-5-4, calculating the distances from the centre lines of the left half and the right half of the road image to the selected road detection edge lines respectively, and selecting the selected road detection edge line with the smallest distance on each side as the two road edge lines, to obtain the actual road edge lines.
Further, calculating a yaw angle and a lateral offset distance of the vehicle body with respect to the road based on the road actual boundary line, including:
S2-6-1, projecting the actual road edge lines into the vehicle body coordinate system, and determining the relationship between the actual road edge lines and the vehicle body in a top-down view;
S2-6-2, determining a middle point corresponding to the actual side line of the road based on the relation between the actual side line of the road and the vehicle body, and calculating the direction angle of the vehicle body relative to the road;
s2-6-3, calculating a yaw angle of the vehicle body relative to the road based on the direction angle of the vehicle body relative to the road;
S2-6-4, calculating the lateral deviation distance based on the middle point corresponding to the actual side line of the road.
Further, acquiring wheel odometer information of the robot through the wheel odometer, including:
S3-1, acquiring a yaw angle and a lateral deviation distance of the robot relative to a world coordinate system by adopting the same method as that of S1 to S2;
s3-2, taking the yaw angle and the lateral deviation distance of the S3-1 as observation factors;
s3-3, utilizing a wheel type odometer to acquire continuous relative movement of the robot, and accumulating X-direction displacement, Y-direction displacement and relative rotation angle during each movement to acquire corresponding wheel type odometer information;
S3-4, taking the wheel type odometer information as a motion factor;
s3-5, acquiring a history state node of the robot;
s3-6, constructing an initial factor graph model based on the motion factors and the observation factors;
S3-7, inputting the historical state nodes into the initial factor graph model to obtain corresponding historical motion tracks, and constructing corresponding objective functions;
And S3-8, optimizing the initial factor graph model based on the objective function to obtain an optimized factor graph model, and completing construction of the factor graph model.
The invention has the following beneficial effects:
Based on observations from the installed forward-looking camera, the invention accurately extracts the road edge lines without prior annotation, calculates the yaw angle and lateral deviation distance relative to the road surface, and obtains the motion trajectory of the robot using a factor graph model and a wheel odometer. The method is simple, completes robot navigation without redundant data, requires few computing resources, runs fast, and is suitable for a variety of scenarios.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method in an embodiment of the invention;
FIG. 2 is a schematic diagram of a camera coordinate system and a vehicle body coordinate system in an embodiment of the invention;
FIG. 3 is a schematic view of a virtual boundary of a first road in a vehicle body coordinate system according to an embodiment of the present invention;
FIG. 4 is a schematic view of pixel points of a camera image plane according to an embodiment of the present invention;
FIG. 5 is a schematic view of a third road virtual boundary of a camera image plane according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of matching degree in an embodiment of the present invention;
FIG. 7 is a schematic view of an actual side line of a road according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of the positions of a camera and a vehicle body according to an embodiment of the invention;
FIG. 9 is a schematic diagram of the coordinate relationship between the actual road edge lines and the vehicle body in an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
Referring to fig. 1, a navigation method based on a road boundary provided in this embodiment includes:
S1, shooting a road in real time through a camera arranged on a vehicle body to obtain a road image, determining a vehicle body coordinate system and a camera coordinate system, as shown in FIG. 2, wherein an AGV is an automatic guided vehicle;
S2, acquiring a yaw angle and a lateral deviation distance of the vehicle body relative to a road based on the vehicle body coordinate system, the camera coordinate system and the road image;
the acquiring a yaw angle and a lateral offset distance of the vehicle body relative to the road based on the vehicle body coordinate system, the camera coordinate system and the road image includes:
S2-1, acquiring road width, setting interval division values, and setting first road virtual boundary line coordinate information under a vehicle body coordinate system based on the road width and the interval division values;
The first road virtual side line coordinate information comprises coordinate information of a plurality of road virtual side lines, wherein the coordinate information of each road virtual side line comprises two corresponding coordinate points, and the two corresponding coordinate points are connected to obtain corresponding road virtual side lines;
As shown in fig. 3, assuming that the road width is 2 m and the interval division value is set to 10 cm, one road virtual edge line is set every 10 cm, giving 21 road virtual edge lines. Each road virtual edge line is represented by any two distinct points on the line; the lines are parallel to the $X$ direction of the vehicle body coordinate system and distributed at equal intervals along the $Y$ direction. Let $L_i^{b}$ denote the $i$-th first road virtual edge line in the vehicle body coordinate system; its expression is:

$$L_i^{b} = \left\{ P_{i1}^{b},\; P_{i2}^{b} \right\}$$

where $P_{i1}^{b}$ and $P_{i2}^{b}$ respectively denote any two distinct points on the first road virtual edge line $L_i^{b}$, $P_{i1}^{b} = \left( x_{i1}^{b}, y_{i1}^{b}, z_{i1}^{b} \right)$ denotes the three-dimensional coordinates of point $P_{i1}^{b}$ in the vehicle body coordinate system, and $P_{i2}^{b} = \left( x_{i2}^{b}, y_{i2}^{b}, z_{i2}^{b} \right)$ denotes the three-dimensional coordinates of point $P_{i2}^{b}$ in the vehicle body coordinate system.

In fig. 3, $L_1^{b}$ and $L_2^{b}$, for example, denote the first and second road virtual edge lines in the vehicle body coordinate system.
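As an illustration of S2-1, the following is a minimal sketch of how the first road virtual edge lines could be generated in the vehicle body coordinate system; it is not taken from the patent, and the road width, interval division value and line length are assumed example values.

```python
import numpy as np

def make_virtual_edge_lines(road_width=2.0, interval=0.1, line_length=5.0):
    """Generate first road virtual edge lines in the vehicle body frame.

    Each line is parallel to the body X axis (forward) and the lines are
    spaced every `interval` metres along the body Y axis across the road
    width, centred on the vehicle. Each line is returned as two 3D points
    on the ground plane (z = 0), matching the two-point representation
    used in the description.
    """
    offsets = np.arange(-road_width / 2.0, road_width / 2.0 + 1e-9, interval)
    lines = []
    for y in offsets:
        p1 = np.array([0.0, y, 0.0])          # near point of the line
        p2 = np.array([line_length, y, 0.0])  # far point of the line
        lines.append((p1, p2))
    return lines

lines_body = make_virtual_edge_lines()
print(len(lines_body))  # 21 lines for a 2 m road divided every 10 cm
```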
S2-2, converting the first road virtual sideline coordinate information based on the conversion matrix of the vehicle body coordinate system and the camera coordinate system to obtain second road virtual sideline coordinate information under the camera coordinate system;
The second road virtual side line coordinate information is similar to the first road virtual side line coordinate information and also comprises coordinate information of a plurality of second road virtual side lines, wherein the coordinate information of each second road virtual side line comprises two corresponding coordinate points, and the two corresponding coordinate points are connected to obtain corresponding second road virtual side lines;
The transformation from the vehicle body coordinate system to the camera coordinate system is given by:

$$P^{c} = T_{cb} P^{b}$$

$$T_{cb} = \begin{bmatrix} R_{cb} & t_{cb} \\ \mathbf{0}^{T} & 1 \end{bmatrix}$$

where $P^{b}$ and $P^{c}$ are a point in the vehicle body coordinate system and the camera coordinate system respectively (in homogeneous coordinates), $T_{cb}$ denotes the transformation matrix between the vehicle body coordinate system and the camera coordinate system, $R_{cb}$ denotes the rotation transformation matrix from the vehicle body coordinate system to the camera coordinate system and is a 3×3 matrix, and $t_{cb}$ denotes the translation transformation matrix from the vehicle body coordinate system to the camera coordinate system and is a 3×1 matrix.
The transformation matrixes of the vehicle body coordinate system and the camera coordinate system are obtained based on the vehicle body and the camera in actual conditions, and the transformation matrixes corresponding to different installation positions of different vehicle bodies and different installation positions of the camera are different, and are specifically formulated according to a 3d structure diagram when the vehicle body is designed.
For the first road virtual edge line $L_i^{b}$, its corresponding points $P_{i1}^{b}$ and $P_{i2}^{b}$ are converted to obtain:

$$P_{i1}^{c} = T_{cb} P_{i1}^{b}, \qquad P_{i2}^{c} = T_{cb} P_{i2}^{b}$$

where $P_{i1}^{c}$ denotes the point corresponding to $P_{i1}^{b}$ in the camera coordinate system and $P_{i2}^{c}$ denotes the point corresponding to $P_{i2}^{b}$ in the camera coordinate system.

Connecting $P_{i1}^{c}$ and $P_{i2}^{c}$ gives the edge line of the first road virtual edge line $L_i^{b}$ in the camera coordinate system, i.e. the second road virtual edge line coordinate information.
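A brief sketch of S2-2 follows. The rotation and translation below are placeholder values (the real extrinsics come from the vehicle's 3D structure drawing, as noted above); only the structure of the conversion is illustrative.

```python
import numpy as np

# Placeholder extrinsics (assumed values): the camera sits 0.2 m ahead of the
# body origin and 0.7 m above the ground, looking straight ahead.
# Body frame: x forward, y left, z up. Camera frame: x right, y down, z forward.
R_cb = np.array([[0.0, -1.0,  0.0],
                 [0.0,  0.0, -1.0],
                 [1.0,  0.0,  0.0]])            # rotation, body -> camera
cam_pos_in_body = np.array([0.2, 0.0, 0.7])
t_cb = -R_cb @ cam_pos_in_body                   # translation, body -> camera

T_cb = np.eye(4)                                 # 4x4 homogeneous transform
T_cb[:3, :3] = R_cb
T_cb[:3, 3] = t_cb

def body_to_camera(p_body):
    """Convert a 3D point from the vehicle body frame to the camera frame."""
    return (T_cb @ np.append(p_body, 1.0))[:3]

# Convert both endpoints of one virtual edge line (second road virtual edge line).
p1_cam = body_to_camera(np.array([0.0, -1.0, 0.0]))
p2_cam = body_to_camera(np.array([5.0, -1.0, 0.0]))
```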
S2-3, acquiring a third road virtual boundary of a corresponding camera image plane based on the second road virtual boundary coordinate information, wherein the camera image plane is an image plane corresponding to an image shot by a camera in real time;
Based on the second road virtual sideline coordinate information, obtaining a third road virtual sideline of the corresponding camera image plane comprises the following steps:
S2-3-1, projecting the second road virtual sideline coordinate information to the camera image plane, and calculating pixel point coordinates of the second road virtual sideline coordinate information on the camera image plane;
As shown in fig. 4, the camera intrinsic matrix $K$ is obtained through camera calibration, the points $P_{i1}^{c}$ and $P_{i2}^{c}$ corresponding to the second road virtual edge line coordinate information are projected onto the camera image plane, and the corresponding pixel points are calculated according to the following formulas:

$$p_{i1} = \frac{1}{z_{i1}^{c}} K P_{i1}^{c}, \qquad p_{i2} = \frac{1}{z_{i2}^{c}} K P_{i2}^{c}$$

where $K$ denotes the camera intrinsic matrix, $z_{i1}^{c}$ and $z_{i2}^{c}$ are the depth components of $P_{i1}^{c}$ and $P_{i2}^{c}$, $p_{i1}$ denotes the pixel point corresponding to $P_{i1}^{c}$ on the camera image plane, $p_{i2}$ denotes the pixel point corresponding to $P_{i2}^{c}$ on the camera image plane, $\left( u_{i1}, v_{i1} \right)$ respectively denote the column and row coordinates of pixel point $p_{i1}$, and $\left( u_{i2}, v_{i2} \right)$ respectively denote the column and row coordinates of pixel point $p_{i2}$.
S2-3-2, connecting pixel point coordinates corresponding to all the second road virtual side lines in the second road virtual side line coordinate information, and projecting the pixel point coordinates to the camera image plane to obtain the third road virtual side line.
Connecting pixel point $p_{i1}$ and pixel point $p_{i2}$ gives the $i$-th line among the third road virtual edge lines. Repeating the above calculation and connecting the pixel point coordinates corresponding to all the second road virtual edge lines in the second road virtual edge line coordinate information yields the third road virtual edge lines, all of which are projected onto the camera image plane, as shown in fig. 5.
The third road virtual side line is similar to the first road virtual side line coordinate information and also comprises a plurality of road virtual side lines and coordinate information of two coordinate points corresponding to the road virtual side lines;
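A minimal sketch of the projection in S2-3, using an assumed intrinsic matrix; fx, fy, cx, cy are placeholder calibration values and the example camera-frame points are illustrative.

```python
import numpy as np

# Assumed intrinsics from calibration (placeholder values).
fx, fy, cx, cy = 800.0, 800.0, 640.0, 360.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def project_to_pixel(p_cam):
    """Pinhole projection of a camera-frame 3D point to (column, row) pixel coordinates."""
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

# Two example camera-frame points on the ground (0.7 m below the camera, y axis down).
p1_cam = np.array([0.5, 0.7, 2.0])
p2_cam = np.array([0.5, 0.7, 8.0])

# The third road virtual edge line is the image segment between the two projections.
pix1 = project_to_pixel(p1_cam)
pix2 = project_to_pixel(p2_cam)
```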
S2-4, dividing the road image by a SAM method to obtain a corresponding road detection boundary;
The method for dividing the road image by SAM method to obtain corresponding road detection boundary line comprises:
S2-4-1, dividing the road image by adopting a SAM2 type method, and determining each divided area;
S2-4-2, extracting the external contour in each segmented area, and processing it with the Hough line transform to obtain initial road detection edge lines and their corresponding length information;
S2-4-3, judging whether the length information corresponding to each initial road detection edge exceeds a threshold value, if so, reserving the initial road detection edge, otherwise, removing the initial road detection edge to obtain the road detection edge. Wherein the threshold is 50 pixels.
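A sketch of S2-4, assuming the segmentation masks have already been produced by a SAM / SAM2-style automatic mask generator; only the contour extraction, Hough line transform and 50-pixel length filter are shown, and OpenCV is assumed to be available.

```python
import cv2
import numpy as np

def detect_road_edges(masks, min_len_px=50):
    """Extract candidate road detection edge lines from binary segmentation masks.

    `masks` is a list of (H, W) uint8 masks (0/255), e.g. one per region found
    by a SAM-type automatic mask generator. For each mask the external contour
    is rasterised and processed with the probabilistic Hough line transform;
    lines shorter than `min_len_px` pixels are discarded.
    """
    edges = []
    for mask in masks:
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        outline = np.zeros_like(mask)
        cv2.drawContours(outline, contours, -1, 255, 1)
        lines = cv2.HoughLinesP(outline, rho=1, theta=np.pi / 180, threshold=50,
                                minLineLength=min_len_px, maxLineGap=5)
        if lines is None:
            continue
        for x1, y1, x2, y2 in lines[:, 0]:
            if np.hypot(x2 - x1, y2 - y1) >= min_len_px:
                edges.append(((float(x1), float(y1)), (float(x2), float(y2))))
    return edges
```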
S2-5, determining an actual edge line of the road based on the third road virtual edge line and the road detection edge line;
Determining an actual road edge based on the third road virtual edge and the road detection edge, comprising:
S2-5-1, selecting a road detection edge, sequentially matching with the third road virtual edges, calculating the matching degree of the road detection edge and each third road virtual edge, and selecting the highest matching degree as the road edge probability of the road detection edge;
The matching degree between two image lines is defined as the reciprocal of the sum of the distances between their intersection points with the upper and lower borders of the image; the corresponding formula is:

$$m_{ij} = \frac{1}{\left\| a_i^{t} - b_j^{t} \right\| + \left\| a_i^{l} - b_j^{l} \right\|}$$

where $m_{ij}$ denotes the matching degree between the $i$-th road detection edge line and the $j$-th third road virtual edge line, $a_i^{t}$ denotes the intersection point of the $i$-th road detection edge line with the upper border, $a_i^{l}$ denotes its intersection point with the lower border, $b_j^{t}$ denotes the intersection point of the $j$-th third road virtual edge line with the upper border, and $b_j^{l}$ denotes its intersection point with the lower border.

As shown in fig. 6, taking road detection edge line $1$ and the $j$-th third road virtual edge line as an example, the corresponding formula is:

$$m_{1j} = \frac{1}{\left\| a_1^{t} - b_j^{t} \right\| + \left\| a_1^{l} - b_j^{l} \right\|}$$

where $m_{1j}$ denotes the matching degree between road detection edge line $1$ and the $j$-th third road virtual edge line, $a_1^{t}$ denotes the intersection point of road detection edge line $1$ with the upper border, and $a_1^{l}$ denotes its intersection point with the lower border.
S2-5-2, repeating the step S2-5-1 until all road detection edges are matched, and obtaining corresponding road edge probability;
S2-5-3, sorting all road surface edge line probabilities from largest to smallest, and selecting the road detection edge lines corresponding to the top five road surface edge line probabilities to obtain the selected road detection edge lines;
S2-5-4, calculating the distances from the centre lines of the left half and the right half of the road image to the selected road detection edge lines respectively, and selecting the selected road detection edge line with the smallest distance on each side as the two road edge lines, to obtain the actual road edge lines, as shown in fig. 7.
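A sketch of the matching and selection in S2-5, with each line given by two pixel endpoints; the top-five cutoff follows the embodiment, while the helper names and the use of the quarter-width columns as the half-image centre lines are my assumptions.

```python
import numpy as np

def border_intersections(p1, p2, img_h):
    """Column coordinates where the infinite line through p1, p2 crosses the
    top (row 0) and bottom (row img_h - 1) borders of the image."""
    (u1, v1), (u2, v2) = p1, p2
    if v1 == v2:                                   # horizontal line: skip
        return None
    u_top = u1 + (0 - v1) / (v2 - v1) * (u2 - u1)
    u_bot = u1 + (img_h - 1 - v1) / (v2 - v1) * (u2 - u1)
    return u_top, u_bot

def matching_degree(det_line, virt_line, img_h):
    """Reciprocal of the summed distances between border intersection points."""
    a = border_intersections(*det_line, img_h)
    b = border_intersections(*virt_line, img_h)
    if a is None or b is None:
        return 0.0
    return 1.0 / (abs(a[0] - b[0]) + abs(a[1] - b[1]) + 1e-6)

def select_actual_edges(det_lines, virt_lines, img_w, img_h, top_k=5):
    """S2-5-1 to S2-5-4: score every detection, keep the top-k, then pick the
    candidates closest to the centre lines of the left and right half-images."""
    probs = [max(matching_degree(d, v, img_h) for v in virt_lines) for d in det_lines]
    order = np.argsort(probs)[::-1][:top_k]
    candidates = [det_lines[i] for i in order]

    def dist_to_column(line, col):
        mid_u = (line[0][0] + line[1][0]) / 2.0    # mean column of the segment
        return abs(mid_u - col)

    left = min(candidates, key=lambda l: dist_to_column(l, img_w * 0.25))
    right = min(candidates, key=lambda l: dist_to_column(l, img_w * 0.75))
    return left, right
```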
S2-6, calculating the yaw angle and the lateral deviation distance of the vehicle body relative to the road based on the actual side line of the road.
Based on the actual road borderline, calculating a yaw angle and a lateral offset distance of the vehicle body relative to the road, comprising:
S2-6-1, projecting the actual road edge lines into the vehicle body coordinate system, and determining the relationship between the actual road edge lines and the vehicle body in a top-down view;
The actual road edge lines are converted into the vehicle body coordinate system; as shown in fig. 8, the camera mounting height $H$ is set to 0.7 m.
An actual road edge line is selected and a pixel point $p = (u, v)$ on the line is taken, such as the pixel point shown in fig. 7, where $u$ and $v$ are the column and row coordinates of the pixel point.

Based on the formula

$$d = \frac{f_{y} H}{v - c_{y}}$$

the depth $d$ of the pixel point is obtained, where $f_{y}$ and $c_{y}$ are the focal length and principal-point row coordinate in the camera intrinsic matrix (the road surface is assumed flat, so a pixel below the horizon corresponds to a ground point lying the mounting height $H$ below the camera; equivalently $d = H / y^{n}$).

According to the formula

$$P^{n} = K^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$

the pixel is projected into the camera depth-normalized coordinate system to obtain the corresponding point $P^{n}$, where $K^{-1}$ denotes the inverse of the camera intrinsic matrix and $P^{n} = \left( x^{n}, y^{n}, 1 \right)$ denotes the three-dimensional coordinates of the corresponding point in the camera depth-normalized coordinate system.

According to the formula

$$P^{c} = d\, P^{n}$$

the point is projected onto the ground in three-dimensional space, i.e. the point $P^{c}$ in the camera coordinate system;

The rotation transformation matrix $R_{cb}$ and the translation transformation matrix $t_{cb}$ are then reused to convert the point $P^{c}$ into the vehicle body coordinate system, i.e.

$$P^{b} = R_{cb}^{-1} \left( P^{c} - t_{cb} \right).$$
Through the conversion process, four points on the actual side lines of the two roads under the vehicle body coordinate system are obtained.
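The back-projection of S2-6-1 can be sketched as follows under the flat-ground, forward-looking assumption; the intrinsics and extrinsics are the same placeholder values used in the earlier sketches.

```python
import numpy as np

fx, fy, cx, cy = 800.0, 800.0, 640.0, 360.0              # assumed intrinsics
K_inv = np.linalg.inv(np.array([[fx, 0.0, cx],
                                [0.0, fy, cy],
                                [0.0, 0.0, 1.0]]))
H = 0.7                                                   # camera mounting height (m)
R_cb = np.array([[0.0, -1.0,  0.0],                       # body -> camera rotation (no tilt assumed)
                 [0.0,  0.0, -1.0],
                 [1.0,  0.0,  0.0]])
t_cb = -R_cb @ np.array([0.2, 0.0, H])                    # camera 0.2 m ahead of the body origin

def pixel_to_body_ground(u, v):
    """Back-project an image pixel onto the flat ground and express it in the body frame."""
    p_n = K_inv @ np.array([u, v, 1.0])                   # depth-normalised camera coordinates
    depth = H / p_n[1]                                    # ground points sit H below the camera (y down)
    p_cam = depth * p_n                                   # ground point in the camera frame
    return R_cb.T @ (p_cam - t_cb)                        # ground point in the vehicle body frame

pt_body = pixel_to_body_ground(500.0, 600.0)              # a pixel below the image horizon
```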
Considering only the $X$ direction and the $Y$ direction of the vehicle body coordinate system, the relationship between the actual road edge lines and the vehicle body in a top-down view is obtained from the $X$ and $Y$ coordinates of the four points on the two actual road edge lines in the vehicle body coordinate system, as shown in fig. 9.

S2-6-2, determining the middle points $M_{1}$ and $M_{2}$ corresponding to the actual road edge lines based on the relationship between the actual road edge lines and the vehicle body, and calculating the direction angle of the vehicle body relative to the road;

The middle point $M_{1}$ is the midpoint between the two near-end points of the left and right actual road edge lines in fig. 9, and the middle point $M_{2}$ is the midpoint between the two far-end points;

According to the formula:

$$\theta = \operatorname{atan2}\left( y_{M_2} - y_{M_1},\; x_{M_2} - x_{M_1} \right)$$

the direction angle $\theta$ of the vehicle body relative to the road is calculated, where $\left( x_{M_1}, y_{M_1} \right)$ and $\left( x_{M_2}, y_{M_2} \right)$ denote the coordinates of the middle points $M_{1}$ and $M_{2}$.

S2-6-3, calculating the yaw angle $\psi$ of the vehicle body relative to the road based on the direction angle of the vehicle body relative to the road; the corresponding formula is:

$$\psi = \arccos\!\left( \frac{\left( M_{2} - M_{1} \right) \cdot \vec{f}}{\left\| M_{2} - M_{1} \right\|} \right)$$

where $\vec{f}$ denotes the vehicle body forward direction vector, and $\vec{f} = (1, 0)$;

S2-6-4, calculating the lateral deviation distance $d_{lat}$ of the vehicle body relative to the road centre line based on the middle points corresponding to the actual road edge lines; the corresponding formula is:

$$d_{lat} = \frac{\left| x_{M_1} y_{M_2} - x_{M_2} y_{M_1} \right|}{\left\| M_{2} - M_{1} \right\|}$$

where $\left( x_{M_1}, y_{M_1} \right)$ denote the coordinates of the middle point $M_{1}$ and $\left( x_{M_2}, y_{M_2} \right)$ denote the coordinates of the middle point $M_{2}$.
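A sketch of S2-6-2 to S2-6-4, taking the four edge-line endpoints in the body frame as given; the function name and the arccos/point-to-line formulations mirror the reconstruction above and should be read as an assumption rather than the patent's exact formulas.

```python
import numpy as np

def yaw_and_lateral_offset(left_near, left_far, right_near, right_far):
    """Direction angle, yaw angle and lateral deviation of the body w.r.t. the road,
    from four (x, y) edge-line endpoints expressed in the vehicle body frame."""
    m1 = (np.asarray(left_near, float)[:2] + np.asarray(right_near, float)[:2]) / 2.0
    m2 = (np.asarray(left_far, float)[:2] + np.asarray(right_far, float)[:2]) / 2.0
    d = m2 - m1                                          # road direction in the body frame
    theta = np.arctan2(d[1], d[0])                       # direction angle of the road
    forward = np.array([1.0, 0.0])                       # vehicle body forward direction vector
    yaw = np.arccos(d @ forward / np.linalg.norm(d))     # yaw angle of body relative to road
    lateral = abs(d[0] * m1[1] - d[1] * m1[0]) / np.linalg.norm(d)  # origin-to-centre-line distance
    return theta, yaw, lateral

# Example: road edges 1 m to each side of the centre line, vehicle offset 0.2 m.
theta, yaw, lat = yaw_and_lateral_offset((1.0, 1.2), (6.0, 1.2), (1.0, -0.8), (6.0, -0.8))
```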
S3, acquiring wheel type odometer information of the robot through a wheel type odometer, and constructing a factor graph model;
Because the lateral deviation distance and the yaw angle obtained through vision cannot be directly used for robot navigation, the lateral deviation distance and the yaw angle are selected to be subjected to information fusion with the wheel type odometer, so that accurate robot plane coordinates are obtained and used for robot navigation.
Acquiring wheel odometer information of the robot through the wheel odometer, comprising:
S3-1, acquiring a yaw angle and a lateral deviation distance of the robot relative to a world coordinate system by adopting the same method as that of S1 to S2;
The origin of the world coordinate system is defined as the lateral centre point of the road surface at the position where the robot starts. The road direction is the X axis of the world coordinate system, the direction perpendicular to the road within the road plane is the Y axis, and the direction perpendicular to the road surface is the Z axis.
S3-2, taking the yaw angle and the lateral deviation distance of the S3-1 as observation factors;
The yaw angle of the robot relative to the world coordinate system, $\theta_{k}$, is the angle of the robot relative to the x-axis of the world coordinate system at time $k$; the lateral deviation distance of the robot relative to the world coordinate system, $d_{k}$, is the lateral offset distance of the robot at time $k$.
S3-3, utilizing a wheel type odometer to acquire continuous relative movement of the robot, and accumulating X-direction displacement, Y-direction displacement and relative rotation angle during each movement to acquire corresponding wheel type odometer information;
The wheel odometer provides the continuous relative motion of the robot; because of measurement errors, wheel slip and the like, the robot position obtained by accumulating the wheel odometer readings gradually drifts. The wheel odometer information is defined as:

$\Delta x_{k}$: the displacement in the X direction from time $k-1$ to time $k$;

$\Delta y_{k}$: the displacement in the Y direction from time $k-1$ to time $k$;

$\Delta \theta_{k}$: the relative rotation angle from time $k-1$ to time $k$.

S3-4, taking the wheel odometer information $u_{k} = \left( \Delta x_{k}, \Delta y_{k}, \Delta \theta_{k} \right)$ as the motion factor, which provides the relative motion information of the robot between adjacent time steps;
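An illustrative sketch of the accumulation in S3-3: dead-reckoning the robot pose from per-step increments; the variable names are mine and the increments are assumed to be expressed in the robot frame of the previous time step.

```python
import numpy as np

def accumulate_odometry(increments):
    """Dead-reckon the robot pose from per-step increments (dx, dy, dtheta).

    Each increment is expressed in the robot frame at the previous time step,
    so it is rotated into the world frame before being accumulated. Measurement
    errors and wheel slip make this pose drift over time, which is why it is
    only used as the motion factor rather than as the navigation output.
    """
    x, y, theta = 0.0, 0.0, 0.0
    poses = [(x, y, theta)]
    for dx, dy, dtheta in increments:
        x += dx * np.cos(theta) - dy * np.sin(theta)
        y += dx * np.sin(theta) + dy * np.cos(theta)
        theta += dtheta
        poses.append((x, y, theta))
    return poses

poses = accumulate_odometry([(0.1, 0.0, 0.002)] * 100)   # 10 m of slightly curving motion
```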
s3-5, acquiring a history state node of the robot;
s3-6, determining constraint conditions based on the motion factors and the observation factors, and constructing an initial factor graph model;
The factor graph model is a probabilistic graphical model suited to multi-sensor information fusion; its main idea is to construct a graph structure consisting of state variables and factors (constraints) and to solve for the optimal state estimate by combining the observations of different sensors. The state variables are the state nodes: the state node at time $k$ is the robot state $X_{k}$, and the robot state $X_{k} = \left( x_{k}, y_{k}, \theta_{k} \right)$ includes the pose of the robot in the world coordinate system.
The constraint condition is determined based on a motion factor and an observation factor, and comprises a motion factor constraint and an observation factor constraint which respectively correspond to a motion constraint of the wheel type odometer and a measurement constraint of visual observation;
The relative motion measured by the odometer is $u_{k} = \left( \Delta x_{k}, \Delta y_{k}, \Delta \theta_{k} \right)$; the motion factor constraint $r_{k}^{odom}$ is:

$$r_{k}^{odom} = \begin{bmatrix} x_{k} - x_{k-1} - \left( \Delta x_{k} \cos\theta_{k-1} - \Delta y_{k} \sin\theta_{k-1} \right) \\ y_{k} - y_{k-1} - \left( \Delta x_{k} \sin\theta_{k-1} + \Delta y_{k} \cos\theta_{k-1} \right) \\ \theta_{k} - \theta_{k-1} - \Delta \theta_{k} \end{bmatrix}$$

where $X_{k-1}$ and $X_{k}$ respectively denote the robot state at time $k-1$ and the robot state at time $k$.

At time $k$, the lateral deviation distance and the yaw angle obtained by visual measurement are $d_{k}$ and $\psi_{k}$; the observation factor constraint $r_{k}^{obs}$ is:

$$r_{k}^{obs} = \begin{bmatrix} y_{k} - \left( d_{k} - d_{0} \right) \\ \theta_{k} - \left( \psi_{k} - \psi_{0} \right) \end{bmatrix}$$

where $d_{0}$ denotes the initial value of the lateral deviation distance of the robot and $\psi_{0}$ denotes the initial value of the heading angle of the robot.
S3-7, inputting the historical state nodes into the initial factor graph model to obtain the corresponding historical motion trajectory, and constructing the corresponding objective function $F(X)$; the corresponding formula is:

$$F(X) = \sum_{k} \left\| r_{k}^{odom} \right\|^{2} + \sum_{k} \left\| r_{k}^{obs} \right\|^{2}$$

where $r_{k}^{odom}$ denotes the motion constraint factor and $r_{k}^{obs}$ denotes the measurement constraint factor;

The optimization of the objective function $F(X)$ can be solved by a common nonlinear optimization algorithm (such as Gauss-Newton or Levenberg-Marquardt), so as to obtain the optimal robot motion trajectory.
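The objective above can be minimised with any nonlinear least-squares solver; the sketch below uses scipy.optimize.least_squares as a stand-in for a dedicated factor-graph library, with the motion and observation residuals written as in the reconstruction above (toy data, no information-matrix weighting).

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(flat_states, odo, obs):
    """Stack motion-factor and observation-factor residuals for all time steps."""
    X = flat_states.reshape(-1, 3)                   # rows: (x_k, y_k, theta_k)
    res = []
    for k, (dx, dy, dth) in enumerate(odo):          # motion factors between k and k+1
        x0, y0, th0 = X[k]
        x1, y1, th1 = X[k + 1]
        res += [x1 - x0 - (dx * np.cos(th0) - dy * np.sin(th0)),
                y1 - y0 - (dx * np.sin(th0) + dy * np.cos(th0)),
                th1 - th0 - dth]
    for k, (d_k, psi_k) in enumerate(obs):           # observation factors at each k
        res += [X[k, 1] - d_k,                       # lateral deviation constrains y_k
                X[k, 2] - psi_k]                     # visual yaw constrains theta_k
    return np.asarray(res)

# Toy data: nominally straight motion with a small odometer heading bias,
# corrected by vision reporting zero lateral offset and zero yaw throughout.
odo = [(1.0, 0.0, 0.01)] * 10
obs = [(0.0, 0.0)] * 11
x0 = np.zeros(11 * 3)
sol = least_squares(residuals, x0, args=(odo, obs), method="lm")
trajectory = sol.x.reshape(-1, 3)                    # optimised robot motion trajectory
```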
And S3-8, optimizing the initial factor graph model based on the objective function to obtain an optimized factor graph model, and completing construction of the factor graph model.
And S4, acquiring a motion trail of the robot through the factor graph model, and moving the robot based on the motion trail to finish the navigation of the robot.
The trajectory output after optimization by the factor graph model is used for robot navigation instead of a wheel type odometer.
In summary, based on observations from the installed forward-looking camera, the invention accurately extracts the road edge lines without prior annotation, calculates the yaw angle and lateral deviation distance relative to the road surface, and obtains the motion trajectory of the robot using a factor graph model and a wheel odometer. The method is simple, completes robot navigation without redundant data, requires few computing resources, runs fast, and is suitable for a variety of scenarios.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (6)

1. A navigation method based on road edge lines, characterized by comprising:
S1, photographing the road in real time with a camera mounted on a vehicle body to obtain a road image; and determining a vehicle body coordinate system and a camera coordinate system;
S2, obtaining a yaw angle and a lateral deviation distance of the vehicle body relative to the road based on the vehicle body coordinate system, the camera coordinate system and the road image;
S3, obtaining wheel odometer information of the robot through a wheel odometer, and constructing a factor graph model;
S4, obtaining a motion trajectory of the robot through the factor graph model, and moving the robot based on the motion trajectory to complete navigation of the robot;
wherein obtaining the yaw angle and the lateral deviation distance of the vehicle body relative to the road based on the vehicle body coordinate system, the camera coordinate system and the road image comprises:
S2-1, obtaining the road width and setting an interval division value; setting first road virtual edge line coordinate information in the vehicle body coordinate system based on the road width and the interval division value, the first road virtual edge line coordinate information comprising coordinate information of a plurality of road virtual edge lines, the coordinate information of each road virtual edge line comprising two corresponding coordinate points, the corresponding road virtual edge line being obtained by connecting the two coordinate points;
S2-2, converting the first road virtual edge line coordinate information based on the transformation matrix between the vehicle body coordinate system and the camera coordinate system to obtain second road virtual edge line coordinate information in the camera coordinate system;
S2-3, obtaining third road virtual edge lines on the corresponding camera image plane based on the second road virtual edge line coordinate information, the camera image plane being the image plane corresponding to the image captured by the camera in real time;
S2-4, segmenting the road image by a SAM-type method to obtain corresponding road detection edge lines;
S2-5, determining actual road edge lines based on the third road virtual edge lines and the road detection edge lines;
S2-6, calculating the yaw angle and the lateral deviation distance of the vehicle body relative to the road based on the actual road edge lines;
wherein determining the actual road edge lines based on the third road virtual edge lines and the road detection edge lines comprises:
S2-5-1, selecting one road detection edge line, matching it with the third road virtual edge lines in turn, calculating the matching degree between the road detection edge line and each third road virtual edge line, and selecting the highest matching degree as the road surface edge line probability of the road detection edge line;
S2-5-2, repeating S2-5-1 until all road detection edge lines have been matched, obtaining the corresponding road surface edge line probabilities;
S2-5-3, sorting all road surface edge line probabilities from largest to smallest, and selecting the road detection edge lines corresponding to the top five road surface edge line probabilities to obtain the selected road detection edge lines;
S2-5-4, calculating the distances from the centre lines of the left half image and the right half image of the road image to the selected road detection edge lines respectively, and selecting the selected road detection edge line with the smallest distance as the two road edge lines, thereby obtaining the actual road edge lines;
wherein the matching degree is given by the formula:
$$m_{ij} = \frac{1}{\left\| a_i^{t} - b_j^{t} \right\| + \left\| a_i^{l} - b_j^{l} \right\|}$$
where $m_{ij}$ denotes the matching degree between road detection edge line $i$ and the $j$-th third road virtual edge line, $a_i^{t}$ denotes the intersection point of road detection edge line $i$ with the upper border, $a_i^{l}$ denotes the intersection point of road detection edge line $i$ with the lower border, $b_j^{t}$ denotes the intersection point of the $j$-th third road virtual edge line with the upper border, and $b_j^{l}$ denotes the intersection point of the $j$-th third road virtual edge line with the lower border.
2. The navigation method based on road edge lines according to claim 1, characterized in that the transformation matrix between the vehicle body coordinate system and the camera coordinate system is given by:
$$P^{c} = T_{cb} P^{b}, \qquad T_{cb} = \begin{bmatrix} R_{cb} & t_{cb} \\ \mathbf{0}^{T} & 1 \end{bmatrix}$$
where $T_{cb}$ denotes the transformation matrix between the vehicle body coordinate system and the camera coordinate system, $R_{cb}$ denotes the rotation transformation matrix from the vehicle body coordinate system to the camera coordinate system, and $t_{cb}$ denotes the translation transformation matrix from the vehicle body coordinate system to the camera coordinate system.
3. The navigation method based on road edge lines according to claim 1, characterized in that obtaining the third road virtual edge lines on the corresponding camera image plane based on the second road virtual edge line coordinate information comprises:
S2-3-1, projecting the second road virtual edge line coordinate information onto the camera image plane, and calculating the pixel point coordinates of the second road virtual edge line coordinate information on the camera image plane;
S2-3-2, connecting the pixel point coordinates corresponding to all second road virtual edge lines in the second road virtual edge line coordinate information to obtain the third road virtual edge lines.
4. The navigation method based on road edge lines according to claim 1, characterized in that segmenting the road image by a SAM-type method to obtain the corresponding road detection edge lines comprises:
S2-4-1, segmenting the road image with a SAM2-type method to determine each segmented region;
S2-4-2, extracting the outer contour of each segmented region and processing it with the Hough line transform to obtain initial road detection edge lines and their corresponding length information;
S2-4-3, judging whether the length information corresponding to each initial road detection edge line exceeds a threshold; if so, retaining the initial road detection edge line, otherwise removing it, to obtain the road detection edge lines.
5. The navigation method based on road edge lines according to claim 1, characterized in that calculating the yaw angle and the lateral deviation distance of the vehicle body relative to the road based on the actual road edge lines comprises:
S2-6-1, projecting the actual road edge lines into the vehicle body coordinate system, and determining the relationship between the actual road edge lines and the vehicle body in a top-down view;
S2-6-2, determining the middle points corresponding to the actual road edge lines based on the relationship between the actual road edge lines and the vehicle body, and calculating the direction angle of the vehicle body relative to the road;
S2-6-3, calculating the yaw angle of the vehicle body relative to the road based on the direction angle of the vehicle body relative to the road;
S2-6-4, calculating the lateral deviation distance based on the middle points corresponding to the actual road edge lines.
6. The navigation method based on road edge lines according to claim 1, characterized in that obtaining the wheel odometer information of the robot through the wheel odometer comprises:
S3-1, obtaining the yaw angle and the lateral deviation distance of the robot relative to the world coordinate system using the same method as S1 to S2;
S3-2, taking the yaw angle and the lateral deviation distance of S3-1 as observation factors;
S3-3, obtaining the continuous relative motion of the robot with the wheel odometer, and accumulating the X-direction displacement, the Y-direction displacement and the relative rotation angle of each motion to obtain the corresponding wheel odometer information;
S3-4, taking the wheel odometer information as a motion factor;
S3-5, obtaining historical state nodes of the robot;
S3-6, constructing an initial factor graph model based on the motion factor and the observation factors;
S3-7, inputting the historical state nodes into the initial factor graph model to obtain the corresponding historical motion trajectory, and constructing the corresponding objective function;
S3-8, optimizing the initial factor graph model based on the objective function to obtain the optimized factor graph model, completing construction of the factor graph model.
CN202411679432.XA 2024-11-22 2024-11-22 A navigation method based on road edge Active CN119197577B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411679432.XA CN119197577B (en) 2024-11-22 2024-11-22 A navigation method based on road edge

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411679432.XA CN119197577B (en) 2024-11-22 2024-11-22 A navigation method based on road edge

Publications (2)

Publication Number Publication Date
CN119197577A CN119197577A (en) 2024-12-27
CN119197577B true CN119197577B (en) 2025-01-28

Family

ID=94052762

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411679432.XA Active CN119197577B (en) 2024-11-22 2024-11-22 A navigation method based on road edge

Country Status (1)

Country Link
CN (1) CN119197577B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108297866A (en) * 2018-01-03 2018-07-20 西安交通大学 A kind of track holding control method of vehicle
CN114646317A (en) * 2022-03-17 2022-06-21 长沙慧联智能科技有限公司 Vehicle visual positioning and navigation control method, device, computer equipment, and medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6314843B2 (en) * 2015-01-06 2018-04-25 トヨタ自動車株式会社 Road boundary line recognition apparatus and road boundary line recognition method
CN105922991B (en) * 2016-05-27 2018-08-17 广州大学 Based on the lane departure warning method and system for generating virtual lane line
CN111738035A (en) * 2019-03-25 2020-10-02 比亚迪股份有限公司 Method, device and equipment for calculating yaw angle of vehicle
CN113790719B (en) * 2021-08-13 2023-09-12 北京自动化控制设备研究所 A UAV inertial/visual landing navigation method based on line features
CN114742914A (en) * 2022-04-15 2022-07-12 苏州挚途科技有限公司 Lane line generation method and device and electronic equipment
CN118279468A (en) * 2022-12-30 2024-07-02 鸿海精密工业股份有限公司 Lane marking method, electronic equipment and computer storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108297866A (en) * 2018-01-03 2018-07-20 西安交通大学 A kind of track holding control method of vehicle
CN114646317A (en) * 2022-03-17 2022-06-21 长沙慧联智能科技有限公司 Vehicle visual positioning and navigation control method, device, computer equipment, and medium

Also Published As

Publication number Publication date
CN119197577A (en) 2024-12-27

Similar Documents

Publication Publication Date Title
CN112396664B (en) Monocular camera and three-dimensional laser radar combined calibration and online optimization method
CN113436260B (en) Mobile robot pose estimation method and system based on multi-sensor tight coupling
CN109270534B (en) An online calibration method for smart car laser sensor and camera
Guindel et al. Automatic extrinsic calibration for lidar-stereo vehicle sensor setups
CN112767490B (en) Outdoor three-dimensional synchronous positioning and mapping method based on laser radar
CN111076733B (en) Robot indoor map building method and system based on vision and laser slam
CN114526745A (en) Drawing establishing method and system for tightly-coupled laser radar and inertial odometer
CN110243380B (en) A Map Matching Method Based on Multi-sensor Data and Angle Feature Recognition
CN115479598A (en) Positioning and mapping method based on multi-sensor fusion and tight coupling system
CN107741234A (en) The offline map structuring and localization method of a kind of view-based access control model
CN107301654A (en) A kind of positioning immediately of the high accuracy of multisensor is with building drawing method
CN108733039A (en) The method and apparatus of navigator fix in a kind of robot chamber
CN114325634B (en) A highly robust method for extracting traversable areas in wild environments based on LiDAR
CN102519481A (en) Implementation method of binocular vision speedometer
CN112464812A (en) Vehicle-based sunken obstacle detection method
CN114413958A (en) Monocular visual ranging and speed measurement method for unmanned logistics vehicles
Zhang LILO: A novel LiDAR–IMU SLAM system with loop optimization
CN110827353B (en) Robot positioning method based on monocular camera assistance
CN111476798B (en) Vehicle space morphology recognition method and system based on contour constraint
CN113406659A (en) Mobile robot position re-identification method based on laser radar information
CN116045965A (en) Multi-sensor-integrated environment map construction method
CN116403186B (en) FPN Swin Transformer and Pointnet ++ based automatic driving three-dimensional target detection method
CN118168545A (en) Weeding robot positioning and navigation system and method based on multi-source sensor fusion
CN113538620A (en) A SLAM mapping result evaluation method for 2D grid map
CN119399282B (en) Positioning and mapping method and system based on multi-source data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant