CN118942062B - Road structure identification method, device, vehicle and storage medium - Google Patents
- Publication number: CN118942062B (application CN202410744953.2A)
- Authority: CN (China)
- Legal status: Active
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
Abstract
The application relates to a road structure identification method, a device, a vehicle, and a storage medium. The method comprises: obtaining a vehicle driving track collected on a set route and multiple segments of road boundaries within the vehicle perception range; determining the relative road widths between the driving track and the road boundary segments; constructing an automatic driving map of the set route from the driving track, the road boundary segments, and the relative road widths; and identifying the road structure of the set route according to the automatic driving map, where the road structure comprises at least one of a split (diversion) road and a confluent (merge) road. The scheme provided by the application can identify the split and confluent roads of a set route without high-precision map support.
Description
Technical Field
The present application relates to the field of automatic driving technologies, and in particular, to a method and apparatus for identifying a road structure, a vehicle, and a storage medium.
Background
With its centimeter-level precision, the high-precision map provides powerful support for autonomous driving, plays an important role in decision-making and planning algorithms, and is an important facility in the field of automatic driving.
In the related art, the road structure of split-and-merge scenarios is relatively complex. A high-precision map can identify such complex road structures well, but its limited coverage, low timeliness, and high construction cost have gradually become apparent, so identifying split and confluent roads without high-precision map support is a problem in urgent need of a solution.
Disclosure of Invention
In order to solve, or at least partially solve, the problems in the related art, the present application provides a road structure recognition method, apparatus, vehicle, and storage medium capable of identifying the split and confluent roads of a set route without high-precision map support.
The first aspect of the present application provides a road structure identification method, including:
acquiring a vehicle driving track collected on a set route and multiple segments of road boundaries within the vehicle perception range;
determining the relative road widths of the vehicle driving track and the multiple segments of road boundaries respectively;
constructing an automatic driving map of the set route by using the vehicle driving track, the multiple segments of road boundaries and the relative road widths; and
identifying a road structure of the set route according to the automatic driving map, wherein the road structure comprises at least one of a split road and a confluent road.
A second aspect of the present application provides a road structure recognition apparatus, comprising:
The route information acquisition module is used for acquiring a vehicle driving track collected on a set route and multiple segments of road boundaries within the vehicle perception range;
The relative road width determining module is used for respectively determining the relative road widths of the vehicle running track and the multi-section road boundary;
The automatic driving map construction module is used for constructing an automatic driving map of the set route by adopting the vehicle running track, the multi-section road boundaries and the relative road widths;
The road structure identification module is used for identifying the road structure of the set route according to the automatic driving map, where the road structure comprises at least one of a split road and a confluent road.
A third aspect of the application provides a vehicle comprising:
a processor; and
A memory having executable code stored thereon which, when executed by the processor, causes the processor to perform the method as described above.
A fourth aspect of the application provides a computer readable storage medium having stored thereon executable code which, when executed by a processor of a vehicle, causes the processor to perform a method as described above.
The technical scheme provided by the application can comprise the following beneficial effects:
The scheme provided by the application comprises: acquiring a vehicle driving track collected on a set route and multiple segments of road boundaries within the vehicle perception range; determining the relative road widths of the driving track and the road boundary segments; constructing an automatic driving map of the set route from the driving track, the road boundary segments and the relative road widths; and identifying the road structure of the set route according to the automatic driving map, where the road structure comprises at least one of a split road and a confluent road. By collecting the vehicle driving track and the road boundary segments within the perception range on the set route, the relative road widths of the set route are obtained, and the automatic driving map of the set route is then constructed from them, so that even without high-precision map support, the split roads or confluent roads of the set route can be identified through the constructed map.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
The foregoing and other objects, features and advantages of the application will be apparent from the following more particular descriptions of exemplary embodiments of the application as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the application.
Fig. 1 is a flow chart of a road structure recognition method according to an embodiment of the present application;
Fig. 2 is another flow chart of a road structure recognition method according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a learning map according to an embodiment of the present application;
Fig. 4 is a schematic diagram of a rightward split scenario according to an embodiment of the present application;
Fig. 5 is a schematic diagram of a leftward split scenario according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a leftward merge scenario according to an embodiment of the present application;
Fig. 7 is a schematic diagram of a rightward merge scenario according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a road structure recognition apparatus according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a vehicle according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While embodiments of the present application are illustrated in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the application to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first," "second," "third," etc. may be used herein to describe various information, these information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the application. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the related art, the road structure of the split-and-combined scene is relatively complex, and the high-precision map can well identify the complex road structure of the split-and-combined scene, but the problems of limited coverage range, low timeliness, high construction cost and the like of the high-precision map gradually emerge, so that the problem of split-and-combined road identification under the condition of no high-precision map support is urgently solved.
In view of the above problems, an embodiment of the present application provides a road structure identification method. The vehicle driving track and the multiple segments of road boundaries within the perception range are collected on a set route so as to obtain the relative road widths of the set route, and an automatic driving map of the set route is then constructed from the driving track, the road boundary segments and the relative road widths, so that even without high-precision map support, a split road or a confluent road of the set route can be identified through the constructed map.
The following describes the technical scheme of the embodiment of the present application in detail with reference to the accompanying drawings.
Fig. 1 is a flow chart of a road structure recognition method according to an embodiment of the present application.
Referring to fig. 1, the road structure recognition method of the present application includes:
S110, acquiring a vehicle running track acquired on a set route and a plurality of sections of road boundaries in a vehicle perception range.
The execution subject of road structure recognition may be an automatic driving system mounted on the vehicle. The embodiment of the application can be divided into a learning process, a construction process, and an identification process. In the learning process, the user may set a departure place and a destination, and a route from the departure place to the destination may serve as the set route. There may be a plurality of routes from the departure place to the destination, and the automatic driving system may select one fixed route commonly used by the user as the set route. For example, assuming that the routes from departure place A to destination B include route 1, route 2, and route 3, and the user usually takes route 2, then route 2 may be the set route from A to B.
If the departure place and destination are exchanged, the corresponding set route also differs, because the traveling direction of the vehicle changes. Taking the route between point A and point B as an example, it divides into an outbound route and a return route. Assume the outbound route runs from A to B and the return route from B to A: under right-hand traffic, the route from A to B uses the right-side road, while the route from B to A uses the left-side road. Since the road structures, road conditions, and road widths of the left-side and right-side roads differ, the outbound route and the return route need to be treated as different set routes.
Since the set route is not supported by a high-precision map, an automatic driving mode for fixed routes can be started in the learning process. In this mode, the fixed route is taken as the set route, and the automatic driving system controls the vehicle to drive automatically along the set route based on the sensing data of various sensors; while the vehicle drives along the set route, its driving track can be collected and recorded by a data acquisition system, and the road boundaries can be collected and recorded by sensors such as the radar and cameras of the vehicle. In addition, to improve learning quality and driving safety, or when the set route is not a fixed route, the user may instead drive the vehicle along the set route; the driving track and road boundaries are collected and recorded in the same way during driving.
The sensing range of the vehicle sensors is limited, so the road boundary collected by them is a discontinuous curve. For example, when a wide intersection lies ahead of the vehicle, the sensors cannot collect the road boundary outside the sensing range and can only collect the boundary within it, so the road boundaries within the vehicle perception range come one segment at a time.
S120, determining the relative road widths of the vehicle running track and the multi-section road boundary.
The automatic driving system acquires the vehicle driving track from the data acquisition system and the multiple segments of road boundaries from the various vehicle sensors, and then enters the construction process. In the construction process, the automatic driving system may determine the relative road width between the vehicle driving track and each road boundary. The road boundaries may include a left road boundary (collected by the vehicle sensors on the left of the driving track) and a right road boundary (collected on the right of the driving track); correspondingly, the relative road width may include a left road width (the relative road width between the driving track and the left road boundary) and a right road width (the relative road width between the driving track and the right road boundary).
S130, constructing an automatic driving map of the set route by adopting the vehicle running track, the multi-section road boundary and the relative road width.
The relative road width of the set route can be obtained through the vehicle running track and the multiple road boundaries in the perception range acquired on the set route, and then the automatic driving map of the set route can be constructed by adopting the vehicle running track, the multiple road boundaries and the relative road width, for example, the automatic driving map of the set route can be constructed by adopting the vehicle running track, the left road boundary, the right road boundary, the left road width and the right road width, so that the problem of automatic driving map construction of the set route without high-precision map support can be solved.
And S140, identifying a road structure of the set route according to the automatic driving map, wherein the road structure comprises at least one of a split road and a confluent road.
After the automatic driving map of the set route is constructed, the identification process is entered. In the identification process, since the automatic driving map contains prior information about the set route, which may include at least one of road boundary information, relative road width information, and selected-passage road information (which road the vehicle chose to take), the automatic driving system can identify the road structure of the set route from the road boundary information and the relative road width information, for example, identify complex road structures such as the split roads or confluent roads of the set route, thereby solving the problem of split-and-merge road identification of the set route without high-precision map support.
In addition, the automatic driving system can determine the passable road area of the vehicle according to the selected-passage road information, so that the vehicle can be directly controlled to drive automatically within that area, avoiding wrong-way situations caused by driving purely on navigation information. At an intersection, the road the vehicle took during the learning process represents the road it will take in the future, whereas automatic driving with only navigation information must choose a road at the intersection each time. For example, assume the vehicle encounters intersection roads A and B ahead. If the vehicle took road A during the learning process, road A is recorded in the vehicle driving track, the automatic driving map is constructed from that track, and the map carries the selected-passage road information. When the vehicle encounters roads A and B again, the automatic driving system can determine from the selected-passage road information that the passable road area is road A and directly control the vehicle to drive on road A, without having to choose between the roads again. Automatic driving with only navigation information, by contrast, must still choose between roads A and B, and driving purely on navigation information easily takes the wrong road.
Therefore, even without high-precision map support, the embodiment of the application can identify complex road structures such as the split roads or confluent roads of the set route through the constructed automatic driving map, and can avoid taking a wrong road when driving according to that map.
It should be noted that, the application range of the road structure identification method provided by the embodiment of the application is wider, and the method can be applied to an area without high-precision map coverage and an area with high-precision map coverage.
In the embodiment of the application, the vehicle driving track collected on the set route and the multiple segments of road boundaries within the vehicle perception range are acquired; the relative road widths of the driving track and the road boundary segments are determined; the automatic driving map of the set route is constructed from the driving track, the road boundary segments, and the relative road widths; and the road structure of the set route is identified according to the automatic driving map, the road structure comprising at least one of a split road and a confluent road. By collecting the driving track and the road boundaries within the perception range on the set route, the relative road widths of the set route are obtained and the automatic driving map is constructed, so that even without high-precision map support, the split roads or confluent roads of the set route can be identified through the constructed map.
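The four steps S110 to S140 can be summarized in code. The following Python sketch is illustrative only: the data structure, the function names, and the width-based split/merge heuristic are assumptions for exposition, not the patent's actual implementation.

```python
from dataclasses import dataclass, field

# Illustrative container for the constructed map (S130): the recorded
# trajectory, the perceived boundary segments, and the per-point relative
# road widths. All names here are assumptions, not from the patent.
@dataclass
class AutopilotMap:
    trajectory: list                                   # learning trajectory points (x, y)
    boundaries: list                                   # multi-segment road boundaries
    left_widths: list = field(default_factory=list)    # relative width to left boundary per point
    right_widths: list = field(default_factory=list)   # relative width to right boundary per point

def identify_road_structure(amap: AutopilotMap, widen: float = 1.5) -> str:
    """Toy stand-in for S140: a sustained growth of the total relative road
    width along the route suggests a split (diversion) road ahead, and a
    sustained shrink suggests a confluent (merge) road."""
    totals = [l + r for l, r in zip(amap.left_widths, amap.right_widths)]
    if totals[-1] > totals[0] * widen:
        return "split"
    if totals[0] > totals[-1] * widen:
        return "merge"
    return "normal"
```

A real implementation would examine the whole width profile and the boundary geometry rather than compare only the first and last points, but the map-plus-width-profile idea is the same.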
Fig. 2 is another flow chart of a road structure recognition method according to an embodiment of the application.
Referring to fig. 2, the road structure recognition method of the present application includes:
s210, acquiring a vehicle running track acquired on a set route and a plurality of sections of road boundaries in a vehicle perception range.
In the learning process, a departure place and a destination may be set, and a route from the departure place to the destination may be set as the set route. In the process of the vehicle running along the set route, the running track of the vehicle can be acquired and recorded through the data acquisition system, and the road boundary can be acquired and recorded through sensors such as radar, cameras and the like of the vehicle, so that the automatic driving system can acquire the running track of the vehicle from the data acquisition system and acquire the road boundary from various vehicle sensors.
The sensing range of the vehicle sensor is limited, and the road boundary acquired by the vehicle sensor is a discontinuous curve, so that the road boundary in the sensing range of the vehicle is segment by segment. As shown in fig. 3, the road boundaries may include left and right road boundaries, which are discontinuous curves, so that the road boundaries obtained by the automatic driving system from various vehicle sensors may include a plurality of left road boundaries and a plurality of right road boundaries within a vehicle perception range.
The types of road boundary may include at least one of: ground markings that delimit a traffic zone (e.g., channelization markings, double solid yellow lines), physical separation facilities (e.g., guardrails, water-filled barriers), and the boundary contours of buildings.
S220, taking the vehicle running track as a learning track, and extracting a plurality of learning track points from the learning track according to a preset interval distance.
In the construction process, the vehicle driving track collected in the learning process can be used as a learning track, the embodiment of the application can record the relative position relation between the road boundary and the learning track, and the relative position relation is used for representing that the road boundary and the learning track are in the same coordinate system, so that the learning map of the set route is constructed based on a plurality of sections of road boundaries and the learning track in the same coordinate system, and fig. 3 is the learning map.
According to the embodiment of the application, a plurality of learning track points can be extracted from the learning track at a preset interval distance, and the learning track points may include the start point and end point of the learning track. As shown in fig. 3, learning track points O, A, B, and C are extracted at a fixed preset interval distance on the learning track, where learning track point O may be the start point of the learning track. The smaller the preset interval distance, the finer the learning track information and the higher the accuracy of the automatic driving map constructed from it, but the larger the corresponding computation and storage cost; the preset interval distance is therefore generally no more than 2.0 m, and preferably 0.5 m to 1.0 m.
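The extraction of learning track points at a fixed interval distance can be sketched as an arc-length resampling of the recorded track. This Python sketch is illustrative; the patent does not specify the exact sampling procedure, and the function name is an assumption.

```python
import math

def resample_track(points, spacing=1.0):
    """Emit a learning track point every `spacing` metres of arc length
    along a polyline (the patent suggests spacing <= 2.0 m, preferably
    0.5-1.0 m). The start point is always kept."""
    out = [points[0]]
    dist_left = spacing                      # distance still to walk before next sample
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)   # length of this polyline segment
        t = 0.0                              # distance already walked on this segment
        while seg - t >= dist_left:
            t += dist_left
            r = t / seg                      # interpolation ratio along the segment
            out.append((x0 + r * (x1 - x0), y0 + r * (y1 - y0)))
            dist_left = spacing
        dist_left -= (seg - t)               # carry the remainder into the next segment
    return out
```

With points (0, 0) and (5, 0) and spacing 1.0 m the sketch yields six points at 0 m through 5 m. Whether the track end point is kept exactly depends on divisibility, so an implementation following the patent (which includes the end point) would append it explicitly when it is missed.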
It should be noted that the learning track may in general be a continuous curve or a discontinuous curve; discontinuities may be introduced deliberately under different learning-track schemes. When the learning track is a discontinuous curve, no learning track points need to be extracted in the interrupted regions; it suffices to extract learning track points on each continuous curve segment at the preset interval distance. The vehicle may travel through the discontinuous regions in a free-area driving mode (a driving mode belonging to automatic driving).
It should be noted that, in the embodiment of the present application, other similar navigation lines or lines with direction indication may be used as the learning track, and also discrete points or line segments may be used as the learning track. When the learning track is a discrete point, the learning track point can be selected in an interpolation mode, and the discrete point can also be directly used as the learning track point. Regardless of the form of learning trajectory used, embodiments of the present application require the vehicle to travel one pass along the set route in order to obtain the road boundaries of the set route collected during travel.
S230, determining the relative road width of each learning track point and the corresponding road boundary.
The autopilot system may first determine the relative road width of each learned trajectory point and the corresponding road boundary. Specifically, the autopilot system may first determine a left road width for each learned trajectory point and a corresponding left road boundary, and determine a right road width for each learned trajectory point and a corresponding right road boundary.
In one embodiment, determining the relative road width of each learning track point and the corresponding road boundary includes:
The method comprises the steps of: determining the tangential direction of the learning track at each learning track point; taking the direction perpendicular to the tangential direction as the emission direction; emitting a ray from each learning track point along its corresponding emission direction until the ray meets the corresponding road boundary; obtaining the length of the ray; and determining the length of the ray as the relative road width between that learning track point and the corresponding road boundary.
The autopilot system may first determine the tangential direction of the learning trajectory at each learning trajectory point, respectively. The tangential direction at each learning track point includes an upward tangential direction and a downward tangential direction, the upward tangential direction indicates that the tangential direction points to the next learning track point, the downward tangential direction indicates that the tangential direction points to the previous learning track point, and one type of tangential direction may be selected in the embodiment of the present application, as shown in fig. 3, the upward tangential direction at each learning track point may be selected, for example, the upward tangential direction B at the learning track point B indicates that the tangential direction B points to the next learning track point C.
After the tangential direction at each learning track point is determined, a direction perpendicular to the tangential direction may be taken as an emission direction, i.e., the direction in which the learning track point emits a ray toward the corresponding road boundary. Since the road boundaries include a left road boundary and a right road boundary, the emission directions include a left emission direction (perpendicular to the tangential direction, on the left side of the learning track) and a right emission direction (perpendicular to the tangential direction, on the right side of the learning track). Specifically, a target angle θ may be calculated for each learning track point, namely the angle between its tangential direction and a reference direction; as shown in fig. 3, the target angle θ of learning track point B is the angle between tangential direction b and the reference direction. The reference direction may be chosen arbitrarily, but must be the same for all learning track points. After the target angle θ of each learning track point is calculated, a preset angle may be added to θ to obtain the left emission angle of that learning track point, and the preset angle may be subtracted from θ to obtain its right emission angle.
Wherein, the preset angle may be set to pi/2, so the left-side emission angle may be (θ+pi/2), and the right-side emission angle may be (θ -pi/2). Both (θ+pi/2) and (θ -pi/2) represent angle values in radians, and thus pi/2 corresponds to an angle of 90 degrees. As shown in fig. 3, the direction of the left emission angle (θ+pi/2) may be determined as the left emission direction, and the direction of the right emission angle (θ -pi/2) may be determined as the right emission direction.
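The angle bookkeeping above, with the preset angle set to pi/2, can be sketched as follows. This Python sketch is illustrative (the function name is an assumption); the "upward" tangent at a point is approximated by the direction toward the next learning track point.

```python
import math

def emission_directions(p, p_next):
    """Target angle theta at a learning track point (tangent pointing at
    the next point, i.e. the 'upward' tangent, measured against the +x
    reference direction), plus the unit left/right ray directions at the
    emission angles theta + pi/2 and theta - pi/2."""
    theta = math.atan2(p_next[1] - p[1], p_next[0] - p[0])
    left = (math.cos(theta + math.pi / 2), math.sin(theta + math.pi / 2))
    right = (math.cos(theta - math.pi / 2), math.sin(theta - math.pi / 2))
    return theta, left, right
```

For a track heading along the positive x-axis, theta is 0, the left ray points along +y, and the right ray points along -y, matching fig. 3's convention of left and right relative to the direction of travel.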
The rays corresponding to each learning track point may include a left ray and a right ray, as shown in fig. 3, each learning track point may be used as a left ray starting point, that is, from each learning track point, a left ray is emitted along a corresponding left emission direction, that is, a left ray is emitted in a direction of a left emission angle (θ+pi/2), until the left ray encounters a corresponding left road boundary, and the length of the left ray is obtained. And each learning track point can be used as a right ray starting point, namely, each learning track point is started respectively, right rays are emitted along the corresponding right emitting direction, namely, the right rays are emitted in the direction of the right emitting angle (theta-pi/2), until the right rays meet the corresponding right road boundary, and the emission is stopped, so that the length of the right rays is obtained.
The length of a ray refers to the distance from the ray end point back to the ray starting point (the learning track point), so the length of the ray may be determined as the relative road width between each learning track point and the corresponding road boundary. Each learning track point therefore corresponds to two length values: the length of the left ray emitted in the direction of the left emission angle (θ+π/2), recorded as the left road width w_l^n, and the length of the right ray emitted in the direction of the right emission angle (θ−π/2), recorded as the right road width w_r^n.
It should be noted that in the left road width w_l^n, l represents left, and in the right road width w_r^n, r represents right, while n represents the n-th learning track point.
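The ray-casting step above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: `emission_dirs` and `ray_length` are hypothetical helper names, the heading θ is in radians, and a road boundary is approximated as a polyline of 2-D points.

```python
import math

def emission_dirs(theta):
    """Left/right emission directions for a track point with heading theta
    (radians): unit vectors at angles (theta + pi/2) and (theta - pi/2)."""
    left = (math.cos(theta + math.pi / 2), math.sin(theta + math.pi / 2))
    right = (math.cos(theta - math.pi / 2), math.sin(theta - math.pi / 2))
    return left, right

def ray_length(origin, direction, boundary):
    """Length of the ray from origin along direction to its first intersection
    with a boundary polyline (list of 2-D points); None if it never meets it."""
    ox, oy = origin
    dx, dy = direction
    best = None
    for (x1, y1), (x2, y2) in zip(boundary, boundary[1:]):
        ex, ey = x2 - x1, y2 - y1
        denom = dx * ey - dy * ex
        if abs(denom) < 1e-12:
            continue  # ray is parallel to this boundary segment
        # solve origin + t*direction = (x1, y1) + u*(ex, ey)
        t = ((x1 - ox) * ey - (y1 - oy) * ex) / denom
        u = ((x1 - ox) * dy - (y1 - oy) * dx) / denom
        if t >= 0.0 and 0.0 <= u <= 1.0:
            best = t if best is None else min(best, t)
    return best
```

For example, a track point at the origin with heading θ = 0 and a boundary along y = 4 m gives a left ray length, i.e. a left road width, of 4 m.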
In an embodiment, the method may further include:
When the ray does not meet the corresponding road boundary and the length of the ray is greater than or equal to a preset length, the preset length is taken as the length of the ray.
Since a road boundary is a discontinuous curve, a ray may fail to meet the corresponding road boundary. For this case, the embodiment of the present application may preset a length, which may be any value; preferably, the preset length may be 25.0 m. When a ray does not meet the corresponding road boundary and its length reaches or exceeds the preset length, the preset length of 25.0 m is taken as the length of the ray, so that 25.0 m is determined as the relative road width between the current learning track point and the corresponding virtual road boundary. Specifically, when the left ray does not meet the corresponding left road boundary and its length reaches or exceeds 25.0 m, 25.0 m is taken as the length of the left ray and determined as the left road width between the current learning track point and the corresponding left virtual road boundary; and/or, when the right ray does not meet the corresponding right road boundary and its length reaches or exceeds 25.0 m, 25.0 m is taken as the length of the right ray and determined as the right road width between the current learning track point and the corresponding right virtual road boundary.
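The capping rule can be illustrated with a small helper; `capped_width` and the constant name are assumptions for illustration, and only the 25.0 m preset value comes from the text:

```python
PRESET_LENGTH = 25.0  # metres; fallback when a ray never meets a boundary

def capped_width(raw_length, preset=PRESET_LENGTH):
    """Relative road width from a ray length, capped by the preset length.
    raw_length is None when the ray met no boundary (gap in the perceived
    boundary), in which case the preset length is used directly."""
    if raw_length is None or raw_length >= preset:
        return preset
    return raw_length
```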
S240, filtering processing is carried out on each relative road width, and the target relative road width corresponding to each learning track point is obtained.
The relative road width determined from the learning track points and the road boundaries is usable to a certain extent, but it is not smooth enough, because the vehicle may sway left and right while driving. The embodiment of the present application therefore filters the relative road width corresponding to each learning track point, so as to obtain the filtered target relative road width corresponding to each learning track point, which is smoother than the unfiltered relative road width.
The relative road width includes the left road width w_l^n and the right road width w_r^n. Accordingly, each left road width w_l^n is filtered to obtain the target left road width corresponding to each learning track point, and each right road width w_r^n is filtered to obtain the target right road width corresponding to each learning track point.
In an embodiment, filtering the respective relative road widths to obtain the target relative road widths corresponding to the respective learning track points may include:
Clustering the relative road widths to obtain a plurality of first clusters; traversing each first cluster, and if the relative road width of the current first cluster is larger than the relative road width of an adjacent first cluster, adjusting the relative road width of the current first cluster to the relative road width of the adjacent first cluster, until the traversal ends, thereby obtaining the adjusted target relative road width corresponding to each learning track point.
The embodiment of the application can perform clustering processing on each relative road width so as to obtain a plurality of first clusters. The first clusters may include left first clusters and right first clusters, the left first clusters may be obtained by performing a clustering process on each left road width, and the right first clusters may be obtained by performing a clustering process on each right road width.
In one example, for the clustering of the left road widths, the left road widths may be arranged in the order of the learning track points to obtain a left road width sequence W_l = {w_l^1, w_l^2, …, w_l^N}, where N represents the number of learning track points. The order of the learning track points may be determined by the distance s between each learning track point and the starting point of the learning track, where s may be defined as s = Σ_i d_i, with d_i representing the Euclidean distance from the i-th learning track point to the (i−1)-th learning track point; the s value of the current learning track point is therefore the sum of the Euclidean distances between all adjacent learning track points before it. Illustratively, as shown in fig. 3, when the starting point of the learning track is the learning track point O, the s value of O is 0, the s value of A is d_1, the s value of B is d_1+d_2, and the s value of C is d_1+d_2+d_3, where d_1 represents the Euclidean distance from A to O, d_2 the Euclidean distance from B to A, and d_3 the Euclidean distance from C to B. After the left road width sequence W_l is obtained, the difference between two adjacent left road widths may be calculated; if the difference is less than or equal to a clustering threshold, the two adjacent left road widths are clustered into the same class, i.e. divided into the same left first cluster. The clustering threshold may be any value; for example, it may be set to 3.0 m.
Illustratively, as shown in fig. 3, assume that the difference between the left road width w_l^O of learning track point O and the left road width w_l^A of learning track point A is less than or equal to 3.0 m, the difference between w_l^A and the left road width w_l^B of learning track point B is less than or equal to 3.0 m, and the difference between w_l^B and the left road width w_l^C of learning track point C is greater than 3.0 m; then w_l^O, w_l^A and w_l^B may be divided into the same left first cluster.
According to this clustering manner, a left first cluster sequence C_l = {c_l^1, c_l^2, …, c_l^M} may be obtained, where c_l^m represents the m-th left first cluster and M represents the number of first clusters (left first clusters/right first clusters). Each left first cluster contains one or more consecutive left road widths, and the number of left road widths contained in each left first cluster is not necessarily the same. Illustratively, assume that W_l is clustered into three left first clusters c_l^1, c_l^2 and c_l^3; the left first cluster sequence C_l = {c_l^1, c_l^2, c_l^3} is thus obtained.
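The adjacency-based clustering described above can be sketched as follows, assuming the widths are already ordered by their s values; `cluster_widths` is a hypothetical name, and the 3.0 m default mirrors the clustering threshold suggested in the text:

```python
def cluster_widths(widths, threshold=3.0):
    """Group consecutive road widths into first clusters: two adjacent widths
    whose difference is <= threshold fall into the same cluster; a jump larger
    than the threshold starts a new cluster."""
    clusters = [[widths[0]]]
    for prev, cur in zip(widths, widths[1:]):
        if abs(cur - prev) <= threshold:
            clusters[-1].append(cur)
        else:
            clusters.append([cur])
    return clusters
```

For example, the width sequence {4.0, 4.5, 4.2, 9.0, 9.1, 4.1} splits into three clusters at the two jumps larger than 3.0 m.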
Similarly, a right first cluster sequence C_r = {c_r^1, c_r^2, …, c_r^M} may be obtained by clustering the right road widths. The clustering process for the right road widths is the same as that for the left road widths and is not repeated here.
After the left first cluster sequence C_l is obtained, each left first cluster may be traversed; if the left road width of the current left first cluster is larger than the left road width of an adjacent left first cluster, the left road width of the current left first cluster may be adjusted to the left road width of the adjacent left first cluster. When the traversal ends, an adjusted target left road width sequence W'_l = {w'_l^1, w'_l^2, …, w'_l^N} is obtained, which contains the target left road width corresponding to each learning track point.
Specifically, each left first cluster is traversed; if every left road width in the current left first cluster exceeds the last left road width of the previous left first cluster by at least a second threshold and also exceeds the first left road width of the next left first cluster by at least the second threshold, the current road may be characterized as narrow-wide-narrow. In that case, all left road widths in the current left first cluster may be adjusted to the last left road width of the previous left first cluster, or all of them may be adjusted to the first left road width of the next left first cluster; the second threshold may be set to 3.0 m. Illustratively, continuing the above example in which clustering produced three left first clusters c_l^1, c_l^2 and c_l^3: when the traversal reaches c_l^2, it is compared with the adjacent clusters c_l^1 (before) and c_l^3 (after). If every left road width in c_l^2 exceeds the last left road width of c_l^1 by more than 3.0 m, and every left road width in c_l^2 also exceeds the first left road width of c_l^3 by more than 3.0 m, then all left road widths in c_l^2 may be adjusted to the last left road width of c_l^1, thereby obtaining the adjusted target left road widths; alternatively, all left road widths in c_l^2 may be adjusted to the first left road width of c_l^3.
Similarly, an adjusted target right road width sequence W'_r = {w'_r^1, w'_r^2, …, w'_r^N} may be obtained, which contains the target right road width corresponding to each learning track point. The adjustment process for the target right road widths is the same as that for the target left road widths and is not repeated here.
In this way, when the road is characterized as narrow-wide-narrow, the road width is unified, which prevents the vehicle from mistakenly entering another lane during automatic driving while still avoiding collision between the vehicle and the road boundary. Other road characteristics are deliberately not adjusted, since doing so could cause problems such as collisions. For example, when the road is characterized as wide-narrow-wide, the road width must not be unified: if the narrow section were widened, the vehicle could collide with the road boundary when driving automatically within the passable road area.
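A sketch of the narrow-wide-narrow adjustment, under the assumption that clusters are lists of widths in track order; `filter_narrow_wide_narrow` is a hypothetical name, and the 3.0 m default stands in for the second threshold from the text:

```python
def filter_narrow_wide_narrow(clusters, threshold=3.0):
    """Flatten a 'narrow-wide-narrow' bulge: if every width in a cluster
    exceeds both the last width of the previous cluster and the first width
    of the next cluster by more than the threshold, pull all of its widths
    down to the previous cluster's last width. Other patterns, such as
    wide-narrow-wide, are deliberately left unchanged."""
    out = [list(c) for c in clusters]
    for i in range(1, len(out) - 1):
        prev_last = out[i - 1][-1]
        next_first = out[i + 1][0]
        if min(out[i]) - prev_last > threshold and min(out[i]) - next_first > threshold:
            out[i] = [prev_last] * len(out[i])
    return out
```

Applied to the clusters {4.0, 4.5, 4.2}, {9.0, 9.1}, {4.1}, the middle bulge is flattened to 4.2 m, while a wide-narrow-wide arrangement passes through untouched.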
Note that capital letters are used to represent constant values, so M represents the number of first clusters and N represents the number of learning track points, with M < N in general. Lowercase letters are used to represent index values, so n represents the n-th learning track point and m represents the m-th first cluster; for example, w_l^n represents the left road width corresponding to the n-th learning track point, and w_l^N represents the left road width corresponding to the N-th learning track point.
S250, calculating the passable road boundary point corresponding to each learning track point using the corresponding target relative road width, wherein the passable road boundary points include left passable road boundary points and right passable road boundary points.
After the target relative road width corresponding to each learning track point is obtained, the corresponding target relative road width can be adopted to calculate the passable road boundary point corresponding to each learning track point. The passable road boundary points may include a left-side passable road boundary point and a right-side passable road boundary point, the left-side passable road boundary point may be calculated based on the target left-side road width, and the right-side passable road boundary point may be calculated based on the target right-side road width.
Specifically, each learning track point may be offset along its corresponding left emission direction, i.e. in the direction of the left emission angle (θ+π/2), by a distance equal to its target left road width, thereby obtaining the left passable road boundary point corresponding to each learning track point. For example, as shown in fig. 3, assuming the target left road width corresponding to learning track point B is 4 m, point B may be offset by 4 m in the direction of its left emission angle (θ+π/2) to obtain the left passable road boundary point corresponding to B.
Similarly, each learning track point may be offset along its corresponding right emission direction, i.e. in the direction of the right emission angle (θ−π/2), by a distance equal to its target right road width, thereby obtaining the right passable road boundary point corresponding to each learning track point.
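The offsetting step can be sketched as a single helper; `passable_boundary_point` is a hypothetical name, with the heading θ in radians and the width in metres:

```python
import math

def passable_boundary_point(point, theta, width, side="left"):
    """Offset a track point by the target road width along the left/right
    emission direction (theta + pi/2 or theta - pi/2) to obtain the
    corresponding passable road boundary point."""
    angle = theta + math.pi / 2 if side == "left" else theta - math.pi / 2
    x, y = point
    return (x + width * math.cos(angle), y + width * math.sin(angle))
```

For a point at the origin heading east (θ = 0), a 4 m target left width yields the boundary point (0, 4) and a 3 m target right width yields (0, −3).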
S260, sequentially connecting the plurality of left passable road boundary points to obtain a left passable road boundary, and sequentially connecting the plurality of right passable road boundary points to obtain a right passable road boundary.
The plurality of left passable road boundary points can be sequentially connected according to the sequence of each learning track point, so that a passable road left boundary can be obtained, for example, the left passable road boundary points can be connected into a curve from small to large according to the s value of the corresponding learning track point, and the curve is the passable road left boundary.
Similarly, a plurality of right passable road boundary points can be sequentially connected according to the sequence of each learning track point, so that a right passable road boundary can be obtained, for example, the right passable road boundary points can be connected into another curve from small to large according to the s value of the corresponding learning track point, and the other curve is the right passable road boundary.
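The s values used to order the boundary points are the cumulative Euclidean distances defined earlier; a minimal sketch (`arc_lengths` is a hypothetical name):

```python
import math

def arc_lengths(points):
    """Cumulative distance s along the learning track: s[0] = 0 and
    s[i] = s[i-1] + Euclidean distance between points i-1 and i.
    Boundary points are then connected in increasing order of s."""
    s = [0.0]
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        s.append(s[-1] + math.hypot(x2 - x1, y2 - y1))
    return s
```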
S270, constructing a passable road area of the set route using the left passable road boundary and the right passable road boundary, and outputting the passable road area as the automatic driving map of the set route.
After the left passable road boundary and the right passable road boundary are obtained, they may be used to construct the passable road area of the set route; for example, the area between the left passable road boundary and the right passable road boundary may be taken as the passable road area of the set route. The passable road area is an area in which the vehicle can travel stably under automatic driving, i.e. the automatic driving system can control the vehicle to drive automatically and stably within it, so the passable road area may be output as the automatic driving map of the set route.
S280, identifying a road structure of the passable road area according to the target relative road width change of the passable road area and the surrounding ground markings, wherein the road structure includes at least one of a diversion road and a confluence road.
In the identification process, the passable road area carries prior information about the set route. The prior information may include at least one of road boundary information, relative road width information and information on the selected driving road; the road boundary information may include the surrounding ground markings of the passable road area, and the relative road width information may include the target relative road width change of the passable road area. The automatic driving system can therefore identify the road structure of the passable road area from the surrounding ground markings and the target relative road width change, for example identify complex road structures such as a diversion road or a confluence road, which solves the problem of identifying diverging and merging roads on the set route without high-precision map support.
It should be noted that the road structure identification method provided by the embodiment of the application has a wide application range: it can be applied both to areas without high-precision map coverage and to areas with high-precision map coverage.
In one embodiment, the surrounding ground markings include a guide line, and identifying the road structure of the passable road area based on the target relative road width variation of the passable road area and the surrounding ground markings comprises:
Clustering the target relative road widths to obtain a plurality of second clusters; calculating the target relative road width difference of two adjacent second clusters, determining the target relative road width change of the two adjacent second clusters, and judging whether the two adjacent second clusters correspond to a guide line; if the target relative road width difference is larger than a first threshold, the target relative road width changes from large to small, and the two adjacent second clusters correspond to a guide line, identifying the road structure of the passable road area at the two adjacent second clusters as a split road; or, if the target relative road width difference is larger than the first threshold, the target relative road width changes from small to large, and the two adjacent second clusters correspond to a guide line, identifying the road structure of the passable road area at the two adjacent second clusters as a converging road.
Since the relative road widths have been filtered to obtain the target relative road widths, the filtered target relative road widths need to be clustered again according to the clustering method described above, thereby obtaining a plurality of second clusters. The second clusters may include left second clusters and right second clusters: the left second clusters may be obtained by clustering the target left road widths, and the right second clusters by clustering the target right road widths. Specifically, just as the left road width sequence W_l was clustered above, the filtered target left road width sequence W'_l is clustered again, and a re-clustered left second cluster sequence C'_l = {c'_l^1, c'_l^2, …, c'_l^K} may be obtained, where K represents the number of second clusters (left second clusters/right second clusters) and K ≤ M. Illustratively, assume the left first cluster sequence C_l = {c_l^1, c_l^2, c_l^3}; after each left road width is filtered, the target left road widths are obtained, and clustering these target left road widths again yields a left second cluster sequence such as C'_l = {c'_l^1, c'_l^2}.
Similarly, a re-clustered right second cluster sequence C'_r = {c'_r^1, c'_r^2, …, c'_r^K} may be obtained.
The surrounding ground markings may include a guide line, i.e. a traffic marking used to indicate the diverging and merging of vehicles. After the left second cluster sequence C'_l is obtained, the target left road width difference of two adjacent left second clusters may be calculated, the target left road width change of the two adjacent left second clusters determined, and whether the two adjacent left second clusters correspond to a guide line judged. Likewise, after the right second cluster sequence C'_r is obtained, the target right road width difference of two adjacent right second clusters may be calculated, the target right road width change determined, and whether the two adjacent right second clusters correspond to a guide line judged.
If the target left road width difference is larger than the first threshold and the two adjacent left second clusters correspond to a guide line, this indicates that the passable road area has a diverge/merge scene at the two adjacent left second clusters; the type of the scene can then be further identified from the target left road width change: if the target left road width changes from large to small, the scene is a split road, and if it changes from small to large, the scene is a merging road.
Similarly, if the target right road width difference is larger than the first threshold and the two adjacent right second clusters correspond to a guide line, the passable road area has a diverge/merge scene at the two adjacent right second clusters, and the type of the scene can be further identified from the target right road width change: if the target right road width changes from large to small, the scene is a split road, and if it changes from small to large, the scene is a merging road.
It should be noted that, the first threshold may be adjusted according to specific requirements, for example, the first threshold may be set to 3.5m, which is not limited in the embodiment of the present application.
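The classification logic can be sketched as follows, assuming each second cluster is a list of target widths in track order; `classify_transition` is a hypothetical name, and the 3.5 m default is only one example value of the first threshold:

```python
FIRST_THRESHOLD = 3.5  # metres; example value of the first threshold

def classify_transition(prev_cluster, next_cluster, has_guide_line,
                        threshold=FIRST_THRESHOLD):
    """Classify the road structure at the junction of two adjacent second
    clusters: 'split' (width drops), 'merge' (width grows), or None when the
    width difference is too small or no guide line corresponds."""
    last_w, first_w = prev_cluster[-1], next_cluster[0]
    if abs(last_w - first_w) <= threshold or not has_guide_line:
        return None
    return "split" if last_w > first_w else "merge"
```

A width drop from 8.2 m to 4.0 m with a guide line present is classified as a split; without a guide line, or with a change within the threshold, no structure is reported.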
In an embodiment, calculating the target relative road width difference of the two adjacent second clusters and determining the target relative road width change of the two adjacent second clusters may include:
Selecting the target relative road width at the last position in the previous second cluster and the target relative road width at the first position in the next second cluster; calculating the absolute value of the difference between the last-position target relative road width and the first-position target relative road width as the target relative road width difference of the two adjacent second clusters; and determining the width change from the last-position target relative road width to the first-position target relative road width as the target relative road width change of the two adjacent second clusters.
A target left road width may be selected from each of the two adjacent left second clusters as its representative. Preferably, the two selected target left road widths are adjacent to each other, so that the target left road width change of the two adjacent left second clusters can be accurately reflected by them. Specifically, in two adjacent left second clusters, the last target left road width in the former left second cluster is adjacent to the first target left road width in the latter left second cluster, so the last target left road width may be selected from the former left second cluster and the first target left road width from the latter left second cluster. Illustratively, assume that the two adjacent left second clusters are c'_l^1 and c'_l^2; then the target left road width at the last position, denoted w'_l,last, may be selected from c'_l^1, and the target left road width at the first position, denoted w'_l,first, from c'_l^2. The two adjacent target left road widths w'_l,last and w'_l,first can be used to characterize the target left road width change of the two adjacent left second clusters c'_l^1 and c'_l^2.
Similarly, in two adjacent right second clusters, the target right road width at the last position in the former right second cluster and the target right road width at the first position in the latter right second cluster are selected. For example, the target right road width at the last position, denoted w'_r,last, may be selected from the former right second cluster c'_r^1, and the target right road width at the first position, denoted w'_r,first, from the latter right second cluster c'_r^2; the two adjacent target right road widths w'_r,last and w'_r,first can be used to characterize the target right road width change of the two adjacent right second clusters.
After the last-position target left road width w'_l,last and the first-position target left road width w'_l,first are selected, the absolute value of their difference may be calculated as the target left road width difference of the two adjacent left second clusters, i.e. Δw'_l = |w'_l,last − w'_l,first|; and the width change from w'_l,last to w'_l,first is determined as the target left road width change of the two adjacent left second clusters.
Similarly, after the last-position target right road width w'_r,last and the first-position target right road width w'_r,first are selected, the absolute value of their difference may be calculated as the target right road width difference of the two adjacent right second clusters, i.e. Δw'_r = |w'_r,last − w'_r,first|; and the width change from w'_r,last to w'_r,first is determined as the target right road width change of the two adjacent right second clusters.
In an embodiment, after identifying the road structure of the passable road area, the method may further include:
and identifying the diversion direction of the diversion road or the confluence direction of the confluence road according to the position of the guide line.
Specifically, after the road structure of the passable road area at two adjacent left second clusters (or two adjacent right second clusters) is identified as a split road, the diverging direction of the split road at those two clusters may be identified according to the position of the guide line. Likewise, after the road structure of the passable road area at two adjacent left second clusters (or two adjacent right second clusters) is identified as a merging road, the merging direction of the merging road at those two clusters may be identified according to the position of the guide line.
In one embodiment, the target relative road width includes a target left road width and a target right road width, and identifying a diverging direction of the diverging road or identifying a converging direction of the converging road according to a position of the guide line may include:
If the guide line is within a preset range of the target left road width at the first position, identifying that the diverging direction of the diverging road is rightward; or, if the guide line is within a preset range of the target right road width at the first position, identifying that the diverging direction of the diverging road is leftward; or, if the guide line is within a preset range of the target left road width at the last position, identifying that the merging direction of the merging road is leftward; or, if the guide line is within a preset range of the target right road width at the last position, identifying that the merging direction of the merging road is rightward.
In one example, if the guide line is within the preset range of the first-position target left road width w'_l,first, the diverging direction of the split road at the two adjacent left second clusters is identified as rightward. The preset range may be dynamically adjusted according to the specific map-building manner and the like; for example, the preset range may be set to 3.0 m. In summary, as shown in fig. 4, if the target left road width difference Δw'_l = |w'_l,last − w'_l,first| is larger than the first threshold, the target left road width changes from large to small (w'_l,last > w'_l,first), and a guide line lies within 3.0 m of the smaller target left road width w'_l,first, then the position of w'_l,last and w'_l,first is identified as a rightward diverging scene.
In another example, if the guide line is within the preset range of the first-position target right road width w'_r,first, the diverging direction of the split road at the two adjacent right second clusters is identified as leftward. In summary, as shown in fig. 5, if the target right road width difference Δw'_r = |w'_r,last − w'_r,first| is larger than the first threshold, the target right road width changes from large to small (w'_r,last > w'_r,first), and a guide line lies within 3.0 m of the smaller target right road width w'_r,first, then the position of w'_r,last and w'_r,first is identified as a leftward diverging scene.
In yet another example, if the guide line is within the preset range of the last-position target left road width w'_l,last, the merging direction of the merging road at the two adjacent left second clusters is identified as leftward. In summary, as shown in fig. 6, if the target left road width difference Δw'_l is larger than the first threshold, the target left road width changes from small to large (w'_l,last < w'_l,first), and a guide line lies within 3.0 m of the smaller target left road width w'_l,last, then the position of w'_l,last and w'_l,first is identified as a leftward merging scene.
In yet another example, if the guide line is within the preset range of the last-position target right road width w'_r,last, the merging direction of the merging road at the two adjacent right second clusters is identified as rightward. In summary, as shown in fig. 7, if the target right road width difference Δw'_r is larger than the first threshold, the target right road width changes from small to large (w'_r,last < w'_r,first), and a guide line lies within 3.0 m of the smaller target right road width w'_r,last, then the position of w'_r,last and w'_r,first is identified as a rightward merging scene.
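The four direction rules above can be condensed into a small table-driven helper; `split_merge_direction` is a hypothetical name, and the boolean argument stands for "a guide line lies within the preset range of the smaller target road width":

```python
def split_merge_direction(structure, side, guide_line_near_smaller_width):
    """Direction of a split/merge scene, given which side's widths (left or
    right boundary) triggered the detection and whether a guide line lies
    within the preset range of the smaller target road width.
    Splits detected on the left boundary point rightward and vice versa;
    merges detected on the left boundary point leftward and vice versa."""
    if not guide_line_near_smaller_width:
        return None
    if structure == "split":
        return "right" if side == "left" else "left"
    if structure == "merge":
        return "left" if side == "left" else "right"
    return None
```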
According to the scheme provided by the application, in the case that the set route is not supported by a high-precision map, the embodiment of the application can collect the vehicle driving track on the set route and the multiple sections of road boundary within the perception range to obtain the relative road widths of the set route, and then convert the relative road widths, through the filtering method, into a passable road area better suited to automatic driving, avoiding the left-right swaying that occurs while the vehicle drives. From the target relative road width change of the passable road area and the surrounding ground markings, diverge/merge scenes can then be accurately identified and their specific diverging or merging directions provided. Finer road structure information can thus be supplied for automatic driving, giving the automatic driving system a basis for links such as vehicle path planning, lane-change timing and vehicle speed planning in these scenes.
Corresponding to the embodiment of the application function implementation method, the application also provides a road structure identification device, a vehicle, a computer readable storage medium and corresponding embodiments.
Fig. 8 is a schematic view of a road structure recognition device according to an embodiment of the present application.
Referring to fig. 8, the present application provides a road structure recognition apparatus, which may include:
The set route information obtaining module 810 is configured to acquire a vehicle running track collected on a set route and multiple sections of road boundaries within the vehicle perception range;
a relative road width determining module 820 for determining the relative road widths of the vehicle driving track and the multi-section road boundary, respectively;
the automatic driving map construction module 830 is configured to construct an automatic driving map of a set route using a vehicle driving track, a plurality of road boundaries and a relative road width;
the road structure identification module 840 is configured to identify a road structure of the set route according to the autopilot map, where the road structure includes at least one of a split road and a merge road.
In one embodiment, the relative road width determination module 820 may include:
The learning track point extraction sub-module is used for taking the vehicle running track as a learning track and extracting a plurality of learning track points from the learning track according to a preset interval distance;
and the relative road width determining sub-module is used for determining the relative road width of each learning track point and the corresponding road boundary.
In one embodiment, the relative road width determination submodule may include:
a tangential direction determining unit for determining tangential directions of the learning tracks at the respective learning track points;
an emission direction determining unit configured to take a direction perpendicular to the tangential direction as an emission direction;
The first length obtaining unit is used for respectively starting from each learning track point, emitting rays along the corresponding emitting direction until the rays meet the corresponding road boundary, and ending the emission to obtain the length of the rays;
and a relative road width determining unit for determining the length of the ray as the relative road width of each learning track point and the corresponding road boundary.
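As an illustrative, non-limiting sketch (not part of the claimed subject matter), the ray emission described by the above units may be implemented as follows. The segment-intersection helper, the polyline boundary representation, and the 20 m preset length are assumptions chosen for illustration:

```python
import math

def ray_segment_hit(origin, direction, seg_a, seg_b):
    """Return the distance along the ray at which it crosses segment a-b, or None."""
    ox, oy = origin
    dx, dy = direction
    ax, ay = seg_a
    bx, by = seg_b
    sx, sy = bx - ax, by - ay
    denom = dx * sy - dy * sx
    if abs(denom) < 1e-12:            # ray parallel to the segment
        return None
    t = ((ax - ox) * sy - (ay - oy) * sx) / denom   # distance along the ray
    u = ((ax - ox) * dy - (ay - oy) * dx) / denom   # position along the segment
    if t >= 0.0 and 0.0 <= u <= 1.0:
        return t
    return None

def relative_road_width(track_point, tangent, boundary, max_len=20.0):
    """Cast a ray perpendicular to the tangent at a learning track point; the
    distance to the first boundary hit is the relative road width, capped at a
    preset length when no boundary is met (see the preset-length rule above)."""
    tx, ty = tangent
    n = math.hypot(tx, ty)
    direction = (-ty / n, tx / n)     # left-perpendicular; negate for the right side
    hits = [ray_segment_hit(track_point, direction, boundary[i], boundary[i + 1])
            for i in range(len(boundary) - 1)]
    hits = [h for h in hits if h is not None]
    return min(hits) if hits else max_len
```

In this sketch the road boundary is a polyline given as a list of points; a real perception stack would supply its own boundary representation.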
In an embodiment, the relative road width determination submodule may further include:
the second length obtaining unit is used for taking the preset length as the length of the ray when the ray does not meet the corresponding road boundary and the length of the ray is greater than or equal to the preset length.
In one embodiment, the autopilot map construction module 830 may include:
The filtering processing sub-module is used for carrying out filtering processing on each relative road width to obtain a target relative road width corresponding to each learning track point;
The passable road boundary point calculation sub-module is used for calculating passable road boundary points corresponding to each learning track point by adopting the corresponding target relative road width, wherein the passable road boundary points comprise left passable road boundary points and right passable road boundary points;
The passable road boundary point connection submodule is used for sequentially connecting a plurality of left passable road boundary points to obtain a left passable road boundary, and sequentially connecting a plurality of right passable road boundary points to obtain a right passable road boundary;
and the passable road area construction submodule is used for constructing a passable road area of a set route by adopting a left border of the passable road and a right border of the passable road and outputting the passable road area as an automatic driving map of the set route.
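A minimal sketch of the boundary-point construction performed by the above sub-modules, assuming the track, tangents, and target widths are given as parallel lists (the function name and data layout are illustrative, not the claimed implementation):

```python
import math

def passable_boundaries(track_points, tangents, left_widths, right_widths):
    """Offset each learning track point left/right along the perpendicular to
    the trajectory tangent by its target widths, producing the left and right
    passable road boundaries as ordered point lists."""
    left, right = [], []
    for (x, y), (tx, ty), lw, rw in zip(track_points, tangents, left_widths, right_widths):
        n = math.hypot(tx, ty)
        px, py = -ty / n, tx / n              # left-perpendicular unit vector
        left.append((x + px * lw, y + py * lw))
        right.append((x - px * rw, y - py * rw))
    return left, right
```

Connecting each returned list in order yields the left and right passable road boundaries, which together delimit the passable road area output as the automatic driving map.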
In an embodiment, the filtering processing sub-module may include:
the first clustering unit is used for carrying out clustering processing on the relative road widths to obtain a plurality of first clusters;
and a relative road width adjusting unit, configured to traverse each first cluster; if the relative road width of the current first cluster is greater than the relative road width of an adjacent first cluster, the relative road width of the current first cluster is adjusted to that of the adjacent first cluster until the traversal ends, obtaining the adjusted target relative road width corresponding to each learning track point.
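One possible reading of this cluster-and-clamp filter is sketched below; the clustering tolerance of 0.5 m and the use of the cluster mean as its representative width are assumptions for illustration:

```python
def filter_widths(widths, tol=0.5):
    """Cluster consecutive, similar relative widths, then clamp any cluster
    that is wider than an adjacent cluster down to that neighbour's width,
    suppressing short local widenings that would make the vehicle weave."""
    clusters = []
    for w in widths:
        if clusters and abs(clusters[-1][-1] - w) <= tol:
            clusters[-1].append(w)        # same first cluster
        else:
            clusters.append([w])          # start a new first cluster
    reps = [sum(c) / len(c) for c in clusters]   # one representative width per cluster
    for i, c in enumerate(clusters):
        neighbours = [reps[j] for j in (i - 1, i + 1) if 0 <= j < len(clusters)]
        if neighbours and reps[i] > min(neighbours):
            clusters[i] = [min(w, min(neighbours)) for w in c]
    return [w for c in clusters for w in c]
```

The effect is that a short widening between two narrow stretches is flattened to the narrow width, so the passable boundary does not bulge outward and back.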
In one embodiment, the road structure identification module 840 may include:
And the road structure identification sub-module is used for identifying the road structure of the passable road area according to the relative road width change of the target of the passable road area and the surrounding ground marks.
In one embodiment, the surrounding ground marking includes a guide line, and the road structure identification sub-module may include:
The second clustering unit is used for clustering the relative road widths of all the targets to obtain a plurality of second clusters;
The second cluster information processing unit is configured to calculate the target relative road width difference of two adjacent second clusters, determine the target relative road width change of the two adjacent second clusters, and judge whether the two adjacent second clusters correspond to a guide line;
a split road identification unit, configured to identify the road structure of the passable road area at the two adjacent second clusters as a split road if the target relative road width difference is greater than the first threshold, the target relative road width changes from large to small, and a guide line is present; and
a merge road identification unit, configured to identify the road structure of the passable road area at the two adjacent second clusters as a merge road if the target relative road width difference is greater than the first threshold, the target relative road width changes from small to large, and a guide line is present.
In an embodiment, the second cluster information processing unit may include:
a target relative road width selecting subunit, configured to select, from the two adjacent second clusters, the target relative road width at the last position in the previous second cluster and the target relative road width at the first position in the next second cluster;
a target relative road width difference calculating subunit, configured to calculate the absolute value of the difference between the target relative road width at the last position and the target relative road width at the first position as the target relative road width difference of the two adjacent second clusters; and
And the target relative road width change determining subunit is used for determining the width change from the target relative road width at the last position to the target relative road width at the first position as the target relative road width change of two adjacent second clusters.
In an embodiment, after identifying the road structure of the passable road area, the apparatus may further include:
and the split-merging direction identification module is used for identifying the split-merging direction of the split-merging road or identifying the merging direction of the merging road according to the position of the guide line.
In one embodiment, the target relative road width includes a target left road width and a target right road width, and the split-and-merge direction identification module may include:
if the guide line is within a preset range of the target left road width at the first position, the split direction of the split road is identified as the rightward split direction; or
if the guide line is within a preset range of the target right road width at the first position, the split direction of the split road is identified as the leftward split direction; or
if the guide line is within a preset range of the target left road width at the last position, the merging direction of the merge road is identified as the leftward merging direction; or
if the guide line is within a preset range of the target right road width at the last position, the merging direction of the merge road is identified as the rightward merging direction.
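The classification logic described above can be sketched as two small decision functions; the first-threshold value of 2.0 m and the string labels are assumptions for illustration only:

```python
def classify_structure(prev_width, next_width, has_guide_line, first_threshold=2.0):
    """Decide whether the width jump between two adjacent second clusters,
    together with a guide line, indicates a split or a merge road."""
    if abs(prev_width - next_width) <= first_threshold or not has_guide_line:
        return None                      # no split/merge at this cluster boundary
    # Width shrinking from large to small => split; growing => merge.
    return "split" if prev_width > next_width else "merge"

def classify_direction(structure, guide_side):
    """Infer the direction from which side's target width the guide line sits
    near: per the rules above, a split diverges away from the guide line's
    side, while a merge joins in from that side."""
    if structure == "split":
        return "right" if guide_side == "left" else "left"
    if structure == "merge":
        return guide_side
    return None
```

For example, a width drop from 7.0 m to 3.5 m with a guide line near the left width would be classified as a rightward split under these assumed rules.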
With the method and the device, a vehicle running track collected on a set route and multiple sections of road boundaries within the vehicle perception range can be acquired; the relative road widths of the vehicle running track and the multiple sections of road boundaries are determined respectively; an automatic driving map of the set route is constructed using the vehicle running track, the multiple sections of road boundaries, and the relative road widths; and a road structure of the set route is identified from the automatic driving map, the road structure including at least one of a split road and a merge road. The application collects the vehicle running track and the multiple sections of road boundaries within the perception range on the set route to obtain the relative road widths of the set route, and then constructs the automatic driving map of the set route from them, so that even without the support of a high-precision map, the split road or merge road of the set route can be identified through the constructed automatic driving map.
The specific manner in which the respective modules perform the operations in the apparatus of the above embodiments has been described in detail in the embodiments related to the method, and will not be described in detail herein.
Fig. 9 is a schematic structural view of a vehicle shown in an embodiment of the present application.
Referring to fig. 9, a vehicle 900 includes a memory 910 and a processor 920.
The processor 920 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Memory 910 may include various types of storage units, such as system memory, read-only memory (ROM), and persistent storage. The ROM may store static data or instructions required by the processor 920 or other modules of the computer. The persistent storage may be a readable-and-writable storage device, and may be a non-volatile memory device that does not lose stored instructions and data even after the computer is powered down. In some embodiments, the persistent storage employs a mass storage device (e.g., a magnetic or optical disk, or flash memory). In other embodiments, the persistent storage may be a removable storage device (e.g., a diskette or optical drive). The system memory may be a read-write memory device or a volatile read-write memory device, such as dynamic random access memory. The system memory may store instructions and data required by some or all of the processors at runtime. Furthermore, memory 910 may include any combination of computer-readable storage media, including various types of semiconductor memory chips (e.g., DRAM, SRAM, SDRAM, flash memory, programmable read-only memory), magnetic disks, and/or optical disks. In some implementations, memory 910 may include a readable and/or writable removable storage device, such as a compact disc (CD), a read-only digital versatile disc (e.g., DVD-ROM, dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density optical disc, a flash memory card (e.g., SD card, mini SD card, micro-SD card), a magnetic floppy disk, and the like. The computer-readable storage medium does not contain a carrier wave or an instantaneous electronic signal transmitted by wireless or wired transmission.
The memory 910 has stored thereon executable code that, when processed by the processor 920, can cause the processor 920 to perform some or all of the methods described above.
Furthermore, the method according to the application may also be implemented as a computer program or computer program product comprising computer program code instructions for performing part or all of the steps of the above-described method of the application.
Or the application may also be embodied as a computer-readable storage medium (or non-transitory machine-readable storage medium or machine-readable storage medium) having stored thereon executable code (or a computer program or computer instruction code) which, when executed by a processor of a vehicle, causes the processor to perform some or all of the steps of a method according to the application as described above.
The foregoing description of embodiments of the application has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or improvements over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (12)
1. A method of identifying a road structure, comprising:
acquiring a vehicle driving track collected on a set route and a plurality of sections of road boundaries within a vehicle perception range;
determining the relative road widths of the vehicle driving track and the multi-section road boundary respectively;
Constructing an automatic driving map of the set route by adopting the vehicle running track, the multi-section road boundaries and the relative road widths;
identifying a road structure of the set route according to the automatic driving map, wherein the road structure comprises at least one of a split road and a confluent road;
the identifying the road structure of the set route according to the autopilot map includes:
performing clustering processing on target relative road widths to obtain a plurality of second clusters; calculating a target relative road width difference of two adjacent second clusters, determining a target relative road width change of the two adjacent second clusters, and judging whether the two adjacent second clusters correspond to a guide line; if the target relative road width difference is greater than a first threshold, the target relative road width changes from large to small, and a guide line is present, identifying a road structure of a passable road area at the two adjacent second clusters as a split road; or if the target relative road width difference is greater than the first threshold, the target relative road width changes from small to large, and a guide line is present, identifying the road structure of the passable road area at the two adjacent second clusters as a confluent road.
2. The method of claim 1, wherein the determining the relative road widths of the vehicle travel track and the multi-segment road boundary, respectively, comprises:
Taking the vehicle running track as a learning track, and extracting a plurality of learning track points from the learning track according to a preset interval distance;
and determining the relative road width of each learning track point and the corresponding road boundary.
3. The method of claim 2, wherein determining the relative road widths of each learning trajectory point and the corresponding road boundary comprises:
Determining tangential directions of the learning tracks at the learning track points respectively;
Taking a direction perpendicular to the tangential direction as an emission direction;
Starting from each learning track point, emitting rays along the corresponding emitting direction until the rays meet the corresponding road boundary, and ending the emission to obtain the length of the rays;
The length of the ray is determined as the relative road width of the respective learning trajectory point and the corresponding road boundary.
4. A method according to claim 3, characterized in that the method further comprises:
and when the ray does not meet the corresponding road boundary and the length of the ray is greater than or equal to the preset length, taking the preset length as the length of the ray.
5. The method of claim 2, wherein the constructing the automatic driving map of the set route using the vehicle driving track, the multi-segment road boundary, and the relative road widths comprises:
Filtering each relative road width to obtain a target relative road width corresponding to each learning track point;
Calculating passable road boundary points corresponding to the learning track points by adopting the corresponding relative road widths of the targets, wherein the passable road boundary points comprise left passable road boundary points and right passable road boundary points;
Sequentially connecting a plurality of left passable road boundary points to obtain a left passable road boundary, and sequentially connecting a plurality of right passable road boundary points to obtain a right passable road boundary;
and constructing a passable road area of the set route by adopting the left passable road boundary and the right passable road boundary, and outputting the passable road area as an automatic driving map of the set route.
6. The method of claim 5, wherein the filtering the respective relative road widths to obtain the target relative road widths corresponding to the respective learning track points comprises:
Clustering the relative road widths to obtain a plurality of first clusters;
And traversing each first cluster, and if the relative road width of the current first cluster is larger than the relative road width of the adjacent first cluster, adjusting the relative road width of the current first cluster to the relative road width of the adjacent first cluster until the traversing is finished, and obtaining the adjusted target relative road width corresponding to each learning track point.
7. The method of claim 1, wherein calculating the target relative road width difference for two adjacent second clusters and determining the target relative road width change for the two adjacent second clusters comprises:
Selecting the target relative road width at the last position in the previous second cluster and selecting the target relative road width at the first position in the next second cluster from the two adjacent second clusters;
Calculating the absolute value of the difference value between the target relative road width at the last position and the target relative road width at the first position to serve as the target relative road width difference value of the two adjacent second clusters; and determining a width change from the last target relative road width to the first target relative road width as a target relative road width change for the two adjacent second clusters.
8. The method of claim 7, wherein after the identifying the road structure of the set route, the method further comprises:
identifying a split direction of the split road or a confluence direction of the confluent road according to a position of the guide line.
9. The method of claim 8, wherein the target relative road width comprises a target left road width and a target right road width, and the identifying the split direction of the split road or the confluence direction of the confluent road according to the position of the guide line comprises:
if the guide line is within a preset range of the target left road width at the first position, identifying the split direction of the split road as a rightward split direction; or
if the guide line is within a preset range of the target right road width at the first position, identifying the split direction of the split road as a leftward split direction; or
if the guide line is within a preset range of the target left road width at the last position, identifying the confluence direction of the confluent road as a leftward confluence direction; or
if the guide line is within a preset range of the target right road width at the last position, identifying the confluence direction of the confluent road as a rightward confluence direction.
10. A road structure identification device, characterized by comprising:
a set route information acquisition module, configured to acquire a vehicle driving track collected on a set route and a plurality of sections of road boundaries within a vehicle perception range;
The relative road width determining module is used for respectively determining the relative road widths of the vehicle running track and the multi-section road boundary;
The automatic driving map construction module is used for constructing an automatic driving map of the set route by adopting the vehicle running track, the multi-section road boundaries and the relative road widths;
the road structure identification module is used for identifying the road structure of the set route according to the automatic driving map, wherein the road structure comprises at least one of a split road and a confluent road;
the road structure identification module includes:
the road structure identification sub-module is configured to perform clustering processing on target relative road widths to obtain a plurality of second clusters; calculate a target relative road width difference of two adjacent second clusters, determine a target relative road width change of the two adjacent second clusters, and judge whether the two adjacent second clusters correspond to a guide line; if the target relative road width difference is greater than a first threshold, the target relative road width changes from large to small, and a guide line is present, identify a road structure of a passable road area at the two adjacent second clusters as a split road; or if the target relative road width difference is greater than the first threshold, the target relative road width changes from small to large, and a guide line is present, identify the road structure of the passable road area at the two adjacent second clusters as a confluent road.
11. A vehicle, characterized by comprising:
Processor, and
A memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method of any of claims 1-9.
12. A computer readable storage medium having executable code stored thereon, which when executed by a processor of a vehicle causes the processor to perform the method of any of claims 1-9.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410744953.2A CN118942062B (en) | 2024-06-07 | 2024-06-07 | Road structure identification method, device, vehicle and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN118942062A CN118942062A (en) | 2024-11-12 |
| CN118942062B (en) | 2025-04-29 |
Family
ID=93343412
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410744953.2A Active CN118942062B (en) | 2024-06-07 | 2024-06-07 | Road structure identification method, device, vehicle and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN118942062B (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105069217A (en) * | 2015-07-31 | 2015-11-18 | 南京邮电大学 | Road dynamic partitioning model based city rescue simulation method |
| CN112061121A (en) * | 2019-05-21 | 2020-12-11 | 铃木株式会社 | Vehicle travel control device |
Family Cites Families (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113022573B (en) * | 2019-12-06 | 2022-11-04 | 华为技术有限公司 | Road structure detection method and device |
| KR20210130324A (en) * | 2020-04-21 | 2021-11-01 | 현대자동차주식회사 | Advanced Driver Assistance System, Vehicle having the same and method for controlling the vehicle |
| CN111696059B (en) * | 2020-05-28 | 2022-04-29 | 武汉中海庭数据技术有限公司 | Lane line smooth connection processing method and device |
| JP7086245B1 (en) * | 2021-03-18 | 2022-06-17 | 三菱電機株式会社 | Course generator and vehicle control device |
| CN115031740A (en) * | 2022-06-23 | 2022-09-09 | 岚图汽车科技有限公司 | Road reconstruction system and method based on electronic horizon |
| CN115410171A (en) * | 2022-09-13 | 2022-11-29 | 合肥四维图新科技有限公司 | Method, device and equipment for identifying vehicle driving road and storage medium |
| CN116433786A (en) * | 2022-12-13 | 2023-07-14 | 广州小鹏自动驾驶科技有限公司 | Traffic flow scene generation method, device, equipment and storage medium |
| CN116486360A (en) * | 2023-04-27 | 2023-07-25 | 高德软件有限公司 | Road boundary determination method, device, electronic equipment and storage medium |
| CN117760441A (en) * | 2023-12-22 | 2024-03-26 | 浙江吉利控股集团有限公司 | Method, device, equipment and storage medium for determining intersection range |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN112512890B (en) | Abnormal driving behavior recognition method | |
| CN115265564B (en) | Lane marking method and device | |
| EP3703033A1 (en) | Track prediction method and device for obstacle at junction | |
| US20200265710A1 (en) | Travelling track prediction method and device for vehicle | |
| RU2706763C1 (en) | Vehicle localization device | |
| JP6852793B2 (en) | Lane information management method, driving control method and lane information management device | |
| EP2012088B1 (en) | Road information generating apparatus, road information generating method and road information generating program | |
| CN105359200B (en) | For handling the method that the measurement data of vehicle begins look for parking stall for determination | |
| JP6800575B2 (en) | Methods and systems to assist drivers in their own vehicles | |
| KR102485480B1 (en) | A method and apparatus of assisting parking by creating virtual parking lines | |
| JP2018197964A (en) | Vehicle control method and apparatus | |
| CN109903574B (en) | Method and device for acquiring traffic information at intersection | |
| CN115195773A (en) | Apparatus and method for controlling vehicle driving, and recording medium | |
| US20230221136A1 (en) | Roadmap generation system and method of using | |
| JP5888275B2 (en) | Road edge detection system, method and program | |
| CN111291141B (en) | Track similarity determination method and device | |
| CN116027375B (en) | Positioning method and device for automatic driving vehicle, electronic equipment and storage medium | |
| CN117508232A (en) | Track prediction method, device, equipment and medium for vehicle surrounding obstacle | |
| CN118942062B (en) | Road structure identification method, device, vehicle and storage medium | |
| WO2025252214A1 (en) | Method and apparatus for constructing vehicle passage road, vehicle and readable storage medium | |
| KR20240048748A (en) | Method and apparatus of determinig line information | |
| CN113188550B (en) | Map management and path planning method and system for tracking automatic driving vehicle | |
| CN115268463A (en) | Obstacle avoidance path planning method, vehicle and storage medium | |
| US20250050906A1 (en) | Autonomous Driving Control Apparatus and Method Thereof | |
| CN113033527A (en) | Scene recognition method and device, storage medium and unmanned equipment |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||