CN118898825B - Road environment state perception method, equipment, medium, program product and vehicle - Google Patents
- Publication number
- CN118898825B (application CN202411379932.1A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- cloud data
- obstacle
- road surface
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/93—Lidar systems specially adapted for specific applications for anti-collision purposes
- G01S17/931—Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
- G01C21/1652—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C22/00—Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/42—Simultaneous measurement of distance and other co-ordinates
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- General Physics & Mathematics (AREA)
- Electromagnetism (AREA)
- Computer Networks & Wireless Communication (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Automation & Control Theory (AREA)
- Traffic Control Systems (AREA)
Abstract
The invention discloses a road environment state sensing method, device, medium, program product and vehicle, applied to the technical field of autonomous driving. The method divides the road surface dense point cloud data in a target area ahead of the travelling vehicle into near-field dense point cloud data and far-field dense point cloud data, determines suspected obstacles by analyzing the road surface flatness of the far-field dense point cloud data, and verifies against the near-field dense point cloud data whether a suspected obstacle is a target obstacle. When a suspected obstacle is verified as a target obstacle, its three-dimensional morphology information is determined from the matching result of point cloud data at adjacent moments, and a road environment state sensing result for the vehicle is generated from the three-dimensional morphology information of all target obstacles acquired while the vehicle travels. The invention addresses the inability of the related art to identify road obstacles accurately: it can precisely identify road pothole areas and low raised obstacle areas, assisting the vehicle in recognizing the road environment state more finely.
Description
Technical Field
The present invention relates to the field of autonomous driving technology, and in particular to a road environment state sensing method, an electronic device, a non-volatile storage medium, a computer program product, and an unmanned vehicle.
Background
With the rapid development of artificial intelligence and Internet of Things technologies, autonomous driving technology has advanced accordingly. Road flatness affects not only the normal operation of an autonomous vehicle and the comfort of its passengers, but also the service life of the road and its infrastructure. An autonomous driving perception system therefore needs to attend not only to typical obstacles such as pedestrians and vehicles, but also to monitor whether the road contains potholes and low raised obstacles.
The related art predicts the occupancy probability of each voxel grid and its possible target category through an OCC (Occupancy Network), realizing perception of the full road scene. However, because the cost of annotating data for low raised obstacles and pothole areas is too high, and the accuracy of the OCC algorithm cannot reach the precision required to identify pothole areas and low raised obstacles, it cannot accurately determine whether pothole areas and low raised obstacles exist on the road.
In view of this, accurately identifying road pothole areas and low raised obstacle areas, and assisting the vehicle in recognizing the road environment state more finely, is a technical problem that those skilled in the art need to solve.
It should be noted that the information disclosed in the background section above is only for enhancing understanding of the background of the application, and may therefore include information that does not form prior art already known to those of ordinary skill in the art.
Disclosure of Invention
The invention provides a road environment state sensing method, an electronic device, a non-volatile storage medium, a computer program product and an unmanned vehicle, which can accurately identify road pothole areas and low raised obstacle areas and assist the vehicle in recognizing the road environment state more finely.
In order to solve the above technical problems, the invention provides the following technical solutions:
In one aspect, the present invention provides a road environment state sensing method, including:
The method comprises the steps of obtaining road surface dense point cloud data in a target area ahead of the travelling vehicle; dividing the road surface dense point cloud data into near-field dense point cloud data and far-field dense point cloud data according to the distance between the point cloud data and the vehicle; determining suspected obstacles in the target area by analyzing the road surface flatness of the far-field dense point cloud data; verifying, according to the near-field dense point cloud data, whether a suspected obstacle is a target obstacle; when a suspected obstacle is verified as a target obstacle, determining three-dimensional morphology information of the target obstacle according to the point cloud data at adjacent moments and their matching result; and generating a road environment state sensing result for the vehicle according to the three-dimensional morphology information of each target obstacle obtained during vehicle travel; wherein the road surface dense point cloud data are obtained by removing, from the splicing result of the point cloud data at the current moment and the spliced point cloud data at the previous moment, the point cloud data that do not belong to the road surface area of the target area.
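The dividing step in the claim above can be sketched with a minimal illustration. This is not the patent's implementation: the function name `split_near_far`, the 30 m near/far boundary and the `(x, y, z)` tuple format are assumptions for demonstration only.

```python
import math

# Hedged sketch: split road surface dense point cloud data into near-field
# and far-field sets by each point's horizontal distance to the vehicle.
# The 30 m boundary is an assumed value; the patent does not fix one.
def split_near_far(points, near_limit=30.0):
    """points: (x, y, z) tuples in the vehicle frame, x pointing forward."""
    near, far = [], []
    for x, y, z in points:
        bucket = near if math.hypot(x, y) <= near_limit else far
        bucket.append((x, y, z))
    return near, far

cloud = [(5.0, 0.0, 0.0), (40.0, 1.0, 0.1), (80.0, -2.0, -0.05)]
near_field, far_field = split_near_far(cloud)
```

The downstream steps (flatness analysis, verification) would then run on `far_field` and `near_field` respectively.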
In a first exemplary embodiment, the determining the suspected obstacle by analyzing the road surface flatness of the far-field dense point cloud data includes:
The method comprises the steps of rasterizing the first target subregion corresponding to the far-field dense point cloud data according to a first preset grid size; dividing the first target subregion according to a clustering result of the average normal vectors within the grids of the first target subregion, to obtain multiple classes of first road surface areas; rasterizing the current first road surface area according to a second preset grid size; determining a flat road surface area meeting a preset flatness according to the direction of the average normal vector of each grid of the current first road surface area; and merging, based on the spatial distance relation, the remaining areas of the current first road surface area excluding the flat road surface area, taking them as spatial positions of suspected obstacles; wherein the target area comprises a first target subregion and a second target subregion, the maximum distance between the second target subregion and the vehicle in the vehicle travel direction is smaller than the minimum distance between the first target subregion and the vehicle, the point cloud data corresponding to the first target subregion are the far-field dense point cloud data, the point cloud data corresponding to the second target subregion are the near-field dense point cloud data, and the second preset grid size is smaller than the first preset grid size.
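The rasterization step above can be illustrated with a simple hash-grid that buckets points by integer cell indices. The helper name `rasterize` and the 1 m cell size are assumptions for illustration, not values from the patent.

```python
# Hedged sketch of rasterizing a far-field subregion: points are grouped
# into square cells keyed by integer (x, y) indices; each cell's points can
# then be used to fit a plane and an average normal vector.
def rasterize(points, cell_size):
    grids = {}
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size))
        grids.setdefault(key, []).append((x, y, z))
    return grids

pts = [(30.2, 1.1, 0.0), (30.4, 1.3, 0.02), (35.7, -2.0, 0.3)]
cells = rasterize(pts, cell_size=1.0)
```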
In a second exemplary embodiment, the determining the flat road surface area meeting the preset flatness according to the direction of the average normal vector of each grid of the current first road surface area includes:
The method comprises the steps of fitting a three-dimensional space plane equation for each grid of the current first road surface area according to the three-dimensional position information of the point cloud data in that grid; calculating the average normal vector within each grid of the current first road surface area based on the three-dimensional space plane equation of that grid; voting, at intervals of a preset angle threshold, on whether each average normal vector is the road surface normal vector according to its direction; taking the target average normal vector with the largest number of votes as the road surface normal vector; and taking the grids corresponding to the target average normal vector as the flat road surface area.
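The voting step above can be sketched as angular binning of per-grid normals. This is an assumed simplification: tilt from vertical is binned at the preset angle threshold, and the most-voted bin's grids are taken as the flat road surface. The function name, the 5° step and the example normals are illustrative only.

```python
import math

# Hedged sketch of normal-vector voting: given per-grid average normal
# vectors, bin them by tilt angle from vertical in steps of a preset angle
# threshold; the largest bin is taken as the road surface normal direction,
# and its grids form the flat road surface area.
def vote_road_normal(normals, angle_step_deg=5.0):
    bins = {}
    for i, (nx, ny, nz) in enumerate(normals):
        norm = math.sqrt(nx * nx + ny * ny + nz * nz)
        tilt = math.degrees(math.acos(abs(nz) / norm))  # angle from vertical
        bins.setdefault(int(tilt // angle_step_deg), []).append(i)
    return max(bins.values(), key=len)  # grid indices voted as flat road

normals = [(0.0, 0.0, 1.0), (0.01, 0.0, 1.0), (0.5, 0.0, 0.87)]
flat_grids = vote_road_normal(normals)
```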
In a third exemplary embodiment, the acquiring the road surface dense point cloud data in the target area ahead of the travelling vehicle includes:
The method comprises the steps of obtaining the vehicle motion data and vehicle attitude data at the current moment; determining initial pose change information according to the vehicle motion data and vehicle attitude data; taking the initial pose change information as the initial value for iterative calculation, performing point cloud matching calculation on the point cloud data at the current moment and the point cloud data at the previous moment, and determining a rotation matrix and a translation vector; transforming the spliced point cloud data of the previous moment according to the rotation matrix and translation vector; removing, from the transformed spliced point cloud data, the point cloud data that no longer fall within the target area at the current vehicle position, taking the result as the dense point cloud data of the previous moment; splicing the dense point cloud data of the previous moment with the point cloud data of the current moment to obtain the dense point cloud data of the current moment; performing semantic segmentation processing on the acquired driving scene image data to obtain drivable area image information; and removing, according to the correspondence between image pixels and the three-dimensional points of the point cloud, the point cloud data that do not correspond to the road surface of the drivable area from the dense point cloud data, to obtain the road surface dense point cloud data.
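The splicing step above applies the rotation matrix and translation vector found by point cloud matching to the previous stitched cloud before merging it with the current scan. The sketch below uses assumed example values (a pure 1 m forward ego-motion, so R is identity and t shifts old points back); it is not the output of an actual matching run.

```python
# Hedged sketch of splicing: transform the previous stitched cloud into the
# current vehicle frame with (R, t) from point cloud matching, then merge
# with the current scan to form the dense cloud of the current moment.
def transform(points, R, t):
    out = []
    for x, y, z in points:
        out.append((
            R[0][0] * x + R[0][1] * y + R[0][2] * z + t[0],
            R[1][0] * x + R[1][1] * y + R[1][2] * z + t[1],
            R[2][0] * x + R[2][1] * y + R[2][2] * z + t[2],
        ))
    return out

R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # assumed: no rotation
t = [-1.0, 0.0, 0.0]                    # assumed: vehicle moved 1 m forward
prev_stitched = [(10.0, 0.5, 0.0)]
current_scan = [(9.0, 0.5, 0.0)]
dense = transform(prev_stitched, R, t) + current_scan
```

In practice the initial pose change from motion/attitude data would seed an iterative matcher (e.g. an ICP variant) that refines R and t before this transform is applied.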
In a fourth exemplary embodiment, before the splicing the dense point cloud data of the previous time and the point cloud data of the current time, the method further includes:
The method comprises the steps of obtaining the timestamped laser point cloud data, vehicle motion data and vehicle attitude data at the current moment; interpolating the corresponding vehicle motion data and vehicle attitude data according to the timestamp of each three-dimensional point of the laser point cloud data; and compensating each point accordingly based on the interpolated vehicle motion information and vehicle attitude information, to obtain point cloud distortion calibration data serving as the point cloud data at the current moment.
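The per-point compensation above can be sketched for the simplest case. Assumed simplifications: constant-speed straight motion and compensation only along the forward axis, so a point captured at timestamp `ts` is shifted into the sweep-end frame by the distance the vehicle still travels after capturing it. The patent interpolates full motion and attitude data; this is only the one-dimensional analogue.

```python
# Hedged sketch of lidar motion-distortion compensation: each point carries
# a timestamp within the sweep; its forward coordinate is shifted by the
# vehicle displacement between that timestamp and the sweep end, so all
# points refer to a single (sweep-end) vehicle pose.
def compensate(points_with_ts, sweep_end, speed_mps):
    out = []
    for (x, y, z), ts in points_with_ts:
        dt = sweep_end - ts              # time the vehicle keeps moving
        out.append((x - speed_mps * dt, y, z))
    return out

pts = [((20.0, 0.0, 0.0), 0.00),   # captured at sweep start
       ((20.0, 0.0, 0.0), 0.10)]   # captured at sweep end
fixed = compensate(pts, sweep_end=0.10, speed_mps=10.0)
```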
In a fifth exemplary embodiment, the verifying whether the suspected obstacle is a target obstacle according to the near-field dense point cloud data includes:
The method comprises the steps of obtaining near-field non-flat road surface dense point cloud data by removing the point cloud data of the flat road surface area in the near-field dense point cloud data; obtaining a rotation matrix and a translation vector according to the point cloud matching result of the point cloud data at the current moment and the dense point cloud data at the previous moment; updating the spatial position information of the suspected obstacle in the vehicle coordinate system according to the rotation matrix and translation vector; obtaining, based on the spatial position information, the obstacle dense point cloud data of the suspected obstacle from the near-field non-flat road surface dense point cloud; and determining whether the suspected obstacle is a target obstacle by performing target recognition on the obstacle dense point cloud data.
In a sixth exemplary embodiment, the removing the point cloud data of the flat road surface area in the near-field dense point cloud data includes:
Rasterizing the second target subregion corresponding to the near-field dense point cloud data according to a third preset grid size; dividing the second target subregion into a plurality of second road surface areas according to a clustering result of the average normal vectors of the grids of the second target subregion; correcting the corresponding point cloud data according to the inclination angle of each second road surface area; removing, according to the lidar mounting height, the corrected flat-area point cloud data in each second road surface area; and splicing the remaining point cloud data of the second road surface areas, from which the flat-area point cloud data have been removed, according to their spatial position relations, to obtain the near-field non-flat road surface dense point cloud.
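The mounting-height-based removal above can be sketched as a height filter: after tilt correction, flat road points lie near the plane `z = -mount_height` in the sensor frame, so points deviating beyond a tolerance are kept as non-flat residue. The 1.8 m mounting height, 3 cm tolerance and sample points are assumptions for illustration.

```python
# Hedged sketch: keep only points whose height deviates from the expected
# road plane (z = -mount_height in the lidar frame) by more than tol meters;
# what survives is the near-field non-flat road surface residue (candidate
# pothole / raised obstacle points).
def remove_flat_road(points, mount_height, tol=0.03):
    return [p for p in points if abs(p[2] + mount_height) > tol]

pts = [(5.0, 0.0, -1.80),    # on the road plane -> removed
       (6.0, 0.2, -1.79),    # within tolerance  -> removed
       (7.0, 0.1, -1.95)]    # 15 cm below plane -> kept (pothole candidate)
nonflat = remove_flat_road(pts, mount_height=1.8)
```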
In a seventh exemplary embodiment, the determining whether the suspected obstacle is a target obstacle by performing target recognition on the obstacle dense point cloud data includes:
The method comprises the steps of training a point cloud recognition model in advance; inputting the obstacle dense point cloud data into the point cloud recognition model, which outputs a recognition result indicating whether the obstacle dense point cloud data represent a pothole area or a raised obstacle; and, if the obstacle dense point cloud data represent a pothole area or a raised obstacle, determining that the suspected obstacle is a target obstacle.
In an eighth exemplary embodiment, the determining the three-dimensional morphology information of the target obstacle according to the point cloud data of the adjacent time and the matching result thereof includes:
The method comprises the steps of performing three-dimensional morphology fitting on the obstacle dense point cloud data of the target obstacle to obtain initial three-dimensional morphology information of the target obstacle; removing the obstacle dense point cloud data from the near-field non-flat road surface dense point cloud and taking the result as the dense point cloud data of the previous moment; obtaining the point cloud data of the current moment as the vehicle continues to travel, and splicing the point cloud data of the current moment with the dense point cloud data of the previous moment to obtain the dense point cloud data of the current moment; determining new obstacle dense point cloud data of the target obstacle from the dense point cloud data of the current moment, based on the new spatial position information of the target obstacle in the vehicle coordinate system at the current moment; and performing three-dimensional morphology fitting on the new obstacle dense point cloud data to obtain the three-dimensional morphology information of the target obstacle at the current moment and update the initial three-dimensional morphology information; wherein the near-field non-flat road surface dense point cloud is obtained by removing the point cloud data of the flat road surface area from the near-field dense point cloud data.
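The progressive refinement above — more points fall inside the obstacle region as the vehicle approaches, so each refit sharpens the morphology estimate — can be illustrated with a deliberately crude fit. `pothole_depth` (depth = road height minus lowest point) is an assumed stand-in for the patent's three-dimensional morphology fitting, used only to show why the estimate improves with denser data.

```python
# Hedged illustration of morphology refinement across frames: with a sparse
# far-field sample the deepest observed point underestimates the pothole;
# after approaching, denser near-field points reveal a deeper minimum.
def pothole_depth(points, road_z=0.0):
    return road_z - min(z for _, _, z in points)

far_view = [(50.0, 0.0, -0.04)]                                   # 1 sparse hit
near_view = far_view + [(50.1, 0.1, -0.09), (50.2, -0.1, -0.07)]  # denser scan
depth_far = pothole_depth(far_view)
depth_near = pothole_depth(near_view)
```

The same monotone improvement motivates re-fitting the morphology at every splice until the vehicle passes the obstacle.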
In a ninth exemplary embodiment, before the step of determining the new spatial position information of the target obstacle in the vehicle coordinate system at the current time, the method further includes:
And determining a new rotation matrix and a new translation vector according to a point cloud matching result, and updating new space position information of the target obstacle in a vehicle coordinate system at the current moment according to the new rotation matrix and the new translation vector.
In a tenth exemplary embodiment, after determining the three-dimensional morphology information of the target obstacle according to the point cloud data of the adjacent time and the matching result thereof, the method further includes:
If a new target obstacle exists, performing three-dimensional morphology fitting on the corresponding obstacle dense point cloud data to obtain the three-dimensional morphology information of the new target obstacle.
The invention also provides an electronic device, comprising a processor configured to implement the steps of the road environment state sensing method according to any of the above when executing a computer program stored in a memory.
The invention further provides a non-volatile storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the road environment state sensing method according to any of the above.
The invention also provides a computer program product comprising computer programs/instructions which, when executed by a processor, implement the steps of the road environment state sensing method according to any of the above.
The invention finally provides an unmanned vehicle comprising an autonomous driving perception system configured to implement the steps of the road environment state sensing method according to any of the above when executing a computer program stored in a memory.
The technical solution provided by the invention has the following advantages. Based on the point cloud data collected by the lidar, multi-level obstacle detection, recognition and verification are arranged according to the data distribution characteristics, so that whether obstacles such as low raised obstacles and pothole areas exist ahead of the vehicle can be accurately recognized. Once such obstacles are confirmed, they continuously draw nearer as the vehicle advances, and the granularity of the point cloud data within the low raised obstacle and pothole areas becomes finer; therefore, determining the three-dimensional morphology data of a target obstacle according to the point cloud data at adjacent moments and their matching result gradually improves the precision of the obstacle's three-dimensional morphology data, assists the vehicle in recognizing the environment state more precisely, and improves the safety and comfort of the vehicle.
In addition, the invention provides a corresponding electronic device, non-volatile storage medium, computer program product and unmanned vehicle for implementing the road environment state sensing method, making the method more practical; the electronic device, non-volatile storage medium, computer program product and unmanned vehicle have corresponding advantages.
The technical features mentioned above, the technical features to be mentioned below and the technical features shown in the drawings alone may be arbitrarily combined with each other as long as the combined technical features are not contradictory. All possible combinations of features are specifically described herein. Any one of the plurality of sub-features contained in the same sentence may be applied independently, and not necessarily with other sub-features. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
For a clearer description of the present invention and its technical solutions, the drawings used in the description of the embodiments or the related art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and a person skilled in the art may obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a road environment state sensing method according to the present invention;
FIG. 2 is a schematic illustration of a target area ahead of a vehicle in an illustrative example provided by the present invention;
FIG. 3 is a schematic representation of a coordinate system definition in an illustrative example provided by the present invention;
FIG. 4 is a flow chart of a method for computing dense point cloud data on a road surface in an exemplary embodiment of the present invention;
FIG. 5 is a flowchart of another road environment state sensing method according to the present invention;
FIG. 6 is a block diagram of an embodiment of a road environment state sensing device according to the present invention;
FIG. 7 is a block diagram of an embodiment of an electronic device according to the present invention;
fig. 8 is a schematic diagram of obstacle recognition of an unmanned vehicle according to the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood by those skilled in the art, the present invention is further described in detail below with reference to the accompanying drawings and the detailed description. The terms "first", "second", "third", "fourth" and the like in the description and in the above figures are used to distinguish different objects, not necessarily to describe a particular sequential or chronological order. Furthermore, the terms "comprise" and "have", as well as any variations of the two, are intended to cover a non-exclusive inclusion. The term "exemplary" means "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
It is understood that road potholes or low raised obstacles can cause an autonomous vehicle to run unstably and jolt, which not only affects passenger comfort but may also cause the vehicle to lose control suddenly and thus cause a traffic accident. If a vehicle runs for a long time on a road surface with potholes and low raised obstacles, its suspension system, tires and sensors can be damaged, further affecting the vehicle's performance and safety. In addition, road potholes damage the pavement material and cause subsidence of the roadbed; if not repaired in time, a pothole area easily enlarges and deepens, accelerating the deterioration of road infrastructure and affecting the road's service life and bearing capacity. To avoid these problems, while the vehicle is travelling it is necessary to monitor in real time whether a pothole area or raised obstacle area exists on the road surface ahead, to remind and warn the driver of the detected road surface condition so as to slow the vehicle down, or to use an intelligent suspension system to absorb and counteract the impact and vibration caused by potholes or low raised obstacles.
Currently, the traditional way to monitor roads for potholes or low raised obstacles is manual visual inspection: structural engineers and certified inspectors periodically check the road and report the locations of such target areas. This approach relies on the personal experience of the inspectors and engineers; it cannot guarantee the accuracy of the inspection results, and it is inefficient, costly and dangerous. To overcome these shortcomings of manual visual inspection, the related art detects road target areas efficiently and more safely by computer vision technology, where a target area includes road obstacle areas that affect the normal operation of a vehicle, including but not limited to pothole areas and low raised obstacle areas, such as raised cracks.
The related art realizes three-dimensional road imaging through, for example, 3D point cloud modeling with computer vision algorithms, and then detects road surface obstacle regions with segmentation-based methods, so as to improve the accuracy of road target area detection. In this approach, the robustness of road point cloud modeling is improved by algorithms such as RANSAC (Random Sample Consensus), and image classification accuracy is improved by a DCNN (Deep Convolutional Neural Network); however, the method requires a small field of view and assumes that a single-frame 3D road point cloud is a plane or quadric surface, so its detection accuracy and scene adaptability fall short of practical use, and the width, depth, volume and the like of a road pothole are difficult to obtain with high accuracy. As another example, the related art realizes pavement depression detection for an unmanned vehicle through multi-sensor fusion: the method scans the road surface ahead with sound waves; when a depression is detected, a camera captures a road surface image, the preprocessed image is threshold-segmented into a binarized image, and whether water is present in the depression is judged by counting the white pixels in the binarized image. The distance from the pothole edge to the road edges on both sides is calculated from the camera image, the minimum torque needed for the unmanned vehicle to pass the pothole is computed, and whether the vehicle can pass safely is finally judged from the relation between the minimum torque and the maximum torque. The image binarization of this method is strongly affected by the threshold, adapts poorly to varied illumination conditions, and cannot give information such as pothole depth.
Another related technology uses an RGB-D (Red Green Blue Depth, color image with depth information) depth camera to detect short raised obstacles and road potholes. The method preprocesses the raw point cloud of the RGB-D camera to obtain a corresponding plane point cloud, extracts the short raised obstacle points from the plane point cloud, projects them onto the plane of the plane-model parameters, downsamples them and segments their convex hulls on that plane, transforms the convex hulls into the map coordinate system and publishes them, merges the projected obstacle points back into the ground plane point cloud to form a new plane point cloud, transforms the new plane point cloud into the base_footprint (robot body coordinate system) to form a two-dimensional ground grid map, filters pseudo-potholes on that map, and extracts potholes by edge extraction. Because the effective working range of an RGB-D camera is very limited, usually within 10 meters, and it basically fails under strong outdoor light, and because the accuracy of the point cloud it generates suffers from the density of the projected pattern, its effectiveness in practical use remains to be verified: the usage scenarios are limited, and the final detection accuracy cannot be guaranteed. Yet another related art detects road potholes based on a neural network and binocular stereo vision: the method acquires left and right views of the road scene with a binocular camera mounted at the front of the vehicle, and detects the left view with an object detection neural network.
When a pothole is detected, stereo matching is performed on the left and right views to obtain a depth map; the depth map of the pothole and the surrounding road surface is cropped using the pothole position predicted by the neural network; the road plane around the pothole is then fitted with a combined RANSAC and genetic algorithm method, the distances from the pothole point cloud to the fitted plane are calculated, and the pothole depth is obtained by averaging these distances. The pothole point cloud is projected onto the fitted road plane and the minimum bounding rectangle of the projected points is determined, giving the pothole area. However, because of holes in the disparity map, this method has a large estimation error in the depth direction, and high-precision pothole depth and bulge height are difficult to obtain, so the detection accuracy of the road target area is not high. In another related technology, the occupancy probability of each voxel grid and the target categories it may contain are predicted through OCC, realizing perception of the full road scene; however, because the cost of annotating data for low raised obstacles and pothole areas is too high, and the accuracy of the OCC algorithm cannot reach the precision required to identify them, whether pothole areas and low raised obstacles exist on the road cannot be accurately determined. Therefore, the related art cannot accurately identify road pothole areas and low raised obstacle areas, and cannot assist the vehicle in accurately recognizing the road environment state.
In view of the above, in order to enable the automatic driving perception system to meet scene requirements of continuously refined granularity, besides the recognition of typical obstacles such as pedestrians and vehicles, the invention sets up multi-stage obstacle detection, recognition and verification according to the data distribution characteristics of laser radar data, can accurately recognize low raised obstacles and pothole areas, and assists the vehicle in recognizing the road environment state more accurately. Having described aspects of the invention, various non-limiting embodiments of the invention are described in detail below. Numerous specific details are set forth in the following description in order to provide a better understanding of the invention. It will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the present invention.
Referring to fig. 1, fig. 1 is a flow chart of a road environment state sensing method provided in the present embodiment, where the embodiment may include the following:
S101, acquiring road surface dense point cloud data in a target area in front of the driving vehicle, and dividing the road surface dense point cloud data into near-field dense point cloud data and far-field dense point cloud data according to the distance between the point cloud data and the vehicle.
The target area is a vehicle forward travel area extending forward from the head of the vehicle along the travel direction, and may be, for example, an area 100 m (meters) long and 20 m wide in front of the vehicle, as shown in fig. 2. The road surface dense point cloud data is obtained by removing the point cloud data of non-road-surface regions of the target area from the result of splicing the point cloud data at the current moment with the spliced point cloud data at the previous moment. The point cloud data is acquired by a point cloud data acquisition sensor of the vehicle; the vehicle can be any vehicle with an automatic driving function, and is not limited to an automatic driving vehicle or an unmanned vehicle. The method completely multiplexes the sensors already deployed for the automatic driving function, such as the navigation system, the inertial measurement unit, the odometer and the laser radar at the vehicle end, so no additional sensors need to be installed. To ensure the accuracy of the point cloud data, the vehicle sensors can be calibrated before the point cloud data are acquired to obtain the coordinate-system transformation relations among the sensors; furthermore, in the actual working process, the time alignment of the data acquired by the multi-source sensors can be ensured through a data time synchronization mechanism. Dense point cloud data refers to point cloud data obtained after splicing the point cloud data of adjacent moments, that is, splicing the point cloud data of the current moment with the point cloud data of the previous moment; considering that the number of lines or the resolution of the laser radar is very limited, the acquired point cloud data are very sparse, and the sparsity becomes more prominent as the distance from the laser radar increases.
In order to avoid the problem that the sparsity of the point cloud data causes missed detection of low raised obstacles or pothole areas with small spatial scales, this step realizes densification of the point cloud data by point cloud splicing, thereby reducing the sparsity of the point cloud and the missed detection rate. Because the probability of encountering an obstacle is small when the vehicle has just started, the point cloud data of the previous moment is itself the splicing result of earlier point cloud data. In order to reduce the complexity of subsequent data processing and improve the accuracy and efficiency of obstacle identification, the invention eliminates the non-road-surface point cloud data from the spliced point cloud, so that the remaining point cloud data belong to the road surface region of the target area; this step defines them as the road surface dense point cloud data.
In the invention, if obstacles such as low raised obstacles and pothole areas exist in the target area in front of the vehicle, the point cloud information acquired by the sensor becomes richer as the vehicle gradually approaches them during driving. Because near-field verification is more complex and requires more computing resources, this step can first perform initial recognition in the far-field area; if no suspected low raised obstacle/pothole area is found in the far-field area, the near-field verification calculation flow does not need to be started, which effectively reduces unnecessary computing load. Based on this, the step divides the target area in front of the driving vehicle into a first target sub-area and a second target sub-area according to a preset distance threshold on the distance to the vehicle, that is, the target area comprises the first target sub-area and the second target sub-area. The distance threshold can be preset according to the line number or resolution of the current laser radar; the lower the line number or resolution, the smaller the distance threshold. The maximum distance between the second target sub-area and the vehicle along the driving direction is smaller than the minimum distance between the first target sub-area and the vehicle; that is, the first target sub-area is the far-field area and the second target sub-area is the near-field area. For example, with a distance threshold of 20 m, the far-field area beyond 20 m in front of the vehicle is the first target sub-area, whose size is 80 m x 20 m, and the near-field area within 20 m in front of the vehicle is the second target sub-area, whose size is 20 m x 20 m.
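The far/near split described above can be sketched with a simple mask over the point coordinates. This is an illustrative sketch, not the patent's implementation: it assumes the points form an N x 3 NumPy array in the vehicle coordinate system with x pointing forward, and uses the example dimensions from the text (100 m x 20 m target area, 20 m threshold); the function and parameter names are hypothetical.

```python
import numpy as np

def split_near_far(points, dist_threshold=20.0, max_range=100.0, half_width=10.0):
    """Split road-surface points into near-field and far-field subsets.

    Assumes an N x 3 array in the vehicle coordinate system, x forward
    along the driving direction (hypothetical layout; the text's target
    area is 100 m long and 20 m wide, split at a 20 m threshold).
    """
    x, y = points[:, 0], points[:, 1]
    # keep only points inside the target area in front of the vehicle
    in_area = (x >= 0) & (x <= max_range) & (np.abs(y) <= half_width)
    near = points[in_area & (x < dist_threshold)]   # second target sub-area
    far = points[in_area & (x >= dist_threshold)]   # first target sub-area
    return near, far
```

A point 5 m ahead lands in the near field, one 50 m ahead in the far field, and points outside the 100 m x 20 m area are discarded entirely.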
For ease of distinction, the point cloud data collected in the first target sub-area is recorded as far-field point cloud data, and the point cloud data collected in the second target sub-area as near-field point cloud data; correspondingly, the dense point cloud data corresponding to the first target sub-area is recorded as far-field dense point cloud data, and that corresponding to the second target sub-area as near-field dense point cloud data.
In order to facilitate subsequent data processing, the invention also defines the related coordinate systems, as shown in fig. 3. The vehicle comprises a far-field camera, a near-field camera, a laser radar, an odometer and an inertial measurement unit; for ease of drawing, the far-field camera and the near-field camera are not distinguished, and the odometer and the inertial measurement unit are not drawn. In fig. 3, the camera coordinate system has one axis along the direction of the camera optical axis; the standard camera coordinate system is obtained by rotating the camera coordinate system, with one axis pointing vertically downward; the laser radar coordinate system is fixed to the laser radar; and the local world coordinate system is defined on the road surface, with its vertical axis pointing upward, one horizontal axis pointing forward, and the other horizontal axis pointing to the left. The laser radar coordinate system can be adopted as the vehicle coordinate system, and the local world coordinate system is the coordinate system in which the vehicle travels on the road surface.
S102, determining suspected barriers in the target area by analyzing the road surface flatness of far-field dense point cloud data.
In this embodiment, after the far-field dense point cloud data is obtained by the division in the previous step, the road surface flatness distribution in the far-field region may be calculated, by any method capable of calculating flatness, through analyzing the road surface flatness of the point cloud data of the road surface region. By analyzing the road surface flatness, a region where the road surface flatness exceeds a preset flatness threshold can be determined as an obstacle in the far-field region, that is, whether a low raised obstacle/pothole area exists in the far-field region is detected. Since the obstacle detection accuracy of this step is comparatively coarse and the confidence of the detected obstacle is not high, this step defines it as a suspected obstacle. In order to facilitate verification of the suspected obstacles in subsequent processing, if a plurality of suspected obstacles in the target area are determined in this step, unique identification information can be generated for each low raised obstacle/pothole area identified in the target area through the correspondence of spatial coordinates, so as to facilitate association and tracking.
S103, checking whether the suspected obstacle is a target obstacle according to the near-field dense point cloud data, and, when the suspected obstacle is verified as the target obstacle, determining the three-dimensional morphology information of the target obstacle according to the point cloud data at adjacent moments and their matching results.
The target obstacle is an obstacle with a relatively small elevation or small size on the road surface on which the vehicle travels, including but not limited to pothole areas and low raised obstacles such as raised cracks and deceleration strips. After the far-field identification of obstacles in the previous step, this step verifies the identified obstacles to determine whether a suspected obstacle is a real obstacle; for convenience of distinction, a suspected obstacle verified as a real obstacle is defined as a target obstacle. Considering that the error of the point cloud data collected by the laser radar grows with distance, when a low raised obstacle/pothole area is far from the vehicle, only primary identification is performed through S102 to obtain the suspected low raised obstacle/pothole area. As the vehicle continuously advances, the low raised obstacles and pothole areas gradually approach the vehicle and the collected point cloud data become richer; the position information of the target obstacle can be determined based on the matching result of the point cloud data at adjacent moments, and the shape information of each identified low raised obstacle/pothole area can be obtained from the point cloud data at adjacent moments. As the vehicle continuously approaches the target obstacle, the point cloud data become richer and more precise, and the shape characteristics of the target obstacle can be gradually refined by continuously iterating and updating its three-dimensional morphology information.
S104, generating a road environment state sensing result of the vehicle according to the three-dimensional morphology information of each target obstacle obtained in the running process of the vehicle.
The three-dimensional morphology information of the target obstacles encountered during driving, determined in the previous step, includes but is not limited to the position information and shape information of each target obstacle; the shape information comprises the geometric form and orientation of the target obstacle, and the geometric form comprises length, width and height. A corresponding road environment state sensing result can be generated based on this three-dimensional morphology information, that is, the three-dimensional morphology information of the target obstacles is reflected in the road environment state sensing result. In this way, it plays a corresponding role in roadside sensing scenes, intelligent traffic scenes, V2X (vehicle to everything, i.e. information exchange between the vehicle and the outside), intelligent security scenes and collaborative sensing scenes, for example for accurately evaluating the harmfulness of low raised obstacle/pothole areas, or for obstacle avoidance processing such as automatic route planning.
In the technical scheme provided by this embodiment, based on the point cloud data acquired by the laser radar, multi-level obstacle detection, identification and verification are set up according to the data distribution characteristics, so that whether obstacles such as low raised obstacles and pothole areas exist in front of the vehicle can be accurately identified. When such an obstacle exists, as the vehicle continuously advances and approaches it, the granularity of the point cloud data becomes finer, so the precision of the three-dimensional shape data of the obstacle can be gradually improved according to the point cloud data at adjacent moments and their matching results, further assisting the vehicle in recognizing the environment state in a finer manner and improving the safety and comfort of the vehicle.
The above embodiment does not limit how step S101 is performed; this embodiment gives an exemplary way of obtaining the road surface dense point cloud data, which, as shown in fig. 4, may include the following:
Acquire the vehicle motion data and vehicle motion posture data at the current moment, and determine the initial pose change information between the point cloud data at the current moment and the spliced point cloud data at the previous moment; perform point cloud matching calculation on the point cloud data at the current moment and the spliced point cloud data at the previous moment to determine a rotation transformation matrix and a translation vector; transform the spliced point cloud data of the previous moment according to the rotation transformation matrix and the translation vector; remove, from the transformed spliced point cloud data, the point cloud data that no longer lie ahead of the vehicle position at the current moment, and take the result as the dense point cloud data of the previous moment; splice the dense point cloud data of the previous moment with the point cloud data of the current moment to obtain the dense point cloud data of the current moment; perform semantic segmentation processing on the acquired driving scene image data to obtain driving area image information; and remove the non-road-surface data from the dense point cloud data at the current moment according to the correspondence between image pixels and the three-dimensional points of the point cloud, to obtain the road surface dense point cloud data in the target area.
In this step, single-frame point cloud data are sparse; the sparsity is more obvious for a low-line-count or low-resolution laser radar, and the low raised obstacles and pothole areas to be identified by the method are thus very easy to miss. Therefore, the point cloud data are spliced in time sequence to increase the density of the point cloud data. For example, considering the spatio-temporal continuity of point cloud data, a point cloud matching algorithm such as ICP (Iterative Closest Point) or NDT (Normal Distribution Transform) may be used to match the point clouds. For the same scene, two frames of point cloud data acquired from different distances or angles have an overlapping area, i.e. the spatial structures of the point clouds are similar, so the ICP or NDT algorithm can obtain the transformation relation between the two frames based on the point cloud data in the overlapping area, that is, the rotation transformation matrix R and the translation vector T. In order to obtain a more accurate matching result and reduce the matching calculation amount, the vehicle motion data can be obtained through the odometer and the vehicle motion posture data through the inertial measurement unit, and used as the initial pose change parameters between the point cloud data of the current moment t and the previous spliced point cloud, i.e. as the initial value of the iterative calculation, which effectively reduces the number of iteration steps. After the point cloud matching calculation is completed, the dense point cloud of the previous moment, i.e. moment t-1, is transformed using the obtained rotation transformation matrix R and translation vector T.
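Inside each ICP iteration, the rotation R and translation T for a set of matched point pairs have a closed-form SVD (Kabsch) solution. The sketch below shows that inner solve only, not the full ICP loop with nearest-neighbour matching and the odometer/IMU initial pose described in the text; the function name and array layout are assumptions.

```python
import numpy as np

def rigid_align(src, dst):
    """Closed-form rigid alignment of corresponding point pairs (Kabsch).

    src, dst: N x 3 arrays of matched points. Returns R (3x3) and T (3,)
    such that dst ~= src @ R.T + T. A full ICP would alternate
    nearest-neighbour matching with this solve, seeded with the
    odometer/IMU pose estimate as the initial value.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = dst_c - R @ src_c
    return R, T
```

Given a cloud and a copy of it rotated 30 degrees about z and shifted, the solve recovers that rotation and shift exactly.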
As the vehicle continuously moves forward, part of the dense point cloud data of moment t-1, after transformation, falls into the region behind the vehicle head. Therefore, the point cloud data that no longer lie ahead of the vehicle position at the current moment need to be removed from the transformed spliced point cloud data: taking the laser radar coordinate system as the vehicle coordinate system, the part of the transformed t-1-moment dense point cloud whose x coordinate is smaller than 0 is removed, after which the point cloud data of the current moment t is spliced in, thereby obtaining the dense point cloud data of the current moment.
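The transform-prune-splice update above can be illustrated as follows; a minimal sketch assuming N x 3 arrays, with R and T as produced by the matching step and x < 0 marking points that have fallen behind the vehicle head.

```python
import numpy as np

def update_dense_cloud(prev_dense, R, T, current):
    """Transform the previous-moment dense cloud into the current vehicle
    frame, drop points behind the vehicle head (x < 0 in the lidar/vehicle
    coordinate system), and append the current frame's points."""
    transformed = prev_dense @ R.T + T
    kept = transformed[transformed[:, 0] >= 0.0]   # still ahead of the vehicle
    return np.vstack([kept, current])
```

With an identity rotation and a 2 m backward shift, a point originally 1 m ahead ends up behind the head and is dropped, while the rest are merged with the new frame.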
In order to reduce the subsequent calculation amount and improve the accuracy of subsequent road surface segmentation calculation, this embodiment can further preprocess the dense point cloud data, i.e. remove the point cloud data of non-road-surface areas. One exemplary implementation of obtaining the road surface area is to use a forward-facing camera of the vehicle, such as a far-field camera, a near-field camera or a wide-angle camera, to acquire an image, i.e. driving scene image data, and extract the road surface area from the image using a semantic segmentation algorithm. Then, the correspondence between image pixels and point cloud points is obtained using the calibration parameters of the camera and the laser radar. An exemplary way of obtaining this correspondence is as follows: the three-dimensional coordinate systems of the camera imaging system and the laser radar system are defined as shown in fig. 3, and the spatial transformation relation between the two coordinate systems can be obtained through a limited number of known spatial homonymous points, a process called calibration. Once the spatial correspondence between the two systems is obtained, the correspondence of any spatial homonymous point in the two coordinate systems can be derived. Based on this principle, combined with the imaging principle, the correspondence between image pixel points and spatial points can be obtained, and through the spatial points this is further mapped onto the point cloud data. According to the correspondence between image pixels and point cloud points, the point clouds belonging to the road are identified, the point cloud data are judged point by point, and the non-road point cloud data are removed.
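As an illustration of the pixel-to-point correspondence, the sketch below projects lidar points through assumed calibration parameters (pinhole intrinsics K, lidar-to-camera rotation R_cl and translation t_cl, all hypothetical names) into a boolean road mask produced by semantic segmentation, and keeps only the road-labeled points. This is a generic pinhole-projection sketch, not the patent's specific calibration procedure.

```python
import numpy as np

def road_points_from_mask(points, K, R_cl, t_cl, road_mask):
    """Keep only lidar points whose image projection lands on road pixels.

    points: N x 3 in the lidar frame; K: 3x3 camera intrinsics;
    R_cl, t_cl: lidar-to-camera extrinsics from calibration;
    road_mask: boolean H x W semantic-segmentation road mask.
    All names are illustrative assumptions.
    """
    cam = points @ R_cl.T + t_cl                     # lidar -> camera frame
    in_front = cam[:, 2] > 0                         # only points ahead of the camera
    uv_h = cam[in_front] @ K.T                       # homogeneous pixel coords
    uv = (uv_h[:, :2] / uv_h[:, 2:3]).round().astype(int)
    h, w = road_mask.shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    on_road = np.zeros(valid.shape, dtype=bool)
    on_road[valid] = road_mask[uv[valid, 1], uv[valid, 0]]
    return points[in_front][on_road]
```

A point on the optical axis projects to the principal point; if that pixel is road-labeled the point survives, while out-of-image and behind-camera points are discarded.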
After removing the point cloud data of the non-road-surface area, the dense point cloud data of the road surface region of interest in front of the vehicle is finally obtained according to a preset region of interest, such as the 100 m x 20 m area in fig. 2, thereby obtaining the road surface dense point cloud data.
In order to further improve the flexibility of point cloud densification, this embodiment also provides another, simpler densification method, in which the point cloud data can likewise be matched by a point cloud matching algorithm, considering the spatio-temporal continuity of point cloud data. Because the vehicle moves, its position and posture change between the current moment and the previous moment, so a certain pose change exists between two frames of point cloud data; the inertial measurement unit can obtain the posture information of the vehicle at different moments, and the odometer can obtain the moving distance of the vehicle between the two moments. From these two data, the pose change can be estimated. In order to obtain a more accurate matching result and reduce the calculation amount, the initial pose change data between the point cloud data of the current moment and the previous spliced point cloud can be obtained through the odometer and the inertial measurement unit. After the point cloud matching calculation is completed, the point cloud of the current moment is superimposed on the historical spliced point cloud data to obtain the dense point cloud data.
From the above, by densifying the point cloud data based on point cloud splicing, this implementation effectively reduces the sparsity of the point cloud data, reduces the missed detection rate of road surface obstacles, and effectively improves the recognition accuracy of low raised obstacles and pothole areas.
In order to further improve the recognition accuracy of low raised obstacles and pothole areas, consider the limitation of the data acquisition principle of a mechanical LiDAR (Light Detection and Ranging, laser radar): during driving, distortion is generated along the direction of vehicle motion, and the distortion increases with vehicle speed, which very easily causes judgment errors in the longitudinal direction consistent with the direction of the vehicle head. That is, if the distortion is not eliminated, large errors are produced in the recognition and morphology judgment of low raised obstacles/pothole areas. Based on this, in this embodiment, the distortion of the laser radar point cloud data is corrected using the data acquired by the odometer and the inertial measurement unit; as shown in fig. 4, the distortion correction process may comprise the following steps:
Acquire the laser point cloud data carrying time stamps, the vehicle motion data and the vehicle motion posture data at the current moment; for each three-dimensional point of the laser point cloud data, interpolate the corresponding vehicle motion data and vehicle motion posture data according to its time stamp; and compensate the point cloud data accordingly based on the interpolated vehicle motion information and vehicle posture information, to obtain point cloud distortion calibration data serving as the point cloud data of the current moment.
The vehicle movement distance information within a unit time period is acquired by the odometer, and the vehicle motion posture data within the corresponding unit time period by the inertial measurement unit. The frame rate of the mechanical laser radar is 10 Hz, so the unit time period can correspondingly be set to 0.1 second, that is, the odometer acquires the vehicle movement distance within 0.1 second and the inertial measurement unit acquires the vehicle motion posture data within 0.1 second. Of course, these values can be flexibly selected according to the actual application scene without affecting the realization of the method. Each point in the laser point cloud data of this embodiment carries time stamp information; for each point in the point cloud, the acquired vehicle motion data and vehicle posture data are interpolated according to the time stamp, and its coordinates are compensated. Taking compensation with vehicle motion data as an example, let the actual moment of acquisition of the current data frame be t0; for a certain point cloud point p with uncompensated coordinates (x, y, z) and time stamp t, if the vehicle moves s meters along the x-axis in the interval from t to t0, the compensated coordinates are (x - s, y, z). In the actual application scene, when the moving direction and posture of the vehicle change, corresponding compensation can likewise be performed, thereby eliminating the point cloud distortion caused by vehicle motion.
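The per-point compensation can be sketched as below, restricted like the text's example to straight-line motion along x; each point's travel distance is interpolated from odometer samples with np.interp. The function and parameter names are illustrative, not from the patent.

```python
import numpy as np

def compensate_motion(points, timestamps, t_frame, odo_t, odo_x):
    """Per-point motion (de-skew) compensation, straight-line case.

    points: N x 3 lidar points; timestamps: N per-point stamps;
    t_frame: the frame's reference time; odo_t, odo_x: odometer sample
    times and cumulative travel distances along x. Each point is shifted
    by the distance the vehicle travels between its stamp and t_frame.
    """
    x_at_point = np.interp(timestamps, odo_t, odo_x)
    x_at_frame = np.interp(t_frame, odo_t, odo_x)
    out = points.copy()
    # vehicle moved s metres forward -> the point shifts by -s along x
    out[:, 0] -= (x_at_frame - x_at_point)
    return out
```

At 10 m/s, a point stamped 0.1 s before the frame time is pulled back by 1 m, matching the (x - s, y, z) rule above.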
As can be seen from the above, the distortion of the laser radar point cloud data is eliminated through the vehicle pose data, which improves the accuracy of the point cloud data and thus the recognition accuracy of low raised obstacles and pothole areas.
Considering that a road surface may contain one or several sections of uphill, downhill or inclined conditions, and that the elevation range of low raised obstacles/pothole areas is small, if road surface inclination factors are not taken into account, low raised obstacles and pothole areas are easily misjudged or missed. Therefore, to further improve the recognition precision of low raised obstacles/pothole areas, based on the above embodiment, the invention also provides a high-precision suspected obstacle determination method, which may comprise the following:
Rasterize the first target sub-area corresponding to the far-field dense point cloud data according to a first preset grid size; divide the first target sub-area according to the clustering result of the average normal vectors of its grids to obtain multiple first road surface areas; rasterize the current first road surface area according to a second preset grid size; determine the flat road surface area satisfying the preset flatness according to the direction of the average normal vector of each grid of the current first road surface area; and merge the remaining areas of the current first road surface area, excluding the flat road surface area, based on the spatial distance relation, as the spatial positions of suspected obstacles.
In this embodiment, the rasterization can be performed under a target viewing angle, for example generating grids under the bird's-eye view. The large spatial scale of the first preset grid size is beneficial to obtaining the flat areas of the road surface, while the small spatial scale of the second preset grid size used afterwards is beneficial to obtaining more detailed information. For example, the first preset grid size may be 0.2 m x 0.2 m and the second preset grid size 0.02 m x 0.02 m. As shown in figs. 2 and 3, taking a 0.2 m x 0.2 m cell in the horizontal plane of the local world coordinate system as the statistical area, the far-field area beyond 20 m in front of the vehicle (80 m x 20 m) is rasterized according to 0.2 m x 0.2 m; for convenience of distinction, each resulting road surface area is defined as a first road surface area, and each first road surface area is then rasterized according to a grid size of 0.02 m x 0.02 m. Of course, the values of the first and second preset grid sizes can be flexibly set according to the situation; it is only required that the second preset grid size is smaller than the first preset grid size, and the invention places no limitation on this. The average normal vector within each grid is determined based on the fitted three-dimensional spatial plane equation of the grid; the average normal vectors of the grids are clustered by any clustering method, such as mean shift or k-means, and the first target sub-area is divided according to the clustering result to obtain multiple first road surface areas, such as different uphill and downhill sections. For each first road surface area, the grids are reset and the local average normal vector of each grid is calculated again to analyze the road surface flatness.
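The per-grid average normal can be obtained from an SVD plane fit of the points falling in each bird's-eye-view cell. A minimal sketch under the assumption of an N x 3 array in the local world frame; cells with fewer than three points are skipped, and the cell size defaults to the text's 0.2 m example.

```python
import numpy as np

def grid_normals(points, cell=0.2):
    """Fit a plane to the points in each bird's-eye-view grid cell and
    return a unit normal per cell, keyed by (col, row) cell index.
    The smallest right-singular vector of the centered points is the
    plane normal; normals are oriented to point upward (z >= 0)."""
    keys = np.floor(points[:, :2] / cell).astype(int)
    normals = {}
    for key in {tuple(k) for k in keys}:
        pts = points[(keys == key).all(axis=1)]
        if len(pts) < 3:
            continue                    # not enough points for a plane fit
        centered = pts - pts.mean(axis=0)
        _, _, Vt = np.linalg.svd(centered, full_matrices=False)
        n = Vt[-1]                      # direction of least variance = normal
        normals[key] = n if n[2] >= 0 else -n
    return normals
```

For a perfectly flat patch the fitted normal is the vertical unit vector; tilted patches yield tilted normals, which the clustering step then groups into road surface areas.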
Whether the preset flatness is satisfied is the criterion for judging whether the road surface is flat or uneven, i.e. whether it contains an obstacle. Specifically: fit the three-dimensional spatial plane equation of each grid of the current first road surface area from the three-dimensional position information of the point cloud data inside the grid; calculate the average normal vector within each grid based on its plane equation; vote, at intervals of a preset angle threshold over the average normal vector directions, on whether each average normal vector is the road surface normal vector; take the target average normal vector with the largest number of votes as the road surface normal vector; and take the grids corresponding to the target average normal vector as the flat road surface area. The preset angle threshold can be selected according to the actual situation; for example, votes are cast at intervals of 2 degrees according to the local average normal vector direction, and the direction with the most votes is regarded as the road surface normal, i.e. the flat road surface area, which is removed. The remainder are merged according to the spatial distance relation, thereby obtaining one or more suspected low raised obstacle/pothole areas, including their spatial positions in the local world coordinate system.
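The voting step can be sketched as a histogram over the tilt angle of each grid's average normal, binned every 2 degrees as in the text's example; the winning bin is taken as the road surface normal and its member grids as the flat road area. The function name and the exact binning rule are assumptions.

```python
import numpy as np

def road_normal_by_voting(normals, bin_deg=2.0):
    """Vote per-grid average normals into angular bins.

    normals: iterable of 3-vectors (one per grid). The tilt angle from
    vertical is binned every bin_deg degrees; the most-voted bin's mean
    direction is returned as the road surface normal, together with a
    boolean mask marking the grids that belong to the flat road area.
    """
    normals = np.asarray(normals, dtype=float)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    tilt = np.degrees(np.arccos(np.clip(normals[:, 2], -1.0, 1.0)))
    bins = np.floor(tilt / bin_deg).astype(int)
    win = np.bincount(bins).argmax()     # bin with the most votes
    members = bins == win
    road_n = normals[members].mean(axis=0)
    return road_n / np.linalg.norm(road_n), members
```

With eight near-vertical normals and two tilted 30 degrees, the vertical bin wins, its eight grids form the flat road area, and the tilted grids remain as suspected obstacle candidates to be merged by spatial distance.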
Further, as the vehicle continues to travel forward, unique ID information may be assigned to each suspected low raised obstacle or pothole area newly entering the local world coordinate system; through this ID information, each suspected low raised obstacle/pothole area can be further tracked and continuously refined in subsequent calculations.
As can be seen from the above, by taking the consistency of local normal vectors as the criterion, the identification of suspected low raised obstacles and pothole areas is simple, easy to implement and highly accurate, which is beneficial to improving the recognition accuracy of low raised obstacles and pothole areas. Furthermore, the road surfaces of different areas are obtained through piecewise road surface fitting, and the suspected obstacles in each road surface area are then further identified one by one, so that suspected low raised obstacles and pothole areas are not easily missed, effectively improving the recognition precision of suspected obstacles and pothole areas.
Consider that the horizontal/vertical angular resolution of the laser radar configured on the vehicle is better than 0.06 degrees, and that the spatial perception capability of the laser radar increases as the target distance decreases. Calculated at an angular resolution of 0.06 degrees, the minimum spatial perceptibility is 5.2 cm at 50 meters from the radar, 2.0 cm at 20 meters, 1 cm at 10 meters, and 0.5 cm at 5 meters. The invention accordingly improves the classification and recognition precision of low raised obstacle/pothole areas based on the near-field dense point cloud. The above embodiment does not limit how the suspected obstacle is verified; the invention also provides an exemplary implementation of verifying whether a suspected obstacle is a target obstacle according to the near-field dense point cloud data, which may include the following:
Obtain the near-field non-flat road surface dense point cloud data and the point cloud data of the flat road surface area in the near-field dense point cloud data; obtain a rotation matrix and a translation vector from the point cloud matching result of the point cloud data at the current moment and the dense point cloud data at the previous moment; update the spatial position information of the suspected obstacle in vehicle coordinates according to the rotation matrix and translation vector; extract the obstacle dense point cloud data of the suspected obstacle from the near-field non-flat road surface dense point cloud based on the spatial position information; and determine whether the suspected obstacle is a target obstacle through target recognition on the obstacle dense point cloud data.
As shown in fig. 5, in the present embodiment, the closer the obstacle is to the vehicle, the higher the spatial resolution of the near-field dense point cloud data. To further reduce the subsequent calculation amount and improve the obstacle recognition accuracy, this embodiment extracts the near-field non-flat road surface dense point cloud from the acquired near-field dense point cloud. The method comprises the steps of performing rasterization processing on the second target sub-area corresponding to the near-field dense point cloud data according to a third preset grid size, dividing the second target sub-area into a plurality of second road surface areas according to the clustering result of the average normal vectors of the grids of the second target sub-area, correcting the corresponding point cloud data according to the inclination angle of each second road surface area, removing the corrected road-surface flat-area point cloud data in each second road surface area according to the lidar mounting height, and splicing the remaining point cloud data of each second road surface area, from which the flat-area point cloud data has been removed, according to the corresponding spatial position relations to obtain the near-field non-flat road surface dense point cloud. The third preset grid size can be flexibly selected according to practical requirements, for example 0.2 m × 0.2 m. For the near-field dense point cloud A, a spatial size of 0.2 m × 0.2 m in the horizontal plane is taken as a statistical area, the near-field area 20 m in front of the vehicle, namely the second target sub-area, is rasterized, and the average normal vector of the fitted plane of the point cloud data in each grid is calculated.
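The rasterization and per-grid normal vector step above can be sketched as follows, assuming a least-squares plane fit z = ax + by + c inside each 0.2 m cell (the pure-Python fit and the function names are illustrative; a production system would use an optimized point-cloud library):

```python
import math
from collections import defaultdict

GRID = 0.2  # third preset grid size in metres, per the embodiment

def fit_plane_normal(points):
    """Least-squares plane fit z = a*x + b*y + c over one grid cell;
    returns the unit normal (-a, -b, 1)/|.|, or None for degenerate cells."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    sxx = syy = sxy = sxz = syz = 0.0
    for x, y, z in points:
        dx, dy, dz = x - cx, y - cy, z - cz
        sxx += dx * dx; syy += dy * dy; sxy += dx * dy
        sxz += dx * dz; syz += dy * dz
    det = sxx * syy - sxy * sxy
    if abs(det) < 1e-12:
        return None  # collinear points: no unique fitted plane
    a = (sxz * syy - syz * sxy) / det
    b = (syz * sxx - sxz * sxy) / det
    norm = math.sqrt(a * a + b * b + 1.0)
    return (-a / norm, -b / norm, 1.0 / norm)

def grid_normals(points, grid=GRID):
    """Rasterize the horizontal plane into grid cells and fit one
    average normal per cell (cells with fewer than 3 points are skipped)."""
    cells = defaultdict(list)
    for p in points:
        cells[(math.floor(p[0] / grid), math.floor(p[1] / grid))].append(p)
    return {k: n for k, v in cells.items()
            if len(v) >= 3 and (n := fit_plane_normal(v)) is not None}
```

A flat cell yields a normal close to (0, 0, 1); a tilted or disturbed cell deviates from vertical, which is what the later clustering and voting exploit.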
Similarly, the local average normal vectors of the grids can be clustered, and the second target sub-area is further segmented according to the clustering result to obtain a plurality of road surface areas, defined for distinction as second road surface areas P_i (i = 1, 2, …, N, where N is the total number of road surface areas); the inclination angle θ_i of each second road surface area is obtained at the same time. Each second road surface area P_i is corrected based on point cloud coordinate transformation, i.e. three-dimensional coordinate transformation, according to its inclination angle θ_i: the three-dimensional coordinate transformation matrix is multiplied by the point cloud coordinates so that the inclination angle of the road surface flat area becomes 0, and the corrected dense point cloud may be defined as C_i. Each point in the corrected point cloud data has xyz coordinates, where z is the vertical direction. For a vehicle with an automatic driving function, the lidar mounting height is a basic parameter, and the height H from the lidar center to the ground can be obtained from the structural design parameters of the vehicle. With this height H as a threshold, a band of, for example, ±5 cm is selected, and the point cloud data is filtered so that only the points whose vertical distance from the lidar falls outside [H − 5 cm, H + 5 cm] remain; these are the point cloud data of the uneven road surface area. Removing the flat-road points in this way yields the remaining dense point cloud in each area. The remaining point clouds of the areas are then spliced according to their corresponding positions in the near-field region to obtain the near-field non-flat road surface dense point cloud D.
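The tilt correction and the height-based removal of flat-road points might look like the following sketch. The rotation axis, the mounting height H = 1.8 m, and the ±5 cm band are assumptions chosen for illustration only:

```python
import math

H = 1.8       # assumed lidar mounting height above ground (m); vehicle-specific
DELTA = 0.05  # +/- 5 cm band around the road level treated as flat, per the text

def correct_tilt(points, theta):
    """Rotate the patch about the x-axis by -theta so that a road area
    inclined by theta becomes level (the axis choice is an assumption;
    the embodiment multiplies a 3D transform matrix by the coordinates)."""
    c, s = math.cos(-theta), math.sin(-theta)
    return [(x, c * y - s * z, s * y + c * z) for x, y, z in points]

def remove_flat_road(points):
    """Drop points whose z lies in the flat-road band [-H-DELTA, -H+DELTA]
    (lidar at the origin, ground roughly at z = -H); what remains belongs
    to the uneven road surface."""
    return [p for p in points if not (-H - DELTA <= p[2] <= -H + DELTA)]
```

A point 20 cm above the ground level (a raised obstacle) or 20 cm below it (a pothole) survives the filter, while points within the 5 cm band around the ground level are discarded.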
As the vehicle continues to advance, part of the suspected low-raised obstacle/pothole areas identified in the far field gradually enters the near-field area. The granularity of the point cloud data there is smaller, which provides convenient conditions for accurately judging the attributes of the point cloud data. Once the near-field non-flat road surface dense point cloud has been obtained, verification is started for each suspected low-raised obstacle/pothole area B_j entering the near-field area: the point cloud at the current moment t is matched with the dense point cloud at moment t−1 to obtain a rotation matrix R and a translation vector T; a 4×4 homogeneous matrix can be formed from R and T, and matrix multiplication can then be used to update the position and occupied-space information of B_j in the vehicle coordinate system, i.e. to update the spatial position information. Based on the near-field non-flat road surface dense point cloud D and the spatial position information of B_j, the obstacle dense point cloud therein is obtained. For example, a point cloud recognition model may be trained in advance based on any point cloud recognition network model, such as PointNet or PointNet++ (classification network names); the obstacle dense point cloud data is input to the point cloud recognition model, which outputs whether the obstacle dense point cloud data is a pothole area or a raised obstacle, and if it is either, the suspected obstacle is determined to be a target obstacle.
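Forming the 4×4 homogeneous matrix from R and T and using it to update a suspected obstacle's position can be sketched as follows (the helper names are hypothetical):

```python
def make_transform(R, T):
    """Assemble the 4x4 homogeneous matrix from rotation R (3x3 row lists)
    and translation vector T (length 3), as described for R and T above."""
    return [list(R[0]) + [T[0]],
            list(R[1]) + [T[1]],
            list(R[2]) + [T[2]],
            [0.0, 0.0, 0.0, 1.0]]

def update_position(M, p):
    """Re-express a suspected obstacle's position p (previous vehicle frame)
    in the current vehicle frame via one matrix multiplication."""
    ph = (p[0], p[1], p[2], 1.0)
    return tuple(sum(M[i][j] * ph[j] for j in range(4)) for i in range(3))
```

In practice R and T come from a registration algorithm such as ICP; the homogeneous form lets position and orientation be updated in a single multiplication.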
If the suspected obstacle is verified as a low-raised obstacle/pothole area, three-dimensional morphology fitting, i.e. three-dimensional surface (curved surface) fitting, can be performed based on the dense point cloud within the low-raised obstacle/pothole area to obtain its three-dimensional surface information, from which key information such as orientation, length, width and depth can be derived.
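As an illustrative stand-in for full surface fitting, the key length/width/depth figures can be read off a simple bounding-box summary of the obstacle's dense point cloud. The road-plane height and the sign convention (positive deviation means raised, negative means pothole) are assumptions of this sketch:

```python
def morphology_summary(points, road_z=0.0):
    """Summarize an obstacle cloud: horizontal extents plus the largest
    vertical deviation from the local road plane at height road_z."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    deviations = [p[2] - road_z for p in points]
    extreme = max(deviations, key=abs)  # signed: + raised, - pothole
    return {
        "length": max(xs) - min(xs),
        "width": max(ys) - min(ys),
        "depth": abs(extreme),
        "type": "raised obstacle" if extreme > 0 else "pothole",
    }
```

A real implementation would fit an oriented surface to also recover the obstacle's orientation, which an axis-aligned box cannot provide.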
Further, for a scene with multiple target obstacles, missed judgments may occur because some obstacles are still far from the vehicle. After one suspected obstacle is verified as a target obstacle, the dense point cloud data it contains can be removed from the near-field non-flat road surface dense point cloud D. After the identified obstacle point cloud is removed, the remaining point cloud is further spliced with subsequent point cloud data frames, which improves consistency and can improve obstacle recognition accuracy.
As can be seen from the above, in this embodiment the near-field region is divided into multiple segments, and the point cloud data is corrected based on the regional gradient after segmentation, which is beneficial to further improving the classification and identification accuracy of low-raised obstacle/pothole areas.
The above embodiment does not limit how the three-dimensional morphology information of an obstacle is refined step by step as the vehicle continuously approaches it. Based on the above embodiment, the present invention further provides an implementation process for determining the three-dimensional morphology information of the target obstacle according to the point cloud data at adjacent moments and their matching result, which may include the following contents:
The method comprises the steps of performing three-dimensional morphology fitting processing on the obstacle dense point cloud data of the target obstacle to obtain initial three-dimensional morphology information of the target obstacle, removing the obstacle dense point cloud data of the target obstacle from the near-field non-flat road surface dense point cloud to serve as the dense point cloud data of the previous moment, obtaining the point cloud data of the current moment as the vehicle continues to run, splicing the point cloud data of the current moment with the dense point cloud data of the previous moment to obtain the dense point cloud data of the current moment, determining new obstacle dense point cloud data of the target obstacle from the dense point cloud data of the current moment based on the new spatial position information of the target obstacle in the vehicle coordinate system at the current moment, and performing three-dimensional morphology fitting on the new obstacle dense point cloud data to obtain the three-dimensional morphology information of the target obstacle at the current moment so as to update the initial three-dimensional morphology information.
The near-field non-flat road surface dense point cloud is obtained by removing the point cloud data of the road surface flat area from the near-field dense point cloud data. Because the spatial position of an obstacle does not move, the density of its point cloud gradually increases as the vehicle advances. Based on this, as shown in fig. 5, for each identified obstacle, its position in the current dense point cloud is obtained simply by matching the current frame point cloud with the previous frame point cloud, and the obstacle point cloud is updated to a denser (or finer) point cloud, further improving the data precision of the point cloud morphology. Further, to improve the accuracy and efficiency of determining the new spatial position information, point cloud matching may be performed between the point cloud data at the current moment and the dense point cloud data at the previous moment; a new rotation matrix and a new translation vector are then determined according to the matching result, and the new spatial position information of the target obstacle in the vehicle coordinate system at the current moment is updated accordingly.
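The frame-to-frame densification of an obstacle's point cloud can be sketched as a simple accumulation step, assuming the obstacle center has already been updated by point cloud matching. The radius gate used here is an illustrative stand-in for the occupied-space information:

```python
def refine_obstacle_cloud(accumulated, new_frame, center, radius=1.0):
    """Splice the new frame's points lying within `radius` of the matched
    obstacle center into the accumulated obstacle cloud; as the vehicle
    approaches, each new frame contributes denser points, so the cloud
    (and any morphology fitted from it) is progressively refined."""
    r2 = radius * radius
    near = [p for p in new_frame
            if (p[0] - center[0]) ** 2 + (p[1] - center[1]) ** 2 <= r2]
    return accumulated + near
```

After each accumulation step the morphology fitting is rerun on the denser cloud, which is what gradually tightens the length/width/depth estimates.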
The step-by-step refinement of the three-dimensional morphology data of an identified low-raised obstacle/pothole area in the near-field region can comprise: obtaining a rotation matrix R and a translation vector T by matching the point cloud at the current moment t with the dense point cloud obtained at moment t−1; updating the spatial position information of the target obstacle accordingly; obtaining its dense point cloud from the new near-field non-flat road surface dense point cloud based on the updated position; performing three-dimensional morphology fitting on that internal dense point cloud to acquire key information such as orientation, length, width and depth; and removing the dense point cloud data contained in the target from the new near-field non-flat road surface dense point cloud.
To further improve the recognition accuracy of target obstacles, target recognition is performed on the dense point cloud data remaining after the obstacle dense point cloud data has been removed at each moment, to determine whether a new target obstacle exists; if one exists, three-dimensional morphology fitting is performed on the corresponding dense point cloud data to obtain the three-dimensional morphology information of the new target obstacle. In other words, after the point cloud data of the identified target obstacles is removed from the latest near-field non-flat road surface dense point cloud at the current moment according to the above embodiment, the remaining dense point cloud can also be segmented into candidate areas and identified one by one: for each candidate area, the dense point cloud therein is obtained and the point cloud recognition model is used to identify it; if the internal dense point cloud is a low-raised obstacle/pothole area, three-dimensional morphology fitting is performed based on the internal dense point cloud to acquire key information such as orientation, length, width and depth.
From the above, this embodiment exploits the distribution characteristics of point cloud data, namely that the closer to the lidar, the denser the point cloud distribution and the more accurate the spatial positioning. As the vehicle continuously moves forward, a confirmed low-raised obstacle/pothole area continuously approaches the vehicle and the granularity of the point cloud data within it becomes smaller, so the precision of its three-dimensional morphology data can be improved step by step, realizing high-precision calculation of the three-dimensional morphology information of the target obstacle.
It should be noted that the steps in the present invention need not be executed in a strict sequence; as long as they conform to the logical order, the steps may be executed simultaneously or in a certain preset order. Fig. 1 and fig. 5 are only schematic and do not represent the only possible execution order.
The invention also provides a corresponding device for the road environment state sensing method, making the method more practical. The device may be described separately from the perspectives of functional modules and hardware. The road environment state sensing device described below is used to implement the road environment state sensing method provided by the present invention; in this embodiment, the device may include or be divided into one or more program modules, which are stored in a storage medium and executed by one or more processors to implement the road environment state sensing method disclosed in the first embodiment. A program module here refers to a series of computer program instruction segments capable of performing specific functions, and is more suitable than the program itself for describing the execution of the road environment state sensing device in a storage medium. The following description details the functions of each program module of the present embodiment; the road environment state sensing device described below and the road environment state sensing method described above may be cross-referenced correspondingly.
From the perspective of functional modules, referring to fig. 6, fig. 6 is a block diagram of the road environment state sensing device provided in this embodiment under a specific implementation, where the device may include:
The point cloud data acquisition module 601 is configured to acquire road surface dense point cloud data in a target area in front of the running vehicle, and divide the road surface dense point cloud data into near-field dense point cloud data and far-field dense point cloud data according to the distance between the point cloud data and the vehicle.
The far-field identification module 602 is configured to determine a suspected obstacle in the target area by analyzing the road surface flatness of the far-field dense point cloud data.
The near field verification module 603 is configured to verify whether the suspected obstacle is a target obstacle according to the near field dense point cloud data, and determine three-dimensional morphology information of the target obstacle according to the point cloud data at adjacent time and a matching result thereof when verifying that the suspected obstacle is the target obstacle.
The environment sensing module 604 is configured to generate a sensing result of the road environment state of the vehicle according to the three-dimensional morphology information of each target obstacle obtained during the driving process, where the road surface dense point cloud data is obtained by removing point cloud data not belonging to the road surface of the target area from the splicing result of the point cloud data at the current moment and the spliced point cloud data of the previous moment.
Illustratively, in some implementations of the present embodiment, the far field identification module 602 described above may be further configured to:
The method comprises the steps of carrying out rasterization processing on a first target subarea corresponding to far-field dense point cloud data according to a first preset grid size, dividing the first target subarea according to a clustering result of average normal vectors in grids of the first target subarea to obtain multiple types of first road surface areas, carrying out rasterization processing on a current road surface area according to a second preset grid size, determining a road surface leveling area meeting preset flatness according to the direction of the average normal vectors of each grid of the current first road surface area, merging the rest areas of the current first road surface area excluding the road surface leveling area based on a space distance relation to serve as space positions of suspected obstacles, wherein the target area comprises the first target subarea and the second target subarea, the maximum distance between the second target subarea and a vehicle along the running direction of the vehicle is smaller than the minimum distance between the first target subarea and the vehicle, the point cloud data corresponding to the first target subarea is far-field dense point cloud data, the point cloud data corresponding to the second target subarea is near-field dense point cloud data, and the second preset grid size is smaller than the first preset grid size.
As an exemplary implementation of the foregoing embodiment, the far-field identification module 602 may be further configured to fit a three-dimensional space plane equation for each grid of the current first road surface area according to the three-dimensional position information of the point cloud data inside the grid, calculate the average normal vector in each grid based on its plane equation, vote, at intervals of a preset angle threshold over the normal vector directions, on whether each average normal vector is the road surface normal vector, take the target average normal vector with the largest number of votes as the road surface normal vector, and take the grids corresponding to the target average normal vector as the road surface flat area.
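The normal-vector voting described above might be sketched as a histogram over each normal's angle from vertical, with the most-voted bin taken as the road surface direction. The 5° bin width is an assumption for illustration:

```python
import math

def road_normal_by_voting(normals, bin_deg=5.0):
    """Bin each grid's average normal by its angle from the vertical (+z);
    the most-voted bin is taken as the road-surface normal direction, and
    its members correspond to the road surface flat area's grids."""
    votes = {}
    for n in normals:
        # clamp guards acos against tiny floating-point overshoot
        angle = math.degrees(math.acos(max(-1.0, min(1.0, n[2]))))
        votes.setdefault(int(angle // bin_deg), []).append(n)
    best = max(votes, key=lambda b: len(votes[b]))
    return best * bin_deg, votes[best]
```

Grids whose normals fall outside the winning bin are the candidates merged by spatial distance into suspected obstacle positions.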
The point cloud data acquisition module 601 may be further configured to acquire vehicle motion data and vehicle motion attitude data at the current moment as pose data, determine initial pose change information from the pose data and the point cloud data at the previous moment, perform point cloud matching calculation between the point cloud data at the current moment and the point cloud data at the previous moment using the initial pose change information as the initial value of the iterative calculation so as to determine a rotation change matrix and a translation vector, transform the point cloud data at the previous moment according to the rotation change matrix and the translation vector, remove from the transformed point cloud data the point cloud data that falls outside the target area at the vehicle's current position to obtain the dense point cloud data of the previous moment, splice the dense point cloud data of the previous moment with the point cloud data of the current moment to obtain the dense point cloud data of the current moment, perform semantic segmentation processing on the acquired driving scene image data to obtain drivable-area image information, and, based on the drivable-area image information and the correspondence between image pixels and three-dimensional points, remove from the dense point cloud data at the current moment the point cloud data that does not correspond to the drivable road surface.
As an exemplary implementation of the foregoing embodiment, the point cloud data acquisition module 601 may be further configured to acquire timestamped laser point cloud data, vehicle motion data and vehicle motion attitude data at the current moment, interpolate the corresponding vehicle motion data and attitude data according to the timestamp of each three-dimensional point of the laser point cloud data, and compensate the point cloud data according to the interpolated motion and attitude information, so as to obtain point cloud distortion calibration data as the point cloud data at the current moment.
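The per-point motion compensation can be illustrated under a constant-velocity, translation-only assumption (a real system interpolates the full pose, including attitude, per timestamp):

```python
def deskew(points, t_end, v_x):
    """De-skew a lidar sweep: each point (x, y, z, tp) captured at time tp
    is re-expressed in the vehicle frame at the sweep-end time t_end by
    subtracting the forward motion accumulated between tp and t_end
    (constant forward velocity v_x along x is assumed here)."""
    return [(x - v_x * (t_end - tp), y, z) for x, y, z, tp in points]
```

For example, a point measured 50 ms before the end of a sweep while driving at 10 m/s is shifted 0.5 m backwards in the end-of-sweep frame, removing the apparent stretching of the scene.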
The near field verification module 603 may be further configured to remove point cloud data of a road surface flat area in near field dense point cloud data to obtain near field non-flat road surface dense point cloud, obtain a rotation matrix and a translation vector according to a point cloud matching result of the point cloud data at a current time and dense point cloud data at a previous time, update spatial position information of a suspected obstacle in vehicle coordinates according to the rotation matrix and the translation vector, obtain obstacle dense point cloud data of the suspected obstacle from the near field non-flat road surface dense point cloud based on the spatial position information, and determine whether the suspected obstacle is a target obstacle by performing target recognition on the obstacle dense point cloud data.
As an exemplary implementation manner of the foregoing embodiment, the near-field verification module 603 may be further configured to perform rasterization processing on a second target sub-area corresponding to the near-field dense point cloud data according to a third preset grid size, divide the second target sub-area into a plurality of second road surface areas according to a clustering result of an average normal vector of each grid of the second target sub-area, correct the corresponding point cloud data according to an inclination angle of each second road surface area, reject the corrected road surface flat area point cloud data in each second road surface area according to a laser radar mounting height, and splice the remaining point cloud data of each second road surface area from which the road surface flat area point cloud data is rejected according to a corresponding spatial position relationship, so as to obtain the near-field non-flat road surface dense point cloud.
As another exemplary implementation manner of the foregoing embodiment, the near field verification module 603 may be further configured to pre-train a point cloud identification model, input obstacle dense point cloud data to the point cloud identification model, output an identification result that the obstacle dense point cloud data is a pothole area or a convex obstacle by the point cloud identification model, and determine that the suspected obstacle is a target obstacle if the obstacle dense point cloud data is a pothole area or a convex obstacle.
The near-field verification module 603 may be further configured to perform three-dimensional morphology fitting processing on the obstacle dense point cloud data of the target obstacle to obtain initial three-dimensional morphology information of the target obstacle, remove the obstacle dense point cloud data from the near-field non-flat road surface dense point cloud to serve as the dense point cloud data of the previous moment, obtain the point cloud data of the current moment as the vehicle continues to drive, splice the point cloud data of the current moment with the dense point cloud data of the previous moment to obtain the dense point cloud data of the current moment, determine new obstacle dense point cloud data of the target obstacle from the dense point cloud data of the current moment based on the new spatial position information of the target obstacle in the vehicle coordinate system at the current moment, and perform three-dimensional morphology fitting on the new obstacle dense point cloud data to obtain the three-dimensional morphology information of the target obstacle at the current moment so as to update the initial three-dimensional morphology information, where the near-field non-flat road surface dense point cloud is obtained by removing the point cloud data of the road surface flat area from the near-field dense point cloud data.
As an exemplary implementation manner of the foregoing embodiment, the foregoing near field verification module 603 may be further configured to perform point cloud matching on the point cloud data at the current time and the dense point cloud data at the previous time, determine a new rotation matrix and a new translation vector according to the point cloud matching result, and update new spatial position information of the target obstacle in the vehicle coordinate system at the current time according to the new rotation matrix and the new translation vector.
As an exemplary implementation manner of the foregoing embodiment, the near-field verification module 603 may be further configured to perform target recognition on dense point cloud data remaining after removing dense point cloud data of an obstacle at each moment to determine whether a new target obstacle exists, and if the new target obstacle exists, perform three-dimensional morphology fitting on corresponding dense point cloud data to obtain three-dimensional morphology information of the new target obstacle.
The road environment state sensing device is described above from the perspective of functional modules; further, the invention also provides an electronic device, described from the perspective of hardware. Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 7, the electronic device comprises a memory 70 for storing a computer program, and a processor 71 for implementing the steps of the road environment state sensing method mentioned in any of the above embodiments when executing the computer program.
Processor 71 may include one or more processing cores, such as a 4-core or 8-core processor; processor 71 may also be a controller, microcontroller, microprocessor, or other data processing chip. The processor 71 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). The processor 71 may also include a main processor and a coprocessor: the main processor, also called a CPU (Central Processing Unit), processes data in the wake-up state, while the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 71 may be integrated with a GPU (Graphics Processing Unit) for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 71 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 70 may include one or more computer-readable storage media, which may be non-transitory. Memory 70 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, memory 70 may be an internal storage unit of the electronic device, such as the hard disk of a server. In other embodiments, memory 70 may also be an external storage device of the electronic device, such as a plug-in hard disk provided on a server, a smart media card (SMC), a Secure Digital (SD) card, or a flash card. Further, the memory 70 may also include both internal storage units and external storage devices of the electronic device. The memory 70 may be used not only to store the application software installed in the electronic device and various types of data, such as the code of the program executing the road environment state sensing method, but also to temporarily store data that has been output or is to be output. In this embodiment, the memory 70 at least stores a computer program 701 which, when loaded and executed by the processor 71, implements the relevant steps of the road environment state sensing method disclosed in any of the foregoing embodiments. In addition, the resources stored in the memory 70 may further include an operating system 702 and data 703, stored transiently or permanently. The operating system 702 may include Windows, Unix, Linux, etc. The data 703 may include, but is not limited to, data corresponding to road environment state sensing results, and the like.
In some embodiments, the electronic device may further include a display screen 72, an input/output interface 73, a communication interface 74 (alternatively referred to as a network interface), a power supply 75, and a communication bus 76. The display screen 72 and an input/output interface 73 such as a keyboard belong to the user interface, which may optionally also include standard wired and wireless interfaces. Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like. The display, which may also be referred to as a display screen or display unit, is used to display information processed in the electronic device and to present a visual user interface. The communication interface 74 may include a wired interface and/or a wireless interface, such as a WI-FI interface or a Bluetooth interface, and is typically used to establish a communication connection between the electronic device and another electronic device. The communication bus 76 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be classified as an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in fig. 7, but this does not mean there is only one bus or one type of bus.
Those skilled in the art will appreciate that the configuration shown in fig. 7 is not limiting of the electronic device and may include more or fewer components than shown, for example, may also include sensors 77 to perform various functions.
It will be appreciated that if the road environment state sensing method in the above embodiment is implemented in the form of a software functional unit and sold or used as a separate product, it may be stored in a non-volatile storage medium. Based on such understanding, the technical solution of the present invention, essentially or in the part contributing to the related art, may be embodied in the form of a software product stored in a storage medium, which performs all or part of the steps of the methods of the various embodiments of the present invention. The storage medium includes, but is not limited to, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), an electrically erasable programmable ROM, registers, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a magnetic memory, a removable disk, a CD-ROM, a magnetic disk, an optical disk, or any other medium that can store program code. Based on this, the present invention also provides a non-volatile storage medium on which a computer program is stored which, when executed by a processor, implements the steps of the road environment state sensing method according to any one of the above embodiments.
It will be appreciated that if the road environment state sensing method in the above embodiments is implemented in the form of a software functional unit and sold or used as a separate product, the computer software product need not be stored in the storage medium of a single entity and may be transmitted directly over a wired or wireless network to a computer or other device with information processing capability, which then performs all or part of the steps of the methods of the embodiments of the present invention. Based on this understanding, the technical solution of the present invention, essentially or in the part contributing to the related art, may be embodied in the form of a software product; accordingly, the present invention also provides a computer program product storing a computer program which, when executed by a processor, performs the steps of the road environment state sensing method described in any of the embodiments above.
Finally, the invention also provides an unmanned vehicle that fully reuses its own sensors, such as the camera and the lidar, requires no additional sensors, and embeds in its existing autonomous driving perception system a computer program implementing the road environment state sensing method of any of the above embodiments. Because the spatial precision of the point cloud increases as a target gets closer to the lidar, verifying far-field identifications at near field effectively reduces computational cost while progressively correcting the position and shape information of each low obstacle or pothole area. In an autonomous driving scenario, the unmanned vehicle can thus accurately identify low obstacles, such as speed bumps, cracks, potholes, pits, or puddles, as early as possible, as shown in Fig. 8, and plan accordingly, effectively improving the safety and comfort of the autonomous driving system.
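The far-to-near verification idea above can be sketched in a few lines. This is an illustrative sketch only, not the patent's actual implementation: the `Candidate` structure, the distance-based weighting, and all thresholds are assumptions introduced for the example. The core point it demonstrates is that a far-field hypothesis is kept cheap and uncertain, and each closer, higher-precision lidar measurement pulls the estimate toward the measured value with a weight that grows as distance shrinks.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A low obstacle / pothole hypothesized from a far-field scan (illustrative)."""
    x: float           # estimated position along the road, metres
    height: float      # estimated height (negative for a pothole), metres
    confidence: float  # 0..1

def refine(candidate: Candidate, distance: float,
           measured_x: float, measured_height: float) -> Candidate:
    """Blend a fresh near-field measurement into the candidate.

    Point-cloud precision improves as the target nears the lidar, so the
    measurement weight grows as distance shrinks (the weighting scheme
    here is an assumption chosen for illustration).
    """
    w = 1.0 / (1.0 + distance / 10.0)   # closer target -> heavier weight
    return Candidate(
        x=(1 - w) * candidate.x + w * measured_x,
        height=(1 - w) * candidate.height + w * measured_height,
        confidence=min(1.0, candidate.confidence + w * 0.5),
    )

# A speed bump first hypothesized 40 m ahead, then re-measured while approaching.
c = Candidate(x=40.0, height=0.08, confidence=0.3)
for dist, mx, mh in [(25.0, 40.3, 0.06), (10.0, 40.5, 0.05), (4.0, 40.5, 0.05)]:
    c = refine(c, dist, mx, mh)
print(round(c.x, 2), round(c.height, 3), round(c.confidence, 2))
```

Because each refinement is a cheap weighted blend rather than a full re-detection, the far-field pass can remain coarse and inexpensive, with precision recovered only as the target enters the high-accuracy near field.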
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the others, and for the same or similar parts the embodiments may be referred to one another. Since the hardware disclosed in the embodiments, including the electronic device, the non-volatile storage medium, the computer program product, and the unmanned vehicle, corresponds to the methods disclosed in the embodiments, its description is relatively brief; for relevant details, refer to the description of the method section.
Those skilled in the art will further appreciate that the illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the units and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as departing from the scope of the present invention.
The road environment state sensing method, electronic device, non-volatile storage medium, computer program product, and unmanned vehicle provided by the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention; the description of the embodiments is intended only to help in understanding the method of the present invention and its core ideas. It should be noted that all other embodiments obtained by a person skilled in the art, based on the embodiments of the present invention and without inventive effort, fall within the scope of protection of the present invention. The present invention is capable of numerous modifications and adaptations without departing from its principles, and such modifications and adaptations are also intended to fall within that scope.
Claims (14)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411379932.1A CN118898825B (en) | 2024-09-30 | 2024-09-30 | Road environment state perception method, equipment, medium, program product and vehicle |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411379932.1A CN118898825B (en) | 2024-09-30 | 2024-09-30 | Road environment state perception method, equipment, medium, program product and vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118898825A CN118898825A (en) | 2024-11-05 |
CN118898825B true CN118898825B (en) | 2025-01-24 |
Family
ID=93267966
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202411379932.1A Active CN118898825B (en) | 2024-09-30 | 2024-09-30 | Road environment state perception method, equipment, medium, program product and vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118898825B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN119148163B (en) * | 2024-11-15 | 2025-03-11 | 北京理工大学 | Autonomous navigation method, device and medium of unmanned vehicle in unknown environment |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112347999A (en) * | 2021-01-07 | 2021-02-09 | 深圳市速腾聚创科技有限公司 | Obstacle recognition model training method, obstacle recognition method, device and system |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112329754B (en) * | 2021-01-07 | 2021-05-14 | 深圳市速腾聚创科技有限公司 | Obstacle recognition model training method, obstacle recognition method, device and system |
CN113569915B (en) * | 2021-06-30 | 2024-04-02 | 广西大学 | Multi-strategy rail transit obstacle recognition method based on laser radar |
WO2024044227A2 (en) * | 2022-08-24 | 2024-02-29 | Worcester Polytechnic Institute | Visually coherent lighting for mobile augmented reality |
- 2024-09-30: CN application CN202411379932.1A granted as patent CN118898825B (status: Active)
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112347999A (en) * | 2021-01-07 | 2021-02-09 | 深圳市速腾聚创科技有限公司 | Obstacle recognition model training method, obstacle recognition method, device and system |
Also Published As
Publication number | Publication date |
---|---|
CN118898825A (en) | 2024-11-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10949684B2 (en) | Vehicle image verification | |
CN109766878B (en) | A kind of method and apparatus of lane detection | |
WO2018068653A1 (en) | Point cloud data processing method and apparatus, and storage medium | |
JP6442834B2 (en) | Road surface height shape estimation method and system | |
CN103770704A (en) | System and method for recognizing parking space line markings for vehicle | |
CN110674705A (en) | Small-sized obstacle detection method and device based on multi-line laser radar | |
CN117576652B (en) | Road object identification method and device, storage medium and electronic equipment | |
CN118898825B (en) | Road environment state perception method, equipment, medium, program product and vehicle | |
CN114913399B (en) | Vehicle trajectory optimization method and intelligent transportation system | |
Petrovai et al. | A stereovision based approach for detecting and tracking lane and forward obstacles on mobile devices | |
JP4296287B2 (en) | Vehicle recognition device | |
CN109791607A (en) | It is detected from a series of images of video camera by homography matrix and identifying object | |
CN118898838B (en) | Method, device, medium and vehicle for determining three-dimensional shape information of road obstacles | |
CN117392423A (en) | Lidar-based target true value data prediction method, device and equipment | |
CN118898689B (en) | Cost map generation method, equipment, medium, product and unmanned vehicle | |
CN118941626A (en) | Roadside visual positioning method, device, equipment and medium for traffic participants | |
CN117671644A (en) | Signboard detection method and device and vehicle | |
CN117671616A (en) | A target recognition method, device, equipment and storage medium | |
KR102368262B1 (en) | Method for estimating traffic light arrangement information using multiple observation information | |
CN116935344A (en) | Road boundary polygonal contour construction method and system | |
CN112020722A (en) | Road shoulder identification based on three-dimensional sensor data | |
WO2022271750A1 (en) | Three-dimensional object detection with ground removal intelligence | |
CN118898826B (en) | Road target detection method, equipment, medium, product and unmanned vehicle | |
CN111414848B (en) | Full-class 3D obstacle detection method, system and medium | |
CN118429563B (en) | Elevation matching method and system for vehicle three-dimensional map road |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||