
CN115578703A - Laser perception fusion optimization method, device and equipment and readable storage medium - Google Patents

Laser perception fusion optimization method, device and equipment and readable storage medium

Info

Publication number
CN115578703A
Authority
CN
China
Prior art keywords
target
unclassified
fusion
classified
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211216856.3A
Other languages
Chinese (zh)
Inventor
吴鹏
张鹏
刘杏
李兆干
许鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongfeng Trucks Co ltd
Original Assignee
Dongfeng Trucks Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongfeng Trucks Co ltd filed Critical Dongfeng Trucks Co ltd
Priority to CN202211216856.3A priority Critical patent/CN115578703A/en
Publication of CN115578703A publication Critical patent/CN115578703A/en
Pending legal-status Critical Current

Classifications

    • G06V 20/588: Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/74: Image or video pattern matching; proximity measures in feature spaces
    • G06V 10/762: Pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V 10/764: Pattern recognition or machine learning using classification, e.g. of video objects
    • G06V 10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 2201/07: Target detection

    (All classifications fall under G Physics; G06 Computing; G06V Image or video recognition or understanding.)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to a laser perception fusion optimization method, device and equipment and a readable storage medium in the technical field of intelligent driving. The method comprises the steps of: determining a travelable road edge point set for the position of the vehicle based on a high-precision map, and generating a region of interest from the travelable road edge point set; filtering point cloud information detected by the laser radar based on the region of interest to obtain a target obstacle point cloud; performing target recognition on the target obstacle point cloud through a spatial clustering algorithm and a deep learning algorithm respectively to obtain an unclassified target result set and a classified target result set; and performing target fusion on the unclassified target result set and the classified target result set to obtain a fused target set. By combining spatial clustering with the high-precision map, the application avoids the failure to recognize long-tail targets, reduces the possibility of missed detection, and improves the accuracy of perception-fusion target detection, so that deep learning only needs to focus on detecting and recognizing common targets, which in turn reduces the cost of labeling and collecting long-tail data.

Description

Laser perception fusion optimization method, device and equipment and readable storage medium
Technical Field
The application relates to the technical field of intelligent driving, in particular to a laser perception fusion optimization method, device and equipment and a readable storage medium.
Background
Accurate perception is one of the key technologies for developing intelligent autonomous driving. In the field of intelligent driving environment perception, detecting the surrounding environment in real time to ensure the safe driving of the intelligent vehicle is the most important task, and which sensor configuration realizes the automatic driving function of a given grade varies with the grade of automation. However, for automatic driving functions of grade L3 and above, the laser radar (lidar) is an indispensable sensor.
The existing sensor fusion schemes that take vision or the laser radar as the main route require at least two sensors for environment perception fusion, and are therefore not suitable for perception fusion with a single laser radar sensor.
At present, perception fusion with a single laser radar sensor mainly performs target detection through deep learning. However, the data-driven deep learning route still suffers from a large number of long-tail problems, so missed detections in environment perception occur easily. The current mainstream solution to the long-tail problem is to iterate the model by collecting long-tail data, but this approach makes laser radar data costly.
Disclosure of Invention
The application provides a laser perception fusion optimization method, device and equipment and a readable storage medium, so as to avoid the failure to recognize long-tail targets, reduce both the possibility of missed detection and the cost, and improve the accuracy of perception-fusion target detection.
In a first aspect, a laser perception fusion optimization method is provided, which includes the following steps:
determining a travelable road edge point set for the position of the vehicle based on a high-precision map, and generating a region of interest according to the travelable road edge point set;
filtering point cloud information detected by the laser radar based on the region of interest to obtain a target obstacle point cloud;
respectively carrying out target identification on the target obstacle point cloud through a spatial clustering algorithm and a deep learning algorithm to obtain an unclassified target result set and a classified target result set;
and performing target fusion on the unclassified target result set and the classified target result set to obtain a fusion target set comprising at least one fusion target.
In some embodiments, the performing target fusion on the unclassified target result set and the classified target result set to obtain a fused target set includes:
matching each unclassified target in the unclassified target result set with each classified target in the classified target result set to obtain a matching result;
and determining at least one fusion target according to the matching result to form a fusion target set.
In some embodiments, the determining at least one fusion target according to the matching result includes:
if an unclassified target does not successfully match any classified target in the classified target result set, taking the unclassified target as a fusion target;
if an unclassified target successfully matches one classified target in the classified target result set, taking the classified target that successfully matches the unclassified target as a fusion target, and removing the unclassified target;
and if a classified target does not successfully match any unclassified target in the unclassified target result set, taking the classified target as a fusion target.
In some embodiments, if a classified target does not successfully match any unclassified target in the unclassified target result set, taking the classified target as a fusion target includes:
if a classified target does not successfully match any unclassified target in the unclassified target result set, judging whether the classified target has appeared continuously for a preset number of times;
if so, taking the classified target as a fusion target;
if not, removing the classified target.
In some embodiments, the matching each unclassified target in the unclassified target result set with each classified target in the classified target result set includes:
calculating the Euclidean distance between each unclassified target in the unclassified target result set and each classified target in the classified target result set;
generating a nearest neighbor distance cost matrix based on the Euclidean distance;
and determining whether the unclassified target in the unclassified target result set is successfully matched with the classified target in the classified target result set or not according to a minimum cost principle and the nearest neighbor distance cost matrix.
In some embodiments, the filtering the point cloud information detected by the lidar based on the region of interest to obtain a target obstacle point cloud includes:
preliminarily filtering point cloud information detected by the laser radar based on the region of interest to obtain first point cloud information corresponding to the region of interest;
and performing ground point cloud filtering on the first point cloud information to obtain a target obstacle point cloud.
In some embodiments, the determining a set of travelable road edge points of the vehicle location based on the high-precision map includes:
marking, based on a preset driving route of the vehicle, a driving road edge point set corresponding to the preset driving route on the high-precision map;
and when the position information of the vehicle is received, screening out the travelable road edge point set for the position of the vehicle from the driving road edge point set marked on the high-precision map according to the position information.
In a second aspect, a laser perception fusion optimization apparatus is provided, including:
the generating unit is used for determining a travelable road edge point set of the position of the vehicle based on the high-precision map and generating the region of interest according to the travelable road edge point set;
the filtering unit is used for filtering point cloud information detected by the laser radar based on the region of interest to obtain a target obstacle point cloud;
the identification unit is used for respectively carrying out target identification on the target obstacle point cloud through a spatial clustering algorithm and a deep learning algorithm to obtain an unclassified target result set and a classified target result set;
and the fusion unit is used for carrying out target fusion on the unclassified target result set and the classified target result set to obtain a fusion target set containing at least one fusion target.
In a third aspect, a laser perception fusion optimization device is provided, including: a memory and a processor, wherein the memory stores at least one instruction, and the instruction is loaded and executed by the processor to implement the aforementioned laser perception fusion optimization method.
In a fourth aspect, a computer-readable storage medium is provided, which stores a computer program that, when executed by a processor, implements the aforementioned laser perception fusion optimization method.
The application provides a laser perception fusion optimization method, device and equipment and a readable storage medium. The method comprises the steps of: determining a travelable road edge point set for the position of the vehicle based on a high-precision map, and generating a region of interest from the travelable road edge point set; filtering point cloud information detected by the laser radar based on the region of interest to obtain a target obstacle point cloud; performing target recognition on the target obstacle point cloud through a spatial clustering algorithm and a deep learning algorithm respectively to obtain an unclassified target result set and a classified target result set; and performing target fusion on the unclassified target result set and the classified target result set to obtain a fused target set comprising at least one fusion target. The fusion of deep learning and spatial clustering raises the confidence of common targets, and with accurate region-of-interest filtering based on the high-precision map, spatial clustering can effectively recognize unusual targets, so missed detections caused by the long-tail problem of deep learning are effectively avoided and the accuracy of perception-fusion target detection is improved; since the spatial clustering algorithm and the high-precision map avoid the failure to recognize long-tail targets and reduce the possibility of missed detection, deep learning only needs to focus on detecting and recognizing common targets, which reduces the cost of labeling and collecting long-tail data.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a laser sensing fusion optimization method provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a detailed laser sensing fusion optimization method provided in an embodiment of the present application;
fig. 3 is a schematic flowchart of a specific target fusion process provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a laser sensing fusion optimization device provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a laser-sensing fusion optimization device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making creative efforts shall fall within the protection scope of the present application.
Referring to fig. 1 to 3, an embodiment of the present application provides a laser perception fusion optimization method, including the following steps:
step S10: determining a travelable road edge point set of the position of the vehicle based on the high-precision map, and generating an interested area according to the travelable road edge point set;
Exemplarily, the point cloud can be effectively recognized through a spatial clustering algorithm, which does not suffer from the failure to recognize long-tail targets; however, the point cloud information detected by the laser radar sensor needs to be accurately filtered to a Region of Interest (ROI) in order to reduce the false detection rate of the spatial clustering method and avoid false detections of the autonomous vehicle caused by ground points, road edges and the like during driving. It can be understood that, in this embodiment, the point cloud within the precise ROI required by the spatial clustering algorithm is obtained based on the high-precision map: the high-precision map module dynamically sends a static travelable road edge point set, and the ROI is generated from this point set, i.e., the three-dimensional points sent by the high-precision map module are connected in sequence to form a closed region, called the ROI (the contour of the ROI is a top-view outline, i.e., the outline without elevation (z-axis) information under a bird's-eye view (BEV)), and the point cloud is accurately filtered through the ROI.
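As a non-limiting sketch of this filtering step, the following Python fragment connects the road edge points into a closed BEV polygon and keeps only the lidar points whose (x, y) projection falls inside it. The function names, and the assumption that the edge points arrive ordered and already in the vehicle body frame, are illustrative rather than part of the patented method:

```python
import numpy as np

def points_in_roi(points: np.ndarray, edge_points: np.ndarray) -> np.ndarray:
    """Keep only the lidar points whose (x, y) falls inside the closed ROI.

    points:      (N, 3) lidar point cloud in the vehicle body frame.
    edge_points: (M, 2) ordered travelable-road-edge points; connecting them
                 in sequence (closing back to the first) forms the BEV contour.
    Elevation (z) is ignored, matching the top-view ROI described above.
    """
    x, y = points[:, 0], points[:, 1]
    px, py = edge_points[:, 0], edge_points[:, 1]
    inside = np.zeros(len(points), dtype=bool)
    j = len(edge_points) - 1
    for i in range(len(edge_points)):
        # Even-odd (ray casting) rule, vectorized over all points per edge.
        # The epsilon only guards against divide warnings on horizontal edges,
        # which contribute nothing because `crosses` is False for them.
        crosses = (py[i] > y) != (py[j] > y)
        x_cross = (px[j] - px[i]) * (y - py[i]) / (py[j] - py[i] + 1e-12) + px[i]
        inside ^= crosses & (x < x_cross)
        j = i
    return points[inside]
```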
Further, the determining a set of travelable road edge points of the vehicle position based on the high-precision map includes:
marking a driving road edge point set corresponding to a preset driving route on a high-precision map based on the preset driving route of a vehicle;
and when the position information of the vehicle is received, screening out the travelable road edge point set for the position of the vehicle from the driving road edge point set marked on the high-precision map according to the position information.
Exemplarily, in this embodiment, the high-precision positioning module in the autonomous vehicle sends the position information of the vehicle in real time; after receiving the position information, the high-precision map module dynamically sends out the travelable road edge point set for the position of the vehicle. At this point, the point cloud detected by the laser radar sensor and the point set sent by the high-precision map must both be converted into the vehicle body coordinate system. Specifically, point cloud data is collected in advance by the laser radar sensor along the roads of the preset driving route of the autonomous vehicle to build a map, and the high-precision map module then marks the driving road edge point set corresponding to the preset driving route on the static map; after the high-precision map module receives the vehicle position information sent in real time by the high-precision positioning module, it sends in real time the travelable road edge point set information, marked on the static map, that corresponds to the position of the autonomous vehicle.
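A minimal sketch of this coordinate unification, assuming the pose reported by the high-precision positioning module has already been assembled into a 4x4 homogeneous map-to-body transform (the matrix name and array shapes are illustrative):

```python
import numpy as np

def map_points_to_body(points_map: np.ndarray, T_body_from_map: np.ndarray) -> np.ndarray:
    """Convert (N, 3) map-frame road edge points into the vehicle body frame.

    T_body_from_map is a 4x4 homogeneous transform assumed to be built from
    the real-time pose reported by the high-precision positioning module.
    """
    homogeneous = np.hstack([points_map, np.ones((len(points_map), 1))])  # (N, 4)
    return (T_body_from_map @ homogeneous.T).T[:, :3]
```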
Because the high-precision map module sends the point set of the area around the autonomous vehicle in real time given the vehicle's dynamic position, the ROI generated from this point set is more stable than an ROI generated by traditional algorithms such as visual perception or laser point cloud segmentation; meanwhile, filtering the point cloud with the point set information provided by the high-precision map is more efficient and more accurate, with a low computing power requirement.
Step S20: filtering point cloud information detected by the laser radar based on the region of interest to obtain target obstacle point cloud;
Exemplarily, in this embodiment, it should be understood that after the ROI generated from the travelable road edge point set determined by the high-precision map is acquired, the point cloud information detected by the laser radar sensor at a given moment is filtered based on the ROI, and the target obstacle point cloud can then be extracted.
Further, the filtering the point cloud information detected by the laser radar based on the region of interest to obtain a target obstacle point cloud includes:
preliminarily filtering point cloud information detected by the laser radar based on the region of interest to obtain first point cloud information corresponding to the region of interest;
and performing ground point cloud filtering on the first point cloud information to obtain a target obstacle point cloud.
Exemplarily, in this embodiment, the point cloud information detected by the laser radar in the vehicle body coordinate system is filtered based on the ROI, so that only the points located within the ROI are retained; a ground-removal algorithm is then applied to the point cloud within the ROI to filter out ground points, which effectively removes the noise points of undulating ground and leaves the non-ground points within the closed region, i.e., the target obstacle point cloud. The ground point cloud filtering algorithm (i.e., the ground point cloud extraction method) adopted in this embodiment mainly comprises the following steps: first, the points are sorted by elevation z, and a preliminary ground point cloud is taken from the interval between the lowest and the highest candidate ground points; then, a ground plane equation is solved for the preliminary ground point cloud through SVD (singular value decomposition); finally, the fit is iterated N times (the number of iterations N can be determined according to actual requirements), and the result with the minimum error is taken as the final ground point set.
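The sketch below illustrates the described ground extraction; the seed count, distance threshold and iteration count are placeholder parameters to be tuned to actual requirements:

```python
import numpy as np

def extract_ground_mask(points: np.ndarray, seed_count: int = 500,
                        n_iters: int = 3, dist_thresh: float = 0.2) -> np.ndarray:
    """Iterative SVD plane fit, following the steps described above.

    1) sort by elevation z and seed with the lowest points;
    2) fit a plane to the current ground set via SVD (the right singular
       vector with the smallest singular value is the plane normal);
    3) re-select points within dist_thresh of the plane; repeat n_iters times.
    Returns a boolean mask over `points` marking ground points.
    """
    order = np.argsort(points[:, 2])
    ground = np.zeros(len(points), dtype=bool)
    ground[order[:seed_count]] = True                    # preliminary ground points
    for _ in range(n_iters):
        seeds = points[ground]
        if len(seeds) < 3:                               # degenerate fit, stop early
            break
        centroid = seeds.mean(axis=0)
        _, _, vt = np.linalg.svd(seeds - centroid, full_matrices=False)
        normal = vt[-1]                                  # smallest-variance direction
        distance = np.abs((points - centroid) @ normal)  # point-to-plane distance
        ground = distance < dist_thresh
    return ground

# The target obstacle point cloud is then the non-ground part of the ROI cloud:
# obstacle_points = roi_points[~extract_ground_mask(roi_points)]
```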
Step S30: respectively carrying out target identification on the target obstacle point cloud through a spatial clustering algorithm and a deep learning algorithm to obtain an unclassified target result set and a classified target result set;
exemplarily, in the present embodiment, for a target obstacle point cloud, target recognition perception is performed through a spatial clustering algorithm and a deep learning algorithm, respectively. The method comprises the following steps of (1) identifying whether a target obtained by a traditional spatial clustering algorithm is classified, namely obtaining an identification result which is an unclassified target result set; the targets identified by the deep learning algorithm are classified, and common targets are easily and stably identified, namely, the identification result obtained by deep learning is a classification target result set. It can be understood that the objects in the unclassified object result set are all unclassified, and the objects in the classified object result set are classified, that is, the objects output by spatial clustering are not classified, and the deep learning output objects possess class information.
Specifically, a space clustering algorithm is used for identifying a target aiming at a target obstacle point cloud, and a non-classified target result is output; and simultaneously, identifying the target by using a deep learning algorithm, and outputting a classified target result. However, when the deep learning algorithm is used for target identification, the problem that long-tail targets are not detected often exists, and the main reason is that the data labeling and acquisition of the long-tail targets are difficult and the cost is high, namely, all target obstacles in complete special scenes are difficult to collect in data concentration; in the embodiment, the long-tail target is identified in an auxiliary manner by a spatial clustering method, so that the safety is guaranteed.
Step S40: and performing target fusion on the unclassified target result set and the classified target result set to obtain a fusion target set comprising at least one fusion target.
Exemplarily, in this embodiment, the spatial clustering algorithm and the deep learning algorithm each output the targets they recognize in the target obstacle point cloud, and target fusion is performed at this stage; it can be understood that the targets obtained by the spatial clustering algorithm and by the deep learning algorithm can be matched against each other, and the two sets of targets are fused according to the matching result to finally obtain the fused target set.
Therefore, to counter the long-tail problem of deep-learning-based laser perception, the application detects long-tail data by means of spatial clustering, which reduces the cost and difficulty of collecting long-tail data. The fusion of deep learning and spatial clustering raises the confidence of common targets (such as vehicles and pedestrians): for a common target, the detection attributes from deep learning take precedence, with the spatial clustering result as a supplementary attribute. Moreover, because spatial clustering is performed on the point cloud after the precise ROI filtering provided by the high-precision map, unusual target objects (such as wild boars and rabbits) can also be recognized effectively, so laser perception fusion misses caused by the long-tail problem of deep learning can be avoided. In addition, because the spatial clustering algorithm cannot effectively recognize target class information, this embodiment relies on the common targets recognized by deep learning to provide a more accurate automatic driving function, thereby compensating for the inaccurate classes and target attributes (such as speed and 3D bounding box) of the spatial clustering method.
Further, the performing target fusion on the unclassified target result set and the classified target result set to obtain a fused target set includes:
matching each unclassified target in the unclassified target result set with each classified target in the classified target result set to obtain a matching result;
and determining at least one fusion target according to the matching result to form a fusion target set.
Exemplarily, in this embodiment, it should be understood that when a common target detected by deep learning successfully matches a target detected by the spatial clustering algorithm, the deep learning result takes precedence: the common target obtained by deep learning is put into the fused target result, and the redundant target detected by the spatial clustering algorithm is removed; for unusual targets that deep learning fails to detect, the unclassified targets detected by the spatial clustering algorithm are used as the reference and put into the fused target result, forming the fused target set.
Further, the matching each unclassified target in the unclassified target result set with each classified target in the classified target result set includes:
calculating the Euclidean distance between each unclassified target in the unclassified target result set and each classified target in the classified target result set;
generating a nearest neighbor distance cost matrix based on the Euclidean distance;
and determining whether the unclassified target in the unclassified target result set is successfully matched with the classified target in the classified target result set or not according to a minimum cost principle and the nearest neighbor distance cost matrix.
Exemplarily, in this embodiment, referring to fig. 3, let A be the target list formed by all targets output by the spatial clustering algorithm for the target obstacle point cloud, and let B be the target list output by the deep learning algorithm. Target list A and target list B are traversed in a nested loop, and the Euclidean distance between each target in A and each target in B is calculated; after the traversal, a nearest-neighbor (Euclidean) distance cost matrix of target list A against target list B, denoted CostMatrix, is generated from the calculated distances. Each element of CostMatrix is then traversed to judge whether its nearest-neighbor distance dist (i.e., the Euclidean distance) is smaller than a set threshold; if so, the original distance dist is kept unchanged, otherwise dist is set to infinity, until all elements of CostMatrix have been traversed and a new CostMatrix is formed. After this traversal, the Hungarian algorithm is applied to the new CostMatrix to solve the matching correspondence between the targets of list A and list B according to the minimum-cost principle.
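This matching stage can be illustrated compactly, assuming each target is reduced to a center point and using scipy's linear_sum_assignment as the Hungarian solver; a large finite constant stands in for the "infinite" distance so the solver always has a feasible matrix:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 1e9  # large finite cost standing in for "infinite" distance

def match_targets(centers_a: np.ndarray, centers_b: np.ndarray, threshold: float):
    """Match clustering targets (list A) against deep-learning targets (list B).

    centers_a: (Na, d) target centers from spatial clustering.
    centers_b: (Nb, d) target centers from deep learning.
    Returns (pairs, unmatch_a, unmatch_b) as index lists into A and B.
    """
    # Pairwise Euclidean distances: the CostMatrix of the text.
    cost = np.linalg.norm(centers_a[:, None, :] - centers_b[None, :, :], axis=-1)
    cost[cost >= threshold] = BIG             # gate out implausible pairs
    rows, cols = linear_sum_assignment(cost)  # Hungarian, minimum total cost
    pairs = [(i, j) for i, j in zip(rows, cols) if cost[i, j] < BIG]
    matched_a = {i for i, _ in pairs}
    matched_b = {j for _, j in pairs}
    unmatch_a = [i for i in range(len(centers_a)) if i not in matched_a]
    unmatch_b = [j for j in range(len(centers_b)) if j not in matched_b]
    return pairs, unmatch_a, unmatch_b
```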
Take the example that target list A includes target X and target Y, and target list B includes target Z, target K and target P. In the new CostMatrix, assume the nearest-neighbor distance between target X and target Z is dist1, between target X and target K is dist2, between target X and target P is dist3, between target Y and target Z is dist4, between target Y and target K is dist5, and between target Y and target P is dist6, where dist1 and dist2 are smaller than a preset threshold D, and dist3, dist4, dist5 and dist6 are all greater than D.
Since dist3, dist4, dist5 and dist6 are all greater than D, target Y fails to match target Z, target K and target P, and target P fails to match target X and target Y; since dist1 and dist2 are both smaller than D, target X preliminarily matches both target Z and target K, and whether target X finally matches target Z or target K must be decided according to the matching cost. The matching cost can be characterized directly by the nearest-neighbor distance, by the difference between the nearest-neighbor distance and the preset threshold D, or in other ways determined according to actual requirements.
This embodiment takes the difference between the nearest-neighbor distance and the preset threshold D as an example: assume the difference between dist1 and D is E, the difference between dist2 and D is F, and E is less than F. Since E is less than F, i.e., the matching cost between target X and target Z is less than that between target X and target K, it can be determined that target X successfully matches target Z and does not match target K.
It can be understood that after the Hungarian algorithm, the matching results of target list A and target list B fall into three types: the unsuccessfully matched part of list A (i.e., unmatch_A), the unsuccessfully matched part of list B (i.e., unmatch_B), and the successfully matched parts of lists A and B (i.e., match_A and match_B).
Further, the determining at least one fusion target according to the matching result includes:
if an unclassified target does not successfully match any classified target in the classified target result set, taking the unclassified target as a fusion target;
if an unclassified target successfully matches one classified target in the classified target result set, taking the classified target that successfully matches the unclassified target as a fusion target, and removing the unclassified target;
and if a classified target does not successfully match any unclassified target in the unclassified target result set, taking the classified target as a fusion target.
Exemplarily, in this embodiment, the fusion rules applied to the matching result are: (1) if a deep learning target successfully matches a spatial clustering target, the deep learning target is taken as the main result so as to raise the confidence of the target, and the spatial clustering target is removed; (2) if deep learning misses a target but spatial clustering detects it, the spatial clustering target is put into the fused target result directly; (3) if spatial clustering misses a target but deep learning detects it, the deep learning target can be added to the fused target result.
Specifically, the unsuccessfully matched targets in unmatch_A are added to the fusion list (i.e., the fused target set); for the successfully matched match_A and match_B, the targets in match_B are retained, the confidence of each target in match_B is raised, these targets are added to the fusion list, and the match_A list is deleted; the unsuccessfully matched targets in unmatch_B are added to the fusion list directly.
For example, if unmatch_A includes target 1 and target 2, match_A includes target 3, target 4 and target 5, match_B also includes target 3, target 4 and target 5, and unmatch_B includes target 6 and target 7, then target 1 and target 2 in unmatch_A, target 3, target 4 and target 5 in match_B, and target 6 and target 7 in unmatch_B can all be added to the fused target set, i.e., the fused target set comprises match_B + unmatch_A + unmatch_B, and the output fused targets will include target 1, target 2, target 3, target 4, target 5, target 6 and target 7.
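The assembly of the fusion list can be sketched as follows; representing targets as dictionaries with a "confidence" field, and the size of the confidence boost, are assumptions made for illustration:

```python
def fuse_targets(match_b, unmatch_a, unmatch_b, confidence_boost=0.1):
    """Assemble the fused target set: match_B + unmatch_A + unmatch_B.

    Each target is assumed to be a dict with a 'confidence' field; the
    matched spatial clustering targets (match_A) are simply discarded.
    """
    fused = []
    for target in match_b:      # matched pairs: the deep learning target wins
        target["confidence"] = min(1.0, target.get("confidence", 0.5) + confidence_boost)
        fused.append(target)
    fused.extend(unmatch_a)     # clustering-only targets (long-tail coverage)
    fused.extend(unmatch_b)     # deep-learning-only targets, subject to the
                                # persistence check described below
    return fused
```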
Further, if a classified target does not successfully match any unclassified target in the unclassified target result set, taking the classified target as a fusion target includes:
if a classified target does not successfully match any unclassified target in the unclassified target result set, judging whether the classified target has appeared continuously for a preset number of times;
if so, taking the classified target as a fusion target;
if not, removing the classified target.
Exemplarily, in this embodiment, to further improve the accuracy of the fused targets, for the unsuccessfully matched unmatch_B (which includes target 6 and target 7), it can further be judged whether each target in unmatch_B has appeared continuously for a preset number of times (the preset number can be determined according to actual requirements and is not limited here); if so, the target is retained, otherwise it is deleted from unmatch_B. For example, if the preset number is 3, whether target 6 in unmatch_B appears in 3 consecutive frames is judged; if target 6 appears in 3 or more consecutive frames, it is retained in unmatch_B; if target 6 appears in only 1 or 2 consecutive frames, it is removed from unmatch_B.
Assuming target 6 does not appear 3 times in succession but target 7 does, unmatch_B then includes only target 7; with match_A including target 3, target 4 and target 5, match_B also including target 3, target 4 and target 5, and unmatch_A including target 1 and target 2, the fused targets output at this point will include target 1, target 2, target 3, target 4, target 5 and target 7.
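A sketch of this persistence check follows; it assumes stable per-target track IDs across frames, which the text does not specify:

```python
from collections import defaultdict

class PersistenceFilter:
    """Keep an unmatched deep-learning target only once it has appeared in
    `required` consecutive frames (3 in the example above)."""

    def __init__(self, required: int = 3):
        self.required = required
        self.streak = defaultdict(int)

    def update(self, unmatch_b_ids):
        """unmatch_b_ids: track IDs in unmatch_B for the current frame."""
        current = set(unmatch_b_ids)
        for track_id in list(self.streak):
            if track_id not in current:
                del self.streak[track_id]      # streak broken this frame
        kept = []
        for track_id in current:
            self.streak[track_id] += 1
            if self.streak[track_id] >= self.required:
                kept.append(track_id)          # e.g. target 7 after 3 frames
        return kept
```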
In conclusion, first, the ROI point cloud is accurately acquired with the assistance of the high-precision map, which improves the accuracy and efficiency of the spatial clustering algorithm and reduces the false detection rate; second, with spatial clustering in place, deep learning only needs to focus on detecting and recognizing common targets, which reduces the cost of labeling and collecting long-tail data, and the addition of the spatial clustering algorithm also reduces, to a certain extent, the dependence of deep learning on long-tail data; third, the laser fusion strategy combining deep learning and spatial clustering makes the two complementary, can support basic automatic driving functions (such as ACC and LKA), and avoids the safety problems caused by the long-tail problem. In addition, the single-laser-radar sensor fusion strategy provided by the application addresses the long-tail problem that deep learning cannot exhaust, i.e., long-tail targets are covered in a spatial clustering manner, ensuring the safety-first principle.
Referring to fig. 4, an embodiment of the present application further provides a laser perception fusion optimization apparatus, including:
the generating unit is used for determining a travelable road edge point set of the position of the vehicle based on the high-precision map and generating the region of interest according to the travelable road edge point set;
the filtering unit is used for filtering point cloud information detected by the laser radar based on the region of interest to obtain a target obstacle point cloud;
the identification unit is used for respectively carrying out target identification on the target obstacle point cloud through a spatial clustering algorithm and a deep learning algorithm to obtain an unclassified target result set and a classified target result set;
and the fusion unit is used for carrying out target fusion on the unclassified target result set and the classified target result set to obtain a fusion target set containing at least one fusion target.
Further, the fusion unit is specifically configured to:
matching each unclassified target in the unclassified target result set with each classified target in the classified target result set to obtain a matching result;
and determining at least one fusion target according to the matching result to form a fusion target set.
Further, the fusion unit is specifically further configured to:
if an unclassified target does not successfully match any classified target in the classified target result set, taking the unclassified target as a fusion target;
if an unclassified target successfully matches one classified target in the classified target result set, taking the classified target that successfully matches the unclassified target as a fusion target, and removing the unclassified target;
and if a classified target does not successfully match any unclassified target in the unclassified target result set, taking the classified target as a fusion target.
Further, the fusion unit is specifically further configured to:
if a classified target does not successfully match any unclassified target in the unclassified target result set, judging whether the classified target has appeared continuously for a preset number of times;
if so, taking the classified target as a fusion target;
if not, removing the classified target.
Further, the fusion unit is specifically further configured to:
calculating the Euclidean distance between each unclassified target in the unclassified target result set and each classified target in the classified target result set;
generating a nearest neighbor distance cost matrix based on the Euclidean distance;
and determining whether the unclassified target in the unclassified target result set is successfully matched with the classified target in the classified target result set or not according to a minimum cost principle and the nearest neighbor distance cost matrix.
Further, the filter unit is specifically configured to:
preliminarily filtering point cloud information detected by the laser radar based on the region of interest to obtain first point cloud information corresponding to the region of interest;
and performing ground point cloud filtering on the first point cloud information to obtain a target obstacle point cloud.
Further, the generating unit is specifically configured to:
marking, based on a preset driving route of the vehicle, a driving road edge point set corresponding to the preset driving route on the high-precision map;
and when the position information of the vehicle is received, screening out the travelable road edge point set for the position of the vehicle from the driving road edge point set marked on the high-precision map according to the position information.
It should be noted that, as will be clearly understood by those skilled in the art, for convenience and brevity of description, the specific working processes of the apparatus and the units described above may refer to the corresponding processes in the foregoing embodiment of the laser perception fusion optimization method, and are not described herein again.
The laser perception fusion optimization apparatus provided in the above embodiment may be implemented in the form of a computer program, and the computer program may be run on the laser perception fusion optimization device shown in fig. 5.
The embodiment of the present application further provides a laser perception fusion optimization device, comprising: a memory, a processor and a network interface connected through a system bus, wherein the memory stores at least one instruction, and the at least one instruction is loaded and executed by the processor to implement all or part of the steps of the aforementioned laser perception fusion optimization method.
The network interface is used for network communication, such as sending assigned tasks. It will be appreciated by those skilled in the art that the configuration shown in fig. 5 is a block diagram of only a portion of the configuration associated with the present application and does not limit the computing device to which the present application may be applied; a particular computing device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
The processor may be a CPU, another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor or any conventional processor; the processor is the control center of the computer device, and various interfaces and lines connect the parts of the overall computer device.
The memory may be used to store computer programs and/or modules, and the processor implements the various functions of the computer device by running or executing the computer programs and/or modules stored in the memory and by invoking data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required by at least one function (such as a video playing function, an image playing function, etc.), and the data storage area may store data created according to the use of the device (such as video data, image data, etc.). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, an internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored; when the computer program is executed by a processor, all or part of the steps of the foregoing laser perception fusion optimization method are implemented.
The embodiments of the present application may implement all or part of the foregoing processes through a computer program instructing related hardware. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, it implements the steps of the foregoing methods. The computer program comprises computer program code, which may be in source-code form, object-code form, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic diskette, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, etc. It should be noted that the content of the computer-readable medium may be increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, server, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but also other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or system comprising that element.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is merely exemplary of the present application and is presented to enable those skilled in the art to understand and practice the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A laser perception fusion optimization method is characterized by comprising the following steps:
determining a travelable road edge point set for the position of the vehicle based on a high-precision map, and generating a region of interest according to the travelable road edge point set;
filtering point cloud information detected by the laser radar based on the region of interest to obtain target obstacle point cloud;
respectively carrying out target identification on the target obstacle point cloud through a spatial clustering algorithm and a deep learning algorithm to obtain an unclassified target result set and a classified target result set;
and performing target fusion on the unclassified target result set and the classified target result set to obtain a fusion target set containing at least one fusion target.
2. The laser perception fusion optimization method of claim 1, wherein the performing target fusion on the unclassified target result set and the classified target result set to obtain a fused target set comprises:
matching each unclassified target in the unclassified target result set with each classified target in the classified target result set to obtain a matching result;
and determining at least one fusion target according to the matching result to form a fusion target set.
3. The laser perception fusion optimization method of claim 2, wherein the determining at least one fusion target according to the matching result comprises:
if an unclassified target does not successfully match any classified target in the classified target result set, taking the unclassified target as a fusion target;
if an unclassified target successfully matches one classified target in the classified target result set, taking the classified target that successfully matches the unclassified target as a fusion target, and removing the unclassified target;
and if a classified target does not successfully match any unclassified target in the unclassified target result set, taking the classified target as a fusion target.
4. The laser perception fusion optimization method of claim 3, wherein if a classified target does not successfully match any unclassified target in the unclassified target result set, taking the classified target as a fusion target comprises:
if a classified target does not successfully match any unclassified target in the unclassified target result set, judging whether the classified target has appeared continuously for a preset number of times;
if so, taking the classified target as a fusion target;
if not, removing the classified target.
5. The laser perception fusion optimization method of claim 2, wherein the matching each unclassified target in the unclassified target result set with each classified target in the classified target result set comprises:
calculating the Euclidean distance between each unclassified target in the unclassified target result set and each classified target in the classified target result set;
generating a nearest neighbor distance cost matrix based on the Euclidean distance;
and determining whether the unclassified target in the unclassified target result set is successfully matched with the classified target in the classified target result set or not according to a minimum cost principle and the nearest neighbor distance cost matrix.
6. The laser perception fusion optimization method of claim 1, wherein the filtering point cloud information detected by a laser radar based on the region of interest to obtain a target obstacle point cloud comprises:
preliminarily filtering point cloud information detected by the laser radar based on the region of interest to obtain first point cloud information corresponding to the region of interest;
and performing ground point cloud filtering on the first point cloud information to obtain a target obstacle point cloud.
7. The laser perception fusion optimization method of claim 1, wherein the determining a travelable road edge point set for the position of the vehicle based on the high-precision map comprises:
marking, based on a preset driving route of the vehicle, a driving road edge point set corresponding to the preset driving route on the high-precision map;
and when the position information of the vehicle is received, screening out the travelable road edge point set for the position of the vehicle from the driving road edge point set marked on the high-precision map according to the position information.
8. A laser perception fusion optimization apparatus, comprising:
the generating unit is used for determining a travelable road edge point set of the position of the vehicle based on the high-precision map and generating the region of interest according to the travelable road edge point set;
the filtering unit is used for filtering point cloud information detected by the laser radar based on the region of interest to obtain a target obstacle point cloud;
the identification unit is used for respectively carrying out target identification on the target obstacle point cloud through a spatial clustering algorithm and a deep learning algorithm to obtain an unclassified target result set and a classified target result set;
and the fusion unit is used for carrying out target fusion on the unclassified target result set and the classified target result set to obtain a fusion target set containing at least one fusion target.
9. A laser perception fusion optimization device, comprising: a memory and a processor, wherein the memory stores at least one instruction, and the instruction is loaded and executed by the processor to implement the laser perception fusion optimization method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the laser perception fusion optimization method of any one of claims 1 to 7.
CN202211216856.3A 2022-09-30 2022-09-30 Laser perception fusion optimization method, device and equipment and readable storage medium Pending CN115578703A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211216856.3A CN115578703A (en) 2022-09-30 2022-09-30 Laser perception fusion optimization method, device and equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211216856.3A CN115578703A (en) 2022-09-30 2022-09-30 Laser perception fusion optimization method, device and equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN115578703A 2023-01-06

Family

ID=84582218

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211216856.3A Pending CN115578703A (en) 2022-09-30 2022-09-30 Laser perception fusion optimization method, device and equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN115578703A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117710920A (en) * 2023-12-11 2024-03-15 探维科技(苏州)有限公司 Detection method and device for mobile subject and drivable area
CN117710920B (en) * 2023-12-11 2024-10-29 探维科技(苏州)有限公司 Method and device for detecting movable body and movable area thereof

Similar Documents

Publication Publication Date Title
CN110148196B (en) Image processing method and device and related equipment
CN112329754B (en) Obstacle recognition model training method, obstacle recognition method, device and system
CN112967283A (en) Target identification method, system, equipment and storage medium based on binocular camera
CN117576652B (en) Road object identification method and device, storage medium and electronic equipment
CN112507887B (en) Intersection sign extracting and associating method and device
KR20170104287A (en) Driving area recognition apparatus and method for recognizing driving area thereof
CN115018879B (en) Target detection method, computer readable storage medium and driving device
EP4528677A1 (en) Autonomous-driving environmental perception method, medium and vehicle
CN113255444A (en) Training method of image recognition model, image recognition method and device
CN115131759B (en) Traffic marking recognition method, device, computer equipment and storage medium
CN114299247B (en) Road traffic signs and markings rapid detection and troubleshooting methods
CN116309943B (en) Parking lot semantic map road network construction method and device and electronic equipment
US20250014355A1 (en) Road obstacle detection method and apparatus, and device and storage medium
CN114898321A (en) Method, device, equipment, medium and system for detecting road travelable area
CN117671644A (en) Signboard detection method and device and vehicle
CN116469073A (en) Target identification method, device, electronic equipment, medium and automatic driving vehicle
CN112699711A (en) Lane line detection method, lane line detection device, storage medium, and electronic apparatus
CN114120254A (en) Road information identification method, device and storage medium
CN115792945B (en) Floating obstacle detection method and device, electronic equipment and storage medium
CN115578703A (en) Laser perception fusion optimization method, device and equipment and readable storage medium
CN111161542B (en) Vehicle identification method and device
CN113189610B (en) Map-enhanced autopilot multi-target tracking method and related equipment
CN112131947A (en) Road indication line extraction method and device
CN113932820A (en) Object detection method and device
CN115376093A (en) Object prediction method and device in intelligent driving and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination