
CN113989784B - A road scene type recognition method and system based on vehicle-mounted laser point cloud - Google Patents

A road scene type recognition method and system based on vehicle-mounted laser point cloud

Info

Publication number
CN113989784B
Authority
CN
China
Prior art keywords
point cloud
point
intersection
road
calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111452174.8A
Other languages
Chinese (zh)
Other versions
CN113989784A (en)
Inventor
方莉娜 (Fang Lina)
王康 (Wang Kang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou University
Original Assignee
Fuzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou University filed Critical Fuzhou University
Priority to CN202111452174.8A priority Critical patent/CN113989784B/en
Publication of CN113989784A publication Critical patent/CN113989784A/en
Application granted granted Critical
Publication of CN113989784B publication Critical patent/CN113989784B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F 18/2135 Feature extraction based on approximation criteria, e.g. principal component analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Electromagnetism (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a road scene type recognition method and system based on vehicle-mounted laser point cloud data, wherein the method comprises the following steps: S1, acquiring a ground point cloud from vehicle-mounted laser point cloud data, and from it the road boundary point cloud; S2, calculating the principal direction of each road boundary point, calculating the current trajectory point vector with the driving direction of the vehicle-mounted laser scanning system as the advancing direction of the trajectory points, calculating the angle between the principal direction and the trajectory point vector, calculating the angle feature values of the road boundary within each trajectory point's neighborhood trajectory point by trajectory point, and classifying straight road segments and intersection segments according to their differing angle values; S3, clustering the intersection segments into independent objects and classifying cross intersections and T-shaped intersections with a dynamic graph convolutional neural network (DGCNN), obtaining straight segments, cross intersections and T-shaped intersections. The method and system facilitate recognizing the road boundaries of different road scene types.

Description

Road scene type identification method and system based on vehicle-mounted laser point cloud
Technical Field
The invention belongs to the technical field of road scene identification, and particularly relates to a road scene type identification method and system based on a vehicle-mounted laser point cloud.
Background
As a rapidly developing surveying and mapping technology, the vehicle-mounted laser scanning system can quickly and accurately acquire the spatial information of road scenes and surrounding objects, and is widely applied in urban traffic management, intelligent transportation, high-precision maps and other fields. High-precision maps are a precondition for autonomous driving; the road boundary scene type is an important component of the static high-precision map, and its accurate recognition helps autonomous vehicles plan and decide their driving paths.
At present, road scene recognition falls mainly into two families of methods: image-based road scene type recognition and road type recognition based on GPS trajectory data. Image-based methods extract, describe and match image features and recognize intersection types by summarizing the semantic features, spatial structures and topological shapes of different road types. However, such methods rely on manually designed features, so the recognition effect depends heavily on how well the feature set is designed; image-based methods are also sensitive to illumination and struggle with complex road scene types. Methods based on GPS trajectory data rely on the trajectory footprints left by vehicles; on roads with little traffic, the trajectory data are easily incomplete, making road scenes difficult to recognize accurately.
Patent CN108877267A discloses an intersection detection method based on a vehicle-mounted monocular camera, which designs intersection-type classification and distance estimation loss functions to construct a comprehensive loss function, constructs a feature encoding sub-network, an intersection type classification sub-network and a distance classification sub-network to form an intersection detection network, and feeds road color images collected by the vehicle-mounted monocular camera into the network to output the intersection type. However, the monocular images acquired by a vehicle-mounted monocular camera have low resolution and limited information content, and the method is difficult to apply to large road scenes.
Patent CN106896353A discloses an unmanned-vehicle intersection detection method based on three-dimensional laser radar, which collects road scene point cloud data with the lidar mounted on the unmanned vehicle, converts the point cloud data into raster images to extract multi-frame height maps, uses the pixels of the height maps as feature vectors, searches for intersection points ahead of the vehicle based on its actual position and a real map, and finally classifies the type of the intersection the vehicle is in with a support vector machine fed with the height-map feature vectors and intersection points. However, converting the point cloud into a grid image incurs a precision loss, and the classification accuracy depends on feature selection, which degrades the road scene type classification.
Patent CN109271858A discloses an intersection recognition method based on driving trajectories and visual lane boundary line data, which extracts the set of trajectory points with zero speed from the driving trajectory data, generates intersection stop lines from that point set with a density clustering algorithm, and extracts the positions of intersections by cutting with the stop line data and the camera lane boundary lines associated with the road. However, the method can only locate intersection areas and cannot distinguish different intersection types.
Patent CN110688958A discloses an intersection identification method based on the GoogLeNet neural network, which takes road network data as intersection samples, inputs the samples into a GoogLeNet convolutional neural network for training, and recognizes different road classes. However, the method can only recognize urban arterial road scenes and has difficulty with highway forks, non-fork sections and similar types.
Patent CN109635722A discloses an automatic intersection identification method for high-resolution remote sensing images, which coarsely separates road and non-road classes from the remote sensing imagery and recognizes intersections by extracting road skeleton crossings as candidate intersection coordinates. However, the method only suits urban arterial roads with distinct road-area characteristics, struggles to find intersections on secondary roads, and cannot identify the type of the intersections it does find.
Patent CN109815993A discloses a GPS-trajectory-based method for regional feature extraction, database establishment and intersection identification, which extracts mobile GPS trajectories based on the speed changes in the trajectory data, extracts road features with a feature extraction window, and finally recognizes intersections with a sliding-window KNN algorithm. However, the method struggles in areas with sparse trajectory data and cannot distinguish the types of the recognized intersections.
Disclosure of Invention
The invention aims to provide a road scene type recognition method and system based on the vehicle-mounted laser point cloud, which facilitate recognizing the road boundaries of different road scene types.
In order to achieve the above purpose, the invention adopts the following technical scheme: a road scene type identification method based on vehicle-mounted laser point cloud data comprises the following steps:
s1, acquiring a ground point cloud based on vehicle-mounted laser point cloud data, and further acquiring a road boundary point cloud;
S2, calculating the principal direction of each road boundary point, calculating the current trajectory point vector with the driving direction of the vehicle-mounted laser scanning system as the advancing direction of the trajectory points, calculating the angle between the principal direction and the trajectory point vector, calculating the angle feature values of the road boundary within each trajectory point's neighborhood trajectory point by trajectory point, and classifying straight road segments and intersection segments according to their differing angle values;
S3, clustering the intersection segments into independent objects and classifying cross intersections and T-shaped intersections with a dynamic graph convolutional neural network DGCNN, obtaining straight segments, cross intersections and T-shaped intersections.
Further, the step S1 specifically includes the following steps:
S11, acquiring the ground point cloud from the vehicle-mounted laser point cloud data by a cloth simulation filtering method;
S12, based on the ground point cloud obtained in step S11, segmenting the ground point cloud by a supervoxel method, wherein every point of a supervoxel carries the same semantic label, road boundary point clouds with similar normals are segmented under the same semantic label, and all road boundary points under one label form a voxel block;
S13, calculating the mean angle Δθ between the point normals within each supervoxel block of the ground point cloud and the Z axis of the point cloud coordinate system, calculating the difference Δh between the maximum and minimum point elevations within each voxel block, and selecting the voxel blocks satisfying the voxel block normal threshold t_normal and the voxel block elevation difference threshold t_height, the extracted voxel blocks being those containing the road boundary point cloud;
S14, based on the trajectory line of the vehicle-mounted laser scanning system, retaining the data points within a set range on both sides of the road and removing the data points outside the set range;
S15, clustering the coarsely extracted road boundary voxel blocks by Euclidean clustering, setting a cluster point count threshold t_num, and removing the clusters with fewer points than the threshold to obtain the denoised voxel blocks, i.e. the road boundary point cloud.
Further, the step S2 specifically includes the following steps:
S21, calculating the principal direction of each road boundary point by a principal component analysis method based on the acquired road boundary point cloud;
Inputting the road boundary point cloud, selecting any point p_i in the point cloud, searching its neighborhood of radius r for points p_1, p_2, p_3 … p_n, calculating the mean values x̄, ȳ of the neighborhood points in the x and y directions, calculating the covariance matrix C of the neighborhood points, and selecting the eigenvector corresponding to the largest eigenvalue of the covariance matrix as the principal direction ē_i of the road boundary point p_i in the x-y plane, satisfying the following formulas:

$$\bar{x}=\frac{1}{n}\sum_{i=1}^{n}x_i,\qquad \bar{y}=\frac{1}{n}\sum_{i=1}^{n}y_i$$

$$C=\begin{bmatrix}\mathrm{cov}(x,x)&\mathrm{cov}(x,y)\\\mathrm{cov}(y,x)&\mathrm{cov}(y,y)\end{bmatrix},\qquad \mathrm{cov}(x,y)=\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})$$

wherein n is the number of points in the point neighborhood; x̄, ȳ denote the mean x and y coordinates of the points in the neighborhood, and x_i, y_i denote the x and y coordinates of the i-th point; cov(x, y) denotes the covariance of x and y;
S22, taking a trajectory point t_i of the vehicle-mounted laser scanning system as the center, establishing a local search region ROI, calculating the direction vector v_i from the starting trajectory point t_i to the ending trajectory point t_j in the ROI, and calculating the angle between v_i and the principal direction ē_k of each road boundary point p_1, p_2, p_3 … p_n in the ROI; the angles θ_1, θ_2, θ_3 … θ_n are called the angle feature values of the road boundary points and satisfy the following formula:

$$\theta_k=\arccos\left(\frac{v_i\cdot\bar{e}_k}{\lVert v_i\rVert\,\lVert\bar{e}_k\rVert}\right),\quad k=1,\dots,n$$

S23, taking each trajectory point t_i of the vehicle-mounted laser scanning system as the center, establishing a local ROI_i and accumulating the angle feature values θ of the boundary points within ROI_i into a sum S_i; calculating S_i trajectory point by trajectory point and traversing all trajectory points in order gives the accumulated angle feature value of the road boundary points in the region corresponding to every ROI_i; since the accumulated angle feature value of an intersection region ROI_i is far greater than that of a straight-segment region, a threshold θ_threshold is set to separate straight segments from intersection regions: if S_i is greater than θ_threshold, the current ROI_i region is intersection point cloud, otherwise it is straight-segment point cloud, i.e. the following formula is satisfied:

$$S_i=\sum_{p_k\in ROI_i}\theta_k,\qquad ROI_i=\begin{cases}\text{intersection},& S_i>\theta_{threshold}\\ \text{straight segment},& S_i\le\theta_{threshold}\end{cases}$$
Further, the step S3 specifically includes the following steps:
S31, according to the intersection point clouds extracted in S23, clustering the intersection point clouds into independent objects by a connected component clustering algorithm with clustering distance C_cluster, and classifying the intersection types of the clustered independent intersection objects with a DGCNN network;
S32, sampling each intersection object to n points by farthest point sampling and inputting them to the DGCNN network as an n × 3 array, where 3 is the initial number of features per point in the point cloud feature dimension; the input passes through three EdgeConv layers and one multi-layer perceptron MLP to obtain n × 1024 high-dimensional features of the sampled points, aggregating semantically similar points and extracting local feature information of the point cloud at different levels; max pooling then yields the global feature, the extracted global feature is distributed to every point and concatenated with the different-level features extracted by each EdgeConv, all feature information is fused through an MLP and the dimensionality of the high-dimensional features is reduced; finally the network outputs a 2 × 1 probability vector over the 2 classes, determining the category of the intersection object and realizing the classification of cross intersections and T-shaped intersections.
The invention also provides a road scene type recognition system based on vehicle-mounted laser point cloud data, comprising a memory, a processor and computer program instructions stored on the memory and executable by the processor, the above method steps being implemented when the processor executes the computer program instructions.
Compared with the prior art, the invention has the following beneficial effects: it breaks through the limitation of prior intersection-type classification that relies on trajectory data or remote sensing data, works directly on the vehicle-mounted laser point cloud road boundary, recognizes different road scene types, and identifies road intersection types in different urban areas and highway scenes. The method calculates the angle between the principal direction of the road boundary and the trajectory line direction of the vehicle-mounted scanning system, classifies straight segments and intersection areas based on the difference in angles between intersection and straight regions, clusters the intersection data regions into independent objects, and classifies cross intersections and T-shaped intersections with a deep neural network on the clustered intersection regions, realizing road boundary intersection type recognition.
Drawings
FIG. 1 is a flow chart of a method implementation of an embodiment of the present invention;
FIG. 2 is a schematic view of the ground point cloud supervoxel segmentation in an embodiment of the present invention;
FIG. 3 is a schematic diagram of a road boundary extraction result in an embodiment of the present invention;
FIG. 4 is a schematic view of a main direction of a road boundary in an embodiment of the present invention;
FIG. 5 is a diagram of a characteristic value of a road boundary angle in an embodiment of the present invention;
FIG. 6 is a diagram illustrating the sum of the characteristic values of the included angles of the road boundary trace point areas according to the embodiment of the present invention;
FIG. 7 is a diagram of the intersection and straight segment classification results in an embodiment of the present invention;
FIG. 8 is a diagram of the intersection type classification results in an embodiment of the present invention;
FIG. 9 is a diagram of a DGCNN model architecture in an embodiment of the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the application. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the present application. As used herein, the singular is also intended to include the plural unless the context clearly indicates otherwise, and furthermore, it is to be understood that the terms "comprises" and/or "comprising" when used in this specification are taken to specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
As shown in FIG. 1, the present embodiment provides a road scene type identification method based on vehicle-mounted laser point cloud data, which includes the following steps:
S1, acquiring a ground point cloud based on vehicle-mounted laser point cloud data, and further acquiring a road boundary point cloud.
S2, calculating the principal direction of each road boundary point, calculating the current trajectory point vector with the driving direction of the vehicle-mounted laser scanning system as the advancing direction of the trajectory points, calculating the angle between the principal direction and the trajectory point vector, calculating the angle feature values of the road boundary within each trajectory point's neighborhood trajectory point by trajectory point, and classifying straight road segments and intersection segments according to their differing angle values.
S3, clustering the intersection segments into independent objects and classifying cross intersections and T-shaped intersections with a dynamic graph convolutional neural network (Dynamic Graph CNN, DGCNN), obtaining straight segments, cross intersections and T-shaped intersections.
The specific implementation method of the step S1 is as follows:
S11, the ground point cloud is acquired from the vehicle-mounted laser point cloud data by a cloth simulation filtering method.
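By way of illustration only, the sketch below stands in for this step with a simple grid-based lowest-point filter rather than the cloth simulation algorithm itself; the cell size and height tolerance are assumed values, not parameters from the patent.

```python
import numpy as np

def ground_filter_grid(points: np.ndarray, cell: float = 0.5,
                       dz: float = 0.15) -> np.ndarray:
    """Keep points lying within dz of the lowest point of their XY grid cell
    (a crude stand-in for cloth simulation filtering)."""
    ij = np.floor(points[:, :2] / cell).astype(np.int64)
    keys, inv = np.unique(ij, axis=0, return_inverse=True)  # group by cell
    zmin = np.full(len(keys), np.inf)
    np.minimum.at(zmin, inv, points[:, 2])                  # per-cell lowest z
    return points[points[:, 2] - zmin[inv] < dz]
```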
S12, based on the ground point cloud obtained in step S11, the ground point cloud is segmented by a supervoxel method; every point of a supervoxel carries the same semantic label, road boundary point clouds with similar normals are segmented under the same semantic label, and "similar normals" means that the angle between the normals of two road boundary points is smaller than a set value ε. All road boundary points under one label form a voxel block, the label being represented as the voxel block. The ground point cloud supervoxel segmentation is shown in FIG. 2.
S13, the mean angle Δθ between the point normals within each supervoxel block of the ground point cloud and the Z axis of the point cloud coordinate system is calculated, the difference Δh between the maximum and minimum point elevations within each voxel block is calculated, and the voxel blocks satisfying the voxel block normal threshold t_normal and the voxel block elevation difference threshold t_height are selected; the extracted voxel blocks are those containing the road boundary point cloud.
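A minimal numpy sketch of this screening step follows. The comparison direction (mean normal-to-Z angle above t_normal and elevation span above t_height, which selects curb-like blocks) and the threshold values are assumptions, since the patent only names the thresholds.

```python
import numpy as np

def select_boundary_blocks(blocks, normals, t_normal=30.0, t_height=0.1):
    """blocks: list of (m, 3) point arrays, one per voxel block; normals:
    matching (m, 3) unit normals. Returns indices of blocks satisfying both
    the normal-angle and the elevation-difference thresholds."""
    keep = []
    for idx, (pts, nrm) in enumerate(zip(blocks, normals)):
        cos_z = np.clip(np.abs(nrm[:, 2]), 0.0, 1.0)     # |cos| of angle to Z
        d_theta = np.degrees(np.arccos(cos_z)).mean()    # mean angle to Z axis
        d_h = pts[:, 2].max() - pts[:, 2].min()          # elevation difference
        if d_theta > t_normal and d_h > t_height:
            keep.append(idx)
    return keep
```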
S14, based on the trajectory line of the vehicle-mounted laser scanning system, the data points within a set range on both sides of the road are retained and the data points outside the set range are removed.
S15, the coarsely extracted road boundary voxel blocks are clustered by Euclidean clustering, a cluster point count threshold t_num is set, and the clusters with fewer points than the threshold are removed, giving the denoised voxel blocks, i.e. the road boundary point cloud, as shown in FIG. 3.
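The denoising of S15 can be sketched as below, with DBSCAN at min_samples=1 standing in for Euclidean clustering (at that setting its clusters are exactly the distance-connected components); the radius and t_num values are placeholders, not the patent's parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def denoise_boundary(points: np.ndarray, radius: float = 0.3,
                     t_num: int = 50) -> np.ndarray:
    """Cluster boundary points by Euclidean distance and drop clusters with
    fewer than t_num points (isolated noise far from the curb)."""
    labels = DBSCAN(eps=radius, min_samples=1).fit_predict(points[:, :3])
    counts = np.bincount(labels)
    return points[counts[labels] >= t_num]
```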
The specific implementation method of the step S2 is as follows:
S21, the principal direction of each road boundary point is calculated by a principal component analysis method based on the acquired road boundary point cloud.
The road boundary point cloud is input, any point p_i in the point cloud is selected, its neighborhood of radius r is searched for points p_1, p_2, p_3 … p_n, the mean values x̄, ȳ of the neighborhood points in the x and y directions are calculated, the covariance matrix C of the neighborhood points is calculated, and the eigenvector corresponding to the largest eigenvalue of the covariance matrix is selected as the principal direction ē_i of the road boundary point p_i in the x-y plane. The principal direction of the road boundary is shown in FIG. 4 and satisfies the following formulas:

$$\bar{x}=\frac{1}{n}\sum_{i=1}^{n}x_i,\qquad \bar{y}=\frac{1}{n}\sum_{i=1}^{n}y_i$$

$$C=\begin{bmatrix}\mathrm{cov}(x,x)&\mathrm{cov}(x,y)\\\mathrm{cov}(y,x)&\mathrm{cov}(y,y)\end{bmatrix},\qquad \mathrm{cov}(x,y)=\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})$$

wherein n is the number of points in the point neighborhood; x̄, ȳ denote the mean x and y coordinates of the points in the neighborhood, and x_i, y_i denote the x and y coordinates of the i-th point; cov(x, y) denotes the covariance of x and y.
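S21 translates directly into numpy; the following is a minimal sketch assuming a fixed neighborhood radius r, in which the eigenvector of the larger eigenvalue of the 2 × 2 covariance matrix gives each boundary point's principal direction in the x-y plane.

```python
import numpy as np
from scipy.spatial import cKDTree

def principal_directions(boundary_pts: np.ndarray, r: float = 1.0) -> np.ndarray:
    """Per-point PCA over the XY coordinates of the r-neighborhood."""
    tree = cKDTree(boundary_pts[:, :2])
    dirs = np.zeros((len(boundary_pts), 2))
    for i, p in enumerate(boundary_pts[:, :2]):
        idx = tree.query_ball_point(p, r)
        if len(idx) < 3:                      # too few neighbors for stable PCA
            continue
        C = np.cov(boundary_pts[idx, :2], rowvar=False)  # 2x2 covariance of x, y
        w, v = np.linalg.eigh(C)              # eigenvalues in ascending order
        dirs[i] = v[:, -1]                    # eigenvector of largest eigenvalue
    return dirs
```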
S22, taking a trajectory point t_i of the vehicle-mounted laser scanning system as the center, a local search region ROI is established, the direction vector v_i from the starting trajectory point t_i to the ending trajectory point t_j in the ROI is calculated, and the angle between v_i and the principal direction ē_k of each road boundary point p_1, p_2, p_3 … p_n in the ROI is calculated; the angles θ_1, θ_2, θ_3 … θ_n are called the angle feature values of the road boundary points, as shown in FIG. 5, and satisfy the following formula:

$$\theta_k=\arccos\left(\frac{v_i\cdot\bar{e}_k}{\lVert v_i\rVert\,\lVert\bar{e}_k\rVert}\right),\quad k=1,\dots,n$$

S23, taking each trajectory point t_i of the vehicle-mounted laser scanning system as the center, a local ROI_i is established and the angle feature values θ of the boundary points within ROI_i are accumulated into a sum S_i; S_i is calculated trajectory point by trajectory point, and traversing all trajectory points in order gives the accumulated angle feature value of the road boundary points in the region corresponding to every ROI_i, as shown in FIG. 6. The accumulated angle feature value of an intersection region ROI_i is far greater than that of a straight-segment region, so a threshold θ_threshold is set to separate straight segments from intersection regions: if S_i is greater than θ_threshold, the current ROI_i region is intersection point cloud, otherwise it is straight-segment point cloud, as shown in FIG. 7, i.e. the following formula is satisfied:

$$S_i=\sum_{p_k\in ROI_i}\theta_k,\qquad ROI_i=\begin{cases}\text{intersection},& S_i>\theta_{threshold}\\ \text{straight segment},& S_i\le\theta_{threshold}\end{cases}$$
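S22-S23 can be sketched as follows. Two simplifications are assumptions: the direction vector is taken between consecutive trajectory points rather than the ROI's start and end trajectory points, and the dot product is taken in absolute value because a PCA eigenvector's sign is arbitrary; the ROI radius and θ_threshold values are illustrative, not the patent's.

```python
import numpy as np
from scipy.spatial import cKDTree

def classify_rois(traj, boundary_pts, dirs, roi=15.0, theta_threshold=400.0):
    """For each trajectory point, sum the angle feature values of the boundary
    points inside its ROI and flag the ROI as intersection if the sum S_i
    exceeds theta_threshold (angles in degrees)."""
    tree = cKDTree(boundary_pts[:, :2])
    flags = np.zeros(len(traj) - 1, dtype=bool)
    for i in range(len(traj) - 1):
        v = traj[i + 1, :2] - traj[i, :2]              # direction vector v_i
        v = v / (np.linalg.norm(v) + 1e-12)
        idx = tree.query_ball_point(traj[i, :2], roi)  # boundary points in ROI_i
        cosang = np.abs(dirs[idx] @ v)                 # |cos| of angle to e_k
        theta = np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))
        flags[i] = theta.sum() > theta_threshold       # S_i > theta_threshold
    return flags
```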
the specific implementation method of the step S3 is as follows:
S31, according to the intersection point clouds extracted in S23, the intersection point clouds are clustered into independent objects by a connected component clustering algorithm with clustering distance C_cluster, and the intersection types of the clustered independent intersection objects are classified with a DGCNN network.
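A sketch of the connected component clustering follows, assuming a KD-tree radius graph; the value chosen for C_cluster is a placeholder.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def cluster_objects(points: np.ndarray, c_cluster: float = 2.0):
    """Link points closer than c_cluster and return one point array per
    connected component (one independent intersection object each)."""
    tree = cKDTree(points[:, :3])
    pairs = tree.query_pairs(c_cluster, output_type="ndarray")
    n = len(points)
    adj = csr_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                     shape=(n, n))
    k, labels = connected_components(adj, directed=False)
    return [points[labels == c] for c in range(k)]
```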
S32, each intersection object is sampled to n points by farthest point sampling and input to the DGCNN network shown in FIG. 9 as an n × 3 array, where 3 is the initial number of features per point in the point cloud feature dimension; the input passes through three EdgeConv layers and one multi-layer perceptron (Multi-Layer Perceptron, MLP) to obtain n × 1024 high-dimensional features of the sampled points, aggregating semantically similar points and extracting local feature information of the point cloud at different levels; max pooling then yields the global feature, the extracted global feature is distributed to every point and concatenated with the different-level features extracted by each EdgeConv, all feature information is fused through an MLP and the dimensionality of the high-dimensional features is reduced; finally the network outputs a 2 × 1 probability vector over the 2 classes, determining the category of the object and realizing the classification of cross intersections and T-shaped intersections, as shown in FIG. 8.
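A compact PyTorch sketch of the S32 classifier is given below. It follows the described pipeline (three EdgeConv layers, an MLP lifting the concatenated features to 1024 dimensions, max pooling, a 2-way head), but the layer widths, k = 20 and the classifier head are assumed choices rather than the patent's exact configuration, the per-point redistribution of the global feature is folded into the pooled head for brevity, and the (B, n, 3) input is assumed to come from farthest point sampling.

```python
import torch
import torch.nn as nn

def knn_graph(x: torch.Tensor, k: int) -> torch.Tensor:
    """Indices of the k nearest neighbors of each point; x is (B, N, C)."""
    d = torch.cdist(x, x)                                   # pairwise distances
    return d.topk(k + 1, largest=False).indices[..., 1:]    # drop self-match

class EdgeConv(nn.Module):
    """EdgeConv: MLP over edge features [x_i, x_j - x_i] for the k neighbors
    of each point, then max over the neighborhood; the graph is rebuilt from
    the current features at every layer (the 'dynamic' in DGCNN)."""
    def __init__(self, c_in: int, c_out: int, k: int = 20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * c_in, c_out), nn.ReLU())

    def forward(self, x):                                   # x: (B, N, C)
        idx = knn_graph(x, self.k)                          # (B, N, k)
        src = x.unsqueeze(1).expand(-1, x.size(1), -1, -1)  # (B, N, N, C)
        nbr = torch.gather(src, 2,
                           idx.unsqueeze(-1).expand(-1, -1, -1, x.size(-1)))
        ctr = x.unsqueeze(2).expand_as(nbr)
        edge = torch.cat([ctr, nbr - ctr], dim=-1)          # (B, N, k, 2C)
        return self.mlp(edge).max(dim=2).values             # (B, N, c_out)

class DGCNNClassifier(nn.Module):
    """Cross intersection vs T-shaped intersection classifier."""
    def __init__(self, k: int = 20):
        super().__init__()
        self.e1 = EdgeConv(3, 64, k)
        self.e2 = EdgeConv(64, 64, k)
        self.e3 = EdgeConv(64, 128, k)
        self.lift = nn.Linear(64 + 64 + 128, 1024)          # n x 1024 features
        self.head = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(),
                                  nn.Linear(256, 2))        # 2 classes

    def forward(self, pts):                                 # pts: (B, n, 3)
        f1 = self.e1(pts)
        f2 = self.e2(f1)
        f3 = self.e3(f2)
        f = self.lift(torch.cat([f1, f2, f3], dim=-1))      # fuse EdgeConv levels
        g = f.max(dim=1).values                             # global max pooling
        return self.head(g)                                 # class logits
```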
The present embodiment also provides a road scene type recognition system for implementing the above method, which comprises a memory, a processor and computer program instructions stored on the memory and executable by the processor; when the processor executes the computer program instructions, the above method steps are implemented.
The invention provides a road scene type recognition scheme based on vehicle-mounted laser point cloud data. First, the ground point cloud is extracted from the vehicle-mounted laser point cloud by cloth simulation filtering; road boundary points are clustered with a boundary-enhanced supervoxel method, the normal vectors within voxel blocks are re-weighted, road boundary points are extracted with a normal vector threshold and a voxel block elevation difference threshold, discrete points far from the trajectory line center are removed, and clusters with too few points are removed by a connected component method, giving the road boundary point cloud. The principal direction of the road boundary is then calculated; with the trajectory line driven by the vehicle-mounted laser scanning system as the advancing direction, the angle between the trajectory driving direction and the road boundary principal direction is calculated, the variation of the angle values in the regions corresponding to different trajectory points is accumulated, the angle sums of the different regions are sorted, the first ten are used to set the threshold, and intersection and non-intersection regions are classified. Finally, based on the extracted intersection regions, connected components are clustered into independent objects, cross intersections and T-shaped intersections are classified with a dynamic graph neural network classification model, and combined with the classified straight segments this yields the T-shaped intersections, cross intersections and straight segments.
Compared with the prior art, the method works directly on the vehicle-mounted laser point cloud road boundary: intersection and non-intersection points are classified by calculating the variation of the angle between the road boundary principal direction and the trajectory line, the intersection data are clustered into independent objects, cross intersections and T-shaped intersections are classified with the DGCNN model, and road boundaries of different road scene types are finally output, providing a new approach to road scene type recognition from vehicle-mounted laser point clouds.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention and is not intended to limit the invention in any way; any person skilled in the art may modify or adapt the disclosed technical content into equivalent embodiments. However, any simple modification or equivalent variation of the above embodiments that stays within the technical substance of the present invention still falls within the protection scope of the technical solution of the present invention.

Claims (4)

1. A road scene type recognition method based on vehicle-mounted laser point cloud data, characterized by comprising the following steps:
S1, acquiring a ground point cloud based on vehicle-mounted laser point cloud data, and further acquiring a road boundary point cloud;
S2, calculating the principal direction of each road boundary point, calculating the current trajectory point vector with the driving direction of the vehicle-mounted laser scanning system as the advancing direction of the trajectory points, calculating the angle between the principal direction and the trajectory point vector, calculating the angle feature values of the road boundary within each trajectory point's neighborhood trajectory point by trajectory point, and classifying straight road segments and intersection segments according to their differing angle values;
S3, clustering the intersection segments into independent objects and classifying cross intersections and T-shaped intersections with a dynamic graph convolutional neural network DGCNN, obtaining straight segments, cross intersections and T-shaped intersections;
the step S2 specifically includes the following steps:
S21, calculating the principal direction of each road boundary point by a principal component analysis method based on the acquired road boundary point cloud;
Inputting the road boundary point cloud, selecting any point p_i in the point cloud, searching its neighborhood of radius r for points p_1, p_2, p_3 … p_n, calculating the mean values x̄, ȳ of the neighborhood points in the x and y directions, calculating the covariance matrix C of the neighborhood points, and selecting the eigenvector corresponding to the largest eigenvalue of the covariance matrix as the principal direction ē_i of the road boundary point p_i in the x-y plane, satisfying the following formulas:

$$\bar{x}=\frac{1}{n}\sum_{i=1}^{n}x_i,\qquad \bar{y}=\frac{1}{n}\sum_{i=1}^{n}y_i$$

$$C=\begin{bmatrix}\mathrm{cov}(x,x)&\mathrm{cov}(x,y)\\\mathrm{cov}(y,x)&\mathrm{cov}(y,y)\end{bmatrix},\qquad \mathrm{cov}(x,y)=\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})$$

wherein n is the number of points in the point neighborhood; x̄, ȳ denote the mean x and y coordinates of the points in the neighborhood, and x_i, y_i denote the x and y coordinates of the i-th point; cov(x, y) denotes the covariance of x and y;
S22, taking a trajectory point t_i of the vehicle-mounted laser scanning system as the center, establishing a local search region ROI, calculating the direction vector v_i from the starting trajectory point t_i to the ending trajectory point t_j in the ROI, and calculating the angle between v_i and the principal direction ē_k of each road boundary point p_1, p_2, p_3 … p_n in the ROI; the angles θ_1, θ_2, θ_3 … θ_n are called the angle feature values of the road boundary points and satisfy the following formula:

$$\theta_k=\arccos\left(\frac{v_i\cdot\bar{e}_k}{\lVert v_i\rVert\,\lVert\bar{e}_k\rVert}\right),\quad k=1,\dots,n$$

S23, taking each trajectory point t_i of the vehicle-mounted laser scanning system as the center, establishing a local ROI_i and accumulating the angle feature values θ of the boundary points within ROI_i into a sum S_i; calculating S_i trajectory point by trajectory point and traversing all trajectory points in order gives the accumulated angle feature value of the road boundary points in the region corresponding to every ROI_i; since the accumulated angle feature value of an intersection region ROI_i is far greater than that of a straight-segment region, a threshold θ_threshold is set to separate straight segments from intersection regions: if S_i is greater than θ_threshold, the current ROI_i region is intersection point cloud, otherwise it is straight-segment point cloud, i.e. the following formula is satisfied:

$$S_i=\sum_{p_k\in ROI_i}\theta_k,\qquad ROI_i=\begin{cases}\text{intersection},& S_i>\theta_{threshold}\\ \text{straight segment},& S_i\le\theta_{threshold}\end{cases}$$
2. The road scene type recognition method based on vehicle-mounted laser point cloud data according to claim 1, characterized in that the step S1 specifically comprises the following steps:
S11, acquiring the ground point cloud from the vehicle-mounted laser point cloud data by a cloth simulation filtering method;
S12, based on the ground point cloud obtained in step S11, segmenting the ground point cloud by a supervoxel method, wherein every point of a supervoxel carries the same semantic label, road boundary point clouds with similar normals are segmented under the same semantic label, and all road boundary points under one label form a voxel block;
S13, calculating the mean angle Δθ between the point normals within each supervoxel block of the ground point cloud and the Z axis of the point cloud coordinate system, calculating the difference Δh between the maximum and minimum point elevations within each voxel block, and selecting the voxel blocks satisfying the voxel block normal threshold t_normal and the voxel block elevation difference threshold t_height, the extracted voxel blocks being those containing the road boundary point cloud;
S14, based on the trajectory line of the vehicle-mounted laser scanning system, retaining the data points within a set range on both sides of the road and removing the data points outside the set range;
S15, clustering the coarsely extracted road boundary voxel blocks by Euclidean clustering, setting a cluster point count threshold t_num, and removing the clusters with fewer points than the threshold to obtain the denoised voxel blocks, i.e. the road boundary point cloud.
3. The road scene type recognition method based on vehicle-mounted laser point cloud data according to claim 1, characterized in that the step S3 specifically comprises the following steps:
S31, according to the intersection point clouds extracted in S23, clustering the intersection point clouds into independent objects by a connected component clustering algorithm with clustering distance C_cluster, and classifying the intersection types of the clustered independent intersection objects with a DGCNN network;
S32, sampling each intersection object to n points by farthest point sampling and inputting them to the DGCNN network as an n × 3 array, wherein 3 is the initial number of features per point in the point cloud feature dimension; the input passes through three EdgeConv layers and one multi-layer perceptron MLP to obtain n × 1024 high-dimensional features of the sampled points, aggregating semantically similar points and extracting local feature information of the point cloud at different levels; max pooling then yields the global feature, the extracted global feature is distributed to every point and concatenated with the different-level features extracted by each EdgeConv, all feature information is fused through an MLP and the dimensionality of the high-dimensional features is reduced; finally the network outputs a 2 × 1 probability vector over the 2 classes, determining the category of the intersection object and realizing the classification of cross intersections and T-shaped intersections.
4. A road scene type recognition system based on vehicle-mounted laser point cloud data, characterized by comprising a memory, a processor and computer program instructions stored on the memory and executable by the processor, the method steps of any one of claims 1-3 being implemented when the computer program instructions are executed by the processor.
CN202111452174.8A 2021-11-30 2021-11-30 A road scene type recognition method and system based on vehicle-mounted laser point cloud Active CN113989784B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111452174.8A CN113989784B (en) 2021-11-30 2021-11-30 A road scene type recognition method and system based on vehicle-mounted laser point cloud

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111452174.8A CN113989784B (en) 2021-11-30 2021-11-30 A road scene type recognition method and system based on vehicle-mounted laser point cloud

Publications (2)

Publication Number Publication Date
CN113989784A CN113989784A (en) 2022-01-28
CN113989784B 2024-11-15

Family

ID=79732848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111452174.8A Active CN113989784B (en) 2021-11-30 2021-11-30 A road scene type recognition method and system based on vehicle-mounted laser point cloud

Country Status (1)

Country Link
CN (1) CN113989784B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114399762B (en) * 2022-03-23 2022-06-10 成都奥伦达科技有限公司 Road scene point cloud classification method and storage medium
CN114743210B (en) * 2022-03-30 2025-06-20 北京百度网讯科技有限公司 Parallel road detection method, device and electronic equipment
CN114690780B (en) * 2022-04-13 2024-10-29 中国矿业大学 Gradient and curve passing method of unmanned tracked electric locomotive in deep limited space
CN114863048B (en) * 2022-05-17 2025-05-30 亿咖通(湖北)技术有限公司 Vectorized data generation method and electronic device for road boundary line
CN115060250B (en) * 2022-06-22 2025-07-25 北京百度网讯科技有限公司 Method, device, electronic equipment and medium for updating map data
CN115953608B (en) * 2023-03-09 2023-05-30 江苏金寓信息科技有限公司 Laser point cloud data clustering identification method for model construction
CN116092038B (en) * 2023-04-07 2023-06-30 中国石油大学(华东) Point cloud-based large transportation key road space trafficability judging method
CN116612594A (en) * 2023-05-11 2023-08-18 深圳市云之音科技有限公司 Intelligent monitoring and outbound system and method based on big data
CN117612127B (en) * 2024-01-19 2024-04-26 福思(杭州)智能科技有限公司 Scene generation method and device, storage medium and electronic equipment
CN117606508B (en) * 2024-01-23 2024-05-31 智道网联科技(北京)有限公司 Method and device for acquiring closest point in track, electronic equipment and storage medium
CN118038415B (en) * 2024-04-12 2024-07-05 厦门中科星晨科技有限公司 Laser radar-based vehicle identification method, device, medium and electronic equipment
CN118470362B (en) * 2024-04-29 2025-03-18 江苏海洋大学 A method for extracting mesoscale eddy migration paths based on RDP trajectory division and direction clustering

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390169A (en) * 2013-07-19 2013-11-13 武汉大学 Sorting method of vehicle-mounted laser scanning point cloud data of urban ground objects
CN111985322A (en) * 2020-07-14 2020-11-24 西安理工大学 A Lidar-based Road Environment Element Perception Method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113378800B (en) * 2021-07-27 2021-11-09 武汉市测绘研究院 Automatic classification and vectorization method for road sign lines based on vehicle-mounted three-dimensional point cloud

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390169A (en) * 2013-07-19 2013-11-13 武汉大学 Sorting method of vehicle-mounted laser scanning point cloud data of urban ground objects
CN111985322A (en) * 2020-07-14 2020-11-24 西安理工大学 A Lidar-based Road Environment Element Perception Method

Also Published As

Publication number Publication date
CN113989784A (en) 2022-01-28

Similar Documents

Publication Publication Date Title
CN113989784B (en) A road scene type recognition method and system based on vehicle-mounted laser point cloud
Zai et al. 3-D road boundary extraction from mobile laser scanning data via supervoxels and graph cuts
Lian et al. DeepWindow: Sliding window based on deep learning for road extraction from remote sensing images
WO2018068653A1 (en) Point cloud data processing method and apparatus, and storage medium
Yenikaya et al. Keeping the vehicle on the road: A survey on on-road lane detection systems
Tümen et al. Intersections and crosswalk detection using deep learning and image processing techniques
Romdhane et al. An improved traffic signs recognition and tracking method for driver assistance system
CN114359876B (en) Vehicle target identification method and storage medium
CN105160309A (en) Three-lane detection method based on image morphological segmentation and region growing
CN113759391A (en) Passable area detection method based on laser radar
Zhang et al. A framework for turning behavior classification at intersections using 3D LIDAR
Rateke et al. Passive vision region-based road detection: A literature review
Zhang et al. GC-Net: Gridding and clustering for traffic object detection with roadside LiDAR
CN109670455A (en) Computer vision lane detection system and its detection method
US20250014355A1 (en) Road obstacle detection method and apparatus, and device and storage medium
CN118363382A (en) AGV operation monitoring method and system
CN111931683A (en) Image recognition method, image recognition device and computer-readable storage medium
Ghahremannezhad et al. Automatic road detection in traffic videos
Alshehri et al. Unmanned aerial vehicle detection and tracking using image segmentation and Bayesian filtering
Coronado et al. Detection and classification of road signs for automatic inventory systems using computer vision
Ding et al. A comprehensive approach for road marking detection and recognition
Börcs et al. A model-based approach for fast vehicle detection in continuously streamed urban LIDAR point clouds
Xuan et al. Robust lane-mark extraction for autonomous driving under complex real conditions
CN117593685A (en) Method and device for constructing true value data and storage medium
CN114821500B (en) Relocation method and device based on multi-source feature fusion of point cloud

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant