CN112560747B - Lane boundary interactive extraction method based on vehicle-mounted point cloud data - Google Patents
- Publication number
- CN112560747B (application CN202011540676.1A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- cloud data
- line
- lane boundary
- lane
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a lane boundary interactive extraction method based on vehicle-mounted point cloud data, which comprises the steps of acquiring a vehicle-mounted point cloud data set, manually selecting a start point and direction, training sample data, identifying and denoising the lane boundary point cloud, extracting and denoising the lane boundary center line, and fitting and extracting the vector boundary; the two ends of the fitted vector boundary are then taken as the new start point and extending direction, and the steps following sample-data training are repeated until no new lane boundary can be obtained, yielding the lane boundaries of the vehicle-mounted point cloud data. By dividing the point cloud into scan lines according to GPS time and by the interactive extraction method, the invention reduces the complexity of the data processed in a single pass, improves data processing efficiency, and improves the extraction precision of lane vector boundaries; at the same time, because sample-data training is repeated and the thresholds are updated on every iteration of the lane vector boundary extraction, the problem that marking lines of low reflection intensity cannot be identified is effectively avoided, and interactive, high-precision, rapid extraction of lane boundaries under complex conditions is achieved.
Description
Technical Field
The invention relates to the technical field of vehicle-mounted point cloud data processing, in particular to a lane boundary interactive extraction method based on vehicle-mounted point cloud data.
Background
The vehicle-mounted laser scanning system is a measurement mode in which a Global Navigation Satellite System (GNSS) receiver, an image sensor (CCD) and an inertial navigation system (INS) are tightly integrated with a three-dimensional laser scanner on a vehicle platform. It offers rich information content, high precision and strong flexibility, can quickly acquire accurate three-dimensional information of a lane and its surroundings, and provides data support for high-precision map construction, lane information maintenance, urban three-dimensional visualization and similar applications. The vehicle-mounted mobile measurement system is designed modularly, with each sensor assembled as a component of the system so that sensors can be conveniently upgraded. Through decades of development at home and abroad, the software and hardware of such systems have matured, and corresponding products from many companies have entered the commercialization stage. Driven by the urgent demands of digital city construction, vehicle-mounted mobile measurement systems and sensors have advanced in spatial resolution, measurement precision and other respects, related technologies such as three-dimensional model construction, point cloud data processing and computer graphics have developed, and the extraction and classification of vehicle-mounted point clouds are increasingly studied.
Lane markings are important during vehicle operation: besides providing route information for the driver, they are a core element of the high-precision maps on which autonomous driving depends. Therefore, to ensure driving safety, lane management departments maintain and update lane marking information every year and repair damaged markings accordingly. The traditional way to survey lane markings uses a total station or the Global Positioning System (GPS) to collect marking information; although accuracy can be guaranteed, this approach is inefficient and consumes considerable manpower and material resources. The currently common approach extracts marking information from photographs or video; although it meets the lane management departments' speed requirements, the data are strongly affected by illumination during processing and the accuracy is not high. A vehicle-mounted mobile measurement system can rapidly and accurately acquire the three-dimensional coordinates of the road surface, and its reflection intensity information distinguishes ground objects of different materials, so markings can be extracted from the reflection intensity difference between lane markings and other ground objects. However, owing to the position and scanning incidence angle of the vehicle-mounted laser scanner, it is difficult to extract the lane marking point cloud completely and accurately.
In the prior art, methods for extracting lane markings from vehicle-mounted scanning data fall mainly into the following two categories:
The first category generates a feature image from the reflection intensity projection of the scan data and extracts markings with mature image processing algorithms. One such method analyzes the feature image using reflection intensity and height values, then extracts the lane markings with a distributed probabilistic Hough transform, but it performs poorly on some complex markings. Other processing methods segment the feature image along the lane direction, analyze each segmented region to determine an adaptive threshold, and extract the lane markings with a multi-threshold approach; or filter the vehicle-mounted scan data by reflection intensity and elevation, generate a feature image, and finally classify by the shape and size of the lane markings. Still another generates an elevation feature image from the elevation information of the point cloud, extracts the lane point cloud with the Otsu method and the Canny operator, and finally extracts the lane markings from the reflection intensity, semantic information and geometric features of the markings. Because all of these methods generate feature images from the reflection intensity projection of the scan data and extract the lane markings with image processing algorithms, the markings are obtained in picture form and lack three-dimensional information.
The second category extracts the lane marking point cloud directly from the vehicle-mounted scan data, computing an intensity threshold from the reflection intensity difference between the lane markings and the road surface. Some processing methods partition the lane point cloud, compute a threshold in each partition with the Otsu method, extract the lane markings by that threshold, and finally filter noise by the density difference between markings and other features. Others extract candidate lane marking points with an adaptive point cloud threshold, extract the markings with a Hough transform, and finally register the three-dimensional point cloud with a high-definition image to extract information such as signs and colors. Still other processing methods first filter the point cloud by reflection intensity, then determine the lane marking position from the scan angle of each laser point, and finally fit and screen the markings with least squares. Methods that extract the lane marking point cloud directly from the vehicle-mounted scan data preserve the three-dimensional information of the markings, but they extract markings of low reflection intensity poorly.
In addition, most point cloud extraction in actual production is fully automatic. Automatic extraction inevitably yields results that are incorrect or of unsatisfactory precision in places; the distribution of these problem results is difficult to predict, and the time cost of correcting them is very high, so such methods cannot be widely applied in actual production, which makes extracting lane boundaries from vehicle-mounted point clouds difficult.
Disclosure of Invention
The invention aims to solve the technical problem of providing an interactive lane boundary extraction method based on vehicle-mounted point cloud data that performs lane boundary extraction on vehicle-mounted point cloud data interactively, quickly and with high precision.
In order to solve the technical problems, the invention provides a lane boundary interactive extraction method based on vehicle-mounted point cloud data, which comprises the following steps:
Step 1: acquiring an initial vehicle-mounted point cloud data set, and preprocessing the initial vehicle-mounted point cloud data set;
Step 2: manually selecting two points in the vehicle-mounted point cloud data set, and determining a starting point and a lane boundary extending direction;
Step 3: setting buffer areas on both sides of the lane boundary extending direction, searching the point cloud data in the buffer areas on both sides of the starting line segment, clustering the point cloud data in the buffer areas, and calculating a threshold group TI_i for lane boundary marking point cloud identification from the reflection intensity of the cluster centers C_i;
Step 4: dividing the point cloud data in the buffer area into different scan lines according to the GPS time of the point cloud data, comparing the reflection intensity values of each scan line with the threshold group TI_i to obtain lane boundary point cloud data, and denoising the lane boundary point cloud data to obtain the denoised lane boundary point cloud data M_P;
Step 5: extracting the lane boundary center line point cloud from M_P, and denoising the lane boundary center line point cloud to obtain M_P';
Step 6: thinning M_P' and extracting the lane vector boundary with the Douglas-Peucker algorithm to obtain a fitted polyline;
Step 7: taking the two end points of the straight segment at the tail of the fitted polyline as the new starting point and lane boundary extending direction, and returning to step 3 until no new fitted polyline can be obtained; all the fitted polylines obtained at that point are the lane boundaries of the vehicle-mounted point cloud data.
Further, the specific process in step 1 is as follows:
Acquiring initial point cloud data and photographs of the lane with a vehicle-mounted three-dimensional laser scanning system, performing format conversion, trajectory calculation and pre-inspection on the data and photographs, establishing a coordinate system, performing SLAM computation, and exporting the vehicle-mounted point cloud data set;
and manually cropping and deleting the non-ground point cloud data above the road surface in the vehicle-mounted point cloud data set, retaining the road surface point cloud data.
Further, the method used for clustering the point cloud data in the buffer area in step 3 is K-means clustering, and the specific process is as follows:
Dividing the reflection intensity information of the retrieved point cloud data into two classes with K-means clustering, the cluster centers of the two classes being C_1 and C_2 with C_1 > C_2, where C_1 is the marking-line data and C_2 the non-marking-line data;
A threshold group TI_i for lane boundary marking point cloud identification is calculated from C_1 and C_2; the threshold group TI_i comprises the minimum difference TI_1 between the reflection intensities of marking and non-marking data and the minimum reflection intensity TI_2 of marking data, where TI_1 = (C_1 - C_2) × r_1 and TI_2 = (C_1 + C_2) × r_1.
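A minimal sketch of this threshold training, assuming scikit-learn's KMeans as the two-class clusterer (the patent does not prescribe an implementation) and the function name train_thresholds as a hypothetical label:

```python
import numpy as np
from sklearn.cluster import KMeans

def train_thresholds(intensities, r1=0.3):
    """Two-class K-means on reflection intensity.
    Returns (TI_1, TI_2) = ((C1 - C2) * r1, (C1 + C2) * r1),
    with C1 > C2 the marking / non-marking cluster centers."""
    km = KMeans(n_clusters=2, n_init=10).fit(np.asarray(intensities).reshape(-1, 1))
    c1, c2 = sorted(km.cluster_centers_.ravel(), reverse=True)
    return (c1 - c2) * r1, (c1 + c2) * r1
```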
Further, in step 4, the point cloud data in the buffer area is divided into different scan lines according to the GPS time of the point cloud data, and the specific process is as follows:
Sorting the point cloud data in the buffer area by its GPS time, and comparing the time difference T between adjacent points P_i and P_{i+1} in the point cloud data with a preset GPS time difference threshold ΔT: if T > ΔT, P_i and P_{i+1} are assigned to different scan lines, otherwise to the same scan line.
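A sketch of this scan line division, assuming the point array carries GPS time in its fifth column (a layout assumption, not stated in the patent):

```python
import numpy as np

def split_scan_lines(points, delta_t=1.0e-4):
    """points: (N, 5) array of x, y, z, intensity, gps_time.
    Returns a list of scan lines: maximal runs of time-sorted points
    whose consecutive GPS time gaps never exceed delta_t."""
    pts = points[np.argsort(points[:, 4])]
    gaps = np.diff(pts[:, 4])
    breaks = np.where(gaps > delta_t)[0] + 1   # first index of each new scan line
    return np.split(pts, breaks)
```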
Further, in step 4, the reflection intensity values of each scan line are compared with the threshold group TI_i to obtain lane boundary point cloud data, specifically:
Sort the point cloud data in each scan line by GPS time and obtain the reflection intensity I_i of each point; compute the minimum I_min and maximum I_max of the reflection intensity in each scan line; if I_max - I_min > TI_1, the points of that scan line whose reflection intensity satisfies I_i > TI_2 are taken as candidate lane boundary point cloud data. A neighborhood point count threshold TN_1 and a length threshold TD_1 for lane-line data points in a scan line are preset; the neighborhood point count TN_i and length TD_i of the candidate lane boundary point cloud data are compared with these thresholds, and if TN_i > TN_1 and TD_i > TD_1, the candidates are taken as the final lane boundary point cloud data.
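A simplified per-scan-line sketch of this test; collapsing the neighborhood-count and length checks into a point-count and end-to-end-span check on the candidate run is an assumption about how TN_i and TD_i are measured:

```python
import numpy as np

def boundary_points_in_line(line, ti1, ti2, tn1=5, td1=0.05):
    """line: (M, 5) time-ordered points of one scan line.
    Returns the candidate boundary points, or an empty array if the
    intensity spread stays below TI_1 or the candidates fail the
    count / length checks."""
    if len(line) == 0:
        return line
    inten = line[:, 3]
    if inten.max() - inten.min() <= ti1:        # no marking on this scan line
        return line[:0]
    cand = line[inten > ti2]                    # bright points are candidates
    if len(cand) <= tn1:                        # TN_i must exceed TN_1
        return line[:0]
    span = np.linalg.norm(cand[-1, :2] - cand[0, :2])
    return cand if span > td1 else line[:0]     # TD_i must exceed TD_1
```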
Further, in step 4, the lane boundary point cloud data is denoised to obtain the denoised lane boundary point cloud data M_P, specifically:
A neighborhood radius r_2 and a minimum neighborhood size TN_2 are preset; if a point in the lane boundary point cloud data has fewer than TN_2 neighbors within the radius r_2, it is treated as noise and deleted.
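A sketch of this radius-based outlier removal; SciPy's cKDTree is one possible spatial index (the patent names none), and the keep rule follows the embodiment below, where a point with TN_2 or fewer neighbors is deleted:

```python
import numpy as np
from scipy.spatial import cKDTree

def radius_denoise(points, r2=0.05, tn2=2):
    """Keep only points with more than tn2 neighbours within radius r2,
    counting neighbours in the x-y plane and excluding the point itself."""
    tree = cKDTree(points[:, :2])
    counts = np.array([len(tree.query_ball_point(p, r2)) - 1
                       for p in points[:, :2]])
    return points[counts > tn2]
```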
Further, in step 5, the lane boundary center line point cloud is extracted from M_P, and the specific process is as follows:
Each scan line is segmented: the distance d'_i between adjacent points P'_i and P'_{i+1} in the scan line is calculated and compared with a preset interval threshold TD_2, and if d'_i > TD_2, P'_i and P'_{i+1} are assigned to different line segments; the lane boundary center line data is then selected according to the number of lines of the lane boundary and the number of segments.
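A sketch of the per-scan-line center line selection; the single-line and double-line midpoint rules follow the embodiment described later (segment midpoint for a single line, midpoint of the two facing endpoints for a double line), and the exact conventions are an assumption:

```python
import numpy as np

def centerline_point(line_pts, td2=0.05, double_line=False):
    """line_pts: (M, 2) boundary points of one scan line, ordered along it.
    Split the run into segments at gaps larger than td2, then return one
    center line point, or None if the segment count does not match the
    expected line type."""
    gaps = np.linalg.norm(np.diff(line_pts[:, :2], axis=0), axis=1)
    segs = np.split(line_pts, np.where(gaps > td2)[0] + 1)
    if double_line and len(segs) == 2:
        return (segs[0][-1] + segs[1][0]) / 2.0    # facing endpoints of the two lines
    if not double_line and len(segs) == 1:
        return (segs[0][0] + segs[0][-1]) / 2.0    # midpoint of the single segment
    return None
```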
Further, in step 5, the lane boundary center line point cloud is denoised to obtain M_P'; when the lane boundary center line is a solid line, the specific denoising process is as follows:
All points in the lane boundary center line point cloud are sorted by distance from the starting point;
starting from the starting point, the distance d'_i between adjacent points P'_i and P'_{i+1} is calculated in sequence and compared with a preset distance threshold TD_3, and if d'_i > TD_3, all points after the i-th point are deleted;
for each retained point P'_i, whose preceding and following neighbors are P'_{i-1} and P'_{i+1}, P'_{i-1} and P'_{i+1} are connected to obtain a straight line, the distance d''_i from P'_i to that line is calculated and compared with a preset distance threshold TD_5, and every point with d''_i greater than TD_5 is regarded as noise and deleted.
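A sketch of this solid-line denoising; for simplicity the neighbor chord is taken from the original point order rather than being recomputed after each deletion, which is an assumption:

```python
import numpy as np

def denoise_centerline_solid(pts, td3=10.0, td5=0.03):
    """pts: (K, 2) center line points already sorted by distance from the
    start point. Truncate at the first gap exceeding td3 (TD_3), then drop
    every interior point lying farther than td5 (TD_5) from the chord that
    joins its two neighbours."""
    pts = np.asarray(pts, float)
    if len(pts) < 3:
        return pts
    gaps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    big = np.where(gaps > td3)[0]
    if len(big):
        pts = pts[:big[0] + 1]          # delete every point after the first big gap
    if len(pts) < 3:
        return pts
    keep = [0]
    for i in range(1, len(pts) - 1):
        a, b, p = pts[i - 1], pts[i + 1], pts[i]
        ab = b - a
        d = abs(ab[0] * (p - a)[1] - ab[1] * (p - a)[0]) / (np.linalg.norm(ab) + 1e-12)
        if d <= td5:                    # keep points close to the local chord
            keep.append(i)
    keep.append(len(pts) - 1)
    return pts[keep]
```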
Further, in step 5, the lane boundary center line point cloud is denoised to obtain M_P'; when the lane boundary center line is a dashed line, the specific denoising process is as follows:
All points in the lane boundary center line point cloud are sorted by distance from the starting point;
the dashed line is segmented by a preset distance threshold TD_4: if the distance d'_i between two adjacent points on the lane boundary center line satisfies d'_i > TD_4, P'_i and P'_{i+1} are assigned to different line segments;
starting from the starting point, the distance d'_i between adjacent points P'_i and P'_{i+1} is calculated in sequence and compared with a preset distance threshold TD_3', and if d'_i > TD_3', all points after the i-th point are deleted;
for each retained point P'_i, whose preceding and following neighbors are P'_{i-1} and P'_{i+1}, P'_{i-1} and P'_{i+1} are connected to obtain a straight line, the distance d''_i from P'_i to that line is calculated and compared with a preset distance threshold TD_5, and every point with d''_i greater than TD_5 is regarded as noise and deleted.
Further, in step 6, after M_P' is thinned, the lane vector boundary is extracted with the Douglas-Peucker algorithm, and the specific process is as follows:
Step 6-1: setting a resolution r_3 and thinning the lane boundary center line;
Step 6-2: selecting two points A and B in M_P' and connecting them to obtain a straight segment;
Step 6-3: selecting the point C on the curve between A and B farthest from the straight segment as a dividing point, and calculating the distance d from C to the straight segment;
Step 6-4: comparing d with a preset threshold TD_6; if d is smaller than TD_6, the straight segment is taken as the fit of the curve between A and B; if d is greater than TD_6, the curve between A and B is divided at C into two curve sections, each of which is taken back to step 6-3 and fitted separately;
Step 6-5: returning to step 6-2 until the curves between all points in M_P' are fully fitted, then connecting all dividing points in sequence into a broken line to obtain the fitted polyline.
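A compact recursive sketch of the Douglas-Peucker fit of step 6, operating on the thinned x-y coordinates:

```python
import numpy as np

def douglas_peucker(pts, td6=0.03):
    """pts: (K, 2) thinned center line points ordered along the line.
    Keeps a dividing point wherever the curve deviates from the current
    chord by more than td6; returns the fitted polyline vertices."""
    if len(pts) < 3:
        return pts
    a, b = pts[0], pts[-1]
    ab = b - a
    norm = np.linalg.norm(ab) + 1e-12
    d = np.abs(ab[0] * (pts[:, 1] - a[1]) - ab[1] * (pts[:, 0] - a[0])) / norm
    i = int(np.argmax(d))
    if d[i] <= td6:
        return pts[[0, -1]]                    # chord is a good enough fit
    left = douglas_peucker(pts[:i + 1], td6)   # fit each side of the split point
    right = douglas_peucker(pts[i:], td6)
    return np.vstack([left[:-1], right])       # drop the duplicated split point
```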
The invention has the beneficial effects that:
(1) Dividing the point cloud into scan lines by GPS time and the interactive extraction method reduce the complexity of the data processed in a single pass and improve data processing efficiency; at the same time, the auxiliary data provided by the interactive extraction method effectively improves the extraction accuracy of the lane vector boundary.
(2) The reflection intensity values of the scan lines in the buffer area are analyzed after the marking identification thresholds are computed by clustering, the lane marking center line data is extracted, and the lane boundary is then fitted with the Douglas-Peucker algorithm; on this basis, the last straight segment of the fitted boundary serves as the new starting point and lane extending direction for iterative extraction of the lane vector boundary, and since the iterative extraction continuously updates the thresholds for lane boundary marking extraction, the problem that markings of low reflection intensity cannot be identified is effectively avoided. Interactive, high-precision, rapid extraction of lane boundary lines under complex conditions such as single lines, double lines, solid lines and dashed lines is realized.
Drawings
Fig. 1 is a flow chart of the present invention.
Fig. 2 is a top view of a road point cloud collected by a vehicle-mounted three-dimensional laser scanning system in an embodiment of the invention.
Fig. 3 is a schematic diagram of road surface point cloud data obtained by preprocessing fig. 2.
Fig. 4 is a schematic diagram of point cloud data in a starting direction of a double solid line according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of a recognition result of a double-solid-line lane boundary point cloud in an embodiment of the invention.
Fig. 6 is a schematic diagram of a recognition result of a lane boundary center line with double solid lines in an embodiment of the invention.
Fig. 7 is a schematic diagram of a superimposed display result of a boundary point cloud and a center line point cloud of a lane with double solid lines in an embodiment of the present invention.
Fig. 8 is a schematic diagram of a global display result of double-solid vector boundary extraction in an embodiment of the present invention.
Fig. 9 is a schematic diagram showing the result of the partially enlarged view of portion A in fig. 8.
Fig. 10 is a schematic diagram of point cloud data in a single-dashed line starting direction according to an embodiment of the present invention.
FIG. 11 is a diagram illustrating a single-dashed lane boundary centerline recognition result according to an embodiment of the present invention.
Fig. 12 is a schematic diagram of a global display result of single-dashed lane vector boundary extraction in an embodiment of the present invention.
Fig. 13 is a schematic diagram showing the result of the partial enlarged view of the portion B in fig. 12.
Fig. 14 is a schematic diagram of the extraction result of all lane boundaries.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and specific examples, which are not intended to be limiting, so that those skilled in the art will better understand the invention and practice it.
In the description of the present invention, the term "comprising" is intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may, optionally, include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Referring to a flowchart of fig. 1, an embodiment of a lane boundary interactive extraction method based on vehicle-mounted point cloud data of the present invention includes the following steps:
Step 1: and acquiring an initial vehicle-mounted point cloud data set, and preprocessing the initial vehicle-mounted point cloud data set.
Step 1-1: A Leica Pegasus:Two Ultimate vehicle-mounted three-dimensional laser scanning system is used to acquire the initial point cloud data and photographs of the lane; the device scans at 1,000,000 points/s, with a ranging precision of 1 mm, a horizontal precision of 0.02 m and an elevation precision of 0.015 m. GNSS format conversion, trajectory calculation and pre-inspection are performed with the hardware's companion IE software, a coordinate system is established with the companion Infinity software, SLAM computation (trajectory correction) is performed with the companion AutoP software, and the point cloud data set is exported in LAS format. The resulting top view of the initial lane point cloud is shown in fig. 2; the displayed point cloud is gray-scale rendered by reflection intensity.
Step 1-2: The point cloud data is preprocessed in RealWorks software: the non-ground point cloud data above the road surface in the vehicle-mounted point cloud data set is manually cropped and deleted, and the road surface point cloud data is retained. The preprocessing result is shown in fig. 3.
Step 2: Two points are manually selected in the vehicle-mounted point cloud data set to determine the starting point and the lane boundary extending direction. Two points on the lane boundary are picked manually as P_1 and P_2, with P_1 serving as the starting point, and the lane boundary extending direction is determined from the two points, i.e., the direction from P_1 to P_2. The slope-intercept line equation y = k × x + b is computed from the coordinates of P_1 and P_2.
In this embodiment, the double-solid-line lane boundary is processed first: as shown in fig. 4, two points on the center line of the double-solid-line lane boundary are manually selected to determine the starting point and the lane boundary extending direction. The extraction is interactive, comprising the manual preprocessing and the manual selection of two points to determine the starting point and extending direction; the interactive extraction method reduces the complexity of the data processed in a single pass, yields higher data processing efficiency, and the auxiliary data provided during interactive extraction effectively improves the extraction precision of the lane vector boundary.
Step 3: Buffer areas are set on both sides of the lane boundary extending direction, the point cloud data in the buffer areas on both sides of the starting line segment is retrieved, the point cloud data in the buffer area is classified by K-means clustering, and a threshold group TI_i for lane boundary marking point cloud identification is computed from the reflection intensity of the cluster centers C_i; this constitutes the training of the sample point cloud data. Because only the point cloud data on both sides of the starting line segment is cluster-analyzed, the threshold group TI_i can be acquired rapidly.
Step 3-1: setting the buffer area radius r_1, with 0 < r_1 < 1; in this embodiment r_1 is 0.3 m.
Step 3-2: retrieving the point cloud data in the buffer area with the point-to-line distance formula d_i = |k × x_i - y_i + b| / sqrt(k² + 1), where k is the slope of the line along the extending direction, x_i and y_i are the coordinates of the point being tested, and b is the ordinate of the intersection of the extending-direction line with the y-axis.
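A sketch combining the step-2 line fit with this buffer search; like the formula itself, it assumes the picked line is not vertical (finite slope k):

```python
import numpy as np

def collect_buffer(points, p1, p2, r1=0.3):
    """points: (N, >=2) array whose first two columns are x, y.
    Fit y = k*x + b through the picked points p1, p2, then keep every
    point whose perpendicular distance |k*x_i - y_i + b| / sqrt(k^2 + 1)
    lies within the buffer radius r1."""
    k = (p2[1] - p1[1]) / (p2[0] - p1[0])      # slope of the extending direction
    b = p1[1] - k * p1[0]                      # y-axis intercept
    d = np.abs(k * points[:, 0] - points[:, 1] + b) / np.sqrt(k * k + 1.0)
    return points[d <= r1]
```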
Step 3-3: the reflection intensity information of the point cloud data in the buffer area is divided into two classes by K-means clustering, the cluster centers of the two classes being C_1 and C_2 with C_1 > C_2, where C_1 is the marking-line data and C_2 the non-marking-line data.
Step 3-4: the threshold group TI_i for lane boundary marking point cloud identification is computed from C_1 and C_2; it comprises the minimum difference TI_1 between the reflection intensities of marking and non-marking data and the minimum reflection intensity TI_2 of marking data, where TI_1 = (C_1 - C_2) × r_1 and TI_2 = (C_1 + C_2) × r_1.
The point cloud data in the buffer area is then divided into scan lines by GPS time, as step 4 details below: the points are sorted by GPS time, the GPS time difference between adjacent points P_i and P_{i+1} is computed, and if the difference exceeds a preset threshold ΔT, P_i and P_{i+1} are assigned to different scan lines. When the vehicle-mounted platform travels, each scan line of the vehicle-mounted laser scanner is perpendicular to the lane extending direction and approximates a cross-section of the lane, so a comparatively large time difference separates the end of one scan line from the start of the next; every point in the point cloud data carries a GPS time whose precision can reach the nanosecond level, which is why scan lines can be extracted from the GPS time differences of adjacent points. In this embodiment ΔT is set to 1.0e-4 s. Lane boundary point cloud extraction is then performed on each scan line to obtain the lane boundary point cloud set M_P.
Step 4: The point cloud data in the buffer area is divided into different scan lines according to its GPS time, the reflection intensity values of each scan line are compared with the threshold group TI_i to obtain lane boundary point cloud data, and the lane boundary point cloud data is denoised to obtain the denoised lane boundary point cloud data M_P; this completes the identification and denoising of the lane boundary point cloud.
Step 4-1: The point cloud data in the buffer area is divided into different scan lines according to the GPS time of the point cloud data.
Step 4-1-1: The point cloud data in the buffer area is sorted by its GPS time. The buffer-area point cloud is divided into scan lines by GPS time, the elevation distribution characteristics of each scan line are analyzed, and an elevation threshold is set to extract road surface edge points as the lane boundary point cloud. This reduces the complexity of the data processed in a single pass and improves data processing efficiency.
Step 4-1-2: The time difference T = GPST_{i+1} - GPST_i between adjacent points P_i and P_{i+1} in the point cloud data is compared with the preset GPS time difference threshold ΔT; if T > ΔT, P_i and P_{i+1} belong to different scan lines, otherwise to the same scan line. Here GPST_i and GPST_{i+1} are the GPS times of P_i and P_{i+1}, and i is the point index, a positive integer with i ≥ 1. In this embodiment ΔT is set to 1.0e-4 s.
Step 4-2: The reflection intensity values of each scan line are compared with the threshold group TI_i to obtain the lane boundary point cloud data.
Step 4-2-1: The point cloud data in each scan line is sorted by GPS time and the reflection intensity I_i of each point is obtained;
Step 4-2-2: The minimum I_min and maximum I_max of the reflection intensity of the point cloud data in each scan line are computed; if I_max - I_min > TI_1, lane data is considered to exist in that scan line, and the points whose reflection intensity satisfies I_i > TI_2 are taken as candidate lane boundary point cloud data;
Step 4-2-3: A neighborhood point count threshold TN_1 and a length threshold TD_1 for lane-line data points in a scan line are preset; the neighborhood point count TN_i and length TD_i of the candidate lane boundary point cloud data are compared with these thresholds, and if TN_i > TN_1 and TD_i > TD_1, the candidates are taken as the final lane boundary point cloud data. In this embodiment, TN_1 = 5 and TD_1 = 0.05 m.
Step 4-3: The lane boundary point cloud data is denoised to obtain the denoised lane boundary point cloud data M_P. The denoising analyzes the neighborhoods of the lane-line data points: a neighborhood radius r_2 and a minimum neighborhood size TN_2 are preset, and a point in the lane boundary point cloud data whose neighbor count within radius r_2 is smaller than TN_2 is treated as noise and deleted. In this embodiment, r_2 = 0.05 m and TN_2 = 2, i.e., a point with 2 or fewer neighbors is regarded as noise and deleted.
Step 5: The lane boundary center line point cloud is extracted from M_P and denoised to obtain M_P'; this completes the extraction and denoising of the lane boundary center line.
Step 5-1: The lane boundary center line point cloud is extracted from M_P.
Step 5-1-1: Each scan line is segmented, and the distance d'_i between adjacent points P'_i = (x'_i, y'_i) and P'_{i+1} = (x'_{i+1}, y'_{i+1}) in each scan line is calculated;
Step 5-1-2: d'_i is compared with a preset interval threshold TD_2, and if d'_i > TD_2, P'_i and P'_{i+1} are assigned to different line segments. The identified scan line is thus segmented by the distances between its adjacent points; in this embodiment the interval threshold is TD_2 = 0.05 m, and scan lines whose segment count is 2 are retained. The double-solid-line lane boundary point cloud identified by the invention is shown in fig. 5.
Step 5-1-3: The number of lines of the lane boundary is determined, and the lane boundary center line data is selected by combining the line count and the segment count. If the lane boundary being processed is a single line, the segment count after scan line segmentation is 1, and the midpoint of the segment is computed as the lane boundary center line data; if the lane boundary being processed is a double line, the segment count after scan line segmentation is 2, and the midpoint of the two facing endpoints of the two segments is computed as the lane boundary center line data.
Step 5-2: The lane boundary center line point cloud is denoised to obtain M_P'.
Step 5-2-1: All points in the lane boundary center line point cloud are sorted by distance from the starting point. If the lane boundary center line is a solid line, step 5-2-3 is executed; if it is a dashed line, step 5-2-2 is executed;
Step 5-2-2: The dashed line is segmented. A distance threshold TD_4 is set, and if the distance d'_i between two adjacent points on the lane boundary center line satisfies d'_i > TD_4, P'_i and P'_{i+1} belong to different line segments; the distance threshold TD_3' set for the dashed case is greater than the threshold TD_3 set for the solid case. The segmented lane boundary center line point cloud is then denoised. In this embodiment, the distance threshold TD_3 = 10 m.
Step 5-2-3: Starting from the starting point, the distance d'_i between adjacent points P'_i and P'_{i+1} is calculated in sequence and compared with the preset distance threshold TD_3, and if d'_i > TD_3, all points after the i-th point are deleted;
Step 5-2-4: For each retained point P'_i, whose preceding and following neighbors are P'_{i-1} and P'_{i+1}, P'_{i-1} and P'_{i+1} are connected to obtain a straight line y, the distance d''_i from P'_i to the line y is calculated and compared with the preset distance threshold TD_5, and every point with d''_i greater than TD_5 is regarded as noise and deleted. In this embodiment, the distance threshold TD_5 = 0.03 m.
The denoised double-solid-line boundary center line is shown in fig. 6; the center line data is smooth and highly complete. The superimposed display of the double-solid-line lane boundary point cloud and the center line point cloud is shown in fig. 7; each point of the lane center line data in fig. 7 lies at the center of a scan line, so the recognition accuracy of the lane boundary center line point cloud is high.
Step 6: M_P' is thinned, and the lane vector boundary is extracted with the Douglas-Peucker algorithm to obtain the fitted polyline; this completes the vector boundary extraction.
Step 6-1: A resolution r_3 is set to thin the lane boundary center line; in this embodiment r_3 = 0.1 m.
Step 6-2: Two points A and B are selected in M_P' and connected to obtain a straight segment;
Step 6-3: The point C on the curve between A and B farthest from the straight segment is selected as a dividing point, and the distance d from C to the straight segment is calculated;
Step 6-4: d is compared with a preset threshold TD_6; if d is smaller than TD_6, the straight segment is taken as the fit of the curve between A and B; if d is greater than TD_6, the curve between A and B is divided at C into two curve sections, and step 6-3 is repeated on each of the two sections. In this embodiment, the distance threshold TD_6 = 0.03 m.
Step 6-5: Step 6-2 is repeated until the curves between all points in M_P' are fitted, and all dividing points are connected in sequence into a broken line; the fitted polyline so obtained approximates the curve and is the lane vector boundary.
The double-solid-line vector boundary extraction results are shown in figs. 8 and 9: fig. 8 is the global display, and fig. 9 is the partially enlarged view of portion A in fig. 8. The vector boundary extracted in fig. 9 lies at the center of the lane boundary point cloud, so the vector boundary extraction result has high precision.
Step 7: The two end points of the straight segment at the tail of the fitted polyline are taken as the new starting point and lane boundary extending direction, and the process returns to step 3 until no new fitted polyline can be obtained; all the fitted polylines obtained at that point are the lane boundaries of the vehicle-mounted point cloud data.
The invention acquires three-dimensional point cloud data with a vehicle-mounted three-dimensional laser scanning system and extracts the lane boundary from it, so the three-dimensional information is preserved and the precision improved. During lane boundary extraction, the lane boundary center line coordinates extracted in step 5 are three-dimensional. Because the local differences in position and elevation of the point cloud are small (i.e., the differences in the Z coordinate are small), only the x and y coordinates of the three-dimensional data are considered when denoising the lane boundary center line points in step 5 and when fitting the lane vector boundary in step 6; taking the Z coordinate into account would easily introduce instrument measurement errors and make the thresholds difficult to compute, so an accurate extraction is achieved by considering only the x and y coordinates during denoising and fitting.
In this embodiment, the single-dashed-line lane boundary is processed next; first, two points are manually selected to determine the starting point and the lane boundary extending direction. As shown in fig. 10, the two manually selected points are two endpoints of the single-dashed-line lane boundary; the single-dashed-line segmentation uses the distance parameter TD_4 = 1.0 m, the other parameters are set as for the double solid line, and the segment count during single-dashed-line identification is 1. The single-dashed-line lane boundary point cloud extraction result is shown in fig. 11, with 28 segments of the single-dashed-line lane boundary extracted in total. The single-dashed-line lane vector boundary extraction results are shown in figs. 12 and 13: fig. 12 is the global display, where the lane vector boundary is smooth and complete, and fig. 13 is the partially enlarged view of portion B in fig. 12, where the vector boundary lies in the middle of the lane line point cloud, indicating that the lane boundary extraction results of the invention are of high accuracy.
Fig. 14 shows the extraction results for all lane boundaries in this embodiment, comprising 1 double-solid-line and 6 single-dashed-line lane boundaries; after a total of 7 interactive extractions, the extracted lane boundaries are smooth and complete.
The above-described embodiments are merely preferred embodiments for fully explaining the present invention, and the scope of the present invention is not limited thereto. Equivalent substitutions and modifications will occur to those skilled in the art based on the present invention, and are intended to be within the scope of the present invention. The protection scope of the invention is subject to the claims.
Claims (8)
1. A lane boundary interactive extraction method based on vehicle-mounted point cloud data, characterized by comprising the following steps:
Step 1: acquiring an initial vehicle-mounted point cloud data set, and preprocessing the initial vehicle-mounted point cloud data set;
Step 2: manually selecting two points in the vehicle-mounted point cloud data set, and determining a starting point and a lane boundary extending direction;
Step 3: setting buffer areas on both sides of the lane boundary extending direction, searching the point cloud data in the buffer areas on both sides of the starting line segment, clustering the point cloud data in the buffer areas, and calculating a threshold group TI_i for lane boundary marking point cloud identification from the reflection intensity of the cluster centers C_i;
the method used in step 3 for clustering the point cloud data in the buffer area is K-means clustering, and the specific process is as follows: dividing the reflection intensity information of the retrieved point cloud data into two classes with K-means clustering, the cluster centers of the two classes being C_1 and C_2 with C_1 > C_2, where C_1 is the marking-line data and C_2 the non-marking-line data; setting a buffer area radius r_1, and calculating the threshold group TI_i for lane boundary marking point cloud identification from C_1 and C_2, the threshold group TI_i comprising the minimum difference TI_1 between the reflection intensities of marking and non-marking data and the minimum reflection intensity TI_2 of marking data, where TI_1 = (C_1 - C_2) × r_1 and TI_2 = (C_1 + C_2) × r_1;
Step 4: dividing the point cloud data in the buffer area into different scan lines according to the GPS time of the point cloud data, comparing the reflection intensity values of each scan line with the threshold group TI_i to obtain lane boundary point cloud data, and denoising the lane boundary point cloud data to obtain the denoised lane boundary point cloud data M_P;
in step 4, the reflection intensity values of each scan line are compared with the threshold group TI_i to obtain the lane boundary point cloud data, specifically: sorting the point cloud data in each scan line by GPS time to obtain the reflection intensity I_i of each point; calculating the minimum I_min and maximum I_max of the reflection intensity of the point cloud data in each scan line; if I_max - I_min > TI_1, taking the points of that scan line whose reflection intensity satisfies I_i > TI_2 as candidate lane boundary point cloud data; presetting a neighborhood point count threshold TN_1 and a length threshold TD_1 for lane-line data points in a scan line, comparing the neighborhood point count TN_i and length TD_i of the candidate lane boundary point cloud data with the preset thresholds, and if TN_i > TN_1 and TD_i > TD_1, taking the candidates as the final lane boundary point cloud data;
Step 5: extracting the lane boundary center line point cloud from M_P, and denoising the lane boundary center line point cloud to obtain M_P';
Step 6: thinning M_P' and extracting the lane vector boundary with the Douglas-Peucker algorithm to obtain a fitted polyline;
Step 7: taking the two end points of the straight segment at the tail of the fitted polyline as the new starting point and lane boundary extending direction, and returning to step 3 until no new fitted polyline can be obtained; all the fitted polylines obtained at that point are the lane boundaries of the vehicle-mounted point cloud data.
2. The lane boundary interactive extraction method based on vehicle-mounted point cloud data according to claim 1, wherein the specific process in step 1 is as follows:
Acquiring initial point cloud data and photographs of the lane with a vehicle-mounted three-dimensional laser scanning system, performing format conversion, trajectory calculation and pre-inspection on the data and photographs, establishing a coordinate system, performing SLAM computation, and exporting the vehicle-mounted point cloud data set;
and manually cropping and deleting the non-ground point cloud data above the road surface in the vehicle-mounted point cloud data set, retaining the road surface point cloud data.
3. The lane boundary interactive extraction method based on vehicle-mounted point cloud data according to claim 1, wherein in step 4 the point cloud data in the buffer area is divided into different scan lines according to the GPS time of the point cloud data, and the specific process is as follows:
Sorting the point cloud data in the buffer area by its GPS time, and comparing the time difference T between adjacent points P_i and P_{i+1} in the point cloud data with a preset GPS time difference threshold ΔT: if T > ΔT, P_i and P_{i+1} are assigned to different scan lines, otherwise to the same scan line.
4. The lane boundary interactive extraction method based on vehicle-mounted point cloud data according to claim 1, wherein in step 4 the lane boundary point cloud data is denoised to obtain the denoised lane boundary point cloud data M_P, specifically:
Presetting a neighborhood radius r_2 and a minimum neighborhood size TN_2, and deleting a point as noise if its neighbor count within the radius r_2 in the lane boundary point cloud data is smaller than TN_2.
5. The lane boundary interactive extraction method based on vehicle-mounted point cloud data according to claim 1, wherein in step 5 the lane boundary center line point cloud is extracted from M_P, and the specific process is as follows:
Segmenting each scan line: the distance d'_i between adjacent points P'_i and P'_{i+1} in each scan line is calculated and compared with a preset interval threshold TD_2, and if d'_i > TD_2, P'_i and P'_{i+1} are assigned to different line segments; the lane boundary center line data is then selected according to the number of lines of the lane boundary and the number of line segments.
6. The lane boundary interactive extraction method based on vehicle-mounted point cloud data according to claim 1, wherein in step 5 the lane boundary center line point cloud is denoised to obtain M_P', and when the lane boundary center line is a solid line, the specific denoising process is as follows:
All points in the lane boundary center line point cloud are sorted by distance from the starting point;
starting from the starting point, the distance d'_i between adjacent points P'_i and P'_{i+1} is calculated in sequence and compared with a preset distance threshold TD_3, and if d'_i > TD_3, all points after the i-th point are deleted;
for each retained point P'_i, whose preceding and following neighbors are P'_{i-1} and P'_{i+1}, P'_{i-1} and P'_{i+1} are connected to obtain a straight line, the distance d''_i from P'_i to that line is calculated and compared with a preset distance threshold TD_5, and every point with d''_i greater than TD_5 is regarded as noise and deleted.
7. The lane boundary interactive extraction method based on vehicle-mounted point cloud data according to claim 1, characterized in that: in step 5 the lane boundary center line point cloud is denoised to obtain M_P', and when the lane boundary center line is a dashed line, the specific denoising process is as follows:
All points in the lane boundary center line point cloud are sorted by distance from the starting point;
the dashed line is segmented by a preset distance threshold TD_4: if the distance d'_i between two adjacent points on the lane boundary center line satisfies d'_i > TD_4, P'_i and P'_{i+1} are assigned to different line segments;
starting from the starting point, the distance d'_i between adjacent points P'_i and P'_{i+1} is calculated in sequence and compared with a preset distance threshold TD_3', and if d'_i > TD_3', all points after the i-th point are deleted;
for each retained point P'_i, whose preceding and following neighbors are P'_{i-1} and P'_{i+1}, P'_{i-1} and P'_{i+1} are connected to obtain a straight line, the distance d''_i from P'_i to that line is calculated and compared with a preset distance threshold TD_5, and every point with d''_i greater than TD_5 is regarded as noise and deleted.
8. The lane boundary interactive extraction method based on vehicle-mounted point cloud data according to claim 1, wherein in step 6, after M_P' is thinned, the lane vector boundary is extracted with the Douglas-Peucker algorithm, and the specific process is as follows:
Step 6-1: setting a resolution r_3 and thinning the lane boundary center line;
Step 6-2: selecting two points A and B in M_P' and connecting them to obtain a straight segment;
Step 6-3: selecting the point C on the curve between A and B farthest from the straight segment as a dividing point, and calculating the distance d from C to the straight segment;
Step 6-4: comparing d with a preset threshold TD_6; if d is smaller than TD_6, taking the straight segment as the fit of the curve between A and B; if d is greater than TD_6, dividing the curve between A and B at C into two curve sections and taking each back to step 6-3 to be fitted separately;
Step 6-5: returning to step 6-2 until the curves between all points in M_P' are fully fitted, and connecting all dividing points in sequence into a broken line to obtain the fitted polyline.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011540676.1A CN112560747B (en) | 2020-12-23 | 2020-12-23 | Lane boundary interactive extraction method based on vehicle-mounted point cloud data |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011540676.1A CN112560747B (en) | 2020-12-23 | 2020-12-23 | Lane boundary interactive extraction method based on vehicle-mounted point cloud data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112560747A CN112560747A (en) | 2021-03-26 |
CN112560747B (en) | 2024-08-20
Family
ID=75030944
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011540676.1A Active CN112560747B (en) | 2020-12-23 | 2020-12-23 | Lane boundary interactive extraction method based on vehicle-mounted point cloud data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112560747B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113238209B (en) * | 2021-04-06 | 2024-01-16 | 宁波吉利汽车研究开发有限公司 | Road sensing methods, systems, equipment and storage media based on millimeter wave radar |
CN113269897B (en) * | 2021-07-19 | 2021-11-09 | 深圳市信润富联数字科技有限公司 | Method, device and equipment for acquiring surface point cloud and storage medium |
CN113902864B (en) * | 2021-10-18 | 2022-11-01 | 奥特酷智能科技(南京)有限公司 | Vector map generation method and system for mine field and computer system |
US20230266451A1 (en) * | 2022-02-21 | 2023-08-24 | GM Global Technology Operations LLC | Online lidar-to-ground alignment |
CN114782925B (en) * | 2022-06-17 | 2022-09-02 | 四川省公路规划勘察设计研究院有限公司 | A method and equipment for vectorization of highway guardrail based on vehicle LIDAR data |
CN115171376B (en) * | 2022-06-27 | 2024-01-05 | 肇庆小鹏新能源投资有限公司广州分公司 | Map data processing method, server and storage medium |
CN115575976A (en) * | 2022-10-13 | 2023-01-06 | 深圳市正浩创新科技股份有限公司 | Path planning method, device, computer readable medium and electronic device along edge |
CN116048372A (en) * | 2023-03-06 | 2023-05-02 | 上海合见工业软件集团有限公司 | Pen touch command system for EDA software |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014191796A (en) * | 2013-03-28 | 2014-10-06 | Pasco Corp | Feature surface detection device, feature surface detection method and program |
CN111986115A (en) * | 2020-08-22 | 2020-11-24 | 王程 | Accurate elimination method for laser point cloud noise and redundant data |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101549155B1 (en) * | 2014-12-18 | 2015-10-06 | 주식회사 누리공간산업 | Method of automatic extraction of building boundary from lidar data |
CN105446917B (en) * | 2015-12-03 | 2019-09-27 | 西安羚控电子科技有限公司 | The device that the data record and location information of a kind of PPK-RTK obtains |
CN106780524B (en) * | 2016-11-11 | 2020-03-06 | 厦门大学 | A 3D point cloud road boundary automatic extraction method |
CN108345822B (en) * | 2017-01-22 | 2022-02-01 | 腾讯科技(深圳)有限公司 | Point cloud data processing method and device |
JP6871782B2 (en) * | 2017-03-31 | 2021-05-12 | 株式会社パスコ | Road marking detector, road marking detection method, program, and road surface detector |
CN108062517B (en) * | 2017-12-04 | 2020-06-23 | 武汉大学 | Automatic extraction method of unstructured road boundary line based on vehicle laser point cloud |
CN108898672A (en) * | 2018-04-27 | 2018-11-27 | 厦门维斯云景信息科技有限公司 | A kind of semi-automatic cloud method making three-dimensional high-definition mileage chart lane line |
CN109376586B (en) * | 2018-09-05 | 2020-12-29 | 武汉中海庭数据技术有限公司 | Road boundary line interactive automatic extraction method based on laser point cloud |
KR102083909B1 (en) * | 2018-10-23 | 2020-03-04 | 주식회사 모빌테크 | Automatic extraction method for lane data information for autonomous driving vehicles based on point cloud map |
KR20200065590A (en) * | 2018-11-30 | 2020-06-09 | 웨이즈원 주식회사 | Method and apparatus for detecting lane center point for accurate road map |
CN109949326B (en) * | 2019-03-21 | 2020-09-08 | 苏州工业园区测绘地理信息有限公司 | Building contour line extraction method based on knapsack type three-dimensional laser point cloud data |
CN110210415B (en) * | 2019-06-05 | 2022-09-30 | 福州大学 | Vehicle-mounted laser point cloud road marking identification method based on graph structure |
CN110490888B (en) * | 2019-07-29 | 2022-08-30 | 武汉大学 | Highway geometric feature vectorization extraction method based on airborne laser point cloud |
CN110705595B (en) * | 2019-09-04 | 2022-02-25 | 苏州工业园区测绘地理信息有限公司 | Parking space automatic extraction method based on knapsack type three-dimensional laser point cloud data |
CN110969624B (en) * | 2019-11-07 | 2023-08-01 | 哈尔滨工程大学 | A LiDAR 3D Point Cloud Segmentation Method |
CN111273305B (en) * | 2020-02-18 | 2022-03-25 | 中国科学院合肥物质科学研究院 | Multi-sensor fusion road extraction and indexing method based on global and local grid maps |
CN111985322B (en) * | 2020-07-14 | 2024-02-06 | 西安理工大学 | Road environment element sensing method based on laser radar |
- 2020-12-23: CN application CN202011540676.1A granted as patent CN112560747B (en), legal status Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014191796A (en) * | 2013-03-28 | 2014-10-06 | Pasco Corp | Feature surface detection device, feature surface detection method and program |
CN111986115A (en) * | 2020-08-22 | 2020-11-24 | 王程 | Accurate elimination method for laser point cloud noise and redundant data |
Also Published As
Publication number | Publication date |
---|---|
CN112560747A (en) | 2021-03-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112560747B (en) | Lane boundary interactive extraction method based on vehicle-mounted point cloud data | |
CN110148196B (en) | Image processing method and device and related equipment | |
Zai et al. | 3-D road boundary extraction from mobile laser scanning data via supervoxels and graph cuts | |
CN114488194B (en) | A method for target detection and recognition on structured roads for intelligent driving vehicles | |
CN113989784B (en) | A road scene type recognition method and system based on vehicle-mounted laser point cloud | |
CN107516077B (en) | Traffic sign information extraction method based on fusion of laser point cloud and image data | |
Monnier et al. | Trees detection from laser point clouds acquired in dense urban areas by a mobile mapping system | |
CN109726717B (en) | A vehicle comprehensive information detection system | |
Teo et al. | Pole-like road object detection from mobile lidar system using a coarse-to-fine approach | |
El-Halawany et al. | Detection of road poles from mobile terrestrial laser scanner point cloud | |
Mi et al. | A two-stage approach for road marking extraction and modeling using MLS point clouds | |
CN109448000B (en) | A Segmentation Method of Traffic Direction Sign Image | |
CN105160309A (en) | Three-lane detection method based on image morphological segmentation and region growing | |
CN110956100A (en) | High-precision map generation method and device, electronic equipment and storage medium | |
CN112862844B (en) | Road boundary interactive extraction method based on vehicle-mounted point cloud data | |
CN111487643B (en) | Building detection method based on laser radar point cloud and near-infrared image | |
CN115294293B (en) | Method for automatically compiling high-precision map road reference line based on low-altitude aerial photography result | |
CN114299247B (en) | Road traffic signs and markings rapid detection and troubleshooting methods | |
CN110363054B (en) | Road marking line identification method, device and system | |
CN108509950B (en) | Railway contact net support number plate detection and identification method based on probability feature weighted fusion | |
Liu et al. | Deep-learning and depth-map based approach for detection and 3-D localization of small traffic signs | |
CN113239733A (en) | Multi-lane line detection method | |
CN115273033A (en) | Method, device, medium and equipment for extracting road marking | |
CN111476157A (en) | Lane guide arrow recognition method under intersection monitoring environment | |
Soheilian et al. | 3D road marking reconstruction from street-level calibrated stereo pairs |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | ||
Address after: 215000 surveying and mapping geographic information building, 101 Suhong Middle Road, Suzhou Industrial Park, Jiangsu Province Applicant after: Yuance Information Technology Co.,Ltd. Address before: 215000 surveying and mapping geographic information building, 101 Suhong Middle Road, Suzhou Industrial Park, Jiangsu Province Applicant before: SUZHOU INDUSTRIAL PARK SURVEYING MAPPING AND GEOINFORMATION Co.,Ltd. |
GR01 | Patent grant | ||