
CN118015068A - A road surface structure depth prediction method, device, terminal equipment and medium - Google Patents


Info

Publication number
CN118015068A
CN118015068A
Authority
CN
China
Prior art keywords: depth, image, binary image, representing, ratio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410291596.9A
Other languages
Chinese (zh)
Other versions
CN118015068B (en)
Inventor
但汉成
陆冰洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN202410291596.9A priority Critical patent/CN118015068B/en
Publication of CN118015068A publication Critical patent/CN118015068A/en
Application granted granted Critical
Publication of CN118015068B publication Critical patent/CN118015068B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • E01C 23/01 Devices or auxiliary means for setting-out or checking the configuration of new surfacing, e.g. templates, screed or reference line supports; applications of apparatus for measuring, indicating, or recording the surface configuration of existing surfacing, e.g. profilographs
    • G06T 3/4023 Scaling of whole images or parts thereof based on decimating pixels or lines of pixels, or on inserting pixels or lines of pixels
    • G06T 3/608 Rotation of whole images or parts thereof by skew deformation, e.g. two-pass or three-pass rotation
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 7/136 Segmentation; edge detection involving thresholding
    • G06T 7/55 Depth or shape recovery from multiple images
    • G06T 7/64 Analysis of geometric attributes of convexity or concavity
    • G06T 7/77 Determining position or orientation of objects or cameras using statistical methods
    • G06T 7/90 Determination of colour characteristics
    • G06T 2207/10028 Range image; depth image; 3D point clouds
    • G06T 2207/20004 Adaptive image processing
    • G06T 2207/20028 Bilateral filtering
    • G06T 2207/20152 Watershed segmentation
    • G06T 2207/20221 Image fusion; image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Structural Engineering (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Civil Engineering (AREA)
  • Architecture (AREA)
  • Image Processing (AREA)

Abstract

The present application, applicable to the field of road engineering technology, provides a pavement texture depth prediction method, device, terminal equipment and medium. RGB image data are collected and a depth map is constructed. For each pixel, the average depth and depth variance with respect to its neighbouring pixels are calculated and used to correct the pixel's depth value, yielding a corrected depth map, from which the relative concave area ratio is computed. An image pyramid is then constructed; the image at each scale is binarized with a Gaussian local adaptive threshold, and the resulting binary image is upsampled to obtain an adjusted binary image. The images at all scales of the pyramid are fused, and a bitwise OR of the adjusted binary image and the fused binary image gives the final binary image, from which the maximum bone particle size ratio is calculated. Finally, the pavement texture depth is predicted from the relative concave area ratio, the maximum bone particle size ratio and a pre-trained GBT model. The present application improves the accuracy of pavement texture depth prediction while reducing complexity.

Description

Pavement structure depth prediction method and device, terminal equipment and medium
Technical Field
The application belongs to the technical field of road engineering, and in particular relates to a pavement texture depth prediction method, device, terminal equipment and medium.
Background
Pavement construction depth prediction is an important problem in the field of road engineering technology, and can provide key information for road maintenance and safety management.
Currently, commonly used pavement texture depth prediction techniques include laser-based methods, digital image methods, and volumetric methods (the sand patch method). Laser-based methods use a laser scanner or line laser to illuminate the road surface vertically and compute the height variation of the surface from the time taken for the laser to be reflected; digital image methods use digital image processing to measure and evaluate the texture depth of the pavement surface; volumetric methods determine the texture depth by spreading a layer of material (typically sand) over the pavement and measuring the volume of material used.
Although these methods can predict pavement texture depth, they suffer from drawbacks such as being time-consuming and labor-intensive, low precision, high equipment cost, an inability to provide visual information, and poor real-time performance, and therefore have certain limitations in practical application.
Disclosure of Invention
The application provides a pavement texture depth prediction method, device, terminal equipment and medium, which can solve the problems of low accuracy and high complexity in traditional pavement texture depth prediction methods.
In a first aspect, the present application provides a pavement construction depth prediction method, including:
Collecting RGB image data of a road surface, and constructing a depth map according to the RGB image data;
for each pixel in the depth map, calculating the average depth and the depth variance between the pixel and the adjacent pixel within the preset radius range, and correcting the depth value of the pixel based on the average depth and the depth variance to obtain a corrected depth map;
Calculating the relative concave area proportion of the corrected depth map; the relative concave area ratio is used for representing the roughness of the pavement;
Downsampling RGB image data to construct an image pyramid; the image pyramid includes RGB images of multiple scales;
Based on Gaussian local self-adaptive threshold values, binarizing the image of each scale to obtain a binary image, and upsampling the binary image to obtain an adjusted binary image;
fusing the images of all scales in the image pyramid to obtain a fused binary image, and performing bit-wise OR operation on the adjusted binary image and the fused binary image to obtain a final binary image;
making a circumscribed circle for each aggregate particle in the final binary image, and calculating the ratio of the diameter of the largest circumscribed circle to the image width to obtain the maximum bone particle size ratio; the maximum bone particle size ratio describes the ratio of the maximum particle size of the aggregate used on the pavement to the width of the whole depth map, the aggregate including crushed stone, cobble and gravel;
and predicting the pavement texture depth according to the relative concave area ratio, the maximum bone particle size ratio and the pre-trained GBT model.
Optionally, correcting the depth value of the pixel based on the average depth and the depth variance includes:
the denoised depth value d̂ is obtained by the calculation formula d̂ = d - (σ_n² / σ_d²)(d - μ); where d represents the initial depth value of the pixel, σ_n² represents the noise variance, σ_d² represents the depth variance, and μ represents the average depth;
the corrected depth value is obtained by subtracting the fitted surface value f(x, y) = Σ_{i+j≤3} a_ij · x^i · y^j from the denoised depth value; where the a_ij all represent fitting coefficients.
Alternatively, the relative concave area ratio is calculated as P = N_c / (W × H); where P represents the relative concave area ratio, N_c represents the number of pixels in the relatively concave portion, W represents the number of horizontal pixels, and H represents the number of vertical pixels.
Optionally, binarizing the image of each scale based on the gaussian local adaptive threshold to obtain a binary image, upsampling the binary image to obtain an adjusted binary image, including:
for each pixel in the image of each scale, the binary image B(x, y) of the scale image after binarization is obtained by the calculation formula
B(x, y) = 255 if I(x, y) > T(x, y), and B(x, y) = 0 otherwise, with T(x, y) = μ_L(x, y) - C · σ_L(x, y);
where I(x, y) represents the pixel value, T(x, y) represents the Gaussian local adaptive threshold, C represents a constant for controlling the offset of the Gaussian local adaptive threshold relative to the local variance, μ_L represents the local average value, and σ_L represents the local standard deviation;
and, through upsampling, the binary image is adjusted to the same size as the corrected depth map, obtaining the adjusted binary image.
Alternatively, the maximum bone particle size ratio is expressed as D = d_max / W_img; where d_max represents the diameter of the largest circumscribed circle and W_img represents the image width.
In a second aspect, the present application provides a road surface texture depth prediction apparatus comprising:
the image acquisition module is used for acquiring RGB image data of the road surface and constructing a depth map according to the RGB image data;
The depth correction module is used for calculating the average depth and the depth variance between the pixel and the adjacent pixel in the preset radius range for each pixel in the depth map respectively, and correcting the depth value of the pixel based on the average depth and the depth variance to obtain a corrected depth map;
The concave area proportion calculation module is used for calculating the relative concave area proportion of the corrected depth map; the relative concave area ratio is used for representing the roughness of the pavement;
The image pyramid module is used for downsampling RGB image data to construct an image pyramid; the image pyramid includes RGB images of multiple scales;
the binarization image adjustment module is used for binarizing the image of each scale based on the Gaussian local self-adaptive threshold value to obtain a binary image, and upsampling the binary image to obtain an adjusted binary image;
The image fusion module is used for fusing the images of all scales in the image pyramid to obtain a fused binary image, and carrying out bit-wise OR operation on the adjusted binary image and the fused binary image to obtain a final binary image;
The maximum bone particle diameter ratio calculation module is used for making circumscribed circles for all aggregates in the final binary image, and calculating the ratio of the diameter of the maximum circumscribed circle to the image width to obtain the maximum bone particle diameter ratio; the maximum bone particle size ratio is used for describing the ratio of the maximum particle size of aggregate used on the pavement to the width of the whole depth map, and the aggregate comprises broken stone, cobble and sand stone;
And the depth prediction module is used for predicting the pavement construction depth according to the relative concave area proportion, the maximum bone particle size ratio and the pre-trained GBT model.
In a third aspect, the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the above-mentioned road surface construction depth prediction method when executing the computer program.
In a fourth aspect, the present application provides a computer readable storage medium storing a computer program which when executed by a processor implements the above road construction depth prediction method.
The scheme of the application has the following beneficial effects:
According to the pavement texture depth prediction method provided by the application, correcting pixel depth values with the average depth and depth variance reduces the negative influence of image noise and improves the accuracy of pavement texture depth prediction; characterizing the pavement texture from different dimensions through the relative concave area ratio and the maximum bone particle size ratio simplifies the complex calculations of traditional methods while remaining interpretable and visualizable; and combining these two features with the GBT model allows the pavement texture depth to be predicted accurately.
Other advantageous effects of the present application will be described in detail in the detailed description section which follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a pavement structure depth prediction method according to an embodiment of the present application;
FIG. 2 is a graph showing the comparison of the predicted effect of the pavement construction depth prediction method and the sand paving method according to an embodiment of the present application;
FIG. 3 is a graph showing the error contrast between the pavement construction depth prediction method and the sanding method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a pavement structure depth prediction apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in the present description and the appended claims, the term "if" may be interpreted as "when..once" or "in response to a determination" or "in response to detection" depending on the context. Similarly, the phrase "if a determination" or "if a [ described condition or event ] is detected" may be interpreted in the context of meaning "upon determination" or "in response to determination" or "upon detection of a [ described condition or event ]" or "in response to detection of a [ described condition or event ]".
Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims, are used for distinguishing between descriptions and not necessarily for indicating or implying a relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
Aiming at the problems of low accuracy and high complexity in traditional pavement texture depth prediction methods, the present application provides a pavement texture depth prediction method, device, terminal equipment and medium. The method corrects pixel depth values using the average depth and depth variance, reducing the negative influence of image noise and improving prediction accuracy; it characterizes the pavement texture from different dimensions through the relative concave area ratio and the maximum bone particle size ratio, simplifying the complex calculations of traditional methods while remaining interpretable and visualizable; and, combined with the GBT model, these two features allow the pavement texture depth to be predicted accurately.
The following describes an exemplary road surface texture depth prediction method provided by the present application.
As shown in fig. 1, the pavement structure depth prediction method provided by the application comprises the following steps:
and 11, collecting RGB image data of the pavement, and constructing a depth map according to the RGB image data.
Illustratively, in an embodiment of the present application, the RGB image data comprise 1 reference image and 12 source images photographed from multiple angles. The reference image is taken with the camera's optical axis perpendicular to the road surface; the source images are taken with the optical axis not perpendicular to the road surface, and the shooting angles of different source images differ from one another. For precision, the RGB images are captured about 15 cm to 20 cm from the road surface to be measured; considering the feature matching between the series of images during depth map construction, the angle between source images is set to 30° to 40°; and, since the focus of depth map construction is to recover depth information from the reference image in order to extract pavement texture features, the overlap between the reference image and the source images is set to no less than 70%.
In the specific implementation, in order to ensure the consistent size of the reference image, a steel hollow square calibration plate can be used, and the length of the inner side of the calibration plate is 10 centimeters (cm); furthermore, considering the variability of lighting conditions at the time of outdoor acquisition, the inner edge of the calibration plate should be aligned with the camera imaging area to control the photographing height and achieve uniform image pixel size, and photographed at the same time of day as much as possible to ensure uniform lighting conditions.
The following is an exemplary description of a process of constructing a depth map from RGB image data.
For example, Structure from Motion (SfM, a three-dimensional reconstruction technique that recovers the scene and the camera poses from a group of images taken from different viewpoints, commonly implemented in open-source software such as COLMAP) may be used to obtain camera pose information from the RGB (Red Green Blue) image data, yielding a series of images (the reference image and the source images) and camera parameters; the corresponding depth maps of the series of images are then constructed using the PatchMatchNet model, and the depth map corresponding to the reference image is used as the test map for the subsequent prediction evaluation.
Since the reference image represents relative depth values in the local camera coordinate system, depth maps constructed for the same measurement point may have different depth ranges when the photographing distance and angle are not precisely fixed. Therefore, for the depth values to be comparable and interpretable, they need to be mapped to the uniform range [0, 1].
Specifically, the normalized depth value d_norm is obtained by the calculation formula d_norm = (d - d_min) / (d_max - d_min); where d represents the depth value before normalization, d_max represents the maximum depth value, and d_min represents the minimum depth value.
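The normalization above is a one-line min-max mapping; a minimal NumPy sketch follows (the guard against a constant-depth map, which would divide by zero, is an added assumption):

```python
import numpy as np

def normalize_depth(depth: np.ndarray) -> np.ndarray:
    """Map raw depth values to [0, 1] via min-max normalization."""
    d_min, d_max = depth.min(), depth.max()
    if d_max == d_min:                       # flat depth map: avoid division by zero
        return np.zeros_like(depth, dtype=float)
    return (depth - d_min) / (d_max - d_min)

# Example: depths in arbitrary camera units
raw = np.array([[2.0, 4.0], [6.0, 10.0]])
norm = normalize_depth(raw)
```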
And step 12, calculating the average depth and the depth variance between the pixel and the adjacent pixel in the preset radius range for each pixel in the depth map, and correcting the depth value of the pixel based on the average depth and the depth variance to obtain a corrected depth map.
It should be noted that, considering the influence of the image noise on the prediction effect, before executing step 12, the depth map obtained in step 11 needs to be processed by using a bilateral filter, then edge filling is performed on the depth map, and finally adaptive local noise reduction is performed.
Wherein correcting the depth value of the pixel based on the average depth and the depth variance comprises:
The denoised depth value d̂ is obtained by the calculation formula d̂ = d - (σ_n² / σ_d²)(d - μ); where d represents the initial depth value of the pixel, σ_n² represents the noise variance, σ_d² represents the depth variance, and μ represents the average depth. The formula pulls each depth value toward the local mean in proportion to the share of the local variation attributable to noise.
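The adaptive local noise reduction step can be sketched as follows. Clipping the correction ratio σ_n²/σ_d² at 1 (so the correction never overshoots the local mean) and the handling of windows at image borders are assumptions not spelled out in the text:

```python
import numpy as np

def adaptive_denoise(depth, radius=1, noise_var=0.01):
    """Adaptive local noise reduction:
    d_hat = d - (sigma_n^2 / sigma_d^2) * (d - mu),
    with the ratio clipped to 1 so the correction at most replaces the
    pixel by the local mean."""
    h, w = depth.shape
    out = depth.astype(float).copy()
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            patch = depth[i0:i1, j0:j1]
            mu, var = patch.mean(), patch.var()
            ratio = 1.0 if var <= noise_var else noise_var / var
            out[i, j] = depth[i, j] - ratio * (depth[i, j] - mu)
    return out

# A flat map is unchanged; an isolated spike is pulled toward the local mean.
flat = adaptive_denoise(np.full((3, 3), 5.0))
noisy = np.ones((3, 3)); noisy[1, 1] = 2.0
res = adaptive_denoise(noisy)
```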
Furthermore, due to road slope and the relative tilt between the camera and the surface plane, incorrect relative depth information extracted from the reference view may affect the accuracy of the pavement evaluation if no correction is made. Thus, in an embodiment of the present application, the random sample consensus (RANSAC) algorithm is used to obtain a third-order polynomial fitted surface f(x, y) = Σ_{i+j≤3} a_ij · x^i · y^j; where the a_ij all represent fitting coefficients and f(x, y) represents the surface fitting value. Subsequently, the tilt effect is eliminated by subtracting the corresponding surface fitting value from each depth value.
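The surface fit and tilt removal can be sketched as below; plain least squares stands in for the RANSAC fit described above, an assumption made to keep the example self-contained:

```python
import numpy as np

def remove_tilt(depth):
    """Fit a third-order polynomial surface z(x, y) = sum a_ij x^i y^j (i + j <= 3)
    to the depth map and subtract it, cancelling road slope and camera tilt.
    Plain least squares is used here as a stand-in for the RANSAC fit."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x = xs.ravel().astype(float)
    y = ys.ravel().astype(float)
    z = depth.ravel().astype(float)
    # All monomials x^i * y^j with i + j <= 3 (10 terms)
    terms = [x**i * y**j for i in range(4) for j in range(4 - i)]
    A = np.stack(terms, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    surface = (A @ coeffs).reshape(h, w)
    return depth - surface

# A purely planar ramp lies in the fitted basis, so it is flattened to ~0.
ys, xs = np.mgrid[0:8, 0:8]
ramp = 0.5 * xs + 0.2 * ys + 3.0
flattened = remove_tilt(ramp)
```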
And step 13, calculating the relative concave area ratio of the corrected depth map.
The relative concave area ratio is used to characterize the roughness of the road surface, which is composed of many tiny undulations, depressions and protrusions. The relative concave area ratio is an important parameter that quantifies the proportion of concave portions in the pavement texture and describes the roughness of the road surface.
Specifically, the relative concave area ratio is calculated as P = N_c / (W × H); where P represents the relative concave area ratio, N_c represents the number of pixels in the relatively concave portion, W represents the number of horizontal pixels, and H represents the number of vertical pixels.
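A minimal sketch of the ratio follows. The criterion that a pixel counts as concave when its corrected depth lies below the mean depth of the map is an assumption, since the text does not define the concavity threshold:

```python
import numpy as np

def concave_area_ratio(depth):
    """Relative concave area ratio P = N_c / (W * H).
    A pixel is treated as concave when its corrected depth lies below the
    mean depth of the map (the concavity criterion is an assumption)."""
    threshold = depth.mean()
    n_concave = int(np.count_nonzero(depth < threshold))
    return n_concave / depth.size

# Half the pixels lie below the mean in this toy map, so P = 0.5.
ratio = concave_area_ratio(np.array([[0.0, 0.0], [1.0, 1.0]]))
```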
Step 14, downsampling the RGB image data to construct an image pyramid.
The image pyramid includes RGB images of multiple scales.
And 15, binarizing the image of each scale based on the Gaussian local self-adaptive threshold to obtain a binary image, and upsampling the binary image to obtain an adjusted binary image.
Based on the gaussian local adaptive threshold, binarizing the image of each scale to obtain a binary image, and upsampling the binary image to obtain an adjusted binary image, wherein the method comprises the following steps:
Step 15.1: for each pixel in the image of each scale, the binary image B(x, y) of the scale image after binarization is obtained by the calculation formula
B(x, y) = 255 if I(x, y) > T(x, y), and B(x, y) = 0 otherwise, with T(x, y) = μ_L(x, y) - C · σ_L(x, y);
where I(x, y) represents the pixel value, T(x, y) represents the Gaussian local adaptive threshold, C represents a constant for controlling the offset of the Gaussian local adaptive threshold relative to the local variance, μ_L represents the local average value, and σ_L represents the local standard deviation.
And 15.2, adjusting the binary image to be the same as the corrected depth map in size through upsampling to obtain an adjusted binary image.
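Steps 15.1 and 15.2 can be sketched as follows. The sign of the threshold offset (local mean minus C times the local standard deviation) and the use of nearest-neighbour upsampling are assumptions:

```python
import numpy as np

def adaptive_binarize(img, radius=1, c=0.1):
    """Binarize with a local threshold T = mu_L - c * sigma_L
    (local mean minus a constant times the local standard deviation;
    the sign of the offset is an assumption)."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            patch = img[max(0, i - radius):i + radius + 1,
                        max(0, j - radius):j + radius + 1]
            t = patch.mean() - c * patch.std()
            out[i, j] = 255 if img[i, j] > t else 0
    return out

def upsample_nearest(binary, factor):
    """Nearest-neighbour upsampling back toward the corrected depth map's size."""
    return np.repeat(np.repeat(binary, factor, axis=0), factor, axis=1)

# Bright 2x2 block on a dark background: the block survives binarization.
img = np.zeros((4, 4)); img[1:3, 1:3] = 10.0
b = adaptive_binarize(img)
up = upsample_nearest(b, 2)
```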
And step 16, fusing the images of all scales in the image pyramid to obtain a fused binary image, and performing bit-wise OR operation on the adjusted binary image and the fused binary image to obtain a final binary image.
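The pyramid construction and fusion of step 16 can be sketched as below. A plain global mean threshold stands in for the Gaussian local adaptive threshold of step 15, and decimation by a factor of 2 per level is an assumption:

```python
import numpy as np

def fuse_pyramid_binaries(img, levels=3):
    """Build an image pyramid by repeated 2x decimation, binarize each scale,
    upsample each binary image back to full size, and combine all of them
    with a bitwise OR. A global mean threshold stands in for the Gaussian
    local adaptive threshold."""
    h, w = img.shape
    fused = np.zeros((h, w), dtype=np.uint8)
    scale = img.astype(float)
    factor = 1
    for _ in range(levels):
        binary = np.where(scale > scale.mean(), 255, 0).astype(np.uint8)
        rep = np.repeat(np.repeat(binary, factor, axis=0), factor, axis=1)
        canvas = np.zeros((h, w), dtype=np.uint8)
        canvas[:rep.shape[0], :rep.shape[1]] = rep[:h, :w]
        fused |= canvas                      # bitwise OR across scales
        scale = scale[::2, ::2]              # decimate to the next pyramid level
        factor *= 2
    return fused

# A bright quadrant is detected at every scale and survives the OR fusion.
img = np.zeros((8, 8)); img[0:4, 0:4] = 10.0
fused = fuse_pyramid_binaries(img)
```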
After step 16 is performed, in order to improve the accuracy of prediction, in the embodiment of the present application, hole filling and adhesion segmentation are performed on the adjusted binary image, which will be described below.
The hole filling operation is implemented using a variant of the flood fill algorithm. The algorithm is essentially a region-growing process that starts from a single foreground pixel and extends to include all connected foreground pixels belonging to the same object. Illustratively, the white void areas within an aggregate particle are filled with black to facilitate subsequent analysis and identification.
For adhesion segmentation, an adjustable watershed algorithm can be used to separate adhered particles. The algorithm is based on the watershed principle, but its parameters can be adjusted to suit different surface conditions and to control the level of segmentation detail. The watershed algorithm views the image as terrain, where higher intensity gradients correspond to peaks and lower intensity gradients to valleys; illustratively, water follows paths of decreasing gradient, ultimately forming the segmented regions.
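The hole-filling variant of flood fill can be sketched as follows, assuming the convention that aggregate pixels are black (0) on a white (255) background:

```python
import numpy as np
from collections import deque

def fill_holes(binary):
    """Fill enclosed white (255) holes inside black (0) aggregate regions.
    Flood-fills the white background from the image border; any white pixel
    the flood cannot reach is an enclosed hole and is set to black."""
    h, w = binary.shape
    reachable = np.zeros((h, w), dtype=bool)
    q = deque()
    # Seed the flood from every white border pixel.
    for i in range(h):
        for j in (0, w - 1):
            if binary[i, j] == 255 and not reachable[i, j]:
                reachable[i, j] = True; q.append((i, j))
    for j in range(w):
        for i in (0, h - 1):
            if binary[i, j] == 255 and not reachable[i, j]:
                reachable[i, j] = True; q.append((i, j))
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and binary[ni, nj] == 255 and not reachable[ni, nj]:
                reachable[ni, nj] = True
                q.append((ni, nj))
    filled = binary.copy()
    filled[(binary == 255) & ~reachable] = 0
    return filled

# A black ring with a single white pixel inside: the hole gets filled.
b = np.full((5, 5), 255, dtype=np.uint8)
b[1:4, 1:4] = 0
b[2, 2] = 255
filled = fill_holes(b)
```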
And step 17, making a circumcircle for all aggregates in the final binary image, and calculating the ratio of the diameter of the maximum circumcircle to the image width to obtain the maximum bone particle diameter ratio.
The maximum bone particle size ratio describes the ratio of the maximum particle size of the aggregate used on the pavement (crushed stone, cobble, gravel, etc.) to the width of the entire depth map (100 mm).
Specifically, the maximum bone particle size ratio is expressed as D = d_max / W_img; where d_max represents the diameter of the largest circumscribed circle and W_img represents the image width.
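A sketch of the ratio computation follows. Estimating each particle's circumscribed-circle diameter by its maximum pairwise pixel distance plus one pixel of extent is a brute-force approximation, not the exact minimum enclosing circle:

```python
import numpy as np
from collections import deque

def max_particle_ratio(binary):
    """Approximate the maximum bone particle size ratio D = d_max / W:
    label 4-connected aggregate (black, 0) components, estimate each
    component's circumscribed-circle diameter by its maximum pairwise
    pixel distance plus one pixel of extent, and divide the largest
    diameter by the image width."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    best = 0.0
    for si in range(h):
        for sj in range(w):
            if binary[si, sj] == 0 and not seen[si, sj]:
                pts, q = [], deque([(si, sj)])
                seen[si, sj] = True
                while q:                      # BFS over one connected component
                    i, j = q.popleft()
                    pts.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w and binary[ni, nj] == 0 and not seen[ni, nj]:
                            seen[ni, nj] = True
                            q.append((ni, nj))
                p = np.array(pts, dtype=float)
                d = np.sqrt(((p[:, None, :] - p[None, :, :]) ** 2).sum(-1)).max() + 1.0
                best = max(best, d)
    return best / w

# One 4-pixel-long particle in an 8-pixel-wide image: D = 4 / 8 = 0.5.
b = np.full((4, 8), 255, dtype=np.uint8)
b[1, 2:6] = 0
ratio = max_particle_ratio(b)
```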
And step 18, predicting the pavement construction depth according to the relative concave area proportion, the maximum bone particle size ratio and the pre-trained GBT model.
Specifically, ① data collection and segmentation:
A dataset containing the target variable MTD (mean texture depth) and the features P and D is collected and divided into a training set and a validation set: 70% of the data is used to train the model and 30% to evaluate model performance.
② Model training:
The GBT model is trained using the training set. Training is iterative: each iteration fits a new decision tree to reduce the remaining prediction error.
③ And (3) model tuning:
The model parameters are optimized according to model performance to improve prediction accuracy. Cross-validation is used to select the best parameters.
④ Model evaluation:
The performance of the model is evaluated using the validation set. The evaluation indices include the mean square error (MSE), the coefficient of determination (R-squared), and the mean absolute error (MAE).
⑤ Predicting a target value:
Once the model is trained and performs well, the values of features P and D may be input into the model to predict the target values. The GBT model will generate the final prediction result by combining predictions of multiple decision trees.
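Steps ① to ⑤ can be sketched end-to-end with scikit-learn's gradient boosting regressor. The synthetic P/D/MTD data, the parameter grid, and the 200-sample size are illustrative assumptions standing in for the collected dataset; only the workflow (70/30 split, cross-validated tuning, MSE/R²/MAE evaluation, prediction) follows the description above.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error

# ① Synthetic stand-in for the collected dataset: features P (relative
# concave area ratio) and D (maximum particle size ratio), target MTD (mm).
rng = np.random.default_rng(0)
P = rng.uniform(0.1, 0.6, 200)
D = rng.uniform(0.05, 0.5, 200)
MTD = 0.5 + 1.2 * P + 0.8 * D + rng.normal(0, 0.02, 200)
X, y = np.column_stack([P, D]), MTD

# ① 70/30 split into training and validation sets.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

# ② + ③ Train the GBT model, tuning parameters by cross-validation.
search = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300],
                "learning_rate": [0.05, 0.1],
                "max_depth": [2, 3]},
    cv=5, scoring="neg_mean_squared_error")
search.fit(X_tr, y_tr)
model = search.best_estimator_

# ④ Evaluate on the validation set.
pred = model.predict(X_va)
mse = mean_squared_error(y_va, pred)
r2 = r2_score(y_va, pred)
mae = mean_absolute_error(y_va, pred)

# ⑤ Predict the construction depth for a new pavement sample (P, D).
mtd_new = model.predict([[0.35, 0.25]])[0]
```

The final prediction combines the outputs of all fitted trees, as described above.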
To verify the effectiveness of the pavement structure depth prediction method provided by the application, in one embodiment the above steps are performed in sequence to predict the pavement structure depth for 40 sets of test data. The predictions are then compared with reference values measured by the sand patch method; the prediction effect is shown in figure 2, and the absolute and relative errors between the two are compared in figure 3.
As can be seen from figures 2 and 3, the pavement structure depth prediction method provided by the application achieves high accuracy: most absolute errors are within 0.15 millimeters (mm) and relative errors generally do not exceed 16%, which meets the requirements of practical application.
The following describes an exemplary pavement structure depth prediction apparatus provided by the present application.
As shown in fig. 4, the road surface structure depth prediction apparatus 400 includes:
the image acquisition module 401 is used for acquiring RGB image data of the road surface and constructing a depth map according to the RGB image data;
The depth correction module 402 is configured to calculate, for each pixel in the depth map, an average depth and a depth variance between the pixel and an adjacent pixel within a preset radius range, and correct a depth value of the pixel based on the average depth and the depth variance, so as to obtain a corrected depth map;
A concave area ratio calculating module 403, configured to calculate a relative concave area ratio of the corrected depth map; the relative concave area ratio is used for representing the roughness of the pavement;
An image pyramid module 404, configured to downsample RGB image data to construct an image pyramid; the image pyramid includes RGB images of multiple scales;
a binarized image adjustment module 405, configured to binarize the image of each scale based on a gaussian local adaptive threshold, to obtain a binary image, and upsample the binary image to obtain an adjusted binary image;
the image fusion module 406 is configured to fuse the images of each scale in the image pyramid to obtain a fused binary image, and perform bitwise or operation on the adjusted binary image and the fused binary image to obtain a final binary image;
the maximum bone particle diameter ratio calculating module 407 is configured to make a circumcircle for all aggregates in the final binary image, and calculate a ratio of a diameter of the maximum circumcircle to an image width to obtain a maximum bone particle diameter ratio; the maximum bone particle size ratio is used for describing the ratio of the maximum particle size of aggregate used on the pavement to the width of the whole depth map, and the aggregate comprises broken stone, cobble and sand stone;
The depth prediction module 408 is configured to predict a pavement construction depth according to the relative concave area ratio, the maximum bone particle size ratio, and the pre-trained GBT model.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
As shown in fig. 5, an embodiment of the present application provides a terminal device, and as shown in fig. 5, a terminal device D10 of the embodiment includes: at least one processor D100 (only one processor is shown in fig. 5), a memory D101 and a computer program D102 stored in the memory D101 and executable on the at least one processor D100, the processor D100 implementing the steps in any of the various method embodiments described above when executing the computer program D102.
Specifically, when the processor D100 executes the computer program D102, the following steps are performed: collecting RGB image data of a road surface and constructing a depth map from the RGB image data; for each pixel in the depth map, calculating the average depth and depth variance between the pixel and its adjacent pixels within a preset radius, and correcting the depth value of the pixel based on the average depth and depth variance to obtain a corrected depth map; calculating the relative concave area proportion of the corrected depth map; downsampling the RGB image data to construct an image pyramid; binarizing the image of each scale based on a Gaussian local adaptive threshold to obtain a binary image, and upsampling the binary image to obtain an adjusted binary image; fusing the images of each scale in the image pyramid to obtain a fused binary image, and performing a bitwise OR operation on the adjusted binary image and the fused binary image to obtain a final binary image; making a circumscribed circle for all aggregates in the final binary image and calculating the ratio of the diameter of the largest circumscribed circle to the image width to obtain the maximum bone particle size ratio; and predicting the pavement construction depth according to the relative concave area proportion, the maximum bone particle size ratio, and the pre-trained GBT model.
The average depth and depth variance are used to correct the depth value of each pixel, which reduces the negative influence of image noise and improves the accuracy of pavement construction depth prediction. The relative concave area proportion and the maximum bone particle size ratio characterize the pavement texture from different dimensions, simplifying the complex calculations of traditional methods while remaining interpretable and visualizable; combined with the GBT model, they enable accurate prediction of the pavement construction depth.
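A minimal sketch of a locally adaptive depth correction built from the same quantities the method uses (the local average depth and local depth variance) is given below. The exact patented formula and its fitting coefficients are not reproduced here; this Lee/Wiener-style filter, the `noise_var` parameter, and the uniform (box) neighbourhood are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def correct_depth(depth, radius=2, noise_var=1e-4):
    """Locally adaptive depth correction (a sketch, not the patented formula).

    Each pixel is pulled toward its neighbourhood mean in proportion to
    how much of the local variance is attributable to noise, so flat
    noisy areas are smoothed while strongly textured areas are preserved.
    """
    size = 2 * radius + 1
    mean = uniform_filter(depth, size)                    # local average depth
    var = uniform_filter(depth * depth, size) - mean ** 2  # local depth variance
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mean + gain * (depth - mean)
```

On a flat surface corrupted by noise of variance equal to `noise_var`, the gain stays near zero and the output collapses toward the local mean, reducing the overall variance.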
The processor D100 may be a central processing unit (CPU); the processor D100 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory D101 may in some embodiments be an internal storage unit of the terminal device D10, for example a hard disk or memory of the terminal device D10. In other embodiments the memory D101 may be an external storage device of the terminal device D10, for example a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the terminal device D10. Further, the memory D101 may include both an internal storage unit and an external storage device of the terminal device D10. The memory D101 is used for storing an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory D101 may also be used to temporarily store data that has been output or is to be output.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps for implementing the various method embodiments described above.
Embodiments of the present application provide a computer program product enabling a terminal device to carry out the steps of the method embodiments described above when the computer program product is run on the terminal device.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on this understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program; the computer program may be stored in a computer readable storage medium, and when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to the pavement construction depth prediction apparatus/terminal equipment, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, according to legislation and patent practice, computer readable media may not include electrical carrier signals and telecommunications signals.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and in part, not described or illustrated in any particular embodiment, reference is made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The pavement structure depth prediction method provided by the application has the following advantages:
① Existing image-processing methods for estimating pavement structure depth generally start from an 8-bit grayscale image; this scheme can acquire a 32-bit depth image, whose higher data storage precision enables higher-precision depth prediction;
② An adaptive local bilateral depth-map filtering algorithm is proposed, which preserves the characteristics of the original data better than traditional algorithms;
③ The method is more practical than methods based on plane fitting;
④ For the data format of the depth map, two pavement texture features are proposed, namely the relative concave area ratio P and the maximum aggregate particle size ratio D, which jointly characterize the pavement texture from multiple dimensions;
⑤ A nonlinear regressor, the gradient boosted tree (GBT) model, is used to handle the complex relationships between features, showing better regression results and stability.
While the foregoing is directed to the preferred embodiments of the present application, it will be appreciated by those skilled in the art that various modifications and adaptations can be made without departing from the principles of the present application, and such modifications and adaptations are intended to be comprehended within the scope of the present application.

Claims (8)

1. A pavement structure depth prediction method, comprising:
collecting RGB image data of a road surface, and constructing a depth map according to the RGB image data;
calculating the average depth and the depth variance between the pixel and the adjacent pixel in the preset radius range for each pixel in the depth map, and correcting the depth value of the pixel based on the average depth and the depth variance to obtain a corrected depth map;
calculating the relative concave area proportion of the corrected depth map; the relative concave area proportion is used for representing the roughness of the road surface;
downsampling the RGB image data to construct an image pyramid; the image pyramid comprises RGB images of multiple scales;
Binarizing each scale image based on a Gaussian local self-adaptive threshold to obtain a binary image, and upsampling the binary image to obtain an adjusted binary image;
Fusing the images of all scales in the image pyramid to obtain a fused binary image, and performing bit-wise OR operation on the adjusted binary image and the fused binary image to obtain a final binary image;
Making a circumcircle for all aggregates in the final binary image, and calculating the ratio of the diameter of the maximum circumcircle to the image width to obtain the maximum bone particle diameter ratio; the maximum bone particle size ratio is used for describing the ratio of the maximum particle size of aggregate used on the pavement to the width of the whole depth map, and the aggregate comprises broken stone, cobble and sand stone;
And predicting the pavement construction depth according to the relative concave area proportion, the maximum bone particle size ratio and the pre-trained GBT model.
2. The pavement structure depth prediction method according to claim 1, wherein the correcting the depth value of the pixel based on the average depth and the depth variance includes:
by the calculation formula d1 = μ + ((σ² − σ_n²) / σ²) × (d0 − μ), obtaining the denoised depth value d1; wherein d0 represents the initial depth value of the pixel, σ_n² represents the noise variance, σ² represents the depth variance, and μ represents the average depth;
by the calculation formula d2 = a·d1 + b, obtaining the corrected depth value d2; wherein a and b both represent fitting coefficients.
3. The pavement construction depth prediction method according to claim 2, wherein the calculation expression of the relative concave area ratio is as follows:
P = N_c / (W × H), wherein P represents the relative concave area ratio, N_c represents the number of pixels in the relative concave portion, counted over all pixels (i, j), W represents the horizontal pixel size, H represents the vertical pixel size, i represents the i-th horizontal pixel, and j represents the j-th vertical pixel.
4. The method according to claim 1, wherein binarizing the image of each scale based on the gaussian local adaptive threshold to obtain a binary image, upsampling the binary image to obtain an adjusted binary image, and comprising:
for each pixel in the image of each scale, by the calculation formula T = m + k·s, obtaining the binary image of the scale after binarization of the image, each pixel being set to foreground if its value exceeds T and to background otherwise; wherein T represents the Gaussian local adaptive threshold, k represents a constant for controlling the offset of the Gaussian local adaptive threshold relative to the local variance, m represents the local average value, and s represents the local standard deviation;
And (3) through up-sampling, the binary image is adjusted to be the same as the corrected depth map in size, and the adjusted binary image is obtained.
5. The pavement structure depth prediction method according to claim 1, wherein the expression of the maximum bone particle size ratio is D = d_max / W; wherein d_max represents the diameter of the largest circumscribed circle and W represents the image width.
6. A pavement structure depth prediction apparatus, comprising:
The image acquisition module is used for acquiring RGB image data of the road surface and constructing a depth map according to the RGB image data;
the depth correction module is used for calculating the average depth and the depth variance between the pixel and the adjacent pixel within the preset radius range for each pixel in the depth map respectively, and correcting the depth value of the pixel based on the average depth and the depth variance to obtain a corrected depth map;
the concave area proportion calculation module is used for calculating the relative concave area proportion of the corrected depth map; the relative concave area proportion is used for representing the roughness of the road surface;
the image pyramid module is used for downsampling the RGB image data to construct an image pyramid; the image pyramid comprises RGB images of multiple scales;
The binarization image adjustment module is used for binarizing each scale image based on Gaussian local self-adaptive threshold values to obtain binary images, and upsampling the binary images to obtain adjusted binary images;
The image fusion module is used for fusing the images of all scales in the image pyramid to obtain a fused binary image, and carrying out bit-wise OR operation on the adjusted binary image and the fused binary image to obtain a final binary image;
The maximum bone particle diameter ratio calculation module is used for making circumscribed circles for all aggregates in the final binary image, and calculating the ratio of the diameter of the maximum circumscribed circle to the image width to obtain the maximum bone particle diameter ratio; the maximum bone particle size ratio is used for describing the ratio of the maximum particle size of aggregate used on the pavement to the width of the whole depth map, and the aggregate comprises broken stone, cobble and sand stone;
and the depth prediction module is used for predicting the pavement construction depth according to the relative concave area proportion, the maximum bone particle size ratio and the pre-trained GBT model.
7. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the road construction depth prediction method according to any one of claims 1 to 5 when executing the computer program.
8. A computer-readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the pavement construction depth prediction method according to any one of claims 1 to 5.
CN202410291596.9A 2024-03-14 2024-03-14 A road surface structure depth prediction method, device, terminal equipment and medium Active CN118015068B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410291596.9A CN118015068B (en) 2024-03-14 2024-03-14 A road surface structure depth prediction method, device, terminal equipment and medium


Publications (2)

Publication Number Publication Date
CN118015068A true CN118015068A (en) 2024-05-10
CN118015068B CN118015068B (en) 2024-07-09

Family

ID=90952030

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410291596.9A Active CN118015068B (en) 2024-03-14 2024-03-14 A road surface structure depth prediction method, device, terminal equipment and medium

Country Status (1)

Country Link
CN (1) CN118015068B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118773996A (en) * 2024-09-13 2024-10-15 安徽省交通规划设计研究总院股份有限公司 Depth detection method of large-void pavement structure based on sound processing technology
CN119469041A (en) * 2025-01-09 2025-02-18 晋城合为规划设计集团有限公司 A method and system for measuring rock and soil depth in geological exploration

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1290851A (en) * 2000-11-03 2001-04-11 华南理工大学 Method for measuring and evaluating depth of road surface construction by digital image technology
CN101358837A (en) * 2008-09-24 2009-02-04 重庆交通大学 Method of Determining Surface Structure Depth of Exposed Concrete by Surface Fitting Method
US20130155061A1 (en) * 2011-12-16 2013-06-20 University Of Southern California Autonomous pavement condition assessment
CN105113375A (en) * 2015-05-15 2015-12-02 南京航空航天大学 Pavement cracking detection system and method based on line structured light
CN109584286A (en) * 2019-01-22 2019-04-05 东南大学 A kind of bituminous pavement construction depth calculation method based on generalized regression nerve networks
US20210287383A1 (en) * 2020-03-14 2021-09-16 Purdue Research Foundation Pavement macrotexture determination using multi-view smartphone images
EP3901911A1 (en) * 2020-04-23 2021-10-27 Siemens Aktiengesellschaft Object measurement method and device thereof
CN114049618A (en) * 2022-01-12 2022-02-15 河北工业大学 Graph-point-graph transformation-based pavement three-dimensional disease PCI calculation method
CN114913134A (en) * 2022-04-21 2022-08-16 中南大学 Tunnel shotcrete roughness identification method, terminal device and storage medium
CN115294066A (en) * 2022-08-09 2022-11-04 重庆科技学院 Sandstone particle size detection method
CN116716778A (en) * 2023-05-26 2023-09-08 安徽省高速公路试验检测科研中心有限公司 Pavement structure depth detection method based on laser vision
CN117291880A (en) * 2023-09-15 2023-12-26 安徽相驰车业有限公司 A brake pad surface defect detection system based on image analysis


Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
HAN-CHENG DAN, ET.AL: "Investigation on the fractal characteristic of asphalt pavement texture roughness inforporating 3D reconstruction technology", 《ELECTRONIC RESEARCH ARCHIVE》, vol. 31, no. 4, 27 February 2023 (2023-02-27), pages 2337 - 2357 *
HAN-CHENG DAN: "Evaluation of asphalt pavement texture using multiview stereo reconstruction based on deep learning", 《CONSTRUCTION AND BUILDING MATERIALS》, vol. 412, 19 January 2024 (2024-01-19), pages 1 - 17 *
JIA LIANG, ET.AL: "A novel pavement mean texture depth evaluation strategy based on three-dimensional pavement data filtered by a new filtering approach", 《MEASUREMENT》, vol. 166, 15 December 2020 (2020-12-15), pages 1 - 12 *
JINCHAO GUAN, ET.AL: "Multi-scale asphalt pavement deformation detection and measurement based on machine learning of full field-of-view digital surface data", 《TRANSPORTATION RESEARCH PART C: EMERGING TECHNOLOGIES》, vol. 152, 31 July 2023 (2023-07-31), pages 1 - 28 *
ZIHANG WENG, ET.AL: "Pavement texture depth estimation using image-based multiscale features", 《AUTOMATION IN CONSTRUCTION》, vol. 141, 9 June 2022 (2022-06-09), pages 1 - 13, XP087134041, DOI: 10.1016/j.autcon.2022.104404 *
但汉成等: "潮湿山区路面凝冰机理及路面抗滑性研究综述", 《武汉理工大学学报(交通科学与工程版)》, vol. 38, no. 4, 31 August 2014 (2014-08-31), pages 719 - 724 *
卢新利: "基于熵的路面纹理磨耗衰减特性表征方法", 《中国科技论文》, vol. 18, no. 8, 4 September 2023 (2023-09-04), pages 897 - 904 *
彭昭志等: "沥青路面表面构造深度数字图像检测方法", 《科技创新与应用》, no. 8, 30 March 2023 (2023-03-30), pages 122 - 124 *


Also Published As

Publication number Publication date
CN118015068B (en) 2024-07-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant