Disclosure of Invention
The application provides a pavement texture depth prediction method, a pavement texture depth prediction apparatus, a terminal device, and a medium, which address the low accuracy and high complexity of traditional pavement texture depth prediction methods.
In a first aspect, the present application provides a pavement texture depth prediction method, including:
Collecting RGB image data of a road surface, and constructing a depth map according to the RGB image data;
for each pixel in the depth map, calculating the average depth and the depth variance between the pixel and the adjacent pixel within the preset radius range, and correcting the depth value of the pixel based on the average depth and the depth variance to obtain a corrected depth map;
Calculating the relative concave area proportion of the corrected depth map; the relative concave area ratio is used for representing the roughness of the pavement;
Downsampling RGB image data to construct an image pyramid; the image pyramid includes RGB images of multiple scales;
Based on Gaussian local self-adaptive threshold values, binarizing the image of each scale to obtain a binary image, and upsampling the binary image to obtain an adjusted binary image;
fusing the images of all scales in the image pyramid to obtain a fused binary image, and performing bit-wise OR operation on the adjusted binary image and the fused binary image to obtain a final binary image;
Fitting a circumscribed circle to every aggregate particle in the final binary image, and calculating the ratio of the diameter of the largest circumscribed circle to the image width to obtain the maximum aggregate particle size ratio; the maximum aggregate particle size ratio describes the ratio of the maximum particle size of the aggregate used on the pavement to the width of the entire depth map, the aggregate comprising crushed stone, cobble, and gravel;
and predicting the pavement texture depth according to the relative concave area ratio, the maximum aggregate particle size ratio, and a pre-trained gradient-boosted tree (GBT) model.
Optionally, correcting the depth value of the pixel based on the average depth and the depth variance includes:
obtaining the denoised depth value d̂ by the calculation formula d̂ = d − (σ_n²/σ²)·(d − μ); where d represents the initial depth value of the pixel, σ_n² represents the noise variance, σ² represents the depth variance, and μ represents the average depth;
obtaining the corrected depth value by the calculation formula d_corr(x, y) = d̂(x, y) − Σ_{i+j≤3} a_ij·x^i·y^j; wherein the a_ij all represent fitting coefficients.
Alternatively, the relative concave area ratio is calculated as
P = N/(W × H) = (1/(W × H)) · Σ_{i=1…W} Σ_{j=1…H} 1[pixel (i, j) belongs to the concave portion],
wherein P represents the relative concave area ratio, N represents the number of pixels in the concave portion, W represents the horizontal image size in pixels, H represents the vertical image size in pixels, i (i = 1, …, W) indexes the horizontal pixels, and j (j = 1, …, H) indexes the vertical pixels.
Optionally, binarizing the image of each scale based on the gaussian local adaptive threshold to obtain a binary image, upsampling the binary image to obtain an adjusted binary image, including:
for each pixel in the image of each scale, obtaining the binarized image of the scale image through the calculation formula T(x, y) = μ(x, y) − C·σ(x, y), the pixel being set to foreground when its value exceeds T(x, y); wherein T(x, y) represents the Gaussian local adaptive threshold, C represents a constant for controlling the offset of the threshold relative to the local variation, μ(x, y) represents the local (Gaussian-weighted) average value, and σ(x, y) represents the local standard deviation;
And (3) through up-sampling, the binary image is adjusted to be the same as the corrected depth map in size, and an adjusted binary image is obtained.
Alternatively, the maximum aggregate particle size ratio is expressed as D = d_max / W_img; wherein d_max represents the diameter of the largest circumscribed circle and W_img represents the image width.
In a second aspect, the present application provides a road surface texture depth prediction apparatus comprising:
the image acquisition module is used for acquiring RGB image data of the road surface and constructing a depth map according to the RGB image data;
The depth correction module is used for calculating the average depth and the depth variance between the pixel and the adjacent pixel in the preset radius range for each pixel in the depth map respectively, and correcting the depth value of the pixel based on the average depth and the depth variance to obtain a corrected depth map;
The concave area proportion calculation module is used for calculating the relative concave area proportion of the corrected depth map; the relative concave area ratio is used for representing the roughness of the pavement;
The image pyramid module is used for downsampling RGB image data to construct an image pyramid; the image pyramid includes RGB images of multiple scales;
the binarization image adjustment module is used for binarizing the image of each scale based on the Gaussian local self-adaptive threshold value to obtain a binary image, and upsampling the binary image to obtain an adjusted binary image;
The image fusion module is used for fusing the images of all scales in the image pyramid to obtain a fused binary image, and carrying out bit-wise OR operation on the adjusted binary image and the fused binary image to obtain a final binary image;
The maximum aggregate particle size ratio calculation module is used for fitting a circumscribed circle to every aggregate particle in the final binary image, and calculating the ratio of the diameter of the largest circumscribed circle to the image width to obtain the maximum aggregate particle size ratio; the maximum aggregate particle size ratio describes the ratio of the maximum particle size of the aggregate used on the pavement to the width of the entire depth map, the aggregate comprising crushed stone, cobble, and gravel;
And the depth prediction module is used for predicting the pavement texture depth according to the relative concave area ratio, the maximum aggregate particle size ratio, and the pre-trained GBT model.
In a third aspect, the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the above-mentioned pavement texture depth prediction method when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above-mentioned pavement texture depth prediction method.
The scheme of the application has the following beneficial effects:
According to the pavement texture depth prediction method provided by the application, the average depth and the depth variance are used to correct the depth value of each pixel, which reduces the negative influence of image noise and improves the accuracy of texture depth prediction. The pavement texture is characterized from different dimensions by the relative concave area ratio and the maximum aggregate particle size ratio, which simplifies the complex calculations of traditional methods while remaining interpretable and easy to visualize; combined with the GBT model, these two features allow the pavement texture depth to be predicted accurately.
Other advantageous effects of the present application will be described in detail in the detailed description section which follows.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in the present description and the appended claims, the term "if" may be interpreted as "when..once" or "in response to a determination" or "in response to detection" depending on the context. Similarly, the phrase "if a determination" or "if a [ described condition or event ] is detected" may be interpreted in the context of meaning "upon determination" or "in response to determination" or "upon detection of a [ described condition or event ]" or "in response to detection of a [ described condition or event ]".
Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims, are used for distinguishing between descriptions and not necessarily for indicating or implying a relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
Aiming at the low accuracy and high complexity of traditional pavement texture depth prediction methods, the application provides a pavement texture depth prediction method, apparatus, terminal device, and medium. The method corrects the depth value of each pixel using the average depth and the depth variance, which reduces the negative influence of image noise and improves prediction accuracy; the pavement texture is characterized from different dimensions by the relative concave area ratio and the maximum aggregate particle size ratio, which simplifies the complex calculations of traditional methods while remaining interpretable and easy to visualize; combined with the GBT model, these two features allow the pavement texture depth to be predicted accurately.
The following describes an exemplary road surface texture depth prediction method provided by the present application.
As shown in fig. 1, the pavement structure depth prediction method provided by the application comprises the following steps:
and 11, collecting RGB image data of the pavement, and constructing a depth map according to the RGB image data.
Illustratively, in an embodiment of the present application, the above RGB image data includes 1 reference image and 12 source images photographed from a plurality of angles. The reference image is taken with the camera's optical axis perpendicular to the road surface; a source image is taken with the optical axis not perpendicular to the road surface, and the shooting angles of the different source images differ from one another. For precision, the RGB images in the embodiment of the application are shot at about 15 cm to 20 cm from the road surface to be measured; considering feature matching between the series of images during depth map construction, the shooting angle between source images is set to 30° to 40°; and since the focus of depth map construction is to recover depth information from the reference image in order to extract road surface texture features, the overlap between the reference image and the source images is set to no less than 70%.
In the specific implementation, in order to ensure the consistent size of the reference image, a steel hollow square calibration plate can be used, and the length of the inner side of the calibration plate is 10 centimeters (cm); furthermore, considering the variability of lighting conditions at the time of outdoor acquisition, the inner edge of the calibration plate should be aligned with the camera imaging area to control the photographing height and achieve uniform image pixel size, and photographed at the same time of day as much as possible to ensure uniform lighting conditions.
The following is an exemplary description of a process of constructing a depth map from RGB image data.
For example, a structure-from-motion (SfM) method, a three-dimensional reconstruction technique that recovers the scene and the camera poses from a group of images photographed from different perspectives and is commonly available in open-source software such as COLMAP, may be used to obtain camera pose information from the RGB (red-green-blue) image data, yielding the series of images (the reference image and the source images) and the camera parameters; the corresponding depth maps of the series of images are then constructed using the PatchMatchNet model, and the depth map corresponding to the reference image is used as the test map for the subsequent prediction effect.
Since the depth map represents relative depth values under the local camera coordinate system, depth maps constructed at the same measurement point may have different depth ranges when the photographing distances and angles are not precisely fixed. Therefore, for the depth values to be comparable and interpretable, they need to be mapped to the uniform range [0, 1].
Specifically, the normalized depth value d_norm is obtained by the calculation formula
d_norm = (d − d_min) / (d_max − d_min),
wherein d represents the depth value before normalization, d_max represents the maximum depth value, and d_min represents the minimum depth value.
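For illustration, the min–max normalization above can be sketched in a few lines (the function name and the use of numpy are illustrative, not part of the application):

```python
import numpy as np

def normalize_depth(depth):
    """Map relative depth values to the unit range [0, 1] via min-max scaling."""
    d_min, d_max = depth.min(), depth.max()
    return (depth - d_min) / (d_max - d_min)
```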
And step 12, calculating the average depth and the depth variance between the pixel and the adjacent pixel in the preset radius range for each pixel in the depth map, and correcting the depth value of the pixel based on the average depth and the depth variance to obtain a corrected depth map.
It should be noted that, considering the influence of the image noise on the prediction effect, before executing step 12, the depth map obtained in step 11 needs to be processed by using a bilateral filter, then edge filling is performed on the depth map, and finally adaptive local noise reduction is performed.
Wherein correcting the depth value of the pixel based on the average depth and the depth variance comprises:
obtaining the denoised depth value d̂ by the calculation formula d̂ = d − (σ_n²/σ²)·(d − μ); wherein d represents the initial depth value of the pixel, σ_n² represents the noise variance, σ² represents the depth variance, and μ represents the average depth, the latter two being computed over the neighbourhood within the preset radius; the formula shrinks each depth value toward the local mean in proportion to the estimated noise level.
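The adaptive local noise reduction described by this formula can be sketched as follows; the neighbourhood radius and the noise-variance value are illustrative assumptions, and the brute-force loops stand in for an optimized implementation:

```python
import numpy as np

def adaptive_denoise(depth, radius=2, noise_var=1e-4):
    """Adaptive local noise reduction: shrink each pixel toward its
    neighbourhood mean in proportion to noise_var / local variance."""
    h, w = depth.shape
    out = depth.copy()
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = depth[y0:y1, x0:x1]
            mu, var = patch.mean(), patch.var()
            if var > noise_var:
                out[y, x] = depth[y, x] - (noise_var / var) * (depth[y, x] - mu)
            else:
                # Local variation is at or below the noise floor: use the mean.
                out[y, x] = mu
    return out
```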
Furthermore, due to the road slope and the relative tilt between the camera and the road plane, incorrect relative depth information extracted from the reference view may affect the accuracy of the road performance assessment if no correction is made. Thus, in an embodiment of the present application, a random sample consensus (RANSAC) algorithm is used to obtain a third-order polynomial fit of the surface, as follows:
z(x, y) = Σ_{i+j≤3} a_ij·x^i·y^j,
wherein the a_ij all represent fitting coefficients and z(x, y) represents the surface fit value. Subsequently, the tilt effect is eliminated by subtracting the corresponding surface fit value from each depth value.
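A least-squares version of the third-order surface fit and tilt removal can be sketched as below. The application uses RANSAC for robustness, which would wrap this fit in an inlier-selection loop; the plain least-squares fit here is a simplification, and the coordinate normalization is an illustrative choice:

```python
import numpy as np

def cubic_terms(x, y):
    """All monomials x^i * y^j with i + j <= 3 (10 terms)."""
    return np.stack([x**i * y**j
                     for i in range(4) for j in range(4 - i)], axis=-1)

def remove_tilt(depth):
    """Fit a third-order polynomial surface by least squares and subtract it."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = cubic_terms(xs.ravel() / w, ys.ravel() / h)
    coef, *_ = np.linalg.lstsq(A, depth.ravel(), rcond=None)
    return depth - (A @ coef).reshape(h, w)
```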
And step 13, calculating the relative concave area ratio of the corrected depth map.
The relative concave area ratio is used to characterize the roughness of the road surface, which is composed of many tiny undulations, depressions and protrusions. The relative concave area is an important parameter for quantifying the proportion of concave portions in the road surface structure and describing the roughness of the road surface.
Specifically, the relative concave area ratio is calculated as
P = N/(W × H) = (1/(W × H)) · Σ_{i=1…W} Σ_{j=1…H} 1[pixel (i, j) belongs to the concave portion],
wherein P represents the relative concave area ratio, N represents the number of pixels in the concave portion, W represents the horizontal image size in pixels, H represents the vertical image size in pixels, i (i = 1, …, W) indexes the horizontal pixels, and j (j = 1, …, H) indexes the vertical pixels.
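Under the assumption that "concave" pixels are those whose corrected depth lies below the mean depth (the application does not spell out the criterion), the ratio P = N/(W × H) can be sketched as:

```python
import numpy as np

def relative_concave_ratio(depth):
    """Fraction of pixels whose corrected depth lies below the mean depth,
    i.e. P = N / (W * H) with N the count of 'concave' pixels."""
    concave = depth < depth.mean()
    return concave.sum() / depth.size
```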
Step 14, downsampling the RGB image data to construct an image pyramid.
The image pyramid includes RGB images of multiple scales.
And 15, binarizing the image of each scale based on the Gaussian local self-adaptive threshold to obtain a binary image, and upsampling the binary image to obtain an adjusted binary image.
Based on the gaussian local adaptive threshold, binarizing the image of each scale to obtain a binary image, and upsampling the binary image to obtain an adjusted binary image, wherein the method comprises the following steps:
step 15.1, for each pixel in the image of each scale, obtaining the binarized image of the scale image through the calculation formula
T(x, y) = μ(x, y) − C·σ(x, y),
the pixel being set to foreground when its value exceeds T(x, y);
wherein T(x, y) represents the Gaussian local adaptive threshold, C represents a constant for controlling the offset of the threshold relative to the local variation, μ(x, y) represents the local (Gaussian-weighted) average value, and σ(x, y) represents the local standard deviation.
And 15.2, adjusting the binary image to be the same as the corrected depth map in size through upsampling to obtain an adjusted binary image.
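Steps 15.1 and 15.2 can be sketched as follows; the box-window mean and standard deviation and the nearest-neighbour upsampling are illustrative simplifications of the Gaussian-weighted window and the resampling that would be used in practice:

```python
import numpy as np

def local_adaptive_binarize(img, radius=2, C=0.05):
    """Binarize with a local adaptive threshold T = mu - C * sigma."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = img[y0:y1, x0:x1]
            t = patch.mean() - C * patch.std()
            out[y, x] = 1 if img[y, x] > t else 0
    return out

def upsample_to(binary, shape):
    """Nearest-neighbour upsampling of a binary image to a target shape."""
    ys = np.arange(shape[0]) * binary.shape[0] // shape[0]
    xs = np.arange(shape[1]) * binary.shape[1] // shape[1]
    return binary[np.ix_(ys, xs)]
```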
And step 16, fusing the images of all scales in the image pyramid to obtain a fused binary image, and performing bit-wise OR operation on the adjusted binary image and the fused binary image to obtain a final binary image.
After step 16 is performed, in order to improve the accuracy of prediction, in the embodiment of the present application, hole filling and adhesion segmentation are performed on the adjusted binary image, which will be described below.
The hole filling operation is implemented using a variation of the flood filling algorithm. The algorithm is essentially a region growing process that starts with a single foreground pixel and extends to include all connected foreground pixels belonging to the same object. Illustratively, the white void areas within the aggregate are filled with black to facilitate subsequent analysis and identification.
For adhesion segmentation, an adjustable watershed algorithm can be used to segment adhesion particles. The algorithm is based on the watershed principle, but can be adjusted to adapt to different surface conditions by adjusting parameters so as to control the level of segmentation details. The watershed algorithm views the image as a terrain, where higher intensity gradients correspond to higher peaks and lower intensity gradients correspond to lower valleys. Illustratively, the water follows a path of decreasing gradient, ultimately forming a segmented region.
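The hole-filling step can be sketched with a border-connected flood fill: any background region not reachable from the image border is a hole inside an aggregate and is filled. The adjustable watershed segmentation is omitted here, as a practical implementation would rely on a library routine:

```python
from collections import deque
import numpy as np

def fill_holes(binary):
    """Fill background holes not connected to the image border (flood fill)."""
    h, w = binary.shape
    outside = np.zeros((h, w), dtype=bool)
    # Seed the flood fill with every background pixel on the border.
    q = deque((y, x) for y in range(h) for x in range(w)
              if (y in (0, h - 1) or x in (0, w - 1)) and binary[y, x] == 0)
    for y, x in q:
        outside[y, x] = True
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w
                    and binary[ny, nx] == 0 and not outside[ny, nx]):
                outside[ny, nx] = True
                q.append((ny, nx))
    filled = binary.copy()
    filled[(binary == 0) & ~outside] = 1  # enclosed background -> foreground
    return filled
```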
And step 17, fitting a circumscribed circle to every aggregate particle in the final binary image, and calculating the ratio of the diameter of the largest circumscribed circle to the image width to obtain the maximum aggregate particle size ratio.
The maximum aggregate particle size ratio describes the ratio of the maximum particle size of the aggregate used on the pavement (crushed stone, cobble, gravel, etc.) to the width of the entire depth map (100 mm).
Specifically, the maximum aggregate particle size ratio is expressed as D = d_max / W_img; wherein d_max represents the diameter of the largest circumscribed circle and W_img represents the image width.
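A sketch of extracting per-aggregate diameters from the final binary image is given below. The diameter of each connected component is approximated by its largest pairwise pixel-centre distance, a stand-in for the exact minimum circumscribed circle (which a library routine such as OpenCV's minEnclosingCircle would provide); a single-pixel component is assigned a nominal diameter of one pixel:

```python
from collections import deque
import numpy as np

def component_diameters(binary):
    """Label 4-connected aggregates and return each component's diameter,
    approximated as the largest pairwise pixel-centre distance."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    diameters = []
    for y in range(h):
        for x in range(w):
            if binary[y, x] and not seen[y, x]:
                pts, q = [], deque([(y, x)])
                seen[y, x] = True
                while q:  # breadth-first traversal of one component
                    cy, cx = q.popleft()
                    pts.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
                p = np.array(pts, dtype=float)
                d = (np.sqrt(((p[:, None] - p[None]) ** 2).sum(-1)).max()
                     if len(p) > 1 else 1.0)
                diameters.append(d)
    return diameters

def max_particle_ratio(binary):
    """D = d_max / W_img: largest aggregate diameter over the image width."""
    return max(component_diameters(binary)) / binary.shape[1]
```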
And step 18, predicting the pavement texture depth according to the relative concave area ratio, the maximum aggregate particle size ratio, and the pre-trained GBT model.
Specifically, ① data collection and segmentation:
A dataset containing the target variable MTD (mean texture depth) and the features P and D is collected. The dataset is divided into a training set and a validation set: 70% of the data is used to train the model and 30% is used to evaluate model performance.
② Model training:
The GBT model is trained on the training set. Training proceeds iteratively; each iteration fits a new decision tree to reduce the remaining prediction error.
③ And (3) model tuning:
And optimizing the model parameters according to the model performance so as to improve the accuracy of prediction. Cross-validation techniques are used to select the best parameters.
④ Model evaluation:
The performance of the model is evaluated using the validation set. The evaluation indices include the mean squared error (MSE), the coefficient of determination (R²), and the mean absolute error (MAE).
⑤ Predicting a target value:
Once the model is trained and performs well, the values of features P and D may be input into the model to predict the target values. The GBT model will generate the final prediction result by combining predictions of multiple decision trees.
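The training and prediction loop of steps ① to ⑤ can be illustrated with a deliberately minimal gradient-boosted-tree regressor built from depth-1 trees (stumps). Everything below is a toy sketch: in practice a library implementation with proper tree depth, parameter tuning, and cross-validation (e.g. scikit-learn's GradientBoostingRegressor) would be used:

```python
import numpy as np

def fit_stump(X, r):
    """Best single-split regression stump on the current residuals r."""
    best_err, best = np.inf, None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j])[:-1]:  # exclude max -> both sides nonempty
            left = X[:, j] <= thr
            lv, rv = r[left].mean(), r[~left].mean()
            err = ((r - np.where(left, lv, rv)) ** 2).sum()
            if err < best_err:
                best_err, best = err, (j, thr, lv, rv)
    return best

def gbt_fit(X, y, n_trees=100, lr=0.1):
    """Minimal gradient boosting for squared loss: each stump fits the
    residual of the running prediction."""
    base, trees = y.mean(), []
    pred = np.full(len(y), base)
    for _ in range(n_trees):
        j, thr, lv, rv = fit_stump(X, y - pred)
        trees.append((j, thr, lv, rv))
        pred = pred + lr * np.where(X[:, j] <= thr, lv, rv)
    return base, lr, trees

def gbt_predict(model, X):
    """Sum the base prediction and the shrunken stump contributions."""
    base, lr, trees = model
    pred = np.full(len(X), base)
    for j, thr, lv, rv in trees:
        pred = pred + lr * np.where(X[:, j] <= thr, lv, rv)
    return pred
```

Here X would hold the two features P and D per sample and y the sand-patch MTD values.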
In order to verify the effectiveness of the pavement texture depth prediction method provided by the application, in one embodiment the above steps are performed in sequence to predict the texture depth of 40 sets of test data. These predictions are then compared with reference values measured using the sand patch method; the prediction effect is shown in figure 2, and the absolute and relative errors between the two are compared in figure 3.
As can be seen from fig. 2 and 3, the accuracy of the prediction method provided by the application is high: most absolute errors are within 0.15 millimeters (mm) and the relative errors generally do not exceed 16%, which meets the requirements of practical application.
The following describes an exemplary road surface texture depth prediction apparatus provided by the present application.
As shown in fig. 4, the road surface structure depth prediction apparatus 400 includes:
the image acquisition module 401 is used for acquiring RGB image data of the road surface and constructing a depth map according to the RGB image data;
The depth correction module 402 is configured to calculate, for each pixel in the depth map, an average depth and a depth variance between the pixel and an adjacent pixel within a preset radius range, and correct a depth value of the pixel based on the average depth and the depth variance, so as to obtain a corrected depth map;
A concave area ratio calculating module 403, configured to calculate a relative concave area ratio of the corrected depth map; the relative concave area ratio is used for representing the roughness of the pavement;
An image pyramid module 404, configured to downsample RGB image data to construct an image pyramid; the image pyramid includes RGB images of multiple scales;
a binarized image adjustment module 405, configured to binarize the image of each scale based on a gaussian local adaptive threshold, to obtain a binary image, and upsample the binary image to obtain an adjusted binary image;
the image fusion module 406 is configured to fuse the images of each scale in the image pyramid to obtain a fused binary image, and perform bitwise or operation on the adjusted binary image and the fused binary image to obtain a final binary image;
the maximum aggregate particle size ratio calculation module 407 is configured to fit a circumscribed circle to every aggregate particle in the final binary image and calculate the ratio of the diameter of the largest circumscribed circle to the image width, obtaining the maximum aggregate particle size ratio; the maximum aggregate particle size ratio describes the ratio of the maximum particle size of the aggregate used on the pavement to the width of the entire depth map, the aggregate comprising crushed stone, cobble, and gravel;
The depth prediction module 408 is configured to predict the pavement texture depth according to the relative concave area ratio, the maximum aggregate particle size ratio, and the pre-trained GBT model.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
As shown in fig. 5, an embodiment of the present application provides a terminal device, and as shown in fig. 5, a terminal device D10 of the embodiment includes: at least one processor D100 (only one processor is shown in fig. 5), a memory D101 and a computer program D102 stored in the memory D101 and executable on the at least one processor D100, the processor D100 implementing the steps in any of the various method embodiments described above when executing the computer program D102.
Specifically, when the processor D100 executes the computer program D102, it collects RGB image data of a road surface and constructs a depth map from the RGB image data; calculates, for each pixel in the depth map, the average depth and depth variance between the pixel and its adjacent pixels within a preset radius, and corrects the depth value of each pixel based on the average depth and depth variance to obtain a corrected depth map; calculates the relative concave area ratio of the corrected depth map; downsamples the RGB image data to construct an image pyramid; binarizes the image of each scale based on a Gaussian local adaptive threshold to obtain a binary image and upsamples the binary image to obtain an adjusted binary image; fuses the images of each scale in the image pyramid to obtain a fused binary image and performs a bitwise OR operation on the adjusted binary image and the fused binary image to obtain a final binary image; fits circumscribed circles to all aggregates in the final binary image and calculates the ratio of the largest circumscribed-circle diameter to the image width to obtain the maximum aggregate particle size ratio; and predicts the pavement texture depth from the relative concave area ratio, the maximum aggregate particle size ratio, and the pre-trained GBT model.
The average depth and the depth variance are used to correct the depth value of each pixel, which reduces the negative influence of image noise and improves the accuracy of pavement texture depth prediction. The pavement texture is characterized from different dimensions by the relative concave area ratio and the maximum aggregate particle size ratio, simplifying the complex calculations of traditional methods while remaining interpretable and easy to visualize; combined with the GBT model, these two features allow the pavement texture depth to be predicted accurately.
The processor D100 may be a central processing unit (CPU); the processor D100 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory D101 may in some embodiments be an internal storage unit of the terminal device D10, for example a hard disk or a memory of the terminal device D10. The memory D101 may also in other embodiments be an external storage device of the terminal device D10, for example a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the terminal device D10. Further, the memory D101 may include both an internal storage unit and an external storage device of the terminal device D10. The memory D101 is used for storing an operating system, an application program, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory D101 may also be used to temporarily store data that has been output or is to be output.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps for implementing the various method embodiments described above.
Embodiments of the present application provide a computer program product enabling a terminal device to carry out the steps of the method embodiments described above when the computer program product is run on the terminal device.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the method of the above embodiments, and may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, and when the computer program is executed by a processor, the computer program may implement the steps of each of the method embodiments described above. Wherein the computer program comprises computer program code which may be in source code form, object code form, executable file or some intermediate form etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to the pavement construction depth prediction apparatus/terminal equipment, recording medium, computer Memory, read-Only Memory (ROM), random access Memory (RAM, random Access Memory), electrical carrier signals, telecommunications signals, and software distribution media. Such as a U-disk, removable hard disk, magnetic or optical disk, etc. In some jurisdictions, computer readable media may not be electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and in part, not described or illustrated in any particular embodiment, reference is made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The pavement structure depth prediction method provided by the application has the following advantages:
① Existing methods for estimating pavement texture depth based on image processing generally start from an 8-bit grayscale image; the present scheme acquires a 32-bit depth image, whose higher data storage precision enables more accurate depth prediction;
② The self-adaptive local bilateral depth map filtering algorithm is provided, and compared with the traditional algorithm, the algorithm has better performance in the aspect of retaining the original data characteristics;
③ The tilt correction based on third-order polynomial surface fitting is more practical than a method based on plane fitting;
④ Aiming at the data format of the depth map, two pavement texture features are proposed, namely the relative concave area ratio P and the maximum aggregate particle size ratio D, which jointly characterize the pavement texture from multiple dimensions;
⑤ A nonlinear regressor, the gradient-boosted tree (GBT) model, is used to capture the complex relations between the features, showing better regression results and stability.
While the foregoing is directed to the preferred embodiments of the present application, it will be appreciated by those skilled in the art that various modifications and adaptations can be made without departing from the principles of the present application, and such modifications and adaptations are intended to be comprehended within the scope of the present application.