CN111310840A - Data fusion processing method, device, equipment and storage medium
Data fusion processing method, device, equipment and storage medium
- Publication number
- CN111310840A CN111310840A CN202010112841.7A CN202010112841A CN111310840A CN 111310840 A CN111310840 A CN 111310840A CN 202010112841 A CN202010112841 A CN 202010112841A CN 111310840 A CN111310840 A CN 111310840A
- Authority
- CN
- China
- Prior art keywords
- measurement result
- candidate
- measurement
- measured object
- random forest
- Prior art date
- Legal status: Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/86—Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
- G01S13/865—Combination of radar systems with lidar systems
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/24323—Tree-organised classifiers
Abstract
The embodiment of the application discloses a data fusion processing method, device, equipment and storage medium, relates to the field of data fusion, and can be used for automatic driving. The specific implementation scheme is as follows: determining at least two candidate measurement result combinations according to the measurement results of different sensors on the measured object, where each candidate measurement result combination comprises at least one element and each element comprises at least two measurement results; and determining, from the at least two candidate measurement result combinations by using a random forest model, a target measurement result combination whose elements are associated with the same measured object, and taking the elements in the target measurement result combination as the fusion result. Because the measurement results associated with the same measured object are determined by the random forest model, neither a normal-distribution assumption nor manual parameter tuning is needed in the whole process, which improves the accuracy and the recall rate of the finally obtained fusion result.
Description
Technical Field
The embodiment of the application relates to the field of data processing, in particular to the technical field of data fusion, and specifically relates to a data fusion processing method, device, equipment and storage medium, which can be used for automatic driving.
Background
Sensor data fusion combines the data characteristics of the same object acquired by different sensors to obtain high-precision information about each attribute of the object. In the process of realizing sensor data fusion, it is necessary to judge whether the data characteristics acquired from different sensors are associated with the same measured object, so as to ensure the accuracy of the subsequent sensor data fusion.
In the prior art, a normal distribution with a manually set standard deviation is usually adopted to model the measurement error distribution of the data characteristics under each attribute, so as to judge whether different sensors have detected the same object and to complete the data fusion of the different sensors according to the judgment result. However, the measurement errors of sensors do not always follow a normal distribution, and a manually tuned standard deviation has low accuracy, so the judgment of whether different sensors have detected the same object carries a large error, which seriously affects the accuracy of the final fusion result.
Disclosure of Invention
The embodiment of the application discloses a data fusion processing method, device, equipment and storage medium, which can improve the accuracy and the recall rate of the finally obtained fusion result.
In a first aspect, an embodiment of the present application discloses a data fusion processing method, including:
determining at least two candidate measurement result combinations according to the measurement results of different sensors on the measured object; the candidate measurement result combination comprises at least one element; the element comprises at least two measurements;
and determining, from the at least two candidate measurement result combinations by using a random forest model, a target measurement result combination whose elements are associated with the same measured object, and taking the elements in the target measurement result combination as a fusion result.
One embodiment in the above application has the following advantages or benefits: a random forest model is used to determine, from the candidate measurement result combinations formed by the measurement results of different sensors on the measured object, a target measurement result combination whose elements are associated with the same measured object, and the elements in the target measurement result combination are then taken as the fusion result. In the scheme of this embodiment, the measurement results associated with the same measured object are determined by the random forest model; neither a normal-distribution assumption nor manual parameter tuning is needed in the whole process, which improves the accuracy of the judgment result as well as the accuracy and the recall rate of the finally obtained fusion result.
In addition, the data fusion processing method according to the above embodiment of the present application may further have the following additional technical features:
optionally, the number of elements in each candidate measurement result combination is equal to the number m of measurement results of the first sensor; the number of candidate measurement result combinations is the quotient of a first factorial and a second factorial, i.e. n!/(n-m)!, where the first factorial n! is the factorial of the number of measurement results of the second sensor and the second factorial (n-m)! is the factorial of the difference between the numbers of measurement results of the second sensor and the first sensor; the number of measurement results of the first sensor is less than or equal to that of the second sensor.
One embodiment in the above application has the following advantages or benefits: a way of determining the candidate measurement result combinations and the number of elements in them is provided, ensuring the comprehensiveness of the generated candidate measurement result combinations so that the target measurement result combination can subsequently be determined more accurately.
Optionally, determining, by using a random forest model, a target measurement result combination whose elements are associated with the same measured object from the at least two candidate measurement result combinations includes:
inputting elements in the at least two candidate measurement result combinations into a random forest model to obtain the probability that the elements are associated with the same measured object;
determining the overall similarity of the at least two candidate measurement result combinations according to the probability that the elements in the at least two candidate measurement result combinations are associated with the same measured object;
and taking the candidate measurement result combination with the highest overall similarity as the target measurement result combination.
One embodiment in the above application has the following advantages or benefits: a random forest model calculates the probability that each element in each candidate measurement result combination is associated with the same measured object; the overall similarity of each candidate measurement result combination is then determined from these probabilities, and the candidate measurement result combination with the highest overall similarity is taken as the target measurement result combination. The whole process requires neither a normal-distribution assumption nor manual parameter tuning, improving the accuracy of the target measurement result combination.
Optionally, before determining, by using a random forest model, a target measurement result combination whose elements are associated with the same measured object from the at least two candidate measurement result combinations, the method further includes:
acquiring sample measurement result sets acquired by different sensors, adding classification labels to the sample measurement result sets, and constructing training sample sets;
and training a random forest model by adopting the training sample set.
Optionally, after determining, by using a random forest model, a target measurement result combination whose elements are associated with the same measured object from the at least two candidate measurement result combinations and taking the elements in the target measurement result combination as a fusion result, the method further includes:
and optimizing and updating the random forest model according to the at least two candidate measurement result combinations and the fusion result.
One embodiment in the above application has the following advantages or benefits: a training sample set is constructed from the sample measurement results acquired by different sensors, and the random forest model is trained with a large number of training samples, ensuring the accuracy of the output of the random forest model. In addition, in this embodiment, after each data fusion is completed, the parameters in the random forest model are continuously optimized and updated with the current fusion result, improving the accuracy of the probability output by the random forest model.
Optionally, the different sensors are any two of a laser radar, a millimeter wave radar and an image collector; the measurement is a velocity measurement and/or a position measurement.
Optionally, if the different sensors include an image collector, the measurement result further includes: a projection result;
wherein the projection result comprises at least one of: marking frame information of the measured object in the image acquired by the image collector; the number of points of the measured object acquired by the laser radar or the millimeter wave radar; and projection data obtained by projecting the points of the measured object acquired by the laser radar or the millimeter wave radar into the image of the measured object acquired by the image collector.
One embodiment in the above application has the following advantages or benefits: the types of the different sensors and the content of their measurement results are introduced. The measurement results in this embodiment cover more dimensions, so this embodiment can judge whether measurement results of various dimensions are associated with the same measured object. Judging through multi-dimensional measurement results whether the elements in a measurement result combination are associated with the same measured object further improves the judgment accuracy.
In a second aspect, an embodiment of the present application provides a data fusion processing apparatus, including:
the measurement result combination module is used for determining at least two candidate measurement result combinations according to the measurement results of different sensors on the measured object; the candidate measurement result combination comprises at least one element; the element comprises at least two measurements;
and the fusion result determining module is used for determining, from the at least two candidate measurement result combinations by using a random forest model, a target measurement result combination whose elements are associated with the same measured object, and taking the elements in the target measurement result combination as a fusion result.
In a third aspect, an embodiment of the present application provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the data fusion processing method according to any embodiment of the application.
In a fourth aspect, embodiments of the present application provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute a data fusion processing method according to any of the embodiments of the present application.
One embodiment in the above application has the following advantages or benefits: a random forest model is used to determine, from the candidate measurement result combinations formed by the measurement results of different sensors on the measured object, a target measurement result combination whose elements are associated with the same measured object, and the elements in the target measurement result combination are then taken as the fusion result. In the scheme of this embodiment, the measurement results associated with the same measured object are determined by the random forest model; neither a normal-distribution assumption nor manual parameter tuning is needed in the whole process, which improves the accuracy of the judgment result as well as the accuracy and the recall rate of the finally obtained fusion result.
Other effects of the above-described alternative will be described below with reference to specific embodiments.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a flow chart of a data fusion processing method according to a first embodiment of the present application;
FIG. 2 is a flow chart of a data fusion processing method according to a second embodiment of the present application;
fig. 3 is a schematic structural diagram of a data fusion processing apparatus according to a fourth embodiment of the present application;
fig. 4 is a block diagram of an electronic device for implementing the data fusion processing method according to the embodiment of the present application.
Detailed Description
The following description of the exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments of the application for the understanding of the same, which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
First embodiment
Fig. 1 is a flowchart of a data fusion processing method according to a first embodiment of the present application. This embodiment is applicable to data fusion of measurement results acquired by different sensors for different measured objects. The method can be executed by a data fusion processing apparatus, which can be implemented in software and/or hardware and is preferably configured in an electronic device. As shown in fig. 1, the method specifically includes the following steps:
s101, determining at least two candidate measurement result combinations according to the measurement results of different sensors on the measured object.
In the present application, the type of the sensor may include a laser radar, a millimeter wave radar, an image collector, and the like. The different sensors are at least two sensors of different types. Preferably, the present embodiment selects two different types of sensors, i.e., any two of the laser radar, the millimeter wave radar, and the image collector.
The measured object in the application can be determined according to the current actual acquisition scene and the purpose of the fused data. For example, if the acquisition scene is a street scene and the fused data is used to assist an autonomous vehicle in determining information about obstacles (e.g., pedestrians and vehicles) on the street, the measured objects may be the obstacles on the street. The number of measured objects in the application may be one or more. Optionally, because different sensors have different acquisition ranges or acquisition positions, the numbers of measured objects they detect at the same time in the same scene may be the same or different; for example, if there are three measured objects in the current scene, the laser radar and the image collector may both detect all three, or the laser radar may detect all three while the image collector detects only two of them.
In the process of detecting the measured object, a sensor obtains data characteristics of the measured object in different measurement dimensions. Optionally, the types of measurement results may include, but are not limited to: velocity measurement results, position measurement results, projection results, and so on. Optionally, in this embodiment, sensors of different types produce different types of measurement results for the measured object. The correspondence between the type of sensor and the types of its measurement results is described in detail in the following embodiments.
The candidate measurement result combinations in the present application may be obtained by permuting and combining, according to a certain rule, the measurement results of at least one measured object detected by different sensors. There are multiple candidate measurement result combinations; each candidate measurement result combination includes at least one element, and each element includes at least two measurement results, i.e., each element is formed by combining one measurement result from each of the different sensors. Specifically, in the embodiment of the application, each of the at least two candidate measurement result combinations may be determined from the measurement results of the different sensors on the measured object as follows: select one measurement result from all the measurement results of each sensor to form the first element of the candidate measurement result combination; then select one measurement result from the remaining measurement results of each sensor to form the second element; and so on, until some sensor has no remaining measurement results, at which point the candidate measurement result combination is complete. By permuting and combining all the measurement results of the different sensors in this way, the generated candidate measurement result combinations cover all possible arrangements.
Optionally, when the number of different sensors is large, each element in a candidate measurement result combination involves many measurement results, which may reduce the accuracy of judging whether the measurement results are associated with the same measured object. Therefore, to ensure the accuracy of the fusion result, the different sensors of the application are preferably two sensors, namely a first sensor and a second sensor. If the number m of measurement results of the first sensor is less than or equal to the number n of measurement results of the second sensor, the number of elements in each candidate measurement result combination determined in this step is equal to m; the number of candidate measurement result combinations is the quotient of a first factorial and a second factorial, i.e. n!/(n-m)!, where the first factorial is the factorial n! of the number of measurement results of the second sensor and the second factorial is the factorial (n-m)! of the difference between the numbers of measurement results of the two sensors.
For example, assuming that the different sensors are a lidar and a millimeter wave radar (radar), the measured objects are objects 1 and 2, the measurement results acquired by the lidar for objects 1 and 2 are l1 and l2, and the measurement results acquired by the millimeter wave radar for objects 1 and 2 are r1 and r2, two candidate measurement result combinations can be determined according to the method described above: (l1r1, l2r2) and (l1r2, l2r1). Each candidate measurement result combination includes two elements; for example, the candidate measurement result combination (l1r1, l2r2) includes the two elements l1r1 and l2r2, and each element is formed by combining one measurement result of the lidar with one measurement result of the millimeter wave radar.
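As a minimal illustration of this enumeration (a Python sketch; the patent does not prescribe any implementation, and all names here are illustrative), the candidate measurement result combinations and the n!/(n-m)! count can be reproduced with itertools:

```python
import math
from itertools import permutations

def candidate_combinations(first, second):
    """Enumerate the candidate measurement result combinations: each of the
    m measurement results of the smaller-count sensor (`first`) is paired
    with a distinct measurement result of the larger-count sensor
    (`second`, n results), giving n!/(n-m)! combinations of m elements."""
    assert len(first) <= len(second)
    combos = [tuple(zip(first, perm)) for perm in permutations(second, len(first))]
    # Sanity check against the count stated in the text: n! / (n - m)!
    m, n = len(first), len(second)
    assert len(combos) == math.factorial(n) // math.factorial(n - m)
    return combos

# The example from the text: lidar results l1, l2 and radar results r1, r2.
print(candidate_combinations(["l1", "l2"], ["r1", "r2"]))
# -> [(('l1', 'r1'), ('l2', 'r2')), (('l1', 'r2'), ('l2', 'r1'))]
```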
S102, determining, from the at least two candidate measurement result combinations by using a random forest model, a target measurement result combination whose elements are associated with the same measured object, and taking the elements in the target measurement result combination as the fusion result.
In the application, the random forest model may be a model constructed based on the random forest algorithm, and it may be used to determine whether the measurement results in each element of a candidate measurement result combination are associated with the same measured object. For example, for the element l1r1, the random forest model may determine whether l1 and r1 are associated with the same measured object, i.e., whether the measurement result l1 of the lidar and the measurement result r1 of the millimeter wave radar correspond to the same object. The random forest model of the application is trained in advance on a large amount of data; the training process is described in detail in the following embodiments and is not detailed here.
Optionally, in the application, the random forest model may be used to analyze the elements in each candidate measurement result combination and find, as the target measurement result combination, the candidate measurement result combination in which every element is associated with the same measured object. The specific implementation may comprise the following three sub-steps:
and S1021, inputting elements in at least two candidate measurement result combinations into a random forest model to obtain the probability that the elements are associated with the same measured object.
Specifically, in this sub-step, each element in each candidate measurement result combination is input into the pre-trained random forest model, which analyzes the input element based on the algorithm learned during training and calculates and outputs the probability that the input element is associated with the same measured object. Illustratively, assuming the two candidate measurement result combinations (l1r1, l2r2) and (l1r2, l2r1), they include the following elements: l1r1, l2r2, l1r2 and l2r1. These four elements are input into the random forest model in turn to obtain: the probability that l1 and r1 in l1r1 are associated with the same object, denoted probability 1; the probability that l2 and r2 in l2r2 are associated with the same object, probability 2; the probability that l1 and r2 in l1r2 are associated with the same object, probability 3; and the probability that l2 and r1 in l2r1 are associated with the same object, probability 4.
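A minimal sketch of S1021, assuming a scikit-learn random forest and a flat feature vector per element; both are assumptions of this sketch, not requirements of the patent:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def element_probabilities(model: RandomForestClassifier, element_features):
    """Score each element with the probability that its measurement
    results are associated with the same measured object.

    `element_features` is an (N, D) array in which each row concatenates
    the feature values of one element (e.g. lidar x, y, vx, vy followed by
    radar x, y, vx, vy). Column 1 of predict_proba is taken as the
    "same measured object" class; this depends on the label encoding
    chosen during training."""
    X = np.asarray(element_features, dtype=float)
    return model.predict_proba(X)[:, 1]
```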
S1022, determining the overall similarity of the at least two candidate measurement result combinations according to the probability that the elements in the at least two candidate measurement result combinations are associated with the same measured object.
Specifically, in this sub-step, for each candidate measurement result combination, the probabilities of the included elements obtained in S1021 are accumulated, so as to obtain the overall similarity of the candidate measurement result combination. Illustratively, the probability 1 and the probability 2 obtained in S1021 are summed to obtain the overall similarity, i.e., the overall similarity 1, of the candidate measurement result combination (l1r1, l2r2), and the probability 3 and the probability 4 obtained in S1021 are summed to obtain the overall similarity, i.e., the overall similarity 2, of the candidate measurement result combination (l1r2, l2r 1).
S1023, taking the candidate measurement result combination with the highest overall similarity as the target measurement result combination.
Specifically, this sub-step compares the overall similarities of the candidate measurement result combinations calculated in S1022 and selects the candidate measurement result combination with the highest overall similarity as the target measurement result combination. For example, if overall similarity 1 calculated in S1022 is greater than overall similarity 2, the candidate measurement result combination (l1r1, l2r2) corresponding to overall similarity 1 is taken as the target measurement result combination.
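Reusing element_probabilities from the sketch above, S1022 and S1023 reduce to a sum and an argmax. The featurize helper is hypothetical; it stands in for whatever feature layout was used when training the model:

```python
def select_target_combination(model, combos, featurize):
    """S1022 + S1023: sum the per-element association probabilities of
    each candidate combination (its overall similarity) and return the
    combination with the highest overall similarity."""
    best_combo, best_similarity = None, float("-inf")
    for combo in combos:
        probs = element_probabilities(model, [featurize(e) for e in combo])
        similarity = float(probs.sum())   # S1022: accumulate element probabilities
        if similarity > best_similarity:  # S1023: keep the highest overall similarity
            best_combo, best_similarity = combo, similarity
    return best_combo  # its elements are taken as the fusion result
```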
In the present application, the process of data fusion of the measurement results of different sensors is essentially to combine the data characteristics of the same measured object collected by different sensors to obtain high-precision information about each attribute of the measured object. Since each element in the target measurement result combination determined in this step is a combination of measurement results associated with the same measured object, once the target measurement result combination is determined, its elements can be directly taken as the fusion results of the measurement. For example, if the target measurement result combination is (l1r1, l2r2), the element l1r1 it contains may be taken as the fusion result for measured object 1, and the element l2r2 as the fusion result for measured object 2.
According to the technical scheme of the embodiment of the application, a random forest model is used to determine, from the candidate measurement result combinations formed by the measurement results of different sensors on the measured object, a target measurement result combination whose elements are associated with the same measured object, and the elements in the target measurement result combination are then taken as the fusion result. In the scheme of this embodiment, the measurement results associated with the same measured object are determined by the random forest model; neither a normal-distribution assumption nor manual parameter tuning is needed in the whole process, which improves the accuracy of the judgment result as well as the accuracy and the recall rate of the finally obtained fusion result.
Second embodiment
Fig. 2 is a flowchart of a data fusion processing method according to a second embodiment of the present application, and this embodiment performs further optimization based on the first embodiment, specifically explaining a training process of a random forest model. As shown in fig. 2, the method specifically includes the following steps:
s201, obtaining sample measurement result sets collected by different sensors, adding classification labels to the sample measurement result sets, and constructing training sample sets.
In the embodiment of the present application, the sample measurement result set may be a set of a large number of measurement results acquired in advance by different sensors for a plurality of different measured objects. It should be noted that the types of the sample measurement results acquired by the different sensors are the same as the types of the measurement results acquired by the different sensors for the measured object when data fusion is actually performed subsequently.
Specifically, in this embodiment, different sensors may be controlled in advance to acquire measurement results of a plurality of different measured objects, and the acquired results form the sample measurement result sets. When adding classification labels, one measurement result is selected from the sample measurement result set of each sensor and combined, and a label indicating whether the combination is associated with the same measured object is then attached: if every measurement result in the combination was acquired from the same measured object, a label of "associated with the same measured object" is added; otherwise, a label of "not associated with the same measured object" is added. Each labeled combination can serve as one training sample, and by permuting and combining the sample measurement results of the different sensors in this way, a training sample set containing a large number of training samples is obtained.
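A sketch of this sample construction, assuming each sample carries its feature vector and that a ground-truth predicate same_object (e.g. from manual annotation) is available; both are assumptions of the sketch:

```python
from itertools import product

def build_training_set(first_samples, second_samples, same_object):
    """Combine one sample measurement result from each sensor and attach
    a binary classification label: 1 if both results were acquired from
    the same measured object, 0 otherwise."""
    X, y = [], []
    for a, b in product(first_samples, second_samples):
        X.append(list(a["features"]) + list(b["features"]))
        y.append(1 if same_object(a, b) else 0)
    return X, y
```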
S202, training the random forest model by adopting the training sample set.
In the embodiment of the application, training the random forest model with the training sample set may proceed as follows: an initial random forest model is constructed based on the random forest algorithm, each group of training samples in the training sample set is input into it in turn, the model is trained, and its parameters are optimized. After one stage of training is finished (for example, after training for a certain time, or after a preset number of training samples), the accuracy of the trained random forest model may be checked with test samples. If the check passes, i.e. the accuracy of the probability the model outputs for the test samples being associated with the same measured object exceeds a preset requirement, training of the current random forest model is finished; otherwise, the accuracy of the trained model does not meet the requirement, and a new training sample set must be obtained to continue training until the accuracy of the random forest model meets the preset requirement.
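A sketch of one training stage with a held-out check, again assuming scikit-learn; the split ratio and the accuracy threshold are illustrative values, not figures from the patent:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def train_association_model(X, y, required_accuracy=0.95):
    """Train the random forest on the labeled combinations and verify one
    training stage on held-out test samples."""
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    model = RandomForestClassifier(n_estimators=100)
    model.fit(X_train, y_train)
    accuracy = model.score(X_test, y_test)
    if accuracy < required_accuracy:
        # Per the text: obtain a new training sample set and keep training.
        raise RuntimeError(f"accuracy {accuracy:.3f} below the preset requirement")
    return model
```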
S203, determining at least two candidate measurement result combinations according to the measurement results of different sensors on the measured object.
Wherein the candidate measurement combination comprises at least one element; the element includes at least two measurements.
S204, determining, from the at least two candidate measurement result combinations by using a random forest model, a target measurement result combination whose elements are associated with the same measured object, and taking the elements in the target measurement result combination as the fusion result.
S205, optimizing and updating the random forest model according to the at least two candidate measurement result combinations and the fusion result.
To continuously improve the accuracy of the random forest model while it is in use, in the embodiment of the application the current random forest model is optimized and updated according to the fusion result after each data fusion completed with it. The specific optimization and update process may be as follows: according to the elements in the at least two candidate measurement result combinations, the probabilities output by the random forest model during the current fusion process that those elements are associated with the same measured object, and whether those output probabilities proved correct, each parameter in the current random forest model is optimized and updated through back propagation.
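Random forests have no gradient-based (back-propagation) parameter update, so as one plausible reading of this step, the sketch below simply folds the newly labeled elements back into the training data and refits; this is a stand-in under that assumption, not the patent's stated mechanism:

```python
def update_model(model, X_hist, y_hist, new_elements, new_labels):
    """Fold the elements scored in the latest fusion, together with
    whether their predicted association proved correct, back into the
    accumulated training data and refit the forest."""
    X_hist = list(X_hist) + [list(e) for e in new_elements]
    y_hist = list(y_hist) + list(new_labels)
    model.fit(X_hist, y_hist)
    return model, X_hist, y_hist
```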
According to the technical scheme of the embodiment of the application, before data fusion, a training sample set is constructed from the sample measurement results collected by different sensors, and the random forest model is trained with a large number of training samples, ensuring the accuracy of the model's output. During data fusion, the random forest model determines, from the candidate measurement result combinations formed by the measurement results of different sensors on the measured object, a target measurement result combination whose elements are associated with the same measured object, and the elements of that combination are taken as the fusion result. After data fusion, the parameters of the random forest model are continuously optimized and updated with the fusion result, improving the accuracy of the model's output probability. The scheme of this embodiment guarantees the accuracy of the random forest model's output probability through training before data fusion and optimization updating after data fusion, overcomes the low accuracy of prior-art fusion based on normal distributions and manual parameter tuning, and thus guarantees the accuracy and the recall rate of the finally obtained fusion result.
Third embodiment
In the embodiment of the present application, the types of measurement results of different sensors for the measured object are explained on the basis of the above embodiments. The types of sensors in the embodiment of the application may include, but are not limited to: laser radar, millimeter wave radar, image collector (such as a camera), and so on. To improve the accuracy of subsequently judging whether the measurement results of different sensors are associated with the same measured object, the different sensors in the embodiment of the application are preferably any two of a laser radar, a millimeter wave radar and an image collector; the measurement results of the different sensors may include velocity measurement results and/or position measurement results. The velocity and position measurement results in the application may include measurements in multiple direction dimensions; for example, they may include measurements in the x coordinate direction and in the y coordinate direction.
Optionally, if the different sensors include an image collector, the measurement results further include a projection result, where the projection result comprises at least one of: marking frame information of the measured object in the image acquired by the image collector; the number of points of the measured object acquired by the laser radar or the millimeter wave radar; and projection data obtained by projecting the points of the measured object acquired by the laser radar or the millimeter wave radar into the image of the measured object acquired by the image collector. The marking frame of the measured object in the image acquired by the image collector may be a bounding box that frames the measured object contained in the image.
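One way such projection results could be computed is sketched below, assuming a pinhole camera model with intrinsics K and points already in camera coordinates; both assumptions are illustrative, as the patent does not specify the projection math:

```python
import numpy as np

def points_in_marking_frame(points_3d, K, frame):
    """Project sensor points (N, 3) into the image with a pinhole model
    and count how many land inside the measured object's marking frame
    (x1, y1, x2, y2)."""
    pts = np.asarray(points_3d, dtype=float)
    uvw = (K @ pts.T).T              # homogeneous pixel coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]    # perspective division by depth
    x1, y1, x2, y2 = frame
    inside = ((uv[:, 0] >= x1) & (uv[:, 0] <= x2) &
              (uv[:, 1] >= y1) & (uv[:, 1] <= y2))
    return int(inside.sum()), uv[inside]
```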
Next, the present embodiment explains the types of measurement results of the measured object acquired by the following three different sensor combinations:
the first condition, if the different sensors of this application are laser radar and millimeter wave radar, then these two different sensors include to the measuring result of testee collection:
1) the x coordinate of each measured object detected by the laser radar;
2) the y coordinate of each measured object detected by the laser radar;
3) the speed of each measured object in the x direction detected by the laser radar;
4) the speed of each measured object in the y direction detected by the laser radar;
5) the x coordinate of each measured object detected by the millimeter wave radar;
6) the y coordinate of each measured object detected by the millimeter wave radar;
7) the speed of each measured object in the x direction detected by the millimeter wave radar;
8) the speed of each measured object in the y direction detected by the millimeter wave radar.
In case 1, the measurement results include only velocity and position measurement results, and no projection result. That is, 1)-2) above are the position measurement results of the laser radar for the measured object, and 3)-4) are its velocity measurement results; 5)-6) are the position measurement results of the millimeter wave radar for the measured object, and 7)-8) are its velocity measurement results.
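For case 1, an element's feature vector (the model input, and a possible featurize for the selection sketch earlier) could simply concatenate the eight items; the dictionary keys are illustrative:

```python
def case1_element_features(lidar, radar):
    """Concatenate the eight case-1 measurement results into the feature
    vector of one element."""
    return [
        lidar["x"], lidar["y"], lidar["vx"], lidar["vy"],   # items 1)-4)
        radar["x"], radar["y"], radar["vx"], radar["vy"],   # items 5)-8)
    ]
```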
Case 2: if the different sensors of the application are a laser radar and an image collector, the measurement results acquired by these two sensors for the measured object include:
1) the x coordinate of each measured object detected by the laser radar;
2) the y coordinate of each measured object detected by the laser radar;
3) the speed of each measured object in the x direction detected by the laser radar;
4) the speed of each measured object in the y direction detected by the laser radar;
5) the x coordinate of each measured object detected by the image collector;
6) the y coordinate of each measured object detected by the image collector;
7) the speed of each measured object in the x direction detected by the image collector;
8) the speed of each measured object in the y direction detected by the image collector;
9) the number of points of each measured object detected by the laser radar;
10) the number of points of the measured object acquired by the laser radar that fall, after projection, inside the marking frame of the measured object acquired by the image collector;
11) the x coordinate of the upper left corner of the marking frame of the measured object acquired by the image collector;
12) the y coordinate of the upper left corner of the marking frame of the measured object acquired by the image collector;
13) the x coordinate of the lower right corner of the marking frame of the measured object acquired by the image collector;
14) the y coordinate of the lower right corner of the marking frame of the measured object acquired by the image collector;
15) the minimum x coordinate of the two-dimensional points obtained by projecting the three-dimensional points of the measured object acquired by the laser radar onto the image of the measured object acquired by the image collector;
16) the minimum y coordinate of the two-dimensional points obtained by projecting the three-dimensional points of the measured object acquired by the laser radar onto the image of the measured object acquired by the image collector;
17) the maximum x coordinate of the two-dimensional points obtained by projecting the three-dimensional points of the measured object acquired by the laser radar onto the image of the measured object acquired by the image collector;
18) the maximum y coordinate of the two-dimensional points obtained by projecting the three-dimensional points of the measured object acquired by the laser radar onto the image of the measured object acquired by the image collector.
In case 2, the measurement results include not only velocity and position measurement results but also projection results. That is, 1)-2) above are the position measurement results of the laser radar for the measured object, and 3)-4) are its velocity measurement results; 5)-6) are the position measurement results of the image collector, and 7)-8) are its velocity measurement results; 9) is the number of points of the measured object acquired by the laser radar; 11)-14) are the marking frame information of the measured object in the image acquired by the image collector; and 10) and 15)-18) are projection data of points of the measured object acquired by the laser radar projected into the image of the measured object acquired by the image collector.
Case 3: if the different sensors of the application are a millimeter wave radar and an image collector, the measurement results acquired by these two sensors for the measured object include:
1) the x coordinate of each measured object detected by the image collector;
2) the y coordinate of each measured object detected by the image collector;
3) the speed of each measured object in the x direction detected by the image collector;
4) the speed of each measured object in the y direction detected by the image collector;
5) the x coordinate of each measured object detected by the millimeter wave radar;
6) the y coordinate of each measured object detected by the millimeter wave radar;
7) the speed of each measured object in the x direction detected by the millimeter wave radar;
8) the speed of each measured object in the y direction detected by the millimeter wave radar;
9) the width of the marking frame of the measured object in the image acquired by the image collector;
10) the height of the marking frame of the measured object in the image acquired by the image collector;
11) the x-direction distance between the point of the measured object acquired by the millimeter wave radar, after projection onto the image acquired by the image collector, and the marking frame of the measured object closest to that point;
12) the y-direction distance between the point of the measured object acquired by the millimeter wave radar, after projection onto the image acquired by the image collector, and the marking frame of the measured object closest to that point.
In case 3, the measurement results include not only velocity and position measurement results but also projection results. That is, 1)-2) above are the position measurement results of the image collector for the measured object, and 3)-4) are its velocity measurement results; 5)-6) are the position measurement results of the millimeter wave radar, and 7)-8) are its velocity measurement results; 9)-10) are the marking frame information of the measured object in the image acquired by the image collector; and 11)-12) are projection data of the point of the measured object acquired by the millimeter wave radar projected into the image of the measured object acquired by the image collector.
It should be noted that in the three cases described in this embodiment, the measurement results comprise position and velocity measurement results, or position, velocity and projection results, so their content is relatively comprehensive. In actual data fusion, however, a subset of the listed measurement results may be selected according to the actual fusion requirements; it is not required that every kind of measurement result described above be included in each case.
Optionally, based on the above three cases, a random forest model may be trained for each case in the embodiment of the application, and the corresponding random forest model is selected according to the types of the different sensors to pick the target measurement result combination from the at least two candidate measurement result combinations formed by their measurement results, the elements of which are then taken as the fusion result. Alternatively, a single random forest model applicable to all three cases may be trained to perform data fusion on the measurement results of the measured object by the different sensors introduced in the three cases; this embodiment does not limit the choice. The specific data fusion procedure has been described in the above embodiments and is not repeated here.
According to the technical scheme of the embodiment of the application, the types of the different sensors and the content of their measurement results are introduced. The measurement results in this embodiment cover comprehensive dimensions, and these comprehensive multi-dimensional measurement results can be applied to the data fusion processing method introduced in the above embodiments, so that the random forest model judges whether measurement results of various dimensions are associated with the same measured object. This improves the judgment accuracy, and the comprehensive multi-dimensional measurement results also improve the comprehensiveness of the finally obtained fusion result.
Fourth embodiment
Fig. 3 is a schematic structural diagram of a data fusion processing device according to a fourth embodiment of the present application, which is applicable to a case where data fusion is performed on measurement results acquired by different sensors for different measured objects, and the device can implement the data fusion processing method according to any embodiment of the present application. The apparatus 300 specifically comprises the following:
the measurement result combination module 301 is configured to determine at least two candidate measurement result combinations according to measurement results of different sensors on a measured object; the candidate measurement result combination comprises at least one element; the element comprises at least two measurements;
a fusion result determining module 302, configured to determine, from the at least two candidate measurement result combinations by using a random forest model, a target measurement result combination whose elements are associated with the same measured object, and take the elements in the target measurement result combination as the fusion result.
According to the technical scheme of the embodiment of the application, a random forest model is used to determine, from the candidate measurement result combinations formed by the measurement results of different sensors on the measured object, a target measurement result combination whose elements are associated with the same measured object, and the elements in the target measurement result combination are then taken as the fusion result. In the scheme of this embodiment, the measurement results associated with the same measured object are determined by the random forest model; neither a normal-distribution assumption nor manual parameter tuning is needed in the whole process, which improves the accuracy of the judgment result as well as the accuracy and the recall rate of the finally obtained fusion result.
Further, the number of elements in each candidate measurement result combination is equal to the number m of measurement results of the first sensor; the number of candidate measurement result combinations is the quotient of a first factorial and a second factorial, i.e. n!/(n-m)!, where the first factorial n! is the factorial of the number of measurement results of the second sensor and the second factorial (n-m)! is the factorial of the difference between the numbers of measurement results of the second sensor and the first sensor; the number of measurement results of the first sensor is less than or equal to that of the second sensor.
Further, when determining, by using the random forest model, the target measurement result combination whose elements are associated with the same measured object from the at least two candidate measurement result combinations, the fusion result determining module 302 is specifically configured to:
inputting elements in the at least two candidate measurement result combinations into a random forest model to obtain the probability that the elements are associated with the same measured object;
determining the overall similarity of the at least two candidate measurement result combinations according to the probability that the elements in the at least two candidate measurement result combinations are associated with the same measured object;
and take the candidate measurement result combination with the highest overall similarity as the target measurement result combination.
Further, the apparatus further includes a model training module, where the model training module is configured to:
acquiring sample measurement result sets acquired by different sensors, adding classification labels to the sample measurement result sets, and constructing training sample sets;
and training a random forest model by adopting the training sample set.
Further, the apparatus further includes a model updating module, where the model updating module is configured to:
and optimizing and updating the random forest model according to the at least two candidate measurement result combinations and the fusion result.
Furthermore, the different sensors are any two of a laser radar, a millimeter wave radar and an image collector; the measurement is a velocity measurement and/or a position measurement.
Further, if the different sensors include an image collector, the measurement result further includes: a projection result;
wherein the projection result comprises at least one of: marking frame information of the measured object in the image acquired by the image collector; the number of points of the measured object acquired by the laser radar or the millimeter wave radar; and projection data obtained by projecting the points of the measured object acquired by the laser radar or the millimeter wave radar into the image of the measured object acquired by the image collector.
Fifth embodiment
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 4 is a block diagram of an electronic device according to the data fusion processing method in the embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 4, the electronic device includes: one or more processors 401, a memory 402, and interfaces for connecting the components, including high-speed interfaces and low-speed interfaces. The components are interconnected by different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information for a graphical user interface (GUI) on an external input/output device, such as a display device coupled to an interface. In other embodiments, multiple processors and/or multiple buses may be used with multiple memories, as desired. Likewise, multiple electronic devices may be connected, with each device providing part of the necessary operations, e.g., as a server array, a group of blade servers, or a multi-processor system. In fig. 4, one processor 401 is taken as an example.
The memory 402, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the data fusion processing method in the embodiments of the present application (for example, the measurement result combining module 301 and the fusion result determining module 302 shown in fig. 3). The processor 401 executes various functional applications and data processing of the server by running the non-transitory software programs, instructions, and modules stored in the memory 402, that is, implements the data fusion processing method in the above method embodiments.
The memory 402 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created through use of the electronic device of the data fusion processing method, and the like. Further, the memory 402 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 402 may optionally include memories remotely located relative to the processor 401, and these remote memories may be connected over a network to the electronic device of the data fusion processing method. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the data fusion processing method may further include: an input device 403 and an output device 404. The processor 401, the memory 402, the input device 403 and the output device 404 may be connected by a bus or other means, and fig. 4 illustrates an example of a connection by a bus.
The input device 403 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device of the data fusion processing method, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or other input devices. The output device 404 may include a display device, an auxiliary lighting device such as a Light Emitting Diode (LED), a tactile feedback device, and the like; the tactile feedback device is, for example, a vibration motor or the like. The Display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) Display, and a plasma Display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., a magnetic disk, an optical disk, a memory, or a programmable logic device (PLD)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as machine-readable signals. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device for displaying information to a user, for example, a Cathode Ray Tube (CRT) or an LCD monitor; and a keyboard and a pointing device, such as a mouse or a trackball, by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here, or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solutions of the embodiments of the present application, a random forest model is used to determine, from candidate measurement result combinations formed from different sensors' measurement results of a measured object, a target measurement result combination whose elements are associated with the same measured object, and the elements in the target measurement result combination are then taken as the fusion result. In this scheme, the measurement results associated with the same measured object are determined by the random forest model; the whole process requires neither a normal-distribution assumption nor manual parameter tuning, which improves the accuracy of the association judgment and thereby the accuracy and recall rate of the finally obtained fusion result.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders; this is not limited herein as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above specific embodiments do not constitute a limitation on the protection scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations, and substitutions may be made according to design requirements and other factors. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall be included in the protection scope of the present application.
Claims (16)
1. A data fusion processing method is characterized by comprising the following steps:
determining at least two candidate measurement result combinations according to measurement results of different sensors for a measured object, wherein each candidate measurement result combination comprises at least one element, and each element comprises at least two measurement results;
and determining, from the at least two candidate measurement result combinations by using a random forest model, a target measurement result combination whose elements are associated with the same measured object, and taking the elements in the target measurement result combination as a fusion result.
2. The method of claim 1, wherein the number of elements in each candidate measurement result combination is equal to the number of measurement results of a first sensor; the number of candidate measurement result combinations is the quotient of a first factorial and a second factorial; the first factorial is the factorial of the number of measurement results of a second sensor; the second factorial is the factorial of the difference between the numbers of measurement results of the second sensor and the first sensor; and the number of measurement results of the first sensor is less than or equal to the number of measurement results of the second sensor.
3. The method of claim 1, wherein determining, from the at least two candidate measurement result combinations by using the random forest model, the target measurement result combination whose elements are associated with the same measured object comprises:
inputting the elements of the at least two candidate measurement result combinations into the random forest model to obtain, for each element, a probability that the measurement results in the element are associated with the same measured object;
determining an overall similarity of each of the at least two candidate measurement result combinations according to the probabilities that its elements are associated with the same measured object;
and taking the candidate measurement result combination with the highest overall similarity as the target measurement result combination.
4. The method of claim 1, further comprising, before determining, by using a random forest model, a target measurement result combination whose elements are associated with the same measured object from the at least two candidate measurement result combinations:
acquiring sample measurement result sets collected by the different sensors, adding classification labels to the sample measurement result sets, and constructing a training sample set;
and training the random forest model with the training sample set.
5. The method of claim 4, wherein after determining, from the at least two candidate measurement result combinations by using the random forest model, the target measurement result combination whose elements are associated with the same measured object, and taking the elements in the target measurement result combination as the fusion result, the method further comprises:
and optimizing and updating the random forest model according to the at least two candidate measurement result combinations and the fusion result.
6. The method of any one of claims 1-4, wherein the different sensors are any two of a laser radar, a millimeter-wave radar, and an image collector; and the measurement results are velocity measurements and/or position measurements.
7. The method of claim 6, wherein if the different sensors include an image collector, the measurement results further include a projection result;
wherein the projection result includes at least one of: bounding-box information of the measured object in the image collected by the image collector; the number of points of the measured object collected by the laser radar or the millimeter-wave radar; and projection data obtained by projecting the points of the measured object collected by the laser radar or the millimeter-wave radar into the image of the measured object collected by the image collector.
8. A data fusion processing apparatus, characterized in that the apparatus comprises:
the measurement result combination module is configured to determine at least two candidate measurement result combinations according to measurement results of different sensors for a measured object, wherein each candidate measurement result combination comprises at least one element, and each element comprises at least two measurement results;
and the fusion result determining module is configured to determine, from the at least two candidate measurement result combinations by using a random forest model, a target measurement result combination whose elements are associated with the same measured object, and to take the elements in the target measurement result combination as a fusion result.
9. The apparatus of claim 8, wherein the number of elements in each candidate measurement result combination is equal to the number of measurement results of a first sensor; the number of candidate measurement result combinations is the quotient of a first factorial and a second factorial; the first factorial is the factorial of the number of measurement results of a second sensor; the second factorial is the factorial of the difference between the numbers of measurement results of the second sensor and the first sensor; and the number of measurement results of the first sensor is less than or equal to the number of measurement results of the second sensor.
10. The apparatus according to claim 8, wherein the fusion result determining module is specifically configured to:
input the elements of the at least two candidate measurement result combinations into the random forest model to obtain, for each element, a probability that the measurement results in the element are associated with the same measured object;
determine an overall similarity of each of the at least two candidate measurement result combinations according to the probabilities that its elements are associated with the same measured object;
and take the candidate measurement result combination with the highest overall similarity as the target measurement result combination.
11. The apparatus according to claim 8, further comprising a model training module, specifically configured to:
acquire sample measurement result sets collected by the different sensors, add classification labels to the sample measurement result sets, and construct a training sample set;
and train the random forest model with the training sample set.
12. The apparatus according to claim 11, further comprising a model update module, specifically configured to:
optimize and update the random forest model according to the at least two candidate measurement result combinations and the fusion result.
13. The apparatus according to any one of claims 8 to 11, wherein the different sensors are any two of a laser radar, a millimeter-wave radar, and an image collector; and the measurement results are velocity measurements and/or position measurements.
14. The apparatus of claim 13, wherein if the different sensors include an image collector, the measurement results further include a projection result;
wherein the projection result includes at least one of: bounding-box information of the measured object in the image collected by the image collector; the number of points of the measured object collected by the laser radar or the millimeter-wave radar; and projection data obtained by projecting the points of the measured object collected by the laser radar or the millimeter-wave radar into the image of the measured object collected by the image collector.
15. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the data fusion processing method of any one of claims 1-7.
16. A non-transitory computer readable storage medium storing computer instructions for causing a computer to execute the data fusion processing method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010112841.7A CN111310840B (en) | 2020-02-24 | 2020-02-24 | Data fusion processing method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111310840A true CN111310840A (en) | 2020-06-19 |
CN111310840B CN111310840B (en) | 2023-10-17 |
Family
ID=71158341
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010112841.7A Active CN111310840B (en) | 2020-02-24 | 2020-02-24 | Data fusion processing method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111310840B (en) |
Patent Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE10324897A1 (en) * | 2003-05-30 | 2004-12-23 | Robert Bosch Gmbh | Object detection method for a vehicle driver assistance system based on maximum a-posteriori estimation of Bayesian probability functions for independently obtained measurement values |
US20090228409A1 (en) * | 2008-03-10 | 2009-09-10 | Eklund Neil H | Method, Apparatus And Computer Program Product For Predicting A Fault Utilizing Multi-Resolution Classifier Fusion |
CN102323323A (en) * | 2011-07-12 | 2012-01-18 | 南京医科大学 | Preparation method for 17 beta-estradiol molecular imprinting film electrochemical sensor |
US20150098609A1 (en) * | 2013-10-09 | 2015-04-09 | Honda Motor Co., Ltd. | Real-Time Multiclass Driver Action Recognition Using Random Forests |
US20180189667A1 (en) * | 2016-12-29 | 2018-07-05 | Intel Corporation | Entropy-based weighting in random forest models |
CN107358142A (en) * | 2017-05-15 | 2017-11-17 | 西安电子科技大学 | Polarimetric SAR Image semisupervised classification method based on random forest composition |
CN108304490A (en) * | 2018-01-08 | 2018-07-20 | 有米科技股份有限公司 | Text based similarity determines method, apparatus and computer equipment |
CN108333569A (en) * | 2018-01-19 | 2018-07-27 | 杭州电子科技大学 | A kind of asynchronous multiple sensors fusion multi-object tracking method based on PHD filtering |
CN108614601A (en) * | 2018-04-08 | 2018-10-02 | 西北农林科技大学 | A kind of facility luminous environment regulation and control method of fusion random forests algorithm |
CN110641472A (en) * | 2018-06-27 | 2020-01-03 | 百度(美国)有限责任公司 | Safety monitoring system for autonomous vehicle based on neural network |
CN109473142A (en) * | 2018-10-10 | 2019-03-15 | 深圳韦格纳医学检验实验室 | The construction method of sample data sets and its hereditary birthplace prediction technique |
CN110231156A (en) * | 2019-06-26 | 2019-09-13 | 山东大学 | Service robot kinematic system method for diagnosing faults and device based on temporal aspect |
CN110334767A (en) * | 2019-07-08 | 2019-10-15 | 重庆大学 | An Improved Random Forest Method for Air Quality Classification |
CN110704543A (en) * | 2019-08-19 | 2020-01-17 | 上海机电工程研究所 | Multi-type multi-platform information data self-adaptive fusion system and method |
CN110703732A (en) * | 2019-10-21 | 2020-01-17 | 北京百度网讯科技有限公司 | Correlation detection method, device, equipment and computer readable storage medium |
Non-Patent Citations (2)
Title |
---|
YE JIANFANG; LIU QIANG; LI XUEYING: "Research on optimization of a random-forest-based fatigue driving detection and recognition model" (in Chinese) *
LI LINCHAO; QU XU; ZHANG JIAN; WANG YONGGANG; LI HANCHU; RAN BIN: "A repair method for heterogeneous expressway traffic flow data based on feature-level fusion" (in Chinese) *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111830502A (en) * | 2020-06-30 | 2020-10-27 | 广州小鹏车联网科技有限公司 | Data set establishing method, vehicle and storage medium |
CN111830502B (en) * | 2020-06-30 | 2021-10-12 | 广州小鹏自动驾驶科技有限公司 | Data set establishing method, vehicle and storage medium |
CN111985578A (en) * | 2020-09-02 | 2020-11-24 | 深圳壹账通智能科技有限公司 | Multi-source data fusion method, device, computer equipment and storage medium |
CN112147601A (en) * | 2020-09-03 | 2020-12-29 | 南京信息工程大学 | Sea surface small target detection method based on random forest |
CN112147601B (en) * | 2020-09-03 | 2023-05-26 | 南京信息工程大学 | Sea surface small target detection method based on random forest |
CN112214531A (en) * | 2020-10-12 | 2021-01-12 | 海南大学 | Cross-data, information and knowledge multi-modal feature mining method and component |
CN112733907A (en) * | 2020-12-31 | 2021-04-30 | 上海商汤临港智能科技有限公司 | Data fusion method and device, electronic equipment and storage medium |
CN113343458A (en) * | 2021-05-31 | 2021-09-03 | 潍柴动力股份有限公司 | Model selection method and device for engine sensor, electronic equipment and storage medium |
CN113343458B (en) * | 2021-05-31 | 2023-07-18 | 潍柴动力股份有限公司 | Engine sensor selection method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111310840B (en) | 2023-10-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111310840B (en) | Data fusion processing method, device, equipment and storage medium | |
CN111401208B (en) | Obstacle detection method and device, electronic equipment and storage medium | |
CN111324115B (en) | Obstacle position detection fusion method, obstacle position detection fusion device, electronic equipment and storage medium | |
CN111753765B (en) | Sensing device detection method, sensing device detection apparatus, sensing device detection device and storage medium | |
US11615605B2 (en) | Vehicle information detection method, electronic device and storage medium | |
CN111273268B (en) | Automatic driving obstacle type identification method and device and electronic equipment | |
KR20210052409A (en) | Lane line determination method and apparatus, lane line positioning accuracy evaluation method and apparatus, device, and program | |
CN111583668B (en) | Traffic jam detection method and device, electronic equipment and storage medium | |
CN112415552A (en) | Vehicle position determining method and device and electronic equipment | |
CN112880674B (en) | A method, device, equipment and storage medium for positioning a traveling device | |
CN112507949A (en) | Target tracking method and device, road side equipment and cloud control platform | |
CN111220164A (en) | Positioning method, device, equipment and storage medium | |
CN113759349B (en) | Calibration method of laser radar and positioning equipment, and autonomous driving vehicle | |
CN111638528B (en) | Positioning method, positioning device, electronic equipment and storage medium | |
CN112147632A (en) | Method, device, equipment and medium for testing vehicle-mounted laser radar perception algorithm | |
CN111079079B (en) | Data correction method, device, electronic equipment and computer readable storage medium | |
CN112184914B (en) | Method and device for determining three-dimensional position of target object and road side equipment | |
US20220044559A1 (en) | Method and apparatus for outputing vehicle flow direction, roadside device, and cloud control platform | |
CN111612852A (en) | Method and apparatus for verifying camera parameters | |
CN111932611B (en) | Object position acquisition method and device | |
CN113091757B (en) | Map generation method and device | |
CN111666891A (en) | Method and apparatus for estimating obstacle motion state | |
CN111767843A (en) | Three-dimensional position prediction method, device, equipment and storage medium | |
CN111462072B (en) | Point cloud picture quality detection method and device and electronic equipment | |
CN112184828A (en) | External parameter calibration method and device for laser radar and camera and automatic driving vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||