WO2019114807A1 - Multi-sensor target information fusion - Google Patents
Multi-sensor target information fusion
- Publication number
- WO2019114807A1, PCT/CN2018/121008
- Authority
- WO
- WIPO (PCT)
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition; G06F18/20—Analysing; G06F18/25—Fusion techniques; G06F18/251—Fusion techniques of input or preprocessed data
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions; G06F17/10—Complex mathematical operations; G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
Definitions
- The invention belongs to the field of automobiles and relates to multi-sensor target information fusion, and more particularly to an optimization method for multi-sensor target information fusion and to synchronization of multi-sensor target information fusion with multi-sensor sensing.
- In practice, the rate at which a sensor updates its target information is affected to varying degrees by hardware and environment, and in addition the update rates of different sensors are not identical.
- As a result, the processing of target information fusion is also affected.
- The common practice in the industry is that, when the target information fusion task is triggered, the latest sensor sensing result is taken as input to perform target information fusion.
- If a sensor's recognition result for the target updates slowly or not at all, the optimal estimation of the fusion result becomes inaccurate.
- The present invention has been made to overcome one or more of the above disadvantages, or other disadvantages, and adopts the following technical solutions.
- According to a first aspect, a method for optimizing multi-sensor target information fusion comprises: step S1: obtaining, for each time instant, a fusion prediction result of all sensors regarding a target state at the current time; step S2: obtaining actual measurement results of the respective sensors at the current time regarding the target state; step S3: for each set of actual measurement results, obtaining, based on the fusion prediction result and the set of actual measurement results, an optimal estimation result of the corresponding sensor at the current time regarding the target state; and step S4: fusing the optimal estimation results of all the sensors by determining a weight corresponding to each optimal estimation result, thereby obtaining an optimal fusion estimation result regarding the target state at the current time.
- The step S3 includes: step S31: calculating, for each set of actual measurement results, a corresponding conversion matrix based on the fusion prediction result and the set of actual measurement results; step S32: calculating a covariance corresponding to each set of actual measurement results; step S33: calculating, for each set of actual measurement results, a corresponding Kalman gain based on the corresponding conversion matrix and the corresponding covariance; and step S34: calculating, for each set of actual measurement results, a corresponding optimal estimation result of the corresponding sensor at the current time regarding the target state, based on the fusion prediction result, the corresponding Kalman gain, the set of actual measurement results, and the corresponding conversion matrix.
- The step S4 includes: step S41: determining a weight corresponding to each optimal estimation result according to the covariance corresponding to each set of actual measurement results; and step S42: calculating an optimal fusion estimation result of all sensors regarding the target state at the current time based on each optimal estimation result and its corresponding weight.
- The method further includes: step S5: correcting the covariance obtained in step S32 based on the conversion matrix obtained in step S31 and the Kalman gain obtained in step S33, to obtain a corrected covariance.
- The covariance corresponding to each set of actual measurement results at the current time is obtained using the corrected covariance at the previous time.
- the fusion prediction result at the current time is obtained using the optimal fusion estimation result with respect to the target state at the previous time.
- According to a second aspect, a method for synchronizing multi-sensor target information fusion with multi-sensor sensing comprises: step P1: obtaining actual measurement results of the respective sensors regarding a target state while separately recording the time at which each set of actual measurement results is obtained, so that a first timestamp is recorded for each set of actual measurement results; step P2: recording the time at which execution of the target information fusion processing is started as a second timestamp; step P3: respectively calculating the time difference between each first time instant represented by the first timestamps and the second time instant represented by the second timestamp; step P4: updating each corresponding actual measurement result obtained at the first time instant based on the calculated time difference, to obtain a corresponding estimated measurement result at the second time instant; step P5: obtaining a fusion prediction result of all sensors regarding the target state at the second time instant; and step P6: for each set of estimated measurement results, obtaining, based on the fusion prediction result and the set of estimated measurement results, a corresponding optimal estimation result of the corresponding sensor at the second time instant regarding the target state.
- The step P6 includes: step P61: calculating, for each set of estimated measurement results, a corresponding conversion matrix based on the fusion prediction result and the set of estimated measurement results; step P62: calculating a covariance corresponding to each set of estimated measurement results; step P63: calculating, for each set of estimated measurement results, a corresponding Kalman gain based on the corresponding conversion matrix and the corresponding covariance; and step P64: calculating, for each set of estimated measurement results, a corresponding optimal estimation result of the corresponding sensor at the second time instant regarding the target state, based on the fusion prediction result, the corresponding Kalman gain, the set of estimated measurement results, and the corresponding conversion matrix.
- The method further includes: step P7: fusing the optimal estimation results of all the sensors by determining a weight corresponding to each optimal estimation result, thereby obtaining an optimal fusion estimation result regarding the target state at the second time instant.
- The method further includes: step P8: correcting the covariance obtained in step P62 according to the conversion matrix obtained in step P61 and the Kalman gain obtained in step P63, to obtain a corrected covariance.
- The covariance corresponding to each set of estimated measurement results at the second time instant is obtained using the corrected covariance at the first time instant.
- The fusion prediction result at the second time instant is obtained using the optimal fusion estimation result regarding the target state at the first time instant.
- According to another aspect, a computer apparatus comprises a memory, a processor, and a computer program stored on the memory and operable on the processor, the processor executing the program to implement the steps of the method according to the first aspect and/or the second aspect of the invention.
- According to another aspect, a recording medium has stored thereon a computer program that is executed by a computer to carry out the steps of the method according to the first aspect and/or the second aspect of the invention.
- An assisted driving method comprises the method for optimizing multi-sensor target information fusion according to the first aspect of the invention.
- An assisted driving method comprises the method for synchronizing multi-sensor target information fusion with multi-sensor sensing according to the second aspect of the invention.
- An assisted driving system comprises an apparatus for optimizing multi-sensor target information fusion according to the present invention.
- FIG. 1 is a flow chart showing an example of an optimization method of multi-sensor target information fusion according to an embodiment of the present invention.
- FIG. 2 is an example sub-flow diagram of step S3 of FIG. 1 in accordance with one embodiment of the present invention.
- FIG. 3 is an example sub-flow diagram of step S4 of FIG. 1 in accordance with one embodiment of the present invention.
- FIG. 4 is a block diagram of an example of an apparatus for optimizing multi-sensor target information fusion in accordance with an embodiment of the present invention.
- FIG. 5 is an example block diagram of a computer device for performing an optimization method of multi-sensor target information fusion in accordance with an embodiment of the present invention.
- FIG. 6 is an example flow diagram of a method for synchronizing multi-sensor target information fusion with multi-sensor sensing, in accordance with an embodiment of the present invention.
- FIG. 7 is an example sub-flow diagram of step P6 of FIG. 6 in accordance with one embodiment of the present invention.
- FIG. 8 is an example block diagram of an apparatus for synchronizing multi-sensor target information fusion with multi-sensor sensing, in accordance with an embodiment of the present invention.
- FIG. 9 is an example block diagram of a computer device for performing a method for synchronizing multi-sensor target information fusion with multi-sensor sensing, in accordance with an embodiment of the present invention.
- The computer program instructions may be stored in a computer readable memory and may direct a computer or other programmable processor to function in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture including instruction means which implement the functions/operations specified in one or more blocks of the flow diagrams and/or block diagrams.
- the method and apparatus for optimizing multi-sensor target information fusion according to the present invention can be applied, for example, to a scene in which a target around a vehicle is sensed.
- For example, the longitudinal position relative to the vehicle, the longitudinal speed, the longitudinal acceleration, the lateral position relative to the vehicle, the lateral speed, and the lateral acceleration can be used to characterize the motion state of any one target around the vehicle, which carries multiple sensors.
- Each set of measurement results sensed by each sensor relates to the above six aspects, or the values of the above six aspects can be calculated from it. Since the characteristics of each sensor and its measurement error in each aspect differ, an optimal fusion result for any one target can be obtained, and the fusion weights reasonably determined, by the optimization method and apparatus for multi-sensor target information fusion according to the present invention described in detail below.
- the method S100 includes the step of obtaining, for each time, a fusion prediction result of all the sensors regarding the target state at the current time (step S1).
- For example, the following formula (1) can be utilized to determine a fusion prediction result for the target state of a certain target: X'(t) = F · X(t-1) + W(t), where X'(t) is the fusion prediction result of all sensors regarding the target state at time t.
- F is the system state transition matrix
- X(t-1) is the optimal fusion estimation result (described later) about the target state at time t-1
- W(t) is the system noise.
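As an illustrative sketch of the prediction step of formula (1), the following Python fragment uses the 6-dimensional state described above (longitudinal position/speed/acceleration, lateral position/speed/acceleration). The constant-acceleration form of the transition matrix F and the 0.1 s sampling interval are assumptions for illustration, not taken from the source.

```python
import numpy as np

def make_transition_matrix(dt: float) -> np.ndarray:
    """Constant-acceleration transition block for [pos, vel, acc] (assumed form)."""
    block = np.array([[1.0, dt, 0.5 * dt**2],
                      [0.0, 1.0, dt],
                      [0.0, 0.0, 1.0]])
    # Longitudinal and lateral axes evolve independently.
    F = np.zeros((6, 6))
    F[:3, :3] = block
    F[3:, 3:] = block
    return F

def predict_state(F, x_prev, w=None):
    """Formula (1): X'(t) = F @ X(t-1) + W(t)."""
    x_pred = F @ x_prev
    if w is not None:
        x_pred = x_pred + w
    return x_pred

F = make_transition_matrix(0.1)
# Prior optimal fusion estimate X(t-1): [x, vx, ax, y, vy, ay]
x_prev = np.array([10.0, 5.0, 0.0, 2.0, 0.5, 0.0])
x_pred = predict_state(F, x_prev)
```

In a real filter the system noise W(t) would be a random draw; here it is left out so the deterministic prediction is visible.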
- the method S100 may further include the step of obtaining actual measurement results of the respective sensors at the current time with respect to the target state (step S2).
- The method S100 may further include: obtaining, for each set of actual measurement results, based on the fusion prediction result and the set of actual measurement results, an optimal estimation result of the corresponding sensor at the current time regarding the target state (step S3).
- The details of step S3 will be described in conjunction with FIG. 2.
- The step S3 includes: calculating, for each set of actual measurement results, a corresponding conversion matrix based on the fusion prediction result and the set of actual measurement results (step S31).
- For example, equation (2) can be used to determine the corresponding conversion matrix: Z_ik(t) = H_ik · X'(t) + V(t).
- Z_ik(t) is the kth set of actual measurement results of the i-th sensor regarding the target state at time t
- H_ik is the conversion matrix corresponding to the kth set of actual measurement results of the i-th sensor
- X'(t) is the fusion prediction result of all the sensors regarding the target state at time t
- V(t) is the measurement noise.
- the step S3 may further include the following steps: calculating a covariance corresponding to each set of actual measurement results (step S32).
- For example, equation (3) can be used to determine the corresponding covariance: P'_ik(t) = F · P_ik(t-1) · F^T + Q.
- P'_ik(t) is the covariance corresponding to the kth set of actual measurement results of the i-th sensor at time t regarding the target state
- F is the system state transition matrix
- F^T is the transposed matrix of the system state transition matrix
- P_ik(t-1) is the covariance (described later) corresponding to the optimal estimation result for the kth set of actual measurement results of the i-th sensor at time t-1
- Q is the covariance of the system process noise.
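The covariance prediction of step S32 (formula (3)) can be sketched as follows. The toy 2-state transition matrix, prior covariance, and process noise Q are illustrative assumptions, not values from the source.

```python
import numpy as np

def predict_covariance(F, P_prev, Q):
    """Formula (3): P'_ik(t) = F @ P_ik(t-1) @ F.T + Q."""
    return F @ P_prev @ F.T + Q

dt = 0.1
F = np.array([[1.0, dt],
              [0.0, 1.0]])        # toy 2-state (position, velocity) transition
P_prev = np.diag([1.0, 4.0])      # corrected covariance at t-1 (assumed)
Q = 0.01 * np.eye(2)              # system process noise covariance (assumed)
P_pred = predict_covariance(F, P_prev, Q)
```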
- The step S3 may further include: calculating, for each set of actual measurement results, a corresponding Kalman gain based on the corresponding conversion matrix and the corresponding covariance (step S33). For example, equation (4) can be used: kg_ik(t) = P'_ik(t) · H_ik^T · (H_ik · P'_ik(t) · H_ik^T + R)^(-1).
- kg_ik(t) is the Kalman gain corresponding to the kth set of actual measurement results of the i-th sensor at time t regarding the target state
- H_ik is the conversion matrix corresponding to the kth set of actual measurement results of the i-th sensor
- H_ik^T is the transposed matrix of H_ik
- R is the covariance of the measurement process noise.
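The Kalman gain of step S33 can be sketched as follows; the position-only observation matrix H and the noise values are illustrative assumptions.

```python
import numpy as np

def kalman_gain(P_pred, H, R):
    """Step S33: kg_ik(t) = P' @ H.T @ inv(H @ P' @ H.T + R)."""
    S = H @ P_pred @ H.T + R           # innovation covariance
    return P_pred @ H.T @ np.linalg.inv(S)

P_pred = np.diag([2.0, 1.0])           # predicted covariance (assumed)
H = np.array([[1.0, 0.0]])             # sensor observes position only (assumed)
R = np.array([[2.0]])                  # measurement noise covariance (assumed)
kg = kalman_gain(P_pred, H, R)
```

The gain balances trust between prediction and measurement: a larger R shrinks kg toward zero.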
- The step S3 may further include: calculating, for each set of actual measurement results, an optimal estimation result of the corresponding sensor at the current time regarding the target state, based on the fusion prediction result, the corresponding Kalman gain, the set of actual measurement results, and the corresponding conversion matrix (step S34). For example, equation (5) can be used: X_ik(t) = X'(t) + kg_ik(t) · (Z_ik(t) - H_ik · X'(t)).
- X_ik(t) is the optimal estimation result corresponding to the kth set of actual measurement results of the i-th sensor at time t
- kg_ik(t) is the Kalman gain corresponding to the kth set of actual measurement results of the i-th sensor at time t regarding the target state
- H_ik is the conversion matrix corresponding to the kth set of actual measurement results of the i-th sensor.
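The measurement-update of step S34 can be sketched as follows; the numeric values (predicted state, gain, measurement) are illustrative assumptions continuing the toy example above.

```python
import numpy as np

def update_state(x_pred, kg, z, H):
    """Step S34: X_ik(t) = X'(t) + kg_ik(t) @ (Z_ik(t) - H_ik @ X'(t))."""
    innovation = z - H @ x_pred        # measurement residual
    return x_pred + kg @ innovation

x_pred = np.array([10.0, 5.0])         # fusion prediction (assumed)
H = np.array([[1.0, 0.0]])             # position-only observation (assumed)
kg = np.array([[0.5], [0.0]])          # Kalman gain (assumed)
z = np.array([11.0])                   # actual measurement (assumed)
x_est = update_state(x_pred, kg, z, H)
```

With a gain of 0.5 the estimate moves halfway from the predicted position toward the measured position.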
- the method S100 may further include the steps of: fusing the optimal estimation results of all the sensors to determine a weight corresponding to each of the optimal estimation results, thereby obtaining The optimal fusion estimation result with respect to the target state at the current time (step S4).
- For example, equation (6) can be utilized to obtain an optimal fusion estimation result regarding the target state at the current time: X(t) = f(X_11(t), X_12(t), ..., X_ik(t)).
- X(t) is the optimal fusion estimation result of all sensors regarding the target state at time t
- f is the fusion function
- X_ik(t) is the optimal estimation result corresponding to the kth set of actual measurement results of the i-th sensor at time t
- Step S4 will be described in detail in conjunction with FIG. 3.
- The step S4 includes: determining a weight corresponding to each optimal estimation result according to the covariance corresponding to each set of actual measurement results (step S41).
- That is, after the covariances corresponding to all sets of actual measurement results of all sensors are calculated using equation (3) above, weights (i.e., w_11(t), w_12(t), ..., w_ik(t)) are assigned to the optimal estimation results corresponding to each set of actual measurement results (i.e., X_11(t), X_12(t), ..., X_ik(t)) according to the magnitude of each covariance.
- The step S4 may further include: calculating an optimal fusion estimation result of all sensors regarding the target state at the current time based on each optimal estimation result and its corresponding weight (step S42).
- That is, the assigned weights (i.e., w_11(t), w_12(t), ..., w_ik(t)) are used to perform a weighted operation on the corresponding optimal estimation results (i.e., X_11(t), X_12(t), ..., X_ik(t)) to obtain the optimal fusion estimation result of all sensors regarding the target state at time t. Further, as shown in formula (1) above, X(t) at time t can also be used to calculate the fusion prediction result at time t+1.
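Steps S41/S42 can be sketched as follows. The patent states only that weights follow from the magnitude of each covariance; inverse-trace weighting is used here as one plausible choice, an assumption rather than the patent's exact formula.

```python
import numpy as np

def fuse_estimates(estimates, covariances):
    """Weight per-sensor optimal estimates by inverse covariance magnitude
    (assumed weighting rule), then combine them (steps S41/S42)."""
    raw = np.array([1.0 / np.trace(P) for P in covariances])
    weights = raw / raw.sum()          # normalize so the weights sum to 1
    fused = sum(w * x for w, x in zip(weights, estimates))
    return fused, weights

# Two sensors' optimal estimates of [position, velocity] (assumed values).
estimates = [np.array([10.0, 5.0]), np.array([12.0, 5.2])]
covariances = [np.diag([1.0, 1.0]), np.diag([3.0, 3.0])]
fused, weights = fuse_estimates(estimates, covariances)
```

The less uncertain sensor (smaller covariance) receives the larger weight, so the fused estimate lies closer to its value.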
- The method S100 may further include: correcting the covariance obtained in step S32 according to the conversion matrix obtained in step S31 and the Kalman gain obtained in step S33, to obtain a corrected covariance (step S5, not shown); the corrected covariance may be used to calculate the covariance corresponding to the corresponding actual measurement result at the next time after the current time (see equation (3) above).
- For example, equation (7) is used to obtain the corrected covariance at the current time: P_ik(t) = (I - kg_ik(t) · H_ik) · P'_ik(t).
- P_ik(t) is the corrected covariance for the kth set of actual measurement results of the i-th sensor at time t
- I is the identity matrix
- kg_ik(t) is the Kalman gain obtained in step S33 above
- H_ik is the conversion matrix corresponding to the kth set of actual measurement results of the i-th sensor obtained in step S31.
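The covariance correction of step S5 (formula (7)) can be sketched as follows, continuing the toy values assumed in the earlier fragments.

```python
import numpy as np

def correct_covariance(P_pred, kg, H):
    """Formula (7): P_ik(t) = (I - kg_ik(t) @ H_ik) @ P'_ik(t)."""
    I = np.eye(P_pred.shape[0])
    return (I - kg @ H) @ P_pred

P_pred = np.diag([2.0, 1.0])           # predicted covariance (assumed)
H = np.array([[1.0, 0.0]])             # position-only observation (assumed)
kg = np.array([[0.5], [0.0]])          # Kalman gain (assumed)
P_corr = correct_covariance(P_pred, kg, H)
```

The correction shrinks the uncertainty of the observed component (here, position) while leaving the unobserved component unchanged; P_corr then feeds the next cycle's covariance prediction.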
- the weights can be adjusted in real time to obtain an accurate target fusion estimation result.
- The execution order is not limited: step S1 in FIG. 1 may be performed after step S2, or the two may be performed simultaneously; likewise, in FIG. 2, step S31 may be performed after step S32, or they may be performed simultaneously, and the like.
- The apparatus 100 includes a first unit 101 configured to obtain, for each time instant, a fusion prediction result of all sensors regarding the target state at the current time.
- For example, equation (8) can be utilized to determine a fusion prediction result for the target state of a certain target: X'(t) = F · X(t-1) + W(t), where X'(t) is the fusion prediction result of all sensors regarding the target state at time t.
- F is the system state transition matrix
- X(t-1) is the optimal fusion estimation result (described later) about the target state at time t-1
- W(t) is the system noise.
- the apparatus 100 may further include a second unit 102 configured to obtain actual measurements of the respective sensors at the current time with respect to the target state.
- The apparatus 100 may further include a third unit 103 configured to obtain, for each set of actual measurement results, based on the fusion prediction result and the set of actual measurement results, an optimal estimation result of the corresponding sensor at the current time regarding the target state.
- the internal structure of the third unit 103 will be described in detail below.
- The third unit 103 includes a 3A unit (not shown) configured to calculate, for each set of actual measurement results, a corresponding conversion matrix based on the fusion prediction result and the set of actual measurement results. For example, equation (9) can be used: Z_ik(t) = H_ik · X'(t) + V(t).
- Z_ik(t) is the kth set of actual measurement results of the i-th sensor regarding the target state at time t
- H_ik is the conversion matrix corresponding to the kth set of actual measurement results of the i-th sensor
- X'(t) is the fusion prediction result of all the sensors regarding the target state at time t
- V(t) is the measurement noise.
- The third unit 103 may further include a 3B unit (not shown) configured to calculate a covariance corresponding to each set of actual measurement results. For example, equation (10) can be used: P'_ik(t) = F · P_ik(t-1) · F^T + Q.
- P'_ik(t) is the covariance corresponding to the kth set of actual measurement results of the i-th sensor at time t regarding the target state
- F is the system state transition matrix
- F^T is the transposed matrix of the system state transition matrix
- P_ik(t-1) is the covariance (described later) corresponding to the optimal estimation result for the kth set of actual measurement results of the i-th sensor at time t-1
- Q is the covariance of the system process noise.
- The third unit 103 may further include a 3C unit (not shown) configured to calculate, for each set of actual measurement results, a corresponding Kalman gain based on the corresponding conversion matrix and the corresponding covariance. For example, equation (11) can be used: kg_ik(t) = P'_ik(t) · H_ik^T · (H_ik · P'_ik(t) · H_ik^T + R)^(-1).
- kg_ik(t) is the Kalman gain corresponding to the kth set of actual measurement results of the i-th sensor at time t regarding the target state
- H_ik is the conversion matrix corresponding to the kth set of actual measurement results of the i-th sensor
- H_ik^T is the transposed matrix of H_ik
- R is the covariance of the measurement process noise.
- The third unit 103 may further include a 3D unit (not shown) configured to calculate, for each set of actual measurement results, an optimal estimation result of the corresponding sensor at the current time regarding the target state, based on the fusion prediction result, the corresponding Kalman gain, the set of actual measurement results, and the corresponding conversion matrix.
- For example, equation (12) can be used to calculate the corresponding optimal estimation result: X_ik(t) = X'(t) + kg_ik(t) · (Z_ik(t) - H_ik · X'(t)).
- X_ik(t) is the optimal estimation result corresponding to the kth set of actual measurement results of the i-th sensor at time t
- kg_ik(t) is the Kalman gain corresponding to the kth set of actual measurement results of the i-th sensor at time t regarding the target state
- H_ik is the conversion matrix corresponding to the kth set of actual measurement results of the i-th sensor.
- The apparatus 100 may further include a fourth unit 104 configured to fuse the optimal estimation results of all the sensors by determining a weight corresponding to each optimal estimation result, thereby obtaining an optimal fusion estimation result regarding the target state at the current time.
- For example, equation (13) can be utilized to obtain an optimal fusion estimation result regarding the target state at the current time: X(t) = f(X_11(t), X_12(t), ..., X_ik(t)).
- X(t) is the optimal fusion estimation result of all sensors regarding the target state at time t
- f is the fusion function
- X_ik(t) is the optimal estimation result corresponding to the kth set of actual measurement results of the i-th sensor at time t
- the internal structure of the fourth unit 104 will be described in detail below.
- The fourth unit 104 includes a 4A unit (not shown) configured to determine a weight corresponding to each optimal estimation result according to the covariance corresponding to each set of actual measurement results.
- That is, after the covariances corresponding to all sets of actual measurement results of all sensors are calculated using equation (10) above, weights (i.e., w_11(t), w_12(t), ..., w_ik(t)) are assigned to the optimal estimation results corresponding to each set of actual measurement results (i.e., X_11(t), X_12(t), ..., X_ik(t)) according to the magnitude of each covariance.
- The fourth unit 104 may further include a 4B unit (not shown) configured to calculate, based on each optimal estimation result and its corresponding weight, an optimal fusion estimation result of all sensors regarding the target state at the current time.
- That is, the assigned weights (i.e., w_11(t), w_12(t), ..., w_ik(t)) are used to perform a weighted operation on the corresponding optimal estimation results (i.e., X_11(t), X_12(t), ..., X_ik(t)) to obtain the optimal fusion estimation result of all sensors regarding the target state at time t.
- As shown in equation (8) above, X(t) at time t can also be used to calculate the fusion prediction result at time t+1.
- The apparatus 100 may further include a fifth unit (not shown) configured to correct the covariance obtained in the 3B unit according to the conversion matrix obtained in the 3A unit and the Kalman gain obtained in the 3C unit, to obtain a corrected covariance, which can be used to calculate the covariance corresponding to the corresponding actual measurement result at the next time after the current time (see equation (10) above).
- For example, equation (14) is used to obtain the corrected covariance at the current time: P_ik(t) = (I - kg_ik(t) · H_ik) · P'_ik(t).
- P_ik(t) is the corrected covariance for the kth set of actual measurement results of the i-th sensor at time t
- I is the identity matrix
- kg_ik(t) is the Kalman gain obtained in the 3C unit above
- H_ik is the conversion matrix corresponding to the kth set of actual measurement results of the i-th sensor obtained in the 3A unit.
- the weights can be adjusted in real time to obtain an accurate target fusion estimation result.
- The above-described multi-sensor target information fusion optimization method and apparatus enable an assisted driving system to use better-optimized data when assisting driving, thereby facilitating its decision-making and control. For example, assisted driving functions or scenarios such as adaptive cruise and emergency braking can make better decisions based on the optimized data; such functions or scenarios can also include body stability control and the like.
- The invention may be implemented in the form of an assisted driving method including the above method, an assisted driving system including the above apparatus, a computer apparatus for performing the above method, a computer program for executing the above method or realizing the functions of the above apparatus, or a computer-readable recording medium on which such a computer program is recorded.
- FIG. 5 illustrates a computer apparatus for performing an optimization method of multi-sensor target information fusion according to an embodiment of the present invention.
- computer device 200 includes a memory 201 and a processor 202.
- computer device 200 also includes a computer program stored on memory 201 and executable on processor 202.
- the processor executes the program to implement various steps of an optimization method for multi-sensor target information fusion according to an embodiment of the present invention as shown in FIGS. 1, 2, and 3.
- the method S100 or the apparatus 100 of an embodiment of the present invention can obtain one or more of the following beneficial effects:
- The covariances and the optimal estimation results corresponding to all the measurement results are obtained, and the optimal fusion estimation result regarding the target state is then calculated, so that the sensor measurement results are combined.
- The covariances determine the weights corresponding to the optimal estimation results. Therefore, even if a sensor is affected by hardware performance and environment such that its detection result does not conform to the principles of physical motion, the weights can be adjusted in real time to achieve an optimal fusion estimate of the target, improving the performance of fusion estimation.
- The method P100 includes the step of obtaining actual measurement results of the respective sensors regarding the target state while separately recording the time at which each set of actual measurement results is obtained, so as to record a first timestamp for each set of actual measurement results (step P1).
- That is, a corresponding reception timestamp is recorded for each set.
- For the first set of actual measurement results of the first sensor, its reception timestamp is marked as t_111; for the second set of actual measurement results of the first sensor, its reception timestamp is marked as t_112; ...; for the kth set of actual measurement results of the i-th sensor, its reception timestamp is marked as t_1ik. These are collectively referred to herein as the first timestamps (i.e., the reception timestamps).
- the method P100 may further include the step of recording the time at which the execution of the target information fusion processing is started as the second time stamp (step P2).
- That is, the time at which the fusion processing of all sets of actual measurement results of all sensors is started is recorded as the second timestamp (i.e., the fusion timestamp).
- The method P100 may further include: respectively calculating the time difference between each first time instant represented by each first timestamp and the second time instant represented by the second timestamp (step P3).
- The time difference can be calculated using the following equation (15): Δt_ik = t_2 - t_1ik.
- Δt_ik is the time difference corresponding to the kth set of actual measurement results of the i-th sensor
- t_1ik is the first time instant represented by the first timestamp corresponding to the kth set of actual measurement results of the i-th sensor
- t_2 is the second time instant represented by the second timestamp.
- The method P100 may further include: updating each corresponding actual measurement result obtained at the first time instant based on the calculated time difference, to obtain a corresponding estimated measurement result at the second time instant (step P4).
- For example, the actual measurement results obtained at t_1ik may be updated using the following equations (16) through (18): s(t_2) = s(t_1ik) + v(t_1ik) · Δt_ik + (1/2) · a(t_1ik) · Δt_ik^2, v(t_2) = v(t_1ik) + a(t_1ik) · Δt_ik, and a(t_2) = a(t_1ik), where s, v, and a are the measured position, speed, and acceleration along an axis.
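The update of step P4 can be sketched as follows. Equations (16) through (18) are not legible in this text, so a standard constant-acceleration kinematic extrapolation over Δt_ik is assumed here.

```python
def extrapolate_measurement(pos, vel, acc, dt):
    """Advance a measured [position, velocity, acceleration] state by dt.

    Constant-acceleration kinematics are an assumption standing in for
    the source's equations (16)-(18)."""
    new_pos = pos + vel * dt + 0.5 * acc * dt**2
    new_vel = vel + acc * dt
    new_acc = acc                      # acceleration assumed constant over dt
    return new_pos, new_vel, new_acc

# The sensor measured the target 0.05 s before the fusion task started.
pos, vel, acc = extrapolate_measurement(10.0, 5.0, 2.0, 0.05)
```

This converts a stale measurement taken at t_1ik into an estimated measurement at the fusion time t_2, so all sensors' inputs refer to the same instant.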
- the method P100 may further include the step of obtaining a fusion prediction result of all sensors regarding the target state at the second time instant (step P5).
- the following equation (19) can be used to determine the fusion prediction result for the state of a given target: X̂(t_2) = F·X(t_1) + W(t_2), where
- F is the system state transition matrix,
- X(t_1) is the optimal fusion estimation result regarding the target state at the first time t_1 (described later), and
- W(t_2) is the system noise.
- the method P100 may further include the following step: for each set of estimated measurement results, obtaining the optimal estimation result of the corresponding sensor regarding the target state at the second time instant, based on the fusion prediction result and that set of estimated measurement results (step P6).
- the details of step P6 will be described in conjunction with FIG. 7.
- the step P6 includes the following step: for each set of estimated measurement results, calculating the corresponding conversion matrix based on the fusion prediction result and that set of estimated measurement results (step P61).
- the conversion matrix can be determined using equation (20): Ẑ_ik(t_2) = H_ik·X̂(t_2) + f(Δt_ik)·V(t_2), where
- Ẑ_ik(t_2) is the estimated measurement result at the second time t_2 corresponding to the kth set of actual measurement results of the i-th sensor,
- H_ik is the conversion matrix corresponding to the kth set of actual measurement results of the i-th sensor,
- V(t_2) is the measurement noise, and
- f(Δt_ik) is a function that computes the measurement-noise weight from Δt_ik: the larger Δt_ik is, the larger the measurement noise.
- the step P6 may further include the following step: calculating the covariance corresponding to each set of estimated measurement results (step P62), using equation (21): P̂_ik(t_2) = F·P_ik(t_1ik)·F^T + Q, where
- P̂_ik(t_2) is the covariance corresponding to the kth set of estimated measurements of the i-th sensor regarding the target state at the second time t_2,
- F is the system state transition matrix and F^T is its transpose,
- P_ik(t_1ik) is the corrected covariance (described later) corresponding to the optimal estimation result for the kth set of estimated measurement results of the i-th sensor at the first time t_1, and
- Q is the covariance of the system process noise.
- the step P6 may further include the following step: for each set of estimated measurement results, calculating the corresponding Kalman gain based on the corresponding conversion matrix and the corresponding covariance (step P63), using equation (22): kg_ik(t_2) = P̂_ik(t_2)·H_ik^T·(H_ik·P̂_ik(t_2)·H_ik^T + R)^(−1), where
- kg_ik(t_2) is the Kalman gain corresponding to the kth set of estimated measurements of the i-th sensor regarding the target state at the second time t_2,
- H_ik is the conversion matrix corresponding to the kth set of actual measurement results of the i-th sensor and H_ik^T is its transpose, and
- R is the covariance of the measurement process noise.
- the step P6 may further include the following step: for each set of estimated measurement results, calculating the optimal estimation result of the corresponding sensor regarding the target state at the second time instant, based on the fusion prediction result, the corresponding Kalman gain, that set of estimated measurement results, and the corresponding conversion matrix (step P64), using equation (23): X_ik(t_2) = X̂(t_2) + kg_ik(t_2)·(Ẑ_ik(t_2) − H_ik·X̂(t_2)), where
- X_ik(t_2) is the optimal estimation result corresponding to the kth set of estimated measurement results of the i-th sensor at the second time t_2,
- kg_ik(t_2) is the corresponding Kalman gain, Ẑ_ik(t_2) is the estimated measurement result at the second time t_2 corresponding to the kth set of actual measurement results of the i-th sensor, and
- H_ik is the corresponding conversion matrix.
- the method P100 may further include the following step: fusing the optimal estimation results of all sensors to determine the weight corresponding to each optimal estimation result, thereby obtaining the optimal fusion estimation result regarding the target state at the second time instant (step P7, not shown).
- Equation (24) can be used to obtain the optimal fusion estimation result regarding the target state at the second time t_2: X(t_2) = f(X_ik(t_2), P̂_ik(t_2)), where
- X(t_2) is the optimal fusion estimation result of all sensors regarding the target state at the second time t_2,
- f is a fusion function,
- X_ik(t_2) is the optimal estimation result corresponding to the kth set of estimated measurements of the i-th sensor at the second time t_2, and
- P̂_ik(t_2) is the covariance corresponding to the kth set of estimated measurements of the i-th sensor regarding the target state at the second time t_2.
- Moreover, as shown in equation (19), the optimal fusion estimation result X(t_1) at the current time (e.g., the first time t_1) can also be used to calculate the fusion prediction result at the next time (e.g., the second time t_2).
- the method P100 may further include the following step: correcting the covariance obtained in step P62, according to the conversion matrix obtained in step P61 and the Kalman gain obtained in step P63, to obtain a corrected covariance (step P8, not shown). The corrected covariance can be used to calculate the covariance corresponding to the estimated measurement results at the next time (e.g., the third time t_3) after the current time (e.g., the second time t_2) (see equation (21) above).
- the corrected covariance at the current time (e.g., the second time t_2) is obtained using equation (25): P_ik(t_2) = (I − kg_ik(t_2)·H_ik)·P̂_ik(t_2), where
- P_ik(t_2) is the corrected covariance for the kth set of estimated measurements of the i-th sensor at the second time t_2,
- I is the identity matrix,
- kg_ik(t_2) is the Kalman gain obtained in step P63, H_ik is the conversion matrix obtained in step P61, and P̂_ik(t_2) is the covariance obtained in step P62.
- Furthermore, P_ik(t_2) at the current time (e.g., the second time t_2) can also be used to calculate P̂_ik(t_3) at the next time (e.g., the third time t_3).
- step P5 in FIG. 6 may be performed before step P4 or simultaneously with it; likewise, step P61 in FIG. 7 may be performed after step P62 or simultaneously with it, and so on.
- the apparatus 800 includes a sixth unit 801 configured to obtain the actual measurement results of each sensor regarding the target state while recording the time at which each set of actual measurement results is obtained, so that a first timestamp is recorded for each set of actual measurement results.
- When a set of actual measurement results has been received, a corresponding receive timestamp is recorded: for the first set of actual measurement results of the 1st sensor, the receive timestamp is marked t_111; for the second set, t_112; ...; for the kth set of the i-th sensor, t_1ik. These timestamps are collectively referred to herein as the first timestamps (i.e., receive timestamps).
- the apparatus 800 may further include a seventh unit 802 configured to record the time at which execution of the target information fusion processing starts as the second timestamp.
- the moment at which the fusion processing of all sets of actual measurements from all sensors begins is recorded as the second timestamp (i.e., the fusion timestamp).
- the apparatus 800 may further include an eighth unit 803 configured to calculate the time difference between each first time instant represented by a first timestamp and the second time instant represented by the second timestamp.
- the time difference can be calculated using the following equation (26): Δt_ik = t_2 − t_1ik, where Δt_ik is the time difference corresponding to the kth set of actual measurement results of the i-th sensor, t_1ik is the first time instant represented by the corresponding first timestamp, and t_2 is the second time instant represented by the second timestamp.
- the apparatus 800 may further include a ninth unit 804 configured to update, based on each calculated time difference, the corresponding actual measurement result obtained at the first time instant, to obtain the corresponding estimated measurement result at the second time instant.
- the apparatus 800 may further include a tenth unit 805 configured to obtain a fusion prediction result of all sensors regarding the target state at the second time instant.
- the following equation (30) can be used to determine the fusion prediction result for the state of a given target: X̂(t_2) = F·X(t_1) + W(t_2), where
- F is the system state transition matrix,
- X(t_1) is the optimal fusion estimation result regarding the target state at the first time t_1 (described later), and
- W(t_2) is the system noise.
- the apparatus 800 may further include an eleventh unit 806 configured to obtain, for each set of estimated measurement results, the optimal estimation result of the corresponding sensor regarding the target state at the second time instant, based on the fusion prediction result and that set of estimated measurement results.
- the eleventh unit 806 includes a 6A unit (not shown) configured to calculate, for each set of estimated measurement results, the corresponding conversion matrix based on the fusion prediction result and that set of estimated measurement results.
- equation (31) can be used to determine the corresponding conversion matrix: Ẑ_ik(t_2) = H_ik·X̂(t_2) + f(Δt_ik)·V(t_2), where
- H_ik is the conversion matrix corresponding to the kth set of actual measurement results of the i-th sensor,
- V(t_2) is the measurement noise, and
- f(Δt_ik) is a function that computes the measurement-noise weight from Δt_ik: the larger Δt_ik is, the larger the measurement noise.
- the eleventh unit 806 may further include a 6B unit (not shown) configured to calculate the covariance corresponding to each set of estimated measurements.
- equation (32) can be used to determine the corresponding covariance: P̂_ik(t_2) = F·P_ik(t_1ik)·F^T + Q, where
- P̂_ik(t_2) is the covariance corresponding to the kth set of estimated measurements of the i-th sensor regarding the target state at the second time t_2,
- F is the system state transition matrix and F^T is its transpose,
- P_ik(t_1ik) is the corrected covariance (described later) corresponding to the optimal estimation result for the kth set of estimated measurement results of the i-th sensor at the first time t_1, and
- Q is the covariance of the system process noise.
- the eleventh unit 806 may further include a 6C unit (not shown) configured to calculate, for each set of estimated measurement results, the corresponding Kalman gain based on the corresponding conversion matrix and the corresponding covariance.
- Equation (33) can be used to calculate the corresponding Kalman gain: kg_ik(t_2) = P̂_ik(t_2)·H_ik^T·(H_ik·P̂_ik(t_2)·H_ik^T + R)^(−1), where
- kg_ik(t_2) is the Kalman gain corresponding to the kth set of estimated measurements of the i-th sensor regarding the target state at the second time t_2,
- H_ik is the conversion matrix corresponding to the kth set of actual measurement results of the i-th sensor and H_ik^T is its transpose, and
- R is the covariance of the measurement process noise.
- the eleventh unit 806 may further include a 6D unit configured to calculate, for each set of estimated measurement results, the optimal estimation result of the corresponding sensor regarding the target state at the second time instant, based on the fusion prediction result, the corresponding Kalman gain, that set of estimated measurement results, and the corresponding conversion matrix.
- equation (34) can be used to calculate the corresponding optimal estimate: X_ik(t_2) = X̂(t_2) + kg_ik(t_2)·(Ẑ_ik(t_2) − H_ik·X̂(t_2)), where
- X_ik(t_2) is the optimal estimation result corresponding to the kth set of estimated measurement results of the i-th sensor at the second time t_2,
- kg_ik(t_2) is the corresponding Kalman gain, Ẑ_ik(t_2) is the estimated measurement result at the second time t_2 corresponding to the kth set of actual measurement results of the i-th sensor, and
- H_ik is the corresponding conversion matrix.
- the apparatus 800 may further include a twelfth unit (not shown) configured to fuse the optimal estimation results of all sensors to determine the weight corresponding to each optimal estimation result, thereby obtaining the optimal fusion estimation result regarding the target state at the second time instant.
- equation (35) can be used to obtain the optimal fusion estimation result regarding the target state at the second time t_2: X(t_2) = f(X_ik(t_2), P̂_ik(t_2)), where
- X(t_2) is the optimal fusion estimation result of all sensors regarding the target state at the second time t_2,
- f is a fusion function,
- X_ik(t_2) is the optimal estimation result corresponding to the kth set of estimated measurements of the i-th sensor at the second time t_2, and
- P̂_ik(t_2) is the covariance corresponding to the kth set of estimated measurements of the i-th sensor regarding the target state at the second time t_2.
- Moreover, as shown in equation (30), the optimal fusion estimation result X(t_1) at the current time (e.g., the first time t_1) can also be used to calculate the fusion prediction result at the next time (e.g., the second time t_2).
- the apparatus 800 may further include a thirteenth unit (not shown) configured to correct the covariance obtained in the 6B unit, according to the conversion matrix obtained in the 6A unit and the Kalman gain obtained in the 6C unit, to obtain a corrected covariance. The corrected covariance can be used to calculate the covariance corresponding to the estimated measurement results at the next time (e.g., the third time t_3) after the current time (e.g., the second time t_2) (see equation (32) above).
- the corrected covariance at the current time (e.g., the second time t_2) is obtained using equation (36): P_ik(t_2) = (I − kg_ik(t_2)·H_ik)·P̂_ik(t_2), where
- P_ik(t_2) is the corrected covariance for the kth set of estimated measurements of the i-th sensor at the second time t_2,
- I is the identity matrix,
- kg_ik(t_2) is the Kalman gain obtained in the 6C unit, H_ik is the conversion matrix obtained in the 6A unit, and P̂_ik(t_2) is the covariance obtained in the 6B unit.
- Furthermore, P_ik(t_2) at the current time (e.g., the second time t_2) can also be used to calculate P̂_ik(t_3) at the next time (e.g., the third time t_3).
- When applied to assisted driving, the above method and apparatus for synchronizing multi-sensor target information fusion with multi-sensor sensing enable the driver assistance system to use better-optimized data, which benefits its decision-making and control; for example, assisted driving functions such as adaptive cruise control and emergency braking can make better decisions based on the optimized data, and such functions or scenarios may also include body stability control and the like.
- The present invention may also be embodied as: an assisted driving method including the above method; an assisted driving system including the above apparatus; a computer device for performing the above method; a computer program for performing the above method or for realizing the functions of the above apparatus; or a computer-readable recording medium on which such a computer program is recorded.
- FIG. 9 illustrates a computer device for performing a method for synchronizing multi-sensor target information fusion with multi-sensor sensing according to an embodiment of the present invention.
- computer device 900 includes a memory 901 and a processor 902.
- computer device 900 also includes a computer program stored on memory 901 and executable on processor 902.
- the processor executes the program to implement the steps of a method for synchronizing multi-sensor target information fusion with multi-sensor sensing according to an embodiment of the present invention, such as shown in FIGS. 6 and 7.
- the method P100 or the device 800 of an embodiment of the present invention can obtain one or more of the following beneficial effects:
- the present invention can also be embodied as a recording medium storing a program for causing a computer to execute the optimization method of multi-sensor target information fusion according to an embodiment of the present invention.
- the present invention can also be embodied as a recording medium storing a program for causing a computer to execute the method for synchronizing multi-sensor target information fusion with multi-sensor sensing according to still another embodiment of the present invention.
- Various types of recording media can be used, such as disks (for example, magnetic disks, optical disks, and the like), cards (for example, memory cards, optical cards, and the like), semiconductor memories (for example, ROM, nonvolatile memory, and the like), and tapes (for example, magnetic tapes, cassette tapes, and the like).
- By recording, in these recording media, a computer program for causing a computer to execute the optimization method of multi-sensor target information fusion in the above embodiments, or a computer program for causing a computer to realize the functions of the multi-sensor target information fusion optimization apparatus in the above embodiments, and circulating the media, cost, portability, and versatility can be improved.
- The recording medium is loaded into a computer, the computer reads the computer program recorded on the medium and stores it in memory, and the processor (a CPU (Central Processing Unit) or an MPU (Micro Processing Unit)) reads the computer program from the memory and executes it, whereby the optimization method of multi-sensor target information fusion in the above embodiments can be carried out and the functions of the multi-sensor target information fusion optimization apparatus in the above embodiments can be realized.
Abstract
The present invention relates to multi-sensor target information fusion. The optimization method for multi-sensor target information fusion of the present invention includes the following steps: for each time instant, obtaining a fusion prediction result of all sensors regarding the target state at the current time; obtaining actual measurement results of each sensor regarding the target state at the current time; for each set of actual measurement results, obtaining an optimal estimation result of the corresponding sensor regarding the target state at the current time based on the fusion prediction result and that set of actual measurement results; and fusing the optimal estimation results of all sensors to determine a weight corresponding to each optimal estimation result, thereby obtaining an optimal fusion estimation result regarding the target state at the current time.
Description
The present invention belongs to the automotive field and relates to multi-sensor target information fusion, and more particularly to an optimization method for multi-sensor target information fusion and to the synchronization of multi-sensor target information fusion with multi-sensor sensing.
At present, "intelligence" is a clear development direction in the worldwide automotive field. The U.S. National Highway Traffic Safety Administration (NHTSA) has issued vehicle classification standards ranging from assisted driving to autonomous driving; however, whether for assisted or autonomous driving, the results of target-sensing sensors need to be fused in order to reduce overlapping targets and compensate for the shortcomings of the results of different sensors.
However, fusing sensor results concerning targets raises the problem of how to distribute weights among the results of the multiple sensors used to sense those targets. Such fusion must process a large number of sensor results in real time and filter them so as to provide the decision module with a stable, linearly varying target group, which consumes considerable processing power, whereas the processing performance of conventional embedded processor chips is relatively low. Consequently, when such a fusion process is handled by a conventional embedded processor chip, the fusion weights can only be assigned manually according to the characteristics of the sensors and the maximum error of their results. Because weights obtained in this way are predefined, the weight assignment is unreasonable and lacks generality, particularly when a sensor result jumps, so an accurate target fusion estimation result cannot be obtained.
Moreover, in actual processing, the update rate of each sensor with respect to a target is affected to varying degrees by differences in the suppliers' algorithms and in sensor hardware performance; in addition, different sensors update at different rates, so the processing during target information fusion is affected as well.
The common industry practice is that, when a target information fusion task is triggered, the most recent sensing result of each sensor is taken as the latest input for target information fusion. However, when a sensor's recognition result for a target is updated slowly or not at all, the optimal estimate produced by the fusion will not be sufficiently accurate.
Summary of the Invention
The present invention has been made to overcome one or more of the above drawbacks, or other drawbacks, and adopts the following technical solutions.
According to a first aspect of the present invention, an optimization method for multi-sensor target information fusion is provided, including: step S1: for each time instant, obtaining a fusion prediction result of all sensors regarding the target state at the current time; step S2: obtaining actual measurement results of each sensor regarding the target state at the current time; step S3: for each set of actual measurement results, obtaining an optimal estimation result of the corresponding sensor regarding the target state at the current time based on the fusion prediction result and that set of actual measurement results; and step S4: fusing the optimal estimation results of all sensors to determine a weight corresponding to each optimal estimation result, thereby obtaining an optimal fusion estimation result regarding the target state at the current time.
Further, in the first aspect of the present invention, step S3 includes: step S31: for each set of actual measurement results, calculating the corresponding conversion matrix based on the fusion prediction result and that set of actual measurement results; step S32: calculating the covariance corresponding to each set of actual measurement results; step S33: for each set of actual measurement results, calculating the corresponding Kalman gain based on the corresponding conversion matrix and the corresponding covariance; and step S34: for each set of actual measurement results, calculating the corresponding optimal estimation result of the corresponding sensor regarding the target state at the current time, based on the fusion prediction result, the corresponding Kalman gain, that set of actual measurement results, and the corresponding conversion matrix.
Further, in the first aspect of the present invention, step S4 includes: step S41: determining the weight corresponding to each optimal estimation result according to the covariances corresponding to the sets of actual measurement results; and step S42: calculating the optimal fusion estimation result of all sensors regarding the target state at the current time based on each optimal estimation result and the corresponding weight.
Further, the first aspect of the present invention also includes: step S5: correcting the covariance obtained in step S32 according to the conversion matrix obtained in step S31 and the Kalman gain obtained in step S33, to obtain a corrected covariance.
Further, in the first aspect of the present invention, in step S32, the covariance corresponding to each set of actual measurement results at the current time is obtained using the corrected covariance at the previous time.
Further, in the first aspect of the present invention, in step S1, the fusion prediction result at the current time is obtained using the optimal fusion estimation result regarding the target state at the previous time.
According to a second aspect of the present invention, a method for synchronizing multi-sensor target information fusion with multi-sensor sensing is provided, including: step P1: obtaining actual measurement results of each sensor regarding the target state while recording the time at which each set of actual measurement results is obtained, so that a first timestamp is recorded for each set of actual measurement results; step P2: recording the time at which execution of the target information fusion processing starts as a second timestamp; step P3: calculating the time difference between each first time instant represented by a first timestamp and the second time instant represented by the second timestamp; step P4: updating, based on each calculated time difference, the corresponding actual measurement result obtained at the first time instant, to obtain the corresponding estimated measurement result at the second time instant; step P5: obtaining a fusion prediction result of all sensors regarding the target state at the second time instant; and step P6: for each set of estimated measurement results, obtaining the optimal estimation result of the corresponding sensor regarding the target state at the second time instant based on the fusion prediction result and that set of estimated measurement results.
Further, in the second aspect of the present invention, step P6 includes: step P61: for each set of estimated measurement results, calculating the corresponding conversion matrix based on the fusion prediction result and that set of estimated measurement results; step P62: calculating the covariance corresponding to each set of estimated measurement results; step P63: for each set of estimated measurement results, calculating the corresponding Kalman gain based on the corresponding conversion matrix and the corresponding covariance; and step P64: for each set of estimated measurement results, calculating the corresponding optimal estimation result of the corresponding sensor regarding the target state at the second time instant, based on the fusion prediction result, the corresponding Kalman gain, that set of estimated measurement results, and the corresponding conversion matrix.
Further, the second aspect of the present invention also includes: step P7: fusing the optimal estimation results of all sensors to determine the weight corresponding to each optimal estimation result, thereby obtaining the optimal fusion estimation result regarding the target state at the second time instant.
Further, the second aspect of the present invention also includes: step P8: correcting the covariance obtained in step P62 according to the conversion matrix obtained in step P61 and the Kalman gain obtained in step P63, to obtain a corrected covariance.
Further, in the second aspect of the present invention, in step P62, the covariance corresponding to each set of estimated measurement results at the second time instant is obtained using the corrected covariance at the first time instant.
Further, in the second aspect of the present invention, in step P5, the fusion prediction result at the second time instant is obtained using the optimal fusion estimation result regarding the target state at the first time instant.
According to a third aspect of the present invention, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the method according to the first and/or second aspect of the present invention.
According to a fourth aspect of the present invention, a recording medium is provided, on which a computer program is stored, the program being executed by a computer to implement the steps of the method according to the first and/or second aspect of the present invention.
According to a fifth aspect of the present invention, an assisted driving method is provided, including the optimization method for multi-sensor target information fusion according to the first aspect of the present invention.
According to a sixth aspect of the present invention, an assisted driving method is provided, including the method according to the second aspect of the present invention.
According to a seventh aspect of the present invention, an assisted driving system is provided, including the optimization apparatus for multi-sensor target information fusion according to the present invention.
The above features and operations of the present invention will become more apparent from the following description and the accompanying drawings.
The above and other objects and advantages of the present invention will become more complete and clear from the following detailed description taken in conjunction with the accompanying drawings, in which identical or similar elements are denoted by the same reference numerals.
FIG. 1 is an example flowchart of an optimization method for multi-sensor target information fusion according to an embodiment of the present invention.
FIG. 2 is an example sub-flowchart of step S3 in FIG. 1 according to an embodiment of the present invention.
FIG. 3 is an example sub-flowchart of step S4 in FIG. 1 according to an embodiment of the present invention.
FIG. 4 is an example block diagram of an optimization apparatus for multi-sensor target information fusion according to an embodiment of the present invention.
FIG. 5 is an example block diagram of a computer device for performing an optimization method for multi-sensor target information fusion according to an embodiment of the present invention.
FIG. 6 is an example flowchart of a method for synchronizing multi-sensor target information fusion with multi-sensor sensing according to an embodiment of the present invention.
FIG. 7 is an example sub-flowchart of step P6 in FIG. 6 according to an embodiment of the present invention.
FIG. 8 is an example block diagram of an apparatus for synchronizing multi-sensor target information fusion with multi-sensor sensing according to an embodiment of the present invention.
FIG. 9 is an example block diagram of a computer device for performing a method for synchronizing multi-sensor target information fusion with multi-sensor sensing according to an embodiment of the present invention.
The method and apparatus, computer device, and recording medium according to the present invention will now be described in further detail with reference to the accompanying drawings. Note that the following specific embodiments are exemplary rather than limiting; they are intended to provide a basic understanding of the present invention and are not intended to identify key or critical elements of the invention or to limit the scope of protection.
The present invention is described below with reference to block diagrams and/or flowchart illustrations of methods and apparatus according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing device to constitute a machine, so that the instructions executed by the processor create means for implementing the functions/operations specified in the flowcharts and/or block diagrams.
These computer program instructions may be stored in a computer-readable memory and may direct a computer or other programmable processor to implement functions in a particular manner, so that the instructions stored in the computer-readable memory constitute an article of manufacture containing instruction means that implement the functions/operations specified in one or more blocks of the flowcharts and/or block diagrams.
These computer program instructions may be loaded onto a computer or other programmable data processor so that a series of operational steps are performed on the computer or other programmable processor to constitute a computer-implemented process, such that the instructions executed on the computer or other programmable data processor provide steps for implementing the functions or operations specified in one or more blocks of the flowcharts and/or block diagrams. It should also be noted that, in some alternative implementations, the functions/operations shown in the blocks may occur out of the order shown in the flowcharts; for example, two blocks shown in succession may in fact be executed substantially concurrently, or sometimes in the reverse order, depending on the functions/operations involved.
The optimization method and apparatus for multi-sensor target information fusion according to the present invention can be applied, for example, to scenarios in which targets around a vehicle are sensed. In such a scenario, the motion state of any target around the vehicle can be characterized, for example, by its longitudinal position relative to the host vehicle, longitudinal velocity, longitudinal acceleration, lateral position relative to the host vehicle, lateral velocity, and lateral acceleration, and each set of measurement results sensed by each of the multiple sensors on the vehicle covers these six quantities or allows them to be calculated. Since the characteristics of each sensor and its measurement error for each quantity differ, the optimization method and apparatus for multi-sensor target information fusion described in detail below can obtain an optimal fusion result for any target and determine the fusion weights reasonably.
FIG. 1 is an example flowchart of an optimization method for multi-sensor target information fusion according to an embodiment of the present invention. As shown in FIG. 1, the method S100 includes the following step: for each time instant, obtaining a fusion prediction result of all sensors regarding the target state at the current time (step S1).
In one example, for time t, the fusion prediction result regarding the state of a given target can be determined using the following equation (1): X̂(t) = F·X(t−1) + W(t), where F is the system state transition matrix, X(t−1) is the optimal fusion estimation result at the previous time t−1 (described later), and W(t) is the system noise.
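To make the prediction step concrete, the following sketch assumes the six-dimensional state mentioned above (longitudinal and lateral position, velocity, and acceleration) and a constant-acceleration transition matrix; the state layout and names are illustrative assumptions, since the patent does not fix F or the state vector. The point prediction then reduces to a single matrix product (the zero-mean system noise W(t) drops out of the point estimate).

```python
import numpy as np

def ca_transition(dt: float) -> np.ndarray:
    """Constant-acceleration transition block for [pos, vel, acc]."""
    return np.array([[1.0, dt, 0.5 * dt * dt],
                     [0.0, 1.0, dt],
                     [0.0, 0.0, 1.0]])

def predict(x_prev: np.ndarray, dt: float) -> np.ndarray:
    """Equation (1)-style prediction X_hat(t) = F @ X(t-1) for an
    assumed 6-state target layout [x, vx, ax, y, vy, ay]."""
    F = np.kron(np.eye(2), ca_transition(dt))  # block-diagonal: x-block, y-block
    return F @ x_prev

x_prev = np.array([10.0, 2.0, 0.0, -1.0, 0.5, 0.0])  # hypothetical target
x_pred = predict(x_prev, dt=0.1)
print(x_pred[0], x_pred[3])  # predicted longitudinal and lateral positions
```

The same F (and its transpose) would then feed the covariance propagation of equation (3).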
In one embodiment, as shown in FIG. 1, the method S100 may further include the following step: obtaining the actual measurement results of each sensor regarding the target state at the current time (step S2).
In one embodiment, as shown in FIG. 1, the method S100 may further include the following step: for each set of actual measurement results, obtaining the optimal estimation result of the corresponding sensor regarding the target state at the current time based on the fusion prediction result and that set of actual measurement results (step S3).
The details of step S3 will be described in conjunction with FIG. 2.
在一个示例中,对于各个传感器所获得的各套实际测量结果,可以利用以下数式(2)来确定对应的转换矩阵:
其中,Z
ik(t)是在t时刻的第i个传感器关于目标状态的第k套实际测量结果,H
ik是与第i个传感器的第k套实际测量结果对应的转换矩阵,
是在t时刻的关于目标状态的所有传感器的融合预测结果,V(t)是测量噪声。
In one embodiment, as shown in FIG. 2, step S3 may further include the following step: calculating the covariance corresponding to each set of actual measurement results (step S32).
In one example, for each set of actual measurement results obtained by each sensor, the corresponding covariance can be determined using the following equation (3): P̂_ik(t) = F·P_ik(t−1)·F^T + Q, where P̂_ik(t) is the covariance corresponding to the kth set of actual measurement results of the i-th sensor regarding the target state at time t, F is the system state transition matrix, F^T is the transpose of the system state transition matrix, P_ik(t−1) is the covariance (described later) corresponding to the optimal estimation result for the kth set of actual measurement results of the i-th sensor at time t−1, and Q is the covariance of the system process noise.
In one embodiment, as shown in FIG. 2, step S3 may further include the following step: for each set of actual measurement results, calculating the corresponding Kalman gain based on the corresponding conversion matrix and the corresponding covariance (step S33).
In one example, for each set of actual measurement results obtained by each sensor, the corresponding Kalman gain can be calculated using the following equation (4): kg_ik(t) = P̂_ik(t)·H_ik^T·(H_ik·P̂_ik(t)·H_ik^T + R)^(−1), where kg_ik(t) is the Kalman gain corresponding to the kth set of actual measurement results of the i-th sensor regarding the target state at time t, P̂_ik(t) is the corresponding covariance, H_ik is the corresponding conversion matrix, H_ik^T is its transpose, and R is the covariance of the measurement process noise.
In one embodiment, as shown in FIG. 2, step S3 may further include the following step: for each set of actual measurement results, calculating the corresponding optimal estimation result of the corresponding sensor regarding the target state at the current time, based on the fusion prediction result, the corresponding Kalman gain, that set of actual measurement results, and the corresponding conversion matrix (step S34).
In one example, for each set of actual measurement results obtained by each sensor, the corresponding optimal estimation result can be calculated using the following equation (5): X_ik(t) = X̂(t) + kg_ik(t)·(Z_ik(t) − H_ik·X̂(t)), where X_ik(t) is the optimal estimation result corresponding to the kth set of actual measurement results of the i-th sensor at time t, X̂(t) is the fusion prediction result of all sensors regarding the target state at time t, kg_ik(t) is the corresponding Kalman gain, Z_ik(t) is the kth set of actual measurement results of the i-th sensor regarding the target state at time t, and H_ik is the corresponding conversion matrix.
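Steps S32–S34 (the covariance propagation, Kalman gain, and optimal estimate of equations (3)–(5)) can be sketched for one measurement set as follows; the 2-state target and position-only sensor are hypothetical, and in a real system F, H, Q, and R would come from the vehicle and sensor models.

```python
import numpy as np

def kalman_update(x_pred, P_prev, z, H, F, Q, R):
    """One measurement set's update following equations (3)-(5)."""
    P_pred = F @ P_prev @ F.T + Q               # eq. (3): propagate covariance
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # eq. (4): Kalman gain
    x_opt = x_pred + K @ (z - H @ x_pred)       # eq. (5): optimal estimate
    return x_opt, P_pred, K

# hypothetical 2-state [pos, vel] target with a position-only sensor
F = np.array([[1.0, 0.1], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.5]])
x_pred = F @ np.array([0.0, 1.0])              # prediction from eq. (1)
x_opt, P_pred, K = kalman_update(x_pred, np.eye(2), np.array([0.2]), H, F, Q, R)
print(x_opt)  # state pulled from the prediction toward the measurement
```

The same routine would be run once per set of measurement results of each sensor, producing the X_ik(t) and P̂_ik(t) consumed by the fusion step.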
After the optimal estimation result of the corresponding sensor regarding the target state at the current time has been calculated for each set of actual measurement results, the flow returns to FIG. 1. In one embodiment, as shown in FIG. 1, the method S100 may further include the following step: fusing the optimal estimation results of all sensors to determine the weight corresponding to each optimal estimation result, thereby obtaining the optimal fusion estimation result regarding the target state at the current time (step S4).
In one example, the optimal fusion estimation result regarding the target state at the current time can be obtained using the following equation (6): X(t) = f(X_ik(t), P̂_ik(t)), where X(t) is the optimal fusion estimation result of all sensors regarding the target state at time t, f is a fusion function, X_ik(t) is the optimal estimation result corresponding to the kth set of actual measurement results of the i-th sensor at time t, and P̂_ik(t) is the covariance corresponding to the kth set of actual measurement results of the i-th sensor regarding the target state at time t.
The details of step S4 will be described in conjunction with FIG. 3.
Specifically, in one embodiment, as shown in FIG. 3, step S4 includes the following step: determining the weight corresponding to each optimal estimation result according to the covariances corresponding to the sets of actual measurement results (step S41).
In one example, after the covariances corresponding to all sets of actual measurement results of all sensors (i.e., P̂_11(t), P̂_12(t), ..., P̂_ik(t)) have been calculated using equation (3) above, a weight (i.e., w_11(t), w_12(t), ..., w_ik(t)) is assigned to each optimal estimation result corresponding to each set of actual measurement results (i.e., X_11(t), X_12(t), ..., X_ik(t)) according to the magnitudes of the respective covariances.
In one embodiment, as shown in FIG. 3, step S4 may further include the following step: calculating the optimal fusion estimation result of all sensors regarding the target state at the current time based on each optimal estimation result and the corresponding weight (step S42).
In one example, the assigned weights (i.e., w_11(t), w_12(t), ..., w_ik(t)) are used to weight the corresponding optimal estimation results (i.e., X_11(t), X_12(t), ..., X_ik(t)), thereby obtaining the optimal fusion estimation result of all sensors regarding the target state at time t. Furthermore, as shown in equation (1) above, X(t) at time t can also be used to calculate X̂(t+1) at time t+1.
Optionally, in one embodiment, the method S100 may further include the following step: correcting the covariance obtained in step S32 according to the conversion matrix obtained in step S31 and the Kalman gain obtained in step S33, to obtain a corrected covariance (step S5, not shown). The corrected covariance can be used to calculate the covariance corresponding to the actual measurement results at the next time after the current time (see equation (3) above).
In one example, the corrected covariance at the current time is obtained using the following equation (7): P_ik(t) = (I − kg_ik(t)·H_ik)·P̂_ik(t), where P_ik(t) is the corrected covariance for the kth set of actual measurement results of the i-th sensor at time t, I is the identity matrix, kg_ik(t) is the Kalman gain obtained in step S33, H_ik is the conversion matrix obtained in step S31, and P̂_ik(t) is the covariance obtained in step S32. Furthermore, P_ik(t) at time t can also be used to calculate P̂_ik(t+1) at time t+1.
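Equation (7)'s correction, and the fact that the corrected covariance feeds the next cycle's propagation, can be sketched as follows; the numeric P_pred, H, and R values are hypothetical.

```python
import numpy as np

def correct_covariance(P_pred, K, H):
    """Equation (7): P_ik(t) = (I - kg_ik(t) @ H_ik) @ P_hat_ik(t);
    the result is carried to time t+1 through equation (3)."""
    I = np.eye(P_pred.shape[0])
    return (I - K @ H) @ P_pred

P_pred = np.array([[1.02, 0.1], [0.1, 1.01]])    # hypothetical P_hat_ik(t)
H = np.array([[1.0, 0.0]])
K = P_pred @ H.T / (P_pred[0, 0] + 0.5)          # gain from eq. (4), R = 0.5
P_corr = correct_covariance(P_pred, K, H)
print(P_corr[0, 0] < P_pred[0, 0])               # the update shrinks uncertainty
```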
Through the above steps, the weights can be adjusted in real time, so that an accurate target fusion estimation result is obtained.
In addition, it should be noted that although an order between the steps is shown in FIG. 1, FIG. 2, and FIG. 3, those skilled in the art will understand that these figures are merely examples and the sequence of the above steps is not limited to what is shown there; for example, step S1 in FIG. 1 may be performed after step S2 or simultaneously with it, and step S31 in FIG. 2 may be performed after step S32 or simultaneously with it, and so on.
Next, an optimization apparatus for multi-sensor target information fusion for performing the method shown in FIG. 1 will be described with reference to FIG. 4.
As shown in FIG. 4, the apparatus 100 includes a first unit 101 configured to obtain, for each time instant, a fusion prediction result of all sensors regarding the target state at the current time.
In one example, for time t, the fusion prediction result regarding the state of a given target can be determined using the following equation (8): X̂(t) = F·X(t−1) + W(t), where F is the system state transition matrix, X(t−1) is the optimal fusion estimation result at the previous time, and W(t) is the system noise.
In one embodiment, as shown in FIG. 4, the apparatus 100 may further include a second unit 102 configured to obtain the actual measurement results of each sensor regarding the target state at the current time.
In one embodiment, as shown in FIG. 4, the apparatus 100 may further include a third unit 103 configured to obtain, for each set of actual measurement results, the optimal estimation result of the corresponding sensor regarding the target state at the current time based on the fusion prediction result and that set of actual measurement results.
The internal structure of the third unit 103 will be described in detail below.
Specifically, in one embodiment, the third unit 103 includes a 3A unit (not shown) configured to calculate, for each set of actual measurement results, the corresponding conversion matrix based on the fusion prediction result and that set of actual measurement results.
In one example, for each set of actual measurement results obtained by each sensor, the corresponding conversion matrix can be determined using the following equation (9): Z_ik(t) = H_ik·X̂(t) + V(t), where Z_ik(t) is the kth set of actual measurement results of the i-th sensor regarding the target state at time t, H_ik is the corresponding conversion matrix, X̂(t) is the fusion prediction result of all sensors regarding the target state at time t, and V(t) is the measurement noise.
In one embodiment, the third unit 103 may further include a 3B unit (not shown) configured to calculate the covariance corresponding to each set of actual measurement results.
In one example, for each set of actual measurement results obtained by each sensor, the corresponding covariance can be determined using the following equation (10): P̂_ik(t) = F·P_ik(t−1)·F^T + Q, where P̂_ik(t) is the covariance corresponding to the kth set of actual measurement results of the i-th sensor regarding the target state at time t, F is the system state transition matrix, F^T is its transpose, P_ik(t−1) is the covariance (described later) corresponding to the optimal estimation result for the kth set of actual measurement results of the i-th sensor at time t−1, and Q is the covariance of the system process noise.
In one embodiment, the third unit 103 may further include a 3C unit (not shown) configured to calculate, for each set of actual measurement results, the corresponding Kalman gain based on the corresponding conversion matrix and the corresponding covariance.
In one example, for each set of actual measurement results obtained by each sensor, the corresponding Kalman gain can be calculated using the following equation (11): kg_ik(t) = P̂_ik(t)·H_ik^T·(H_ik·P̂_ik(t)·H_ik^T + R)^(−1), where kg_ik(t) is the Kalman gain corresponding to the kth set of actual measurement results of the i-th sensor regarding the target state at time t, P̂_ik(t) is the corresponding covariance, H_ik is the corresponding conversion matrix, H_ik^T is its transpose, and R is the covariance of the measurement process noise.
In one embodiment, the third unit 103 may further include a 3D unit (not shown) configured to calculate, for each set of actual measurement results, the corresponding optimal estimation result of the corresponding sensor regarding the target state at the current time, based on the fusion prediction result, the corresponding Kalman gain, that set of actual measurement results, and the corresponding conversion matrix.
In one example, for each set of actual measurement results obtained by each sensor, the corresponding optimal estimation result can be calculated using the following equation (12): X_ik(t) = X̂(t) + kg_ik(t)·(Z_ik(t) − H_ik·X̂(t)), where X_ik(t) is the optimal estimation result corresponding to the kth set of actual measurement results of the i-th sensor at time t, X̂(t) is the fusion prediction result of all sensors regarding the target state at time t, kg_ik(t) is the corresponding Kalman gain, Z_ik(t) is the kth set of actual measurement results, and H_ik is the corresponding conversion matrix.
After the optimal estimation result of the corresponding sensor regarding the target state at the current time has been calculated for each set of actual measurement results, the flow returns to FIG. 4. In one embodiment, as shown in FIG. 4, the apparatus 100 may further include a fourth unit 104 configured to fuse the optimal estimation results of all sensors to determine the weight corresponding to each optimal estimation result, thereby obtaining the optimal fusion estimation result regarding the target state at the current time.
In one example, the optimal fusion estimation result regarding the target state at the current time can be obtained using the following equation (13): X(t) = f(X_ik(t), P̂_ik(t)), where X(t) is the optimal fusion estimation result of all sensors regarding the target state at time t, f is a fusion function, X_ik(t) is the optimal estimation result corresponding to the kth set of actual measurement results of the i-th sensor at time t, and P̂_ik(t) is the corresponding covariance.
The internal structure of the fourth unit 104 will be described in detail below.
Specifically, in one embodiment, the fourth unit 104 includes a 4A unit (not shown) configured to determine the weight corresponding to each optimal estimation result according to the covariances corresponding to the sets of actual measurement results.
In one example, after the covariances corresponding to all sets of actual measurement results of all sensors (i.e., P̂_11(t), P̂_12(t), ..., P̂_ik(t)) have been calculated using equation (10) above, a weight (i.e., w_11(t), w_12(t), ..., w_ik(t)) is assigned to each optimal estimation result corresponding to each set of actual measurement results (i.e., X_11(t), X_12(t), ..., X_ik(t)) according to the magnitudes of the respective covariances.
In one embodiment, the fourth unit 104 may further include a 4B unit (not shown) configured to calculate the optimal fusion estimation result of all sensors regarding the target state at the current time based on each optimal estimation result and the corresponding weight.
In one example, the assigned weights (i.e., w_11(t), w_12(t), ..., w_ik(t)) are used to weight the corresponding optimal estimation results (i.e., X_11(t), X_12(t), ..., X_ik(t)), thereby obtaining the optimal fusion estimation result of all sensors regarding the target state at time t. Furthermore, as shown in equation (8) above, X(t) at time t can also be used to calculate X̂(t+1) at time t+1.
Optionally, in one embodiment, the apparatus 100 may further include a fifth unit (not shown) configured to correct the covariance obtained in the 3B unit according to the conversion matrix obtained in the 3A unit and the Kalman gain obtained in the 3C unit, to obtain a corrected covariance. The corrected covariance can be used to calculate the covariance corresponding to the actual measurement results at the next time after the current time (see equation (10) above).
In one example, the corrected covariance at the current time is obtained using the following equation (14): P_ik(t) = (I − kg_ik(t)·H_ik)·P̂_ik(t), where P_ik(t) is the corrected covariance for the kth set of actual measurement results of the i-th sensor at time t, I is the identity matrix, kg_ik(t) is the Kalman gain obtained in the 3C unit, H_ik is the conversion matrix obtained in the 3A unit, and P̂_ik(t) is the covariance obtained in the 3B unit. Furthermore, P_ik(t) at time t can also be used to calculate P̂_ik(t+1) at time t+1.
Through the above units, the weights can be adjusted in real time, so that an accurate target fusion estimation result is obtained.
When applied to assisted driving, the above optimization method and apparatus for multi-sensor target information fusion according to an embodiment of the present invention enable the driver assistance system to use better-optimized data, which benefits its decision-making and control; for example, assisted driving functions or scenarios such as adaptive cruise control and emergency braking can make better decisions based on the optimized data, and such functions or scenarios may also include body stability control and the like.
Although the above description has centered on embodiments of the optimization method and apparatus for multi-sensor target information fusion, the present invention is not limited to these embodiments and may also be embodied as: an assisted driving method including the above method; an assisted driving system including the above apparatus; a computer device for performing the above method; a computer program for performing the above method; a computer program for realizing the functions of the above apparatus; or a computer-readable recording medium on which such a computer program is recorded.
FIG. 5 shows a computer device for performing an optimization method for multi-sensor target information fusion according to an embodiment of the present invention. As shown in FIG. 5, the computer device 200 includes a memory 201 and a processor 202. Although not shown, the computer device 200 also includes a computer program stored in the memory 201 and executable on the processor 202. When executing the program, the processor implements the steps of the optimization method for multi-sensor target information fusion according to an embodiment of the present invention, for example as shown in FIG. 1, FIG. 2, and FIG. 3.
Compared with the prior art, the method S100 or the apparatus 100 according to an embodiment of the present invention can obtain one or more of the following beneficial effects:
1) The fusion weights can be calculated in real time, ensuring that the weight assignment is appropriate in most scenarios.
2) By applying Kalman filtering to every set of measurement results of every sensor, the covariances and optimal estimation results corresponding to all measurement results are obtained, and the optimal fusion estimation result regarding the target state is then calculated. Combining the covariances of the sensor measurement results makes it possible to decide the weights corresponding to the optimal estimation results, so that even if a sensor's detection results violate the principles of physical motion due to hardware performance or environmental influences, the weights can be adjusted in real time, achieving an optimal fusion estimate of the target and improving the fusion estimation performance.
FIG. 6 is an example flowchart of a method for synchronizing multi-sensor target information fusion with multi-sensor sensing according to an embodiment of the present invention. As shown in FIG. 6, the method P100 includes the following step: obtaining the actual measurement results of each sensor regarding the target state while recording the time at which each set of actual measurement results is obtained, so that a first timestamp is recorded for each set of actual measurement results (step P1).
In one example, when a set of actual measurement results has been received, a corresponding receive timestamp is recorded. For example, for the first set of actual measurement results of the 1st sensor, the receive timestamp is marked t_111; for the second set, t_112; ...; for the kth set of actual measurement results of the i-th sensor, the receive timestamp is marked t_1ik. These timestamps are collectively referred to herein as the first timestamps (i.e., receive timestamps).
In one embodiment, as shown in FIG. 6, the method P100 may further include the following step: recording the time at which execution of the target information fusion processing starts as the second timestamp (step P2).
In one example, the moment at which the fusion processing of all sets of actual measurement results of all sensors starts is recorded as the second timestamp (i.e., the fusion timestamp).
In one embodiment, as shown in FIG. 6, the method P100 may further include the following step: calculating the time difference between each first time instant represented by a first timestamp and the second time instant represented by the second timestamp (step P3).
In one example, the time difference can be calculated using the following equation (15): Δt_ik = t_2 − t_1ik, where Δt_ik is the time difference corresponding to the kth set of actual measurement results of the i-th sensor, t_1ik is the first time instant represented by the first timestamp corresponding to that set, and t_2 is the second time instant represented by the second timestamp.
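The bookkeeping behind steps P1 through P3 can be sketched as follows; the buffer class and its names are illustrative, not from the patent.

```python
import time

class MeasurementBuffer:
    """Records a receive timestamp t_1ik for each arriving measurement
    set and computes delta_t_ik = t_2 - t_1ik when fusion starts
    (equation (15))."""
    def __init__(self):
        self.sets = []   # (sensor_id, set_id, measurement, t_1ik)

    def receive(self, sensor_id, set_id, measurement, t=None):
        # step P1: stamp each set on arrival
        self.sets.append((sensor_id, set_id, measurement,
                          time.monotonic() if t is None else t))

    def time_deltas(self, t2):
        # steps P2/P3: t2 is the fusion timestamp recorded at kickoff
        return {(s, k): t2 - t1 for s, k, _, t1 in self.sets}

buf = MeasurementBuffer()
buf.receive(1, 1, [0.0], t=100.00)
buf.receive(2, 1, [0.1], t=100.04)
print(buf.time_deltas(t2=100.05))  # delta_t per (sensor, set)
```

A monotonic clock is used for the default stamping because wall-clock adjustments would corrupt the Δt differences.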
In one embodiment, as shown in FIG. 6, the method P100 may further include the following step: updating, based on each calculated time difference, the corresponding actual measurement result obtained at the first time instant, to obtain the corresponding estimated measurement result at the second time instant (step P4).
In one example, assuming that the displacement of the vehicle changes during the interval Δt_ik, the actual measurement results obtained at t_1ik can be updated using the following equations (16) through (18): ΔX_vcs_ik = v_x·Δt_ik … (16), ΔY_vcs_ik = v_y·Δt_ik … (17), and equation (18) transforms the measurement into the vehicle frame at the second time by removing these displacements and accounting for the yaw change ω, where v_x is the vehicle longitudinal velocity, v_y is the vehicle lateral velocity, ΔX_vcs_ik is the longitudinal displacement of the vehicle during Δt corresponding to the kth set of actual measurement results of the i-th sensor, ΔY_vcs_ik is the corresponding lateral displacement, Ẑ_ik(t_2) is the estimated measurement result at the second time t_2 corresponding to the kth set of actual measurement results of the i-th sensor, ω is the yaw angle of the vehicle during Δt corresponding to that set, Z_ik_x(t_1ik) is the longitudinal component of the kth set of actual measurement results of the i-th sensor obtained at the first time instant, and Z_ik_y(t_1ik) is the corresponding lateral component.
In one embodiment, as shown in FIG. 6, the method P100 may further include the following step: obtaining a fusion prediction result of all sensors regarding the target state at the second time instant (step P5).
In one example, for the second time t_2, the fusion prediction result regarding the state of a given target can be determined using the following equation (19): X̂(t_2) = F·X(t_1) + W(t_2), where F is the system state transition matrix, X(t_1) is the optimal fusion estimation result regarding the target state at the first time t_1 (described later), and W(t_2) is the system noise.
In one embodiment, as shown in FIG. 6, the method P100 may further include the following step: for each set of estimated measurement results, obtaining the optimal estimation result of the corresponding sensor regarding the target state at the second time instant based on the fusion prediction result and that set of estimated measurement results (step P6).
The details of step P6 will be described in conjunction with FIG. 7.
Specifically, in one embodiment, as shown in FIG. 7, step P6 includes the following step: for each set of estimated measurement results, calculating the corresponding conversion matrix based on the fusion prediction result and that set of estimated measurement results (step P61).
In one example, for each set of estimated measurement results, the corresponding conversion matrix can be determined using the following equation (20): Ẑ_ik(t_2) = H_ik·X̂(t_2) + f(Δt_ik)·V(t_2), where Ẑ_ik(t_2) is the estimated measurement result at the second time t_2 corresponding to the kth set of actual measurement results of the i-th sensor, H_ik is the corresponding conversion matrix, X̂(t_2) is the fusion prediction result of all sensors regarding the target state at the second time t_2, V(t_2) is the measurement noise, and f(Δt_ik) is a function that computes the measurement-noise weight from Δt: the larger Δt_ik is, the larger the measurement noise.
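One possible reading of f(Δt_ik) in equation (20) is that the measurement-noise covariance used for a stale measurement set is inflated with its age. The linear weight below is only an assumed form, since the patent states only that the noise grows with Δt_ik.

```python
import numpy as np

def noise_weight(dt, base=1.0, growth=4.0):
    """A hypothetical f(delta_t): grows with the age of the
    measurement set (the linear form and the constants are
    assumptions, not taken from the patent)."""
    return base + growth * dt

def effective_R(R, dt):
    """Scale the measurement-noise covariance by the age-dependent
    weight before it enters the gain computation."""
    return noise_weight(dt) * R

R = np.array([[0.5]])
print(effective_R(R, 0.0)[0, 0])   # fresh measurement: base noise
print(effective_R(R, 0.25)[0, 0])  # 250 ms old: noise doubled here
```

Inflating R this way lowers the Kalman gain for stale sets, so older measurements pull the estimate less, which matches the stated intent of the weighting.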
In one embodiment, as shown in FIG. 7, step P6 may further include the following step: calculating the covariance corresponding to each set of estimated measurement results (step P62).
In one example, for each set of estimated measurement results, the corresponding covariance can be determined using the following equation (21): P̂_ik(t_2) = F·P_ik(t_1ik)·F^T + Q, where P̂_ik(t_2) is the covariance corresponding to the kth set of estimated measurement results of the i-th sensor regarding the target state at the second time t_2, F is the system state transition matrix, F^T is its transpose, P_ik(t_1ik) is the covariance (described later) corresponding to the optimal estimation result for the kth set of estimated measurement results of the i-th sensor at the first time t_1, and Q is the covariance of the system process noise.
In one embodiment, as shown in FIG. 7, step P6 may further include the following step: for each set of estimated measurement results, calculating the corresponding Kalman gain based on the corresponding conversion matrix and the corresponding covariance (step P63).
In one example, for each set of estimated measurement results, the corresponding Kalman gain can be calculated using the following equation (22): kg_ik(t_2) = P̂_ik(t_2)·H_ik^T·(H_ik·P̂_ik(t_2)·H_ik^T + R)^(−1), where kg_ik(t_2) is the Kalman gain corresponding to the kth set of estimated measurement results of the i-th sensor regarding the target state at the second time t_2, P̂_ik(t_2) is the corresponding covariance, H_ik is the conversion matrix corresponding to the kth set of actual measurement results of the i-th sensor, H_ik^T is its transpose, and R is the covariance of the measurement process noise.
In one embodiment, as shown in FIG. 7, step P6 may further comprise the following step: for each set of predicted measurement results, calculating the corresponding optimal estimate of the respective sensor regarding the target state at the second time, on the basis of the fused prediction result, the corresponding Kalman gain, that set of predicted measurement results, and the corresponding transformation matrix (step P64).

In one example, for each set of predicted measurement results, the corresponding optimal estimate can be calculated using the following equation (23):

X_ik(t_2) = X̂(t_2) + kg_ik(t_2) · (Ẑ_ik(t_2) − H_ik · X̂(t_2)) … (23),

where X_ik(t_2) is the optimal estimate corresponding to the k-th set of predicted measurement results of the i-th sensor at the second time t_2, X̂(t_2) is the fused prediction result of all sensors regarding the target state at the second time t_2, kg_ik(t_2) is the Kalman gain corresponding to the k-th set of predicted measurement results of the i-th sensor regarding the target state at the second time t_2, Ẑ_ik(t_2) is the predicted measurement result at the second time t_2 corresponding to the k-th set of actual measurement results of the i-th sensor, and H_ik is the transformation matrix corresponding to the k-th set of actual measurement results of the i-th sensor.
Through the above steps, an accurate optimal estimate regarding the target state can be obtained for each set of actual measurement results.
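In scalar form, and purely as an illustrative sketch (the function name and the reduction to one dimension are assumptions, not part of the patent), steps P62 to P64 together with the covariance correction of step P8 amount to:

```python
def kalman_update(x_pred, p_prior, z, h, r, q, f=1.0):
    """One per-sensor Kalman update for a scalar target state.

    x_pred : fused prediction at the fusion time t2        (step P5)
    p_prior: corrected covariance from the previous cycle  (P_ik(t_1ik))
    z      : predicted measurement at t2                   (step P4)
    h      : transformation (observation) scalar           (step P61)
    r, q   : measurement / process noise covariances
    f      : state transition scalar
    Returns (optimal estimate, corrected covariance)."""
    p = f * p_prior * f + q                # eq. (21): predict covariance
    kg = p * h / (h * p * h + r)           # eq. (22): Kalman gain
    x = x_pred + kg * (z - h * x_pred)     # eq. (23): optimal estimate
    p_corr = (1.0 - kg * h) * p            # eq. (25): corrected covariance
    return x, p_corr
```

Running this once per set of predicted measurement results yields the per-sensor optimal estimates that step P7 then fuses.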
Optionally, in one embodiment, after the optimal estimate corresponding to every set of actual measurement results of every sensor has been calculated, the method P100 may further comprise the following step: fusing the optimal estimates of all sensors so as to determine the weight corresponding to each optimal estimate, and thereby obtaining the optimal fused estimate regarding the target state at the second time (step P7, not shown).

In one example, the optimal fused estimate regarding the target state at the second time t_2 can be obtained using the following equation (24):

X(t_2) = f(X_ik(t_2), P̂_ik(t_2)) … (24),

where X(t_2) is the optimal fused estimate of all sensors regarding the target state at the second time t_2, f is the fusion function, X_ik(t_2) is the optimal estimate corresponding to the k-th set of predicted measurement results of the i-th sensor at the second time t_2, and P̂_ik(t_2) is the covariance corresponding to the k-th set of predicted measurement results of the i-th sensor regarding the target state at the second time t_2. Moreover, as shown in equation (19) above, the optimal fused estimate X(t_1) at the current time (e.g., the first time t_1) can also be used to calculate the fused prediction result X̂(t_2) at the next time (e.g., the second time t_2).
Optionally, in one embodiment, the method P100 may further comprise the following step: correcting the covariance obtained in step P62 according to the transformation matrix obtained in step P61 and the Kalman gain obtained in step P63, so as to obtain a corrected covariance (step P8, not shown); the corrected covariance can be used to calculate the covariance corresponding to the predicted measurement results at the time following the current time (e.g., the third time t_3 following the second time t_2) (see equation (21) above).

In one example, the corrected covariance at the current time (e.g., the second time t_2) is obtained using the following equation (25):

P_ik(t_2) = (I − kg_ik(t_2) · H_ik) · P̂_ik(t_2) … (25),

where P_ik(t_2) is the corrected covariance for the k-th set of predicted measurement results of the i-th sensor at the second time t_2, I is the identity matrix, kg_ik(t_2) is the Kalman gain obtained in step P63 corresponding to the k-th set of predicted measurement results of the i-th sensor regarding the target state at the second time t_2, H_ik is the transformation matrix obtained in step P61 corresponding to the k-th set of actual measurement results of the i-th sensor, and P̂_ik(t_2) is the covariance obtained in step P62 corresponding to the k-th set of predicted measurement results of the i-th sensor regarding the target state at the second time t_2. Moreover, P_ik(t_2) at the current time (e.g., the second time t_2) can also be used to calculate P̂_ik(t_3) at the next time (e.g., the third time t_3).
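One common choice for the fusion function f of equation (24), offered here only as a plausible sketch since the patent leaves f unspecified, is inverse-covariance weighting:

```python
def fuse_estimates(estimates, covariances):
    """Fuse per-sensor optimal estimates with inverse-covariance weights.

    A set with a larger covariance (a less trustworthy measurement) gets
    a smaller weight, matching the behaviour described for step P7.
    Scalar states are assumed for simplicity."""
    weights = [1.0 / p for p in covariances]
    total = sum(weights)
    return sum(w * x for w, x in zip(weights, estimates)) / total
```

Under this choice, a sensor whose measurement arrived long before the fusion time (large Δt_ik, hence inflated covariance) automatically contributes less to the fused state.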
Furthermore, it should be noted that although an order among the steps is shown in FIG. 6 and FIG. 7, those skilled in the art will understand that FIG. 6 and FIG. 7 are merely examples and that the precedence among the above steps is not limited to what is shown there; for example, step P5 in FIG. 6 may be executed before step P4 or simultaneously with it, and step P61 in FIG. 7 may likewise be executed after step P62 or simultaneously with it, and so on.
Next, an apparatus for synchronizing multi-sensor target information fusion with multi-sensor sensing will be described with reference to FIG. 8.

As shown in FIG. 8, the apparatus 800 comprises a sixth unit 801 configured to obtain the actual measurement results of each sensor regarding the target state while separately recording the time at which each set of actual measurement results is obtained, so that a first timestamp is recorded for each set of actual measurement results.

In one example, when a set of actual measurement results has been received, a corresponding reception timestamp is recorded. For example, the reception timestamp of the first set of actual measurement results of the 1st sensor is denoted t_111, that of the second set of actual measurement results of the 1st sensor is denoted t_112, ..., and that of the k-th set of actual measurement results of the i-th sensor is denoted t_1ik. These timestamps are collectively referred to herein as first timestamps (i.e., reception timestamps).
In one embodiment, as shown in FIG. 8, the apparatus 800 may further comprise a seventh unit 802 configured to record the time at which the target information fusion processing starts as a second timestamp.

In one example, the time at which all sets of actual measurement results of all sensors are fused is recorded as the second timestamp (i.e., the fusion timestamp).

In one embodiment, as shown in FIG. 8, the apparatus 800 may further comprise an eighth unit 803 configured to separately calculate the time difference between each first time represented by each first timestamp and the second time represented by the second timestamp.

In one example, the time difference can be calculated using the following equation (26):

Δt_ik = t_2 − t_1ik … (26),

where Δt_ik is the time difference corresponding to the k-th set of actual measurement results of the i-th sensor, t_1ik is the first time represented by the first timestamp corresponding to the k-th set of actual measurement results of the i-th sensor, and t_2 is the second time represented by the second timestamp.
In one embodiment, as shown in FIG. 8, the apparatus 800 may further comprise a ninth unit 804 configured to update, on the basis of each calculated time difference, the corresponding actual measurement results obtained at the first time, so as to obtain the corresponding predicted measurement results at the second time.

In one example, assuming that the displacement of the vehicle changes during the interval Δt_ik, the actual measurement results obtained at t_1ik can be updated using the following equations (27) to (29):

ΔX_vcs_ik = v_x · Δt_ik … (27)

ΔY_vcs_ik = v_y · Δt_ik … (28)

[Ẑ_ik_x(t_2); Ẑ_ik_y(t_2)] = [cos ω, sin ω; −sin ω, cos ω] · [Z_ik_x(t_1ik) − ΔX_vcs_ik; Z_ik_y(t_1ik) − ΔY_vcs_ik] … (29)

where v_x is the longitudinal velocity of the vehicle, v_y is the lateral velocity of the vehicle, ΔX_vcs_ik is the longitudinal displacement of the vehicle during Δt corresponding to the k-th set of actual measurement results of the i-th sensor, ΔY_vcs_ik is the lateral displacement of the vehicle during Δt corresponding to the k-th set of actual measurement results of the i-th sensor, Ẑ_ik(t_2) is the predicted measurement result at the second time t_2 corresponding to the k-th set of actual measurement results of the i-th sensor, ω is the yaw angle traversed by the vehicle during Δt corresponding to the k-th set of actual measurement results of the i-th sensor, Z_ik_x(t_1ik) is the longitudinal component of the k-th set of actual measurement results of the i-th sensor obtained at the first time, and Z_ik_y(t_1ik) is the lateral component of the k-th set of actual measurement results of the i-th sensor obtained at the first time.
In one embodiment, as shown in FIG. 8, the apparatus 800 may further comprise a tenth unit 805 configured to obtain the fused prediction result of all sensors regarding the target state at the second time.

In one example, for the second time t_2, the fused prediction result regarding the target state of a given target can be determined using the following equation (30):

X̂(t_2) = F · X(t_1) … (30),

where X̂(t_2) is the fused prediction result of all sensors regarding the target state at the second time t_2, F is the system state transition matrix, and X(t_1) is the optimal fused estimate regarding the target state at the first time t_1.
In one embodiment, as shown in FIG. 8, the apparatus 800 may further comprise an eleventh unit 806 configured to obtain, for each set of predicted measurement results, the optimal estimate of the corresponding sensor regarding the target state at the second time on the basis of the fused prediction result and that set of predicted measurement results.

The details of the eleventh unit 806 are described below.

Specifically, in one embodiment, the eleventh unit 806 comprises a unit 6A (not shown) configured to calculate, for each set of predicted measurement results, the corresponding transformation matrix on the basis of the fused prediction result and that set of predicted measurement results.

In one example, for each set of predicted measurement results, the corresponding transformation matrix can be determined using the following equation (31):

Ẑ_ik(t_2) = H_ik · X̂(t_2) + V(t_2) · f(Δt_ik) … (31),

where Ẑ_ik(t_2) is the predicted measurement result at the second time t_2 corresponding to the k-th set of actual measurement results of the i-th sensor, H_ik is the transformation matrix corresponding to the k-th set of actual measurement results of the i-th sensor, X̂(t_2) is the fused prediction result of all sensors regarding the target state at the second time t_2, V(t_2) is the measurement noise, and f(Δt_ik) is a function that computes the measurement-noise weight from Δt: the larger Δt_ik is, the larger the measurement noise.
In one embodiment, the eleventh unit 806 may further comprise a unit 6B (not shown) configured to calculate the covariance corresponding to each set of predicted measurement results.

In one example, for each set of predicted measurement results, the corresponding covariance can be determined using the following equation (32):

P̂_ik(t_2) = F · P_ik(t_1ik) · F^T + Q … (32),

where P̂_ik(t_2) is the covariance corresponding to the k-th set of predicted measurement results of the i-th sensor regarding the target state at the second time t_2, F is the system state transition matrix, F^T is the transpose of the system state transition matrix, P_ik(t_1ik) is the covariance corresponding to the optimal estimate for the k-th set of predicted measurement results of the i-th sensor at the first time t_1ik (described below), and Q is the covariance of the system process noise.
In one embodiment, the eleventh unit 806 may further comprise a unit 6C (not shown) configured to calculate, for each set of predicted measurement results, the corresponding Kalman gain on the basis of the corresponding transformation matrix and the corresponding covariance.

In one example, for each set of predicted measurement results, the corresponding Kalman gain can be calculated using the following equation (33):

kg_ik(t_2) = P̂_ik(t_2) · H_ik^T · (H_ik · P̂_ik(t_2) · H_ik^T + R)^−1 … (33),

where kg_ik(t_2) is the Kalman gain corresponding to the k-th set of predicted measurement results of the i-th sensor regarding the target state at the second time t_2, P̂_ik(t_2) is the covariance corresponding to the k-th set of predicted measurement results of the i-th sensor regarding the target state at the second time t_2, H_ik is the transformation matrix corresponding to the k-th set of actual measurement results of the i-th sensor, H_ik^T is the transpose of that transformation matrix, and R is the covariance of the measurement noise.
In one embodiment, the eleventh unit 806 may further comprise a unit 6D configured to calculate, for each set of predicted measurement results, the corresponding optimal estimate of the respective sensor regarding the target state at the second time, on the basis of the fused prediction result, the corresponding Kalman gain, that set of predicted measurement results, and the corresponding transformation matrix.

In one example, for each set of predicted measurement results, the corresponding optimal estimate can be calculated using the following equation (34):

X_ik(t_2) = X̂(t_2) + kg_ik(t_2) · (Ẑ_ik(t_2) − H_ik · X̂(t_2)) … (34),

where X_ik(t_2) is the optimal estimate corresponding to the k-th set of predicted measurement results of the i-th sensor at the second time t_2, X̂(t_2) is the fused prediction result of all sensors regarding the target state at the second time t_2, kg_ik(t_2) is the Kalman gain corresponding to the k-th set of predicted measurement results of the i-th sensor regarding the target state at the second time t_2, Ẑ_ik(t_2) is the predicted measurement result at the second time t_2 corresponding to the k-th set of actual measurement results of the i-th sensor, and H_ik is the transformation matrix corresponding to the k-th set of actual measurement results of the i-th sensor.
Through the above units, an accurate optimal estimate regarding the target state can be obtained for each set of actual measurement results.

Optionally, in one embodiment, the apparatus 800 may further comprise a twelfth unit (not shown) configured to fuse the optimal estimates of all sensors so as to determine the weight corresponding to each optimal estimate, and thereby obtain the optimal fused estimate regarding the target state at the second time.
In one example, the optimal fused estimate regarding the target state at the second time t_2 can be obtained using the following equation (35):

X(t_2) = f(X_ik(t_2), P̂_ik(t_2)) … (35),

where X(t_2) is the optimal fused estimate of all sensors regarding the target state at the second time t_2, f is the fusion function, X_ik(t_2) is the optimal estimate corresponding to the k-th set of predicted measurement results of the i-th sensor at the second time t_2, and P̂_ik(t_2) is the covariance corresponding to the k-th set of predicted measurement results of the i-th sensor regarding the target state at the second time t_2. Moreover, as shown in equation (30) above, the optimal fused estimate X(t_1) at the current time (e.g., the first time t_1) can also be used to calculate the fused prediction result X̂(t_2) at the next time (e.g., the second time t_2).
Optionally, in one embodiment, the apparatus 800 may further comprise a thirteenth unit (not shown) configured to correct the covariance obtained by unit 6B according to the transformation matrix obtained by unit 6A and the Kalman gain obtained by unit 6C, so as to obtain a corrected covariance; the corrected covariance can be used to calculate the covariance corresponding to the predicted measurement results at the time following the current time (e.g., the third time t_3 following the second time t_2) (see equation (32) above).

In one example, the corrected covariance at the current time (e.g., the second time t_2) is obtained using the following equation (36):

P_ik(t_2) = (I − kg_ik(t_2) · H_ik) · P̂_ik(t_2) … (36),

where P_ik(t_2) is the corrected covariance for the k-th set of predicted measurement results of the i-th sensor at the second time t_2, I is the identity matrix, kg_ik(t_2) is the Kalman gain obtained by unit 6C corresponding to the k-th set of predicted measurement results of the i-th sensor regarding the target state at the second time t_2, H_ik is the transformation matrix obtained by unit 6A corresponding to the k-th set of actual measurement results of the i-th sensor, and P̂_ik(t_2) is the covariance obtained by unit 6B corresponding to the k-th set of predicted measurement results of the i-th sensor regarding the target state at the second time t_2. Moreover, P_ik(t_2) at the current time (e.g., the second time t_2) can also be used to calculate P̂_ik(t_3) at the next time (e.g., the third time t_3).
When applied to driving assistance, the above method and apparatus for synchronizing multi-sensor target information fusion with multi-sensor sensing according to an embodiment of the present invention enable the driving-assistance system to work with better-optimized data, which benefits its decision-making and control; for example, driving-assistance functions or scenarios such as adaptive cruise control and emergency braking can make better decisions on the basis of the optimized data, and such functions or scenarios may also include vehicle body stability control and the like.

Although the description so far has centered on embodiments of the method and apparatus for synchronizing multi-sensor target information fusion with multi-sensor sensing, the present invention is not limited to these embodiments and may also be implemented in the following forms: a driving-assistance method comprising the above method; a driving-assistance system comprising the above apparatus; a computer device for executing the above method; a computer program for executing the above method; a computer program for realizing the functions of the above apparatus; or a computer-readable recording medium on which such a computer program is recorded.

FIG. 9 shows a computer device according to an embodiment of the present invention for executing the method for synchronizing multi-sensor target information fusion with multi-sensor sensing according to an embodiment of the present invention. As shown in FIG. 9, the computer device 900 comprises a memory 901 and a processor 902. Although not shown, the computer device 900 further comprises a computer program stored in the memory 901 and runnable on the processor 902. When the processor executes the program, the steps of the method according to an embodiment of the present invention, for example as shown in FIG. 6 and FIG. 7, are carried out.
Compared with the prior art, the method P100 or the apparatus 800 according to an embodiment of the present invention can obtain one or more of the following beneficial effects:

1) even when a sensor's sensing results are not updated or are updated slowly, a sufficiently accurate fusion result can still be ensured;

2) beyond the scenario in which sensing results are not updated or are updated slowly, a sufficiently accurate fusion result can also be ensured when the individual sensors update their sensing results at mutually inconsistent periods.
In addition, as described above, the present invention can also be implemented as a recording medium storing a program that causes a computer to execute the optimization method for multi-sensor target information fusion according to an embodiment of the present invention.

The present invention can likewise be implemented as a recording medium storing a program that causes a computer to execute the method for synchronizing multi-sensor target information fusion with multi-sensor sensing according to a further embodiment of the present invention.

Here, recording media of various kinds can be adopted, such as disks (e.g., magnetic disks, optical discs), cards (e.g., memory cards, optical cards), semiconductor memories (e.g., ROM, non-volatile memory), and tapes (e.g., magnetic tape, cassette tape).

By recording on such media a computer program that causes a computer to execute the optimization method for multi-sensor target information fusion of the above embodiments, or a computer program that causes a computer to realize the functions of the optimization apparatus for multi-sensor target information fusion of the above embodiments, and putting the media into circulation, cost can be reduced and portability and versatility improved.

Furthermore, by loading such a recording medium on a computer, reading out the computer program recorded on the medium and storing it in memory, and having the computer's processor (a CPU: Central Processing Unit, or an MPU: Micro Processing Unit) read the program from the memory and execute it, the optimization method for multi-sensor target information fusion of the above embodiments can be executed and the functions of the optimization apparatus of the above embodiments realized.

Those of ordinary skill in the art should understand that the present invention is not limited to the above embodiments and can be implemented in many other forms without departing from its gist and scope. The examples and embodiments shown are therefore to be regarded as illustrative rather than restrictive, and the present invention may cover various modifications and substitutions without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (15)
- An optimization method for multi-sensor target information fusion, characterized by comprising: step S1: for each time, obtaining the fused prediction result of all sensors regarding the target state at the current time; step S2: obtaining the actual measurement results of each sensor regarding the target state at the current time; step S3: for each set of actual measurement results, obtaining, on the basis of the fused prediction result and that set of actual measurement results, the optimal estimate of the corresponding sensor regarding the target state at the current time; and step S4: fusing the optimal estimates of all sensors so as to determine the weight corresponding to each optimal estimate, and thereby obtaining the optimal fused estimate regarding the target state at the current time.
- The optimization method according to claim 1, characterized in that step S3 comprises: step S31: for each set of actual measurement results, calculating the corresponding transformation matrix on the basis of the fused prediction result and that set of actual measurement results; step S32: calculating the covariance corresponding to each set of actual measurement results; step S33: for each set of actual measurement results, calculating the corresponding Kalman gain on the basis of the corresponding transformation matrix and the corresponding covariance; and step S34: for each set of actual measurement results, calculating the corresponding optimal estimate of the respective sensor regarding the target state at the current time on the basis of the fused prediction result, the corresponding Kalman gain, that set of actual measurement results, and the corresponding transformation matrix.
- The optimization method according to claim 2, characterized in that step S4 comprises: step S41: determining the weight corresponding to each optimal estimate according to the covariance corresponding to each set of actual measurement results; and step S42: calculating the optimal fused estimate of all sensors regarding the target state at the current time on the basis of each optimal estimate and its corresponding weight.
- The optimization method according to claim 2, characterized by further comprising: step S5: correcting the covariance obtained in step S32 according to the transformation matrix obtained in step S31 and the Kalman gain obtained in step S33, so as to obtain a corrected covariance.
- The optimization method according to claim 4, characterized in that, in step S32, the covariance corresponding to each set of actual measurement results at the current time is obtained using the corrected covariance at the preceding time.
- The method according to any one of claims 1 to 5, characterized in that, in step S1, the fused prediction result at the current time is obtained using the optimal fused estimate regarding the target state at the preceding time.
- A method for synchronizing multi-sensor target information fusion with multi-sensor sensing, characterized by comprising: step P1: obtaining the actual measurement results of each sensor regarding the target state while separately recording the time at which each set of actual measurement results is obtained, so that a first timestamp is recorded for each set of actual measurement results; step P2: recording the time at which the target information fusion processing starts as a second timestamp; step P3: separately calculating the time difference between each first time represented by each first timestamp and the second time represented by the second timestamp; step P4: updating, on the basis of each calculated time difference, the corresponding actual measurement results obtained at the first time, so as to obtain the corresponding predicted measurement results at the second time; step P5: obtaining the fused prediction result of all sensors regarding the target state at the second time; and step P6: for each set of predicted measurement results, obtaining, on the basis of the fused prediction result and that set of predicted measurement results, the optimal estimate of the corresponding sensor regarding the target state at the second time.
- The method according to claim 7, characterized in that step P6 comprises: step P61: for each set of predicted measurement results, calculating the corresponding transformation matrix on the basis of the fused prediction result and that set of predicted measurement results; step P62: calculating the covariance corresponding to each set of predicted measurement results; step P63: for each set of predicted measurement results, calculating the corresponding Kalman gain on the basis of the corresponding transformation matrix and the corresponding covariance; and step P64: for each set of predicted measurement results, calculating the corresponding optimal estimate of the respective sensor regarding the target state at the second time on the basis of the fused prediction result, the corresponding Kalman gain, that set of predicted measurement results, and the corresponding transformation matrix.
- The method according to claim 8, characterized by further comprising: step P7: fusing the optimal estimates of all sensors so as to determine the weight corresponding to each optimal estimate, and thereby obtaining the optimal fused estimate regarding the target state at the second time.
- The method according to claim 8, characterized by further comprising: step P8: correcting the covariance obtained in step P62 according to the transformation matrix obtained in step P61 and the Kalman gain obtained in step P63, so as to obtain a corrected covariance.
- The method according to claim 10, characterized in that, in step P62, the covariance corresponding to each set of predicted measurement results at the second time is obtained using the corrected covariance at the first time.
- The method according to claim 9, characterized in that, in step P5, the fused prediction result at the second time is obtained using the optimal fused estimate regarding the target state at the first time.
- A computer device comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, characterized in that the processor, when executing the program, implements the steps of the method according to any one of claims 1 to 12.
- A recording medium on which a computer program is stored, characterized in that the program can be executed by a computer to implement the steps of the method according to any one of claims 1 to 12.
- A driving assistance method, characterized by comprising: the optimization method for multi-sensor target information fusion according to any one of claims 1 to 6; and/or the method for synchronizing multi-sensor target information fusion with multi-sensor sensing according to any one of claims 7 to 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP18889848.0A EP3726429A4 (en) | 2017-12-15 | 2018-12-14 | MULTIPLE SENSOR TARGET INFORMATION FUSION |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711346193.6A CN108573270B (zh) | 2017-12-15 | 2017-12-15 | 使多传感器目标信息融合与多传感器感测同步的方法及装置、计算机设备和记录介质 |
CN201711346193.6 | 2017-12-15 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019114807A1 true WO2019114807A1 (zh) | 2019-06-20 |
Family
ID=63575918
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/121008 WO2019114807A1 (zh) | 2017-12-15 | 2018-12-14 | 多传感器目标信息融合 |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP3726429A4 (zh) |
CN (1) | CN108573270B (zh) |
WO (1) | WO2019114807A1 (zh) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108573270B (zh) * | 2017-12-15 | 2020-04-28 | 上海蔚来汽车有限公司 | 使多传感器目标信息融合与多传感器感测同步的方法及装置、计算机设备和记录介质 |
CN110378178B (zh) * | 2018-09-30 | 2022-01-28 | 毫末智行科技有限公司 | 目标跟踪方法及装置 |
KR102592830B1 (ko) * | 2018-12-05 | 2023-10-23 | 현대자동차주식회사 | 차량용 센서퓨전 타겟 예측 장치 및 그의 센서 퓨전 타겟 예측 방법과 그를 포함하는 차량 |
CN111348046B (zh) * | 2018-12-24 | 2021-06-15 | 毫末智行科技有限公司 | 目标数据融合方法、系统及机器可读存储介质 |
US11214261B2 (en) * | 2019-06-11 | 2022-01-04 | GM Global Technology Operations LLC | Learn association for multi-object tracking with multi sensory data and missing modalities |
CN110720096B (zh) * | 2019-07-03 | 2022-07-08 | 深圳市速腾聚创科技有限公司 | 一种多传感器状态估计方法、装置及终端设备 |
CN112208529B (zh) * | 2019-07-09 | 2022-08-02 | 毫末智行科技有限公司 | 用于目标检测的感知系统、驾驶辅助方法和无人驾驶设备 |
CN110879598A (zh) * | 2019-12-11 | 2020-03-13 | 北京踏歌智行科技有限公司 | 车辆用多传感器的信息融合方法和装置 |
CN111191734B (zh) * | 2020-01-03 | 2024-05-28 | 北京汽车集团有限公司 | 传感器数据融合方法、装置、设备及存储介质 |
CN112712549B (zh) * | 2020-12-31 | 2024-08-09 | 上海商汤临港智能科技有限公司 | 数据处理方法、装置、电子设备以及存储介质 |
CN114528940B (zh) * | 2022-02-18 | 2024-11-15 | 深圳海星智驾科技有限公司 | 一种多传感器目标融合方法及装置 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160103214A1 (en) * | 2014-10-08 | 2016-04-14 | Src, Inc. | Use of Range-Rate Measurements in a Fusion Tracking System via Projections |
CN106291533A (zh) * | 2016-07-27 | 2017-01-04 | 电子科技大学 | 一种基于amd的分布式多传感器融合算法 |
CN108573271A (zh) * | 2017-12-15 | 2018-09-25 | 蔚来汽车有限公司 | 多传感器目标信息融合的优化方法及装置、计算机设备和记录介质 |
CN108573270A (zh) * | 2017-12-15 | 2018-09-25 | 蔚来汽车有限公司 | 使多传感器目标信息融合与多传感器感测同步的方法及装置、计算机设备和记录介质 |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105352535A (zh) * | 2015-09-29 | 2016-02-24 | 河海大学 | 一种基于多传感器数据融合的测量方法 |
- 2017-12-15 CN CN201711346193.6A patent/CN108573270B/zh active Active
- 2018-12-14 EP EP18889848.0A patent/EP3726429A4/en active Pending
- 2018-12-14 WO PCT/CN2018/121008 patent/WO2019114807A1/zh unknown
Non-Patent Citations (1)
Title |
---|
See also references of EP3726429A4 * |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110798848A (zh) * | 2019-09-27 | 2020-02-14 | 国家电网有限公司 | 无线传感器数据融合方法及装置、可读存储介质和终端 |
CN111881955A (zh) * | 2020-07-15 | 2020-11-03 | 北京经纬恒润科技有限公司 | 多源传感器信息融合方法及装置 |
CN111881955B (zh) * | 2020-07-15 | 2023-07-04 | 北京经纬恒润科技股份有限公司 | 多源传感器信息融合方法及装置 |
CN112003891A (zh) * | 2020-07-16 | 2020-11-27 | 山东省网联智能车辆产业技术研究院有限公司 | 用于智能网联车辆控制器的多传感数据融合方法 |
CN114646955A (zh) * | 2020-12-21 | 2022-06-21 | 阿里巴巴集团控股有限公司 | 数据融合理方法、装置、电子设备、及计算机存储介质 |
CN113219347A (zh) * | 2021-04-27 | 2021-08-06 | 东软睿驰汽车技术(沈阳)有限公司 | 一种电池参数测量方法及装置 |
CN114065876A (zh) * | 2022-01-11 | 2022-02-18 | 华砺智行(武汉)科技有限公司 | 基于路侧多传感器的数据融合方法、装置、系统及介质 |
CN114608589A (zh) * | 2022-03-04 | 2022-06-10 | 西安邮电大学 | 一种多传感器信息融合方法及系统 |
CN114964270A (zh) * | 2022-05-17 | 2022-08-30 | 驭势科技(北京)有限公司 | 融合定位方法、装置、车辆及存储介质 |
CN114964270B (zh) * | 2022-05-17 | 2024-04-26 | 驭势科技(北京)有限公司 | 融合定位方法、装置、车辆及存储介质 |
CN115792796A (zh) * | 2023-02-13 | 2023-03-14 | 鹏城实验室 | 基于相对观测等效模型的协同定位方法、装置及终端 |
CN119808005A (zh) * | 2025-03-11 | 2025-04-11 | 中南大学 | 同类监测传感器的数据动态融合方法及系统 |
Also Published As
Publication number | Publication date |
---|---|
CN108573270B (zh) | 2020-04-28 |
EP3726429A4 (en) | 2021-08-18 |
EP3726429A1 (en) | 2020-10-21 |
CN108573270A (zh) | 2018-09-25 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18889848 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
ENP | Entry into the national phase |
Ref document number: 2018889848 Country of ref document: EP Effective date: 20200715 |