
Tutorial 14: Multisensor Data Fusion
Part 14 in a series of tutorials in instrumentation and measurement

David Macii, Andrea Boni, Mariolino De Cecco, and Dario Petri

Multisensor data fusion is a multilevel, multifaceted process dealing with the automatic detection, association, correlation, estimation, and combination of data from single and multiple information sources. The results of a data fusion process help users make decisions in complicated scenarios. Integration of multiple sensor data was originally needed for military applications in ocean surveillance, air-to-air and surface-to-air defense, or battlefield intelligence. More recently, multisensor data fusion has also included the nonmilitary fields of remote environmental sensing, medical diagnosis, automated monitoring of equipment, robotics, and automotive systems.

Inputs to a multisensor data fusion system include raw sensor data, commands, model parameters, and other a priori information. Output is much more heterogeneous and can range from simple estimates about the attributes of a certain entity (a physical phenomenon, a military target, or a fault condition in a machine) to complex inferences about current or future relationships between multiple entities. During the past 15 years, several data fusion methods have been proposed.

This article describes some data fusion models and some applications to next-generation car safety and driver assistance systems. These applications are particularly suitable to provide an overview of multisensor data fusion starting from the plain detection of multiple objects around a given host vehicle to inferring (in order of increasing complexity):
◗ The position and speed of possible obstacles
◗ The type of the objects in the road environment
◗ The relative movement and distribution of obstacles over a given area
◗ The early detection of a possible collision
◗ Possible suggestions for prompt and effective countermeasures (e.g., sudden braking, steering wheel adjustment, etc.).
For instance, the techniques developed within the European Union project titled "Preventive and Active Safety Applications Contribute to the Road Safety Goals on European Roads" (PReVENT) are able to create safety zones around vehicles by means of embedded multisensor data fusion systems that sense the type and significance of impending dangers (see Figure 1) [1]. Depending on the nature of the threat, active and preventive safety systems inform, warn, and actively assist the driver to avoid an accident or mitigate its possible consequences.

Fig. 1. Qualitative overview of possible car safety systems based on multisensor data fusion. [1]

Data Fusion Models
Several models have been proposed that generalize the data fusion problem. The most famous is the Joint Directors of Laboratories (JDL) model [2]. The JDL model was created by the Data Fusion Working Group established in 1986 with the purpose of unifying research terminology to promote technology transfer and cooperation between groups (see Figure 2).
According to this model, the sources of information used for data fusion can include both local and distributed sensors (those physically linked to other platforms), or environmental data, a priori data, and human guidance or inferences. Using these sources of information, the overall JDL data fusion process consists of six different levels of processing:
◗ The level 0 processing (Source Preprocessing) is used to rearrange the data collected from different sensors within a common time and space reference system (data alignment) to remove redundant information acquired by different sensors or to filter out the wideband noise.
◗ The level 1 processing (Object Refinement) has three major objectives:
• Perform a correct association between sensor data and multiple entities
• Estimate the parameters or attributes that are most significant for the considered application
• Identify an entity on the basis of a set of extracted features.
◗ The level 2 processing (Situation Refinement) elaborates the output results of the level 1 processing to extract useful information about the relationships between multiple entities located in the same environment.
◗ The level 3 processing (Threat Refinement) is used to make future predictions based on the current situation in order to detect possible threats or dangerous scenarios.
◗ The level 4 processing (Process Refinement/Resource Management) monitors and controls the overall data fusion process to assess and to improve its real-time performance on the basis of possible application-specific needs or operational requirements.
◗ Finally, the level 5 processing (Cognitive Refinement) transforms the results of the data fusion process into a form that can be easily and meaningfully interpreted by the users, e.g., by means of cognitive aids to focus users' attention or to support human decisions.

Fig. 2. Block diagram of the top-level Joint Directors of Laboratories (JDL) data fusion model.

In order to manage the entire data fusion process through control input commands or information requests, the system also must include a Human-Computer Interaction (HCI) interface as well as a Data Management unit, a lightweight database, to store, archive, compress, and protect the collected data.
The main drawback of the JDL model is that it is generally too artificial to be implemented in practice. Sometimes the five levels of processing are not required by a given application. However, a level-based approach is essential to define a hierarchical taxonomy of algorithms and techniques to pave the way to more specific models.
One level-based approach similar to the JDL model has been proposed by the ProFusion2 consortium in the field of car safety and driver-assistance applications. It consists of four levels (see Figure 3) [3]:
◗ The sensor refinement level (corresponding to level 0 of the JDL model) defines a common space-time reference frame and estimates and compensates for the uncertainty associated with the output of the sensors.
◗ The object refinement level (corresponding to level 1 of the JDL model) associates multiple sensor data with different obstacles on the road. Then, such data are processed to estimate the attributes or identities of the entities.
◗ The situation refinement level (corresponding to level 2 of the JDL model) starts with the results of object refinement and estimates the relationships between multiple entities.


◗ The decision level (including parts of levels 3 and 5 of the JDL model) takes the physical model results from the situation refinement phase and makes decisions about the safest action to take (e.g., alerting the driver or, in extreme cases, taking control of the car).
These decisions depend not only on the estimated situations, but also on the level of confidence associated with them. This approach is in accordance with normal human behavior and is crucially important in car safety applications. Anytime a driver has to make a crucial decision (e.g., after a sudden obstacle detection), the "strength" of the corresponding action (e.g., the intensity of braking) depends on our belief that braking is necessary.

Fig. 3. Block diagram of the ProFusion2 data fusion model specifically defined for car safety and driver assistance applications.

Data Fusion General Issues
Commonly adopted inference techniques in multisensor data fusion applications can be roughly grouped into three main categories [4]: estimation algorithms for entity parameters or attributes, identity estimation algorithms for recognition, and other hypothesis-testing criteria (data-entity association, situation analysis, etc.).
Estimation algorithms generally return the values of some quantitative entity parameters or attributes that are particularly significant for the application considered. For instance, in car safety and driver assistance systems, estimations could be made for:
◗ kinematic parameters (e.g., the position and the relative velocity) of the objects observed outside the host vehicle or
◗ host parameters detected by monitoring the actions of the driver (e.g., the pressure on brake or clutch pedals).
Identity estimation techniques rely on special classification algorithms that are used to recognize an object on the basis of some significant extracted features—the shape or patterns of various vehicles detected on a road, for example. In car safety applications, these kinds of algorithms can also be employed to construct a list of the objects (e.g., a tree, a motorcycle, a road sign, etc.) surrounding the host vehicle.
Other inference problems are usually focused on testing different hypotheses to make a sensible choice among various alternatives. Whereas the attribute or entity estimation problems are usually only part of the JDL level 1 of processing, hypothesis testing techniques may affect multiple levels. Their complexity depends on the complexity of the corresponding inference problem. They can range from determining whether a certain measurement result is related to a given vehicle—a simple data-entity association problem at level 1—to predictions about possible collisions at level 3. The following sections discuss attribute and identity estimation issues and some hypothesis testing techniques.

Entity Parameter or Attribute Estimation
Any entity parameter or attribute estimation problem should address four types of issues:
◗ The definition of the system models
◗ The definition of the optimization criteria
◗ The selection of the optimization approach
◗ The selection of the processing approach.
The possible alternatives related to these issues are summarized in Figure 4.
A system model is often built by defining state vectors, observation equations, and equations of motion. It is also affected by possible implementation-specific issues such as the amount of memory available to the processor doing the modeling. The choice of the attributes or parameters that have to be estimated depends on the ability to predict the future state of an entity on the basis of the observed data. As a general rule, a state vector must contain the minimum set of independent parameters able to describe the behavior of the entity of interest.



Fig. 4. Overview of the four issues associated with a generic parameter or attribute estimation problem.

In problems involving vehicles in motion, the state variables are often chosen to be the distance and the relative velocity of the obstacles surrounding the host vehicle. The observation equations are defined to predict the future observations using just the current state of the system. When dealing with vehicles, this often means deriving equations to predict a vehicle's future position and velocity.
Thus, if x(ti) represents the value of the state vector at time ti, then the predicted observation, ŷ(ti), at the same time is

ŷ(ti) = h[x(ti)] + n(ti)   (1)

where h[x(ti)] is a function of the state vector returning the predicted observations at a time ti, and n(ti) represents the random acquisition noise. If the state vector is constant over time, the estimation problem is called static. Conversely, if the state vector changes as a function of time, the estimation problem is referred to as dynamic, and an additional set of equations of motion is required to propagate the state of a system from the initial time t0 to the time ti in which an observation is collected.
The typical form of an equation of motion is the following:

x(ti) = F(ti, t0) x(t0)   (2)

where F(·,·) is the state propagation function matrix. As a rule of thumb, the definition of the equations of motion results from the trade-off between realism, accuracy, computational complexity, and available resources. A linear model is usually adequate for observations that are closely spaced in time.
After defining the system model, the estimation process aims at finding the state vector enabling the "best fit" between the actual observed data and the values predicted by Equation (1). If we refer to the residual vector as the N-long vector containing the differences between the observed and the predicted values ŷ(ti) at the times ti, with i = 1,…,N, the most widely used optimization criteria are the following:
◗ Least square (LS) optimization: this approach relies on the minimization of the sum of the squares of the residuals.
◗ Weighted least square (WLS) optimization: this technique is based on the minimization of the sum of the weighted squares of the residuals.
◗ Bayesian weighted least square (BWLS) optimization: it is similar to WLS, but in this case the sum of the weighted squares of the residuals is constrained by the a priori knowledge of the state vector x.
◗ Mean square error (MSE) optimization: this criterion seeks to minimize the expected value of the squared error vector.
◗ Maximum likelihood estimate (MLE): in this case, the optimal value of x maximizes the multivariate probability distribution modeling the observation noise.
Notice that any optimization criterion requires the minimization or maximization of a certain cost function. The choice of the best function depends on the type of problem requiring optimization as well as on the statistics of observational noise.
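To make the least-squares criteria concrete, the following minimal Python sketch fits a constant-velocity state vector (initial range and relative velocity of one obstacle) to a handful of noisy range readings with a weighted least-squares criterion, in the spirit of Equations (1) and (2). The measurement values, noise levels, and time stamps are invented for illustration; they are not taken from the article.

```python
import numpy as np

# Illustrative sketch (not from the article): weighted least-squares (WLS)
# estimation of a constant-velocity state x(t0) = [range r0, relative velocity v]
# from N noisy range observations. The observation model follows Equation (1),
# and the state propagation x(ti) = F(ti, t0) x(t0) of Equation (2) uses
# F = [[1, ti], [0, 1]], so each predicted range is r0 + v * ti.

t = np.array([0.0, 0.1, 0.2, 0.3, 0.4])          # observation times (s), assumed
y = np.array([30.0, 29.1, 28.3, 27.2, 26.4])      # measured ranges (m), invented
sigma = np.array([0.3, 0.3, 0.2, 0.2, 0.1])       # per-sample noise std (m), assumed

# Stacked observation matrix: each row maps x(t0) = [r0, v] to a predicted range.
H = np.column_stack([np.ones_like(t), t])
W = np.diag(1.0 / sigma**2)                       # weights = inverse variances

# WLS solution: minimize sum_i w_i * (y_i - H_i x)^2.
x_hat, *_ = np.linalg.lstsq(np.sqrt(W) @ H, np.sqrt(W) @ y, rcond=None)
r0_hat, v_hat = x_hat
residuals = y - H @ x_hat                         # the residual vector of the text

print(f"estimated range {r0_hat:.2f} m, relative velocity {v_hat:.2f} m/s")
print("residuals:", np.round(residuals, 3))
```

Setting all weights to one reduces the sketch to plain LS, while constraining the solution with prior knowledge of x would turn it into the BWLS variant mentioned above.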


The optimization approaches can rely either on direct methods or indirect methods. Direct methods determine the value of the state vector corresponding to the maximum or the minimum of the cost function. This result can be achieved either by computing the maximum or minimum of the cost function by using the conjugate gradient (derivative methods) or by applying uphill or downhill simplex algorithms (nonderivative methods). Indirect methods rely mostly on the Newton-Raphson techniques [5].
Two basic processing approaches are generally used to solve the estimation problem: the batch processing and the sequential estimation techniques. In the batch approach, sensor data are processed together after they are collected (as in the case of a Fast Fourier Transform). Conversely, when sequential estimation techniques are used, the state vector is updated after each acquisition, e.g., through a Kalman filter or an extended Kalman filter. The choice of the most suitable processing approach depends on the considered application as well as on the available computing resources. For instance, in car safety applications, where real-time performance is of crucial importance, sequential estimation techniques are preferable.
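As a hedged illustration of sequential estimation, the Python sketch below updates a two-element state (range and relative velocity of a single obstacle) with a linear Kalman filter every time a new range measurement arrives. The sampling period, noise covariances, initial guess, and measurement values are assumptions made up for the example, not values from the article.

```python
import numpy as np

# Illustrative sketch (not from the article): sequential estimation of the state
# x = [range, relative velocity] with a linear Kalman filter. The filter repeats
# a predict step (Equation (2)) and an update step (correcting the prediction of
# Equation (1)) after each acquisition. All numbers are invented.

dt = 0.1
F = np.array([[1.0, dt],
              [0.0, 1.0]])        # state propagation matrix
H = np.array([[1.0, 0.0]])        # observation matrix: only range is measured
Q = np.diag([1e-3, 1e-2])         # process noise covariance (assumed)
R = np.array([[0.09]])            # measurement noise covariance (assumed)

x = np.array([30.0, 0.0])         # initial state guess
P = np.diag([1.0, 4.0])           # initial state covariance

def kalman_step(x, P, z):
    """One predict/update cycle for a single range measurement z."""
    # Predict: propagate state and covariance to the new time instant.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the new observation.
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    innovation = z - H @ x_pred
    x_new = x_pred + (K @ innovation).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

for z in [29.1, 28.3, 27.2, 26.4]:            # measurements arrive one at a time
    x, P = kalman_step(x, P, np.array([z]))
    print(f"range {x[0]:.2f} m, relative velocity {x[1]:.2f} m/s")
```

Unlike the batch fit shown earlier, this recursion never stores the whole measurement history, which is why it suits the real-time constraints of car safety applications.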
Entity Identification
In order to perform entity identification (e.g., road obstacle recognition), data from sensors are usually preprocessed to extract key features used as input to a classifier. Usually, the most commonly used classifiers are previously trained by using a given set of labeled examples representing samples of the input-output relationship of the system. The result of the training is a set of parameters that in turn can be used to perform the classification through learning-from-examples algorithms. The most used techniques of this kind are Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs). The common characteristic of such techniques is that they do not require any knowledge of the system dynamics because they rely only on a set of input-output pairs of data.
ANNs have been extensively used in several sensor fusion applications because of their robustness and versatility in learning to characterize the input-output relationships of unknown systems [6]. In fact, they are used not only for identification purposes at level 1, but also for more involved inference activities such as those carried out at levels 2, 3, and 5. ANNs try to emulate the behavior of biological nervous systems. They consist of several layers of processing elements called neurons that can be interconnected in many ways (see Figure 5).

Fig. 5. Basic structure of an artificial neural network (ANN).

In practice, the output z of an ANN results from a nonlinear transformation f(·) (e.g., through step or sigmoid functions) of an M-long input data vector p = (p1,…, pM). Such a transformation is obtained as a combination of single computations executed by each processing element, i.e.,

z = f(Σ_{j=0…M} wj pj + b)   (3)

During the training phase, the weights wj, j = 0,…,M, and b of the ANN change adaptively on the basis of specific updating rules. A frequently used technique for supervised training is the back-propagation algorithm. When using this method, after providing a new training sample to the network, the corresponding output is computed using (3) and it is compared with the expected output value. Afterward, starting from the ANN output, the local output errors associated with the neurons in the previous network levels are calculated iteratively, and the weights wj, j = 0,…,M, are updated to lower the local errors.
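A heavily simplified sketch of this idea follows: a single sigmoid neuron evaluates Equation (3) and its weights are corrected with one gradient-descent step per iteration, which is essentially the output-layer step of back-propagation. A real ANN stacks many such neurons in layers and propagates the errors backward through all of them. The feature values, label, and learning rate below are invented.

```python
import numpy as np

# Illustrative sketch (not from the article): the single-neuron computation of
# Equation (3), z = f(sum_j w_j * p_j + b), with a sigmoid activation, plus a
# simple error-driven weight update on a squared-error loss. Data are invented.

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def neuron_output(w, b, p):
    """Equation (3): nonlinear transformation of the weighted input sum."""
    return sigmoid(np.dot(w, p) + b)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=3)   # weights w_j
b = 0.0                             # bias term
eta = 0.5                           # learning rate (assumed)

# One (feature vector, expected output) training pair, e.g. features extracted
# from a detected object and a binary label such as "vehicle" vs "not vehicle".
p, target = np.array([0.8, 0.1, 0.4]), 1.0

for epoch in range(20):
    z = neuron_output(w, b, p)
    error = z - target                       # local output error
    grad = error * z * (1.0 - z)             # d(loss)/d(pre-activation) for sigmoid
    w -= eta * grad * p                      # update weights to lower the error
    b -= eta * grad

print("trained output:", neuron_output(w, b, p))
```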



Since their introduction at the end of the 1980s, ANNs have been extensively used in several applications due to their general applicability and capability of solving a large class of problems, including those connected to sensor fusion. Unfortunately, ANNs suffer from two problems. First, the error function to be minimized is nonlinear. Therefore, the optimization process could return a suboptimal solution because a local minimum rather than the global solution could result from the optimization process. Second, no convincing theories exist to guarantee the "generalization properties" of the solution. It is not possible to ensure that a certain classification is correct when feature vectors not considered during the training phase are used. In order to overcome these two problems, alternative learning-from-examples approaches, such as SVMs, have been proposed. SVMs are derived from statistical learning theory and are based on the Structural Risk Minimization principle [7]. SVMs provide better performance than traditional learning machines in various applications—for example, in solving classification and regression problems. To apply SVM algorithms, the input patterns are mapped onto a multidimensional feature space, in which the hyperplane separating the points associated with two classes of objects—two types of vehicles, for example—is determined with the maximum margin (see Figure 6).

Fig. 6. Hyperplane separating two classes of objects in a typical identification problem based on Support Vector Machines (SVM).

From the mathematical point of view, such an optimal hyperplane can be obtained by solving a constrained quadratic optimization problem. Phrasing our initial problem in such a way that it leads to the need to solve such an optimization problem is advantageous because any quadratic optimization problem has only one local minimum, which is also the desired global minimum. Furthermore, no explicit knowledge of the mapping is required if special functions are used that realize implicit dot-product operations in the feature space. Such functions, K(·,·), are denoted as kernels. Thus, if uj with j = 0,…,NSV is the jth input pattern—also called a support vector (SV)—used to train the machine, and if u is the vector to be estimated, the classification function can be written as a weighted sum of predefined kernels, i.e.,

z = Σ_{j=0…NSV} αj K(uj, u) + b   (4)

where αj is a generic parameter associated with each "important" input pattern uj, and it is determined by solving the dual form of the constrained quadratic problem that maximizes the margin between the two classes. The principal advantage of this formulation is that no local minima are present. In practice, the solution corresponding to the optimal hyperplane is written as a combination of NSV support vectors. Because the solution of the optimization problem is usually "sparse," it is appealing when the algorithm is to be implemented on a resource-constrained platform. This is why SVMs have been extensively used in sensor fusion applications based on embedded systems.
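For illustration, the hedged Python sketch below evaluates the decision function of Equation (4) for a new feature vector using a radial-basis-function kernel. In practice the support vectors, the coefficients αj (here already signed by the class labels), and the bias would come from the solved quadratic program; the numbers used here are simply invented.

```python
import numpy as np

# Illustrative sketch (not from the article): evaluating the kernel classification
# function of Equation (4), z = sum_j alpha_j * K(u_j, u) + b, for a new feature
# vector u. Support vectors, coefficients, and bias are invented placeholders.

def rbf_kernel(a, b, gamma=0.5):
    """A common kernel choice realizing an implicit dot product in feature space."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

support_vectors = np.array([[1.0, 0.2],      # u_j, e.g. features of known objects
                            [0.1, 1.1],
                            [0.9, 0.8]])
alphas = np.array([0.7, -0.9, 0.4])          # alpha_j, already multiplied by labels
bias = 0.05

def svm_decision(u):
    """Weighted sum of kernels between u and the N_SV support vectors."""
    return sum(a * rbf_kernel(sv, u) for a, sv in zip(alphas, support_vectors)) + bias

u_new = np.array([0.8, 0.3])                 # features of a newly detected object
z = svm_decision(u_new)
print("decision value:", round(z, 3), "-> class", "A" if z >= 0 else "B")
```

Because only the few support vectors with nonzero αj contribute to the sum, the stored model stays small, which is the "sparsity" that makes SVMs attractive on embedded platforms.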
Hypothesis Testing Techniques
The classical hypothesis testing technique compares two exclusive hypotheses usually referred to as the null hypothesis, H0, and the alternative hypothesis, H1. For instance, when collecting observations from multiple rangefinders or image sensors placed on the back of a host vehicle, we can state that either "the observed data are related to the following car" (null hypothesis, H0) or that "the observed data are not related to the following car" (alternative hypothesis, H1). This case study can be regarded as an elementary example of a data-entity association problem. A test of significance, based on empirical probabilities and a decision rule, is used to determine the likelihood that the collected data would be observed if the null hypothesis H0 were true.
Several decisional rules can be used for this purpose. The simplest ones are the maximum a posteriori and maximum likelihood criteria. The maximum a posteriori decisional rule accepts H0 as true if the empirical probability of H0 for a given set of observations, y, is larger than the empirical probability of H1 for the same set of observations, i.e., if P(H0|y) > P(H1|y). Alternatively, the maximum likelihood approach, given a set of observations and the a priori probabilities of hypotheses H0 and H1, accepts H0 as true if P(y|H0) > P(y|H1).
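The hedged sketch below applies both decisional rules to a single hypothetical range reading. The Gaussian likelihood models and the prior probabilities are invented solely to make the comparison of the two inequalities concrete.

```python
# Illustrative sketch (not from the article): the two decisional rules above for
# H0 = "the observed data are related to the following car" and H1 = "they are
# not", given one range observation y. Likelihoods and priors are invented.

from math import exp, pi, sqrt

def gaussian(y, mean, std):
    return exp(-0.5 * ((y - mean) / std) ** 2) / (std * sqrt(2.0 * pi))

y = 12.5                                         # measured range in meters
p_y_given_h0 = gaussian(y, mean=12.0, std=1.0)   # expected range of the following car
p_y_given_h1 = gaussian(y, mean=25.0, std=8.0)   # ranges produced by anything else
prior_h0, prior_h1 = 0.3, 0.7                    # a priori probabilities (assumed)

# Maximum likelihood rule: accept H0 if P(y|H0) > P(y|H1).
ml_accepts_h0 = p_y_given_h0 > p_y_given_h1

# Maximum a posteriori rule: accept H0 if P(H0|y) > P(H1|y); comparing
# P(y|H0) P(H0) with P(y|H1) P(H1) is equivalent, since the common
# normalization denominator cancels.
map_accepts_h0 = p_y_given_h0 * prior_h0 > p_y_given_h1 * prior_h1

print("ML accepts H0: ", ml_accepts_h0)
print("MAP accepts H0:", map_accepts_h0)
```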
Notice that the classic hypothesis testing methods enable comparison of just two hypotheses at a time. This is not generally suitable for sensor data fusion applications, where several alternative hypotheses often have to be assessed. This problem can be solved by using the Bayesian approach [5]. Assume that H1, H2,…, Hm are m mutually exclusive hypotheses associated with a certain set of observations y. If P(Hi) is the a priori probability that Hi is true (with Σ_{i=1…m} P(Hi) = 1) and if P(y|Hi) represents the probability of observing y when the hypothesis Hi is true, it follows from Bayes' theorem that:

P(Hi|y) = P(y|Hi) P(Hi) / Σ_{k=1…m} P(y|Hk) P(Hk),   i = 1,…,m   (5)

Notice that Equation (5) returns a direct estimate of the probability that the hypothesis Hi is true on the basis of both some experimental evidence, y, and a set of a priori probabilities P(Hi), i = 1,…,m. Such probabilities can be determined either by considering probability density functions obtained from measurement data or by making use of subjective likelihood assessments.

Fig. 7. Block diagram of a typical fusion process based on the Bayesian inference approach.


The benefits of the Bayesian inference method in multisensor data fusion can be highlighted by means of a simple example related to the identity fusion process (see Figure 7). Suppose that N smart sensors deployed on a host vehicle return N different declarations Dj (j = 1,…,N) about the identity of a certain road obstacle. If we assume that the classification/identification algorithms are able to distinguish among M different objects, each smart sensor j = 1,…,N can estimate the probability P(Dj|Hi) of making the declaration Dj about the obstacle identity given the hypothesis Hi of actually observing the ith obstacle, with i = 1,…,M. Therefore, if the stand-alone hypothesis probabilities P(Hi) are known due to some a priori information, the conditional probabilities [P(H1|Dj), …, P(HM|Dj)] for any identity declaration can be determined using Equation (5). Accordingly, a set of M joint multisensor probabilities of having detected a certain obstacle as a result of multiple declarations is given by the Bayes' combination rule, i.e.:

P(Hi|D1,…,DN) = P(Hi) P(D1|Hi) ⋯ P(DN|Hi) / Σ_{k=1…M} P(Hk) P(D1|Hk) ⋯ P(DN|Hk),   i = 1,…,M   (6)

Ultimately, the recognized obstacle (i.e., the result of the identity fusion inference process) is most probably the hypothesis whose joint probability function, Equation (6), is maximum. This decisional rule is usually referred to as the maximum a posteriori probability (MAP) criterion.
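A minimal Python sketch of this identity fusion step is given below, assuming three hypothetical smart sensors whose declarations are statistically independent. The class names, priors, and likelihood values are invented for the example and are not taken from the article.

```python
import numpy as np

# Illustrative sketch (not from the article): identity fusion with the Bayes
# combination rule of Equation (6) and the MAP criterion. Three hypothetical
# sensors each report likelihoods P(D_j | H_i) for M = 3 obstacle classes.

classes = ["car", "motorcycle", "road sign"]
prior = np.array([0.5, 0.3, 0.2])                     # P(H_i), a priori information

# Rows = sensors (e.g. radar, camera, laser scanner), columns = P(D_j | H_i).
likelihoods = np.array([[0.70, 0.20, 0.10],
                        [0.60, 0.30, 0.10],
                        [0.55, 0.15, 0.30]])

# Equation (6): joint posterior proportional to P(H_i) * prod_j P(D_j | H_i),
# normalized over all M hypotheses.
unnormalized = prior * np.prod(likelihoods, axis=0)
posterior = unnormalized / unnormalized.sum()

# MAP criterion: pick the hypothesis whose joint probability is maximum.
best = int(np.argmax(posterior))
for name, p in zip(classes, posterior):
    print(f"P({name} | D1, D2, D3) = {p:.3f}")
print("recognized obstacle:", classes[best])
```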
The Bayesian inference process for data fusion suffers from three main drawbacks. The first is that estimating the a priori probabilities, P(Hi), is not always feasible. The second is that the observations collected by the sensors are not always statistically independent. The third is that we have to be sure that the M hypotheses are mutually exclusive.
Some of these issues can be tackled by the so-called Dempster-Shafer (D-S) method [5]. The D-S approach is formally a generalization of the Bayesian inference process, but it is closer to the human way of thinking. In particular, starting from a set of hypotheses, the D-S method aims at estimating the probability intervals associated with elementary or general propositions resulting from the combination of different hypotheses. The elementary propositions consist of an individual hypothesis and are mutually exclusive. On the other hand, the general propositions are obtained by combining the elementary propositions using the "OR" Boolean operator. As a consequence, they may contain overlapping or even conflicting hypotheses.
If θ = {A1, A2, …, An} is the set of n elementary propositions and 2^θ represents the power set of θ containing all the possible general propositions, the D-S method defines the evidential interval associated with a generic proposition Bi ∈ 2^θ as IBi = [Spt(Bi), Pls(Bi)], where Spt(Bi) is the support for the proposition Bi (i.e., a metric of its likelihood), whereas Pls(Bi) = 1 – Spt(B̄i) is its plausibility, namely the probability of not supporting the negated proposition B̄i. Due to the definition of support and plausibility, the probability P(Bi) associated with the proposition Bi has to lie within the corresponding evidential interval. By referring to the Basic Probability Assignment (BPA) functions (sometimes also called probability mass functions) as the functions m: 2^θ → [0,1] such that:
◗ m(∅) = 0
◗ m(Bi) ≤ 1 ∀ Bi ∈ 2^θ
◗ Σ_{Bi ∈ 2^θ} m(Bi) = 1
the support of the proposition Bi is given by the sum of the BPA functions assigned by an observer or sensor to all the possible elements of the power set 2^Bi of Bi, i.e.,

Spt(Bi) = Σ_{Xj ∈ 2^Bi} m(Xj)   (7)

For instance, if Bi is an elementary proposition, such as A1, then Spt(Bi) = m(A1), whereas if Bi = A1∨A2∨A3, Spt(Bi) = m(A1) + m(A2) + m(A3) + m(A1∨A2) + m(A2∨A3) + m(A1∨A3) + m(A1∨A2∨A3).
Similarly to the Bayes' combination formula, in the D-S method some rules to combine the BPA functions related to independent sources must also be defined. In the simplest case (i.e., when just two sources of information are considered), the basic Dempster's combination rule states that the joint probability mass associated with a generic proposition Bi results from the product sum of all individual BPA functions over the total probability associated with nonconflicting propositions. In symbols, this means that:

m12(Bi) = [Σ_{Xk ∩ Xj = Bi} m1(Xk) m2(Xj)] / (1 – c)   (8)

where Xk and Xj are two generic propositions of the power set and

c = Σ_{Xk ∩ Xj = ∅} m1(Xk) m2(Xj)   (9)

is the factor that takes conflicts into account. Observe that if we have a large number of conflicts, the normalization factor (1 – c) may become very small, thus leading to a counterintuitive increment of the joint probability mass. For this reason, more sophisticated combination rules have recently been proposed [5].
The application of the D-S method in sensor fusion systems is very similar to that of the Bayesian approach. For instance, within the project PReVENT, the D-S method is used to identify the driver's maneuvers, thus triggering possible preventive actions if some danger is detected [1]. In this specific case study, the set θ of elementary propositions was chosen to be θ = {"lane change", "overtaking", "free flow", "cut-in", "merging", "following vehicle in the same path", "following vehicle in the next lane", "unknown maneuver"} [3]. Of course, a complex maneuver can consist of multiple elementary maneuvers. After collecting heterogeneous sets of estimates from various sources of information (e.g., the Time to Lane Crossing with Constant Velocity (TLC_CV), the Time to Lane Crossing with Constant Acceleration (TLC_CA), the Predicted Minimum Distance (PMD), and so on), it is possible to assign different BPA functions to each source of information for any of the maneuver types in θ.



Some examples of BPA values associated with the TLC_CV parameter for various maneuver types are shown in Figure 8.

Fig. 8. Example of basic probability assignment (BPA) functions associated with the Time to Lane Crossing with Constant Velocity (TLC_CV) parameter in a car safety application.

In this example, the TLC_CV parameter represents the estimated time for lane crossing when the host vehicle keeps on moving with constant velocity. Notice that if the TLC_CV is less than 1 second, the BPA of lane change (LC) and overtake (OV) is obviously high. Conversely, when the TLC_CV value increases, the BPA of other kinds of maneuvers tends to grow. By properly combining the various probability masses, at first the support and plausibility boundaries for every maneuver type and source of information can be estimated, and then the joint probability masses can be calculated using Equation (8). Finally, a vector of evidential intervals related to all the possible maneuvers is built up. Some decision logic similar to the logic adopted in the Bayesian approach will be used to establish the result of the maneuver classification process. In this way, some dangerous situations (e.g., when a driver is about to overtake the in-front vehicle, but another car outside the back mirror field of view is in turn overtaking the host vehicle) can be detected and promptly signaled to the driver.
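The hedged Python sketch below mimics this combination step on a reduced frame of three maneuvers: two hypothetical sources (standing in for TLC_CV and PMD) each assign BPA masses to propositions, Dempster's rule of Equations (8) and (9) fuses them, and Equation (7) then yields support and plausibility bounds. All mass values are invented and do not reproduce Figure 8.

```python
from itertools import product

# Illustrative sketch (not from the article): combining the BPAs of two sources
# with Dempster's rule over a reduced set of maneuver propositions, modeled as
# sets of elementary maneuvers. All mass values are invented.

LC, OV, FF = "lane change", "overtaking", "free flow"

m_tlc = {frozenset({LC, OV}): 0.6,          # a short TLC_CV supports LC or OV
         frozenset({FF}): 0.1,
         frozenset({LC, OV, FF}): 0.3}      # remaining mass on the full frame

m_pmd = {frozenset({OV}): 0.5,              # a small predicted minimum distance
         frozenset({FF}): 0.2,
         frozenset({LC, OV, FF}): 0.3}

def dempster_combine(m1, m2):
    """Dempster's combination rule, Equations (8) and (9)."""
    conflict = 0.0
    combined = {}
    for (b1, w1), (b2, w2) in product(m1.items(), m2.items()):
        inter = b1 & b2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2              # Equation (9): total conflicting mass c
    return {b: w / (1.0 - conflict) for b, w in combined.items()}, conflict

def support(m, proposition):
    """Equation (7): sum of the masses of all subsets of the proposition."""
    return sum(w for b, w in m.items() if b <= proposition)

def plausibility(m, proposition):
    """Pls(B): mass of all propositions compatible with B, i.e. 1 - Spt(not B)."""
    return sum(w for b, w in m.items() if b & proposition)

m12, c = dempster_combine(m_tlc, m_pmd)
for maneuver in (LC, OV, FF):
    b = frozenset({maneuver})
    print(f"{maneuver}: evidential interval "
          f"[{support(m12, b):.2f}, {plausibility(m12, b):.2f}]")
print("conflict factor c =", round(c, 3))
```

A decision stage similar to the Bayesian MAP rule would then pick the maneuver with the strongest evidential interval.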
Consider that the statistical inference techniques mentioned above can be applied at different levels of processing in a data fusion system. For instance, at level 1 they can be applied to solve the well-known entity-data association problem. Performing a data-entity association means mapping multiple sets of data to the correct entity when multiple objects are in the same area at the same time. Usually, the term association refers to the attempt of estimating "the closeness" between two data (e.g., two observations or two feature vectors) or between a new observation and an existing track containing some significant parameters related to the same entity.
Common association measures are correlation coefficients, distance measures, and probabilistic similarity measures [4]. From an operative viewpoint, the data-object association process usually consists of three steps that are summarized in the following:
◗ In the hypothesis generation (HG) step the incoming data are processed to create a so-called association matrix: a table linking the input data and some hypotheses describing how data could be related.
◗ In the hypothesis evaluation (HE) step the entries of the association matrix are processed to assess the likelihood of a certain hypothesis from a quantitative point of view. In practice, this means that all the entries of an association matrix are filled in with numerical values resulting from the application of the chosen "closeness" function on the measurement or predicted data.
◗ Finally, in the hypothesis selection (HS) step the results of the HE phase are sorted and processed to determine the best set of hypotheses explaining the incoming data.
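A hedged Python sketch of these three steps follows: new observations are scored against existing tracks with a simple Euclidean closeness function (hypothesis generation and evaluation), and the best non-conflicting pairings are then kept (a basic hypothesis selection policy). The tracks, observations, and gating threshold are invented; a real system would use probabilistic distances and more elaborate selection logic.

```python
import numpy as np

# Illustrative sketch (not from the article): a minimal data-entity association
# pass over invented range/bearing observations and predicted track states.

tracks = {"track_1": np.array([24.8, 0.10]),     # predicted [range m, bearing rad]
          "track_2": np.array([41.5, -0.32])}
observations = [np.array([25.3, 0.12]),
                np.array([40.2, -0.30]),
                np.array([60.0, 0.50])]          # possibly a new, unknown object

GATE = 3.0  # maximum acceptable distance for a plausible association (assumed)

# Hypothesis generation + evaluation: fill the association matrix with the
# values of the chosen "closeness" function (here, a Euclidean distance).
association_matrix = {
    (i, name): float(np.linalg.norm(obs - state))
    for i, obs in enumerate(observations)
    for name, state in tracks.items()
}

# Hypothesis selection: sort the evaluated hypotheses and keep the best
# non-conflicting ones; unmatched observations may start new tracks.
assigned_obs, assigned_tracks, decisions = set(), set(), {}
for (i, name), dist in sorted(association_matrix.items(), key=lambda kv: kv[1]):
    if dist <= GATE and i not in assigned_obs and name not in assigned_tracks:
        decisions[i] = name
        assigned_obs.add(i)
        assigned_tracks.add(name)

for i, obs in enumerate(observations):
    print(f"observation {i}: {decisions.get(i, 'no association (candidate new track)')}")
```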
The data-entity association problem is critical because it affects the accuracy of the whole data fusion process. In car safety applications, a wrong data-entity association could lead to erroneous and potentially dangerous preventive actions. In fact, if the observations of a cluster of laser rangefinders are associated with the wrong obstacle, the relative velocity of the corresponding object could be grossly over- or underestimated, thus leading to either excessive or inadequate braking.
When the situation refinement performed by the system requires the analysis of very involved scenarios (e.g., to predict possible collisions with other vehicles on the road), the inference mechanisms could rely not only on statistics but also on advanced abilities of reasoning and cognition such as planning, deduction, and induction. Emulating such abilities is the basic objective of artificial intelligence techniques. An exhaustive description of such techniques is beyond the scope of this paper.
In the specific field of data fusion techniques, interpretation of fused data for situation or threat analysis relies mostly on expert, or knowledge-based, systems (KBS) [8]. In general, the structure of an expert system consists of four logical parts:
◗ A knowledge base (KB) containing all the basic information representing the expertise of the system. Such knowledge can be described through various techniques, including production rules, which perform a certain action if some evidence exists; semantic nets, i.e., graph-based representations describing possible relationships between classes of objects and related specific instances (e.g., the class of a vehicle with respect to the vehicle-specific model); frames, namely data records summarizing the properties of classes and objects; and scripts, representing situations and events through acts, scenes, settings, and actions, as in a theater play
◗ A global database including dynamic data changing at run-time
◗ A control structure or inference engine that attempts to find one or more rules to make an inference about the current situation by starting from input dynamic data and the knowledge base
◗ Finally, a human-machine interface that gives users control over the whole process and allows the user to make decisions on the basis of the results obtained by the data fusion process.
The combination of the KB, the database, and the inference engine enables the implementation of automated reasoning techniques. The basic inference process underlying an expert system for car safety purposes is shown schematically in Figure 9. Starting from the dynamic input data (e.g., the velocity and direction of the host vehicle as well as the type of nearby vehicles, their position, and speed values), some possible rules dictating the behavior of the car are selected from the KB.


This can be done through searching and pattern matching procedures. For instance, if a bulky obstacle is detected closer than the expected safety distance, say a van stopping suddenly at a crossroads, the system can decide to slow down the host vehicle automatically. Given that multiple rules often appear to be applicable, the control structure has to choose the best rule according to various criteria. For example, if two or more obstacles are detected by the system in heavy traffic conditions, the driver could be invited to brake, to turn, to stop, or to perform other conservative actions depending on various concurrent rules that make use of parameters such as:
◗ The time when the observed input data are stored into the database (recency criterion)
◗ The relative velocity of the host vehicle with respect to the dangerous obstacles (in this case faster vehicles could be provided with higher priority values when applying the corresponding rule)
◗ A specific combination of size, velocity, and distance values. In fact, rules demanding the fulfillment of the most involved conditions are more likely to lead to a specific result (specificity criterion).
As shown in Figure 9, the inferential process is usually performed iteratively until either a conclusion is reached with a final exit rule or no applicable rules are found. In the latter case, the KBS declares that no conclusion can be reached and may request further input data.

Fig. 9. Flow chart of a generic inference engine in a knowledge-based system.
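To make the production-rule mechanism concrete, the hedged Python sketch below implements a toy forward-chaining inference engine: rules inspect a dynamic database of fused data, the highest-priority applicable rule fires, and the loop stops at an exit rule or when nothing applies. The rule set, field names, and thresholds are all invented for illustration and are not taken from the article or from any real system.

```python
# Illustrative sketch (not from the article): a tiny forward-chaining inference
# engine in the spirit of the production rules described above.

database = {"obstacle_distance_m": 14.0, "obstacle_bulky": True,
            "host_speed_mps": 22.0, "safety_distance_m": 25.0,
            "warnings": [], "actions": []}

def rule_warn_driver(db):
    # warn once when an obstacle is closer than the expected safety distance
    if db["obstacle_distance_m"] < db["safety_distance_m"] and "proximity" not in db["warnings"]:
        db["warnings"].append("proximity")
        return True
    return False

def rule_auto_brake(db):
    # e.g., a bulky obstacle detected well inside the safety distance
    if db["obstacle_bulky"] and db["obstacle_distance_m"] < 0.6 * db["safety_distance_m"]:
        db["actions"].append("slow down host vehicle")
        db["host_speed_mps"] *= 0.8
        return True
    return False

def rule_exit(db):
    # exit rule: a conclusion has been reached once an action has been taken
    return bool(db["actions"])

knowledge_base = [rule_auto_brake, rule_warn_driver]   # ordered by priority

def inference_engine(db, max_cycles=10):
    for _ in range(max_cycles):
        if rule_exit(db):
            return "conclusion reached"
        if not any(rule(db) for rule in knowledge_base):   # fire first applicable rule
            return "no applicable rules: more input data needed"
    return "cycle limit reached"

print(inference_engine(database))
print(database["warnings"], database["actions"])
```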

Benefits and Limitations of Multisensor Data Fusion
Multisensor data fusion has several qualitative benefits compared to the results achievable using a single sensor alone:
◗ Robust operational performance, improved system reliability, and improved detection: combining multiple, often redundant, measurement data increases the probability of detecting a certain event, even when, for whatever reason, some sensors do not work properly.
◗ Extended spatial and temporal coverage: using multiple sensors, one sensor is often able to detect an event or measure a quantity in a position or at a time in which another cannot.
◗ Increased confidence and reduced ambiguity: if several sensors contribute to a measurement result, the level of confidence of the fused values is higher than that of each individual output.
◗ Increased reliability: a system relying on different sensors is less vulnerable to disruption caused by human actions or natural phenomena.
◗ Enhanced spatial resolution: multiple sensors enable the system to create a synthetic aperture, the resolution of which can be better than the resolution achievable using only one sensor.
In spite of these advantages, data fusion also suffers from some problems regarding the accuracy of the achievable estimates or inferences. In fact, given that any data fusion process combines many different sources of uncertainty through different types of algorithms, estimating and managing the overall uncertainty associated with the data fusion process is a challenging task. This is due to several reasons:
◗ The traditional uncertainty estimation techniques described in the well-known Guide to the Expression of Uncertainty in Measurement cannot be applied when the relationships are not expressed by analytical functions [9]. Although several research activities are currently focused on solving this problem, there is no commonly accepted approach for uncertainty estimation in data fusion systems.
◗ The total uncertainty depends not only on the contributions affecting the raw sensor data, but also on the models describing the input-output characteristic of each sensor.
◗ Data fusion algorithms operating at a higher inference level cannot correct possible errors inserted into a lower level of processing. For instance, a wrong extraction of certain signal features cannot be compensated for by even the best pattern recognition algorithm.
◗ Finally, the identification tasks based on ANNs, SVMs, or other similar learning-from-examples classification algorithms suffer from the uncertainty associated with the chosen training data set.



Additionally, the accuracy of a data fusion system strongly depends on the specific operating conditions under which the system itself is used.

Conclusions
Multisensor data fusion is an emerging discipline. The rapid evolution of high-performance, inexpensive, and low-power computing components for pervasive systems, such as wireless sensor networks, will enable the development of complex sensor fusion applications. The idea underlying data fusion is to combine information collected by different sensors, to make inferences at different levels of abstraction about an entity or a situation, and to enable either prompt human reactions or autonomous machine decisions. This tutorial provides readers with a basic overview of data fusion terminology, models, and algorithms with the help of some examples related to next-generation car safety and driver assistance systems.

References
[1] M. Schultze, T. Mäkinen, I. Irion, M. Flament, and T. Kessel, "Final Report," PReVENT, D15, v. 1.5, Jan. 31, 2008.
[2] F.E. White, "A model for data fusion," in Proc. 1st National Symposium on Sensor Fusion, vol. 2, pp. 149–158, 1988.
[3] A. Polychronopoulos, A. Amditis, U. Scheunert, and T. Tatschke, "Revisiting JDL model for automotive safety applications: the PF2 functional model," in Proc. 9th International Conference on Information Fusion, pp. 1–7, 2006.
[4] R. Goodman, R.P. Mahler, and H.T. Nguyen, Mathematics of Data Fusion, Dordrecht, The Netherlands: Springer, 1997.
[5] D.L. Hall and S.A.H. McMullen, Mathematical Techniques in Multisensor Data Fusion, 2nd ed., Norwood, MA: Artech House, 2004.
[6] C. Bishop, Neural Networks for Pattern Recognition, Oxford, U.K.: Oxford University Press, 1995.
[7] V. Vapnik, Statistical Learning Theory, New York: John Wiley & Sons, Inc., 1998.
[8] A.J. Gonzalez and D.D. Dankel, The Engineering of Knowledge-Based Systems: Theory and Practice, Upper Saddle River, NJ: Prentice-Hall, Inc., 1993.
[9] Guide to the Expression of Uncertainty in Measurement, ISO ENV 13005:1999.

David Macii (macii@disi.unitn.it) received a degree in electronic engineering and a Ph.D. degree in information engineering from the University of Perugia, Perugia, Italy, in 2000 and 2003, respectively, and the Master's degree in advanced studies in embedded system design from the University of Lugano, Lugano, Switzerland, in July 2005. Since January 2005, he has been an assistant professor in the Department of Information Engineering and Computer Science of the University of Trento, Italy. His research interests include the design, implementation, and testing of embedded systems, with special emphasis on wireless sensor networks.

Andrea Boni graduated in electronic engineering in 1996. He received a Ph.D. degree in electronics and computer science in 2001. He joined the Department of Information Engineering and Computer Science, University of Trento, Italy, where he teaches digital electronics. His main scientific interests are the study and development of digital circuits for advanced information processing, with particular attention to programmable logic devices, digital signal theory and analysis, statistical signal processing, statistical learning theory, and support vector machines. The application of such interests focuses on identification and control of nonlinear systems, pattern recognition, and signals processing.

Mariolino De Cecco received a degree in electronic engineering with first-class honors from the University of Ancona, Italy, in 1995 and a Ph.D. degree in mechanical measurements for engineering in 1998. He is currently an associate professor of mechanical measurements and robotics and sensor fusion at the University of Trento. His interests include sensor fusion applications, three-dimensional reconstruction by vision systems, signal processing, mobile robotics, and space mechanisms and qualification.

Dario Petri received the Laurea degree summa cum laude and the Ph.D. degree in electronics engineering from the University of Padua, Italy, in 1986 and 1990, respectively. Currently, he is a full professor of electronic instrumentation in the Department of Information Engineering and Computer Science at the University of Trento, Italy. Since 2004, he has been the chairperson of the International Ph.D. School in Information and Communication Technology in the same department. His research activities are in the area of measurement science and technology, and they are particularly focused on data acquisition systems design and testing, digital electronic systems design and characterization, and application of digital signal processing and statistical parameter estimation methods to measurement problems.
