Disclosure of Invention
The present invention is directed to a military reconnaissance system efficiency evaluation method that solves the above-mentioned technical problems.
In order to solve the technical problems, the invention adopts the following technical scheme: the military reconnaissance system efficiency evaluation method comprises the step of establishing an efficiency evaluation model based on information theory, wherein the establishment of the efficiency evaluation model comprises the establishment of an information integrity model, an information accuracy model and an information timeliness model. The efficiency evaluation model consists of the accuracy Q_Sacc, the integrity Q_Scomp and the timeliness Q_Scurr, and the overall efficiency E can be expressed as E = W_Sacc·Q_Sacc + W_Scomp·Q_Scomp + W_Scurr·Q_Scurr, where W_Sacc, W_Scomp and W_Scurr are respectively the weights of the accuracy Q_Sacc, the integrity Q_Scomp and the timeliness Q_Scurr.
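To make the aggregation concrete, a minimal illustrative sketch follows; it is not part of the claimed method, and the weight values in it are assumptions used only for demonstration.

```python
def overall_efficiency(q_acc, q_comp, q_curr, w_acc=0.4, w_comp=0.35, w_curr=0.25):
    """Weighted aggregation E = W_Sacc*Q_Sacc + W_Scomp*Q_Scomp + W_Scurr*Q_Scurr.

    The weight values are illustrative assumptions; in practice they would be
    assigned by expert judgement and should sum to 1.
    """
    assert abs(w_acc + w_comp + w_curr - 1.0) < 1e-9, "weights should sum to 1"
    return w_acc * q_acc + w_comp * q_comp + w_curr * q_curr

# Example: Q_Sacc = 0.8, Q_Scomp = 0.7, Q_Scurr = 0.9 gives E = 0.79
print(overall_efficiency(0.8, 0.7, 0.9))
```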
Preferably, in the framework of information theory, information is a description of the state of a system or event, or a message about that state. Let M = {x_1, x_2, ..., x_n} be the set of all possible states of system X and P = {p_1, p_2, ..., p_n} the set of probabilities of occurrence of each state. According to the Shannon entropy concept, if the probability of occurrence of each state of the system or event can be represented mathematically, the entropy of the information describing the system is
H(X) = -Σ_i p_i log p_i.
If the system state set is a continuous interval [a, b] with a probability density function f(x), the information entropy is
H(X) = -∫_a^b f(x) log f(x) dx.
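As a numerical illustration of the two entropy definitions above, the following minimal sketch (function names are illustrative, not from the original text) computes the discrete Shannon entropy of a set of state probabilities and the differential entropy of a uniform density on [a, b].

```python
import math

def shannon_entropy(probs, base=2.0):
    """Discrete entropy H(X) = -sum_i p_i * log(p_i) over the state probabilities."""
    return -sum(p * math.log(p, base) for p in probs if p > 0.0)

def uniform_differential_entropy(a, b, base=2.0):
    """Differential entropy of the uniform density f(x) = 1/(b-a) on [a, b]: H = log(b - a)."""
    return math.log(b - a, base)

print(shannon_entropy([0.5, 0.5]))         # 1.0 bit: maximum uncertainty for two states
print(uniform_differential_entropy(0, 8))  # 3.0 bits
```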
Preferably, the establishment of the information integrity model comprises a target detection process. Suppose there are n signals to be detected. Let H_0 denote "no target signal is present", H_1 denote "a target signal is present", D_0 denote "decide that no target is present", and D_1 denote "decide that a target is present". That is, for the observation signal z(t) a statistical test is performed under the hypotheses H_0 and H_1 to determine which hypothesis is true, by comparing the magnitudes of the posterior probabilities P(H_0|z) and P(H_1|z): whichever is larger is taken as true. The decision rule is: if P(H_1|z) > P(H_0|z), decide H_1; if P(H_1|z) < P(H_0|z), decide H_0. According to the Bayes formula, the posterior probabilities can be expressed as
P(H_0|z) = P(z|H_0)P(H_0)/P(z),  P(H_1|z) = P(z|H_1)P(H_1)/P(z),
where P(z) is the probability density of z and P(z|H_0), P(z|H_1) are the conditional probability densities. When the likelihood ratio P(z|H_1)/P(z|H_0) > P(H_0)/P(H_1) holds, the decision is H_1; when P(z|H_1)/P(z|H_0) < P(H_0)/P(H_1) holds, the decision is H_0. P(H_0) and P(H_1) are respectively the prior probabilities of the hypotheses H_0 and H_1, generally known as a priori knowledge, and P(H_0)/P(H_1) is the decision threshold.
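The decision rule above is a likelihood-ratio test against the threshold P(H_0)/P(H_1). A minimal sketch of such a test is given below; the Gaussian observation model and all parameter values are illustrative assumptions, not values taken from the invention.

```python
import math

def gaussian_pdf(z, mean, sigma):
    """Conditional probability density P(z|H) under a Gaussian observation model (assumption)."""
    return math.exp(-(z - mean) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def bayes_decision(z, p_h0=0.7, p_h1=0.3, noise_sigma=1.0, signal_mean=2.0):
    """Decide H_1 if the likelihood ratio P(z|H1)/P(z|H0) exceeds the threshold P(H0)/P(H1)."""
    likelihood_h0 = gaussian_pdf(z, 0.0, noise_sigma)          # H0: noise only
    likelihood_h1 = gaussian_pdf(z, signal_mean, noise_sigma)  # H1: signal plus noise
    threshold = p_h0 / p_h1
    return "H1" if likelihood_h1 / likelihood_h0 > threshold else "H0"

print(bayes_decision(0.3))  # small observation: decide H0 (no target)
print(bayes_decision(2.5))  # large observation: decide H1 (target present)
```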
Four cases occur in the decision making of the binary detection problem; they describe the performance of the binary detection device and can be expressed by conditional probabilities as follows:
(1) H_0 is true and the decision is D_0: the correct decision of selecting H_0 when it is true, expressed by the conditional probability P(D_0|H_0);
(2) H_1 is true and the decision is D_1: the correct decision of selecting H_1 when it is true, expressed by the conditional probability P(D_1|H_1); in signal detection this is the case where a target signal is present and the decision is that a target is present, so it is also called the detection probability, denoted P_d;
(3) H_0 is true and the decision is D_1: the first type of erroneous decision, selecting H_1 when it is false, expressed by the conditional probability P(D_1|H_0); in signal detection this is the case where no target signal is present but a target is declared, so it is also called the false alarm probability, denoted P_f;
(4) H_1 is true and the decision is D_0: the second type of erroneous decision, selecting H_0 when H_1 is true, expressed by the conditional probability P(D_0|H_1); in signal detection this is the case where a target signal is present but the decision is that no target is present, so it is also called the missed detection probability, denoted P_m.
According to the Bayes formula and the total probability formula, the corresponding posterior probabilities can be obtained. As can be seen from the basic principles of information theory, H(X) represents the degree of loss of information caused by false alarms and missed detections: the greater the false alarm and missed detection probabilities, the greater the loss of information. The information acquisition integrity model is
Q_Scomp(X) = [H_max(X) - H(X)] / [H_max(X) - H_min(X)]
in the formula:
H_max(X): the entropy at maximum uncertainty, reached when the false alarm probability and the missed detection probability are both 0.5;
H_min(X): the entropy at minimum uncertainty, reached when the false alarm probability and the missed detection probability are both 0.
The effect of enemy interference is to reduce our correct decision probabilities P(D_0|H_0) and P(D_1|H_1) (the detection probability), so that H(X) increases and Q_Scomp(X) decreases.
Preferably, the information accuracy model is established as follows:
Let a one-dimensional random variable obey the equal-probability uniform distribution on the interval [-Δ, Δ], where Δ is the maximum range of uncertainty of the random variable and is generally a priori knowledge. Its probability density function is f(x) = 1/(2Δ) for x ∈ [-Δ, Δ], and its information entropy is H = log(2Δ). If the one-dimensional continuous random variable obeys a normal distribution, its probability density function is f(x) = (1/(√(2π)σ)) exp(-(x - μ)²/(2σ²)), and its information entropy is H = log(√(2πe)σ). The difference between the two information entropies is the degree of reduction of the uncertainty range: ΔH = log(2Δ) - log(√(2πe)σ) = log(2Δ/(√(2πe)σ)).
Generalizing this conclusion to the N-dimensional case, the joint entropy of an N-dimensional continuous random vector X = (x_1, x_2, ..., x_N)^T is defined as
H(X) = -∫ f(x) log f(x) dx.
If the N-dimensional continuous random vector X follows a normal distribution, its probability density function is
f(x) = (2π)^(-N/2) |Σ|^(-1/2) exp[-(1/2)(x - μ)^T Σ^(-1) (x - μ)],
where μ = [μ_1, μ_2, ..., μ_N] is the mean vector and Σ is the covariance matrix. The off-diagonal elements of Σ are the covariances of the random variables x_i and x_j, obtained as Σ_ij = E[(x_i - μ_i)(x_j - μ_j)], and the correlation coefficient of x_i and x_j is expressed as ρ_ij = Σ_ij/(σ_i σ_j). When i = j, Σ_ii is the variance on the diagonal of the covariance matrix.
The joint entropy of the N-dimensional continuous random vector X is
H(X) = (1/2) log[(2πe)^N |Σ|],
where |Σ| is the determinant of the covariance matrix Σ. Since N is a constant, H(X) can be simplified to the relative entropy H_r(X) = log|Σ|, which is related only to the covariance.
For the multivariate normal distribution, it is first assumed that a maximum joint entropy exists, reached when the prior uncertainty range is largest: H_max(X) = log|Σ|_max.
Q_Sacc(X) is defined as a value in the interval [0, 1] obtained from the information entropy; it represents the information acquisition accuracy, 0 ≤ Q_Sacc(X) ≤ 1, and reflects the degree to which the values of the information elements {a_1, a_2, ..., a_C} and the relationships between them are mastered. Q_Sacc(X) → 1 indicates the highest accuracy and Q_Sacc(X) → 0 the lowest accuracy.
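The exact normalization that maps the relative entropy H_r(X) = log|Σ| onto [0, 1] is not reproduced above, so the sketch below assumes a normalization analogous to the integrity model, Q_Sacc = (H_max - H_r)/(H_max - H_min), where |Σ|_max corresponds to the a priori uncertainty range and |Σ|_min to the best achievable measurement precision. Both the normalization and the numerical values are illustrative assumptions.

```python
import numpy as np

def relative_entropy(cov):
    """H_r(X) = log|Sigma|, the covariance-dependent part of the joint entropy."""
    sign, logdet = np.linalg.slogdet(cov)
    assert sign > 0, "covariance matrix must be positive definite"
    return logdet

def accuracy_score(cov, cov_max, cov_min):
    """Assumed normalization: Q_Sacc = (H_max - H_r) / (H_max - H_min), clipped to [0, 1]."""
    h_r, h_max, h_min = relative_entropy(cov), relative_entropy(cov_max), relative_entropy(cov_min)
    return float(np.clip((h_max - h_r) / (h_max - h_min), 0.0, 1.0))

# Illustrative 2-D example: achieved covariance between the prior bound and the precision floor.
cov     = np.diag([1.0, 2.0])    # achieved estimation covariance
cov_max = np.diag([25.0, 25.0])  # a priori uncertainty range (worst case)
cov_min = np.diag([0.1, 0.1])    # sensor precision floor (best case)
print(accuracy_score(cov, cov_max, cov_min))
```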
Preferably, the information timeliness model is established as follows: the degree of recency of the obtained information, described by the time effectiveness of the information, can be expressed as a function of t_i, t_l, t_0 and η, where t_i denotes the current time, i.e. the time at which the combat unit requests the information; t_l denotes the latest update time of the information; t_0 is the time at which the information actually began to exist; and η is a coefficient related to the importance of the information.
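The timeliness formula itself is not reproduced above. The sketch below therefore only illustrates one plausible choice under an explicit assumption: an exponential decay of currency with the normalized age of the information, Q_Scurr = exp(-η·(t_i - t_l)/(t_l - t_0)). Neither the functional form nor the parameter values are taken from the original text.

```python
import math

def timeliness_score(t_request, t_last_update, t_origin, eta=1.0):
    """Assumed model: Q_Scurr = exp(-eta * (t_i - t_l) / (t_l - t_0)).

    t_request (t_i): time at which the combat unit requests the information
    t_last_update (t_l): latest update time of the information
    t_origin (t_0): time at which the information actually began to exist
    eta: coefficient related to the importance of the information
    """
    age_since_update = max(t_request - t_last_update, 0.0)
    validity_span = max(t_last_update - t_origin, 1e-9)  # guard against division by zero
    return math.exp(-eta * age_since_update / validity_span)

print(timeliness_score(t_request=120.0, t_last_update=100.0, t_origin=0.0))  # fresh: about 0.82
print(timeliness_score(t_request=300.0, t_last_update=100.0, t_origin=0.0))  # stale: about 0.14
```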
The invention has the beneficial effects that:
The method adopts an information-theoretic approach and establishes an information integrity model, an information accuracy model and an information timeliness model of the military reconnaissance system, thereby achieving the aim of evaluating the comprehensive combat effectiveness of the military reconnaissance system. The method is theoretically complete, technically advanced and highly operable, is suitable for carrying out combat effectiveness evaluation on various military reconnaissance systems, and can provide technical support for equipment development, combat planning and other work.
Detailed Description
In order to make the technical means, the original characteristics, the achieved purposes and the effects of the invention easily understood, the invention is further described below with reference to the specific embodiments and the attached drawings; the following embodiments are only preferred embodiments of the invention, and not all embodiments. Based on the embodiments of the present invention, all other embodiments obtained by those skilled in the art without creative effort shall fall within the protection scope of the present invention.
Specific embodiments of the present invention are described below with reference to the accompanying drawings.
Example 1
As shown in figure 1, the military reconnaissance system efficiency evaluation method comprises the step of establishing an efficiency evaluation model based on information theory; the establishment of the efficiency evaluation model comprises the establishment of an information integrity model, an information accuracy model and an information timeliness model. The efficiency evaluation model consists of the accuracy Q_Sacc, the integrity Q_Scomp and the timeliness Q_Scurr, and the overall efficiency E can be expressed as E = W_Sacc·Q_Sacc + W_Scomp·Q_Scomp + W_Scurr·Q_Scurr, where W_Sacc, W_Scomp and W_Scurr are respectively the weights of the accuracy Q_Sacc, the integrity Q_Scomp and the timeliness Q_Scurr.
Information is a description of the state of a system or event, or a message about that state. Let M = {x_1, x_2, ..., x_n} be the set of all possible states of system X and P = {p_1, p_2, ..., p_n} the set of probabilities of occurrence of each state. According to the Shannon entropy concept, if the probability of occurrence of each state of the system or event can be represented mathematically, the entropy of the information describing the system is
H(X) = -Σ_i p_i log p_i.
If the system state set is a continuous interval [a, b] with a probability density function f(x), the information entropy is
H(X) = -∫_a^b f(x) log f(x) dx.
The establishment of the information integrity model comprises a target detection process. Suppose there are n signals to be detected. Let H_0 denote "no target signal is present", H_1 denote "a target signal is present", D_0 denote "decide that no target is present", and D_1 denote "decide that a target is present". That is, for the observation signal z(t) a statistical test is performed under the hypotheses H_0 and H_1 to determine which hypothesis is true, by comparing the magnitudes of the posterior probabilities P(H_0|z) and P(H_1|z): whichever is larger is taken as true. The decision rule is: if P(H_1|z) > P(H_0|z), decide H_1; if P(H_1|z) < P(H_0|z), decide H_0. According to the Bayes formula, the posterior probabilities can be expressed as
P(H_0|z) = P(z|H_0)P(H_0)/P(z),  P(H_1|z) = P(z|H_1)P(H_1)/P(z),
where P(z) is the probability density of z and P(z|H_0), P(z|H_1) are the conditional probability densities. When the likelihood ratio P(z|H_1)/P(z|H_0) > P(H_0)/P(H_1) holds, the decision is H_1; when P(z|H_1)/P(z|H_0) < P(H_0)/P(H_1) holds, the decision is H_0. P(H_0) and P(H_1) are respectively the prior probabilities of the hypotheses H_0 and H_1, generally known as a priori knowledge, and P(H_0)/P(H_1) is the decision threshold.
Four cases occur in the decision making of the binary detection problem; they describe the performance of the binary detection device and can be expressed by conditional probabilities as follows:
(1) H_0 is true and the decision is D_0: the correct decision of selecting H_0 when it is true, expressed by the conditional probability P(D_0|H_0);
(2) H_1 is true and the decision is D_1: the correct decision of selecting H_1 when it is true, expressed by the conditional probability P(D_1|H_1); in signal detection this is the case where a target signal is present and the decision is that a target is present, so it is also called the detection probability, denoted P_d;
(3) H_0 is true and the decision is D_1: the first type of erroneous decision, selecting H_1 when it is false, expressed by the conditional probability P(D_1|H_0); in signal detection this is the case where no target signal is present but a target is declared, so it is also called the false alarm probability, denoted P_f;
(4) H_1 is true and the decision is D_0: the second type of erroneous decision, selecting H_0 when H_1 is true, expressed by the conditional probability P(D_0|H_1); in signal detection this is the case where a target signal is present but the decision is that no target is present, so it is also called the missed detection probability, denoted P_m.
According to the Bayes formula and the total probability formula, the corresponding posterior probabilities can be obtained. As can be seen from the basic principles of information theory, H(X) represents the degree of loss of information caused by false alarms and missed detections: the greater the false alarm and missed detection probabilities, the greater the loss of information. The information acquisition integrity model is
Q_Scomp(X) = [H_max(X) - H(X)] / [H_max(X) - H_min(X)]
in the formula:
H_max(X): the entropy at maximum uncertainty, reached when the false alarm probability and the missed detection probability are both 0.5;
H_min(X): the entropy at minimum uncertainty, reached when the false alarm probability and the missed detection probability are both 0.
The effect of enemy interference is to reduce our correct decision probabilities P(D_0|H_0) and P(D_1|H_1) (the detection probability), so that H(X) increases and Q_Scomp(X) decreases.
The information accuracy model is established as follows:
Let a one-dimensional random variable obey the equal-probability uniform distribution on the interval [-Δ, Δ], where Δ is the maximum range of uncertainty of the random variable and is generally a priori knowledge. Its probability density function is f(x) = 1/(2Δ) for x ∈ [-Δ, Δ], and its information entropy is H = log(2Δ). If the one-dimensional continuous random variable obeys a normal distribution, its probability density function is f(x) = (1/(√(2π)σ)) exp(-(x - μ)²/(2σ²)), and its information entropy is H = log(√(2πe)σ). The difference between the two information entropies is the degree of reduction of the uncertainty range: ΔH = log(2Δ) - log(√(2πe)σ) = log(2Δ/(√(2πe)σ)).
Generalizing this conclusion to the N-dimensional case, the joint entropy of an N-dimensional continuous random vector X = (x_1, x_2, ..., x_N)^T is defined as
H(X) = -∫ f(x) log f(x) dx.
If the N-dimensional continuous random vector X follows a normal distribution, its probability density function is
f(x) = (2π)^(-N/2) |Σ|^(-1/2) exp[-(1/2)(x - μ)^T Σ^(-1) (x - μ)],
where μ = [μ_1, μ_2, ..., μ_N] is the mean vector and Σ is the covariance matrix. The off-diagonal elements of Σ are the covariances of the random variables x_i and x_j, obtained as Σ_ij = E[(x_i - μ_i)(x_j - μ_j)], and the correlation coefficient of x_i and x_j is expressed as ρ_ij = Σ_ij/(σ_i σ_j). When i = j, Σ_ii is the variance on the diagonal of the covariance matrix.
The joint entropy of the N-dimensional continuous random vector X is
H(X) = (1/2) log[(2πe)^N |Σ|],
where |Σ| is the determinant of the covariance matrix Σ. Since N is a constant, H(X) can be simplified to the relative entropy H_r(X) = log|Σ|, which is related only to the covariance.
For the multivariate normal distribution, it is first assumed that a maximum joint entropy exists, reached when the prior uncertainty range is largest: H_max(X) = log|Σ|_max.
Q_Sacc(X) is defined as a value in the interval [0, 1] obtained from the information entropy; it represents the information acquisition accuracy, 0 ≤ Q_Sacc(X) ≤ 1, and reflects the degree to which the values of the information elements {a_1, a_2, ..., a_C} and the relationships between them are mastered. Q_Sacc(X) → 1 indicates the highest accuracy and Q_Sacc(X) → 0 the lowest accuracy.
The information timeliness model is established as follows: the degree of recency of the obtained information, described by the time effectiveness of the information, can be expressed as a function of t_i, t_l, t_0 and η, where t_i denotes the current time, i.e. the time at which the combat unit requests the information; t_l denotes the latest update time of the information; t_0 is the time at which the information actually began to exist; and η is a coefficient related to the importance of the information.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above; the above embodiments and the description only illustrate preferred embodiments of the invention and are not intended to limit it. The scope of the invention is defined by the appended claims and their equivalents.