
CN116740561B - SAR target recognition method based on fusion of ASC features and multi-scale depth features

SAR target recognition method based on fusion of ASC features and multi-scale depth features

Info

Publication number
CN116740561B
CN116740561B CN202310552287.8A CN202310552287A CN116740561B CN 116740561 B CN116740561 B CN 116740561B CN 202310552287 A CN202310552287 A CN 202310552287A CN 116740561 B CN116740561 B CN 116740561B
Authority
CN
China
Prior art keywords
image
feature
features
fusion
sar
Legal status
Active
Application number
CN202310552287.8A
Other languages
Chinese (zh)
Other versions
CN116740561A (en)
Inventor
王英华
刘靓
刘宏伟
Current Assignee
Xidian University
Original Assignee
Xidian University
Application filed by Xidian University filed Critical Xidian University
Priority to CN202310552287.8A priority Critical patent/CN116740561B/en
Publication of CN116740561A publication Critical patent/CN116740561A/en
Application granted granted Critical
Publication of CN116740561B publication Critical patent/CN116740561B/en


Classifications

    • G06V 20/10: Physics; Computing; Image or video recognition or understanding; Scenes; scene-specific elements; Terrestrial scenes
    • G06N 3/0464: Physics; Computing; Computing arrangements based on biological models; Neural networks; Architecture; Convolutional networks [CNN, ConvNet]
    • G06N 3/08: Physics; Computing; Computing arrangements based on biological models; Neural networks; Learning methods
    • G06V 10/806: Physics; Computing; Image or video recognition using pattern recognition or machine learning; Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06V 10/82: Physics; Computing; Image or video recognition using pattern recognition or machine learning; using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract


The present invention discloses a SAR target recognition method based on the fusion of ASC features and multi-scale depth features, comprising: obtaining the original SAR complex image of the observed target and extracting the attribute scattering center corresponding to each SAR complex image; reconstructing the attribute scattering center to obtain global and local reconstruction images of different scales; constructing a deep neural network including a feature extraction module and a feature fusion module to perform feature fusion at different levels on the multi-scale depth feature map and the binarized image corresponding to the reconstructed image; inputting the original SAR complex image and the binarized image corresponding to the reconstructed image into a trained deep neural network for processing, and outputting the target recognition result. The method uses accurately estimated ASC parameters to perform multiple types of reconstructions on the target and fuses them with depth features of different scales, providing more information to the network, thereby improving recognition performance.

Description

SAR target recognition method based on fusion of ASC features and multi-scale depth features
Technical Field
The invention belongs to the technical field of radar target recognition, and particularly relates to a SAR target recognition method based on fusion of ASC features and multi-scale depth features.
Background
SAR (Synthetic Aperture Radar) is an active earth observation system. Benefiting from its unique electromagnetic scattering imaging mechanism, it can work around the clock in all weather and image at long range with high resolution, so it is widely applied in both the military and civil fields. Compared with optical images, SAR images lack color information and are susceptible to speckle noise, so interpreting SAR images is more difficult. ATR (Automatic Target Recognition) is a key topic in the intelligent interpretation of SAR images and has received extensive attention from researchers.
In recent years, with the rapid development of deep learning, researchers have proposed various deep-learning-based SAR target recognition algorithms. Deep learning is a data-driven approach whose excellent performance usually depends on a large amount of training data. However, acquiring large amounts of labeled measured SAR data is costly, and compared with optical datasets, SAR image datasets are few in number and small in size. The problem of insufficient samples therefore strongly affects SAR ATR algorithms based on deep learning. SAR images, however, have their own distinctive characteristics, and exploiting them effectively can alleviate the shortage of training samples. Owing to its distinctive electromagnetic scattering properties, the ASC (Attributed Scattering Center) model is an effective tool for interpreting SAR measurements: it can be used to extract and estimate the parameters of SAR targets, provides physically relevant characteristics of complex targets, and has a certain degree of immunity to interference. Using ASC features can therefore improve the recognition performance of SAR ATR algorithms.
In view of the above problems, some related studies have been conducted. For example, in the paper "Efficient Attributed Scatter Center Extraction Based on Image-Domain Sparse Representation" (IEEE Transactions on Signal Processing), published in 2020, Yang et al. proposed a fast attribute scattering center extraction algorithm based on image-domain sparse representation: they showed that scattering centers possess translation and additivity properties in the image domain, used these properties to reduce the dictionary, and applied Newton's method to refine the continuous-valued parameter estimates when solving for the sparse coefficients, obtaining fast and highly accurate attribute scattering center parameter estimates. In the paper "Multiscale CNN Based on Component Analysis for SAR ATR" (IEEE Transactions on Geoscience and Remote Sensing), published by Li et al. in 2021, deep-learning features are fused with attribute scattering center features at multiple scales: the overall attribute scattering center reconstruction and the binarized results of eight types of attribute scattering center reconstructions are each fused with the same deep network features, and the fused features are used for the target recognition task, improving the recognition accuracy of the network. In the paper "A Convolutional Neural Network Combined with Attributed Scattering Centers for SAR ATR" (MDPI Remote Sensing), published by Zhou et al. in 2021, the attribute scattering center extraction results of a SAR image are rendered as a 3D imaging of the scattering center model, the imaging results are fed into a convolutional neural network for training, the trained features are fused with the depth features obtained from SAR image training, and the fused features are used for the target recognition task. In the paper "Integrating the Reconstructed Scattering Center Feature Maps With Deep CNN Feature Maps for Automatic SAR Target Recognition" (IEEE Geoscience and Remote Sensing Letters), published by Zhang et al. in 2021, each attribute scattering center extracted from the target is imaged individually, the imaging results of all individual attribute scattering centers are directly concatenated with the deep network features, the concatenated features are used to complete the target recognition task, and a parameter transfer method from transfer learning is used when training the deep network, effectively improving training efficiency.
However, the above four methods have shortcomings. In the first method, the parameter estimation of the target attribute scattering centers involves an artificial zeroing operation: after one attribute scattering center is extracted, all pixels in the area covered by that scattering center are set to zero to ensure that no repeated extraction occurs at the same position, and this step increases the inaccuracy of the attribute scattering center parameter estimates. The second and third methods rely on the first method's extraction scheme for the attribute scattering centers, and their depth features are extracted with self-designed convolutional neural network structures; fully training such networks requires considerable time. Moreover, the multi-scale feature fusion of the second method actually uses only the last-layer features of the deep network, so the multi-scale features of the deep network are insufficiently exploited, which also affects target recognition performance. In the fourth method, the attribute scattering centers are extracted with a frequency-domain algorithm, which requires more scattering center points to achieve the same reconstruction quality of the target, and the frequency-domain extraction method has low computational efficiency and high memory requirements.
In summary, the accuracy and extraction efficiency of the attribute scattering center features used by existing methods are limited by high memory consumption and high computational complexity, which in turn affects target recognition performance; and the deep networks used in existing algorithms require many training iterations to reach good classification performance, so the training cost is high.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a SAR target recognition method based on fusion of ASC features and multi-scale depth features. The technical problem addressed by the invention is solved through the following technical scheme:
A SAR target recognition method based on fusion of ASC features and multi-scale depth features comprises the following steps:
Step 1, acquiring original SAR complex images of an observation target, and extracting an attribute scattering center corresponding to each SAR complex image based on an ASC parameter estimation algorithm of improved image domain sparse representation;
Step 2, performing global and local image reconstruction on the attribute scattering centers, and obtaining global and local reconstruction maps of different scales by downsampling;
Step 3, constructing a deep neural network comprising a feature extraction module and a feature fusion module, wherein,
The characteristic extraction module is used for carrying out multi-scale characteristic extraction on the amplitude image corresponding to the original SAR complex image to obtain a multi-scale depth characteristic image;
the feature fusion module is used for carrying out feature fusion on different layers on the extracted multi-scale depth feature map and the binarized image corresponding to the reconstruction map;
And 4, inputting the original SAR complex image and the binarized image corresponding to the reconstruction image into a trained deep neural network for processing, and outputting a target recognition result.
The invention has the beneficial effects that:
1. The method fully exploits the physical properties of the target reflected by the ASC model: the accurately estimated ASC parameter sets are used to perform multiple types of reconstruction of the observed target in the image domain, and these reconstructions are fused with depth features of different scales, providing more information to the network and thereby improving target recognition performance;
2. The method improves and optimizes a recent fast image-domain ASC extraction algorithm: by changing the way the residual image is processed when solving the sparse coefficients and the corresponding way initial dictionary atoms are selected, the artificial zeroing operation is removed from the target's ASC extraction process, so that the target's ASC parameter estimates are more accurate and its reconstruction error is smaller;
3. The invention uses parameter transfer during deep network training: the parameters of the first 13 layers of VGG16Net, fully trained on the ImageNet dataset, are used as the initial parameters of the first 13 layers of the deep network, so the network achieves good recognition performance with fewer training rounds, which also improves training efficiency.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Drawings
FIG. 1 is a schematic flow chart of a SAR target recognition method based on fusion of ASC features and multi-scale depth features provided by an embodiment of the present invention;
FIG. 2 is a flow chart of a prior art fast attribute scattering center extraction algorithm based on sparse representation of image domains;
FIG. 3 is a flowchart of a fast attribute scattering center extraction algorithm for improved sparse representation of image domain provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of the structure and forward propagation of a deep neural network according to an embodiment of the present invention;
FIG. 5 is a T72 raw SAR image in simulation experiments;
FIG. 6 is a result of attribute scattering center extraction and image reconstruction of a T72 raw SAR image using the algorithm of the present invention;
FIG. 7 is a binarized image corresponding to the reconstruction result in FIG. 6.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but embodiments of the present invention are not limited thereto.
Example 1
Referring to fig. 1, fig. 1 is a flowchart of a SAR target recognition method based on fusion of ASC features and multi-scale depth features, provided in an embodiment of the present invention, which includes:
Step 1, acquiring original SAR complex images of an observation target, and extracting an attribute scattering center corresponding to each SAR complex image based on an ASC parameter estimation algorithm of improved image domain sparse representation.
The flow of the existing fast attribute scattering center extraction algorithm based on image-domain sparse representation is shown in fig. 2. When estimating the parameters of the target attribute scattering centers, it performs an artificial zeroing operation: after one attribute scattering center is extracted, all pixels in the area covered by that scattering center are set to zero to ensure that no repeated extraction occurs at the same position, which increases the inaccuracy of the parameter estimates. To address this problem, this embodiment improves the fast attribute scattering center extraction algorithm based on image-domain sparse representation so that the artificial zeroing operation, which causes the inaccurate parameter estimates, is removed.
In this embodiment, the improved image-domain sparse-representation ASC parameter estimation algorithm is used to extract the scattering centers of the SAR image S. The number Q of extracted scattering center points is set to 25. Each scattering center corresponds to a feature vector; the feature vector of the ith scattering center can be expressed as

$$\theta_i = (A_i,\ \alpha_i,\ x_i,\ y_i,\ L_i,\ \bar{\phi}_i,\ \gamma_i)$$

wherein A_i denotes the complex amplitude, α_i denotes the frequency dependence factor, x_i and y_i denote the position coordinates in the range and azimuth directions, respectively, L_i denotes the length of the scattering center, and φ̄_i and γ_i denote the orientation angle and azimuth dependence factor of the scattering center.
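For concreteness, such a parameter vector could be carried in code roughly as follows; this is a minimal illustrative sketch, and the container name and field layout are assumptions rather than part of the patent:

```python
from dataclasses import dataclass

@dataclass
class AscParams:
    """One attribute scattering center theta_i = (A, alpha, x, y, L, phi_bar, gamma)."""
    A: complex       # complex amplitude A_i
    alpha: float     # frequency dependence factor alpha_i
    x: float         # range position x_i
    y: float         # azimuth position y_i
    L: float         # length L_i (0 for a localized scattering center)
    phi_bar: float   # orientation angle of the scattering center (radians)
    gamma: float     # azimuth dependence factor gamma_i

# Per the embodiment, each SAR image is described by Q = 25 such vectors:
target_ascs = [AscParams(1 + 0j, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0) for _ in range(25)]
```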
Specifically, referring to fig. 3, fig. 3 is a flowchart of an improved fast attribute scattering center extraction algorithm for image domain sparse representation according to an embodiment of the present invention, which includes:
11) First, the SAR echo signal is converted into the image domain. The problem to be solved by the improved fast attribute scattering center extraction algorithm based on image-domain sparse representation is still to estimate the attribute scattering center parameters of the target from its backscattered echoes, namely the number Q of attribute scattering centers composing the complex target and their parameter sets θ_m, m = 1, 2, ..., Q. The problem can be described as follows:

$$\min Q \quad \text{s.t.} \quad \left\|\tilde{E}(f_k,\phi_h)-\sum_{q=1}^{Q}\sigma_q\,\tilde{D}(f_k,\phi_h;\theta_q)\right\|_2^2 \le \varepsilon \tag{1}$$

wherein f denotes the operating frequency of the radar, φ denotes the synthetic aperture angle range, f_k and φ_h respectively denote the discretized f and φ, K and H denote the numbers of discrete points in the frequency and azimuth directions, x, y denote the pixel coordinates after conversion to the image domain, f_0 denotes the center frequency, c denotes the speed of light, Ẽ(f_k, φ_h) denotes the backscattered echo data of the target, D̃(f_k, φ_h; θ_q) denotes the echo data of the qth attribute scattering center, Q denotes the total number of attribute scattering centers, σ_q denotes a sparse coefficient, θ_q denotes the parameter set of the qth scattering center, and ε denotes an error coefficient with ε > 0.
Equation (1) remains a sparse representation problem. By applying the same linear imaging operator β{·}, given by equation (2), to both Ẽ(f_k, φ_h) and D̃(f_k, φ_h; θ_q), equation (1) can be converted into its image-domain form, equation (3):

$$\beta\{\tilde{E}(f_k,\phi_h)\} = \frac{1}{KH}\sum_{k=1}^{K}\sum_{h=1}^{H}\tilde{E}(f_k,\phi_h)\exp\!\left(j\frac{4\pi f_k}{c}\,(x\cos\phi_h+y\sin\phi_h)\right) \tag{2}$$

$$\min Q \quad \text{s.t.} \quad \left\|S(x,y)-\sum_{q=1}^{Q}\sigma_q\,D(x,y;\theta_q)\right\|_2^2 \le \varepsilon \tag{3}$$

wherein S(x, y) and D(x, y; θ_q) respectively denote the image-domain representations corresponding to Ẽ(f_k, φ_h) and D̃(f_k, φ_h; θ_q).
12) The NOMP algorithm is used to solve for the attribute scattering center parameters of S(x, y), and a parameter fine-correction process is added during solving to obtain several attribute scattering centers.
First, an initial dictionary is established and a residual image R(x, y) is initialized with R(x, y) = S(x, y). Then, the optimized NOMP algorithm is used to extract the attribute scattering centers. The extraction of the attribute scattering centers of the observed target consists of four steps: atom selection, fine estimation of atom parameters, least squares solution, and residual computation.
Specifically, step 12) includes:
a) An initial dictionary Φ is established, with the expression:

$$\Phi=\left\{\bar{D}(x,y;\theta)\;\middle|\;\theta\in\Theta_{loc}\cup\Theta_{dis}\right\}$$

in the formula,

$$\Theta_{loc}=\Theta_{A}\times\Theta_{\alpha loc}\times\Theta_{Lloc}\times\Theta_{\bar{\phi}loc}\times\Theta_{\gamma loc}\times\Theta_{x}\times\Theta_{y}$$

$$\Theta_{dis}=\Theta_{A}\times\Theta_{\alpha dis}\times\Theta_{Ldis}\times\Theta_{\bar{\phi}dis}\times\Theta_{\gamma dis}\times\Theta_{x}\times\Theta_{y}$$

wherein D̄(x, y; θ) denotes a normalized attribute scattering center image; Θ_loc and Θ_dis denote the parameter sets of localized and distributed attribute scattering centers, respectively; Θ_A, Θ_αloc, Θ_Lloc, Θ_φ̄loc, Θ_γloc, Θ_x, Θ_y correspond to the parameters A, α, L, φ̄, γ, x, y of a localized center, and Θ_αdis, Θ_Ldis, Θ_φ̄dis, Θ_γdis to those of a distributed center, with the same meaning for each parameter: A denotes the complex amplitude, α the frequency dependence factor, x and y the position coordinates in the range and azimuth directions, L the length of the scattering center, and φ̄ and γ the orientation angle and azimuth dependence factor of the scattering center; "×" denotes the Cartesian product.
When building the initial dictionary Φ, let Θ_A = {1}, Θ_αloc = {0}, Θ_αdis = {0}, Θ_Lloc = {0}, Θ_Ldis = {2ΔL, 4ΔL, ..., 2N_LΔL}, Θ_γloc = {0} and Θ_γdis = {0}, with Θ_x and Θ_y discretized over the image pixel grid and Θ_φ̄dis discretized over the orientation range. This is because the values of the frequency dependence factor α and the azimuth dependence factor γ have little influence on the attribute scattering center echo signal and can temporarily be ignored, so the corresponding sets are fixed to {0}. The initial dictionary Φ is thus built.
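As an illustration of how these Cartesian-product grids can be enumerated, consider the following sketch; the position and orientation grid values are placeholders chosen for demonstration, since the patent fixes only the sets listed above:

```python
import itertools

dL, N_L = 0.3, 4                                   # assumed length step and count
theta_A, theta_alpha, theta_gamma = [1.0], [0.0], [0.0]
theta_x = [0.3 * i for i in range(-16, 17)]        # assumed range-position grid
theta_y = [0.3 * i for i in range(-16, 17)]        # assumed azimuth-position grid
theta_L_dis = [2 * n * dL for n in range(1, N_L + 1)]   # {2dL, 4dL, ..., 2*N_L*dL}
theta_phi_dis = [0.01 * i for i in range(-5, 6)]   # assumed orientation grid (rad)

# Localized atoms: L = 0, orientation irrelevant (fixed to 0).
Theta_loc = list(itertools.product(theta_A, theta_alpha, [0.0], [0.0],
                                   theta_gamma, theta_x, theta_y))
# Distributed atoms: L > 0, orientation discretized.
Theta_dis = list(itertools.product(theta_A, theta_alpha, theta_L_dis, theta_phi_dis,
                                   theta_gamma, theta_x, theta_y))
initial_grid = Theta_loc + Theta_dis               # parameter tuples behind dictionary Phi
```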
b) Atom selection is performed on the initial dictionary Φ: the atom θ_i_chose with the greatest similarity to the current residual image R(x, y) is selected from Φ as the rough estimate of the current ith ASC parameter set.
Specifically, for a given initial dictionary Φ, the first step of the NOMP algorithm is to select from Φ the atom that best matches (has the largest inner product with) the residual image R(x, y):

$$\theta_i=\arg\max_{\theta\in\Phi}\left|\sum_{x}\sum_{y}R(x,y)\,\bar{D}^{*}(x,y;\theta)\right|$$

wherein (·)* denotes the conjugate operation.
c) Judge whether the position parameters of the currently selected atom θ_i_chose are the same as those of the previously selected atom; if so, discard θ_i_chose and re-select the atom with the greatest similarity after removing θ_i_chose as the rough estimate of this round; otherwise, execute step d).
Specifically, this embodiment optimizes the atom selection: record the currently selected dictionary atom as θ_i_chose, i.e., θ_i_chose = θ_i, and record the dictionary atom selected in the previous execution of step b) as θ_i_last. Compare whether θ_i_chose and θ_i_last are the same atom. If θ_i_last ≠ θ_i_chose, continue with the following steps to perform parameter estimation; if θ_i_last = θ_i_chose, give up the atom with the largest current similarity and select the atom with the largest similarity after removing θ_i_last as the θ_i_chose of this round.
d) Taking the rough estimate θ_i_chose as the initial point, fine estimation of the ith ASC parameter set is performed to obtain the fine estimate θ_i,opt, which is added to the selected atom set Φ_Gen.
Specifically, since most attribute scattering center parameters take continuous values, the parameters obtained in step b) are inexact and require further refined estimation. Therefore, with the rough estimate θ_i_chose of the current ith ASC parameter set obtained in step b) as the initial point, the following problem is solved by Newton's method:

$$\theta_{i,opt}=\arg\max_{\theta/A}\left|\sum_{x}\sum_{y}R(x,y)\,\bar{D}^{*}(x,y;\theta)\right|$$

wherein θ/A denotes the remaining parameter set with the amplitude parameter A removed.
Let Φ_Gen denote the collection of attribute scattering center images corresponding to the fine-estimated parameters θ_i,opt; it is initialized as the empty set and updated after each fine estimation as:

$$\Phi_{Gen}=\Phi_{Gen}\cup\{D(x,y;\theta_{i,opt})\}$$
e) The input image S(x, y) is approximated by the atoms in Φ_Gen, and the sparse coefficients are solved by the least squares method.
Specifically, using least squares estimation, the input S(x, y) is approximated by the atoms in Φ_Gen:

$$\hat{\boldsymbol{\sigma}}=\arg\min_{\boldsymbol{\sigma}}\left\|S(x,y)-\sum_{p=1}^{i}\sigma_{p}\,D(x,y;\theta_{p,opt})\right\|_2^2$$

wherein σ = (σ_1, ..., σ_i) denotes the set of coefficients corresponding to the atoms, and σ̂ denotes the optimal coefficients for approximating the input S(x, y) using Φ_Gen.
f) The residual image is updated according to the sparse coefficients.
Specifically, the update formula is:

$$R(x,y)=S(x,y)-\sum_{p=1}^{i}\hat{\sigma}_{p}\,D(x,y;\theta_{p,opt})$$
g) Repeating the operations from step b) to step f) until the current residual image R (x, y) can no longer extract valid attribute scattering centers, and exiting the loop.
So far, the extraction of the attribute scattering center of the observed target is finished by using the improved algorithm.
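The loop of steps a) to g) can be summarized in a compact sketch. The version below assumes the dictionary has been rendered as an array of normalized atom images with matching parameter tuples; the Newton refinement of step d) is only marked by a comment, so this is a structural illustration rather than the full algorithm:

```python
import numpy as np

def extract_ascs(S, atoms, thetas, q_max=25, stop_energy=1e-2):
    """Improved NOMP-style ASC extraction sketch (no artificial pixel zeroing).

    S      : complex image, shape (H, W)
    atoms  : normalized atom images D_bar(x, y; theta), shape (N, H, W)
    thetas : N parameter tuples (..., x, y) aligned with `atoms`
    """
    R = S.copy()                                   # a) residual starts as R = S
    picked, flat = [], []
    last_xy = None
    for _ in range(q_max):
        if np.linalg.norm(R) <= stop_energy:       # g) no valid ASC left
            break
        # b) atom with the largest |inner product| with the residual
        scores = np.abs(np.tensordot(atoms.conj(), R, axes=([1, 2], [0, 1])))
        i = int(np.argmax(scores))
        # c) if its (x, y) repeats the previous pick, take the runner-up instead
        if last_xy is not None and thetas[i][-2:] == last_xy:
            scores[i] = -np.inf
            i = int(np.argmax(scores))
        last_xy = thetas[i][-2:]
        # d) Newton refinement of the continuous parameters would go here
        picked.append(thetas[i])
        flat.append(atoms[i].ravel())
        # e) least-squares coefficients over ALL selected atoms
        Phi_gen = np.stack(flat, axis=1)
        sigma, *_ = np.linalg.lstsq(Phi_gen, S.ravel(), rcond=None)
        # f) residual from the full approximation, instead of zeroing pixels
        R = S - (Phi_gen @ sigma).reshape(S.shape)
    return picked, R
```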
Preferably, in the present embodiment, 25 attribute scattering centers are extracted for each SAR complex image using the above method.
The method thus improves and optimizes a recently proposed fast image-domain ASC extraction algorithm: by changing the way the residual image is processed when solving the sparse coefficients and the way initial dictionary atoms are selected, the artificial zeroing operation is removed from the target's ASC extraction process, so that the target's ASC parameter estimates are more accurate and its reconstruction error is smaller.
Step 2, performing global and local image reconstruction on the attribute scattering centers, and obtaining global and local reconstruction maps of different scales by downsampling.
21) The attribute scattering center parameters of the observed target obtained in step 1 are substituted into the defining expression of the attribute scattering center model to obtain the backscattered echo data of the whole target and the backscattered echo data of the individual attribute scattering centers.
Specifically, according to the definition of the attribute scattering center model, the backscattered echo of a target in the high-frequency region can be regarded as the superposition of the echoes of many independent scattering centers:

$$E(f,\phi;\Theta)=\sum_{q=1}^{Q}E_q(f,\phi;\theta_q)+n(f,\phi) \tag{12}$$

wherein E(f, φ; Θ) is a frequency-domain signal denoting the backscattered echo of the observed target; Θ = {θ_q, q = 1, 2, ..., Q} denotes the set of all parameters, with each θ_q = (A_q, α_q, L_q, φ̄_q, γ_q, x_q, y_q) consisting, from left to right, of the backscattering coefficient, frequency dependence factor, length, tilt angle, azimuth dependence factor, range coordinate and azimuth coordinate; Q denotes the total number of attribute scattering centers composing the current complex target; E_q(f, φ; θ_q) denotes the echo signal of the qth individual attribute scattering center; and n(f, φ) denotes additive white Gaussian noise. The echo of a single attribute scattering center is:

$$E_q(f,\phi;\theta_q)=A_q\left(j\frac{f}{f_0}\right)^{\alpha_q}\exp\!\left(-j\frac{4\pi f}{c}(x_q\cos\phi+y_q\sin\phi)\right)\operatorname{sinc}\!\left(\frac{2\pi f}{c}L_q\sin(\phi-\bar{\phi}_q)\right)\exp\!\left(-2\pi f\gamma_q\sin\phi\right) \tag{13}$$

wherein sinc(·) = sin(·)/(·), f_0 denotes the center frequency of radar operation, c denotes the speed of light, A_q denotes the backscattering coefficient, α_q denotes the frequency dependence factor, x_q and y_q denote the position coordinates in the range and azimuth directions, respectively, and the three parameters L_q, φ̄_q and γ_q describe the length, tilt angle and azimuth dependence factor of the attribute scattering center, respectively.
Substituting the estimated scattering center parameters of the observed target into equation (13) gives the backscattered echo of each single scattering point; recombining them via equation (12) gives the target backscattered echo reconstructed from the extracted attribute scattering center results; a linear imaging operator β{·} is then applied to the frequency-domain echo to obtain the reconstructed image S(x, y).
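Equations (12) and (13) translate almost directly into NumPy. The sketch below follows the common sign conventions of the ASC model and uses placeholder frequency and aperture grids; note that np.sinc(u) = sin(πu)/(πu), so the unnormalized sinc(u) = sin(u)/u is written as np.sinc(u / π):

```python
import numpy as np

def asc_echo(f, phi, A, alpha, L, phi_bar, gamma, x, y, f0=9.6e9, c=3e8):
    """Single-ASC echo E_q(f, phi; theta_q) as in equation (13) (sketch)."""
    freq = (1j * f / f0) ** alpha
    geom = np.exp(-1j * 4 * np.pi * f / c * (x * np.cos(phi) + y * np.sin(phi)))
    length = np.sinc((2 * np.pi * f / c) * L * np.sin(phi - phi_bar) / np.pi)
    azim = np.exp(-2 * np.pi * f * gamma * np.sin(phi))
    return A * freq * geom * length * azim

# Whole-target echo, equation (12): superpose the Q extracted centers.
fk = np.linspace(9.0e9, 10.2e9, 64)[:, None]     # placeholder frequency samples (Hz)
ph = np.linspace(-0.05, 0.05, 64)[None, :]       # placeholder aperture angles (rad)
thetas = [(1.0, 0.0, 0.0, 0.0, 0.0, 1.5, -2.0),  # (A, alpha, L, phi_bar, gamma, x, y)
          (0.7, 0.0, 2.4, 0.0, 0.0, -3.0, 0.5)]
E = sum(asc_echo(fk, ph, *t) for t in thetas)    # (64, 64) frequency-domain echo
```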
22) The same linear imaging operator is applied to the whole-target backscattered echo data and to the backscattered echo data of each individual attribute scattering center, correspondingly yielding a global reconstruction map and several local reconstruction maps of individual scattering points.
The global and local reconstruction maps have the same size as the original SAR complex image, 128×128; since 25 attribute scattering centers are extracted in this embodiment, 25 local reconstruction maps are obtained.
23) The 128×128 global reconstruction map is downsampled to obtain a 64×64 global reconstruction map S_recon_all_64 and a 32×32 global reconstruction map S_recon_all_32, and the 128×128 local reconstruction maps are downsampled to obtain 64×64 local reconstruction maps S_recon_single_64.
It can be understood that, in this embodiment, corresponding thresholds are further set for the three images obtained after the downsampling operation to obtain three types of binarized maps, as follows:
For the global reconstruction S_recon_all_64, a threshold t = 0.01 is set; all pixel values of S_recon_all_64 are compared with t, points with pixel values greater than t are set to 255, and points with pixel values less than t are set to 0, yielding the binary image B_recon_all_64.
For the 25 local reconstruction images S_recon_single_64, the threshold t = 0.01 is set and each of the 25 images is binarized in the same way, yielding 25 binary images B_recon_single_64.
For the global reconstruction S_recon_all_32, the threshold t = 0.01 is set; all pixel values of S_recon_all_32 are compared with t, points with pixel values greater than t are set to 255, and points with pixel values less than t are set to 0, yielding the binary image B_recon_all_32.
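A sketch of the downsampling and thresholding above follows; 2×2 block averaging is assumed for the downsampling kernel, which the embodiment does not pin down:

```python
import numpy as np

def down2(img):
    """Halve resolution by 2x2 block averaging (assumed downsampling kernel)."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def binarize(img, t=0.01):
    """Pixels above threshold t become 255, the rest 0, as in the embodiment."""
    return np.where(img > t, 255, 0).astype(np.uint8)

S_recon_all_128 = np.abs(np.random.randn(128, 128))  # stand-in for a reconstruction
S_recon_all_64 = down2(S_recon_all_128)
S_recon_all_32 = down2(S_recon_all_64)
B_recon_all_64 = binarize(S_recon_all_64)            # 64x64 binary map
B_recon_all_32 = binarize(S_recon_all_32)            # 32x32 binary map
```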
Step 3, constructing a deep neural network comprising a feature extraction module and a feature fusion module, wherein,
The feature extraction module is used for carrying out multi-scale feature extraction on the amplitude image corresponding to the original complex SAR image to obtain a multi-scale depth feature map;
the feature fusion module is used for carrying out feature fusion on different layers on the extracted multi-scale depth feature map and the binary image corresponding to the reconstruction image.
First, a feature extraction module is constructed.
In this embodiment, the constructed feature extraction module comprises 12 convolutional layers and 3 max-pooling layers; its structure is, in order: a first convolutional layer L_C1, a second convolutional layer L_C2, a third max-pooling layer L_p3, a fourth convolutional layer L_C4, a fifth convolutional layer L_C5, a sixth max-pooling layer L_p6, a seventh convolutional layer L_C7, an eighth convolutional layer L_C8, a ninth convolutional layer L_C9, a tenth max-pooling layer L_p10, an eleventh convolutional layer L_C11, a twelfth convolutional layer L_C12, a thirteenth convolutional layer L_C13, a fourteenth convolutional layer L_C14 and a fifteenth convolutional layer L_C15; the network structure of the first 13 layers is the same as that of the first 13 layers of the VGG16Net network.
The depth global feature output by the fifth convolutional layer L_C5 serves as the first-scale depth feature to be fused extracted by the feature extraction module;
the depth global feature output by the ninth convolutional layer L_C9 serves as the second-scale depth feature to be fused;
the depth global feature output by the fifteenth convolutional layer L_C15 serves as the third-scale depth feature to be fused.
Specifically, the parameters of each layer are set as follows: the numbers of convolution kernels of the 12 convolutional layers are 64, 64, 128, 128, 256, 256, 256, 512, 512, 512, 256 and 32, respectively (the first ten following VGG16), the convolution kernel sizes are all 3×3, the convolution strides are all 1, and the ReLU activation function is used; the kernel sizes of the 3 max-pooling layers are all 2×2 with stride 2.
Because the first 13 layers of the feature extraction module constructed in this embodiment are identical in structure to the first 13 layers of VGG16Net, the parameters trained on the ImageNet dataset can be used as the initial parameters of the first 13 layers of the feature extraction network when the network is subsequently trained.
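A PyTorch sketch of this extractor is given below. The tap points and channel widths follow the embodiment; loading torchvision's pretrained VGG16 is one way to realize the ImageNet initialization, and the 3-channel input (e.g., the amplitude image replicated across channels) is an assumption made to match VGG16's input format:

```python
import torch.nn as nn
from torchvision.models import vgg16

def conv_block(in_c, out_c):
    return nn.Sequential(nn.Conv2d(in_c, out_c, kernel_size=3, stride=1, padding=1),
                         nn.ReLU(inplace=True))

class FeatureExtractor(nn.Module):
    """12 conv + 3 max-pool layers; layers 1-13 mirror VGG16's first 13 layers."""
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(conv_block(3, 64), conv_block(64, 64),
                                    nn.MaxPool2d(2))                         # L_C1..L_p3
        self.stage2 = nn.Sequential(conv_block(64, 128), conv_block(128, 128))  # L_C4, L_C5
        self.pool2 = nn.MaxPool2d(2)                                         # L_p6
        self.stage3 = nn.Sequential(conv_block(128, 256), conv_block(256, 256),
                                    conv_block(256, 256))                    # L_C7..L_C9
        self.pool3 = nn.MaxPool2d(2)                                         # L_p10
        self.stage4 = nn.Sequential(conv_block(256, 512), conv_block(512, 512),
                                    conv_block(512, 512),                    # L_C11..L_C13
                                    conv_block(512, 256), conv_block(256, 32))  # L_C14, L_C15

    def forward(self, x):                    # x: (B, 3, 128, 128) amplitude image
        f1 = self.stage2(self.stage1(x))     # (B, 128, 64, 64) first-scale feature (L_C5)
        f2 = self.stage3(self.pool2(f1))     # (B, 256, 32, 32) second-scale feature (L_C9)
        f3 = self.stage4(self.pool3(f2))     # (B, 32, 16, 16)  third-scale feature (L_C15)
        return f1, f2, f3

def load_vgg16_init(model):
    """Copy ImageNet-pretrained VGG16 weights into the first 10 conv layers."""
    src = [m for m in vgg16(weights='IMAGENET1K_V1').features if isinstance(m, nn.Conv2d)]
    dst = [m for m in model.modules() if isinstance(m, nn.Conv2d)][:10]
    for s, d in zip(src, dst):
        d.weight.data.copy_(s.weight.data)
        d.bias.data.copy_(s.bias.data)
```

Since the pooling layers carry no parameters, initializing the first 13 layers amounts to copying VGG16's first ten convolution weights.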
Then, a feature fusion module is constructed.
In this embodiment, the feature fusion module comprises a local feature-map-level fusion unit and an overall feature-map-level fusion unit, wherein
the local feature-map-level fusion unit fuses, in multiple passes, the first-scale and second-scale depth features to be fused extracted by the feature extraction module with the binarized images corresponding to three different reconstruction maps, correspondingly obtaining three fused feature maps;
the overall feature-map-level fusion unit fuses the three fused feature maps with the third-scale depth feature to be fused extracted by the feature extraction module to obtain the final fused feature.
Specifically, as shown in fig. 4, local feature fusion is performed three times in this embodiment: the layer-5 depth global feature is fused with the 25 binary images B_recon_single_64 reconstructed from single scattering points, the layer-5 depth global feature is fused with the whole-target reconstruction binary image B_recon_all_64, and the layer-9 depth global feature is fused with the whole-target reconstruction binary image B_recon_all_32. The specific operations are as follows:
Step 1, first fusion:
The binary image B_recon_single_64 reconstructed from single scattering points has size 64×64×25. It is multiplied along the channel dimension with the 64×64×128 layer-5 depth global feature of the feature extraction module to obtain a 64×64×25×128 fused feature, on which a global average pooling (GAP) operation is performed to obtain a 25×128-dimensional feature.
To compress the 25 component features into a single vector, statistical functions can be applied across the corresponding positions of the 25 vectors. Considering both representativeness and computational cost, this embodiment uses the max(·) and mean(·) statistics to complete the vector compression, as shown in equation (14):
C(·)=max(·)+mean(·) (14)
wherein C(·) denotes the fusion operation used: max(·) and mean(·) are applied at the corresponding positions of the 25 local component feature vectors, and the results are added, finally yielding a 1×128-dimensional feature, i.e., the first fused feature.
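A sketch of this multiply, GAP, then max-plus-mean compression follows; the broadcasting layout is an assumption consistent with the stated tensor sizes, and the masks are taken in {0, 1} (the 0/255 binary maps divided by 255):

```python
import torch

def local_fusion(feat, masks, compress=True):
    """Fuse a depth feature map with binary reconstruction masks.

    feat  : (B, C, H, W) depth feature, e.g. C=128, H=W=64 for the layer-5 feature
    masks : (B, K, H, W) binary maps in {0, 1}; K=25 single-point maps, or K=1
    """
    fused = feat.unsqueeze(1) * masks.unsqueeze(2)    # (B, K, C, H, W)
    gap = fused.mean(dim=(-2, -1))                    # (B, K, C) per-component vectors
    if compress:                                      # eq. (14): C(.) = max(.) + mean(.)
        return gap.max(dim=1).values + gap.mean(dim=1)
    return gap.squeeze(1)                             # K = 1: a plain GAP vector
```

For K = 1 (the whole-target maps of the second and third fusions below) no compression is needed, so the function simply returns the GAP vector.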
Step 2, second fusion:
The 64×64 whole-target reconstruction binary image B_recon_all_64 is multiplied along the channel dimension with the 64×64×128 layer-5 depth global feature of the feature extraction module to obtain a 64×64×128 fused feature, on which a GAP operation is performed to obtain a 1×128-dimensional feature, i.e., the second fused feature.
Step 3, third fusion:
The 32×32 whole-target reconstruction binary image B_recon_all_32 is multiplied along the channel dimension with the 32×32×256 layer-9 depth global feature of the feature extraction module to obtain a 32×32×256 fused feature, on which a GAP operation is performed to obtain a 1×256-dimensional feature, i.e., the third fused feature.
At the overall feature-map level, four features are fused in this embodiment: the final 16×16×32 depth feature output by the feature extraction module, which after a GAP operation gives a 1×32 global network feature (i.e., the third-scale depth feature to be fused); the 1×128-dimensional first fused feature; the 1×128-dimensional second fused feature; and the 1×256-dimensional third fused feature. These four features are concatenated along the first dimension to yield an overall fused feature of size 1×544.
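Continuing the sketch with the local_fusion helper above, the four vectors are concatenated into the 1×544 overall fused feature; the random tensors here stand in for real features and binary maps:

```python
import torch

B = 4                                            # batch size for illustration
f1 = torch.randn(B, 128, 64, 64)                 # layer-5 depth global feature
f2 = torch.randn(B, 256, 32, 32)                 # layer-9 depth global feature
f3 = torch.randn(B, 32, 16, 16)                  # layer-15 depth feature
B_single_64 = torch.randint(0, 2, (B, 25, 64, 64)).float()
B_all_64 = torch.randint(0, 2, (B, 1, 64, 64)).float()
B_all_32 = torch.randint(0, 2, (B, 1, 32, 32)).float()

v1 = local_fusion(f1, B_single_64)               # (B, 128) first fused feature
v2 = local_fusion(f1, B_all_64, compress=False)  # (B, 128) second fused feature
v3 = local_fusion(f2, B_all_32, compress=False)  # (B, 256) third fused feature
v4 = f3.mean(dim=(-2, -1))                       # (B, 32)  GAP of the final feature
overall = torch.cat([v1, v2, v3, v4], dim=1)     # (B, 544) overall fused feature
```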
Finally, a fully connected network is constructed to classify the final fused feature and obtain the target classification result.
The fully connected network FC comprises two fully connected layers, one activation layer, one Dropout layer and one classifier layer; its structure is, in order: a first fully connected layer L_F1, a second activation layer L_F2, a third Dropout layer L_d3, a fourth fully connected layer L_F4 and a fifth classification layer L_F5. The input of the network is the fused 544-dimensional feature vector, and the output is a 3-dimensional class prediction vector.
The parameters of each layer are set as follows: the dimensions of the two fully connected layers are 544×512 and 512×3, the activation layer uses the ReLU activation function, the drop probability of the Dropout layer is 0.5, and the classifier layer uses a softmax classifier.
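These settings translate directly into a classification head; a minimal sketch:

```python
import torch.nn as nn

classifier = nn.Sequential(
    nn.Linear(544, 512),   # L_F1: first fully connected layer, 544 x 512
    nn.ReLU(),             # L_F2: activation layer
    nn.Dropout(p=0.5),     # L_d3: Dropout layer, drop probability 0.5
    nn.Linear(512, 3),     # L_F4: second fully connected layer, 512 x 3
    nn.Softmax(dim=1),     # L_F5: softmax classifier, 3-dimensional prediction
)
```

In practice the final Softmax is usually dropped at training time and folded into the cross-entropy loss, with the head emitting raw logits.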
The above parts are combined together in the order shown in fig. 4 to obtain the deep neural network ψ.
It can be appreciated that after the deep neural network is built, the network needs to be trained, and then the trained network is used for target recognition.
In this embodiment, the deep neural network is trained in the following manner:
Extracting attribute scattering centers from the actual measured SAR complex images with the labels, performing global and local reconstruction, and obtaining global and local reconstruction images with different scales in a downsampling mode;
Inputting the labeled SAR complex images and the binary images corresponding to the global and local reconstruction images with different scales into a constructed deep neural network for forward propagation as shown in fig. 4;
The classification loss is computed, and the network parameters are updated by back-propagation to obtain the trained network. The classification loss uses the cross-entropy loss function:

$$L=-\frac{1}{n}\sum_{i=1}^{n}y_i\log\hat{y}_i$$

wherein n denotes the number of training samples, y_i denotes the one-hot encoded class label of the ith input image, and ŷ_i denotes the corresponding predicted class label.
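One training epoch of this procedure might be sketched as follows, assuming the FeatureExtractor, local_fusion helper and classification head sketched above (with the softmax folded into the loss, so the head outputs logits) and a data loader yielding the amplitude image, its three binary maps and integer class labels; all names here are illustrative:

```python
import torch
import torch.nn as nn

def train_epoch(extractor, head, loader, optimizer):
    """Forward-propagate, compute the cross-entropy loss, back-propagate, update."""
    ce = nn.CrossEntropyLoss()                   # expects logits and integer labels
    extractor.train(); head.train()
    for amp, b_single64, b_all64, b_all32, labels in loader:
        f1, f2, f3 = extractor(amp)
        v1 = local_fusion(f1, b_single64)
        v2 = local_fusion(f1, b_all64, compress=False)
        v3 = local_fusion(f2, b_all32, compress=False)
        v4 = f3.mean(dim=(-2, -1))
        logits = head(torch.cat([v1, v2, v3, v4], dim=1))
        loss = ce(logits, labels)                # cross-entropy classification loss
        optimizer.zero_grad()
        loss.backward()                          # update parameters by back-propagation
        optimizer.step()
```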
According to the method, the physical properties of the targets reflected by the ASC model are fully utilized, the accurately estimated ASC parameter set is used for carrying out multi-type reconstruction on the observed targets in the image domain, the ASC parameter set is fused with depth features of different scales, more information is provided for a network, and therefore target identification performance is improved.
Step 4, inputting the original SAR complex image of the test data and the binarized images corresponding to the reconstruction maps into the trained deep neural network for processing, and outputting the target recognition result.
The invention uses parameter transfer in the deep network training process: the parameters of the first 13 layers of VGG16Net, fully trained on the ImageNet dataset, are used as the initial parameters of the deep network, so the network achieves good recognition performance with fewer training rounds, which also improves training efficiency.
Example 2
The method provided by the invention is simulated by taking a specific scene as an example, so as to verify the effectiveness of the attribute scattering center extraction algorithm in the invention.
Specifically, in this experiment, the T72 original SAR image shown in fig. 5 is taken as an extraction object, and the method of the present invention is used to extract the attribute scattering center and reconstruct the image, and the results are shown in fig. 6 and 7.
Fig. 6 shows the results of attribute scattering center extraction and image reconstruction on the T72 original SAR image using the algorithm of the present invention, wherein (a) is the 128×128 global reconstruction obtained from the 25 extracted scattering center points, (b) is the 64×64 global reconstruction obtained by downsampling, (c) is a 64×64 local reconstruction obtained by downsampling, and (d) is the 32×32 global reconstruction obtained by downsampling.
Fig. 7 shows the binarized images corresponding to the reconstruction results in fig. 6, wherein (a) is the 64×64 global-reconstruction binarized image obtained by the downsampling operation, (b) is a 64×64 local-reconstruction binarized image obtained by the downsampling operation, and (c) is the 32×32 global-reconstruction binarized image obtained by the downsampling operation.
To further verify the SAR target recognition method based on fusion of ASC features and multi-scale depth features, this embodiment also evaluates it on the public moving and stationary target MSTAR dataset.
The MSTAR dataset used in the experiment consists of complex images with a resolution of 0.3 m × 0.3 m, each of size 128×128 pixels. This experiment uses a three-class MSTAR recognition scene, the three target categories being T72, BMP2 and BTR70. BMP2 and T72 each contain three different serial numbers; the training data of each class contains only one serial number of each type, while the test data contains all serial numbers of each type. The experimental data are detailed in Table 1.
TABLE 1. MSTAR 3-class target recognition scenario
Table 2 below shows the recognition results of the method of the present invention on the MSTAR three-class recognition data of Table 1, compared with two existing recognition methods: ACNNC, a SAR ATR method combining attribute scattering centers and a convolutional neural network (from the article A Convolutional Neural Network Combined with Attributed Scattering Centers for SAR ATR, Zhou Y., 2021), and CA-MCNN, a multiscale SAR ATR convolutional neural network based on component analysis (from the article Multiscale CNN Based on Component Analysis for SAR ATR, IEEE Transactions on Geoscience and Remote Sensing, Li Y., 2021).
TABLE 2. Detailed recognition results of different recognition methods on the MSTAR 3-class target data

Method                        Recognition accuracy
The method of the invention   0.9890
ACNNC                         0.9795
CA-MCNN                       0.9861
Because SAR imagery suffers from insufficient training data, the small-sample problem is particularly prominent in SAR image recognition. To further verify the effectiveness of the invention, small-sample experiments were carried out on the three MSTAR target categories of Table 1: small-sample conditions were simulated by randomly selecting a given proportion of training samples, and the average of 10 runs was taken as the recognition result. The method was compared with existing small-sample recognition methods: ARGN, a SAR target recognition method for limited training data based on an angular rotation generative network (from the article SAR Target Recognition With Limited Training Data Based on Angular Rotation Generative Network, IEEE Geoscience and Remote Sensing Letters, Sun Y., 2019); M-PMC, a SAR ATR method based on an improved polar mapping classifier (from the article Modified Polar Mapping Classifier for SAR Automatic Target Recognition, IEEE Transactions on Aerospace and Electronic Systems, Park J., 2014); DA-CNN, a SAR target recognition method based on a data-augmented convolutional neural network (from the article Convolutional Neural Network with Data Augmentation for SAR Target Recognition, IEEE Geoscience and Remote Sensing Letters, Ding J., 2016); and A-ConvNet, a SAR image classification method based on a deep convolutional network (from the article Target Classification Using the Deep Convolutional Networks for SAR Images, IEEE Transactions on Geoscience and Remote Sensing, Chen S., 2016). The recognition results of these methods and the proposed method in the small-sample setting are shown in Table 3.
TABLE 3. Comparison of the recognition performance of the proposed method with some existing methods in the small-sample setting
All results in Table 3 were obtained by randomly selecting training samples at the corresponding proportions, where the sample ratio denotes the ratio of the number of randomly selected samples to the number of all training samples. For each sample ratio, 10 experiments were run and their average was taken as the final recognition result. It can be seen that when fewer than half of the training samples are used, the recognition accuracy of the proposed method is superior to the other comparison methods, which effectively demonstrates the effectiveness of the model when training data are insufficient; when the sample ratio is 0.1, i.e., only 22 training samples per class, the average recognition accuracy over the 1365 test samples reaches 89.46%.
These experimental results show that, in the recognition experiments on the three MSTAR target classes, the method achieves better recognition results both under the full-training-sample condition and in small-sample settings using less than 50% of the samples. This verifies the validity of the method: it makes better use of global and local information to obtain an effective and stable target feature representation, and thus has a certain effectiveness and feasibility.
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and it is not intended that the invention be limited to the specific embodiments described. It will be apparent to those skilled in the art that several simple deductions or substitutions may be made without departing from the spirit of the invention, and these should be considered to be within the scope of the invention.

Claims (9)

1.一种基于ASC特征与多尺度深度特征融合的SAR目标识别方法,其特征在于,包括:1. A SAR target recognition method based on the fusion of ASC features and multi-scale depth features, characterized by comprising: 步骤1:获取观测目标的原始SAR复数图像,并基于改进的图像域稀疏表示的ASC参数估计算法提取每一幅SAR复数图像对应的属性散射中心;Step 1: Obtain the original SAR complex image of the observed target and extract the attribute scattering center corresponding to each SAR complex image based on the improved image domain sparse representation ASC parameter estimation algorithm; 11)对原始SAR复数图像数据中的目标后向散射回波数据和第q个属性散射中心的回波数据施加同样的线性成像算子,并估计其属性散射中心参数;其中,分别表示离散的f表示雷达的工作频率,表示合成孔径范围,θq表示第q个散射中心的参数集合;11) Target backscatter echo data in the original SAR complex image data and the echo data of the qth attribute scattering center Apply the same linear imaging operator and estimate its attribute scattering center parameters; where, Respectively represent discrete f represents the operating frequency of the radar, represents the synthetic aperture range, θq represents the parameter set of the qth scattering center; 12)使用NOMP算法对S(x,y)进行属性散射中心参数求解,并在求解过程中增加参数精细矫正过程,以获得若干属性散射中心;其中,S(x,y)表示对应的图像域表示,x,y分别表示转换到图像域以后的像素点坐标;12) Use NOMP algorithm to solve the attribute scattering center parameters of S(x,y), and add parameter fine correction process in the solution process to obtain several attribute scattering centers; where S(x,y) represents The corresponding image domain representation, x, y respectively represent the pixel coordinates after conversion to the image domain; a)建立初始字典Φ;同时,初始化残差图像R(x,y),并令R(x,y)=S(x,y);a) Establish the initial dictionary Φ; at the same time, initialize the residual image R(x,y) and set R(x,y) = S(x,y); b)对所述初始字典Φ进行原子选择,从中选出与当前残差图像R(x,y)相似度最大的原子θi_chose作为当前第i个ASC参数的粗估计结果;b) performing atom selection on the initial dictionary Φ, and selecting the atom θ i_chose having the greatest similarity with the current residual image R(x, y) as the rough estimation result of the current i-th ASC parameter; c)判断当前所选原子θi_chose的位置参数与上一次选择原子位置参数是否相同,相同时,放弃当前所选的原子θi_chose,并重新挑选去除该θi_chose后的相似度最大的原子作为本轮的粗估计结果;否则,执行步骤d);c) determining whether the position parameters of the currently selected atom θ i_chose are the same as those of the last selected atom. 
If they are the same, discard the currently selected atom θ i_chose and reselect the atom with the greatest similarity after removing θ i_chose as the rough estimation result of this round; otherwise, proceed to step d); d)将所述粗估计结果作为初始点,对所述第i个ASC参数进行精估计,得到精估计结果并将其加入挑选原子集合ΦGen中;d) using the rough estimation result as an initial point, performing a fine estimation on the i-th ASC parameter, obtaining a fine estimation result and adding it to the selected atom set Φ Gen ; e)利用所述挑选原子集合ΦGen中的所有原子来近似表示输入图像S(x,y),并利用最小二乘法求解稀疏系数;e) using all atoms in the selected atom set Φ Gen to approximate the input image S (x, y), and solving the sparse coefficients using the least squares method; f)根据所述稀疏系数更新残差图像;f) updating the residual image according to the sparse coefficients; g)重复步骤b)到步骤f)的操作,直到当前残差图像R(x,y)无法再提取出有效的属性散射中心,退出循环;g) Repeat steps b) to f) until no valid attribute scattering centers can be extracted from the current residual image R(x, y), and then exit the loop; 步骤2:对所述属性散射中心进行图像全局与局部重构,并通过下采样的方式获得不同尺度的全局和局部重构图;Step 2: reconstruct the image globally and locally for the attribute scattering center, and obtain global and local reconstruction images of different scales by downsampling; 步骤3:构建包括特征提取模块以及特征融合模块的深度神经网络;其中,Step 3: Construct a deep neural network including a feature extraction module and a feature fusion module; 所述特征提取模块用于对所述原始SAR复数图像对应的幅度图像进行多尺度特征提取,得到多尺度深度特征图;The feature extraction module is used to perform multi-scale feature extraction on the amplitude image corresponding to the original SAR complex image to obtain a multi-scale depth feature map; 所述特征融合模块用于将提取的多尺度深度特征图与所述重构图对应的二值化图像进行不同层面的特征融合;The feature fusion module is used to perform feature fusion at different levels on the extracted multi-scale depth feature map and the binarized image corresponding to the reconstructed image; 步骤4:将所述原始SAR复数图像和所述重构图对应的二值化图像一起输入到训练好的深度神经网络中进行处理,并输出目标识别结果。Step 4: Input the original SAR complex image and the binarized image corresponding to the reconstructed image into the trained deep neural network for processing, and output the target recognition result. 2.根据权利要求1所述的基于ASC特征与多尺度深度特征融合的SAR目标识别方法,其特征在于,在步骤11)中,所述线性成像算子表示为:2. The SAR target recognition method based on the fusion of ASC features and multi-scale depth features according to claim 1, characterized in that in step 11), the linear imaging operator is expressed as: 所述属性散射中心参数表示为:The attribute scattering center parameter is expressed as: 其中,K、H分别表示频率向与方位向离散点的个数,f0表示中心频率,c表示光速,Q表示属性散射中心总个数,σq表示稀疏系数,ε表示误差系数,且有ε>0,D(x,y;θq)表示对应的图像域表示。Where K and H represent the number of discrete points in frequency and azimuth, respectively; f0 represents the center frequency; c represents the speed of light; Q represents the total number of attribute scattering centers; σq represents the sparsity coefficient; ε represents the error coefficient, and ε>0;D(x,y; θq ) represents The corresponding image domain representation. 3.根据权利要求1所述的基于ASC特征与多尺度深度特征融合的SAR目标识别方法,其特征在于,在步骤1中,对每一幅SAR复数图像,提取25个属性散射中心。3. The SAR target recognition method based on the fusion of ASC features and multi-scale depth features according to claim 1 is characterized in that, in step 1, 25 attribute scattering centers are extracted for each SAR complex image. 4.根据权利要求1所述的基于ASC特征与多尺度深度特征融合的SAR目标识别方法,其特征在于,步骤2包括:4. 
The SAR target recognition method based on the fusion of ASC features and multi-scale depth features according to claim 1, wherein step 2 comprises: 21)将步骤1得到的观测目标的属性散射中心参数带入属性散射中心模型的定义表达式中,获得目标整体的后向散射回波数据与单独属性散射中心的后向散射回波数据;21) Substituting the attribute scattering center parameters of the observed target obtained in step 1 into the definition expression of the attribute scattering center model to obtain the backscattered echo data of the entire target and the backscattered echo data of the individual attribute scattering centers; 22)对所述目标整体的后向散射回波数据和所述单独属性散射中心的后向散射回波数据分别施加相同的线性成像算子,对应得到全局重构图以及若干张单独散射点的局部重构图;22) applying the same linear imaging operator to the backscatter echo data of the entire target and the backscatter echo data of the individual attribute scattering centers, respectively, to obtain a global reconstruction image and a plurality of local reconstruction images of the individual scattering points; 其中,所述全局重构图大小、局部重构图大小与所述原始SAR复数图像大小一致,均为128×128;The size of the global reconstructed image and the size of the local reconstructed image are consistent with the size of the original SAR complex image, both of which are 128×128; 23)对所述全局重构图进行下采样,分别得到大小为64×64的全局重构图Srecon_all_64和大小为32×32的全局重构图Srecon_all_32,对局部重构图进行下采样,得到大小为64×64的局部重构图Srecon_single_6423) Down-sampling the global reconstructed image to obtain a global reconstructed image S recon_all_64 of size 64×64 and a global reconstructed image S recon_all_32 of size 32×32, and down-sampling the local reconstructed image to obtain a local reconstructed image S recon_single_64 of size 64×64. 5.根据权利要求4所述的基于ASC特征与多尺度深度特征融合的SAR目标识别方法,其特征在于,在步骤3中,构建的特征提取模块包括12层卷积层和3层最大池化层,其结构依次为:第一卷积层LC1,第二卷积层LC2,第三最大池化层Lp3,第四卷积层LC4,第五卷积层LC5,第六最大池化层Lp6,第七卷积层LC7,第八卷积层LC8,第九卷积层LC9,第十最大池化层Lp10,第十一卷积层LC11,第十二卷积层LC12,第十三卷积层LC13,第十四卷积层LC14,第十五卷积层LC15;其中,前13层网络结构与VGG16Net的前13层网络结构相同;5. The SAR target recognition method based on the fusion of ASC features and multi-scale deep features according to claim 4, characterized in that in step 3, the constructed feature extraction module includes 12 convolutional layers and 3 maximum pooling layers, and its structure is as follows: first convolutional layer L C1 , second convolutional layer L C2 , third maximum pooling layer L p3 , fourth convolutional layer L C4 , fifth convolutional layer L C5 , sixth maximum pooling layer L p6 , seventh convolutional layer L C7 , eighth convolutional layer L C8 , ninth convolutional layer L C9 , tenth maximum pooling layer L p10 , eleventh convolutional layer L C11 , twelfth convolutional layer L C12 , thirteenth convolutional layer L C13 , fourteenth convolutional layer L C14 , and fifteenth convolutional layer L C15 ; wherein the network structure of the first 13 layers is the same as the first 13 layers of VGG16Net; 其中,第五卷积层LC5输出的深度全局特征作为特征提取模块提取的第一尺度待融合深度特征;Among them, the deep global features output by the fifth convolutional layer LC5 are used as the first-scale deep features to be fused extracted by the feature extraction module; 第九卷积层LC9输出的深度全局特征作为特征提取模块提取的第二尺度待融合深度特征;The deep global features output by the ninth convolutional layer LC9 are used as the second-scale deep features to be fused by the feature extraction module; 第十五卷积层LC15输出的深度全局特征作为特征提取模块提取的第三尺度待融合深度特征。The deep global features output by the fifteenth convolutional layer LC15 are used as the third-scale deep features to be fused by the feature extraction module. 6.根据权利要求5所述的基于ASC特征与多尺度深度特征融合的SAR目标识别方法,其特征在于,在步骤3中,构建的特征融合模块包括局部特征图层面融合单元和整体特征图层面融合单元;其中,6. 
6. The SAR target recognition method based on the fusion of ASC features and multi-scale depth features according to claim 5, wherein in step 3 the constructed feature fusion module comprises a local feature-map-level fusion unit and an overall feature-map-level fusion unit; wherein

the local feature-map-level fusion unit fuses, in several passes, the first-scale and second-scale depth features to be fused, as extracted by the feature extraction module, with the binarized images corresponding to the three different reconstruction images, yielding three fused feature maps;

the overall feature-map-level fusion unit is used to fuse the three fused feature maps with the third-scale depth features to be fused, as extracted by the feature extraction module, to obtain the final fused feature.

7. The SAR target recognition method based on the fusion of ASC features and multi-scale depth features according to claim 6, wherein the local feature-map-level fusion unit fuses, in three passes, the first-scale and second-scale depth features to be fused with the binarized images corresponding to the different reconstruction images, yielding three fused feature maps, by:

multiplying the binarized image corresponding to the local reconstruction images S_recon_single_64 of the individual scattering points with the first-scale depth features to be fused along the channel dimension, then applying a global average pooling (GAP) operation, to obtain the first fused feature map;

multiplying the binarized image corresponding to the global reconstruction image S_recon_all_64 with the first-scale depth features to be fused along the channel dimension, then applying a GAP operation, to obtain the second fused feature map;

multiplying the binarized image corresponding to the global reconstruction image S_recon_all_32 with the second-scale depth features to be fused along the channel dimension, then applying a GAP operation, to obtain the third fused feature map.
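The fusion primitive of claim 7, channel-wise multiplication by a binarized reconstruction image followed by global average pooling, might look as follows. Two assumptions are flagged: the several local reconstructions of claim 4 are merged here into one binarized mask, and the overall-level fusion of claim 6 is realized as a simple concatenation, since the claims state only that the three fused feature maps and the third-scale features are fused into the final feature.

```python
import torch
import torch.nn.functional as F

def masked_gap(feat, mask):
    """Claim 7 primitive: broadcast a binarized reconstruction image over the
    channel dimension of a depth feature map, then global-average-pool."""
    # feat: (B, C, H, W); mask: (B, 1, H, W) with entries in {0, 1}
    return (feat * mask).mean(dim=(2, 3))                 # -> (B, C)

def fuse(f1, f2, f3, m_single_64, m_all_64, m_all_32):
    """Local-level fusion (three masked-GAP vectors), then an assumed
    concatenation with the pooled third-scale features (overall level)."""
    v1 = masked_gap(f1, m_single_64)                      # first fused feature map
    v2 = masked_gap(f1, m_all_64)                         # second fused feature map
    v3 = masked_gap(f2, m_all_32)                         # third fused feature map
    v4 = F.adaptive_avg_pool2d(f3, 1).flatten(1)          # pooled third-scale features
    return torch.cat([v1, v2, v3, v4], dim=1)             # final fused feature vector
```

With the extractor sketched after claim 5, the masks m_single_64 and m_all_64 align with the 64×64 first-scale feature maps and m_all_32 with the 32×32 second-scale maps, which is why claim 4 downsamples the reconstructions to exactly these sizes.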
8. The SAR target recognition method based on the fusion of ASC features and multi-scale depth features according to claim 7, wherein step 3 further comprises:

constructing a fully connected network to classify the final fused feature and obtain the target classification result;

wherein the fully connected network comprises two fully connected layers, one activation layer, one Dropout layer, and one classifier layer, arranged in order as: first fully connected layer L_F1, second activation layer L_F2, third Dropout layer L_d3, fourth fully connected layer L_F4, and fifth classification layer L_F5;

the input of the fully connected network is the feature vector corresponding to the final fused feature output by the feature fusion module, and its output is a 3-dimensional class prediction vector.

9. The SAR target recognition method based on the fusion of ASC features and multi-scale depth features according to claim 8, wherein the deep neural network is trained as follows:

extracting attribute scattering centers from labeled measured SAR complex images and reconstructing them, to obtain global and local reconstruction images at different scales;

inputting the labeled SAR complex images and the binarized images corresponding to the global and local reconstruction images at different scales into the constructed deep neural network for forward propagation; meanwhile, using parameters trained on the ImageNet dataset as the initial parameters of the first 13 layers of the feature extraction module;

computing the classification loss and updating the network parameters by backpropagation, to obtain the trained network.
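Finally, a hedged sketch of the classifier head of claim 8 and one training iteration of claim 9. The hidden width, the choice of ReLU, the dropout rate, and the LogSoftmax/NLL-loss pairing are assumptions; the claims fix only the layer order (FC, activation, Dropout, FC, classifier), the 3-dimensional output, and training by classification loss with backpropagation.

```python
import torch.nn as nn
import torch.nn.functional as F

def make_head(in_dim, hidden=256, num_classes=3, p_drop=0.5):
    """Claim 8 head; hidden width, activation, and dropout rate are assumed."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden),        # L_F1: first fully connected layer
        nn.ReLU(inplace=True),            # L_F2: activation layer (assumed ReLU)
        nn.Dropout(p_drop),               # L_d3: Dropout layer
        nn.Linear(hidden, num_classes),   # L_F4: second fully connected layer
        nn.LogSoftmax(dim=1),             # L_F5: classifier layer, 3-way output
    )

def train_step(extractor, head, optimizer, batch):
    """One iteration of claim 9: forward pass, classification loss, backprop.
    `batch` is assumed to carry the amplitude image, the three binarized
    reconstruction masks, and integer labels; `fuse` is the sketch above."""
    x, m_single_64, m_all_64, m_all_32, labels = batch
    f1, f2, f3 = extractor(x)                             # multi-scale features
    log_probs = head(fuse(f1, f2, f3, m_single_64, m_all_64, m_all_32))
    loss = F.nll_loss(log_probs, labels)                  # classification loss
    optimizer.zero_grad()
    loss.backward()                                       # backpropagation
    optimizer.step()
    return loss.item()
```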
CN202310552287.8A 2023-05-16 2023-05-16 SAR target recognition method based on fusion of ASC features and multi-scale depth features Active CN116740561B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310552287.8A 2023-05-16 2023-05-16 SAR target recognition method based on fusion of ASC features and multi-scale depth features


Publications (2)

Publication Number Publication Date
CN116740561A CN116740561A (en) 2023-09-12
CN116740561B 2025-08-12

Family

ID=87900180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310552287.8A SAR target recognition method based on fusion of ASC features and multi-scale depth features 2023-05-16 2023-05-16

Country Status (1)

Country Link
CN (1) CN116740561B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117746014B (en) * 2023-12-07 2025-03-04 北京交通大学 SAR image target recognition method and system integrating electromagnetic scattering characteristics
CN119723195B (en) * 2024-12-13 2025-11-25 西安电子科技大学 A video SAR joint moving target detection method
CN119762803A (en) * 2025-03-06 2025-04-04 西安电子科技大学 A method for extracting scattering features from SAR images based on self-supervised learning

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5164730A (en) * 1991-10-28 1992-11-17 Hughes Aircraft Company Method and apparatus for determining a cross-range scale factor in inverse synthetic aperture radar systems
WO2022074643A1 (en) * 2020-10-08 2022-04-14 Edgy Bees Ltd. Improving geo-registration using machine-learning based object identification

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10706857B1 (en) * 2020-04-20 2020-07-07 Kaizen Secure Voiz, Inc. Raw speech speaker-recognition
CN112131962B (en) * 2020-08-28 2023-08-15 西安电子科技大学 SAR Image Recognition Method Based on Electromagnetic Scattering Feature and Deep Network Feature
CN113240047B (en) * 2021-06-02 2022-12-02 西安电子科技大学 SAR target recognition method based on component analysis multi-scale convolutional neural network
CN114565856B (en) * 2022-02-25 2024-09-27 西安电子科技大学 Target identification method based on multiple fusion depth neural network


Also Published As

Publication number Publication date
CN116740561A (en) 2023-09-12

Similar Documents

Publication Publication Date Title
CN113095417B (en) SAR Target Recognition Method Based on Fusion Graph Convolution and Convolutional Neural Network
CN112488210B (en) A method for automatic classification of 3D point clouds based on graph convolutional neural networks
Feng et al. Electromagnetic scattering feature (ESF) module embedded network based on ASC model for robust and interpretable SAR ATR
CN116740561B (en) SAR target recognition method based on fusion of ASC features and multi-scale depth features
Cui et al. Image data augmentation for SAR sensor via generative adversarial nets
Wang et al. Fusing bird’s eye view lidar point cloud and front view camera image for 3d object detection
Zhang et al. A GANs-based deep learning framework for automatic subsurface object recognition from ground penetrating radar data
CN108416378B (en) A large-scene SAR target recognition method based on deep neural network
CN106355151B (en) A kind of three-dimensional S AR images steganalysis method based on depth confidence network
CN108764006B (en) A SAR image target detection method based on deep reinforcement learning
CN113902969B (en) Zero-sample SAR target recognition method fusing CNN and image similarity
CN112966667B (en) One-dimensional range image noise reduction convolutional neural network recognition method for sea surface targets
CN107341488A (en) A kind of SAR image target detection identifies integral method
US9330336B2 (en) Systems, methods, and media for on-line boosting of a classifier
CN113240047B (en) SAR target recognition method based on component analysis multi-scale convolutional neural network
CN109035172B (en) A deep learning-based non-local mean ultrasound image denoising method
CN112215296B (en) Infrared image recognition method based on transfer learning and storage medium
CN112949380B (en) Intelligent underwater target identification system based on laser radar point cloud data
CN116597300B (en) Unsupervised domain self-adaptive SAR target recognition method integrating and aligning visual features and scattering topological features
CN116895016B (en) A method for generating and classifying ship targets in SAR images
CN118658028B (en) Intrinsic characteristic self-adaptive visible light infrared fusion detection and identification method and system
CN108257151A (en) PCANet image change detection methods based on significance analysis
CN116091889A (en) Space target identification method based on complex domain multi-scale visual transducer
CN116486183A (en) Classification method of built-up areas in SAR images based on fusion features of multiple attention weights
CN117746014B (en) SAR image target recognition method and system integrating electromagnetic scattering characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant