
CN116012364B - SAR image change detection method and device - Google Patents

SAR image change detection method and device

Info

Publication number
CN116012364B
Authority
CN
China
Prior art keywords
target
sar image
sample
feature
map
Prior art date
Legal status
Active
Application number
CN202310101309.9A
Other languages
Chinese (zh)
Other versions
CN116012364A (en)
Inventor
赵江洪
李泽宇
Current Assignee
Beijing University of Civil Engineering and Architecture
Original Assignee
Beijing University of Civil Engineering and Architecture
Priority date
Filing date
Publication date
Application filed by Beijing University of Civil Engineering and Architecture
Priority to CN202310101309.9A
Publication of CN116012364A
Application granted
Publication of CN116012364B


Landscapes

  • Radar Systems Or Details Thereof (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the invention disclose a SAR image change detection method and device. The method includes: acquiring target SAR images of two phases, and performing feature extraction on the images through a feature extraction network. The feature extraction network includes a first feature extraction module, which extracts features from each image to obtain a first feature map, and a second feature extraction module comprising a channel attention module and a spatial attention module. The channel attention module processes the first feature map to determine channel attention weights and weights the first feature map with them to obtain a second feature map; the spatial attention module processes the second feature map to determine spatial attention weights and weights the second feature map with them to obtain a target feature map. Difference analysis is then performed on the two target feature maps, and the target image region in which the later phase has changed relative to the earlier phase is determined from the target difference map. This method improves the change detection accuracy for SAR images.

Description

SAR image change detection method and device

Technical Field

The embodiments of the present invention relate to the field of computer technology, and in particular to a SAR image change detection method and device.

Background Art

Remote sensing image change detection extracts information about surface changes by computing and analyzing the differences between images of the same geographic area acquired at different times. It is widely used in fields such as agricultural surveys, urban expansion monitoring, disaster prevention and early warning, and damage assessment. Compared with common optical remote sensing images, a synthetic aperture radar (SAR) system uses active microwave remote sensing with pulse compression and the synthetic aperture principle to improve the range and azimuth resolution of imaging. It can acquire image data with a short revisit period and wide coverage around the clock and in all weather conditions, overcoming the difficulty of sample preparation and information extraction caused by shadow interference in optical images, and it has strong recognition capability for water bodies and metal objects. The rich and stable information in SAR images of ever-increasing resolution provides a reliable data basis for research on target recognition and classification, and holds great research value and development potential for change detection.

In recent years, SAR image change detection methods have been studied intensively at home and abroad. Based on structures such as wavelet convolutional neural networks, deep cascade networks, pyramid pooling convolutional neural networks, and multi-scale capsule networks, a series of deep learning change detection algorithms have been proposed. These algorithms can extract complex features at different levels from large amounts of data and greatly improve the speed and accuracy of image processing. However, when handling change detection tasks, the above methods are easily disturbed by various factors in SAR images, which degrades the quality of the extracted information.

The attention mechanism can apply weighting to feature maps to highlight the change features of important classes of ground objects, suppress uninteresting targets and complex background factors, enhance robustness to noise, and improve information extraction. At present, a variety of attention-based change detection methods have been proposed for optical images. For example, the deeply supervised image fusion network IFN fuses multi-level deep features of the original images with image difference features to improve the boundary completeness and semantic consistency of the change feature map; the spatiotemporal attention neural network STANet is composed of pyramid spatiotemporal attention modules and uses a self-attention mechanism to compute weights between pixels at different times and positions to generate discriminative features, improving change detection performance by exploiting the relationships between pixels across space and time.

However, attention-based deep learning change detection for SAR images still has certain limitations in practical applications. First, existing change detection models require the analyzed images to have high resolution and a stable, continuous revisit cycle in order to provide accurate change detection results, yet the spatial and temporal resolution of most current SAR imagery cannot satisfy both requirements at the same time. A deep learning change detection model suited to SAR images and capable of accurate detection is therefore needed. Second, because optical and SAR images differ greatly in imaging principles, image characteristics, and semantic content, directly applying attention-based deep learning change detection models designed for optical images to SAR images causes many problems. As a result, most current research on deep learning change detection for SAR images remains at the theoretical level, or can only achieve simple recognition and analysis of changes over a very small range, with limited prediction performance, robustness, and generalization.

Summary of the Invention

An object of the embodiments of the present invention is to solve at least the above problems and/or disadvantages and to provide at least the advantages described below.

The embodiments of the present invention provide a SAR image change detection method and device that can extract fine change information of small-scale targets in SAR images of different phases and improve the change detection accuracy for SAR images.

In a first aspect, a SAR image change detection method is provided, comprising:

acquiring target SAR images of two phases;

performing feature extraction on the target SAR images of the two phases through a feature extraction network to obtain target feature maps of the two phases;

wherein the feature extraction network includes:

a first feature extraction module, used to extract features from the target SAR image of each phase to obtain a first feature map of each phase; and

a second feature extraction module, including a channel attention module and a spatial attention module; wherein the channel attention module is used to process the first feature map of each phase based on an attention mechanism, determine the channel attention weights of the first feature map of each phase, and weight the first feature map of each phase with the channel attention weights to obtain a second feature map of each phase; and the spatial attention module is used to process the second feature map of each phase based on an attention mechanism, determine the spatial attention weights of the second feature map of each phase, and weight the second feature map of each phase with the spatial attention weights to obtain a target feature map of each phase;

performing difference analysis on the target feature maps of the two phases to generate a target difference map; and

determining, according to the target difference map, the target image region in which the target SAR image of the later phase has changed relative to the target SAR image of the earlier phase.

Optionally, the first feature extraction module comprises a first convolutional layer, a plurality of residual modules, and a second convolutional layer connected in sequence;

the first convolutional layer is used to perform dimensionality reduction on the target SAR image of each phase;

each residual module is used to extract features from its own input features, the input features of the first residual module being the output features of the first convolutional layer; and

the second convolutional layer is used to raise the dimensionality of the output features of the last residual module to obtain a first feature map with the same dimensions as the target SAR image of each phase.
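The reduce–extract–restore wiring described above can be sketched in NumPy. This is a minimal illustration under stated assumptions, not the patented implementation: 1×1 convolutions (plain channel projections) stand in for the actual convolutional layers, and each residual module is reduced to a single projection plus skip connection.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv1x1(x, w):
    # x: (C_in, H, W), w: (C_out, C_in) -> (C_out, H, W)
    return np.einsum('oc,chw->ohw', w, x)

def first_feature_module(x, w_down, res_weights, w_up):
    # First convolutional layer: reduce the channel dimension.
    h = relu(conv1x1(x, w_down))
    # Residual modules: each adds its transform to its own input,
    # the first one consuming the first conv layer's output.
    for w in res_weights:
        h = relu(h + conv1x1(h, w))
    # Second convolutional layer: restore the original dimensionality.
    return conv1x1(h, w_up)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))         # one phase's image, C=8
w_down = rng.standard_normal((4, 8)) * 0.1   # 8 -> 4 channels
res_ws = [rng.standard_normal((4, 4)) * 0.1 for _ in range(3)]
w_up = rng.standard_normal((8, 4)) * 0.1     # 4 -> 8 channels
f1 = first_feature_module(x, w_down, res_ws, w_up)
```

The second convolutional layer maps back to the input channel count, so the first feature map `f1` has the same dimensions as the input, as the passage requires.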

Optionally, the channel attention module comprises a first maximum pooling layer, a first average pooling layer, a third convolutional layer, a first fusion layer, a first sigmoid activation function, and a first output layer, wherein the first maximum pooling layer and the first average pooling layer are connected in parallel to the input of the third convolutional layer, and the third convolutional layer, the first fusion layer, the first sigmoid activation function, and the first output layer are connected in sequence;

the first maximum pooling layer and the first average pooling layer are used to perform maximum pooling and average pooling, respectively, over the spatial dimensions of the first feature map of each phase;

the third convolutional layer is used to convolve the output features of the first maximum pooling layer and of the first average pooling layer separately to obtain two channel attention weight matrices;

the first fusion layer is used to add the two channel attention weight matrices to obtain a total channel attention weight matrix;

the first sigmoid activation function is used to activate the total channel attention weight matrix to obtain the channel attention weights; and

the first output layer is used to weight the first feature map of each phase with the channel attention weights to obtain the second feature map of each phase.
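The layer sequence above can be traced in a minimal NumPy sketch. It is an illustration only: a shared C×C projection stands in for the third convolutional layer, and the pooled vectors play the role of the spatially pooled feature maps.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(f1, w_shared):
    # f1: (C, H, W). First max/average pooling layers: pool over the
    # spatial dimensions, producing one value per channel.
    mx = f1.max(axis=(1, 2))                 # (C,)
    av = f1.mean(axis=(1, 2))                # (C,)
    # Third convolutional layer (shared): applied to each pooled vector,
    # yielding the two channel attention weight matrices.
    m_max = w_shared @ mx
    m_avg = w_shared @ av
    # First fusion layer: add them; first sigmoid: activate.
    weights = sigmoid(m_max + m_avg)         # (C,), each in (0, 1)
    # First output layer: weight every channel of the first feature map.
    return f1 * weights[:, None, None], weights

rng = np.random.default_rng(1)
f1 = rng.standard_normal((8, 16, 16))
w_shared = rng.standard_normal((8, 8)) * 0.1
f2, cw = channel_attention(f1, w_shared)
```

Because the weights pass through a sigmoid, each channel is scaled by a factor in (0, 1), so informative channels are preserved and uninformative ones are attenuated.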

Optionally, the spatial attention module comprises a second maximum pooling layer, a second average pooling layer, a second fusion layer, a fourth convolutional layer, a second sigmoid activation function, and a second output layer, wherein the second maximum pooling layer and the second average pooling layer are connected in parallel to the input of the second fusion layer, and the second fusion layer, the fourth convolutional layer, and the second output layer are connected in sequence;

the second maximum pooling layer and the second average pooling layer are used to perform maximum pooling and average pooling, respectively, over the channel dimension of the second feature map of each phase;

the second fusion layer is used to concatenate the output features of the second maximum pooling layer and the second average pooling layer;

the fourth convolutional layer is used to convolve the concatenated output features to obtain a spatial attention weight matrix, and the second sigmoid activation function is used to activate the spatial attention weight matrix to obtain the spatial attention weights; and

the second output layer is used to weight the second feature map of each phase with the spatial attention weights to obtain the target feature map of each phase.
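The spatial branch can be sketched the same way. Again this is an illustration, not the patented network: the fourth convolutional layer is reduced to a 1×1 mixing of the two pooled maps, with the mixing weights `w_fuse` an assumed stand-in parameter.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(f2, w_fuse):
    # f2: (C, H, W). Second max/average pooling layers: pool over the
    # channel dimension, producing one (H, W) map each.
    mx = f2.max(axis=0)
    av = f2.mean(axis=0)
    # Second fusion layer: concatenate the two maps -> (2, H, W).
    stacked = np.stack([mx, av], axis=0)
    # Fourth convolutional layer, reduced here to a 1x1 conv that mixes
    # the two pooled maps into one spatial attention weight matrix.
    attn = w_fuse[0] * stacked[0] + w_fuse[1] * stacked[1]
    # Second sigmoid: activate to get spatial attention weights.
    weights = sigmoid(attn)                  # (H, W), each in (0, 1)
    # Second output layer: weight every spatial position of f2.
    return f2 * weights[None, :, :], weights

rng = np.random.default_rng(2)
f2 = rng.standard_normal((8, 16, 16))
target, sw = spatial_attention(f2, np.array([0.5, 0.5]))
```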

Optionally, performing difference analysis on the target feature maps of the two phases to generate the target difference map comprises:

computing over the target feature maps of the two phases with a difference operator and with a logarithmic-ratio operator, respectively, to obtain a difference-operator-based difference map and a log-ratio-operator-based difference map; and

fusing the difference-operator-based difference map and the log-ratio-operator-based difference map to generate the target difference map.

Optionally, fusing the difference-operator-based difference map and the log-ratio-operator-based difference map to generate the target difference map comprises:

performing weighted fusion of the difference-operator-based difference map and the log-ratio-operator-based difference map to generate the target difference map.
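A minimal sketch of this weighted fusion follows. The patent names the two operators but not their exact formulas, so common forms are assumed here: absolute difference for the difference operator and an offset log-ratio for the logarithmic-ratio operator, with `alpha` an assumed fusion weight.

```python
import numpy as np

def target_difference_map(t1, t2, alpha=0.5):
    # t1, t2: non-negative target feature maps of the two phases.
    d_sub = np.abs(t2 - t1)                  # difference-operator map
    # Log-ratio-operator map; the +1 offset avoids log(0) on empty pixels.
    d_log = np.abs(np.log((t2 + 1.0) / (t1 + 1.0)))
    # Weighted fusion of the two difference maps.
    return alpha * d_sub + (1.0 - alpha) * d_log

t1 = np.array([[0.0, 1.0], [2.0, 3.0]])
t2 = np.array([[0.0, 1.0], [4.0, 3.0]])
diff = target_difference_map(t1, t2, alpha=0.5)
```

Pixels that are identical across the two phases yield zero in both maps and thus zero after fusion, while changed pixels light up in the target difference map.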

Optionally, the method further comprises:

acquiring a plurality of sample SAR image groups, wherein each sample SAR image group includes sample SAR images of two phases and a sample label, the sample label indicating the actual image region in which the sample SAR image of the later phase has changed relative to the sample SAR image of the earlier phase;

performing feature extraction on the sample SAR images of the two phases in each sample SAR image group through the feature extraction network to be trained, to obtain sample feature maps of the two phases;

performing difference analysis on the sample feature maps of the two phases corresponding to each sample SAR image group to generate a sample difference map corresponding to each sample SAR image group;

determining, according to the sample difference map corresponding to each sample SAR image group, the image region in which the sample SAR image of the later phase has changed relative to the sample SAR image of the earlier phase;

determining loss information according to the determined changed image region and the sample label of each sample SAR image group; and

training the feature extraction network to be trained according to the loss information.

Optionally, acquiring a plurality of sample SAR image groups comprises:

acquiring original SAR images of a plurality of phases;

performing data augmentation on the original SAR images of the plurality of phases to obtain sample SAR images of the plurality of phases; and

constructing a plurality of sample SAR image groups from the sample SAR images of the plurality of phases.
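One point worth making concrete is that augmentation for change detection must transform both phases and the label identically, or the pixel correspondence within a sample group breaks. The sketch below assumes typical geometric augmentations (rotations and flips); the patent does not specify which transforms are used.

```python
import numpy as np

def augment_pair(img_a, img_b, label):
    # Apply the SAME geometric transform to both phases and the label so
    # that pixel correspondence across the sample group is preserved.
    samples = []
    for k in range(4):                       # 0/90/180/270-degree rotations
        samples.append((np.rot90(img_a, k), np.rot90(img_b, k), np.rot90(label, k)))
    samples.append((np.fliplr(img_a), np.fliplr(img_b), np.fliplr(label)))
    samples.append((np.flipud(img_a), np.flipud(img_b), np.flipud(label)))
    return samples

rng = np.random.default_rng(3)
a = rng.standard_normal((16, 16))            # earlier-phase sample image
b = rng.standard_normal((16, 16))            # later-phase sample image
y = (rng.random((16, 16)) > 0.5).astype(float)  # change/no-change label
groups = augment_pair(a, b, y)
```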

Optionally, determining the loss information according to the changed image region and the sample label of each sample SAR image group comprises:

determining set-similarity loss information and cross-entropy loss information, respectively, according to the changed image region and the sample label of each sample SAR image group; and

determining mixed loss information according to the set-similarity loss information and the cross-entropy loss information.
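A minimal sketch of such a mixed loss follows, assuming the common Dice formulation for the set-similarity term, binary cross-entropy for the cross-entropy term, and an assumed weighting `lam` (the patent does not give the combination weights).

```python
import numpy as np

def dice_loss(pred, label, eps=1e-6):
    # Set-similarity (Dice) loss over the predicted change mask.
    inter = (pred * label).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + label.sum() + eps)

def bce_loss(pred, label, eps=1e-7):
    # Per-pixel binary cross-entropy between the predicted change
    # probability and the change/no-change sample label.
    p = np.clip(pred, eps, 1.0 - eps)
    return -(label * np.log(p) + (1.0 - label) * np.log(1.0 - p)).mean()

def mixed_loss(pred, label, lam=0.5):
    # Mixed loss: weighted combination of the two terms.
    return lam * dice_loss(pred, label) + (1.0 - lam) * bce_loss(pred, label)

label = np.array([[1.0, 0.0], [0.0, 1.0]])
good = mixed_loss(label.copy(), label)       # perfect prediction
bad = mixed_loss(1.0 - label, label)         # fully wrong prediction
```

Combining the two terms is a common choice for change masks: the Dice term counteracts the class imbalance between changed and unchanged pixels, while the cross-entropy term keeps per-pixel gradients well behaved.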

In a second aspect, a SAR image change detection device is provided, comprising:

a target image acquisition module, used to acquire target SAR images of two phases;

a target feature map extraction module, used to perform feature extraction on the target SAR images of the two phases through a feature extraction network to obtain target feature maps of the two phases;

wherein the feature extraction network includes:

a first feature extraction module, used to extract features from the target SAR image of each phase to obtain a first feature map of each phase; and

a second feature extraction module, including a channel attention module and a spatial attention module; wherein the channel attention module is used to process the first feature map of each phase based on an attention mechanism, determine the channel attention weights of the first feature map of each phase, and weight the first feature map of each phase with the channel attention weights to obtain a second feature map of each phase; and the spatial attention module is used to process the second feature map of each phase based on an attention mechanism, determine the spatial attention weights of the second feature map of each phase, and weight the second feature map of each phase with the spatial attention weights to obtain a target feature map of each phase;

a target difference map generation module, used to perform difference analysis on the target feature maps of the two phases to generate a target difference map; and

a target image region determination module, used to determine, according to the target difference map, the target image region in which the target SAR image of the later phase has changed relative to the target SAR image of the earlier phase.

In a third aspect, an electronic device is provided, comprising: at least one processor, and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor performs the described method.

In a fourth aspect, a storage medium is provided, on which a computer program is stored, wherein the program, when executed by a processor, implements the described method.

The embodiments of the present invention have at least the following beneficial effects:

The embodiments of the present invention provide a SAR image change detection method and device. In the method, target SAR images of two phases are first acquired; feature extraction is then performed on them through a feature extraction network to obtain target feature maps of the two phases. The feature extraction network includes a first feature extraction module, which extracts features from the target SAR image of each phase to obtain a first feature map of each phase, and a second feature extraction module comprising a channel attention module and a spatial attention module. The channel attention module processes the first feature map of each phase based on an attention mechanism, determines its channel attention weights, and weights the first feature map with them to obtain a second feature map of each phase; the spatial attention module processes the second feature map of each phase based on an attention mechanism, determines its spatial attention weights, and weights the second feature map with them to obtain a target feature map of each phase. Difference analysis is then performed on the target feature maps of the two phases to generate a target difference map; finally, according to the target difference map, the target image region in which the target SAR image of the later phase has changed relative to that of the earlier phase is determined. With this method and device, the first feature extraction module extracts the basic feature information of the target SAR images, and the channel attention module and spatial attention module of the second feature extraction module further extract deep, multi-dimensional, and refined features, thereby extracting fine change information of small-scale targets in SAR images of different phases and improving the change detection accuracy for SAR images.

Other advantages, objectives, and features of the embodiments of the present invention will be set forth in part in the following description, and in part will become apparent to those skilled in the art through study and practice of the embodiments of the present invention.

Brief Description of the Drawings

FIG. 1 is a flow chart of a SAR image change detection method provided by an embodiment of the present invention.

FIG. 2 is a schematic diagram of a change detection model provided by an embodiment of the present invention.

FIG. 3 is a flow chart of a method for training the feature extraction network provided by an embodiment of the present invention.

FIG. 4 is a flow chart of a SAR image change detection method provided by another embodiment of the present invention.

FIG. 5 is a flow chart of dataset construction provided by another embodiment of the present invention.

FIG. 6a is a target SAR image of the earlier phase provided by another embodiment of the present invention; FIG. 6b is a target SAR image of the later phase; FIG. 6c is a label image; FIG. 6d is a SAR image change detection result.

FIG. 7 is a schematic structural diagram of a SAR image change detection device provided by an embodiment of the present invention.

FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.

Detailed Description

The embodiments of the present invention are described in further detail below with reference to the accompanying drawings, so that those skilled in the art can implement them according to the description.

FIG. 1 is a flow chart of a SAR image change detection method provided by an embodiment of the present invention, which is executed by a system with processing capability, a server device, or a SAR image change detection apparatus. As shown in FIG. 1, the method includes steps 110 to 140.

Step 110: acquire target SAR images of two phases.

Here, the time interval between the two target SAR images may be any interval, such as 24 hours or one week. It should be understood that the two target SAR images used for change detection are SAR images of the same target area.

Step 120: perform feature extraction on the target SAR images of the two phases through a feature extraction network to obtain target feature maps of the two phases. The feature extraction network includes: a first feature extraction module, used to extract features from the target SAR image of each phase to obtain a first feature map of each phase; and a second feature extraction module, including a channel attention module and a spatial attention module, wherein the channel attention module processes the first feature map of each phase based on an attention mechanism, determines its channel attention weights, and weights the first feature map with them to obtain a second feature map of each phase, and the spatial attention module processes the second feature map of each phase based on an attention mechanism, determines its spatial attention weights, and weights the second feature map with them to obtain a target feature map of each phase.

In this step, the first feature extraction module first extracts the basic feature information of the target SAR image, namely the first feature map. Next, the channel attention module processes the first feature map based on an attention mechanism and computes channel attention weights. The channel attention weights represent the importance of each channel of the first feature map: channels containing key information receive higher weights, while channels lacking key information receive lower weights. Weighting the first feature map with the channel attention weights therefore extracts the key information along the channel dimension to generate the second feature map, while suppressing non-key information in that dimension. The spatial attention module then processes the second feature map based on an attention mechanism and computes spatial attention weights, which represent the importance of each spatial position of the second feature map: positions containing key information receive higher weights, and positions lacking key information receive lower weights. Weighting the second feature map with the spatial attention weights extracts the key information along the spatial dimensions to generate the target feature map, while suppressing non-key information in those dimensions. Based on the channel attention module and the spatial attention module, deep, multi-dimensional, and refined features of the target SAR image can be extracted, improving the feature representation capability of the second feature extraction module.

Since the target feature maps of the two phases extracted by the above feature extraction network contain the change features of the key targets in the target SAR images, performing difference analysis on the two target feature maps to generate a target difference map enables accurate change detection of key targets in SAR images, such as ships and buildings, thereby improving the change detection accuracy for SAR images.

Fig. 2 is a schematic diagram of a change detection model provided by an embodiment of the present invention. As shown in Fig. 2, the change detection model includes a feature extraction network and a difference generation and analysis module, wherein the feature extraction network includes a first feature extraction module and a second feature extraction module.

As shown in Fig. 2, in some embodiments, the first feature extraction module includes a first convolutional layer, multiple residual modules and a second convolutional layer connected in sequence. The first convolutional layer performs dimensionality reduction on the target SAR image of each phase; each residual module extracts features from its own input features, and the input features of the first residual module are the output features of the first convolutional layer; the second convolutional layer performs dimensionality expansion on the output features of the last residual module to obtain a first feature map with the same dimensions as the target SAR image of each phase.

In a traditional residual network, a global pooling layer and a fully connected layer are usually connected after the multiple residual modules, which results in a large number of parameters and reduced computational efficiency. Therefore, an embodiment of the present invention uses an improved residual network to implement the first feature extraction module. Specifically, the global pooling layer and the fully connected layer that follow the multiple residual modules in the traditional residual structure are replaced with a convolutional layer; this convolutional layer (i.e., the second convolutional layer) expands the dimensionality of the output features of the last residual module to output a first feature map with the same dimensions as the target SAR image, thereby forming an improved residual structure. In this way, the number of parameters of the first feature extraction module can be reduced while maintaining its feature extraction capability, ensuring the overall training efficiency and stability of the feature extraction network.

Specifically, in the first feature extraction module, each residual module (ResBlock) extracts features from its own input features. The input features of each residual module come from the output features of the module connected immediately before it. Each residual module further includes multiple residual structures of identical architecture, and each residual structure includes multiple convolutional layers. The first convolutional layer reduces the dimensionality of the target SAR image so that its output features meet the input dimensionality requirement of the first residual module. The second convolutional layer expands the dimensionality of the output features of the last residual module so that its output features are restored to the dimensions of the target SAR image. The embodiments of the present invention do not specifically limit the structure of the residual modules and residual structures.

In some examples, an improved ResNet-34 residual network is used to implement the first feature extraction module. Fig. 2 specifically shows the improved ResNet-34 residual network, which includes a first convolutional layer, 4 residual modules and a second convolutional layer. The residual modules ResBlock1, ResBlock2, ResBlock3 and ResBlock4 contain 3, 4, 6 and 3 residual structures, respectively (Fig. 2 shows that each residual structure in the first residual module consists of two 3×3 convolutional layers). The second convolutional layer is connected after the last residual module, ResBlock4, and its output feature is the first feature map.
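As a rough illustration, the layer plan of such a modified ResNet-34 backbone can be written down as configuration data. This is only a sketch of the structure described above; the layer names (`conv1`, `conv2`) are illustrative and not taken from the patent.

```python
# Hypothetical sketch of the modified ResNet-34 backbone described above:
# a first conv layer (dimensionality reduction), four residual modules with
# [3, 4, 6, 3] residual structures (each structure = two 3x3 conv layers),
# and a second conv layer (dimensionality expansion) replacing the usual
# global pooling + fully connected head.
backbone_plan = (
    [("conv1", "dim-reduction conv")]
    + [(f"ResBlock{i + 1}", n_structures)
       for i, n_structures in enumerate([3, 4, 6, 3])]
    + [("conv2", "dim-expansion conv, restores input dims")]
)

# Each residual structure holds two 3x3 convolutions, so the residual part
# of this backbone contains (3 + 4 + 6 + 3) * 2 = 32 convolutional layers.
n_residual_convs = sum(n for _, n in backbone_plan[1:5]) * 2
```

Replacing the pooling + fully-connected head with a single convolution is what keeps the output at the input's spatial dimensions, which the difference analysis in step 130 relies on.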

As shown in Fig. 2, in some embodiments, the channel attention module includes a first max pooling layer, a first average pooling layer, a third convolutional layer, a first fusion layer, a first sigmoid activation function and a first output layer, wherein the first max pooling layer and the first average pooling layer are connected in parallel to the input of the third convolutional layer, and the third convolutional layer, the first fusion layer, the first sigmoid activation function and the first output layer are connected in sequence. The first max pooling layer and the first average pooling layer perform max pooling and average pooling over the spatial dimensions of the first feature map of each phase, respectively; the third convolutional layer convolves the output features of the first max pooling layer and the first average pooling layer separately to obtain two channel attention weight matrices; the first fusion layer adds the two channel attention weight matrices to obtain a total channel attention weight matrix; the first sigmoid activation function activates the total channel attention weight matrix to obtain the channel attention weights; and the first output layer weights the first feature map of each phase with the channel attention weights to obtain the second feature map of each phase.

Specifically, the input feature of the channel attention module is the output feature of the first feature extraction module, i.e., the first feature map. The channel attention module processes the first feature map as follows:

First, the first max pooling layer and the first average pooling layer perform max pooling and average pooling over the spatial dimensions of the first feature map, yielding two output features. Max pooling and average pooling compress the first feature map, improving the computational efficiency of the feature extraction model; moreover, using the two compression methods together exploits different information in the first feature map, improving the feature representation capability of the channel attention module. Given a first feature map of dimensions H×W×C, where H, W and C are its height, width and number of channels, max pooling and average pooling over the spatial dimensions produce two 1×1×C output features.

In a traditional channel attention module, a fully connected layer is usually used to compute the channel attention weight matrix; however, the large number of parameters in the fully connected layer leads to unsatisfactory computational efficiency of the feature extraction model. Instead, the embodiment of the present invention uses the third convolutional layer to convolve the output features of the first max pooling layer and the first average pooling layer, obtaining two channel attention weight matrices, each of which represents the importance of each channel in the corresponding output feature. Compared with a fully connected layer, a convolutional layer has far fewer parameters, which effectively improves the computational efficiency of the model; at the same time, in the channel attention module provided by the embodiment of the present invention, the convolutional layer meets the requirements of the feature extraction model and ensures accurate change detection of SAR images. In some examples, the third convolutional layer is a one-dimensional convolutional layer.

Then, the first fusion layer adds the two channel attention weight matrices to obtain the total channel attention weight matrix. Specifically, the addition of the two channel attention weight matrices is an element-wise addition.

Next, the first sigmoid activation function activates the total channel attention weight matrix to obtain the channel attention weights.

Finally, the first output layer weights the first feature map with the channel attention weights to obtain the second feature map. Specifically, the second feature map can be obtained by multiplying the channel attention weights with the first feature map.

Accordingly, the processing of the first feature map by the channel attention module can be expressed as:

MC(F) = σ(f(AvgPool(F)) + f(MaxPool(F)))

F′ = MC(F) ⊗ F

where F denotes the first feature map; AvgPool(F) and MaxPool(F) denote average pooling and max pooling of F over the spatial dimensions, respectively; σ is the sigmoid activation function; f denotes the convolution operation; MC(F) denotes the channel attention weights; and F′ denotes the second feature map.
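The channel attention computation above can be sketched in NumPy. This is an illustrative sketch rather than the patent's implementation: the shared one-dimensional convolution is given fixed averaging weights of an assumed kernel size instead of trained parameters.

```python
import numpy as np

def channel_attention(F, kernel_size=3):
    """Sketch of the channel attention described above. Assumptions: NumPy
    in place of a deep-learning framework; a shared 1-D convolution of
    assumed kernel_size over the channel axis with 'same' padding."""
    H, W, C = F.shape
    # Spatial max pooling and average pooling -> two 1x1xC descriptors.
    max_pool = F.max(axis=(0, 1))             # shape (C,)
    avg_pool = F.mean(axis=(0, 1))            # shape (C,)
    # Third convolutional layer: 1-D convolution over the channel dimension
    # (illustrative averaging kernel, not trained weights).
    kernel = np.full(kernel_size, 1.0 / kernel_size)
    conv = lambda v: np.convolve(v, kernel, mode="same")
    # First fusion layer: element-wise addition of the two weight matrices.
    total = conv(avg_pool) + conv(max_pool)
    # First sigmoid activation -> channel attention weights M_C(F).
    m_c = 1.0 / (1.0 + np.exp(-total))        # shape (C,)
    # First output layer: weight the first feature map channel-wise.
    return F * m_c[None, None, :]             # second feature map F'
```

Note that the output keeps the H×W×C shape of the input, so the spatial attention module can consume it directly.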

As shown in Fig. 2, in some embodiments, the spatial attention module includes a second max pooling layer, a second average pooling layer, a second fusion layer, a fourth convolutional layer, a second sigmoid activation function and a second output layer; the second max pooling layer and the second average pooling layer are connected in parallel to the input of the second fusion layer, and the second fusion layer, the fourth convolutional layer and the second output layer are connected in sequence. The second max pooling layer and the second average pooling layer perform max pooling and average pooling over the channel dimension of the second feature map of each phase, respectively; the second fusion layer concatenates the output features of the second max pooling layer and the second average pooling layer; the fourth convolutional layer convolves the concatenated output features to obtain a spatial attention weight matrix; the second sigmoid activation function activates the spatial attention weight matrix to obtain the spatial attention weights; and the second output layer weights the second feature map of each phase with the spatial attention weights to obtain the target feature map of each phase.

Specifically, the input feature of the spatial attention module is the output feature of the channel attention module, i.e., the second feature map. The spatial attention module processes the second feature map as follows:

First, the second max pooling layer and the second average pooling layer perform max pooling and average pooling over the channel dimension of the second feature map, yielding two output features. Max pooling and average pooling compress the second feature map, improving the computational efficiency of the feature extraction model; moreover, using the two compression methods together exploits different information in the second feature map, improving the feature representation capability of the spatial attention module. Given a second feature map of dimensions H×W×C, where H, W and C are its height, width and number of channels, max pooling and average pooling over the channel dimension produce two H×W×1 output features.

Then, the second fusion layer concatenates the output features of the second max pooling layer and the second average pooling layer, thereby fusing the two output features.

Next, the fourth convolutional layer convolves the concatenated output features to obtain the spatial attention weight matrix, which represents the importance of each spatial position in the concatenated output features. In the spatial attention module provided by the embodiment of the present invention, the convolutional layer meets the requirements of the feature extraction model and ensures accurate change detection of SAR images.

Then, the second sigmoid activation function activates the spatial attention weight matrix to obtain the spatial attention weights.

Finally, the second output layer weights the second feature map with the spatial attention weights to obtain the target feature map. Specifically, the target feature map can be obtained by multiplying the spatial attention weights with the second feature map. The target feature map is the output feature of the second feature extraction module.

Accordingly, the processing of the second feature map by the spatial attention module can be expressed as:

MS(F′) = σ(f(AvgPool(F′); MaxPool(F′)))

F″ = MS(F′) ⊗ F′

where F′ denotes the second feature map; AvgPool(F′) and MaxPool(F′) denote average pooling and max pooling of F′ over the channel dimension, respectively; σ is the sigmoid activation function; f denotes the convolution operation; MS(F′) denotes the spatial attention weights; and F″ denotes the target feature map.
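The spatial attention computation can likewise be sketched in NumPy. Again this is illustrative only: the fourth convolutional layer is stood in for by a fixed averaging filter of an assumed 7×7 kernel size rather than trained weights.

```python
import numpy as np

def spatial_attention(F2, kernel_size=7):
    """Sketch of the spatial attention described above. Assumptions: NumPy
    in place of a deep-learning framework; a single 2-D averaging filter of
    assumed kernel_size with edge padding standing in for the trained
    fourth convolutional layer."""
    H, W, C = F2.shape
    # Channel-wise max pooling and average pooling -> two HxW maps.
    max_pool = F2.max(axis=2)
    avg_pool = F2.mean(axis=2)
    # Second fusion layer: concatenate the two maps along a new channel axis.
    stacked = np.stack([avg_pool, max_pool], axis=2)    # H x W x 2
    # Fourth convolutional layer (illustrative): collapse the two channels
    # and apply an averaging filter over each spatial neighbourhood.
    fused = stacked.mean(axis=2)
    pad = kernel_size // 2
    padded = np.pad(fused, pad, mode="edge")
    conv = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            conv[i, j] = padded[i:i + kernel_size, j:j + kernel_size].mean()
    # Second sigmoid activation -> spatial attention weights M_S(F').
    m_s = 1.0 / (1.0 + np.exp(-conv))
    # Second output layer: weight the second feature map position-wise.
    return F2 * m_s[:, :, None]                # target feature map F''
```

As with the channel attention sketch, the H×W×C shape is preserved, so the two modules can be chained to form the second feature extraction module.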

Step 130, performing difference analysis on the target feature maps of the two phases to generate a target difference map.

As shown in Fig. 2, in some embodiments, in step 130, performing difference analysis on the target feature maps of the two phases to generate a target difference map includes: computing on the target feature maps of the two phases based on a difference operator and a log-ratio operator (LR operator for short), respectively, to obtain a difference-operator-based difference map and a log-ratio-operator-based difference map; and fusing the difference-operator-based difference map and the log-ratio-operator-based difference map to generate the target difference map.

In this way, noise interference can be overcome to the greatest extent, and the detail and overall information in the two target feature maps can be better fused, thereby improving the change detection accuracy for SAR images.

An image fusion method can be used to fuse the difference maps of the two operators, for example pixel-level, feature-level or decision-level fusion, depending on the level to which the difference maps belong. Among these, pixel-level fusion is easy to implement and has low computational complexity, and among pixel-level fusion methods, weighted fusion has low computational complexity and is the easiest to implement. Based on this, in some examples, fusing the difference-operator-based difference map and the log-ratio-operator-based difference map to generate the target difference map includes: performing weighted fusion on the difference-operator-based difference map and the log-ratio-operator-based difference map to generate the target difference map.

Specifically, the generation of the target difference map can be expressed as:

XC = α(|X1 − X2|) + (1 − α)(|log(X2 + 1) − log(X1 + 1)|)

where X1 and X2 denote the target feature maps of the two phases; |X1 − X2| denotes the difference map generated by the difference operator; |log(X2 + 1) − log(X1 + 1)| denotes the difference map generated by the log-ratio operator; α is a weight coefficient with a value between 0 and 1; and XC denotes the target difference map.
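The weighted fusion above maps directly onto array operations; a minimal sketch follows (the default `alpha=0.5` is illustrative — the patent only constrains α to the range 0 to 1):

```python
import numpy as np

def target_difference_map(X1, X2, alpha=0.5):
    """Weighted fusion of the difference-operator and log-ratio-operator
    difference maps, following the formula above. X1 and X2 are the target
    feature maps of the two phases; alpha is the weight coefficient."""
    diff = np.abs(X1 - X2)                                    # difference operator
    log_ratio = np.abs(np.log(X2 + 1.0) - np.log(X1 + 1.0))  # LR operator
    return alpha * diff + (1.0 - alpha) * log_ratio
```

Setting α toward 1 emphasizes the difference operator (sensitive to absolute change), while α toward 0 emphasizes the log-ratio operator (more robust to multiplicative speckle noise).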

Step 140, determining, according to the target difference map, the target image region in which the target SAR image of the later phase of the two phases has changed relative to the target SAR image of the earlier phase.

Here, the determined target image region in which the target SAR image of the later phase has changed relative to the target SAR image of the earlier phase is the change detection result, which can be represented by a binary result map. In the binary result map, changed pixels can be shown in black, while unchanged pixels are shown in white.

In this step, pixels can be assigned to two classes: changed and unchanged. A threshold is then set for the target difference map, and the class of each pixel is determined from it: when a pixel in the target difference map exceeds the set threshold, the pixel is classified as changed; conversely, when a pixel does not exceed the set threshold, it is classified as unchanged. Finally, based on these decisions, a binary result map representing the change information is generated.

In some embodiments, the Kittler & Illingworth (KI) thresholding method can be used to establish a criterion function, fit histograms to the class-conditional distributions of the changed and unchanged pixel classes, and take the minimum of the function as the optimal threshold. Other threshold analysis methods can also be used to compute the threshold, and the pixels in the target difference map are then classified based on the computed threshold. The embodiments of the present invention do not specifically limit this.
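Once a threshold is available (from the KI method or any other threshold analysis), the classification step itself is a single comparison. The sketch below takes the threshold as a parameter rather than computing it, and uses 1 for changed and 0 for unchanged:

```python
import numpy as np

def binary_change_map(Xc, threshold):
    """Classify each pixel of the target difference map Xc as changed (1)
    or unchanged (0) by thresholding, as described above. The threshold is
    assumed to be supplied by a method such as KI thresholding."""
    return (Xc > threshold).astype(np.uint8)
```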

Fig. 3 is a flow chart of a method for training the feature extraction network provided by an embodiment of the present invention. As shown in Fig. 3, in some embodiments, the method for training the feature extraction network includes steps 310 to 360.

Step 310, acquiring multiple sample SAR image groups, wherein each sample SAR image group includes sample SAR images of two phases and a sample label, and the sample label indicates the actual image region in which the sample SAR image of the later phase has changed relative to the sample SAR image of the earlier phase.

In practical applications, there are few large-scale public datasets available for training SAR image deep learning models, and accurately labeling sample data is difficult, leading to serious model overfitting. Based on this, in some embodiments, acquiring multiple sample SAR image groups includes: acquiring original SAR images of multiple phases; performing data augmentation on the original SAR images of the multiple phases to obtain sample SAR images of the multiple phases; and constructing multiple sample SAR image groups from the sample SAR images of the multiple phases.

By performing data augmentation on the original SAR images of multiple phases, the number and diversity of images in the sample dataset can be expanded, improving the stability and generalization capability of the SAR image deep learning model.

The data augmentation method may be rotation, scaling, adding noise, occlusion, and so on, which is not specifically limited in the embodiments of the present invention.

Before data augmentation, the original SAR images can be preprocessed, including radiometric calibration, adaptive image filtering and geocoding; they can also be converted to 8-bit images using a linear 2% stretch, images of the same area can be cropped to the same size, and precise co-registration can be performed using a block adjustment tool. Further, the preprocessed images can be cropped into non-overlapping tiles of 256×256 pixels, and data augmentation is then performed by rotation, scaling, adding noise, occlusion, and so on.
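The non-overlapping 256×256 cropping can be sketched as follows. How edge remainders are handled is an assumption here (they are dropped); the patent does not state it.

```python
import numpy as np

def tile_image(img, tile=256):
    """Crop a preprocessed image into non-overlapping tile x tile blocks,
    as described above. Any remainder at the right/bottom edge is dropped
    (an assumption -- the patent does not specify remainder handling)."""
    h, w = img.shape[:2]
    return [img[i:i + tile, j:j + tile]
            for i in range(0, h - tile + 1, tile)
            for j in range(0, w - tile + 1, tile)]
```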

Each sample SAR image group also includes a sample label. The change differences of the original SAR images, or of the preprocessed original SAR images, can be roughly computed with the color-component composition tool of image processing software; changed samples are then manually labeled in combination with high-precision map data and prior knowledge to generate the sample labels. Sample labels can be implemented as binary reference maps.

Step 320, extracting features from the sample SAR images of the two phases in each sample SAR image group through the feature extraction network to be trained, to obtain sample feature maps of the two phases.

Step 330, performing difference analysis on the sample feature maps of the two phases corresponding to each sample SAR image group, to generate a sample difference map corresponding to each sample SAR image group.

Step 340, determining, based on the sample difference map corresponding to each sample SAR image group, the image region in which the sample SAR image of the later phase in each sample SAR image group has changed relative to the sample SAR image of the earlier phase.

Step 350, determining loss information based on the image region in which the sample SAR image of the later phase in each sample SAR image group has changed relative to the sample SAR image of the earlier phase, and on the sample label.

In some embodiments, in step 350, determining the loss information based on the changed image region and the sample label of each sample SAR image group includes: determining set similarity loss information and cross-entropy loss information, respectively, based on the image region in which the sample SAR image of the later phase has changed relative to the sample SAR image of the earlier phase and on the sample label; and determining hybrid loss information based on the set similarity loss and the cross-entropy loss.

The set similarity loss function (Dice Loss) and the cross-entropy loss function (CE Loss) focus on global information and on fine-grained information, respectively. The embodiment of the present invention constructs a hybrid loss function from the two to compute hybrid loss information, which attends to both global and fine-grained information, thereby evaluating training comprehensively and efficiently and alleviating the poor learning caused by the imbalance between positive and negative samples.

In some examples, the set similarity loss information and the cross-entropy loss information can be combined by weighted summation to compute the hybrid loss information; assigning different weights to the two losses adjusts the influence of global or fine-grained information on the feature extraction network. Alternatively, the set similarity loss information and the cross-entropy loss information can simply be added to compute the hybrid loss information, which is equivalent to assigning equal weights to the two losses and balances the consideration of global and fine-grained information.

The set similarity loss function Dice Loss is calculated as follows:

LD = 1 − 2(RT·RP)/(RT + RP)

The cross-entropy loss function CE Loss is calculated as follows:

LCE = −RT·log(RP)

where LD denotes the set similarity loss information; LCE denotes the cross-entropy loss information; RT denotes the sample label, which takes the value 0 or 1, indicating unchanged and changed, respectively; and RP denotes the change prediction result, with a value range of [0, 1].

Then, in some examples, the hybrid loss function L can be expressed as:

L = LD + LCE
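A minimal sketch of this hybrid loss over a pixel map follows. Assumptions: the Dice term uses the standard soft-Dice form summed over pixels, the cross-entropy term averages −RT·log(RP) over pixels, and `eps` is an added numerical-stability constant not present in the patent's formulas.

```python
import numpy as np

def hybrid_loss(rt, rp, eps=1e-7):
    """Sketch of the hybrid loss L = L_D + L_CE described above.
    rt: binary sample label map (0 = unchanged, 1 = changed);
    rp: change prediction in [0, 1]. eps is an assumed stability constant."""
    rt = rt.astype(float).ravel()
    rp = rp.astype(float).ravel()
    # Set similarity (Dice) loss: global overlap between label and prediction.
    dice = 1.0 - (2.0 * np.sum(rt * rp)) / (np.sum(rt) + np.sum(rp) + eps)
    # Cross-entropy loss averaged over pixels: L_CE = -RT log RP.
    ce = -np.mean(rt * np.log(rp + eps))
    return dice + ce
```

A perfect prediction drives both terms toward zero, while the Dice term penalizes poor overlap globally even when changed pixels are rare, which is how the hybrid loss counters positive/negative sample imbalance.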

Step 360, training the feature extraction network to be trained according to the loss information.

During the training of the feature extraction network, multiple training iterations are performed, and the parameters of the feature extraction network are adjusted in each iteration until a training termination condition is reached. The termination condition may be a number of iterations or a set detection accuracy threshold; the embodiments of the present invention do not specifically limit this.

It should be understood that the feature extraction performed on the sample SAR images by the feature extraction network to be trained in step 320 is the same as the feature extraction performed on the target SAR images by the trained feature extraction network in step 120. Likewise, the difference analysis in step 330, which generates a sample difference map for each sample SAR image group from its two-phase sample feature maps, should be consistent with the difference analysis in step 130, which generates the target difference map from the two-phase target feature maps. Similarly, step 340, which determines from each sample difference map the image region in which the later-phase sample SAR image has changed relative to the earlier-phase sample SAR image, should be consistent with step 140, which determines from the target difference map the target image region in which the later-phase target SAR image has changed relative to the earlier-phase target SAR image.

综上所述，本发明实施例提供了SAR图像变化检测方法。所述方法中，首先获取两个时相的目标SAR图像；之后通过特征提取网络对所述两个时相的目标SAR图像进行特征提取，得到两个时相的目标特征图，所述特征提取网络包括：第一特征提取模块，用于对每个时相的目标SAR图像进行特征提取，得到每个时相的第一特征图；第二特征提取模块，包括通道注意力模块和空间注意力模块；其中，所述通道注意力模块用于基于注意力机制对每个时相的第一特征图进行处理，确定每个时相的第一特征图的通道注意力权重，并基于所述通道注意力权重对每个时相的第一特征图进行加权处理，得到每个时相的第二特征图；所述空间注意力模块用于基于注意力机制对每个时相的第二特征图进行处理，确定每个时相的第二特征图的空间注意力权重，并基于所述空间注意力权重对每个时相的第二特征图进行加权处理，得到每个时相的目标特征图；再对所述两个时相的目标特征图进行差异分析，生成目标差异图；最后根据所述目标差异图，确定所述两个时相的目标SAR图像中后一时相的目标SAR图像相对于前一时相的目标SAR图像发生变化的目标图像区域。基于该方法，其通过第一特征提取模块提取目标SAR图像中的基础特征信息，再利用第二特征模块中的通道注意力模块和空间注意力模块进一步提取目标SAR图像深层次、多维度和精细化的特征，从而提取不同时相的SAR图像中小尺度目标的精细变化信息，提高对于SAR图像的变化检测精度。In summary, an embodiment of the present invention provides a SAR image change detection method. In the method, target SAR images of two phases are first acquired; feature extraction is then performed on the target SAR images of the two phases through a feature extraction network to obtain target feature maps of the two phases. The feature extraction network includes: a first feature extraction module, used to extract features from the target SAR image of each phase to obtain a first feature map of each phase; and a second feature extraction module, including a channel attention module and a spatial attention module. The channel attention module processes the first feature map of each phase based on the attention mechanism, determines the channel attention weight of the first feature map of each phase, and weights the first feature map of each phase based on the channel attention weight to obtain the second feature map of each phase. The spatial attention module processes the second feature map of each phase based on the attention mechanism, determines the spatial attention weight of the second feature map of each phase, and weights the second feature map of each phase based on the spatial attention weight to obtain the target feature map of each phase. Difference analysis is then performed on the target feature maps of the two phases to generate a target difference map. Finally, according to the target difference map, the target image region in which the target SAR image of the later phase has changed relative to the target SAR image of the earlier phase is determined. Based on this method, the first feature extraction module extracts basic feature information from the target SAR image, and the channel attention module and spatial attention module in the second feature extraction module further extract deep, multi-dimensional and refined features of the target SAR image, thereby extracting fine change information of small-scale targets in SAR images of different phases and improving the change detection accuracy for SAR images.

以下提供一个具体的实施场景,以进一步说明本发明实施例提供的SAR图像变化检测方法。A specific implementation scenario is provided below to further illustrate the SAR image change detection method provided by an embodiment of the present invention.

图4为本发明实施例提供的SAR图像变化检测方法的流程图。本发明实施例提供的SAR图像变化检测方法主要包括数据集制作、模型搭建和模型训练三个部分。Fig. 4 is a flow chart of a SAR image change detection method provided by an embodiment of the present invention. The SAR image change detection method provided by an embodiment of the present invention mainly includes three parts: data set preparation, model building and model training.

步骤410,数据集制作Step 410: Dataset Creation

图5为本发明实施例提供的数据集制作的流程图。如图5所示,本发明实施例步骤410进一步包括步骤411至步骤415。FIG5 is a flow chart of data set preparation according to an embodiment of the present invention. As shown in FIG5 , step 410 of the embodiment of the present invention further includes steps 411 to 415 .

步骤411,获取原始SAR图像Step 411, obtaining the original SAR image

下载同一地区不同时间多幅1m分辨率的原始SAR图像,处理对象主要包括位于港口附近的舰船、建筑物与荒地等。Download multiple 1m resolution original SAR images of the same area at different times. The processing objects mainly include ships, buildings and wasteland near the port.

步骤412,图像预处理Step 412: Image preprocessing

对原始SAR图像依次进行辐射定标、图像自适应滤波与地理编码等操作,同时采用线性2%拉伸方式处理成8位图像,将同一地区的图像裁剪为相同的大小,再通过区域网平差工具进行精配准。The original SAR images were subjected to radiometric calibration, image adaptive filtering and geocoding operations in sequence, and processed into 8-bit images using a linear 2% stretching method. Images in the same area were cropped to the same size and then precisely aligned using the regional block adjustment tool.

步骤413,样本标记Step 413: Sample labeling

通过图像处理软件的颜色分量合成工具粗略计算不同时相的图像之间的变化差异,结合高精度地图数据与先验知识,手工精确标记变化样本,生成二值参考图作为标签图。The color component synthesis tool of the image processing software is used to roughly calculate the change differences between images in different phases. The high-precision map data and prior knowledge are combined to manually and accurately mark the change samples to generate a binary reference image as the label image.

步骤414,数据裁剪与增强Step 414: Data trimming and enhancement

将全部图像裁剪为256像素×256像素的大小,保证切块间无重叠。通过旋转、缩放、增加噪音与遮挡等数据增强操作,对数据集的图像数量与多样性进行扩增。通过数据增强处理,提升深度学习模型的稳定性与泛化能力。最终得到由前后时相图像与标签图组成的95040个图像组。All images were cropped to 256 pixels × 256 pixels to ensure that there was no overlap between the slices. The number and diversity of images in the dataset were increased through data augmentation operations such as rotation, scaling, adding noise and occlusion. The stability and generalization ability of the deep learning model were improved through data augmentation processing. Finally, 95,040 image groups consisting of before and after phase images and label maps were obtained.
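As a concrete illustration of the cropping and augmentation step above, the following is a minimal sketch, not the embodiment's actual pipeline: the 256-pixel tile size matches the text, while the specific augmentation choices (90° rotations, flips) and the Gaussian noise level are illustrative assumptions.

```python
import numpy as np

def crop_tiles(image, tile=256):
    """Split an image into non-overlapping tile×tile patches,
    discarding any remainder at the right/bottom edges."""
    h, w = image.shape[:2]
    tiles = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            tiles.append(image[y:y + tile, x:x + tile])
    return tiles

def augment(patch, rng):
    """Simple augmentations: random 90° rotation, random horizontal flip,
    and additive Gaussian noise (a crude speckle-like perturbation),
    clipped back to the 8-bit range."""
    patch = np.rot90(patch, k=int(rng.integers(0, 4)))
    if rng.random() < 0.5:
        patch = np.fliplr(patch)
    noise = rng.normal(0.0, 2.0, size=patch.shape)
    return np.clip(patch.astype(np.float32) + noise, 0, 255)
```

Each augmented patch keeps the 256×256 geometry, so the before-image, after-image and label map of a group can be transformed with the same parameters to stay aligned.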

步骤415,划分数据集Step 415: Divide the data set

根据比例与图像变换类型综合原则,将全部数据中60000组作为训练集,20000组作为验证集,剩余的12040组划分为测试集。According to the comprehensive principle of proportion and image transformation type, 60,000 groups of all data are used as training sets, 20,000 groups are used as validation sets, and the remaining 12,040 groups are divided into test sets.

步骤420,搭建变化检测模型Step 420: Build a change detection model

图2为本发明实施例提供的变化检测模型的示意图。如图2所示,变化检测模型包括特征提取网络和差异生成与分析模块,其中,特征提取网络包括第一特征提取模块和第二特征提取模块。Fig. 2 is a schematic diagram of a change detection model provided by an embodiment of the present invention. As shown in Fig. 2, the change detection model includes a feature extraction network and a difference generation and analysis module, wherein the feature extraction network includes a first feature extraction module and a second feature extraction module.

具体地,采用改进的ResNet-34残差网络实现第一特征提取模块。图2具体示出了改进的ResNet-34残差网络。在改进的ResNet-34残差网络中,包括2个卷积层Conv和4个残差模块,其中,残差模块ResBlock1、ResBlock2、ResBlock3、ResBlock4分别包含3个、4个、6个和3个残差结构(图2中示出了第一个残差模块中的残差结构由2个3×3的卷积层组成),在第一个残差模块ResBlock1之前以及最后一个残差模块ResBlock4之后分别连接一个卷积层。Specifically, the improved ResNet-34 residual network is used to implement the first feature extraction module. Figure 2 specifically shows the improved ResNet-34 residual network. In the improved ResNet-34 residual network, there are 2 convolutional layers Conv and 4 residual modules, among which the residual modules ResBlock1, ResBlock2, ResBlock3, and ResBlock4 contain 3, 4, 6, and 3 residual structures respectively (Figure 2 shows that the residual structure in the first residual module consists of 2 3×3 convolutional layers), and a convolutional layer is connected before the first residual module ResBlock1 and after the last residual module ResBlock4.

两个时相的目标SAR图像I1和I2分别输入至第一特征提取模块中进行特征提取,当前输入至第一特征提取模块中的图像即输入图像。在通过第一特征提取模块对目标SAR图像进行特征提取时,先通过第一卷积层对输入图像进行降维处理,以使第一卷积层的输出特征维度满足第一个残差模块对于输入特征的维度要求,再通过每个残差模块对自身的输入特征进行特征提取,最后通过第二卷积层对最后一个残差模块的输出特征进行升维处理,使第二个卷积层的输出特征恢复至目标SAR图像的维度,第二卷积层的输出特征即第一特征图。The target SAR images I1 and I2 of the two time phases are respectively input into the first feature extraction module for feature extraction, and the image currently input into the first feature extraction module is the input image. When the first feature extraction module performs feature extraction on the target SAR image, the input image is firstly subjected to dimensionality reduction processing through the first convolution layer so that the output feature dimension of the first convolution layer meets the dimensionality requirement of the first residual module for the input feature, and then each residual module is subjected to feature extraction on its own input feature, and finally the output feature of the last residual module is subjected to dimensionality increase processing through the second convolution layer so that the output feature of the second convolution layer is restored to the dimension of the target SAR image, and the output feature of the second convolution layer is the first feature map.
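The first feature extraction module described above can be sketched in PyTorch roughly as follows. This is a simplified, hypothetical reconstruction: the 3/4/6/3 block layout follows the ResNet-34 description in the text, but the channel width, kernel sizes and the use of constant-width, stride-free stages are illustrative assumptions, with the head and tail convolutions standing in for the dimension-adjusting first and second convolutional layers.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Basic residual structure: two 3×3 convolutions with a skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + x)

class FirstFeatureExtractor(nn.Module):
    """Conv (channel adjustment) → four residual stages with 3/4/6/3 blocks
    (ResNet-34 layout) → Conv restoring the input channel count, so the
    first feature map has the same dimensions as the input SAR image."""
    def __init__(self, in_ch=1, mid_ch=64):
        super().__init__()
        self.head = nn.Conv2d(in_ch, mid_ch, 3, padding=1)
        blocks = []
        for n in (3, 4, 6, 3):
            blocks += [ResBlock(mid_ch) for _ in range(n)]
        self.stages = nn.Sequential(*blocks)
        self.tail = nn.Conv2d(mid_ch, in_ch, 3, padding=1)

    def forward(self, x):
        return self.tail(self.stages(self.head(x)))
```

Both phase images I1 and I2 would be passed through one shared instance of this module, so their first feature maps live in the same feature space.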

第二特征提取模块包括通道注意力模块和空间注意力模块。The second feature extraction module includes a channel attention module and a spatial attention module.

通道注意力模块包括第一最大池化层MaxPool、第一平均池化层AvgPool、第三卷积层Conv、第一融合层、第一sigmoid激活函数和第一输出层，其中，第一最大池化层和第一平均池化层并联于第三卷积层的输入端，第三卷积层、第一融合层、第一sigmoid激活函数和第一输出层依次连接，第三卷积层为一维卷积层。在通过通道注意力模块对第一特征图进行处理时，首先分别使用第一最大池化层和第一平均池化层对第一特征图进行空间维度的最大池化处理和平均池化处理，分别得到两个输出特征。给定第一特征图的维度为H×W×C，其中，H、W、C分别代表第一特征图的高度、宽度和通道数，经过空间维度的最大池化处理和平均池化处理，可以得到两个1×1×C的输出特征。之后使用第三卷积层分别对两个输出特征进行卷积处理，得到两个通道注意力权重矩阵。再使用第一融合层将两个通道注意力权重矩阵进行元素相加，以得到总的通道注意力权重矩阵。然后使用第一sigmoid激活函数对总的通道注意力权重矩阵进行激活处理，以得到通道注意力权重。最后，使用第一输出层将通道注意力权重与第一特征图相乘，得到第二特征图。The channel attention module includes a first max pooling layer MaxPool, a first average pooling layer AvgPool, a third convolutional layer Conv, a first fusion layer, a first sigmoid activation function and a first output layer, where the first max pooling layer and the first average pooling layer are connected in parallel to the input of the third convolutional layer, and the third convolutional layer, the first fusion layer, the first sigmoid activation function and the first output layer are connected in sequence; the third convolutional layer is a one-dimensional convolutional layer. When the first feature map is processed by the channel attention module, the first max pooling layer and the first average pooling layer first apply max pooling and average pooling over the spatial dimensions of the first feature map, yielding two output features. Given that the dimension of the first feature map is H×W×C, where H, W and C are its height, width and number of channels, the spatial max pooling and average pooling yield two 1×1×C output features. The third convolutional layer then convolves each of the two output features, producing two channel attention weight matrices. The first fusion layer adds the two channel attention weight matrices element-wise to obtain the total channel attention weight matrix. The first sigmoid activation function activates the total channel attention weight matrix to obtain the channel attention weight. Finally, the first output layer multiplies the channel attention weight with the first feature map to obtain the second feature map.

通道注意力模块对第一特征图的处理过程的计算公式可以表示为:The calculation formula of the channel attention module for processing the first feature map can be expressed as:

MC(F) = σ(f(AvgPool(F)) + f(MaxPool(F)))
F′ = MC(F) ⊗ F

其中,F表示第一特征图,AvgPool(F)和MaxPool(F)分别为对第一特征图进行空间维度的平均池化和最大池化,σ为sigmoid激活函数,f为卷积处理,Mc(F)表示通道注意力权重,F′表示第二特征图。Where F represents the first feature map, AvgPool(F) and MaxPool(F) are the average pooling and maximum pooling of the spatial dimension of the first feature map, σ is the sigmoid activation function, f is the convolution process, Mc (F) represents the channel attention weight, and F′ represents the second feature map.
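A minimal PyTorch sketch of the channel attention computation above, using a 1-D convolution over the channel axis as the shared transform f (the kernel size is an assumed hyperparameter, not taken from the text):

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """MC(F) = σ(f(AvgPool(F)) + f(MaxPool(F))); F' = MC(F) ⊗ F.
    f is a 1-D convolution applied to the two 1×1×C pooled descriptors."""
    def __init__(self, k=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, f):                       # f: (B, C, H, W)
        avg = f.mean(dim=(2, 3))                # spatial average pooling → (B, C)
        mx = f.amax(dim=(2, 3))                 # spatial max pooling → (B, C)
        # shared 1-D convolution across channels, then element-wise sum
        w = self.conv(avg.unsqueeze(1)) + self.conv(mx.unsqueeze(1))  # (B, 1, C)
        w = self.sigmoid(w).squeeze(1)          # channel attention weight (B, C)
        return f * w.unsqueeze(-1).unsqueeze(-1)  # weighted feature map F'
```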

空间注意力模块包括第二最大池化层、第二平均池化层、第二融合层、第四卷积层、第二sigmoid激活函数和第二输出层，第二最大池化层和第二平均池化层并联于第二融合层的输入端，第二融合层、第四卷积层和第二输出层依次连接。空间注意力模块对第二特征图的处理过程为：首先分别使用第二最大池化层和第二平均池化层对第二特征图进行通道维度的最大池化处理和平均池化处理，分别得到两个输出特征。给定第二特征图的维度为H×W×C，其中，H、W、C分别代表第二特征图的高度、宽度和通道数，经过通道维度的最大池化处理和平均池化处理，可以得到两个H×W×1的输出特征。之后使用第二融合层将第二最大池化层和第二平均池化层的输出特征进行向量拼接。再使用第四卷积层对向量拼接后的输出特征进行卷积处理，得到空间注意力权重矩阵。然后使用第二sigmoid激活函数对空间注意力权重矩阵进行激活处理，以得到空间注意力权重。最后通过第二输出层将空间注意力权重与第二特征图相乘，得到目标特征图，作为第二特征提取模块的输出特征。The spatial attention module includes a second max pooling layer, a second average pooling layer, a second fusion layer, a fourth convolutional layer, a second sigmoid activation function and a second output layer; the second max pooling layer and the second average pooling layer are connected in parallel to the input of the second fusion layer, and the second fusion layer, the fourth convolutional layer and the second output layer are connected in sequence. The spatial attention module processes the second feature map as follows: first, the second max pooling layer and the second average pooling layer apply max pooling and average pooling over the channel dimension of the second feature map, yielding two output features. Given that the dimension of the second feature map is H×W×C, where H, W and C are its height, width and number of channels, the channel-wise max pooling and average pooling yield two H×W×1 output features. The second fusion layer then concatenates the output features of the second max pooling layer and the second average pooling layer. The fourth convolutional layer convolves the concatenated features to obtain the spatial attention weight matrix. The second sigmoid activation function activates the spatial attention weight matrix to obtain the spatial attention weight. Finally, the second output layer multiplies the spatial attention weight with the second feature map to obtain the target feature map, which is the output feature of the second feature extraction module.

空间注意力模块对第二特征图的处理过程的计算公式可以表示为:The calculation formula of the spatial attention module for processing the second feature map can be expressed as:

MS(F′) = σ(f([AvgPool(F′); MaxPool(F′)]))
F″ = MS(F′) ⊗ F′

其中,F′表示第二特征图,AvgPool(F′)和MaxPool(F′)分别为对第二特征图进行通道维度的平均池化和最大池化,σ为sigmoid激活函数,f为卷积处理,Ms(F′)表示空间注意力权重,F″表示目标特征图。Wherein, F′ represents the second feature map, AvgPool(F′) and MaxPool(F′) are the average pooling and maximum pooling of the channel dimension of the second feature map, σ is the sigmoid activation function, f is the convolution process, Ms (F′) represents the spatial attention weight, and F″ represents the target feature map.
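Correspondingly, a minimal PyTorch sketch of the spatial attention computation above (the 7×7 kernel of the fourth convolutional layer is an assumed hyperparameter):

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """MS(F') = σ(f([AvgPool(F'); MaxPool(F')])); F'' = MS(F') ⊗ F'.
    Channel-wise average and max pooling give two H×W×1 maps, which are
    concatenated and convolved into a single spatial weight map."""
    def __init__(self, k=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, f):                         # f: (B, C, H, W)
        avg = f.mean(dim=1, keepdim=True)         # channel average pooling (B,1,H,W)
        mx = f.amax(dim=1, keepdim=True)          # channel max pooling (B,1,H,W)
        w = self.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))  # (B,1,H,W)
        return f * w                              # weighted feature map F''
```

Chaining `ChannelAttention` then `SpatialAttention` would reproduce the second feature extraction module's channel-then-spatial ordering.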

本发明实施例采用结合集合相似度损失函数Dice Loss与交叉熵损失函数CE Loss的混合损失函数。The embodiment of the present invention adopts a hybrid loss function that combines the set similarity loss function Dice Loss and the cross entropy loss function CE Loss.

集合相似度损失函数Dice Loss的计算公式如下：The calculation formula of the set similarity loss function Dice Loss is as follows:

LD = 1 − 2·Σ(RT·RP) / (ΣRT + ΣRP)
交叉熵损失函数CE Loss的计算公式如下:The calculation formula of the cross entropy loss function CE Loss is as follows:

LCE = −RT log RP

其中,LD表示集合相似度损失信息;LCE表示交叉熵损失信息;RT表示样本标签,取值有0或1两种情况,分别表示未发生变化和发生变化;RP表示变化预测结果,取值范围为[0,1]。Among them, LD represents the set similarity loss information; LCE represents the cross entropy loss information; RT represents the sample label, which has two values: 0 or 1, indicating no change and change respectively; RP represents the change prediction result, with a value range of [0,1].

混合损失函数L的计算公式可以表示为:The calculation formula of the mixed loss function L can be expressed as:

L = LD + LCE
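The hybrid loss above can be sketched as follows. The cross-entropy term follows the formula exactly as stated in the text (−RT log RP; note that standard binary cross-entropy would additionally include a −(1−RT)log(1−RP) term), and the smoothing constant `eps` is an illustrative assumption to avoid division by zero and log(0).

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    """Set-similarity (Dice) loss: 1 - 2·Σ(RT·RP) / (ΣRT + ΣRP)."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def ce_loss(pred, target, eps=1e-6):
    """Cross-entropy term as written in the text: -RT log RP, averaged over pixels."""
    return -(target * torch.log(pred + eps)).mean()

def hybrid_loss(pred, target):
    """L = LD + LCE."""
    return dice_loss(pred, target) + ce_loss(pred, target)
```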

接下来,两个时相的目标特征图X1和X2输入至差异生成与分析模块进行后续处理。首先,分别基于差值算子和对数比算子(简称LR算子)对两个时相的目标特征图进行计算,得到基于差值算子的差异图和基于对数比算子的差异图,之后对基于差值算子的差异图和基于对数比算子的差异图进行融合处理,生成目标差异图,再针对目标差异图进行阈值分析,最后确定后一时相的目标SAR图像相对于前一时相的目标SAR图像发生变化的目标图像区域,输出变化检测结果。Next, the target feature maps X1 and X2 of the two phases are input to the difference generation and analysis module for subsequent processing. First, the target feature maps of the two phases are calculated based on the difference operator and the logarithmic ratio operator (LR operator for short), respectively, to obtain the difference map based on the difference operator and the difference map based on the logarithmic ratio operator, then the difference map based on the difference operator and the difference map based on the logarithmic ratio operator are fused to generate the target difference map, and then the threshold analysis is performed on the target difference map, and finally the target image area where the target SAR image of the latter phase changes relative to the target SAR image of the previous phase is determined, and the change detection result is output.

目标差异图的生成过程的计算公式可以表示如下:The calculation formula of the target difference map generation process can be expressed as follows:

XC = α(|X1 − X2|) + (1 − α)(|log(X2 + 1) − log(X1 + 1)|)

其中,X1和X2分别表示两个时相的目标特征图;|X1-X2|表示基于差值算子所生成的差异图;|log(X2+1)-log(X1+1)|表示基于对数比算子所生成的差异图;α为权重系数,取值为0~1;Xc表示目标差异图。Wherein, X1 and X2 represent the target feature maps of two phases respectively; | X1 - X2 | represents the difference map generated based on the difference operator; |log( X2 +1)-log( X1 +1)| represents the difference map generated based on the logarithmic ratio operator; α is the weight coefficient, ranging from 0 to 1; Xc represents the target difference map.
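A minimal NumPy sketch of the difference-map fusion and threshold analysis above; the threshold value passed to the mask function is an assumed parameter, not one given in the text.

```python
import numpy as np

def target_difference_map(x1, x2, alpha=0.5):
    """XC = α|X1 - X2| + (1 - α)|log(X2 + 1) - log(X1 + 1)|, with α in [0, 1]."""
    diff = np.abs(x1 - x2)                                    # difference operator
    log_ratio = np.abs(np.log(x2 + 1.0) - np.log(x1 + 1.0))   # log-ratio (LR) operator
    return alpha * diff + (1.0 - alpha) * log_ratio

def threshold_change_mask(xc, t):
    """Simple threshold analysis: pixels above t are marked as changed (1)."""
    return (xc > t).astype(np.uint8)
```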

步骤430,模型训练和测试Step 430, model training and testing

通过深度学习框架PyTorch在NVIDIA M4000上开展模型的迭代训练，利用AdaMax优化器更新卷积权重、偏置等参数，初始学习率设为0.001，衰减因子β1=0.9，β2=0.999。The model is trained iteratively on an NVIDIA M4000 using the PyTorch deep learning framework. The AdaMax optimizer updates the convolution weights, biases and other parameters, with an initial learning rate of 0.001 and decay factors β1 = 0.9 and β2 = 0.999.
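The training configuration can be sketched as follows; the stand-in model, data and loss are placeholders, while the optimizer choice and hyperparameters follow the text.

```python
import torch
import torch.nn as nn

# Stand-in model; the actual change-detection network would go here.
model = nn.Conv2d(1, 1, 3, padding=1)

# Adamax optimizer with the hyperparameters stated in the text.
optimizer = torch.optim.Adamax(model.parameters(), lr=0.001, betas=(0.9, 0.999))

# One illustrative training step on random data.
x = torch.randn(2, 1, 8, 8)
target = torch.randn(2, 1, 8, 8)
loss = nn.functional.mse_loss(model(x), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```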

使用测试数据集对训练完成的模型性能进行测试。将本发明实施例所搭建的模型性能与现有的模型性能进行对比，以验证本发明实施例提供的方法的有效性。评价指标采用精确率(Precision, P)、召回率(Recall, R)、综合评价指标F1与整体精度(Overall Accuracy, OA)，公式如下：The performance of the trained model is tested using the test dataset. The performance of the model built in the embodiment of the present invention is compared with that of existing models to verify the effectiveness of the method provided by the embodiment of the present invention. The evaluation indicators are precision (P), recall (R), the comprehensive evaluation indicator F1 and overall accuracy (OA), with the following formulas:

P = TP/(TP + FP)
R = TP/(TP + FN)
F1 = 2·P·R/(P + R)
OA = (TP + TN)/(TP + TN + FP + FN)
其中，TP、TN、FP、FN为经典混淆矩阵中4种判断类型；TP表示预测正确的变化像素数；TN表示预测正确的未变化像素数；FP表示将未变化预测为变化的像素数；FN表示将变化预测为未变化的像素数。Here, TP, TN, FP and FN are the four judgment types in the classic confusion matrix: TP is the number of changed pixels predicted correctly; TN is the number of unchanged pixels predicted correctly; FP is the number of unchanged pixels incorrectly predicted as changed; FN is the number of changed pixels incorrectly predicted as unchanged.
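The four evaluation indicators can be computed from the confusion counts as follows (standard definitions; a guard for empty classes is omitted for clarity):

```python
def change_detection_metrics(tp, tn, fp, fn):
    """Precision, recall, F1 and overall accuracy from confusion-matrix counts."""
    p = tp / (tp + fp)                    # precision: correctness of predicted changes
    r = tp / (tp + fn)                    # recall: fraction of true changes detected
    f1 = 2 * p * r / (p + r)              # harmonic mean of precision and recall
    oa = (tp + tn) / (tp + tn + fp + fn)  # overall accuracy over all pixels
    return p, r, f1, oa
```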

在变化检测任务中,更高的精确率表明预测结果中产生更少的错误,更大的召回率表明检测出更多的正样本,F1和整体精度对预测结果进行整体度量,指标的数值越大则模型的预测效果越好。指标评价结果见表1。由表1可知,与其他方法相比,本发明所提供方法的综合性能具有明显的提升。In the change detection task, a higher precision indicates that fewer errors are generated in the prediction results, a larger recall indicates that more positive samples are detected, and F1 and overall accuracy measure the prediction results as a whole. The larger the value of the indicator, the better the prediction effect of the model. The indicator evaluation results are shown in Table 1. As can be seen from Table 1, compared with other methods, the comprehensive performance of the method provided by the present invention is significantly improved.

表1 指标评价结果Table 1 Index evaluation results

| 模型名称 Model | 精确率 Precision (P) | 召回率 Recall (R) | F1 | 整体精度 Overall Accuracy (OA) |
| --- | --- | --- | --- | --- |
| STANet | 0.934 | 0.846 | 0.887 | 0.990 |
| DSAMNet | 0.928 | 0.869 | 0.898 | 0.987 |
| SNUNet | 0.950 | 0.906 | 0.927 | 0.983 |
| 本发明实施例 Embodiment of the present invention | 0.988 | 0.968 | 0.978 | 0.994 |

图6a为本发明实施例提供的前一时相的目标SAR图像;图6b为本发明实施例提供的后一时相的目标SAR图像;图6c为本发明实施例提供的标签图;图6d为本发明实施例提供的SAR图像变化检测结果。如图6a至图6d所示,本发明实施例所提供的方法可以检测出SAR图像中精细的变化信息,检测能力强,提取信息准确完整,虚警与漏检的情况都有所抑制,预测结果接近手工标记的参考样本。FIG6a is a target SAR image of the previous phase provided by an embodiment of the present invention; FIG6b is a target SAR image of the next phase provided by an embodiment of the present invention; FIG6c is a label map provided by an embodiment of the present invention; and FIG6d is a SAR image change detection result provided by an embodiment of the present invention. As shown in FIG6a to FIG6d, the method provided by the embodiment of the present invention can detect fine change information in SAR images, has strong detection capability, extracts accurate and complete information, suppresses false alarms and missed detections, and the prediction results are close to the manually labeled reference samples.

综上所述，本发明实施例提供的SAR图像变化检测方法采用残差网络实现第一特征提取模块，由通道注意力模块和空间注意力模块组成第二特征提取模块，搭建变化检测模型，在包含不同场景的大规模数据集的对比实验中，取得了最佳的预测结果和更高精度，证明其具备高效利用高分辨率SAR图像中丰富信息和复杂特征的能力以及提升模型收敛和优化效率的能力，可以实现对于舰船、建筑物等目标动态信息的准确提取，可以改善地物特征的表示和变化检测效果。本发明实施例提供的SAR图像变化检测方法不仅能够克服传统方法的不足与缺陷，提高算法效率，而且可以为城市发展和地质灾害监测等领域的应用提供可靠参考，从而更加准确和有针对性地解决城市发展和地质灾害监测等多领域中实际问题。In summary, the SAR image change detection method provided by the embodiment of the present invention uses a residual network to implement the first feature extraction module and composes the second feature extraction module from a channel attention module and a spatial attention module to build a change detection model. In comparative experiments on a large-scale dataset covering different scenes, it achieves the best prediction results and higher accuracy, demonstrating its ability to efficiently exploit the rich information and complex features in high-resolution SAR images and to improve model convergence and optimization efficiency. It can accurately extract dynamic information about targets such as ships and buildings, and can improve both the representation of ground object features and the change detection results. The method not only overcomes the shortcomings of traditional approaches and improves algorithm efficiency, but also provides a reliable reference for applications in fields such as urban development and geological disaster monitoring, so that practical problems in these fields can be solved more accurately and in a more targeted manner.

图7为本发明实施例提供的SAR图像变化检测装置的结构示意图。如图7所示，该SAR图像变化检测装置700，包括：目标图像获取模块710，用于获取两个时相的目标SAR图像；目标特征图提取模块720，用于通过特征提取网络对所述两个时相的目标SAR图像进行特征提取，得到两个时相的目标特征图；所述特征提取网络包括：第一特征提取模块，用于对每个时相的目标SAR图像进行特征提取，得到每个时相的第一特征图；第二特征提取模块，包括通道注意力模块和空间注意力模块；其中，所述通道注意力模块用于基于注意力机制对每个时相的第一特征图进行处理，确定每个时相的第一特征图的通道注意力权重，并基于所述通道注意力权重对每个时相的第一特征图进行加权处理，得到每个时相的第二特征图；所述空间注意力模块用于基于注意力机制对每个时相的第二特征图进行处理，确定每个时相的第二特征图的空间注意力权重，并基于所述空间注意力权重对每个时相的第二特征图进行加权处理，得到每个时相的目标特征图；目标差异图生成模块730，用于对所述两个时相的目标特征图进行差异分析，生成目标差异图；目标图像区域确定模块740，用于根据所述目标差异图，确定所述两个时相的目标SAR图像中后一时相的目标SAR图像相对于前一时相的目标SAR图像发生变化的目标图像区域。FIG7 is a schematic diagram of the structure of a SAR image change detection device provided by an embodiment of the present invention. As shown in FIG7, the SAR image change detection device 700 includes: a target image acquisition module 710, used to acquire target SAR images of two phases; a target feature map extraction module 720, used to perform feature extraction on the target SAR images of the two phases through a feature extraction network to obtain target feature maps of the two phases, where the feature extraction network includes: a first feature extraction module, used to extract features from the target SAR image of each phase to obtain a first feature map of each phase; and a second feature extraction module, including a channel attention module and a spatial attention module, where the channel attention module is used to process the first feature map of each phase based on the attention mechanism, determine the channel attention weight of the first feature map of each phase, and weight the first feature map of each phase based on the channel attention weight to obtain the second feature map of each phase, and the spatial attention module is used to process the second feature map of each phase based on the attention mechanism, determine the spatial attention weight of the second feature map of each phase, and weight the second feature map of each phase based on the spatial attention weight to obtain the target feature map of each phase; a target difference map generation module 730, used to perform difference analysis on the target feature maps of the two phases to generate a target difference map; and a target image region determination module 740, used to determine, according to the target difference map, the target image region in which the target SAR image of the later phase has changed relative to the target SAR image of the earlier phase.

在一些实施例中,所述第一特征提取模块包括依次连接的第一卷积层、多个残差模块和第二卷积层;所述第一卷积层用于对每个时相的目标SAR图像进行降维处理;每个残差模块用于对自身的输入特征进行特征提取,第一个残差模块的输入特征为所述第一卷积层的输出特征;所述第二卷积层用于对最后一个残差模块的输出特征进行升维处理,得到与每个时相的目标SAR图像维度相同的第一特征图。In some embodiments, the first feature extraction module includes a first convolutional layer, multiple residual modules and a second convolutional layer connected in sequence; the first convolutional layer is used to perform dimensionality reduction processing on the target SAR image of each phase; each residual module is used to extract features from its own input features, and the input features of the first residual module are the output features of the first convolutional layer; the second convolutional layer is used to perform dimensionality increase processing on the output features of the last residual module to obtain a first feature map with the same dimension as the target SAR image of each phase.

在一些实施例中,所述通道注意力模块包括第一最大池化层、第一平均池化层、第三卷积层、第一融合层、第一sigmoid激活函数和第一输出层,其中,所述第一最大池化层和第一平均池化层并联于所述第三卷积层的输入端,所述第三卷积层、所述第一融合层、所述第一sigmoid激活函数和所述第一输出层依次连接;In some embodiments, the channel attention module includes a first maximum pooling layer, a first average pooling layer, a third convolutional layer, a first fusion layer, a first sigmoid activation function and a first output layer, wherein the first maximum pooling layer and the first average pooling layer are connected in parallel to the input end of the third convolutional layer, and the third convolutional layer, the first fusion layer, the first sigmoid activation function and the first output layer are connected in sequence;

所述第一最大池化层和所述第一平均池化层分别用于对每个时相的第一特征图进行空间维度的最大池化处理和平均池化处理;The first maximum pooling layer and the first average pooling layer are used to perform maximum pooling processing and average pooling processing on the first feature map of each phase in the spatial dimension, respectively;

所述第三卷积层用于分别对所述第一最大池化层和所述第一平均池化层的输出特征进行卷积处理,以得到两个通道注意力权重矩阵;The third convolutional layer is used to perform convolution processing on the output features of the first maximum pooling layer and the first average pooling layer respectively to obtain two channel attention weight matrices;

所述第一融合层用于对所述两个通道注意力权重矩阵相加,以得到总的通道注意力权重矩阵;The first fusion layer is used to add the two channel attention weight matrices to obtain a total channel attention weight matrix;

所述第一sigmoid激活函数用于对所述总的通道注意力权重矩阵进行激活处理,以得到通道注意力权重;The first sigmoid activation function is used to activate the total channel attention weight matrix to obtain the channel attention weight;

所述第一输出层用于基于所述通道注意力权重对每个时相的第一特征图进行加权处理,得到每个时相的第二特征图。The first output layer is used to perform weighted processing on the first feature map of each time phase based on the channel attention weight to obtain the second feature map of each time phase.

在一些实施例中,所述空间注意力模块包括第二最大池化层、第二平均池化层、第二融合层、第四卷积层、第二sigmoid激活函数和第二输出层,所述第二最大池化层和所述第二平均池化层并联于所述第二融合层的输入端,所述第二融合层、所述第四卷积层和所述第二输出层依次连接;In some embodiments, the spatial attention module includes a second maximum pooling layer, a second average pooling layer, a second fusion layer, a fourth convolutional layer, a second sigmoid activation function, and a second output layer, the second maximum pooling layer and the second average pooling layer are connected in parallel to the input end of the second fusion layer, and the second fusion layer, the fourth convolutional layer, and the second output layer are connected in sequence;

所述第二最大池化层和所述第二平均池化层分别用于对每个时相的第二特征图进行通道维度的最大池化处理和平均池化处理;The second maximum pooling layer and the second average pooling layer are used to perform maximum pooling processing and average pooling processing on the second feature map of each phase in the channel dimension respectively;

所述第二融合层用于对所述第二最大池化层和所述第二平均池化层的输出特征进行向量拼接;The second fusion layer is used to perform vector concatenation on output features of the second maximum pooling layer and the second average pooling layer;

所述第四卷积层用于对向量拼接得到的输出特征进行卷积处理,以得到空间注意力权重矩阵;The fourth convolutional layer is used to perform convolution processing on the output features obtained by vector concatenation to obtain a spatial attention weight matrix;

所述第二sigmoid激活函数用于对所述空间注意力权重矩阵进行激活处理,以得到空间注意力权重;The second sigmoid activation function is used to activate the spatial attention weight matrix to obtain the spatial attention weight;

所述第二输出层用于基于所述空间注意力权重对每个时相的第二特征图进行加权处理,得到每个时相的目标特征图。The second output layer is used to perform weighted processing on the second feature map of each phase based on the spatial attention weight to obtain a target feature map of each phase.

在一些实施例中,所述目标差异图生成模块,包括:In some embodiments, the target difference map generation module includes:

差异图生成子模块,用于分别基于差值算子和对数比算子对所述两个时相的目标特征图进行计算,得到基于差值算子的差异图和基于对数比算子的差异图;A difference map generation submodule, used to calculate the target feature maps of the two phases based on a difference operator and a logarithmic ratio operator, respectively, to obtain a difference map based on a difference operator and a difference map based on a logarithmic ratio operator;

差异图融合子模块,用于对基于差值算子的差异图和基于对数比算子的差异图进行融合处理,生成所述目标差异图。The difference map fusion submodule is used to fuse the difference map based on the difference operator and the difference map based on the logarithmic ratio operator to generate the target difference map.

在一些实施例中,所述差异图融合子模块,具体用于:In some embodiments, the difference map fusion submodule is specifically used to:

对基于差值算子的差异图和基于对数比算子的差异图进行加权融合处理,生成所述目标差异图。A weighted fusion process is performed on the difference map based on the difference operator and the difference map based on the logarithmic ratio operator to generate the target difference map.

In some embodiments, the apparatus further comprises:

a sample image group acquisition module, configured to acquire a plurality of sample SAR image groups, wherein each sample SAR image group comprises sample SAR images of two time phases and a sample label, the sample label indicating the actual image region in which the sample SAR image of the later time phase among the two has changed relative to the sample SAR image of the earlier time phase;

a sample feature map extraction module, configured to perform feature extraction on the sample SAR images of the two time phases in each sample SAR image group through the feature extraction network to be trained, so as to obtain sample feature maps of the two time phases;

a sample difference map generation module, configured to perform difference analysis on the sample feature maps of the two time phases corresponding to each sample SAR image group, so as to generate a sample difference map corresponding to each sample SAR image group;

an image region determination module, configured to determine, according to the sample difference map corresponding to each sample SAR image group, the image region in which the sample SAR image of the later time phase in each sample SAR image group has changed relative to the sample SAR image of the earlier time phase;

a loss information determination module, configured to determine loss information according to the image region in which the sample SAR image of the later time phase in each sample SAR image group has changed relative to the sample SAR image of the earlier time phase, and the sample label;

a training module, configured to train the feature extraction network to be trained according to the loss information.

In some embodiments, the sample image group acquisition module comprises:

an original image acquisition submodule, configured to acquire original SAR images of a plurality of time phases;

a data augmentation submodule, configured to perform data augmentation processing on the original SAR images of the plurality of time phases to obtain sample SAR images of the plurality of time phases;

a sample image group construction submodule, configured to construct a plurality of sample SAR image groups from the sample SAR images of the plurality of time phases.

In some embodiments, the loss information determination module is specifically configured to:

determine set similarity loss information and cross-entropy loss information respectively, according to the image region in which the sample SAR image of the later time phase in each sample SAR image group has changed relative to the sample SAR image of the earlier time phase, and the sample label;

determine mixed loss information according to the set similarity loss information and the cross-entropy loss information.

Fig. 8 shows an electronic device according to an embodiment of the present invention. As shown in Fig. 8, the electronic device 800 comprises: at least one processor 810, and a memory 820 communicatively connected to the at least one processor 810, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so as to cause the at least one processor to perform the method described above.

Specifically, the memory 820 and the processor 810 are connected via a bus 830 and may be a general-purpose memory and processor, which are not specifically limited here. When the processor 810 runs the computer program stored in the memory 820, it can perform the operations and functions described with reference to Figs. 1 to 7 in the embodiments of the present invention.

In an embodiment of the present invention, the electronic device 800 may include, but is not limited to: a personal computer, a server computer, a workstation, a desktop computer, a laptop computer, a notebook computer, a mobile computing device, a smartphone, a tablet computer, a personal digital assistant (PDA), a handheld device, a messaging device, a wearable computing device, and the like.

An embodiment of the present invention further provides a storage medium on which a computer program is stored; when the program is executed by a processor, the method described above is implemented. For the specific implementation, reference may be made to the method embodiments, which will not be repeated here. Specifically, a system or apparatus equipped with a storage medium may be provided, the storage medium storing software program code that implements the functions of any of the above embodiments, and the computer or processor of the system or apparatus reads and executes the instructions stored in the storage medium. Since the program code read from the storage medium can itself implement the functions of any of the above embodiments, the machine-readable code and the storage medium storing it constitute a part of the present invention.

Storage media include, but are not limited to, floppy disks, hard disks, magneto-optical disks, optical disks, magnetic tapes, non-volatile memory cards, and ROM. The program code may also be downloaded from a server computer or from the cloud via a communication network.

It should be noted that not all of the steps and modules in the above processes and system structures are necessary; some steps and units may be omitted according to actual needs. The execution order of the steps is not fixed and may be determined as needed. The device structures described in the above embodiments may be physical structures or logical structures: a module or unit may be implemented by a single physical entity, by several physical entities separately, or jointly by components in several independent devices.

Although the embodiments of the present invention have been disclosed above, they are not limited to the applications listed in the specification and the embodiments, and can be applied to various fields suitable for the embodiments of the present invention. Additional modifications will readily occur to those skilled in the art. Therefore, without departing from the general concept defined by the claims and their equivalents, the embodiments of the present invention are not limited to the specific details and the illustrations shown and described herein.

Claims (7)

1. A SAR image change detection method, comprising:
acquiring target SAR images of two time phases;
performing feature extraction on the target SAR images of the two time phases through a feature extraction network to obtain target feature maps of the two time phases;
the feature extraction network comprises:
a first feature extraction module, used for performing feature extraction on the target SAR image of each time phase to obtain a first feature map of each time phase;
a second feature extraction module, comprising a channel attention module and a spatial attention module; the channel attention module is used for processing the first feature map of each time phase based on an attention mechanism, determining the channel attention weight of the first feature map of each time phase, and weighting the first feature map of each time phase based on the channel attention weight to obtain a second feature map of each time phase; the spatial attention module is used for processing the second feature map of each time phase based on an attention mechanism, determining the spatial attention weight of the second feature map of each time phase, and weighting the second feature map of each time phase based on the spatial attention weight to obtain a target feature map of each time phase;
performing difference analysis on the target feature maps of the two time phases to generate a target difference map;
determining, according to the target difference map, a target image region in which the target SAR image of the later time phase among the target SAR images of the two time phases has changed relative to the target SAR image of the earlier time phase;
the first feature extraction module comprises a first convolution layer, a plurality of residual modules and a second convolution layer which are sequentially connected;
the first convolution layer is used for performing dimension-reduction processing on the target SAR image of each time phase;
each residual module is used for performing feature extraction on its input features, wherein the input features of the first residual module are the output features of the first convolution layer;
the second convolution layer is used for performing dimension-raising processing on the output features of the last residual module to obtain a first feature map with the same dimension as the target SAR image of each time phase;
the performing difference analysis on the target feature maps of the two time phases to generate a target difference map comprises:
calculating the target feature maps of the two time phases based on a difference operator and a logarithmic ratio operator respectively, to obtain a difference map based on the difference operator and a difference map based on the logarithmic ratio operator;
performing fusion processing on the difference map based on the difference operator and the difference map based on the logarithmic ratio operator to generate the target difference map;
the performing fusion processing on the difference map based on the difference operator and the difference map based on the logarithmic ratio operator to generate the target difference map comprises:
performing weighted fusion processing on the difference map based on the difference operator and the difference map based on the logarithmic ratio operator to generate the target difference map.
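The difference-map computation and weighted fusion recited in claim 1 can be sketched numerically as follows; the fusion weight `alpha` and the offset `eps` that keeps the logarithm defined are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def fused_difference_map(f1, f2, alpha=0.5, eps=1.0):
    """Fuse a difference-operator map with a log-ratio-operator map.

    f1, f2 : target feature maps of the two time phases (same shape).
    alpha  : illustrative fusion weight; the claim only requires a
             weighted fusion, not a specific value.
    eps    : assumed offset so the log-ratio is defined at zero values.
    """
    d_sub = np.abs(f1 - f2)                          # difference-operator map
    d_log = np.abs(np.log((f2 + eps) / (f1 + eps)))  # log-ratio-operator map
    return alpha * d_sub + (1.0 - alpha) * d_log     # weighted fusion

# Toy 2x2 feature maps: unchanged pixels yield zero in the fused map.
f1 = np.array([[1.0, 4.0], [2.0, 2.0]])
f2 = np.array([[1.0, 1.0], [8.0, 2.0]])
d = fused_difference_map(f1, f2)
```

Pixels where both operators agree that nothing changed come out exactly zero, while changed pixels receive a positive fused response.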
2. The SAR image change detection method of claim 1, wherein the channel attention module comprises a first maximum pooling layer, a first average pooling layer, a third convolution layer, a first fusion layer, a first sigmoid activation function and a first output layer, wherein the first maximum pooling layer and the first average pooling layer are connected in parallel to the input end of the third convolution layer, and the third convolution layer, the first fusion layer, the first sigmoid activation function and the first output layer are sequentially connected;
the first maximum pooling layer and the first average pooling layer are respectively used for performing maximum pooling processing and average pooling processing in the spatial dimension on the first feature map of each time phase;
the third convolution layer is used for performing convolution processing on the output features of the first maximum pooling layer and the first average pooling layer respectively, to obtain two channel attention weight matrices;
the first fusion layer is used for adding the two channel attention weight matrices to obtain a total channel attention weight matrix;
the first sigmoid activation function is used for activating the total channel attention weight matrix to obtain the channel attention weights;
the first output layer is used for weighting the first feature map of each time phase based on the channel attention weights to obtain the second feature map of each time phase.
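A minimal NumPy sketch of the channel attention path of claim 2 (spatial max and average pooling, a shared convolution, addition, sigmoid, channel-wise weighting). Modelling the third convolution layer as a single `(C, C)` weight matrix `w` is an assumed simplification for illustration, not the patented implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w):
    """Channel attention over a (C, H, W) feature map.

    w : (C, C) matrix standing in for the shared third convolution
        layer of claim 2 (an illustrative simplification).
    """
    c = x.shape[0]
    max_pool = x.reshape(c, -1).max(axis=1)   # spatial max pooling  -> (C,)
    avg_pool = x.reshape(c, -1).mean(axis=1)  # spatial average pooling -> (C,)
    fused = w @ max_pool + w @ avg_pool       # two weight matrices, added
    weights = sigmoid(fused)                  # channel attention weights
    return x * weights[:, None, None]         # weighted second feature map

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))  # toy first feature map, 4 channels
w = np.eye(4)                       # assumed identity weights for the demo
y = channel_attention(x, w)
```

Because the sigmoid keeps every channel weight in (0, 1), the output never exceeds the input in magnitude; channels with stronger pooled responses are suppressed less.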
3. The SAR image change detection method of claim 1, wherein the spatial attention module comprises a second maximum pooling layer, a second average pooling layer, a second fusion layer, a fourth convolution layer, a second sigmoid activation function and a second output layer, wherein the second maximum pooling layer and the second average pooling layer are connected in parallel to the input end of the second fusion layer, and the second fusion layer, the fourth convolution layer, the second sigmoid activation function and the second output layer are sequentially connected;
the second maximum pooling layer and the second average pooling layer are respectively used for performing maximum pooling processing and average pooling processing in the channel dimension on the second feature map of each time phase;
the second fusion layer is used for vector-splicing the output features of the second maximum pooling layer and the second average pooling layer;
the fourth convolution layer is used for performing convolution processing on the spliced output features to obtain a spatial attention weight matrix;
the second sigmoid activation function is used for activating the spatial attention weight matrix to obtain the spatial attention weights;
the second output layer is used for weighting the second feature map of each time phase based on the spatial attention weights to obtain the target feature map of each time phase.
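The spatial attention path of claim 3 can be sketched the same way: channel-wise max and average pooling, splicing into a 2-channel stack, a convolution collapsing it to one channel, then a sigmoid. Modelling the fourth convolution layer as a 1x1 kernel, i.e. a 2-element weight vector `k`, is an assumed simplification.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(x, k):
    """Spatial attention over a (C, H, W) feature map.

    k : (2,) vector standing in for the fourth convolution layer,
        which maps the 2-channel [max-pool; avg-pool] stack to one
        channel (an illustrative 1x1-kernel simplification of claim 3).
    """
    max_pool = x.max(axis=0)                  # channel-wise max pooling  -> (H, W)
    avg_pool = x.mean(axis=0)                 # channel-wise average pooling
    stacked = np.stack([max_pool, avg_pool])  # vector splicing -> (2, H, W)
    weights = sigmoid(k[0] * stacked[0] + k[1] * stacked[1])  # spatial weights
    return x * weights[None, :, :]            # weighted target feature map

rng = np.random.default_rng(1)
x = rng.standard_normal((3, 5, 5))  # toy second feature map
k = np.array([1.0, 1.0])            # assumed kernel weights for the demo
y = spatial_attention(x, k)
```

As with the channel branch, the sigmoid bounds every spatial weight in (0, 1), so the attention only rescales, never amplifies, the feature map.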
4. The SAR image change detection method of claim 1, further comprising:
acquiring a plurality of sample SAR image groups, wherein each sample SAR image group comprises sample SAR images of two time phases and a sample label, and the sample label is used for indicating the actual image region in which the sample SAR image of the later time phase among the two has changed relative to the sample SAR image of the earlier time phase;
performing feature extraction on the sample SAR images of the two time phases in each sample SAR image group through a feature extraction network to be trained, to obtain sample feature maps of the two time phases;
performing difference analysis on the sample feature maps of the two time phases corresponding to each sample SAR image group, to generate a sample difference map corresponding to each sample SAR image group;
determining, according to the sample difference map corresponding to each sample SAR image group, the image region in which the sample SAR image of the later time phase in each sample SAR image group has changed relative to the sample SAR image of the earlier time phase;
determining loss information according to the image region in which the sample SAR image of the later time phase in each sample SAR image group has changed relative to the sample SAR image of the earlier time phase, and the sample label;
and training the feature extraction network to be trained according to the loss information.
5. The SAR image change detection method of claim 4, wherein the acquiring a plurality of sample SAR image groups comprises:
acquiring original SAR images of a plurality of time phases;
performing data enhancement processing on the original SAR images of the plurality of time phases to obtain sample SAR images of the plurality of time phases;
and constructing a plurality of sample SAR image groups according to the sample SAR images of the plurality of time phases.
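Claim 5 only recites "data enhancement processing" without naming operations; the flips and 90-degree rotation below are common, assumed choices for SAR patches, shown purely as an illustration.

```python
import numpy as np

def augment(img):
    """Simple geometric augmentations for a SAR image patch.

    The specific operations are assumptions; the claim does not
    prescribe which enhancement operations are used.
    """
    return [
        img,             # original patch
        np.flipud(img),  # vertical flip
        np.fliplr(img),  # horizontal flip
        np.rot90(img),   # 90-degree rotation
    ]

img = np.arange(16.0).reshape(4, 4)  # toy 4x4 patch
samples = augment(img)
```

Each original patch yields four sample images here, multiplying the sample SAR image groups available for training.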
6. The SAR image change detection method of claim 4, wherein the determining loss information according to the image region in which the sample SAR image of the later time phase in each sample SAR image group has changed relative to the sample SAR image of the earlier time phase, and the sample label, comprises:
determining set similarity loss information and cross-entropy loss information respectively, according to the image region in which the sample SAR image of the later time phase in each sample SAR image group has changed relative to the sample SAR image of the earlier time phase, and the sample label;
and determining mixed loss information according to the set similarity loss information and the cross-entropy loss information.
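A set similarity loss is commonly realised as a Dice-style loss; the sketch below mixes such a term with binary cross-entropy under an assumed equal weighting `w=0.5`, which the claim does not specify.

```python
import numpy as np

def mixed_loss(pred, target, w=0.5, eps=1e-7):
    """Mix of a set-similarity (Dice-style) loss and cross-entropy.

    pred   : predicted change probabilities in (0, 1).
    target : binary ground-truth change map (the sample label).
    w      : illustrative mixing weight; the claim only requires that
             both loss terms contribute to the mixed loss.
    """
    inter = (pred * target).sum()
    dice = 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    bce = -np.mean(target * np.log(pred + eps)
                   + (1 - target) * np.log(1 - pred + eps))
    return w * dice + (1.0 - w) * bce

# Toy 2x2 change map: a near-perfect and a maximally wrong prediction.
target = np.array([[1.0, 0.0], [0.0, 1.0]])
good = np.clip(target, 0.01, 0.99)
bad = np.clip(1 - target, 0.01, 0.99)
```

The mixed loss rewards overlap with the labelled change region (Dice term) while keeping per-pixel probability calibration (cross-entropy term).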
7. A SAR image change detection apparatus, comprising:
a target image acquisition module, used for acquiring target SAR images of two time phases;
a target feature map extraction module, used for performing feature extraction on the target SAR images of the two time phases through a feature extraction network to obtain target feature maps of the two time phases;
the feature extraction network comprises:
a first feature extraction module, used for performing feature extraction on the target SAR image of each time phase to obtain a first feature map of each time phase;
a second feature extraction module, comprising a channel attention module and a spatial attention module; the channel attention module is used for processing the first feature map of each time phase based on an attention mechanism, determining the channel attention weight of the first feature map of each time phase, and weighting the first feature map of each time phase based on the channel attention weight to obtain a second feature map of each time phase; the spatial attention module is used for processing the second feature map of each time phase based on an attention mechanism, determining the spatial attention weight of the second feature map of each time phase, and weighting the second feature map of each time phase based on the spatial attention weight to obtain a target feature map of each time phase;
a target difference map generation module, used for performing difference analysis on the target feature maps of the two time phases to generate a target difference map;
a target image region determination module, used for determining, according to the target difference map, a target image region in which the target SAR image of the later time phase among the target SAR images of the two time phases has changed relative to the target SAR image of the earlier time phase;
the first feature extraction module comprises a first convolution layer, a plurality of residual modules and a second convolution layer which are sequentially connected; the first convolution layer is used for performing dimension-reduction processing on the target SAR image of each time phase; each residual module is used for performing feature extraction on its input features, wherein the input features of the first residual module are the output features of the first convolution layer; the second convolution layer is used for performing dimension-raising processing on the output features of the last residual module to obtain a first feature map with the same dimension as the target SAR image of each time phase;
the target difference map generation module comprises:
a difference map generation submodule, used for calculating the target feature maps of the two time phases based on a difference operator and a logarithmic ratio operator respectively, to obtain a difference map based on the difference operator and a difference map based on the logarithmic ratio operator;
a difference map fusion submodule, used for performing fusion processing on the difference map based on the difference operator and the difference map based on the logarithmic ratio operator to generate the target difference map;
the difference map fusion submodule is specifically configured to:
perform weighted fusion processing on the difference map based on the difference operator and the difference map based on the logarithmic ratio operator to generate the target difference map.
CN202310101309.9A 2023-01-28 2023-01-28 SAR image change detection method and device Active CN116012364B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310101309.9A CN116012364B (en) 2023-01-28 2023-01-28 SAR image change detection method and device


Publications (2)

Publication Number Publication Date
CN116012364A CN116012364A (en) 2023-04-25
CN116012364B true CN116012364B (en) 2024-01-16

Family

ID=86024952



Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116524206B (en) * 2023-06-30 2023-10-03 深圳须弥云图空间科技有限公司 Target image identification method and device
CN117173104B (en) * 2023-08-04 2024-04-16 山东大学 Low-altitude unmanned aerial vehicle image change detection method and system
CN117494765B (en) * 2023-10-23 2025-02-07 昆明理工大学 A twin network and method for change detection of remote sensing images with ultra-high spatial resolution
CN117745688B (en) * 2023-12-25 2024-06-14 中国科学院空天信息创新研究院 A multi-scale SAR image change detection visualization system, electronic device and storage medium

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111539316A (en) * 2020-04-22 2020-08-14 中南大学 High-resolution remote sensing image change detection method based on double attention twin network
WO2021000906A1 (en) * 2019-07-02 2021-01-07 五邑大学 Sar image-oriented small-sample semantic feature enhancement method and apparatus
CN112488025A (en) * 2020-12-10 2021-03-12 武汉大学 Double-temporal remote sensing image semantic change detection method based on multi-modal feature fusion
CN113034471A (en) * 2021-03-25 2021-06-25 重庆大学 SAR image change detection method based on FINCH clustering
CN113378686A (en) * 2021-06-07 2021-09-10 武汉大学 Two-stage remote sensing target detection method based on target center point estimation
CN113420662A (en) * 2021-06-23 2021-09-21 西安电子科技大学 Remote sensing image change detection method based on twin multi-scale difference feature fusion
CN113536929A (en) * 2021-06-15 2021-10-22 南京理工大学 SAR image target detection method under complex scene
CN113567984A (en) * 2021-07-30 2021-10-29 长沙理工大学 A method and system for detecting small artificial targets in SAR images
CN113743383A (en) * 2021-11-05 2021-12-03 航天宏图信息技术股份有限公司 SAR image water body extraction method and device, electronic equipment and storage medium
WO2022000426A1 (en) * 2020-06-30 2022-01-06 中国科学院自动化研究所 Method and system for segmenting moving target on basis of twin deep neural network
CN114119621A (en) * 2021-11-30 2022-03-01 云南电网有限责任公司输电分公司 SAR remote sensing image water area segmentation method based on depth coding and decoding fusion network
CN114283120A (en) * 2021-12-01 2022-04-05 武汉大学 End-to-end multi-source heterogeneous remote sensing image change detection method based on domain self-adaptation
CN114494870A (en) * 2022-01-21 2022-05-13 山东科技大学 A dual-phase remote sensing image change detection method, model building method and device
CN114841924A (en) * 2022-04-11 2022-08-02 中国人民解放军战略支援部队航天工程大学 Unsupervised change detection method for heterogeneous remote sensing image
CN114841319A (en) * 2022-04-29 2022-08-02 哈尔滨工程大学 Multispectral image change detection method based on multi-scale self-adaptive convolution kernel
CN114926746A (en) * 2022-05-25 2022-08-19 西北工业大学 SAR image change detection method based on multi-scale differential feature attention mechanism
CN115187861A (en) * 2022-07-13 2022-10-14 哈尔滨理工大学 Hyperspectral image change detection method and system based on depth twin network
CN115331087A (en) * 2022-10-11 2022-11-11 水利部交通运输部国家能源局南京水利科学研究院 Remote sensing image change detection method and system based on fusion of regional semantics and pixel features
CN115457390A (en) * 2022-09-13 2022-12-09 中国人民解放军国防科技大学 Remote sensing image change detection method, device, computer equipment and storage medium


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A Deep Multiscale Pyramid Network Enhanced With Spatial-Spectral Residual Attention for Hyperspectral Image Change Detection;Yufei Yang等;《IEEE Transactions on Geoscience and Remote Sensing》;第60卷;1-13 *
A deeply supervised image fusion network for change detection in high resolution bi-temporal remote sensing images;Chenxiao Zhang等;《ISPRS Journal of Photogrammetry and Remote Sensing》;第166卷;183-200 *
Attention-Based Adaptive Spectral-Spatial Kernel ResNet for Hyperspectral Image Classification;Swalpa Kumar Roy等;《IEEE Transactions on Geoscience and Remote Sensing》;第59卷(第9期);7831-7843 *
Feature Decomposition-Optimization-Reorganization Network for Building Change Detection in Remote Sensing Images;Yuanxin Ye等;《Remote Sens》;第14卷(第3期);1-18 *
Change Detection Based on Global Structure Difference and Local Attention; Mei Jie et al.; Scientia Sinica Informationis; Vol. 52, No. 11; 2058-2074 *


Similar Documents

Publication Publication Date Title
CN116012364B (en) SAR image change detection method and device
CN110927706B (en) Convolutional neural network-based radar interference detection and identification method
CN107808138B (en) Communication signal identification method based on FasterR-CNN
CN112270285B (en) SAR image change detection method based on sparse representation and capsule network
CN116563285B (en) Focus characteristic identifying and dividing method and system based on full neural network
CN112634369A (en) Space and or graph model generation method and device, electronic equipment and storage medium
CN113887656B (en) Hyperspectral image classification method combining deep learning and sparse representation
Chen et al. Change detection algorithm for multi-temporal remote sensing images based on adaptive parameter estimation
CN115457492A (en) Target detection method and device, computer equipment and storage medium
CN115311502A (en) A small sample scene classification method for remote sensing images based on multi-scale dual-stream architecture
CN108986083B (en) SAR image change detection method based on threshold optimization
CN116167936A (en) A method and device for removing hill shadows for flood monitoring
CN115995042A (en) Video SAR moving target detection method and device
CN107392863A (en) SAR image change detection based on affine matrix fusion Spectral Clustering
Lin et al. Pointer generation and main scale detection for occluded meter reading based on generative adversarial network
CN114240940B (en) Cloud and cloud shadow detection method and device based on remote sensing image
CN108509835B (en) PolSAR image ground object classification method based on DFIC super-pixels
CN111275680B (en) SAR image change detection method based on Gabor convolution network
CN117058498B (en) Training method of segmentation map evaluation model, and segmentation map evaluation method and device
CN109697474A (en) Synthetic Aperture Radar images change detecting method based on iteration Bayes
CN113327221B (en) Image synthesis method, device, electronic equipment and medium for fusing ROI (region of interest)
CN115171225A (en) Image detection method and training method of image detection model
CN107423765A (en) Based on sparse coding feedback network from the upper well-marked target detection method in bottom
CN113762478A (en) Radio frequency interference detection model, radio frequency interference detection method and device
Ullah et al. High-Throughput Spike Detection in Greenhouse Cultivated Grain Crops with Attention Mechanisms-Based Deep Learning Models

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant