
CN111985543A - Construction method, classification method and system of hyperspectral image classification model - Google Patents


Info

Publication number
CN111985543A
CN111985543A (application CN202010781786.0A)
Authority
CN
China
Prior art keywords: hyperspectral image, network, classification, sample set, module
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010781786.0A
Other languages
Chinese (zh)
Other versions
CN111985543B (en)
Inventor
彭进业
闫怀平
王珺
张二磊
罗迒哉
彭敏
赵万青
李斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xianyang Xinhepu Photoelectric Co ltd
Northwestern University
Original Assignee
Xianyang Xinhepu Photoelectric Co ltd
Northwestern University
Application filed by Xianyang Xinhepu Photoelectric Co ltd and Northwestern University
Priority to CN202010781786.0A
Publication of CN111985543A
Application granted
Publication of CN111985543B
Legal status: Active


Classifications

    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N3/045: Combinations of networks
    • G06N3/08: Learning methods


Abstract

The invention provides a method for constructing a hyperspectral image classification model, together with an image classification method and system. The construction method comprises: acquiring a hyperspectral image and standardizing it so that it conforms to a Gaussian distribution with zero mean and unit variance; performing category calibration to obtain a calibrated sample set and a label set; and constructing neighborhood data cubes, which are divided in a set proportion into a training sample set, a validation sample set, and a test sample set. The classification method adopts a cascaded network structure that extracts spectral attention features first and spatial attention features second, so that the network focuses on spatial regions of interest and meaningful spectral bands. By combining spectral and spatial features, the abundant spectral and spatial information is fully exploited; higher classification accuracy can therefore be obtained, and the resulting classification images are visually more continuous.

Description

Construction method, classification method, and system for a hyperspectral image classification model

Technical Field

The invention belongs to the technical field of hyperspectral image processing, and in particular relates to a method for constructing a hyperspectral image classification model, a classification method, and a system.

Background Art

A hyperspectral image is usually regarded as a three-dimensional data cube: in addition to the two spatial dimensions of width and height, it often contains spectral information in hundreds of bands. Besides the visible spectrum, these bands cover the ultraviolet and near-infrared ranges. With such rich information, targets of interest that are invisible in visible light can be discovered, providing strong technical support for humans to understand and change the world; research on hyperspectral images is therefore growing steadily.

Hyperspectral image classification is a research hotspot within hyperspectral imaging. It refers to classifying an image pixel by pixel using both the spatial and the spectral information of the hyperspectral image. Early methods include classification based on support vector machines, dimensionality reduction by principal component analysis followed by classification, and methods based on sparse representation. Because these methods cannot make good use of the deep information in the image, they often yield low classification accuracy.

In recent years, methods based on deep learning have achieved great success in image classification and object detection, and many deep-learning-based hyperspectral image classification methods have emerged as a result. Commonly used deep networks include autoencoders, deep belief networks, and convolutional neural networks, as well as networks that combine residual blocks with transfer learning. Although existing deep-learning-based methods can achieve good classification accuracy, most require many training samples, have high network complexity and considerable depth, and contain many parameters to train. In practical applications, however, the labeled samples of hyperspectral images are often limited, so the classification accuracy of existing methods still needs to be improved, as does their robustness across different datasets.

Summary of the Invention

In view of the above deficiencies and defects of the prior art, the purpose of the present invention is to provide a method for constructing a hyperspectral image classification model, an image classification method, and a system, solving the problem in the prior art that insufficient feature extraction from hyperspectral images leads to low classification accuracy.

To solve the above technical problems, the present invention is realized with the following technical solution: a method for constructing a hyperspectral image classification model, the construction method being as follows:

Step 1: obtain a hyperspectral image and standardize it so that it conforms to a Gaussian distribution with zero mean and unit variance.

Step 2: perform category calibration on the ground-object classes corresponding to the pixels of the standardized hyperspectral image, obtaining a calibrated sample set and a label set.

Step 3: construct a neighborhood data cube centered on the pixel of each sample in the calibrated sample set. The dimension of the neighborhood data cube is w*w*B, where w*w denotes the spatial neighborhood of the pixel and B is the initial number of bands of the hyperspectral image. Divide the constructed neighborhood data cubes in a set proportion to obtain a training sample set, a validation sample set, and a test sample set.

Step 4: use the neighborhood data cube of each sample in the training and validation sample sets as the network input and the label set as the expected output, train the network, and obtain a trained network model.

The network comprises, connected in series: a network input layer, a data dimensionality-reduction module, a spectral attention module, a secondary data dimensionality-reduction module, a spatial attention module, a pooling module, and a fully connected network.

The data dimensionality-reduction module comprises two convolution modules connected in sequence; each convolution module comprises a convolution layer, a batch normalization layer, and an activation function layer connected in sequence.

The spectral attention module comprises a plurality of spectral attention feature layers connected in sequence.

The secondary data dimensionality-reduction module comprises one convolution layer.

The spatial attention module comprises a plurality of spatial attention feature layers connected in sequence.

If a boundary problem is encountered when constructing the neighborhood data cubes in Step 3, the boundary is extended by zero padding.

The specific method of Step 4 is as follows:

Use the neighborhood data cube of each sample in the training and validation sample sets as the network input and the label set as the expected output. Compute the cross-entropy loss between the true labels of the training samples and the network's predicted outputs, take minimizing this loss as the optimization objective, and optimize the network with the adaptive moment estimation (Adam) algorithm. Test the resulting network model on the validation set; if its classification accuracy is better than that of the previous model, update the stored model parameters. When the iterations end, keep the model with the highest classification accuracy on the validation set as the trained network model, and evaluate its classification performance on the test sample set.

A hyperspectral image classification method comprises the following steps:

Obtain the set of hyperspectral images to be classified, input it into the network model, classify each pixel of the hyperspectral image with the network model, and obtain the final classification result.

The network model is the hyperspectral image classification model constructed by the construction method described above.

A hyperspectral image classification system comprises an image acquisition module, a data preprocessing module, and a classification module.

The image acquisition module is used to acquire hyperspectral images.

The data preprocessing module is used to standardize the hyperspectral image so that the input image conforms to a Gaussian distribution with zero mean and unit variance.

The classification module classifies the hyperspectral image with the aforementioned method and outputs the classification result.

A method for visualizing spatial attention features comprises the following steps:

Step A: from the constructed hyperspectral image classification model and the constructed hyperspectral image classification system, identify the index L of the network layer containing the spatial attention features, together with the index L-1 of its preceding layer.

Step B: retrieve the layer-(L-1) network parameters M1 and the layer-L network parameters M2 from the hyperspectral image classification model. Taking a hyperspectral image as input, use parameters M1 to obtain the image before attention feature extraction and parameters M2 to obtain the image after attention feature extraction, thereby visualizing the attention features.

The hyperspectral image classification model is the model constructed by the construction method described above.

The hyperspectral image classification system is the image classification system described above.

Compared with the prior art, the present invention has the following technical effects:

(I) In hyperspectral image classification, using spectral features alone suffers from "the same object with different spectra" and "different objects with the same spectrum", while using spatial features alone discards rich spectral information; both cases lead to lower classification accuracy. The hyperspectral image classification technique provided by the present invention therefore adopts a cascaded network structure that extracts spectral features first and spatial features second, combining the two kinds of features and making full use of the rich spectral and spatial information. Higher classification accuracy can thus be obtained, and the resulting classification images are visually more continuous.

(II) The attention mechanism adopted in the present invention lets the network assign different weights to different spectral bands and spatial regions, which better matches the human visual mechanism and yields more meaningful features even when the network is not very deep. This property makes the network especially advantageous with small training sample sets.

(III) The present invention adopts an end-to-end network structure in which the three modules of data dimensionality reduction, spectral attention feature extraction, and spatial attention feature extraction uniformly use 3D CNNs, fully exploiting the three-dimensional nature of hyperspectral images. Compared with prior art that applies different techniques and networks to dimensionality reduction, spectral feature extraction, and spatial feature extraction, the network structure of the present invention is simple and easy to implement.

(IV) The present invention visualizes the spatial attention features in the deep network. Previous deep learning networks resemble a black box, so the role of each layer is not easy to understand; by visualizing the spatial attention features, the function of the attention mechanism can be understood more intuitively.

Description of the Drawings

Fig. 1 is a framework diagram of the hyperspectral image classification model of the present invention;

Fig. 2 is a structural diagram of the spectral attention module of the present invention;

Fig. 3 is a structural diagram of the spatial attention module of the present invention;

Fig. 4 is a schematic diagram of the construction of the hyperspectral image classification model of the present invention;

Fig. 5 is a comparison of the experimental results provided in an embodiment of the present invention;

Fig. 6 shows the experimental results of an embodiment of the spatial attention feature visualization proposed by the present invention.

The specific content of the present invention is explained in further detail below in conjunction with the embodiments.

Detailed Description of the Embodiments

Specific embodiments of the present invention are given below. It should be noted that the present invention is not limited to the following specific embodiments; all equivalent transformations made on the basis of the technical solution of this application fall within the protection scope of the present invention.

First, the technical terms appearing in the present invention are explained to help better understand the technical content of this application:

Attention mechanism: the attention mechanism is inspired by the human visual system. When humans observe a scene, they do not pay equal attention to every region; instead, they devote more attention to the target regions of interest, so as to obtain more detailed information about the targets that matter while suppressing other useless information.

BN layer: batch normalization layer, an effective regularization method. Like the activation function, convolution, fully connected, and pooling layers, it is a layer of the network. The BN layer addresses the shift in the distribution of intermediate-layer data during training, improves the gradients flowing through the network, greatly increases training speed, and reduces the network's strong dependence on initialization.
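As an illustration of the normalization a BN layer performs at training time, here is a minimal NumPy sketch of batch normalization; the learned scale and shift parameters (gamma, beta) are omitted for brevity:

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Normalize a batch of activations (batch axis 0) to zero mean and
    unit variance per feature; learned gamma/beta are omitted."""
    mu = x.mean(axis=0, keepdims=True)     # per-feature batch mean
    var = x.var(axis=0, keepdims=True)     # per-feature batch variance
    return (x - mu) / np.sqrt(var + eps)   # eps keeps the division stable
```

In a full implementation the normalized output would additionally be scaled and shifted by trainable parameters, and running statistics would be tracked for inference.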

The convolution layer is used many times in building the deep network of the present invention. At different stages of the network, different parameters are needed to realize different functions when performing the convolution operation. The main parameters used by the different convolution layers in the present invention are described below:

Number of filters: the number of feature maps obtained after the convolution operation; the present invention uniformly uses 24.

Convolution kernel size: the present invention uses 3D CNNs, so the kernel size is expressed as a three-dimensional vector; the three main functional modules of the network use different kernel sizes, detailed in the embodiments.

Stride: specifies the convolution stride in the three dimensions. In the present invention only the first convolution layer of the data dimensionality-reduction module uses (1, 1, 2) (see Fig. 1); the remaining convolution layers uniformly use the default value (1, 1, 1).

Dilation rate: specifies the dilation rate for dilated convolution. In the present invention only the convolution layers of the spectral attention module use (1, 1, 2) (see Fig. 2); the remaining convolution layers uniformly use the default value (1, 1, 1).

Boundary handling: whether the boundary is padded during the convolution operation. In the present invention both the spectral attention module and the spatial attention module adopt a boundary-padding strategy; the remaining convolution layers perform no boundary padding by default.

In the specific implementation, parameters not specially described take their default values.
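The effect of these kernel, stride, dilation, and padding settings on the band dimension can be checked with the standard convolution output-length formula. The helper below is generic and not part of the patent; the padding value pad = 2 for the dilated attention convolution is an assumption corresponding to "same" padding:

```python
import math

def conv_out_len(length, kernel, stride=1, pad=0, dilation=1):
    """Output length along one axis of a (dilated, strided) convolution."""
    return math.floor((length + 2 * pad - dilation * (kernel - 1) - 1) / stride) + 1

# First dimensionality-reduction convolution: kernel 7, stride 2, no padding.
print(conv_out_len(204, 7, stride=2))          # 99 for the 204-band embodiment
# Second convolution: kernel 7, stride 1, no padding.
print(conv_out_len(99, 7))                     # 93, i.e. TK in the embodiment
# Spectral attention convolution: kernel 3, dilation 2, assumed padding 2.
print(conv_out_len(93, 3, pad=2, dilation=2))  # 93, length preserved
```

With pad = dilation*(kernel-1)/2, the dilated attention convolutions preserve the band dimension, consistent with the boundary-padding strategy described above.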

Embodiment 1:

A method for constructing a hyperspectral image classification model, the construction method being as follows:

Step 1: obtain a hyperspectral image whose initial number of bands is B, and standardize it so that it conforms to a Gaussian distribution with zero mean and unit variance.
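Step 1 can be sketched with NumPy. The patent does not specify whether the zero-mean, unit-variance statistics are computed per band or over the whole cube, so per-band statistics are an assumption here:

```python
import numpy as np

def standardize(cube):
    """Standardize an H x W x B hyperspectral cube to zero mean and
    unit variance per spectral band (per-band statistics are assumed)."""
    mean = cube.mean(axis=(0, 1), keepdims=True)  # 1 x 1 x B band means
    std = cube.std(axis=(0, 1), keepdims=True)    # 1 x 1 x B band standard deviations
    return (cube - mean) / (std + 1e-12)          # small epsilon guards against flat bands
```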

Step 2: perform category calibration on the ground-object classes corresponding to the pixels of the standardized hyperspectral image, obtaining a calibrated sample set and a label set.

In this embodiment, the dataset used is a scene over the Salinas Valley, California, collected by the AVIRIS sensor with 224 bands at high spatial resolution (3.7 m/pixel). The covered area consists of 512 rows of 217 samples each. The 20 water-absorption bands [108-112], [154-167], and 224 are discarded, and 204 bands are actually used, so the dimension of the dataset actually used is 512×217×204. It includes classes such as vegetables, bare soil, and vineyards. Sixteen classes are calibrated, with 54129 calibrated samples in total. The specific class names and numbers of calibrated samples are given in Table 1.

Table 1. The Salinas dataset and its division into training, validation, and test sets

No.    Class                        Calibrated   Train   Val   Test
1      Brocoli_green_weeds_1        2009         21      21    1967
2      Brocoli_green_weeds_2        3726         38      38    3650
3      Fallow                       1976         20      20    1936
4      Fallow_rough_plow            1394         14      14    1366
5      Fallow_smooth                2678         27      27    2624
6      Stubble                      3959         40      40    3879
7      Celery                       3579         36      36    3507
8      Grapes_untrained             11271        113     113   11045
9      Soil_vinyard_develop         6203         63      63    6077
10     Corn_senesced_green_weeds    3278         33      33    3212
11     Lettuce_romaine_4wk          1068         11      11    1046
12     Lettuce_romaine_5wk          1927         20      20    1887
13     Lettuce_romaine_6wk          916          10      10    896
14     Lettuce_romaine_7wk          1070         11      11    1048
15     Vinyard_untrained            7268         73      73    7122
16     Vinyard_vertical_trellis     1807         19      19    1769
Total                               54129        549     549   53031

Step 3: construct a neighborhood data cube centered on the pixel of each sample in the calibrated sample set. The dimension of the neighborhood data cube is 7*7*204, where 7*7 denotes the spatial neighborhood of the center pixel and 204 is the initial number of bands. Divide the constructed neighborhood data cubes in a set proportion to obtain a training sample set, a validation sample set, and a test sample set.

The training-sample ratio parameter is set to train_ratio = 1%, i.e., relatively few samples are used for training, so this embodiment measures the classification performance of the present invention with a small training sample set; the number of validation samples equals the number of training samples.

During training, the batch size is batchsize = 50 and the number of iterations is epoch = 100.

The calibrated samples (pixels) are divided in the specified proportion: the training sample set accounts for 1%, the validation sample set accounts for 1%, and the remaining 98% of the calibrated samples form the test sample set.
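A sketch of this sample division: the patent draws 1% of the calibrated samples per class (549 in total in Table 1), whereas this simplified version splits a flat index list, so the exact per-class counts are not reproduced and the rounding convention is an assumption:

```python
import numpy as np

def split_indices(n_samples, train_ratio=0.01, seed=0):
    """Shuffle sample indices and split them into train / validation / test,
    with the validation set the same size as the training set."""
    rng = np.random.RandomState(seed)
    idx = rng.permutation(n_samples)
    n_train = int(round(n_samples * train_ratio))
    return idx[:n_train], idx[n_train:2 * n_train], idx[2 * n_train:]
```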

Step 4: use the neighborhood data cube of each sample in the training and validation sample sets as the network input and the label set as the expected output, train the network, and obtain a trained network model.

Use the neighborhood data cube of each sample in the training and validation sample sets as the network input and the label set as the expected output. Compute the cross-entropy loss between the true labels of the training samples and the network's predicted outputs, take minimizing this loss as the optimization objective, and optimize the network with the adaptive moment estimation (Adam) algorithm. Test the resulting network model on the validation set; if its classification accuracy is better than that of the previous model, update the stored model parameters. When the iterations end, keep the model with the highest classification accuracy on the validation set as the trained network model, and evaluate its classification performance on the test sample set.
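The validation-based checkpointing rule of Step 4 can be isolated as follows; `train_one_epoch` and `evaluate` are hypothetical placeholders standing in for the actual Adam/cross-entropy training pass and the validation-accuracy computation:

```python
def select_best_model(train_one_epoch, evaluate, epochs=100):
    """Train for a fixed number of epochs and keep the parameters that
    achieve the highest classification accuracy on the validation set."""
    best_acc, best_params = -1.0, None
    for _ in range(epochs):
        params = train_one_epoch()   # one optimization pass (Adam on cross-entropy)
        acc = evaluate(params)       # accuracy on the validation sample set
        if acc > best_acc:           # save only when validation accuracy improves
            best_acc, best_params = acc, params
    return best_params, best_acc
```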

If a boundary problem is encountered when building the neighborhood cube of a calibrated sample, the boundary is extended with zero padding to avoid out-of-bounds access.
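The zero-padded neighborhood-cube extraction can be sketched as follows (NumPy; an odd window size w is assumed):

```python
import numpy as np

def neighborhood_cube(image, row, col, w=7):
    """Extract the w x w x B cube centered on pixel (row, col) of an
    H x W x B image, zero-padding the spatial borders as needed."""
    r = w // 2
    padded = np.pad(image, ((r, r), (r, r), (0, 0)), mode="constant")
    return padded[row:row + w, col:col + w, :]  # (row, col) maps to the cube center
```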

所述的网络包括依次串联的设置的网络输入层、数据降维模块、光谱注意力模块、二次数据降维模块、空间注意力机制模块、池化模块和全连接网络;The network includes a network input layer, a data dimensionality reduction module, a spectral attention module, a secondary data dimensionality reduction module, a spatial attention mechanism module, a pooling module and a fully connected network that are set in series in sequence;

网络模型的输入是7*7*204的数据立方体,其对应标签是该立方体中心像素对应的类别标签,其中7*7代表中心像素的空间邻域,204代表高光谱图像的初始波段数;The input of the network model is a 7*7*204 data cube, and its corresponding label is the class label corresponding to the center pixel of the cube, where 7*7 represents the spatial neighborhood of the center pixel, and 204 represents the initial number of bands of the hyperspectral image;

对输入的7*7*204的数据立方体进行光谱维(第3维度)上的第一个卷积操作,卷积核尺寸为1*1*7,卷积操作的步长straids=2,提取数据的第一层浅层特征,并且使得数据的光谱维度降低;Perform the first convolution operation on the spectral dimension (3rd dimension) of the input 7*7*204 data cube, the size of the convolution kernel is 1*1*7, the step size of the convolution operation is straids=2, extract The first layer of shallow features of the data, and the spectral dimension of the data is reduced;

对降维后的数据在第3维度上进行第2个卷积操作,卷积核尺寸为1*1*7,卷积时不进行边界填充,只对有效数据进行卷积,这样可以提取第二层浅层特征,并且使得数据的光谱维度再次降低,该层的输出作为光谱注意力模块的输入进行光谱维深度特征的提取;The second convolution operation is performed on the data after dimension reduction on the third dimension. The size of the convolution kernel is 1*1*7. No boundary padding is performed during convolution, and only valid data is convolved, so that the first convolution can be extracted. The second layer of shallow features, and the spectral dimension of the data is reduced again, the output of this layer is used as the input of the spectral attention module to extract the spectral dimension depth features;

所述的数据降维模块包括依次连接的两个卷积模块,所述的卷积模块包括依次连接的卷积层、BN层和激活函数层;对输入的7*7*204的数据立方体进行两次3维卷积,第一次卷积在光谱维上(第3维度)设置大于1的步长,大幅降维;卷积操作均为三维卷积,卷积核尺寸为1*1*7,步长为straids=(1,1,2),提取数据的第一层浅层特征,并且使得数据的光谱维度降低为(int(B-7+1)/2),int表示向下取整;The data dimensionality reduction module includes two convolution modules connected in sequence, and the convolution module includes a convolution layer, a BN layer and an activation function layer connected in sequence; the input 7*7*204 data cube is processed. Two 3-dimensional convolutions, the first convolution sets a step size greater than 1 in the spectral dimension (the third dimension), which greatly reduces the dimension; the convolution operations are all three-dimensional convolutions, and the size of the convolution kernel is 1*1* 7. The step size is straids=(1,1,2), extract the first layer of shallow features of the data, and reduce the spectral dimension of the data to (int(B-7+1)/2), int means downward Rounding;

对上述降维后的数据在光谱维进行第2个卷积操作,卷积核尺寸为1*1*7,这样可以提取第二层浅层特征,并且使得数据的光谱维度再次降低为(int(B-7+1)/2-7+1),令TK=int(B-7+1)/2-7+1,该输出作为光谱注意力模块的输入进行深度光谱特征的提取;所述的卷积操作后都紧随一个BN层和一个ReLu激活层。Perform the second convolution operation on the spectral dimension of the above-mentioned dimensionally reduced data, and the size of the convolution kernel is 1*1*7, so that the second layer of shallow features can be extracted, and the spectral dimension of the data can be reduced to (int again) (B-7+1)/2-7+1), let TK=int(B-7+1)/2-7+1, the output is used as the input of the spectral attention module to extract deep spectral features; so The convolution operations described above are followed by a BN layer and a ReLu activation layer.

The spectral attention module comprises a plurality of spectral attention feature layers connected in sequence. A 1*1*3 convolution kernel is used to extract spectral features from the input data; during convolution, a dilated convolution with a dilation rate of 2 is applied along the third dimension, so that fewer convolution kernels cover a larger receptive field.

A hyperbolic tangent transform is first applied to the extracted spectral features, which are then converted by a softmax activation function into weight values between 0 and 1; each weight value represents the degree to which the corresponding spectral band region needs attention.

The obtained weight values are multiplied element-wise with the input values to produce one layer of spectral attention features, after which a batch normalization (BN) layer and a ReLU activation layer are added to yield the output of the first spectral attention layer.

The output of the first spectral attention layer is used as the input of the next layer, and the above operations are repeated to obtain the output of the second spectral attention layer, and so on; the outputs of the multi-layer spectral attention network obtained in this way represent spectral attention features at different depths.

The multi-layer spectral attention features are then summed; the result fuses spectral attention features from different depths, which helps improve the classification accuracy of the image.
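A minimal numpy sketch of this tanh-softmax-reweight scheme, with random values standing in for the 1*1*3 dilated-convolution features (in the actual network these are learned):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spectral_attention_layer(x, rng):
    """One attention layer: features -> tanh -> softmax weights -> reweighted input."""
    feats = rng.standard_normal(x.shape)         # placeholder for conv features
    weights = softmax(np.tanh(feats), axis=-1)   # weights in (0, 1) over the spectral axis
    return x * weights                           # element-wise reweighting of the input

rng = np.random.default_rng(0)
x = rng.standard_normal((7, 7, 93))              # 7*7*TK input, here TK = 93
outs, h = [], x
for _ in range(3):                               # three stacked attention layers
    h = spectral_attention_layer(h, rng)
    outs.append(h)
fused = sum(outs)                                # fuse attention features from all depths
```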

The secondary data dimensionality reduction module comprises one convolution layer; this second convolution applies no boundary padding in the spectral dimension and performs a small dimensionality reduction. The output of the multi-layer spectral attention network has dimensions 7*7*TK; a three-dimensional convolution with kernel size (1,1,TK) converts the third dimension of the data to 1, i.e. concentrates all spectral features into a single band, yielding result data of dimensions 7*7*1 in preparation for the subsequent spatial feature extraction.
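Because a (1,1,TK) valid convolution spans the entire spectral axis, it amounts to a dot product along that axis; a sketch with random placeholder values (the real kernel is learned):

```python
import numpy as np

rng = np.random.default_rng(0)
TK = 93
x = rng.standard_normal((7, 7, TK))   # output of the spectral attention stack
kernel = rng.standard_normal(TK)      # a single 1*1*TK convolution kernel

# A (1,1,TK) 'valid' convolution is a dot product along the spectral axis:
y = np.tensordot(x, kernel, axes=([2], [0]))[..., np.newaxis]
print(y.shape)                        # prints: (7, 7, 1)
```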

The spatial attention mechanism module comprises a plurality of spatial attention feature layers connected in sequence.

A 3-D convolution operation is performed on the input data, using a 3*3*1 convolution kernel to extract spatial features.

A hyperbolic tangent transform is first applied to the extracted spatial features, which are then converted by a softmax activation function into weight values between 0 and 1. These values represent the degree to which different spatial regions (or pixels) need attention under the attention mechanism, and are referred to as the weights of the spatial attention features.

The weight values obtained above are multiplied pixel-wise with the input values of the extracted spatial features to produce one layer of spatial attention features, after which a BN layer and a ReLU activation layer are added to yield the output of the first spatial attention layer.

The output of the first spatial attention layer is used as the input of the next layer, and the above operation is repeated to obtain the output of the second spatial attention layer, and so on; the outputs of the multi-layer spatial attention network represent spatial attention features at different depths.

The multi-layer spatial attention features are summed, fusing spatial attention features from different depths, which helps improve the classification accuracy of the image. The convolution operations are all three-dimensional, with a kernel size of 3*3*1.
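The spatial weighting step can be sketched the same way as the spectral one, with random values standing in for the 3*3*1 convolution features; here the softmax is taken over all spatial positions, so the weights form a distribution over pixels:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
x = rng.standard_normal((7, 7, 1))      # spatial input from the previous stage
feats = rng.standard_normal((7, 7))     # stand-in for 3*3*1 conv features

# tanh then softmax over all pixels -> per-pixel attention weights in (0, 1)
w = softmax(np.tanh(feats).ravel()).reshape(7, 7, 1)
attended = x * w                        # pixel-wise reweighting
```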

Average pooling is performed on the above result, and the pooled result is fed into a fully connected network to obtain the final classification result.
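A sketch of this pooling-and-classification head; the class count of 16 and the random weights are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(2)
feat = rng.standard_normal((7, 7, 1))

pooled = feat.mean(axis=(0, 1))            # global average pooling -> shape (1,)
n_classes = 16                             # assumed label count for illustration
W = rng.standard_normal((n_classes, 1))    # fully connected layer (learned in practice)
b = rng.standard_normal(n_classes)

probs = softmax(W @ pooled + b)            # class probabilities
pred = int(np.argmax(probs))               # predicted class for this pixel
```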

Further, all convolution operations are three-dimensional, and each convolution operation is immediately followed by a batch normalization (BN) layer and a ReLU activation layer.

This embodiment also discloses a hyperspectral image classification method, comprising the following steps:

Obtain the hyperspectral image set to be classified, input it into the network model, and classify each pixel of the hyperspectral image with the network model to obtain the final classification result.

The network model is the hyperspectral image classification model constructed by the construction method described above.

Three indicators are mainly used to evaluate classification quality: OA (Overall Accuracy), AA (Average Accuracy), and the Kappa coefficient. They respectively measure the percentage of correctly classified pixels overall, the average of the per-class accuracies, and the proportional reduction in error of the classification relative to a completely random classification; for all three, higher is better. Table 2 compares the results of the present invention with those of the other methods. Each entry is the mean and variance over 10 randomized experiments: a higher mean indicates better classification performance, and a smaller variance indicates a more stable classification method. The comparison shows that the present invention achieves good results on all indicators.

Table 2 Comparison of classification results obtained by different methods

| Metric | SSFCN | HbridSN | SSRN | CDSCN | Spe_AN | Spa_AN | This invention |
|--------|-------|---------|------|-------|--------|--------|----------------|
| AA | 95.13±0.93 | 97.84±0.46 | 97.94±0.44 | 97.54±1.08 | 97.41±2.68 | 98.69±0.27 | 98.91±0.34 |
| Kappa | 94.75±1.23 | 96.38±0.54 | 95.11±1.59 | 94.91±1.59 | 96.57±1.55 | 97.77±0.44 | 98.17±0.44 |
| OA | 95.29±1.1 | 96.75±0.49 | 95.61±1.41 | 95.43±1.43 | 96.92±1.39 | 98±0.39 | 98.35±0.39 |
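The three metrics can be computed from a confusion matrix as in the minimal sketch below (the 2-class matrix is illustrative only, not data from the patent):

```python
import numpy as np

def classification_scores(cm):
    """OA, AA and Kappa from a confusion matrix (rows: true class, cols: predicted)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    oa = np.trace(cm) / n                          # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))     # mean of per-class accuracies
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / n**2  # expected chance agreement
    kappa = (oa - pe) / (1 - pe)                   # error reduction vs. random labeling
    return oa, aa, kappa

cm = [[48, 2],
      [5, 45]]
oa, aa, kappa = classification_scores(cm)          # 0.93, 0.93, 0.86
```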

The classification results are shown in Figure 5. In Figure 5, (a) is a false-color image of the hyperspectral image and (b) is the ground-truth (calibration) image; (c) is the image classified by the SSFCN (spectral-spatial fully convolutional network) method, (d) by the HbridSN (hybrid spectral convolutional neural network) method, (e) by the SSRN (spectral-spatial residual network) method, (f) by the CDSCN (cascaded dual-scale crossover network) method, (g) by the Spe_AN (spectral-attention-only feature network) method, (h) by the Spa_AN (spatial-attention-only feature network) method, and (i) by the classification method provided by the present invention. It can clearly be seen that the classification result of the method provided by the present invention is continuous, with fewer noise points, achieving a better classification result.

A visualization method for spatial attention features, performed according to the following steps:

Step A: based on the constructed hyperspectral image classification model and the constructed hyperspectral image classification system, denote the saved best network model by M, the index of the network layer where the spatial attention features are extracted by L, and the index of its preceding network layer by L-1.

Step B: retrieve the layer-(L-1) network parameters M1 and the layer-L network parameters M2 from the hyperspectral image classification model. Take the hyperspectral image as the input image; use the parameters M1 to obtain the image before attention feature extraction (see Figure 6(a)), and use the parameters M2 to obtain the image after attention feature extraction, thus visualizing the attention features (see Figure 6(b)). It can be seen from the figure that the attention features focus more on feature-rich regions of the image, such as the boundary lines or inflection points between different regions.

Claims (5)

1. A construction method of a hyperspectral image classification model is characterized by comprising the following steps:
acquiring a hyperspectral image, and standardizing the hyperspectral image so that it conforms to a Gaussian distribution with zero mean and unit variance;
secondly, performing category calibration on the ground object categories corresponding to the pixels of the standardized hyperspectral image to obtain a calibration sample set and a label set;
constructing a neighborhood data cube by taking the pixel of each sample in the calibration sample set as a center, wherein the dimension of the neighborhood data cube is w*w*B, w represents the spatial neighborhood of the pixel, and B is the initial band number of the hyperspectral image; and dividing the constructed neighborhood data cubes according to a set proportion to obtain a training sample set, a verification sample set and a test sample set;
taking a neighborhood data cube corresponding to each sample in the training sample set and the verification sample set as the input of the network, taking the label set as the expected output, and training the network to obtain a trained network model;
the network comprises a network input layer, a data dimension reduction module, a spectrum attention module, a secondary data dimension reduction module, a space attention mechanism module, a pooling module and a full-connection network which are sequentially arranged in a cascade manner;
the data dimension reduction module comprises two convolution modules which are connected in sequence, and each convolution module comprises a convolution layer, a batch normalization layer and an activation function layer which are connected in sequence;
the spectral attention module comprises a plurality of spectral attention feature layers which are connected in sequence;
the secondary data dimension reduction module comprises a convolution layer;
the space attention mechanism module comprises a plurality of space attention characteristic layers which are connected in sequence.
2. The construction method according to claim 1, wherein, when a boundary problem is encountered during the construction of the neighborhood data cube in the third step, the boundary is extended by using a 0 boundary filling method.
3. The construction method according to claim 1, wherein the concrete method of the fourth step is as follows:
taking a neighborhood data cube corresponding to each sample in a training sample set and a verification sample set as the input of a network, taking a label set as expected output, calculating the cross entropy loss between a real label of the training sample and a network prediction output value, taking the minimized cross entropy loss as an optimization target, optimizing the network by adopting an adaptive moment estimation algorithm, testing an obtained network model on the verification set, updating network model parameters until iteration is finished if the obtained classification accuracy is superior to that of the previous network model, storing the model with the highest classification accuracy on the verification set, obtaining the trained network model, and evaluating the classification performance of the trained network model by adopting the test sample set.
4. A hyperspectral image classification method is characterized by comprising the following steps:
acquiring a hyperspectral image set to be classified, inputting the hyperspectral image set into a network model, and classifying each pixel of the hyperspectral image by using the network model to obtain a final classification result;
the network model is a hyperspectral image classification model constructed by the hyperspectral image classification model construction method according to any one of claims 1 to 3.
5. A hyperspectral image classification system is characterized by comprising an image acquisition module, a data preprocessing module and a classification module;
the image acquisition module is used for acquiring a hyperspectral image;
the data preprocessing module is used for standardizing the hyperspectral image so that the input image conforms to a Gaussian distribution with zero mean and unit variance;
the classification module classifies the hyperspectral images by adopting the method of claim 4 and outputs a classification result.
CN202010781786.0A 2020-08-06 2020-08-06 Construction method, classification method and system of hyperspectral image classification model Active CN111985543B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010781786.0A CN111985543B (en) 2020-08-06 2020-08-06 Construction method, classification method and system of hyperspectral image classification model


Publications (2)

Publication Number Publication Date
CN111985543A true CN111985543A (en) 2020-11-24
CN111985543B CN111985543B (en) 2024-05-10

Family

ID=73445227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010781786.0A Active CN111985543B (en) 2020-08-06 2020-08-06 Construction method, classification method and system of hyperspectral image classification model

Country Status (1)

Country Link
CN (1) CN111985543B (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886870A (en) * 2018-12-29 2019-06-14 西北大学 Remote sensing image fusion method based on two-channel neural network
CN112580670A (en) * 2020-12-31 2021-03-30 中国人民解放军国防科技大学 Hyperspectral-spatial-spectral combined feature extraction method based on transfer learning
CN112801133A (en) * 2020-12-30 2021-05-14 核工业北京地质研究院 Spectrum identification and classification method based on keras model
CN113420798A (en) * 2021-06-09 2021-09-21 中国石油大学(华东) Hyperspectral image classification based on twin spectral attention consistency
CN113435253A (en) * 2021-05-31 2021-09-24 西安电子科技大学 Multi-source image combined urban area ground surface coverage classification method
CN114255348A (en) * 2021-09-27 2022-03-29 海南电网有限责任公司电力科学研究院 Insulator aging and fouling spectrum classification method for improving B _ CNN
CN114264626A (en) * 2021-12-18 2022-04-01 复旦大学 A Nondestructive Quantitative Analysis Method of Fabric Based on Time Series Residual Network
CN114550305A (en) * 2022-03-04 2022-05-27 合肥工业大学 Human body posture estimation method and system based on Transformer
CN114663821A (en) * 2022-05-18 2022-06-24 武汉大学 Real-time nondestructive detection method for product quality based on video hyperspectral imaging technology
CN114778485A (en) * 2022-06-16 2022-07-22 中化现代农业有限公司 Variety identification method and system based on near infrared spectrum and attention mechanism network
CN114863223A (en) * 2022-06-30 2022-08-05 中国自然资源航空物探遥感中心 Hyperspectral weak supervision classification method combining denoising autoencoder and scene enhancement
CN114972889A (en) * 2022-06-29 2022-08-30 江南大学 Wheat seed classification method based on data enhancement and attention mechanism
CN115034362A (en) * 2022-06-06 2022-09-09 北京沃东天骏信息技术有限公司 Calibrator generation method and device and robustness-resisting calibration method and device
CN115909052A (en) * 2022-10-26 2023-04-04 杭州师范大学 Hyperspectral remote sensing image classification method based on hybrid convolutional neural network
CN117079019A (en) * 2023-08-10 2023-11-17 中国农业大学 Light-weight transducer hyperspectral image classification method and device
CN117474893A (en) * 2023-11-14 2024-01-30 国网新疆电力有限公司乌鲁木齐供电公司 An intelligent detection method for electrical test rings based on semantic feature template matching
CN117829731A (en) * 2023-12-29 2024-04-05 光谷技术有限公司 Military equipment warehouse management method and system based on RFID and AI vision
CN117893537A (en) * 2024-03-14 2024-04-16 深圳市普拉托科技有限公司 Decoloring detection method and system for tray surface material
CN118135341A (en) * 2024-05-07 2024-06-04 武汉理工大学三亚科教创新园 Hyperspectral image classification method based on cascaded spatial cross-attention network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060251324A1 (en) * 2004-09-20 2006-11-09 Bachmann Charles M Method for image data processing
WO2017215284A1 (en) * 2016-06-14 2017-12-21 山东大学 Gastrointestinal tumor microscopic hyper-spectral image processing method based on convolutional neural network
CN109376804A (en) * 2018-12-19 2019-02-22 中国地质大学(武汉) A classification method of hyperspectral remote sensing images based on attention mechanism and convolutional neural network
CN110222773A (en) * 2019-06-10 2019-09-10 西北工业大学 Based on the asymmetric high spectrum image small sample classification method for decomposing convolutional network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Jing; YUAN Xiguo: "Hyperspectral remote sensing image classification algorithm based on few-shot learning", Journal of Liaocheng University (Natural Science Edition), no. 06 *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886870A (en) * 2018-12-29 2019-06-14 西北大学 Remote sensing image fusion method based on two-channel neural network
CN112801133A (en) * 2020-12-30 2021-05-14 核工业北京地质研究院 Spectrum identification and classification method based on keras model
CN112580670A (en) * 2020-12-31 2021-03-30 中国人民解放军国防科技大学 Hyperspectral-spatial-spectral combined feature extraction method based on transfer learning
CN112580670B (en) * 2020-12-31 2022-04-19 中国人民解放军国防科技大学 Hyperspectral-spatial-spectral combined feature extraction method based on transfer learning
CN113435253A (en) * 2021-05-31 2021-09-24 西安电子科技大学 Multi-source image combined urban area ground surface coverage classification method
CN113420798B (en) * 2021-06-09 2024-09-17 中国石油大学(华东) Hyperspectral image classification method based on twin spectral attention consistency
CN113420798A (en) * 2021-06-09 2021-09-21 中国石油大学(华东) Hyperspectral image classification based on twin spectral attention consistency
CN114255348A (en) * 2021-09-27 2022-03-29 海南电网有限责任公司电力科学研究院 Insulator aging and fouling spectrum classification method for improving B _ CNN
CN114255348B (en) * 2021-09-27 2023-01-10 海南电网有限责任公司电力科学研究院 Insulator aging and fouling spectrum classification method for improving B _ CNN
CN114264626A (en) * 2021-12-18 2022-04-01 复旦大学 A Nondestructive Quantitative Analysis Method of Fabric Based on Time Series Residual Network
CN114550305A (en) * 2022-03-04 2022-05-27 合肥工业大学 Human body posture estimation method and system based on Transformer
CN114663821A (en) * 2022-05-18 2022-06-24 武汉大学 Real-time nondestructive detection method for product quality based on video hyperspectral imaging technology
CN115034362A (en) * 2022-06-06 2022-09-09 北京沃东天骏信息技术有限公司 Calibrator generation method and device and robustness-resisting calibration method and device
CN114778485A (en) * 2022-06-16 2022-07-22 中化现代农业有限公司 Variety identification method and system based on near infrared spectrum and attention mechanism network
CN114972889A (en) * 2022-06-29 2022-08-30 江南大学 Wheat seed classification method based on data enhancement and attention mechanism
CN114972889B (en) * 2022-06-29 2025-07-08 江南大学 Wheat seed classification method based on data enhancement and attention mechanism
CN114863223A (en) * 2022-06-30 2022-08-05 中国自然资源航空物探遥感中心 Hyperspectral weak supervision classification method combining denoising autoencoder and scene enhancement
CN115909052A (en) * 2022-10-26 2023-04-04 杭州师范大学 Hyperspectral remote sensing image classification method based on hybrid convolutional neural network
CN115909052B (en) * 2022-10-26 2025-02-21 杭州师范大学 A hyperspectral remote sensing image classification method based on hybrid convolutional neural network
CN117079019A (en) * 2023-08-10 2023-11-17 中国农业大学 Light-weight transducer hyperspectral image classification method and device
CN117474893A (en) * 2023-11-14 2024-01-30 国网新疆电力有限公司乌鲁木齐供电公司 An intelligent detection method for electrical test rings based on semantic feature template matching
CN117829731A (en) * 2023-12-29 2024-04-05 光谷技术有限公司 Military equipment warehouse management method and system based on RFID and AI vision
CN117893537A (en) * 2024-03-14 2024-04-16 深圳市普拉托科技有限公司 Decoloring detection method and system for tray surface material
CN117893537B (en) * 2024-03-14 2024-05-28 深圳市普拉托科技有限公司 Decoloring detection method and system for tray surface material
CN118135341A (en) * 2024-05-07 2024-06-04 武汉理工大学三亚科教创新园 Hyperspectral image classification method based on cascaded spatial cross-attention network
CN118135341B (en) * 2024-05-07 2024-07-16 武汉理工大学三亚科教创新园 Hyperspectral image classification method based on cascaded spatial cross-attention network

Also Published As

Publication number Publication date
CN111985543B (en) 2024-05-10

Similar Documents

Publication Publication Date Title
CN111985543B (en) Construction method, classification method and system of hyperspectral image classification model
CN110399909B (en) A Hyperspectral Image Classification Method Based on Label-constrained Elastic Net Graph Model
CN110287869B (en) A crop classification method based on deep learning for high-resolution remote sensing images
CN110363215B (en) A method for converting SAR image to optical image based on generative adversarial network
CN113610905B (en) Deep learning remote sensing image registration method based on sub-image matching and its application
CN115909052A (en) Hyperspectral remote sensing image classification method based on hybrid convolutional neural network
CN111652039B (en) Hyperspectral remote sensing ground object classification method based on residual error network and feature fusion module
CN110490799B (en) Super-resolution method of hyperspectral remote sensing image based on self-fusion convolutional neural network
CN111369442A (en) Remote sensing image super-resolution reconstruction method based on fuzzy kernel classification and attention mechanism
CN103150722B (en) The peripheral blood leucocyte edge extracting method that application quaternion division and graph theory are optimized
CN115331104A (en) A method of crop planting information extraction based on convolutional neural network
CN113902622A (en) A Spectral Super-Resolution Method Based on Joint Attention with Deep Priors
CN115410074B (en) Remote sensing image cloud detection method and device
CN113887656B (en) Hyperspectral image classification method combining deep learning and sparse representation
CN114119532B (en) A building change detection method based on remote sensing image and twin neural network
CN112818920A (en) Double-temporal hyperspectral image space spectrum joint change detection method
CN116091833A (en) Attention and Transformer Hyperspectral Image Classification Method and System
CN111612127B (en) Multi-direction information propagation convolution neural network construction method for hyperspectral image classification
CN105760857B (en) A target detection method for hyperspectral remote sensing images
CN113096114B (en) A remote sensing extraction method for high-resolution urban water patches combining morphology and index
CN118781432A (en) A hyperspectral image classification method based on refined spatial-spectral joint feature extraction
CN113762128A (en) A hyperspectral image classification method based on unsupervised learning
CN119169443A (en) A method, system, device and medium for identifying mangrove vegetation
CN116704378A (en) Homeland mapping data classification method based on self-growing convolution neural network
CN115187984A (en) A Multispectral Remote Sensing Image Segmentation Method Based on Band-Location Selection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant