
CN112883852A - Hyperspectral image classification system and method - Google Patents

Hyperspectral image classification system and method

Info

Publication number
CN112883852A
CN112883852A (application number CN202110155143.XA)
Authority
CN
China
Prior art keywords
image
gradient
algorithm
classification
minimum value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110155143.XA
Other languages
Chinese (zh)
Other versions
CN112883852B (en)
Inventor
曹衍龙
刘佳炜
董献瑞
杨将新
曹彦鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Industrial Technology Research Institute of ZJU
Original Assignee
Shandong Industrial Technology Research Institute of ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Industrial Technology Research Institute of ZJU
Priority to CN202110155143.XA
Publication of CN112883852A
Application granted
Publication of CN112883852B
Legal status: Active
Anticipated expiration

Links

Images

Classifications

    All classifications fall under section G (Physics), class G06 (Computing; calculating or counting):
    • G06V20/13 Satellite images (Scenes; Terrestrial scenes)
    • G06F18/211 Selection of the most significant subset of features (Pattern recognition; Design or setup of recognition systems or techniques; Extraction of features in feature space)
    • G06F18/2411 Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06V10/28 Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns (Image preprocessing)
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06N20/10 Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G06N3/006 Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G06N3/126 Evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G06V20/194 Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a hyperspectral image classification method based on an adaptive threshold watershed algorithm and an improved support vector machine: homogeneous regions are obtained by the adaptive threshold watershed algorithm, region features are extracted from them, and classification is then performed by the improved support vector machine algorithm.

Description

Hyperspectral image classification system and method
Technical Field
The invention relates to hyperspectral image processing, in particular to a hyperspectral image classification system and a hyperspectral image classification method.
Background
A hyperspectral remote sensing image can image a target area simultaneously in tens to hundreds of contiguous, finely divided spectral bands spanning the ultraviolet, visible, near-infrared and mid-infrared ranges of the electromagnetic spectrum. It integrates image information and spectral information, which makes detailed image classification possible.
Hyperspectral image classification methods fall into two main categories: pixel-based methods and object-oriented methods. Pixel-based methods extract features and classify pixel by pixel; they achieve high classification accuracy, but because every pixel is processed individually, classification is slow, efficiency is low, and real-time requirements cannot be met. Object-oriented methods first segment the image into homogeneous regions and then extract features from each region for region-level classification. Compared with pixel-based methods, object-oriented methods classify faster and can meet real-time requirements.
Disclosure of Invention
In view of the above defects of the prior art, namely that the pixel-based classification method has low classification efficiency and cannot meet real-time requirements, the technical problem to be solved by the present invention is to provide a hyperspectral image classification method based on an adaptive threshold watershed algorithm and an improved support vector machine: homogeneous regions are obtained by the adaptive threshold watershed algorithm, region features are extracted, and classification is then performed by the improved support vector machine algorithm.
In order to achieve the above object, the present invention provides, in a first aspect, a hyperspectral image classification method, including the steps of:
(1) inputting a hyperspectral image dataset to be classified;
(2) performing an image preprocessing step;
(3) obtaining a segmented image by using an adaptive threshold watershed algorithm;
(4) extracting spectral features and textural features according to the segmented image;
(5) performing feature evaluation through a classification model, removing useless features, and obtaining optimal classification;
the classification model is obtained by determining the optimal kernel function parameter g and the penalty coefficient C of the SVM model by utilizing a training subset and a WOA-GA mixed algorithm.
Further, the step (2) specifically includes the following substeps:
(201) the red wavelength is taken as 700.0 nm, the green wavelength as 546.1 nm and the blue wavelength as 435.8 nm, and the image at each of these wavelengths is obtained by band calculation; the red, green and blue images are then composited into a visible light image and converted into a gray image f. The band calculation is the linear interpolation
H = F + (G - F) × (ρ - ρmin) / (ρmax - ρmin)
where H is the image obtained by the band calculation, ρ is the wavelength to be calculated, ρmin and ρmax are respectively the lower-limit and upper-limit wavelengths closest to ρ, G is the band image corresponding to ρmax, and F is the band image corresponding to ρmin;
(202) filtering noise of a visible light image in a forest region by a bilateral filtering algorithm, and reducing a pseudo local minimum value;
(203) a multi-scale morphological gradient extraction algorithm is then used to extract the gradient image. The multi-scale morphological gradient is obtained by averaging, over all scales i and directions j, the difference between the dilation and the erosion of the image by the structuring element Bij, where f is the original gray image, G(f) is the gradient image, Bij is a family of square structuring elements, i (1 ≤ i ≤ n) is the size factor of the structuring elements, whose size is (2i+1) × (2i+1), and j (1 ≤ j ≤ m) is the shape factor, representing structuring elements in different directions;
structuring elements in 4 directions are selected, namely 0°, 45°, 90° and 135°; the four directional structuring elements of size 3 × 3 are line-shaped elements along these four directions.
Further, in the step (3), firstly, an adaptive threshold extraction algorithm is executed, then an H-minima forced minimum value transformation is executed, and finally a watershed image segmentation is executed, which specifically includes the following sub-steps:
(301) an adaptive threshold extraction algorithm is performed, defined as follows:
H=mean(0<gradmin≤h);
in the formula, gradmin is a local minimum value, and h is a threshold upper limit;
(302) performing the forced minimum transform: the gradient image is marked with the threshold H, i.e. local minima larger than the threshold are retained, giving a binary image that reflects the positions of the valid local minima; the gradient image is then modified by the forced minimum algorithm, in which morphological erosion and dilation operations raise every pixel value smaller than the threshold to the threshold, so that local minima appear only at the marked positions and pseudo local minima are reduced; in this transform, the binary marker image is obtained by marking the gradient image G with the threshold H, and I is the marked gradient image after the forced minimum transform;
(303) executing a watershed algorithm to obtain a segmentation image; the watershed algorithm is a Meyer algorithm;
(304) executing a region merging algorithm based on spectral matching: small-area regions of the initial segmentation image are merged with similar regions. The region similarity evaluation index T is computed from the mean spectral vectors X and Y of the two regions; the first part of the index is the Euclidean distance between X and Y and the second part is the spectral angle between them.
Further, the step (4) specifically includes the following substeps:
(401) obtaining the average spectral characteristics of the region by averaging the reflectance values of all pixels of the image in the same waveband in the region;
(402) extracting texture information by utilizing a gray gradient co-occurrence matrix: extracting a gray level gradient co-occurrence matrix of each pixel point in each image of the visible light wave band, respectively calculating 15 texture features of small gradient advantage, large gradient advantage, gray level distribution nonuniformity, gradient distribution nonuniformity, energy, gray level average, gradient average, gray level variance, gradient variance, correlation, gray level entropy, gradient entropy, mixed entropy, differential moment and inverse differential moment, and taking an average value to obtain the 15-dimensional features.
Further, the step (5) comprises the following specific sub-steps:
(501) ranking all the features by using the ranking index, and evaluating the importance of the features;
(502) removing the features with the least scores to form a new subset;
(503) executing the steps until only one feature exists in the subset, and searching the finally reserved feature according to the classification accuracy of the classification model;
the feature ranking index adopted is the maximum geometric margin ranking index
Rc = |W² - W²(-P)|
where W² and W²(-P) are respectively the squared weight of the SVM model on the current feature subset and after the P-th feature has been removed; W² is computed as
W² = Σi Σj αi αj yi yj K(xi, xj)
where i and j are sample numbers, αi and αj are the solved dual parameters, y is the class label and K is the kernel function.
The invention provides in a second aspect a hyperspectral image classification system comprising the following modules:
the hyperspectral image data set input module is used for inputting a hyperspectral image data set to be classified;
an image preprocessing module for performing an image preprocessing step;
the image segmentation module is used for obtaining a segmented image by using an adaptive threshold watershed algorithm;
the characteristic extraction module is used for extracting spectral characteristics and texture characteristics according to the segmented image;
the classification module is used for performing characteristic evaluation through the classification model, removing useless characteristics and obtaining the optimal classification;
the classification model of the classification module is obtained by determining the optimal kernel function parameter g and the penalty coefficient C of the SVM model by utilizing a training subset and a WOA-GA mixed algorithm.
Further, the image pre-processing module is arranged to perform the steps of:
(701) the red wavelength is taken as 700.0 nm, the green wavelength as 546.1 nm and the blue wavelength as 435.8 nm, and the image at each of these wavelengths is obtained by band calculation; the red, green and blue images are then composited into a visible light image and converted into a gray image f. The band calculation is the linear interpolation
H = F + (G - F) × (ρ - ρmin) / (ρmax - ρmin)
where H is the image obtained by the band calculation, ρ is the wavelength to be calculated, ρmin and ρmax are respectively the lower-limit and upper-limit wavelengths closest to ρ, G is the band image corresponding to ρmax, and F is the band image corresponding to ρmin;
(702) filtering noise of a visible light image in a forest region by a bilateral filtering algorithm, and reducing a pseudo local minimum value;
(703) a multi-scale morphological gradient extraction algorithm is then used to extract the gradient image. The multi-scale morphological gradient is obtained by averaging, over all scales i and directions j, the difference between the dilation and the erosion of the image by the structuring element Bij, where f is the original gray image, G(f) is the gradient image, Bij is a family of square structuring elements, i (1 ≤ i ≤ n) is the size factor of the structuring elements, whose size is (2i+1) × (2i+1), and j (1 ≤ j ≤ m) is the shape factor, representing structuring elements in different directions;
structuring elements in 4 directions are selected, namely 0°, 45°, 90° and 135°; the four directional structuring elements of size 3 × 3 are line-shaped elements along these four directions.
Further, the image segmentation module is arranged to perform the steps of:
firstly, executing an adaptive threshold extraction algorithm, then executing H-minima forced minimum value transformation, and finally executing watershed image segmentation, wherein the method specifically comprises the following steps:
(801) an adaptive threshold extraction algorithm is performed, defined as follows:
H=mean(0<gradmin≤h);
in the formula, gradmin is a local minimum value, and h is the upper threshold limit;
(802) performing the forced minimum transform: the gradient image is marked with the threshold H, i.e. local minima larger than the threshold are retained, giving a binary image that reflects the positions of the valid local minima; the gradient image is then modified by the forced minimum algorithm, in which morphological erosion and dilation operations raise every pixel value smaller than the threshold to the threshold, so that local minima appear only at the marked positions and pseudo local minima are reduced; in this transform, the binary marker image is obtained by marking the gradient image G with the threshold H, and I is the marked gradient image after the forced minimum transform;
(803) executing a watershed algorithm to obtain a segmentation image; the watershed algorithm is a Meyer algorithm;
(804) executing a region merging algorithm based on spectral matching: small-area regions of the initial segmentation image are merged with similar regions. The region similarity evaluation index T is computed from the mean spectral vectors X and Y of the two regions; the first part of the index is the Euclidean distance between X and Y and the second part is the spectral angle between them.
Further, the feature extraction module is arranged to perform the steps of:
(901) obtaining the average spectral characteristics of the region by averaging the reflectance values of all pixels of the image in the same waveband in the region;
(902) extracting texture information by utilizing a gray gradient co-occurrence matrix: extracting a gray level gradient co-occurrence matrix of each pixel point in each image of the visible light wave band, respectively calculating 15 texture features of small gradient advantage, large gradient advantage, gray level distribution nonuniformity, gradient distribution nonuniformity, energy, gray level average, gradient average, gray level variance, gradient variance, correlation, gray level entropy, gradient entropy, mixed entropy, differential moment and inverse differential moment, and taking an average value to obtain the 15-dimensional features.
Further, the classification module is arranged to perform the steps of:
(1001) ranking all the features by using the ranking index, and evaluating the importance of the features;
(1002) removing the features with the least scores to form a new subset;
(1003) executing the steps until only one feature exists in the subset, and searching the finally reserved feature according to the classification accuracy of the classification model;
the feature ranking index adopted is the maximum geometric margin ranking index
Rc = |W² - W²(-P)|
where W² and W²(-P) are respectively the squared weight of the SVM model on the current feature subset and after the P-th feature has been removed; W² is computed as
W² = Σi Σj αi αj yi yj K(xi, xj)
where i and j are sample numbers, αi and αj are the solved dual parameters, y is the class label and K is the kernel function.
The conception, the specific structure and the technical effects of the present invention will be further described with reference to the accompanying drawings to fully understand the objects, the features and the effects of the present invention.
Drawings
FIG. 1 is a flow chart of an adaptive threshold watershed algorithm in a preferred embodiment of the invention;
FIG. 2 is a flow chart of a watershed algorithm in a preferred embodiment of the invention;
FIG. 3 is a flow chart of a region merging algorithm based on a spectrum matching method in a preferred embodiment of the present invention;
FIG. 4 is a flow chart of the WOAGA hybrid algorithm in a preferred embodiment of the present invention;
FIG. 5 is a flow chart of the SVM-RFE based feature selection algorithm in a preferred embodiment of the present invention.
Detailed Description
The technical contents of the preferred embodiments of the present invention will be more clearly and easily understood by referring to the drawings attached to the specification. The present invention may be embodied in many different forms of embodiments and the scope of the invention is not limited to the embodiments set forth herein.
In a specific embodiment of the hyperspectral image classification method according to the invention, the method comprises the following steps
Firstly, inputting a hyperspectral image data set to be classified;
secondly, executing an image preprocessing step, which comprises the following specific steps:
(1) the red wavelength is taken as 700.0 nm, the green wavelength as 546.1 nm and the blue wavelength as 435.8 nm, and the image at each of these wavelengths is obtained by band calculation; the red, green and blue images are then composited into a visible light image and converted into a gray image f. The band calculation is the linear interpolation
H = F + (G - F) × (ρ - ρmin) / (ρmax - ρmin)
where H is the image obtained by the band calculation, ρ is the wavelength to be calculated, ρmin and ρmax are respectively the lower-limit and upper-limit wavelengths closest to ρ, G is the band image corresponding to ρmax, and F is the band image corresponding to ρmin.
(2) Filtering noise of a visible light image in a forest region by a bilateral filtering algorithm, and reducing a pseudo local minimum value;
(3) A multi-scale morphological gradient extraction algorithm is then used to extract the gradient image, which reduces internal texture and yields a better segmentation result. The multi-scale morphological gradient is obtained by averaging, over all scales i and directions j, the difference between the dilation and the erosion of the image by the structuring element Bij, where f is the original gray image, G(f) is the gradient image, Bij is a family of square structuring elements, i (1 ≤ i ≤ n) is the size factor of the structuring elements, whose size is (2i+1) × (2i+1), and j (1 ≤ j ≤ m) is the shape factor, representing structuring elements in different directions. The invention selects structuring elements in 4 directions, namely 0°, 45°, 90° and 135°; the four directional structuring elements of size 3 × 3 are line-shaped elements along these four directions. A sketch of this preprocessing pipeline is given below.
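For illustration only, the following is a minimal Python sketch of one way the preprocessing pipeline above could be implemented; the use of OpenCV, the helper function names, the bilateral-filter parameters and the averaging form of the multi-scale gradient are assumptions of this sketch rather than details taken from the patent.
```python
import numpy as np
import cv2


def interpolate_band(cube, wavelengths, rho):
    """Linearly interpolate the band image at wavelength rho from a
    hyperspectral cube of shape (rows, cols, bands); wavelengths ascending."""
    wavelengths = np.asarray(wavelengths, dtype=float)
    hi = int(np.searchsorted(wavelengths, rho))          # index of rho_max
    lo = max(hi - 1, 0)
    if hi >= len(wavelengths) or lo == hi:
        return cube[:, :, min(lo, len(wavelengths) - 1)].astype(np.float32)
    f = cube[:, :, lo].astype(np.float32)                # band image F (rho_min)
    g = cube[:, :, hi].astype(np.float32)                # band image G (rho_max)
    t = (rho - wavelengths[lo]) / (wavelengths[hi] - wavelengths[lo])
    return f + t * (g - f)                               # H = F + t * (G - F)


def directional_elements(i):
    """Line-shaped structuring elements of size (2i+1) x (2i+1) along the
    0, 45, 90 and 135 degree directions."""
    n = 2 * i + 1
    b0 = np.zeros((n, n), np.uint8)
    b0[i, :] = 1                                         # 0 degrees (horizontal)
    b90 = b0.T.copy()                                    # 90 degrees (vertical)
    b45 = np.eye(n, dtype=np.uint8)[::-1].copy()         # 45 degrees (anti-diagonal)
    b135 = np.eye(n, dtype=np.uint8)                     # 135 degrees (main diagonal)
    return [b0, b45, b90, b135]


def multiscale_gradient(gray, n_scales=3):
    """Average of (dilation - erosion) over scales i and directions j; one
    plausible reading of the multi-scale morphological gradient."""
    gray = gray.astype(np.float32)
    grads = [cv2.dilate(gray, b) - cv2.erode(gray, b)
             for i in range(1, n_scales + 1)
             for b in directional_elements(i)]
    return np.mean(grads, axis=0)


def preprocess(cube, wavelengths):
    r = interpolate_band(cube, wavelengths, 700.0)       # red
    g = interpolate_band(cube, wavelengths, 546.1)       # green
    b = interpolate_band(cube, wavelengths, 435.8)       # blue
    gray = np.dstack([r, g, b]).mean(axis=2).astype(np.float32)  # gray image f
    gray = cv2.bilateralFilter(gray, 5, 25, 25)          # suppress noise and pseudo minima
    return gray, multiscale_gradient(gray)
```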
Thirdly, obtaining a segmented image by using an adaptive threshold watershed algorithm, firstly executing an adaptive threshold extraction algorithm, then executing H-minima forced minimum value transformation, and finally executing watershed image segmentation, wherein a flow chart of the algorithm is shown in figure 1, and the method comprises the following specific steps:
(1) an adaptive threshold extraction algorithm is performed, defined as follows:
H=mean(0<gradmin≤h);
where gradmin is a local minimum value and h is the upper threshold limit. Pseudo local minima caused by noise and internal texture have small values, while effective local minima have large values; if the average of all local minima were used as the threshold H, the large effective minima would inflate H and effective local minima could then be eliminated by the H-minima forced minimum transform, so the threshold is computed only from local minima no larger than h;
(2) The forced minimum transform is performed: the gradient image is marked with the threshold H, i.e. local minima larger than the threshold are retained, giving a binary image that reflects the positions of the valid local minima. The gradient image is then modified by the forced minimum algorithm, in which morphological erosion and dilation operations raise every pixel value smaller than the threshold to the threshold, so that local minima appear only at the marked positions and pseudo local minima are reduced. In this transform, the binary marker image is obtained by marking the gradient image G with the threshold H, and I is the marked gradient image after the forced minimum transform.
(3) The watershed algorithm is executed to obtain the segmented image. The watershed algorithm adopted by the invention is the Meyer algorithm, and its flow is shown in figure 2;
(4) A region merging algorithm based on spectral matching is executed: small-area regions of the initial segmentation image are merged with similar regions; a flow chart of the algorithm is shown in fig. 3. The region similarity evaluation index T is computed from the mean spectral vectors X and Y of the two regions; the first half of the index is the Euclidean distance between X and Y and the second half is the spectral angle between them. A sketch of the adaptive threshold watershed and region merging steps is given below.
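Continuing the sketch, the adaptive threshold watershed of this step could look as follows; the scikit-image based implementation, the interpretation of the marker-selection rule and the product form of the region similarity index are assumptions of the sketch, not details fixed by the patent.
```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import local_minima
from skimage.segmentation import watershed


def adaptive_threshold_watershed(gradient, h):
    """Marker-controlled watershed with an adaptive threshold on local minima.

    gradient : 2-D float morphological-gradient image
    h        : upper limit used when averaging the local-minimum values
    """
    minima = local_minima(gradient)                      # regional minima (boolean mask)
    vals = gradient[minima]
    valid = vals[(vals > 0) & (vals <= h)]
    H = valid.mean() if valid.size else 0.0              # H = mean(0 < gradmin <= h)

    # markers: local minima regarded as valid by the adaptive threshold
    markers, _ = ndi.label(minima & (gradient > H))

    # forced-minimum behaviour: clamp values below H so pseudo minima vanish,
    # then flood only from the retained markers
    return watershed(np.maximum(gradient, H), markers)


def region_similarity(x, y):
    """Similarity index combining the Euclidean distance and the spectral angle
    of two mean spectral vectors (the exact combination used in the patent is
    not reproduced; a product of the two terms is assumed here)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    d = np.linalg.norm(x - y)
    cos = np.clip(x @ y / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12), -1.0, 1.0)
    return d * np.arccos(cos)
```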
And fourthly, extracting spectral features and textural features according to the segmented image, and specifically comprising the following steps:
(1) obtaining the average spectral characteristics of the region by averaging the reflectance values of all pixels of the image in the same waveband in the region;
(2) extracting texture information by utilizing a gray gradient co-occurrence matrix: extracting a gray level gradient co-occurrence matrix of each pixel point in each image of the visible light wave band, respectively calculating 15 texture features of small gradient advantage, large gradient advantage, gray level distribution nonuniformity, gradient distribution nonuniformity, energy, gray level average, gradient average, gray level variance, gradient variance, correlation, gray level entropy, gradient entropy, mixed entropy, differential moment and inverse differential moment, and taking an average value to obtain the 15-dimensional features.
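A minimal sketch of the region feature extraction follows; the quantisation level, the particular subset of the 15 gray-gradient co-occurrence statistics computed, and the formulas used for them are assumptions of the sketch.
```python
import numpy as np


def mean_spectrum(cube, region_mask):
    """Average reflectance of all pixels of the region in every band
    (region_mask is a boolean image, cube has shape (rows, cols, bands))."""
    return cube[region_mask].mean(axis=0)


def glgcm_features(gray, gradient, levels=16):
    """Gray-gradient co-occurrence matrix and a few derived texture statistics."""
    def quantise(img):
        img = img.astype(np.float64)
        img = (img - img.min()) / (np.ptp(img) + 1e-12)
        return np.minimum((img * levels).astype(int), levels - 1)

    g = quantise(gray).ravel()
    q = quantise(gradient).ravel()
    H = np.zeros((levels, levels), np.float64)           # rows: gray, cols: gradient
    np.add.at(H, (g, q), 1.0)                            # co-occurrence counts
    P = H / H.sum()                                      # normalised matrix

    j = np.arange(levels)                                # gradient levels
    small_grad_dominance = (H / (j + 1.0) ** 2).sum() / H.sum()
    large_grad_dominance = (H * j ** 2).sum() / H.sum()
    energy = (P ** 2).sum()
    mixed_entropy = -(P[P > 0] * np.log2(P[P > 0])).sum()
    gray_mean = (P.sum(axis=1) * np.arange(levels)).sum()
    gradient_mean = (P.sum(axis=0) * j).sum()
    return np.array([small_grad_dominance, large_grad_dominance, energy,
                     mixed_entropy, gray_mean, gradient_mean])
```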
And fifthly, feature evaluation is performed based on the WOAGA-SVM and a recursive feature elimination algorithm, useless features are removed, and the optimal classification model is obtained. The Whale Optimization Algorithm (WOA) and the Genetic Algorithm (GA) are two optimization algorithms, each with its own advantages and disadvantages. The invention therefore provides a WOAGA hybrid optimization algorithm that searches for the optimal parameters more effectively; it is used to optimize the Gaussian kernel parameter g and the penalty coefficient C of the support vector machine, and its steps are shown in FIG. 4. The procedure based on WOAGA-SVM and recursive feature elimination (SVM-RFE) is shown in FIG. 5, and its specific steps are as follows:
(1) finding the optimal parameters g and C of the SVM model by using a training subset and the WOAGA algorithm, and training to obtain the optimal classification model (a sketch of this parameter search is given after this list);
(2) ranking all the features by using the ranking index, and evaluating the importance of the features;
(3) culling the least scoring features (the least useful features) to form a new subset;
(4) repeating the above steps until only one feature remains in the subset, and selecting the finally retained features according to the classification accuracy of the model.
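One plausible way to hybridise WOA position updates with GA-style crossover and mutation for searching (C, g) is sketched below; the patent only states that the two algorithms are combined (FIG. 4), so the specific update scheme, the probabilities and the cross-validated-accuracy fitness used here are assumptions.
```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score


def fitness(pos, X, y):
    C, gamma = np.exp(pos)                               # search in log-space
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()


def woaga_search(X, y, pop=10, iters=20, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.uniform(lo, hi, size=(pop, 2))               # rows: [log C, log gamma]
    fit = np.array([fitness(p, X, y) for p in P])
    best, best_fit = P[fit.argmax()].copy(), fit.max()

    for t in range(iters):
        a = 2.0 * (1.0 - t / iters)                      # WOA coefficient, 2 -> 0
        for k in range(pop):
            r1, r2 = rng.random(2)
            A, C_coef = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:
                if abs(A) < 1:                           # encircle the best whale
                    P[k] = best - A * np.abs(C_coef * best - P[k])
                else:                                    # explore around a random whale
                    rand = P[rng.integers(pop)]
                    P[k] = rand - A * np.abs(C_coef * rand - P[k])
            else:                                        # bubble-net spiral update
                l = rng.uniform(-1.0, 1.0)
                P[k] = np.abs(best - P[k]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            if rng.random() < 0.3:                       # GA crossover with the best
                w = rng.random()
                P[k] = w * P[k] + (1 - w) * best
            if rng.random() < 0.1:                       # GA mutation
                P[k] += rng.normal(0.0, 0.5, size=2)
            P[k] = np.clip(P[k], lo, hi)
        fit = np.array([fitness(p, X, y) for p in P])
        if fit.max() > best_fit:
            best, best_fit = P[fit.argmax()].copy(), fit.max()

    C, gamma = np.exp(best)
    return C, gamma
```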
The feature ranking index adopted is the maximum geometric margin ranking index
Rc = |W² - W²(-P)|
where W² and W²(-P) are respectively the squared weight of the SVM model on the current feature subset and after the P-th feature has been removed; W² is computed as
W² = Σi Σj αi αj yi yj K(xi, xj)
where i and j are sample numbers, αi and αj are the solved dual parameters, y is the class label and K is the kernel function.
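The ranking criterion can be computed directly from the dual coefficients of a trained SVM, as the short sketch below shows; the use of scikit-learn, the RBF kernel and the binary-classification shape of dual_coef_ are assumptions of the sketch. In the recursive elimination loop, the feature with the smallest score is removed and the model is retrained on the reduced subset.
```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel


def w_squared(alpha_y, sv, gamma):
    """W2 = sum_i sum_j alpha_i alpha_j y_i y_j K(x_i, x_j), with
    alpha_y = alpha_i * y_i for the support vectors sv."""
    K = rbf_kernel(sv, sv, gamma=gamma)
    return float(alpha_y @ K @ alpha_y)


def rank_features(X, y, C=1.0, gamma=0.1):
    """Return Rc(P) = |W2 - W2 without feature P| for every feature P."""
    clf = SVC(C=C, gamma=gamma, kernel="rbf").fit(X, y)
    alpha_y = clf.dual_coef_.ravel()                     # alpha_i * y_i (binary case)
    sv = X[clf.support_]                                 # support vectors
    w2 = w_squared(alpha_y, sv, gamma)

    scores = np.empty(X.shape[1])
    for p in range(X.shape[1]):
        sv_minus = np.delete(sv, p, axis=1)              # kernel without feature P
        scores[p] = abs(w2 - w_squared(alpha_y, sv_minus, gamma))
    return scores
```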
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (10)

1. A hyperspectral image classification method is characterized by comprising the following steps:
(1) inputting a hyperspectral image dataset to be classified;
(2) performing an image preprocessing step;
(3) obtaining a segmented image by using an adaptive threshold watershed algorithm;
(4) extracting spectral features and textural features according to the segmented image;
(5) performing feature evaluation through a classification model, removing useless features, and obtaining optimal classification;
the classification model is obtained by determining the optimal kernel function parameter g and the penalty coefficient C of the SVM model by utilizing a training subset and a WOA-GA mixed algorithm.
2. The hyperspectral image classification method according to claim 1, wherein the step (2) comprises the following substeps:
(201) the red wavelength is taken as 700.0 nm, the green wavelength as 546.1 nm and the blue wavelength as 435.8 nm, and the image at each of these wavelengths is obtained by band calculation; the red, green and blue images are then composited into a visible light image and converted into a gray image f. The band calculation is the linear interpolation
H = F + (G - F) × (ρ - ρmin) / (ρmax - ρmin)
where H is the image obtained by the band calculation, ρ is the wavelength to be calculated, ρmin and ρmax are respectively the lower-limit and upper-limit wavelengths closest to ρ, G is the band image corresponding to ρmax, and F is the band image corresponding to ρmin;
(202) filtering noise of a visible light image in a forest region by a bilateral filtering algorithm, and reducing a pseudo local minimum value;
(203) a multi-scale morphological gradient extraction algorithm is then used to extract the gradient image. The multi-scale morphological gradient is obtained by averaging, over all scales i and directions j, the difference between the dilation and the erosion of the image by the structuring element Bij, where f is the original gray image, G(f) is the gradient image, Bij is a family of square structuring elements, i (1 ≤ i ≤ n) is the size factor of the structuring elements, whose size is (2i+1) × (2i+1), and j (1 ≤ j ≤ m) is the shape factor, representing structuring elements in different directions;
structuring elements in 4 directions are selected, namely 0°, 45°, 90° and 135°; the four directional structuring elements of size 3 × 3 are line-shaped elements along these four directions.
3. The hyperspectral image classification method according to claim 2, wherein in the step (3), an adaptive threshold extraction algorithm is firstly executed, then an H-minima forced minimum transform is executed, and finally watershed image segmentation is executed, and the method specifically comprises the following sub-steps:
(301) an adaptive threshold extraction algorithm is performed, defined as follows:
H=mean(0<gradmin≤h);
in the formula, gradmin is a local minimum value, and h is a threshold upper limit;
(302) performing the forced minimum transform: the gradient image is marked with the threshold H, i.e. local minima larger than the threshold are retained, giving a binary image that reflects the positions of the valid local minima; the gradient image is then modified by the forced minimum algorithm, in which morphological erosion and dilation operations raise every pixel value smaller than the threshold to the threshold, so that local minima appear only at the marked positions and pseudo local minima are reduced; in this transform, the binary marker image is obtained by marking the gradient image G with the threshold H, and I is the marked gradient image after the forced minimum transform;
(303) executing a watershed algorithm to obtain a segmentation image; the watershed algorithm is a Meyer algorithm;
(304) executing a region merging algorithm based on spectral matching: small-area regions of the initial segmentation image are merged with similar regions. The region similarity evaluation index T is computed from the mean spectral vectors X and Y of the two regions; the first part of the index is the Euclidean distance between X and Y and the second part is the spectral angle between them.
4. The hyperspectral image classification method according to claim 3, wherein the step (4) comprises the following substeps:
(401) obtaining the average spectral characteristics of the region by averaging the reflectance values of all pixels of the image in the same waveband in the region;
(402) extracting texture information by utilizing a gray gradient co-occurrence matrix: extracting a gray level gradient co-occurrence matrix of each pixel point in each image of the visible light wave band, respectively calculating 15 texture features of small gradient advantage, large gradient advantage, gray level distribution nonuniformity, gradient distribution nonuniformity, energy, gray level average, gradient average, gray level variance, gradient variance, correlation, gray level entropy, gradient entropy, mixed entropy, differential moment and inverse differential moment, and taking an average value to obtain the 15-dimensional features.
5. The hyperspectral image classification method according to claim 4, wherein the step (5) comprises the following specific sub-steps:
(501) ranking all the features by using the ranking index, and evaluating the importance of the features;
(502) removing the features with the least scores to form a new subset;
(503) executing the steps until only one feature exists in the subset, and searching the finally reserved feature according to the classification accuracy of the classification model;
the feature ranking index adopted is the maximum geometric margin ranking index
Rc = |W² - W²(-P)|
where W² and W²(-P) are respectively the squared weight of the SVM model on the current feature subset and after the P-th feature has been removed; W² is computed as
W² = Σi Σj αi αj yi yj K(xi, xj)
where i and j are sample numbers, αi and αj are the solved dual parameters, y is the class label and K is the kernel function.
6. A hyperspectral image classification system comprises the following modules:
the hyperspectral image data set input module is used for inputting a hyperspectral image data set to be classified;
an image preprocessing module for performing an image preprocessing step;
the image segmentation module is used for obtaining a segmented image by using an adaptive threshold watershed algorithm;
the characteristic extraction module is used for extracting spectral characteristics and texture characteristics according to the segmented image;
the classification module is used for performing characteristic evaluation through the classification model, removing useless characteristics and obtaining the optimal classification;
the classification model of the classification module is obtained by determining the optimal kernel function parameter g and the penalty coefficient C of the SVM model by utilizing a training subset and a WOA-GA mixed algorithm.
7. The hyperspectral image classification system of claim 6 wherein the image pre-processing module is arranged to perform the steps of:
(701) the red wavelength is taken as 700.0 nm, the green wavelength as 546.1 nm and the blue wavelength as 435.8 nm, and the image at each of these wavelengths is obtained by band calculation; the red, green and blue images are then composited into a visible light image and converted into a gray image f. The band calculation is the linear interpolation
H = F + (G - F) × (ρ - ρmin) / (ρmax - ρmin)
where H is the image obtained by the band calculation, ρ is the wavelength to be calculated, ρmin and ρmax are respectively the lower-limit and upper-limit wavelengths closest to ρ, G is the band image corresponding to ρmax, and F is the band image corresponding to ρmin;
(702) filtering noise of a visible light image in a forest region by a bilateral filtering algorithm, and reducing a pseudo local minimum value;
(703) a multi-scale morphological gradient extraction algorithm is then used to extract the gradient image. The multi-scale morphological gradient is obtained by averaging, over all scales i and directions j, the difference between the dilation and the erosion of the image by the structuring element Bij, where f is the original gray image, G(f) is the gradient image, Bij is a family of square structuring elements, i (1 ≤ i ≤ n) is the size factor of the structuring elements, whose size is (2i+1) × (2i+1), and j (1 ≤ j ≤ m) is the shape factor, representing structuring elements in different directions;
structuring elements in 4 directions are selected, namely 0°, 45°, 90° and 135°; the four directional structuring elements of size 3 × 3 are line-shaped elements along these four directions.
8. The hyperspectral image classification system of claim 7 wherein the image segmentation module is arranged to perform the steps of:
firstly, executing an adaptive threshold extraction algorithm, then executing H-minima forced minimum value transformation, and finally executing watershed image segmentation, wherein the method specifically comprises the following steps:
(801) an adaptive threshold extraction algorithm is performed, defined as follows:
H=mean(0<gradmin≤h);
in the formula, gradmin is a local minimum value, and h is a threshold upper limit;
(802) performing the forced minimum transform: the gradient image is marked with the threshold H, i.e. local minima larger than the threshold are retained, giving a binary image that reflects the positions of the valid local minima; the gradient image is then modified by the forced minimum algorithm, in which morphological erosion and dilation operations raise every pixel value smaller than the threshold to the threshold, so that local minima appear only at the marked positions and pseudo local minima are reduced; in this transform, the binary marker image is obtained by marking the gradient image G with the threshold H, and I is the marked gradient image after the forced minimum transform;
(803) executing a watershed algorithm to obtain a segmentation image; the watershed algorithm is a Meyer algorithm;
(804) executing a region merging algorithm based on spectral matching: small-area regions of the initial segmentation image are merged with similar regions. The region similarity evaluation index T is computed from the mean spectral vectors X and Y of the two regions; the first part of the index is the Euclidean distance between X and Y and the second part is the spectral angle between them.
9. The hyperspectral image classification system of claim 8 wherein the feature extraction module is arranged to perform the steps of:
(901) obtaining the average spectral characteristics of the region by averaging the reflectance values of all pixels of the image in the same waveband in the region;
(902) extracting texture information by utilizing a gray gradient co-occurrence matrix: extracting a gray level gradient co-occurrence matrix of each pixel point in each image of the visible light wave band, respectively calculating 15 texture features of small gradient advantage, large gradient advantage, gray level distribution nonuniformity, gradient distribution nonuniformity, energy, gray level average, gradient average, gray level variance, gradient variance, correlation, gray level entropy, gradient entropy, mixed entropy, differential moment and inverse differential moment, and taking an average value to obtain the 15-dimensional features.
10. The hyperspectral image classification system of claim 9 wherein the classification module is arranged to perform the steps of:
(1001) ranking all the features by using the ranking index, and evaluating the importance of the features;
(1002) removing the features with the least scores to form a new subset;
(1003) executing the steps until only one feature exists in the subset, and searching the finally reserved feature according to the classification accuracy of the classification model;
the feature ranking index adopted is the maximum geometric margin ranking index
Rc = |W² - W²(-P)|
where W² and W²(-P) are respectively the squared weight of the SVM model on the current feature subset and after the P-th feature has been removed; W² is computed as
W² = Σi Σj αi αj yi yj K(xi, xj)
where i and j are sample numbers, αi and αj are the solved dual parameters, y is the class label and K is the kernel function.
CN202110155143.XA 2021-02-04 2021-02-04 Hyperspectral image classification system and method Active CN112883852B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110155143.XA CN112883852B (en) 2021-02-04 2021-02-04 Hyperspectral image classification system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110155143.XA CN112883852B (en) 2021-02-04 2021-02-04 Hyperspectral image classification system and method

Publications (2)

Publication Number Publication Date
CN112883852A true CN112883852A (en) 2021-06-01
CN112883852B CN112883852B (en) 2022-10-28

Family

ID=76057246

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110155143.XA Active CN112883852B (en) 2021-02-04 2021-02-04 Hyperspectral image classification system and method

Country Status (1)

Country Link
CN (1) CN112883852B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113378979A (en) * 2021-07-02 2021-09-10 浙江大学 Hyperspectral band selection method and device based on band attention reconstruction network
CN116017816A (en) * 2022-12-25 2023-04-25 郑州银丰电子科技有限公司 Intelligent street lamp adjusting and controlling system based on video analysis

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070065009A1 (en) * 2005-08-26 2007-03-22 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Ultrasound image enhancement and speckle mitigation method
CN101510309A (en) * 2009-03-30 2009-08-19 西安电子科技大学 Segmentation method for improving water parting SAR image based on compound wavelet veins region merge
CN109359653A (en) * 2018-09-12 2019-02-19 中国农业科学院农业信息研究所 A method and system for image segmentation of cotton leaf adhesion lesions
CN109871884A (en) * 2019-01-25 2019-06-11 曲阜师范大学 A multi-feature fusion support vector machine object-oriented remote sensing image classification method
CN110472598A (en) * 2019-08-20 2019-11-19 齐鲁工业大学 SVM machine pick cotton flower based on provincial characteristics contains miscellaneous image partition method and system
CN111881933A (en) * 2019-06-29 2020-11-03 浙江大学 Hyperspectral image classification method and system
CN112287886A (en) * 2020-11-19 2021-01-29 安徽农业大学 Wheat plant nitrogen content estimation method based on hyperspectral image fusion map features

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070065009A1 (en) * 2005-08-26 2007-03-22 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Ultrasound image enhancement and speckle mitigation method
CN101510309A (en) * 2009-03-30 2009-08-19 西安电子科技大学 Segmentation method for improving water parting SAR image based on compound wavelet veins region merge
CN109359653A (en) * 2018-09-12 2019-02-19 中国农业科学院农业信息研究所 A method and system for image segmentation of cotton leaf adhesion lesions
CN109871884A (en) * 2019-01-25 2019-06-11 曲阜师范大学 A multi-feature fusion support vector machine object-oriented remote sensing image classification method
CN111881933A (en) * 2019-06-29 2020-11-03 浙江大学 Hyperspectral image classification method and system
CN110472598A (en) * 2019-08-20 2019-11-19 齐鲁工业大学 SVM machine pick cotton flower based on provincial characteristics contains miscellaneous image partition method and system
CN112287886A (en) * 2020-11-19 2021-01-29 安徽农业大学 Wheat plant nitrogen content estimation method based on hyperspectral image fusion map features

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LU Jia et al.: "Joint sparse representation hyperspectral image classification based on independent spatial-spectral residual fusion", Computer Engineering (《计算机工程》) *
ZHANG Jianhua et al.: "Segmenting adhesive lesions on cotton leaves with an improved adaptive watershed method", Transactions of the Chinese Society of Agricultural Engineering (《农业工程学报》) *
ZHAO Chuanyuan: "Research on fruit recognition and disease detection methods based on image and spectral technology", Agricultural Science and Technology series (《农业科技辑》) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113378979A (en) * 2021-07-02 2021-09-10 浙江大学 Hyperspectral band selection method and device based on band attention reconstruction network
CN116017816A (en) * 2022-12-25 2023-04-25 郑州银丰电子科技有限公司 Intelligent street lamp adjusting and controlling system based on video analysis

Also Published As

Publication number Publication date
CN112883852B (en) 2022-10-28

Similar Documents

Publication Publication Date Title
CN112101271B (en) Hyperspectral remote sensing image classification method and device
CN112036231B (en) A detection and recognition method of lane lines and road signs based on vehicle video
Zhang et al. Feature enhancement network: A refined scene text detector
CN104680127A (en) Gesture identification method and gesture identification system
Gudigar et al. Local texture patterns for traffic sign recognition using higher order spectra
CN110458192B (en) Visual saliency-based classification method and system for hyperspectral remote sensing images
CN110211127B (en) Image partition method based on bicoherence network
CN111986126B (en) Multi-target detection method based on improved VGG16 network
CN112464942A (en) Computer vision-based overlapped tobacco leaf intelligent grading method
WO2021118463A1 (en) Defect detection in image space
CN113221881B (en) A multi-level smartphone screen defect detection method
PL A study on various image processing techniques
CN107704865A (en) Fleet Targets Detection based on the extraction of structure forest edge candidate region
CN106682675A (en) Space spectrum combined feature extracting method for hyperspectral images
CN112883852A (en) Hyperspectral image classification system and method
Lv et al. Research on plant leaf recognition method based on multi-feature fusion in different partition blocks
Cao et al. Non-overlapping classification of hyperspectral imagery with superpixel segmentation
CN111368776B (en) High-resolution remote sensing image classification method based on deep ensemble learning
Chowdhury et al. Scene text detection using sparse stroke information and MLP
CN108446723A (en) A kind of multiple dimensioned empty spectrum synergetic classification method of high spectrum image
Hasan Classification of apple types using principal component analysis and K-nearest neighbor
Mukherjee et al. Conditional random field based salient proposal set generation and its application in content aware seam carving
Niu et al. A Novel Method for Wheat Spike Phenotyping Based on Instance Segmentation and Classification.
Elanangai et al. Automated system for defect identification and character recognition using IR images of SS-plates
CN112966781A (en) Hyperspectral image classification method based on triple loss and convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant