
CN110991554A - A Deep Network Image Classification Method Based on Improved PCA - Google Patents


Info

Publication number
CN110991554A
Authority
CN
China
Prior art keywords
image
features
pca
formula
covariance matrix
Prior art date
Legal status
Granted
Application number
CN201911291420.9A
Other languages
Chinese (zh)
Other versions
CN110991554B (en)
Inventor
蒋强
陈凯
冯永新
隋涛
Current Assignee
Shenyang Ligong University
Original Assignee
Shenyang Ligong University
Priority date
Filing date
Publication date
Application filed by Shenyang Ligong University
Priority to CN201911291420.9A
Publication of CN110991554A
Application granted
Publication of CN110991554B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211 Selection of the most significant subset of features
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 Feature extraction based on approximation criteria, e.g. principal component analysis
    • G06F18/24 Classification techniques
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract



The invention discloses a deep network image classification method based on improved PCA. First, the features of an input image are extracted with a deep convolutional neural network; the features are then screened by computing the image information entropy and applying an information entropy threshold. The screened features undergo PCA dimensionality reduction, which further condenses the image features and improves feature quality, effectively solving the slow recognition caused by excessively high data dimensionality when classifying large image datasets. At the same time, screening the features by image information entropy greatly reduces the cost of computing the covariance matrix during PCA dimensionality reduction, improving the real-time performance of image classification.


Description

Deep network image classification method based on improved PCA (principal component analysis)
Technical Field
The invention belongs to the technical field of image processing and pattern recognition, and particularly relates to a deep network image classification method based on improved PCA.
Background
In today's society, with the advent of the cloud era, big data is attracting more and more attention. Images, as the main form in which data information is expressed, have become an important means for people to acquire information thanks to their rich content and intuitive presentation, and their quantity is growing at an astonishing speed. However, the disorder of image information becomes increasingly prominent as image data grows. How to automatically identify, retrieve, and classify massive image data using artificial intelligence has become a research focus in the field of computer vision.
Traditional image classification methods, such as Scale Invariant Feature Transform (SIFT) and Histogram of Oriented Gradients (HOG), have shallow structures and low computational cost, and can complete model training and analysis without requiring a large number of images. However, traditional models cannot acquire higher-level semantic and deep features from the original image, and image features are difficult to extract under big-data conditions. With the rise of deep networks, many excellent image classification methods based on deep learning have emerged, for example AlexNet, VGGNet, GoogLeNet, and ResNet. Deep learning recognition methods can obtain deeper image features, with richer feature expression and more accurate feature extraction, and have achieved excellent classification results; some deep networks even exceed human accuracy.
Image classification is widely applied in many fields; applications in biometric recognition, intelligent transportation, and computer-aided medical diagnosis have brought great convenience to our lives. However, deep learning methods still require large numbers of images as a basis and suffer from heavy computation, long model training times, demanding hardware environments, and lengthy classification processes. As these problems are solved, deep learning will play an even greater role in image classification.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a deep network image classification method based on improved PCA, which comprises the following steps:
Step 1: inputting m images from the CIFAR-100 image dataset into a deep convolutional neural network model, performing grayscale and filtering preprocessing to eliminate noise interference and obtain the original features of each image;
step 2: constructing a feature extraction module in a deep convolutional neural network model, and extracting the image features of each image by using the constructed feature extraction module;
Step 3: performing improved PCA dimensionality reduction on the image features of each image, where the improved PCA dimensionality reduction means first preliminarily screening the image features of each image by image information entropy and then performing PCA dimensionality reduction on the preliminarily screened features, specifically:
Step 3.1: calculating the image information entropy H of the extracted image features using formula (1), and preliminarily screening the image features according to an information entropy threshold;
$$H = -\sum_{s=1}^{n} p_s \log_2 p_s \quad (1)$$
where H denotes the image information entropy of the image feature, p_s denotes the probability corresponding to the s-th gray value in each image, and n denotes the total number of gray values of each input image;
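By way of illustration only, the following NumPy sketch implements the entropy screen of step 3.1. The histogram bin count and the quantization of a feature map into gray levels are assumptions made for demonstration (the patent does not fix these details); the default threshold of 0.15 mirrors the embodiment described later.

```python
import numpy as np

def image_information_entropy(feature_map: np.ndarray, bins: int = 256) -> float:
    """Image information entropy H of formula (1):
    H = -sum_s p_s * log2(p_s), with p_s estimated from a gray-level histogram."""
    # Quantizing the feature map into `bins` gray levels is an assumption made
    # here so that p_s can be estimated; the patent does not fix this detail.
    hist, _ = np.histogram(feature_map, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # skip empty bins so log2 is defined
    return float(-np.sum(p * np.log2(p)))

def entropy_screen(feature_maps, threshold: float = 0.15):
    """Preliminary screening of step 3.1: keep only feature maps whose
    information entropy exceeds the threshold (0.15 in the embodiment)."""
    return [f for f in feature_maps if image_information_entropy(f) > threshold]
```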
step 3.2: carrying out PCA (principal component analysis) dimension reduction processing on the image characteristics of each image obtained by primary screening;
Step 4: inputting the dimension-reduced image features Y into a Softmax classifier to complete the classification, and outputting the image category result as the true classification result;
Step 5: selecting the cross entropy loss function as the loss function of the deep convolutional neural network model during training, and calculating the difference H(p, q) between the predicted classification result and the true result according to the cross entropy loss function, which is expressed as:
$$H(p,q) = -\sum_{m} p(m)\log q(m) \quad (2)$$
wherein p (m) represents the predicted classification result, and q (m) represents the true classification result;
Step 6: if the difference H(p, q) between the predicted classification result and the true result is greater than the expected difference H'(p, q), back-propagating the cross entropy loss function through the deep convolutional network back-propagation algorithm and continuously adjusting the network weights w_uv until the computed difference H(p, q) between the predicted classification result and the true result is less than or equal to the expected difference H'(p, q), or a preset number of training iterations M is reached, where u denotes the u-th neuron of the previous layer and v denotes the v-th neuron of the next layer.
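The stopping rule of steps 5 and 6 can be illustrated with the hypothetical PyTorch training loop below; `model`, `loader`, `h_expected`, and the SGD optimizer are placeholder assumptions, since the patent specifies only back-propagation of the cross entropy loss until the expected difference H'(p, q) or the preset count M is reached.

```python
import torch
import torch.nn as nn

def train(model, loader, h_expected: float = 0.05, M: int = 100, lr: float = 1e-3):
    """Steps 5-6 sketch: back-propagate the cross entropy loss and adjust the
    weights w_uv until H(p, q) <= H'(p, q) or M training passes are reached."""
    criterion = nn.CrossEntropyLoss()                       # formula (2)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)  # assumed optimizer
    for epoch in range(M):                                  # preset training count M
        running = 0.0
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)         # H(p, q)
            loss.backward()                                 # back-propagation
            optimizer.step()                                # adjust weights w_uv
            running += loss.item() * images.size(0)
        h_pq = running / len(loader.dataset)
        if h_pq <= h_expected:                              # reached H'(p, q)
            break
    return model
```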
The feature extraction module in step 2 is specified as follows: the deep convolutional neural network model is designed with 13 layers according to the CIFAR-100 image dataset, namely layers L_2~L_14, where L_1 denotes the input layer; the numbers of convolution kernels in layers L_2~L_14 are 64, 64, 128, 128, 256, 256, 256, 512, 512, 512, 512, 512, and 512, respectively; the convolution kernel size of layers L_2~L_14 is 3*3, the pooling mode is Maxpool, and the activation function is ReLU.
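A minimal PyTorch sketch of this 13-layer stack follows. The kernel counts, 3*3 kernels, Maxpool, and ReLU follow the text above; where the pooling layers sit is not specified, so placing one max-pooling at each change of channel width (and after the last layer) is an assumption borrowed from VGG-style designs.

```python
import torch.nn as nn

# Kernel counts of layers L2..L14 as given in the patent.
CHANNELS = [64, 64, 128, 128, 256, 256, 256, 512, 512, 512, 512, 512, 512]

def make_feature_extractor(in_channels: int = 3) -> nn.Sequential:
    """Build the 13-layer convolutional feature extraction module of step 2."""
    layers, prev = [], in_channels
    for i, ch in enumerate(CHANNELS):
        layers += [nn.Conv2d(prev, ch, kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
        # Assumed pooling placement: pool when the channel width is about to
        # change (and after the last layer); the patent only names Maxpool.
        if i == len(CHANNELS) - 1 or CHANNELS[i + 1] != ch:
            layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        prev = ch
    return nn.Sequential(*layers)
```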
Step 3.2 is specified as follows:
3.2.1) centering the image feature matrix of each image obtained by preliminary screening;
3.2.2) calculating the covariances between different dimensions of the centered image features to form a covariance matrix of dimension m;
3.2.3) calculating the eigenvalues of each covariance matrix and the eigenvectors corresponding to them;
3.2.4) first determining the value of the contribution rate f according to the actual situation, then determining the K eigenvalues to be retained using formula (3);
$$\frac{\sum_{j=1}^{K}\lambda_j}{\sum_{i=1}^{m}\lambda_i} \geq f \quad (3)$$
where λ_j denotes the j-th eigenvalue to be retained in each covariance matrix, K denotes the total number of eigenvalues to be retained in each covariance matrix, λ_i denotes the i-th eigenvalue of each covariance matrix, and m denotes the total number of eigenvalues in each covariance matrix;
3.2.5) forming the transformation basis P from the eigenvectors corresponding to the K retained eigenvalues, and completing the dimensionality reduction using formula (4) with the transformation basis P;
$$Y = P^{T}X \quad (4)$$
where Y denotes the image features after dimensionality reduction, P^T denotes the transpose of the transformation basis P, and X denotes the centered image features.
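The five sub-steps above can be sketched in NumPy as follows; the convention that each column of X holds one sample, and the default f = 0.9 (the 90% contribution rate of the embodiment), are illustrative assumptions.

```python
import numpy as np

def improved_pca(X: np.ndarray, f: float = 0.9):
    """Steps 3.2.1)-3.2.5): rows of X are feature dimensions, columns are samples.
    Returns Y = P^T X (formula (4)) together with the transformation basis P."""
    # 3.2.1) center the feature matrix
    Xc = X - X.mean(axis=1, keepdims=True)
    # 3.2.2) covariance matrix between feature dimensions
    C = np.cov(Xc)
    # 3.2.3) eigenvalues and eigenvectors (eigh: C is symmetric)
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]          # sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # 3.2.4) smallest K whose eigenvalues reach contribution rate f, formula (3)
    ratios = np.cumsum(eigvals) / eigvals.sum()
    K = int(np.searchsorted(ratios, f) + 1)
    # 3.2.5) transformation basis P from the K retained eigenvectors
    P = eigvecs[:, :K]
    Y = P.T @ Xc                               # formula (4): Y = P^T X
    return Y, P
```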
The invention has the beneficial effects that:
1) The invention introduces PCA dimensionality reduction on top of the deep convolutional neural network, reducing the feature dimension.
2) The invention further introduces the image information entropy, effectively reducing the huge computation and hardware footprint of the covariance matrix calculation during PCA dimensionality reduction.
3) The invention replaces the fully connected layer of a traditional deep network with Softmax, effectively improving classification speed.
Drawings
FIG. 1 is a schematic structural diagram of a deep convolutional neural network model based on improved PCA in the present invention.
FIG. 2 is a comparison graph of image classification time before and after the improved PCA dimensionality reduction process of the present invention.
FIG. 3 is a comparison graph of classification accuracy of 4 experimental images before and after the improved PCA dimensionality reduction process in the present invention.
Detailed Description
The invention is further described with reference to the following figures and specific examples.
As shown in fig. 1, which is a structural diagram of the constructed deep convolutional neural network model based on improved PCA, the deep network image classification method based on improved PCA first extracts the features of an input image using a deep convolutional neural network, then screens the features by computing the image information entropy and setting an information entropy threshold, reduces the dimensionality of the screened features through PCA (principal component analysis) to further condense the image features and improve feature quality, and finally feeds the resulting features into a Softmax classifier for classification. The method specifically comprises the following steps:
Step 1: 5000 images from the CIFAR-100 image dataset are input into the deep convolutional neural network model; grayscale and filtering preprocessing is performed to eliminate noise interference and obtain the original features of each image, and a classification label is attached to each image to serve as the predicted classification result of its original features;
step 2: constructing a feature extraction module in a deep convolutional neural network model, and extracting the image features of each image by using the constructed feature extraction module;
as shown in fig. 1, the feature extraction module is specifically expressed as: designing the number of layers of a deep convolutional neural network model to be 13 layers according to a CIFAR-100 image dataset, wherein the 13 layers comprise L2~L14Layer of which L1Represents an input layer, L2~L14The number of the layer convolution kernels is respectively 64, 64, 128, 128, 256, 256, 256, 512, 512, 512, 512, 512, 512 and L in sequence2~L14The convolution kernel size of the layers is 3 x 3, the pooling mode is chosen as Maxpool, and the activation function is chosen as ReLU.
Step 3: performing improved PCA dimensionality reduction on the image features of each image, where the improved PCA dimensionality reduction means first preliminarily screening the image features of each image by image information entropy and then performing PCA dimensionality reduction on the preliminarily screened features, specifically:
Step 3.1: calculating the image information entropy H of the extracted image features using formula (1), and preliminarily screening the image features according to an information entropy threshold, which is set to 0.15 in this embodiment;
$$H = -\sum_{s=1}^{n} p_s \log_2 p_s \quad (1)$$
where H denotes the image information entropy of the image feature, p_s denotes the probability corresponding to the s-th gray value in each image, and n denotes the total number of gray values of each input image;
Step 3.2: performing PCA (principal component analysis) dimensionality reduction on the image features of each image obtained by preliminary screening, specifically:
3.2.1) centering the image feature matrix of each image obtained by preliminary screening;
3.2.2) calculating the covariances between different dimensions of the centered image features to form a covariance matrix of dimension m;
3.2.3) calculating the eigenvalues of each covariance matrix and the eigenvectors corresponding to them;
3.2.4) setting the contribution rate f to 90% in this embodiment, then determining the K eigenvalues to be retained using formula (3);
$$\frac{\sum_{j=1}^{K}\lambda_j}{\sum_{i=1}^{m}\lambda_i} \geq f \quad (3)$$
where λ_j denotes the j-th eigenvalue to be retained in each covariance matrix, K denotes the total number of eigenvalues to be retained in each covariance matrix, λ_i denotes the i-th eigenvalue of each covariance matrix, and m denotes the total number of eigenvalues in each covariance matrix;
3.2.5) forming the transformation basis P from the eigenvectors corresponding to the K retained eigenvalues, and completing the dimensionality reduction using formula (4) with the transformation basis P;
$$Y = P^{T}X \quad (4)$$
where Y denotes the image features after dimensionality reduction, P^T denotes the transpose of the transformation basis P, and X denotes the centered image features.
Step 4: inputting the dimension-reduced image features Y into a Softmax classifier to complete the classification, and outputting the image category result as the true classification result;
Step 5: selecting the cross entropy loss function as the loss function of the deep convolutional neural network model during training, and calculating the difference H(p, q) between the predicted classification result and the true result according to the cross entropy loss function, which is expressed as:
$$H(p,q) = -\sum_{m} p(m)\log q(m) \quad (2)$$
wherein p (m) represents the predicted classification result, and q (m) represents the true classification result;
Step 6: if the difference H(p, q) between the predicted classification result and the true result is greater than the expected difference H'(p, q), back-propagating the cross entropy loss function through the deep convolutional network back-propagation algorithm and continuously adjusting the network weights w_uv until the computed difference H(p, q) between the predicted classification result and the true result is less than or equal to the expected difference H'(p, q), or the preset number of training iterations M is reached, where u denotes the u-th neuron of the previous layer and v denotes the v-th neuron of the next layer.
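Tying the embodiment together, a hypothetical end-to-end pass might look like the sketch below, reusing the `image_information_entropy`, `improved_pca`, and feature-extractor helpers sketched earlier; the shared channel mask, the flattening of feature maps into column vectors, and the classifier weights `W` and `b` are assumptions introduced for illustration.

```python
import numpy as np
import torch

def classify_batch(images: torch.Tensor, extractor, W: np.ndarray, b: np.ndarray):
    """One pass of the embodiment: CNN features -> entropy screen (threshold 0.15)
    -> improved PCA (f = 90%) -> Softmax. W (classes x K) and b are assumed
    classifier parameters sized to match the K retained dimensions."""
    with torch.no_grad():
        maps = extractor(images).numpy()          # (batch, channels, h, w)
    # Step 3.1: entropy screen; a shared channel mask computed from the
    # batch-mean map is an assumption so every image keeps the same dimensions.
    keep = [c for c in range(maps.shape[1])
            if image_information_entropy(maps[:, c].mean(axis=0)) > 0.15]
    X = maps[:, keep].reshape(maps.shape[0], -1).T    # one column per image
    Y, _ = improved_pca(X, f=0.9)                 # step 3.2: 90% contribution rate
    logits = W @ Y + b[:, None]                   # step 4: Softmax classifier
    expz = np.exp(logits - logits.max(axis=0, keepdims=True))
    return (expz / expz.sum(axis=0, keepdims=True)).argmax(axis=0)
```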
As shown in fig. 2 and fig. 3, the technical scheme effectively reduces the feature dimension and the computation of the feature extraction stage and improves the image classification effect: both classification time and accuracy improve markedly after the improved PCA dimensionality reduction.

Claims (3)

1. A deep network image classification method based on improved PCA, characterized by comprising the following steps:
Step 1: inputting m images from the CIFAR-100 image dataset into a deep convolutional neural network model, performing grayscale and filtering preprocessing to eliminate noise interference and obtain the original features of each image;
Step 2: constructing a feature extraction module in the deep convolutional neural network model, and extracting the image features of each image with the constructed feature extraction module;
Step 3: performing improved PCA dimensionality reduction on the image features of each image, where the improved PCA dimensionality reduction means first preliminarily screening the image features of each image by image information entropy and then performing PCA dimensionality reduction on the preliminarily screened features, specifically:
Step 3.1: calculating the image information entropy H of the extracted image features using formula (1), and preliminarily screening the image features according to an information entropy threshold;
$$H = -\sum_{s=1}^{n} p_s \log_2 p_s \quad (1)$$
where H denotes the image information entropy of the image feature, p_s denotes the probability corresponding to the s-th gray value in each image, and n denotes the total number of gray values of each input image;
Step 3.2: performing PCA dimensionality reduction on the image features of each image obtained by preliminary screening;
Step 4: inputting the dimension-reduced image features Y into a Softmax classifier to complete the classification, and outputting the image category result as the true classification result;
Step 5: selecting the cross entropy loss function as the loss function of the deep convolutional neural network model during training, and calculating the difference H(p, q) between the predicted classification result and the true result according to the cross entropy loss function, which is expressed as:
$$H(p,q) = -\sum_{m} p(m)\log q(m) \quad (2)$$
where p(m) denotes the predicted classification result and q(m) denotes the true classification result;
Step 6: if the difference H(p, q) between the predicted classification result and the true result is greater than the expected difference H'(p, q), back-propagating the cross entropy loss function through the deep convolutional network back-propagation algorithm and continuously adjusting the network weights w_uv until the computed difference H(p, q) between the predicted classification result and the true result is less than or equal to the expected difference H'(p, q), or the preset number of training iterations M is reached, where u denotes the u-th neuron of the previous layer and v denotes the v-th neuron of the next layer.
2. The deep network image classification method based on improved PCA according to claim 1, characterized in that the feature extraction module in step 2 is specified as follows: the deep convolutional neural network model is designed with 13 layers according to the CIFAR-100 image dataset, namely layers L_2~L_14, where L_1 denotes the input layer; the numbers of convolution kernels in layers L_2~L_14 are 64, 64, 128, 128, 256, 256, 256, 512, 512, 512, 512, 512, and 512, respectively; the convolution kernel size of layers L_2~L_14 is 3*3, the pooling mode is Maxpool, and the activation function is ReLU.
3. The deep network image classification method based on improved PCA according to claim 1, characterized in that step 3.2 is specified as follows:
3.2.1) centering the image feature matrix of each image obtained by preliminary screening;
3.2.2) calculating the covariances between different dimensions of the centered image features to form a covariance matrix of dimension m;
3.2.3) calculating the eigenvalues of each covariance matrix and the eigenvectors corresponding to them;
3.2.4) first determining the value of the contribution rate f according to the actual situation, then determining the K eigenvalues to be retained using formula (3);
$$\frac{\sum_{j=1}^{K}\lambda_j}{\sum_{i=1}^{m}\lambda_i} \geq f \quad (3)$$
where λ_j denotes the j-th eigenvalue to be retained in each covariance matrix, K denotes the total number of eigenvalues to be retained in each covariance matrix, λ_i denotes the i-th eigenvalue of each covariance matrix, and m denotes the total number of eigenvalues in each covariance matrix;
3.2.5) forming the transformation basis P from the eigenvectors corresponding to the K retained eigenvalues, and completing the dimensionality reduction using formula (4) with the transformation basis P;
$$Y = P^{T}X \quad (4)$$
where Y denotes the image features after dimensionality reduction, P^T denotes the transpose of the transformation basis P, and X denotes the centered image features.
CN201911291420.9A 2019-12-16 2019-12-16 Improved PCA (principal component analysis)-based deep network image classification method Active CN110991554B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911291420.9A CN110991554B (en) 2019-12-16 2019-12-16 Improved PCA (principal component analysis)-based deep network image classification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911291420.9A CN110991554B (en) 2019-12-16 2019-12-16 Improved PCA (principal component analysis)-based deep network image classification method

Publications (2)

Publication Number Publication Date
CN110991554A true CN110991554A (en) 2020-04-10
CN110991554B CN110991554B (en) 2023-04-18

Family

ID=70093848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911291420.9A Active CN110991554B (en) 2019-12-16 2019-12-16 Improved PCA (principal component analysis)-based deep network image classification method

Country Status (1)

Country Link
CN (1) CN110991554B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111583217A (en) * 2020-04-30 2020-08-25 深圳开立生物医疗科技股份有限公司 Tumor ablation curative effect prediction method, device, equipment and computer medium
CN111738318A (en) * 2020-06-11 2020-10-02 大连理工大学 A Large Image Classification Method Based on Graph Neural Network
CN117520824A (en) * 2024-01-03 2024-02-06 浙江省白马湖实验室有限公司 Information entropy-based distributed optical fiber data characteristic reconstruction method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017215284A1 (en) * 2016-06-14 2017-12-21 山东大学 Gastrointestinal tumor microscopic hyper-spectral image processing method based on convolutional neural network
CN109242028A (en) * 2018-09-19 2019-01-18 西安电子科技大学 SAR image classification method based on 2D-PCA and convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
宋丹; 石勇; 邓宸伟: "An adaptive dimensionality reduction algorithm for SIFT feature vectors combining PCA and information entropy" (一种结合PCA与信息熵的SIFT特征向量自适应降维算法) *


Also Published As

Publication number Publication date
CN110991554B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN110532900B (en) Facial Expression Recognition Method Based on U-Net and LS-CNN
CN108985377B (en) A high-level image semantic recognition method based on deep network multi-feature fusion
CN108615010B (en) Facial expression recognition method based on parallel convolution neural network feature map fusion
CN109345508B (en) A Bone Age Evaluation Method Based on Two-Stage Neural Network
CN109754017B (en) A method for hyperspectral image classification based on separable 3D residual networks and transfer learning
CN114898151B (en) An image classification method based on the fusion of deep learning and support vector machine
CN108734208B (en) Multi-source heterogeneous data fusion system based on multi-mode deep migration learning mechanism
Tereikovskyi et al. The method of semantic image segmentation using neural networks
CN110097060B (en) Open set identification method for trunk image
CN107085704A (en) Fast Facial Expression Recognition Method Based on ELM Autoencoding Algorithm
CN108171318B (en) Convolution neural network integration method based on simulated annealing-Gaussian function
CN107918772B (en) Target tracking method based on compressed sensing theory and gcForest
CN113177612B (en) An image recognition method of agricultural pests and diseases based on CNN with few samples
CN110619352A (en) Typical infrared target classification method based on deep convolutional neural network
Kusrini et al. The effect of Gaussian filter and data preprocessing on the classification of Punakawan puppet images with the convolutional neural network algorithm
CN109886161A (en) A road traffic sign recognition method based on likelihood clustering and convolutional neural network
CN109934158A (en) Video emotion recognition method based on locally enhanced motion history graph and recurrent convolutional neural network
CN106709528A (en) Method and device of vehicle reidentification based on multiple objective function deep learning
CN105512681A (en) Method and system for acquiring target category picture
CN110991554A (en) A Deep Network Image Classification Method Based on Improved PCA
CN108520213A (en) A face beauty prediction method based on multi-scale depth
Yue et al. Face recognition based on histogram equalization and convolution neural network
CN110347851A (en) Image search method and system based on convolutional neural networks
CN109344898A (en) A Convolutional Neural Network Image Classification Method Based on Sparse Coding Pre-training
CN106980831A (en) Based on self-encoding encoder from affiliation recognition methods

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant