
CN111401485A - Practical texture classification method - Google Patents


Info

Publication number
CN111401485A
CN111401485A
Authority
CN
China
Prior art keywords
mre
features
space
classification
classification method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010497420.0A
Other languages
Chinese (zh)
Inventor
张帆
吴小飞
张孟
周凯
庞凤江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xinshizhi Technology Co ltd
Original Assignee
Shenzhen Xinshizhi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xinshizhi Technology Co ltd filed Critical Shenzhen Xinshizhi Technology Co ltd
Priority to CN202010497420.0A priority Critical patent/CN111401485A/en
Publication of CN111401485A publication Critical patent/CN111401485A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/50: Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/56: Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of computer vision and pattern recognition, in particular to a practical texture classification method. The method comprises: step S2, converting an RGB image from RGB space to HSV space and extracting color features in the HSV space with a color histogram; step S3, converting the RGB image from RGB space to gray-scale space and extracting texture features in the gray-scale space with an improved MRELBP feature extraction method; step S4, splicing the color features and the texture features to obtain cascade features; and step S5, inputting the cascade features into a histogram intersection kernel SVM classifier, which outputs the class.

Description

Practical texture classification method
Technical Field
The invention relates to the technical field of computer vision and pattern recognition, in particular to a practical texture classification method.
Background
Among LBP variants, the MRELBP feature descriptor has good overall performance. However, for multi-channel image data (i.e. color images), the MRELBP feature extraction method must perform a local binary description on each channel and then accumulate the per-channel results, which makes the extracted feature dimension too high. In addition, to complete the classification task, the traditional MRELBP feature descriptor is paired with a simple nearest-neighbor classifier, which has the defect of being easily affected by noise.
Disclosure of Invention
In order to overcome the above problems, the present invention provides a practical texture classification method that can effectively solve the above problems.
To solve the above technical problem, the invention adopts the following technical scheme: a practical texture classification method based on an image processing algorithm, comprising the following steps:
step S1, inputting an RGB image;
step S2, converting the RGB image from the RGB space to HSV space, and extracting color features in the HSV space by adopting a color histogram;
step S3, converting the RGB image from the RGB space to a gray-scale space, and extracting texture features in the gray-scale space by adopting an improved MRELBP feature extraction method;
step S4, splicing the color features and the texture features to obtain cascade features;
and step S5, inputting the cascade features into a histogram intersection kernel SVM classifier, and outputting the class by the histogram intersection kernel SVM classifier.
Preferably, in step S2, the three components of the HSV space take the values 0 ≤ H ≤ 360, 0 ≤ S ≤ 1, and 0 ≤ V ≤ 1.
Preferably, in step S2, the hue H is divided into 16 parts and the saturation S is divided into 4 parts.
Preferably, in step S3, three features MRELBP_CI, MRELBP_NI and MRELBP_RD are calculated by an improved MRELBP feature extraction method, and MRELBP_NI and MRELBP_RD are encoded.
Preferably, in step S3, the input gray image is normalized to zero mean and unit variance, and given a pixel point x_c in the normalized image, the MRELBP_CI feature at this point may be expressed as s(φ_w(x_c) − μ_w), where s(x) = 1 if x ≥ 0 and 0 otherwise, φ_w(x_c) is the w × w median filter response at x_c, and μ_w is its full-image mean (equation reconstructed from the description; it appears only as an image in the source).
Preferably, in step S3, given a pixel point x_c in the normalized image, the p uniformly distributed neighborhood points on a circle of radius r centered on the current pixel point x_c are denoted x_{r,p,n}, n = 0, …, p − 1, and the MRELBP_NI feature may be expressed as Σ_{n=0}^{p−1} s(φ_{w_r}(x_{r,p,n}) − μ_{r,p}) · 2^n, where μ_{r,p} is the mean of the p median filter responses (equation reconstructed from the description).
Preferably, in step S3, given a pixel point x_c in the normalized image, the radial-difference feature MRELBP_RD may be expressed as Σ_{n=0}^{p−1} s(φ_{w_r}(x_{r,p,n}) − φ_{w_{r−1}}(x_{r−1,p,n})) · 2^n (equation reconstructed from the description).
Preferably, the encoding adopts a coding scheme that refines the non-uniform patterns (the coding formula appears only as an image in the source).
Preferably, in step S5, the k-class (k ≥ 2) problem is treated as a set of binary classification problems: an SVM is designed between any two classes of samples, so samples of k classes require k(k − 1)/2 binary SVMs, and when an unknown sample is classified, the class with the most votes is the class of the unknown sample.
Preferably, a binary decision function f(x) = sgn(Σ_i α_i y_i K(x_i, x) + b) is designed as the decision function of the SVM between any two classes of samples (reconstructed; the function appears only as an image in the source); the decision results of the k(k − 1)/2 binary SVMs are put to a vote, and by majority rule the class with the most votes is selected as the final classification result.
Compared with the prior art, this practical texture classification method combines the improved MRELBP feature extraction method with a color histogram for feature extraction, and on that basis uses a histogram intersection kernel SVM classifier as the texture feature classifier. It thereby solves the problem of excessively high feature dimension, effectively resists the impact of image blur, noise, scale change, and image rotation on classification accuracy, and is efficient, stable, and practical.
Drawings
FIG. 1 is a flow chart of the steps of a texture classification method in accordance with the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It should be noted that all directional indications (such as up, down, left, right, front, and back) in the embodiments of the present invention are limited to relative positions on a given view, not absolute positions.
In addition, the descriptions related to "first", "second", etc. in the present invention are only for descriptive purposes and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Referring to fig. 1, the practical texture classification method of the present invention includes the following steps:
step S1, inputting an RGB image;
step S2, converting the RGB image from the RGB space to HSV space, and extracting color features in the HSV space by adopting a color histogram;
step S3, converting the RGB image from the RGB space to a gray-scale space, and extracting texture features in the gray-scale space by adopting an improved MRELBP feature extraction method;
step S4, splicing the color features and the texture features to obtain cascade features;
and step S5, inputting the cascade features into a histogram intersection kernel SVM classifier, and outputting the class by the histogram intersection kernel SVM classifier.
In step S2, the three components of the HSV space take the values 0 ≤ H ≤ 360, 0 ≤ S ≤ 1, and 0 ≤ V ≤ 1. According to the resolving power of the human eye, the hue H is divided into 16 parts and the saturation S is divided into 4 parts, and the corresponding H and S components are quantized as follows:
[The quantization formulas appear only as images in the source.]
After non-uniform quantization of the HS components, the global image (or a block image) can be divided into 16 × 4 = 64 levels. On the basis of this color grading, the corresponding histogram distribution can be calculated. The conversion between the RGB and HSV spaces is well established and is not described here. The color histogram is the most common way to express color features; its advantage is that it is immune to image rotation, translation, and scale changes.
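As a minimal sketch of this color-feature step, assuming uniform bin edges (the patent's non-uniform quantization boundaries appear only as images in the source) and an already-converted hue/saturation representation, the 64-bin H-S histogram can be computed as:

```python
import numpy as np

def hs_color_histogram(h, s):
    """Quantize hue into 16 bins and saturation into 4 bins, then build the
    16 * 4 = 64-bin joint color histogram (a sketch of step S2).

    h: hue in degrees, array in [0, 360); s: saturation, array in [0, 1].
    Uniform bins are an assumption; the patent's quantization is non-uniform.
    """
    h_bin = np.clip((np.asarray(h) / 360.0 * 16).astype(int), 0, 15)
    s_bin = np.clip((np.asarray(s) * 4).astype(int), 0, 3)
    # joint bin index = 4 * h_bin + s_bin, counted over all pixels
    hist = np.bincount((h_bin * 4 + s_bin).ravel(), minlength=64)
    return hist / hist.sum()  # normalized 64-bin color feature
```

Feeding the hue and saturation planes of an HSV-converted image to this function yields the normalized 64-bin color feature described above.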
In step S3, three features MRELBP_CI, MRELBP_NI, and MRELBP_RD are calculated by the improved MRELBP feature extraction method, and MRELBP_NI and MRELBP_RD are encoded.
First, the input gray-level image is normalized to zero mean and unit variance. Given a pixel point x_c in the normalized image, the MRELBP_CI feature at this point may be expressed as:
MRELBP_CI(x_c) = s(φ_w(x_c) − μ_w), with s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise (formula 1)
where φ_w(x_c) represents the response of the median filter of size w × w at pixel point x_c, and μ_w represents the mean of the full-image median filter response.
Given a pixel point x_c in the normalized image, the p uniformly distributed neighborhood points on a circle of radius r centered on the current pixel point x_c are denoted x_{r,p,n}, n = 0, 1, …, p − 1. The MRELBP_NI feature can be expressed as:
MRELBP_NI_{r,p}(x_c) = Σ_{n=0}^{p−1} s(φ_{w_r}(x_{r,p,n}) − μ_{r,p}) · 2^n, with μ_{r,p} = (1/p) Σ_{n=0}^{p−1} φ_{w_r}(x_{r,p,n})
where φ_{w_r}(x_{r,p,n}) denotes the response of the median filter of size w_r × w_r at the neighborhood points on the circle of radius r centered on x_c.
Given a pixel point x_c in the normalized image, the radial-difference feature MRELBP_RD may be expressed as:
MRELBP_RD_{r,r−1,p}(x_c) = Σ_{n=0}^{p−1} s(φ_{w_r}(x_{r,p,n}) − φ_{w_{r−1}}(x_{r−1,p,n})) · 2^n
where φ_{w_r}(x_{r,p,n}) and φ_{w_{r−1}}(x_{r−1,p,n}) respectively represent the median filter responses at the neighborhood pixels x_{r,p,n} and x_{r−1,p,n}.
In this text, p takes the value 8 and w_c takes the value 3; r takes the values (2, 4, 6, 8), and the corresponding w_r takes the values (3, 5, 7, 9). (The formulas in this section appear only as images in the source; they are reconstructed here from the surrounding definitions, following the standard MRELBP formulation.)
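The three components can be sketched in NumPy as follows. This is an illustrative reimplementation under stated assumptions, not the patent's code: a small wrap-around median filter stands in for the median response, neighborhood points are sampled at rounded integer offsets (no interpolation), only a single radius is shown, and the window size used for radius r − 1 is a guess.

```python
import numpy as np

def median_filt(a, w):
    """Median response of a w x w window at every pixel (wrap-around borders)."""
    k = w // 2
    stack = [np.roll(np.roll(a, dy, axis=0), dx, axis=1)
             for dy in range(-k, k + 1) for dx in range(-k, k + 1)]
    return np.median(np.stack(stack), axis=0)

def mrelbp_features(img, r=2, p=8, wc=3, wr=3):
    # normalize the gray image to 0 mean and unit variance
    img = (img - img.mean()) / (img.std() + 1e-12)
    med_c = median_filt(img, wc)                 # centre responses
    med_r = median_filt(img, wr)                 # responses sampled at radius r
    med_r1 = median_filt(img, max(wr - 2, 1))    # radius r-1 responses (assumed size)

    # MRELBP_CI: sign of the centre median response vs its whole-image mean
    ci = (med_c >= med_c.mean()).astype(np.uint8)

    def shift(a, dy, dx):
        return np.roll(np.roll(a, -dy, axis=0), -dx, axis=1)

    ang = 2 * np.pi * np.arange(p) / p
    nb_r = np.stack([shift(med_r, round(r * np.sin(a)), round(r * np.cos(a)))
                     for a in ang])
    nb_r1 = np.stack([shift(med_r1, round((r - 1) * np.sin(a)),
                            round((r - 1) * np.cos(a))) for a in ang])

    mu = nb_r.mean(axis=0)  # per-pixel mean of the p neighbour responses
    # MRELBP_NI: threshold neighbour responses against their own mean
    ni = sum(((nb_r[n] >= mu).astype(int) << n) for n in range(p))
    # MRELBP_RD: threshold radius-r responses against radius-(r-1) responses
    rd = sum(((nb_r[n] >= nb_r1[n]).astype(int) << n) for n in range(p))
    return ci, ni, rd
```

In the full method this would be repeated for each radius in (2, 4, 6, 8) and the per-radius histograms concatenated.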
In the improved MRELBP feature extraction method, a median response is first calculated in a support region, local binary features are then calculated at different radii and formed into joint features, and finally the histograms obtained at the different radii are concatenated to give the final MRELBP feature descriptor. When p = 8, the binary features of MRELBP_NI and MRELBP_RD each generate 256 patterns. To solve the problem of too many patterns, a different coding method is applied when the local binary features are calculated.
The rotation-invariant uniform riu2 coding scheme is employed in many LBP feature extraction methods. For some classes of texture images, however, the uniform patterns do not necessarily represent the most important pattern features, so dividing all non-uniform patterns into one set may cause information loss. The new coding scheme first divides all LBP patterns into uniform and non-uniform patterns based on a uniformity measure; the rotation-invariant uniform riu2 scheme is adopted for the uniform patterns, and the non-uniform patterns are refined further. This coding better captures both micro- and macro-textures and helps enhance the discriminative power of the feature description. The coding formula (shown only as an image in the source) partitions the non-uniform patterns using mod, the remainder function.
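The uniform branch of the coding can be sketched with the standard riu2 rule. Because the patent's refined treatment of non-uniform patterns appears only as an image, the sketch below falls back to the usual single non-uniform bucket (code p + 1), which is an assumption:

```python
def riu2_code(pattern, p=8):
    """Rotation-invariant uniform (riu2) coding of a p-bit LBP pattern.
    Uniform patterns (at most 2 bitwise 0/1 transitions around the circle)
    map to their number of 1-bits; all others map to p + 1.
    The patent's refinement of non-uniform patterns is not reproduced here."""
    bits = [(pattern >> n) & 1 for n in range(p)]
    # uniformity measure: number of 0/1 transitions around the circle
    u = sum(bits[n] != bits[(n + 1) % p] for n in range(p))
    return sum(bits) if u <= 2 else p + 1
```

This reduces the 256 raw 8-bit patterns to p + 2 = 10 codes per feature map.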
In step S4, after the computation and encoding of the MRELBP_CI, MRELBP_NI, and MRELBP_RD feature components are completed, the joint histogram used here is a simple joint representation of the 3 feature components rather than an enumerated fusion of them.
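A simple joint of the three coded maps can be formed by treating each (CI, NI, RD) triple as a single bin index. The assumed code ranges (binary CI, 10 riu2 codes each for NI and RD at one radius) are illustrative:

```python
import numpy as np

def joint_histogram(ci, ni, rd, n_codes=10):
    """Joint histogram over (CI, NI, RD) code triples: one bin per combination.
    Assumes ci is in {0, 1} and ni, rd take integer codes in [0, n_codes)."""
    idx = (np.asarray(ci).astype(int) * n_codes + np.asarray(ni)) * n_codes + np.asarray(rd)
    hist = np.bincount(idx.ravel(), minlength=2 * n_codes * n_codes)
    return hist / hist.sum()
```

With these assumed ranges the joint histogram has 2 x 10 x 10 = 200 bins per radius.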
In step S5, to solve the k-class (k ≥ 2) problem, we treat it as a set of binary classification problems. An SVM is designed between any two classes of samples, so samples of k classes require k(k − 1)/2 binary SVMs. When an unknown sample is classified, the class with the most votes is the class of the unknown sample. Given training vectors x_i ∈ R^n, i = 1, …, l, and a label vector y ∈ R^l satisfying y_i ∈ {1, −1}, the optimization problem for any two classes is as follows:
min_{w,b,ξ} (1/2) wᵀw + C Σ_{i=1}^{l} ξ_i, subject to y_i (wᵀφ(x_i) + b) ≥ 1 − ξ_i, ξ_i ≥ 0 (formula 7)
where φ maps x_i into a high-dimensional space and C > 0 denotes the regularization parameter. Because the possible dimensionality of the vector w is high, the dual problem is usually solved instead:
min_α (1/2) αᵀQα − eᵀα, subject to yᵀα = 0, 0 ≤ α_i ≤ C (formula 8)
where e = [1, …, 1]ᵀ is the all-ones vector and Q is a positive semi-definite matrix with Q_{ij} = y_i y_j K(x_i, x_j). For the kernel function K(x_i, x_j), a histogram intersection kernel is used here; it is a fast kernel and outperforms the commonly used radial basis kernel function. Constructing the Lagrange function, taking its extreme values, and solving (formulas 9 and 10, shown only as images in the source) yields the optimal α* and b*, from which the binary decision function
f(x) = sgn(Σ_{i=1}^{l} α_i* y_i K(x_i, x) + b*) (formula 11)
is derived. (Formulas 7, 8, and 11 appear only as images in the source and are reconstructed here in the standard soft-margin SVM form consistent with the surrounding text.) (Formula 11) is the decision function designed for the SVM between any two classes of samples. To solve the multi-class problem, the decision results of the k(k − 1)/2 binary SVMs are put to a vote, and by majority rule the class with the most votes is selected as the final classification result.
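The histogram intersection kernel and the one-vs-one vote can be sketched as follows (a minimal illustration: training of the individual binary SVMs is omitted, and the dictionary representation of pairwise decisions is an assumption):

```python
import numpy as np

def hist_intersection_kernel(X, Y):
    """Histogram intersection kernel matrix: K[i, j] = sum_k min(X[i, k], Y[j, k])."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    return np.minimum(X[:, None, :], Y[None, :, :]).sum(axis=2)

def one_vs_one_vote(decisions, k):
    """Majority vote over the k*(k-1)/2 pairwise decisions.
    decisions: {(i, j): d} where d > 0 means class i beats class j."""
    votes = np.zeros(k, dtype=int)
    for (i, j), d in decisions.items():
        votes[i if d > 0 else j] += 1
    return int(np.argmax(votes))
```

For two L1-normalized histograms the kernel value lies in [0, 1] and equals 1 exactly when the histograms coincide, which makes it a natural similarity for the cascade feature.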
Experimental analysis:
The invention provides a practical texture classification method: color histogram features are first calculated in HSV space, MRELBP features are then calculated in gray-scale space, the two features are cascaded, and finally the cascade features are input into a histogram intersection kernel SVM classifier. We first compare the feature extraction descriptor used here with the traditional MRELBP feature descriptor in terms of feature dimension. In the local binary feature calculation, the traditional MRELBP descriptor fuses the MRELBP_CI, MRELBP_NI, and MRELBP_RD features, producing a high feature dimension; moreover, to complete the classification task in color space, the traditional MRELBP features must be extracted on each channel, which makes the dimension excessively high. Table 1 below gives the feature-dimension comparison between the feature extraction algorithm used here and the traditional MRELBP feature descriptor on a color image.
Table 1: feature dimension comparison
Next, the histogram results of the feature descriptor produced by the feature extraction method used here are analyzed under blur, rotation, and noise disturbances. The feature descriptor responses under added Gaussian noise, blur, and rotation show that the descriptor is robust to changes such as rotation, noise interference, and blur.
In the SVM classification process, commonly used kernel functions are: the linear kernel, the Gaussian radial basis kernel, the polynomial kernel, the sigmoid kernel, and the histogram intersection kernel. We compare classification accuracy and training time using cone-yarn texture images collected in industrial production. 1500 training samples and 500 test samples are used, the input image size is 128 × 96, an automatic training mode is adopted, and the number of iterations is 500. The experimental results for the 5 kernel functions are shown in Table 2 below. As can be seen from Table 2, the kernel function used here achieves the highest accuracy while also requiring a short training time.
Table 2: comparison of results of 5 kernel function experiments
To further illustrate the effectiveness of the texture classification algorithm of the present invention, we performed experiments on the KTH-TIPS gray-scale texture dataset. The dataset includes 10 classes; each image is 200 × 200, and the textures appear at different scales. We split the dataset into two parts, training data (630 images) and test data (180 images). With 500 iterations, the corresponding experimental results are shown in Table 3 below:
table 3: comparison of experimental results of 3 methods
As can be seen from Table 3, the classification method adopted by the present invention has shorter training time under the condition of equivalent classification precision.
Compared with the prior art, this practical texture classification method combines the improved MRELBP feature extraction method with a color histogram for feature extraction, and on that basis uses a histogram intersection kernel SVM classifier as the texture feature classifier. It thereby solves the problem of excessively high feature dimension, effectively resists the impact of image blur, noise, scale change, and image rotation on classification accuracy, and is efficient, stable, and practical.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and any modifications, equivalents, improvements, etc. made within the spirit of the present invention should be included in the scope of the present invention.

Claims (10)

1. The practical texture classification method is characterized by comprising the following steps:
step S1, inputting an RGB image;
step S2, converting the RGB image from the RGB space to HSV space, and extracting color features in the HSV space by adopting a color histogram;
step S3, converting the RGB image from the RGB space to a gray-scale space, and extracting texture features in the gray-scale space by adopting an improved MRELBP feature extraction method;
step S4, splicing the color features and the texture features to obtain cascade features;
and step S5, inputting the cascade features into a histogram intersection kernel SVM classifier, and outputting the class by the histogram intersection kernel SVM classifier.
2. The practical texture classification method according to claim 1, wherein in step S2, the three components of the HSV space take the values 0 ≤ H ≤ 360, 0 ≤ S ≤ 1, and 0 ≤ V ≤ 1 (the value ranges appear as images in the source; reconstructed from the description).
3. The practical texture classification method according to claim 1, wherein in step S2, the hue H is divided into 16 parts and the saturation S is divided into 4 parts.
4. The practical texture classification method according to claim 1, wherein in step S3, three features MRELBP_CI, MRELBP_NI and MRELBP_RD are calculated by a modified MRELBP feature extraction method, and MRELBP_NI and MRELBP_RD are encoded.
5. The practical texture classification method according to claim 4, wherein in step S3, the input gray image is normalized to zero mean and unit variance, and given a pixel point x_c in the normalized image, the MRELBP_CI feature at this point may be expressed as: MRELBP_CI(x_c) = s(φ_w(x_c) − μ_w) (equation reconstructed from the description; it appears only as an image in the source).
6. The practical texture classification method according to claim 1, wherein in step S3, given a pixel point x_c in a normalized image, the p uniformly distributed neighborhood points on a circle of radius r centered on the current pixel point x_c are denoted x_{r,p,n}, n = 0, …, p − 1, and the MRELBP_NI feature may be expressed as: MRELBP_NI_{r,p}(x_c) = Σ_{n=0}^{p−1} s(φ_{w_r}(x_{r,p,n}) − μ_{r,p}) · 2^n (equation reconstructed from the description; it appears only as an image in the source).
7. The practical texture classification method according to claim 4, wherein in step S3, given a pixel point x_c in the normalized image, the radial-difference feature MRELBP_RD may be expressed as: MRELBP_RD_{r,r−1,p}(x_c) = Σ_{n=0}^{p−1} s(φ_{w_r}(x_{r,p,n}) − φ_{w_{r−1}}(x_{r−1,p,n})) · 2^n (equation reconstructed from the description; it appears only as an image in the source).
8. The practical texture classification method according to claim 4, wherein the encoding is performed in the manner described in the specification (the coding formula appears only as an image in the source).
9. The practical texture classification method according to claim 1, wherein in step S5, the k-class (k ≥ 2) problem is treated as a set of binary classification problems, an SVM is designed between any two classes of samples, so that samples of k classes require k(k − 1)/2 binary SVMs, and when an unknown sample is classified, the class with the most votes is the class of the unknown sample.
10. The practical texture classification method according to claim 9, wherein a binary decision function f(x) = sgn(Σ_i α_i y_i K(x_i, x) + b) (reconstructed from the description; it appears only as an image in the source) is designed as the decision function of the SVM between any two classes of samples; the decision results of the k(k − 1)/2 binary SVMs are put to a vote, and by majority rule the class with the most votes is selected as the final classification result.
CN202010497420.0A 2020-06-04 2020-06-04 Practical texture classification method Pending CN111401485A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010497420.0A CN111401485A (en) 2020-06-04 2020-06-04 Practical texture classification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010497420.0A CN111401485A (en) 2020-06-04 2020-06-04 Practical texture classification method

Publications (1)

Publication Number Publication Date
CN111401485A true CN111401485A (en) 2020-07-10

Family

ID=71437622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010497420.0A Pending CN111401485A (en) 2020-06-04 2020-06-04 Practical texture classification method

Country Status (1)

Country Link
CN (1) CN111401485A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658118A (en) * 2021-08-02 2021-11-16 维沃移动通信有限公司 Image noise degree estimation method and device, electronic equipment and storage medium
CN113743523A (en) * 2021-09-13 2021-12-03 西安建筑科技大学 Visual multi-feature guided construction waste fine classification method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110059759A (en) * 2019-04-25 2019-07-26 南京农业大学 Compost maturity prediction technique based on weighting LBP- color moment
CN110533069A (en) * 2019-07-25 2019-12-03 西安电子科技大学 A kind of two-dimentional chaff distribution character recognition methods based on algorithm of support vector machine

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110059759A (en) * 2019-04-25 2019-07-26 南京农业大学 Compost maturity prediction technique based on weighting LBP- color moment
CN110533069A (en) * 2019-07-25 2019-12-03 西安电子科技大学 A kind of two-dimentional chaff distribution character recognition methods based on algorithm of support vector machine

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI LIU ET AL: "Median Robust Extended Local Binary Pattern for Texture Classification", IEEE Transactions on Image Processing *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658118A (en) * 2021-08-02 2021-11-16 维沃移动通信有限公司 Image noise degree estimation method and device, electronic equipment and storage medium
WO2023011280A1 (en) * 2021-08-02 2023-02-09 维沃移动通信有限公司 Image noise degree estimation method and apparatus, and electronic device and storage medium
CN113658118B (en) * 2021-08-02 2024-08-27 维沃移动通信有限公司 Image noise degree estimation method, device, electronic equipment and storage medium
CN113743523A (en) * 2021-09-13 2021-12-03 西安建筑科技大学 Visual multi-feature guided construction waste fine classification method
CN113743523B (en) * 2021-09-13 2024-05-14 西安建筑科技大学 Building rubbish fine classification method guided by visual multi-feature

Similar Documents

Publication Publication Date Title
Deng et al. Saliency detection via a multiple self-weighted graph-based manifold ranking
Liu et al. Evaluation of LBP and deep texture descriptors with a new robustness benchmark
Huang et al. Local binary patterns and superpixel-based multiple kernels for hyperspectral image classification
CN114170418B (en) Multi-feature fusion image retrieval method for automobile harness connector by means of graph searching
CN110738672A (en) image segmentation method based on hierarchical high-order conditional random field
Wang et al. Fully convolutional network based skeletonization for handwritten chinese characters
Benazzouz et al. Microscopic image segmentation based on pixel classification and dimensionality reduction
Stojnić et al. Detection of pollen bearing honey bees in hive entrance images
WO2015146113A1 (en) Identification dictionary learning system, identification dictionary learning method, and recording medium
CN117197904A (en) Training method of human face living body detection model, human face living body detection method and human face living body detection device
CN111401485A (en) Practical texture classification method
Guo et al. Multi-focus image fusion based on fully convolutional networks
Perez et al. Face patches designed through neuroevolution for face recognition with large pose variation
CN112434731A (en) Image recognition method and device and readable storage medium
CN113744241A (en) Cell Image Segmentation Method Based on Improved SLIC Algorithm
CN114463574A (en) A scene classification method and device for remote sensing images
CN113762151A (en) A fault data processing method, system and fault prediction method
Nanda et al. A person re-identification framework by inlier-set group modeling for video surveillance
Sowmya et al. Significance of processing chrominance information for scene classification: a review
CN110363227B (en) LED classification method based on manifold learning
Jena et al. Elitist TLBO for identification and verification of plant diseases
Habiba et al. Hlgp: a modified local gradient pattern for image classification
Li et al. High-fidelity illumination normalization for face recognition based on auto-encoder
Krishna et al. Color Image Segmentation Using Soft Rough Fuzzy-C-Means and Local Binary Pattern.
Fatemi et al. Fully unsupervised salient object detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200710