CN111401485A - Practical texture classification method - Google Patents
- Publication number
- CN111401485A (application CN202010497420.0A)
- Authority
- CN
- China
- Prior art keywords
- mre
- features
- space
- classification
- classification method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
Abstract
The invention relates to the technical field of computer vision and pattern recognition, in particular to a practical texture classification method, which comprises: a step S2 of converting an RGB image from RGB space to HSV space and extracting color features in the HSV space with a color histogram; a step S3 of converting the RGB image from RGB space to gray-scale space and extracting texture features in the gray-scale space with an improved MRELBP feature extraction method; a step S4 of concatenating the color features and the texture features to obtain a cascade feature; and a step S5 of inputting the cascade feature into a histogram intersection kernel SVM classifier, which outputs the class.
Description
Technical Field
The invention relates to the technical field of computer vision and pattern recognition, in particular to a practical texture classification method.
Background
Among LBP variants, the MRELBP feature descriptor has good overall performance. For multi-channel image data (i.e. color images), however, the MRELBP feature extraction method must perform a local binary description on each channel and then accumulate the per-channel results, which makes the extracted feature dimension too high. In addition, to complete the classification task, the traditional MRELBP feature descriptor is paired with a rather simple nearest-neighbor classifier, which has the defect of being easily affected by noise.
Disclosure of Invention
In order to overcome the above problems, the present invention provides a practical texture classification method that can effectively solve the above problems.
The invention provides a technical scheme for solving the technical problems, which comprises the following steps: a practical texture classification method is provided, which is based on an image processing algorithm and comprises the following steps:
step S1, inputting an RGB image;
step S2, converting the RGB image from the RGB space to HSV space, and extracting color features in the HSV space by adopting a color histogram;
step S3, converting the RGB image from the RGB space to a gray scale space, and extracting texture features in the gray scale space by adopting an improved MRELBP feature extraction method;
step S4, splicing the color features and the texture features to obtain cascade features;
and step S5, inputting the cascade features into a histogram intersection kernel SVM classifier, and outputting the classes by the histogram intersection kernel SVM classifier.
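The five steps above can be sketched as a pipeline. This is a minimal illustrative outline, not the patent's implementation: the three stage callables are placeholders for the color-histogram, improved-MRELBP, and SVM stages detailed below, and the simple channel-mean gray conversion is an assumption.

```python
import numpy as np

def classify_texture(rgb_image, extract_color_hist, extract_mrelbp, svm_predict):
    """Steps S1-S5 as a pipeline. The three callables stand in for the
    color-histogram, improved-MRELBP, and SVM stages described below."""
    color_feat = extract_color_hist(rgb_image)            # S2: HSV color histogram
    gray = rgb_image.astype(float).mean(axis=2)           # S3: simple RGB -> gray
    texture_feat = extract_mrelbp(gray)                   # S3: improved MRELBP
    cascade = np.concatenate([color_feat, texture_feat])  # S4: concatenation
    return svm_predict(cascade)                           # S5: SVM prediction
```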
Preferably, in step S2, the values of the three components of the HSV space satisfy 0 ≤ H ≤ 360, 0 ≤ S ≤ 1, and 0 ≤ V ≤ 1.
Preferably, in step S2, the hue H is divided into 16 parts and the saturation S is divided into 4 parts.
Preferably, in step S3, three features MRELBP_CI, MRELBP_NI and MRELBP_RD are calculated by the improved MRELBP feature extraction method, and MRELBP_NI and MRELBP_RD are encoded.
Preferably, in step S3, the input gray image is normalized to zero mean and unit variance; given a pixel point x_c in the normalized image, the MRELBP_CI feature corresponding to this point may be expressed as: MRELBP_CI(x_c) = s(φ_w(x_c) − μ_w), where s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise.
Preferably, in step S3, given a pixel point x_c in the normalized image, with the p uniformly distributed neighborhood points on the circle of radius r centered at x_c denoted x_{r,p,n}, n = 0, …, p − 1, the MRELBP_NI feature may be expressed as: MRELBP_NI_{r,p} = Σ_{n=0}^{p−1} s(φ_{w_r}(x_{r,p,n}) − μ_{r,p}) · 2^n, where μ_{r,p} is the mean of the p neighborhood responses.
Preferably, in step S3, given a pixel point x_c in the normalized image, the radial difference feature MRELBP_RD may be expressed as: MRELBP_RD_{r,p} = Σ_{n=0}^{p−1} s(φ_{w_r}(x_{r,p,n}) − φ_{w_{r−1}}(x_{r−1,p,n})) · 2^n.
preferably, the coding mode adopted by the coding is as follows:
Preferably, in step S5, a k-class problem (k ≥ 2) is regarded as a set of binary classification problems: an SVM is designed between any two classes of samples, so that k classes of samples require k(k−1)/2 binary SVMs; when an unknown sample is classified, the class receiving the most votes is taken as the class of the unknown sample.
A decision function is designed for the SVM between any two classes of samples; the decision results of the k(k−1)/2 binary SVMs are put to a vote, and, by the principle that the minority obeys the majority, the class with the most votes is selected as the final classification result.
Compared with the prior art, the practical texture classification method combines the improved MRELBP feature extraction method with the color histogram for feature extraction, and on this basis uses a histogram intersection kernel SVM classifier as the texture feature classifier. It thus resolves the problem of excessively high dimensionality of the extracted features, effectively avoids the interference of image blur, noise, scale change and image rotation with classification accuracy, and has the characteristics of high efficiency, stability and practicability.
Drawings
FIG. 1 is a flow chart of the steps of a texture classification method in accordance with the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It should be noted that all directional indications (such as up, down, left, right, front, and back) in the embodiments of the present invention are limited to relative positions on a given view, not absolute positions.
In addition, the descriptions related to "first", "second", etc. in the present invention are only for descriptive purposes and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Referring to fig. 1, the practical texture classification method of the present invention includes the following steps:
step S1, inputting an RGB image;
step S2, converting the RGB image from the RGB space to HSV space, and extracting color features in the HSV space by adopting a color histogram;
step S3, converting the RGB image from the RGB space to a gray scale space, and extracting texture features in the gray scale space by adopting an improved MRELBP feature extraction method;
step S4, splicing the color features and the texture features to obtain cascade features;
and step S5, inputting the cascade features into a histogram intersection kernel SVM classifier, and outputting the classes by the histogram intersection kernel SVM classifier.
In step S2, the three components of the HSV space take values in the ranges 0 ≤ H ≤ 360, 0 ≤ S ≤ 1, and 0 ≤ V ≤ 1. According to the resolving power of the human eye, the hue H is divided into 16 parts and the saturation S into 4 parts, and the HS components are quantized non-uniformly on this basis.
After non-uniform quantization of the HS components, the whole image (or an image block) can be divided into 16 × 4 = 64 levels. On the basis of this color grading, the corresponding histogram distribution can be computed. The conversion between RGB space and HSV space is well established and is not described here. The color histogram is the most common way of expressing color features; its advantage is that it is insensitive to image rotation, translation, and scale changes.
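A minimal sketch of the 64-level HS histogram follows. Note the hedges: the patent's actual non-uniform quantization table was given as an image and is not reproduced, so this sketch quantizes H and S uniformly into 16 and 4 bins; the vectorized `rgb_to_hsv` helper is an illustrative implementation, not the patent's conversion.

```python
import numpy as np

def rgb_to_hsv(rgb):
    """Vectorized RGB -> HSV; inputs in [0, 1], H returned in [0, 1)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = rgb.max(axis=-1)
    c = v - rgb.min(axis=-1)                       # chroma
    s = np.where(v > 0, c / np.where(v > 0, v, 1.0), 0.0)
    safe_c = np.where(c > 0, c, 1.0)               # avoid divide-by-zero
    h = np.select(
        [c == 0, v == r, v == g],
        [0.0, ((g - b) / safe_c) % 6, (b - r) / safe_c + 2],
        default=(r - g) / safe_c + 4,
    ) / 6.0
    return h, s, v

def hs_color_histogram(rgb):
    """64-bin HS histogram: H quantized to 16 levels, S to 4 (16 * 4 = 64)."""
    h, s, _ = rgb_to_hsv(rgb)
    h_bin = np.minimum((h * 16).astype(int), 15)
    s_bin = np.minimum((s * 4).astype(int), 3)
    idx = h_bin * 4 + s_bin                        # combined HS level index
    hist = np.bincount(idx.ravel(), minlength=64).astype(float)
    return hist / hist.sum()                       # normalized color feature
```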
In step S3, three features MRELBP_CI, MRELBP_NI, and MRELBP_RD are calculated by the improved MRELBP feature extraction method, and MRELBP_NI and MRELBP_RD are encoded.
First, the input gray image is normalized to zero mean and unit variance. Given a pixel point x_c in the normalized image, the MRELBP_CI feature corresponding to this point may be expressed as:

MRELBP_CI(x_c) = s(φ_w(x_c) − μ_w), with s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise,

where φ_w(x_c) denotes the response of the median filter of size w × w at pixel point x_c, and μ_w denotes the mean of the median filter response over the full image.
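The CI component can be sketched directly from the definition above. This is an illustrative implementation under stated assumptions: the dense, edge-padded `median_filter` helper is this sketch's own (chosen to avoid external dependencies), not the patent's filter.

```python
import numpy as np

def median_filter(img, w):
    """Dense w x w median filter with edge padding (no SciPy dependency)."""
    pad = w // 2
    p = np.pad(img, pad, mode="edge")
    h, wid = img.shape
    windows = [p[i:i + h, j:j + wid] for i in range(w) for j in range(w)]
    return np.median(np.stack(windows), axis=0)

def mrelbp_ci(img, w=3):
    """Binary CI map: s(phi_w(x_c) - mu_w), where phi_w is the w x w median
    response and mu_w its mean over the whole image; s(x) = 1 iff x >= 0."""
    phi = median_filter(img, w)
    return (phi >= phi.mean()).astype(np.uint8)
```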
Given a pixel point x_c in the normalized image, the p uniformly distributed neighborhood points on the circle of radius r centered at the current pixel point x_c are denoted x_{r,p,n}, n = 0, …, p − 1. The MRELBP_NI feature can be expressed as:

MRELBP_NI_{r,p} = Σ_{n=0}^{p−1} s(φ_{w_r}(x_{r,p,n}) − μ_{r,p}) · 2^n,

where φ_{w_r}(x_{r,p,n}) denotes the response of the median filter of size w_r × w_r at the neighborhood point x_{r,p,n} on the circle of radius r centered at x_c, and μ_{r,p} is the mean of the p neighborhood responses.
Given a pixel point x_c in the normalized image, the radial difference feature MRELBP_RD may be expressed as:

MRELBP_RD_{r,p} = Σ_{n=0}^{p−1} s(φ_{w_r}(x_{r,p,n}) − φ_{w_{r−1}}(x_{r−1,p,n})) · 2^n,

where φ_{w_r}(x_{r,p,n}) and φ_{w_{r−1}}(x_{r−1,p,n}) denote the median filter responses at the neighborhood pixels x_{r,p,n} and x_{r−1,p,n}, respectively.
In this context, p is set to 8 and w_c to 3; r takes the values (2, 4, 6, 8), with the corresponding w_r values (3, 5, 7, 9).
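The NI and RD components can be sketched together. Hedges: this sketch samples circle points by nearest pixel rather than by interpolation, computes codes only on the interior region where all neighbors fit, and the parameter names `r_in`/`w_in` for the inner radius and its filter size are this sketch's own; the defaults are one (r, w_r) pair from the values above, not the full multi-radius descriptor.

```python
import numpy as np

def median_filter(img, w):
    """Dense w x w median filter with edge padding."""
    pad = w // 2
    p = np.pad(img, pad, mode="edge")
    h, wid = img.shape
    return np.median(np.stack([p[i:i + h, j:j + wid]
                               for i in range(w) for j in range(w)]), axis=0)

def circle_offsets(r, p=8):
    """Nearest-pixel approximation of p evenly spaced points on a radius-r circle."""
    ang = 2 * np.pi * np.arange(p) / p
    return np.stack([np.rint(r * np.sin(ang)),
                     np.rint(r * np.cos(ang))], axis=1).astype(int)

def mrelbp_ni_rd(img, r=2, w_r=3, r_in=1, w_in=3, p=8):
    """Per-pixel NI and RD codes on the interior where all neighbors fit.
    NI compares each outer-ring median response to the ring mean; RD compares
    it to the response at the same angle on the inner ring."""
    phi_out = median_filter(img, w_r)
    phi_in = median_filter(img, w_in)
    h, w = img.shape
    ys, xs = np.mgrid[r:h - r, r:w - r]
    outer = np.stack([phi_out[ys + dy, xs + dx]
                      for dy, dx in circle_offsets(r, p)])
    inner = np.stack([phi_in[ys + dy, xs + dx]
                      for dy, dx in circle_offsets(r_in, p)])
    mu = outer.mean(axis=0)                   # neighborhood mean for NI
    ni = np.zeros(ys.shape, dtype=np.int32)
    rd = np.zeros(ys.shape, dtype=np.int32)
    for n in range(p):                        # pack p sign bits per pixel
        ni |= (outer[n] >= mu).astype(np.int32) << n
        rd |= (outer[n] >= inner[n]).astype(np.int32) << n
    return ni, rd
```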
In the improved MRELBP feature extraction method, the median response is first computed in a support region; local binary features are then computed at the different radii and combined into joint features; finally, the histograms obtained at the different radii are concatenated to obtain the final MRELBP feature descriptor. When p = 8, the binary features of MRELBP_NI and MRELBP_RD each generate 256 patterns; to solve the problem of too many patterns, different coding methods are applied in the calculation of the local binary features.
A rotation-invariant uniform (riu2) coding scheme is employed in many LBP feature extraction methods. For some classes of texture images, however, the uniform codes do not necessarily represent the most important pattern features, so collapsing all non-uniform patterns into a single set may cause information loss. The new coding scheme adopted here captures both micro- and macro-textures better: based on the uniformity measure U, all LBPs are first divided into uniform and non-uniform patterns; the rotation-invariant uniform riu2 coding is applied to the uniform patterns, while the non-uniform patterns are refined further. This new coding mode helps enhance the discriminative power of the feature description. The coding mode adopted in this method is as follows:
where mod is the remainder function.
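The patent's refined coding formula for the non-uniform bucket was given as an image and is not reproduced above. As a hedged illustration of the first stage only, the standard uniformity measure and riu2 mapping can be sketched as:

```python
def uniformity(code, p=8):
    """U: number of 0/1 transitions when the p-bit pattern is read circularly."""
    bits = [(code >> n) & 1 for n in range(p)]
    return sum(bits[n] != bits[(n + 1) % p] for n in range(p))

def riu2(code, p=8):
    """Rotation-invariant uniform mapping: uniform patterns (U <= 2) map to
    their number of 1-bits (0..p); all non-uniform patterns share label p + 1.
    The patent refines this non-uniform bucket further; that refinement is
    not reproduced here."""
    return bin(code).count("1") if uniformity(code, p) <= 2 else p + 1
```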
In step S4, after the computation and encoding of the MRELBP_CI, MRELBP_NI, and MRELBP_RD feature components are completed, the joint histogram used here is formed as a simple concatenation of the 3 feature components rather than an enumerated fusion.
In step S5, to solve a k-class problem (k ≥ 2), we treat it as a set of binary classification problems: an SVM is designed between any two classes of samples, so k classes require k(k−1)/2 binary SVMs, and when an unknown sample is classified, the class with the most votes is its class. Given training vectors x_i ∈ R^n, i = 1, …, l, and a label vector y ∈ R^l satisfying y_i ∈ {1, −1}, the optimization problem for any two classes is:

min_{w,b,ξ} (1/2)‖w‖² + C Σ_{i=1}^{l} ξ_i, subject to y_i (w^T φ(x_i) + b) ≥ 1 − ξ_i, ξ_i ≥ 0, (formula 9)

where φ maps x_i into a high-dimensional space and C > 0 denotes the regularization parameter. Because φ(x_i) may be very high-dimensional, one generally solves the dual problem instead:

min_α (1/2) α^T Q α − e^T α, subject to y^T α = 0 and 0 ≤ α_i ≤ C, i = 1, …, l, (formula 10)

where e = [1, …, 1]^T is the all-ones vector and Q is the positive semi-definite matrix with Q_{ij} = y_i y_j K(x_i, x_j), K(x_i, x_j) = φ(x_i)^T φ(x_j). For the kernel function K, a histogram intersection kernel is used here; it is fast to evaluate and outperforms the commonly used radial basis kernel function. Constructing the Lagrange function and taking extrema yields the optimal w and b. The binary decision function can then be derived from (formula 9) and (formula 10):

sgn( Σ_{i=1}^{l} y_i α_i K(x_i, x) + b ). (formula 11)

(Formula 11) is the decision function for the SVM designed between any two classes of samples. To solve the multi-class problem, the decision results of the k(k−1)/2 binary SVMs are put to a vote, and by the principle that the minority obeys the majority, the class with the most votes is selected as the final classification result.
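The histogram intersection kernel and the one-vs-one vote can be sketched as below. The helpers are illustrative: the Gram matrix could, for instance, be handed to an off-the-shelf SVM that accepts precomputed kernels (such as scikit-learn's `SVC(kernel="precomputed")`, whose internal multi-class strategy is exactly this one-vs-one voting) — that library usage is an assumption about tooling, not part of the patent.

```python
import numpy as np

def hist_intersection_kernel(X, Z):
    """Gram matrix K[i, j] = sum_k min(X[i, k], Z[j, k]) between two sets of
    histogram feature vectors (one histogram per row)."""
    return np.minimum(X[:, None, :], Z[None, :, :]).sum(axis=2)

def ovo_vote(pair_winners, k):
    """Majority vote over the k(k-1)/2 one-vs-one decisions; `pair_winners`
    lists the winning class index of each binary SVM."""
    votes = np.bincount(np.asarray(pair_winners), minlength=k)
    return int(votes.argmax())
```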
Experimental analysis:
The invention provides a practical texture classification method: color histogram features are first computed in HSV space, MRELBP features are then computed in gray-scale space, the two features are concatenated, and the cascade feature is finally input into a histogram intersection kernel SVM classifier. We first compare the feature extraction descriptor used here with the traditional MRELBP feature descriptor in terms of feature dimension. In the local binary feature calculation, the traditional MRELBP descriptor fuses the RELBP_CI, RELBP_NI and RELBP_RD features by enumeration, which produces a high feature dimension; moreover, to complete the classification task in color space, the traditional MRELBP feature must be extracted on each channel separately, making the dimension excessively high. Table 1 below compares the feature dimensions of the feature extraction algorithm used here and the traditional MRELBP feature descriptor on a color image.
Table 1: feature dimension comparison
Next, the histograms produced by the feature descriptor are analyzed under blur, rotation, and noise disturbances. The descriptor's responses under added Gaussian noise, blurring, and rotation show that it is robust to such changes.
In the SVM classification process, the commonly used kernel functions are: the linear kernel, the Gaussian radial basis kernel, the polynomial kernel, the sigmoid kernel, and the histogram intersection kernel. Classification accuracy and training time are compared using cone-yarn texture images collected in industrial production: 1500 training samples and 500 test samples are used, the input image size is 128 × 96, an automatic training mode is adopted, and the number of iterations is 500. The experimental results for the 5 kernel functions are shown in Table 2 below. As can be seen from Table 2, the kernel function used here achieves the highest accuracy while also requiring a short training time.
Table 2: comparison of results of 5 kernel function experiments
To further illustrate the effectiveness of the texture classification algorithm of the present invention, we performed experiments on the KTH-TIPS gray-scale texture dataset. The dataset includes 10 classes; each image is 200 × 200, and the textures appear at different scales. We split the dataset into training data (630 images) and test data (180 images). With 500 iterations, the corresponding experimental results are shown in Table 3 below:
table 3: comparison of experimental results of 3 methods
As can be seen from Table 3, the classification method adopted by the present invention has shorter training time under the condition of equivalent classification precision.
Compared with the prior art, the practical texture classification method combines the improved MRELBP feature extraction method with the color histogram for feature extraction, and on this basis uses a histogram intersection kernel SVM classifier as the texture feature classifier. It thus resolves the problem of excessively high dimensionality of the extracted features, effectively avoids the interference of image blur, noise, scale change and image rotation with classification accuracy, and has the characteristics of high efficiency, stability and practicability.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and any modifications, equivalents, improvements, etc. made within the spirit of the present invention should be included in the scope of the present invention.
Claims (10)
1. The practical texture classification method is characterized by comprising the following steps:
step S1, inputting an RGB image;
step S2, converting the RGB image from the RGB space to HSV space, and extracting color features in the HSV space by adopting a color histogram;
step S3, converting the RGB image from the RGB space to a gray scale space, and extracting texture features in the gray scale space by adopting an improved MRELBP feature extraction method;
step S4, splicing the color features and the texture features to obtain cascade features;
and step S5, inputting the cascade features into a histogram intersection kernel SVM classifier, and outputting the classes by the histogram intersection kernel SVM classifier.
3. The practical texture classification method according to claim 1, wherein in step S2, the hue H is divided into 16 parts and the saturation S is divided into 4 parts.
4. The practical texture classification method according to claim 1, wherein in step S3, three features MRELBP_CI, MRELBP_NI and MRELBP_RD are calculated by the improved MRELBP feature extraction method, and MRELBP_NI and MRELBP_RD are encoded.
6. The practical texture classification method according to claim 4, wherein in step S3, given a pixel point x_c in the normalized image, with the p uniformly distributed neighborhood points on the circle of radius r centered at the current pixel point x_c denoted x_{r,p,n}, n = 0, …, p − 1, the MRELBP_NI feature may be expressed as: MRELBP_NI_{r,p} = Σ_{n=0}^{p−1} s(φ_{w_r}(x_{r,p,n}) − μ_{r,p}) · 2^n.
9. The practical texture classification method according to claim 1, wherein in step S5, a k-class problem (k ≥ 2) is treated as a set of binary classification problems: an SVM is designed between any two classes of samples, so that k classes of samples require k(k−1)/2 binary SVMs; when an unknown sample is classified, the class with the most votes is the class of the unknown sample.
A decision function is designed for the SVM between any two classes of samples; the decision results of the k(k−1)/2 binary SVMs are put to a vote, and, by the principle that the minority obeys the majority, the class with the most votes is selected as the final classification result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010497420.0A CN111401485A (en) | 2020-06-04 | 2020-06-04 | Practical texture classification method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010497420.0A CN111401485A (en) | 2020-06-04 | 2020-06-04 | Practical texture classification method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111401485A true CN111401485A (en) | 2020-07-10 |
Family
ID=71437622
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010497420.0A Pending CN111401485A (en) | 2020-06-04 | 2020-06-04 | Practical texture classification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111401485A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113658118A (en) * | 2021-08-02 | 2021-11-16 | 维沃移动通信有限公司 | Image noise degree estimation method and device, electronic equipment and storage medium |
CN113743523A (en) * | 2021-09-13 | 2021-12-03 | 西安建筑科技大学 | Visual multi-feature guided construction waste fine classification method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110059759A (en) * | 2019-04-25 | 2019-07-26 | 南京农业大学 | Compost maturity prediction technique based on weighting LBP- color moment |
CN110533069A (en) * | 2019-07-25 | 2019-12-03 | 西安电子科技大学 | A kind of two-dimentional chaff distribution character recognition methods based on algorithm of support vector machine |
-
2020
- 2020-06-04 CN CN202010497420.0A patent/CN111401485A/en active Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110059759A (en) * | 2019-04-25 | 2019-07-26 | 南京农业大学 | Compost maturity prediction technique based on weighting LBP- color moment |
CN110533069A (en) * | 2019-07-25 | 2019-12-03 | 西安电子科技大学 | A kind of two-dimentional chaff distribution character recognition methods based on algorithm of support vector machine |
Non-Patent Citations (1)
Title |
---|
LI LIU ET AL: "Median Robust Extended Local Binary Pattern for Texture Classification", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》 *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113658118A (en) * | 2021-08-02 | 2021-11-16 | 维沃移动通信有限公司 | Image noise degree estimation method and device, electronic equipment and storage medium |
WO2023011280A1 (en) * | 2021-08-02 | 2023-02-09 | 维沃移动通信有限公司 | Image noise degree estimation method and apparatus, and electronic device and storage medium |
CN113658118B (en) * | 2021-08-02 | 2024-08-27 | 维沃移动通信有限公司 | Image noise degree estimation method, device, electronic equipment and storage medium |
CN113743523A (en) * | 2021-09-13 | 2021-12-03 | 西安建筑科技大学 | Visual multi-feature guided construction waste fine classification method |
CN113743523B (en) * | 2021-09-13 | 2024-05-14 | 西安建筑科技大学 | Building rubbish fine classification method guided by visual multi-feature |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Deng et al. | Saliency detection via a multiple self-weighted graph-based manifold ranking | |
Liu et al. | Evaluation of LBP and deep texture descriptors with a new robustness benchmark | |
Huang et al. | Local binary patterns and superpixel-based multiple kernels for hyperspectral image classification | |
CN114170418B (en) | Multi-feature fusion image retrieval method for automobile harness connector by means of graph searching | |
CN110738672A (en) | image segmentation method based on hierarchical high-order conditional random field | |
Wang et al. | Fully convolutional network based skeletonization for handwritten chinese characters | |
Benazzouz et al. | Microscopic image segmentation based on pixel classification and dimensionality reduction | |
Stojnić et al. | Detection of pollen bearing honey bees in hive entrance images | |
WO2015146113A1 (en) | Identification dictionary learning system, identification dictionary learning method, and recording medium | |
CN117197904A (en) | Training method of human face living body detection model, human face living body detection method and human face living body detection device | |
CN111401485A (en) | Practical texture classification method | |
Guo et al. | Multi-focus image fusion based on fully convolutional networks | |
Perez et al. | Face patches designed through neuroevolution for face recognition with large pose variation | |
CN112434731A (en) | Image recognition method and device and readable storage medium | |
CN113744241A (en) | Cell Image Segmentation Method Based on Improved SLIC Algorithm | |
CN114463574A (en) | A scene classification method and device for remote sensing images | |
CN113762151A (en) | A fault data processing method, system and fault prediction method | |
Nanda et al. | A person re-identification framework by inlier-set group modeling for video surveillance | |
Sowmya et al. | Significance of processing chrominance information for scene classification: a review | |
CN110363227B (en) | LED classification method based on manifold learning | |
Jena et al. | Elitist TLBO for identification and verification of plant diseases | |
Habiba et al. | Hlgp: a modified local gradient pattern for image classification | |
Li et al. | High-fidelity illumination normalization for face recognition based on auto-encoder | |
Krishna et al. | Color Image Segmentation Using Soft Rough Fuzzy-C-Means and Local Binary Pattern. | |
Fatemi et al. | Fully unsupervised salient object detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20200710 |