
CN118941553A - A Yarn Detection System Based on Image Processing - Google Patents


Info

Publication number
CN118941553A
CN118941553A
Authority
CN
China
Prior art keywords
image
yarn
module
color
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411277774.9A
Other languages
Chinese (zh)
Inventor
谷美露
王军
蒋玉国
陶书君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qufu Jinxulong Textile Co ltd
Original Assignee
Qufu Jinxulong Textile Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qufu Jinxulong Textile Co ltd filed Critical Qufu Jinxulong Textile Co ltd
Priority to CN202411277774.9A priority Critical patent/CN118941553A/en
Publication of CN118941553A publication Critical patent/CN118941553A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a yarn detection system based on image processing, and relates to the technical field of yarn detection. The system comprises an image acquisition module, an image preprocessing module, a feature extraction module, a yarn detection module, a result display module and a system management module. The image acquisition module acquires yarn image data in different forms through a camera; the image preprocessing module preprocesses the acquired image data; the feature extraction module extracts features describing various properties of the yarn from the preprocessed image data; and the yarn detection module analyzes and judges the features of specific regions using image processing technology to detect and identify the yarn. Multi-dimensional feature information in the yarn image is extracted comprehensively and in depth through several techniques, including edge detection, texture feature extraction, color feature extraction, shape feature extraction, regional feature extraction and gray-level feature extraction.

Description

Yarn detecting system based on image processing
Technical Field
The invention relates to the technical field of yarn detection, in particular to a yarn detection system based on image processing.
Background
With the rapid development of the textile industry, the requirements on yarn quality keep increasing. The traditional yarn detection method relies mainly on manual visual inspection; it is inefficient and error-prone, and can hardly meet the demands of modern textile production for speed and intelligence. Yarn detection systems based on image processing have therefore been developed. Such a system uses a camera and image processing technology to analyze yarn images in real time and automatically identify and detect yarn defects such as broken yarns, knots, color differences and impurities, so that yarn quality is inspected rapidly, accurately and efficiently. Automatic yarn detection improves production efficiency, reduces labor and manufacturing costs, improves yarn quality, lowers the rejection rate and increases the added value of the product. A yarn detection system based on image processing is therefore an important direction for automated production and intelligent upgrading in the textile industry and has broad application prospects.
Through retrieval, Chinese patent application No. CN2022109896531 discloses a textile bobbin yarn detection method based on computer vision. The method can be integrated into artificial intelligence systems in the production field, used in artificial intelligence optimization operating systems, artificial intelligence middleware and the like, and applied to the development of computer vision software. The method comprises the following steps: capturing and recognizing a surface image of the bobbin and preprocessing the image to obtain a gray-level image of the bobbin; obtaining the probability that suspected yarn pixels are yarn pixels according to the gray-level fluctuation degree and the gradient-direction consistency of the suspected yarn pixels in the gray-level image of the bobbin, and obtaining the number of yarn pixels from that probability. The textile bobbin yarn detection method based on computer vision in the above patent has the following disadvantage:
Although that system can accurately identify the case in which only a small amount of yarn remains on the bobbin, adapt to complex working conditions, and avoid false detection of residual yarn caused by illumination or other complex conditions, its feature extraction is incomplete. As a result, the system is not robust to environmental factors such as illumination changes and noise interference, and its performance degrades in complex scenes.
Disclosure of Invention
The invention aims to solve the defects in the prior art, and provides a yarn detection system based on image processing.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
The yarn detection system based on image processing comprises an image acquisition module, an image preprocessing module, a feature extraction module, a yarn detection module, a result display module and a system management module. The image acquisition module acquires yarn image data presented in different forms through a camera; the image preprocessing module preprocesses the acquired image data; the feature extraction module extracts features describing various properties of the yarn from the preprocessed image data; the yarn detection module analyzes and judges the features of specific regions using image processing technology to detect and identify the yarn; the result display module presents the detection results to the user in the form of images or text; and the system management module manages and monitors the whole system.
Preferably, the image acquisition module operates as follows:
A1: setting the position and angle of the camera to ensure that yarn images are captured from all angles;
A2: starting the acquisition program, so that the camera begins acquiring image data during yarn running and transmits it to a computer for storage and subsequent image processing and analysis;
A3: monitoring and displaying the acquired images in real time during image data acquisition;
A4: the computer corrects and adjusts the acquired image data, correcting color deviation and deleting image data with poor definition.
Further, the image preprocessing module operates as follows:
B1: performing denoising and image enhancement on the yarn image;
B2: converting the image from the RGB color space to the HSV color space, separating the brightness and color information of the image;
B3: converting the image into a binary (black-and-white) image, separating the yarn from the background, removing small noise points, connecting broken regions in the image, and highlighting the outline and edges of the yarn;
B4: cropping the image according to the actual position of the yarn in the image, removing unnecessary parts, and adjusting the image size.
Building on the above scheme, the feature extraction module operates as follows:
C1: edge detection: extracting edge information from the yarn image with an edge detection technique;
C2: texture feature extraction: extracting texture information from the yarn image through image texture feature analysis and identifying the texture features of the yarn;
C3: color feature extraction: extracting the color features of the yarn from the color-space-converted image;
C4: shape feature extraction: extracting the shape features of the yarn with a shape analysis technique;
C5: regional feature extraction: dividing the image into different regions and extracting the feature information of each region, including the color, texture and shape features within the region;
C6: gray-level feature extraction: describing the gray-level characteristics of the yarn image from its gray-level information by calculating the gray-level histogram, mean, variance, texture features and gray-level gradient features.
Preferably, in step C1, the edge detection technique for extracting edge information from the yarn image uses the following formulas.

Given a gray-level image $I(x,y)$, edge detection is carried out with a horizontal edge detection operator and a vertical edge detection operator (the standard $3\times 3$ Sobel kernels):

Horizontal edge detection operator:

$$S_x=\begin{bmatrix}-1 & 0 & 1\\ -2 & 0 & 2\\ -1 & 0 & 1\end{bmatrix}$$

Vertical edge detection operator:

$$S_y=\begin{bmatrix}-1 & -2 & -1\\ 0 & 0 & 0\\ 1 & 2 & 1\end{bmatrix}$$

The image $I(x,y)$ is convolved with the horizontal and vertical operators to obtain the horizontal edge intensity $G_x=S_x*I$ and the vertical edge intensity $G_y=S_y*I$.

Finally, the edge strength and the edge direction are calculated:

$$G=\sqrt{G_x^{2}+G_y^{2}},\qquad \theta=\arctan\!\left(\frac{G_y}{G_x}\right)$$

where $G$ is the edge intensity and $\theta$ is the edge direction, used to determine the gradient magnitude and direction of the edge.
As a further scheme of the invention, in step C2 the texture features of the image are analyzed and extracted using a two-dimensional wavelet transform:

$$W(a,b_x,b_y)=\frac{1}{a}\sum_{x}\sum_{y} I(x,y)\,\psi\!\left(\frac{x-b_x}{a},\frac{y-b_y}{a}\right)$$

where $I(x,y)$ is the pixel value of the two-dimensional image and $\psi$ is a two-dimensional wavelet function; wavelet transforms at different scales and orientations yield the decomposition coefficients of the image, which are used to analyze its frequency-domain characteristics.
Meanwhile, in the step C3, after the image is converted from the RGB color space to the HSV color space, the number of pixels of different color components is counted in the HSV color space image, a color histogram is constructed, and the color characteristics of the yarn are extracted according to the color histogram, wherein the color characteristics comprise the mean value and the variance of the color distribution, the peak position and the height of the color histogram, the shape characteristics of the color histogram, and the energy and the entropy of the color histogram.
As a preferred embodiment of the invention, the yarn detection module operates as follows:
D1: collecting an image dataset containing positive samples (yarn) and negative samples (non-yarn), labelling it, and dividing it into a training set and a test set for model training and evaluation;
D2: extracting features from the images in the training and test sets;
D3: feeding the extracted features into a machine learning model for training and classification;
D4: building a convolutional neural network model that takes the image as input, extracts features through multiple convolution and pooling layers, and finally classifies through a fully connected layer;
D5: applying the trained model to specific regions of the image to detect and identify yarn, and judging whether yarn is present in the image from the model output.
Meanwhile, the result display module operates as follows:
Display in image form: the detected yarn regions are marked directly on the original image with boxes, lines or points of different colors or shapes; the original image is divided into several regions, each containing one or more yarns, and each region is labelled; at the same time, the confidence of yarn detection is shown with a heat map, in which high-confidence regions appear darker and low-confidence regions appear lighter;
Display in text form: the detected yarn information, including the coordinates, length, width, color and type of the yarn, is presented as a list and as a table, together with a textual description of the detected yarn.
As a more preferable scheme of the invention: the system management module comprises the specific steps of user management (adding, deleting and modifying user account numbers and authority settings), log management (recording system operation logs, error logs and access logs), system setting (setting system parameters and configuration, backing up and restoring operation), authority management (setting authority levels of user roles, managing user groups and authority groups), data management (backing up, transferring and cleaning data), system monitoring (monitoring system running states and performance indexes), system maintenance (software updating and fault processing) and report generation (generating a system running report).
The beneficial effects of the invention are as follows:
1. The yarn detection system based on image processing is characterized in that a feature extraction module comprehensively and deeply extracts multidimensional feature information in a yarn image through various technologies such as edge detection, texture feature extraction, color feature extraction, shape feature extraction, regional feature extraction, gray level feature extraction and the like. The series of steps not only improves the accuracy and richness of feature extraction, but also provides a high-quality data base for subsequent image analysis and pattern recognition, thereby remarkably improving the accuracy and reliability of yarn image analysis.
2. According to the yarn detection system based on image processing, the yarn detection module collects and labels an image dataset containing positive and negative samples, performs feature extraction, and feeds the extracted features into a machine learning model for training and classification; at the same time it builds a convolutional neural network model, extracts image features through multiple convolution and pooling layers, and finally classifies through a fully connected layer. The trained model is applied to specific regions of the image for yarn detection and identification. This process not only improves detection accuracy and efficiency, but also effectively distinguishes yarn from non-yarn, providing strong technical support for yarn quality control and automatic detection.
3. According to the yarn detection system based on image processing, the image preprocessing module performs denoising and image enhancement on the yarn image, so that the quality of the image is improved; converting the image from RGB to HSV color space to facilitate separation of brightness and color information; converting the image into a black-and-white image, separating the yarns, removing noise points and repairing broken areas, and highlighting the contours and edges of the yarns; the image size is cut and adjusted according to the actual position. The series of steps not only improves the definition and usability of the image, but also provides high-quality input data for subsequent image analysis and processing, thereby improving the accuracy and efficiency of the overall analysis.
4. The yarn detection system based on image processing is characterized in that an image acquisition module captures a yarn image at all angles by setting the position and the angle of a camera, starts an acquisition program to monitor and display image data in real time, and transmits the image data to a computer for storage and then performs color cast correction and definition screening. The process not only ensures the integrity and high quality of data, but also improves the working efficiency and the analysis precision, and provides a reliable basis for yarn monitoring and quality control.
Drawings
FIG. 1 is a schematic diagram of a yarn detecting system based on image processing according to the present invention;
FIG. 2 is a flow chart of an image acquisition module in a yarn detecting system based on image processing according to the present invention;
FIG. 3 is a flowchart of an image preprocessing module in a yarn detecting system based on image processing according to the present invention;
FIG. 4 is a flowchart of a feature extraction module in a yarn detection system based on image processing according to the present invention;
FIG. 5 is a flowchart of a yarn detecting module in a yarn detecting system based on image processing according to the present invention.
Detailed Description
The technical scheme of the invention is further described in detail below with reference to the specific embodiments.
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention.
The yarn detection system based on image processing comprises an image acquisition module, an image preprocessing module, a feature extraction module, a yarn detection module, a result display module and a system management module. The image acquisition module acquires yarn image data in different forms through a camera; the image preprocessing module preprocesses the acquired image data; the feature extraction module extracts features describing various properties of the yarn from the preprocessed image data; the yarn detection module analyzes and judges the features of specific regions using image processing technology to detect and identify the yarn; the result display module presents the detection results to the user in the form of images or text; and the system management module manages and monitors the whole system.
In order to provide the necessary inputs for subsequent image processing and yarn detection, as shown in fig. 2, the image acquisition module operates as follows:
A1: setting the position and angle of the camera to ensure that yarn images are captured from all angles;
A2: starting the acquisition program, so that the camera begins acquiring image data during yarn running and transmits it to a computer for storage and subsequent image processing and analysis;
A3: monitoring and displaying the acquired images in real time during image data acquisition;
A4: the computer corrects and adjusts the acquired image data, correcting color deviation and deleting image data with poor definition.
The image acquisition module captures yarn images at all angles by setting the position and the angle of the camera, starts an acquisition program to monitor and display image data in real time, and transmits the image data to a computer for storage and then performs color cast correction and definition screening. The process not only ensures the integrity and high quality of data, but also improves the working efficiency and the analysis precision, and provides a reliable basis for yarn monitoring and quality control.
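As an illustration of steps A1-A4, the sketch below shows one possible acquisition loop in Python with OpenCV. It is a minimal, hedged example: the camera index, output directory, frame limit and the Laplacian-variance threshold used to screen out low-definition frames are assumptions made for illustration, not values specified by the invention.

```python
import os
import cv2

def acquire_yarn_images(camera_index=0, save_dir="frames",
                        blur_threshold=100.0, max_frames=200):
    """Capture, monitor and screen yarn images (illustrative sketch of steps A1-A4)."""
    os.makedirs(save_dir, exist_ok=True)
    cap = cv2.VideoCapture(camera_index)           # A2: start acquisition
    saved = 0
    while saved < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("yarn acquisition", frame)      # A3: real-time monitoring
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
        # A4: simple definition screening via the variance of the Laplacian;
        # frames below the threshold are treated as blurred and discarded.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if cv2.Laplacian(gray, cv2.CV_64F).var() < blur_threshold:
            continue
        cv2.imwrite(os.path.join(save_dir, f"frame_{saved:05d}.png"), frame)
        saved += 1
    cap.release()
    cv2.destroyAllWindows()
```

Color-deviation correction, also named in step A4, could be added to this loop before frames are written to disk, for example with a gray-world white-balance step.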
The preprocessing is selected and adjusted according to the specific yarn detection scene and image characteristics. As shown in fig. 3, the image preprocessing module operates as follows:
B1: performing denoising and image enhancement on the yarn image;
B2: converting the image from the RGB color space to the HSV color space, separating the brightness and color information of the image;
B3: converting the image into a binary (black-and-white) image, separating the yarn from the background, removing small noise points, connecting broken regions in the image, and highlighting the outline and edges of the yarn;
B4: cropping the image according to the actual position of the yarn in the image, removing unnecessary parts, and adjusting the image size.
The image preprocessing module improves the quality of the image by denoising and enhancing the yarn image; converting the image from RGB to HSV color space to facilitate separation of brightness and color information; converting the image into a black-and-white image, separating the yarns, removing noise points and repairing broken areas, and highlighting the contours and edges of the yarns; the image size is cut and adjusted according to the actual position. The series of steps not only improves the definition and usability of the image, but also provides high-quality input data for subsequent image analysis and processing, thereby improving the accuracy and efficiency of the overall analysis.
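The following Python/OpenCV sketch illustrates one way steps B1-B4 could be realized. It is only a minimal example under stated assumptions: the Gaussian kernel size, Otsu thresholding on the V channel, the 3×3 morphological structuring element and the output size are illustrative choices, not parameters fixed by the invention.

```python
import cv2

def preprocess_yarn_image(bgr, crop_box=None, out_size=(256, 256)):
    """Denoise, convert, binarise, clean and crop a yarn image (sketch of B1-B4)."""
    # B1: denoising / smoothing as a simple form of image enhancement
    denoised = cv2.GaussianBlur(bgr, (5, 5), 0)
    # B2: RGB (BGR in OpenCV) -> HSV, separating brightness (V) from color (H, S)
    hsv = cv2.cvtColor(denoised, cv2.COLOR_BGR2HSV)
    # B3: binarise the V channel, then open/close to remove small noise points
    # and bridge broken yarn segments, highlighting the yarn outline
    _, binary = cv2.threshold(hsv[:, :, 2], 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    # B4: crop to the region actually containing the yarn and resize
    if crop_box is not None:
        x, y, w, h = crop_box
        binary = binary[y:y + h, x:x + w]
    return cv2.resize(binary, out_size)
```

In practice the crop box would come from the known yarn position in the field of view, as described in step B4.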
In order to efficiently extract information describing the yarn characteristics, as shown in fig. 4, the feature extraction module operates as follows:
C1: edge detection: extracting edge information from the yarn image with an edge detection technique;
C2: texture feature extraction: extracting texture information from the yarn image through image texture feature analysis and identifying the texture features of the yarn;
C3: color feature extraction: extracting the color features of the yarn from the color-space-converted image;
C4: shape feature extraction: extracting the shape features of the yarn with a shape analysis technique;
C5: regional feature extraction: dividing the image into different regions and extracting the feature information of each region, including the color, texture and shape features within the region;
C6: gray-level feature extraction: describing the gray-level characteristics of the yarn image from its gray-level information by calculating the gray-level histogram, mean, variance, texture features and gray-level gradient features.
The feature extraction module comprehensively and deeply extracts the multi-dimensional feature information in the yarn image through several techniques, including edge detection, texture feature extraction, color feature extraction, shape feature extraction, regional feature extraction and gray-level feature extraction. This series of steps not only improves the accuracy and richness of feature extraction, but also provides a high-quality data basis for subsequent image analysis and pattern recognition, thereby markedly improving the accuracy and reliability of yarn image analysis.
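As one hedged example, the sketch below computes a subset of the gray-level statistics named in step C6 (histogram, mean, variance and a mean gradient magnitude) with NumPy; the bin count is an assumption chosen for illustration.

```python
import numpy as np

def gray_level_features(gray, bins=32):
    """Simple gray-level descriptors for a yarn image (sketch of step C6)."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256), density=True)
    gx, gy = np.gradient(gray.astype(np.float64))    # gray-level gradient components
    grad_mag = np.hypot(gx, gy)
    return {
        "histogram": hist,
        "mean": float(gray.mean()),
        "variance": float(gray.var()),
        "mean_gradient": float(grad_mag.mean()),
    }
```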
In step C1, the edge detection technique for extracting edge information from the yarn image uses the following formulas.

Given a gray-level image $I(x,y)$, edge detection is carried out with a horizontal edge detection operator and a vertical edge detection operator (the standard $3\times 3$ Sobel kernels):

Horizontal edge detection operator:

$$S_x=\begin{bmatrix}-1 & 0 & 1\\ -2 & 0 & 2\\ -1 & 0 & 1\end{bmatrix}$$

Vertical edge detection operator:

$$S_y=\begin{bmatrix}-1 & -2 & -1\\ 0 & 0 & 0\\ 1 & 2 & 1\end{bmatrix}$$

The image $I(x,y)$ is convolved with the horizontal and vertical operators to obtain the horizontal edge intensity $G_x=S_x*I$ and the vertical edge intensity $G_y=S_y*I$.

Finally, the edge strength and the edge direction are calculated:

$$G=\sqrt{G_x^{2}+G_y^{2}},\qquad \theta=\arctan\!\left(\frac{G_y}{G_x}\right)$$

where $G$ is the edge intensity and $\theta$ is the edge direction, used to determine the gradient magnitude and direction of the edge.
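A minimal implementation of this step, under the assumption stated above that the horizontal and vertical operators are the standard 3×3 Sobel kernels, might look like the following OpenCV sketch:

```python
import cv2
import numpy as np

def edge_strength_and_direction(gray):
    """Edge intensity G and direction theta for a gray-level yarn image (step C1)."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # response of the horizontal operator
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)   # response of the vertical operator
    g = np.sqrt(gx ** 2 + gy ** 2)                    # edge intensity G
    theta = np.arctan2(gy, gx)                        # edge direction (arctan2 avoids division by zero)
    return g, theta
```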
In step C2, the texture features of the image are analyzed and extracted using a two-dimensional wavelet transform:

$$W(a,b_x,b_y)=\frac{1}{a}\sum_{x}\sum_{y} I(x,y)\,\psi\!\left(\frac{x-b_x}{a},\frac{y-b_y}{a}\right)$$

where $I(x,y)$ is the pixel value of the two-dimensional image and $\psi$ is a two-dimensional wavelet function; wavelet transforms at different scales and orientations yield the decomposition coefficients of the image, which are used to analyze its frequency-domain characteristics.
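One common way to obtain such multi-scale, multi-orientation decomposition coefficients in practice is a discrete 2-D wavelet decomposition; the sketch below uses PyWavelets and takes the mean energy of each detail sub-band as a texture descriptor. The wavelet family ('db2') and the two decomposition levels are assumptions made only for illustration.

```python
import numpy as np
import pywt

def wavelet_texture_features(gray, wavelet="db2", level=2):
    """Sub-band energies of a 2-D wavelet decomposition as texture features (step C2)."""
    coeffs = pywt.wavedec2(gray.astype(np.float64), wavelet, level=level)
    features = []
    for detail in coeffs[1:]:                # one (cH, cV, cD) tuple per decomposition level
        for band in detail:                  # horizontal, vertical, diagonal detail bands
            features.append(float(np.mean(band ** 2)))   # mean energy of the sub-band
    return np.array(features)
```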
In the step C3, after the image is converted from the RGB color space to the HSV color space, counting the number of pixels of different color components in the HSV color space image, constructing a color histogram, and extracting color characteristics of the yarn according to the color histogram, wherein the color characteristics comprise the mean value and variance of the color distribution, the peak position and height of the color histogram, the shape characteristics of the color histogram, and the energy and entropy of the color histogram.
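A hedged sketch of the color-histogram statistics listed in this step is given below; the number of hue bins is an assumption, and only the hue channel is summarized here for brevity.

```python
import cv2
import numpy as np

def hsv_color_features(bgr, bins=36):
    """Histogram-based color features of a yarn image in HSV space (step C3)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0].ravel()                        # OpenCV hue range: 0..179
    hist, _ = np.histogram(hue, bins=bins, range=(0, 180))
    hist = hist / max(hist.sum(), 1)                  # normalise to a probability distribution
    peak = int(np.argmax(hist))
    nonzero = hist[hist > 0]
    return {
        "mean": float(hue.mean()),
        "variance": float(hue.var()),
        "peak_position": peak,
        "peak_height": float(hist[peak]),
        "energy": float(np.sum(hist ** 2)),
        "entropy": float(-np.sum(nonzero * np.log2(nonzero))),
    }
```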
In step C4, the shape feature of the yarn is described by the form factor

$$F=\frac{4\pi A}{P^{2}}$$

where $F$ denotes the shape factor, $A$ the area of the object and $P$ its perimeter. The shape factor ranges between 0 and 1: a value close to 1 indicates that the yarn region is compact and smooth, while a value close to 0 indicates that the yarn is long and thin.
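For illustration, the form factor could be computed from the binary yarn mask produced by the preprocessing module as in the sketch below; selecting the largest contour as the yarn region is an assumption made here for simplicity.

```python
import cv2
import numpy as np

def shape_factor(binary_mask):
    """Form factor F = 4*pi*A / P^2 of the largest yarn contour (step C4)."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)            # assume the largest blob is the yarn
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    if perimeter == 0:
        return None
    return 4.0 * np.pi * area / perimeter ** 2        # ~1: compact/smooth, ->0: long and thin
```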
In order to realize the yarn detection and identification functions in a designated region, as shown in fig. 5, the yarn detection module operates as follows:
D1: collecting an image dataset containing positive samples (yarn) and negative samples (non-yarn), labelling it, and dividing it into a training set and a test set for model training and evaluation;
D2: extracting features from the images in the training and test sets;
D3: feeding the extracted features into a machine learning model for training and classification;
D4: building a convolutional neural network model that takes the image as input, extracts features through multiple convolution and pooling layers, and finally classifies through a fully connected layer;
D5: applying the trained model to specific regions of the image to detect and identify yarn, and judging whether yarn is present in the image from the model output.
The yarn detection module collects and marks an image data set containing positive and negative samples, performs feature extraction, inputs the extracted features into a machine learning model for training and classification, simultaneously constructs a convolutional neural network model, extracts image features through multi-layer convolution and pooling operation, and finally classifies through a full connection layer. The trained model is applied to a specific area in the image for yarn detection and identification. The process not only improves the accuracy and efficiency of detection, but also can effectively distinguish yarn from non-yarn, and provides powerful technical support for yarn quality control and automatic detection.
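The convolutional network of step D4 is not specified in detail here, so the following PyTorch sketch should be read only as one plausible minimal architecture: the number of layers, channel widths, the 64×64 grayscale input and the two-class output head are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class YarnCNN(nn.Module):
    """Minimal two-class CNN for yarn / non-yarn patches (sketch of step D4)."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(                 # multi-layer convolution + pooling
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(               # fully connected classification head
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):                              # x: (N, 1, 64, 64) grayscale patches
        return self.classifier(self.features(x))

# Step D5, schematically: run the trained model on a candidate region and take the
# arg-max class as the yarn / non-yarn decision, e.g.
# model = YarnCNN(); logits = model(patch); is_yarn = logits.argmax(dim=1) == 1
```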
The result display module operates as follows:
Display in image form: the detected yarn regions are marked directly on the original image with boxes, lines or points of different colors or shapes; the original image is divided into several regions, each containing one or more yarns, and each region is labelled; at the same time, the confidence of yarn detection is shown with a heat map, in which high-confidence regions appear darker and low-confidence regions appear lighter;
Display in text form: the detected yarn information, including the coordinates, length, width, color and type of the yarn, is presented as a list and as a table, together with a textual description of the detected yarn.
The result display module marks the detected yarn regions on the original image with marks of different colors or shapes and shows the detection confidence with a heat map, providing intuitive visual feedback. In addition, the detected yarn information, including coordinates, length, width, color and type, is presented as a list and a table together with a textual description, providing detailed text information. This process not only makes the results easier to understand and analyze, but also provides comprehensive yarn information, facilitating further processing and decision-making and improving the practicality and accuracy of the detection results.
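A minimal rendering sketch for the image-form display is given below; the detection tuple layout (x, y, w, h, score), the requirement that the confidence map has the same height and width as the input image, and the use of a JET color map for the confidence overlay are assumptions chosen for illustration.

```python
import cv2
import numpy as np

def render_detections(bgr, detections, confidence_map):
    """Overlay detection boxes, scores and a confidence heat map on the original image."""
    vis = bgr.copy()
    for x, y, w, h, score in detections:                       # mark each detected yarn region
        cv2.rectangle(vis, (x, y), (x + w, y + h), (0, 0, 255), 2)
        cv2.putText(vis, f"{score:.2f}", (x, max(y - 4, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
    # Confidence heat map: confidence_map is a float array in [0, 1] with the same
    # height/width as bgr; higher confidence maps to a stronger overlay color.
    heat = cv2.applyColorMap((np.clip(confidence_map, 0, 1) * 255).astype(np.uint8),
                             cv2.COLORMAP_JET)
    return cv2.addWeighted(vis, 0.6, heat, 0.4, 0)
```

The text-form display could then simply tabulate the same detection records (coordinates, length, width, color, type) with any table or reporting library.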
The system management module provides user management (adding, deleting and modifying user accounts and permission settings), log management (recording system operation logs, error logs and access logs), system settings (setting system parameters and configuration, backup and restore operations), permission management (setting permission levels for user roles, managing user groups and permission groups), data management (backing up, migrating and cleaning data), system monitoring (monitoring system running state and performance indicators), system maintenance (software updates and fault handling) and report generation (generating system operation reports).
The system management module provides comprehensive system management functions through a series of steps of user management, log management, system setting, authority management, data management, system monitoring, system maintenance, report generation and the like. User management and authority management ensure the safety and controllability of the system, log management and system monitoring realize real-time tracking and recording of the running state of the system, system setting and maintenance ensure the stability and sustainability of the system, and data management and report generation improve the reliability and availability of data. The process not only improves the overall management efficiency of the system, but also enhances the safety, stability and maintainability of the system, and provides firm guarantee for the efficient operation of the system.
The foregoing is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any equivalent substitution or modification made by a person skilled in the art, within the technical scope disclosed by the present invention, according to the technical scheme of the present invention and its inventive concept shall be covered by the scope of protection of the present invention.

Claims (10)

1. The yarn detection system based on image processing comprises an image acquisition module, an image preprocessing module, a feature extraction module, a yarn detection module, a result display module and a system management module, and is characterized in that the image acquisition module acquires yarn image data in different forms through a camera, the image preprocessing module preprocesses the image data after image acquisition, the feature extraction module extracts features for describing various information of yarns from the image data after image data preprocessing, the yarn detection module analyzes and judges the features of a special region by utilizing an image processing technology so as to realize detection and identification of the yarns, the result display module displays detection results to a user in the form of images or characters, and the system management module manages and monitors the whole system.
2. The yarn detecting system based on image processing as in claim 1, wherein the image acquisition module works as follows:
a1: setting the position and angle of a camera to ensure that the yarn images are captured at all angles;
a2: starting an acquisition program, and enabling a camera to start acquiring image data in the yarn running process and transmitting the image data to a computer for storage so as to perform subsequent image processing and analysis;
a3: in the image data acquisition process, the acquired images are monitored and displayed in real time;
A4: and the computer corrects and adjusts the acquired image data, corrects the color deviation of the image data, and deletes the image data with poor definition.
3. The yarn detecting system based on image processing as in claim 1, wherein the image preprocessing module works as follows:
b1: denoising and image enhancement processing are carried out on the yarn image;
B2: converting the image from RGB color space to HSV color space, and separating brightness and color information of the image;
b3: converting the image into a black-and-white image, separating the yarn from the background, removing small noise points and connecting broken areas in the image, and highlighting the outline and the edge of the yarn;
b4: cutting the image according to the actual position in the yarn image, removing unnecessary parts, and adjusting the size of the image.
4. The yarn detection system based on image processing as in claim 1, wherein the feature extraction module operates as follows:
C1: edge detection: extracting edge information in the yarn image by adopting an edge detection technology;
C2: texture feature extraction: extracting texture information in the yarn image through image texture feature analysis, and identifying texture features of the yarn;
And C3: color feature extraction: extracting color characteristics of the yarns according to the image converted by the color space;
and C4: and (3) extracting shape features: extracting shape characteristics of the yarns by adopting a shape analysis technology;
C5: extracting regional characteristics: dividing an image into different areas, and extracting feature information of each area, wherein the feature information comprises color features, texture features and shape features in the area;
C6: gray level feature extraction: according to the gray information of the image, the gray characteristics of the yarn image are described by calculating a gray histogram, a mean value, a variance, texture characteristics and gray gradient characteristics.
5. The system of claim 4, wherein in step C1 the edge detection technique for extracting edge information from the yarn image uses the following formulas:

given a gray-level image $I(x,y)$, edge detection is carried out with a horizontal edge detection operator and a vertical edge detection operator ($3\times 3$ Sobel kernels):

horizontal edge detection operator:

$$S_x=\begin{bmatrix}-1 & 0 & 1\\ -2 & 0 & 2\\ -1 & 0 & 1\end{bmatrix}$$

vertical edge detection operator:

$$S_y=\begin{bmatrix}-1 & -2 & -1\\ 0 & 0 & 0\\ 1 & 2 & 1\end{bmatrix}$$

the image $I(x,y)$ is convolved with the horizontal and vertical edge detection operators to obtain the horizontal edge intensity $G_x=S_x*I$ and the vertical edge intensity $G_y=S_y*I$;

finally, the edge strength and the edge direction are calculated:

$$G=\sqrt{G_x^{2}+G_y^{2}},\qquad \theta=\arctan\!\left(\frac{G_y}{G_x}\right)$$

where $G$ is the edge intensity and $\theta$ is the edge direction, used to determine the gradient magnitude and direction of the edge.
6. The system of claim 4, wherein in step C2 the texture features of the image are analyzed and extracted using a two-dimensional wavelet transform:

$$W(a,b_x,b_y)=\frac{1}{a}\sum_{x}\sum_{y} I(x,y)\,\psi\!\left(\frac{x-b_x}{a},\frac{y-b_y}{a}\right)$$

where $I(x,y)$ is the pixel value of the two-dimensional image and $\psi$ is a two-dimensional wavelet function; wavelet transforms at different scales and orientations yield the decomposition coefficients of the image, which are used to analyze its frequency-domain characteristics.
7. The system of claim 4, wherein in the step C3, after the image is converted from RGB color space to HSV color space, the number of pixels of different color components is counted in the HSV color space image, a color histogram is constructed, and the color characteristics of the yarn are extracted according to the color histogram, including the mean and variance of the color distribution, the peak position and height of the color histogram, the shape characteristics of the color histogram, and the energy and entropy of the color histogram.
8. The yarn detecting system based on image processing as in claim 1, wherein the yarn detecting module works as follows:
D1: collecting an image dataset comprising positive samples (yarns) and negative samples (non-yarns), marking, and dividing the dataset into a training set and a testing set for model training and evaluation;
d2: extracting features of images in the training set and the testing set;
d3: after the features are extracted, the extracted features are input into a machine learning model for training and classification;
d4: constructing a convolutional neural network model, taking an image as input, extracting characteristics through multi-layer convolution and pooling operation, and finally classifying through a full connection layer;
d5: and applying the trained model to a specific area in the image, detecting and identifying yarns, and judging whether yarns exist in the image according to a result output by the model.
9. The yarn detecting system based on image processing as in claim 1, wherein the result display module comprises the following working steps:
and displaying in an image form: directly marking detected yarn areas on an original image, marking frames, lines or points with different colors or shapes, dividing the original image into a plurality of areas, wherein each area only comprises one or more yarns, marking each area, simultaneously displaying the confidence level of yarn detection by using a thermodynamic diagram, wherein the high-confidence-level areas are darker in color, and the low-confidence-level areas are lighter in color;
and displaying the text form: the detected yarn information is displayed in a list form including coordinates, length, width, color and type of the yarn, and the yarn information is displayed in a table form while the detected yarn is described in text.
10. The image processing-based yarn detecting system as claimed in claim 1, wherein said system management module comprises the specific steps of user management (adding, deleting, modifying user account numbers and authority settings), log management (recording system operation log, error log, access log), system setting (setting system parameters and configuration, backup and restore operations), authority management (setting authority level of user roles, managing user group and authority group), data management (backup, migration, cleaning data), system monitoring (monitoring system operation status, performance index), system maintenance (software update, fault handling), and report generation (generating system operation report).
CN202411277774.9A 2024-09-12 2024-09-12 A Yarn Detection System Based on Image Processing Pending CN118941553A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411277774.9A CN118941553A (en) 2024-09-12 2024-09-12 A Yarn Detection System Based on Image Processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411277774.9A CN118941553A (en) 2024-09-12 2024-09-12 A Yarn Detection System Based on Image Processing

Publications (1)

Publication Number Publication Date
CN118941553A true CN118941553A (en) 2024-11-12

Family

ID=93365113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411277774.9A Pending CN118941553A (en) 2024-09-12 2024-09-12 A Yarn Detection System Based on Image Processing

Country Status (1)

Country Link
CN (1) CN118941553A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119887620A (en) * 2024-11-27 2025-04-25 新凤鸣集团股份有限公司 Polyester filament yarn detection method and system based on image processing technology
CN119313660A (en) * 2024-12-16 2025-01-14 江苏格罗瑞节能科技有限公司 Dynamic detection and feature extraction method of textile spindle speed
CN119313660B (en) * 2024-12-16 2025-05-16 江苏格罗瑞节能科技有限公司 Dynamic detection and feature extraction method of textile spindle speed

Similar Documents

Publication Publication Date Title
CN118608504B (en) Machine vision-based part surface quality detection method and system
CN108562589B (en) Method for detecting surface defects of magnetic circuit material
CN111582294B (en) Method for constructing convolutional neural network model for surface defect detection and application thereof
CN107563396B (en) A method for constructing an intelligent identification system for protection screens in power inspections
CN118941553A (en) A Yarn Detection System Based on Image Processing
CN114549522A (en) Textile quality detection method based on target detection
CN113777030A (en) Cloth surface defect detection device and method based on machine vision
CN107123114A (en) A kind of cloth defect inspection method and device based on machine learning
CN107870172A (en) A Method of Cloth Defect Detection Based on Image Processing
CN118196068B (en) Textile printing and dyeing quality monitoring system based on artificial intelligence
CN119290896B (en) A method and system for detecting defects in textile products
CN111852792B (en) Fan blade defect self-diagnosis positioning method based on machine vision
CN117952904A (en) Large equipment surface defect positioning and measuring method based on combination of image and point cloud
CN118570865B (en) Face recognition analysis method and system based on artificial intelligence
CN117372373A (en) Textile production quality management system based on big data
CN117152158B (en) Textile defect detection method and system based on artificial intelligence
CN108460344A (en) Dynamic area intelligent identifying system in screen and intelligent identification Method
Zhang et al. Fabric defect detection based on visual saliency map and SVM
CN119574563A (en) A method and system for detecting ribbon defects based on artificial intelligence
CN118097305B (en) Method and system for detecting quality of semiconductor light-emitting element
CN114330477A (en) A system and method for defect detection of power equipment based on mixed reality equipment
CN113052234A (en) Jade classification method based on image features and deep learning technology
CN118587496A (en) Automatic identification system and method of parts processing accuracy based on computer vision
CN117496507A (en) Target detection method and device for edible fungus insect damage
CN115753791B (en) Defect detection method, device and system based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination