CN117576031A - Internet of things system based on monitoring printed image and text detection - Google Patents
Internet of things system based on monitoring printed image and text detection
- Publication number
- CN117576031A (Application CN202311536966.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- system based
- text
- information
- detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/001 — Image analysis; industrial image inspection using an image reference approach
- G06T7/10 — Image analysis; segmentation; edge detection
- G06T7/90 — Image analysis; determination of colour characteristics
- G06V10/255 — Image preprocessing; detecting or recognising potential candidate objects based on visual cues, e.g. shapes
- G06V10/462 — Extraction of image features; salient features, e.g. scale invariant feature transforms [SIFT]
- G06V10/54 — Extraction of image or video features relating to texture
- G06V10/56 — Extraction of image or video features relating to colour
- G06V10/764 — Image or video recognition using pattern recognition or machine learning, using classification, e.g. of video objects
- H04L67/10 — Network protocols in which an application is distributed across nodes in the network
- H04L67/12 — Network protocols specially adapted for proprietary or special-purpose networking environments, e.g. sensor networks
- G06T2207/10004 — Image acquisition modality: still image; photographic image
- G06T2207/30108 — Subject of image processing: industrial image inspection
- Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention relates to the technical field of printed image and text detection, and discloses an internet of things system based on monitoring printed image and text detection. The printed image and text detection module acquires image information in real time and analyzes and compares it according to a related algorithm, so as to judge whether the printed images and text show abnormal states such as wrong pages, mixed material, or blank pages. The analysis results are uploaded in real time to a cloud server, which stores them so that managers can later review the working condition of the image and text printing equipment. At the same time, the analysis results are sent in real time to a display terminal, which feeds the real-time printing state of the printing equipment and complete real-time operation data back to the manager.
Description
Technical Field
The invention relates to the technical field of printed image and text detection, in particular to an Internet of things system based on monitoring printed image and text detection.
Background
The printer is a machine that prints characters and images. Modern printers typically consist of mechanisms for plate loading, inking, impression, and paper feeding (including folding). Their working principle is as follows: the characters and images to be printed are first made into a printing plate, which is mounted on the printer; ink is then applied, manually or by the printer, to the areas of the plate carrying the characters and images, and transferred directly or indirectly onto paper or another printing stock (such as textiles, metal plate, plastics, leather, wood board, glass, or ceramics), reproducing a printed copy identical to the plate. The invention and development of the printing machine have played an important role in the spread of human civilization and culture;
the printing image-text visual detection system is arranged on the machine equipment of the customer, images are shot by an industrial camera, images are processed, analyzed and identified, defects such as wrong pages, mixed materials, white pages and the like of the product are identified, and a shutdown or automatic waste discharge signal is output after the defective products are found. However, due to the limitations of computer technology, image processing technology and mobile communication technology, whether the machine normally uses the image-text detection system cannot be monitored in real time;
however, the existing detection systems have the following problems: data such as the operation failure rate, stability, and machine start-up time of on-site detection equipment can only be recorded manually by on-site staff every day, the authenticity of the specific data cannot be verified, and no related database statistics are available.
Disclosure of Invention
(one) solving the technical problems
Aiming at the defects of the prior art, the invention provides an internet of things system based on monitoring printed image and text detection. Image information is acquired by the printed image and text detection module and analyzed and compared in real time according to a related algorithm, so as to judge whether the printed images and text show abnormal states such as wrong pages, mixed material, or blank pages. The analysis result is uploaded in real time to a cloud server, which stores it so that managers can later review the working condition of the image and text printing equipment. At the same time, the analysis result is sent in real time to a display terminal, which feeds the real-time printing state of the equipment and complete real-time operation data back to the manager. This solves the problems that data such as the operation failure rate, stability, and machine start-up time of on-site detection equipment could previously only be recorded manually every day, that the authenticity of specific data could not be verified, and that related data could not be compiled into statistics.
(II) technical scheme
In order to achieve the above purpose, the present invention provides the following technical solutions: an internet of things system based on monitoring printing image and text detection comprises a printing image and text detection module, a cloud server and a display terminal;
the printing image-text detection module comprises an image acquisition unit, an image information analysis unit and a data output unit, wherein,
the image acquisition unit is used for acquiring the printing image-text information, the image information analysis unit is used for analyzing and processing the acquired image-text information and comparing the processed image-text information with the preset image-text information characteristics, and the image information analysis unit transmits the processed information to the data output unit;
the data output unit is connected with the cloud server through a network, and the data output unit sends the detection result to the cloud server through the network;
the cloud server is used for storing the detection result and sending the detection result to the display terminal through a network;
the display terminal is used for receiving and displaying the detection result.
Preferably, the specific steps of collecting the printed image-text information are as follows:
a1, determining an acquisition target;
a2, preparing a collected sample;
a3, collecting image information;
and A4, sending the image information to a data analysis unit.
Preferably, after receiving the image information, the data analysis unit processes the image information, and the specific steps of the image information processing are as follows:
b1, preprocessing image information;
b2, extracting useful characteristic information from the image according to the need;
b3, detecting and identifying a target state in the image by using an image processing algorithm and a machine learning method;
b4, dividing the image into different areas or objects so as to better understand and process the image;
b5, feature extraction: extracting representative features from the image;
b6, image classification: dividing the images into different categories according to the extracted features;
b7, analyzing and applying the processed image result, and performing result visualization, statistical analysis and decision judgment according to specific requirements;
and B8, sending the analysis result to a cloud server through a network.
Preferably, the image information preprocessing further includes the steps of:
b1.1, graying the image, and using a weighted average method;
b1.2, binarizing the image, converting the gray level image into a binary image, and further highlighting the target object and the edge.
Preferably, the formula of the image processing algorithm is as follows:
assuming the gradient magnitude image of the gray image is M(i, j), for each pixel (i, j)
calculate the magnitudes of the two adjacent positions along the gradient direction:
M1 = M(i, j-1), M2 = M(i, j+1) (horizontal direction)
M3 = M(i-1, j), M4 = M(i+1, j) (vertical direction)
if the magnitude of the current pixel is not the maximum among these adjacent points, set its magnitude to 0;
the formula applies non-maximum suppression to the magnitude image so that only the edge response of the maximum gradient magnitude is preserved.
Preferably, the feature extraction further comprises the steps of:
b5.1, extracting color characteristics;
b5.2, extracting texture features;
b5.3, extracting shape characteristics;
and B5.4, SIFT feature extraction.
Preferably, the SIFT feature extraction algorithm formula is as follows:
DoG(t,x,y)=G(t,x,y)-G(t-1,x,y);
in the formula, DoG is the difference-of-Gaussians pyramid, and G(t, x, y) denotes the pixel value at spatial location (x, y) of the Gaussian-pyramid image at scale level t.
Preferably, the cloud server is used for storing the analysis result and sending the analysis result to a display terminal, and the display terminal is composed of a computer terminal and a mobile phone terminal.
A computer device comprises a memory and a processor; the memory stores a computer program, and the processor, when executing the program, implements the steps of the internet of things system based on monitoring printed image and text detection.
A computer readable storage medium stores a computer program which, when executed by a processor, carries out the steps of the internet of things system based on monitoring printed image and text detection.
Compared with the prior art, the invention provides an Internet of things system based on monitoring printing image-text detection, which has the following beneficial effects:
according to the invention, the image information is collected through the printing image-text detection module and analyzed and compared in real time according to the related algorithm, so that whether the printed image-text has abnormal states, such as wrong pages, mixed materials, white pages and the like, is judged, the analysis result is uploaded to the cloud server in real time, the cloud server stores the analysis result, a later manager is convenient to multiplex the working condition of the image-text printing equipment, meanwhile, the analysis result is sent to the display terminal in real time, the real-time state of printing of the printing equipment and complete real-time operation data are fed back to the manager in real time, and the situation that the data information such as the operation failure rate, the stability, the machine start-up using time and the like of the detection equipment in a manual field can only be recorded manually by the field staff every day is avoided, the reality of specific data cannot be reflected, and the related data cannot be counted.
Drawings
FIG. 1 is a schematic diagram of a system according to the present invention;
FIG. 2 is a schematic diagram of the flow of collecting printed image and text information according to the present invention;
FIG. 3 is a schematic diagram of an image information preprocessing flow according to the present invention;
FIG. 4 is a schematic diagram of an image information processing flow according to the present invention;
fig. 5 is a schematic diagram of a feature extraction process according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to figs. 1-5, an internet of things system based on monitoring printed image and text detection comprises a printing image-text detection module, a cloud server and a display terminal;
the printing image-text detection module comprises an image acquisition unit, an image information analysis unit and a data output unit, wherein,
the image acquisition unit is used for acquiring the printed image-text information, the image information analysis unit is used for analyzing and processing the acquired image-text information and comparing the processed image-text information with the preset image-text information characteristics, and the image information analysis unit transmits the processed information to the data output unit;
the data output unit is connected with the cloud server through a network, and the data output unit sends the detection result to the cloud server through the network;
the cloud server is used for storing the detection result and sending the detection result to the display terminal through the network;
the display terminal is used for receiving and displaying the detection result;
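As a rough illustration of how the three parts above chain together, the following minimal Python sketch models the acquisition unit, the analysis unit (comparing against preset features), and the data output unit pushing results to a stand-in cloud store. All class names, field names, and the pixel-sum "feature" are hypothetical stand-ins for the example, not taken from the patent.

```python
# Minimal sketch of the module chain described above (hypothetical names):
# acquisition unit -> analysis unit -> data output unit -> cloud server store.

class ImageAcquisitionUnit:
    """Captures printed image/text information (here: a stub image)."""
    def capture(self):
        return {"pixels": [[0, 255], [255, 0]], "page_id": 1}

class ImageInformationAnalysisUnit:
    """Compares the captured information against preset image/text features."""
    def __init__(self, preset_feature):
        self.preset_feature = preset_feature

    def analyse(self, image):
        # A real system would extract color/texture/shape/SIFT features;
        # here we just compare a stub feature (the pixel sum).
        feature = sum(sum(row) for row in image["pixels"])
        status = "normal" if feature == self.preset_feature else "abnormal"
        return {"page_id": image["page_id"], "status": status}

class DataOutputUnit:
    """Sends the detection result to the cloud server over the network."""
    def __init__(self, cloud_store):
        self.cloud_store = cloud_store

    def send(self, result):
        self.cloud_store.append(result)  # stands in for a network upload

cloud_server = []                        # stand-in for cloud storage
acquisition = ImageAcquisitionUnit()
analysis = ImageInformationAnalysisUnit(preset_feature=510)
output = DataOutputUnit(cloud_server)

output.send(analysis.analyse(acquisition.capture()))
```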
the specific steps for collecting the printed image-text information are as follows:
a1, determining an acquisition target;
a2, preparing a collected sample;
a3, collecting image information;
a4, sending the image information to a data analysis unit;
after receiving the image information, the data analysis unit processes the image information, and the specific steps of the image information processing are as follows:
b1, preprocessing image information: by converting the color image into a grayscale image, the color information is removed and only the luminance information is kept. The color channels may be fused into one gray value by simple averaging, weighted averaging, or weights modelled on human-eye perception;
Wherein, the image information preprocessing further comprises the following steps:
b1.1, graying the image, and using a weighted average method;
b1.2, binarizing the image, converting the gray level image into a binary image, and further highlighting the target object and the edge;
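Steps B1.1 and B1.2 can be sketched as follows. This is a generic illustration using the common ITU-R BT.601 luminance weights (0.299, 0.587, 0.114) and an assumed fixed threshold of 128, not the patent's exact parameters.

```python
import numpy as np

def to_gray(rgb):
    """Weighted-average graying: fuse the R, G, B channels into one
    luminance value using common perceptual weights."""
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb @ weights).astype(np.uint8)

def binarize(gray, threshold=128):
    """Convert the grayscale image to a binary image, further
    highlighting the target object and its edges."""
    return np.where(gray >= threshold, 255, 0).astype(np.uint8)

# One red pixel and one white pixel.
rgb = np.array([[[255, 0, 0], [255, 255, 255]]], dtype=np.float64)
gray = to_gray(rgb)       # red -> 0.299 * 255 = 76 after truncation
binary = binarize(gray)   # red falls below the threshold, white above
```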
b2, extracting useful characteristic information from the image according to the need;
b3, detecting and identifying a target state in the image by using an image processing algorithm and a machine learning method;
the formula of the image processing algorithm is as follows:
assuming the gradient magnitude image of the gray image is M(i, j), for each pixel (i, j)
calculate the magnitudes of the two adjacent positions along the gradient direction:
M1 = M(i, j-1), M2 = M(i, j+1) (horizontal direction)
M3 = M(i-1, j), M4 = M(i+1, j) (vertical direction)
if the magnitude of the current pixel is not the maximum among these adjacent points, set its magnitude to 0;
the formula applies non-maximum suppression to the magnitude image so that only the edge response of the maximum gradient magnitude is preserved;
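The suppression rule above can be sketched in Python as follows. This is a simplified, axis-aligned reading of the formula (a pixel is kept only if it is the maximum of its horizontal neighbour pair M1, M2 or its vertical pair M3, M4), not a full Canny-style implementation with quantized gradient directions.

```python
import numpy as np

def nonmax_suppress(M):
    """Zero every pixel whose magnitude is not the maximum of its
    neighbours along the horizontal (M1, M2) or vertical (M3, M4) pair."""
    P = np.pad(M, 1)                     # zero-pad the border
    out = np.zeros_like(M)
    for i in range(M.shape[0]):
        for j in range(M.shape[1]):
            m = P[i + 1, j + 1]
            horiz = m >= P[i + 1, j] and m >= P[i + 1, j + 2]  # M1, M2
            vert = m >= P[i, j + 1] and m >= P[i + 2, j + 1]   # M3, M4
            if horiz or vert:
                out[i, j] = m            # local maximum: keep its response
    return out

# The peak of a horizontal ridge survives suppression.
ridge = np.array([[0, 0, 0],
                  [1, 5, 1],
                  [0, 0, 0]])
suppressed = nonmax_suppress(ridge)
```

A pixel that is dominated by its neighbours in both directions is set to 0, matching the rule in the text.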
b4, dividing the image into different areas or objects so as to better understand and process the image;
b5, feature extraction: extracting representative features from the image;
wherein, the feature extraction further comprises the following steps:
b5.1, extracting color characteristics;
b5.2, extracting texture features;
b5.3, extracting shape characteristics;
and B5.4, SIFT feature extraction, wherein the SIFT feature extraction algorithm formula is as follows:
DoG(t,x,y)=G(t,x,y)-G(t-1,x,y);
in the formula, DoG is the difference-of-Gaussians pyramid, and G(t, x, y) denotes the pixel value at spatial location (x, y) of the Gaussian-pyramid image at scale level t;
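The DoG formula can be sketched as below: build a stack of Gaussian-blurred images G(t, x, y) at increasing scales and subtract adjacent levels. The separable blur is a generic implementation written for this example, not code from the patent, and the scale values are arbitrary.

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D normalized Gaussian kernel with radius 3 * sigma."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with edge padding: one level G(t, x, y)."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    conv = lambda v: np.convolve(np.pad(v, r, mode="edge"), k, mode="valid")
    return np.apply_along_axis(conv, 0, np.apply_along_axis(conv, 1, img))

def dog_pyramid(img, sigmas):
    """DoG(t, x, y) = G(t, x, y) - G(t-1, x, y) for consecutive scales."""
    levels = [gaussian_blur(img, s) for s in sigmas]
    return [levels[t] - levels[t - 1] for t in range(1, len(levels))]

# On a constant image every DoG level is numerically zero: blurring
# changes nothing, so adjacent Gaussian levels cancel in the subtraction.
flat = np.full((8, 8), 7.0)
dog = dog_pyramid(flat, sigmas=[1.0, 1.6])
```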
b6, image classification: dividing the images into different categories according to the extracted features;
b7, analyzing and applying the processed image result, and performing result visualization, statistical analysis and decision judgment according to specific requirements;
b8, sending the analysis result to a cloud server through a network;
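Step B8 might package the analysis result like the sketch below before uploading it. The field names are hypothetical (the patent does not specify a payload format), and the actual network call is only indicated in a comment.

```python
import json

def build_report(device_id, page_id, status, defects, timestamp):
    """Assemble a detection report for upload to the cloud server.
    All field names here are illustrative, not from the patent."""
    return {
        "device_id": device_id,
        "page_id": page_id,
        "status": status,        # e.g. "normal" / "abnormal"
        "defects": defects,      # e.g. ["wrong page", "blank page"]
        "timestamp": timestamp,  # fixed here to keep the example deterministic
    }

report = build_report("printer-01", 42, "abnormal", ["blank page"], 1700000000)
payload = json.dumps(report)
# In a real deployment this payload would be sent over the network, e.g. an
# HTTP POST to the cloud server's ingest endpoint via urllib.request.
```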
the cloud server is used for storing the analysis results and sending them to the display terminal. The cloud server provides various storage options, such as object storage, file storage, and database storage, and can act as an intermediate node or proxy that forwards and processes information. For example, if multiple terminal devices need to communicate with each other, the cloud server can serve as a message relay, receiving information from one device and forwarding it to the others. The cloud server can also process real-time data streams, such as sensor data and log data, and forward them to an appropriate target. It offers high reliability and elasticity, so high-performance, low-latency information storage and forwarding can be achieved in a cloud environment; in addition, cloud service providers generally offer security, backup, and disaster-recovery mechanisms to ensure the security and reliability of information during storage and transmission;
the display terminal is composed of a computer end and a mobile phone end. After receiving the analysis results, the display terminal sends the real-time monitoring results of the internet of things system for monitoring printed image and text detection to the manager through the computer end and the mobile phone end respectively, so that the manager can remotely grasp the image and text printing state and quality in real time.
In the internet of things system based on monitoring printed image and text detection, image information is acquired by the printed image and text detection module and analyzed and compared in real time according to the related algorithm, so as to judge whether the printed images and text show abnormal states such as wrong pages, mixed material, or blank pages. The analysis result is uploaded in real time to the cloud server, which stores it so that managers can later review the working condition of the image and text printing equipment. At the same time, the analysis result is sent in real time to the display terminal, which feeds the real-time printing state of the printing equipment and complete real-time operation data back to the manager. This avoids the situation in which data such as the operation failure rate, stability, and machine start-up time of on-site detection equipment can only be recorded manually by on-site staff every day, the authenticity of specific data cannot be verified, and related data cannot be compiled into statistics.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (10)
1. An internet of things system based on monitoring printing image and text detection is characterized in that: the system comprises a printing image-text detection module, a cloud server and a display terminal;
the printing image-text detection module comprises an image acquisition unit, an image information analysis unit and a data output unit, wherein,
the image acquisition unit is used for acquiring the printing image-text information, the image information analysis unit is used for analyzing and processing the acquired image-text information and comparing the processed image-text information with the preset image-text information characteristics, and the image information analysis unit transmits the processed information to the data output unit;
the data output unit is connected with the cloud server through a network, and the data output unit sends the detection result to the cloud server through the network;
the cloud server is used for storing the detection result and sending the detection result to the display terminal through a network;
the display terminal is used for receiving and displaying the detection result.
2. An internet of things system based on monitoring printed graphic detection as claimed in claim 1, wherein: the specific steps of collecting the printed image-text information are as follows:
a1, determining an acquisition target;
a2, preparing a collected sample;
a3, collecting image information;
and A4, sending the image information to a data analysis unit.
3. An internet of things system based on monitoring printed graphic detection as claimed in claim 2, wherein: after receiving the image information, the data analysis unit processes the image information, wherein the specific steps of the image information processing are as follows:
b1, preprocessing image information;
b2, extracting useful characteristic information from the image according to the need;
b3, detecting and identifying a target state in the image by using an image processing algorithm and a machine learning method;
b4, dividing the image into different areas or objects so as to better understand and process the image;
b5, feature extraction: extracting representative features from the image;
b6, image classification: dividing the images into different categories according to the extracted features;
b7, analyzing and applying the processed image result, and performing result visualization, statistical analysis and decision judgment according to specific requirements;
and B8, sending the analysis result to a cloud server through a network.
4. An internet of things system based on monitoring printed graphic detection as claimed in claim 3, wherein: the image information preprocessing further comprises the following steps:
b1.1, graying the image, and using a weighted average method;
b1.2, binarizing the image, converting the gray level image into a binary image, and further highlighting the target object and the edge.
5. An internet of things system based on monitoring printed graphic detection as claimed in claim 3, wherein: the formula of the image processing algorithm is as follows:
let M(i, j) denote the gradient magnitude of the gray image at pixel (i, j);
the magnitudes of the two neighbors of the pixel along the gradient direction are compared:
M1 = M(i, j-1), M2 = M(i, j+1) (horizontal direction)
M3 = M(i-1, j), M4 = M(i+1, j) (vertical direction)
if the magnitude of the current pixel is not the maximum among these neighboring points, its magnitude is set to 0;
the formula applies non-maximum suppression to the magnitude image, so that only the edge responses with locally maximal gradient magnitude are preserved.
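The comparison rule above can be sketched directly; this is a simplified illustration that assumes each pixel's gradient direction has already been quantized to horizontal or vertical (the boolean mask is an assumption for self-containment):

```python
import numpy as np

def non_max_suppress(M, horizontal):
    """Keep M(i, j) only if it is a local maximum along its gradient direction.

    M:          2-D array of gradient magnitudes.
    horizontal: boolean mask; True means compare M1/M2 (left/right neighbors),
                False means compare M3/M4 (up/down neighbors).
    """
    out = np.zeros_like(M)
    rows, cols = M.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            if horizontal[i, j]:
                m1, m2 = M[i, j - 1], M[i, j + 1]   # horizontal neighbors
            else:
                m1, m2 = M[i - 1, j], M[i + 1, j]   # vertical neighbors
            if M[i, j] >= m1 and M[i, j] >= m2:
                out[i, j] = M[i, j]                  # local maximum: keep
    return out                                        # non-maxima stay 0

M = np.array([[0, 0, 0],
              [1, 5, 2],
              [0, 0, 0]], dtype=float)
horiz = np.ones_like(M, dtype=bool)   # assume all gradients are horizontal
suppressed = non_max_suppress(M, horiz)
print(suppressed)   # only the central peak (5) survives
```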
6. An internet of things system based on monitoring printed graphic detection as claimed in claim 3, wherein: the feature extraction further comprises the steps of:
b5.1, extracting color characteristics;
b5.2, extracting texture features;
b5.3, extracting shape characteristics;
and B5.4, SIFT feature extraction.
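As one concrete illustration of step B5.1 (color feature extraction), a normalized per-channel histogram can serve as the color descriptor; the texture, shape, and SIFT features of B5.2-B5.4 would each use their own descriptors. The bin count and normalization here are assumptions, not specified by the claim:

```python
import numpy as np

def color_histogram(rgb, bins=4):
    """B5.1 sketch: per-channel histograms, each normalized to sum to 1."""
    feats = []
    for c in range(3):
        hist, _ = np.histogram(rgb[..., c], bins=bins, range=(0, 256))
        feats.append(hist / hist.sum())
    return np.concatenate(feats)   # length = 3 * bins

rgb = np.zeros((2, 2, 3), dtype=np.uint8)   # all-black 2x2 test image
f = color_histogram(rgb)
print(f.shape)   # (12,) - every pixel falls into the first bin of each channel
```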
7. An internet of things system based on monitoring printed graphic detection as claimed in claim 6, wherein: the SIFT feature extraction algorithm formula is as follows:
DoG(t,x,y)=G(t,x,y)-G(t-1,x,y);
in the formula, DoG is the difference-of-Gaussians pyramid, and G(t, x, y) denotes the image at level t of the Gaussian pyramid, i.e., the pixel value at spatial location (x, y) at scale t.
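The DoG formula can be demonstrated with a minimal pyramid; note this sketch substitutes a small binomial kernel for the scale-dependent Gaussian kernels of real SIFT, and all function names are illustrative:

```python
import numpy as np

def blur(img, kernel):
    """Separable blur: convolve every row, then every column, with a 1-D kernel."""
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)

def gaussian_pyramid(img, levels):
    """G(0..levels-1): progressively blurred versions of the input image."""
    kernel = np.array([0.25, 0.5, 0.25])   # binomial (Gaussian-like) kernel
    pyramid = [img.astype(float)]
    for _ in range(levels - 1):
        pyramid.append(blur(pyramid[-1], kernel))
    return pyramid

def dog(pyramid, t):
    """DoG(t, x, y) = G(t, x, y) - G(t-1, x, y)."""
    return pyramid[t] - pyramid[t - 1]

img = np.zeros((5, 5))
img[2, 2] = 100.0                          # single bright point
G = gaussian_pyramid(img, levels=3)
D = dog(G, 1)
print(D[2, 2])   # blurring spreads the peak, so the DoG is negative at the center
```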
8. An internet of things system based on monitoring printed graphic detection as claimed in claim 3, wherein: the cloud server is used for storing analysis results and sending the analysis results to the display terminal, and the display terminal is composed of a computer terminal and a mobile phone terminal.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that: the processor, when executing the computer program, implements the steps of the internet of things system based on monitoring printed graphic detection as claimed in any one of claims 1 to 8.
10. A computer-readable storage medium having stored thereon a computer program, characterized by: the computer program, when executed by a processor, implements the steps of the internet of things system based on monitoring printed graphic detection as claimed in any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311536966.2A CN117576031A (en) | 2023-11-15 | 2023-11-15 | Thing contact system based on monitor printing picture and text detects |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117576031A true CN117576031A (en) | 2024-02-20 |
Family
ID=89861937
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311536966.2A Pending CN117576031A (en) | 2023-11-15 | 2023-11-15 | Thing contact system based on monitor printing picture and text detects |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117576031A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118396967A (en) * | 2024-05-10 | 2024-07-26 | 山东润声印务有限公司 | Printing anomaly detection and early warning system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110618134A (en) | Steel plate surface quality defect detection and rating system and method | |
CN111062934B (en) | Fabric image defect real-time detection method | |
CN117576031A (en) | Thing contact system based on monitor printing picture and text detects | |
CN111062941B (en) | Point light source fault detection device and method | |
CN113822385B (en) | Coal conveying abnormity monitoring method, device and equipment based on image and storage medium | |
CN107423944A (en) | A kind of portable one-stop detection service system and method | |
CN115302963A (en) | Bar code printing control method, system and medium based on machine vision | |
CN114858806A (en) | Cigarette quality inspection data analysis system and method | |
CN103984967B (en) | A kind of automatic checkout system being applied to Commercial goods labels detection and automatic testing method | |
CN112329575A (en) | Nose print detection method and device based on image quality evaluation | |
CN113071209B (en) | Printing quality monitoring method, system, terminal and storage medium of printing device | |
CN115456984A (en) | High-speed image recognition defect detection system based on two-dimensional code | |
CN118906649A (en) | Printing machine real-time monitoring method based on image processing and related equipment | |
US11645770B2 (en) | System and method for quantifying nozzle occlusion in 3D printing | |
CN112836779B (en) | Printing device of one-dimensional code format | |
CN116580026A (en) | Automatic optical detection method, equipment and storage medium for appearance defects of precision parts | |
CN116704440A (en) | Intelligent comprehensive acquisition and analysis system based on big data | |
CN115965887A (en) | Automatic control system and method applied to container information detection and distribution | |
CN115689994A (en) | Data plate and bar code defect detection method, equipment and storage medium | |
CN112767306A (en) | Printed matter quality detection and receiving method and system | |
CN114373116A (en) | Method, device, equipment and product for screening operation and maintenance site images | |
CN110989422A (en) | Management system and management method for AOI (automated optical inspection) over-inspection parameters based on serial number code spraying | |
US20250104191A1 (en) | Detecting system and detecting method | |
CN116403098B (en) | Bill tampering detection method and system | |
CN116879292B (en) | Quality evaluation method and device for photocatalyst diatom mud board |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||