
CN119169010B - Dynamic production line measurement and control method and system based on visual recognition - Google Patents


Info

Publication number
CN119169010B
Authority
CN
China
Prior art keywords
data
product
calibration
production line
calibration point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411667229.0A
Other languages
Chinese (zh)
Other versions
CN119169010A (en)
Inventor
李霞
吴磊
崔航
王保毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao University of Technology
Original Assignee
Qingdao University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao University of Technology
Priority to CN202411667229.0A
Publication of CN119169010A
Application granted
Publication of CN119169010B

Classifications

    • G PHYSICS; G06 COMPUTING; CALCULATING OR COUNTING
    • G06T Image data processing or generation, in general
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06T 7/10 Segmentation; edge detection
    • G06T 7/13 Edge detection
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/90 Determination of colour characteristics
    • G06V Image or video recognition or understanding
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/70 Arrangements using pattern recognition or machine learning
    • G06V 10/764 Classification, e.g. of video objects
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10116 X-ray image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to the field of visual recognition technology, and in particular to a dynamic production line measurement and control method and system based on visual recognition. The method comprises the following steps: collecting production line product data through visual recognition equipment to obtain production line product image data; performing dynamic calibration based on production line product image data to obtain product image calibration data; performing product status analysis based on product image calibration data to obtain product status data; performing production line adjustment processing based on product status data to obtain production line adjustment data to perform auxiliary operations for production line parameter adjustment. The present invention improves the automation level of the production line, reduces manual intervention, improves the accuracy and consistency of product quality control, and realizes dynamic adjustment and optimization through a closed-loop feedback system.

Description

Dynamic production line measurement and control method and system based on visual recognition
Technical Field
The invention relates to the technical field of visual recognition, and in particular to a dynamic production line measurement and control method and system based on visual recognition.
Background
With the rapid development of intelligent manufacturing, the automation level of production lines continues to rise, and product quality requirements in high-precision manufacturing, electronics manufacturing, automotive parts and similar industries are increasingly strict. Traditional production line control and quality inspection methods rely largely on manual inspection and mechanical sensors; they offer limited detection accuracy and low efficiency, are susceptible to human error and environmental variation, and cannot meet modern industry's demands for high precision, real-time monitoring and full automation.
The introduction of visual recognition technology offers new possibilities for intelligent, automated production line management. Inspection systems based on computer vision and image processing enable non-contact, high-precision inspection of products: they can measure product form and dimensions, identify surface defects, and examine internal structure. Through dynamic calibration and automatic adjustment, a visual recognition system can also adapt to complex environmental changes on the production line, such as varying illumination and product position deviation, markedly improving detection reliability and production efficiency. In practice, however, factors such as illumination, viewing angle and equipment vibration readily interfere with visual inspection and degrade its accuracy, and when products have complex forms or are deformed or occluded, traditional visual inspection methods struggle to deliver stable, high-precision results.
Disclosure of Invention
The invention provides a dynamic production line measurement and control method based on visual recognition to solve at least one of the technical problems above.
The application provides a dynamic production line measurement and control method based on visual recognition, which comprises the following steps:
S1, collecting production line product data through visual recognition equipment to obtain production line product image data;
S2, performing dynamic calibration according to the production line product image data to obtain product image calibration data;
S3, performing product state analysis according to the product image calibration data to obtain product state data;
S4, performing production line adjustment processing according to the product state data to obtain production line adjustment data, so as to perform auxiliary operations for production line parameter adjustment.
The visual recognition belongs to non-contact detection, reduces physical interference to products, and is suitable for high-speed assembly line operation. By dynamically calibrating the product image, the deviation caused by mechanical equipment errors or production line jitter can be automatically detected and corrected, and the production precision is ensured. The dynamic calibration can be adjusted in real time according to environmental factors such as illumination, angle change and the like, so that the visual recognition system can keep high precision in a complex environment. The product state is analyzed by an image analysis technology, so that whether the product meets the production standard or not can be rapidly detected, and the problems of surface defects, shape deviation and the like are identified. By automatically adjusting parameters of the production line, the system can quickly respond to product quality problems or equipment errors, and the generation of unqualified products is reduced. By means of real-time adjustment and optimization, the time for stopping and adjusting the production line is reduced, and the high efficiency and continuity of production are ensured.
Preferably, step S1 is specifically:
Scanning the production line through visual identification equipment to obtain production line image data;
Obtaining standard product image data;
And carrying out product vision area extraction according to the standard product image data and the production line image data to obtain the production line product image data.
According to the invention, by combining the standard product image data and the production line image data, the visual area of the product can be accurately extracted, the background interference is reduced, and the recognition accuracy is improved. The visual identification equipment is used for scanning operation, so that products do not need to be contacted, the interference on the physical shape of the products is reduced, and the visual identification equipment is suitable for high-speed assembly lines and products which are easy to damage. The standard product image is used as a comparison object, so that a non-product area and a product area on a production line can be effectively distinguished, false detection and missing detection phenomena are reduced, and the accuracy of visual identification is improved. By combining the standard product image and the production line image data, the visual area where the product is located can be accurately extracted, irrelevant parts such as the background and mechanical parts are filtered, and the accuracy of subsequent detection and analysis is improved. By extracting accurate product areas, the requirement for comprehensively analyzing the whole image is reduced, the processing time is shortened, and the visual processing efficiency is improved.
Preferably, step S2 is specifically:
performing calibration mode selection according to the product image data of the production line to obtain calibration mode data;
performing calibration point detection according to the product image data of the production line to obtain calibration point data;
carrying out space calibration according to the calibration mode data, the calibration point data and the product image data of the production line to obtain product space calibration data;
performing color calibration according to the calibration mode data and the product space calibration data to obtain product color calibration data;
performing dynamic environment compensation according to the calibration mode data and the product color calibration data to obtain product environment factor compensation data;
and carrying out attitude calibration according to the calibration mode data and the product environment factor compensation data to obtain product image calibration data.
According to the invention, the proper calibration flow is automatically selected according to the type of the product, the production environment and the process requirement, so that the optimal matching of the calibration scheme is ensured. Different calibration modes are selected under different products or environments, so that unnecessary calibration operation can be avoided, and the efficiency is improved. Accurate calibration point data helps to reduce the accumulation of spatial errors that may occur in subsequent steps, thereby improving the accuracy of the overall calibration. Through the calibration point data and the calibration mode, the accurate space calibration of the production line products under different angles, positions, distances and the like is ensured, and the deviation caused by mechanical errors of equipment or product movement is avoided. The color calibration ensures that the color data of the product is consistent under different illumination and reflection conditions, and is particularly suitable for color-sensitive production scenes, such as printing, spinning and other industries. The dynamic environment compensation can compensate environmental factors (such as illumination change, temperature fluctuation, vibration and the like) in the production line in real time, so that the visual recognition system can adapt to the environmental change and keep high precision. The attitude calibration can accurately detect and correct attitude deviations (such as rotation, inclination and the like) of products in a production line, and ensure that each product is detected, processed or assembled in a standard attitude.
Preferably, the calibration point detection is specifically:
sub-pixel precision calibration point detection is carried out according to product image data of a production line, and first calibration point data are obtained;
performing geometric invariance characteristic calibration point detection according to the product image data of the production line to obtain second calibration point data;
performing self-calibration point detection according to the product image data of the production line to obtain third calibration point data;
And performing the calibration point matching according to the first calibration point data, the second calibration point data and the third calibration point data to obtain calibration point matching data.
The sub-pixel precision calibration point detection method effectively reduces errors caused by the limitation of equipment resolution, improves the precision of the calibration point position, and ensures that the space calibration process is more accurate. Under the complex environment that the geometric characteristics are not easy to change, the geometric invariance calibration point detection can effectively ensure the accurate identification of the calibration point and ensure that the product can be correctly positioned even under non-ideal conditions. The self-calibration technology can automatically adjust the calibration point detection algorithm according to different product characteristics and environmental changes, adapt to the dynamic changes of the production line, and reduce manual intervention and additional settings. The self-calibration technology reduces dependence on external factors, can effectively cope with environmental interference such as illumination, background sundries, vibration and the like, and ensures the stability of the detection of the calibration point. By matching the calibration point data from different detection methods, the limitations of each method can be eliminated, the advantages are taken, and the precision of the calibration point is improved through fusion and matching. Through multi-source calibration point data matching, correction and compensation of detection errors can be realized, and higher reliability and accuracy of the calibration point matching data are ensured.
Preferably, the subpixel accuracy calibration point detection is specifically:
Performing rough positioning processing on the calibration point area according to the product image data of the production line to obtain rough positioning data of the calibration point area;
carrying out sub-pixel level edge detection according to the rough positioning data of the calibration point region to obtain sub-pixel level edge detection data;
calculating a sub-pixel centroid according to the sub-pixel level edge detection data to obtain sub-pixel centroid data;
and carrying out sub-pixel shape fitting according to the sub-pixel centroid data to obtain first calibration point data.
According to the invention, the calculation burden of an image processing algorithm is reduced by coarse positioning, and only the region of the standard point is subjected to further refined processing, so that the efficiency of the system is improved. The edge detection at the sub-pixel level can be accurate to the edge contour finer than the pixel resolution, and the fine edge point position is calculated through an interpolation algorithm, so that the method is suitable for scenes with high requirements on precision. The centroid calculation further refines the position of the calibration point, so that the detection precision can reach the sub-pixel level, and the method is suitable for application scenes needing high-precision positioning, such as precision machining or detection equipment. Through shape fitting (such as parabolic fitting or spline curve fitting) at the sub-pixel level, the shape of the calibration point can be accurately fitted within the minimum error range, so that the accuracy of the position of the calibration point is ensured, and the real geometric shape of the calibration point can be reserved.
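As a concrete illustration of this detection chain (coarse localization, sub-pixel refinement, centroid computation), the following Python/OpenCV sketch shows one plausible implementation; the function name, thresholds, and the assumption that the calibration point region is the largest bright blob are ours, not the patent's.

```python
# Hypothetical sketch of sub-pixel calibration point detection:
# coarse localization -> sub-pixel refinement -> sub-pixel centroid.
import cv2
import numpy as np

def detect_subpixel_point(gray: np.ndarray) -> tuple:
    # Coarse localization: Otsu threshold, take the largest blob as the
    # calibration point region (a simplifying assumption for this sketch).
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    roi = gray[y:y + h, x:x + w]

    # Pixel-level feature points in the region, refined to sub-pixel
    # accuracy by interpolating local gray-value gradients.
    pts = cv2.goodFeaturesToTrack(roi, maxCorners=50, qualityLevel=0.05, minDistance=3)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    refined = cv2.cornerSubPix(roi, np.float32(pts), (5, 5), (-1, -1), criteria)

    # Sub-pixel centroid: geometric center of the refined points,
    # mapped back to full-image coordinates.
    cx, cy = refined.reshape(-1, 2).mean(axis=0)
    return (x + float(cx), y + float(cy))
```

A full implementation would add the final shape-fitting step (e.g. parabolic or spline fitting) that the patent describes.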
Preferably, the geometric invariance feature calibration point detection is specifically:
extracting geometric invariance characteristics according to the product image data of the production line to obtain geometric invariance characteristic data;
performing geometric invariance feature point matching on the geometric invariance feature data to obtain candidate calibration point matching pair data;
according to the candidate calibration point matching pair data, carrying out random sampling consistency abnormal point rejection to obtain effective calibration point data;
Carrying out affine transformation parameter estimation according to the effective calibration point data to obtain affine transformation parameter data;
and (5) performing calibration point position recalculation according to the affine transformation parameter data to obtain second calibration point data.
The invention can extract stable local characteristic points, and can reliably extract important characteristic points even under the conditions of illumination change, complex background or noise interference on a production line. Based on the geometric invariance characteristics, the feature points in the standard template and the feature points actually detected can be quickly matched by using Euclidean distance, hamming distance and other algorithms, so that the matching precision of the standard points is ensured. And verifying the feature point matching result through a random sampling consistency algorithm, removing the error matching points which do not accord with geometric consistency, and ensuring the accuracy of the calibration points. Through affine transformation parameter estimation (such as rotation angle, scaling and translation vector), geometric transformation of a marked point in a product image can be accurately recovered, and the marked point can still be accurately positioned even when the product is spatially changed. The actual space position of the calibration point can be accurately calculated through affine transformation parameters, displacement, rotation or scaling errors caused by geometric transformation are compensated, and the position accuracy of the calibration point is ensured. No matter what kind of transformation occurs on the production line, the system recalculates the position of the calibration point through the step, and ensures that consistent and accurate calibration point data can be obtained for each detection.
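The branch just described maps naturally onto standard tooling: invariant feature extraction, descriptor matching, RANSAC outlier rejection, affine estimation, and position recalculation. A minimal sketch under that reading follows; ORB features and the specific OpenCV calls are our substitution for the unnamed feature detector, and all names are illustrative.

```python
# Sketch: geometric-invariance calibration point pipeline.
import cv2
import numpy as np

def recompute_calibration_points(template, live, template_points):
    # Invariant local features (ORB is rotation- and scale-tolerant).
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(template, None)
    kp2, des2 = orb.detectAndCompute(live, None)

    # Candidate matching pairs via Hamming distance on binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC discards pairs inconsistent with one rotation+scale+translation;
    # the surviving inliers yield the affine parameter estimate.
    affine, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC,
                                            ransacReprojThreshold=3.0)

    # Recalculate calibration point positions under the estimated transform.
    pts = np.float32(template_points).reshape(-1, 1, 2)
    return cv2.transform(pts, affine).reshape(-1, 2)
```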
Preferably, wherein the self-calibrating calibration point detection is specifically:
edge detection and contour extraction are carried out according to the product image data of the production line, so that main contour data of the product are obtained;
Performing corner detection according to main contour data of the product to obtain corner data of the product;
Carrying out repeated characteristic analysis according to the product corner data to obtain product structural characteristic data;
carrying out symmetry detection and candidate calibration point extraction according to the product structure characteristic data to obtain candidate calibration point data;
performing geometric non-deformation analysis according to the candidate calibration point data to obtain candidate calibration point geometric non-deformation data;
And carrying out self-calibration feature point verification according to the geometrical non-deformation data of the candidate calibration points to obtain third calibration point data.
The invention can cope with production environments with different illumination conditions, complex background and noise interference, and ensures the precision and stability of contour extraction. By analyzing the geometric characteristics and local structural repeatability of the product, portions of the product having the same or similar characteristics can be detected, which are generally stable and repeatable landmark candidate regions. Since the repeatability features have higher stability on the product, the stability and robustness of the calibration points in the dynamic change of the production line can be improved by selecting the feature areas as the calibration points. The symmetry of the product is a stable and reliable geometric feature, and the accurate position of the standard point can be determined through symmetry detection, so that the method is particularly suitable for products with symmetrical structures. The geometric invariance analysis ensures that the calibration point can be kept unchanged under geometric transformation such as rotation, scaling, translation and the like, and is suitable for changes caused by product movement, equipment shake and the like on a production line. Through self-calibration feature point verification, the system can automatically check and confirm the accuracy of the calibration points, reject the calibration points which do not meet the standard or are misjudged, and ensure that the output calibration points have high accuracy.
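For the self-calibrating branch, a compact sketch of contour extraction, corner detection, and a symmetry screen is given below; the vertical-axis mirror test and all thresholds are simplifying assumptions, since the patent leaves the symmetry detector unspecified.

```python
# Sketch: self-calibrating candidate calibration point extraction.
import cv2
import numpy as np

def self_calibrating_candidates(gray: np.ndarray) -> np.ndarray:
    # Main product contour from edge detection.
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    main = max(contours, key=cv2.contourArea)

    # Corner points via polygonal approximation of the main contour.
    poly = cv2.approxPolyDP(main, 0.01 * cv2.arcLength(main, True), True)
    corners = poly.reshape(-1, 2).astype(np.float32)

    # Symmetry screen: keep a corner only if its mirror about the vertical
    # axis through the contour centroid is also (nearly) a detected corner.
    m = cv2.moments(main)
    cx = m["m10"] / max(m["m00"], 1e-9)
    keep = []
    for p in corners:
        mirrored = np.array([2 * cx - p[0], p[1]], dtype=np.float32)
        if np.min(np.linalg.norm(corners - mirrored, axis=1)) < 5.0:
            keep.append(p)
    return np.array(keep, dtype=np.float32)
```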
Preferably, step S3 is specifically:
carrying out product morphological analysis according to the product image calibration data to obtain product morphological analysis data;
carrying out surface quality detection according to the product image calibration data to obtain surface quality detection data;
generating an X-ray image through an X-ray device to obtain X-ray image data of a product;
performing internal structure analysis according to the X-ray image data of the product to obtain internal structure analysis data;
performing size precision measurement on the product image calibration data to obtain size precision measurement data;
And integrating the product morphological analysis data, the surface quality detection data, the internal structure analysis data and the dimensional accuracy measurement data to obtain product state data.
According to the invention, the appearance characteristics of the product can be accurately identified by carrying out morphological analysis on the calibration data of the product image, so that the appearance is ensured to accord with the design standard, and the method is suitable for production scenes requiring high-precision appearance control. The automated surface quality detection can rapidly identify defects, scratches, pits or pollution on the surface of the product, and ensure high quality and consistency of the appearance of the product. The hidden defects such as cracks, holes, bubbles, foreign matters and the like in the product can be found by the X-rays, and the internal problems cannot be found in the conventional visual detection, so that the comprehensiveness of product quality control is greatly improved. Through the analysis to the X-ray image, whether the internal geometry of the product meets the design requirement can be accurately detected, and the correct assembly and the structural integrity of the internal parts of the product are ensured. And the calibrated image is used for measuring the size, so that the size of the product meets the design specification, and the method is particularly suitable for the industrial manufacturing field with high requirement on the size precision. By integrating all detection data together, the quality condition of each product can be comprehensively mastered, any potential problems can be found and treated before entering the downstream flow, and accumulation and diffusion of quality problems are reduced.
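Using the embodiment's example part (a 10 mm x 5 mm cylinder with ±0.1 mm tolerance and a 0.05 mm scratch allowance), a minimal sketch of integrating the four analyses into one product state record might look as follows; the field names and record structure are assumptions.

```python
# Hypothetical product state record integrating S3's four analyses.
from dataclasses import dataclass

@dataclass
class ProductState:
    height_mm: float
    diameter_mm: float
    max_scratch_mm: float
    internal_defect: bool          # from X-ray internal structure analysis

def evaluate_state(s: ProductState) -> dict:
    # Tolerances taken from the embodiment's example part.
    dims_ok = abs(s.height_mm - 10.0) <= 0.1 and abs(s.diameter_mm - 5.0) <= 0.1
    surface_ok = s.max_scratch_mm <= 0.05
    return {
        "morphology": dims_ok,
        "surface": surface_ok,
        "internal": not s.internal_defect,
        "pass": dims_ok and surface_ok and not s.internal_defect,
    }
```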
Preferably, step S4 is specifically:
Carrying out product state parameter conversion according to the product state data to obtain product state parameter data;
matching according to the product state parameter data and a preset product quality inspection index library to obtain product state quality inspection data;
and performing production line adjustment processing according to the product state quality inspection data and a preset production line expert knowledge engine to obtain production line adjustment data, so as to perform auxiliary operations for production line parameter adjustment.
According to the invention, the system can simplify the processing of complex multidimensional data by converting the product state data into specific product state parameters, extract key quality parameters (such as morphology, size, internal structure and the like), ensure that various state information of the product is converted into standardized parameter values through state parameter conversion, facilitate consistency comparison with quality inspection indexes and improve the processing efficiency of the data. By matching with a preset quality inspection index library, the system can automatically judge whether the product meets the quality standard, realize full-automatic quality inspection and evaluation, and reduce errors and subjectivity in manual quality inspection. The expert knowledge engine can rapidly judge the state of the production line according to the change of quality inspection data, timely adjust the parameters of the production line, prevent the expansion of quality problems and reduce the occurrence of unqualified products. The expert knowledge engine can gradually optimize self decision logic through accumulation and learning of historical data, so that the accuracy and efficiency of production line adjustment are improved, and the system is more intelligent and efficient in long-time operation.
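The expert knowledge engine itself is not specified; as a stand-in, the rule-based sketch below maps failed quality-inspection categories (reusing the state record from the earlier sketch) to the adjustment actions named in the embodiments, such as compensating cutter wear. A production engine would also learn from historical data, as the text notes.

```python
# Rule-based stand-in for the production line expert knowledge engine.
def adjust_line(state: dict) -> list:
    actions = []
    if not state["morphology"]:
        # Embodiment: undersized parts traced to cutter wear; the system
        # compensates by adjusting the cutting equipment's tool parameters.
        actions.append("compensate cutting-tool parameters for wear")
    if not state["surface"]:
        actions.append("reduce line speed and inspect product handling")
    if not state["internal"]:
        actions.append("flag upstream forming process for review")
    return actions or ["no adjustment required"]
```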
Preferably, the present application also provides a dynamic production line measurement and control system based on visual recognition, for executing the dynamic production line measurement and control method based on visual recognition as described above; the dynamic production line measurement and control system based on visual recognition comprises:
The production line product data acquisition module is used for acquiring the production line product data through the visual identification equipment to obtain the production line product image data;
The product image calibration module is used for carrying out dynamic calibration according to the product image data of the production line to obtain product image calibration data;
the product state analysis module is used for carrying out product state analysis according to the product image calibration data to obtain product state data;
and the production line adjustment processing module is used for performing production line adjustment processing according to the product state data to obtain production line adjustment data, so as to perform auxiliary operations for production line parameter adjustment.
The invention has the beneficial effects that from the acquisition of product data to the adjustment of a production line, a closed-loop automatic data stream is formed, errors and delays caused by human intervention are avoided, and the production efficiency is improved. As a non-contact detection means, the visual identification can not cause physical interference to products, and is suitable for industries requiring high-precision production such as precision manufacturing, electronic equipment and the like. Through the calibration of a plurality of dimensions such as space, color, environment and gesture, the detection result of the product is ensured to still keep high accuracy even under the condition of different illumination, visual angles and position changes. Dynamic environment compensation enables the system to perform self-adaptive adjustment according to real-time environment changes (such as illumination, temperature, vibration and the like), and ensures that production line detection still keeps running stably in a complex environment. The geometric shape of the product can be accurately identified through morphological analysis, the appearance of the product is ensured to meet the design standard, and appearance defects caused by die deviation or production errors can be timely found. The surface quality detection is used for rapidly detecting surface flaws of products through the visual identification equipment, and the X-ray image provides nondestructive detection of internal structures, so that the surface and internal flaws can be comprehensively detected, and the omnibearing quality control of the products is ensured. By analyzing and quality inspection matching of real-time product data, the system can perform instant feedback according to different product states, adjust production line parameters, quickly respond to quality problems and prevent the quality problems from being spread to the whole production batch. The adjustment of the production line is based on historical data and professional experience by using a preset expert system, so that the fine and intelligent production control can be realized. The expert system can automatically adjust common problems and optimize decision logic through learning. The state data of each product can be fed back to the production line for adjustment and optimization, so that closed-loop control is formed, and quality fluctuation in production is reduced.
Drawings
Other features, objects and advantages of the application will become more apparent upon reading of the detailed description of a non-limiting implementation, made with reference to the accompanying drawings in which:
FIG. 1 is a flow chart showing the steps of a dynamic production line measurement and control method based on visual recognition according to an embodiment;
FIG. 2 shows a flow chart of the steps of a method of data acquisition for a production line of an embodiment;
FIG. 3 shows a flow chart of the steps of a product image calibration method of an embodiment;
FIG. 4 is a flow chart illustrating steps of a method of product state analysis according to one embodiment;
FIG. 5 is a flow chart illustrating steps of a line adjustment processing method according to an embodiment.
Detailed Description
The technical solutions of the present patent are described below clearly and completely with reference to the accompanying drawings. It is evident that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
Furthermore, the drawings are merely schematic illustrations of the present invention and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and repeated description of them is omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities: the functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different network and/or processor devices and/or microcontroller devices.
It will be understood that, although the terms "first," "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Products on the production line are scanned in real time by a visual recognition device to generate high-definition production line product image data. The image data include information such as the shape, surface characteristics, and spatial position of each product on the line. In one embodiment, the production line produces precision mechanical parts; each part is nominally a cylinder 10 mm high and 5 mm in diameter. The system captures an appearance image of a part through the visual recognition device, obtains preliminary product form data, and measures an apparent height of 9.8 mm and an apparent diameter of 5.1 mm.
The system performs dynamic calibration according to the product image data to correct the position of the product on the production line, the color difference caused by illumination conditions, the posture change and the like. After calibration, the product image data is processed to ensure the subsequent detection accuracy. An appropriate calibration mode is selected according to the different product types. And (3) carrying out sub-pixel precision calibration point detection on the image, and ensuring the positioning precision. The influence of ambient illumination on the image is eliminated through color calibration, and the system is dynamically adjusted to adapt to the environmental change. In image acquisition, the detected part shape appears slightly distorted due to the angle of the visual recognition device. Through dynamic calibration, the system adjusts the spatial position of the object in the image, corrects the distorted cylinder, and obtains accurate data with the height of 9.9 mm and the diameter of 5.0 mm.
The system performs a state analysis of the product from multiple dimensions: analyzing the geometry of the product to determine whether it meets design requirements; detecting surface imperfections such as scratches and depressions; performing nondestructive X-ray inspection of the internal structure to ensure the interior is free of defects (such as bubbles and cracks); and measuring the product dimensions accurately against the design specifications. By morphological analysis, the system finds the part to be 9.9 mm high and 5.0 mm in diameter, within the design tolerance of ±0.1 mm. Surface inspection shows a slight scratch of 0.05 mm that does not affect performance and meets the quality standard. The X-ray image shows the internal structure is sound, with no bubbles or cracks. Dimensional measurement confirms that the overall dimensions of the part are satisfactory.
According to the product state data, the system converts the detection results into product state parameters and matches the product state parameters with a preset product quality inspection index library. And the system automatically adjusts parameters of the production line through an expert knowledge engine according to the matching result. Morphological analysis, surface quality, internal structure and dimensional accuracy data are converted into quantifiable parameters. Matching with a preset quality inspection standard, and judging whether the product is qualified or not. If a deviation is detected, the system adjusts production line parameters such as equipment precision, temperature, pressure and the like through an expert knowledge engine to ensure that subsequent products meet quality requirements. According to the quality inspection standard, the size, surface quality and internal structure of the product meet the standards, so the product is judged to be qualified. If the dimensional deviation of the part is outside the tolerance range (e.g., a height of 9.6 mm is detected), the system identifies the cause of the problem based on expert knowledge engines, and finds that it is likely due to cutter wear of the cutting apparatus. The system automatically adjusts the cutter parameters of the cutting equipment, compensates for the deviation and ensures the dimensional accuracy of subsequent parts.
In another embodiment, the production line manufactures small metal connectors for electronic devices, specified at a height of 2.0 mm, a width of 1.5 mm, a thickness of 0.5 mm, and a tolerance of ±0.05 mm; 5000 such connectors are produced daily. The visual recognition device captures image data of each connector. Preliminary inspection shows slight dimensional variation in some connectors, for example 1.98 mm (height), 1.52 mm (width), 0.48 mm (thickness). The system performs dynamic calibration and corrects detection errors caused by device angle or illumination changes. The calibrated connector dimensions are confirmed as 2.01 mm (height), 1.50 mm (width), 0.50 mm (thickness), within the tolerance range. The geometry of the connector is confirmed to meet the design requirements, with the detected form deviation within an acceptable range. The visual recognition device finds micro-scratches on some connector surfaces; analysis puts the scratch depth at 0.02 mm, which meets the surface quality standard. X-ray imaging shows no defects such as bubbles or cracks, and the internal structure is complete. The final product dimensions are confirmed to be within tolerance: some products deviate by +0.02 mm in width and -0.01 mm in thickness, still meeting the precision requirement.
The system converts the detection results into product state parameters and matches them against the quality inspection standards. If a small number of connectors are detected with widths outside the tolerance range (e.g., up to 1.56 mm), the system can identify equipment wear or material shrinkage problems and, through the expert system, automatically adjust the cutting speed and pressure of the equipment. The adjusted production line continues to run, and the width of the next batch stabilizes at 1.50 mm, ensuring that all connector dimensions meet the requirements.
Referring to FIGS. 1 to 5, the application provides a dynamic production line measurement and control method based on visual recognition, which comprises the following steps:
S1, acquiring product data of a production line through visual identification equipment to obtain product image data of the production line;
Specifically, a plurality of visual recognition devices, such as cameras or image sensors, are arranged on the production line, ensuring that the devices are able to cover critical parts of the production line. When a product passes through the production line, the visual recognition equipment starts to work and captures image data of the product, which contains information such as the appearance, color and size of the product. The image is then segmented, the product is separated from the background, and the region of interest is extracted.
S2, carrying out dynamic calibration according to the product image data of the production line to obtain product image calibration data;
Specifically, main characteristic information of the product, such as edges, shapes, textures and the like, is extracted from the preprocessed image, so that the state of the product can be accurately described. These features are compared with a pre-set standard template or reference model. The reference model is typically based on known standard product information, covering the ideal morphology and parameters of the product. And according to the comparison result, the identification parameters of the system are adjusted in real time so as to cope with the slight change of the product in the production process. For example, line speed changes, lighting conditions changes, etc., which affect the quality of the image, it is necessary to ensure consistency by adjusting the calibration.
S3, carrying out product state analysis according to the product image calibration data to obtain product state data;
Specifically, based on the image calibration data of the product, the system constructs a state analysis model which contains a series of key indexes such as the size, the shape, the color and the like of the product. The extracted product features are compared with the analytical model. For example, detecting whether the product is defective (e.g., broken, deformed, etc.), or judging whether the product is assembled correctly (e.g., whether the parts are missing or misplaced). And classifying the products into a qualified state or a disqualified state according to the comparison result, and marking specific disqualification reasons for subsequent processing.
S4, performing production line adjustment processing according to the product state data to obtain production line adjustment data, so as to perform auxiliary operations for production line parameter adjustment.
Specifically, according to the product state data, the system analyzes the current running state of the production line and judges whether the production parameters need to be adjusted. For example, if the number of defective products increases, this may be caused by problems such as wear of equipment, improper parameter setting, and the like. The system gives specific production line adjustment suggestions based on the product state analysis results. These include adjusting the line speed, adjusting the positional accuracy of the product, or modifying the parameter settings of the production facility. In some cases, the system automatically adjusts real-time parameters, such as conveyor belt speed, equipment operating accuracy, or visual recognition equipment recognition sensitivity, according to adjustment recommendations to optimize production line operating efficiency.
Preferably, step S1 is specifically:
Scanning the production line through visual identification equipment to obtain production line image data;
In particular, a plurality of visual recognition devices (such as high resolution cameras or 3D scanners) are installed and configured to ensure that the devices are able to cover critical areas of the production line. It is necessary to adjust the angle and focal length of the device so that it can capture a complete image of the product. As the product passes on the production line, the visual recognition device continuously photographs the product, generating production line image data, which typically includes the entire production environment and the background surrounding the product. As the product passes, the camera continuously captures multiple images per second (e.g., 10 frames per second), creating a series of pictures containing the product.
Obtaining standard product image data;
Specifically, data acquisition of standard product images is performed from a preset database, wherein the standard product images are generated by shooting the standard products at multiple angles so as to capture complete characteristic information. The standard image obtained is a product image with higher resolution and clean background.
And carrying out product vision area extraction according to the standard product image data and the production line image data to obtain the production line product image data.
Specifically, real-time image data on a production line is compared with standard product image data. The boundary and shape of the product are found out through a feature matching technology, and the background part irrelevant to the product in the image is removed, so that only the region conforming to the standard product is ensured to be extracted. And extracting the region of the product from the production line image by utilizing the information such as shape characteristics, color distribution and the like, separating the product from the background by an image segmentation technology, and extracting the outline and the appearance of the product. And carrying out refined comparison on the extracted product region and a standard product, and confirming whether the extracted region is accurate. If there is an error in position or size, calibration is performed so that the product image data matches as much as possible with the standard image.
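One plausible reading of this extraction step is normalized template matching against the standard product image; the sketch below assumes grayscale inputs and an invented 0.7 confidence threshold.

```python
# Sketch: product vision area extraction by template matching.
import cv2

def extract_product_region(line_img, std_img, min_score=0.7):
    # Both images assumed grayscale; std_img is the standard product image.
    res = cv2.matchTemplate(line_img, std_img, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(res)
    if score < min_score:
        return None                      # no product found in this frame
    h, w = std_img.shape[:2]
    x, y = top_left
    return line_img[y:y + h, x:x + w]    # product ROI, background removed
```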
Preferably, step S2 is specifically:
performing calibration mode selection according to the product image data of the production line to obtain calibration mode data;
Specifically, based on the characteristics of the product (e.g., shape, material, color, etc.) and the operating conditions of the production line, the system automatically selects the corresponding calibration mode. Different calibration patterns may be optimized for different dimensions (e.g., space, color, or pose) of the product. According to the image data of the production line products, the system analyzes the current environment, such as light conditions, product moving speed and the like, and selects a corresponding calibration mode. The calibration mode may be classified as a standard mode, an enhanced accuracy mode, or a fast mode depending on the requirements of the production line. The system first determines the most appropriate calibration mode by analyzing the quality of the image (e.g., sharpness, contrast) and line speed. If the product is passing at high speed, the system will select the fast mode, and if the ambient illumination is complex, the enhanced accuracy mode is selected.
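That selection logic can be read as a simple decision function over image quality and line speed; in the sketch below, the focus measure (variance of the Laplacian) and every threshold are assumptions.

```python
# Sketch: calibration mode selection from image quality and line speed.
import cv2

def select_calibration_mode(gray, line_speed_mps):
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()   # focus measure
    contrast = float(gray.std())
    if line_speed_mps > 1.0:                  # products passing at high speed
        return "fast"
    if sharpness < 100.0 or contrast < 30.0:  # complex/poor illumination
        return "enhanced_accuracy"
    return "standard"
```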
Performing calibration point detection according to the product image data of the production line to obtain calibration point data;
Specifically, some key feature points are extracted from the image of the product, and the feature points are one of corners, mark points or specific contours of the shape of the product. These calibration points may serve as a reference for calibration. The system accurately positions the calibration points through an image processing algorithm, and ensures the accuracy of the calibration points, such as edge detection, corner detection or other feature extraction technologies.
Carrying out space calibration according to the calibration mode data, the calibration point data and the product image data of the production line to obtain product space calibration data;
Specifically, according to the data of the calibration points and the calibration mode, the system performs geometric transformation on the image so as to correct the relative position deviation between the camera and the product, compensate the error of the angle or the installation position of the camera and ensure that the product in the image keeps the correct space proportion. In some cases, the system needs to convert two-dimensional information in the image into three-dimensional spatial information, and determine the position and posture of the product in real space by indexing point data.
Performing color calibration according to the calibration mode data and the product space calibration data to obtain product color calibration data;
specifically, from the spatially calibrated image data, the system first converts the color space of the image to make it more suitable for color analysis (e.g., converts the RGB color space to HSV or Lab color space). The system uses a color model of the standard product (including hue, brightness, saturation, etc. of the color) to compare with the image of the current product. By comparison, the system corrects color deviations in the image due to light, reflection, and other environmental factors.
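A minimal sketch of that correction, assuming the standard color model is stored as a mean Lab triple: convert to Lab, shift the image mean toward the reference, and convert back. The global per-channel offset is a simplification of the comparison the text describes.

```python
# Sketch: color calibration against a standard product color model.
import cv2
import numpy as np

def color_calibrate(bgr, ref_mean_lab):
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    offset = ref_mean_lab - lab.reshape(-1, 3).mean(axis=0)  # lighting bias
    lab = np.clip(lab + offset, 0, 255).astype(np.uint8)
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```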
Performing dynamic environment compensation according to the calibration mode data and the product color calibration data to obtain product environment factor compensation data;
Specifically, the system monitors factors such as illumination intensity, color change, background noise and the like in the production line environment in real time. These environmental factors can cause changes in the color and characteristics of the product in the image. According to the environmental data acquired in real time, the system dynamically compensates the image, including adjusting brightness, contrast, or applying a filtering technique to compensate for environmental effects on the image.
And carrying out attitude calibration according to the calibration mode data and the product environment factor compensation data to obtain product image calibration data.
In particular, the system uses previously extracted calibration points, spatial calibration data, color calibration data to detect the pose (including rotation, tilt, displacement, etc.) of the product in the image. The system determines whether correction is needed by comparing the standard pose model to the pose in the current image. If the deviation of the product posture from the standard posture (such as different product inclination or rotation angles) is detected, the system can perform rotation, translation or scaling correction on the image, so that the product image is ensured to be consistent with the standard posture.
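For the pose step, one simple realization is to estimate the product's in-plane tilt from its contour and rotate the image back to the standard pose; this sketch stands in for the pose-model comparison described above and handles rotation only, not translation or scaling.

```python
# Sketch: attitude (pose) calibration by undoing in-plane rotation.
import cv2

def correct_pose(gray):
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    (cx, cy), _, angle = cv2.minAreaRect(max(contours, key=cv2.contourArea))
    rot = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)  # rotate back upright
    h, w = gray.shape
    return cv2.warpAffine(gray, rot, (w, h))
```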
Preferably, the calibration point detection is specifically:
sub-pixel precision calibration point detection is carried out according to product image data of a production line, and first calibration point data are obtained;
Specifically, edge detection techniques (e.g., gradient or differential) are first used to locate the critical feature edges of the product. The gradient or difference method employed identifies edges by detecting significant changes in pixel intensity in the image. To improve the accuracy, the accuracy of the calibration points is improved to the sub-pixel level between pixels using a sub-pixel level interpolation method. The system locates more accurate coordinates of the calibration point at the edge point by analyzing the change trend of the pixel gray value. Geometrical features (such as corner points or places with significant curvature) with special significance on the edges are extracted as first-batch calibration point data.
Performing geometric invariance characteristic calibration point detection according to the product image data of the production line to obtain second calibration point data;
In particular, geometric invariance features, such as local features of corner points, edge points or areas (such as label edges of bottles, unique shapes of bottle caps, etc.), are extracted from the images, and remain stable under geometric transformations such as rotation, scaling, etc. Invariant feature points are detected by template matching or specific feature descriptors (e.g., edge-based shape features or local textures) using invariant feature detection methods. The detected invariance features are compared with the invariance features in the standard product image, and the feature points are determined as second standard points.
Performing self-calibration point detection according to the product image data of the production line to obtain third calibration point data;
Specifically, the system automatically detects additional calibration points through self-calibration techniques. These points are generated by analyzing the symmetry, scale or other internal geometric relationships of the image. Self-calibration is typically used to correct image distortion and to improve the accuracy of the calibration points. The system automatically generates a series of third calibration points based on symmetry or repeatability characteristics of the product (e.g., symmetry axis of the body, equally spaced markings of the body, etc.). By analyzing the geometric distribution of these calibration points, the system self-calibrates them, ensuring that the positions of these points are consistent with the standard model.
And performing the calibration point matching according to the first calibration point data, the second calibration point data and the third calibration point data to obtain calibration point matching data.
Specifically, the data of the calibration points of the first batch of sub-pixel precision, the calibration points of the second batch of geometric invariance features and the calibration points of the third batch of self-calibration are integrated. Each type of calibration point provides different calibration information that the system combines to improve calibration accuracy. These calibration points are matched with the calibration points of the standard products by a feature matching technology, and the difference between the calibration points and the calibration points is calculated. Based on the differences, the system fine-tunes and matches the calibration points, ensuring that they are precisely aligned with the calibration points in the standard template. If errors exist in the matching process, the system can correct errors according to the characteristics of the three types of calibration points, and accurate calibration point matching data is output.
Preferably, the subpixel accuracy calibration point detection is specifically:
Performing rough positioning processing on the calibration point area according to the product image data of the production line to obtain rough positioning data of the calibration point area;
Specifically, the production line product image is first subjected to basic processing, such as graying, denoising and contrast enhancement, so as to improve the accuracy of subsequent calibration point detection. The main characteristic region of the product is found in the image through the technologies of threshold segmentation, morphological operation or edge detection and the like, and is used for identifying the rough calibration point region, but the accuracy is rough. By analyzing geometric features in the image, the rough location of the calibration point is locked. For example, a substantially central location is identified in the product area.
Carrying out sub-pixel level edge detection according to the rough positioning data of the calibration point region to obtain sub-pixel level edge detection data;
Specifically, in the rough positioning area of the calibration point, an edge detection technology is applied to further extract fine edge information. The pixel positions of the edges can be found accurately using gradient detection, canny edge detection, etc. To achieve sub-pixel accuracy, interpolation processing is performed near the detected edge points. By analyzing the gray gradient or difference of pixels around the edge points, edge positions finer than the pixels are calculated.
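The interpolation idea can be made concrete along a single scanline: locate the pixel-level gradient peak, then fit a parabola through the three neighbouring gradient magnitudes to place the edge between pixels. This is a standard sub-pixel technique, offered here as one consistent reading of the step.

```python
# Sketch: 1-D sub-pixel edge localization by parabolic interpolation.
import numpy as np

def subpixel_edge_1d(row: np.ndarray) -> float:
    g = np.abs(np.gradient(row.astype(np.float64)))  # gradient magnitude
    i = int(np.argmax(g[1:-1])) + 1                  # pixel-level peak index
    g_l, g_c, g_r = g[i - 1], g[i], g[i + 1]
    denom = g_l - 2.0 * g_c + g_r                    # parabola curvature
    offset = 0.5 * (g_l - g_r) / denom if denom != 0 else 0.0
    return i + offset                                # offset lies in (-0.5, 0.5)
```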
Calculating a sub-pixel centroid according to the sub-pixel level edge detection data to obtain sub-pixel centroid data;
Specifically, the centroid of the calibration point region is computed from the detected sub-pixel edge points. The centroid calculation is based on the coordinates of the edge points: the geometric center of the edge point set is computed at sub-pixel precision. On this basis, the accuracy of the centroid is improved further by determining its position with a weighted average that combines edge intensity and gray-level information.
The method comprises the steps of: obtaining sub-pixel level edge detection data; carrying out multi-channel fusion according to the sub-pixel level edge detection data to obtain edge detection multi-channel fusion data; carrying out local spatial resampling according to the edge detection multi-channel fusion data to obtain locally resampled edge point data; carrying out dynamic edge weight distribution according to the locally resampled edge point data to obtain weighted edge point data; carrying out dynamic centroid clustering according to the weighted edge point data to obtain centroid candidate area data; and carrying out weighted centroid calculation according to the centroid candidate area data to obtain sub-pixel centroid data.
The sub-pixel edge detection data contains image feature information of multiple dimensions of brightness, color and the like. To improve the accuracy of centroid calculation, the data may be processed in multiple channels, i.e., different channel information (e.g., RGB color channel, luminance channel, texture channel, etc.) may be weighted and fused according to their importance. Each channel is dynamically weighted by a weight distribution algorithm (e.g., an adaptive channel weight distribution algorithm) based on gradient and edge strength, thereby generating fused high-precision sub-pixel edge point data. Since the sub-pixel edge points have high accuracy but sparse distribution, it is necessary to locally spatially resample them before centroid calculation. The distribution of edge points is further refined by regenerating a high-density edge point set in the neighborhood of sub-pixel edge points by a spatial interpolation algorithm (such as bilinear interpolation or higher-order B-spline interpolation) to increase the accuracy in calculating the centroid.
For each resampled edge point, a different weight is assigned according to its distance from the centroid candidate area, its edge intensity and its gradient direction. By combining the dual factors of spatial distance and edge intensity (for example, with weights obtained by weighted regression over both), points closer to the centroid are guaranteed larger weights. For each edge point, the distance to the center of the centroid candidate region (or to the initial estimated position) is calculated; this spatial distance reflects the relative position of the edge point with respect to the centroid, so points close to the center of the candidate region have greater influence on the centroid calculation. The edge intensity of each point is calculated from the image gradient. The gradient is the rate at which image brightness changes and reflects the sharpness and significance of an edge: higher-intensity edge points should carry more weight in the centroid calculation, since in complex edge or noise environments they have higher confidence. The core of the dynamic edge weighting mechanism is therefore to assign each edge point a weight coefficient based on spatial distance and edge strength, with the weighting formula

$w_i = \frac{E_i}{d_i + \epsilon}$

where $w_i$ is the dynamic weighting coefficient of the $i$-th edge point, $E_i$ is its edge strength, $d_i$ is its spatial distance, and $\epsilon$ is a small positive number that prevents the divisor from being 0. Once the dynamic weighting coefficients have been computed, the weights are used directly in the centroid calculation: the coordinates of each edge point are multiplied by the corresponding weight, yielding the weighted edge point data for centroid calculation.
Meanwhile, an edge point culling algorithm based on noise analysis removes noise points that deviate from the standard. On the basis of the weighted edge points, a preliminary clustering analysis is performed on all edge points to find the centroid candidate region: a density-based clustering algorithm (such as DBSCAN) groups the densely distributed edge points together and excludes sparse noise points. Generating the centroid candidate area further optimizes the selection of the centroid position. According to the centroid candidate area data, the weighted coordinates of all edge points in the local area are computed with a weighted centroid formula (an improved version of the classical centroid formula) to obtain the centroid position. Here, the centroid coordinates $(x, y)$ are calculated by

$x = \frac{\sum_i w_i x_i}{\sum_i w_i}, \qquad y = \frac{\sum_i w_i y_i}{\sum_i w_i}$

where $x$ and $y$ are the abscissa and ordinate of the sub-pixel centroid data, $w_i$ is the weight of the $i$-th edge point, and $x_i$ and $y_i$ are the abscissa and ordinate of the $i$-th edge point. The centroid calculation thus considers not only the spatial position but also fuses multiple image features, so the centroid is located more accurately.
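A minimal sketch of how these two formulas could be realised, assuming `sub_pts` and `strengths` come from the sub-pixel edge step and `rough_center` from the coarse localization; the DBSCAN parameters are illustrative:

```python
# Dynamic weighting w_i = E_i / (d_i + eps), DBSCAN to form the centroid
# candidate region, then the weighted mean of the kept coordinates.
import numpy as np
from sklearn.cluster import DBSCAN

def weighted_centroid(pts, strengths, seed_xy, eps_w=1e-6):
    d = np.linalg.norm(pts - seed_xy, axis=1)      # spatial distance d_i
    w = strengths / (d + eps_w)                    # dynamic weight w_i = E_i/(d_i+eps)

    labels = DBSCAN(eps=2.0, min_samples=5).fit_predict(pts)
    densest = np.bincount(labels[labels >= 0]).argmax()  # assumes one dense cluster
    keep = labels == densest                       # candidate region; noise (-1) culled

    x = np.sum(w[keep] * pts[keep, 0]) / np.sum(w[keep])
    y = np.sum(w[keep] * pts[keep, 1]) / np.sum(w[keep])
    return x, y

cx, cy = weighted_centroid(sub_pts, strengths, seed_xy=rough_center)
```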
And carrying out sub-pixel shape fitting according to the sub-pixel centroid data to obtain first calibration point data.
Specifically, on the basis of the obtained sub-pixel centroid data, a geometric fitting method (such as least square fitting of a circle or an ellipse) is applied to fit the shape of the calibration point, so that the centroid data can be corrected, and accurate shape characteristics of the calibration point can be obtained. Through the fitted shape features, the system determines the position and shape parameters (such as circle center, radius, etc.) of the calibration point as the first calibration point data.
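A common realisation of such a fit is the algebraic least-squares (Kåsa) circle fit; the sketch below assumes the calibration mark is circular and that `sub_pts` holds the refined edge points:

```python
# Kåsa circle fit: solve x^2 + y^2 = 2*a*x + 2*b*y + c in the least-squares
# sense; centre is (a, b) and radius sqrt(c + a^2 + b^2).
import numpy as np

def fit_circle(pts):
    A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
    rhs = pts[:, 0] ** 2 + pts[:, 1] ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a ** 2 + b ** 2)

cx, cy, radius = fit_circle(sub_pts)   # first calibration point: centre + radius
```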
Preferably, the geometric invariance feature calibration point detection is specifically:
extracting geometric invariance characteristics according to the product image data of the production line to obtain geometric invariance characteristic data;
Specifically, the basic image processing such as graying, denoising, contrast enhancement and the like is carried out on the product image of the production line. In this way, the sharpness of the image and the recognizability of the marked points are improved. By thresholding, edge detection or morphological operations, a rough calibration point region is identified from the image of the product. Coarse positioning typically extracts larger geometric features (e.g., caps, labels, etc.) to determine the approximate location of the calibration points. The system uses a coarse positioning algorithm to frame the calibration point region into a larger rectangular or circular frame, which is used as the basis for the accurate processing of the subsequent sub-pixel level.
Performing geometric invariance feature point matching on the geometric invariance feature data to obtain candidate calibration point matching pair data;
Specifically, a finer edge detection algorithm (e.g., Canny, Sobel) is applied to the coarse positioning area of the calibration point to detect its edges; the detection accuracy at this stage is at the pixel level. To improve the accuracy of the edge points, a sub-pixel interpolation method is adopted: the neighborhood gray-value variation is interpolated to find edge points more accurate than the pixel grid. Common interpolation methods include linear and parabolic interpolation. The interpolated sub-pixel-precision edge points are then extracted for further calculation.
According to the candidate calibration point matching pair data, carrying out random sampling consistency abnormal point rejection to obtain effective calibration point data;
Specifically, random samples are drawn from the candidate matching pairs and abnormal or noisy points are removed step by step. Random-sampling-consistency outlier detection (in the manner of RANSAC, random sample consensus) finds, over multiple iterations, the point pairs that satisfy a given geometric relation (such as an affine transformation model) and eliminates the outliers that do not fit the model. Pairs that deviate strongly (mismatches caused by occlusion, illumination or noise) are culled, leaving only valid calibration point matching pairs. In each iteration, several point pairs are randomly selected from the candidate matches for consistency evaluation, an affine model is estimated, each candidate match is tested against the model, and the maximum consensus set satisfying the transformation is retained while inconsistent outliers are removed. For example, the system may find that some noise points do not conform to the expected affine transformation, leaving valid calibration point pairs such as (120, 180) and (115, 175).
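A sketch of this rejection step using OpenCV's RANSAC-based affine estimator (the input arrays and the 3 px reprojection threshold are assumptions):

```python
# RANSAC outlier rejection: fit an affine model to the candidate pairs and
# keep only the inliers returned by the estimator.
import cv2
import numpy as np

src = np.asarray(candidate_src, np.float32)   # points in the standard image
dst = np.asarray(candidate_dst, np.float32)   # matched points in the current image

M, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                  ransacReprojThreshold=3.0)
mask = inliers.ravel().astype(bool)
valid_src, valid_dst = src[mask], dst[mask]   # e.g. keeps (120,180) -> (115,175)
```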
Carrying out affine transformation parameter estimation according to the effective calibration point data to obtain affine transformation parameter data;
Specifically, the parameters of the affine transformation are calculated using the remaining valid calibration point pairs. An affine transformation includes translation, rotation, scaling and shearing parameters, which can be estimated by least squares or similar methods. The system computes the affine relation between the points by comparing the positions of the calibration points in the standard image and in the current image, and thereby obtains the transformation matrix.
Performing calibration point position recalculation according to the affine transformation parameter data to obtain second calibration point data.
Specifically, the positions of the calibration points in the standard image are mapped onto the current image through the affine transformation parameters, giving the recalculated calibration point positions. The calibration points adjusted by the affine transformation are taken as the second calibration point data.
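The mapping itself might be sketched as follows, reusing the 2x3 matrix `M` estimated above; `standard_calib_pts` is an assumed input:

```python
# Map standard-image calibration points into the current image with the
# estimated 2x3 affine matrix M.
import cv2
import numpy as np

std_pts = np.asarray(standard_calib_pts, np.float32).reshape(-1, 1, 2)
second_calib_pts = cv2.transform(std_pts, M).reshape(-1, 2)  # recalculated positions
```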
Preferably, wherein the self-calibrating calibration point detection is specifically:
edge detection and contour extraction are carried out according to the product image data of the production line, so that main contour data of the product are obtained;
Specifically, edge detection techniques (e.g., Canny, Sobel) are used to extract the edge profile of the product. Edge detection identifies the outer contour of the product by capturing significant changes in pixel values (e.g., gray-scale gradients) in the image. On the basis of the detected edge points, the complete outline of the product is extracted to form a closed curve, and the integrity and continuity of the profile are enhanced by morphological operations (e.g., dilation, erosion).
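A minimal sketch of this step with OpenCV (threshold and kernel values are illustrative):

```python
# Contour extraction: Canny edges, morphological closing to restore
# continuity, then the largest external contour as the product outline.
import cv2
import numpy as np

edges = cv2.Canny(gray, 50, 150)
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
main_contour = max(contours, key=cv2.contourArea)   # main product contour data
```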
Performing corner detection according to main contour data of the product to obtain corner data of the product;
Specifically, salient corners on the contour are identified by corner detection techniques (e.g., harris corner detection). Corner points are usually positions in the image where the edge direction changes drastically and can represent the geometrical features of the product. From the extracted main contours, all corner points are found and their coordinates are recorded as corner point data. These corner points will serve as feature points for subsequent steps.
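One possible realisation uses `cv2.goodFeaturesToTrack` with the Harris response, restricted to a band around the extracted contour; all parameter values below are illustrative:

```python
# Harris corner detection restricted to the neighbourhood of the main contour.
import cv2
import numpy as np

band = np.zeros(gray.shape, np.uint8)
cv2.drawContours(band, [main_contour], -1, 255, thickness=5)  # search near outline
corners = cv2.goodFeaturesToTrack(gray, maxCorners=200, qualityLevel=0.01,
                                  minDistance=10, mask=band,
                                  useHarrisDetector=True, k=0.04)
corner_pts = corners.reshape(-1, 2)                           # product corner data
```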
Carrying out repeated characteristic analysis according to the product corner data to obtain product structural characteristic data;
Specifically, by analyzing the corner points in the plurality of image frames, it is determined whether the corner points remain stable under different viewing angles and scaling conditions. A repetitive feature generally refers to a feature point that repeatedly appears in multiple images without significant change in position. And screening out the angular points which show repeatability and stability in the multi-frame images to form structural characteristic data of the product. These points are typically critical parts of the product structure, such as the label edges, body symmetry points, etc.
Carrying out symmetry detection and candidate calibration point extraction according to the product structure characteristic data to obtain candidate calibration point data;
Specifically, the symmetry of the product is detected by analyzing the geometric distribution of its structural features. Symmetry can be assessed by calculating whether the distribution of feature points about a given axis is symmetrical, for example whether the corner points on the left and right sides of the bottle body mirror each other. On the basis of the symmetry detection, feature points located near the symmetry axis, or which describe the symmetry of the product, are extracted as candidate calibration points.
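A sketch of a simple mirror-and-score symmetry test for a vertical axis (the axis estimate and all thresholds are assumptions):

```python
# Reflect the stable corner points about a candidate vertical axis and score
# how many of them find a mirrored partner in the original set.
import numpy as np
from scipy.spatial import cKDTree

def symmetry_score(pts, axis_x, tol=3.0):
    mirrored = np.column_stack([2 * axis_x - pts[:, 0], pts[:, 1]])
    d, _ = cKDTree(pts).query(mirrored, k=1)
    return float(np.mean(d < tol))        # fraction of points with a mirror partner

axis_x = corner_pts[:, 0].mean()          # candidate axis through the body centre
if symmetry_score(corner_pts, axis_x) > 0.8:
    candidate_pts = corner_pts[np.abs(corner_pts[:, 0] - axis_x) < 5.0]
```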
Performing geometric non-deformation analysis according to the candidate calibration point data to obtain candidate calibration point geometric non-deformation data;
Specifically, the system analyzes whether the candidate calibration points remain geometrically unchanged under different transformations (e.g., rotation, scaling). This geometric property can be determined by comparing the relative position changes of the feature points after affine transformation or other geometric manipulation. It is then verified whether the candidate calibration points remain consistent under different viewing angles or scale changes; if the positional relationship of the calibration points remains unchanged in most cases, their geometric invariance can be confirmed.
And carrying out self-calibration feature point verification according to the geometrical non-deformation data of the candidate calibration points to obtain third calibration point data.
Specifically, according to geometric non-deformation data of candidate calibration points, whether the candidate calibration points accord with self-calibration characteristics is verified, and whether the points can be calibrated through internal structural relations is mainly judged. The self-calibrating feature points need to meet the requirements of symmetry, repeatability and geometric invariance. The calibration points verified by the self-calibration feature will be determined to be valid calibration points, which the system outputs as third calibration point data.
Preferably, step S3 is specifically:
carrying out product morphological analysis according to the product image calibration data to obtain product morphological analysis data;
Specifically, the system uses image processing technology to analyze the overall shape of the product, mainly identifying and computing its outline, shape and structural features. The geometry of the product is identified using morphological operations (e.g., erosion, dilation) or feature extraction algorithms. Key geometric features of the product (such as height, width, angles and curves) are extracted and compared against the morphological parameters of a standard model to judge whether the product conforms to the expected geometric form.
Carrying out surface quality detection according to the product image calibration data to obtain surface quality detection data;
Specifically, the surface of the product is analyzed by high resolution image processing techniques to detect surface defects (e.g., scratches, pits, bubbles, contamination, etc.). Texture analysis and pattern recognition techniques are used to find outlier regions. By comparing the texture and gray scale distribution of the normal surface, a possible defect area is automatically detected. Then, defect classification is performed to mark different types of surface defects.
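A minimal golden-sample comparison sketch (the aligned reference image `reference_gray` and all thresholds are assumptions; real systems typically add texture descriptors on top):

```python
# Surface defect detection by difference against a defect-free reference
# image of the same, pre-aligned view.
import cv2
import numpy as np

diff = cv2.absdiff(gray, reference_gray)
_, defect_mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
defect_mask = cv2.morphologyEx(defect_mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
contours, _ = cv2.findContours(defect_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
defects = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 20]
```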
Generating an X-ray image through an X-ray device to obtain X-ray image data of a product;
Specifically, an X-ray device is used to scan the product, generating an internal structural image of the product. Different attenuations are generated when the X-rays pass through materials with different densities, and the system generates X-ray images according to the attenuation information. X-ray images are acquired by a sensor and pre-processed (e.g., denoising, contrast enhancement) to improve the sharpness and resolution of the image.
Performing internal structure analysis according to the X-ray image data of the product to obtain internal structure analysis data;
Specifically, based on the X-ray image data, the system analyzes the internal structure of the product, such as material distribution, internal defects (e.g., bubbles, cracks), connection of internal parts, and the like. Regions of different densities are distinguished and analyzed using image segmentation and pattern recognition techniques. By comparing the normal internal structure with the current X-ray image, internal outliers or potential defect areas are automatically identified and marked.
Performing size precision measurement on the product image calibration data to obtain size precision measurement data;
Specifically, based on the calibrated image data, the system automatically measures critical dimensional parameters of the product, such as length, width and thickness. By comparison with the standard size model, it evaluates whether the dimensional accuracy of the product is within the allowable error range. The system then performs error analysis on the measurement results and judges whether the deviation between the measured and standard dimensions meets the design requirements.
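A sketch of such a measurement, assuming the spatial calibration supplies a scale factor `mm_per_px` and using illustrative nominal values:

```python
# Dimensional measurement from the calibrated contour: minimum-area rectangle
# in pixels, converted to millimetres, then checked against a tolerance.
import cv2

(_, _), (w_px, h_px), _ = cv2.minAreaRect(main_contour)
length_mm = max(w_px, h_px) * mm_per_px
width_mm = min(w_px, h_px) * mm_per_px

nominal_len, tol = 120.0, 0.5                    # assumed design values (mm)
length_ok = abs(length_mm - nominal_len) <= tol  # within allowable error?
```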
And integrating the product morphological analysis data, the surface quality detection data, the internal structure analysis data and the dimensional accuracy measurement data to obtain product state data.
Specifically, the system aggregates and integrates data derived from morphological analysis, surface quality inspection, internal structural analysis, and dimensional accuracy measurements. And forming a complete product state report through multidimensional data fusion, and evaluating whether the product meets the standard. Based on the data of each aspect, the system evaluates the overall state of the product, gives out a final judgment whether the product is qualified or not, and lists the detection result of each item of data in detail.
Preferably, step S4 is specifically:
Carrying out product state parameter conversion according to the product state data to obtain product state parameter data;
Specifically, the product state data obtained from the previous stage (morphological analysis, surface quality detection, internal structure analysis, dimensional accuracy measurement, etc.) are converted into standardized state parameters. The state parameters are represented as digitized indicators such as pass/fail decisions, the type and number of specific defects, and dimensional errors. Through data cleaning and conversion, the different types of state data are uniformly converted into a format that is convenient to process, such as dimensional deviations in millimeters or the area and severity classification of surface defects.
Matching according to the product state parameter data and a preset product quality inspection index library to obtain product state quality inspection data;
Specifically, the system compares the standardized product state parameters with a preset product quality inspection index library. The quality inspection index library contains the information of quality requirements, qualification standards, tolerance ranges and the like of the products. According to the standard in the quality inspection index library, the system automatically judges whether the product state parameters meet the requirements. For non-standard status parameters, the system marks the failure cause and generates quality inspection data indicating specific problem areas.
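A minimal sketch of such a matching step; the structure of the index library and every limit in it are assumptions:

```python
# Match standardized state parameters against a quality-inspection index
# library and produce per-item pass/fail plus an overall verdict.
INDEX_LIBRARY = {
    "length_mm":    {"nominal": 120.0, "tol": 0.5},
    "defect_count": {"max": 0},
}

def inspect(params: dict) -> dict:
    report = {}
    for key, rule in INDEX_LIBRARY.items():
        value = params[key]
        if "nominal" in rule:
            report[key] = abs(value - rule["nominal"]) <= rule["tol"]
        else:
            report[key] = value <= rule["max"]
    report["pass"] = all(report.values())
    return report

print(inspect({"length_mm": 120.3, "defect_count": 0}))
# -> {'length_mm': True, 'defect_count': True, 'pass': True}
```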
And carrying out production line adjustment processing according to the product state quality inspection data and a preset production line expert knowledge engine to obtain production line adjustment data so as to carry out auxiliary operation of production line parameter adjustment.
Specifically, a preset production line expert knowledge engine is utilized, and the system analyzes whether the current production line parameters need to be adjusted according to the product state quality inspection data. The expert knowledge engine contains production process and operation rules, and can generate adjustment suggestions according to the detection result, such as adjusting production speed, changing equipment setting, calibrating equipment and the like. Based on the quality inspection data, the system analyzes the failure cause and generates corresponding production line adjustment suggestions. The adjustment advice includes adjusting the accuracy of the equipment, reducing the production speed, optimizing the operation flow, etc., to improve the yield of the product.
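A minimal rule-based sketch of such an engine; every rule and every adjustment suggestion below is illustrative rather than taken from the disclosure:

```python
# Rule-based expert knowledge engine: map failed quality-inspection items to
# production line adjustment suggestions.
RULES = [
    (lambda q: not q.get("length_mm", True),
     "Recalibrate the forming station and reduce line speed by 10%."),
    (lambda q: not q.get("defect_count", True),
     "Inspect upstream tooling for wear and schedule equipment calibration."),
]

def adjust(quality_report: dict) -> list:
    advice = [msg for cond, msg in RULES if cond(quality_report)]
    return advice or ["No adjustment required."]

print(adjust({"length_mm": False, "defect_count": True}))
# -> ['Recalibrate the forming station and reduce line speed by 10%.']
```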
Preferably, the present application also provides a dynamic production line measurement and control system based on visual identification, for executing the dynamic production line measurement and control method based on visual identification as described above, the dynamic production line measurement and control system based on visual identification includes:
The production line product data acquisition module is used for acquiring the production line product data through the visual identification equipment to obtain the production line product image data;
The product image calibration module is used for carrying out dynamic calibration according to the product image data of the production line to obtain product image calibration data;
the product state analysis module is used for carrying out product state analysis according to the product image calibration data to obtain product state data;
and the production line adjustment processing module is used for carrying out production line adjustment processing according to the product state data to obtain production line adjustment data so as to carry out auxiliary operation of production line parameter adjustment.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
The foregoing is only a specific embodiment of the invention to enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. The dynamic production line measurement and control method based on visual identification is characterized by comprising the following steps of:
S1, acquiring product data of a production line through visual identification equipment to obtain product image data of the production line;
s2, carrying out dynamic calibration according to the product image data of the production line to obtain product image calibration data;
s3, carrying out product state analysis according to the product image calibration data to obtain product state data;
s4, carrying out production line adjustment processing according to the product state data to obtain production line adjustment data so as to carry out auxiliary operation of production line parameter adjustment;
the step S2 specifically comprises the following steps:
performing calibration mode selection according to the product image data of the production line to obtain calibration mode data;
performing calibration point detection according to the product image data of the production line to obtain calibration point data;
carrying out space calibration according to the calibration mode data, the calibration point data and the product image data of the production line to obtain product space calibration data;
performing color calibration according to the calibration mode data and the product space calibration data to obtain product color calibration data;
performing dynamic environment compensation according to the calibration mode data and the product color calibration data to obtain product environment factor compensation data;
Carrying out attitude calibration according to the calibration mode data and the product environment factor compensation data to obtain product image calibration data;
The detection of the calibration point specifically comprises the following steps:
sub-pixel precision calibration point detection is carried out according to product image data of a production line, and first calibration point data are obtained;
performing geometric invariance characteristic calibration point detection according to the product image data of the production line to obtain second calibration point data;
performing self-calibration point detection according to the product image data of the production line to obtain third calibration point data;
And performing the calibration point matching according to the first calibration point data, the second calibration point data and the third calibration point data to obtain calibration point matching data.
2. The method according to claim 1, wherein step S1 is specifically:
Scanning the production line through visual identification equipment to obtain production line image data;
Obtaining standard product image data;
And carrying out product vision area extraction according to the standard product image data and the production line image data to obtain the production line product image data.
3. The method according to claim 1, wherein the sub-pixel precision calibration point detection is specifically:
Performing rough positioning processing on the calibration point area according to the product image data of the production line to obtain rough positioning data of the calibration point area;
carrying out sub-pixel level edge detection according to the rough positioning data of the calibration point region to obtain sub-pixel level edge detection data;
calculating a sub-pixel centroid according to the sub-pixel level edge detection data to obtain sub-pixel centroid data;
and carrying out sub-pixel shape fitting according to the sub-pixel centroid data to obtain first calibration point data.
4. The method according to claim 1, wherein the geometric invariance feature calibration point detection is specifically:
extracting geometric invariance characteristics according to the product image data of the production line to obtain geometric invariance characteristic data;
performing geometric invariance feature point matching on the geometric invariance feature data to obtain candidate calibration point matching pair data;
according to the candidate calibration point matching pair data, carrying out random sampling consistency abnormal point rejection to obtain effective calibration point data;
Carrying out affine transformation parameter estimation according to the effective calibration point data to obtain affine transformation parameter data;
and performing calibration point position recalculation according to the affine transformation parameter data to obtain second calibration point data.
5. The method according to claim 1, wherein the self-calibrating calibration point detection is specifically:
edge detection and contour extraction are carried out according to the product image data of the production line, so that main contour data of the product are obtained;
Performing corner detection according to main contour data of the product to obtain corner data of the product;
Carrying out repeated characteristic analysis according to the product corner data to obtain product structural characteristic data;
carrying out symmetry detection and candidate calibration point extraction according to the product structure characteristic data to obtain candidate calibration point data;
performing geometric non-deformation analysis according to the candidate calibration point data to obtain candidate calibration point geometric non-deformation data;
And carrying out self-calibration feature point verification according to the geometrical non-deformation data of the candidate calibration points to obtain third calibration point data.
6. The method according to claim 1, wherein step S3 is specifically:
carrying out product morphological analysis according to the product image calibration data to obtain product morphological analysis data;
carrying out surface quality detection according to the product image calibration data to obtain surface quality detection data;
generating an X-ray image through an X-ray device to obtain X-ray image data of a product;
performing internal structure analysis according to the X-ray image data of the product to obtain internal structure analysis data;
performing size precision measurement on the product image calibration data to obtain size precision measurement data;
And integrating the product morphological analysis data, the surface quality detection data, the internal structure analysis data and the dimensional accuracy measurement data to obtain product state data.
7. The method according to claim 1, wherein step S4 is specifically:
Carrying out product state parameter conversion according to the product state data to obtain product state parameter data;
matching according to the product state parameter data and a preset product quality inspection index library to obtain product state quality inspection data;
and carrying out production line adjustment processing according to the product state quality inspection data and a preset production line expert knowledge engine to obtain production line adjustment data so as to carry out auxiliary operation of production line parameter adjustment.
8. A vision-based dynamic production line measurement and control system for performing the vision-based dynamic production line measurement and control method according to claim 1, comprising:
The production line product data acquisition module is used for acquiring the production line product data through the visual identification equipment to obtain the production line product image data;
The product image calibration module is used for carrying out dynamic calibration according to the product image data of the production line to obtain product image calibration data;
the product state analysis module is used for carrying out product state analysis according to the product image calibration data to obtain product state data;
and the production line adjustment processing module is used for carrying out production line adjustment processing according to the product state data to obtain production line adjustment data so as to carry out auxiliary operation of production line parameter adjustment.
CN202411667229.0A 2024-11-21 2024-11-21 Dynamic production line measurement and control method and system based on visual recognition Active CN119169010B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411667229.0A CN119169010B (en) 2024-11-21 2024-11-21 Dynamic production line measurement and control method and system based on visual recognition

Publications (2)

Publication Number Publication Date
CN119169010A CN119169010A (en) 2024-12-20
CN119169010B true CN119169010B (en) 2025-03-25

Family

ID=93881967

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411667229.0A Active CN119169010B (en) 2024-11-21 2024-11-21 Dynamic production line measurement and control method and system based on visual recognition

Country Status (1)

Country Link
CN (1) CN119169010B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118386251A (en) * 2024-06-21 2024-07-26 深圳市曜通科技有限公司 Self-adaptive grabbing system and method based on semiconductor grabbing mechanism
CN118938847A (en) * 2024-10-15 2024-11-12 山东格林汇能科技有限公司 Wet wipes quality control system based on artificial intelligence

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101404640B1 (en) * 2012-12-11 2014-06-20 한국항공우주연구원 Method and system for image registration
CN115816833B (en) * 2023-01-07 2023-06-30 深圳市创想三维科技股份有限公司 Method and device for determining image correction data, electronic equipment and storage medium
CN115877808B (en) * 2023-01-30 2023-05-16 成都秦川物联网科技股份有限公司 Industrial Internet of things for processing sheet workpiece and control method
CN118644686A (en) * 2024-05-29 2024-09-13 遵义师范学院 An image acquisition system for pattern recognition analysis
CN118305479B (en) * 2024-06-12 2024-09-03 深圳市牧激科技有限公司 Control method and device for laser processing path, processor and storage medium
CN118977137A (en) * 2024-09-25 2024-11-19 小恒勇创(苏州)智能科技有限公司 An automatic tool adjustment method for inspection machine based on visual recognition


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant