CN118470873A - Thermal forming component pose monitoring and early warning method - Google Patents
Thermal forming component pose monitoring and early warning method
- Publication number
- CN118470873A (application CN202410906257.7A)
- Authority
- CN
- China
- Prior art keywords
- component
- image
- pose
- light source
- monitoring
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B7/00—Signalling systems according to more than one of groups G08B3/00 - G08B6/00; Personal calling systems according to more than one of groups G08B3/00 - G08B6/00
- G08B7/06—Signalling systems according to more than one of groups G08B3/00 - G08B6/00; Personal calling systems according to more than one of groups G08B3/00 - G08B6/00 using electric transmission, e.g. involving audible and visible signalling through the use of sound and light sources
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J5/00—Radiation pyrometry, e.g. infrared or optical thermometry
- G01J5/48—Thermography; Techniques using wholly visual means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/809—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
- G06V10/811—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data the classifiers operating on different input data, e.g. multi-modal recognition
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C3/00—Registering or indicating the condition or the working of machines or other apparatus, other than vehicles
- G07C3/14—Quality control systems
- G07C3/143—Finished product quality control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/20—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from infrared radiation only
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/74—Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/81—Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Signal Processing (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Quality & Reliability (AREA)
- Biophysics (AREA)
- Data Mining & Analysis (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Molecular Biology (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- General Engineering & Computer Science (AREA)
- Automation & Control Theory (AREA)
- Radiation Pyrometers (AREA)
Abstract
The invention relates to the technical field of image processing, in particular to a pose monitoring and early warning method for a thermal forming component. The method optimizes imaging against the thermal radiation characteristics of the thermal forming component by dynamically adjusting light source parameters, markedly improving imaging quality, ensuring that clear and accurate image data can be obtained even under extreme thermal working conditions of the component, and solving the inaccurate image recognition caused by thermal radiation interference in traditional monitoring methods. The invention further integrates RGB, near-infrared, far-infrared and other image data acquisition to provide rich visual information and, combined with post-processing techniques, enhances the recognition of component features and improves the accuracy and robustness of pose detection.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a thermal forming component pose monitoring and early warning method.
Background
At present, a component heated during the thermoforming process must be conveyed beneath the stamping tool by a material-handling device and positioned through the engagement of a limiting pin with a limiting hole in the component; after placement, however, whether the component is properly aligned must still be checked manually.
In the prior art, whether the component is aligned is monitored by a camera. However, when images are acquired, the light emitted by the heated component strongly interferes with image acquisition, so that alignment cannot be confirmed and the component pose obtained by image recognition is inaccurate.
Therefore, a thermal forming component pose monitoring and early warning method capable of solving these problems is needed.
Disclosure of Invention
The invention provides a thermal forming component pose monitoring and early warning method that optimizes imaging against the thermal radiation characteristics of the thermal forming component by dynamically adjusting light source parameters, markedly improving imaging quality, ensuring that clear and accurate image data can be obtained under extreme thermal working conditions of the component, and solving the inaccurate image recognition caused by thermal radiation interference in traditional monitoring methods. The invention further integrates RGB, near-infrared, far-infrared and other image data acquisition to provide rich visual information and, combined with post-processing techniques, enhances the recognition of component features and improves the accuracy and robustness of pose detection.
The technical scheme adopted by the invention for solving the technical problems is as follows: the thermal forming component pose monitoring and early warning method is used for monitoring the component pose in a thermal forming monitoring system, the thermal forming monitoring system comprises a programmable light source matrix, a multi-mode image acquisition unit and an automatic monitoring control unit, and the thermal forming component pose monitoring and early warning method comprises the following steps:
Step S1: dynamically adjusting light source parameters according to the heat radiation characteristics of the thermoforming component by using a programmable light source matrix, and optimizing an imaging environment;
Step S2: collecting image data of the component through a multi-mode image acquisition unit, and integrating the image data into a CPU in an automatic monitoring control unit;
Step S3: the CPU in the automatic monitoring control unit is used for carrying out post-processing on the acquired image, filtering interference light emitted by the component, and enhancing the definition and contrast of the component image;
Step S4: processing the image by using a multi-scale depth vision network, and extracting the current pose characteristics of the component;
Step S5: presetting a component pose deviation threshold, matching the current pose features of the component against the preset standard component pose, and issuing an acousto-optic (audible and visual) alarm through the automated monitoring control unit if the pose deviation exceeds the preset threshold.
Further, in step S1, dynamically adjusting the light source parameters according to the thermal radiation characteristics of the thermoforming component using the programmable light source matrix and optimizing the imaging environment includes:
Step S1-1: monitoring, in real time, heat radiation intensity and distribution characteristic data of the component surface during the thermoforming process through an integrated thermal imaging sensor or a heat radiation monitoring device;
Step S1-2: and calculating a light source adjustment strategy by a CPU in the programmable light source matrix according to the monitored heat radiation data, wherein the light source adjustment strategy comprises, but is not limited to, adjusting the brightness, the color temperature, the light emitting wave band and the irradiation direction of the light source.
Further, the calculating, by the CPU in the programmable light source matrix in step S1-2, the light source adjustment policy includes:
Step S1-2-1: performing preliminary processing on thermal radiation data collected from a thermal imaging sensor or a thermal radiation monitoring device through an image quality optimization system, wherein the preliminary processing comprises noise removal through filtering, error correction and standardization, so that the accuracy and the usability of the data are ensured;
Step S1-2-2: extracting the maximum value, average value, distribution pattern, rate of change and temperature gradient of the component surface from the preprocessed data;
step S1-2-3: establishing a mathematical relationship between light source adjustment and imaging quality in an image quality optimization system based on historical data and a physical model;
Step S1-2-4: randomly generating a group of initial light source parameter configurations through particle swarm initialization by adopting a particle swarm optimization algorithm, wherein each particle represents a light source adjustment strategy, and distributing initial speed and position for each particle;
step S1-2-5: calculating an image quality index corresponding to each particle through a CPU in the programmable light source matrix according to the physical model and the historical data;
Step S1-2-6: updating pBest if the current particle position produces an image quality index that is better than the image quality index previously recorded for the particle, and updating gBest if the current particle position produces an image quality index that is better than the image quality index previously recorded for all the particles in the particle swarm;
Wherein pBest represents the optimal solution that each particle has undergone during the search process, gBest represents the optimal solution that was found so far in the whole particle swarm;
Step S1-2-7: repeating steps S1-2-5 to S1-2-6 until the preset number of iterations is reached; after the final iteration, the gBest position in the particle swarm represents the optimal light source adjustment strategy;
step S1-2-8: and controlling the current, the voltage and the pulse width of each light source unit in the light source matrix by the CPU in the programmable light source matrix according to the calculated strategy instruction, and realizing the adjustment of brightness, color temperature and direction.
Further, the image data of the component in the step S2 includes RGB image data, near infrared image data and far infrared image data.
Further, in the step S3, the post-processing of the acquired image by the CPU in the automated monitoring control unit includes:
step S3-1: the CPU in the automatic monitoring control unit receives the original image data from the multi-mode image acquisition unit, performs frequency domain filtering processing on the image, and removes image noise caused by heat radiation;
Step S3-2: extracting the change of the component relative to the background from the continuous image sequence by adopting a background difference algorithm;
Step S3-3: the gray distribution of the image is adjusted through histogram equalization and self-adaptive contrast, so that the contrast of the image is enhanced, and the characteristics of the component are obvious;
Step S3-4: the sharpening filter is adopted to enhance the image edge, so that the visibility of details is improved;
Step S3-5: and integrating the processed image data into a real-time display interface to provide preparation for subsequent component pose analysis.
Further, the extracting the change of the component relative to the background from the continuous image sequence in the step S3-2 by using a background difference algorithm includes:
Step S3-2-1: selecting one or more images from the beginning of the sequence of successive images as background references;
Step S3-2-2: creating a composite background image by averaging the selected reference frames;
Step S3-2-3: performing pixel-by-pixel difference operation on the current image of the component and the comprehensive background image to obtain a difference image;
step S3-2-4: denoising the difference image by adopting a median filtering method;
step S3-2-5: setting a threshold value to process and distinguish a change region of the difference map from noise, wherein a difference value lower than the threshold value is set to be zero, and pixels higher than the threshold value represent dynamic elements;
Step S3-2-6: connectivity analysis is carried out on the difference graph after the threshold processing, and adjacent dynamic elements are aggregated into a region, so that a movement region of the component is defined.
Further, in step S4, processing the image with the multi-scale depth vision network and extracting the current pose features of the component includes:
step S4-1: the multi-scale depth vision network extracts multi-scale features of components in the image in real time through convolution kernels of different sizes;
step S4-2: integrating the extracted multi-scale features into a comprehensive feature vector through weighted fusion;
Step S4-3: based on the comprehensive feature vector, the current pose features of the component are further extracted by utilizing a full-connection layer in the multi-scale depth vision network;
step S4-4: outputting the pose parameters of the component through the regression layer based on the extracted current pose characteristics of the component.
Further, the multi-scale features of the component include edge features, texture features, shape features, and local second-order statistical features; the current pose characteristics of the component comprise the position characteristics and the rotation angle characteristics of the component; the pose parameters of the component include, but are not limited to, the specific position coordinates of the component and the angle at which the component rotates.
Further, the component pose deviation threshold in the step S5 includes a component position coordinate threshold and a component rotation angle range threshold.
Further, in the step S5, matching the current pose feature of the component with the preset standard component pose includes:
step S5-1: calculating the difference value between the current pose coordinates of the component and the standard pose coordinates of the component to obtain the deviation value of the component in the x and y directions;
Step S5-2: comparing the current posture of the component with the rotation parameters of the standard posture of the component, and calculating an angle difference;
Step S5-3: if the deviation is within the component position coordinate threshold and the angle difference is within the component rotation angle range threshold, the component is positioned through the limiting pin, and if the deviation exceeds the component position coordinate threshold or the angle difference exceeds the component rotation angle range threshold, an early warning signal is sent to the automatic monitoring control unit.
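For illustration only, the following Python sketch shows one way the matching in steps S5-1 to S5-3 could be realized; the names (PoseThresholds, check_pose) and the printed warning are assumptions, not part of the claimed method.

```python
# Hypothetical sketch of steps S5-1 to S5-3; names and units are illustrative.
from dataclasses import dataclass

@dataclass
class PoseThresholds:
    max_dx: float      # component position coordinate threshold, x (e.g. mm)
    max_dy: float      # component position coordinate threshold, y (e.g. mm)
    max_dtheta: float  # component rotation angle range threshold (e.g. degrees)

def check_pose(current, standard, th):
    """current/standard: (x, y, theta) tuples from the vision network."""
    dx = current[0] - standard[0]       # step S5-1: deviation in x
    dy = current[1] - standard[1]       # step S5-1: deviation in y
    dtheta = current[2] - standard[2]   # step S5-2: angle difference
    ok = (abs(dx) <= th.max_dx and abs(dy) <= th.max_dy
          and abs(dtheta) <= th.max_dtheta)
    if not ok:                          # step S5-3: raise the early warning
        print(f"POSE WARNING: dx={dx:.2f} dy={dy:.2f} dtheta={dtheta:.2f}")
    return ok
```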
The invention has the advantages that: 1. automatically optimizing an imaging environment: according to the invention, the light source parameters are dynamically regulated to optimize the heat radiation characteristics of the thermal forming component, and the brightness, color and direction parameters are regulated to offset or minimize the influence of heat radiation on imaging, so that the imaging quality is remarkably improved, the clear and accurate component image data obtained by the multi-mode image acquisition unit can be ensured, and the problem of inaccurate image identification caused by heat radiation interference in the traditional monitoring method is solved.
2. The monitoring precision is improved through multi-mode image fusion: the invention integrates multiple image data acquisition of RGB, near infrared, far infrared and the like, provides rich visual information, combines post-processing technology, enhances the recognition capability of component characteristics, and improves the accuracy and the robustness of pose detection.
3. Deep learning driven intelligent analysis: the invention adopts the multi-scale depth vision network, can automatically extract the key pose information of the component from the multi-dimensional characteristics, not only improves the speed of pose recognition, but also greatly improves the accuracy of recognition, ensures the rapid and accurate judgment of the pose of the component, and provides a solid foundation for subsequent automatic control.
4. High-efficiency early warning mechanism: according to the invention, through the preset pose deviation threshold value, the thermoforming monitoring system can immediately judge whether the pose of the component meets the production requirement, and once the deviation exceeding the threshold value is detected, the early warning signal is immediately triggered, so that the production accident caused by component positioning errors is avoided, and the production efficiency and the product quality are ensured.
5. Unattended automated operation: the whole monitoring and early warning process does not need manual intervention, realizes full automation from image acquisition, processing and analysis to abnormal alarm, greatly reduces labor cost, improves the safety and continuous operation capacity of a production line, and meets the requirements of modern manufacturing industry on efficient automatic production.
6. And (3) refined quality control: according to the invention, through fine pose deviation detection and early warning, the pose deviation of the component in the production process can be timely adjusted, the rejection rate caused by pose misalignment is reduced, and the quality and consistency of the whole product are improved.
In summary, the invention successfully solves the problem of image interference caused by high temperature in the thermoforming process by a series of innovative technical means, realizes accurate monitoring and automatic early warning of the pose of the component, and has important significance in improving the production efficiency and the product quality of the manufacturing industry.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for monitoring and early warning the pose of a thermal forming member.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Example 1: fig. 1 is a flowchart of a method for monitoring and early warning the pose of a thermal forming member, as shown in fig. 1, for monitoring the pose of the member in a thermal forming monitoring system, wherein the thermal forming monitoring system comprises a programmable light source matrix, a multi-mode image acquisition unit and an automatic monitoring control unit, and the method for monitoring and early warning the pose of the thermal forming member comprises the following steps:
Step S1: dynamically adjusting light source parameters according to the heat radiation characteristics of the thermoforming component by using a programmable light source matrix, and optimizing an imaging environment; wherein the imaging environment refers to ideal lighting conditions created for image acquisition. A programmable light source matrix is a set of light sources that can be precisely controlled and whose task is to counteract or minimize the effect of thermal radiation on imaging by adjusting brightness, color, and direction parameters. For example, if the component causes light reflection or overexposure, the light source matrix may adjust brightness or angle, using light of a particular wavelength to reduce these disturbances so that subsequent image acquisition can be performed under more ideal lighting conditions. The light source matrix itself does not directly participate in the shooting of the image, but creates a good external environment for image acquisition.
Specifically, step S1-1: the heat radiation intensity and distribution characteristic data of the component surface are monitored in real time during the thermoforming process through an integrated thermal imaging sensor or a heat radiation monitoring device. The integrated thermal imaging sensor is a non-contact sensor that captures thermal energy and converts it into an electrical signal to generate a thermal image. Thermal imaging sensors (e.g., thermopiles, microbolometers or pyroelectric detectors) detect infrared radiation over a broad band covering the near-infrared to far-infrared region. The sensor responds rapidly to temperature changes, recording the heat distribution of the component surface in real time (hot spots, cold spots, temperature gradients and the like) and providing quantitative data on heat radiation intensity and distribution characteristics. The external heat radiation monitoring device comprises a radiometer or an infrared scanning system that can scan the component from a distance to acquire wider thermal field information; it offers higher precision and resolution, provides more detailed heat radiation intensity data, and can generate a two-dimensional or three-dimensional heat radiation distribution map. Further, the heat radiation intensity and distribution characteristic data include: heat radiation intensity, the radiant energy emitted per unit area of the component surface, in watts per square meter (W/m²); a temperature distribution map, which displays the temperature differences at various points on the component surface as an image, with different colors representing different temperature ranges, helping to visualize the heat distribution characteristics of the component; spectral characteristics of the heat radiation, i.e. its distribution over different wavelength ranges; and time series data, i.e. heat radiation intensity and temperature over time, used to analyze the temperature variation of the component and the dynamics of thermal effects during thermoforming.
Step S1-2: according to the monitored thermal radiation data, an optimal light source adjustment strategy is calculated by the CPU in the programmable light source matrix (the CPU acts as the main controller, receiving the data from the thermal imaging sensor and executing the algorithms that analyze the data and formulate the light source adjustment strategy); the strategy includes, but is not limited to, adjusting the brightness, color temperature, emission band and irradiation direction of the light source.
Further, the calculating, by the CPU in the programmable light source matrix in step S1-2, the light source adjustment policy includes:
Step S1-2-1: performing preliminary processing on the thermal radiation data collected from the thermal imaging sensor or the thermal radiation monitoring device through an image quality optimization system (the image quality optimization system is tightly coupled, within the automated monitoring control unit, with the thermal radiation monitoring sensor, the programmable light source matrix and the imaging equipment, ensuring that thermal radiation data are acquired in real time so the system can react quickly and adjust the light source to optimize imaging conditions); the preliminary processing includes filtering to remove noise, error correction and standardization, ensuring the accuracy and usability of the data;
Step S1-2-2: extracting the maximum value, average value, distribution pattern, rate of change and temperature gradient of the component surface from the preprocessed data; a minimal sketch of this extraction follows.
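This sketch is illustrative only: it assumes the monitored thermal data arrive as 2D temperature maps, and the function name and the exact encodings of "distribution pattern" and "rate of change" are assumptions.

```python
# Illustrative only: extracts the surface statistics named in step S1-2-2.
import numpy as np

def thermal_features(temp_map, prev_map, dt):
    """temp_map/prev_map: 2D temperature arrays; dt: seconds between frames."""
    gy, gx = np.gradient(temp_map)                    # temperature gradient field
    return {
        "max": float(temp_map.max()),                 # maximum surface value
        "mean": float(temp_map.mean()),               # average surface value
        "hist": np.histogram(temp_map, bins=32)[0],   # distribution pattern
        "rate": float(np.abs(temp_map - prev_map).mean() / dt),  # rate of change
        "grad": float(np.hypot(gx, gy).mean()),       # mean gradient magnitude
    }
```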
Step S1-2-3: a mathematical relationship between light source adjustment and imaging quality is established in the image quality optimization system based on historical data and a physical model. Here, historical data refers to records collected from past thermoforming runs under similar or identical conditions: the thermal radiation characteristics of the component (maximum and average radiation intensity, distribution pattern, rate of change and temperature gradient of the component surface), the light source parameter adjustments, and the corresponding imaging quality (signal-to-noise ratio SNR, contrast and sharpness of the image). These data reflect the actual influence of light source adjustment on imaging and provide the empirical basis for the mathematical model. The physical model combines a prior-art illumination model, a thermal radiation transmission model and an image quality evaluation model. The illumination model describes how light emitted by the source propagates through space, interacts with the environment (including thermal radiation) and finally reaches the camera sensor. The thermal radiation transmission model simulates how thermal radiation propagates in the environment and interacts with the surrounding medium, helping to predict and quantify the extent to which thermal radiation interferes with imaging. The image quality evaluation model defines the quantitative image quality indices: signal-to-noise ratio (SNR), the ratio of image signal strength to background noise (a high SNR means a clearer image with less noise interference); contrast, the brightness difference between the brightest and darkest areas of the image (high contrast makes objects better defined); and sharpness, the degree to which image detail can be resolved. In establishing the mathematical relationship, the invention combines these models into a comprehensive model that predicts the final image quality indices from the current thermal radiation conditions and a candidate light source adjustment strategy. In this way the image quality optimization system can find the light source adjustment strategy that optimizes image quality, such as maximizing SNR, contrast and sharpness, ensuring the best imaging results during thermoforming and providing high-quality image data for subsequent pose monitoring. Specifically, the process of constructing the comprehensive model includes:
a. Data integration and preprocessing
First, the collected historical data are collated, including the thermal radiation characteristics of the components, previous light source parameter adjustment records and the corresponding imaging quality indices (SNR, contrast and sharpness). The data are then cleaned and standardized, with outliers removed, to ensure data quality.
b. Model fusion design
A comprehensive model is established via multivariate regression analysis, using linear or nonlinear regression, with key parameters of the illumination model, the thermal radiation transmission model and the image quality evaluation model as independent variables and SNR, contrast and sharpness as dependent variables.
Then a machine learning algorithm such as a support vector machine (SVM), an artificial neural network (ANN) or a random forest (RF) takes a feature vector comprising the thermal radiation data and light source parameters as input and outputs the predicted image quality indices, learning the relation between features and outcomes from the training set.
c. Parameter optimization and strategy generation
A comprehensive objective function is defined, for example minimizing the gap between the predicted image quality indices and their expected values while weighting the indices so that overall image quality is maximized. A particle swarm optimization algorithm then searches for the light source parameter combination (brightness, color temperature, band, direction) that optimizes this objective function. In this embodiment there are three image quality indices: signal-to-noise ratio (SNR), contrast (C) and sharpness (Clarity), while the light source parameters to be optimized are brightness (Luminance, L), color temperature (Color Temperature, CT) and band (B). The goal is to find a set of light source parameters such that the three indices are as close as possible to their ideal expected values (denoted SNR_des, C_des and Clarity_des below), while accounting for their relative importance through weights (w1, w2, w3). Concretely, a comprehensive objective function J is defined that measures the difference between the predicted image quality and the expected values as a weighted sum of the index differences; for ease of optimization it takes the form of a sum of squared errors, namely:
J = w1·(SNR_pred − SNR_des)² + w2·(C_pred − C_des)² + w3·(Clarity_pred − Clarity_des)²
where SNR_pred, C_pred and Clarity_pred are the signal-to-noise ratio, contrast and sharpness predicted from the physical model under the current light source parameters. By defining this comprehensive objective function, which jointly considers the three key image quality indices and balances their importance through the weights, the subsequent steps use a particle swarm optimization algorithm to find the set of optimal light source parameters (brightness L, color temperature CT, band B) that minimizes J, i.e. that minimizes the gap between the actual image quality indices and the expected values.
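A minimal sketch of this objective, assuming the predicted indices come from the fused physical model; the function name and the example values are illustrative.

```python
# Weighted sum-of-squares gap between predicted and desired quality indices.
def objective_J(pred, desired, weights):
    """pred/desired: (SNR, C, Clarity) triples; weights: (w1, w2, w3)."""
    return sum(w * (p - d) ** 2 for w, p, d in zip(weights, pred, desired))

# Example with made-up numbers: J for a slightly under-target configuration.
# objective_J((28.0, 0.62, 0.70), (30.0, 0.70, 0.80), (1.0, 0.5, 0.5))
```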
d. Real-time application and dynamic adjustment
During the thermoforming process, thermal radiation data monitored in real time are used as input, and the expected image quality indices under a given light source adjustment strategy are predicted by the trained comprehensive model. The prediction is compared with the set image quality target, and the light source parameters are adjusted dynamically until optimal image quality is reached or approached. Taking gradient descent as an example of the parameter optimization: first, the partial derivatives of the objective function J with respect to each light source parameter (brightness L, color temperature CT, band B) are computed, forming a gradient vector; then, in each iteration, the light source parameters are updated along the negative gradient direction until convergence to a local minimum. At the practical level, this step applies the previously established models and methods to the real-time thermoforming monitoring system: by monitoring the thermal radiation data in real time, the trained comprehensive model predicts the expected image quality indices under the current light source adjustment strategy, and the actual light source parameters are adjusted dynamically based on the comparison between prediction and the preset image quality target, gradually approaching the global or local optimum and achieving real-time optimal control of image quality. This ensures that the theoretical model operates effectively in the actual production environment, continuously driving image quality toward the ideal state.
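A sketch of such a loop under stated assumptions: `predict_quality` stands in for the trained comprehensive model, and a finite-difference gradient replaces closed-form derivatives, which the text does not give.

```python
# Illustrative gradient descent over (L, CT, B); hyperparameters are assumptions.
import numpy as np

def optimize_light_params(params, predict_quality, desired, weights,
                          lr=0.05, eps=1e-3, iters=200):
    x = np.asarray(params, dtype=float)
    def J(p):  # the comprehensive objective from step c
        return sum(w * (q - d) ** 2
                   for w, q, d in zip(weights, predict_quality(p), desired))
    for _ in range(iters):
        grad = np.array([(J(x + eps * e) - J(x - eps * e)) / (2 * eps)
                         for e in np.eye(len(x))])    # central differences
        x -= lr * grad                                 # step along -gradient
    return x
```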
Through the steps, the image quality optimization system comprehensively utilizes the historical data and the physical model, and a comprehensive model capable of dynamically predicting and optimizing the light source adjustment strategy is constructed, so that the imaging quality in the thermoforming process is ensured to be optimal, and the high standard requirement of subsequent pose monitoring is met.
Step S1-2-4: a particle swarm optimization (PSO) algorithm is adopted; particle swarm initialization randomly generates a group of initial light source parameter configurations, each particle representing one light source adjustment strategy, and each particle is assigned an initial velocity v and position x. In applying PSO to thermoforming component pose monitoring and early warning, the position of each particle represents its current location in the solution space; since the light source adjustment parameters directly determine monitoring quality and efficiency, brightness, color temperature, band and direction are taken as the components of the position vector, which together constitute a specific light source adjustment strategy. For example, the position x = (brightness L, color temperature CT, band B, direction D) represents one specific light source configuration. The velocity v (Velocity) determines how the particle moves from its current position to the next position in the solution space; in this embodiment the velocity is likewise four-dimensional, since it corresponds to the rates of change of the four parameters. That is, v = (Δbrightness, Δcolor temperature, Δband, Δdirection) describes how each particle's position (i.e., its light source parameters) should be adjusted in the next iteration to approach a better solution. Through PSO the invention explores different light source adjustment strategies, with the velocity guiding each particle in gradually adjusting its position (light source parameters) to achieve a better monitoring and early warning effect, i.e., to minimize or maximize a given objective function (such as improving image sharpness or reducing noise). In each iteration, by comparing the individual best solution with the global best solution, the particles collectively learn and converge toward a better light source configuration strategy.
Step S1-2-5: calculating an image quality index corresponding to each particle (namely, a light source parameter combination) through a CPU in the programmable light source matrix according to the physical model and the historical data;
Step S1-2-6: if the current particle position yields an image quality index better than that previously recorded by the particle, pBest is updated. Specifically, pBest (Personal Best) is the optimal solution each particle has encountered during its own search; for the thermoforming component pose monitoring method, pBest is, for each particle (i.e., each specific light source adjustment strategy), the combination giving the highest signal-to-noise ratio, contrast or sharpness that the particle has achieved. During the algorithm run, if the current particle position (i.e., the current light source adjustment parameters) yields an image quality index better than its previously recorded pBest, the pBest value is updated. If the current particle position yields an image quality index better than that recorded by all particles in the swarm, gBest is updated. Specifically, gBest (Global Best) is the best solution found so far by the whole swarm, i.e., among all attempted light source adjustment strategies, the one yielding the best image quality. Whenever the pBest of any particle is better than the current gBest, gBest is updated to this better solution. All particles are influenced by gBest, which draws them toward the best direction found by the population. By continuously updating pBest and gBest and adjusting particle velocities and positions accordingly, the particle swarm optimization algorithm guides the swarm to converge gradually to an optimal region of the solution space, finding or approaching the globally optimal light source adjustment strategy.
According to pBest and gBest, the velocity v and position x of each particle are updated using the following formulas:
v_i = w·v_i + c1·r1·(pBest_i − x_i) + c2·r2·(gBest − x_i)
x_i = x_i + v_i
where v_i denotes the current velocity of particle i, w is the inertia weight, c1 and c2 are acceleration constants, and r1 and r2 are random numbers in [0,1] that introduce randomness. The term c1·r1·(pBest_i − x_i) is the cognitive component, i.e., the difference between the particle's individual best solution pBest_i and its current position x_i, scaled by the acceleration constant c1 and the random number r1; the term c2·r2·(gBest − x_i) is the social component, i.e., the difference between the global best solution gBest and the current position x_i, scaled by the acceleration constant c2 and the random number r2. Together these terms determine the velocity change of each particle in the next iteration, steering the direction and speed of its movement through the solution space in search of better solutions.
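For illustration, a compact NumPy sketch of the swarm update (steps S1-2-4 to S1-2-7) over normalized light source parameters; the swarm size, inertia weight, acceleration constants and the stand-in cost function are assumptions, not values from the patent.

```python
# Illustrative PSO over x = (L, CT, B, D), all normalized to [0, 1].
import numpy as np

rng = np.random.default_rng(0)
N, DIM = 30, 4                            # particles x (L, CT, B, D)
w, c1, c2 = 0.7, 1.5, 1.5                 # inertia weight, acceleration constants

def quality_cost(p):                      # stand-in for the model-predicted J
    return float(((p - 0.5) ** 2).sum())

x = rng.uniform(0.0, 1.0, (N, DIM))       # step S1-2-4: random initial positions
v = np.zeros((N, DIM))
pbest, pbest_val = x.copy(), np.full(N, np.inf)
gbest, gbest_val = x[0].copy(), np.inf

for _ in range(100):                      # step S1-2-7: fixed iteration budget
    cost = np.array([quality_cost(p) for p in x])     # step S1-2-5
    better = cost < pbest_val                         # step S1-2-6: update pBest
    pbest[better], pbest_val[better] = x[better], cost[better]
    if cost.min() < gbest_val:                        # step S1-2-6: update gBest
        gbest, gbest_val = x[cost.argmin()].copy(), float(cost.min())
    r1, r2 = rng.random((N, DIM)), rng.random((N, DIM))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, 1.0)          # keep parameters in range
```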
Step S1-2-7: repeating the steps S1-2-5 to S1-2-6 until the preset iteration times are reached, and after the final solution iteration is finished, enabling gBest positions in the particle swarm to represent the optimal light source adjustment strategy;
step S1-2-8: and controlling the current, the voltage and the pulse width of each light source unit in the light source matrix by the CPU in the programmable light source matrix according to the calculated strategy instruction, and realizing the adjustment of brightness, color temperature and direction.
Through the steps, the optimal light source adjustment strategy can be dynamically and automatically calculated and executed, the calculated strategy can be applied to the light source matrix in real time, meanwhile, the thermoforming monitoring system can continuously monitor the imaging quality, the strategy is finely adjusted according to the feedback result, closed-loop control is formed, and the light source adjustment is ensured to be always optimal. The object of searching the optimal solution through the particle swarm optimization algorithm is to find a group of light source adjusting parameters (brightness L, color temperature CT, wave band B and irradiation direction D) so that image quality evaluation indexes (such as signal to noise ratio SNR, contrast C and definition Clarity) are optimal, so as to adapt to the continuously changing heat radiation condition in the thermoforming process, ensure the imaging quality and provide a reliable data base for subsequent image processing.
Step S2: collecting image data of the component (the image data comprises RGB image data, near infrared image data and far infrared image data) by a multi-mode image acquisition unit, and integrating the image data into a CPU in an automatic monitoring control unit; the multi-mode image acquisition unit is equipped with different types of camera systems, and can simultaneously or sequentially capture image data in different spectral ranges to acquire more comprehensive information, and the unit at least comprises three key components: RGB cameras, near infrared cameras, and far infrared cameras. Wherein the RGB camera is responsible for capturing images in the visible spectrum, the RGB image is capable of providing visual information of the appearance characteristics, color changes, etc. of the component during thermoforming, and is critical to surface quality and structural recognition (image processing and computer vision techniques such as edge detection, shape matching, or feature point recognition can also be utilized to determine whether the component is tilted, rotated, or otherwise displaced relative to the intended location by analyzing the RGB image). Near infrared cameras capture electromagnetic wave images between the visible and mid-infrared bands. The image of this band helps to penetrate the surface of the material, revealing internal structures or enhancing the characteristics of specific substances, which are important for detecting the temperature distribution of the thermoformed article. Far infrared cameras are used to capture infrared radiation of longer wavelength, mainly for temperature measurement and thermal distribution analysis. In the thermoforming process, the far infrared image can accurately reflect the thermal field distribution of the component.
Optionally, the invention can also adopt an accurate time synchronization mechanism (such as GPS time synchronization, internal clock synchronization or external trigger signals), ensure that all cameras shoot images at the same moment or according to a preset time interval, integrate the acquired RGB, near infrared and far infrared image data into a unified data processing platform to comprehensively analyze information provided by different modes, and through the steps, the multi-mode image acquisition unit can provide abundant visual information of the thermoforming component and can effectively monitor the temperature and the temperature distribution condition of the component.
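Assuming the three streams are time-synchronized and registered to a common resolution, a simple way to hand them to the CPU pipeline is to stack them into one array; the normalization scheme below is an illustrative choice, not specified by the patent.

```python
# Illustrative fusion of RGB + NIR + FIR frames into one HxWx5 cube.
import numpy as np

def stack_modalities(rgb, nir, fir):
    """rgb: HxWx3 uint8; nir, fir: HxW arrays (already registered)."""
    rgb_f = rgb.astype(np.float32) / 255.0
    nir_f = (nir - nir.min()) / (np.ptp(nir) + 1e-9)   # per-frame normalization
    fir_f = (fir - fir.min()) / (np.ptp(fir) + 1e-9)
    return np.dstack([rgb_f, nir_f[..., None], fir_f[..., None]])
```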
Step S3: the CPU in the automatic monitoring control unit is used for carrying out post-processing on the acquired image, filtering interference light emitted by the component, enhancing the definition and contrast of the component image, and specifically comprises the following steps:
Step S3-1: the CPU in the automatic monitoring control unit receives original image data (the data comprise visible light images and infrared images) from the multi-mode image acquisition unit, the image is subjected to frequency domain filtering processing, image noise caused by heat radiation is removed, and image blurring and distortion caused by a heat source are reduced.
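The patent does not fix the filter type; one plausible realization of this frequency-domain step is a Gaussian low-pass applied in the FFT domain, as in this NumPy sketch (the cutoff sigma is an assumption).

```python
# Illustrative frequency-domain denoising for step S3-1 (grayscale input).
import numpy as np

def fft_lowpass(img, sigma=30.0):
    F = np.fft.fftshift(np.fft.fft2(img.astype(float)))   # centered spectrum
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    H = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))   # Gaussian low-pass mask
    out = np.fft.ifft2(np.fft.ifftshift(F * H)).real
    return np.clip(out, 0, 255).astype(np.uint8)
```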
Step S3-2: the background difference algorithm is adopted to extract the change of the component relative to the background from the continuous image sequence, so that the light interference emitted by the component can be effectively eliminated, the contour of the component is clearer, and the method specifically comprises the following steps:
step S3-2-1: one or more frames of images are selected from the beginning of the sequence of successive images as background references, ensuring that the selected frames reflect mainly static background information.
Step S3-2-2: creating a composite background image by averaging the selected reference frames (this background image will serve as a base template for subsequent differencing for comparing and distinguishing dynamic objects from static background);
Step S3-2-3: a pixel-by-pixel difference operation is performed between the current image of the component and the composite background image to obtain a difference map (pixel values are subtracted directly, and the result of the difference operation forms the difference map, which shows where the current image of the component differs from the composite background). Changed regions appear as highlights in the difference map, while background regions show low or zero values.
Step S3-2-4: denoising the difference image by adopting a median filtering method, and reducing random noise while maintaining the edge;
Step S3-2-5: a threshold is set to distinguish change regions of the difference map from noise; difference values below the threshold are set to zero, while pixels above the threshold represent dynamic elements. Here, a dynamic element is a region or object that changes relative to the background across the continuous image sequence, produced by movement, deformation and similar actions of the thermoforming component during production. Pixels above the threshold represent these dynamic elements because, during background differencing, they show large difference values, indicating that the component is moving or changing state rather than reflecting small changes in the background itself or imaging noise. The main purpose of thresholding is to filter out insignificant noise or minor variations caused by light fluctuations, camera sensor noise and the like; thresholding makes clear which changes are significant (i.e., dynamic elements) and which are background noise or negligible variations that should be ignored. For example, assume that in the processed image sequence the pixel values of the component in background and stationary states typically differ by no more than 10 gray levels, while the maximum difference from the background when the component moves or deforms can reach 50 gray levels. A threshold of 20 gray levels may then be set initially, with the following effect: difference values below 20 gray levels are considered close to background-noise level, or non-critical differences due to weak lighting changes, and are set to zero, treated as consistent with the background and requiring no attention; difference values above 20 gray levels mark regions considered significantly different from the background, judged to result from a change in the pose state of the component, and are therefore labeled as dynamic elements or change regions.
Step S3-2-6: connectivity analysis is carried out on the difference graph after threshold processing, and adjacent dynamic elements are aggregated into a region, so that a movement region of a component is accurately defined;
Wherein the dynamic element represents the apparent difference that the current image pixel of the component exhibits in the difference map relative to the set threshold, in particular, the thresholded difference map in which the value of each pixel represents the difference in brightness or color. If the brightness of a pixel in the current frame is higher than the brightness in the background frame, the difference value is positive; conversely, if the luminance is low, the difference value is negative (for color images, the difference may be calculated based on the individual channels of the RGB or other color space).
In order to distinguish a truly meaningful change (e.g., movement or shape change of a member) from a minor difference due to factors such as illumination change, image noise, etc., a threshold value is set. The threshold is a critical point for filtering out minor or insignificant differences, and only significant changes remain, and when the difference value between the pixels in the current image of the component and the background image in the difference map exceeds the preset threshold, the pixels are considered to represent significant differences, which means that the component corresponding to the pixels has substantial displacement, deformation, brightness or color changes relative to the background. For example, if a component is moving, its edges will exhibit high disparity values in the disparity map, as these portions are significantly different from the corresponding positions in the background image.
The difference map is obtained by comparing the current image of the component with the comprehensive background image, and the difference value of each pixel reflects the light intensity or color difference of the component image and the background image at the position. When this difference is significant, i.e. the difference value is above a set threshold, these pixels are considered as dynamic elements and may also be referred to as "high difference value pixels", meaning that they correspond to areas of significant distinction between the component and the background. Thus, connectivity analysis is performed and adjacent dynamic elements are aggregated in order to more accurately define the actual motion or change area of the component.
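Steps S3-2-1 to S3-2-5 can be sketched with OpenCV as follows; the 20-gray-level threshold mirrors the worked example above and is otherwise arbitrary.

```python
# Illustrative background differencing, denoising and thresholding.
import cv2
import numpy as np

def difference_mask(reference_frames, current, thresh=20):
    """reference_frames: grayscale background frames; current: grayscale frame."""
    background = np.mean([f.astype(np.float32) for f in reference_frames],
                         axis=0)                                   # S3-2-1/2
    diff = cv2.absdiff(current.astype(np.float32), background)     # S3-2-3
    diff = cv2.medianBlur(diff.astype(np.uint8), 5)                # S3-2-4
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)  # S3-2-5
    return mask
```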
Further, connectivity analysis is an image processing technique performed on the thresholded disparity map, in order to identify and label successive pixel groups with similar properties (here, disparity values above the threshold), i.e., connected domains. The following are the steps of connectivity analysis:
a. Label initialization: first, the entire difference map is traversed to find pixels above the threshold (the starting points of change regions); these pixels are marked as seed points of different connected domains, each seed point receiving a unique label.
B. Region expansion: for each marked seed point, the algorithm will examine its surrounding neighboring pixels. If these neighboring pixels are also above the threshold and have not been marked, they are assigned to the same label and added to the current connected domain. This process is recursively performed until all pixels adjacent to the current connected domain that are eligible are marked.
C. Repeating the marking: the above process is repeated until all pixels above the threshold are assigned to at least one connected domain. Thus, each connected region represents a continuous region with a high difference value, i.e. a region representing the movement of the component relative to the background.
Through connectivity analysis, not only can the change area be accurately defined, but also isolated noise points can be eliminated, and clear component movement area outline can be obtained, which is important for subsequent pose analysis.
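Steps a to c amount to connected-component labeling; a minimal OpenCV sketch follows, with a small-area filter standing in for the noise rejection described above (the area cutoff is an assumption).

```python
# Illustrative connectivity analysis on the thresholded difference mask.
import cv2

def motion_regions(mask, min_area=50):
    """mask: binary uint8 image; returns bounding boxes of motion regions."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    boxes = []
    for i in range(1, n):                 # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:              # drop isolated noise points
            boxes.append((int(x), int(y), int(w), int(h)))
    return boxes
```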
By the steps, the dynamic change of the thermal forming component relative to the background can be effectively separated from the continuous image sequence, and meanwhile, the interference of environmental factors is eliminated.
Step S3-3: the gray distribution of the image is adjusted through histogram equalization and self-adaptive contrast, so that the contrast of the image is enhanced, and the characteristics of the component are obvious;
Step S3-4: the sharpening filter (such as Unsharp Masking and Laplacian sharpening) is adopted to enhance the image edge, so that the visibility of details is improved, and the microstructure of the component is clearer.
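Steps S3-3 and S3-4 could be realized, for example, with CLAHE for adaptive contrast and an unsharp-mask pass for edge enhancement; the parameter values below are assumptions.

```python
# Illustrative contrast enhancement (S3-3) and sharpening (S3-4).
import cv2

def enhance(img_gray):
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    eq = clahe.apply(img_gray)                       # adaptive histogram equalization
    blur = cv2.GaussianBlur(eq, (0, 0), sigmaX=3)
    return cv2.addWeighted(eq, 1.5, blur, -0.5, 0)   # unsharp masking
```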
Step S3-5: and integrating the processed image data into a real-time display interface to provide preparation for subsequent component pose analysis.
Through the image processing steps, the position and posture information of the thermoforming component can be accurately reflected after the image data acquired from the multi-mode image acquisition unit is effectively processed, meanwhile, the interference of environmental and heat radiation factors is reduced, and the reliability and the precision of the whole thermoforming monitoring system are improved.
Step S4: the method for processing the image by using the multi-scale depth vision network comprises the steps of:
Step S4-1: the multi-scale depth vision network extracts multi-scale features of components in an image in real time through convolution kernels (i.e. filters with different scales) with different sizes, wherein the multi-scale features of the components comprise edge features (one of the most basic features in the image and represent abrupt changes of brightness or color among pixels and help to describe the outline and boundary of an object), texture features (repeated structures in the image and describe texture information of the surface of the components, such as smoothness, roughness and the like), shape features (including the overall outline structure of the components and the local structure of the components) and local second-order statistical features (Gabor wavelet coefficients of the image);
These features capture changes of direction and intensity in localized areas of the component and are well suited to texture analysis and object recognition;
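Step S4-1 names the Gabor wavelet coefficients of the image as the local second-order statistical features; one conventional way to obtain such coefficients is sketched below with OpenCV's Gabor kernels, where the kernel size, four orientations, and wavelength are illustrative assumptions.

```python
import cv2
import numpy as np

def gabor_features(gray, ksize=21):
    """Gabor responses over several orientations, capturing the changes of
    direction and intensity in localized areas of the component (step S4-1)."""
    features = []
    for theta in np.arange(0, np.pi, np.pi / 4):        # 0, 45, 90, 135 degrees
        kernel = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0)
        response = cv2.filter2D(gray, cv2.CV_32F, kernel)
        # second-order statistics of each response: mean and variance
        features.extend([response.mean(), response.var()])
    return np.array(features)
```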
Step S4-2: the extracted multi-scale features are integrated into a comprehensive feature vector through weighted fusion, enhancing the description of both the component's details and its overall structure;
Step S4-3: based on the comprehensive feature vector, the current pose features of the component are further extracted by the fully connected layers of the multi-scale depth vision network. The role of the fully connected layers is to extract pose features of the component, including but not limited to position coordinates and rotation angles. Their workflow is as follows: the comprehensive feature vector is first flattened into a one-dimensional vector and fed to a fully connected layer whose neurons are each connected, with individual weights, to every element of the input vector; these weights determine how the input features are combined into a new, more targeted feature representation. Several fully connected layers are stacked, each further abstracting and refining the features of the previous layer, up to a final output layer that directly outputs predictions related to component pose. Through this series of operations, the fully connected layers screen and strengthen the features most relevant to the component pose from the comprehensive features, laying a solid foundation for the subsequent pose parameter output (the regression layer of step S4-4).
Step S4-4: based on the extracted current pose features of the component, the pose parameters of the component are output through a regression layer; these include, but are not limited to, the component's specific position coordinates and rotation angle.
Through the above steps, a multi-scale depth vision network of the prior art captures image features at different scales simultaneously, so the network understands the image information more comprehensively, including detail features of the component (such as fine textures) and its shape and structural features; this improves the extraction precision of the component's current pose features and therefore the accuracy of pose monitoring (one possible implementation is sketched below).
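The disclosure does not fix a concrete network implementation; the PyTorch sketch below is one plausible reading of steps S4-1 through S4-4, with parallel convolution branches of different kernel sizes, a learnable weighted fusion, stacked fully connected layers, and a regression head. All layer widths and the six-dimensional pose output (x, y, z plus three rotation angles) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiScalePoseNet(nn.Module):
    """Sketch of the multi-scale depth vision network (steps S4-1 to S4-4)."""

    def __init__(self, in_channels=1, pose_dim=6):
        super().__init__()
        # S4-1: branches with 3x3, 5x5 and 7x7 kernels extract edge, texture
        # and shape features at different scales
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_channels, 16, k, padding=k // 2),
                          nn.ReLU(), nn.AdaptiveAvgPool2d(8))
            for k in (3, 5, 7)
        ])
        # S4-2: learnable weights for the weighted fusion of the scales
        self.scale_weights = nn.Parameter(torch.ones(3))
        # S4-3: the fused map is flattened and refined by stacked FC layers
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 8 * 8, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
        )
        # S4-4: the regression layer outputs the pose parameters
        self.regressor = nn.Linear(64, pose_dim)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]   # one map per scale
        w = torch.softmax(self.scale_weights, dim=0)      # normalized fusion weights
        fused = sum(wi * fi for wi, fi in zip(w, feats))  # S4-2: weighted fusion
        return self.regressor(self.fc(fused))             # S4-3 + S4-4

# usage: MultiScalePoseNet()(torch.randn(1, 1, 128, 128)) -> tensor of shape (1, 6)
```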
Step S5: a component pose deviation threshold is preset, comprising a component position coordinate threshold and a component rotation angle range threshold. The component position coordinate threshold is the maximum allowable deviation between the component's actual position and its ideal position in the x, y, and z directions; the component rotation angle range threshold is the maximum allowable deviation between the component's actual rotation angle and its preset rotation angle.
The current pose features of the component are then matched against the preset standard component pose, specifically through the following steps:
Step S5-1: the difference between the component's current pose coordinates and the standard pose coordinates is calculated to obtain the component's deviation in the x and y directions;
Step S5-2: the rotation parameters of the component's current posture are compared with those of the standard posture, and the angle difference is calculated;
Step S5-3: if the deviation is within the component position coordinate threshold and the angle difference is within the component rotation angle range threshold, the component is positioned by the limit pin; if the deviation exceeds the component position coordinate threshold or the angle difference exceeds the component rotation angle range threshold, an early-warning signal is sent to the automated monitoring control unit.

By step S4-4 the thermoforming monitoring system has already extracted detailed pose information of the current component from the acquired image through the multi-scale depth vision network, including but not limited to the component's position coordinates (x, y, z) and rotation parameters; this information constitutes the pose features. Step S5 requires a preset standard component pose: according to the design specification and production requirements of the component, a set of standard pose parameters, covering the component position and the allowable rotation angle range, is preset as the comparison reference. An algorithm in the thermoforming monitoring system compares the currently extracted pose features with the preset standard pose and evaluates conformance to the production standard by calculating the rotational and translational deviations.

Before production, a series of maximum allowable pose deviations is set according to the precision requirements of the component and the characteristics of the thermoforming process; this threshold covers the maximum position deviation and the maximum tolerance of the rotation angle. The difference between the current pose and the standard pose is calculated and compared with the preset threshold; if the deviation in any dimension exceeds the set threshold, the component pose is judged unqualified. As soon as a deviation outside the threshold range is found, the thermoforming monitoring system immediately sends a signal to the automated monitoring control unit, which activates an audible and visual alarm, including but not limited to a high-loudness warning sound, flashing lights, or a conspicuous warning message on a display screen, so that operators notice the problem in time and take corrective measures, such as adjusting the working parameters of the thermoforming machine, suspending production for manual adjustment, or triggering an automatic correction program, thereby safeguarding the quality and safety of subsequent production.
In summary, step S5 is the response and action stage of the whole monitoring and early-warning process: through precise pose matching, strict deviation judgment, and a real-time alarm mechanism, it ensures the efficiency and accuracy of the thermoforming process, avoids an increased rejection rate caused by improper pose, and raises the safety and automation level of production (a minimal sketch of the matching steps follows).
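As an illustration of the matching and deviation check of steps S5-1 through S5-3, a minimal sketch follows; the threshold values, units, and return labels are illustrative assumptions, not values fixed by the disclosure.

```python
import numpy as np

def check_pose(current, standard, pos_threshold=0.5, angle_threshold=2.0):
    """Compare the current pose with the standard pose (steps S5-1 to S5-3).

    current, standard : dicts with 'xyz' coordinates and 'angles' in degrees
    """
    # S5-1: coordinate deviation of the component along each axis
    pos_dev = np.abs(np.asarray(current["xyz"]) - np.asarray(standard["xyz"]))
    # S5-2: angle difference against the standard rotation parameters
    ang_dev = np.abs(np.asarray(current["angles"]) - np.asarray(standard["angles"]))

    # S5-3: within both thresholds the component is positioned by the limit pin;
    # otherwise an early-warning signal goes to the monitoring control unit
    if (pos_dev <= pos_threshold).all() and (ang_dev <= angle_threshold).all():
        return "position_with_limit_pin"
    return "send_early_warning_signal"

# usage:
# check_pose({"xyz": (10.2, 5.1, 0.0), "angles": (0.4,)},
#            {"xyz": (10.0, 5.0, 0.0), "angles": (0.0,)})  # -> "position_with_limit_pin"
```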
Finally, it should be noted that the above embodiments merely illustrate the technical solution of the present invention and do not limit it; although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents, and such modifications and substitutions do not depart from the spirit of the invention.
Claims (10)
1. A thermal forming component pose monitoring and early warning method for monitoring component pose in a thermoforming monitoring system, the thermoforming monitoring system comprising a programmable light source matrix, a multi-mode image acquisition unit, and an automated monitoring control unit, characterized in that the method comprises the following steps:
Step S1: dynamically adjusting light source parameters according to the thermal radiation characteristics of the thermoforming component using the programmable light source matrix, thereby optimizing the imaging environment;
Step S2: collecting image data of the component through the multi-mode image acquisition unit and transferring the image data to a CPU in the automated monitoring control unit;
Step S3: post-processing the acquired image with the CPU in the automated monitoring control unit, filtering interference light emitted by the component, and enhancing the clarity and contrast of the component image;
Step S4: processing the image with a multi-scale depth vision network and extracting the current pose features of the component;
Step S5: presetting a component pose deviation threshold, matching the current pose features of the component against the preset standard component pose, and issuing an audible and visual alarm through the automated monitoring control unit if the pose deviation exceeds the preset threshold.
2. The thermal forming component pose monitoring and early warning method according to claim 1, wherein dynamically adjusting the light source parameters according to the thermal radiation characteristics of the thermoforming component using the programmable light source matrix and optimizing the imaging environment in step S1 comprises:
Step S1-1: monitoring, in real time, thermal radiation intensity and distribution characteristic data of the component surface during the thermoforming process through an integrated thermal imaging sensor or thermal radiation monitoring device;
Step S1-2: calculating, with a CPU in the programmable light source matrix and according to the monitored thermal radiation data, a light source adjustment strategy covering the adjustment of light source brightness, color temperature, emission band, and irradiation direction.
3. The thermal forming component pose monitoring and early warning method according to claim 2, wherein calculating the light source adjustment strategy with the CPU in the programmable light source matrix in step S1-2 comprises:
Step S1-2-1: performing preliminary processing on the thermal radiation data collected from the thermal imaging sensor or thermal radiation monitoring device through an image quality optimization system, including noise removal by filtering, error correction, and standardization, to ensure the accuracy and usability of the data;
Step S1-2-2: extracting the maximum value, average value, distribution pattern, rate of change, and temperature gradient of the component surface from the preprocessed data;
Step S1-2-3: establishing, in the image quality optimization system, a mathematical relationship between light source adjustment and imaging quality based on historical data and a physical model;
Step S1-2-4: adopting a particle swarm optimization algorithm, randomly generating a set of initial light source parameter configurations through particle swarm initialization, wherein each particle represents a light source adjustment strategy and is assigned an initial velocity and position;
Step S1-2-5: calculating, with the CPU in the programmable light source matrix, the image quality index corresponding to each particle according to the physical model and the historical data;
Step S1-2-6: updating pBest if the current particle position yields an image quality index better than the one previously recorded for that particle, and updating gBest if it is better than the one previously recorded across the whole particle swarm;
wherein pBest denotes the best solution each particle has encountered during its search, and gBest denotes the best solution found so far by the entire particle swarm;
Step S1-2-7: repeating steps S1-2-5 to S1-2-6 until a preset number of iterations is reached; after the final iteration, the gBest position in the particle swarm represents the light source adjustment strategy (see the sketch after this claim);
Step S1-2-8: controlling, with the CPU in the programmable light source matrix and according to the calculated strategy instruction, the current, voltage, and pulse width of each light source unit in the light source matrix, thereby adjusting brightness, color temperature, and direction.
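For illustration only, the pBest/gBest update loop of steps S1-2-4 through S1-2-7 can be sketched in Python as follows; the swarm size, iteration count, inertia and acceleration coefficients, parameter bounds, and the image quality objective `quality` are all illustrative assumptions, since the actual objective is derived from the physical model and historical data of step S1-2-3.

```python
import numpy as np

def pso_light_source(quality, dim=4, n_particles=20, iters=50,
                     w=0.7, c1=1.5, c2=1.5, lo=0.0, hi=1.0):
    """Particle swarm search for a light source adjustment strategy
    (steps S1-2-4 to S1-2-7); quality(x) scores one normalized parameter
    vector (e.g. brightness, color temperature, band, direction)."""
    rng = np.random.default_rng(0)
    # S1-2-4: random initial configurations; each row is one particle
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([quality(p) for p in x])    # S1-2-5: score each particle
    gbest = pbest[pbest_val.argmax()].copy()
    gbest_val = pbest_val.max()

    for _ in range(iters):                           # S1-2-7: preset iteration count
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([quality(p) for p in x])
        better = vals > pbest_val                    # S1-2-6: update pBest ...
        pbest[better], pbest_val[better] = x[better], vals[better]
        if pbest_val.max() > gbest_val:              # ... and gBest
            gbest_val = pbest_val.max()
            gbest = pbest[pbest_val.argmax()].copy()
    return gbest    # the final gBest position is the light source strategy

# usage with a toy objective:
# best = pso_light_source(lambda p: -np.sum((p - 0.6) ** 2))
```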
4. The thermal forming component pose monitoring and early warning method according to claim 1, wherein the image data of the component in step S2 comprises RGB image data, near-infrared image data, and far-infrared image data.
5. The thermal forming component pose monitoring and early warning method according to claim 1, wherein post-processing the acquired image with the CPU in the automated monitoring control unit in step S3 comprises:
Step S3-1: the CPU in the automated monitoring control unit receives the original image data from the multi-mode image acquisition unit, performs frequency-domain filtering on the image, and removes image noise caused by thermal radiation (see the sketch after this claim);
Step S3-2: extracting the change of the component relative to the background from the continuous image sequence with a background difference algorithm;
Step S3-3: adjusting the gray-level distribution of the image through histogram equalization and adaptive contrast adjustment to enhance image contrast and make the component's features prominent;
Step S3-4: applying a sharpening filter to enhance image edges and improve the visibility of details;
Step S3-5: integrating the processed image data into a real-time display interface in preparation for subsequent component pose analysis.
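As referenced in claim 5's step S3-1, frequency-domain filtering removes image noise caused by thermal radiation; a minimal low-pass sketch in Python follows, where the circular cutoff fraction is an illustrative assumption, since the disclosure does not specify the filter shape.

```python
import numpy as np

def frequency_domain_denoise(gray, cutoff=0.15):
    """Low-pass frequency-domain filtering (step S3-1): suppress the
    high-frequency noise that thermal radiation adds to the image."""
    f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float64)))
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    # keep only the frequencies inside a circular low-pass region
    radius = cutoff * min(h, w)
    mask = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) <= radius ** 2
    filtered = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    return np.clip(filtered, 0, 255).astype(np.uint8)
```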
6. The thermal forming component pose monitoring and early warning method according to claim 5, wherein extracting the change of the component relative to the background from the continuous image sequence with the background difference algorithm in step S3-2 comprises:
Step S3-2-1: selecting one or more images from the beginning of the continuous image sequence as background references;
Step S3-2-2: creating a composite background image by averaging the selected reference frames;
Step S3-2-3: performing a pixel-by-pixel difference operation between the current image of the component and the composite background image to obtain a difference map;
Step S3-2-4: denoising the difference map with median filtering;
Step S3-2-5: thresholding the difference map to distinguish change regions from noise, wherein difference values below the threshold are set to zero and pixels above the threshold represent dynamic elements;
Step S3-2-6: performing connectivity analysis on the thresholded difference map and aggregating adjacent dynamic elements into regions, thereby delineating the component's movement region.
7. The thermal forming component pose monitoring and early warning method according to claim 1, wherein processing the image with the multi-scale depth vision network and extracting the current pose features of the component in step S4 comprises:
Step S4-1: the multi-scale depth vision network extracts multi-scale features of the component in the image in real time through convolution kernels of different sizes;
Step S4-2: integrating the extracted multi-scale features into a comprehensive feature vector through weighted fusion;
Step S4-3: further extracting, based on the comprehensive feature vector, the current pose features of the component with the fully connected layers of the multi-scale depth vision network;
Step S4-4: outputting the pose parameters of the component through a regression layer based on the extracted current pose features of the component.
8. The thermal forming component pose monitoring and early warning method according to claim 7, wherein the multi-scale features of the component comprise edge features, texture features, shape features, and local second-order statistical features; the current pose features of the component comprise position features and rotation angle features of the component; and the pose parameters of the component comprise the component's specific position coordinates and rotation angle.
9. The thermal forming component pose monitoring and early warning method according to claim 1, wherein the component pose deviation threshold in step S5 comprises a component position coordinate threshold and a component rotation angle range threshold.
10. The thermal forming component pose monitoring and early warning method according to claim 1, wherein matching the current pose features of the component against the preset standard component pose in step S5 comprises:
Step S5-1: calculating the difference between the component's current pose coordinates and the standard pose coordinates to obtain the component's deviation in the x and y directions;
Step S5-2: comparing the rotation parameters of the component's current posture with those of the standard posture and calculating the angle difference;
Step S5-3: if the deviation is within the component position coordinate threshold and the angle difference is within the component rotation angle range threshold, positioning the component with the limit pin; if the deviation exceeds the component position coordinate threshold or the angle difference exceeds the component rotation angle range threshold, sending an early-warning signal to the automated monitoring control unit.