CN116894951B - Jewelry online monitoring method based on image processing - Google Patents
- Publication number: CN116894951B (application CN202311159485.4A)
- Authority
- CN
- China
- Prior art keywords
- frequency component
- component image
- wavelet scale
- wavelet
- confidence level
- Prior art date
- Legal status: Active
Key concepts
- method (title, claims, abstract: 17)
- monitoring process (title, claims, abstract: 9)
- transformation (claims, abstract: 5)
- adaptive (claims: 6)
- segmentation (claims: 2)
- artificial neural network (claims: 1)
- normalization (claims: 1)
- principal component analysis (claims: 1)
Classifications
- G06V10/30—Noise filtering (image preprocessing)
- G06V10/245—Aligning, centring, orientation detection or correction of the image by locating a pattern; special marks for positioning
- G06V10/26—Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
- G06V10/42—Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
- G06V10/52—Scale-space analysis, e.g. wavelet analysis
- G06V10/60—Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
- G06V20/70—Labelling scene content, e.g. deriving syntactic or semantic representations
- Y02P90/30—Computing systems specially adapted for manufacturing (climate change mitigation technologies in the production or processing of goods)
Abstract
The invention relates to the technical field of image processing, in particular to an online jewelry monitoring method based on image processing, comprising the following steps: collecting a jewelry online monitoring gray level image; performing discrete wavelet transformation on the image and obtaining a gray level confidence and a structure confidence from, respectively, the gray distribution characteristics of the low-frequency component and the edge structure distribution characteristics of the high-frequency component; obtaining the edge region confidence of the first high-frequency component image at each wavelet scale from the gray confidence and the structure confidence; deriving adaptive wavelet scale weight coefficients from the edge region confidence at each wavelet scale to obtain a reconstructed image; and identifying the position of the jewelry in the image from the reconstructed image to complete online monitoring. The invention properly suppresses the reflective part of the jewelry area, renders the edge area clearer, and accurately monitors jewelry online.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to an online jewelry monitoring method based on image processing.
Background
In a jewelry monitoring scene, the physical properties of jewelry mean that its surface often has reflective areas, which appear as highlight regions in the captured jewelry image and interfere with acquiring jewelry surface and edge information. Reflection compensation can be carried out with the discrete wavelet transform: the high-frequency components where the reflection resides are appropriately suppressed and the high-frequency components in the edge area are appropriately enhanced, so that a clear jewelry image is obtained and online jewelry monitoring is completed.
Because the high-frequency and low-frequency components extracted by the discrete wavelet transform contain, respectively, the edge region and the overall structure region of the image, and the edge part and overall structure part of the jewelry reflective region are contained in them as well, thresholding the decomposition coefficients with a simple threshold alone makes the processing target hard to reach. Therefore, the jewelry edge confidence is obtained by combining the gray distribution characteristics of the low-frequency information with the edge structure distribution characteristics of the high-frequency information, and the image is reconstructed through confidence-based thresholding of the wavelet transform and the inverse wavelet transform. This realizes the enhancement of the image: the reflective part of the jewelry area is properly suppressed, the edge area becomes clearer, and online jewelry monitoring is finally completed.
Disclosure of Invention
In order to solve the problems, the invention provides an online jewelry monitoring method based on image processing, which comprises the following steps:
acquiring jewelry on-line monitoring gray level images;
performing discrete wavelet transformation on the jewelry online monitoring gray level image to obtain a low-frequency component image and a high-frequency component image under a first wavelet scale; recording a high-frequency component image of a first wavelet scale as a first high-frequency component image, and acquiring a low-frequency component image and a high-frequency component image of the first high-frequency component image under each wavelet scale, wherein the low-frequency component image and the high-frequency component image comprise a plurality of local areas; acquiring the gray level confidence level of each local area in the low-frequency component image of the first high-frequency component image under each wavelet scale according to the gray level distribution characteristics of the low-frequency component image; acquiring the structural confidence level of each local area in the high-frequency component image of the first high-frequency component image under each wavelet scale according to the edge structural distribution characteristics of the high-frequency component image;
acquiring the edge region confidence level of the high-frequency component image of the first high-frequency component image under each wavelet scale according to the gray level confidence level of each local area in the low-frequency component image of the first high-frequency component image under each wavelet scale and the structure confidence level of each local area in the high-frequency component image of the first high-frequency component image under each wavelet scale; acquiring the edge region confidence level of the first high-frequency component image under each wavelet scale according to the edge region confidence level of the high-frequency component image of the first high-frequency component image under each wavelet scale;
acquiring an adaptive wavelet scale weight coefficient of a low-frequency component image of the first high-frequency component image under each wavelet scale according to the edge region confidence level of the first high-frequency component image under each wavelet scale, and marking the adaptive wavelet scale weight coefficient as the first wavelet scale weight coefficient; acquiring adaptive wavelet scale weight coefficients of the high-frequency component images of the first high-frequency component images under each wavelet scale according to the edge region confidence level of the first high-frequency component images under each wavelet scale, and marking the adaptive wavelet scale weight coefficients as second wavelet scale weight coefficients; obtaining a reconstructed image according to the first wavelet scale weight coefficient and the second wavelet scale weight coefficient; based on the reconstructed image, the location of the jewelry in the image is identified.
Preferably, the specific formula for obtaining the gray confidence level of each local area in the low-frequency component image of the first high-frequency component image under each wavelet scale according to the gray distribution feature of the low-frequency component image is as follows:
$$Q_i^n = \frac{\mu_i^n}{\mu_{\max}^n}$$

where $Q_i^n$ represents the gray level confidence of the $i$-th local area in the low-frequency component image of the first high-frequency component image at the $n$-th wavelet scale; $\mu_i^n$ represents the gray mean of the $i$-th local area; and $\mu_{\max}^n$ represents the maximum of the gray means of all local areas in the low-frequency component image at the $n$-th wavelet scale.
Preferably, the structural confidence of each local area in the high-frequency component image of the first high-frequency component image at each wavelet scale is obtained from the edge structure distribution characteristics of the high-frequency component image by a formula constructed from the following quantities:

- $P_i^n$: the structural confidence of the $i$-th local area in the high-frequency component image of the first high-frequency component image at the $n$-th wavelet scale;
- $d_i^n$: the Euclidean distance between the centroid of that high-frequency component image and the centroid of its $i$-th local area;
- $M^n$: the total number of local areas in that high-frequency component image;
- $s_i^n$: the total number of pixels of the $i$-th local area;
- $L^n$: the projection total length of that high-frequency component image;
- $\mathrm{norm}(\cdot)$: a linear normalization function.
Preferably, the projection total length of the high-frequency component image of the first high-frequency component image at the $n$-th wavelet scale is acquired as follows:

Principal component analysis is performed on the high-frequency component image of the first high-frequency component image at the $n$-th wavelet scale to obtain the principal component direction; the straight line along the principal component direction is taken as a coordinate axis with the coordinate origin at the centroid of the high-frequency component image; and the edges of all local areas are projected onto this axis, giving the projection total length of the high-frequency component image at the $n$-th wavelet scale.
Preferably, the edge region confidence of the high-frequency component image of the first high-frequency component image at each wavelet scale is obtained from the gray level confidence of each local area in the low-frequency component image and the structural confidence of each local area in the high-frequency component image at that wavelet scale by a formula constructed from the following quantities:

- $R^n$: the edge region confidence of the high-frequency component image of the first high-frequency component image at the $n$-th wavelet scale;
- $M^n$: the total number of local areas in that high-frequency component image;
- $P_i^n$: the structural confidence of its $i$-th local area;
- $K^n$: the total number of local areas in the low-frequency component image at the $n$-th wavelet scale;
- $Q_j^n$: the gray level confidence of its $j$-th local area;
- $N$: the number of wavelet scales.
Preferably, obtaining the edge region confidence level of the first high-frequency component image under each wavelet scale according to the edge region confidence level of the high-frequency component image of the first high-frequency component image under each wavelet scale includes the following specific steps:
The edge region confidence of the high-frequency component image of the first high-frequency component image at the $n$-th wavelet scale is acquired and recorded as the first edge region confidence; the edge region confidence of the high-frequency component image at the adjacent wavelet scale is acquired and recorded as the second edge region confidence; and the absolute value of the difference between the first edge region confidence and the second edge region confidence is taken as the edge region confidence of the first high-frequency component image at the $n$-th wavelet scale.
Preferably, the method for obtaining the adaptive wavelet scale weight coefficient of the low-frequency component image of the first high-frequency component image under each wavelet scale according to the edge region confidence level of the first high-frequency component image under each wavelet scale includes the following specific steps:
For the low-frequency component image of the first high-frequency component image at the $n$-th wavelet scale: the sum of the edge region confidence of the first high-frequency component image at the $n$-th wavelet scale and 1 is recorded as the first weight; the mean of the gray level confidences of all local areas in that low-frequency component image is recorded as the second weight; and the product of the first weight and the second weight is taken as the adaptive wavelet scale weight coefficient of the low-frequency component image at the $n$-th wavelet scale.
Preferably, the method for obtaining the adaptive wavelet scale weight coefficient of the high-frequency component image of the first high-frequency component image under each wavelet scale according to the edge region confidence level of the first high-frequency component image under each wavelet scale includes the following specific steps:
For the high-frequency component image of the first high-frequency component image at the $n$-th wavelet scale: the difference between the edge region confidence of the first high-frequency component image at the $n$-th wavelet scale and 1 is recorded as the third weight; the mean of the structural confidences of all local areas in that high-frequency component image is recorded as the fourth weight; and the product of the third weight and the fourth weight is taken as the adaptive wavelet scale weight coefficient of the high-frequency component image at the $n$-th wavelet scale.
Preferably, the obtaining a reconstructed image according to the first wavelet scale weight coefficient and the second wavelet scale weight coefficient includes the following specific steps:
reconstructing the low-frequency component image and the high-frequency component image of the first high-frequency component image under all wavelet scales through inverse discrete wavelet transformation to obtain a first reconstructed image;
in the reconstruction process of the first reconstruction image, the wavelet coefficient of the discrete wavelet inverse transformation corresponding to the low-frequency component image is a first wavelet scale weight coefficient, and the wavelet coefficient of the discrete wavelet inverse transformation corresponding to the high-frequency component image is a second wavelet scale weight coefficient;
and reconstructing the first reconstructed image and the low-frequency component image under the first wavelet scale to obtain a reconstructed image.
Preferably, the method for identifying the jewelry position in the image according to the reconstructed image comprises the following specific steps:
inputting the reconstructed image into a neural network to obtain a semantic segmentation image, wherein the semantic segmentation image comprises a jewelry area; the jewelry area is visually displayed on a corresponding display.
The technical scheme of the invention has the following beneficial effects. Because the high-frequency and low-frequency components extracted by discrete wavelet transformation contain, respectively, the edge region and the overall structure region of the image, and the edge part and overall structure part of the jewelry reflective region are contained in them as well, thresholding the decomposition coefficients with a simple threshold alone cannot reach the processing target. The invention therefore combines the gray distribution characteristics of the low-frequency information and the edge structure distribution characteristics of the high-frequency information to obtain the jewelry edge confidence, and reconstructs the image through confidence-based wavelet thresholding and the inverse wavelet transform, realizing the enhancement of the image; the reflective part of the jewelry area is properly suppressed, the edge area is clearer, and jewelry is accurately monitored online.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of steps of an online jewelry monitoring method based on image processing according to the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve its intended aim, the following is a detailed description of the specific implementation, structure, features and effects of the image processing based jewelry online monitoring method according to the invention, with reference to the accompanying drawings and preferred embodiments. In the following description, different instances of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the jewelry on-line monitoring method based on image processing.
Referring to fig. 1, a flowchart of a jewelry on-line monitoring method based on image processing according to an embodiment of the present invention is shown, the method includes the following steps:
step S001: and collecting jewelry on-line monitoring gray level images.
In the jewelry online monitoring scene, jewelry online monitoring images of a conventional production or sales scene are acquired; they may come from monitoring video frames or from conventional-angle shots taken by a monitoring probe or camera. Owing to complex scene lighting or transmission losses in the acquisition device, the edges of the jewelry area in the obtained image are often blurred, and reflection from the jewelry area appears as a highlight region, while the blurring causes intermittent diffusion of the edge lines. The image therefore needs denoising and enhancement to suppress the influence of the reflective area on the jewelry area, which facilitates subsequent monitoring of jewelry morphology, quality and other information.
Specifically, to implement the jewelry online monitoring method based on image processing provided in this embodiment, jewelry online monitoring gray level images must first be collected. A machine vision device acquires the jewelry online monitoring images, which, as above, may be monitoring video frames or conventional-angle shots from a monitoring probe or camera; the acquired images are then converted to gray scale to obtain the jewelry online monitoring gray level images.
So far, the jewelry on-line monitoring gray level image is obtained by the method.
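As a minimal sketch of this acquisition step (the camera index, OpenCV capture API and output path are illustrative assumptions, not part of the patent):

```python
import cv2

# Hypothetical source: any monitoring probe or camera stream OpenCV can open.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

if ok:
    # Convert the captured BGR frame to the gray level image used below.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imwrite("jewelry_gray.png", gray)
```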
Step S002: performing discrete wavelet transformation on the jewelry on-line monitoring gray level image, and respectively obtaining gray level confidence level and structure confidence level according to gray level distribution characteristics of the low-frequency component and edge structure distribution characteristics of the high-frequency component.
It should be noted that, since the high-frequency component and the low-frequency component extracted by discrete wavelet transformation respectively include the edge region and the overall structure region of the image, and the edge portion and the overall structure portion of the jewelry reflective region are also included therein, the gray level distribution feature of the low-frequency information and the edge structure distribution feature of the high-frequency information are combined to obtain the jewelry edge confidence level, and the image is reconstructed by wavelet transformation confidence threshold and wavelet inverse transformation, so that the enhancement processing of the image is realized, the reflective portion of the jewelry region is properly suppressed, the edge region is clearer, and finally the on-line monitoring of jewelry is completed.
It should be further noted that, because the obtained jewelry online monitoring gray level image is subjected to discrete wavelet transformation, the edge part and the overall structure part of the jewelry reflective area mainly exist in the high-frequency component and the low-frequency component, so that the gray level confidence and the structure confidence of the respective image are obtained according to the gray level value of the reflective area and the diffusion property of the edge, and the possibility that the respective area belongs to the jewelry reflective area is represented.
The discrete wavelet transform decomposes the original image into a series of images composed of wavelet coefficients at different scales, and each level of the transform splits the image being decomposed into a low-frequency component image and a high-frequency component image; the analysis therefore proceeds from the gray distribution characteristics of the low-frequency components and the edge structure distribution characteristics of the high-frequency components.
Specifically, a chosen wavelet basis function is used to perform discrete wavelet decomposition of the jewelry online monitoring gray level image into $N$ layers, i.e. the number of wavelet scales is $N$, thereby obtaining a low-frequency component image and a high-frequency component image at each of multiple wavelet scales.
Since the discrete wavelet transform is sensitive to the decomposition level, this embodiment needs to capture the influence of the brightness of the reflective region on the edge blurring in its neighborhood, which is mostly expressed in the high-frequency component, so only the high-frequency component image is decomposed over multiple levels; that is, the low-frequency component image and the high-frequency component image at the $n$-th wavelet scale are obtained by decomposing the high-frequency component image at the $(n-1)$-th wavelet scale. Gray level and position distribution analysis is therefore performed on the high-frequency and low-frequency component images of the first wavelet scale, and only the high-frequency component image of the first wavelet scale is decomposed further, so as to obtain the low-frequency component image and the high-frequency component image of the first high-frequency component image at each wavelet scale.
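A sketch of this decomposition chain, assuming a Haar basis and three levels (the patent's wavelet function and layer count are not preserved in this text) and merging the three 2-D detail subbands into one high-frequency component image as a simplification:

```python
import numpy as np
import pywt

def decompose_hf_chain(gray, levels=3, wavelet="haar"):
    """One 2-D DWT per level, after which only the high-frequency image
    is decomposed further, mirroring the embodiment's chain."""
    lows, highs = [], []
    current = gray.astype(np.float64)
    for _ in range(levels):
        cA, (cH, cV, cD) = pywt.dwt2(current, wavelet)
        hf = np.sqrt(cH**2 + cV**2 + cD**2)  # combined detail magnitude
        lows.append(cA)
        highs.append(hf)
        current = hf  # only the high-frequency component image is re-decomposed
    return lows, highs
```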
Specifically, a high-frequency component image of a first wavelet scale is recorded as a first high-frequency component image, and each connected domain in a low-frequency component image and a high-frequency component image of the first high-frequency component image under each wavelet scale is used as a local area; the specific process of obtaining the gray level confidence level of each local area in the low-frequency component image of the first high-frequency component image under each wavelet scale and obtaining the structure confidence level of each local area in the high-frequency component image of the first high-frequency component image under each wavelet scale is as follows:
1. and acquiring the gray level confidence level of each local area in the low-frequency component image of the first high-frequency component image under each wavelet scale.
It should be noted that, due to the characteristic that the gray value of the reflective area is high and the low-frequency component image acquired under normal conditions is formed by an area set of divided gray values, the gray confidence level of each local area in the low-frequency component image of the first high-frequency component image under each wavelet scale is associated with the gray mean value of the respective local area and the gray mean value maximum value in the low-frequency component image.
Specifically, the gray level confidence of the $i$-th local area in the low-frequency component image of the first high-frequency component image at the $n$-th wavelet scale is calculated as:

$$Q_i^n = \frac{\mu_i^n}{\mu_{\max}^n}$$

where $Q_i^n$ represents the gray level confidence of the $i$-th local area in the low-frequency component image of the first high-frequency component image at the $n$-th wavelet scale; $\mu_i^n$ represents the gray mean of the $i$-th local area; and $\mu_{\max}^n$ represents the maximum of the gray means of the local areas in that low-frequency component image.
Thus, the gray level confidence level of each local area in the low-frequency component image of the first high-frequency component image under each wavelet scale is obtained.
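A sketch of this computation, approximating the local areas as connected components of equal quantized gray value (the bin count and the use of scipy's connected-component labelling are assumptions standing in for the patent's gray-value partition):

```python
import numpy as np
from scipy import ndimage

def gray_confidence(low_freq, n_bins=16):
    """Gray confidence Q_i = mu_i / mu_max for each local area of a
    low-frequency component image."""
    edges = np.linspace(low_freq.min(), low_freq.max(), n_bins + 1)
    quant = np.digitize(low_freq, edges[1:-1])  # bin index per pixel
    means, regions = [], []
    for b in np.unique(quant):
        labels, num = ndimage.label(quant == b)
        for k in range(1, num + 1):
            mask = labels == k
            means.append(low_freq[mask].mean())
            regions.append(mask)
    means = np.asarray(means)
    return means / means.max(), regions  # Q_i^n = mu_i^n / mu_max^n
```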
2. And acquiring the structural confidence degree of each local area in the high-frequency component image of the first high-frequency component image under each wavelet scale.
Since the reflective region diffuses the edge structure in the high-frequency component, and the high-frequency component image obtained under normal conditions consists of a set of local areas partitioned by gradient values, one local area can be regarded as one edge line segment; the degree of positional change of the local areas along the principal component direction of the high-frequency component image, which is composed of multiple local areas, is taken as the degree of edge-structure diffusion of that high-frequency component image.
Specifically, first, the centroid of the high-frequency component image of the first high-frequency component image at the $n$-th wavelet scale and the centroid of each local area in that image are acquired. Second, principal component analysis is performed on the high-frequency component image to obtain the principal component direction; the straight line along the principal component direction is taken as a coordinate axis with the coordinate origin at the centroid of the high-frequency component image. The edges of all local areas are projected onto the principal component direction to obtain each local area's projection length, and the sum of the projection lengths of all local areas is recorded as the projection total length of the high-frequency component image. The structural confidence of the $i$-th local area in the high-frequency component image at the $n$-th wavelet scale is then computed from the following quantities:

- $P_i^n$: the structural confidence of the $i$-th local area;
- $d_i^n$: the Euclidean distance between the centroid of the high-frequency component image and the centroid of the $i$-th local area;
- $M^n$: the total number of local areas in the high-frequency component image;
- $s_i^n$: the total number of pixels of the $i$-th local area;
- $L^n$: the projection total length of the high-frequency component image;
- $\mathrm{norm}(\cdot)$: a linear normalization function.
Thus, the structural confidence level of each local area in the high-frequency component image of the first high-frequency component image under each wavelet scale is obtained.
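The quantities above can be computed as sketched below. Since the exact combining formula is not reproduced in this text, the closing combination into $P_i^n$ is a placeholder heuristic, not the patent's formula:

```python
import numpy as np

def structure_confidence(region_masks):
    """region_masks: boolean masks, one per local area of the
    high-frequency component image."""
    centroids = np.array([np.argwhere(m).mean(axis=0) for m in region_masks])
    pts = np.vstack([np.argwhere(m) for m in region_masks]).astype(float)
    global_c = pts.mean(axis=0)                       # centroid of the HF image
    d = np.linalg.norm(centroids - global_c, axis=1)  # d_i
    s = np.array([m.sum() for m in region_masks])     # s_i
    M = len(region_masks)                             # M

    # Principal component direction of all region pixels (PCA via SVD).
    _, _, vt = np.linalg.svd(pts - global_c, full_matrices=False)
    axis = vt[0]

    # Projection total length L: sum of per-region extents along the axis.
    L = sum((np.argwhere(m).astype(float) @ axis).ptp() for m in region_masks)

    # Placeholder combination (assumption): wider positional spread relative
    # to the projected extent reads as stronger edge diffusion.
    raw = d * s * M / max(L, 1e-9)
    return (raw - raw.min()) / (raw.ptp() + 1e-9)     # linear normalization
```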
Step S003: and obtaining the edge region confidence level of the first high-frequency component image under each wavelet scale according to the gray level confidence level and the structure confidence level.
It should be noted that, since both the gray confidence and the structure confidence indicate how likely each local area is to be a jewelry reflective area, and the confidences of different levels are related to the gray confidence of the previous level, the edge region confidences over all local areas of the first high-frequency component image are obtained cumulatively.
It should be further noted that the weight given to the gray confidence and the structure confidence at each level should change as the levels deepen. Because the two differ in level sensitivity, the gray confidence should have the more decisive influence on the region relations at low levels and the structure confidence at high levels, since the bottom-level high-frequency information is tied to how the low-frequency information is expressed; the weight ratio across levels is therefore quantified, finally giving the edge region confidence of each local area.
Specifically, the edge region confidence of the high-frequency component image of the first high-frequency component image at the $n$-th wavelet scale is computed from the following quantities:

- $R^n$: the edge region confidence of the high-frequency component image at the $n$-th wavelet scale;
- $M^n$: the total number of local areas in that high-frequency component image;
- $P_i^n$: the structural confidence of its $i$-th local area;
- $K^n$: the total number of local areas in the low-frequency component image at the $n$-th wavelet scale;
- $Q_j^n$: the gray level confidence of its $j$-th local area;
- $N$: the number of wavelet scales.
It should be noted that, as the discrete wavelet transform deepens level by level, the edge region confidences of the high-frequency component images accumulate into a sequence, and the level-to-level variation of this accumulated sum arises because the discrete wavelet transform separates edges of different types, so the edge region confidence changes sharply between levels.
Specifically, the edge region confidence of the high-frequency component image of the first high-frequency component image at the $n$-th wavelet scale is acquired and recorded as the first edge region confidence; the edge region confidence of the high-frequency component image at the adjacent wavelet scale is acquired and recorded as the second edge region confidence; and the absolute value of the difference between the first edge region confidence and the second edge region confidence is taken as the edge region confidence of the first high-frequency component image at the $n$-th wavelet scale.
So far, the edge region confidence level of the first high-frequency component image under each wavelet scale is obtained through the method.
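A sketch of this per-scale difference, assuming the per-level confidences $R^1, \dots, R^N$ of the high-frequency component images are already computed (the handling of the boundary level is an assumption):

```python
import numpy as np

def edge_scale_confidence(r_per_scale):
    """Absolute difference of adjacent levels' edge region confidences,
    one value per wavelet scale."""
    r = np.asarray(r_per_scale, dtype=float)
    diffs = np.abs(np.diff(r))
    return np.concatenate([diffs, diffs[-1:]])  # pad the last level
```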
Step S004: obtaining a self-adaptive wavelet scale weight coefficient according to the edge region confidence degree of the first high-frequency component image under each wavelet scale to obtain a reconstructed image; and identifying the position of the jewelry in the image according to the reconstructed image, and completing on-line monitoring.
It should be noted that the edge region confidence of the first high-frequency component image at each wavelet scale characterizes the edge region confidence of the high-frequency component image at each level. Because the low-frequency region of the first high-frequency component image corresponds to the reflective region, it should be suppressed, while the high-frequency part corresponds to region edges and should be enhanced. An adaptive wavelet scale weight coefficient is therefore acquired for each level to adjust the inverse discrete wavelet transform, giving the detail weights of the high-frequency and low-frequency parts of each level during reconstruction and thus the reconstructed image.
Specifically, for the low-frequency component image of the first high-frequency component image at the $n$-th wavelet scale: the sum of the edge region confidence of the first high-frequency component image at the $n$-th wavelet scale and 1 is recorded as the first weight; the mean of the gray level confidences of all local areas in that low-frequency component image is recorded as the second weight; and the product of the first weight and the second weight is taken as the adaptive wavelet scale weight coefficient of the low-frequency component image at the $n$-th wavelet scale, recorded as the first wavelet scale weight coefficient.
For the high-frequency component image of the first high-frequency component image at the $n$-th wavelet scale: the difference between the edge region confidence of the first high-frequency component image at the $n$-th wavelet scale and 1 is recorded as the third weight; the mean of the structural confidences of all local areas in that high-frequency component image is recorded as the fourth weight; and the product of the third weight and the fourth weight is taken as the adaptive wavelet scale weight coefficient of the high-frequency component image at the $n$-th wavelet scale, recorded as the second wavelet scale weight coefficient.
Thus, adaptive wavelet scale weight coefficients of the high-frequency component image and the low-frequency component image of the first high-frequency component image at each wavelet scale are obtained.
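The two weight coefficients of a level then reduce to a few lines; reading "the difference between the edge region confidence and 1" as $1 - R$ is our interpretation:

```python
def adaptive_weights(edge_conf, gray_conf_mean, struct_conf_mean):
    """edge_conf: edge region confidence R of the first high-frequency
    component image at this level; the other arguments are the mean gray
    and structural confidences of the local areas at this level."""
    w_low = (1.0 + edge_conf) * gray_conf_mean     # first weight x second weight
    w_high = (1.0 - edge_conf) * struct_conf_mean  # third weight x fourth weight
    return w_low, w_high

# adaptive_weights(0.3, 0.6, 0.4) -> (0.78, 0.28)
```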
Reconstructing the low-frequency component image and the high-frequency component image of the first high-frequency component image under all wavelet scales through inverse discrete wavelet transformation to obtain a first reconstructed image;
in the reconstruction process of the first reconstruction image, the wavelet coefficient of the discrete wavelet inverse transformation corresponding to the low-frequency component image is a first wavelet scale weight coefficient, and the wavelet coefficient of the discrete wavelet inverse transformation corresponding to the high-frequency component image is a second wavelet scale weight coefficient;
reconstructing the first reconstructed image and the low-frequency component image under the first wavelet scale to obtain a reconstructed image; the wavelet coefficients of the inverse discrete wavelet transform when the first reconstructed image and the low-frequency component image under the first wavelet scale are reconstructed are existing coefficients, and the inverse discrete wavelet transform is the existing technology, and redundant description is omitted here.
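A reconstruction sketch that uses a standard multi-level pyramid as a stand-in: the patent's chain re-decomposes only the high-frequency image and would need a matching custom inverse, so pywt's wavedec2/waverec2 pair is used here purely for illustration:

```python
import pywt

def reconstruct(gray, w_low, w_high, wavelet="haar", levels=3):
    """Scale the approximation and detail subbands by the per-level
    adaptive weights, then invert.  w_low / w_high are ordered from the
    finest scale (index 0) to the deepest scale (index levels - 1)."""
    coeffs = pywt.wavedec2(gray, wavelet, level=levels)
    scaled = [coeffs[0] * w_low[-1]]          # deepest low-frequency component
    for lvl, (cH, cV, cD) in enumerate(coeffs[1:]):
        w = w_high[levels - 1 - lvl]          # coeffs[1] is the deepest detail
        scaled.append((cH * w, cV * w, cD * w))
    return pywt.waverec2(scaled, wavelet)
```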
Specifically, the jewelry target is identified through a semantic segmentation U-Net neural network as follows:
A large number of jewelry images under the corresponding actual storage environment are collected as a data set, and the category information in the images is labelled manually: background pixels are labelled 0 and jewelry pixels are labelled 1. Supervised training is performed with a cross-entropy loss function. After training, the reconstructed image is fed into the network; inference yields the corresponding semantic segmentation image, the jewelry target is obtained from the per-pixel class labels, and once the position of the jewelry is identified its morphological position is visually displayed on the corresponding display, completing the online monitoring.
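An inference sketch for this step; the model file name, input normalization and decision threshold are assumptions, and any trained single-channel segmentation network would slot in the same way:

```python
import numpy as np
import torch

# Hypothetical trained U-Net with one output channel, saved beforehand.
unet = torch.load("jewelry_unet.pt", map_location="cpu")
unet.eval()

def locate_jewelry(reconstructed):
    """Returns the binary jewelry mask (class 1 vs. background class 0,
    matching the labelling scheme above) for a reconstructed gray image."""
    x = torch.from_numpy(reconstructed.astype(np.float32) / 255.0)
    x = x.unsqueeze(0).unsqueeze(0)            # shape (1, 1, H, W)
    with torch.no_grad():
        logits = unet(x)
    return (torch.sigmoid(logits) > 0.5).squeeze().numpy()
```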
This embodiment is completed.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.
Claims (2)
1. An on-line jewelry monitoring method based on image processing, which is characterized by comprising the following steps:
acquiring jewelry on-line monitoring gray level images;
performing discrete wavelet transformation on the jewelry online monitoring gray level image to obtain a low-frequency component image and a high-frequency component image under a first wavelet scale; recording a high-frequency component image of a first wavelet scale as a first high-frequency component image, and acquiring a low-frequency component image and a high-frequency component image of the first high-frequency component image under each wavelet scale, wherein the low-frequency component image and the high-frequency component image comprise a plurality of local areas; acquiring the gray level confidence level of each local area in the low-frequency component image of the first high-frequency component image under each wavelet scale according to the gray level distribution characteristics of the low-frequency component image; acquiring the structural confidence level of each local area in the high-frequency component image of the first high-frequency component image under each wavelet scale according to the edge structural distribution characteristics of the high-frequency component image;
acquiring the edge region confidence level of the high-frequency component image of the first high-frequency component image under each wavelet scale according to the gray level confidence level of each local area in the low-frequency component image of the first high-frequency component image under each wavelet scale and the structure confidence level of each local area in the high-frequency component image of the first high-frequency component image under each wavelet scale; acquiring the edge region confidence level of the first high-frequency component image under each wavelet scale according to the edge region confidence level of the high-frequency component image of the first high-frequency component image under each wavelet scale;
acquiring an adaptive wavelet scale weight coefficient of a low-frequency component image of the first high-frequency component image under each wavelet scale according to the edge region confidence level of the first high-frequency component image under each wavelet scale, and marking the adaptive wavelet scale weight coefficient as the first wavelet scale weight coefficient; acquiring adaptive wavelet scale weight coefficients of the high-frequency component images of the first high-frequency component images under each wavelet scale according to the edge region confidence level of the first high-frequency component images under each wavelet scale, and marking the adaptive wavelet scale weight coefficients as second wavelet scale weight coefficients; obtaining a reconstructed image according to the first wavelet scale weight coefficient and the second wavelet scale weight coefficient; identifying the position of jewelry in the image according to the reconstructed image;
the specific formula for acquiring the gray level confidence level of each local area in the low-frequency component image of the first high-frequency component image under each wavelet scale according to the gray level distribution characteristics of the low-frequency component image is as follows:
$$Q_i^n = \frac{\mu_i^n}{\mu_{\max}^n}$$

where $Q_i^n$ represents the gray level confidence of the $i$-th local area in the low-frequency component image of the first high-frequency component image at the $n$-th wavelet scale; $\mu_i^n$ represents the gray mean of the $i$-th local area; and $\mu_{\max}^n$ represents the maximum of the gray means of all local areas in the low-frequency component image at the $n$-th wavelet scale;
the structural confidence of each local area in the high-frequency component image of the first high-frequency component image at each wavelet scale is obtained from the edge structure distribution characteristics of the high-frequency component image by a formula constructed from the following quantities:

- $P_i^n$: the structural confidence of the $i$-th local area in the high-frequency component image of the first high-frequency component image at the $n$-th wavelet scale;
- $d_i^n$: the Euclidean distance between the centroid of that high-frequency component image and the centroid of its $i$-th local area;
- $M^n$: the total number of local areas in that high-frequency component image;
- $s_i^n$: the total number of pixels of the $i$-th local area;
- $L^n$: the projection total length of that high-frequency component image;
- $\mathrm{norm}(\cdot)$: a linear normalization function;
the projection total length of the high-frequency component image of the first high-frequency component image at the $n$-th wavelet scale is acquired as follows: principal component analysis is performed on the high-frequency component image of the first high-frequency component image at the $n$-th wavelet scale to obtain the principal component direction; the straight line along the principal component direction is taken as a coordinate axis with the coordinate origin at the centroid of the high-frequency component image; and the edges of all local areas are projected onto this axis to obtain the projection total length of the high-frequency component image at the $n$-th wavelet scale;
the edge region confidence of the high-frequency component image of the first high-frequency component image at each wavelet scale is obtained from the gray level confidence of each local area in the low-frequency component image and the structural confidence of each local area in the high-frequency component image at that wavelet scale by a formula constructed from the following quantities:

- $R^n$: the edge region confidence of the high-frequency component image of the first high-frequency component image at the $n$-th wavelet scale;
- $M^n$: the total number of local areas in that high-frequency component image;
- $P_i^n$: the structural confidence of its $i$-th local area;
- $K^n$: the total number of local areas in the low-frequency component image at the $n$-th wavelet scale;
- $Q_j^n$: the gray level confidence of its $j$-th local area;
- $N$: the number of wavelet scales;
the method for acquiring the edge region confidence level of the first high-frequency component image under each wavelet scale according to the edge region confidence level of the high-frequency component image of the first high-frequency component image under each wavelet scale comprises the following specific steps:
the edge region confidence of the high-frequency component image of the first high-frequency component image at the $n$-th wavelet scale is acquired and recorded as the first edge region confidence; the edge region confidence of the high-frequency component image at the adjacent wavelet scale is acquired and recorded as the second edge region confidence; and the absolute value of the difference between the first edge region confidence and the second edge region confidence is taken as the edge region confidence of the first high-frequency component image at the $n$-th wavelet scale;
the method for acquiring the self-adaptive wavelet scale weight coefficient of the low-frequency component image of the first high-frequency component image under each wavelet scale according to the edge region confidence level of the first high-frequency component image under each wavelet scale comprises the following specific steps:
for the low-frequency component image of the first high-frequency component image at the $n$-th wavelet scale: the sum of the edge region confidence of the first high-frequency component image at the $n$-th wavelet scale and 1 is recorded as the first weight; the mean of the gray level confidences of all local areas in the low-frequency component image at the $n$-th wavelet scale is recorded as the second weight; and the product of the first weight and the second weight is taken as the adaptive wavelet scale weight coefficient of the low-frequency component image at the $n$-th wavelet scale;
the method for acquiring the self-adaptive wavelet scale weight coefficient of the high-frequency component image of the first high-frequency component image under each wavelet scale according to the edge region confidence level of the first high-frequency component image under each wavelet scale comprises the following specific steps:
for the high-frequency component image of the first high-frequency component image at the $n$-th wavelet scale: the difference between the edge region confidence of the first high-frequency component image at the $n$-th wavelet scale and 1 is recorded as the third weight; the mean of the structural confidences of all local areas in the high-frequency component image at the $n$-th wavelet scale is recorded as the fourth weight; and the product of the third weight and the fourth weight is taken as the adaptive wavelet scale weight coefficient of the high-frequency component image at the $n$-th wavelet scale;
the method for obtaining the reconstructed image according to the first wavelet scale weight coefficient and the second wavelet scale weight coefficient comprises the following specific steps:
reconstructing the low-frequency component image and the high-frequency component image of the first high-frequency component image under all wavelet scales through inverse discrete wavelet transformation to obtain a first reconstructed image;
in the reconstruction process of the first reconstruction image, the wavelet coefficient of the discrete wavelet inverse transformation corresponding to the low-frequency component image is a first wavelet scale weight coefficient, and the wavelet coefficient of the discrete wavelet inverse transformation corresponding to the high-frequency component image is a second wavelet scale weight coefficient;
and reconstructing the first reconstructed image and the low-frequency component image under the first wavelet scale to obtain a reconstructed image.
2. The method for on-line monitoring jewelry based on image processing according to claim 1, wherein the method for identifying the jewelry position in the image based on the reconstructed image comprises the following specific steps:
inputting the reconstructed image into a neural network to obtain a semantic segmentation image, wherein the semantic segmentation image comprises a jewelry area; the jewelry area is visually displayed on a corresponding display.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311159485.4A CN116894951B (en) | 2023-09-11 | 2023-09-11 | Jewelry online monitoring method based on image processing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116894951A (en) | 2023-10-17 |
CN116894951B (en) | 2023-12-08 |
Family
ID=88313809
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311159485.4A Active CN116894951B (en) | Jewelry online monitoring method based on image processing | 2023-09-11 | 2023-09-11 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116894951B (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20040070375A (en) * | 2003-02-03 | 2004-08-09 | Samsung Electronics Co., Ltd. | Halftoning method and apparatus using wavelet transformation |
JP2005296331A (en) * | 2004-04-12 | 2005-10-27 | Toshiba Corp | Ultrasonograph and image data processor |
CN101079949A (en) * | 2006-02-07 | 2007-11-28 | Sony Corporation | Image processing apparatus and method, recording medium, and program |
CN101635047A (en) * | 2009-03-25 | 2010-01-27 | 湖南大学 | Texture synthesis and image repair method based on wavelet transformation |
WO2015067186A1 (en) * | 2013-11-08 | 2015-05-14 | 华为终端有限公司 | Method and terminal used for image noise reduction |
CN105096280A (en) * | 2015-06-17 | 2015-11-25 | 浙江宇视科技有限公司 | Method and device for processing image noise |
CN111292274A (en) * | 2020-01-17 | 2020-06-16 | 河海大学常州校区 | Photovoltaic module image fusion method based on spectral residual significance model |
CN111583123A (en) * | 2019-02-17 | 2020-08-25 | 郑州大学 | Wavelet transform-based image enhancement algorithm for fusing high-frequency and low-frequency information |
CN113610717A (en) * | 2021-07-16 | 2021-11-05 | 江苏师范大学 | Method for enhancing ultraviolet fluorescence image of skin disease |
CN116343051A (en) * | 2023-05-29 | 2023-06-27 | 山东景闰工程研究设计有限公司 | Geological environment monitoring method and system based on remote sensing image |
CN116563799A (en) * | 2023-07-11 | 2023-08-08 | 山东昆仲信息科技有限公司 | On-line Dust Monitoring Method Based on Video Monitoring |
CN116580290A (en) * | 2023-07-11 | 2023-08-11 | 成都庆龙航空科技有限公司 | Unmanned aerial vehicle identification method, unmanned aerial vehicle identification device and storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7260272B2 (en) * | 2003-07-10 | 2007-08-21 | Samsung Electronics Co., Ltd. | Method and apparatus for noise reduction using discrete wavelet transform |
US20130022288A1 (en) * | 2011-07-20 | 2013-01-24 | Sony Corporation | Image processing apparatus and method for reducing edge-induced artefacts |
Non-Patent Citations (6)
Title |
---|
Image enhancement algorithm of Dongba manuscripts based on wavelet analysis and grey relational theory; Xia Xinyu et al.; 2017 13th IEEE International Conference on Electronic Measurement & Instruments; full text *
Working condition recognition of screw compressor using wavelets theory; Qunfeng Niu et al.; 2008 7th World Congress on Intelligent Control and Automation; full text *
Infrared and visible image fusion based on bilateral filtering and NSST; Xu Danping, Wang Haimei; Computer Measurement & Control (04); full text *
Fault section location method for distribution networks based on the wavelet AlexNet network; Hou Sizu et al.; Electrical Measurement & Instrumentation; full text *
Image edge extraction based on wavelet transform and fusion techniques; Zhu Shihu, Zhu Hong, He Peizhong; Journal of Xuzhou Normal University (Natural Science Edition) (03); full text *
Improved edge-preserving image denoising algorithm based on wavelet transform; Liu Ping et al.; Video Engineering; full text *
Also Published As
Publication number | Publication date |
---|---|
CN116894951A (en) | 2023-10-17 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
CN106339998B (en) | Multi-focus image fusion method based on contrast pyramid transformation | |
Saladi et al. | Analysis of denoising filters on MRI brain images | |
CN115965750B (en) | Vascular reconstruction method, vascular reconstruction device, vascular reconstruction computer device, and vascular reconstruction program | |
Lu et al. | Nonlocal Means‐Based Denoising for Medical Images | |
Zhang | Two-step non-local means method for image denoising | |
CN115713533B (en) | Power equipment surface defect detection method and device based on machine vision | |
CN109636766A (en) | Polarization differential and intensity image Multiscale Fusion method based on marginal information enhancing | |
Yao et al. | The Retinex-based image dehazing using a particle swarm optimization method | |
Jiao et al. | Guided-Pix2Pix: End-to-end inference and refinement network for image dehazing | |
CN110956632A (en) | Method and device for automatically detecting pectoralis major region in molybdenum target image | |
CN113837974A (en) | NSST (non-subsampled contourlet transform) domain power equipment infrared image enhancement method based on improved BEEPS (Bayesian particle swarm optimization) filtering algorithm | |
CN116188488A (en) | Gray gradient-based B-ultrasonic image focus region segmentation method and device | |
CN113592729A (en) | Infrared image enhancement method for electrical equipment based on NSCT domain | |
Mohan et al. | Exudate localization in retinal fundus images using modified speeded up robust features algorithm | |
CN112348819A (en) | Model training method, image processing and registering method, and related device and equipment | |
Li et al. | Speckle noise removal based on structural convolutional neural networks with feature fusion for medical image | |
An et al. | Patch loss: A generic multi-scale perceptual loss for single image super-resolution | |
CN115994870B (en) | Image processing method for enhancing denoising | |
CN109658357A (en) | A kind of denoising method towards remote sensing satellite image | |
Luo et al. | Infrared and visible image fusion based on VPDE model and VGG network | |
CN109242797B (en) | Image denoising method, system and medium based on fusion of homogeneous and heterogeneous regions | |
CN116894951B (en) | Jewelry online monitoring method based on image processing | |
Chang et al. | Restoration algorithm for image noise removal using double bilateral filtering | |
Liu et al. | Automatic Lung Parenchyma Segmentation of CT Images Based on Matrix Grey Incidence. | |
CN118918112B (en) | Artificial intelligence-based method and system for detecting heart occupation in ultrasonic image |
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |