
CN117218037A - Image definition evaluation method and device, equipment and storage medium - Google Patents


Info

Publication number
CN117218037A
Authority
CN
China
Prior art keywords
image
evaluated
definition
edge
value
Prior art date
Legal status
Pending
Application number
CN202311339802.0A
Other languages
Chinese (zh)
Inventor
Name withheld at the inventor's request
Current Assignee
Shanghai Pioneer Huineng Technology Co ltd
Original Assignee
Shanghai Pioneer Huineng Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Pioneer Huineng Technology Co ltd
Priority to CN202311339802.0A
Publication of CN117218037A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses an image definition evaluation method, device, equipment and storage medium, wherein the image definition evaluation method comprises the following steps: acquiring an image to be evaluated; extracting edge information of a target object in the image to be evaluated to obtain an edge feature map corresponding to the image to be evaluated; and calculating a definition value of the image to be evaluated based on an image definition scoring model and the edge feature map. The edge information comprises a set of pixel points in the pixel gray value transition region within the edge region of the target object. The method can improve the accuracy of evaluating the definition of the image to be evaluated, reduce the evaluation complexity and increase the evaluation speed.

Description

Image definition evaluation method and device, equipment and storage medium
Technical Field
The embodiments of the application relate to image processing technology, and in particular, but not exclusively, to an image definition evaluation method, device, equipment and storage medium.
Background
With the rapid development of internet technology, media data such as graphics or video presented in the form of images have become widely accepted. Such image-form media data are mostly acquired by a camera in an electronic device, and the quality of the camera directly influences the presentation effect of the image, for example the definition of the image.
In other words, the definition of a photographed image can intuitively reflect the quality of the camera that captured it.
In the related art, the definition of an image is mostly evaluated by calculating gradient values of the image, based on the principle that the higher the definition of an image, the larger the gradient values at its edges. However, if the gradient value of the whole image is calculated to represent its definition, the edge area of the image is small, so the whole-image gradient value does not characterize the definition well; if, instead, gradient values are calculated only around the edge region of the image, the processing is relatively complex and time-consuming when the scene is not defined in advance.
Therefore, how to quickly and reasonably evaluate the sharpness of an image is an urgent problem to be solved.
Disclosure of Invention
In view of this, the image definition evaluation method, device, equipment and storage medium provided by the embodiment of the application can improve the evaluation accuracy, reduce the evaluation complexity and improve the evaluation speed. The image definition evaluation method, the device, the equipment and the storage medium provided by the embodiment of the application are realized in the following way:
The image definition evaluation method provided by the embodiment of the application comprises the following steps:
acquiring an image to be evaluated;
extracting edge information of a target object in the image to be evaluated to obtain an edge feature map corresponding to the image to be evaluated;
calculating to obtain a definition value of the image to be evaluated based on an image definition scoring model and the edge feature image;
the edge information comprises a set of pixel points of a pixel gray value transition region in the edge region of the target object.
In some embodiments, the image sharpness scoring model includes an average pooling module, a plurality of maximum pooling modules, a global maximum pooling layer, and a fully connected layer.
In some embodiments, the average pooling module comprises a convolution layer, an average pooling layer, and an activation function, and each of the maximum pooling modules comprises a convolution layer, a maximum pooling layer, and an activation function.
In some embodiments, the calculating, based on the image sharpness scoring model and the edge feature map, a sharpness value of the image to be evaluated includes:
carrying out noise reduction treatment on the edge feature map through the average pooling module to obtain a first feature map;
extracting local maximum values from the noise-reduced first feature map through the plurality of maximum pooling modules to obtain a second feature map;
carrying out pooling treatment on the second feature map through the global maximum pooling layer to obtain multi-level feature information corresponding to the second feature map, wherein the multi-level feature information comprises feature information of the second feature map on each channel;
and carrying out full connection processing on the multi-level characteristic information through the full connection layer to obtain the definition value of the image to be evaluated.
In some embodiments, the method further comprises, prior to acquiring the image to be evaluated:
determining a training sample set comprising a plurality of sample image pairs, each sample image pair comprising a first sample image and a second sample image, the first sample image having a higher sharpness than the second sample image;
and training the initial scoring model according to a preset loss function and the training sample set to obtain the image definition scoring model.
In some embodiments, prior to the determining the training sample set, the method further comprises:
and carrying out Gaussian blur degradation processing on the acquired image to obtain a plurality of sample images with different resolutions corresponding to the image.
The embodiment of the application further provides a method for applying image definition evaluation, which comprises the following steps:
shooting a target object to obtain an image to be evaluated;
calculating a definition value of the image to be evaluated by the image definition evaluation method described above;
determining an optimal focusing position of the equipment for shooting the image to be evaluated according to the definition value of the image to be evaluated; or
generating alarm information according to the definition value of the image to be evaluated and an image definition early warning threshold value.
In some embodiments, the determining the best focus position of the device for capturing the image to be evaluated according to the sharpness value of the image to be evaluated includes:
and acquiring definition values of a plurality of images to be evaluated, and defining a focusing position corresponding to the highest definition value as the optimal focusing position.
In some embodiments, the generating the alarm information according to the sharpness value of the image to be evaluated and the image sharpness pre-warning threshold value includes:
and generating the alarm information under the condition that the calculated definition value of the image to be evaluated is smaller than or equal to the image definition early-warning threshold value.
The image definition evaluation device provided by the embodiment of the application comprises:
The acquisition module is used for acquiring the image to be evaluated;
the extraction module is used for extracting edge information of a target object in the image to be evaluated to obtain an edge feature map corresponding to the image to be evaluated;
the processing module is used for calculating the definition value of the image to be evaluated based on the image definition scoring model and the edge feature image; the edge information comprises a set of pixel points of a pixel gray value transition region in the edge region of the target object.
The computer device provided by the embodiment of the application comprises a memory and a processor, wherein the memory stores a computer program capable of running on the processor, and the processor realizes the method of the embodiment of the application when executing the program.
The computer readable storage medium provided by the embodiment of the present application stores a computer program thereon, which when executed by a processor implements the method provided by the embodiment of the present application.
The image definition evaluation method, the device, the computer equipment and the computer readable storage medium provided by the embodiment of the application are used for acquiring an image to be evaluated; extracting edge information of a target object in an image to be evaluated to obtain an edge feature map corresponding to the image to be evaluated; calculating to obtain a definition value of the image to be evaluated based on the image definition scoring model and the edge feature image; the edge information comprises a set of pixel points of a pixel gray value transition region in the edge region of the target object.
According to the image definition evaluation method provided by the embodiment of the application, the edge feature map of the image to be evaluated is extracted first, and the definition score of the image to be evaluated is then calculated based on the edge feature map and the trained image definition scoring model. Because the edge features are extracted from the image to be evaluated and the trained image definition scoring model evaluates the definition directly on those edge features, the edge information of the image is preserved during evaluation, which improves the evaluation accuracy, reduces the processing complexity, increases the processing speed and solves the technical problems described in the background.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic implementation flow chart of an image sharpness evaluation method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of edge information according to an embodiment of the present application;
fig. 3 is a schematic implementation flow chart of an image sharpness evaluation method according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of an image evaluation model according to an embodiment of the present application;
fig. 5 is a schematic diagram of an implementation flow of an edge extraction method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an edge extraction network model according to an embodiment of the present application;
FIG. 7 is a graph of the contrast effect between an edge feature map and an image to be evaluated according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an image sharpness scoring network according to an embodiment of the present application;
FIG. 9 is a schematic diagram of an implementation flow of a training method of an image evaluation model according to an embodiment of the present application;
fig. 10 is a schematic diagram illustrating an effect of an image sharpness evaluation method according to an embodiment of the present application;
FIG. 11 is a schematic diagram of an implementation flow of a method for applying image sharpness evaluation according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of an image sharpness evaluation apparatus according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an image sharpness evaluation apparatus according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application more apparent, the specific technical solutions of the present application will be described in further detail below with reference to the accompanying drawings in the embodiments of the present application. The following examples are illustrative of the application and are not intended to limit the scope of the application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
It should be noted that the term "first/second/third" in relation to embodiments of the present application is used to distinguish between similar or different objects, and does not represent a particular ordering of the objects, it being understood that the "first/second/third" may be interchanged with a particular order or sequencing, as permitted, to enable embodiments of the present application described herein to be implemented in an order other than that illustrated or described herein.
In industrial production, in order to obtain a good imaging effect, the definition of the lens is a key consideration in lens selection. A good lens performs well in terms of resolution, sharpness, depth of field and the like; for example, the definition of a captured image can intuitively reflect the quality of the lens used to capture it.
In the related art, the definition of an image is mostly evaluated by calculating gradient values of the image, based on the principle that the higher the definition of an image, the larger the gradient values at its edges. However, if the gradient value of the whole image is calculated to represent its definition, the edge area of the image is small, so the whole-image gradient value does not characterize the definition well; if, instead, gradient values are calculated only around the edge region of the image, the processing is relatively complex and time-consuming when the scene is not defined in advance.
However, in industrial production, if the image definition is evaluated by using the above-mentioned evaluation method for image definition, there are problems of insufficient evaluation accuracy, high evaluation complexity, and long time consumption.
In view of this, an embodiment of the present application provides an image sharpness evaluation method, which is applied to a terminal device.
The terminal devices according to the embodiments of the present application may include general hand-held electronic terminals such as mobile phones, smart phones, portable terminals, personal digital assistants (PDA), portable multimedia player (PMP) devices, notebook computers, note pads, wireless broadband (WiBro) terminals, tablet personal computers (PC), smart PCs, point-of-sale (POS) terminals, in-vehicle computers, and the like.
The terminal device may also comprise a wearable device. The wearable device may be worn directly on the user, or may be a portable electronic device integrated into the user's clothing or accessories. A wearable device is not merely a piece of hardware: through software support, data interaction and cloud interaction it can realize powerful intelligent functions such as computing, positioning and alarming, and it can be connected with mobile phones and various other terminals. Wearable devices may include, but are not limited to, wrist-worn devices (e.g., watches and wristbands), foot-worn devices (e.g., shoes, socks or other products worn on the legs), head-worn devices (e.g., glasses, helmets and headbands), as well as smart apparel, school bags, crutches, accessories and other non-mainstream product forms.
Fig. 1 is a schematic implementation flow chart of an image sharpness evaluation method according to an embodiment of the present application. As shown in fig. 1, the method may include the following steps 101 to 102:
step 101, an image to be evaluated is acquired.
In the embodiment of the present application, the means for acquiring the image to be evaluated is not limited, for example, the image to be evaluated may be an image captured in real time by an engineer based on a camera lens in industrial production.
Of course, the definition of the image to be evaluated is not limited in the embodiment of the application, and the image to be evaluated may be a clearer image or a blurred image.
Step 102, extracting edge information of a target object in the image to be evaluated to obtain an edge feature map corresponding to the image to be evaluated, wherein the edge information comprises a set of pixel points of a pixel gray value transition region in an edge region of the target object.
It will be appreciated that the more blurred (i.e. the less sharp) the image to be evaluated is, the relatively wider its edges become, while the pixel values at those edges become smaller. Therefore, the edge information in the image to be evaluated can reflect the sharpness of the image to be evaluated to a certain extent.
Based on the principle, in the embodiment of the application, the edge characteristic diagram of the image to be evaluated is obtained by extracting the edge information of the image to be evaluated, so that the definition of the image to be evaluated is determined at least according to the edge characteristic diagram.
Here, the specific type of the edge information is not limited, and for example, the edge information may be an edge width of an image, a pixel value at an edge of the image, a gradient value at an edge of the image, or the like.
In some embodiments, edge information of the target object in the image to be evaluated may be extracted by performing step 302 in the following embodiments to obtain an edge feature map corresponding to the image to be evaluated.
In an embodiment of the present application, the edge information may include a set of pixel points in a transition region of pixel gray values in an edge region of the target object. As shown in fig. 2, the extracted edge information may include a set of pixel points (e.g., a set from black to white in the figure) of a pixel color transition region. The higher the sharpness of the image to be evaluated, the fewer the number of pixels in the extracted edge information.
In an alternative embodiment, before extracting the edge information of the target object in the image to be evaluated to obtain the edge feature map corresponding to the image to be evaluated, the image to be evaluated may be preprocessed, so as to reduce the noise influence of the image to be evaluated, highlight the features of the image to be evaluated, and improve the evaluation accuracy of the sharpness of the image to be evaluated.
Here, the manner of preprocessing is not limited, and may be one or more of image enhancement processing, normalization processing, binarization processing, histogram equalization processing, and bilateral filtering processing, for example.
The normalization processing can improve the robustness of the image to be evaluated, for example, the image gray scale range of the image to be evaluated can be stretched to a specified range through linear stretching, for example, the specified range can be set to 0-255; the filtering processing is carried out on the image to be evaluated, so that noise possibly existing in the image to be evaluated can be filtered, the data size of the image is reduced to a certain extent, and the edge information of the image to be evaluated is not damaged.
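As a minimal illustrative sketch (not part of the claimed method), such preprocessing could be implemented with OpenCV as follows; the helper name and filter parameters are assumptions chosen for illustration:

import cv2
import numpy as np

def preprocess(image_bgr: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Normalization: linearly stretch the gray values to the specified range 0-255.
    stretched = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Bilateral filtering: suppress noise while preserving edge information.
    return cv2.bilateralFilter(stretched, d=5, sigmaColor=50, sigmaSpace=50)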
And step 103, calculating to obtain the definition value of the image to be evaluated based on the image definition scoring model and the edge feature map.
Here, the specific architecture of the image sharpness scoring model may be described with reference to steps 303 to 306 of the following embodiments.
In the embodiment of the application, the edge feature map of the image to be evaluated is extracted first, and the definition score of the image to be evaluated is then calculated based on the edge feature map and the trained image definition scoring model. Because the edge features in the image to be evaluated are extracted and the trained image definition scoring model evaluates the definition directly on those edge features, the edge information of the image is preserved during evaluation, which improves the evaluation accuracy, reduces the processing complexity and increases the processing speed.
The embodiment of the application further provides an image definition evaluation method, and fig. 3 is a schematic diagram of an implementation flow of the image definition evaluation method provided by the embodiment of the application. As shown in fig. 3, the method may include the following steps 301 to 306:
step 301, an image to be evaluated is acquired.
Step 302, extracting edge information of a target object in an image to be evaluated to obtain an edge feature map corresponding to the image to be evaluated, wherein the edge information comprises a set of pixel points of a pixel gray value transition region in an edge region of the target object.
In the embodiment of the application, the edge extraction network model can be used for extracting the edge information of the target object in the image to be evaluated so as to obtain the edge characteristic diagram corresponding to the image to be evaluated.
The edge extraction network model and the image sharpness scoring model may be referred to herein in combination as an image evaluation model. The image evaluation model provided by embodiments of the present application may be a convolutional neural network (Convolutional Neural Network, CNN), which is essentially an input-to-output mapping: it can learn a large number of mappings between inputs and outputs without requiring any precise mathematical expression relating them, and once the convolutional network has been trained on known patterns it is able to map between input-output pairs.
As shown in fig. 4, the image evaluation model provided by the embodiment of the present application may include an edge extraction network model and an image sharpness scoring model. Inputting the image to be evaluated into an edge extraction network model to obtain a corresponding edge characteristic diagram; the edge feature image is input into an image definition scoring model, and then the definition value of the image to be evaluated can be output.
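A minimal sketch of this two-stage pipeline is given below; `edge_extractor` and `scoring_model` are hypothetical names for pre-trained modules, and the tensor layout is an assumption:

import torch

@torch.no_grad()
def evaluate_sharpness(image: torch.Tensor,
                       edge_extractor: torch.nn.Module,
                       scoring_model: torch.nn.Module) -> float:
    # image: (1, C, H, W) tensor of the image to be evaluated
    edge_map = edge_extractor(image)   # edge feature map of the target object
    score = scoring_model(edge_map)    # definition (sharpness) value
    return score.item()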
It should be noted that the edge extraction network model provided in the embodiment of the present application includes M convolution modules, an interpolation module, and a first full connection module, where M>0.
Based on this, fig. 5 shows a schematic implementation flow diagram of an edge feature map obtained by feature extraction of an image to be evaluated by using an edge extraction network model. As shown in fig. 5, the method may include the following steps 501 to 502:
Step 501, inputting an image to be evaluated into the edge extraction network model, and obtaining a feature extraction result from each convolution module in the edge extraction network; when i=0, the feature extraction result is obtained by the i-th convolution module performing feature extraction processing on the original image; when i is not equal to 0, the feature extraction result is obtained by the i-th convolution module performing feature extraction processing on the feature extraction result output by the (i-1)-th convolution module.
Inputting the image to be evaluated into an edge extraction network for edge feature extraction, namely processing the image to be evaluated sequentially through M convolution modules to obtain a plurality of feature extraction results (convolution feature graphs); performing interpolation processing on each convolution feature map to obtain an interpolation feature map; and finally, performing full connection processing on the plurality of convolution feature images through the first full connection layer to obtain corresponding edge feature images.
As shown in fig. 6, assuming that the edge extraction network model includes 3 convolution modules, the image to be evaluated is first input to the 1st convolution module to obtain the feature extraction result corresponding to the 1st convolution module; then, the feature extraction result corresponding to the 1st convolution module is input into the 2nd convolution module for convolution processing to obtain the feature extraction result corresponding to the 2nd convolution module; and finally, the feature extraction result corresponding to the 2nd convolution module is input into the 3rd convolution module for convolution processing to obtain the feature extraction result corresponding to the 3rd convolution module.
In an alternative embodiment, each convolution module includes a convolution layer and a 2 x 2 pooling layer.
Each convolution layer contains a plurality of convolution kernels, which scan the whole image from left to right and from top to bottom to produce output data called a feature map. The convolution layers at the front of the network capture local, detailed information of the image and have a small receptive field, i.e. each pixel of the output feature map corresponds to only a small region of the input image. The receptive fields of the later convolution layers increase layer by layer, capturing more complex and more abstract information of the image. After the operation of several convolution layers, abstract representations of the image at different scales are finally obtained.
The convolution layer can be used for image denoising, image enhancement, image edge detection and the like, and can also be used for extracting features of the image. When an image is processed with a convolution layer, the processing is generally performed with a convolution kernel: an n×m matrix that slides over the image from top to bottom and from left to right; at each position the elements of the kernel matrix are multiplied by the corresponding image elements it covers and summed to obtain an output value. That is, the convolution operation traverses the entire image with the convolution kernel, performing this computation at every position, thereby obtaining an output image (i.e., a feature map).
The pooling layer pools the convolution feature map obtained after the convolution processing of the image in order to reduce the dimensionality of the convolution feature map and obtain the edge feature map: the feature map produced by the convolution layer is still of very high dimension, and this high dimensionality makes computation time-consuming and easily causes overfitting.
When the image is pooled, operations such as average pooling and maximum pooling are often adopted. The pooling operation may be performed on the image by replacing a certain area of the image with a value, which may be an average value or a maximum value. If a maximum is employed, the pooling operation is maximum pooling; if an average is used, the pooling operation is mean pooling.
In addition to reducing the image size, another effect of pooling is a degree of translational and rotational invariance, as the output value is calculated from a region of the image and is insensitive to small amplitude translations and rotations.
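The following toy snippet illustrates the convolution and pooling operations described above using generic PyTorch operators; it is not the exact module configuration of the embodiment:

import torch
import torch.nn.functional as F

x = torch.rand(1, 1, 8, 8)                 # a single-channel 8x8 "image"
kernel = torch.ones(1, 1, 3, 3) / 9.0      # a 3x3 averaging convolution kernel
feat = F.conv2d(x, kernel, padding=1)      # slide the kernel over the image -> feature map
avg = F.avg_pool2d(feat, kernel_size=2)    # mean pooling: replace each 2x2 area by its average
mx = F.max_pool2d(feat, kernel_size=2)     # max pooling: replace each 2x2 area by its maximum
print(feat.shape, avg.shape, mx.shape)     # (1,1,8,8) (1,1,4,4) (1,1,4,4)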
Further, in an alternative embodiment, as shown in fig. 6, after the feature extraction result corresponding to the 2nd convolution module and the feature extraction result corresponding to the 3rd convolution module are obtained, interpolation processing may be performed on the two feature extraction results to obtain interpolation results corresponding to the two convolution modules.
With continued reference to fig. 6, in the structure shown in fig. 6, Block_x (x=1, 2, 3) represents the 1st, 2nd and 3rd feature extraction modules respectively; each module has the structure shown in part (A) of the figure and mainly comprises a depthwise separable convolution and an activation function, which is used to conveniently extract effective information. After each Block module extracts features, it is connected to a CSAM module in the horizontal direction and to the next Block module in the vertical direction (except the last Block). The specific structure of the CSAM module is shown in part (B); its main function is to attend to the effective information in the input. Each CSAM module is followed by a Conv convolution module, and the last two CSAM branches additionally perform an interpolate up-sampling operation. Finally, the information corresponding to the three modules Block_1, Block_2 and Block_3 is concatenated together (Concat), and the concatenated features are converted by a Conv convolution and output in the form of an edge feature map.
Step 502, performing full connection processing on each feature extraction result through a first full connection module in the edge extraction network model to obtain an edge feature map.
With continued reference to fig. 6, after the feature extraction result corresponding to the 1st convolution module is obtained, and the interpolation results corresponding to the 2nd convolution module and the 3rd convolution module are obtained, the feature extraction result corresponding to the 1st convolution module and the two interpolation results may be fully connected through the first fully connected layer in the edge extraction network, so as to obtain the edge feature map.
Through the processing of the first full connection layer, the extracted features can be mapped to the sample label space and integrated into a single representation, so that the information contained in multiple features is combined while its dimensionality is reduced, and the robustness of the whole network is improved.
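A hedged sketch of an edge extraction network with the topology described above is given below; the channel counts, the simplified channel-attention stand-in for the CSAM module, and the 1x1 convolution used here in place of the first full connection module are assumptions for illustration only:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Block(nn.Module):
    # Feature extraction module: depthwise separable convolution + activation + 2x2 pooling.
    def __init__(self, c_in, c_out):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in)
        self.pointwise = nn.Conv2d(c_in, c_out, 1)
        self.pool = nn.MaxPool2d(2)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.pool(self.act(self.pointwise(self.depthwise(x))))

class CSAM(nn.Module):
    # Simplified stand-in for the attention module: re-weights channels by their importance.
    def __init__(self, c):
        super().__init__()
        self.fc = nn.Conv2d(c, c, 1)

    def forward(self, x):
        weights = torch.sigmoid(self.fc(F.adaptive_avg_pool2d(x, 1)))
        return x * weights

class EdgeExtractor(nn.Module):
    def __init__(self, c=16):
        super().__init__()
        self.blocks = nn.ModuleList([Block(1, c), Block(c, c), Block(c, c)])
        self.attn = nn.ModuleList([CSAM(c) for _ in range(3)])
        self.convs = nn.ModuleList([nn.Conv2d(c, c, 3, padding=1) for _ in range(3)])
        self.head = nn.Conv2d(3 * c, 1, 1)   # fuse the concatenated branches into an edge map

    def forward(self, x):
        feats, size = [], None
        for i, (blk, att, conv) in enumerate(zip(self.blocks, self.attn, self.convs)):
            x = blk(x)                        # vertical path: feed the next Block
            branch = conv(att(x))             # horizontal path: attention + convolution
            if i == 0:
                size = branch.shape[-2:]
            else:                             # up-sample the later branches to the first scale
                branch = F.interpolate(branch, size=size, mode='bilinear', align_corners=False)
            feats.append(branch)
        return self.head(torch.cat(feats, dim=1))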
By extracting the edge information of the image to be evaluated to obtain an edge feature map, the edge information of the image can be effectively retained and used to reflect the definition of the image to be evaluated, while interference from background information in the image to be evaluated is eliminated, thereby improving the accuracy of the later definition evaluation.
As shown in fig. 7, the figure is an effect schematic diagram of the edge feature maps obtained after edge information extraction is performed on images to be evaluated. It can be seen from the resulting edge feature maps that the boundary in the second map is wider than in the first map and its overall pixel values are darker, which accords with the visual impression of the real scene; the same holds for the comparison between the third map and the fourth map.
And 303, carrying out noise reduction processing on the edge feature map through an average pooling module in the image definition scoring model to obtain a first feature map.
The image definition scoring model includes an average pooling module, a plurality of maximum pooling modules, a global maximum pooling layer and a fully connected layer, as shown in fig. 8. The average pooling module includes a convolution layer, an average pooling layer, and an activation function, and each of the maximum pooling modules includes a convolution layer, a maximum pooling layer, and an activation function.
The average pooling layer replaces a certain area of the edge feature map with its average value, which can effectively filter out prominent noise points, thereby achieving a noise reduction effect on the edge feature map and yielding the first feature map.
In a preferred embodiment, the average pooling layer may be a 2×2 average pooling layer.
And step 304, extracting local maximum values in the noise reduction feature map through a plurality of maximum pooling modules in the image definition scoring model to obtain a second feature map.
The maximum pooling layer replaces a certain area of the edge feature map with its maximum value, so that taking the local maximum highlights the influence of high-brightness areas. From theoretical analysis, for two degraded images with the same boundary profile, the image with the lower degradation level should be clearer at the boundary than the image with the higher degradation level, i.e. its corresponding gray values are higher.
Step 305, performing pooling processing on the second feature map through the global maximum pooling layer in the image definition scoring model to obtain multi-level feature information corresponding to the second feature map, where the multi-level feature information includes feature information of the second feature map on each channel.
And 306, performing full connection processing on the multi-level characteristic information through the full connection layer in the image definition scoring model to obtain the definition value of the image to be evaluated.
Through this full connection layer (the second full connection module), the multi-level characteristic information can be comprehensively considered, and an objective definition evaluation score is finally obtained.
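The following sketch illustrates a scoring model with the structure of steps 303 to 306; the channel widths and the use of three maximum pooling modules are illustrative assumptions:

import torch
import torch.nn as nn

class ScoringModel(nn.Module):
    def __init__(self, channels=(16, 32, 64, 128)):
        super().__init__()
        c0, c1, c2, c3 = channels
        # Average pooling module: convolution + 2x2 average pooling + activation (noise reduction).
        self.avg_module = nn.Sequential(nn.Conv2d(1, c0, 3, padding=1),
                                        nn.AvgPool2d(2), nn.ReLU())
        # Max pooling modules: convolution + max pooling + activation (keep local maxima).
        self.max_modules = nn.Sequential(
            nn.Sequential(nn.Conv2d(c0, c1, 3, padding=1), nn.MaxPool2d(2), nn.ReLU()),
            nn.Sequential(nn.Conv2d(c1, c2, 3, padding=1), nn.MaxPool2d(2), nn.ReLU()),
            nn.Sequential(nn.Conv2d(c2, c3, 3, padding=1), nn.MaxPool2d(2), nn.ReLU()),
        )
        self.global_max = nn.AdaptiveMaxPool2d(1)   # per-channel multi-level feature information
        self.fc = nn.Linear(c3, 1)                  # full connection -> definition value

    def forward(self, edge_map):
        first = self.avg_module(edge_map)            # step 303: first feature map
        second = self.max_modules(first)             # step 304: second feature map
        pooled = self.global_max(second).flatten(1)  # step 305: feature information per channel
        return self.fc(pooled)                       # step 306: definition value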
In the embodiment of the application, the edge feature map of the image to be evaluated is extracted through a trained image evaluation model, and the definition score of the image to be evaluated is then calculated based on the edge feature map. Because the edge features in the image to be evaluated are extracted and the trained image evaluation model evaluates the definition directly on those edge features, the edge information of the image is preserved during evaluation, which improves the evaluation accuracy, reduces the processing complexity and increases the processing speed.
It will be appreciated that the image sharpness scoring model needs to be trained prior to acquiring the image to be evaluated, so as to obtain the trained image sharpness scoring model.
FIG. 9 is a flowchart of a method for training the image sharpness scoring model, as shown in FIG. 9, in which the trained image evaluation model can be obtained by performing steps 901 to 902 in the following embodiments:
step 901, determining a training sample set comprising a plurality of sample image pairs, each sample image pair comprising a first sample image and a second sample image, the sharpness of the first sample image being higher than the sharpness of the second sample image.
In the embodiment of the present application, the acquisition mode of the training sample set is not limited.
As an alternative embodiment, the training sample set may be an open source sample set, that is, various sample images with different sharpness are downloaded from a network, and each sample image has a corresponding sample tag, where the sample tag is used to represent the sharpness level of the sample image.
In another alternative embodiment, a gaussian blur degradation process may be performed on the acquired image to obtain a plurality of sample images with different resolutions corresponding to the image, so as to construct a training sample set.
In a specific embodiment, the lens of the industrial camera may be continuously adjusted to an optimal clear state to acquire a plurality of sample images, where the sharpness of each sample image is greater than a threshold, and the threshold may be set according to actual requirements.
In a preferred embodiment, the number of transition pixels at the boundary of an acquired sample image is within 5 pixels, which ensures the sharpness of the sample image.
After a plurality of sample images are acquired, in order to expand the sample data, the embodiment of the application also performs preprocessing on each sample image.
Here, the manner of preprocessing the sample image is not limited, and may be, for example, rotation, translation, stretching, clipping, or the like.
In a preferred embodiment, when a plurality of sample images are acquired, a blurring degradation process may be performed on each sample image to obtain a plurality of processed sample images with different resolutions. The sharpness level of each processed sample image is thus different, while the image content remains that of the corresponding original sample image; only the sharpness differs.
Here, the specific manner of the blurring degradation process is not limited; for example, Gaussian kernels of different sizes may be used to blur the sample images, thereby obtaining a plurality of processed sample images of different sharpness. In a preferred embodiment, the blur level of the degraded sample images may be divided into 6 levels, where a higher level value means a more blurred processed sample image.
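For illustration, such a degradation step might look as follows; the specific kernel sizes used for the 6 blur levels are assumptions:

import cv2

def degrade(image, kernel_sizes=(3, 5, 9, 13, 17, 21)):
    """Return progressively more blurred versions of the image (blur levels 1..6)."""
    return [cv2.GaussianBlur(image, (k, k), 0) for k in kernel_sizes]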
And combining each sample image with a corresponding plurality of processed sample images to obtain a training sample set comprising a plurality of sample image pairs, wherein the definition of the previous sample image in each sample image pair is greater than that of the next sample image.
Here, the combination method of the sample image and the corresponding plurality of processed sample images is not limited. For example, the sample image may be randomly combined with the corresponding processed sample image, or the sample image may be randomly combined with the processed sample image corresponding to another sample image.
For example, the sample image a1 and the processed sample images b1, b2, b3, b4, b5 and b6, whose blur degrees increase in sequence, may be randomly combined to obtain sample image pairs such as (a1, b1), (a1, b2), (b2, b3), and so on.
Alternatively, the sample image a1 may be randomly combined with the processed sample images d1, d2, d3, d4, d5 and d6 of another sample image c1, whose blur degrees increase in sequence, to obtain sample image pairs such as (a1, d1), (a1, d2), (d2, d3), and the like.
It should be noted that, the combination of the sample images is random, and it is only necessary to ensure that the sharpness of the previous sample image is greater than that of the next sample image in the sample image pair obtained after the combination, that is, the blurring degree of the previous sample image is lower than that of the next sample image.
For example, a qualified sample image pair may be represented as < image1_blur1, image4_blur3>, because blur1 represents an image1 image with a blur level of 1, and blur3 represents an image4 image with a blur level of 3, which is a qualified pair because blur level 1 is lower than blur level 3.
After obtaining a plurality of qualified sample image pairs, a training sample set consisting of the plurality of sample image pairs can be obtained.
And step 902, training the initial scoring model according to a preset loss function and a training sample set to obtain an image definition scoring model.
In the embodiment of the present application, the specific type of the loss function is not limited, for example, the loss function may be a Ranking loss function.
Wherein the loss function is shown in the following formula 1:
loss(x1, x2, y) = max(0, -y*(x1 - x2) + margin)   (formula 1);
where x1 and x2 represent the scores predicted by the model for the two images of a pair, and y=1 indicates that the sharpness of the image corresponding to x1 is higher than that of the image corresponding to x2. margin is a constant with a value of 0.5.
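Formula 1 corresponds to a standard margin ranking loss; a minimal sketch using PyTorch's built-in implementation with margin 0.5 (the example score values are arbitrary) is:

import torch
import torch.nn as nn

criterion = nn.MarginRankingLoss(margin=0.5)
x1 = torch.tensor([7.2])        # predicted score of the clearer image
x2 = torch.tensor([6.9])        # predicted score of the more blurred image
y = torch.tensor([1.0])         # y = 1: x1 should rank higher than x2
loss = criterion(x1, x2, y)     # max(0, -y * (x1 - x2) + margin)
print(loss.item())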
It should be noted that, the initial image evaluation model used herein is also a convolutional neural network model, which also includes an edge extraction network model and an image sharpness scoring model, and its construction is the same as that of the trained image evaluation model.
After the training sample set and the loss function are obtained, the training data set can be used as input samples, the label corresponding to each sample image is used as the expected output, and both are substituted into the preset loss function. Each iteration of the initial image evaluation model includes a forward propagation pass and a backward propagation pass: forward propagation runs from the input layer to the output layer, and backward propagation starts from the last layer, i.e. the output layer. The weights and biases of the whole network are initialized before the first forward pass, and these parameters, i.e. the weight values and bias values, must then be adjusted. The basis for the adjustment is the difference between the output value of the network's output layer and the true value; reducing this difference by adjusting the parameters is the optimization target of the neural network.
When the number of iterations reaches a threshold value, or the difference between the output result and the label is small enough, the parameters obtained in that iteration can be used as the final parameters of the initial image evaluation model, so as to obtain the trained image evaluation model.
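A hedged sketch of such a pairwise training iteration is given below; the model, data loader and optimizer names are assumptions, and the composed model is assumed to output one score per image:

import torch

def train_epoch(model, loader, optimizer, criterion):
    model.train()
    for img_sharp, img_blur in loader:        # one sample image pair per item
        score_sharp = model(img_sharp)        # forward propagation
        score_blur = model(img_blur)
        y = torch.ones_like(score_sharp)      # the first image should score higher
        loss = criterion(score_sharp, score_blur, y)
        optimizer.zero_grad()
        loss.backward()                       # back propagation: adjust weights and biases
        optimizer.step()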
Training was performed on the basis of the above model structure, and testing was performed on acquired images processed in the same manner, forming 14992 sample image pairs. The scoring value of each image is predicted by the model and statistics are collected per pair: if the scoring value of the first image in a sample image pair is larger than that of the second image, the prediction is correct; otherwise it is considered incorrect. A total of 14921 sample image pairs were predicted correctly.
Since different pairs contain data of different blur levels, the statistics are also collected per blur level; the statistical results are shown in fig. 10. Panels a to f of fig. 10 represent, in order, the scoring results for the images corresponding to blur levels 1 to 6.
As can be seen from the statistical results, the scoring value shows a descending trend as the blur level increases, and at the same time the scoring values are concentrated when the degree of blur is consistent. The score difference within the same blur level is less than 10 points, while the score difference between different blur levels is more than 10 points. Because the model extracts edge information to evaluate definition, it generalizes across scene requirements; that is, the statistics are expected to generalize, and score differences within 10 points for different industrial fields of view are believed to be achievable.
The embodiment of the application further provides a method for applying image definition evaluation, as shown in fig. 11, which may include the following steps 1101 to 1103:
Step 1101, shooting a target object to acquire an image to be evaluated.
Step 1102, calculating the sharpness value of the image to be evaluated by applying the image sharpness evaluation method provided in the above embodiments.
Here, the manner of acquiring the sharpness value of the image to be evaluated is as shown in the above embodiment, and will not be described herein.
Step 1103, determining an optimal focus position of a device for capturing the image to be evaluated according to the sharpness value of the image to be evaluated; or generating alarm information according to the definition value of the image to be evaluated and the image definition early warning threshold value.
When determining the optimal focusing position of the device for shooting the image to be evaluated according to the definition values of the images to be evaluated, definition values of a plurality of images to be evaluated may be acquired, and the focusing position corresponding to the highest definition value is taken as the optimal focusing position.
When generating alarm information according to the definition value of the image to be evaluated and the image definition early-warning threshold value, the alarm information may be generated in the case that the calculated definition value of the image to be evaluated is less than or equal to the image definition early-warning threshold value.
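The two applications can be sketched as follows; the function names and the capture/evaluate callables are hypothetical stand-ins for camera control and the evaluation method above:

def best_focus_position(focus_positions, capture, evaluate):
    # Sweep focus positions, score each captured image, return the best position.
    scores = {pos: evaluate(capture(pos)) for pos in focus_positions}
    return max(scores, key=scores.get)   # position with the highest definition value

def check_sharpness_alarm(image, evaluate, warn_threshold):
    # Raise an alarm when the definition value drops to or below the early-warning threshold.
    score = evaluate(image)
    if score <= warn_threshold:
        return f"ALARM: definition value {score:.1f} <= threshold {warn_threshold}"
    return None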
The image definition evaluation method provided by the embodiment of the application can be widely applied in industrial production to assist engineers with on-site focusing. When an engineer focuses the camera, whether it has been adjusted to a clear state can be judged from the definition value of the acquired image, which reduces focusing time, reduces the engineer's visual fatigue and improves working efficiency. When the camera becomes loose or a fault produces a virtual focus (i.e. lower definition), an early warning can be issued in time, reducing unnecessary losses caused by a poor camera state.
It should be understood that, although the steps in the flowcharts described above are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described above may include a plurality of sub-steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of execution of the sub-steps or stages is not necessarily sequential, but may be performed alternately or alternately with at least a part of the sub-steps or stages of other steps or other steps.
Based on the foregoing embodiments, an embodiment of the present application provides an image sharpness evaluation apparatus. The apparatus includes the modules described below and the units included in those modules, which may be implemented by a processor; of course, they may also be implemented by a specific logic circuit. In an implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 12 is a schematic structural diagram of an image sharpness evaluation apparatus according to an embodiment of the present application, as shown in fig. 12, the apparatus 1200 includes an acquisition module 1201, an extraction module 1202, and a processing module 1203, where:
an acquisition module 1201, configured to acquire an image to be evaluated;
an extracting module 1202, configured to extract edge information of a target object in the image to be evaluated to obtain an edge feature map corresponding to the image to be evaluated;
the processing module 1203 is configured to calculate a sharpness value of the image to be evaluated based on the image sharpness scoring model and the edge feature map; the edge information comprises a set of pixel points of a pixel gray value transition region in the edge region of the target object.
In some embodiments, the image sharpness scoring model includes an average pooling module, a plurality of maximum pooling modules, a global maximum pooling layer, and a fully connected layer.
In some embodiments, the average pooling module comprises a convolution layer, an average pooling layer, and an activation function, and each of the maximum pooling modules comprises a convolution layer, a maximum pooling layer, and an activation function.
In some embodiments, the processing module 1203 is specifically configured to perform noise reduction processing on the edge feature map through the average pooling module to obtain a first feature map; extracting local maximum values in the noise reduction feature images through a plurality of maximum pooling modules to obtain second feature images; carrying out pooling treatment on the second feature map through the global maximum pooling layer to obtain multi-level feature information corresponding to the second feature map, wherein the multi-level feature information comprises feature information of the second feature map on each channel; and carrying out full connection processing on the multi-level characteristic information through the full connection layer to obtain the definition value of the image to be evaluated.
In some embodiments, the obtaining module 1201 is further configured to determine a training sample set, where the training sample set includes a plurality of sample image pairs, each of the sample image pairs including a first sample image and a second sample image, the first sample image having a higher sharpness than the second sample image;
The processing module 1203 is further configured to train the initial scoring model according to a preset loss function and the training sample set, so as to obtain the image sharpness scoring model.
In some embodiments, the processing module 1203 is further configured to perform gaussian blur degradation processing on the acquired image, to obtain a plurality of sample images with different resolutions corresponding to the image.
Fig. 13 is a schematic structural diagram of an image sharpness evaluation apparatus according to an embodiment of the present application, as shown in fig. 13, the apparatus 1300 includes an acquisition module 1301 and a processing module 1302, where:
an acquisition module 1301, configured to capture a target object to acquire an image to be evaluated;
a processing module 1302, configured to calculate a sharpness value of the image to be evaluated by applying the image sharpness evaluation method described above;
determining an optimal focusing position of the equipment for shooting the image to be evaluated according to the definition value of the image to be evaluated; or
generating alarm information according to the definition value of the image to be evaluated and an image definition early warning threshold value.
In some embodiments, the processing module 1302 is specifically configured to obtain sharpness values of a plurality of the images to be evaluated, and define a focus position corresponding to a highest sharpness value as the best focus position.
In some embodiments, the processing module 1302 is specifically configured to generate the alarm information when it is determined that the calculated sharpness value of the image to be evaluated is less than or equal to an image sharpness pre-warning threshold.
The description of the apparatus embodiments above is similar to that of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the apparatus of the present application, please refer to the description of the embodiments of the method of the present application.
It should be noted that, in the embodiment of the present application, the division of the modules by the image sharpness evaluation apparatus shown in fig. 12 and 13 is schematic, and is merely a logic function division, and another division manner may be adopted in actual implementation. In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units. Or in a combination of software and hardware.
It should be noted that, in the embodiment of the present application, if the method is implemented in the form of a software functional module, and sold or used as a separate product, the method may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially or partly contributing to the related art, embodied in the form of a software product stored in a storage medium, including several instructions for causing an electronic device to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, an optical disk, or other various media capable of storing program codes. Thus, embodiments of the application are not limited to any specific combination of hardware and software.
The embodiment of the application provides a computer device, which can be a server, and whose internal structure diagram can be as shown in fig. 14. The computer device includes a processor, a memory, and a network interface connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used for storing data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements the above-mentioned method.
An embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method provided in the above-described embodiment.
Embodiments of the present application provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the steps of the method provided by the method embodiments described above.
It will be appreciated by those skilled in the art that the structure shown in fig. 14 is merely a block diagram of a portion of the structure relevant to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, the image sharpness evaluation apparatus provided by the present application may be implemented in the form of a computer program executable on a computer device as shown in fig. 14. The memory of the computer device may store the program modules that make up the apparatus; the computer program of each program module, when executed by the processor, carries out the steps of the methods of the embodiments of the application described in this specification.
It should be noted here that the description of the storage medium and apparatus embodiments above is similar to that of the method embodiments and has similar advantageous effects. For technical details not disclosed in the storage medium and apparatus embodiments of the present application, refer to the description of the method embodiments of the present application.
It should be appreciated that reference throughout this specification to "one embodiment", "an embodiment", or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment", "in an embodiment", or "in some embodiments" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should also be understood that, in the various embodiments of the present application, the sequence numbers of the foregoing processes do not imply an order of execution; the order of execution should be determined by the functions and internal logic of the processes and should not constitute any limitation on the implementation of the embodiments of the present application. The foregoing embodiment numbers are merely for description and do not represent the relative merits of the embodiments. The foregoing description of the various embodiments is intended to highlight their differences; for the same or similar aspects, the embodiments may be referred to one another, and these are not repeated here for brevity.
The term "and/or" is herein merely an association relation describing associated objects, meaning that there may be three relations, e.g. object a and/or object B, may represent: there are three cases where object a alone exists, object a and object B together, and object B alone exists.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The embodiments described above are merely illustrative; the division of the modules is merely a logical function division, and other divisions may be used in practice, for example: multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or modules may be electrical, mechanical, or in other forms.
The modules described above as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules; they may be located in one place or distributed across a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present application may be integrated in one processing unit, each module may exist separately as one unit, or two or more modules may be integrated in one unit; the integrated modules may be implemented in hardware or in a combination of hardware and software functional units.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments may be completed by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes: a removable storage device, a read-only memory (ROM), a magnetic disk, an optical disk, or other media capable of storing program code.
Alternatively, if the above-described integrated units of the present application are implemented in the form of software functional modules and sold or used as separate products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present application, or the part thereof contributing to the related art, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing an electronic device to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a removable storage device, a ROM, a magnetic disk, an optical disk, or other media capable of storing program code.
The methods disclosed in the method embodiments provided by the application can be arbitrarily combined under the condition of no conflict to obtain a new method embodiment.
The features disclosed in the several product embodiments provided by the application can be combined arbitrarily under the condition of no conflict to obtain new product embodiments.
The features disclosed in the embodiments of the method or the apparatus provided by the application can be arbitrarily combined without conflict to obtain new embodiments of the method or the apparatus.
The foregoing is merely an embodiment of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed by the present application, and such changes and substitutions are intended to be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A method of evaluating image sharpness, the method comprising:
acquiring an image to be evaluated;
extracting edge information of a target object in the image to be evaluated to obtain an edge feature map corresponding to the image to be evaluated;
calculating a sharpness value of the image to be evaluated based on an image sharpness scoring model and the edge feature map;
the edge information comprises a set of pixel points of a pixel gray value transition region in the edge region of the target object.
2. The method of claim 1, wherein the image sharpness scoring model includes an average pooling module, a plurality of maximum pooling modules, a global maximum pooling layer, and a fully connected layer.
3. The method of claim 2, wherein the average pooling module comprises a convolutional layer, an average pooling layer, and an activation function, and wherein each of the maximum pooling modules comprises a convolutional layer, a maximum pooling layer, and an activation function.
4. The method according to claim 3, wherein calculating the sharpness value of the image to be evaluated based on the image sharpness scoring model and the edge feature map comprises:
performing noise reduction processing on the edge feature map through the average pooling module to obtain a first feature map;
extracting local maxima from the first feature map through the plurality of maximum pooling modules to obtain a second feature map;
pooling the second feature map through the global maximum pooling layer to obtain multi-level feature information corresponding to the second feature map, the multi-level feature information comprising feature information of the second feature map on each channel; and
performing full connection processing on the multi-level feature information through the fully connected layer to obtain the sharpness value of the image to be evaluated.
5. The method of claim 1, wherein prior to acquiring the image to be evaluated, the method further comprises:
determining a training sample set comprising a plurality of sample image pairs, each sample image pair comprising a first sample image and a second sample image, the first sample image having a higher sharpness than the second sample image;
and training an initial scoring model according to a preset loss function and the training sample set to obtain the image sharpness scoring model.
6. The method of claim 5, wherein, prior to determining the training sample set, the method further comprises:
performing Gaussian blur degradation processing on an acquired image to obtain a plurality of sample images of different degrees of sharpness corresponding to the image.
7. A method for applying image sharpness evaluation, comprising:
shooting a target object to obtain an image to be evaluated;
calculating a sharpness value of the image to be evaluated by the method of claim 1; and
determining an optimal focusing position of the device used to capture the image to be evaluated according to the sharpness value of the image to be evaluated; or
generating alarm information according to the sharpness value of the image to be evaluated and an image sharpness early-warning threshold.
8. The method of claim 7, wherein determining the optimal focusing position of the device used to capture the image to be evaluated according to the sharpness value of the image to be evaluated comprises:
acquiring sharpness values of a plurality of images to be evaluated, and determining the focusing position corresponding to the highest sharpness value as the optimal focusing position.
9. The method of claim 7, wherein generating the alarm information according to the sharpness value of the image to be evaluated and the image sharpness early-warning threshold comprises:
generating the alarm information when the calculated sharpness value of the image to be evaluated is less than or equal to the image sharpness early-warning threshold.
10. An image sharpness evaluation apparatus, comprising:
the acquisition module is used for acquiring the image to be evaluated;
the extraction module is used for extracting edge information of a target object in the image to be evaluated to obtain an edge feature map corresponding to the image to be evaluated;
the processing module is used for calculating a sharpness value of the image to be evaluated based on an image sharpness scoring model and the edge feature map; wherein the edge information comprises a set of pixel points of a pixel gray value transition region in the edge region of the target object.
11. A computer device comprising a memory and a processor, the memory storing a computer program executable on the processor, characterized in that the processor implements the steps of the method of any of claims 1 to 6 or 7 to 9 when the program is executed.
12. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method according to any one of claims 1 to 6 or 7 to 9.
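To make the claimed pipeline easier to follow, a few illustrative sketches are given below; they are not the patented implementation, only one plausible reading of the claims. The first sketch covers the edge-information extraction of claims 1 and 10. The claims do not name an edge operator, so the Sobel gradient and the percentile threshold used here to isolate the gray-value transition region are assumptions, as is the idea that the target object is located by a bounding box supplied by some upstream detector.

```python
import cv2
import numpy as np


def extract_edge_feature_map(image_bgr: np.ndarray, target_box: tuple) -> np.ndarray:
    """Build an edge feature map for the target object region.

    target_box is (x, y, w, h) locating the target object; how the object is
    detected is outside the scope of this sketch. The Sobel operator and the
    90th-percentile threshold are illustrative choices, not taken from the claims.
    """
    x, y, w, h = target_box
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)[y:y + h, x:x + w]

    # Gradient magnitude is large where pixel gray values are in transition.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = cv2.magnitude(gx, gy)

    # Keep only the strongest transitions as the edge feature map.
    threshold = np.percentile(magnitude, 90)
    edge_map = np.where(magnitude >= threshold, magnitude, 0.0).astype(np.float32)
    return edge_map
```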
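Claims 2 to 4 describe the scoring model as an average pooling module (convolution, average pooling, activation), a plurality of maximum pooling modules (convolution, maximum pooling, activation), a global maximum pooling layer, and a fully connected layer. A PyTorch sketch consistent with that structure follows; the channel counts, kernel sizes, number of maximum pooling modules, and the choice of ReLU are assumptions the claims leave open.

```python
import torch
import torch.nn as nn


class SharpnessScoringModel(nn.Module):
    """Sketch of the claimed scoring model: a conv + average-pool denoising module,
    stacked conv + max-pool modules, global max pooling, and a fully connected
    head that outputs a scalar sharpness value per image."""

    def __init__(self, in_channels: int = 1, base_channels: int = 16, num_max_modules: int = 3):
        super().__init__()
        # Average pooling module: convolution + average pooling + activation (claim 3).
        self.avg_module = nn.Sequential(
            nn.Conv2d(in_channels, base_channels, kernel_size=3, padding=1),
            nn.AvgPool2d(kernel_size=2),
            nn.ReLU(inplace=True),
        )
        # A stack of maximum pooling modules: convolution + max pooling + activation.
        layers = []
        channels = base_channels
        for _ in range(num_max_modules):
            layers += [
                nn.Conv2d(channels, channels * 2, kernel_size=3, padding=1),
                nn.MaxPool2d(kernel_size=2),
                nn.ReLU(inplace=True),
            ]
            channels *= 2
        self.max_modules = nn.Sequential(*layers)
        # Global max pooling collapses each channel to a single value (claim 4).
        self.global_max_pool = nn.AdaptiveMaxPool2d(1)
        # Fully connected layer maps the per-channel features to a sharpness value.
        self.fc = nn.Linear(channels, 1)

    def forward(self, edge_feature_map: torch.Tensor) -> torch.Tensor:
        x = self.avg_module(edge_feature_map)   # first feature map (noise reduced)
        x = self.max_modules(x)                 # second feature map (local maxima)
        x = self.global_max_pool(x).flatten(1)  # per-channel multi-level features
        return self.fc(x).squeeze(-1)           # sharpness value per image
```

One reason global max pooling is a natural fit here is that sharpness is dominated by the strongest edge responses, so taking the per-channel maximum keeps exactly that signal; this rationale is an editorial observation rather than something stated in the claims.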
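Claims 5 and 6 train the model on image pairs in which the first sample is sharper than the second, with the blurrier samples produced by Gaussian blur degradation. The sketch below pairs each image with a blurred copy and uses a margin ranking loss as the "preset loss function"; the specific loss, blur strength, optimizer, and tensor preparation are assumptions, since the claims do not fix them.

```python
import cv2
import numpy as np
import torch
import torch.nn as nn


def make_degraded_pair(image_gray: np.ndarray, sigma: float = 2.0):
    """Return (sharper, blurrier) versions of one image via Gaussian blur degradation.

    Conversion of these arrays into model-ready tensors (normalisation, edge
    extraction, batching) is omitted here for brevity.
    """
    blurred = cv2.GaussianBlur(image_gray, ksize=(0, 0), sigmaX=sigma)
    return image_gray.astype(np.float32), blurred.astype(np.float32)


def training_step(model, optimizer, sharp_batch, blurry_batch, margin: float = 0.1):
    """One pairwise training step: the sharper image should receive the higher score."""
    ranking_loss = nn.MarginRankingLoss(margin=margin)
    score_sharp = model(sharp_batch)        # shape (N,)
    score_blurry = model(blurry_batch)      # shape (N,)
    target = torch.ones_like(score_sharp)   # +1 means score_sharp should exceed score_blurry
    loss = ranking_loss(score_sharp, score_blurry, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A pairwise objective of this kind only needs relative ordering between the two samples of a pair, which is consistent with the claims' requirement that the first sample image be sharper than the second, without requiring absolute sharpness labels.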
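Claim 8 selects the optimal focusing position by comparing sharpness values obtained at different focus settings. A minimal sketch, assuming a hypothetical `capture_at(position)` callable provided by the camera control stack and a `score_image(image)` callable wrapping the scoring pipeline, is:

```python
def find_best_focus(focus_positions, capture_at, score_image):
    """Sweep candidate focus positions and return the one whose captured image scores highest.

    Both capture_at(position) -> image and score_image(image) -> float are assumed
    to be supplied by the surrounding system; they are not defined by the claims.
    """
    best_position, best_score = None, float("-inf")
    for position in focus_positions:
        image = capture_at(position)
        score = score_image(image)
        if score > best_score:
            best_position, best_score = position, score
    return best_position, best_score
```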
CN202311339802.0A 2023-10-16 2023-10-16 Image definition evaluation method and device, equipment and storage medium Pending CN117218037A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311339802.0A CN117218037A (en) 2023-10-16 2023-10-16 Image definition evaluation method and device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311339802.0A CN117218037A (en) 2023-10-16 2023-10-16 Image definition evaluation method and device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117218037A true CN117218037A (en) 2023-12-12

Family

ID=89035377

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311339802.0A Pending CN117218037A (en) 2023-10-16 2023-10-16 Image definition evaluation method and device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117218037A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118972533A (en) * 2024-10-17 2024-11-15 深圳市臻火科技有限公司 Projector focusing method, device, equipment and storage medium
CN118972533B (en) * 2024-10-17 2025-01-10 深圳市臻火科技有限公司 Focusing method, device, equipment and storage medium of projector

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination