
CN116071279A - Image processing method, device, computer equipment and storage medium - Google Patents


Info

Publication number
CN116071279A
CN116071279A
Authority
CN
China
Prior art keywords
image
features
license plate
level
plate image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211638445.3A
Other languages
Chinese (zh)
Inventor
胡中华
李冰茹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Signalway Technologies Co ltd
Original Assignee
Beijing Signalway Technologies Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Signalway Technologies Co ltd filed Critical Beijing Signalway Technologies Co ltd
Priority to CN202211638445.3A
Publication of CN116071279A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to an image processing method, an image processing device, a computer device and a storage medium. The method comprises the following steps: acquiring a spectrum image corresponding to an original license plate image to be processed; performing feature extraction processing on the spectrum image to obtain a first type of features; the first type of features are feature information for expressing frequency domain features; performing feature extraction processing on the original license plate image to obtain second class features; the second type of features comprise feature information for expressing the characteristics of the spatial domain; fusing the first type of features and the second type of features to obtain fused features; carrying out license plate image restoration processing based on the fusion characteristics to obtain a target license plate image; the definition of the target license plate image is higher than that of the original license plate image. The method can improve adaptability of license plate image restoration processing.

Description

Image processing method, device, computer equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a computer device, and a storage medium.
Background
With the development of image processing technology, more and more parking lots, toll stations and the like realize automatic license plate detection, recognition and the like through image processing technology, which effectively saves manpower and material resources and improves traffic efficiency. However, movement of the vehicle tends to blur the captured images, which can adversely affect downstream tasks.
In the conventional technology, the sharpness of a blurred image is improved by estimating a blur kernel, so as to reduce the adverse effect of the blurred image on downstream tasks. However, because the size of the blur kernel and the degree of blur vary irregularly from image to image, methods that fit a blur kernel cannot avoid the problem of poor adaptability.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an image processing method, apparatus, computer device, computer-readable storage medium, and computer program product that can improve adaptability.
In a first aspect, the present application provides an image processing method. The method comprises the following steps:
acquiring a spectrum image corresponding to an original license plate image to be processed;
performing feature extraction processing on the spectrum image to obtain a first type of features; the first type of features are feature information for expressing frequency domain features;
Performing feature extraction processing on the original license plate image to obtain second class features; the second type of features comprise feature information for expressing the characteristics of the spatial domain;
fusing the first type of features and the second type of features to obtain fused features;
carrying out license plate image restoration processing based on the fusion characteristics to obtain a target license plate image; the definition of the target license plate image is higher than that of the original license plate image.
In one embodiment, the first type of feature is a multi-level first type of feature obtained by multi-level feature extraction; the second type of features are multi-level second type of features obtained through multi-level feature extraction;
the fusing the first type of features and the second type of features to obtain fused features comprises:
fusing the first type of features and the second type of features to obtain fused features of each level;
the license plate image restoration processing based on the fusion characteristics is carried out to obtain a target license plate image, and the method comprises the following steps:
and carrying out license plate image restoration processing based on the fusion characteristics of each level to obtain a target license plate image.
In one embodiment, the performing feature extraction processing on the spectrum image to obtain a first type of feature includes:
Performing multistage feature extraction processing on the frequency spectrum image to respectively obtain multistage first class features; wherein, the input data of the next stage of feature extraction processing is the result of the present stage of feature extraction processing;
the step of extracting the features of the original license plate image to obtain second class features comprises the following steps:
performing multistage feature extraction processing on the original license plate image to respectively obtain multistage second class features; the input data of the next-stage feature extraction processing is a fusion feature obtained by fusing the result of the current-stage feature extraction processing and the first-class feature of the current stage.
In one embodiment, the performing multi-level feature extraction on the spectrum image to obtain multi-level first-class features includes:
taking the frequency spectrum image as the input of a frequency domain feature extraction layer, sequentially carrying out feature extraction by a plurality of layers of convolution modules of the frequency domain feature extraction layer to obtain first class features output by each layer of convolution modules; the input data of the next-level convolution module is the output result of the previous-level convolution module;
the step of carrying out multistage feature extraction processing on the original license plate image to obtain multistage second class features respectively comprises the following steps:
Taking the original license plate image as the input of a spatial domain feature extraction layer, sequentially carrying out feature extraction through multi-level residual modules of the spatial domain feature extraction layer to obtain second class features output by each level residual module; the input data of the residual error module of the next level is a fusion characteristic obtained by fusing the second type characteristic output by the residual error module of the level and the first type characteristic output by the convolution module of the level.
In one embodiment, the license plate image restoration processing based on the fusion features of each stage includes:
performing multi-level deconvolution processing on the fusion characteristics of each level through a multi-level deconvolution module to obtain a target license plate image; the input data of the deconvolution module of the last level is the fusion characteristic of the last level; the input data of the deconvolution module of the previous level is obtained by fusing the output result of the deconvolution module of the present level and the fusion characteristic of the previous level.
In one embodiment, the target license plate image is an output obtained by taking an original license plate image as an input of a license plate image restoration model; the method further comprises a training step of the license plate image restoration model; the training step of the license plate image restoration model comprises the following steps:
Determining a sample image pair; the sample image pair comprises a sample image and a label image; the sharpness of the label image is higher than that of the sample image;
determining a sample image in the sample image pair as input of a model to be trained, and performing iterative training on the model to be trained based on the difference between an image output by the model to be trained and a label image in the sample image pair to obtain a trained license plate image restoration model.
In one embodiment, the determining the sample image pair includes at least one of:
acquiring a plurality of frames of vehicle moving images, performing superposition processing on the plurality of frames of vehicle moving images, then performing transformation processing to obtain sample images, and determining a tag image from the plurality of frames of vehicle moving images to determine a sample image pair comprising the sample images and the tag image;
and acquiring an out-of-focus vehicle image and a focused vehicle image, performing transformation processing on the out-of-focus vehicle image to obtain a sample image, and determining the focused vehicle image as a label image to determine a sample image pair comprising the sample image and the label image.
In a second aspect, the present application also provides an image processing apparatus. The device comprises:
The acquisition unit is used for acquiring a spectrum image corresponding to the original license plate image to be processed;
the extraction unit is used for carrying out feature extraction processing on the frequency spectrum image to obtain first type features; the first type of features are feature information for expressing frequency domain features; performing feature extraction processing on the original license plate image to obtain second class features; the second type of features comprise feature information for expressing the characteristics of the spatial domain; fusing the first type of features and the second type of features to obtain fused features;
the restoring unit is used for carrying out license plate image restoring processing based on the fusion characteristics to obtain a target license plate image; the definition of the target license plate image is higher than that of the original license plate image.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the steps of the method described above when the processor executes the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the method described above.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of the method described above.
The image processing method, the image processing device, the computer equipment, the storage medium and the computer program product acquire a spectrum image corresponding to an original license plate image to be processed; performing feature extraction processing on the frequency spectrum image to obtain first-class features; the first type of features are feature information for expressing frequency domain features; performing feature extraction processing on the original license plate image to obtain second class features; the second class of features comprise feature information for expressing the features of the spatial domain; fusing the first type of features and the second type of features to obtain fused features; carrying out license plate image restoration processing based on the fusion characteristics to obtain a target license plate image; the sharpness of the target license plate image is higher than that of the original license plate image. The first type features and the second type features are obtained by respectively extracting the frequency spectrum image and the original license plate image corresponding to the original license plate image, and then license plate image restoration processing is carried out based on the fusion features obtained by fusing the first type features and the second type features, so that a target license plate image with higher definition than the original license plate image is obtained, and compared with a mode of estimating a fuzzy core, the method can adapt to the original license plate image with different fuzzy degrees, and improves the adaptability.
Drawings
FIG. 1 is a flow chart of an image processing method in one embodiment;
FIG. 2 is a simplified flow diagram of an image processing method according to an embodiment;
FIG. 3 is a schematic diagram of an image restoration model in one embodiment;
FIG. 4 is a block diagram showing the structure of an image processing apparatus in one embodiment;
FIG. 5 is an internal block diagram of a computer device in one embodiment;
fig. 6 is an internal structural view of a computer device in another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, an image processing method is provided. The method is described here as applied to a computer device by way of example; it is understood that the computer device may include at least one of a terminal or a server. The method can be applied to a terminal or a server, or to a system comprising a terminal and a server and realized through interaction between the terminal and the server. In this embodiment, the method includes the following steps:
S102, acquiring a spectrum image corresponding to an original license plate image to be processed; and carrying out feature extraction processing on the frequency spectrum image to obtain first-class features.
Wherein the first type of features are feature information for expressing frequency domain features. The spectral image indicates the frequency domain characteristics of the original license plate image. The first type of features are in fact frequency domain features of the original license plate image.
For example, the computer device may convert the original license plate image into a gray scale image, and then perform fast fourier transform on the gray scale image to obtain a spectrum image corresponding to the original license plate image. The computer device may obtain a multi-level first class feature by performing multi-level feature extraction on the spectral image. It is understood that the multi-level first class features include first class features obtained by each level of feature extraction.
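As a minimal illustrative sketch of this step (not taken verbatim from the disclosure), the grayscale conversion and fast Fourier transform could be implemented with OpenCV and NumPy as follows; the log-magnitude scaling and min-max normalization are assumptions, since the text only states that a grayscale image is transformed into a spectrum image.

```python
import cv2
import numpy as np

def spectrum_image(license_plate_bgr: np.ndarray) -> np.ndarray:
    """Convert an original license plate image to a single-channel spectrum image.

    The log-magnitude scaling and normalization are assumptions; the embodiment
    only specifies grayscale conversion followed by a fast Fourier transform.
    """
    gray = cv2.cvtColor(license_plate_bgr, cv2.COLOR_BGR2GRAY)
    freq = np.fft.fft2(gray)                      # 2-D fast Fourier transform
    freq = np.fft.fftshift(freq)                  # move low frequencies to the center
    magnitude = np.log1p(np.abs(freq))            # compress the dynamic range
    magnitude = (magnitude - magnitude.min()) / (magnitude.max() - magnitude.min() + 1e-8)
    return magnitude.astype(np.float32)           # H x W spectrum image in [0, 1]
```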
In one embodiment, the computer device may derive the first type of feature by convolving the spectral image.
In one embodiment, the computer device may perform a multi-level convolution process on the spectral image to obtain a multi-level first-class feature. It will be appreciated that the multi-level first class of features includes the first class of features resulting from each level of convolution processing.
In one embodiment, the computer device may comprise at least one of a terminal or a server. The terminal can be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet of things equipment and portable wearable equipment, and the internet of things equipment can be smart speakers, smart televisions, smart air conditioners, smart vehicle-mounted equipment and the like. The portable wearable device may be a smart watch, smart bracelet, headset, or the like. The server may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers.
S104, carrying out feature extraction processing on the original license plate image to obtain second class features; and fusing the first type of features and the second type of features to obtain fused features.
Wherein the second class of features comprise feature information for expressing spatial domain features;
the computer device may obtain the multi-level second-class feature by performing multi-level feature extraction processing on the original license plate image. The computer equipment can fuse the first type of features and the second type of features of the same level to obtain fused features of each level. The multi-stage second class features comprise second class features obtained by extracting each stage of features.
In one embodiment, the computer device may obtain the second type of feature by performing residual learning on the original license plate image.
In one embodiment, the computer device may obtain the multi-level second class feature by performing multi-level residual learning on the original license plate image. The multi-level second class features comprise second class features respectively obtained by residual learning of each level.
S106, license plate image restoration processing is carried out based on the fusion characteristics, and a target license plate image is obtained.
The definition of the target license plate image is higher than that of the original license plate image. It can be appreciated that the original license plate image is more blurred than the target license plate image, and the license plate image needs to be restored to improve the definition.
The computer device may derive the attention weight by performing a self-attention calculation on the fused features, for example. The computer equipment can perform license plate image restoration processing according to the attention weight and the fusion characteristic to obtain a target license plate image.
In one embodiment, the computer device may perform an upsampling process based on the fusion features to obtain the target license plate image.
In one embodiment, the computer device may perform an upsampling process based on the fusion feature to obtain an upsampled result, and perform a convolution process on the upsampled result to obtain the target license plate image.
In the image processing method, a spectrum image corresponding to an original license plate image to be processed is obtained; performing feature extraction processing on the frequency spectrum image to obtain first-class features; the first type of features are feature information for expressing frequency domain features; performing feature extraction processing on the original license plate image to obtain second class features; the second class of features comprise feature information for expressing the features of the spatial domain; fusing the first type of features and the second type of features to obtain fused features; carrying out license plate image restoration processing based on the fusion characteristics to obtain a target license plate image; the sharpness of the target license plate image is higher than that of the original license plate image. The first type features and the second type features are obtained by respectively extracting the frequency spectrum image and the original license plate image corresponding to the original license plate image, and then license plate image restoration processing is carried out based on the fusion features obtained by fusing the first type features and the second type features, so that a target license plate image with higher definition than the original license plate image is obtained, and compared with a mode of estimating a fuzzy core, the method can adapt to the original license plate image with different fuzzy degrees, and improves the adaptability.
In one embodiment, the first type of features are multi-level first-class features obtained by multi-level feature extraction, and the second type of features are multi-level second-class features obtained through multi-level feature extraction; fusing the first type of features and the second type of features to obtain fused features comprises: fusing the first-class features and the second-class features of the same level to obtain fused features of each level; and performing license plate image restoration processing based on the fusion features to obtain a target license plate image comprises: performing license plate image restoration processing based on the fusion features of each level to obtain a target license plate image.
For example, the computer device may fuse the first class feature and the second class feature of the same level to obtain a fused feature of each level. The computer equipment can obtain the attention weights respectively corresponding to the fusion characteristics of each level by carrying out self-attention calculation on the fusion characteristics of each level. It will be appreciated that the attention weight corresponding to the fused feature of each stage is used to weight the fused feature of that stage. The computer equipment can determine the fusion characteristics weighted by the attention weight, and perform license plate image restoration processing based on the weighted fusion characteristics to obtain the target license plate image.
In one embodiment, the computer device may perform self-attention computation on the fused features of the present stage through the self-attention module of the present stage for the fused features of each stage, to obtain an attention weight corresponding to the fused features of each stage. It is appreciated that the self-attention module is used to adaptively adjust feature weights of the fusion features.
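As a concrete, non-authoritative illustration of such a per-level self-attention module, the sketch below uses the common query/key/value form over spatial positions in PyTorch; the 1x1 convolutions, the channel reduction factor of 8, and the learnable residual weight gamma are assumptions, since the embodiment only requires that attention weights computed from the fused features be used to re-weight those features. Note that the dense HW x HW attention is memory-hungry at large feature map sizes; a real implementation might downsample before attending.

```python
import torch
import torch.nn as nn

class FusionSelfAttention(nn.Module):
    """Per-level self-attention over a fused feature map (assumed form)."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight (assumption)

    def forward(self, fused: torch.Tensor) -> torch.Tensor:
        b, c, h, w = fused.shape
        q = self.query(fused).flatten(2).transpose(1, 2)   # B x HW x C/8
        k = self.key(fused).flatten(2)                      # B x C/8 x HW
        attn = torch.softmax(q @ k, dim=-1)                 # attention weights, B x HW x HW
        v = self.value(fused).flatten(2)                    # B x C x HW
        weighted = (v @ attn.transpose(1, 2)).reshape(b, c, h, w)
        return self.gamma * weighted + fused                # adaptively re-weighted fusion feature
```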
In this embodiment, the first-class features and the second-class features of the same level are fused to obtain fused features of each level, and license plate image restoration processing is carried out based on the fusion features of each level to obtain a target license plate image, so that the extracted multi-level fusion features can be fully utilized and the license plate image restoration effect is ensured.
In one embodiment, performing feature extraction processing on the spectrum image to obtain a first type of feature includes: performing multistage feature extraction processing on the frequency spectrum image to respectively obtain multistage first class features; wherein, the input data of the next stage of feature extraction processing is the result of the present stage of feature extraction processing; performing feature extraction processing on the original license plate image to obtain a second class of features, wherein the feature extraction processing comprises the following steps: performing multi-level feature extraction processing on the original license plate image to respectively obtain multi-level second class features; wherein the input data of the next-stage feature extraction process is determined based on the result of the present-stage feature extraction process.
In one embodiment, performing feature extraction processing on the spectrum image to obtain a first type of feature includes: performing multistage feature extraction processing on the frequency spectrum image to respectively obtain multistage first class features; wherein, the input data of the next stage of feature extraction processing is the result of the present stage of feature extraction processing; performing feature extraction processing on the original license plate image to obtain a second class of features, wherein the feature extraction processing comprises the following steps: performing multi-level feature extraction processing on the original license plate image to respectively obtain multi-level second class features; the input data of the next-stage feature extraction processing is a fusion feature obtained by fusing the result of the current-stage feature extraction processing and the first-class feature of the current stage.
It can be understood that the first-level second-class features obtained by the first-level feature extraction process express the spatial domain characteristics of the original license plate image. The second class features of the other stages except the first stage express the spatial domain characteristics and the frequency domain characteristics of the original license plate image.
Illustratively, the computer apparatus may perform multi-level convolution processing on the spectrum image by taking the spectrum image as input data of the first-level convolution processing and taking a result of the present-level convolution processing as input data of the next-level convolution processing, to obtain multi-level first-class features, respectively.
The computer equipment can be used for carrying out multistage feature extraction processing on the original license plate image by taking the original license plate image as input data of first-level residual error learning and taking fusion features obtained by fusion of a result of current-level residual error learning and first-class features of the current-level as input data of next-level residual error learning, so as to respectively obtain multistage second-class features.
It should be noted that the result of the residual learning at this stage is actually the second type of feature at this stage. The input data of the next stage residual learning is actually the fusion feature of the present stage.
In one embodiment, the computer device may perform a nonlinear mapping process on the first class features of each stage to obtain mapped first class features of each stage. For example, the computer device may employ a sigmoid function for the nonlinear mapping process.
In one embodiment, the computer device may perform, for the first class feature of each stage, nonlinear mapping processing on the first class feature of the stage by using a nonlinear mapping module of the stage, to obtain a mapped first class feature of each stage.
In one embodiment, the computer device may splice the second class features of the same level and the mapped first class features to obtain fusion features of each level.
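A minimal sketch of the per-level fusion just described, assuming PyTorch tensors with matching spatial sizes: the first-class (frequency domain) feature is passed through the sigmoid mapping and spliced with the same-level second-class (spatial domain) feature along the channel dimension. Channel-wise concatenation is one plausible reading of "splice"; the embodiment does not fix the exact operator.

```python
import torch

def fuse_level(first_class: torch.Tensor, second_class: torch.Tensor) -> torch.Tensor:
    """Fuse same-level frequency-domain and spatial-domain features (assumed form)."""
    mapped = torch.sigmoid(first_class)                # nonlinear mapping of the first-class feature
    return torch.cat([second_class, mapped], dim=1)    # splice along the channel dimension
```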
In the embodiment, multistage feature extraction processing is performed on the spectrum image to obtain multistage first-class features respectively; and carrying out multistage feature extraction processing on the original license plate image to respectively obtain multistage second class features, and carrying out license plate image restoration processing on the subsequent fusion features of each stage, which are obtained based on the multistage first class features and the multistage second class features, so that the extracted multistage features are fully utilized, and the license plate image restoration effect is ensured.
In one embodiment, performing a multi-level feature extraction process on the spectral image to obtain multi-level first-class features respectively includes: taking the frequency spectrum image as the input of a frequency domain feature extraction layer, sequentially carrying out feature extraction by a plurality of layers of convolution modules of the frequency domain feature extraction layer to obtain first class features output by each layer of convolution modules; the input data of the next-level convolution module is the output result of the previous-level convolution module; performing multistage feature extraction processing on the original license plate image to obtain multistage second class features respectively, wherein the steps comprise: taking an original license plate image as input of a spatial domain feature extraction layer, sequentially carrying out feature extraction through multi-level residual modules of the spatial domain feature extraction layer to obtain second class features output by each level residual module; the input data of the residual error module of the next level is a fusion characteristic obtained by fusing the second type characteristic output by the residual error module of the level and the first type characteristic output by the convolution module of the level.
The computer device may take the spectrum image as the input of the frequency domain feature extraction layer, perform first-level convolution processing on the spectrum image through the first-level convolution module, and continuously take the output result of the current-level convolution module as the input data of the next-level convolution module, to obtain the first-class feature output by each level convolution module.
The computer device may take the original license plate image as the input of the spatial domain feature extraction layer, perform first-level residual learning on the original license plate image through the first-level residual module, and continuously take the fusion feature obtained by fusing the second-class feature output by the current-level residual module and the first-class feature of the current level as the input data of the next-level residual module, to obtain the second-class feature output by each level residual module.
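The two feature extraction layers described above could be sketched as follows. The data flow (per-level convolution modules in the frequency branch, per-level residual modules in the spatial branch, same-level fusion feeding the next residual module) follows the embodiment, while the stride-2 downsampling, kernel sizes, channel widths, and the exact residual block are assumptions introduced for illustration only.

```python
import torch
import torch.nn as nn

class ResidualModule(nn.Module):
    """One level of the spatial domain feature extraction layer (assumed stride-2 residual block)."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
        )
        self.skip = nn.Conv2d(in_ch, out_ch, 1, stride=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.body(x) + self.skip(x))     # residual learning


class DualBranchEncoder(nn.Module):
    """Frequency domain feature extraction layer plus spatial domain feature extraction layer."""

    def __init__(self, levels: int = 2, base_ch: int = 32):
        super().__init__()
        chans = [base_ch * 2 ** i for i in range(levels)]  # e.g. 32, 64 for two levels
        self.conv_modules = nn.ModuleList()
        self.res_modules = nn.ModuleList()
        freq_in, spat_in = 1, 3                            # spectrum image is single channel, plate is RGB
        for ch in chans:
            self.conv_modules.append(nn.Sequential(
                nn.Conv2d(freq_in, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True)))
            self.res_modules.append(ResidualModule(spat_in, ch))
            # the next residual module receives the fused feature, whose channel count
            # is doubled by the concatenation used for same-level fusion
            freq_in, spat_in = ch, 2 * ch

    def forward(self, spectrum: torch.Tensor, plate: torch.Tensor) -> list:
        fused_per_level = []
        freq, spat = spectrum, plate
        for conv_m, res_m in zip(self.conv_modules, self.res_modules):
            freq = conv_m(freq)                                     # first-class feature of this level
            second = res_m(spat)                                    # second-class feature of this level
            fused = torch.cat([second, torch.sigmoid(freq)], dim=1)  # same-level fusion (see sketch above)
            fused_per_level.append(fused)
            spat = fused                                            # fused feature feeds the next residual module
        return fused_per_level
```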
In one embodiment, the first type of features and the second type of features are obtained by downsampling in the frequency domain feature extraction layer and the spatial domain feature extraction layer; the convolution modules, nonlinear mapping modules, residual modules, self-attention modules and deconvolution modules are consistent in number; and the number of modules of each kind is matched to the downsampling magnification corresponding to the original license plate image.
In one embodiment, in the license plate image restoration model, the input image is downsampled to a preset size through the frequency domain feature extraction layer and the spatial domain feature extraction layer, and then upsampled to the size of the input image through the multi-level deconvolution module. The computer equipment can determine the downsampling multiplying power corresponding to the original license plate image by comparing the size of the original license plate image with a preset size. The computer device may determine a plurality of levels that match the downsampling magnification. It will be appreciated that each level is downsampled to a preset magnification, i.e., a preset magnification is used to indicate the downsampled magnification of each level compared to the previous level. For example, if the downsampling magnification is 4 and the preset magnification is 2, the original license plate image is subjected to image restoration processing through 2 levels in the license plate restoration model.
In one embodiment, the computer device obtains the downsampling magnification corresponding to the original license plate image by calculating the ratio of the size of the original license plate image to the preset size.
In one embodiment, the method comprises: determining the downsampling magnification corresponding to the image to be processed according to the size of the image to be processed and a preset size, the preset size being used for indicating the size to which the image to be processed is to be downsampled; and determining the numbers of convolution modules, residual modules and deconvolution modules based on a preset magnification and the downsampling magnification corresponding to the image to be processed, the preset magnification being used for indicating the downsampling factor of each residual module compared with the previous residual module, of each convolution module compared with the previous convolution module, and of each deconvolution module compared with the previous deconvolution module.
In one embodiment, the computer device may fine-tune the size of the original license plate image by scaling, so that the size of the original license plate image is an integer power of the preset magnification.
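As a hedged illustration of the level-count and fine-tuning logic above: with the FIG. 3 example in mind, an assumed preset (bottleneck) size of 64 and a per-level preset magnification of 2 give a downsampling magnification of 4 and hence two levels for a 256-pixel input. The resizing policy (square output, rounding of the level count) is an assumption, not part of the disclosure.

```python
import math
import cv2
import numpy as np

def plan_levels(image: np.ndarray, preset_size: int = 64, preset_magnification: int = 2):
    """Fine-tune the image size and derive the number of levels (assumed policy).

    preset_size is the target size after downsampling; the number of levels is the
    logarithm (base preset_magnification) of the overall downsampling magnification.
    """
    h, w = image.shape[:2]
    down_magnification = max(h, w) / preset_size
    levels = max(1, round(math.log(down_magnification, preset_magnification)))
    # fine-tune so the input size is preset_size times an integer power of the preset magnification
    target = preset_size * preset_magnification ** levels
    resized = cv2.resize(image, (target, target))
    return resized, levels

# Example matching FIG. 3: a 256-pixel input yields a downsampling magnification of 4 and 2 levels.
```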
In one embodiment, a simplified flow diagram of an image processing method is provided as shown in FIG. 2. The computer device can acquire the original license plate image and fine-tune its size to obtain a fine-tuned original license plate image. The size of the fine-tuned original license plate image is an integer power of the preset magnification, so that it fits the structure of the license plate image restoration model. The computer device can convert the fine-tuned original license plate image into a grayscale image, and then perform a fast Fourier transform on the grayscale image to obtain a spectrum image. The computer device may use the spectrum image as the input of the frequency domain feature extraction layer to obtain the multi-level first-class features output by the convolution modules of each level.
The computer device can use the fine-tuned original license plate image as the input of the spatial domain feature extraction layer, and fuse the first-class features and the second-class features of the same level to obtain the fused features of each level. License plate image restoration processing is then carried out based on the fusion features of each level to obtain a target license plate image.
In this embodiment, a spectrum image is used as the input of the frequency domain feature extraction layer, and feature extraction is sequentially performed by the multi-level convolution modules of the frequency domain feature extraction layer to obtain the first-class features output by each level convolution module; the original license plate image is used as the input of the spatial domain feature extraction layer, and feature extraction is sequentially performed through the multi-level residual modules of the spatial domain feature extraction layer to obtain the second-class features output by each level residual module, so that license plate image restoration processing can be carried out based on the fusion features of each level obtained from the multi-level first-class features and the multi-level second-class features, the extracted multi-level features are fully utilized, and the license plate image restoration effect is ensured.
In one embodiment, performing license plate image restoration processing based on the fusion features of each level to obtain a target license plate image includes: performing multi-level deconvolution processing on the fusion characteristics of each level through a multi-level deconvolution module to obtain a target license plate image; the input data of the deconvolution module of the last level is the fusion characteristic of the last level; the input data of the deconvolution module of the previous level is obtained by fusing the output result of the deconvolution module of the present level and the fusion characteristic of the previous level.
For example, the computer device may perform deconvolution processing on the fusion feature of the last level through the deconvolution module of the last level, and may successively obtain the input data of the deconvolution module of the previous level by concatenating the output result of the deconvolution module of the current level with the fusion feature of the previous level, so as to obtain the up-sampling result output by the first-level deconvolution module. The computer device may convolve the up-sampling result to obtain the target license plate image.
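The deconvolution chain described above could be sketched as follows, with `ConvTranspose2d` standing in for each deconvolution module, concatenation standing in for the fusion with the previous level's fused feature, and a final convolution producing the 3-channel output; the kernel size, stride, and channel width are assumptions.

```python
import torch
import torch.nn as nn

class MultiLevelDecoder(nn.Module):
    """Multi-level deconvolution: deepest fused feature in, restored license plate image out."""

    def __init__(self, fused_channels, width: int = 64):
        super().__init__()
        chans = list(fused_channels)                       # ordered shallow -> deep, e.g. [64, 128]
        # deconvolution module of the last (deepest) level
        self.deepest = nn.ConvTranspose2d(chans[-1], width, 4, stride=2, padding=1)
        # deconvolution modules of the shallower levels, each fed the previous output
        # concatenated with that level's fused feature
        self.upper = nn.ModuleList([
            nn.ConvTranspose2d(width + c, width, 4, stride=2, padding=1)
            for c in reversed(chans[:-1])
        ])
        self.output = nn.Conv2d(width, 3, kernel_size=3, padding=1)   # output (convolution) module

    def forward(self, fused_per_level) -> torch.Tensor:
        # fused_per_level is ordered shallow -> deep and is assumed already attention-weighted
        x = self.deepest(fused_per_level[-1])
        for deconv, skip in zip(self.upper, reversed(fused_per_level[:-1])):
            x = deconv(torch.cat([x, skip], dim=1))        # splice with the shallower fused feature
        return self.output(x)                              # target license plate image
```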
In one embodiment, a schematic structural diagram of an image restoration model is provided as shown in FIG. 3. The size of the original license plate image is 256×256×3, the size of the spectrum image is 256×256×1, and the corresponding downsampling magnification is 4. The preset magnification of each level is 2. The spectrum image is used as the input of the first-level convolution module, and the first-class feature output by the first-level convolution module has a size of 128×128×32. The output of the convolution module of the last level has a size of 64×64. The first-level nonlinear mapping module outputs the mapped first-class features of the first level, and the nonlinear mapping module of the last level outputs the mapped first-class features of the last level.
The original license plate image is used as the input of the residual module of the first level, and the second-class feature output by the residual module of the first level has a size of 128×128×32. The mapped first-class features and the second-class features of the same level are spliced to obtain the fusion features of each level. The first-level fusion feature is taken as the input of the residual module of the last level, and the output of the residual module of the last level has a size of 64×64.
The fusion features of each level are taken as the input of the self-attention module of that level: the first-level self-attention module outputs the weighted fusion features of the first level, and the self-attention module of the last level outputs the weighted fusion features of the last level.
The weighted fusion feature of the last level is taken as the input of the deconvolution module of the last level, and the deconvolution result output by the deconvolution module of the last level has a size of 128×128×64. The deconvolution result of the last level and the weighted fusion feature of the first level are spliced and input into the deconvolution module of the first level, and the deconvolution result output by the deconvolution module of the first level has a size of 256×256×64. The output module performs convolution processing on the first-level deconvolution result to obtain a target license plate image with a size of 256×256×3. It will be appreciated that the output module is in fact a convolution module.
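Tying the earlier sketches together (DualBranchEncoder, FusionSelfAttention, MultiLevelDecoder) reproduces the FIG. 3 data flow; a 64×64 input is used below only because the dense self-attention in the sketch is memory-hungry at 128×128, and with a 256×256×3 input the same code yields the 256×256×3 output described in this embodiment. The widths are the assumed values chosen above.

```python
import torch

encoder = DualBranchEncoder(levels=2, base_ch=32)
attention = torch.nn.ModuleList([FusionSelfAttention(64), FusionSelfAttention(128)])
decoder = MultiLevelDecoder(fused_channels=[64, 128], width=64)

plate = torch.randn(1, 3, 64, 64)       # original license plate image (reduced size for the check)
spectrum = torch.randn(1, 1, 64, 64)    # corresponding spectrum image

fused = encoder(spectrum, plate)                           # [1x64x32x32, 1x128x16x16]
weighted = [attn(f) for attn, f in zip(attention, fused)]  # per-level attention weighting
restored = decoder(weighted)
print(restored.shape)                                      # torch.Size([1, 3, 64, 64])
```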
In the embodiment, the multi-level deconvolution module is used for carrying out multi-level deconvolution processing on the fusion characteristics of each level to obtain the target license plate image, the extracted multi-level characteristics are fully utilized, and the license plate image restoration effect can be ensured.
In one embodiment, the target license plate image is an output obtained by taking the original license plate image as an input of a license plate image restoration model; the method further comprises a training step of license plate image restoration model; the training step of the license plate image restoration model comprises the following steps: determining a sample image pair; the sample image pair comprises a sample image and a label image; the definition of the label image is higher than that of the sample image; determining a sample image in the sample image pair as input of a model to be trained, and performing iterative training on the model to be trained based on the difference between an image output by the model to be trained and a label image in the sample image pair to obtain a trained license plate image restoration model.
For example, the computer device may determine, during each iteration of the training process, a sample image in the sample image pair as an input to the model to be trained, and tune the model to be trained in a direction in which a difference between an image output by the model to be trained and a label image in the sample image pair becomes smaller. And obtaining a license plate image restoration model after the training is finished after the iterative training.
In one embodiment, the computer device may determine deconvolution results output by each level deconvolution module during each round of iterative training. And carrying out convolution processing on the deconvolution results of all levels to obtain image restoration results of all levels. It can be understood that the first-level image restoration result is an image output by the model to be trained. The computer device may perform interpolation processing on the label image in the sample image pair according to the size of the image restoration result of each stage, to obtain an interpolated image of each stage. The interpolation image of the same level is consistent with the size of the image restoration result. The computer equipment can carry out parameter adjustment on the model to be trained in the direction of reducing the difference between the interpolation images and the image restoration results of all levels.
In one embodiment, the computer device may stop the iterative training to obtain the license plate image restoration model when the mean squared error between the interpolated image and the image restoration result of each level reaches a preset value.
In one embodiment, the computer device may take the image output by the model to be trained and the label image in the sample image pair as inputs to the model loss function to obtain the model loss value output by the model loss function. And under the condition that the model loss value reaches a preset value, stopping iterative training of the model to be trained, and obtaining the trained license plate image restoration model.
In one embodiment, the model loss value may be the mean squared error between the image output by the model to be trained and the label image in the sample image pair. It is understood that the model loss function may be an L2 loss function.
In one embodiment, the computer device may use the L2 loss function to measure feature differences and gradient differences between the image output by the model to be trained and the label image in the sample image pair.
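A hedged sketch of one training iteration under the multi-scale supervision variant described above: the model is assumed to return the per-level image restoration results, and the label image is interpolated to each level's size before the L2 loss is accumulated. The bilinear interpolation mode and the equal weighting of levels are assumptions.

```python
import torch
import torch.nn.functional as F

def training_step(model, optimizer, sample: torch.Tensor, label: torch.Tensor) -> float:
    """One iteration of license plate restoration training (illustrative sketch).

    `model` is assumed to return a list of per-level restored images; the
    interpolation mode and per-level weighting are assumptions.
    """
    optimizer.zero_grad()
    restorations = model(sample)                           # per-level image restoration results
    loss = sample.new_zeros(())
    for restored in restorations:
        target = F.interpolate(label, size=restored.shape[-2:], mode="bilinear",
                               align_corners=False)        # label interpolated to this level's size
        loss = loss + F.mse_loss(restored, target)         # L2 (mean squared error) loss
    loss.backward()
    optimizer.step()
    return float(loss.detach())
```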
In the embodiment, the sample image is determined as the input of the model to be trained, the model to be trained is iteratively trained based on the difference between the image output by the model to be trained and the label image in the sample image pair, so as to obtain the trained license plate image restoration model, and then license plate image restoration processing is carried out through the license plate image restoration model, so that the method can adapt to original license plate images with different fuzzy degrees, and improves the adaptability.
In one embodiment, determining the sample image pair includes at least one of: and acquiring a plurality of frames of vehicle moving images, performing superposition processing on the plurality of frames of vehicle moving images, then performing transformation processing to obtain sample images, and determining a tag image from the plurality of frames of vehicle moving images to determine a sample image pair comprising the sample images and the tag images. And acquiring an out-of-focus vehicle image and a focused vehicle image, performing transformation processing on the out-of-focus vehicle image to obtain a sample image, and determining the focused vehicle image as a label image to determine a sample image pair comprising the sample image and the label image.
For example, the computer device may acquire a plurality of frames of vehicle moving images in a low-speed scene, superimpose the frames one by one to obtain a superimposed vehicle moving image, and then transform the superimposed image to obtain a sample image. It is understood that the sample image is a motion-blurred image. The computer device may select the label image from the multi-frame vehicle moving images to determine a sample image pair comprising the sample image and the label image.
The computer equipment can acquire an out-of-focus vehicle image and a focused vehicle image in the same scene, and the out-of-focus vehicle image is transformed to obtain a sample image. It will be appreciated that the sample image is an out-of-focus blurred image. The computer device may determine the focused vehicle image as a label image to determine a sample image pair comprising the sample image and the label image.
The transformation processing may include at least one of randomly superimposed noise, random changes to saturation and brightness, or randomly superimposed blur. It can be appreciated that this data augmentation, i.e., applying various random transformations to the image data, enlarges the sample space.
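An illustrative sketch of building one motion-blur sample pair as described above: frame averaging stands in for the superposition, and Gaussian noise, saturation/brightness jitter, and Gaussian blur stand in for the random transformations; all of these concrete choices, including picking the middle frame as the label, are assumptions.

```python
import random
import cv2
import numpy as np

def make_motion_blur_pair(frames: list) -> tuple:
    """Build one (sample, label) pair from consecutive vehicle motion frames (assumed recipe)."""
    label = frames[len(frames) // 2]                       # pick one frame as the sharp label (assumption)
    sample = np.mean(np.stack(frames, axis=0), axis=0)     # superimpose frames -> motion blur

    # random transformation processing, each step applied with some probability
    if random.random() < 0.5:                              # randomly superimposed noise
        sample = sample + np.random.normal(0.0, 5.0, sample.shape)
    if random.random() < 0.5:                              # random saturation / brightness change
        hsv = cv2.cvtColor(sample.clip(0, 255).astype(np.uint8), cv2.COLOR_BGR2HSV).astype(np.float32)
        hsv[..., 1:] *= random.uniform(0.7, 1.3)
        sample = cv2.cvtColor(hsv.clip(0, 255).astype(np.uint8), cv2.COLOR_HSV2BGR).astype(np.float32)
    if random.random() < 0.5:                              # randomly superimposed blur
        sample = cv2.GaussianBlur(sample, (5, 5), 0)
    return sample.clip(0, 255).astype(np.uint8), label
```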
In one embodiment, the computer device may obtain 10000 sets of sample image pairs, wherein 5000 sets include defocus blur images and 5000 sets include motion blur images.
In one embodiment, the computer device may perform at least one of random rotation or random cropping on the sample image and the label image in the sample image pair to obtain a sample image pair that is ultimately used to train the license plate image restoration model. For example, the image size of 128x128x3, 256x256x3, 512x512x3, or the like can be obtained by random cropping.
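A small assumed sketch of the paired random rotation and random cropping step, applying identical geometry to the sample and the label so the pair stays aligned; restricting rotation to multiples of 90 degrees and cropping a square window are assumptions, since only "random rotation" and "random cropping" are named.

```python
import random
import numpy as np

def random_rotate_crop_pair(sample: np.ndarray, label: np.ndarray, size: int = 256):
    """Apply the same random crop and random 90-degree rotation to sample and label."""
    h, w = sample.shape[:2]
    top = random.randint(0, max(0, h - size))
    left = random.randint(0, max(0, w - size))
    window = (slice(top, top + size), slice(left, left + size))
    k = random.randint(0, 3)                               # rotation by a random multiple of 90 degrees
    return np.rot90(sample[window], k).copy(), np.rot90(label[window], k).copy()
```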
In this embodiment, a plurality of frames of vehicle moving images are acquired, superimposed and then transformed to obtain a sample image, and a tag image is determined from the plurality of frames of vehicle moving images to determine a sample image pair including the sample image and the tag image. The method comprises the steps of obtaining an out-of-focus vehicle image and a focused vehicle image, carrying out transformation processing on the out-of-focus vehicle image to obtain a sample image, determining the focused vehicle image as a label image to determine a sample image pair comprising the sample image and the label image, and simulating a real fuzzy license plate image to the greatest extent to enable a trained license plate image restoration model to be more robust in practical application.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially, but may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiments of the present application also provide an image processing apparatus for implementing the above-mentioned image processing method. The implementation of the solution provided by the apparatus is similar to the implementation described in the above method, so the specific limitation of one or more embodiments of the image processing apparatus provided below may refer to the limitation of the image processing method hereinabove, and will not be repeated herein.
In one embodiment, as shown in fig. 4, there is provided an image processing apparatus 400 including: an acquisition unit 402, an extraction unit 404, and a reduction unit 406, wherein:
an obtaining unit 402, configured to obtain a spectrum image corresponding to the original license plate image to be processed.
An extracting unit 404, configured to perform feature extraction processing on the spectrum image to obtain a first type of feature; the first type of features are feature information for expressing frequency domain features; performing feature extraction processing on the original license plate image to obtain second class features; the second class of features comprise feature information for expressing the features of the spatial domain; and fusing the first type of features and the second type of features to obtain fused features.
A restoring unit 406, configured to perform license plate image restoration processing based on the fusion feature, to obtain a target license plate image; the sharpness of the target license plate image is higher than that of the original license plate image.
In one embodiment, the first type of feature is a multi-level first type of feature obtained by multi-level feature extraction; the second type of features are multi-stage second type of features obtained through multi-stage feature extraction; an extracting unit 404, configured to fuse the first class feature and the second class feature of the first class to obtain fused features of each class; and the restoring unit 406 is configured to perform license plate image restoration processing based on the fusion features of each level, so as to obtain a target license plate image.
In one embodiment, the extracting unit 404 is configured to perform a multi-level feature extraction process on the spectrum image, so as to obtain multi-level first class features respectively; wherein, the input data of the next stage of feature extraction processing is the result of the present stage of feature extraction processing; performing multi-level feature extraction processing on the original license plate image to respectively obtain multi-level second class features; the input data of the next-stage feature extraction processing is a fusion feature obtained by fusing the result of the current-stage feature extraction processing and the first-class feature of the current stage.
In one embodiment, the extracting unit 404 is configured to take the spectrum image as an input of the frequency domain feature extraction layer, and sequentially perform feature extraction through multiple levels of convolution modules of the frequency domain feature extraction layer to obtain a first type of feature output by each level of convolution module; the input data of the next-level convolution module is the output result of the previous-level convolution module; taking an original license plate image as input of a spatial domain feature extraction layer, sequentially carrying out feature extraction through multi-level residual modules of the spatial domain feature extraction layer to obtain second class features output by each level residual module; the input data of the residual error module of the next level is a fusion characteristic obtained by fusing the second type characteristic output by the residual error module of the level and the first type characteristic output by the convolution module of the level.
In one embodiment, the restoring unit 406 is configured to perform a multi-level deconvolution process on the fusion features of each level through a multi-level deconvolution module to obtain a target license plate image; the input data of the deconvolution module of the last level is the fusion characteristic of the last level; the input data of the deconvolution module of the previous level is obtained by fusing the output result of the deconvolution module of the present level and the fusion characteristic of the previous level.
In one embodiment, the target license plate image is an output obtained by taking the original license plate image as an input of a license plate image restoration model; an acquisition unit 402 for determining a sample image pair; the sample image pair comprises a sample image and a label image; the definition of the label image is higher than that of the sample image; determining a sample image in the sample image pair as input of a model to be trained, and performing iterative training on the model to be trained based on the difference between an image output by the model to be trained and a label image in the sample image pair to obtain a trained license plate image restoration model.
In one embodiment, the obtaining unit 402 is configured to obtain a plurality of frames of vehicle moving images, perform a superposition process on the plurality of frames of vehicle moving images, and then perform a transformation process to obtain a sample image, and determine a tag image from the plurality of frames of vehicle moving images, so as to determine a sample image pair including the sample image and the tag image. The acquiring unit 402 is further configured to acquire an out-of-focus vehicle image and a focused vehicle image, perform a transformation process on the out-of-focus vehicle image to obtain a sample image, and determine the focused vehicle image as a label image to determine a sample image pair including the sample image and the label image.
Each of the units in the image processing apparatus described above may be implemented in whole or in part by software, hardware, and combinations thereof. The units can be embedded in hardware or independent of a processor in the computer equipment, and can also be stored in a memory in the computer equipment in a software mode, so that the processor can call and execute the operations corresponding to the units.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 5. The computer device includes a processor, a memory, an Input/Output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing license plate image restoration models. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an image processing method.
In one embodiment, a computer device is provided, which may be a terminal whose internal structure may be as shown in fig. 6. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface, the display unit, and the input device are connected to the system bus through the input/output interface. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program stored in the non-volatile storage medium. The input/output interface of the computer device is used to exchange information between the processor and external devices. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless mode may be implemented through WIFI, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements an image processing method. The display unit of the computer device is used to present visual output and may be a display screen, a projection device, or a virtual reality imaging device; the display screen may be a liquid crystal display or an electronic ink display. The input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad, or mouse.
It will be appreciated by those skilled in the art that the structures shown in fig. 5 and fig. 6 are merely block diagrams of the portions of the structures relevant to the present application and do not limit the computer devices to which the present application may be applied; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
Those skilled in the art will appreciate that all or part of the methods described above may be implemented by a computer program stored on a non-transitory computer-readable storage medium; when executed, the program may carry out the steps of the method embodiments described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, or data processing logic devices based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, any combination of them that contains no contradiction should be considered within the scope of this description.
The above embodiments represent only a few implementations of the present application; although they are described in relative detail, they are not to be construed as limiting the scope of the application. It should be noted that those skilled in the art can make various modifications and improvements without departing from the spirit of the present application, and such modifications and improvements fall within the scope of protection of the application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (10)

1. An image processing method, the method comprising:
acquiring a spectrum image corresponding to an original license plate image to be processed;
performing feature extraction processing on the spectrum image to obtain a first type of features; the first type of features are feature information expressing frequency-domain characteristics;
performing feature extraction processing on the original license plate image to obtain a second type of features; the second type of features comprise feature information expressing spatial-domain characteristics;
fusing the first type of features and the second type of features to obtain fused features;
carrying out license plate image restoration processing based on the fused features to obtain a target license plate image; the sharpness of the target license plate image is higher than that of the original license plate image.
2. The method of claim 1, wherein the first type of features are multi-level first type of features obtained through multi-level feature extraction, and the second type of features are multi-level second type of features obtained through multi-level feature extraction;
the fusing the first type of features and the second type of features to obtain fused features comprises:
fusing the first type of features and the second type of features of each level to obtain fused features of each level;
the carrying out license plate image restoration processing based on the fused features to obtain a target license plate image comprises:
carrying out license plate image restoration processing based on the fused features of each level to obtain the target license plate image.
3. The method of claim 2, wherein the performing feature extraction processing on the spectrum image to obtain the first type of features comprises:
performing multi-level feature extraction processing on the spectrum image to obtain the multi-level first type of features, wherein the input data of the next level of feature extraction processing is the result of the current level of feature extraction processing;
the performing feature extraction processing on the original license plate image to obtain the second type of features comprises:
performing multi-level feature extraction processing on the original license plate image to obtain the multi-level second type of features, wherein the input data of the next level of feature extraction processing is a fused feature obtained by fusing the result of the current level of feature extraction processing with the first type of features of the same level.
4. The method of claim 3, wherein the performing multi-level feature extraction processing on the spectrum image to obtain the multi-level first type of features comprises:
taking the spectrum image as the input of a frequency-domain feature extraction layer, and sequentially performing feature extraction through multi-level convolution modules of the frequency-domain feature extraction layer to obtain the first type of features output by the convolution module of each level, wherein the input data of the convolution module of the next level is the output result of the convolution module of the previous level;
the performing multi-level feature extraction processing on the original license plate image to obtain the multi-level second type of features comprises:
taking the original license plate image as the input of a spatial-domain feature extraction layer, and sequentially performing feature extraction through multi-level residual modules of the spatial-domain feature extraction layer to obtain the second type of features output by the residual module of each level, wherein the input data of the residual module of the next level is a fused feature obtained by fusing the second type of features output by the residual module of the current level with the first type of features output by the convolution module of the same level.
5. The method of claim 2, wherein the carrying out license plate image restoration processing based on the fused features of each level to obtain the target license plate image comprises:
performing multi-level deconvolution processing on the fused features of each level through multi-level deconvolution modules to obtain the target license plate image, wherein the input data of the deconvolution module at the last level is the fused feature of the last level, and the input data of the deconvolution module at each previous level is obtained by fusing the output result of the deconvolution module at the current level with the fused feature of that previous level.
6. The method according to any one of claims 1 to 5, wherein the target license plate image is an output obtained by taking the original license plate image as an input of a license plate image restoration model; the method further comprises a step of training the license plate image restoration model, the training step comprising:
determining a sample image pair; the sample image pair comprises a sample image and a label image; the sharpness of the label image is higher than that of the sample image;
determining a sample image in the sample image pair as input of a model to be trained, and performing iterative training on the model to be trained based on the difference between an image output by the model to be trained and a label image in the sample image pair to obtain a trained license plate image restoration model.
7. The method of claim 6, wherein the determining the sample image pair comprises at least one of:
acquiring a plurality of frames of vehicle motion images, performing superposition processing on the plurality of frames and then performing transformation processing to obtain a sample image, and determining a label image from the plurality of frames, so as to determine a sample image pair comprising the sample image and the label image;
and acquiring an out-of-focus vehicle image and a focused vehicle image, performing transformation processing on the out-of-focus vehicle image to obtain a sample image, and determining the focused vehicle image as a label image to determine a sample image pair comprising the sample image and the label image.
8. An image processing apparatus, characterized in that the apparatus comprises:
the acquisition unit is used for acquiring a spectrum image corresponding to the original license plate image to be processed;
the extraction unit is used for performing feature extraction processing on the spectrum image to obtain a first type of features, the first type of features being feature information expressing frequency-domain characteristics; performing feature extraction processing on the original license plate image to obtain a second type of features, the second type of features comprising feature information expressing spatial-domain characteristics; and fusing the first type of features and the second type of features to obtain fused features;
the restoring unit is used for carrying out license plate image restoration processing based on the fused features to obtain a target license plate image; the sharpness of the target license plate image is higher than that of the original license plate image.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
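To visualise the dual-branch structure recited in claims 1 to 4, the following sketch builds the spectrum image as the log-amplitude of a 2-D FFT and runs a three-level frequency-domain convolution branch in parallel with a spatial-domain residual branch, fusing the two branches at every level by element-wise summation. The spectrum definition, level count, block designs and fusion operator are assumptions made for illustration only; the claims do not fix any of them.

# Illustrative PyTorch sketch of the dual-branch, multi-level feature extraction.
import torch
import torch.nn as nn

def spectrum_image(plate: torch.Tensor) -> torch.Tensor:
    # Assumed spectrum image: log-amplitude of the 2-D FFT of the license plate image.
    return torch.log1p(torch.abs(torch.fft.fft2(plate)))

class ResidualBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1))
        self.skip = nn.Conv2d(c_in, c_out, 1, stride=2)  # match shape for the residual connection

    def forward(self, x):
        return torch.relu(self.body(x) + self.skip(x))

class DualBranchEncoder(nn.Module):
    def __init__(self, channels=(3, 64, 128, 256)):
        super().__init__()
        # Frequency-domain branch: one convolution module per level (first type of features).
        self.freq = nn.ModuleList(
            nn.Sequential(nn.Conv2d(ci, co, 3, stride=2, padding=1), nn.ReLU(inplace=True))
            for ci, co in zip(channels, channels[1:]))
        # Spatial-domain branch: one residual module per level (second type of features).
        self.spat = nn.ModuleList(
            ResidualBlock(ci, co) for ci, co in zip(channels, channels[1:]))

    def forward(self, plate):
        f, s, fused_per_level = spectrum_image(plate), plate, []
        for conv, res in zip(self.freq, self.spat):
            f = conv(f)          # this level's frequency-domain features
            s = res(s)           # this level's spatial-domain features
            fused = f + s        # per-level fusion (element-wise sum assumed)
            fused_per_level.append(fused)
            s = fused            # the fused feature feeds the next residual module
        return fused_per_level

Given a (1, 3, 128, 512) license plate tensor, this encoder returns three fused feature maps of 64, 128 and 256 channels at successively halved resolutions, which is the kind of multi-level fused input that the deconvolution sketch given earlier in the description consumes.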
CN202211638445.3A 2022-12-19 2022-12-19 Image processing method, device, computer equipment and storage medium Pending CN116071279A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211638445.3A CN116071279A (en) 2022-12-19 2022-12-19 Image processing method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211638445.3A CN116071279A (en) 2022-12-19 2022-12-19 Image processing method, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116071279A (en) 2023-05-05

Family

ID=86175976

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211638445.3A Pending CN116071279A (en) 2022-12-19 2022-12-19 Image processing method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116071279A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116935278A (en) * 2023-07-25 2023-10-24 广东技术师范大学 Vehicle type recognition method and device based on synchronous signals, electronic equipment and medium
CN116935278B (en) * 2023-07-25 2024-02-13 广东技术师范大学 Vehicle type recognition method and device based on synchronous signals, electronic equipment and medium
CN118172286A (en) * 2024-05-14 2024-06-11 西交利物浦大学 License plate image deblurring method, model training method, device, equipment and medium

Similar Documents

Publication Publication Date Title
CN114757832B (en) Face super-resolution method and device based on cross convolution attention pair learning
CN111275626A (en) Video deblurring method, device and equipment based on ambiguity
JP2013518336A (en) Method and system for generating an output image with increased pixel resolution from an input image
Zeng et al. A generalized DAMRF image modeling for superresolution of license plates
CN116071279A (en) Image processing method, device, computer equipment and storage medium
Xu et al. Exploiting raw images for real-scene super-resolution
CN111932480A (en) Deblurred video recovery method and device, terminal equipment and storage medium
CN115496654A (en) Image super-resolution reconstruction method, device and medium based on self-attention mechanism
WO2022100490A1 (en) Methods and systems for deblurring blurry images
CN113284059A (en) Model training method, image enhancement method, device, electronic device and medium
CN113674187A (en) Image reconstruction method, system, terminal device and storage medium
CN110782398B (en) Image processing method, generative countermeasure network system and electronic device
Parvaz Point spread function estimation for blind image deblurring problems based on framelet transform
CN118608521A (en) Defect detection method, device, computer equipment and computer readable storage medium
Jia et al. Learning rich information for quad bayer remosaicing and denoising
CN118747728A (en) Image deblurring method, device, computer equipment and readable storage medium
CN118537226A (en) Super-resolution image reconstruction method, apparatus, computer device, readable storage medium and program product
CN116912148B (en) Image enhancement method, device, computer equipment and computer readable storage medium
CN111724292B (en) Image processing method, device, equipment and computer readable medium
CN118379200A (en) Image filtering processing method, device, electronic equipment and storage medium
Bricman et al. CocoNet: A deep neural network for mapping pixel coordinates to color values
CN114897732B (en) Image defogging method and device based on association of physical model and feature density
CN116229130A (en) Type identification method and device for blurred image, computer equipment and storage medium
CN115880144A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
Lee et al. Edge profile super resolution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination