CN110163194B - Image processing method, device and storage medium - Google Patents
- Publication number: CN110163194B
- Application number: CN201910378956.8A
- Authority: CN (China)
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
Abstract
The embodiment of the invention discloses an image processing method, an image processing device and a storage medium. The method acquires an image to be processed and performs target image detection on it. When the image to be processed is detected to contain a target image, a first generator in a generative adversarial network is obtained; the generative adversarial network is trained on sample images, and the first generator comprises a convolution sub-network and a deconvolution sub-network connected to it. The image to be processed is then convolved by the convolution sub-network to obtain the non-target image features of the image to be processed, and the deconvolution sub-network deconvolves these features to obtain an image that does not contain the target image. The scheme can thus remove the target image from the image to be processed: the first generator of the generative adversarial network extracts the non-target image features of the image to be processed and then generates, from those features, a processed image that does not contain the target image.
Description
Technical Field
The present invention relates to the field of computer vision, and in particular, to an image processing method, an image processing device, and a storage medium.
Background
Optical character recognition (OCR) technology provides text detection and recognition in many scenarios. In some image recognition tasks, interfering images are common. In the bill recognition task, for example, key fields are often covered by seals, which greatly disturbs subsequent detection and recognition: characters inside the seal may be added to the recognition result, or the overlap between the seal and the characters to be recognized may cause recognition errors.
In fields such as character recognition in images, it is therefore important to remove interfering images from the image to be recognized.
Disclosure of Invention
The embodiment of the invention provides an image processing method, an image processing device and a storage medium, which can remove a target image from an image to be recognized.
The embodiment of the invention provides an image processing method, which comprises the following steps:
acquiring an image to be processed;
performing target image detection on the image to be processed;
when the image to be processed is detected to contain a target image, obtaining a first generator in a generative adversarial network, wherein the generative adversarial network is trained on sample images, the target image is the image to be detected, and the first generator comprises a convolution sub-network and a deconvolution sub-network connected to the convolution sub-network;
performing convolution processing on the image to be processed based on the convolution sub-network to obtain non-target image features of the image to be processed, wherein the non-target image features are the image features corresponding to everything in the image to be processed other than the target image;
and performing deconvolution processing on the non-target image features based on the deconvolution sub-network to obtain an image that does not contain the target image.
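The steps above can be sketched in a few lines of Python. This is an illustrative toy only, not the patent's implementation: all names are stand-ins chosen here, and an "image" is modeled as a list of labeled regions rather than pixel data.

```python
# Illustrative sketch only: these names and the list-of-regions "image"
# model are assumptions for demonstration, not the patent's implementation.

def remove_target_image(image, detect_target, conv_subnet, deconv_subnet):
    """If a target (e.g. a stamp) is detected, strip it via the two generator stages."""
    if not detect_target(image):
        return image                      # no target image: nothing to remove
    features = conv_subnet(image)         # convolution sub-network: non-target features
    return deconv_subnet(features)        # deconvolution sub-network: restored image

# Toy stand-ins: the target is any region labeled "stamp".
detect = lambda img: "stamp" in img
conv = lambda img: [r for r in img if r != "stamp"]  # keep only non-target features
deconv = lambda feats: list(feats)                   # identity "reconstruction"

print(remove_target_image(["text", "stamp", "text"], detect, conv, deconv))
# -> ['text', 'text']
```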
Correspondingly, the embodiment of the invention also provides an image processing device, which comprises:
a first acquisition unit, used for acquiring an image to be processed;
a detection unit, used for performing target image detection on the image to be processed;
a second acquisition unit, used for obtaining a first generator in a generative adversarial network when the image to be processed contains a target image, wherein the generative adversarial network is trained on sample images, the target image is the image to be detected, and the first generator comprises a convolution sub-network and a deconvolution sub-network connected to the convolution sub-network;
a first processing unit, used for performing convolution processing on the image to be processed based on the convolution sub-network to obtain non-target image features of the image to be processed, wherein the non-target image features are the image features corresponding to everything in the image to be processed other than the target image;
and a second processing unit, used for performing deconvolution processing on the non-target image features based on the deconvolution sub-network to obtain an image that does not contain the target image.
Optionally, in some embodiments, the first processing unit is specifically configured to:
extract image features of the image to be processed based on a convolution layer in the convolution sub-network;
and downsample the image features based on a pooling layer in the convolution sub-network to obtain the non-target image features.
Optionally, in some embodiments, the second processing unit is specifically configured to:
deconvolve the non-target image features based on a deconvolution layer in the deconvolution sub-network;
and upsample the deconvolved non-target image features based on an upsampling layer in the deconvolution sub-network to obtain the processed image.
Optionally, in some embodiments, the second processing unit is further specifically configured to:
acquire, at the deconvolution layer, the image features output by the matching convolution layer;
acquire, at the deconvolution layer, the image features output by the previous layer;
and deconvolve, based on the deconvolution layer, the image features output by the convolution layer together with the image features output by the previous layer.
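The wiring just described, in which a deconvolution layer consumes both the previous layer's output and features saved from a matching convolution layer, resembles a skip connection. The following toy sketch is an assumption about that wiring, not the patent's code: feature maps are flat Python lists, and list concatenation stands in for channel-wise concatenation.

```python
# Illustrative sketch only: feature maps are flat Python lists, and list
# concatenation stands in for channel-wise concatenation of feature maps.

def deconv_with_skip(prev_features, skip_features, deconv_layer):
    """Fuse the previous layer's output with the saved conv-layer output, then deconvolve."""
    fused = prev_features + skip_features     # skip connection: concatenate features
    return deconv_layer(fused)

upstream = [1.0]                              # output of the previous (deconv) layer
saved_conv_output = [0.5, 0.25]               # features saved from the matching conv layer
layer = lambda xs: [2 * x for x in xs]        # toy "deconvolution" layer
print(deconv_with_skip(upstream, saved_conv_output, layer))  # -> [2.0, 1.0, 0.5]
```

Feeding convolution-stage features directly to the deconvolution stage is what lets fine spatial detail survive the downsampling path.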
Optionally, in some embodiments, the apparatus further comprises:
a third acquisition unit, configured to acquire the sample images, where the sample images include positive examples and negative examples, a positive example being a sample that contains a target image and a negative example being a sample that does not;
and a training unit, used for alternately training a preset generative adversarial network on the positive and negative examples to obtain the generative adversarial network, wherein the preset generative adversarial network comprises the first generator.
Optionally, in some embodiments, the preset generative adversarial network includes a first preset generative adversarial network and a second preset generative adversarial network, and the training unit is specifically configured to:
train the first preset generative adversarial network on the positive examples to obtain a first generative adversarial network;
update the parameters of a first network module in the first generative adversarial network into the corresponding second network module of the second preset generative adversarial network, wherein the second network module comprises the first generator;
train the second preset generative adversarial network on the negative examples to obtain a second generative adversarial network;
update the parameters of a second network module in the second generative adversarial network into the corresponding first network module of the first preset generative adversarial network, wherein the first network module comprises the first generator;
and determine the generative adversarial network from the first generative adversarial network or the second generative adversarial network.
Optionally, in some embodiments, the first network module further includes a second generator, a first discriminator, and a second discriminator, and the training unit is further specifically configured to:
input a positive example into the first generator of the first preset generative adversarial network to generate a first image that does not contain a target image;
input the first image into the second generator of the first preset generative adversarial network to generate a second image that contains a target image;
determine a first loss value of the first image via the first discriminator of the first preset generative adversarial network, and a second loss value of the second image via the second discriminator of the first preset generative adversarial network;
and adjust the parameters of the first preset generative adversarial network according to the first and second loss values to obtain the first generative adversarial network.
Optionally, in some embodiments, the second network module further includes a second generator, a first discriminator, and a second discriminator, and the training unit is further specifically configured to:
input a negative example into the second generator of the second preset generative adversarial network to generate a third image that contains a target image;
input the third image into the first generator of the second preset generative adversarial network to generate a fourth image that does not contain a target image;
determine a third loss value of the third image via the second discriminator of the second preset generative adversarial network, and a fourth loss value of the fourth image via the first discriminator of the second preset generative adversarial network;
and adjust the parameters of the second preset generative adversarial network according to the third and fourth loss values to obtain the second generative adversarial network.
Optionally, in some embodiments, the apparatus further comprises:
an extraction unit, used for extracting the target image area corresponding to the target image from the image to be processed when the image to be processed is detected to contain the target image;
in which case the first processing unit is specifically configured to:
perform convolution processing on the target image area based on the convolution sub-network to obtain the non-target image features of the image to be processed.
The embodiment of the invention also provides a storage medium, which stores a plurality of instructions, wherein the instructions are suitable for being loaded by a processor to execute the steps in any image processing method provided by the embodiment of the invention.
The present invention also provides a computer program product which, when run on a computer, causes the computer to perform the steps of any of the image processing methods provided by the embodiments of the present invention.
The image processing device acquires an image to be processed and performs target image detection on it. When the image to be processed is detected to contain a target image, a first generator in a generative adversarial network is obtained; the generative adversarial network is trained on sample images, and the first generator comprises a convolution sub-network and a deconvolution sub-network connected to it. The image to be processed is then convolved by the convolution sub-network to obtain the non-target image features of the image to be processed, and the deconvolution sub-network deconvolves these features to obtain an image that does not contain the target image. The scheme can thus remove the target image from the image to be processed: the first generator of the generative adversarial network extracts the non-target image features of the image to be processed and then generates, from those features, a processed image that does not contain the target image.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic view of a scenario of the image processing method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of the image processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the first preset generative adversarial network according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the second preset generative adversarial network according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the first generator according to an embodiment of the present invention;
FIG. 6a is another schematic flow chart of the image processing method according to an embodiment of the present invention;
FIG. 6b is a schematic illustration of an unprocessed bill image provided by an embodiment of the present invention;
FIG. 6c is a schematic illustration of a processed bill image provided by an embodiment of the present invention;
FIG. 6d is a block diagram illustrating the image processing method according to the present invention;
FIG. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 8 is another schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of a network device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention.
The embodiment of the invention provides an image processing method, an image processing device and a storage medium. The image processing device may be integrated in a network device; the network device may be a server, a terminal, or the like, and the terminal may include a mobile phone, a tablet computer, a notebook computer, a personal computer (PC), and so on.
The image processing method provided by the embodiment of the invention can be used to process an image; for example, the target image in the image to be processed can be removed. In some embodiments, the target image in the image to be processed, such as a stamp image, may be removed by a generator in a generative adversarial network.
For example, referring to FIG. 1, a network device acquires an image to be processed and then performs target image detection on it. Taking the image to be processed as a bill image and the target image as a seal image, when the bill image is detected to contain the seal image, the seal image is removed by a generator in a generative adversarial network: the non-seal image features of the bill image are extracted by the convolution sub-network in the generator, and an image corresponding to those non-seal features is then restored by the deconvolution sub-network in the generator, yielding a bill image without the seal image.
The following will describe in detail. The numbers of the following examples are not intended to limit the preferred order of the examples.
In the embodiments of the present invention, description will be made from the viewpoint of an image processing apparatus, and in an embodiment, an image processing method is provided, which may be executed by a processor of a network device, as shown in fig. 2, and a specific flow of the image processing method may be as follows:
201. Acquire an image to be processed.
The image to be processed is an image on which target image detection needs to be performed; when it contains a target image, that target image needs to be removed.
The target image in the embodiment of the invention may be an interfering image in the image to be processed. The interfering image may be a foreground image of the image to be recognized and may interfere with, for example, OCR recognition of the image to be processed.
In some embodiments, when the user needs to remove the target image in the image to be processed, the image to be processed is input into the image processing device, so that the image processing device obtains the image to be processed.
In some embodiments, the image to be processed may be a bill image, and the target image in the image to be processed may be a stamp image, that is, the stamp image in the bill image may be removed in the embodiment of the present invention.
202. Perform target image detection on the image to be processed.
After the image processing device acquires the image to be processed, it performs target image detection on the image to determine whether the image contains a target image.
Specifically, the embodiment of the invention may detect the target image through an image detection method, an attention network, or the like. The specific detection method is not limited here, as long as it can determine whether the image to be processed contains the target image, for example whether a bill image contains a seal image.
203. When the image to be processed is detected to contain the target image, obtain a first generator in the generative adversarial network.
The generative adversarial network is trained on sample images, the target image is the image to be detected, and the first generator comprises a convolution sub-network and a deconvolution sub-network connected to the convolution sub-network.
When the image to be processed is detected to contain the target image, that target image needs to be removed: the first generator in the generative adversarial network is acquired, and the first generator is then used to remove the target image from the image to be processed. The first generator may be preset in the image processing device or acquired from another server or terminal.
The first generator in the generative adversarial network may be included in the image processing apparatus.
In some embodiments, before the image to be processed is acquired, a preset generative adversarial network needs to be trained on the sample images to obtain the generative adversarial network, that is, the network trained on the sample images.
Specifically, training the preset generative adversarial network includes:
(1) Acquiring sample images.
The sample images include positive examples and negative examples; a positive example is a sample that contains the target image, and a negative example is a sample that does not.
(2) Alternately training the preset generative adversarial network on the positive and negative examples to obtain the generative adversarial network, where the preset generative adversarial network includes the first generator.
More specifically, the preset generative adversarial network includes a first preset generative adversarial network and a second preset generative adversarial network, each of which includes the first generator.
Alternately training the preset generative adversarial network on the positive and negative examples to obtain the generative adversarial network then includes:
a. Training the first preset generative adversarial network on the positive examples to obtain a first generative adversarial network.
Since it is difficult to collect a large number of paired images with and without the target, the network is trained by an alternating generation method, and the positive and negative examples in this embodiment may be unpaired.
The structure of the first preset generative adversarial network is shown in FIG. 3. The first network module in the first preset generative adversarial network includes a first generator G_x-y, a first discriminator D_y, a second generator G_y-x, and a second discriminator D_x, where G_x-y generates an image y without the target image from an image x that contains it, G_y-x generates an image x' containing the target image from the image y, D_y determines, based on a loss value, whether the image y is a real image without the target image, and D_x determines, based on a loss value, whether the image x' is a real image with a seal. The discriminators (D_y and D_x) in the embodiment of the invention are typical classification models based on convolutional neural networks, whose main components are convolution layers and a fully connected layer.
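A discriminator of the kind just described, convolution layers followed by a fully connected layer producing a real/fake decision, can be sketched as below. This is an assumption-laden toy: a 1-D valid convolution plus a dot product stand in for the real 2-D convolution and fully connected layers, and the threshold decision stands in for the learned classification.

```python
# Illustrative sketch only: a 1-D valid convolution plus a dot product
# stand in for the real 2-D convolution and fully connected layers.

def conv1d(signal, kernel):
    n = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(n))
            for i in range(len(signal) - n + 1)]

def discriminate(signal, kernel, fc_weights, bias=0.0):
    """Convolution layer, then a fully connected layer, then a real/fake decision."""
    features = conv1d(signal, kernel)                                # convolution layer
    score = sum(f * w for f, w in zip(features, fc_weights)) + bias  # FC layer
    return 1.0 if score > 0 else 0.0                                 # "real" vs "fake"

print(discriminate([1, 0, 1, 0], [1, -1], [1.0, 1.0, 1.0]))  # -> 1.0
```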
Specifically, a positive example x is input into the first generator G_x-y of the first preset generative adversarial network to generate a first image y that does not contain the target image;
the first image y is then input into the second generator G_y-x of the first preset generative adversarial network to generate a second image x' that contains the target image;
a first loss value of the first image y is determined by the first discriminator D_y, and a second loss value of the second image x' by the second discriminator D_x, of the first preset generative adversarial network;
finally, the parameters of the first preset generative adversarial network are adjusted according to the first and second loss values to obtain the first generative adversarial network.
In particular, the first loss value is used to adjust the parameters of the first generator, and the second loss value the parameters of the second generator.
The image recognition device also uses the first image generated by the first generator as a training sample for the first discriminator, so as to improve the discriminator's accuracy.
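One training pass of this cycle can be sketched as follows, under loudly stated assumptions: generators and discriminators are toy callables over a list-of-regions "image", and "loss" is modeled simply as one minus the discriminator's score. None of these definitions come from the patent; they only make the data flow of x → y → x' and the two loss values concrete.

```python
# Illustrative sketch only: generators and discriminators are toy callables,
# and "loss" is modeled as 1 - discriminator score.

def first_cycle_losses(x, g_xy, g_yx, d_y, d_x):
    y = g_xy(x)                  # first image: target removed
    x_prime = g_yx(y)            # second image: target re-added
    loss1 = 1.0 - d_y(y)         # does y look like a real clean image?
    loss2 = 1.0 - d_x(x_prime)   # does x' look like a real stamped image?
    return loss1, loss2

g_xy = lambda img: [r for r in img if r != "stamp"]   # strip the target
g_yx = lambda img: img + ["stamp"]                    # re-add the target
d_y = lambda img: 0.0 if "stamp" in img else 1.0      # scores clean images as real
d_x = lambda img: 1.0 if "stamp" in img else 0.0      # scores stamped images as real
print(first_cycle_losses(["text", "stamp"], g_xy, g_yx, d_y, d_x))  # -> (0.0, 0.0)
```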
b. Updating the parameters of the first network module in the first generative adversarial network into the corresponding second network module of the second preset generative adversarial network, where the second network module comprises the first generator.
After a round of training the first preset generative adversarial network on the positive examples, the parameters of the first network module in the resulting first generative adversarial network are updated into the corresponding second network module of the second preset generative adversarial network. As shown in FIG. 4, the second network module in the second preset generative adversarial network includes the second generator G_y-x, the second discriminator D_x, the first generator G_x-y, and the first discriminator D_y.
Specifically, the parameters of the first generator in the first generative adversarial network are copied into the first generator of the second preset generative adversarial network; likewise, the parameters of the second generator, the first discriminator, and the second discriminator are copied into their counterparts in the second preset generative adversarial network.
It should be noted that the first generative adversarial network here may be the most recently trained one.
The discriminators of the first generative adversarial network push the produced image x' to be as close as possible to the real image x, and those of the second generative adversarial network push the produced image y' to be as close as possible to the real image y; through this continual game between discriminators and generators, the abilities of both sides keep improving.
c. Training the second preset generative adversarial network on the negative examples to obtain a second generative adversarial network.
If the first preset generative adversarial network was trained first, the parameters of the corresponding network module in the second preset generative adversarial network are now the same as those of the trained first preset generative adversarial network.
Training the second preset generative adversarial network on the negative examples proceeds as follows:
a negative example y is input into the second generator G_y-x of the second preset generative adversarial network to generate a third image x that contains the target image;
the third image x is input into the first generator G_x-y of the second preset generative adversarial network to generate a fourth image y' that does not contain the target image;
a third loss value of the third image x is determined by the second discriminator D_x, and a fourth loss value of the fourth image y' by the first discriminator D_y, of the second preset generative adversarial network;
finally, the parameters of the second preset generative adversarial network are adjusted according to the third and fourth loss values to obtain the second generative adversarial network.
Specifically, the parameters of the second generator are adjusted according to the third loss value, and those of the first generator according to the fourth loss value.
D. And updating parameters of a second network module in the second generation countermeasure network into a first network module corresponding to the first preset generation countermeasure network, wherein the first network module comprises a first generator.
After a new training round is performed on the second preset generation countermeasure network according to the negative example, parameters of a second network module in the second generation countermeasure network after the training round are updated to a first network module corresponding to the first generation countermeasure network.
Specifically, updating parameters of a second generator in a second generation countermeasure network to the second generator in the first preset generation countermeasure network; updating parameters of a first generator in the second generation countermeasure network to a first generator in a first preset generation countermeasure network; updating parameters of a second discriminator in the second generation countermeasure network to the second discriminator in the first preset generation countermeasure network; updating the parameters of the first discriminator in the second generation countermeasure network to the first preset generation countermeasure network.
The second generation countermeasure network is the second generation countermeasure network that has been newly trained.
It should be noted that, in this embodiment, the first preset generation countermeasure network (that is, the first generation countermeasure network that has not yet converged) and the second preset generation countermeasure network (that is, the second generation countermeasure network that has not yet converged) need to be trained alternately according to the positive example samples and the negative example samples. During the alternate training, the latest trained parameters are updated to the corresponding network modules of the other network until both networks converge.
The training order of the first preset generation countermeasure network and the second preset generation countermeasure network at the beginning of training is not limited herein.
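The cross-network parameter update described above can be sketched as follows, with each network's modules modelled as a plain dictionary of NumPy arrays (the module names and parameter sizes are illustrative stand-ins, not taken from the patent):

```python
import numpy as np

def sync_modules(src_net, dst_net, module_names):
    """Copy the parameters of the named modules from src_net into dst_net.

    Each network is modelled as a dict mapping module name -> parameter array;
    after the copy, the corresponding modules hold identical values.
    """
    for name in module_names:
        dst_net[name] = src_net[name].copy()

# Two toy "generation countermeasure networks" sharing the same module layout.
rng = np.random.default_rng(0)
modules = ("first_generator", "second_generator",
           "first_discriminator", "second_discriminator")
first_gan = {m: rng.standard_normal(4) for m in modules}
second_gan = {m: rng.standard_normal(4) for m in modules}

# After a training round on first_gan, push its parameters into second_gan.
sync_modules(first_gan, second_gan, modules)
```

In a framework such as PyTorch the same step would typically be a `state_dict` copy between the corresponding sub-modules.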
In the embodiment of the invention, only the first generator in the first generation countermeasure network or the second generation countermeasure network needs to be extracted to process images.
E. Determining the generation countermeasure network according to the first generation countermeasure network or the second generation countermeasure network.
When network convergence is determined according to the first generation countermeasure network and the second generation countermeasure network, the parameters of the corresponding network modules in the two converged networks are the same, because those parameters have been updated to each other during training. At this time, the generation countermeasure network is the converged first generation countermeasure network and/or the converged second generation countermeasure network.
The first generator may therefore be acquired from either the first generation countermeasure network or the second generation countermeasure network.
204. Carrying out convolution processing on the image to be processed based on the convolution sub-network to obtain non-target image features of the image to be processed.
The non-target image features are image features corresponding to the parts of the image to be processed other than the target image. The network structure of the first generator is shown in fig. 5. Specifically, the image features of the image to be processed are extracted based on the convolution layers in the convolution sub-network, wherein these image features are non-target image features, that is, they do not include the image features corresponding to the target image; then, downsampling processing is performed on the image features based on the pooling layers in the convolution sub-network to obtain the non-target image features.
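The convolution-plus-pooling step of the convolution sub-network can be sketched in NumPy as a single "valid" convolution followed by 2x2 max pooling (the kernel, layer count, and sizes are illustrative; a real implementation would use many learned filters across several layers):

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution (cross-correlation) of a single-channel image."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2d(features, size=2):
    """Non-overlapping max pooling: downsamples the feature map by `size`."""
    h, w = features.shape[0] // size, features.shape[1] // size
    return features[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)  # stand-in for an input image
kernel = np.ones((3, 3)) / 9.0                    # simple averaging filter
features = conv2d(image, kernel)                  # (4, 4) feature map
pooled = max_pool2d(features)                     # (2, 2) downsampled features
```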
The number of convolution layers and deconvolution layers in the embodiment of the present invention is not limited, and may be 7 or other numbers, and the numbers of the pooling layers and the upsampling layers in the embodiment of the present invention are not limited.
205. Performing deconvolution processing on the non-target image features based on the deconvolution sub-network to obtain an image which does not contain the target image.
Specifically, deconvolution processing is performed on the non-target image features based on the deconvolution layers in the deconvolution sub-network to restore the image; then, up-sampling processing is performed on the deconvolved non-target image features based on the up-sampling layers in the deconvolution sub-network to restore the image size, thereby obtaining a processed image. The processed image may be the image to be processed with the target image removed, for example, a bill image with the seal removed.
As shown in fig. 5, since detail information may be lost during convolution layer encoding, the convolution sub-network is connected with the corresponding layers in the deconvolution sub-network to reduce the loss of information. In this case, performing deconvolution processing on the non-target image features based on a deconvolution layer in the deconvolution sub-network includes: acquiring, by the deconvolution layer, the image features output by the corresponding convolution layer; acquiring, by the deconvolution layer, the image features output by the previous layer; and performing deconvolution processing on the image features output by the convolution layer together with the image features output by the previous layer.
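A minimal sketch of the skip connection described above, with the "deconvolution" reduced to nearest-neighbour up-sampling plus a channel concatenation of the matching encoder feature map (the layer shapes and the merge-by-stacking choice are assumptions for illustration):

```python
import numpy as np

def upsample2d(features, factor=2):
    """Nearest-neighbour up-sampling, restoring spatial resolution."""
    return features.repeat(factor, axis=0).repeat(factor, axis=1)

def skip_merge(decoder_features, encoder_features):
    """Concatenate the saved encoder feature map with the up-sampled decoder
    features along a new channel axis, as in a U-Net style skip connection."""
    up = upsample2d(decoder_features)
    assert up.shape == encoder_features.shape, "corresponding layers must match"
    return np.stack([up, encoder_features], axis=0)  # (2, H, W) channel stack

encoder_map = np.ones((4, 4))        # feature map saved from the encoder layer
decoder_map = np.full((2, 2), 0.5)   # coarser features from the previous decoder layer
merged = skip_merge(decoder_map, encoder_map)  # input to the next deconvolution layer
```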
In some embodiments, when it is detected that the image to be processed contains the target image, a target image area corresponding to the target image is extracted from the image to be processed; that is, a partial image area containing the target image is cut from the image to be processed. In this case, the non-target image features may be the image features corresponding to the parts of the target image area other than the target image.
The convolution sub-network and the deconvolution sub-network in the first generator then process only the target image area, removing the target image from it to obtain the processed image.
The processed image is then spliced back into the original image, yielding the image to be processed with the target image removed.
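The crop-process-splice flow above can be sketched with NumPy slicing (the bounding box format and the stand-in removal step are illustrative, not the patent's actual generator):

```python
import numpy as np

def remove_region_and_paste(image, box, process):
    """Crop `box` = (top, left, height, width) from `image`, run `process`
    on the crop, and splice the result back at the same position."""
    top, left, h, w = box
    region = image[top:top + h, left:left + w].copy()
    result = image.copy()
    result[top:top + h, left:left + w] = process(region)
    return result

ticket = np.zeros((8, 8))
ticket[2:5, 3:6] = 1.0  # stand-in for a stamp occupying a 3x3 area
# Here a lambda zeroing the region stands in for the first generator.
cleaned = remove_region_and_paste(ticket, (2, 3, 3, 3), lambda r: np.zeros_like(r))
```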
In some embodiments, after the processed image is obtained, optical character recognition (OCR) may be performed on the image to be processed from which the target image has been removed, for example, on the bill image with the stamp removed, so as to obtain the text information corresponding to the image to be processed, for example, the text information of the bill image.
The image processing device acquires an image to be processed and performs target image detection on it. When it is detected that the image to be processed contains a target image, a first generator in a generation countermeasure network is obtained, the generation countermeasure network being trained from sample images, wherein the first generator comprises a convolution sub-network and a deconvolution sub-network connected with the convolution sub-network. Convolution processing is then performed on the image to be processed based on the convolution sub-network to obtain non-target image features of the image to be processed, and deconvolution processing is performed on the non-target image features based on the deconvolution sub-network to obtain an image which does not contain the target image. In this scheme, the first generator in the generation countermeasure network extracts the non-target image features of the image to be processed and then generates, from those features, a processed image which does not contain the target image, thereby removing the target image from the image to be processed.
The image processing method in the embodiment of the invention is an end-to-end method, and the entire removal process is automatic and requires no manual intervention.
The method described in the above embodiments is described in further detail below by way of example.
Referring to fig. 6a, in this embodiment, the image processing apparatus is specifically integrated in a network device, and the image to be processed is a bill image and the target image is a stamp image.
601. The network device acquires the ticket image.
The bill image is an image on which seal image detection needs to be performed.
In some embodiments, the raw ticket image may be as shown in FIG. 6 b.
In some embodiments, when a user needs to remove a stamp image in a ticket image, the ticket image is input into the network device, so that the network device acquires the ticket image.
The bill image may be a bill image obtained by scanning a bill, or may be a bill image obtained by photographing a bill, which is not limited herein specifically.
602. The network device performs seal image detection on the bill image.
After the network device acquires the bill image, seal image detection is performed on the image to detect whether the bill image contains a seal image.
Specifically, the embodiment of the invention may implement seal image detection on the image to be processed through a seal detection network, an attention network, or the like; the specific detection method is not limited herein, as long as it can detect whether the bill image contains a seal image.
603. When detecting that the bill image contains the seal image, the network equipment extracts a seal image area corresponding to the seal image from the bill image.
In order to improve the efficiency of seal image removal, the network device may extract a seal image area corresponding to the seal image from the bill image. The seal image area is smaller than the bill image but contains all the seal image information; it may be a circular area or a square area, and its specific shape is not limited here.
In some embodiments, when it is detected that the bill image does not contain a seal image, OCR recognition may be performed on the image directly.
604. The network device obtains a first generator in a generation countermeasure network.
The generation countermeasure network is trained from the sample images, wherein the first generator includes a convolution sub-network and a deconvolution sub-network connected to the convolution sub-network.
When a seal image is detected and the seal image area has been extracted from the bill image, the first generator in the generation countermeasure network needs to be acquired, and the first generator is then used to remove the seal image from the bill image.
Before the bill image is acquired, a preset generation countermeasure network is trained to obtain the trained generation countermeasure network.
The preset generation countermeasure network comprises a first preset generation countermeasure network and a second preset generation countermeasure network; the first generation countermeasure network is obtained after the first preset generation countermeasure network is trained, and the second generation countermeasure network is obtained after the second preset generation countermeasure network is trained. The structure of the first preset generation countermeasure network is shown in fig. 3, and the structure of the second preset generation countermeasure network is shown in fig. 4.
Because it is difficult to collect a large number of paired images with and without the target image, the network is trained with an alternating method, and the positive example samples and negative example samples in this embodiment may be unpaired. A positive example sample is a bill image containing a seal image, and a negative example sample is a bill image not containing a seal image. In the embodiment of the invention, the first preset generation countermeasure network is trained with the positive example samples, and the second preset generation countermeasure network is trained with the negative example samples.
In the embodiment of the invention, the positive example samples and the negative example samples are used to alternately train the first preset generation countermeasure network and the second preset generation countermeasure network, respectively. In addition, after one preset generation countermeasure network is trained, its parameters are updated to the corresponding network modules of the other preset generation countermeasure network. For example, after the first preset generation countermeasure network is trained, its parameters are updated to the corresponding network modules of the second preset generation countermeasure network; after the second preset generation countermeasure network is trained, its parameters are updated to the corresponding network modules of the first preset generation countermeasure network. This continues until the first preset generation countermeasure network and/or the second preset generation countermeasure network converges, producing the trained first generation countermeasure network and/or second generation countermeasure network, in which the parameters of the corresponding network modules are the same.
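The alternating training schedule above can be sketched as a short skeleton in which the training step, the samples, and the convergence test are all stand-in stubs (the real steps would run the adversarial updates described in this embodiment):

```python
import numpy as np

def alternating_train(first_gan, second_gan, positives, negatives,
                      train_step, converged, max_rounds=100):
    """Alternately train first_gan on positive samples and second_gan on
    negative samples, copying the freshly trained parameters into the
    other network after each step, until convergence."""
    for _ in range(max_rounds):
        train_step(first_gan, positives)                              # positive round
        second_gan.update({k: v.copy() for k, v in first_gan.items()})  # push across
        train_step(second_gan, negatives)                             # negative round
        first_gan.update({k: v.copy() for k, v in second_gan.items()})  # push back
        if converged(first_gan, second_gan):
            break
    return first_gan, second_gan

# Toy demonstration: "training" nudges a single shared parameter toward 0.
def toy_step(net, _samples):
    net["first_generator"] = net["first_generator"] * 0.5

first = {"first_generator": np.array([1.0])}
second = {"first_generator": np.array([1.0])}
first, second = alternating_train(
    first, second, positives=[], negatives=[],
    train_step=toy_step,
    converged=lambda a, b: abs(a["first_generator"][0]) < 1e-3)
```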
In the embodiment of the invention, only the first generator in the first generation countermeasure network or the second generation countermeasure network needs to be extracted to process images.
605. The network device extracts image features of the stamp image area based on the convolutional layer in the convolutional subnetwork.
The structure of the first generator is shown in fig. 5, specifically, the image features of the seal image area are extracted based on the convolution layer in the convolution sub-network; and then, carrying out downsampling processing on the image features based on a pooling layer in the convolution sub-network to obtain non-target image features in the seal image area.
The number of convolution layers and deconvolution layers in the embodiment of the present invention is not limited, and may be 7 or other numbers, and the numbers of the pooling layers and the upsampling layers in the embodiment of the present invention are not limited.
606. The network equipment performs downsampling processing on the image features based on a pooling layer in the convolution sub-network to obtain non-seal image features.
In some embodiments, after the network device acquires the image features of the seal image area, the image is downsampled, and the image features are reduced to obtain non-seal image features in the seal image area.
607. The network device deconvolves the non-stamp image features based on deconvolution layers in the deconvolution sub-network.
As shown in fig. 5, since detail information may be lost during convolution layer encoding, the convolution sub-network is connected with the corresponding layers in the deconvolution sub-network to reduce the loss of information. In this case, performing deconvolution processing on the non-target image features based on a deconvolution layer in the deconvolution sub-network includes: acquiring, by the deconvolution layer, the image features output by the corresponding convolution layer; acquiring, by the deconvolution layer, the image features output by the previous layer; and performing deconvolution processing on the image features output by the convolution layer together with the image features output by the previous layer.
608. The network device performs up-sampling processing on the non-seal image features subjected to the deconvolution processing based on an up-sampling layer in the deconvolution sub-network to obtain a processed image corresponding to the seal image region.
After the non-seal image features have been deconvolved and the image corresponding to them has been restored, up-sampling processing is performed on the deconvolved non-seal image features to restore the image size of the seal image area, thereby obtaining the processed image corresponding to the seal image area.
609. The network equipment splices the processed image corresponding to the seal image area back to the original bill image to obtain the processed image corresponding to the bill image.
After the seal image in the seal image area is removed, the processed image corresponding to the seal image area is spliced back into the original bill image to obtain the processed image corresponding to the bill image, that is, the bill image with the seal image removed.
In some embodiments, the processed image corresponding to the ticket image is as shown in FIG. 6 c.
610. The network device performs OCR (optical character recognition) on the processed image corresponding to the bill image to obtain the text information corresponding to the bill image.
In this embodiment, since the processed image corresponding to the bill image is the bill image with the seal image removed, performing OCR recognition on the processed image reduces the interference of the seal image with OCR recognition. Moreover, when removing the seal, the generator in the present invention can recover text covered by the seal, so the accuracy of OCR recognition can be improved.
In other embodiments, the seal image may also be removed through a super-resolution reconstruction network or another generation countermeasure network.
Referring to fig. 6d, fig. 6d is a schematic flow chart of the framework of the image processing method provided by the present invention. Specifically, for an input image, the specific position of the seal is first detected through a seal detection network. If no seal exists, OCR recognition is performed directly; otherwise, the seal image area where the seal is located is intercepted and input into the generator of the generation countermeasure network to generate an image without the seal. The seal-free image is then spliced back into the original image, and OCR recognition is performed.
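The framework flow of fig. 6d can be summarised as a small pipeline in which every component (detector, generator, splicer, OCR engine) is a stub; the toy stand-ins below treat the image as a string and the seal as a '#' marker purely for illustration:

```python
def process_ticket(image, detect_stamp, remove_stamp, splice, ocr):
    """End-to-end flow: detect the stamp position; if absent, run OCR
    directly; otherwise remove the stamp from the detected region and
    splice the clean region back before OCR."""
    box = detect_stamp(image)
    if box is None:                       # no stamp: recognise directly
        return ocr(image)
    clean_region = remove_stamp(image, box)
    return ocr(splice(image, box, clean_region))

# Toy stand-ins for the real components.
result = process_ticket(
    "inv#oice",
    detect_stamp=lambda img: img.index("#") if "#" in img else None,
    remove_stamp=lambda img, i: "",                  # generator stub
    splice=lambda img, i, r: img[:i] + r + img[i + 1:],
    ocr=lambda img: img.upper())                     # OCR stub
```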
The image processing device acquires an image to be processed and performs target image detection on it. When it is detected that the image to be processed contains a target image, a first generator in a generation countermeasure network is obtained, the generation countermeasure network being trained from sample images, wherein the first generator comprises a convolution sub-network and a deconvolution sub-network connected with the convolution sub-network. Convolution processing is then performed on the image to be processed based on the convolution sub-network to obtain non-target image features of the image to be processed, and deconvolution processing is performed on the non-target image features based on the deconvolution sub-network to obtain an image which does not contain the target image. In this scheme, the first generator in the generation countermeasure network extracts the non-target image features of the image to be processed and then generates, from those features, a processed image which does not contain the target image, thereby removing the target image from the image to be processed.
In order to better implement the above method, correspondingly, the embodiment of the invention also provides an image processing device, which can be integrated in network equipment, wherein the network equipment can be a server or a terminal and other equipment.
For example, as shown in FIG. 7, the image processing apparatus may include:
A first acquiring unit 701 configured to acquire an image to be processed;
a detection unit 702, configured to detect a target image of the image to be processed;
a second obtaining unit 703, configured to obtain, when it is detected that the image to be processed includes a target image, a first generator in a generated countermeasure network, where the generated countermeasure network is trained by sample images, and the target image is an image to be detected, where the first generator includes a convolution sub-network and a deconvolution sub-network connected to the convolution sub-network;
A first processing unit 704, configured to perform convolution processing on the image to be processed based on the convolution sub-network, so as to obtain non-target image features of the image to be processed, where the non-target image features are image features corresponding to images other than the target image in the image to be processed;
And a second processing unit 705, configured to perform deconvolution processing on the non-target image feature based on the deconvolution sub-network, to obtain an image that does not include the target image.
In some embodiments, the first processing unit 704 is specifically configured to:
extracting image features of the image to be processed based on a convolution layer in the convolution sub-network;
and carrying out downsampling processing on the image features based on a pooling layer in the convolution sub-network to obtain the non-target image features.
In some embodiments, the second processing unit 705 is specifically configured to:
Deconvolution processing is carried out on the non-target image features based on deconvolution layers in the deconvolution sub-network;
And carrying out up-sampling processing on the non-target image characteristics subjected to the deconvolution processing based on an up-sampling layer in the deconvolution sub-network to obtain the processed image.
In some embodiments, the second processing unit 705 is further specifically configured to:
acquiring image features output by the convolution layer based on the deconvolution layer;
Acquiring image features output by the previous layer based on the deconvolution layer;
and performing deconvolution processing on the image features output by the convolution layer and the image features output by the previous layer based on the deconvolution layer.
Referring to fig. 8, in some embodiments, the apparatus further comprises:
a third obtaining unit 706, configured to obtain the sample image, where the sample image includes a positive example sample and a negative example sample, the positive example sample is a sample including a target image, and the negative example sample is a sample not including the target image;
And the training unit 707 is configured to perform alternating training on a preset generated countermeasure network according to the positive example sample and the negative example sample, so as to obtain the generated countermeasure network, where the preset generated countermeasure network includes a first generator.
In some embodiments, the preset generation countermeasure network includes a first preset generation countermeasure network and a second preset generation countermeasure network, and the training unit 707 is specifically configured to:
Training the first preset generation countermeasure network according to the positive example sample to obtain a first generation countermeasure network;
updating parameters of a first network module in the first generation countermeasure network into a second network module corresponding to the second preset generation countermeasure network, wherein the second network module comprises a first generator;
Training the second preset generation countermeasure network according to the negative example sample to obtain a second generation countermeasure network;
Updating parameters of a second network module in the second generation countermeasure network into a first network module corresponding to the first preset generation countermeasure network, wherein the first network module comprises a first generator;
Determining the generated countermeasure network from the first generated countermeasure network or the second generated countermeasure network.
In some embodiments, the first network module further comprises a second generator, a first arbiter and a second arbiter, and the training unit 707 is further specifically configured to:
inputting the positive sample into a first generator in the first preset generation countermeasure network to generate a first image which does not contain a target image;
Inputting the first image into a second generator in the first preset generation countermeasure network to generate a second image containing a target image;
determining a first loss value of the first image by a first arbiter in the first preset generation countermeasure network, and determining a second loss value of the second image by a second arbiter in the first preset generation countermeasure network;
And adjusting parameters of the first preset generation countermeasure network according to the first loss value and the second loss value to obtain the first generation countermeasure network.
In some embodiments, the second network module further comprises a second generator, a first arbiter and a second arbiter, and the training unit 707 is further specifically configured to:
inputting the negative example sample into a second generator in the second preset generation countermeasure network to generate a third image containing a target image;
Inputting the third image into a first generator in the second preset generation countermeasure network to generate a fourth image which does not contain a target image;
determining a third loss value of the third image by a second arbiter in the second preset generation countermeasure network, and determining a fourth loss value of the fourth image by a first arbiter in the second preset generation countermeasure network;
and adjusting parameters of the second preset generation countermeasure network according to the third loss value and the fourth loss value to obtain the second generation countermeasure network.
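The patent does not fix the exact loss functions, so the following sketch uses a least-squares adversarial loss plus an optional cycle reconstruction term, as is common for unpaired image-to-image translation; the discriminator scores and image values are illustrative placeholders:

```python
import numpy as np

def adversarial_loss(score, target):
    """Least-squares GAN loss of a discriminator score against its target label."""
    return float(np.mean((score - target) ** 2))

def cycle_loss(original, reconstructed):
    """L1 reconstruction loss between a sample and its round-trip reconstruction."""
    return float(np.mean(np.abs(original - reconstructed)))

negative = np.full((4, 4), 0.2)   # toy negative sample (no target image)
third_image = negative + 0.1      # second generator "adds" a target image
fourth_image = third_image - 0.1  # first generator removes it again

third_loss = adversarial_loss(np.array([0.8]), 1.0)   # second discriminator, third image
fourth_loss = adversarial_loss(np.array([0.6]), 1.0)  # first discriminator, fourth image
reconstruction = cycle_loss(negative, fourth_image)   # optional round-trip term
total = third_loss + fourth_loss + reconstruction     # objective for this round
```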
In some embodiments, the apparatus further comprises:
An extracting unit 708, configured to extract, when it is detected that the image to be processed includes a target image, a target image area corresponding to the target image from the image to be processed;
at this time, the first processing unit 704 is specifically configured to:
And carrying out convolution processing on the target image area based on the convolution sub-network to obtain non-target image characteristics of the image to be processed.
In the embodiment of the invention, the image processing device acquires an image to be processed; the detection unit 702 performs target image detection on the image to be processed; when it is detected that the image to be processed contains the target image, the second acquisition unit 703 acquires a first generator in a generation countermeasure network, the generation countermeasure network being trained from sample images, wherein the first generator includes a convolution sub-network and a deconvolution sub-network connected to the convolution sub-network; then, the first processing unit 704 performs convolution processing on the image to be processed based on the convolution sub-network to obtain non-target image features of the image to be processed; the second processing unit 705 then performs deconvolution processing on the non-target image features based on the deconvolution sub-network to obtain an image that does not contain the target image. In this scheme, the first generator in the generation countermeasure network extracts the non-target image features of the image to be processed and then generates, from those features, a processed image which does not contain the target image, thereby removing the target image from the image to be processed.
In addition, the embodiment of the present invention further provides a network device, as shown in fig. 9, which shows a schematic structural diagram of the network device according to the embodiment of the present invention, specifically:
The network device may include components such as a processor 901 having one or more processing cores, a memory 902 including one or more computer-readable storage media, a power supply 903, and an input unit 904. Those skilled in the art will appreciate that the network device structure shown in fig. 9 does not limit the network device, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components. Wherein:
The processor 901 is the control center of the network device; it connects the various parts of the entire network device using various interfaces and lines, and performs the various functions of the network device and processes data by running or executing the software programs and/or modules stored in the memory 902 and calling the data stored in the memory 902, thereby monitoring the network device as a whole. Optionally, the processor 901 may include one or more processing cores; preferably, the processor 901 may integrate an application processor and a modem processor, wherein the application processor mainly handles the operating system, user interfaces, applications, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor may also not be integrated into the processor 901.
The memory 902 may be used to store software programs and modules, and the processor 901 performs various functional applications and data processing by running the software programs and modules stored in the memory 902. The memory 902 may mainly include a program storage area and a data storage area, wherein the program storage area may store the operating system, the application programs required by at least one function (such as a sound playing function and an image playing function), and the like, and the data storage area may store data created according to the use of the network device, and the like. In addition, the memory 902 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, the memory 902 may also include a memory controller to provide the processor 901 with access to the memory 902.
The network device further comprises a power supply 903 for supplying power to the various components, and preferably the power supply 903 may be logically connected to the processor 901 through a power management system, so that functions of charge, discharge, power consumption management and the like are performed through the power management system. The power supply 903 may also include one or more of any components, such as a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
The network device may also include an input unit 904, which input unit 904 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the network device may further include a display unit or the like, which is not described herein. In particular, in this embodiment, the processor 901 in the network device loads executable files corresponding to the processes of one or more application programs into the memory 902 according to the following instructions, and the processor 901 executes the application programs stored in the memory 902, so as to implement various functions as follows:
Acquiring an image to be processed; detecting a target image of the image to be processed; when the image to be processed is detected to contain a target image, a first generator in a generated countermeasure network is obtained, the generated countermeasure network is trained by sample images, wherein the target image is an image to be detected, and the first generator comprises a convolution sub-network and a deconvolution sub-network connected with the convolution sub-network; carrying out convolution processing on the image to be processed based on the convolution sub-network to obtain non-target image characteristics of the image to be processed, wherein the non-target image characteristics are image characteristics corresponding to images except the target image in the image to be processed; and carrying out deconvolution processing on the non-target image features based on the deconvolution sub-network to obtain an image which does not contain the target image.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
As can be seen from the above, in the embodiment of the present invention, the image processing apparatus acquires an image to be processed and performs target image detection on it. When it is detected that the image to be processed contains a target image, a first generator in a generation countermeasure network is obtained, the generation countermeasure network being trained from sample images, wherein the first generator comprises a convolution sub-network and a deconvolution sub-network connected with the convolution sub-network. Convolution processing is then performed on the image to be processed based on the convolution sub-network to obtain non-target image features of the image to be processed, and deconvolution processing is performed on the non-target image features based on the deconvolution sub-network to obtain an image which does not contain the target image. In this scheme, the first generator in the generation countermeasure network extracts the non-target image features of the image to be processed and then generates, from those features, a processed image which does not contain the target image, thereby removing the target image from the image to be processed.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the above embodiments may be completed by instructions, or by instructions controlling related hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present invention provides a storage medium storing a plurality of instructions that can be loaded by a processor to perform the steps of any of the image processing methods provided by the embodiments of the present invention. For example, the instructions may perform the following steps:
Acquiring an image to be processed; performing target image detection on the image to be processed; when the image to be processed is detected to contain a target image, acquiring a first generator in a generative adversarial network, the generative adversarial network being trained with sample images, wherein the target image is the image to be detected, and the first generator comprises a convolution sub-network and a deconvolution sub-network connected to the convolution sub-network; performing convolution processing on the image to be processed based on the convolution sub-network to obtain non-target image features of the image to be processed, wherein the non-target image features are image features corresponding to the portion of the image to be processed other than the target image; and performing deconvolution processing on the non-target image features based on the deconvolution sub-network to obtain an image that does not contain the target image.
For the specific implementation of each of the above operations, reference may be made to the previous embodiments, and details are not repeated herein.
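For illustration, the detect-then-remove flow that the instructions carry out can be sketched as follows; `detect` and the encode/decode callables are hypothetical stand-ins for the trained detection model and the two sub-networks of the first generator, and the list-based "image" is only a toy:

```python
def remove_target(image, detect, encode, decode):
    """Sketch of the instruction flow: run target image detection first,
    and only invoke the first generator when a target image is present."""
    if not detect(image):
        return image          # no target image: return the input unchanged
    features = encode(image)  # convolution sub-network: non-target features
    return decode(features)   # deconvolution sub-network: rebuild the image

# Toy usage with stand-in callables: a "target" is any pixel value of 9.
detect = lambda img: 9 in img
encode = lambda img: [v for v in img if v != 9]  # drop target features
decode = lambda feats: list(feats)               # reconstruct the image
print(remove_target([1, 9, 2], detect, encode, decode))  # [1, 2]
print(remove_target([1, 2, 3], detect, encode, decode))  # [1, 2, 3]
```

The early return mirrors the claimed behaviour that the generator is only acquired and applied after a target image is detected.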
The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
Since the instructions stored in the storage medium can perform the steps of any image processing method provided by the embodiments of the present invention, they can achieve the beneficial effects achievable by any image processing method provided by the embodiments of the present invention; for details, refer to the previous embodiments, which are not repeated herein.
The image processing method, apparatus, and storage medium provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core ideas. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope in light of the ideas of the present invention. In summary, the content of this description should not be construed as limiting the present invention.
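As a non-limiting sketch, the alternating training scheme recited in the claims below — train the first preset generative adversarial network on positive samples, copy the shared network module into the second preset network, train the second on negative samples, then copy the module back — can be outlined as follows. The scalar "weight" and the toy update rule are hypothetical stand-ins for the real network parameters and adversarial losses:

```python
def train_step(gan, samples, lr=0.5):
    """Toy stand-in for one round of adversarial training: nudges the
    shared first-generator parameter towards the batch mean."""
    for s in samples:
        gan["g1_weight"] += lr * (s - gan["g1_weight"])
    return gan

def alternate_train(positives, negatives, rounds=3):
    gan_a = {"g1_weight": 0.0}  # first preset generative adversarial network
    gan_b = {"g1_weight": 0.0}  # second preset generative adversarial network
    for _ in range(rounds):
        gan_a = train_step(gan_a, positives)
        gan_b["g1_weight"] = gan_a["g1_weight"]  # update shared module A -> B
        gan_b = train_step(gan_b, negatives)
        gan_a["g1_weight"] = gan_b["g1_weight"]  # update shared module B -> A
    return gan_a  # either trained network may be kept as the final one

model = alternate_train(positives=[1.0], negatives=[0.0])
print(0.0 < model["g1_weight"] < 1.0)  # True: the module has seen both sample types
```

The parameter copying is what lets a single first generator benefit from both the positive-sample and negative-sample training passes, matching the "updating parameters of a first/second network module" steps of claim 1.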
Claims (9)
1. An image processing method, comprising:
acquiring a sample image, wherein the sample image comprises a positive sample and a negative sample, the positive sample is a sample containing a target image, and the negative sample is a sample not containing the target image;
Alternately training a preset generative adversarial network according to the positive sample and the negative sample to obtain a generative adversarial network, wherein the preset generative adversarial network comprises a first generator; the preset generative adversarial network comprises a first preset generative adversarial network and a second preset generative adversarial network, the first preset generative adversarial network comprises a first generator, and the second preset generative adversarial network comprises a first generator; the alternately training the preset generative adversarial network according to the positive sample and the negative sample to obtain the generative adversarial network comprises: training the first preset generative adversarial network according to the positive sample to obtain a first generative adversarial network; updating parameters of a first network module in the first generative adversarial network into a corresponding second network module of the second preset generative adversarial network, wherein the second network module comprises the first generator; training the second preset generative adversarial network according to the negative sample to obtain a second generative adversarial network; updating parameters of a second network module in the second generative adversarial network into a corresponding first network module of the first preset generative adversarial network, wherein the first network module comprises the first generator; and determining the generative adversarial network from the first generative adversarial network or the second generative adversarial network;
Acquiring an image to be processed;
Performing target image detection on the image to be processed;
When the image to be processed is detected to contain a target image, acquiring a first generator in the generative adversarial network, the generative adversarial network being trained with the sample images, wherein the target image is the image to be detected, and the first generator comprises a convolution sub-network and a deconvolution sub-network connected to the convolution sub-network;
Performing convolution processing on the image to be processed based on the convolution sub-network to obtain non-target image features of the image to be processed, wherein the non-target image features are image features corresponding to the portion of the image to be processed other than the target image; and
Performing deconvolution processing on the non-target image features based on the deconvolution sub-network to obtain an image that does not contain the target image.
2. The method according to claim 1, wherein the performing convolution processing on the image to be processed based on the convolution sub-network to obtain non-target image features of the image to be processed comprises:
extracting image features of the image to be processed based on a convolution layer in the convolution sub-network; and
performing downsampling processing on the image features based on a pooling layer in the convolution sub-network to obtain the non-target image features.
3. The method according to claim 1, wherein the performing deconvolution processing on the non-target image features based on the deconvolution sub-network to obtain an image that does not contain the target image comprises:
performing deconvolution processing on the non-target image features based on a deconvolution layer in the deconvolution sub-network; and
performing up-sampling processing on the deconvolved non-target image features based on an up-sampling layer in the deconvolution sub-network to obtain a processed image.
4. The method according to claim 3, wherein the performing deconvolution processing on the non-target image features based on a deconvolution layer in the deconvolution sub-network comprises:
acquiring, based on the deconvolution layer, image features output by the convolution layer;
acquiring, based on the deconvolution layer, image features output by the previous layer; and
performing, based on the deconvolution layer, deconvolution processing on the image features output by the convolution layer and the image features output by the previous layer.
5. The method according to claim 1, wherein the first network module further comprises a second generator, a first discriminator, and a second discriminator, and the training the first preset generative adversarial network according to the positive sample to obtain a first generative adversarial network comprises:
inputting the positive sample into the first generator in the first preset generative adversarial network to generate a first image that does not contain the target image;
inputting the first image into the second generator in the first preset generative adversarial network to generate a second image that contains the target image;
determining a first loss value of the first image by the first discriminator in the first preset generative adversarial network, and determining a second loss value of the second image by the second discriminator in the first preset generative adversarial network; and
adjusting parameters of the first preset generative adversarial network according to the first loss value and the second loss value to obtain the first generative adversarial network.
6. The method according to claim 1, wherein the second network module further comprises a second generator, a first discriminator, and a second discriminator, and the training the second preset generative adversarial network according to the negative sample to obtain a second generative adversarial network comprises:
inputting the negative sample into the second generator in the second preset generative adversarial network to generate a third image that contains the target image;
inputting the third image into the first generator in the second preset generative adversarial network to generate a fourth image that does not contain the target image;
determining a third loss value of the third image by the second discriminator in the second preset generative adversarial network, and determining a fourth loss value of the fourth image by the first discriminator in the second preset generative adversarial network; and
adjusting parameters of the second preset generative adversarial network according to the third loss value and the fourth loss value to obtain the second generative adversarial network.
7. The method according to any one of claims 1 to 6, wherein after the target image detection is performed on the image to be processed, the method further comprises:
when the image to be processed is detected to contain a target image, extracting a target image region corresponding to the target image from the image to be processed; and
the performing convolution processing on the image to be processed based on the convolution sub-network in the first generator to obtain non-target image features of the image to be processed comprises:
performing convolution processing on the target image region based on the convolution sub-network to obtain the non-target image features of the image to be processed.
8. An image processing apparatus, comprising:
a third acquisition unit, configured to acquire a sample image, wherein the sample image comprises a positive sample and a negative sample, the positive sample is a sample containing a target image, and the negative sample is a sample not containing the target image;
a training unit, configured to alternately train a preset generative adversarial network according to the positive sample and the negative sample to obtain a generative adversarial network, wherein the preset generative adversarial network comprises a first generator; the preset generative adversarial network comprises a first preset generative adversarial network and a second preset generative adversarial network, and the training unit is specifically configured to: train the first preset generative adversarial network according to the positive sample to obtain a first generative adversarial network; update parameters of a first network module in the first generative adversarial network into a corresponding second network module of the second preset generative adversarial network, wherein the second network module comprises the first generator; train the second preset generative adversarial network according to the negative sample to obtain a second generative adversarial network; update parameters of a second network module in the second generative adversarial network into a corresponding first network module of the first preset generative adversarial network, wherein the first network module comprises the first generator; and determine the generative adversarial network from the first generative adversarial network or the second generative adversarial network;
a first acquisition unit, configured to acquire an image to be processed;
a detection unit, configured to perform target image detection on the image to be processed;
a second acquisition unit, configured to acquire a first generator in the generative adversarial network when the image to be processed is detected to contain a target image, wherein the generative adversarial network is trained with the sample images, the target image is the image to be detected, and the first generator comprises a convolution sub-network and a deconvolution sub-network connected to the convolution sub-network;
a first processing unit, configured to perform convolution processing on the image to be processed based on the convolution sub-network to obtain non-target image features of the image to be processed, wherein the non-target image features are image features corresponding to the portion of the image to be processed other than the target image; and
a second processing unit, configured to perform deconvolution processing on the non-target image features based on the deconvolution sub-network to obtain an image that does not contain the target image.
9. A storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the image processing method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910378956.8A CN110163194B (en) | 2019-05-08 | 2019-05-08 | Image processing method, device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110163194A CN110163194A (en) | 2019-08-23 |
CN110163194B true CN110163194B (en) | 2024-08-27 |
Family
ID=67633677
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910378956.8A Active CN110163194B (en) | 2019-05-08 | 2019-05-08 | Image processing method, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110163194B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113065407B (en) * | 2021-03-09 | 2022-07-12 | 国网河北省电力有限公司 | Financial bill seal erasing method based on attention mechanism and generation countermeasure network |
CN114792285A (en) * | 2022-04-21 | 2022-07-26 | 维沃移动通信有限公司 | Image processing method and processing device, electronic device and readable storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108737875A (en) * | 2017-04-13 | 2018-11-02 | 北京小度互娱科技有限公司 | Image processing method and device |
CN108805789A (en) * | 2018-05-29 | 2018-11-13 | 厦门市美亚柏科信息股份有限公司 | A kind of method, apparatus, equipment and readable medium removing watermark based on confrontation neural network |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109359550B (en) * | 2018-09-20 | 2021-06-22 | 大连民族大学 | Manchu document seal extraction and removal method based on deep learning technology |
CN109376658B (en) * | 2018-10-26 | 2022-03-08 | 信雅达科技股份有限公司 | OCR method based on deep learning |
CN109508689B (en) * | 2018-11-28 | 2023-01-03 | 中山大学 | Face recognition method for strengthening confrontation |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111368685B (en) | Method and device for identifying key points, readable medium and electronic equipment | |
CN110378338B (en) | Text recognition method and device, electronic equipment and storage medium | |
CN112381104A (en) | Image identification method and device, computer equipment and storage medium | |
CN112132279A (en) | Convolutional neural network model compression method, device, equipment and storage medium | |
CN111292262B (en) | Image processing method, device, electronic equipment and storage medium | |
CN112102185B (en) | Image deblurring method and device based on deep learning and electronic equipment | |
CN112883827B (en) | Method and device for identifying specified target in image, electronic equipment and storage medium | |
CN109117940A (en) | To accelerated method, apparatus and system before a kind of convolutional neural networks | |
CN110163194B (en) | Image processing method, device and storage medium | |
CN109743286A (en) | A kind of IP type mark method and apparatus based on figure convolutional neural networks | |
CN112528978A (en) | Face key point detection method and device, electronic equipment and storage medium | |
CN109598250A (en) | Feature extracting method, device, electronic equipment and computer-readable medium | |
CN113496176A (en) | Motion recognition method and device and electronic equipment | |
CN114529490A (en) | Data processing method, device, equipment and readable storage medium | |
CN113160231A (en) | Sample generation method, sample generation device and electronic equipment | |
CN109086737B (en) | Convolutional neural network-based shipping cargo monitoring video identification method and system | |
CN115188000B (en) | Text recognition method, device, storage medium and electronic device based on OCR | |
US20230005171A1 (en) | Visual positioning method, related apparatus and computer program product | |
CN110570375A (en) | image processing method, image processing device, electronic device and storage medium | |
CN114049518A (en) | Image classification method and device, electronic equipment and storage medium | |
CN115471439A (en) | Method and device for identifying defects of display panel, electronic equipment and storage medium | |
CN117274761B (en) | Image generation method, device, electronic equipment and storage medium | |
CN111160265B (en) | File conversion method and device, storage medium and electronic equipment | |
CN115393868B (en) | Text detection method, device, electronic equipment and storage medium | |
CN113449559B (en) | Table identification method and device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||