
CN117522699A - Image restoration processing method, device, equipment and storage medium - Google Patents

Image restoration processing method, device, equipment and storage medium

Info

Publication number
CN117522699A
CN117522699A (application CN202311517186.3A)
Authority
CN
China
Prior art keywords
frame
image
target
adjacent
target frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311517186.3A
Other languages
Chinese (zh)
Inventor
宁本德
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN202311517186.3A
Publication of CN117522699A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application relates to an image restoration processing method, apparatus, device, and storage medium. The method comprises: identifying a target frame to be repaired among input consecutive video frames; setting an auxiliary frame for repairing the target frame according to a frame adjacent to the target frame in the consecutive video frames, wherein the target frame and the auxiliary frame contain target images of the same part of the same object; repairing the target image corresponding to the target frame and the target image corresponding to the auxiliary frame; and fusing the repair image corresponding to the target frame with the repair image corresponding to the auxiliary frame, then replacing the target image in the target frame with the fused image to obtain the repaired target frame. By automatically identifying the target frame to be repaired and exploiting the relationship between the target frame and its adjacent frames, the method repairs the target image with the assistance of adjacent frames, which improves repair efficiency, reduces the influence of interference factors within the target frame on the repair, and improves repair quality.

Description

Image restoration processing method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image restoration processing method, apparatus, device, and storage medium.
Background
Video works are everywhere in daily life; they convey information of all kinds and enrich people's lives. However, some video works suffer from image-quality problems caused by, for example, having been shot long ago, poor shooting equipment, or the aging of transmission media, and these problems seriously degrade the look and viewing experience of the video. To address them, image restoration techniques are often applied to the video to enhance the viewer's experience.
In the traditional repair approach, the video is first analyzed frame by frame, manually, to find the image frames among the consecutive video frames that need repair. Once such a frame is found, it is repaired as a single, independently processed frame: operations such as deblurring and image enhancement are applied to it manually, or the single frame is fed into an image restoration model that repairs it on its own. Finally, the repaired single frame replaces the corresponding frame in the consecutive video frames to obtain the repaired video.
However, the traditional repair approach has weak anti-interference capability and cannot guarantee repair quality, for the following reasons. Interference factors that may appear in images, such as occlusion, lighting changes, noise, and motion blur, damage the information in the images, and each frame is damaged to a different degree. Although the traditional approach is automated to some extent, it still requires manual intervention, which is subjective: the quality of each frame cannot be evaluated objectively, errors occur easily, and frames that need repair may be missed. Moreover, when a single frame is repaired, whether manually or by an image restoration model, only the damaged information within that single frame is referenced, which cannot meet high-quality repair requirements and in turn degrades the overall restoration of the consecutive video frames.
Disclosure of Invention
The application provides an image restoration processing method, device, equipment and storage medium, which are used for solving the problems that the traditional restoration mode is weak in anti-interference capability and the image restoration quality cannot be ensured.
The above technical problems are addressed by the following embodiments:
The embodiment of the application provides an image restoration processing method, which comprises the following steps: identifying a target frame to be repaired in the input video continuous frames; setting an auxiliary frame for repairing the target frame according to an adjacent frame adjacent to the target frame in the video continuous frames; wherein the target frame and the auxiliary frame comprise target images with the same parts and the same objects; repairing the target image corresponding to the target frame and the target image corresponding to the auxiliary frame to obtain a repairing image corresponding to the target frame and a repairing image corresponding to the auxiliary frame; fusing the repair image corresponding to the target frame and the repair image corresponding to the auxiliary frame, and replacing the target image in the target frame by using the fused image to obtain the repaired target frame.
Wherein identifying the target frame to be repaired in the input consecutive video frames comprises performing the following for each image frame in the consecutive video frames: detecting an object image of the preset part in the image frame; when the image frame is detected to include an object image of the part, extracting feature information of the object image and determining whether the feature information meets a preset to-be-repaired condition; and when the feature information of the object image is determined to satisfy the condition, determining the object image as a target image and the image frame as a target frame to be repaired.
Wherein the feature information of the object image includes: the height of the object image and the blur degree of the object image; and determining whether the feature information of the object image meets the preset to-be-repaired condition comprises: determining that the feature information satisfies the condition when the height of the object image is within a preset height interval and the blur degree of the object image is greater than a preset blur threshold.
Wherein setting an auxiliary frame for repairing the target frame according to a frame adjacent to the target frame in the consecutive video frames comprises: detecting an object image of the part in the adjacent frame, the target frame having already been detected to include a target image of the part; when the adjacent frame is detected to include an object image of the part, identifying whether that object image and the target image in the target frame correspond to the same object; when they are identified to correspond to the same object, determining the object image in the adjacent frame as a target image and setting the adjacent frame as an auxiliary frame; and when they are identified to correspond to different objects, or when the adjacent frame is detected not to include an object image of the part, copying the target frame and setting the copy as the auxiliary frame.
Wherein identifying whether the object image in the adjacent frame and the target image in the target frame correspond to the same object comprises: determining the intersection-over-union (IoU) of the object image in the adjacent frame and the target image in the target frame using an IoU algorithm; when the IoU is greater than a preset ratio threshold, determining that the object image in the adjacent frame and the target image in the target frame correspond to the same object; otherwise, determining that they correspond to different objects.
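As a concrete illustration of the IoU check described above, the following Python sketch (not part of the patent; the function names and the 0.5 default threshold are illustrative assumptions) computes the intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2) and compares it against the ratio threshold:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Overlap is zero when the boxes do not intersect.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


def same_object(box_in_adjacent, box_in_target, ratio_threshold=0.5):
    """Treat the two detections as the same object when IoU exceeds the threshold."""
    return iou(box_in_adjacent, box_in_target) > ratio_threshold
```

For instance, a face box that shifts only slightly between consecutive frames keeps a high IoU and is accepted as the same person, while a box belonging to a different person elsewhere in the frame overlaps little and is rejected.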
The repairing the target image corresponding to the target frame and the target image corresponding to the auxiliary frame includes: according to the target image corresponding to the target frame, executing image correction operation on the target image corresponding to the auxiliary frame; after the image correction operation is finished, performing image alignment processing on the target image corresponding to the target frame and the target image corresponding to the auxiliary frame; and after the image alignment processing is finished, respectively repairing the target image corresponding to the target frame and the target image corresponding to the auxiliary frame by using a pre-trained image repairing network.
Wherein the adjacent frames include: in the video continuous frame, a previous frame image and/or a subsequent frame image adjacent to the target frame.
The embodiment of the application also provides an image restoration processing device, which comprises: the identification module is used for identifying a target frame to be repaired in the input video continuous frames; the setting module is used for setting an auxiliary frame for repairing the target frame according to an adjacent frame adjacent to the target frame in the video continuous frames; wherein the target frame and the auxiliary frame comprise target images with the same parts and the same objects; the restoration module is used for restoring the target image corresponding to the target frame and the target image corresponding to the auxiliary frame to obtain a restoration image corresponding to the target frame and a restoration image corresponding to the auxiliary frame; and the fusion module is used for fusing the repair image corresponding to the target frame and the repair image corresponding to the auxiliary frame, and replacing the target image in the target frame by using the fused image to obtain the repaired target frame.
The embodiment of the application also provides an image restoration processing device, which comprises: at least one communication interface; at least one bus connected to the at least one communication interface; at least one processor coupled to the at least one bus; at least one memory coupled to the at least one bus, wherein the processor is configured to: executing the image restoration processing program stored in the memory to implement the image restoration processing method described in any one of the above.
Embodiments of the present application also provide a computer-readable storage medium storing computer-executable instructions that are executed to implement the image restoration processing method of any one of the above.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages: the method provided by the embodiment of the application can identify the target frame to be repaired in the input video continuous frames; setting an auxiliary frame for repairing the target frame according to an adjacent frame adjacent to the target frame in the video continuous frames; wherein the target frame and the auxiliary frame comprise target images with the same parts and the same objects; repairing the target image corresponding to the target frame and the target image corresponding to the auxiliary frame to obtain a repairing image corresponding to the target frame and a repairing image corresponding to the auxiliary frame; fusing the repair image corresponding to the target frame and the repair image corresponding to the auxiliary frame, and replacing the target image in the target frame by using the fused image to obtain the repaired target frame. In the embodiment of the application, in the video continuous frames, the target frames to be repaired are automatically identified, the relation between the target frames and the adjacent frames is considered, and the target images in the target frames are repaired in an adjacent frame auxiliary mode, so that the repairing efficiency of image repairing is improved, the influence of interference factors in the target frames on image repairing is reduced, and the repairing quality of image repairing is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
One or more embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements, and in which the figures of the drawings are not to be taken in a limiting sense, unless otherwise indicated.
FIG. 1 is a flow chart of an image restoration processing method according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating steps for target frame identification according to one embodiment of the present application;
FIG. 3 is a flowchart illustrating steps for setting up an auxiliary frame according to an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating the calculation of the cross-over ratio according to an embodiment of the present application;
FIG. 5 is a flowchart of an image restoration step according to an embodiment of the present application;
FIG. 6 is a block diagram of an image restoration processing device according to an embodiment of the present application;
fig. 7 is a block diagram of an image restoration processing apparatus according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present application based on the embodiments herein.
The following disclosure provides many different embodiments, or examples, for implementing different structures of the application. In order to simplify the disclosure of the present application, the components and arrangements of specific examples are described below. Of course, they are merely examples and are not intended to limit the present application. Furthermore, the present application may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed.
The embodiment of the application provides an image restoration processing method. As shown in fig. 1, a flowchart of an image restoration processing method according to an embodiment of the present application is shown.
In step S110, in the input video continuous frames, a target frame to be repaired is identified.
Video consecutive frames refer to: a plurality of image frames that are consecutive in time.
A target frame is an image frame that requires repair. The target frame contains a target image: an object image to be repaired, where an object image is an image of a preset part of a physical object. The preset part may be all or part of the physical object. For example, if an image frame contains a captured image of a person and the face (the part) of that person (the physical object) is identified as needing repair, the image frame is a target frame and the face image is the target image.
Specifically, a video file to be restored is obtained; a preset frame-splitting tool performs frame splitting on the video file to obtain a plurality of temporally consecutive image frames, which form the consecutive video frames; each image frame is input; and among the input frames, every image frame (or a subset of the image frames) is examined to determine, as target frames, the frames that require image restoration. The frame-splitting tool is, for example, the FFmpeg software tool.
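The frame-splitting step can be sketched as follows. This hypothetical helper merely builds an FFmpeg command line that dumps every frame of a video as sequentially numbered PNGs; the function name and the output naming pattern are illustrative assumptions, not part of the patent:

```python
from pathlib import Path


def ffmpeg_split_command(video_path, out_dir):
    """Build (but do not run) an FFmpeg command that writes every frame
    of `video_path` into `out_dir` as frame_000001.png, frame_000002.png, ...
    FFmpeg's image2 muxer infers frame output from the %06d pattern."""
    out_pattern = str(Path(out_dir) / "frame_%06d.png")
    return ["ffmpeg", "-i", str(video_path), out_pattern]
```

The returned list can then be handed to `subprocess.run` to perform the actual split.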
Further, the present application may sequentially identify each image frame according to the input order of the image frames, and identify a target frame, that is, perform image restoration on the target frame, and then identify the next image frame; all target frames can be identified in all input image frames, and each identified target frame is subjected to image restoration one by one.
The identification manner of the target frame will be described in detail later, and thus will not be described in detail here.
Step S120, setting an auxiliary frame for repairing the target frame according to an adjacent frame adjacent to the target frame in the video continuous frames; wherein the target frame and the auxiliary frame include target images having the same location and the same object.
Adjacent frames, comprising: in the video successive frames, a previous frame image and/or a subsequent frame image adjacent to the target frame.
An auxiliary frame is an image frame that assists in repairing the target frame.
In the embodiment of the present application, if the target frame and the adjacent frame include target images of the same part of the same object, the adjacent frame is set as the auxiliary frame; otherwise, the target frame is copied and the copy is set as the auxiliary frame. Here, the object is the physical object captured in the image, and the part is a part of that physical object. For example, if the object is a person and the part is the person's face, the adjacent frame is set as the auxiliary frame when the target frame and the adjacent frame are detected to include facial images of the same person.
Further, the adjacent frame is a previous frame image and/or a next frame image of the target frame, and correspondingly, the target frame is respectively compared with the previous frame image and/or the next frame image; if the target frame and the previous frame image comprise the target images with the same parts and the same objects, setting the previous frame image as a previous auxiliary frame, otherwise, setting the copy frame of the target frame as the previous auxiliary frame; and/or setting the subsequent frame image as the subsequent auxiliary frame if the target frame and the subsequent frame image comprise the target image with the same position and the same object, otherwise setting the copy frame of the target frame as the subsequent auxiliary frame.
And step S130, repairing the target image corresponding to the target frame and the target image corresponding to the auxiliary frame to obtain a repairing image corresponding to the target frame and a repairing image corresponding to the auxiliary frame.
In the embodiment of the present application, a pre-trained image restoration network may be used to restore the target image corresponding to the target frame and the target image corresponding to the auxiliary frame, so as to obtain a restoration image corresponding to the target frame and a restoration image corresponding to the auxiliary frame.
The image restoration network can be a pre-trained neural network, and the neural network is used for performing restoration processing such as deblurring and enhancing on a target image in a target frame to obtain a restored image of the target image so as to achieve the purpose of improving the image quality.
And step S140, fusing the repair image corresponding to the target frame and the repair image corresponding to the auxiliary frame, and replacing the target image in the target frame by using the fused image to obtain the repaired target frame.
The repair image corresponding to the target frame and the repair image corresponding to the auxiliary frame are fused across the three RGB (red, green, blue) channels to obtain a fused image.
In this embodiment of the present application, the adjacent frames are the previous frame and/or the next frame of the target frame, and then the number of auxiliary frames is at least one, and each auxiliary frame corresponds to one repair image. Further, three-channel fusion refers to: and calculating the average value of RGB three channels of the pixel points at the same position aiming at the repair image corresponding to the target frame and the repair images corresponding to all the auxiliary frames respectively, and taking the average value as a new RGB value of the pixel position in the repair image corresponding to the target frame.
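The three-channel averaging described above can be sketched in pure Python, with each image represented as a list of rows of (R, G, B) tuples; the integer division and the data layout are illustrative choices, not the patent's prescribed representation:

```python
def fuse_rgb(repaired_images):
    """Per-pixel, per-channel mean of equally sized RGB images.
    The first entry is the repaired target image; the remaining
    entries are the repair images of the auxiliary frames."""
    n = len(repaired_images)
    height = len(repaired_images[0])
    width = len(repaired_images[0][0])
    fused = []
    for y in range(height):
        row = []
        for x in range(width):
            # Average each of the R, G, B channels across all repair images.
            row.append(tuple(
                sum(img[y][x][c] for img in repaired_images) // n
                for c in range(3)
            ))
        fused.append(row)
    return fused
```

With the previous and next frames both available, `n` is 3 (target plus two auxiliary repair images), matching the text's "at least one auxiliary frame, each corresponding to one repair image".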
In the embodiment of the application, the image is attached to the position of the target image in the target frame, and the repaired target frame is obtained. And replacing the original target frame in the video continuous frames with the repaired target frame. And after repairing all target frames in the video continuous frames and replacing the target frames, obtaining the repaired video continuous frames.
In the method, a target frame to be repaired is identified in input video continuous frames; setting an auxiliary frame for repairing the target frame according to an adjacent frame adjacent to the target frame in the video continuous frames; wherein the target frame and the auxiliary frame comprise target images with the same parts and the same objects; repairing the target image corresponding to the target frame and the target image corresponding to the auxiliary frame to obtain a repairing image corresponding to the target frame and a repairing image corresponding to the auxiliary frame; fusing the repair image corresponding to the target frame and the repair image corresponding to the auxiliary frame, and replacing the target image in the target frame by using the fused image to obtain the repaired target frame. In the embodiment of the application, the target frame to be repaired can be automatically identified in the video continuous frames, when the images are repaired, the relationship between the target frame and the adjacent frames is considered, and the target images in the target frame are repaired in an auxiliary mode by using the adjacent frames, so that the repairing efficiency of the image repairing is improved, the influence of interference factors in the target frame on the image repairing is reduced, and the repairing quality of the image repairing is improved.
In order to make the technical solution of the present application clearer, the technical solution of the present application is further described below.
In order to avoid the problems of low efficiency and poor effect caused by manually searching for an image frame to be repaired in video continuous frames, the method and the device automatically identify the target frame in the video continuous frames. The identification of the target frame to be repaired is further described below with respect to the present application.
In the present embodiment, target frame identification is performed separately for each image frame in the video continuous frames. As shown in fig. 2, a flowchart of the steps for target frame identification according to an embodiment of the present application is shown.
In step S210, in the video consecutive frames, one image frame is sequentially acquired.
In step S220, in the image frame, an object image of a preset portion is detected.
In this embodiment of the present application, the preset part may be a face. It may of course also be set, as required, to another part needing repair, such as a hand or another part of the human body. In image restoration, face restoration is a particularly important step: the face is one of the most important elements in a video and plays a vital role in its storyline and characterization. When image quality degrades, however, faces often suffer from blurring and distortion, seriously affecting the viewing experience. For example, in the application scenario of old-film restoration, face restoration can improve face quality in the old film and eliminate problems such as blurring and color distortion.
In this embodiment of the present application, the location of the physical object needs to be preset. Whether an image of a preset part exists or not can be detected in an image frame through an image detection technology, and the image is taken as an object image. For example: whether a face image exists or not is detected in an image frame by a face detection technology.
In this embodiment of the present application, the entity object may or may not be preset. In the case where the physical object is preset (specified), an object image of a preset portion of the preset physical object may be detected in the image frame. Further, whether the image frame contains the image of the preset physical object or not can be identified through an image identification technology, whether the image of the preset physical object contains the image of the preset part or not is detected through an image detection technology, and the image of the preset part of the physical object is intercepted to be used as an object image. This allows repair of a designated part (face) of a designated physical object (designated person) in an image. For example: only the face of the first female principal angle in the video is repaired. And under the condition that the entity object is not set, repairing the appointed part of any entity object in the video. For example: the faces of all people in the video are repaired.
Step S230, judging whether a preset type of object image is detected in the image frame; if yes, go to step S240; if not, go to step S210.
And step S240, when the object image of the part is detected to be included in the image frame, extracting the characteristic information of the object image in the image frame.
Feature information of the object image includes, but is not limited to: the height of the object image and the blur degree of the object image. The height may be the difference between the ordinates of the highest and lowest points of the object image. The blur degree may be a pixel gradient value of the object image.
Specifically, in the embodiment of the present application, feature information may be extracted from an object image and recorded. The extracted feature information may be determined according to the requirements, for example: the feature information may include, in addition to the height and the ambiguity of the object image: the position of the object image in the image frame, sharpness, brightness, etc. In the case of a face image, the feature information may also include a rotation angle of the face. The characteristic information is recorded and stored in a file, such as a JSON file, and the characteristic information can be obtained quickly through the file.
Further, the brightness and the blur degree of the object image may be extracted in the pixel information of the object image. The height and rotation angle of the object image may be detected in the image frame first, and the height of the object image and the rotation angle of the solid object in the object image may be determined according to the key point information of the object image. The rotation angle of the physical object may be determined by comparing the keypoints of the object image with the keypoints of the standard model (such as the same keypoints of the standard forward face model).
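The two basic features can be sketched as follows. The keypoint-based height matches the definition above; the blur score shown here is one plausible reading of "pixel gradient value" (a mean absolute gradient over a grayscale patch) and is an assumption, not the patent's exact operator:

```python
def image_height(keypoints):
    """Height of the object image: difference between the highest and
    lowest keypoint ordinates, with keypoints given as (x, y) pairs."""
    ys = [y for _, y in keypoints]
    return max(ys) - min(ys)


def blur_degree(gray):
    """A simple blur score: mean absolute pixel gradient over a 2-D
    grayscale patch (a list of rows of intensities). The horizontal
    and vertical forward differences are averaged; the exact gradient
    operator is an illustrative choice."""
    total, count = 0, 0
    for y in range(len(gray) - 1):
        for x in range(len(gray[0]) - 1):
            total += abs(gray[y][x + 1] - gray[y][x]) + abs(gray[y + 1][x] - gray[y][x])
            count += 2
    return total / count if count else 0.0
```

In practice the scores for each detected object image would be written into the JSON record mentioned above, alongside position, brightness, and (for faces) rotation angle.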
Step S250, determining whether the characteristic information of the object image meets a preset condition to be repaired; if yes, go to step S260; if not, go to step S210.
And the condition to be repaired is used for measuring whether the object image needs to be repaired or not.
Conditions to be repaired include, but are not limited to: a height interval and a blur threshold used to indicate that the object image needs to be repaired. The endpoint values of the height interval and the blur threshold are empirical values or values obtained through experiments.
In an embodiment of the present application, determining whether the feature information of the object image meets the preset condition to be repaired includes: when the height of the object image is within the preset height interval and the blur degree of the object image is larger than the preset blur threshold, determining that the feature information of the object image meets the condition to be repaired; otherwise, determining that the feature information of the object image does not meet the condition to be repaired.
Further, the feature information of the object image may further include: the rotation angle and brightness information of the object image in the image frame. The condition to be repaired may further include: a rotation-angle threshold and a brightness threshold used to indicate that the object image needs to be repaired. The feature information can be read from the JSON file to judge whether it meets the condition to be repaired. For example: when the height of the object image is within the preset height interval, the blur degree of the object image is larger than the preset blur threshold, the rotation angle of the physical object (e.g., a human face) in the object image is smaller than the preset rotation-angle threshold, and the brightness of the object image is smaller than the preset brightness threshold, the feature information of the object image is determined to meet the condition to be repaired; otherwise, it is determined not to meet the condition to be repaired.
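The combined judgment just described can be sketched as a single predicate. All threshold values below are illustrative placeholders; the patent states they are empirical or experimentally obtained.

```python
def needs_repair(features, height_range=(40, 200), blur_threshold=5.0,
                 angle_threshold=30.0, brightness_threshold=80.0):
    """Return True when the feature information satisfies the condition
    to be repaired: height inside the interval, blur degree above its
    threshold, rotation angle and brightness below theirs.  Threshold
    defaults are placeholders, not values from the patent."""
    low, high = height_range
    return (low <= features["height"] <= high
            and features["blur_degree"] > blur_threshold
            and features.get("rotation_angle", 0.0) < angle_threshold
            and features.get("brightness", 0.0) < brightness_threshold)
```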
Step S260, when it is determined that the feature information of the object image satisfies the condition to be repaired, determining the object image as a target image and determining the image frame as a target frame to be repaired.
According to the method and the device of the embodiments, the target frame that needs to be repaired can be automatically identified from the feature information of the object images in the video's continuous frames. The identification accuracy is high, images that need repair are not missed, and the repair process is more coherent, which improves the overall effect of image restoration; since no manual intervention is required, resource consumption is also reduced.
After the target frame is identified, an auxiliary frame may be set for the target frame based on the physical objects and the locations of the physical objects contained in the target frame and the adjacent frames. The step of how to set the auxiliary frame for the target frame is further described below. As shown in fig. 3, a flowchart of the steps for setting up an auxiliary frame according to an embodiment of the present application is shown.
In step S310, adjacent frames of the target frame are acquired.
If both the previous frame image and the next frame image of the target frame are used to restore the target image in the target frame, two auxiliary frames need to be set for the target frame. To make the embodiments of the present application easier to understand, this embodiment is described for the case of two auxiliary frames.
Step S320, detecting an object image of the part in the adjacent frame; wherein it has been detected in advance that the target frame includes a target image of the part.
Referring to the steps corresponding to fig. 2, whether the object image of the preset part is included is detected in the previous frame image and the next frame image, respectively.
Step S330, judging whether the object image of the part is detected in the adjacent frames; if yes, go to step S340; if not, step S360 is performed.
Step S340, when detecting that the object image of the portion is included in the adjacent frame, identifying whether the object image in the adjacent frame and the target image in the target frame correspond to the same object; if yes, go to step S350; if not, step S360 is performed.
In the embodiment of the application, an IOU (Intersection over Union) algorithm may be used to determine the intersection-over-union ratio of the object image in the adjacent frame and the target image in the target frame; when the ratio is larger than a preset proportion threshold, it is determined that the object image in the adjacent frame and the target image in the target frame correspond to the same object; otherwise, it is determined that they correspond to different objects. The proportion threshold may be an empirical value or a value obtained through experiments.
For example, in face restoration: because the images of two adjacent frames change little during video playback, if the physical object in the two frames is the same, the position and size of its image in the two frames are necessarily similar. Therefore, the embodiment of the present application can use the IOU algorithm to judge the relation between the face in the current frame and the faces in the preceding and following frames, which improves the efficiency of object identification and ensures the continuity of the repair process. Further, the IOU algorithm may be used to determine whether the target image in the target frame and the object image in the adjacent frame correspond to the same face. Fig. 4 is a schematic diagram of the intersection-over-union ratio according to an embodiment of the present application. Let the target image in the target frame be A and the object image in the adjacent frame be B, and compute the intersection-over-union ratio between A and B: the larger the ratio, the larger the intersection area C of A and B, and the more similar A and B are; the smaller the ratio, the smaller the intersection area C, and the less similar they are. When the ratio is larger than the proportion threshold, A and B can be considered to correspond to the same face.
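The intersection-over-union computation on axis-aligned bounding boxes can be sketched as follows; the 0.5 proportion threshold is a placeholder for the empirical value mentioned above.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
             + (box_b[2] - box_b[0]) * (box_b[3] - box_b[1]) - inter)
    return inter / union if union > 0 else 0.0

def same_object(box_a, box_b, ratio_threshold=0.5):
    """Decide whether the object image in the adjacent frame and the
    target image in the target frame correspond to the same object."""
    return iou(box_a, box_b) > ratio_threshold
```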
And step S350, when the object image in the adjacent frame is identified to correspond to the same object with the object image in the target frame, determining the object image in the adjacent frame as the target image and setting the adjacent frame as an auxiliary frame.
And step S360, when the object image in the adjacent frame is identified to correspond to a different object from the object image in the target frame, or when the object image of the part is detected not to be included in the adjacent frame, the target frame is copied and the copied target frame is set as an auxiliary frame.
In the embodiment of the application, if the target frame is the first frame of the video continuous frames, copying the target frame, and taking the copied frame of the target frame as the previous frame image of the target frame; and if the target frame is the last frame of the video continuous frames, copying the target frame, and taking the copied frame of the target frame as the next frame image of the target frame.
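The boundary handling above — copying the target frame when it is the first or last frame of the sequence — can be sketched with a hypothetical helper (the name and frame-list representation are assumptions for illustration):

```python
def get_adjacent_frames(frames, target_index):
    """Return the (previous, next) images used as auxiliary-frame
    candidates.  At the start or end of the video sequence the target
    frame itself is copied, as described in the text."""
    prev_frame = (frames[target_index - 1] if target_index > 0
                  else frames[target_index].copy())
    next_frame = (frames[target_index + 1] if target_index < len(frames) - 1
                  else frames[target_index].copy())
    return prev_frame, next_frame
```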
In the embodiment of the application, the relation between the current frame and the front and back frame faces is judged by adopting the IOU algorithm, so that the repairing process is more coherent, the fluctuation in the repairing process can be reduced, and the visual effect of the repaired video is improved.
In this embodiment of the present application, in order to simplify the processing procedure, when it is identified that the object image in the adjacent frame corresponds to a different object from the target image in the target frame, or when it is detected that the adjacent frame does not include an object image of the part, single-frame image repair may also be performed directly on the target frame. That is, repair processing such as deblurring and enhancement is performed only on the target image in the target frame, and after the repair processing, the repaired target frame replaces the original target frame in the video continuous frames.
After the auxiliary frames are set, repairing the target images corresponding to the target frames and repairing the target images corresponding to each auxiliary frame respectively to obtain the repairing images corresponding to the target frames and the repairing images corresponding to each auxiliary frame. In the embodiment of the application, in order to improve the accuracy of image restoration, the target image needs to be corrected and aligned.
The repair process of the target frame and the auxiliary frame is described below. FIG. 5 is a flowchart of an image restoration process according to an embodiment of the present application.
Step S510, performing an image correction operation on the target image corresponding to the auxiliary frame according to the target image corresponding to the target frame.
In the embodiment of the application, based on the target image corresponding to the target frame, an image correction operation is performed on the target image corresponding to each auxiliary frame. For the accuracy of image restoration, the size of the target image corresponding to the target frame should be consistent with that of the target image corresponding to the auxiliary frame, for example, 512 × 512 pixels.
The image correction operation may be a face-deformation warp operation. Further, although the rotation angle of the faces in two adjacent video frames is generally small, the change of angle still affects image restoration. Therefore, in the embodiment of the present application, the target image corresponding to the auxiliary frame may be geometrically deformed (warped) according to the face key points in the target image corresponding to the target frame and those in the target image corresponding to the auxiliary frame, so as to correct the target image corresponding to the auxiliary frame to be similar to the target image corresponding to the target frame.
For example: the i-1 frame is the target frame, the i-1 frame is the previous frame of the target frame, the i+1 frame is the next frame of the target frame, and face deformation warp operation is respectively carried out on the face image in the i-1 frame and the face image in the i+1 frame according to the face image in the i frame, so that the face image in the i-1 frame and the face image in the i+1 frame are both more similar to the face image in the i frame.
Further, in the stage of detecting the object image of the preset type in the target frame and the adjacent frame and extracting the characteristic information in the object image, the face key point detection can be carried out on the target frame and the adjacent frame, the coordinates of the face key point are recorded, and the height of the object image can be determined according to the ordinate of the highest point and the ordinate of the lowest point in the face key point.
Step S520, after the image correction operation is performed, performing image alignment processing on the target image corresponding to the target frame and the target image corresponding to the auxiliary frame.
In the embodiment of the present application, after the image correction operation, the target images corresponding to the auxiliary frames may be respectively subjected to image alignment processing with the target images corresponding to the target frames. The image alignment process may be an optical flow alignment process.
Specifically, the optical-flow alignment process brings two frame images closer to each other. First, it is assumed that color and brightness in the images remain unchanged over time. Then, according to the movement of each key point between the images, a key point in one image has the same or a similar position in the other image. A local approximation method computes the movement of key points by comparing the similarity of small regions in the images, which yields a map describing how the points move, i.e., an optical flow field. Moving the key points in the source image along the directions indicated by the optical flow field makes the source image more similar to the target image; this whole process is optical-flow alignment.
For example: the ith frame is a target frame, the (i-1) th frame is a previous frame of the target frame, and the (i+1) th frame is a next frame of the target frame; taking the i-1 th frame and the i-1 th frame as a group of adjacent frame pairs, and taking the i-1 th frame and the i+1 th frame as a group of adjacent frame pairs; two adjacent frames are respectively processed as follows: (1) feature extraction: the feature information is extracted from two frames of images contained in the current adjacent frame pair respectively, and further, the feature information can be feature points extracted from the two frames of images. These feature points are usually explicit and repetitive, such as corner points and edges. (2) optical flow estimation: after feature information extraction, the optical flow field is estimated by calculating the feature point motion between two frames of images. The displacement of the feature points in the horizontal and vertical directions is of primary concern here. (3) optical flow field alignment: the calculation and optimization of the optical flow field are completed, and the alignment between two frames of images is realized by using the data, wherein the main aim is to align the target image in the i-1 th frame to the target image in the i-th frame, align the target image in the i+1 th frame to the target image in the i-th frame, further minimize the visual difference between adjacent frame pairs, which is the target image in the adjacent frame pairs, namely: the visual difference between the target image in the i-1 th frame and the target image in the i-th frame is reduced, and the visual difference between the target image in the i-th frame and the target image in the i+1 th frame is reduced. The alignment method may include: interpolation, translation, rotation, etc. For example: the rotation and translation operations are performed on the i-1 th frame, while the i-th frame remains stationary.
Step S530, after the image alignment process is performed, repairing the target image corresponding to the target frame and the target image corresponding to the auxiliary frame respectively using a pre-trained image repairing network.
After repairing the target image corresponding to the target frame and the target image corresponding to the auxiliary frame, the repaired images are merged into one image; this image is then inverse-transformed and pasted back to the position of the target image in the target frame, and the target frame after this mapping replaces the original target frame in the video continuous frames.
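The merge-and-paste-back step can be sketched as follows. Plain averaging is used here as one possible fusion rule (the patent does not fix the rule; weighted or learned fusion could be substituted), and the inverse transform back to frame coordinates is assumed to have already been applied to each repaired patch.

```python
import numpy as np

def fuse_and_paste_back(frame, repaired_patches, box):
    """Average the repaired target-image patches (all already aligned
    to the target frame and of equal size) and paste the result back
    at the target-image position `box` = (x1, y1, x2, y2)."""
    fused = np.mean(np.stack(repaired_patches, axis=0), axis=0)
    out = frame.copy()          # leave the input frame untouched
    x1, y1, x2, y2 = box
    out[y1:y2, x1:x2] = fused
    return out
```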
The video continuous frames after restoration can be obtained through the image restoration processing of the embodiment of the application, and because the embodiment of the application refers to the front frame and the rear frame of the target frame in the restoration process, the problem of poor restoration quality can not occur because of interference factors in the target frame, and the restoration mode of the embodiment of the application is adopted, so that the condition of restoration quality fluctuation can not occur when the video continuous frames are played.
According to the method and the device for repairing the target frames, the target frames needing to be repaired are automatically identified, the relation between the target frames and the front and rear frames is judged by adopting the IOU algorithm, and the front and rear frames are used as assistance in repairing, so that the repairing process is more coherent, the manual operation and time cost are reduced, the repairing time is shortened, the repairing quality is improved, the overall repairing effect is good, and the stability is strong.
The embodiment of the application has strong practical value in the field of video image restoration, for example in restoring old films. Old movies and television shows may be affected by factors such as shooting conditions and degradation during storage and transmission, resulting in blurred image quality and loss of detail. After the faces in such a video are repaired using the embodiment of the application, the audience can see the performers' expressions more clearly. Old film and television works can better convey the actors' performances and fully display the artistic value of the works, which to some extent stimulates the audience's interest in them and promotes the inheritance of cultural heritage. When old works are remade, the repaired faces can provide directors, designers, and producers with clearer and more realistic character images, helping them better understand the original works and providing a better reference and basis for the remake.
The embodiment of the application also provides an image restoration processing device. As shown in fig. 6, a block diagram of an image restoration processing apparatus according to an embodiment of the present application is shown.
The image restoration processing device comprises: an identification module 610, a setup module 620, a repair module 630, and a fusion module 640.
The identifying module 610 is configured to identify, among the input video continuous frames, a target frame to be repaired.
A setting module 620, configured to set an auxiliary frame for repairing the target frame according to an adjacent frame adjacent to the target frame in the video continuous frames; wherein the target frame and the auxiliary frame include target images having the same location and the same object.
And the repairing module 630 is configured to repair the target image corresponding to the target frame and the target image corresponding to the auxiliary frame, so as to obtain a repaired image corresponding to the target frame and a repaired image corresponding to the auxiliary frame.
And the fusion module 640 is configured to fuse the repair image corresponding to the target frame with the repair image corresponding to the auxiliary frame, and replace the target image in the target frame with the fused image to obtain the repaired target frame.
The functions of the apparatus in the embodiments of the present application have been described in the foregoing method embodiments, so that the descriptions of the embodiments are not exhaustive, and reference may be made to the related descriptions in the foregoing embodiments, which are not repeated herein.
The embodiment of the application also provides an image restoration processing device, as shown in fig. 7, which is a structural diagram of the image restoration processing device according to an embodiment of the application.
The image restoration processing device includes: processor 710, communication interface 720, memory 730, and communication bus 740. Wherein processor 710, communication interface 720, and memory 730 communicate with each other via a communication bus 740.
Memory 730 for storing a computer program.
In one embodiment of the present application, the processor 710 is configured to implement the image restoration processing method provided in any one of the foregoing method embodiments when executing the program stored in the memory 730, where the method includes: identifying a target frame to be repaired in the input video continuous frames; setting an auxiliary frame for repairing the target frame according to an adjacent frame adjacent to the target frame in the video continuous frames; wherein the target frame and the auxiliary frame comprise target images with the same parts and the same objects; repairing the target image corresponding to the target frame and the target image corresponding to the auxiliary frame to obtain a repairing image corresponding to the target frame and a repairing image corresponding to the auxiliary frame; fusing the repair image corresponding to the target frame and the repair image corresponding to the auxiliary frame, and replacing the target image in the target frame by using the fused image to obtain the repaired target frame.
Wherein, in the input video continuous frames, identifying the target frame to be repaired comprises: the following is performed for each image frame in the video sequence: detecting an object image of the region in the image frame; extracting feature information of the object image in the image frame when the object image of the part is detected to be included in the image frame, and determining whether the feature information of the object image meets a preset condition to be repaired; when it is determined that the feature information of the object image satisfies the condition to be repaired, the object image is determined as a target image and the image frame is determined as a target frame to be repaired.
Wherein the feature information of the object image includes: the height of the object image and the blur degree of the object image; the determining whether the feature information of the object image meets the preset condition to be repaired includes: when the height of the object image is within a preset height interval and the blur degree of the object image is larger than a preset blur threshold, determining that the feature information of the object image meets the condition to be repaired.
Wherein the setting an auxiliary frame for repairing the target frame according to an adjacent frame adjacent to the target frame in the video continuous frames includes: detecting an object image of the part in the adjacent frame, wherein it has been detected in advance that the target frame includes a target image of the part; when it is detected that the adjacent frame includes an object image of the part, identifying whether the object image in the adjacent frame and the target image in the target frame correspond to the same object; when it is identified that they correspond to the same object, determining the object image in the adjacent frame as a target image and setting the adjacent frame as an auxiliary frame; when it is identified that they correspond to different objects, or when it is detected that the adjacent frame does not include an object image of the part, copying the target frame and setting the copied target frame as an auxiliary frame.
Wherein the identifying whether the object image in the adjacent frame and the target image in the target frame correspond to the same object includes: determining the intersection-over-union ratio of the object image in the adjacent frame and the target image in the target frame by adopting an IOU algorithm; when the ratio is larger than a preset proportion threshold, determining that the object image in the adjacent frame and the target image in the target frame correspond to the same object; otherwise, determining that they correspond to different objects.
The repairing the target image corresponding to the target frame and the target image corresponding to the auxiliary frame includes: according to the target image corresponding to the target frame, executing image correction operation on the target image corresponding to the auxiliary frame; after the image correction operation is finished, performing image alignment processing on the target image corresponding to the target frame and the target image corresponding to the auxiliary frame; and after the image alignment processing is finished, respectively repairing the target image corresponding to the target frame and the target image corresponding to the auxiliary frame by using a pre-trained image repairing network.
Wherein the adjacent frames include: in the video continuous frame, a previous frame image and/or a subsequent frame image adjacent to the target frame.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image restoration processing method provided by any one of the method embodiments described above. Since the image restoration processing method has been described in detail above, the description of this embodiment is not exhaustive, and reference may be made to the related description in the foregoing embodiment, which is not repeated here.
The apparatus embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
From the above description of embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a general purpose hardware platform, or may be implemented by hardware. Based on such understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the related art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the method described in the respective embodiments or some parts of the embodiments.
It is to be understood that the terminology used herein is for the purpose of describing particular example embodiments only, and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises," "comprising," "includes," "including," and "having" are inclusive and therefore specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof. The method steps, processes, and operations described herein are not to be construed as necessarily requiring their performance in the particular order described or illustrated, unless an order of performance is explicitly stated. It should also be appreciated that additional or alternative steps may be used.
The foregoing is merely a specific embodiment of the application to enable one skilled in the art to understand or practice the application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An image restoration processing method, comprising:
identifying a target frame to be repaired in the input video continuous frames;
setting an auxiliary frame for repairing the target frame according to an adjacent frame adjacent to the target frame in the video continuous frames; wherein the target frame and the auxiliary frame comprise target images with the same parts and the same objects;
repairing the target image corresponding to the target frame and the target image corresponding to the auxiliary frame to obtain a repairing image corresponding to the target frame and a repairing image corresponding to the auxiliary frame;
fusing the repair image corresponding to the target frame and the repair image corresponding to the auxiliary frame, and replacing the target image in the target frame by using the fused image to obtain the repaired target frame.
2. The method of claim 1, wherein identifying the target frame to be repaired from the input video sequence of frames comprises:
the following is performed for each image frame in the video sequence:
detecting an object image of the region in the image frame;
extracting feature information of the object image in the image frame when the object image of the part is detected to be included in the image frame, and determining whether the feature information of the object image meets a preset condition to be repaired;
When it is determined that the feature information of the object image satisfies the condition to be repaired, the object image is determined as a target image and the image frame is determined as a target frame to be repaired.
3. The method of claim 2, wherein
the feature information of the object image includes: the height of the object image and the blur degree of the object image;
the determining whether the feature information of the object image meets the preset condition to be repaired comprises:
when the height of the object image is within a preset height interval and the blur degree of the object image is larger than a preset blur threshold, determining that the feature information of the object image meets the condition to be repaired.
4. The method according to claim 1, wherein the setting an auxiliary frame for repairing the target frame according to an adjacent frame adjacent to the target frame among the video consecutive frames comprises:
detecting an object image of the part in the adjacent frame; wherein it has been detected in advance that the target frame includes a target image of the part;
when detecting that the adjacent frame comprises the object image of the part, identifying whether the object image in the adjacent frame and the target image in the target frame correspond to the same object;
When the object image in the adjacent frame is identified to correspond to the same object as the object image in the target frame, determining the object image in the adjacent frame as the target image and setting the adjacent frame as an auxiliary frame;
when the object image in the adjacent frame is identified to correspond to a different object from the object image in the target frame, or when the object image of the part is detected not to be included in the adjacent frame, the target frame is copied and the copied target frame is set as an auxiliary frame.
5. The method of claim 4, wherein the identifying whether the object image in the adjacent frame and the target image in the target frame correspond to the same object comprises:
determining the intersection ratio of the object image in the adjacent frame and the target image in the target frame by adopting an IOU intersection ratio algorithm;
when the intersection ratio is larger than a preset proportion threshold value, determining that the object image in the adjacent frame corresponds to the same object with the target image in the target frame; otherwise, determining that the object image in the adjacent frame corresponds to a different object from the target image in the target frame.
6. The method of claim 1, wherein the repairing the target image corresponding to the target frame and the target image corresponding to the auxiliary frame comprises:
performing an image correction operation on the target image corresponding to the auxiliary frame according to the target image corresponding to the target frame;
after the image correction operation is completed, performing image alignment processing on the target image corresponding to the target frame and the target image corresponding to the auxiliary frame;
and after the image alignment processing is completed, repairing the target image corresponding to the target frame and the target image corresponding to the auxiliary frame respectively by using a pre-trained image repair network.
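The three steps of claim 6 form a fixed correction-alignment-repair pipeline, which can be sketched as follows; `correct`, `align`, and `repair_net` are hypothetical callbacks standing in for the correction operation, the alignment processing, and the pre-trained repair network.

```python
def repair_pair(target_image, aux_image, correct, align, repair_net):
    """Repair the target frame's and auxiliary frame's target images.

    correct(image, reference) performs the image correction operation,
    align(a, b) returns the aligned pair, and repair_net(image) applies
    a pre-trained image repair network. All three are placeholders.
    """
    # 1. Correct the auxiliary image according to the target frame's image.
    aux_image = correct(aux_image, target_image)
    # 2. Align the two target images with each other.
    target_image, aux_image = align(target_image, aux_image)
    # 3. Repair each image with the pre-trained repair network.
    return repair_net(target_image), repair_net(aux_image)
```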
7. The method according to any one of claims 1 to 6, wherein,
the adjacent frame includes: a previous frame image and/or a subsequent frame image adjacent to the target frame among the video consecutive frames.
8. An image restoration processing device, characterized by comprising:
the identification module is used for identifying a target frame to be repaired in input video consecutive frames;
the setting module is used for setting an auxiliary frame for repairing the target frame according to an adjacent frame adjacent to the target frame among the video consecutive frames, wherein the target frame and the auxiliary frame include target images of the same part of the same object;
the repair module is used for repairing the target image corresponding to the target frame and the target image corresponding to the auxiliary frame to obtain a repaired image corresponding to the target frame and a repaired image corresponding to the auxiliary frame;
and the fusion module is used for fusing the repaired image corresponding to the target frame with the repaired image corresponding to the auxiliary frame, and replacing the target image in the target frame with the fused image to obtain a repaired target frame.
9. An image restoration processing apparatus, characterized by comprising: at least one communication interface; at least one bus connected to the at least one communication interface; at least one processor coupled to the at least one bus; and at least one memory coupled to the at least one bus, wherein the processor is configured to execute an image restoration processing program stored in the memory, so as to implement the image restoration processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing computer-executable instructions which, when executed, implement the image restoration processing method according to any one of claims 1 to 7.
CN202311517186.3A 2023-11-14 2023-11-14 Image restoration processing method, device, equipment and storage medium Pending CN117522699A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311517186.3A CN117522699A (en) 2023-11-14 2023-11-14 Image restoration processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311517186.3A CN117522699A (en) 2023-11-14 2023-11-14 Image restoration processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117522699A true CN117522699A (en) 2024-02-06

Family

ID=89741434

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311517186.3A Pending CN117522699A (en) 2023-11-14 2023-11-14 Image restoration processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117522699A (en)

Similar Documents

Publication Publication Date Title
Pece et al. Bitmap movement detection: HDR for dynamic scenes
US10764496B2 (en) Fast scan-type panoramic image synthesis method and device
CN104702928B (en) Method of correcting image overlap area, recording medium, and execution apparatus
CN105469375B (en) Method and apparatus for processing high dynamic range panoramas
CN104992408A (en) Panorama image generation method and apparatus for user terminal
CN103561258A (en) Kinect depth video spatio-temporal union restoration method
CN114862707B (en) Multi-scale feature restoration image enhancement method, device and storage medium
CN113055613A (en) Panoramic video stitching method and device based on mine scene
CN110866889A (en) Multi-camera data fusion method in monitoring system
CN112884664A (en) Image processing method, image processing device, electronic equipment and storage medium
US20080226159A1 (en) Method and System For Calculating Depth Information of Object in Image
Tian et al. Stitched image quality assessment based on local measurement errors and global statistical properties
CN108833879A (en) A Method of Virtual Viewpoint Synthesis with Spatiotemporal Continuity
CN114419102B (en) A Multi-target Tracking and Detection Method Based on Frame Difference Temporal Motion Information
US11783454B2 (en) Saliency map generation method and image processing system using the same
CN117522699A (en) Image restoration processing method, device, equipment and storage medium
CN112637573A (en) Multi-lens switching display method and system, intelligent terminal and storage medium
CN118264763A (en) Light and moving object adaptive multi-camera video stitching method, system and device
Ito et al. Deep homography-based video stabilization
CN114170445B (en) An indoor smoke environment image matching method suitable for fire fighting robots
CN115564708A (en) Multi-channel high-quality depth estimation system
Seychell et al. Monoscopic inpainting approach using depth information
JP2013246601A (en) Image process device
WO2016111239A1 (en) Image processing device, image processing method and program recording medium
Zarif et al. Fast and efficient video completion using object prior position

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination