
CN114679553B - Video noise reduction method and device - Google Patents

Video noise reduction method and device

Info

Publication number
CN114679553B
CN114679553B (application CN202011549202.3A)
Authority
CN
China
Prior art keywords
frame
video
pixel
pixel value
weights
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011549202.3A
Other languages
Chinese (zh)
Other versions
CN114679553A (en)
Inventor
陈加忠
胡康康
王晟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202011549202.3A
Publication of CN114679553A
Application granted
Publication of CN114679553B
Legal status: Active


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/21 Circuitry for suppressing or minimising disturbance, e.g. moiré or halo

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Picture Signal Circuits (AREA)
  • Image Processing (AREA)

Abstract

The application provides a multi-frame-fusion-based video denoising method and a related device for denoising a plurality of cached video frames, where the plurality of video frames comprise a target video frame and at least two other video frames. Two-frame fusion weights are acquired between the target video frame and each of the other video frames; from these two-frame fusion weights, multi-frame fusion weights are acquired for fusing the target video frame with all of the other video frames together; and multi-frame fusion noise reduction is performed on the target video frame and all of the other video frames based on the multi-frame fusion weights to generate the denoised video frame. Multi-frame fusion noise reduction removes noise from the video frame while preserving its texture, effectively improving video quality.

Description

Video noise reduction method and device
Technical Field
The invention relates to the technical field of image processing, in particular to a noise reduction method for video.
Background
Image noise (image noise) is a random variation of brightness or color information in an image that is not present in the photographed object itself, and is usually a manifestation of electronic noise. It is typically produced by the sensor and circuitry of a scanner or digital camera, and may also arise from film grain or from the shot noise that is unavoidable even in an ideal photodetector. Image noise is an undesirable byproduct of the image capture process that adds spurious, extraneous information to the image.
Video is composed of a sequence of consecutive video frames, each of which is an image, so video noise reduction includes denoising the images of the frames in the video. Image noise reduction refers to techniques that suppress or eliminate noise in an image while preserving its original texture and detail as much as possible, thereby improving its visual quality. Video noise reduction differs from single-frame image denoising in that video can exploit the correlation between frames in the temporal and/or spatial domain.
Some image denoising methods rely on modeling the noise, and many such models exist. A commonly used noise model represents the noise level corresponding to different pixel intensities under given parameters (e.g., sensitivity, exposure time, etc.).
One current video denoising method, based on a noise model and the temporal correlation between video frames, reduces noise by iterative two-frame fusion. First, two frames are acquired: the current frame, which is the image to be denoised, and the preceding frame, which has already been denoised. The noise model and the temporal correlation between the two frames are used to compute two-frame fusion weights; the two frames are then weighted and fused according to these weights to update the pixel values of the frame being denoised, yielding a denoised video frame, which is in turn used to denoise the next frame, and so on iteratively.
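To make this prior-art scheme concrete, the following Python sketch shows iterative two-frame fusion under stated assumptions: the per-pixel weight follows the min(1, k·diff/nl) form described later in this document, and the names noise_level and k are illustrative, not the exact prior implementation.

```python
import numpy as np

def iterative_two_frame_denoise(frames, noise_level, k=1.0):
    """Illustrative sketch of prior-art iterative two-frame fusion.

    frames: list of float32 frames in chronological order.
    noise_level: assumed callable mapping a frame to per-pixel noise levels.
    k: assumed parameter controlling the degree of fusion.
    """
    denoised = [frames[0]]                       # the first frame has no predecessor
    for cur in frames[1:]:
        prev = denoised[-1]                      # the previously denoised frame is reused
        diff = np.abs(cur - prev)                # temporal pixel value difference
        nl = np.maximum(noise_level(cur), 1e-6)  # guard against a zero noise level
        w_cur = np.minimum(1.0, k * diff / nl)   # weight of the frame to be denoised
        denoised.append(w_cur * cur + (1.0 - w_cur) * prev)  # weighted two-frame fusion
    return denoised
```

Because each output depends on the previous output, artifacts such as smear can propagate from frame to frame, which motivates the non-iterative scheme below.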
Existing methods that reduce video noise by exploiting the temporal correlation of video frames do not remove the noise cleanly enough, for example leaving local residual blotches, so the displayed video is of poor quality.
Disclosure of Invention
In view of this, the application provides a multi-frame-fusion-based video denoising method and a related device, which can remove noise more cleanly and improve the display quality of video while preserving the original texture of the image.
In a first aspect, the present application provides a video denoising method that can be used in any scenario where video denoising is required, such as shooting short videos, video conferencing, and the like. The method comprises: caching a plurality of video frames, where the plurality of video frames comprise a target video frame and at least two other video frames, the target video frame being any one of the plurality of video frames; acquiring a plurality of pairs of two-frame fusion weights, where one pair of two-frame fusion weights comprises two weights used for weighting the values of one pair of pixel points, the pair of pixel points comprising a first pixel point of the target video frame and the corresponding pixel point, at the same position as the first pixel point, in one of the other video frames; acquiring, according to the plurality of pairs of two-frame fusion weights, multi-frame fusion weights used for fusing the target video frame and the at least two other video frames, where one group of multi-frame fusion weights comprises a plurality of weights used for weighting the values of one group of pixel points, the group of pixel points comprising the first pixel point of the target video frame and the at least two corresponding pixel points, at the same position as the first pixel point, in the at least two other video frames; weighting the values of the group of pixel points according to the group of multi-frame fusion weights to obtain a new pixel value; and generating the denoised target video frame based on the new pixel values.
According to the video denoising method provided by the first aspect, the pairs of two-frame fusion weights for fusing the target video frame with each of the other cached video frames are calculated; based on these pairs, the multi-frame fusion weights for fusing the target video frame with all of the other cached video frames together are calculated; and the plurality of cached video frames are then weighted and fused according to the multi-frame fusion weights. This makes the weight distribution more reasonable, avoids the weight of the target video frame being too large or too small, and improves the quality of the denoised target video frame.
In addition, denoising a pixel point X in the target video frame involves only the information of the corresponding pixel points at the same position as pixel point X in the plurality of video frames; it does not involve the pixel blocks around pixel point X and/or around its corresponding pixel points.
With reference to the video denoising method provided by the first aspect, in one possible implementation, obtaining a pair of two-frame fusion weights comprises: obtaining a first pixel value difference, which is the pixel value difference between the first pixel point and the corresponding pixel point; obtaining the noise level of the first pixel point; and obtaining the two-frame fusion weights according to the first pixel value difference, the noise level, and a parameter for controlling the degree of fusion.
In another possible implementation of the video denoising method provided by the first aspect, obtaining a pair of two-frame fusion weights comprises: obtaining a first pixel value difference, which is the pixel value difference between the first pixel point and the corresponding pixel point; obtaining the noise level of the first pixel point; obtaining a plurality of second pixel value differences, where one second pixel value difference is the pixel value difference between two of the corresponding pixel points of the first pixel point; and obtaining the two-frame fusion weights according to the first pixel value difference, the noise level, the plurality of second pixel value differences, and a parameter for controlling the degree of fusion.
At the same noise level, acquiring the two-frame fusion weights based on the maximum pixel value difference, rather than based only on the pixel value difference of the pair of pixel points, biases the weight toward the target video frame. When one of the second pixel value differences exceeds the pixel value difference of the pair of pixel points, using the maximum pixel value difference to increase the weight of pixel point X in the target video frame effectively avoids problems such as smearing and abnormal noise during denoising.
In combination with the video denoising method provided in any implementation of the first aspect, in a possible implementation, the method further comprises deleting the chronologically first frame of the plurality of video frames.
The cached video frames are arranged in chronological order, and the image content of adjacent frames is strongly correlated. When the video frame cache is updated after denoising of the target video frame is completed, the chronologically first of the cached frames is deleted. Since the deleted frame is never an intermediate frame, the deletion does not leave the cached frames discontinuous (no frame skipping), which avoids the poor results that would be caused by excessive pixel value differences between frames when video denoising continues.
In a second aspect, the application provides a video denoising method comprising: in response to a user operation, displaying video frames on a display screen of an electronic device, where the video frames comprise a denoised target video frame obtained by applying the video denoising method provided by any implementation of the first aspect to a target video frame captured by a camera of the electronic device.
In a possible implementation, the video denoising function enabled by the method provided in the first aspect may be integrated directly into the system and used by default when the user shoots a video.
In another possible implementation, when shooting a video the user can choose whether to denoise it with the video denoising method provided by the application, giving the user more autonomy. For example, the user may turn a switch representing this video denoising function on or off in the system settings or in the camera application.
In a third aspect, the present application provides a video denoising device comprising a storage module, an acquisition module, and a processing module, the device being configured to perform the video denoising method of any implementation of the first or second aspect.
In a fourth aspect, the present application also provides a video noise reduction device, including a memory, a processor, and a program stored in the memory and executable on the processor, wherein the video noise reduction method according to any one of the first or second aspects is performed when the processor executes the program.
With reference to the video denoising device provided in the fourth aspect, in a possible implementation, the device further includes a camera configured to capture video; when executing the program stored in the memory, the processor performs the video denoising method of any implementation of the first or second aspect to denoise the video captured by the camera.
In a fifth aspect, the present application also provides a computer readable storage medium having a computer program stored therein, which when executed by a processor, implements the video denoising method of any one of the first or second aspects above.
In a sixth aspect, the application also provides a computer program product comprising computer programs/instructions which when executed by a processor implement the steps of the video noise reduction method according to any of the first or second aspects above.
In a seventh aspect, the present application further provides a chip, which includes a processor and a data interface, where the processor reads instructions and video frames stored on a memory through the data interface to perform the video denoising method according to any one of the first or second aspects.
Drawings
Fig. 1 is a schematic diagram of an application scenario of a video denoising method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of an application scenario of a video denoising method according to an embodiment of the present application;
Fig. 3 is a schematic diagram of an application scenario of a video denoising method according to an embodiment of the present application;
Fig. 4 is a schematic diagram of an application scenario of a video denoising method according to an embodiment of the present application;
Fig. 5 is a schematic diagram of a system architecture according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 8 is a flowchart of a video denoising method according to an embodiment of the present application;
Fig. 9 is a flowchart of a method for obtaining two-frame fusion weights according to an embodiment of the present application;
Fig. 10 is a schematic diagram of a noise model provided by an embodiment of the present application;
Fig. 11 is a flowchart of another method for obtaining two-frame fusion weights according to an embodiment of the present application;
Fig. 12 is a flowchart of a method for obtaining multi-frame fusion weights according to an embodiment of the present application;
Fig. 13 is a schematic diagram of a set of display interfaces provided by an embodiment of the present application;
Fig. 14 is a schematic diagram of another set of display interfaces provided by an embodiment of the present application;
Fig. 15 is a schematic diagram of another display interface provided by an embodiment of the present application;
Fig. 16 is a schematic diagram of another display interface provided by an embodiment of the present application;
Fig. 17 is a schematic block diagram of a video noise reduction device provided by an embodiment of the present application;
Fig. 18 is a schematic block diagram of a video noise reduction device provided by an embodiment of the present application.
Detailed Description
The following describes the technical solutions in the embodiments of the present application with reference to the accompanying drawings. The described embodiments are clearly only some, not all, of the embodiments of the present application. In the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" covers three cases: A alone, both A and B, and B alone. Unless otherwise indicated, "plurality" means two or more. In addition, unless otherwise indicated, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated; thus, features defined with "first" or "second" may explicitly or implicitly include one or more of such features.
Fig. 1 is a schematic diagram of an application scenario of a video denoising method according to an embodiment of the present application.
As shown in fig. 1, the video denoising method according to the embodiment of the present application may be applied to an electronic device. When an object is recorded by the camera of an electronic device, due to limitations of hardware, environment, and the like, the acquired video may exhibit noise problems such as noise points, blotches, and ghosting that affect the visual effect. In this case, noise reduction processing can be performed on the original, not-yet-denoised video captured by the camera of the electronic device to obtain a denoised video.
The electronic device may be mobile or fixed; for example, it may be a mobile phone, a camera, a video camera, a vehicle, a tablet personal computer (TPC), a media player, a smart TV, a laptop computer (LC), a personal digital assistant (PDA), a personal computer (PC), a smart watch, an augmented reality (AR)/virtual reality (VR) device, a wearable device (WD), a game console, etc. The embodiments of the present application do not limit the specific type of the electronic device.
The noise reduction processing can be based on multi-frame fusion and comprises: caching a plurality of video frames, calculating multiple pairs of two-frame fusion weights, calculating multi-frame fusion weights based on those pairs, and performing multi-frame fusion noise reduction using the multi-frame fusion weights. Multi-frame fusion noise reduction comprises weighting the values of a group of same-position pixel points across the cached video frames according to a group of multi-frame fusion weights to obtain a new pixel value, repeating this to obtain a plurality of new pixel values, and generating one denoised video frame based on these new pixel values.
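As a rough orientation, the following Python sketch shows this overall flow under stated assumptions: the buffer size, the choice of the middle frame as target, and the fuse_buffer callable (standing in for the weight calculation and fusion steps detailed later) are all illustrative.

```python
import collections

def denoise_stream(raw_frames, fuse_buffer, buffer_size=5):
    """Illustrative flow of multi-frame fusion denoising.

    raw_frames: iterable of not-yet-denoised video frames.
    fuse_buffer: assumed callable that takes the cached frames and the
        target index, computes two-frame and multi-frame fusion weights,
        and returns one denoised frame (detailed in later sections).
    """
    buffer = collections.deque(maxlen=buffer_size)  # cache of video frames
    for frame in raw_frames:
        buffer.append(frame)                        # oldest frame is evicted automatically
        if len(buffer) == buffer_size:              # enough frames cached
            target_idx = buffer_size // 2           # any frame may serve as target
            yield fuse_buffer(list(buffer), target_idx)
```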
In one possible implementation, the video frames after the multi-frame fusion noise reduction may be output after subsequent image processing. The subsequent image processing may include performing image processing such as white balance, color correction, tone mapping, etc. on the video frame after the multi-frame fusion noise reduction.
Optionally, outputting the denoised image (or video) includes displaying on a screen of the electronic device and/or saving to an album of the electronic device.
In a possible implementation, when shooting with the electronic device, the user may choose whether to use the video denoising method provided by the embodiments of the present application. For example, the user may turn an option such as "professional video" on or off in the system settings to indicate the video denoising method provided herein, or operate in the shooting interface of the camera application to decide whether to use it.
It should be noted that the expansion, limitation, explanation and description of the video denoising method in the embodiment of fig. 1 apply equally to the video denoising method in the embodiments of fig. 2 to 9 and fig. 11 to 12, and are not repeated herein.
The application of the video denoising method according to the embodiment of the present application in three specific scenarios is described below with reference to fig. 2 to 4.
In one embodiment, as shown in fig. 2, the video denoising method of the embodiments of the present application may be applied to shooting with an electronic device (e.g., a mobile phone), for example when shooting vlogs, short videos, live events, and the like. The method can perform multi-frame-fusion-based denoising on captured raw video frames of poor quality to obtain video frames of improved visual quality, thereby outputting high-quality video and improving the user experience.
In another embodiment, as shown in fig. 3, the video denoising method of the embodiments of the present application may be applied to autonomous driving, for example in the navigation system of an autonomous vehicle. By performing multi-frame-fusion-based denoising on low-quality raw road images (or raw road video) captured while driving, the vehicle obtains clearer road images (or road video), improving the safety of autonomous driving.
In another embodiment, as shown in fig. 4, the video denoising method of the embodiments of the present application may be applied to video surveillance. Raw images (or raw video) collected by surveillance equipment in public places are often degraded by factors such as weather and distance, leading to blurred, low-quality images. The method can perform multi-frame fusion denoising on the collected raw images (or raw video) to obtain denoised images (or video), from which important information such as license plate numbers and clear faces can be recovered, providing important clues for police investigations.
A system architecture 100 provided by an embodiment of the present application is described below with reference to fig. 5.
The system architecture 100 includes an electronic device 120 and an executing device 110, wherein the electronic device 120 may interact with the executing device 110 through a communication network of any communication mechanism/communication standard, which may be a wide area network, a local area network, a point-to-point connection, etc., or any combination thereof.
In one possible implementation, the execution device 110 is implemented by one or more servers.
Alternatively, the execution device 110 may be used together with other computing devices, such as data storage devices, routers, and load balancers. The execution device 110 may be deployed at one physical site or distributed across multiple physical sites.
It should be noted that, the execution device 110 may also be referred to as a cloud device, and the execution device 110 may be deployed in the cloud.
Specifically, the execution device 110 may perform the following process: cache multiple video frames, calculate multiple pairs of two-frame fusion weights, calculate multi-frame fusion weights based on those pairs, and perform weighted fusion of the multiple video frames according to the multi-frame fusion weights to obtain the denoised video frame.
Fig. 6 shows an exemplary structural schematic of the electronic device 120. According to fig. 6, the electronic device 120 comprises an application processor 1201, a memory 1202, a wireless communication module 1203, a graphics processor (graphics processing unit, GPU) 1204, an input/output (I/O) device 1205, and the like. Those skilled in the art will appreciate that the hardware architecture shown in fig. 6 is not limiting of the electronic device 120, and that the electronic device 120 may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The various components of the electronic device 120 are described in detail below in conjunction with fig. 6:
The application processor 1201 is a control center of the electronic device 120, and connects the various components of the electronic device 120 using various interfaces and buses. In some embodiments, the application processor 1201 may include one or more processing modules.
Stored in memory 1202 are computer programs, such as the operating system 1222 and application programs 1221 shown in fig. 6. The application processor 1201 executes the computer programs in memory 1202 to perform the functions they define; for example, it executes the operating system 1222 to provide the various functions of the operating system on the electronic device 120. Memory 1202 also stores data other than computer programs, such as data generated while the operating system 1222 and application programs 1221 run. Memory 1202 generally includes internal memory and external storage. Internal memory includes, but is not limited to, random access memory (RAM), read-only memory (ROM), and cache. External storage includes, but is not limited to, flash memory, hard disks, optical discs, and universal serial bus (USB) drives. Computer programs are typically stored on external storage; the processor loads a program from external storage into internal memory before executing it.
The memory 1202 may be separate and connected to the application processor 1201 by a bus, or the memory 1202 may be integrated into a chip subsystem with the application processor 1201.
The wireless communication module 1203 provides solutions for wireless communication applied to the electronic device 120, including wireless local area network (WLAN) (e.g., wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 1203 may be one or more devices integrating at least one communication processing module. It receives electromagnetic waves via an antenna, demodulates and filters the electromagnetic wave signals, and sends the processed signals to the application processor 1201. It may also receive a signal to be transmitted from the application processor 1201, frequency-modulate and amplify it, and convert it into electromagnetic waves via the antenna.
The GPU 1204 is used for drawing and rendering computations on image data to generate the images to be displayed. Also known as a display core or visual processor, a GPU is a microprocessor dedicated to image-processing tasks and may include 2D (two-dimensional) and/or 3D processing functions. The electronic device 120 may include one or more GPUs that execute program instructions to generate or change display information.
The input/output devices 1205 include, but are not limited to, a display 1251, a touch screen 1253, and audio circuits 1255.
The touch screen 1253 may collect touch events performed by the user on or near it (such as operations by the user on or near the touch screen 1253 using a finger, stylus, or any other suitable object) and send the collected touch events to other devices (e.g., the application processor 1201). An operation near the touch screen 1253 may be called hover touch; with hover touch, the user can select, move, or drag an object (e.g., an icon) without directly touching the touch screen 1253. The touch screen 1253 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave.
A display (also referred to as a display screen) 1251 is used to display information entered by or presented to the user. The display may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. The touch screen 1253 may be overlaid on the display 1251; upon detecting a touch event, the touch screen 1253 passes it to the application processor 1201 to determine its type, and the application processor 1201 may then provide a corresponding visual output on the display 1251 based on the type of touch event. Although in fig. 6 the touch screen 1253 and the display 1251 are shown as two separate components implementing the input and output functions of the electronic device 120, in some embodiments the touch screen 1253 may be integrated with the display 1251 to implement both. In addition, the touch screen 1253 and the display 1251 may be configured on the front of the electronic device 120 in a full-panel form to realize a bezel-less structure.
The audio circuitry 1255, speaker 1256, and microphone 1257 provide an audio interface between the user and the electronic device 120. On one hand, the audio circuit 1255 may convert received audio data into an electrical signal and transmit it to the speaker 1256, which converts it into a sound signal for output; on the other hand, the microphone 1257 converts collected sound signals into electrical signals, which the audio circuit 1255 receives and converts into audio data, to be transmitted, for example, to another electronic device via a modem processor and radio frequency module, or output to the memory 1202 for further processing.
Optionally, the electronic device 120 may also include a microcontroller unit (MCU) 1206, a co-processor for acquiring and processing data from the sensor 1261. The MCU 1206 has less processing power and lower power consumption than the application processor 1201, but is "always on" and can continuously collect and process sensor data while the application processor 1201 is in sleep mode, ensuring normal sensor operation at extremely low power. In one embodiment, the MCU 1206 may be a sensor hub chip. The sensor 1261 may include a light sensor and a gyroscope sensor. Specifically, the light sensor may include an ambient light sensor, which can adjust the brightness of the display 1251 according to the ambient light, and a proximity sensor, which can turn off the display's power when the electronic device 120 is moved to the ear. The gyroscope sensor may be used to determine the motion pose of the electronic device 120; in some embodiments, the angular velocity of the electronic device 120 about three axes (i.e., the x, y, and z axes) can be determined by the gyroscope sensor. Angular velocity information obtained by the gyroscope is converted into a rotation matrix that describes the rotation of the device and can then be used to align images. The gyroscope may also be used for shooting anti-shake: for example, when the shutter is pressed, the gyroscope detects the shake angle of the electronic device 120, calculates the compensation distance for the lens module according to that angle, and lets the lens counteract the shake of the electronic device 120 through reverse motion. The gyroscope may also be used for navigation and motion-sensing games. The sensor 1261 may further include other sensors such as an acceleration sensor, barometer, hygrometer, thermometer, and infrared sensor, which are not described herein. The MCU 1206 and the sensor 1261 may be integrated on the same chip or be separate components connected by a bus.
Optionally, the electronic device 120 may also include an image signal processor (ISP) 1207. The ISP 1207 interfaces with the camera 1271 to capture images and perform image processing (e.g., exposure control, white balancing, color calibration, or noise removal) to generate image data; it may include a processor core or be a pure hardware implementation performing the necessary processing.
Optionally, the electronic device 120 may also include a mobile communication module 1208. The mobile communication module 1208 may provide solutions for wireless communication applied on the electronic device 120, including 2G/3G/4G/5G. The mobile communication module 1208 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), or the like. It may receive electromagnetic waves via an antenna, filter and amplify them, and transmit them to the modem processor for demodulation. It may also amplify a signal modulated by the modem processor and convert it into electromagnetic waves radiated via the antenna. In some embodiments, at least some functional modules of the mobile communication module 1208 may be provided in the application processor 1201, or in the same device as at least some modules of the application processor 1201.
Further, the operating system 1222 running on the electronic device 120 may be Android or another operating system, which is not limited in any way by the embodiments of the application.
Taking the electronic device 120 running the Android operating system as an example, as shown in fig. 7, the electronic device 120 can be logically divided into a hardware layer 21, an operating system 22, and an application layer 23. The hardware layer 21 includes hardware resources such as the application processor 1201, the memory 1202, the wireless communication module 1203, the graphics processor 1204, the sensor 1261, and the camera 1271 described above. The application layer 23 includes one or more application programs, such as application program 231, which may be any type of application, such as a social application, an e-commerce application, or a browser. The operating system 22, as software middleware between the hardware layer 21 and the application layer 23, is a computer program that manages and controls hardware and software resources.
In one embodiment, operating system 22 includes kernel 221, hardware abstraction layer (hardware abstraction layer, HAL) 222, library and runtime (libraries and runtime) 223, framework 224, and system applications 225. The kernel 221 is used for providing underlying system components and services, such as power management, memory management, thread management, hardware drivers, etc., including Wi-Fi drivers, sensor drivers, camera drivers, etc. The hardware abstraction layer 222 is a package for kernel drivers that provides an interface to the framework 224, masking low-level implementation details. The hardware abstraction layer 222 runs in user space, while the kernel driver runs in kernel space.
The library and runtime 223, also called the runtime library, provides library files and the execution environment required by executable programs at runtime. In one embodiment, the library and runtime 223 includes the Android Runtime (ART) 2232, libraries 2231, and the like. ART 2232 is a virtual machine or virtual machine instance capable of converting application bytecode into machine code. Libraries 2231 provide support to executable programs at runtime, including browser engines (e.g., Webkit), script execution engines (e.g., a JavaScript engine), graphics processing engines, and the like.
Framework 224 is used to provide various underlying common components and services for applications in application layer 23, such as resource management, notification management, and the like. In one embodiment, the framework 224 includes a resource manager 2241, a content provider 2242, a notification manager 2243, and the like.
System applications 225 are some native applications that are integrated into operating system 22. In one embodiment, system applications 225 include email 2251, calendar 2252, camera 2253, and so forth. For example, in one embodiment, where a third party application developed by a developer needs to capture video, the developer may invoke the camera 2253 to capture video without having to build the functionality itself.
Optionally, the camera 2253 integrates the program code implementing the multi-frame fusion denoising method provided by the embodiments of the application, so the visual quality of video shot through third-party applications can also be improved. It should be noted that the expansion, definition, explanation and description of the video denoising method (i.e., multi-frame fusion denoising) in the embodiments of fig. 1 to 6 apply here as well and are not repeated. The functions of the components of the operating system 22 described above can be realized by the application processor 1201 executing programs stored in the memory 1202.
Those skilled in the art will appreciate that electronic device 120 may include fewer or more components than those shown in fig. 6 or 7, and that the electronic device shown in fig. 6 or 7 includes only components that are more relevant to the various implementations disclosed in embodiments of the present application.
In one possible implementation, the video denoising method of an embodiment of the present application may be performed by the electronic device 120. For example, when a user shoots a video by using the electronic device 120, the video noise reduction method provided by the embodiment of the application is adopted to locally perform video noise reduction on the electronic device 120.
In another possible implementation manner, the video denoising method according to the embodiment of the present application may be an offline method performed in the cloud, for example, the video denoising method according to the embodiment of the present application may be performed by the above-mentioned performing device 110. For example, a user may operate the electronic device 120 to interact with the execution device 110, where the electronic device 120 transmits the captured video to the execution device 110, and the execution device 110 applies the video denoising method provided by the embodiment of the present application to denoise the video.
According to the video denoising method and device of the application, the pairs of two-frame fusion weights for fusing the target video frame with each of the other cached video frames are calculated; the multi-frame fusion weights for fusing the target video frame with all of the other cached video frames together are calculated based on those pairs; and the cached video frames are weighted and fused according to the multi-frame fusion weights. This makes the weight distribution more reasonable, avoids the weight of the target video frame being too large or too small, and improves the quality of the denoised target video frame. In addition, in the scheme of the application, previously denoised video frames are not used when denoising subsequent frames, which effectively avoids ghosting.
The following describes in detail a video denoising method according to an embodiment of the present application with reference to fig. 8 to 16.
Fig. 8 shows a schematic flow chart of a video denoising method 400 according to an embodiment of the present application, where the method may be performed by an apparatus having image processing capability, for example, the method may be performed by the execution device 110 in fig. 5, or may be performed by the electronic device 120, or may be performed by the execution device 110 in combination with the electronic device 120. The method 400 includes steps 401 to 405:
401: Cache a plurality of video frames, where the plurality of video frames comprise a target video frame and at least two other video frames; the target video frame is the frame to be denoised and can be any one of the plurality of video frames.
402: Obtain multiple pairs of two-frame fusion weights, where one pair of two-frame fusion weights comprises two weights for weighting the values of one pair of pixel points.
403: Obtain, according to the multiple pairs of two-frame fusion weights, multi-frame fusion weights for fusing the target video frame and the at least two other video frames, where one group of multi-frame fusion weights comprises a plurality of weights for weighting the values of one group of pixel points.
404: Weight the values of the group of pixel points corresponding to a group of multi-frame fusion weights according to that group of weights, to obtain a new pixel value.
405: Generate the denoised target video frame based on the new pixel values.
These steps are described in detail below, respectively, using the method 400 performed at the electronic device 120 as an example.
401: Cache a plurality of video frames, where the plurality of video frames comprise a target video frame and at least two other video frames; the target video frame is the frame to be denoised and can be any one of the plurality of video frames.
In one embodiment, the plurality of video frames are buffered in time series.
In one embodiment, the cached video frames may be raw images captured by the camera, i.e., images not yet processed by the ISP 1207, or images generated after image preprocessing by the ISP (such as exposure control, white balance, color calibration, and noise removal).
It should be noted that the buffered multiple video frames are not processed by the video denoising method provided by the embodiment of the present application.
It should be noted that the essence of noise reduction is to change the pixel values of some or all of the pixel points in an image so that, together, they present a clearer image free of problems such as noise and smear. The present application therefore describes the pixel-level processing steps of the video denoising method, i.e., steps 402 to 404 of method 400, by taking an arbitrary pixel point X of the target video frame as an example.
For any pixel point X in the target video frame, its relative position in the target video frame can be identified by two-dimensional coordinates or the like; "position" in the embodiments of the present application refers to the relative position of a pixel point within its video frame. If a pixel point Y in another video frame has the same position as pixel point X, pixel point Y is called a corresponding pixel point of pixel point X.
402: Obtain multiple pairs of two-frame fusion weights, where one pair of two-frame fusion weights comprises two weights for weighting the values of one pair of pixel points.
For example, the pair of pixel points includes a pixel point X in the target video frame and the corresponding pixel point Y of pixel point X in another video frame.
It should be understood that a pair of two-frame fusion weights corresponds to a pair of pixel points; that is, the pair of weights can be used to weight the pixel values of the pair of pixel points to obtain a new pixel value.
For example, for one pixel point X of the target video frame, the pairs of two-frame fusion weights for weighted fusion of X with its corresponding pixel point in every other cached video frame must be obtained. If there are N−1 other video frames in the cache, then N−1 pairs of two-frame fusion weights are needed for pixel point X.
If the target video frame has M pixel points and all M are denoised with the method provided by the embodiments of the application, M×(N−1) pairs of two-frame fusion weights need to be obtained in total, where × denotes multiplication. For example, with a cache of N = 5 frames, each pixel of the target frame needs 4 pairs of two-frame fusion weights.
In one possible implementation, all the other video frames may be image-aligned (or image-registered) with the target video frame before the two-frame fusion weights are acquired. Image alignment refers to warping one image so that its content aligns with that of another image.
The image alignment may be performed based on at least one of angular velocity information acquired by a gyro sensor, homography matrix calculated from feature points, and optical flow information.
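A minimal feature-based alignment sketch, assuming OpenCV and 8-bit grayscale frames; a production pipeline might instead use gyroscope-derived rotation matrices or optical flow, as noted above.

```python
import cv2
import numpy as np

def align_to_target(frame, target):
    """Warp `frame` so its content aligns with `target` (homography-based sketch)."""
    orb = cv2.ORB_create()                                   # feature detector/descriptor
    kp1, des1 = orb.detectAndCompute(frame, None)
    kp2, des2 = orb.detectAndCompute(target, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)                      # matched feature points
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC)          # robust homography estimate
    h, w = target.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))             # warp frame onto the target
```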
In one possible implementation, to avoid fluctuations in the fusion weights caused by intra-frame noise and to improve the denoising effect, smoothing filtering may be applied to the video frames before the two-frame fusion weights are acquired.
The smoothing filtering may be one of average filtering, Gaussian filtering, bilateral filtering, median filtering, guided filtering, and box filtering, or a superposition of at least two of these; a sketch follows.
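For instance, Gaussian filtering (one of the options listed above) can be applied before the weights are computed; the kernel size here is an illustrative choice.

```python
import cv2

def smooth_before_weights(frame, ksize=5):
    """Pre-smooth a frame to damp intra-frame noise before weight computation."""
    return cv2.GaussianBlur(frame, (ksize, ksize), 0)  # sigma derived from ksize
```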
403: Obtain, according to the multiple pairs of two-frame fusion weights, a group of multi-frame fusion weights for fusing the target video frame and the at least two other video frames, where the group of multi-frame fusion weights comprises a plurality of weights for weighting the values of one group of pixel points.
For example, the set of pixels includes one pixel X of the target video frame, and at least two pixels corresponding to the pixel X in the at least two other video frames.
Similarly, the set of multi-frame fusion weights referred to in the present application corresponds to a set of pixels, i.e., a set of weights may be used to weight the pixel values of a set of pixels to obtain a new pixel value.
For a group of pixel points including pixel point X, step 402 obtains the pairs of two-frame fusion weights that fuse pixel point X with each of its corresponding pixel points in the other video frames. In each pair of two-frame fusion weights, one weight corresponds to pixel point X and the other to the corresponding pixel point in the other video frame. That is, pixel point X has multiple two-frame fusion weights, while each of its corresponding pixel points has one. From these pairs of two-frame fusion weights, one weight for pixel point X's participation in the multi-frame fusion and one weight for each corresponding pixel point's participation are acquired.
404: Weight the values of the group of pixel points corresponding to a group of multi-frame fusion weights according to that group of weights, to obtain a new pixel value.
For example, according to the group of multi-frame fusion weights corresponding to the group of pixel points including pixel point X, the pixel values of pixel point X and of all its corresponding pixel points are weighted to obtain a new pixel value. The new pixel value fuses the information of the same-position pixel points across the multiple video frames, so noise that may exist at pixel point X can be effectively removed.
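The following sketch fuses one group of same-position pixel points. The way the multi-frame weights are derived from the pairwise weights here (averaging pixel X's pairwise weights and renormalizing so that all weights sum to 1) is one plausible construction consistent with steps 403 and 404, not the patent's exact formula.

```python
import numpy as np

def fuse_pixel_group(x_val, other_vals, w_x_pairs):
    """Weight one group of same-position pixels into a new pixel value.

    x_val: value of pixel X in the target frame.
    other_vals: values of X's corresponding pixels in the other frames.
    w_x_pairs: the weight given to pixel X in each pair of two-frame
        fusion weights (its partner weight in each pair is 1 - w).
    """
    w_x_pairs = np.asarray(w_x_pairs, dtype=np.float64)
    u_x = w_x_pairs.mean()                    # assumed multi-frame weight of pixel X
    u_others = 1.0 - w_x_pairs                # assumed weights of corresponding pixels
    weights = np.concatenate(([u_x], u_others))
    weights /= weights.sum()                  # normalize so the group sums to 1
    values = np.concatenate(([x_val], np.asarray(other_vals, dtype=np.float64)))
    return float(weights @ values)            # the new, denoised pixel value
```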
It should be understood that denoising one pixel point X in the target video frame involves only the information of the pixel points at the same position as pixel point X in the multiple video frames; the pixel blocks around pixel point X and/or around its corresponding pixel points are not involved.
405: Generate the denoised target video frame based on the new pixel values.
After the weighted fusion of all or some of the pixel points in steps 402 to 404 is completed, one denoised target video frame can be generated based on the new pixel values. Each pixel point of the denoised target video frame fuses the information of the corresponding pixel points in the original target video frame and in the frames before and/or after it, so the image presented by all the pixel points as a whole is clearer.
In one possible implementation, the generated denoised target video frame does not directly replace the original target video frame, which remains in the cache; previously denoised target video frames are not used when denoising the next frame. This non-iterative multi-frame fusion denoising distributes the weights more reasonably across the multiple video frames and removes noise more cleanly, while alleviating the smearing caused by giving too much weight to previously denoised frames.
The target video frame after noise reduction can be directly output or can be output after subsequent image processing, wherein the subsequent image processing can comprise image processing such as white balance, color correction, tone mapping and the like on the target video frame after noise reduction. The target video frame after noise reduction may be output to the display 1251 and/or the memory 1202 of the electronic device 120, or may be transmitted to other devices through the wireless communication module 1203 and/or the mobile communication module 1208.
During video denoising, the frames of the video are denoised one by one in sequence; after the current target video frame has been denoised, the next frame becomes the new target frame to be denoised.
Optionally, as the target video frame changes, the cached video frames used for denoising it are updated accordingly.
In one possible implementation, the buffered video frames may be updated according to steps 406 and 407.
406: Delete the chronologically first frame of the cached video frames.
The cached video frames are arranged in chronological order, and the image content of adjacent frames is strongly correlated. When the video frame cache is updated after denoising of the target video frame is completed, the chronologically first of the cached frames is deleted. Since the deleted frame is never an intermediate frame, the deletion does not leave the cached frames discontinuous (no frame skipping), which avoids the poor results that would be caused by excessive pixel value differences between frames when video denoising continues.
407: Obtain one not-yet-denoised video frame and append it to the end of the cached video frames.
In one possible implementation, the chronological successor of the last of the cached video frames is obtained from the original video and appended to the end of the cache. After the target video frame has been denoised, the chronologically next frame becomes the new target video frame; the next video frame is fetched from the original video and added to the cache, where it follows all existing cached frames since it is the chronological successor of the last cached frame. This ensures that after the cache update the cached frames remain in chronological order with no frame skipping, which in turn ensures the denoising quality of the new target video frame.
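A deque with a fixed maximum length captures both steps 406 and 407 at once, as a small sketch (the buffer size is illustrative):

```python
from collections import deque

frame_buffer = deque(maxlen=5)  # cached video frames in chronological order

def update_buffer(next_raw_frame):
    """Append the next not-yet-denoised frame; the chronologically first
    frame is evicted automatically, so the cache never skips frames."""
    frame_buffer.append(next_raw_frame)
```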
In one embodiment, as shown in fig. 9, the specific process of obtaining one pair of two-frame fusion weights in step 402 includes the following steps:
421: Obtain the pixel value difference of a pair of pixel points, where one pixel point is located in the target video frame and the other in another video frame.
For example, the pixel point located in the target video frame is pixel point X, and the other is the corresponding pixel point Y of pixel point X in the other video frame.
The pixel value difference may be the absolute value of the difference between the two pixel values.
422: Obtain the noise level of the pixel point located in the target video frame.
The more severe the noise of a pixel point to be denoised, the less reliable its own pixel value, and the more its weight in multi-frame fusion denoising needs to be reduced.
In one possible implementation, the noise is modeled and the noise model is used to obtain the noise level of the pixel. For example, one noise model represents the level of noise corresponding to different pixel intensities under certain parameters (e.g., sensitivity, exposure time, etc.).
Fig. 10 shows a schematic diagram of a noise model under one particular parameter combination: the abscissa represents pixel intensity (the intensity of a pixel is also referred to as its pixel value), the ordinate represents noise level, and each point on the curve gives the noise level corresponding to one pixel value.
In one possible implementation, the noise model is calibrated in advance and stored in memory. For example, the calibrated noise model may be stored in the memory 1202 of the electronic device 120, and the noise level of the pixel X to be noise reduced, that is, the noise level corresponding to the pixel value of the pixel X in the noise model, is loaded from the memory 1202 when the video is noise reduced.
In one possible implementation, the noise model is represented as a table noise_table[256] having 256 entries, each entry corresponding to one pixel value; for example, noise_table[0] represents the noise level at a pixel value of 0, noise_table[255] the noise level at a pixel value of 255, and noise_table[100] the noise level at a pixel value of 100. The table may be implemented with any data structure that stores a large number of elements and supports indexed access, such as an array or a linked list.
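As a concrete illustration, such a table lookup might be sketched as follows; the table values are placeholders, since a real table is calibrated offline per sensor and per parameter combination (sensitivity, exposure time, etc.):

```python
# Hypothetical lookup-table noise model; the entries below are placeholders,
# as a real table comes from offline calibration under fixed parameters.
noise_table = [1.0 + 0.05 * v for v in range(256)]  # noise level per pixel value

def noise_level(pixel_value: int) -> float:
    """Look up the calibrated noise level for an 8-bit pixel value."""
    return noise_table[min(max(int(pixel_value), 0), 255)]
```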
And 423, calculating two weights for carrying out weighted fusion on the pixel values of the pair of pixel points according to the pixel value difference and the noise level.
In one possible implementation, the weight of the pixel point X located in the target video frame may be calculated first; since the two weights sum to 1, the weight of the corresponding pixel point in the pair is then obtained as 1 minus the weight of pixel point X.
In one possible implementation, the higher the noise level of pixel X, the smaller the weight of pixel X, and the greater the difference in pixel values of pixel X and its corresponding pixel, the greater the weight of pixel X.
In one possible implementation, the weight may be calculated by introducing a parameter that controls the degree of fusion of the target video frame; this parameter may be a fixed constant, or it may be adaptively adjusted according to factors such as the image quality of the target video frame or the noise level of pixel point X.
For example, the weight of pixel point X in a pair of pixel points may be obtained by a formula of the form:

W_t = MIN(1.0, k × diff / nl)

wherein W_t represents the weight of pixel point X; MIN takes the minimum value, which ensures that the weight of pixel point X is not more than 1.0; k represents the parameter controlling the degree of fusion; diff represents the pixel value difference between pixel point X and its corresponding pixel point; and nl represents the noise level of pixel point X.
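A minimal Python sketch of this step, assuming the form given above; the function name and the guard for nl = 0 are illustrative additions:

```python
def two_frame_weight(diff: float, nl: float, k: float) -> float:
    """Two-frame fusion weight W_t of pixel point X.

    diff: pixel value difference between X and its corresponding pixel;
    nl:   noise level of X from the noise model;
    k:    parameter controlling the degree of fusion.
    The MIN cap keeps W_t <= 1.0, so the corresponding pixel's weight
    1 - W_t is never negative.
    """
    if nl <= 0:                     # illustrative guard, not in the patent
        return 1.0
    return min(1.0, k * diff / nl)

# The corresponding pixel point's weight follows from the pair summing to 1:
# w_corresponding = 1.0 - two_frame_weight(diff, nl, k)
```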
In another embodiment, as shown in fig. 11, step 402 may combine information from all other video frames when obtaining one pair of two-frame fusion weights. The specific flow may include steps 421 and 422, identical to those in fig. 9, and new steps 424 and 425.
Steps 421 and 422 are the same as those shown in fig. 9 and are not repeated here; the new steps 424 and 425 are described below.
424, Obtain a plurality of second pixel value differences, where one second pixel value difference is the pixel value difference between two pixel points located in two other video frames.
The two pixel points are the corresponding pixel points of the pixel point X in other video frames.
In one possible implementation, a pixel value difference between at least two pixel points corresponding to the pixel point X in all other video frames is obtained.
425, Calculate two weights for weighted fusion of the pixel values of the pair of pixel points according to the pixel value difference, the noise level, and the plurality of second pixel value differences, where the pair of pixel points is as defined in step 421 of fig. 9 above.
In one possible implementation, the largest value among the pixel value difference of the pair of pixel points and the plurality of second pixel value differences may be obtained first, and the two-frame fusion weight is then obtained based on this maximum pixel value difference. At the same noise level, compared with obtaining the two-frame fusion weight from the pixel value difference of the pair of pixel points alone, obtaining it from the maximum pixel value difference biases the weight toward the target video frame. When one of the second pixel value differences is larger than the pixel value difference of the pair of pixel points, using the maximum pixel value difference increases the weight of pixel point X in the target video frame, which effectively avoids problems such as smear and abnormal noise during noise reduction.
It should be understood that in embodiments of the present application, the "second pixel value difference" is merely intended to refer to a different pixel value difference object, and is not meant to be otherwise limiting of the referred object.
In one possible implementation, the degree of fusion of the target video frame may be controlled by a parameter; this parameter may be a fixed constant, or it may be adaptively adjusted according to factors such as the image quality of the target video frame or the noise level of pixel point X.
In one embodiment, the weight of pixel point X may be obtained by a formula of the form:

W_t = MIN(1.0, k × max_diff / nl)

wherein W_t represents the weight of pixel point X; MIN takes the minimum value, which ensures that the weight of pixel point X is not greater than 1.0; k represents the parameter controlling the degree of fusion; max_diff represents the maximum pixel value difference; and nl represents the noise level of pixel point X.
It can be seen that the higher the noise level of pixel point X, the smaller its weight; and the larger the maximum pixel value difference, the larger its weight. On one hand, when the noise level of pixel point X is high, reducing its weight limits the influence of its noise. On the other hand, when large pixel value differences appear across the video frames, increasing the weight of pixel point X reduces the influence of abnormal pixel values that may appear in other video frames on the noise reduction result, and also alleviates the smear caused by large-amplitude motion during noise reduction.
After the weight of pixel point X is obtained, since the pair of weights sums to 1, the weight of the other pixel point is 1 minus the weight of pixel point X.
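Under the same assumed form, the variant of steps 424 and 425 that uses the maximum pixel value difference might be sketched as follows; this is an illustrative sketch, not an authoritative implementation:

```python
def two_frame_weight_max(first_diff: float, second_diffs: list,
                         nl: float, k: float) -> float:
    """Two-frame fusion weight of pixel point X using the maximum of the
    first pixel value difference and all second pixel value differences.

    Biasing the weight toward the target frame when any inter-frame
    difference is large mitigates smear and abnormal-noise artifacts.
    """
    max_diff = max([first_diff] + list(second_diffs))
    if nl <= 0:  # illustrative guard, not in the patent
        return 1.0
    return min(1.0, k * max_diff / nl)
```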
In one possible implementation, referring to fig. 12, the specific procedure for obtaining a set of multi-frame fusion weights in step 403 may include the following steps:
431, acquiring multi-frame fusion weights of one pixel point in the target video frame according to the multi-to-two frame fusion weights.
In one possible implementation, the multi-frame fusion weight at a pixel X of the target video frame may be the maximum/minimum/average of the two-frame fusion weights of that pixel.
In one possible implementation, the multi-frame fusion weight at a pixel X of the target video frame may be a weighted sum of two-frame fusion weights of the pixel.
432, Obtaining multi-frame fusion weights of at least two other pixel points according to the multi-to-two-frame fusion weights and the multi-frame fusion weights of the pixel points, wherein the pixel points and the at least two other pixel points form a group of pixel points together.
In one embodiment, the multi-frame fusion weight of the ith corresponding pixel point of pixel point X may be obtained from W_m and W_ti, wherein W_i represents the multi-frame fusion weight of the ith corresponding pixel point, the ith corresponding pixel point being located in the ith other video frame; W_m represents the multi-frame fusion weight of pixel point X; and W_ti represents the ith two-frame fusion weight of pixel point X, i.e., the weight that can be used for two-frame fusion of pixel point X with the ith corresponding pixel point. For example, the remaining weight (1 - W_m) may be distributed among the corresponding pixel points according to their two-frame fusion weights.
It should be understood that the obtained set of multi-frame fusion weights for a set of pixels needs to be normalized.
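As an illustrative sketch only (the patent's exact formula for W_i is not reproduced here, so the distribution rule below is an assumption), one plausible reading of steps 431 and 432, including the normalization noted above, is:

```python
def multi_frame_weights(two_frame_ws: list, mode: str = "mean") -> list:
    """One plausible sketch of steps 431-432.

    two_frame_ws[i] is W_ti, the two-frame fusion weight of pixel point X
    against its i-th corresponding pixel point.
    Returns [W_m, W_1, ..., W_n], normalized to sum to 1.
    """
    # Step 431: multi-frame weight of pixel point X from its two-frame
    # weights (the patent allows max, min, average, or a weighted sum).
    if mode == "max":
        w_m = max(two_frame_ws)
    elif mode == "min":
        w_m = min(two_frame_ws)
    else:
        w_m = sum(two_frame_ws) / len(two_frame_ws)

    # Step 432 (assumed rule): distribute the remaining weight (1 - W_m)
    # over the corresponding pixel points in proportion to (1 - W_ti).
    shares = [1.0 - w for w in two_frame_ws]
    total = sum(shares)
    others = ([(1.0 - w_m) * s / total for s in shares] if total > 0
              else [0.0] * len(shares))

    # Normalize the whole set, as noted above.
    weights = [w_m] + others
    norm = sum(weights)
    return [w / norm for w in weights]
```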
In a possible implementation manner, the video noise reduction function enabled by the video noise reduction method provided in the first aspect may be directly integrated in the system, and is used by default when the user shoots a video.
In another possible implementation manner, when a user shoots a video, the user can select whether to use the video denoising method provided by the application to denoise the video, so as to give the user more independent options. For example, the user may turn on or off a switch in a system setting or in a camera application indicating the video noise reduction function that can be achieved by the video noise reduction method provided in the first aspect.
The video noise reduction method provided by the embodiment of the application is further described below by taking a mobile phone as an example with reference to fig. 13 to 16 from the perspective of a user operation interface. It should be understood that the descriptions of fig. 13-16 and their corresponding embodiments do not constitute any limitation on the electronic device 120.
Fig. 13 (a) shows a graphical user interface (GUI) of the mobile phone, which is the desktop 910 of the mobile phone. When the user clicks the icon 920 of a camera application (APP) on the desktop 910, the camera application is started in response to the user operation, displaying the photographing interface 930 shown in (b) of fig. 13.
In one embodiment, the shooting interface 930 may include a view-finding frame 940, in which preview images may be displayed in real time in a preview state, and a control 950 for indicating a recording video mode and other shooting controls.
It should be noted that, in the embodiment of the present application, the image in the viewfinder 940 may be a color image or a gray-scale image, and the color of the image in the drawings of the present application does not form any limitation to the present application.
In one possible implementation, the user's photographing behavior includes a user's operation to turn on the camera, and in response to the operation, a photographing interface 930 is displayed on the display screen.
For example, after detecting the operation of clicking the icon 920 of the camera application by the user, the mobile phone may start the camera application and display the photographing interface 930. A view box 940 may be included on the photographing interface, and it is understood that the size of the view box may be different in photographing mode and video mode. For example, in video mode, the viewfinder may be the entire display screen. In the preview state, i.e., before the user turns on the camera and does not press the photographing/video button, the preview image can be displayed in real time in the viewfinder.
In one example, referring to fig. 14 (a), a photographing option 960 is included on the photographing interface 930, and after the mobile phone detects that the user clicks the photographing option 960, referring to fig. 14 (b), the mobile phone displays a photographing mode interface. After the mobile phone detects that the user clicks the mode 961 for indicating professional video on the shooting mode interface, the mobile phone enters the professional video mode. When the professional video recording mode is adopted to record the video, the mobile phone can adopt the method described in the related embodiment of fig. 8 to perform multi-frame fusion noise reduction processing on the original video frame acquired by the camera so as to improve the visual effect of the recorded video. It should be appreciated that the professional video mode described above may also include other image processing operations.
After the mobile phone enters the professional video mode, the user may click a capture button 970 as shown in fig. 15 to instruct the mobile phone to begin recording video. In response to the shooting operation instructed by the user, the mobile phone starts recording video and performs real-time multi-frame fusion noise reduction processing on the recorded video.
It should be understood that the operation by which the user indicates the shooting action may include pressing a shooting button in the camera application of the mobile phone, instructing the mobile phone by voice, using a shortcut key, or other operations indicating that the mobile phone should shoot. The foregoing is illustrative and does not limit the application in any way.
In another example, the handset may provide a video noise reduction mode, such as presenting an operation control in an interface of a camera or other application, and turning on the video noise reduction mode in response to a particular operation of the control by the user. After the video noise reduction mode is started, the mobile phone can perform multi-frame fusion noise reduction processing on the original video frames acquired by the camera or received from other devices by adopting the method described in the related embodiment of fig. 8, so as to improve the visual effect of the video.
In one possible implementation manner, as shown in fig. 16, in response to an operation of shooting by a user, a video frame is displayed in a display screen of the mobile phone, where the video frame is obtained according to a noise-reduced video frame, and the noise-reduced video frame is obtained by performing multi-frame fusion noise reduction on a video frame acquired by a camera of the mobile phone.
The displayed video frame may refer to a standard full-color image obtained by performing subsequent image processing on the video frame after multi-frame fusion noise reduction, where the subsequent image processing may include, but is not limited to, performing image processing such as white balance, color correction, tone mapping on the video frame after multi-frame fusion noise reduction.
In one possible implementation manner, the multi-frame fusion noise reduction of video frames includes: buffering a plurality of video frames arranged in time order, the plurality of video frames including one target video frame and at least two other video frames, where the target video frame is the frame to be denoised and may be any one of the plurality of video frames; obtaining multiple pairs of two-frame fusion weights, where one pair of two-frame fusion weights includes two weights for weighting the values of a pair of pixel points, the pair consisting of one pixel point of the target video frame and the pixel point at the same position in one other video frame; obtaining, according to the multiple pairs of two-frame fusion weights, a set of multi-frame fusion weights for fusing the target video frame and the at least two other video frames, where the set of multi-frame fusion weights includes a plurality of weights for weighting the values of a group of pixel points, the group consisting of one pixel point of the target video frame and the pixel points at the same position in the at least two other video frames; weighting the values of the group of pixel points with the set of multi-frame fusion weights to obtain a new pixel value; and generating the noise-reduced video frame based on the new pixel value. It should be understood that the above multi-frame fusion noise reduction operation may be performed on all or part of the pixel points of the target video frame; if it is performed on only part of the pixel points, the values of the remaining pixel points may be left unchanged or obtained by other noise reduction methods. As long as at least one pixel point in the target video frame is denoised by the multi-frame fusion noise reduction method provided in the embodiments of the present application, the target video frame can be considered to be denoised by that method.
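For concreteness, the final weighted fusion of one group of pixel points can be sketched as follows; the names are illustrative, and the weights are assumed to be already normalized:

```python
def fuse_pixel(values: list, weights: list) -> float:
    """Weighted fusion of one group of pixel values into one new value.

    values[0] is the pixel of the target video frame; values[1:] are the
    pixel points at the same position in the other buffered frames.
    weights is the corresponding set of normalized multi-frame fusion
    weights for the group.
    """
    return sum(w * v for w, v in zip(weights, values))
```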
It should be understood that the above multi-frame fusion noise reduction of video frames requires buffering multiple video frames, and when the video noise reduction is started, the first several frames may not be subjected to noise reduction processing.
In one possible implementation, after the denoising of the video frame is completed, the buffered video frames may be updated, for example, a first frame in the buffered video frames at a time sequence is deleted, and a video frame without denoising is acquired and added to the last of the buffered video frames.
The specific flow of the above-mentioned multi-frame fusion noise reduction for video frames can be seen in fig. 8 to 12, and will not be repeated here.
It should be understood that the video denoising method shown in fig. 8 to 12 is applicable to the multiframe fusion denoising process performed on the video by the electronic device when the user shown in fig. 14 to 16 uses the professional video recording mode, that is, the expansion, limitation, explanation and description of the relevant content in fig. 8 to 12 are also applicable to fig. 14 to 16, and are not repeated herein.
Based on the video noise reduction method described in the above embodiments, an embodiment of the present application further provides a video noise reduction device 600, which includes one or more functional units for performing the video noise reduction method described in the above embodiments. The functional units may be implemented by software, by hardware such as a processor, or by a suitable combination of software, hardware and/or firmware; for example, part of the functions may be implemented by an application processor executing a computer program, and part by a wireless communication module (such as a Bluetooth or Wi-Fi module), an MCU, an ISP, etc.
In one embodiment, as shown in FIG. 17, the video noise reduction device 600 includes at least a video frame buffer module 601, a fusion weight acquisition module 604, a multi-frame fusion module 605, and a noise-reduced video frame generation module 606. The video frame buffer module 601 is configured to buffer a plurality of video frames arranged in time order, including one target video frame and at least two other video frames, where the target video frame is the frame to be denoised and may be any one of the plurality of video frames. The fusion weight acquisition module 604 is configured to obtain the weights for weighted fusion of the plurality of video frames to implement noise reduction; it may first obtain the two-frame fusion weights and then obtain the multi-frame fusion weights based on them. The multi-frame fusion module 605 is configured to perform weighted fusion of the buffered plurality of video frames according to the fusion weights to obtain new pixel values, and the noise-reduced video frame generation module 606 is configured to generate a new video frame, i.e., the noise-reduced video frame, based on the new pixel values.
Optionally, the video noise reduction device 600 may further include a video frame alignment module 602 for image aligning the buffered plurality of video frames.
Optionally, the video noise reduction device 600 may further include a video frame filtering module 603, configured to smooth the buffered multiple video frames before acquiring the fusion weight, so as to avoid fluctuation of the weight caused by intra-frame noise.
In one embodiment, the video frame buffer module 601 may be a cache, which enables noise reduction to be performed while the video is being captured.
In one embodiment, the functionality of fusion weight acquisition module 604 may be implemented by a combination of multiple devices. For example, the fusion weight acquisition module 604 may include an AP and a GPU, and the specific process of the fusion weight acquisition module 604 for acquiring the fusion weight by calling the combination of the AP and the GPU may refer to the foregoing related embodiments of fig. 8 to 12.
In one embodiment, the video noise reduction device 600 may output the noise-reduced video frame generated by the noise-reduced video frame generation module 606. The video noise reduction device 600 may have a display module 607 for displaying an image obtained from the noise-reduced video frame; the display module 607 may be a display, such as a liquid crystal display or an organic light-emitting diode display.
In one embodiment, the video noise reduction device 600 may further have a communication module 608 for communicating with other devices. It may obtain video frames without noise reduction from other devices for noise reduction processing, and may also transmit noise-reduced video frames to other devices. The communication module may be a device with a communication function, such as a Wi-Fi module or a Bluetooth module.
An embodiment of the application also provides an electronic device 700, as shown in fig. 18, the electronic device 700 comprising a processing circuit 702, and a communication interface 704 and a storage medium 706 connected thereto.
The processing circuitry 702 is configured to process data, control data access and storage, issue commands, and control other components to perform various steps of the video denoising method of embodiments of the present application, e.g., to perform some or all of the steps of any of the embodiments shown in fig. 8, 9, and 11-16. The processing circuitry 702 may be implemented as one or more processors, one or more controllers, and/or other structures operable to execute programs. The processing circuitry 702 may include at least one of a general purpose processor, a Digital Signal Processor (DSP), a GPU, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), or other programmable logic component, among others. A general purpose processor may include a microprocessor, as well as any conventional processor, controller, microcontroller, or state machine. The processing circuit 702 may also be implemented as a combination of computing components, such as a DSP and a microprocessor.
The communication interface 704 may comprise circuitry and/or programming to enable bi-directional communication between the electronic device 700 and one or more network devices (e.g., routers, switches, access points, etc.). The communication interface 704 includes at least one receive circuit 742 and/or at least one transmit circuit 741. In one embodiment, communication interface 704 may be implemented in whole or in part by a wireless modem.
The storage medium 706 may include a non-transitory computer-readable storage medium, such as a magnetic storage device (e.g., hard disk, floppy disk, magnetic stripe), an optical storage medium (e.g., digital versatile disk (DVD)), a smart card, a flash memory device, random access memory (RAM), read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), registers, and any combination thereof. The storage medium 706 may be coupled to the processing circuit 702 such that the processing circuit 702 can read information from, and write information to, the storage medium 706. In particular, the storage medium 706 may be integrated into the processing circuit 702, or the storage medium 706 and the processing circuit 702 may be separate. The storage medium 706 may store a computer program 761; when the computer program 761 is executed by the processing circuit 702, the processing circuit 702 is configured to perform the various steps of the video denoising method according to the embodiments of the present application, for example, some or all of the steps in any of the embodiments shown in fig. 8, 9, and 11 to 16.
It should be understood that the video noise reduction device shown in the embodiment of the present application may be a server, for example, a cloud server, or may also be a chip configured in the cloud server, or the video noise reduction device shown in the embodiment of the present application may be an electronic device, or may be a chip configured in the electronic device.
The embodiment of the application also provides a chip, which comprises a data interface and a processor. The data interface can be an input/output circuit or a communication interface, and the processor is an integrated processor or a microprocessor or an integrated circuit on the chip. The chip may perform the video denoising method in the method embodiment described above.
The embodiment of the application also provides a computer readable storage medium, on which instructions are stored, which when executed, perform the video denoising method in the above method embodiment.
The embodiment of the application also provides a computer program product containing instructions which when executed perform the video denoising method in the embodiment of the method.
It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It should be noted that the terms "executable program," "computer program," "program" as used in embodiments of the present application should be construed broadly to include, but are not limited to, instructions, instruction sets, codes, code segments, subroutines, software modules, applications, software packages, threads, processes, functions, firmware, middleware, etc. It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus and units described above may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application may be embodied, essentially or in the part contributing to the prior art or in part, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The storage medium includes various media capable of storing executable programs, such as a USB flash disk, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disk.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (29)

1. A method of video denoising, comprising:
caching a plurality of video frames, wherein the plurality of video frames comprise a target video frame and at least two other video frames, and the target video frame is any frame in the plurality of video frames;
Acquiring a plurality of pairs of two-frame fusion weights, wherein each pair of two-frame fusion weights comprises two weights for weighting the values of a pair of pixel points, the pair of pixel points comprises a first pixel point of the target video frame, and corresponding pixel points with the same positions as the first pixel point in one frame of other video frames;
the obtaining the fusion weight of the two frames comprises the following steps:
Acquiring a first pixel value difference, wherein the first pixel value difference is the pixel value difference between the first pixel point and the corresponding pixel point;
Acquiring the noise level of the first pixel point;
Acquiring second pixel value differences, wherein the second pixel value differences are pixel value differences of two pixel points which are respectively positioned in different other video frames and correspond to the first pixel point;
acquiring the fusion weight of the two frames according to the first pixel value difference, the noise level, the second pixel value difference and parameters for controlling the fusion degree;
Acquiring multi-frame fusion weights for fusing the target video frame and the at least two other video frames according to the multi-to-two frame fusion weights, wherein the multi-frame fusion weights comprise a plurality of weights for weighting values of a group of pixel points, the group of pixel points comprise the first pixel point of the target video frame, and at least two corresponding pixel points with the same position as the first pixel point in the at least two other video frames;
weighting the values of the group of pixel points corresponding to the multi-frame fusion weights according to the multi-frame fusion weights to obtain a new pixel value;
and generating a noise-reduced target video frame based on the new pixel value.
2. The method of claim 1, wherein the obtaining the two-frame fusion weight based on the first pixel value difference, the noise level, the second pixel value difference, and a parameter controlling a degree of fusion comprises:
comparing the first pixel value difference with a plurality of second pixel value differences to obtain a maximum pixel value difference;
And acquiring the fusion weight of the two frames according to the maximum pixel value difference, the noise level and the parameters for controlling the fusion degree.
3. The method of claim 1, wherein the obtaining multi-frame fusion weights for fusing the target video frame and the at least two other video frames based on the multi-to-two frame fusion weights comprises:
acquiring a first weight in the multi-frame fusion weights according to the multi-to-two-frame fusion weights, wherein the first weight represents the weight of the first pixel point;
and acquiring at least two weights except the first weight in the multi-frame fusion weights according to the first weight and the multi-to-two frame fusion weights.
4. The method of claim 3, wherein the obtaining a first one of the multi-frame fusion weights from the multi-to-two-frame fusion weights comprises:
And acquiring a plurality of weights corresponding to the first pixel point in the multi-to-two frame fusion weights, wherein the first weight is the maximum value, the minimum value, the average value or the weighted sum of the plurality of weights.
5. The method of any one of claims 1-4, further comprising:
Deleting a first frame of the plurality of video frames.
6. The method of any one of claims 1-4, further comprising:
and acquiring a video frame before noise reduction of one frame, and adding the video frame before noise reduction to the last of the plurality of video frames.
7. A method of video denoising, comprising:
Responding to user operation, displaying video frames in a display screen of the electronic equipment, wherein the video frames comprise target video frames after noise reduction, the target video frames after noise reduction are obtained after the target video frames acquired by the electronic equipment are subjected to video noise reduction, and the noise reduction on the target video frames comprises the following steps:
Caching a plurality of video frames, wherein the plurality of video frames comprise the target video frame and at least two other video frames, and the target video frame is any frame in the plurality of video frames;
Acquiring a plurality of pairs of two-frame fusion weights, wherein one pair of the two-frame fusion weights comprises two weights used for weighting the values of a pair of pixel points, the pair of pixel points comprises a first pixel point of the target video frame, and corresponding pixel points with the same positions as the first pixel point in one frame of the other video frames;
the obtaining the fusion weight of the two frames comprises the following steps:
Acquiring a first pixel value difference, wherein the first pixel value difference is the pixel value difference between the first pixel point and the corresponding pixel point;
Acquiring the noise level of the first pixel point;
Acquiring second pixel value differences, wherein the second pixel value differences are pixel value differences of two pixel points which are respectively positioned in different other video frames and correspond to the first pixel point;
acquiring the fusion weight of the two frames according to the first pixel value difference, the noise level, the second pixel value difference and parameters for controlling the fusion degree;
Acquiring multi-frame fusion weights for fusing the target video frame and the at least two other video frames according to the multi-to-two frame fusion weights, wherein the multi-frame fusion weights comprise a plurality of weights for weighting values of a group of pixel points, the group of pixel points comprise the first pixel point of the target video frame, and at least two second corresponding pixel points with the same position as the first pixel point in the at least two other video frames;
weighting the values of the group of pixel points corresponding to the multi-frame fusion weights according to the multi-frame fusion weights to obtain a new pixel value;
and generating the target video frame after noise reduction based on the new pixel value.
8. The method according to claim 7, wherein the user operation is an operation to turn on a video noise reduction mode for indicating to reduce noise of a video frame acquired by the electronic device, or an operation to start video shooting.
9. The method of claim 7, wherein the obtaining the two-frame fusion weight based on the first pixel value difference, the noise level, the second pixel value difference, and a parameter controlling a degree of fusion comprises:
comparing the first pixel value difference with a plurality of second pixel value differences to obtain a maximum pixel value difference;
And acquiring the fusion weight of the two frames according to the maximum pixel value difference, the noise level and the parameters for controlling the fusion degree.
10. The method according to any one of claims 7-9, wherein the obtaining multi-frame fusion weights for fusing the target video frame and the at least two other video frames according to the multi-to-two frame fusion weights comprises:
acquiring a first weight in the multi-frame fusion weights according to the multi-to-two-frame fusion weights, wherein the first weight represents the weight of the first pixel point;
and acquiring at least two weights except the first weight in the multi-frame fusion weights according to the first weight and the multi-to-two frame fusion weights.
11. The method of claim 10, wherein the obtaining a first one of the multi-frame fusion weights from the multi-to-two-frame fusion weights comprises:
And acquiring a plurality of weights corresponding to the first pixel point in the multi-to-two frame fusion weights, wherein the first weight is the maximum value, the minimum value, the average value or the weighted sum of the plurality of weights.
12. The method according to any one of claims 7-9, further comprising:
Deleting a first frame of the plurality of video frames.
13. The method according to any one of claims 7-9, further comprising:
and acquiring a video frame before noise reduction of one frame, and adding the video frame before noise reduction to the last of the plurality of video frames.
14. A video noise reduction device, comprising:
the storage module is used for caching a plurality of video frames, wherein the plurality of video frames comprise a frame of target video frame and at least two frames of other video frames, and the target video frame is any frame in the plurality of video frames;
The acquisition module is used for acquiring a plurality of pairs of two-frame fusion weights, wherein one pair of two-frame fusion weights comprises two weights used for weighting the values of a pair of pixel points, the pair of pixel points comprises a first pixel point of the target video frame and a corresponding pixel point with the same position as the first pixel point in one frame of other video frames;
The acquisition module is further configured to acquire a multi-frame fusion weight according to the multi-to-two frame fusion weight, where the multi-frame fusion weight is used to fuse the target video frame and the at least two other video frames, the multi-frame fusion weight includes multiple weights used to weight a set of values of pixels, and the set of pixels includes the first pixel of the target video frame and at least two corresponding pixels in the at least two other video frames, where the positions of the corresponding pixels are the same as the positions of the first pixel;
the obtaining the fusion weight of the two frames comprises the following steps:
Acquiring a first pixel value difference, wherein the first pixel value difference is the pixel value difference between the first pixel point and the corresponding pixel point;
Acquiring the noise level of the first pixel point;
Acquiring second pixel value differences, wherein the second pixel value differences are pixel value differences of two pixel points which are respectively positioned in different other video frames and correspond to the first pixel point;
acquiring the fusion weight of the two frames according to the first pixel value difference, the noise level, the second pixel value difference and parameters for controlling the fusion degree;
The processing module is used for weighting the values of a group of pixel points corresponding to the multi-frame fusion weights according to the multi-frame fusion weights to obtain a new pixel value, and generating a noise-reduced target video frame based on the new pixel value.
15. The apparatus of claim 14, wherein the obtaining the two-frame fusion weights comprises:
Acquiring the noise level of the first pixel point;
Acquiring a first pixel value difference, wherein the first pixel value difference is the pixel value difference between the first pixel point and the corresponding pixel point;
and acquiring the fusion weight of the two frames according to the first pixel value difference, the noise level, the second pixel value difference and the parameters for controlling the fusion degree.
16. The apparatus of claim 14, wherein the obtaining the two-frame fusion weights comprises:
Acquiring the noise level of the first pixel point;
Acquiring a first pixel value difference, wherein the first pixel value difference is the pixel value difference between the first pixel point and the corresponding pixel point;
acquiring a plurality of second pixel value differences, wherein one second pixel value difference is the pixel value difference between two corresponding pixel points of the first pixel point;
And acquiring the fusion weight of the two frames according to the first pixel value difference, the noise level, the second pixel value differences and the parameters for controlling the fusion degree.
17. The apparatus of claim 16, wherein the obtaining the two-frame fusion weight based on the first pixel value difference, the noise level, the plurality of second pixel value differences, and a parameter controlling a degree of fusion comprises:
Comparing the first pixel value difference with the plurality of second pixel value differences to obtain a maximum pixel value difference;
And acquiring the fusion weight of the two frames according to the maximum pixel value difference, the noise level and the parameters for controlling the fusion degree.
18. The apparatus of any of claims 14-17, wherein the obtaining multi-frame fusion weights comprises:
acquiring a first weight in the multi-frame fusion weights according to the multi-to-two-frame fusion weights, wherein the first weight represents the weight of the first pixel point;
and acquiring at least two weights except the first weight in the multi-frame fusion weights according to the first weight and the multi-to-two frame fusion weights.
19. The apparatus of claim 18, wherein the obtaining a first one of the multi-frame fusion weights from the multi-to-two frame fusion weights comprises:
And acquiring a plurality of weights corresponding to the first pixel point in the multi-to-two frame fusion weights, wherein the first weight is the maximum value, the minimum value, the average value or the weighted sum of the plurality of weights.
20. The apparatus of any of claims 14-17, wherein the processing module is further configured to:
Deleting a first frame of the plurality of video frames.
21. The apparatus of any of claims 14-17, wherein the processing module is further configured to obtain a frame of pre-noise-reduced video frame and add the pre-noise-reduced video frame to a last of the plurality of video frames.
22. A video noise reduction device, comprising:
The processing module is used for responding to user operation and displaying video frames in a display screen of the electronic equipment, wherein the video frames comprise target video frames after noise reduction, and the target video frames after noise reduction are obtained by carrying out video noise reduction on the target video frames acquired by the electronic equipment;
the storage module is used for caching a plurality of video frames, wherein the plurality of video frames comprise the target video frame and at least two other video frames, and the target video frame is any frame in the plurality of video frames;
The acquisition module is used for acquiring a plurality of pairs of two-frame fusion weights, wherein one pair of two-frame fusion weights comprises two weights used for weighting the values of a pair of pixel points, the pair of pixel points comprises a first pixel point of the target video frame and a corresponding pixel point with the same position as the first pixel point in one frame of other video frames;
the obtaining the fusion weight of the two frames comprises the following steps:
Acquiring a first pixel value difference, wherein the first pixel value difference is the pixel value difference between the first pixel point and the corresponding pixel point;
Acquiring the noise level of the first pixel point;
Acquiring second pixel value differences, wherein the second pixel value differences are pixel value differences of two pixel points which are respectively positioned in different other video frames and correspond to the first pixel point;
acquiring the fusion weight of the two frames according to the first pixel value difference, the noise level, the second pixel value difference and parameters for controlling the fusion degree;
The acquisition module is further configured to acquire a multi-frame fusion weight according to the multi-to-two frame fusion weight, where the multi-frame fusion weight is used to fuse the target video frame and the at least two other video frames, the multi-frame fusion weight includes multiple weights used to weight a set of values of pixels, and the set of pixels includes the first pixel of the target video frame and at least two corresponding pixels in the at least two other video frames, where the positions of the corresponding pixels are the same as the positions of the first pixel;
the processing module is further configured to weight values of a group of pixel points corresponding to the multi-frame fusion weights according to the multi-frame fusion weights to obtain a new pixel value, and generate a target video frame after noise reduction based on the new pixel value.
23. A video noise reduction device comprising a memory, a processor and a program stored on the memory and executable on the processor, wherein the processor, when executing the program, performs the video noise reduction method of any one of claims 1 to 6.
24. The video noise reduction device of claim 23, further comprising:
The camera is used for collecting video, and when the processor executes the program stored in the memory, the processor is used for executing the video noise reduction method so as to reduce noise of the video collected by the camera.
25. A video noise reduction device comprising a memory, a processor and a program stored on the memory and executable on the processor, wherein the processor, when executing the program, performs the video noise reduction method of any one of claims 7 to 13.
26. The video noise reduction device of claim 25, further comprising:
The camera is used for collecting video, and when the processor executes the program stored in the memory, the processor is used for executing the video noise reduction method so as to reduce noise of the video collected by the camera.
27. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when executed by a processor, implements the video denoising method of any one of claims 1 to 6 or 7 to 13.
28. A computer program product comprising computer programs/instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 6 or 7 to 13.
29. A chip comprising a processor and a data interface, the processor reading instructions and video frames stored on a memory through the data interface to perform the video noise reduction method of any of claims 1 to 6 or 7 to 13.
CN202011549202.3A 2020-12-24 2020-12-24 Video noise reduction method and device Active CN114679553B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011549202.3A CN114679553B (en) 2020-12-24 2020-12-24 Video noise reduction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011549202.3A CN114679553B (en) 2020-12-24 2020-12-24 Video noise reduction method and device

Publications (2)

Publication Number Publication Date
CN114679553A (en) 2022-06-28
CN114679553B (en) 2025-02-07

Family

ID=82069557

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011549202.3A Active CN114679553B (en) 2020-12-24 2020-12-24 Video noise reduction method and device

Country Status (1)

Country Link
CN (1) CN114679553B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117274106B (en) * 2023-10-31 2024-04-09 荣耀终端有限公司 Photo restoration method, electronic equipment and related medium
CN118999830B (en) * 2024-10-24 2025-01-24 山东省科学院激光研究所 Distributed temperature measurement method and system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5974250B2 (en) * 2014-02-07 2016-08-23 株式会社モルフォ Image processing apparatus, image processing method, image processing program, and recording medium
JP6983801B2 (en) * 2016-03-23 2021-12-17 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. Improved image quality by reducing 2-pass temporal noise
US11197008B2 (en) * 2019-09-27 2021-12-07 Intel Corporation Method and system of content-adaptive denoising for video coding
CN111127347A (en) * 2019-12-09 2020-05-08 Oppo广东移动通信有限公司 Noise reduction method, terminal and storage medium

Also Published As

Publication number Publication date
CN114679553A (en) 2022-06-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant