
CN112805990A - Video processing method and device, electronic equipment and computer readable storage medium


Info

Publication number: CN112805990A
Application number: CN201880098282.XA
Authority: CN (China)
Prior art keywords: frame, video, image, processed, frames
Other languages: Chinese (zh)
Inventor: 胡小朋
Current assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd; Shenzhen Huantai Technology Co Ltd
Original assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd; Shenzhen Huantai Technology Co Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd and Shenzhen Huantai Technology Co Ltd
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; control thereof
    • H04N 23/60 Control of cameras or camera modules


Abstract

The application discloses a video processing method and apparatus, an electronic device, and a computer-readable storage medium, relating to the technical field of video processing. The method comprises the following steps: capturing a video, and extracting a frame image to be processed from the video; performing blur processing on the frame image to be processed to obtain a blurred image; and determining encoding parameters and encoding the blurred image. According to the video processing method, during video recording the preprocessing of the frame image to be processed is completed by performing blur processing on it, and the preprocessed image is encoded according to the determined encoding parameters; this compresses the video data volume and improves encoding efficiency while removing blocking artifacts and mosaic from the video, so that high video quality is ensured and video clarity is improved.

Description

Video processing method and device, electronic equipment and computer readable storage medium

Technical Field
The present disclosure relates to the field of encoding and decoding of video images, and more particularly, to a video processing method, an apparatus, an electronic device, and a computer-readable storage medium.
Background
With the popularization of the internet, multimedia, and video in particular, has become one of the main media carrying content, and video is developing toward high definition and even ultra-high definition. Video transmission therefore occupies most of the bandwidth of network transmission; while it brings a rich experience to users, it also puts enormous pressure on storage and transmission, which makes the compression of video especially important.
To meet users' video recording and sharing needs in some applications, videos are recorded at low bit rates, compression-encoded, and uploaded. However, these low-bitrate videos are currently recorded with Advanced Video Coding (AVC) hardware encoding, and the small videos recorded this way are unclear and exhibit severe mosaic or blocking artifacts.
Disclosure of Invention
In view of the above, the present application provides a video processing method, an apparatus, an electronic device, and a computer-readable storage medium to remedy the above drawbacks.
In a first aspect, an embodiment of the present application provides a video processing method applied to an electronic device. The method comprises the following steps: capturing a video, and extracting a frame image to be processed from the video; performing blur processing on the frame image to be processed to obtain a blurred image; and determining encoding parameters and encoding the blurred image.
In a second aspect, an embodiment of the present application further provides a video processing apparatus applied to an electronic device. The video processing apparatus includes: a video capture module for capturing a video and extracting a frame image to be processed from the video; a preprocessing module for performing blur processing on the frame image to be processed to obtain a blurred image; and an encoding module for determining encoding parameters and encoding the blurred image.
In a third aspect, an embodiment of the present application further provides an electronic device, including: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors to perform the above method.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium storing program code that can be invoked by a processor to execute the above method.
Compared with the prior art, in the scheme provided by the application, during video recording the preprocessing of the frame image to be processed is completed by performing blur processing on it, and the preprocessed image is encoded according to the determined encoding parameters. This compresses the video data volume and improves encoding efficiency while removing blocking artifacts and mosaic from the video, so that high video quality is ensured and video clarity is improved.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a scene schematic diagram of video processing provided in an embodiment of the present application.
Fig. 2 is a schematic flowchart of a video processing method according to an embodiment of the present application.
Fig. 3 is a schematic flowchart of another video processing method according to an embodiment of the present application.
Fig. 4 is a flow chart illustrating the blurring processing step of the video processing method shown in fig. 3.
Fig. 5 is a flowchart illustrating the encoding steps of the video processing method shown in fig. 3.
Fig. 6 is a functional block diagram of a video processing apparatus according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of an electronic device for executing a video processing method according to an embodiment of the present application.
Fig. 8 is a block diagram of an electronic device provided in an embodiment of the present application and configured to execute a video processing method according to an embodiment of the present application.
Fig. 9 is a block diagram of a storage unit for storing or carrying program code that implements a video processing method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
As used in the embodiments herein, an "electronic device" or "communication terminal" (or simply "terminal") includes, but is not limited to, a device configured to receive/transmit communication signals via a wireline connection, such as a Public Switched Telephone Network (PSTN), a Digital Subscriber Line (DSL), a digital cable, a direct cable connection, and/or another data connection/network, and/or via a wireless interface (e.g., a cellular network, a Wireless Local Area Network (WLAN), a digital television network such as a DVB-H network, a satellite network, an AM-FM broadcast transmitter, and/or another communication terminal). Communication terminals arranged to communicate over a wireless interface may be referred to as "wireless communication terminals", "wireless terminals", "mobile terminals", or "electronic devices". Examples of mobile terminals and electronic devices include, but are not limited to, satellite or cellular telephones; personal communications system (PCS) terminals that combine a cellular radiotelephone with data processing, facsimile, and data communication capabilities; PDAs that may include a radiotelephone, pager, internet/intranet access, Web browser, notepad, calendar, and/or Global Positioning System (GPS) receiver; and conventional laptop and/or palmtop receivers or other electronic devices that include a radiotelephone transceiver.
Currently, to meet users' video recording and sharing needs in some applications, a low-bitrate (Low Bit Rate) video is usually recorded, compression-encoded, and uploaded. For example, some instant messaging applications support shooting and instantly sharing small videos. Such videos are usually recorded on devices with a relatively high resolution (e.g., 960×544); to meet instant-sharing requirements the bit rate is kept low (e.g., at most 1.2 Mbps); and the duration is usually short, e.g., 10 s, 8 s, or 5 s. Such videos are generally called low-bitrate small videos. When encoding small videos shot instantly, Advanced Video Coding (AVC) hardware encoding is often adopted directly to ensure the speed of instant sharing. For example, a currently popular instant messaging application records small videos at a resolution of 960×544 and a bit rate of 1.2 Mbps using AVC hardware encoding. Although small videos recorded this way occupy little space and transmit quickly, the picture is unclear and may even show large mosaic or blocking artifacts. Especially when details are rich and the scene is complex, the mosaic in the recorded small video is severe, harming the user experience.
The inventors of the present application therefore studied how to improve video quality while maintaining processing speed and transmission rate during low-bitrate video recording in scenarios like the above. In the course of this research, the inventors found that in such low-bitrate video encoding, frame types are determined and encoding is performed directly on the captured video data, the I-frame interval is short, and, to improve encoding efficiency, the video frames of the small video include only I frames and P frames, which results in a large amount of encoded data. It is therefore difficult for the conventional encoding method to obtain high-quality low-bitrate video pictures while balancing video data size against transmission rate. In view of this, after extensive research and analysis, the inventors propose a video processing method that balances processing speed, picture quality, and transmission rate when recording video. The method is suitable for recording the low-bitrate small videos of the above scenario, so that small videos recorded at a relatively high resolution retain high clarity.
Referring to fig. 1, fig. 1 is a schematic view of a video processing and encoding scene according to the present application. In the video processing method provided by the embodiment of the application, video content 1011 is acquired by the shooting module 108 and processed by the processor 102. The processor 102 may include a preprocessor 1021 and an encoder 1023: the preprocessor 1021 preprocesses the video content 1011, denoising and blurring it, and the encoder 1023 encodes the preprocessed video content 1011. The video processing method provided by the embodiment of the application thus removes high-frequency noise from the video content through preprocessing before encoding, which helps reduce noise while keeping the key information in the video content, and balances the processing speed, picture quality, and transmission rate of the video.
Referring to fig. 2, an embodiment of the present application provides a video processing method which, in practical applications, is applied to an electronic device with a camera; the electronic device may be a mobile phone, a tablet computer, or another portable mobile terminal (e.g., a smart watch or a camera). The video processing method may include steps S101 to S105.
S101: Capture a video and extract a frame image to be processed from the video.
Specifically, a video is captured through a camera of the electronic device, and frame images to be processed are extracted from it in real time.
S103: Perform blur processing on the frame image to be processed to obtain a blurred image.
Blur processing in the embodiments of the present application is to be understood as processing the YUV data of the frame image to be processed, for example reducing the degree of sharpening of the image and removing image noise and unnecessary detail. Specifically, in some embodiments, the YUV data of the frame image to be processed is first extracted; after temporal noise reduction is applied to the YUV data, the frame image is reduced in size and then enlarged back to its original size, which completes the blur processing and yields the blurred image.
Blurring the frame image to be processed completes its preprocessing and discards a portion of its detail, namely detail the human eye is insensitive to (such as high-frequency noise and over-sharpened regions). This benefits the subsequent encoding of the frame image: it reduces the amount of encoded data and raises the encoding rate, and thereby also improves the post-processed picture quality.
S105: Determine encoding parameters and encode the blurred image.
In some embodiments, determining the encoding parameters requires determining the type of each video frame to be processed; the blurred image is then encoded according to its frame type.
Specifically, the blurred image is encoded based on the H.264 encoding standard. In this embodiment the video frame types are I frames, P frames, and B frames. An I frame is an intra-coded reference frame, also called a key frame; it is the first frame of a GOP (group of pictures), and its encoding is independent of the preceding and following frames. A P frame is a predictively coded picture, also called a predicted frame; it compresses the transmitted data volume by substantially reducing the temporal redundancy it shares with previously encoded frames in the picture sequence. A B frame is a bidirectionally predicted frame whose reference frames are the adjacent preceding frames, the current frame, and the following frames. Once an I-frame interval has been set in video encoding, P frames or B frames are placed between two adjacent I frames. In some embodiments, the first video frame is determined to be an I frame and the frames after it are determined to be B frames or/and P frames; the I frame is then intra-coded and the B or/and P frames are inter-coded.
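As an illustration of the frame-type layout just described, the following Python sketch lays out one GOP: an intra-coded I frame first, then alternating B and P frames until the next I frame. This is our sketch, not code from the patent; the function name and the strict B/P alternation are assumptions.

    def gop_frame_types(gop_size, use_b_frames=True):
        """Label each frame of a GOP: an I frame first, then alternating B/P frames."""
        types = ["I"]  # the first frame of a GOP is the intra-coded key frame
        for i in range(1, gop_size):
            if use_b_frames and i % 2 == 1:
                types.append("B")  # bidirectionally predicted frame
            else:
                types.append("P")  # forward-predicted frame
        return types

    print(gop_frame_types(8))  # ['I', 'B', 'P', 'B', 'P', 'B', 'P', 'B']

With use_b_frames=False the layout degenerates to the I/P-only stream mentioned earlier, which simplifies encoding at the cost of compression efficiency.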
In the embodiment provided by the application, blur processing of the frame image to be processed completes its preprocessing and reduces the amount of data to be encoded in advance. The preprocessed image is then encoded according to the determined encoding parameters, which compresses the video data volume and improves encoding efficiency while removing blocking artifacts and mosaic from the video, thereby ensuring high picture quality and improving video clarity.
In addition, H.264 has a high data compression ratio: at equal picture quality, its compression ratio is more than twice that of MPEG-2 and 1.5 to 2 times that of MPEG-4. For example, an 88 GB original file compressed under the MPEG-2 standard comes to 3.5 GB, a compression ratio of 25:1, whereas compressed under the H.264 standard it comes to 879 MB, a compression ratio of about 102:1. The low bit rate plays an important role in H.264's high compression ratio, and compared with compression technologies such as MPEG-2 and MPEG-4 ASP, H.264 can greatly reduce users' upload time and data charges. Furthermore, H.264 delivers high-quality, smooth images alongside its high compression ratio, so when the video processing method of this embodiment is used to process the low-bitrate (1.2 Mbps) video of the above scene, the H.264-compressed video data needs less bandwidth during network transmission and is more economical.
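The quoted compression ratios can be verified with a few lines of arithmetic (a quick check only; it assumes 1 GB = 1024 MB):

    original_mb = 88 * 1024                    # 88 GB original file, in MB
    mpeg2_ratio = original_mb / (3.5 * 1024)   # MPEG-2 output of 3.5 GB -> about 25:1
    h264_ratio = original_mb / 879             # H.264 output of 879 MB -> about 102:1
    print(round(mpeg2_ratio, 1), round(h264_ratio, 1))  # 25.1 102.5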
Referring to fig. 3, based on the video processing method above, the present application further provides another video processing method. In this embodiment, when encoding video frames, the method sets the type of each video frame according to its motion scene and then encodes the video according to that type, so that high picture quality can be ensured when recording dynamic video scenes. The video processing method provided by this embodiment may include steps S201 to S205.
S201: Capture a video and extract a frame image to be processed from the video.
Specifically, a video is captured through a camera of the electronic device, and frame images to be processed are extracted in real time. When the video processing method is applied to sharing small videos over instant messaging, a maximum capture duration is usually set; that is, there is a limit on the duration of the video captured by the camera of the electronic device, which makes it convenient to set the subsequent encoding parameters. In some embodiments, the total allowed duration of the video may be 5 to 30 seconds, for example 5, 10, or 15 seconds. In some embodiments, recording stops automatically when the recording duration reaches the total allowed duration of the video.
Accordingly, in some embodiments the video processing method may be applied to video recording in a network-based application (e.g., an instant messaging or social networking application), and the method may further include the step of automatically stopping the recording when the recording duration exceeds a preset value, where the preset value is the set total allowed duration of the video.
S203: Perform blur processing on the frame image to be processed to obtain a blurred image.
Blur processing in the embodiment of the present application is understood as processing the YUV data of the frame image to be processed, for example reducing the degree of sharpening of the image and removing image noise and unnecessary detail. Referring to fig. 4, in some embodiments, S203 may include steps S2031 to S2035.
S2031: Extract the YUV data of the frame image to be processed.
In some embodiments, if the frame image to be processed is in YUV format, its YUV data is extracted directly. In other embodiments the frame image is in another format, for example RGB, and must be converted from RGB to YUV before the YUV data can be extracted. In that case S2031 may include the steps of: determining the format of the frame image to be processed; if the frame image is in YUV format, extracting the YUV data; and if the frame image is in RGB format, converting it to YUV format and then extracting the YUV data.
YUV is a color encoding method. It denotes a family of true-color spaces; terms such as Y'UV, YUV, YCbCr, and YPbPr overlap in meaning and may all loosely be called YUV. "Y" represents luminance (luma), while "U" and "V" represent chrominance and saturation (chroma). RGB, the three-primary-color model (RGB color model), also called the red-green-blue model, is an additive color model that produces a wide range of colors by varying the red (Red), green (Green), and blue (Blue) channels and superimposing them in different proportions; its common application is detecting, representing, and displaying images in electronic systems. Conversion between YUV data and RGB data may be realized through a preset conversion matrix, as sketched below.
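The following is a minimal sketch of the format check and conversion of S2031; it is our illustration, not the patent's code. The BT.601 analog-YUV matrix is one common choice for the preset conversion matrix, and the function names are assumptions.

    import numpy as np

    # BT.601 analog-YUV matrix: the rows give Y, U, V in terms of R, G, B.
    BT601 = np.array([
        [ 0.299,    0.587,    0.114  ],   # Y (luma)
        [-0.14713, -0.28886,  0.436  ],   # U = 0.492 * (B - Y)
        [ 0.615,   -0.51499, -0.10001],   # V = 0.877 * (R - Y)
    ])

    def rgb_to_yuv(rgb):
        """Convert an H x W x 3 RGB image (float, range 0..1) to YUV."""
        return rgb @ BT601.T

    def extract_yuv(frame, fmt):
        # A YUV frame is used directly; an RGB frame is converted first (S2031).
        return frame if fmt == "YUV" else rgb_to_yuv(frame)

Libraries such as OpenCV offer an equivalent built-in path, e.g. cv2.cvtColor with cv2.COLOR_RGB2YUV.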
S2033: Perform time-domain noise reduction on the YUV data to obtain a noise-reduced image.
During video capture, noise appears in the picture for reasons such as ambient light and shooting parameters (e.g., exposure settings). By probability distribution, the noise may be Gaussian, Rayleigh, gamma, exponential, or uniform. In the embodiment of the application, to suppress the noise and improve the quality of the frame image to be processed, and thereby facilitate the subsequent processing of the video, the frame image must first undergo noise-reduction preprocessing.
Further, in some embodiments, after the YUV data is extracted, a filter separates the high-frequency and low-frequency color signals in the YUV data, the high-frequency color signal is filtered out, and the noise-reduced image is obtained. Because the bandwidth of the color components is usually narrow and the human visual system is insensitive to high-frequency color signals, the high-frequency colors can be removed in the time domain by low-pass filtering, eliminating high-frequency noise from the frame image to be processed. In some embodiments a simple low-pass filter, such as a Gaussian or averaging filter, is used to suppress the image noise; this helps separate the desired image content from noise interference and also avoids smearing of moving objects or moving scenes during video processing. A sketch of such a temporal low-pass filter follows.
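This sketch is our illustration; the patent does not fix a particular filter, and the blend weight alpha is an assumed tuning parameter. Each output frame is a weighted average of the current frame and the running filtered result, which attenuates frame-to-frame (high-frequency) noise.

    def temporal_denoise(curr, prev_filtered, alpha=0.7):
        """First-order temporal low-pass over YUV frames (numpy arrays)."""
        if prev_filtered is None:
            return curr  # first frame: nothing to average against yet
        # A heavier weight on the running average removes more temporal noise,
        # but risks ghosting on fast motion, hence the motion checks later on.
        return alpha * prev_filtered + (1.0 - alpha) * curr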
Specifically, the frame image to be processed is denoised with a Gaussian filter, a linear filter that effectively suppresses noise and smooths the frame image. A Gaussian filter operates much like an averaging filter, taking the weighted average of the pixels within the filter window as the output; the difference lies in the window template coefficients. The template coefficients of an averaging filter are all identical (all 1), whereas the template coefficients of a Gaussian filter decrease with increasing distance from the template center. The Gaussian filter therefore blurs the frame image to be processed less than the mean filter does.
For example, a 5×5 Gaussian filter window is generated by sampling with the center of the template as the coordinate origin: the coordinates of each template position are substituted into the Gaussian function, and the resulting values are the template coefficients. Convolving this Gaussian filter window with the frame image to be processed then reduces its noise, as in the sketch below.
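A Python sketch of that 5×5 window follows (our illustration; sigma is an assumed parameter, since the patent does not fix a value): the template coordinates are sampled with the center as the origin, substituted into the Gaussian function, normalized, and slid over the frame.

    import numpy as np
    import cv2

    def gaussian_kernel(size=5, sigma=1.0):
        half = size // 2
        ys, xs = np.mgrid[-half:half + 1, -half:half + 1]  # template center = origin
        kernel = np.exp(-(xs**2 + ys**2) / (2.0 * sigma**2))  # Gaussian function
        return kernel / kernel.sum()  # normalize so overall brightness is preserved

    frame = np.random.rand(544, 960).astype(np.float32)    # stand-in luma plane
    smoothed = cv2.filter2D(frame, -1, gaussian_kernel())  # slide the window over the frame

Because the kernel is symmetric, the correlation that cv2.filter2D computes equals true convolution here; cv2.GaussianBlur(frame, (5, 5), 1.0) is an equivalent built-in.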
S2035: Perform blur processing on the noise-reduced image to obtain the blurred image.
Blur processing of the frame image to be processed completes its preprocessing and discards a portion of its noisy detail, namely detail the human eye is insensitive to (such as high-frequency noise and over-sharpened regions). This reduces the amount of data to be encoded, so both the encoding rate and the post-processed picture quality can be improved when the frame image is encoded.
In this embodiment the image is blurred by scaling. Specifically, the noise-reduced image is first reduced and then enlarged, so that unnecessary detail is effectively removed during reduction while the image features the human eye is more sensitive to are retained. S2035 may then include: determining the size of the noise-reduced image as the original size; reducing the noise-reduced image to obtain a reduced image; and enlarging the reduced image back to the original size to obtain the blurred image. When the noise-reduced image is reduced, the reduction factor is not limited; for example, the ratio of the reduced size to the original size may be 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, and so on. Choosing an appropriate reduction factor avoids over-compressing the image during scaling, preserving the necessary image detail.
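A minimal sketch of this scale-based blur follows (our illustration; the 0.5 factor is one of the ratios listed above, and the interpolation choices are our assumptions):

    import cv2

    def blur_by_scaling(image, factor=0.5):
        """Shrink the noise-reduced image, then enlarge it back to its original size."""
        h, w = image.shape[:2]                            # remember the original size
        small = cv2.resize(image, None, fx=factor, fy=factor,
                           interpolation=cv2.INTER_AREA)  # shrinking discards fine detail
        return cv2.resize(small, (w, h),
                          interpolation=cv2.INTER_LINEAR)  # enlarge back; detail stays lost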
S205: Determine encoding parameters and encode the blurred image.
Specifically, the blurred image is encoded based on the H.264 encoding standard.
In some embodiments, the encoding parameters include, but are not limited to: the quantization parameter (QP) value, the video frame type, and the frame rate.
In quantization and inverse quantization, the QP value determines the encoding compression rate and picture precision of the quantizer. A larger QP value means a smaller dynamic range of the quantized values and a correspondingly shorter code length, but more image detail is lost during inverse quantization; a smaller QP value means a larger dynamic range and a longer code length, but less loss of image detail. The H.264 encoding standard defines 52 quantization parameter values: the minimum QP value, 0, represents the finest quantization, and the maximum, 51, the coarsest. In this embodiment the QP value of the frame to be processed is set within the range 20 to 44 to balance image detail against code length. The QP value may be any value or sub-range within 20 to 44; for example, it may be 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, or 44. In other embodiments, the encoder may change the QP value automatically according to the actual dynamic range of the image, trading code length against image precision to achieve the best overall video processing result.
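Illustratively, a wrapper around the encoder might keep the QP inside the 20-44 band used by this embodiment (a sketch only; the clamping helper is our assumption, not a rule given in the patent):

    def choose_qp(requested_qp, lo=20, hi=44):
        """Clamp a requested quantization parameter to the embodiment's 20-44 band."""
        return max(lo, min(hi, requested_qp))

    print(choose_qp(12), choose_qp(30), choose_qp(50))  # 20 30 44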
In this embodiment the video frame types are I frames, P frames, and B frames. In S205 the first video frame is determined to be an I frame and the frames after it to be B frames or/and P frames or/and I frames; the I frame is then intra-coded and the B or/and P frames are inter-coded. Appropriately reducing the number of I frames reduces the data volume of the video and so saves encoded data. Further, referring to fig. 5, the video frames may be encoded according to their types, in which case S205 may include steps S2051 to S2053.
S2051: Determine that the first frame image of the video is an I frame, and intra-code the I frame.
Further, in some embodiments the number of I frames in the video is controlled by setting the frame interval duration of the I frames, which helps save encoded data. Specifically, in some application scenarios the recorded video has a maximum duration limit; for example, a currently popular instant messaging application allows sharing small videos recorded for at most 10 seconds, and the I-frame interval duration may then be limited according to the total allowed duration of the video. For example, the I-frame interval duration may be at least 1/4, 1/3, 1/2, 2/3, or 3/4 of the total allowed duration, and it may even exceed the total duration allowed for recording. For a scene in which the total allowed recording duration is fixed, the I-frame interval may be set to a specified duration, for example 11 seconds.
S2053: Determine that the video frames after the I frame are B frames or/and P frames, and inter-code the video frames after the I frame.
In some embodiments, the video frames after the I frame are determined to be B frames and P frames in strict alternation. Interleaving B frames and P frames allows frame compression efficiency and image quality to be achieved at the same time.
In other embodiments, adaptive B-frame placement (use adaptive B-frame placement) may be enabled to let the encoder override the planned frame types and improve quality; for example, when the encoder detects a scene change, or the frame following the current frame is an I frame, the designated video frame is set to a B frame through the adaptive setting. In general, the alternation frequency of B and P frames may be determined according to the shooting scene of the video frames, or a designated video frame may be set to a B frame according to its shooting scene, to improve encoding efficiency. S2053 may then include: judging the motion scene of each video frame after the I frame; adaptively adjusting the type of that video frame according to the judgment result; and encoding the video frame according to its type. Specifically, if any frame after the I frame is in a motion scene, it is determined to be a B frame; otherwise it is determined to be a P frame.
Whether a video frame is in a motion scene can be judged from the displacement of the same feature between adjacent frame images. In that case the motion scene judgment of the video frames after the I frame comprises: acquiring a first coordinate of a specified feature A of the current frame in the current frame image, acquiring a second coordinate of the same feature A in the previous frame image, and, if the difference between the first and second coordinates is greater than a specified value, considering the current frame to be in a motion scene.
For example, at the current Nth frame the coordinates of feature A have been determined to be (X, Y, Z). Comparing the coordinates of feature A in the Nth and (N-1)th frames yields a change increment (X1, Y1, Z1); when this increment is greater than a specified value, the video frame is considered to be in a motion scene. A motion scene may be understood as a scene in which a moving object is present in the captured picture, so that the picture elements change rapidly, for example when the shooting device (e.g., the electronic device) shakes strongly, the shooting scene changes, or a car or a person runs through the frame.
In other embodiments, whether a video frame is in a motion scene may be judged in other ways, for example through the correlation between adjacent frame images. Specifically, the image information (e.g., color distribution) of two adjacent frame images may be obtained and compared to yield the correlation between them; if the correlation is smaller than a preset value, the video frame is considered to be in a motion scene. In that case the motion scene judgment of the video frames after the I frame comprises: acquiring first image information of the current frame, acquiring second image information of the previous frame, obtaining the difference between the first and second image information, and, if the difference is greater than a specified value, determining that the current frame is in a motion scene.
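A hedged sketch of this second judging method follows; the mean absolute pixel difference stands in for the "image information" comparison, and the threshold is an assumed value for illustration.

    import numpy as np

    def in_motion_scene(curr, prev, threshold=12.0):
        """Flag a motion scene when adjacent 8-bit frames differ strongly."""
        diff = np.mean(np.abs(curr.astype(np.float32) - prev.astype(np.float32)))
        return diff > threshold

    def frame_type_after_i(curr, prev):
        # Motion scene -> B frame; otherwise P frame, per the embodiment above.
        return "B" if in_motion_scene(curr, prev) else "P"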
In the embodiment provided by the application, blur processing of the frame image to be processed completes its preprocessing, and the preprocessed image is encoded according to the determined encoding parameters, which compresses the video data volume and improves encoding efficiency while ensuring high picture quality, thereby removing blocking artifacts and mosaic from the video. Furthermore, when encoding video frames, the type of each video frame is set according to its motion scene and the video is encoded according to those types, so the video processing method can ensure high picture quality when recording dynamic video scenes.
Referring to fig. 6, based on the video processing method provided in the foregoing embodiments, the present application further provides a video processing apparatus 300, whose block diagram is shown in fig. 6. The video processing apparatus 300 runs on the electronic device 100 shown in fig. 7 and is configured to execute the video processing method described above. In this embodiment, the video processing apparatus 300 is stored in a memory of the electronic device 100 and configured to be executed by one or more processors of the electronic device 100.
In the embodiment shown in fig. 6, the video processing apparatus 300 includes a video capture module 310, a preprocessing module 330, and an encoding module 350. These may be program modules running from a computer-readable storage medium; their purposes and workings are as follows:
The video capture module 310 is configured to capture a video and extract frame images to be processed from the video. Specifically, the video capture module 310 captures the video through a camera of the electronic device and extracts the frame images to be processed in real time.
The preprocessing module 330 is configured to preprocess the video captured by the video capture module 310. Specifically, the preprocessing module 330 performs blur processing on the frame image to be processed to obtain a blurred image.
Blur processing here is understood as processing the YUV data of the frame image to be processed, for example reducing the degree of sharpening of the image and removing image noise and unnecessary detail. Further, in some embodiments, the preprocessing module 330 may include a YUV data extraction unit 331, a noise reduction unit 333, and a blur processing unit 335.
The YUV data extraction unit 331 is configured to extract the YUV data of the frame image to be processed. In some embodiments the frame image is in YUV or RGB format, and the YUV data extraction unit 331 determines which: if the frame image is in YUV format, the unit extracts the YUV data directly; if it is in RGB format, the unit converts the frame image to YUV format and then extracts the YUV data.
The noise reduction unit 333 is configured to perform time-domain noise reduction on the YUV data and obtain a noise-reduced image. After the YUV data extraction unit 331 extracts the YUV data, the noise reduction unit 333 uses a filter to separate the high-frequency and low-frequency color signals in the YUV data and removes the high-frequency color in the time domain by low-pass filtering, eliminating high-frequency noise from the frame image to be processed.
The blur processing unit 335 is configured to perform blur processing on the noise-reduced image and obtain the blurred image. In this embodiment, blurring the frame image to be processed through the blur processing unit 335 completes the preprocessing of the frame image, discarding a portion of its detail; this benefits the encoding of the frame image, and both the encoding rate and the post-processed picture quality can be improved.
In this embodiment, the blur processing unit 335 is further configured to take the size of the noise-reduced image as the original size, reduce the noise-reduced image to obtain a reduced image, and enlarge the reduced image back to the original size to obtain the blurred image. When the blur processing unit 335 reduces the noise-reduced image, the reduction factor is not limited; for example, the ratio of the reduced size to the original size may be 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, and so on.
The encoding module 350 is configured to determine the encoding parameters for encoding the blurred image. Specifically, the encoding module 350 encodes the blurred image based on the H.264 encoding standard. The encoding module 350 includes a QP value setting unit 351, a frame type setting unit 353, and an encoding unit 355.
In this embodiment, the QP value setting unit 351 is configured to set the QP value of the frame to be processed within the range 20 to 44, balancing image detail against code length. The QP value may be any value or sub-range within 20 to 44; for example, it may be 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, or 44. In other embodiments, the QP value setting unit 351 is configured to change the QP value automatically according to the actual dynamic range of the image.
The frame type setting unit 353 is configured to determine that the first video frame is an I frame and that the frames after it are B frames or/and P frames. In some embodiments, the frame type setting unit 353 determines the frames after the I frame to be B frames and P frames in alternation; interleaving B and P frames allows frame compression efficiency and image quality to be achieved at the same time. In other embodiments, the frame type setting unit 353 enables adaptive B-frame placement; for example, when the encoder detects a scene change, or the frame after the current frame is an I frame, the designated video frame is set to a B frame through the adaptive setting.
Further, the frame type setting unit 353 is configured to appropriately reduce the number of I frames, reducing the data volume of the video and so saving encoded data. Specifically, the frame type setting unit 353 may encode video frames according to their types; to this end it may include an I-frame determining subunit 3531, a frame scene judging subunit 3533, a B-frame determining subunit 3535, and a P-frame determining subunit 3537.
The I-frame determining subunit 3531 is configured to determine that the first frame image of the video is an I frame. It is further configured to set the frame interval duration of the I frames, controlling the number of I frames in the video and helping save encoded data. In particular, the I-frame determining subunit 3531 may limit the I-frame interval duration according to the total allowed duration of the video; for example, it may set the interval to 1/4, 1/3, 1/2, 2/3, or 3/4 of that total, and the interval may even exceed the total duration allowed for recording. For scenes where the total allowed recording duration is fixed, the I-frame determining subunit 3531 may set the I-frame interval to a fixed value, for example 11 seconds.
The frame scene judging subunit 3533 is configured to judge the shooting scene of each video frame so that the B-frame determining subunit 3535 and the P-frame determining subunit 3537 can determine the alternation of B and P frames. Specifically, the frame scene judging subunit 3533 performs motion scene judgment on the video frames after the I frame; if any such frame is in a motion scene, the B-frame determining subunit 3535 determines it to be a B frame, and otherwise the P-frame determining subunit 3537 determines it to be a P frame. In some embodiments, the frame scene judging subunit 3533 acquires a first coordinate of a specified feature A of the current frame in the current frame image and a second coordinate of the same feature in the previous frame image, obtains the difference between the two coordinates, and, if the difference is greater than a specified value, determines that the current frame is in a motion scene. In other embodiments, the frame scene judging subunit 3533 acquires first image information of the current frame and second image information of the previous frame, obtains the difference between them, and, if the difference is greater than a specified value, determines that the current frame is in a motion scene.
The encoding unit 355 is configured to encode the video frames according to their types. Specifically, the encoding unit 355 intra-codes I frames and inter-codes B or/and P frames.
As in the method embodiments above, blur processing of the frame image to be processed completes its preprocessing, and encoding the preprocessed image according to the determined encoding parameters compresses the video data volume and improves encoding efficiency while ensuring high picture quality, removing blocking artifacts and mosaic from the video; setting the frame type according to the motion scene of each video frame likewise ensures high picture quality when recording dynamic video scenes.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling, direct coupling, or communication connection between the modules shown or discussed may run through certain interfaces, and the indirect coupling or communication connection between devices or modules may be electrical, mechanical, or take other forms.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
Referring to fig. 7 and 8, based on the video processing apparatus 300 and the video processing method, an embodiment of the present application further provides an electronic device 100; fig. 8 shows its block diagram. The electronic device 100 may be a smartphone, a tablet computer, an e-book reader, or another electronic device capable of running applications. The electronic device 100 includes an electronic body 10, which includes a housing 12 and a main display 14 disposed on the housing 12. In this embodiment, the main display 14 generally includes a display panel 111 and may include circuitry for responding to touch operations performed on the display panel 111. The display panel 111 may be a liquid crystal display (LCD) panel, and in some embodiments the display panel 111 is a touch screen 109.
In a practical application scenario, the electronic device 100 may serve as a smartphone terminal, in which case the electronic body 10 typically further includes one or more of the following components (only one of each is shown in fig. 8): a processor 102, a memory 104, a shooting module 108, an audio circuit 110, an input module 118, a power module 122, and one or more applications, wherein the one or more applications may be stored in the memory 104 and configured to be executed by the one or more processors 102 to perform the methods described in the foregoing method embodiments. Those skilled in the art will understand that the structure shown in fig. 8 is merely illustrative and does not limit the structure of the electronic body 10; for example, the electronic body 10 may include more or fewer components than shown in fig. 8, or a configuration different from that shown in fig. 8.
The processor 102 may include one or more processing cores. Using various interfaces and lines, the processor 102 connects the parts of the electronic device 100, and it performs the functions of the electronic device 100 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 104 and by invoking the data stored in the memory 104. Optionally, the processor 102 may be implemented in hardware using at least one of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA). The processor 102 may integrate one or more of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like: the CPU mainly handles the operating system, user interface, applications, and so on; the GPU renders and draws display content; and the modem handles wireless communication. The modem may also be implemented as a separate communication chip rather than integrated into the processor 102.
The memory 104 may include random access memory (RAM) or read-only memory (ROM) and may be used to store instructions, programs, code sets, or instruction sets. The memory 104 may include a program storage area and a data storage area. The program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as touch, sound playing, or image playing), instructions for implementing the method embodiments described herein, and the like. The data storage area may store data created by the electronic device 100 during use (e.g., phone book, audio and video data, chat logs), and the like.
The shooting module 108 may be a camera disposed on the electronic body 10 and used for shooting tasks such as taking photos, recording videos, or making video calls.
The audio circuitry 110, speaker 101, sound jack 103, microphone 105 collectively provide an audio interface between a user and the electronic body portion 10 or main display 14. Specifically, audio circuitry 110 receives sound data from processor 102, converts the sound data to an electrical signal, and transmits the electrical signal to speaker 101. The speaker 101 converts the electric signal into a sound wave that can be heard by the human ear. The audio circuitry 110 also receives electrical signals from the microphone 105, converts the electrical signals to sound data, and transmits the sound data to the processor 102 for further processing.
In this embodiment, the input module 118 may include the touch screen 109 disposed on the main display 14. The touch screen 109 may collect the user's touch operations on or near it (e.g., operations performed with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Besides the touch screen 109, in other variations the input module 118 may include other input devices, such as keys 107 or the microphone 105. The keys 107 may include, for example, character keys for inputting characters and control keys for triggering control functions; examples of control keys include a home key and a power on/off key. The microphone 105 may be used to receive the user's voice commands.
The main display 14 displays information input by the user, information provided to the user, and the various graphical user interfaces of the electronic body 10, which may be composed of graphics, text, icons, numbers, video, and any combination thereof. In one example, the touch screen 109 may be disposed on the display panel 111 so as to form an integral whole with the display panel 111.
The power module 122 is used to provide a supply of power to the processor 102 and other components. In particular, the power module 122 may include a power management device, one or more power sources (e.g., batteries or ac power), a charging circuit, a power failure detection circuit, an inverter, a power status indicator light, and any other components associated with the generation, management, and distribution of power within the electronic body portion 10 or the primary display screen 14.
It should be understood that the electronic device 100 described above is not limited to a smartphone terminal; it refers to a computer device that can be used while mobile. Specifically, the electronic device 100 is a mobile computer device equipped with an intelligent operating system, and includes, but is not limited to, a smartphone, a smart watch, a notebook, a tablet computer, a POS terminal, and even a vehicle-mounted computer.
Referring to fig. 9, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable storage medium 800 has stored therein program code that can be invoked by a processor to perform the methods described in the method embodiments above.
The computer-readable storage medium 800 may be an electronic memory such as flash memory, EEPROM (electrically erasable programmable read-only memory), EPROM, a hard disk, or ROM. Optionally, the computer-readable storage medium 800 comprises a non-transitory computer-readable storage medium. The computer-readable storage medium 800 has storage space for program code 810 that performs any of the method steps described above. The program code can be read from or written to one or more computer program products, and the program code 810 may, for example, be compressed in a suitable form.
In the description herein, reference to "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a particular feature or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present application. The particular features or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, those skilled in the art may join and combine the different embodiments or examples, and the features of different embodiments or examples, described in this specification, provided they do not contradict one another.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable storage medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them). For the purposes of this description, a "computer-readable storage medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable storage medium may even be paper or another suitable medium upon which the program is printed, since the program can be captured electronically, for instance by optically scanning the paper or other medium, and then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one of, or a combination of, the following techniques known in the art may be used: discrete logic circuits having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having appropriate combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
It will be understood by those skilled in the art that all or part of the steps of the methods in the above embodiments may be performed by related hardware under the direction of a program; the program may be stored in a computer-readable storage medium and, when executed, carries out one or a combination of the steps of the method embodiments. In addition, the functional units in the embodiments of the present application may be integrated into one processing module, each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If implemented in the form of a software functional module and sold or used as a standalone product, the integrated module may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, and some technical features may be equivalently replaced, without such modifications or substitutions causing the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present application.

Claims (20)

  1. A video processing method applied to an electronic device, the method comprising:
    collecting a video, and extracting a frame image to be processed of the video;
    blurring the frame image to be processed to obtain a blurred image; and
    determining encoding parameters and encoding the blurred image.
  2. The method according to claim 1, wherein blurring the frame image to be processed to obtain a blurred image comprises:
    extracting YUV data of the frame image to be processed;
    performing time-domain noise reduction on the YUV data to obtain a noise-reduced image; and
    blurring the noise-reduced image to obtain the blurred image.
  3. The method according to claim 2, wherein blurring the noise-reduced image to obtain the blurred image comprises:
    taking the size of the noise-reduced image as an original size;
    reducing the size of the noise-reduced image to obtain a reduced image; and
    enlarging the reduced image back to the original size to obtain the blurred image.
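For illustration only (the claim prescribes neither an implementation nor a reduction ratio), the shrink-and-enlarge blur of claim 3 might be sketched in Python with OpenCV as follows; the 0.5 scale factor is an assumed value:

    import cv2

    def blur_by_resampling(image, scale=0.5):
        # Record the original size, shrink the image, then enlarge it back.
        h, w = image.shape[:2]
        small = cv2.resize(image, (max(1, int(w * scale)), max(1, int(h * scale))),
                           interpolation=cv2.INTER_AREA)
        return cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)

Downsampling discards high-frequency detail, and the interpolation on the way back up cannot restore it; that loss is what produces the blur.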
  4. The method according to claim 2, wherein extracting the YUV data of the frame image to be processed comprises:
    determining the format of the frame image to be processed;
    if the frame image to be processed is in a YUV format, extracting the YUV data directly; and
    if the frame image to be processed is in an RGB format, converting the frame image to be processed into the YUV format and then extracting the YUV data.
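A minimal sketch of the format branch in claim 4, assuming a hypothetical pixel_format tag supplied by the capture pipeline:

    import cv2

    def extract_yuv(frame, pixel_format):
        if pixel_format == "YUV":
            return frame                                   # already YUV, extract directly
        if pixel_format == "RGB":
            return cv2.cvtColor(frame, cv2.COLOR_RGB2YUV)  # convert first, then extract
        raise ValueError("unsupported pixel format: " + pixel_format)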
  5. The method of claim 2, wherein performing time-domain noise reduction on the YUV data to obtain the noise-reduced image comprises:
    distinguishing a high-frequency color signal from a low-frequency color signal in the YUV data; and
    filtering the high-frequency color signal to obtain the noise-reduced image.
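The claim leaves the filter open. As one assumed realization, a per-pixel running average over time serves as the low-frequency estimate, and fast frame-to-frame fluctuations (the high-frequency component, typically sensor noise) are damped; the blend weight is an assumed value:

    import numpy as np

    class TemporalDenoiser:
        def __init__(self, alpha=0.25):
            self.alpha = alpha   # blend weight for the new frame; assumed value
            self.avg = None      # running low-frequency estimate

        def filter(self, yuv):
            frame = yuv.astype(np.float32)
            if self.avg is None:
                self.avg = frame
            else:
                # The residual above the running average is attenuated.
                self.avg = (1.0 - self.alpha) * self.avg + self.alpha * frame
            return self.avg.astype(np.uint8)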
  6. The method of claim 1, wherein determining the encoding parameters and encoding the blurred image comprises: determining that a first frame image of the video is an I frame, and performing intra-frame coding on the I frame.
  7. The method of claim 6, wherein, when the first frame image of the video is determined to be an I frame, an I-frame interval duration of the video is set to a specified duration.
  8. The method of claim 7, wherein the video processing method is applied to video recording of a web-based application, the video recording having a total duration limit, and the specified duration is greater than the total duration allowed for the video recording.
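A sketch of the interval choice in claims 7-8: picking an I-frame interval strictly longer than the recording cap guarantees that a finished clip contains exactly one I frame, at its head. The function below is illustrative, not taken from the application:

    def i_frame_interval(fps, max_record_seconds):
        # Strictly greater than any clip length, expressed in frames.
        return int(fps * max_record_seconds) + 1

At an assumed 30 fps with a 60-second cap this gives 1801 frames, so no second I frame can occur within a clip.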
  9. The method of claim 6, wherein determining the encoding parameters and encoding the blurred image further comprises: determining that the video frames following the I frame are B frames and P frames, and performing inter-frame coding on the video frames following the I frame.
  10. The method of claim 9, wherein determining that the video frames following the I frame are B frames and P frames comprises: setting the video frames following the I frame as sequentially alternating B frames and P frames.
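A sketch of the default assignment in claim 10 (the function name is illustrative):

    def default_frame_type(frame_index):
        # Frame 0 is the single leading I frame; later frames alternate B, P, B, P, ...
        if frame_index == 0:
            return "I"
        return "B" if frame_index % 2 == 1 else "P"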
  11. The method according to claim 10, wherein, when the video frames following the I frame are determined to be B frames and P frames, the alternation frequency of the B frames and the P frames is determined according to the shooting scene of the video.
  12. The method according to claim 10, wherein performing inter-frame coding on the video frames following the I frame comprises:
    judging the motion scene of a video frame following the I frame;
    adaptively adjusting the type of the video frame according to the result of the motion-scene judgment; and
    performing inter-frame coding on the video frame according to its type.
  13. The method according to claim 12, wherein adaptively adjusting the type of the video frame according to the result of the motion-scene judgment comprises: if any video frame following the I frame is in a motion scene, determining that the video frame is a B frame; otherwise, determining that the video frame is a P frame.
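A sketch tying claims 12-13 together; judge_motion is a hypothetical callable implementing the per-frame test of claim 14 or claim 15:

    def assign_frame_types(frames, judge_motion):
        types = ["I"]   # the leading I frame of claim 6
        for i in range(1, len(frames)):
            in_motion = judge_motion(frames[i], frames[i - 1])
            types.append("B" if in_motion else "P")
        return types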
  14. The method according to claim 13, wherein judging the motion scene of the video frame following the I frame comprises:
    acquiring a first coordinate of a specified feature of a current video frame in the image of the current video frame;
    acquiring a second coordinate of the specified feature in the image of the frame preceding the current video frame; and
    acquiring a difference value between the first coordinate and the second coordinate, and if the difference value is greater than a specified value, determining that the current video frame is in a motion scene.
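A sketch of the coordinate test in claim 14; the 8-pixel threshold is an assumed stand-in for the claim's "specified value":

    import numpy as np

    def motion_by_feature(curr_xy, prev_xy, threshold=8.0):
        # Euclidean displacement of the designated feature between frames.
        displacement = np.linalg.norm(np.asarray(curr_xy, dtype=np.float64)
                                      - np.asarray(prev_xy, dtype=np.float64))
        return float(displacement) > threshold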
  15. The method according to claim 13, wherein judging the motion scene of the video frame following the I frame comprises:
    acquiring first image information of a current video frame;
    acquiring second image information of the frame preceding the current video frame; and
    acquiring a difference value between the first image information and the second image information, and if the difference value is greater than a specified value, determining that the current video frame is in a motion scene.
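A sketch of the whole-frame test in claim 15; mean absolute luma difference stands in for the claim's unspecified "image information", and both the measure and the threshold are assumptions:

    import numpy as np

    def motion_by_image_diff(curr_y, prev_y, threshold=12.0):
        # Average per-pixel change of the luma plane between consecutive frames.
        diff = np.mean(np.abs(curr_y.astype(np.int16) - prev_y.astype(np.int16)))
        return float(diff) > threshold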
  16. The method of claim 9, wherein determining the encoding parameters and encoding the blurred image further comprises: determining that the quantization parameter value of the frame to be processed lies in the range of 20 to 44, and encoding the blurred image accordingly.
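The effect of the claimed range can be shown with a one-line clamp (illustrative only); lower QP values spend more bits for finer quantization, higher values quantize more coarsely:

    def clamp_qp(requested_qp):
        # Keep the quantization parameter inside the claimed 20-44 window.
        return max(20, min(44, requested_qp))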
  17. The method of claim 1, wherein the video processing method is applied to video recording of a web-based application, and the video processing method further comprises: automatically stopping the video recording when the recording duration is greater than a preset value.
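A sketch of the duration cap in claim 17; camera is a hypothetical capture object with a read() method:

    import time

    def record_with_cap(camera, max_seconds):
        start = time.monotonic()
        frames = []
        # Recording stops automatically once the preset duration is exceeded.
        while time.monotonic() - start < max_seconds:
            frames.append(camera.read())
        return frames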
  18. A video processing apparatus applied to an electronic device, the video processing apparatus comprising:
    the video acquisition module is used for acquiring a video and extracting a frame image to be processed of the video;
    the preprocessing module is used for blurring the frame image to be processed to obtain a blurred image; and
    the encoding module is used for determining encoding parameters and encoding the blurred image.
  19. An electronic device, comprising:
    one or more processors;
    a memory;
    one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the method of any one of claims 1-17.
  20. A computer-readable storage medium, wherein program code is stored in the computer-readable storage medium, the program code being invokable by a processor to perform the video processing method according to any one of claims 1-17.
CN201880098282.XA 2018-11-15 2018-11-15 Video processing method and device, electronic equipment and computer readable storage medium Pending CN112805990A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/115753 WO2020097888A1 (en) 2018-11-15 2018-11-15 Video processing method and apparatus, electronic device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN112805990A 2021-05-14

Family

ID=70730739

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880098282.XA Pending CN112805990A (en) 2018-11-15 2018-11-15 Video processing method and device, electronic equipment and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN112805990A (en)
WO (1) WO2020097888A1 (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111698512B (en) * 2020-06-24 2022-10-04 北京达佳互联信息技术有限公司 Video processing method, device, equipment and storage medium
CN113298723B (en) * 2020-07-08 2024-10-15 优酷文化科技(北京)有限公司 Video processing method, video processing device, electronic equipment and computer storage medium
CN114501001B (en) * 2020-10-26 2024-08-30 国家广播电视总局广播电视科学研究院 Video coding method and device and electronic equipment
CN112351285B (en) * 2020-11-04 2024-04-05 北京金山云网络技术有限公司 Video encoding method, video decoding method, video encoding device, video decoding device, electronic equipment and storage medium
CN113766322A (en) * 2021-01-18 2021-12-07 北京京东拓先科技有限公司 An image acquisition method, device, electronic device and storage medium
CN113066139B (en) * 2021-03-26 2024-06-21 西安万像电子科技有限公司 Picture processing method and device, storage medium and electronic equipment
CN113259660B (en) * 2021-06-11 2021-10-29 宁波星巡智能科技有限公司 Video compression transmission method, device, equipment and medium based on dynamic coding frame
CN113868123A (en) * 2021-09-14 2021-12-31 咪咕文化科技有限公司 Script generation method, device, equipment and computer program product
CN114302139B (en) * 2021-12-10 2024-09-24 阿里巴巴(中国)有限公司 Video encoding method, video decoding method and device
CN114390236A (en) * 2021-12-17 2022-04-22 云南腾云信息产业有限公司 Video processing method, video processing device, computer equipment and storage medium
CN115550660B (en) * 2021-12-30 2023-08-22 北京国瑞数智技术有限公司 Network video local variable compression method and system
CN114401405A (en) * 2022-01-14 2022-04-26 安谋科技(中国)有限公司 Video coding method, medium and electronic equipment
CN114630057B (en) * 2022-03-11 2024-01-30 北京字跳网络技术有限公司 Method and device for determining special effect video, electronic equipment and storage medium
CN114630124B (en) * 2022-03-11 2024-03-22 商丘市第一人民医院 Neural endoscope backup method and system
CN114640852B (en) * 2022-03-21 2024-08-23 湖南快乐阳光互动娱乐传媒有限公司 Video frame alignment method and device
CN114900736A (en) * 2022-03-28 2022-08-12 网易(杭州)网络有限公司 Video generation method and device and electronic equipment
CN117395381B (en) * 2023-12-12 2024-03-12 上海卫星互联网研究院有限公司 Compression method, device and equipment for telemetry data
CN119031135A (en) * 2024-10-18 2024-11-26 每日互动股份有限公司 A video decoding method, device, medium and equipment based on sampling
CN119562055A (en) * 2025-01-23 2025-03-04 湖北芯擎科技有限公司 Method, device, equipment and storage medium for realizing continuous encoding of image data

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130107066A1 (en) * 2011-10-27 2013-05-02 Qualcomm Incorporated Sensor aided video stabilization
US9165345B2 (en) * 2013-03-14 2015-10-20 Drs Network & Imaging Systems, Llc Method and system for noise reduction in video systems
CN104966266B (en) * 2015-06-04 2019-07-09 福建天晴数码有限公司 The method and system of automatic fuzzy physical feeling
CN107797783A (en) * 2017-10-25 2018-03-13 广东欧珀移动通信有限公司 Control method, control device, and computer-readable storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1799492A (en) * 2005-12-02 2006-07-12 清华大学 Quasi-lossless image compression and decompression method of wireless endoscope system
US20100026880A1 (en) * 2008-07-31 2010-02-04 Atsushi Ito Image Processing Apparatus, Image Processing Method, and Program
CN102546917A (en) * 2010-12-31 2012-07-04 联想移动通信科技有限公司 Mobile terminal with camera and video processing method therefor
CN105103554A (en) * 2013-03-28 2015-11-25 华为技术有限公司 Method for protecting video frame sequence against packet loss
CN103702016A (en) * 2013-12-20 2014-04-02 广东威创视讯科技股份有限公司 Video denoising method and device
CN104661023A (en) * 2015-02-04 2015-05-27 天津大学 Image or video coding method based on predistortion and training filter
CN105825490A (en) * 2016-03-16 2016-08-03 北京小米移动软件有限公司 Gaussian blur method and device of image

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113613024A (en) * 2021-08-09 2021-11-05 北京金山云网络技术有限公司 Video preprocessing method and device
CN113613024B (en) * 2021-08-09 2023-04-25 北京金山云网络技术有限公司 Video preprocessing method and device
CN115396672A (en) * 2022-08-25 2022-11-25 广东中星电子有限公司 Bit stream storage method, device, electronic equipment and computer readable medium
CN115396672B (en) * 2022-08-25 2024-04-26 广东中星电子有限公司 Bit stream storage method, device, electronic equipment and computer readable medium
CN118646930A (en) * 2024-08-16 2024-09-13 浙江嗨皮网络科技有限公司 Video background processing method, system and storage medium based on network signal strength
CN118646930B (en) * 2024-08-16 2024-11-12 浙江嗨皮网络科技有限公司 Video background processing method, system and storage medium based on network signal strength

Also Published As

Publication number Publication date
WO2020097888A1 (en) 2020-05-22

Similar Documents

Publication Publication Date Title
CN112805990A (en) Video processing method and device, electronic equipment and computer readable storage medium
CN105472205B (en) Real-time video noise reduction method and device in encoding process
CN109685726B (en) Game scene processing method and device, electronic equipment and storage medium
CN101371589B (en) Adaptive filtering to enhance video encoder performance
US20110026591A1 (en) System and method of compressing video content
US11627369B2 (en) Video enhancement control method, device, electronic device, and storage medium
CN110149554B (en) Video image processing method, device, electronic device and storage medium
JP7295950B2 (en) Video enhancement control method, device, electronic device and storage medium
CN113099233B (en) Video encoding method, apparatus, video encoding device and storage medium
CN108337465B (en) Video processing method and device
CN104322065A (en) Terminal and video image compression method
US20120069897A1 (en) Method and device for video-signal processing, transmitter, and corresponding computer program product
US20050169537A1 (en) System and method for image background removal in mobile multi-media communications
CN114554212A (en) Video processing apparatus and method, and computer storage medium
CN110740316A (en) Data coding method and device
CN115623215B (en) Method for playing video, electronic equipment and computer readable storage medium
CN116437102A (en) Method, system, equipment and storage medium for learning universal video coding
CN113709504B (en) Image processing method, intelligent terminal and readable storage medium
CN109151574B (en) Video processing method, video processing device, electronic equipment and storage medium
CN109120979B (en) Video enhancement control method and device and electronic equipment
CN114630123A (en) Adaptive quality enhancement for low-latency video coding
CN111050175A (en) Method and apparatus for video encoding
CN115412559B (en) End cloud resource collaboration method, electronic equipment and readable storage medium
WO2020181540A1 (en) Video processing method and device, encoding apparatus, and decoding apparatus
CN118674660A (en) Image processing method, intelligent terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210514