CN115249208A - Image processing method, video processing method and device, terminal and storage medium - Google Patents
- Publication number
- CN115249208A (application CN202110460721.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- displayed
- video
- resolution
- scene information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4092—Image resolution transcoding, e.g. by using client-server architectures
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration using histogram techniques
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20032—Median filtering
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
The disclosure relates to an image processing method, a video processing method and apparatus, a terminal, and a storage medium. The method includes: determining scene information corresponding to display content in an image to be displayed; acquiring the resolution of the image to be displayed; determining an image adjusting algorithm according to the scene information and the resolution of the image to be displayed; and performing enhancement processing on the image to be displayed through the image adjusting algorithm. With this method, a better enhancement effect can be obtained, thereby improving the user experience.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, a video processing method and apparatus, a terminal, and a storage medium.
Background
With the development of mobile devices and network technologies, more and more users are accustomed to browsing videos on devices such as mobile phones or tablet computers. While watching a video, a user generally wants to see it in higher definition.
In the related art, to provide higher-definition video to the user, a terminal device may apply a video enhancement algorithm to the played video. A video enhancement algorithm adds information to, or transforms the data of, the original image by some means, selectively highlighting features of interest or suppressing unwanted features so that the image better matches the visual response characteristics of the human eye, thereby improving image quality and enhancing the visual effect.
How to enhance video more reasonably and provide a better visual effect for users has long been a subject of attention.
Disclosure of Invention
The disclosure provides an image processing method, a video processing method and device, a terminal and a storage medium.
According to a first aspect of embodiments of the present disclosure, there is provided an image processing method, including:
determining scene information corresponding to display content in an image to be displayed;
acquiring the resolution of the image to be displayed;
determining an image adjusting algorithm according to the scene information and the resolution of the image to be displayed;
and performing enhancement processing on the image to be displayed through the image adjusting algorithm.
In some embodiments, the determining an image adjustment algorithm according to the scene information and the resolution of the image to be displayed includes:
determining the image adjusting algorithm according to the scene information and the resolution of the image to be displayed and a preset corresponding relation, where the preset corresponding relation includes mappings between combinations of different scenes and resolutions and different image adjusting algorithms.
In some embodiments, the resolution in the preset correspondence includes a resolution level, and the method further includes:
comparing the resolution of the image to be displayed with a preset resolution threshold value to obtain a comparison result;
determining the resolution grade of the image to be displayed according to the comparison result;
the determining the image adjusting algorithm according to the scene information, the resolution of the image to be displayed, and the preset corresponding relation includes:
determining the image adjusting algorithm according to the scene information, the resolution level of the image to be displayed, and the preset corresponding relation.
In some embodiments, the determining scene information corresponding to display content in an image to be displayed includes:
and inputting the image to be displayed into a preset scene recognition network model to obtain the scene information of the image to be displayed.
In some embodiments, the determining scene information corresponding to display content in an image to be displayed includes:
determining a region of interest in the image to be displayed;
and determining the scene information of the interested area as the scene information of the image to be displayed.
According to a second aspect of the embodiments of the present disclosure, there is provided a video processing method, including:
acquiring a video to be displayed; the video to be displayed comprises a plurality of frames of images to be displayed;
the image processing method according to the first aspect is performed for each frame of the plurality of frames of images to be displayed.
In some embodiments, the method further comprises:
determining a target frame image from each image to be displayed of the video to be displayed; the target frame image is a partial image in the plurality of frames of images to be displayed;
and determining scene information corresponding to each image to be displayed in the video to be displayed according to the target frame image.
In some embodiments, the determining a target frame image from the images to be displayed of the video to be displayed includes:
selecting, from the images to be displayed of the video to be displayed, frames at a preset frame interval to determine the target frame images;
the determining, according to the target frame image, scene information corresponding to each image to be displayed of the video to be displayed includes:
and among the scenes corresponding to the target frame images, taking the scene whose recognition result has the highest confidence, or the identical scene that occurs in the largest number, as the scene information corresponding to each image to be displayed of the video to be displayed.
According to a third aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including:
the first determining module is configured to determine scene information corresponding to display content in an image to be displayed;
the first acquisition module is configured to acquire the resolution of the image to be displayed;
the second determining module is configured to determine an image adjusting algorithm according to the scene information and the resolution of the image to be displayed;
and the adjusting module is configured to perform enhancement processing on the image to be displayed through the image adjusting algorithm.
In some embodiments, the second determining module is further configured to determine the image adjusting algorithm according to the scene information and the resolution of the image to be displayed and a preset corresponding relation; the preset corresponding relation includes mappings between combinations of different scenes and resolutions and different image adjusting algorithms.
In some embodiments, the resolution in the preset correspondence includes a resolution level, and the apparatus further includes:
the comparison module is configured to compare the resolution of the image to be displayed with a preset resolution threshold value to obtain a comparison result;
a third determining module configured to determine a resolution level of the image to be displayed according to the comparison result;
the second determining module is further configured to determine the image adjusting algorithm according to the scene information, the resolution level of the image to be displayed, and the preset corresponding relation.
In some embodiments, the first determining module is further configured to input the image to be displayed into a predetermined scene recognition network model, and obtain the scene information of the image to be displayed.
In some embodiments, the first determining module is further configured to determine a region of interest in the image to be displayed; and determining the scene information of the interested area as the scene information of the image to be displayed.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a video processing apparatus including:
the second acquisition module is configured to acquire a video to be displayed; the video to be displayed comprises a plurality of frames of images to be displayed;
the image processing apparatus of any one of claims 9 to 13, configured to perform image processing on each frame of the plurality of frames of images to be displayed.
In some embodiments, the apparatus further comprises:
the fourth determining module is configured to determine a target frame image from each image to be displayed of the video to be displayed; the target frame image is a partial image in the plurality of frames of images to be displayed; and determining scene information corresponding to each image to be displayed in the video to be displayed according to the target frame image.
In some embodiments, the fourth determining module is further configured to select, from the images to be displayed of the video to be displayed, frames at a predetermined frame interval to determine the target frame images; and, among the scenes corresponding to the target frame images, to take the scene whose recognition result has the highest confidence, or the identical scene that occurs in the largest number, as the scene information corresponding to each image to be displayed of the video to be displayed.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a terminal, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the image processing method as described in the first aspect above, or the video processing method as described in the second aspect.
According to a sixth aspect of embodiments of the present disclosure, there is provided a storage medium including:
instructions in the storage medium which, when executed by a processor of a terminal, enable the terminal to perform the image processing method described in the first aspect above, or the video processing method described in the second aspect.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
it can be understood that, in the embodiment of the present disclosure, the image adjustment algorithm is determined in combination with the resolution of the image to be displayed and the scene information to perform the enhancement processing on the image, and compared to a manner of using the same set of enhancement algorithms for all images or a manner of performing enhancement singly according to the resolution or the scene, the processing manner of the present disclosure is more flexible and finer, and can reduce the occurrence of the situation that the image quality is deteriorated due to the enhancement for all images using the same algorithm or only for the resolution enhancement. According to the method, the resolution and the scene are combined to determine the corresponding image adjusting algorithm for adjusting, so that the method is a more accurate and fine adjusting method, a more ideal enhancement effect can be obtained, and the user experience can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating an image processing method according to an embodiment of the present disclosure.
Fig. 2 is a flow chart illustrating a video processing method according to an embodiment of the disclosure.
Fig. 3 is a first flowchart illustrating a video processing method according to an embodiment of the disclosure.
Fig. 4 is a second flowchart of a video processing method according to an embodiment of the disclosure.
Fig. 5 is a third example of a flow of a video processing method according to an embodiment of the disclosure.
Fig. 6 is a diagram illustrating an image processing apparatus according to an exemplary embodiment.
Fig. 7 is a block diagram of a terminal shown in an embodiment of the disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating an image processing method according to an embodiment of the present disclosure, and as shown in fig. 1, the image processing method applied in a terminal includes the following steps:
s11, determining scene information corresponding to display content in an image to be displayed;
s12, acquiring the resolution of the image to be displayed;
s13, determining an image adjusting algorithm according to the scene information and the resolution of the image to be displayed;
and S14, enhancing the image to be displayed through the image adjusting algorithm.
In embodiments of the present disclosure, terminal devices include mobile devices and fixed devices. Mobile devices include mobile phones, tablet computers, wearable devices, and the like; fixed devices include, but are not limited to, in-vehicle devices and the like.
The image to be displayed may be an image stored in the terminal, or an image that the terminal receives from another device; for example, the terminal device receives an image transmitted by another electronic device through a wireless communication protocol such as the Bluetooth protocol, the ZigBee protocol, or the Wi-Fi protocol. In step S11, the terminal device determines the scene information corresponding to the display content in the image to be displayed, where the scene information is one of a set of scenes such as a portrait, a plant, a sky, an animal, a snow scene, or other scenes.
Human visual attention differs across scenes. For a portrait scene, the human eye pays more attention to the person and ignores the parts of the picture other than the person; for green plants or a blue sky, the human eye is more concerned with the color of the foliage or the sky.
When determining the scene information corresponding to the display content in the image to be displayed, one way is to extract features of the image to be displayed and determine whether the extracted features satisfy the rules of a predetermined scene. The extracted features may be color features, edge features, and the like.
For example, a blue sky scene is dominated by blue tones and may have no prominent edge features, while a green plant scene is dominated by green tones, with edges that are mostly short arcs.
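As an illustration of this rule-based approach, the following minimal sketch guesses a scene from simple color statistics. The scene names, the 60% dominance threshold, and the pixel format are assumptions for demonstration only, not values taken from this disclosure.

```python
# Hypothetical rule-based scene guess from colour dominance.
# Thresholds and labels are illustrative assumptions.

def guess_scene(pixels):
    """pixels: iterable of (r, g, b) tuples with components in 0..255."""
    blue = green = total = 0
    for r, g, b in pixels:
        total += 1
        if b > r and b > g:
            blue += 1
        elif g > r and g > b:
            green += 1
    if total == 0:
        return "unknown"
    if blue / total > 0.6:       # mostly blue tones -> sky
        return "sky"
    if green / total > 0.6:      # mostly green tones -> plants
        return "green_plant"
    return "other"
```

A real implementation would also use the edge features mentioned above (e.g., short arcs for foliage); this sketch keeps only the color rule for brevity.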
In another embodiment of the present disclosure, step S11 includes:
and inputting the image to be displayed into a preset scene recognition network model to obtain the scene information of the image to be displayed.
In the embodiment of the present disclosure, the scene information of the image to be displayed is obtained using a predetermined scene recognition network model, for example, a model trained on a deep learning network such as AlexNet, MobileNet, or the residual network ResNet.
Specifically, when the scene recognition network model is trained on a deep learning network, image sets corresponding to different scene classifications may be constructed, for example, image sets for scenes including human figures, food, animals, sky, plants, and the like. Each image set is then divided into a training set and a validation set, where the training set contains far more images than the validation set. The selected deep learning network is trained with the training set; after training, the validation set is used to verify the scene recognition accuracy of the model, and training is complete once the recognition accuracy meets a preset accuracy requirement. It should be noted that the present disclosure does not limit the number of scene classifications supported by the scene recognition network model.
Based on the trained scene recognition network model, the terminal inputs the image to be displayed into the model, which computes and outputs the scene information of the image to be displayed. For example, the output of the scene recognition network model is a character string and a number, where the string identifies the scene and the number is the probability, in the range 0 to 1, that the image corresponds to that scene. For example, an output of (person, 0.92) indicates a probability of 0.92 that the image to be displayed contains a portrait.
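The (scene identifier, probability) output described above is typically produced by a softmax over the model's raw class scores. The sketch below shows only that final step; the label set and the logits are illustrative assumptions, and a real model's classifier head may differ.

```python
import math

# Turn raw model scores (logits) into the (label, probability) pair
# described above. Labels and logits are assumptions for demonstration.
SCENE_LABELS = ["person", "food", "animal", "sky", "plant"]

def scene_from_logits(logits):
    """Softmax over logits; return (label, probability) for the top scene."""
    m = max(logits)                          # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return SCENE_LABELS[best], probs[best]
```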
It should be noted that, in the embodiment of the present disclosure, before the image to be displayed is input into the predetermined scene recognition network model, the image to be displayed may also be preprocessed, for example, operations such as reducing image noise, improving contrast, and the like, so as to improve the accuracy of scene recognition.
In one embodiment of the present disclosure, step S11 may include:
determining a region of interest in the image to be displayed;
and determining the scene information of the interested area as the scene information of the image to be displayed.
In this embodiment, when the scene information of the image to be displayed is determined, a region of interest may be determined first; the region of interest generally refers to the region that attracts the human eye. For example, in the embodiment of the present disclosure, the region of interest may be a local region of preset size centered on the center of the image to be displayed, or a local region determined by a saliency detection algorithm. The present disclosure does not specifically limit the manner in which the region of interest is determined.
After the region of interest is determined, the scene information may be determined from the image content of that region, for example by the feature extraction or model recognition described above. It can be understood that this reduces the amount of computation and improves the accuracy of scene recognition, so that the determined image adjusting algorithm is better suited to the image to be displayed.
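A minimal sketch of the center-framed option mentioned above: computing a crop box of preset size centered on the image center. The 50% crop ratio is an assumed default for illustration.

```python
# Centre-framed region of interest: crop box around the image centre.
# The default ratio of 0.5 is an illustrative assumption.

def center_roi(width, height, ratio=0.5):
    """Return (left, top, right, bottom) of a centred crop box."""
    w, h = int(width * ratio), int(height * ratio)
    left = (width - w) // 2
    top = (height - h) // 2
    return left, top, left + w, top + h
```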
In step S12, the terminal device determines the resolution of the image to be displayed. The resolution measures the amount of data in the image and can be expressed as width (W) × height (H), where W is the number of effective pixels in the horizontal direction of the image and H is the number of effective pixels in the vertical direction. Resolution characterizes the sharpness of the image: the higher the resolution, the sharper the image. The resolution may be an attribute of the image itself, and the terminal device may read this attribute after receiving the image to be displayed and before displaying it.
In step S13, the terminal device determines an image adjustment algorithm corresponding to the image to be displayed according to the scene information and the resolution of the image to be displayed.
In one embodiment of the present disclosure, the image adjustment algorithm includes at least one of:
a contrast adjustment algorithm;
a saturation adjustment algorithm;
and a sharpening degree adjusting algorithm.
The contrast adjustment algorithm may, for example, process the image with algorithms such as histogram equalization, histogram matching, or adaptive contrast enhancement.
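As an illustration, histogram equalization, one of the contrast techniques named above, can be sketched in pure Python on an 8-bit grayscale pixel list:

```python
# Histogram equalisation sketch for 8-bit grayscale pixels.
# Spreads the cumulative distribution over the full 0..255 range.

def equalize(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution function
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    if n == cdf_min:                  # flat image: nothing to spread
        return list(pixels)
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]
```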
The saturation adjustment algorithm may, for example, convert the RGB values of the image's pixels into the HSL color mode, obtain the saturation component S, and adjust the saturation by adjusting the value of S. The adjusted image is then converted from the HSL color mode back to the RGB color mode for display.
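A sketch of this saturation adjustment using Python's standard-library colorsys module (which uses the HLS component order for the HSL color model). The 0..255 value range and the clamping of S to [0, 1] are implementation assumptions.

```python
import colorsys

# Adjust per-pixel saturation via the HSL colour model, as described
# above. factor > 1 increases saturation; factor < 1 decreases it.

def adjust_saturation(r, g, b, factor):
    """r, g, b in 0..255; returns the adjusted (r, g, b) in 0..255."""
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    s = min(1.0, max(0.0, s * factor))      # clamp S to the valid range
    r2, g2, b2 = colorsys.hls_to_rgb(h, l, s)
    return round(r2 * 255), round(g2 * 255), round(b2 * 255)
```

In practice this would be applied per pixel (or via a vectorized library); the single-pixel form keeps the conversion explicit.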
The sharpening degree adjusting algorithm may be an edge smoothing algorithm that reduces the sharpness of the image, such as a frequency-domain smoothing algorithm (e.g., a low-pass Gaussian filter or a low-pass Butterworth filter) or a spatial-domain smoothing algorithm (e.g., a low-pass convolution filter or a median filter). An edge sharpening algorithm may also be used to increase the sharpness of the image, such as gradient sharpening, the Laplacian operator, or the Sobel operator.
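As a small example of the spatial-domain smoothing mentioned above, a one-dimensional median filter over a row of grayscale pixels can be sketched as follows (window size 3, edge pixels left unchanged):

```python
# 1-D median filter, window size 3: each interior pixel is replaced by
# the median of itself and its two neighbours, suppressing spike noise.

def median_filter_row(row):
    out = list(row)
    for i in range(1, len(row) - 1):
        out[i] = sorted(row[i - 1:i + 2])[1]
    return out
```

A 2-D median filter applies the same idea over a square neighborhood; this row form shows the principle.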
The image adjusting algorithm may also include other image processing algorithms, such as noise adjustment; the embodiments of the present disclosure are not limited in this respect. Moreover, the image adjusting algorithm is not limited to increasing contrast, saturation, or sharpness, and also includes reducing them. Different image adjusting algorithms may share some of the same image processing algorithms or may consist of entirely different ones.
In one embodiment of the present disclosure, step S13 includes: determining whether the confidence of the scene recognition of the image to be displayed is greater than a preset confidence threshold; if so, assigning a higher weight to the image adjusting algorithm corresponding to the scene information of the image to be displayed and a lower weight to the image adjusting algorithm corresponding to the resolution, to obtain the image adjusting algorithm. A high scene recognition confidence indicates that the determined scene information of the image to be displayed is reliable.
For example, in a scene of blue sky, white clouds, and green plants, the human eye is more interested in the color saturation and contrast of the picture, so the determined image adjusting algorithm may include contrast enhancement and saturation enhancement; if the image adjusting algorithm corresponding to the resolution includes reducing saturation and increasing sharpness, the saturation-reducing operation may be dropped.
Similarly, if the confidence of the scene recognition is smaller than the preset confidence threshold, the recognized scene information may be inaccurate; in this case, a lower weight is assigned to the image adjusting algorithm corresponding to the scene information of the image to be displayed and a higher weight to the image adjusting algorithm corresponding to the resolution.
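The confidence-based weighting described above might be sketched as follows, with "weighting" simplified to letting the higher-weighted side win any conflicting adjustment, as in the saturation example. The operation encoding (+1 increase, -1 reduce) and the 0.8 threshold are illustrative assumptions.

```python
# Merge scene-based and resolution-based adjustments: the side with the
# higher weight (decided by scene-recognition confidence) overrides any
# conflicting operation from the other side.

def merge_adjustments(scene_ops, resolution_ops, confidence, threshold=0.8):
    """Each ops dict maps an adjustment name to +1 (increase) or -1 (reduce)."""
    primary, secondary = (
        (scene_ops, resolution_ops) if confidence > threshold
        else (resolution_ops, scene_ops)
    )
    merged = dict(secondary)
    merged.update(primary)          # primary side wins conflicts
    return merged
```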
In another embodiment of the present disclosure, step S13 includes:
determining the image adjusting algorithm according to the scene information and the resolution of the image to be displayed and a preset corresponding relation, where the preset corresponding relation includes mappings between combinations of different scenes and resolutions and different image adjusting algorithms.
In this embodiment, since the mapping between the combination of different scenes and resolutions and the image adjustment algorithm is stored in the preset correspondence, and the combination of different scenes and resolutions may correspond to different image adjustment algorithms, the image adjustment algorithm may be determined in a targeted manner based on the preset correspondence according to the scene characteristics and the resolution characteristics of the current image to be displayed.
It should be noted that, in embodiments of the present disclosure, any two combinations are different as long as at least one of the scene or the resolution differs. In addition, an image adjusting algorithm may be at least one image processing algorithm preset by a developer, based on the characteristics of a scene and a resolution, to suit that scene and resolution. In the embodiment of the present disclosure, different image adjusting algorithms may be packaged into corresponding program modules.
Furthermore, in an embodiment of the present disclosure, the resolution in the preset corresponding relation may be a specific resolution value, such as 1024 × 768, or a resolution level. When the preset corresponding relation is created, the same resolution threshold may be used to determine the resolution level for all scenes, or different scenes may use different resolution thresholds; the resolution levels include at least two levels.
In one embodiment, the resolution in the preset correspondence includes a resolution level, and the method further includes:
comparing the resolution of the image to be displayed with a preset resolution threshold value to obtain a comparison result;
determining the resolution grade of the image to be displayed according to the comparison result;
the determining the image adjusting algorithm according to the scene information, the resolution of the image to be displayed, and the preset corresponding relation includes:
determining the image adjusting algorithm according to the scene information, the resolution level of the image to be displayed, and the preset corresponding relation.
For example, the resolution of the image to be displayed may be compared with a preset resolution threshold that does not distinguish between scenes, and after the resolution level is determined, the image adjusting algorithm corresponding to that scene and resolution level is looked up in the preset corresponding relation. Alternatively, the resolution level may be determined according to a preset resolution threshold specific to the scene information, and the image adjusting algorithm corresponding to that scene and resolution level is then looked up in the preset corresponding relation.
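A minimal sketch of the threshold comparison: map a resolution to a level by comparing its pixel count against a single threshold. The 1280 × 720 threshold and the two-level split are assumptions; as noted above, thresholds may differ per scene and more than two levels are possible.

```python
# Compare the image's pixel count against a preset threshold to obtain
# a resolution level. The threshold value is an illustrative assumption.

def resolution_level(width, height, threshold=1280 * 720):
    return "high" if width * height >= threshold else "low"
```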
In an embodiment of the present disclosure, the preset corresponding relation includes, for example: for a high-resolution building scene, the corresponding image adjusting algorithm includes contrast enhancement and sharpening enhancement; for a high-resolution portrait scene, the corresponding image adjusting algorithm includes color saturation enhancement while reducing contrast and sharpness. If contrast and sharpness were instead increased for a high-resolution portrait scene, skin defects would become more pronounced, producing a negative effect.
The preset corresponding relation further includes, for example: for a low-resolution green plant scene, the corresponding image adjusting algorithm includes reducing contrast and sharpness and increasing the saturation of green; for a high-resolution green plant scene, the corresponding image adjusting algorithm includes increasing contrast and sharpness and increasing the saturation of green. For a low-resolution image, which has more noise and fewer effective pixels, contrast and sharpness can be reduced so that the noise is not further aggravated; if the same processing as for a high-resolution image were applied, the noise might be amplified.
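The example correspondences above can be written as a lookup table keyed by (scene, resolution level). This mirrors only the examples given in this disclosure; a real table would be populated and tuned by developers, and the +1/-1 operation encoding is an illustrative assumption.

```python
# Preset corresponding relation: (scene, resolution level) -> adjustments.
# Entries reproduce the building / portrait / green plant examples above;
# +1 means increase the property, -1 means reduce it.
PRESET_CORRESPONDENCE = {
    ("building", "high"):    {"contrast": 1, "sharpness": 1},
    ("portrait", "high"):    {"saturation": 1, "contrast": -1, "sharpness": -1},
    ("green_plant", "low"):  {"contrast": -1, "sharpness": -1, "green_saturation": 1},
    ("green_plant", "high"): {"contrast": 1, "sharpness": 1, "green_saturation": 1},
}

def pick_algorithm(scene, level):
    """Look up the image adjusting algorithm; empty dict if no mapping."""
    return PRESET_CORRESPONDENCE.get((scene, level), {})
```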
In step S14, after determining the image adjustment algorithm based on the scene information and the resolution corresponding to the current image to be displayed in step S13, the image to be displayed may be enhanced through the image adjustment algorithm.
For example, for a high-resolution building scene, the details of the building are highlighted by increasing contrast and sharpness; for a high-resolution portrait scene, skin defects are softened by enhancing color saturation while reducing contrast and sharpness, improving the display effect; for a low-resolution green plant scene, the visual effect is improved by removing noise and increasing the saturation of green; for a high-resolution green plant scene, the visual effect is improved overall by increasing contrast, sharpness, and saturation.
After the image to be displayed is enhanced, the enhanced image can be stored or displayed on the terminal.
It can be understood that, in the embodiment of the present disclosure, the image adjustment algorithm is determined from both the resolution of the image to be displayed and its scene information. Compared with applying the same set of enhancement algorithms to all images, or enhancing based on resolution or scene alone, this approach is more flexible and finer-grained, and reduces the image-quality degradation that can occur when every image is enhanced with the same algorithm or enhanced for resolution only. Determining the image adjustment algorithm from the combination of resolution and scene is a more accurate and fine-grained adjustment method that yields a better enhancement effect and improves the user experience.
If the image processing method of the present disclosure is applied to a video, the method may be performed on each frame of the video to be displayed.
Exemplarily, fig. 2 is a flowchart illustrating a video processing method according to an embodiment of the present disclosure, and as shown in fig. 2, the video processing method applied in the terminal includes the following steps:
S10, acquiring a video to be displayed; the video to be displayed comprises a plurality of frames of images to be displayed;
S11A, determining scene information corresponding to display content in each frame of image to be displayed;
S12A, acquiring the resolution of each frame of image to be displayed;
S13A, determining an adjusting algorithm corresponding to each frame of image to be displayed according to the scene information and the resolution of each frame of image to be displayed;
S14A, enhancing each frame of image to be displayed based on the adjusting algorithm corresponding to that frame.
In the embodiment of the present disclosure, the video to be displayed may be a video stored locally on the terminal, or a video the terminal acquires online from a cloud server; for example, it may be a video downloaded by the terminal device through installed video playing software. The video to be displayed may also be a video acquired by the terminal device from another electronic device.
A typical video file consists of a video stream and an audio stream. To play the file, the audio stream and video stream are separated from the file stream and decoded respectively; the decoded video frames can be rendered directly, where each video frame is one image of the video. The audio frames may be sent to the buffer of an audio output device for playback. During playback, video rendering and audio playback are synchronized based on timestamps.
In this embodiment, different frames of the video to be displayed each determine their own image adjustment algorithm according to their scene recognition result and resolution, rather than one image adjustment algorithm being applied uniformly to all frames. It can be appreciated that this allows more precise and fine-grained adjustment.
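The per-frame pipeline of steps S10–S14A can be sketched as follows. The callables `recognize_scene`, `choose_algorithm`, and `enhance` are placeholders standing in for the scene recognition model, the preset-correspondence lookup, and the enhancement step described in the text; their names are assumptions for illustration.

```python
def process_video(frames, recognize_scene, choose_algorithm, enhance):
    """Enhance each decoded frame with an algorithm chosen from that
    frame's own scene and resolution (steps S11A-S14A)."""
    enhanced = []
    for frame in frames:
        scene = recognize_scene(frame)                   # S11A: scene per frame
        resolution = frame["resolution"]                 # S12A: resolution per frame
        algorithm = choose_algorithm(scene, resolution)  # S13A: per-frame algorithm
        enhanced.append(enhance(frame, algorithm))       # S14A: per-frame enhancement
    return enhanced
```

The point of the loop is that the algorithm is chosen inside it, per frame, rather than once for the whole video.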
In one embodiment, the method further comprises:
determining a target frame image from each image to be displayed of the video to be displayed; the target frame image is a partial image in the plurality of frames of images to be displayed;
and determining scene information corresponding to each image to be displayed in the video to be displayed according to the target frame image.
In this embodiment, target frame images are selected from the images to be displayed of the video to be displayed; for example, the target frame image may be a designated 20th frame. Because the target frame images are only a subset of the images to be displayed, determining the scene information of every image to be displayed from the target frame images reduces the amount of computation and speeds up video processing.
In an embodiment, the determining the target frame image from the images to be displayed of the video to be displayed includes:
Selecting each image to be displayed of the video to be displayed according to a preset interval frame number to determine the target frame image;
the determining, according to the target frame image, scene information corresponding to each image to be displayed of the video to be displayed includes:
and, among the scenes corresponding to the target frame images, taking the scene with the highest recognition confidence, or the identical scene that appears most often, as the scene information corresponding to each image to be displayed of the video to be displayed.
In the present disclosure, because adjacent video frames are likely to contain similar scene information, the target frame images can be selected at a predetermined frame interval, and the scene information corresponding to each image to be displayed in the video can then be determined from those target frames. For example, frame 1 and frame 1+N (N greater than 1) are selected, and so on, and each target frame is fed into a predetermined scene recognition network model, or its scene is recognized based on feature extraction.
Among the scene recognition results of the target frame images, the scene with the highest recognition confidence, or the identical scene that appears most often, is selected as the scene information for all images to be displayed in the video. The scene with the highest confidence represents the most likely scene classification of the video, while the most frequent scene represents the scene that dominates the video. Either choice reflects the scene characteristics of the video to be displayed, is representative, and leads to a better enhancement effect.
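Target-frame sampling and both scene-selection strategies described above can be sketched as follows. The function names are illustrative; `recognitions` is assumed to be a list of (scene, confidence) pairs, one per target frame.

```python
from collections import Counter

def sample_target_frames(num_frames, interval):
    """Indices of target frames selected at a fixed frame interval:
    frame 0, frame interval, frame 2*interval, ..."""
    return list(range(0, num_frames, interval))

def vote_scene(recognitions, by="count"):
    """Pick the video-level scene from per-target-frame recognitions:
    by="count" returns the most frequent scene; by="confidence" returns
    the scene of the single most confident recognition."""
    if by == "confidence":
        return max(recognitions, key=lambda r: r[1])[0]
    return Counter(scene for scene, _ in recognitions).most_common(1)[0][0]
```

Note the two strategies can disagree: one very confident outlier frame can win under `by="confidence"` while the majority scene wins under `by="count"`.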
In the present disclosure, playback may begin after the video to be displayed has been enhanced, so that the enhanced video is played on the display screen of the terminal device, producing an ultra-clear visual effect.
It can be understood that, before a video is played, not only the resolution of the video to be displayed but also its scene information is considered. Compared with applying the same set of video enhancement algorithms to all videos, or enhancing based on resolution or scene alone, this approach is more flexible and finer-grained, and reduces the image-quality degradation that can occur when every video is enhanced with the same algorithm or enhanced for resolution only. Determining the target video adjustment algorithm from the combination of resolution and scene is a more accurate and fine-grained adjustment method that yields a better enhancement effect and improves the user experience. In the embodiment of the present disclosure, the scene information corresponding to each image to be displayed, or to the target frame images, in the video to be displayed may be determined in response to detecting an operation that turns on a video enhancement switch.
In this embodiment, before the terminal device plays a video, for example when it detects that an application supporting video playback has been opened or that a video-browsing website has been accessed, a switch for turning the video enhancement function on or off is displayed on the screen. When the terminal detects an operation turning the switch on, determination of the scene information corresponding to the images to be displayed in the video is triggered.
It can be understood that, by providing a video enhancement switch, the present disclosure lets the user decide whether to enable the video enhancement function, which is more intelligent and improves the user experience.
In the embodiment of the present disclosure, the video to be displayed may be decoded, and the resolution of the video to be displayed is obtained from the decoded data of the video to be displayed, where the resolutions of the images to be displayed in the video to be displayed are the same.
Because the decoded video data contains a data portion that stores the resolution, the terminal device may locate that data portion in the decoded data of the video to be displayed and read the resolution of the video from it.
In another embodiment of the present disclosure, the terminal device may also detect a resolution switching operation during playback and obtain the switched resolution, so that the image adjustment algorithm for the frames not yet played can be updated according to the new resolution, and those frames are then enhanced with the updated algorithm.
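A sketch of this mid-playback switch is shown below: frames at or after the switch point are enhanced with the algorithm chosen for the new resolution. The function and parameter names are illustrative assumptions; `choose_algorithm` again stands in for the preset-correspondence lookup.

```python
def play_with_resolution_switch(frames, scene, initial_resolution,
                                switch_at, new_resolution, choose_algorithm):
    """Return the algorithm used for each frame; frames with index >=
    switch_at use the algorithm chosen for the switched resolution."""
    used = []
    resolution = initial_resolution
    for i, _ in enumerate(frames):
        if i == switch_at:
            resolution = new_resolution  # resolution switching operation detected
        used.append(choose_algorithm(scene, resolution))
    return used
```

Only frames not yet played are affected: already-rendered frames keep the enhancement they were displayed with.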
It can be understood that, during playback, the present disclosure can determine the current resolution in real time and adjust the image adjustment algorithm accordingly, which is flexible and improves the user experience.
Fig. 3 is a first example of a flow of a video processing method according to an embodiment of the disclosure, as shown in fig. 3, including the following steps:
S21, acquiring scene information of the video to be displayed.
S22, acquiring the resolution of the video to be displayed.
S23, acquiring the display gain effect corresponding to the video to be displayed at that resolution and scene, according to the correspondence between different video scenes and resolutions and display gain effects.
The correspondence between different video scenes and resolutions and display gain effects is the preset correspondence of the present disclosure. The display gain effect is the video adjustment algorithm of the present disclosure, and the display gain effect corresponding to the video to be displayed at its resolution and scene is the target video adjustment algorithm corresponding to the video to be displayed.
S24, enhancing and optimizing the video to be displayed through the display gain effect.
Fig. 4 is a second flowchart of a video processing method according to an embodiment of the disclosure, as shown in fig. 4, including the following steps:
and S31, detecting the operation of browsing the video.
In this implementation, the operation of browsing the video is detected, for example, the terminal device detects that an application supporting a video playing function is opened, or detects that a website for browsing the video is accessed.
And S32, detecting the operation of opening the switch of the video enhancement function.
And S33, acquiring the video frame of the currently displayed video.
In this embodiment, a video frame of a currently displayed video is acquired, that is, video data of a video to be displayed is acquired by decoding.
And S34, preprocessing the video frame of the displayed video.
In this embodiment, the preprocessing includes denoising processing and the like to improve the recognition accuracy of scene recognition.
S35, acquiring the video scene using the scene recognition model.
In this embodiment, the scene recognition model is a scene recognition network model of an embodiment of the present disclosure.
S36, acquiring the resolution of the currently displayed video;
S37, selecting a corresponding display gain effect according to the video scene and resolution;
S38, outputting the video with the display gain effect applied.
Fig. 5 is a third example of a flow of a video processing method according to an embodiment of the disclosure, as shown in fig. 5, including the following steps:
and S41, acquiring a video to be displayed.
And S42, judging the scene classification of the video to be displayed.
S43A, if the classification is 1, the resolution of the video to be displayed is obtained.
For example, classification 1 is a building scene.
S43B, if the classification is 2, acquiring the resolution of the video to be displayed.
For example, classification 2 is a portrait scene.
S44A, for classification 1, judging whether the resolution is lower than a first threshold; if so, executing step S45A; if not, executing step S45B.
S44B, for classification 2, judging whether the resolution is lower than a second threshold; if so, executing step S45C; if not, executing step S45D.
In this embodiment, the first threshold and the second threshold may be the same or different.
S45A, increasing color saturation and reducing contrast and sharpening.
S45B, increasing color saturation and sharpening.
S45C, increasing color saturation and reducing sharpening.
S45D, increasing color saturation, sharpening, and contrast.
S46, performing enhancement processing on the video to be displayed.
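The decision flow of Fig. 5 (steps S42 through S45D) can be sketched as a single function. The mapping of classification 1 to a building scene and classification 2 to a portrait scene follows the examples in the text; the concrete pixel-count thresholds are illustrative assumptions, since the disclosure only requires that the first and second thresholds may be the same or different.

```python
# Illustrative thresholds in total pixels; the patent does not fix values.
FIRST_THRESHOLD = 1280 * 720    # for classification 1 (building)
SECOND_THRESHOLD = 1920 * 1080  # for classification 2 (portrait)

def choose_gain_effect(classification, resolution):
    """Map (scene classification, resolution in total pixels) to the
    adjustment actions of steps S45A-S45D."""
    if classification == 1:                 # building (S44A)
        if resolution < FIRST_THRESHOLD:
            return {"saturation": "up", "contrast": "down", "sharpness": "down"}  # S45A
        return {"saturation": "up", "sharpness": "up"}                            # S45B
    if resolution < SECOND_THRESHOLD:       # portrait (S44B)
        return {"saturation": "up", "sharpness": "down"}                          # S45C
    return {"saturation": "up", "sharpness": "up", "contrast": "up"}              # S45D
```

All four branches raise saturation; the branches differ only in whether contrast and sharpening are raised or lowered, matching the low-resolution-noise reasoning given earlier.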
It can be understood that, in the embodiment of the present disclosure, before a video is played, both the resolution of the video to be displayed and its corresponding scene information are considered, and the two are combined to determine the corresponding target video adjustment algorithm. This is a more accurate and fine-grained adjustment mode that yields a better enhancement effect and thus improves the user experience.
Fig. 6 is a diagram illustrating an image processing apparatus according to an exemplary embodiment. Referring to fig. 6, the apparatus includes:
the display device comprises a first determining module 101, a second determining module and a display module, wherein the first determining module is configured to determine scene information corresponding to display content in an image to be displayed;
a first obtaining module 102 configured to obtain a resolution of the image to be displayed;
a second determining module 103, configured to determine an image adjusting algorithm according to the scene information and the resolution of the image to be displayed;
and the adjusting module 104 is configured to perform enhancement processing on the image to be displayed through the image adjusting algorithm.
In some embodiments, the second determining module 103 is further configured to determine the image adjusting algorithm according to the scene information, the resolution, and a preset corresponding relationship of the image to be displayed; the preset corresponding relationship comprises mappings between combinations of different scenes and resolutions and different image adjusting algorithms.
In some embodiments, the resolution in the preset correspondence includes a resolution level, and the apparatus further includes:
a comparison module 105 configured to compare the resolution of the image to be displayed with a preset resolution threshold to obtain a comparison result;
a third determining module 106 configured to determine a resolution level of the image to be displayed according to the comparison result;
the second determining module 103 is further configured to determine the image adjusting algorithm according to the scene information, the resolution level, and the preset corresponding relationship of the image to be displayed.
In some embodiments, the first determining module 101 is further configured to input the image to be displayed into a predetermined scene recognition network model, and obtain the scene information of the image to be displayed.
In some embodiments, the first determining module 101 is further configured to determine a region of interest in the image to be displayed; and determining the scene information of the interested area as the scene information of the image to be displayed.
The present disclosure also provides a video processing apparatus, including:
a second obtaining module 201 configured to obtain a video to be displayed; the video to be displayed comprises a plurality of frames of images to be displayed;
and the image processing apparatus described above performs the method for each frame of image to be displayed in the plurality of frames of images to be displayed.
In some embodiments, the apparatus further comprises:
a fourth determining module 202, configured to determine a target frame image from each image to be displayed of the video to be displayed; the target frame image is a partial image in the multiple frames of images to be displayed; and determining scene information corresponding to each image to be displayed in the video to be displayed according to the target frame image.
In some embodiments, the fourth determining module 202 is further configured to select from the images to be displayed of the video to be displayed at a predetermined frame interval to determine the target frame images; and, among the scenes corresponding to the target frame images, to take the scene with the highest recognition confidence or the identical scene that appears most often as the scene information corresponding to each image to be displayed of the video to be displayed.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs its operations has been described in detail in the method embodiment and will not be elaborated here.
Fig. 7 is a block diagram illustrating a mobile terminal apparatus 800 according to an example embodiment. For example, the device 800 may be a mobile phone, a mobile computer, etc.
Referring to fig. 7, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the device 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operational mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing state assessment of various aspects of the device 800. For example, the sensor assembly 814 may detect the open/closed state of the device 800 and the relative positioning of components, such as the display and keypad of the apparatus 800; it may also detect a change in position of the apparatus 800 or of a component of the apparatus 800, the presence or absence of user contact with the apparatus 800, the orientation or acceleration/deceleration of the apparatus 800, and a change in its temperature. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. It may also include a light sensor, such as a CMOS or CCD image sensor, for imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as Wi-Fi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the device 800 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium in which instructions, when executed by a processor of a terminal, enable the terminal to perform an image processing method, the method comprising:
determining scene information corresponding to display content in an image to be displayed;
acquiring the resolution of the image to be displayed;
determining an image adjusting algorithm according to the scene information and the resolution of the image to be displayed;
and performing enhancement processing on the image to be displayed through the image adjusting algorithm.
Or, enabling a terminal to perform a video processing method, the method comprising:
acquiring a video to be displayed; the video to be displayed comprises a plurality of frames of images to be displayed;
and executing the image processing method for each frame of image to be displayed in the plurality of frames of images to be displayed.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (18)
1. An image processing method, characterized in that the method comprises:
determining scene information corresponding to display content in an image to be displayed;
acquiring the resolution of the image to be displayed;
determining an image adjusting algorithm according to the scene information and the resolution of the image to be displayed;
and performing enhancement processing on the image to be displayed through the image adjusting algorithm.
2. The method according to claim 1, wherein determining an image adjustment algorithm according to the scene information and the resolution of the image to be displayed comprises:
determining the image adjusting algorithm according to the scene information, the resolution and the preset corresponding relation of the image to be displayed; the preset corresponding relation comprises the combination of different scenes and resolutions and the mapping between different image adjusting algorithms.
3. The method of claim 2, wherein the resolution in the preset correspondence comprises a resolution level, the method further comprising:
comparing the resolution of the image to be displayed with a preset resolution threshold value to obtain a comparison result;
determining the resolution grade of the image to be displayed according to the comparison result;
the determining the image adjusting algorithm according to the scene information, the resolution and the preset corresponding relation of the image to be displayed comprises:
and determining the image adjusting algorithm according to the scene information, the resolution level and the preset corresponding relation of the image to be displayed.
4. The method according to claim 1, wherein the determining scene information corresponding to display content in the image to be displayed comprises:
and inputting the image to be displayed into a preset scene recognition network model to obtain the scene information of the image to be displayed.
5. The method according to claim 1, wherein the determining scene information corresponding to display content in the image to be displayed comprises:
determining a region of interest in the image to be displayed;
and determining the scene information of the interested area as the scene information of the image to be displayed.
6. A method of video processing, the method comprising:
acquiring a video to be displayed; the video to be displayed comprises a plurality of frames of images to be displayed;
the image processing method of any one of claims 1 to 5 is performed for each of the plurality of frames of images to be displayed.
7. The method of claim 6, further comprising:
determining a target frame image from each image to be displayed of the video to be displayed; the target frame image is a partial image in the plurality of frames of images to be displayed;
and determining scene information corresponding to each image to be displayed in the video to be displayed according to the target frame image.
8. The method of claim 7, wherein determining a target frame image from the images to be displayed of the video to be displayed comprises:
Selecting each image to be displayed of the video to be displayed according to a preset interval frame number to determine the target frame image;
the determining, according to the target frame image, scene information corresponding to each image to be displayed of the video to be displayed includes:
and, among the scenes corresponding to the target frame images, taking the scene with the highest recognition confidence or the identical scene that appears most often as the scene information corresponding to each image to be displayed of the video to be displayed.
9. An image processing apparatus, characterized in that the apparatus comprises:
the first determining module is configured to determine scene information corresponding to display content in an image to be displayed;
the first acquisition module is configured to acquire the resolution of the image to be displayed;
the second determining module is configured to determine an image adjusting algorithm according to the scene information and the resolution of the image to be displayed;
and the adjusting module is configured to perform enhancement processing on the image to be displayed through the image adjusting algorithm.
10. The apparatus of claim 9,
the second determining module is further configured to determine the image adjusting algorithm according to the scene information, the resolution and a preset corresponding relation of the image to be displayed; the preset corresponding relation comprises the combination of different scenes and resolutions and the mapping between different image adjusting algorithms.
11. The apparatus of claim 10, wherein the resolution in the preset correspondence comprises a resolution level, the apparatus further comprising:
the comparison module is configured to compare the resolution of the image to be displayed with a preset resolution threshold value to obtain a comparison result;
a third determining module configured to determine a resolution level of the image to be displayed according to the comparison result;
the second determining module is further configured to determine the image adjusting algorithm according to the scene information and the resolution level of the image to be displayed and the preset correspondence.
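The comparison and third determining modules of claim 11 amount to a thresholding step. The pixel-count thresholds and level names below are illustrative assumptions, since the patent leaves the preset threshold values open:

```python
# Illustrative pixel-count thresholds; the patent does not fix concrete values.
LEVEL_THRESHOLDS = [
    (1280 * 720, "low"),      # below 720p
    (1920 * 1080, "medium"),  # below 1080p
]

def resolution_level(width, height):
    """Compare the image resolution with preset thresholds to get a level."""
    pixels = width * height
    for limit, level in LEVEL_THRESHOLDS:
        if pixels < limit:
            return level
    return "high"

print(resolution_level(640, 480))    # low
print(resolution_level(1920, 1080))  # high
```

The resulting level, rather than the raw resolution, is then looked up together with the scene information in the preset correspondence.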
12. The apparatus of claim 9,
the first determining module is further configured to input the image to be displayed into a predetermined scene recognition network model, and obtain the scene information of the image to be displayed.
13. The apparatus of claim 9,
the first determination module is further configured to determine a region of interest in the image to be displayed; and determining the scene information of the interested area as the scene information of the image to be displayed.
14. A video processing apparatus, characterized in that the apparatus comprises:
the second acquisition module is configured to acquire a video to be displayed; the video to be displayed comprises a plurality of frames of images to be displayed;
wherein each of the plurality of frames of images to be displayed is processed by the image processing apparatus of any one of claims 9 to 13.
15. The apparatus of claim 14, further comprising:
the fourth determining module is configured to determine a target frame image from each image to be displayed of the video to be displayed; the target frame image is a partial image in the plurality of frames of images to be displayed; and determining scene information corresponding to each image to be displayed in the video to be displayed according to the target frame image.
16. The apparatus of claim 15,
the fourth determining module is further configured to select images to be displayed from the video to be displayed at a preset frame interval to obtain the target frame images; and, among the scenes recognized for the target frame images, to take the scene with the highest recognition confidence, or the identical scene that occurs most frequently, as the scene information corresponding to each image to be displayed of the video to be displayed.
17. A terminal, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the image processing method of any of claims 1 to 5, or the video processing method of any of claims 6 to 8.
18. A non-transitory computer-readable storage medium having stored thereon instructions which, when executed by a processor of a terminal, enable the terminal to perform the image processing method of any one of claims 1 to 5, or the video processing method of any one of claims 6 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110460721.0A CN115249208A (en) | 2021-04-27 | 2021-04-27 | Image processing method, video processing method and device, terminal and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110460721.0A CN115249208A (en) | 2021-04-27 | 2021-04-27 | Image processing method, video processing method and device, terminal and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115249208A true CN115249208A (en) | 2022-10-28 |
Family
ID=83697283
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110460721.0A Pending CN115249208A (en) | 2021-04-27 | 2021-04-27 | Image processing method, video processing method and device, terminal and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115249208A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115633131A (en) * | 2022-10-29 | 2023-01-20 | 上海习加智能科技有限公司 | Image processing method and device and NVR-based image processing system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109345485B (en) | Image enhancement method and device, electronic equipment and storage medium | |
CN110619350B (en) | Image detection method, device and storage medium | |
CN104918107B (en) | The identification processing method and device of video file | |
CN109784164B (en) | Foreground identification method and device, electronic equipment and storage medium | |
CN108921178B (en) | Method and device for obtaining image blur degree classification and electronic equipment | |
CN105631803B (en) | The method and apparatus of filter processing | |
CN109509195B (en) | Foreground processing method and device, electronic equipment and storage medium | |
CN110728180B (en) | Image processing method, device and storage medium | |
CN105528765A (en) | Method and device for processing image | |
CN112200040A (en) | Occlusion image detection method, device and medium | |
CN112866801A (en) | Video cover determining method and device, electronic equipment and storage medium | |
CN112449085A (en) | Image processing method and device, electronic equipment and readable storage medium | |
CN111666941A (en) | Text detection method and device and electronic equipment | |
CN109784327B (en) | Boundary box determining method and device, electronic equipment and storage medium | |
CN112188091B (en) | Face information identification method and device, electronic equipment and storage medium | |
CN107507128B (en) | Image processing method and apparatus | |
CN106339705A (en) | Image acquisition method and device | |
WO2020233201A1 (en) | Icon position determination method and device | |
CN106469446B (en) | Depth image segmentation method and segmentation device | |
CN105472228B (en) | Image processing method and device and terminal | |
CN112819695B (en) | Image super-resolution reconstruction method and device, electronic equipment and medium | |
CN115249208A (en) | Image processing method, video processing method and device, terminal and storage medium | |
CN112950503A (en) | Training sample generation method and device and truth value image generation method and device | |
CN112188095A (en) | Photographing method, photographing device and storage medium | |
CN118071648A (en) | Video processing method, device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||