
CN112399080A - Video processing method, device, terminal and computer readable storage medium - Google Patents


Info

Publication number
CN112399080A
Authority
CN
China
Prior art keywords
image
video file
contour
target image
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011208409.4A
Other languages
Chinese (zh)
Inventor
刘春宇 (Liu Chunyu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd filed Critical Guangzhou Kugou Computer Technology Co Ltd
Priority to CN202011208409.4A
Publication of CN112399080A

Classifications

    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
                    • H04N23/80 Camera processing pipelines; Components thereof
                • H04N5/00 Details of television systems
                    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
                        • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
                            • H04N5/265 Mixing
    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
                    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
                        • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
                            • G06V40/161 Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to a video processing method, apparatus, terminal and computer-readable storage medium, and belongs to the field of data processing. The method includes: displaying a first image of a first video file, the first image being any frame of the first video file; determining a plurality of first contour points marked on the first image; determining, according to the plurality of first contour points, the target image areas to be processed in the multiple frames of the first video file; and performing image processing on the target image areas of the multiple frames to obtain a second video file. The target image area in the first video file is thus determined without running a recognition model over the first video file, and can then be image-processed, so no recognition model needs to be added to the terminal, which lowers the performance requirements on the terminal.

Description

Video processing method, device, terminal and computer readable storage medium
Technical Field
The present disclosure relates to the field of data processing, and in particular, to a video processing method, an apparatus, a terminal, and a computer-readable storage medium.
Background
Users increasingly want the faces in the video files they shoot to be beautified. For example, a user may want a whiter face or fewer facial blemishes. Therefore, after the terminal shoots a video file, the file needs to be processed so that the faces in it meet the user's requirements.
In the related art, the terminal performs face recognition on each frame of the captured video file, applies skin smoothing, whitening and other processing to the recognized face in each frame, and composes the processed frames into a video file to obtain the processed video file.
Because the related art must perform face recognition on the video images of the captured video file, a face recognition model has to be added to the terminal that performs the video processing, which places high performance requirements on the terminal.
Disclosure of Invention
The embodiments of the present disclosure provide a video processing method, apparatus, terminal and computer-readable storage medium, which can reduce the hardware requirements on the terminal. The technical solution is as follows:
in one aspect, a video processing method is provided, and the method includes:
displaying a first image of a first video file, wherein the first image is any frame image in the first video file;
determining a plurality of first contour points labeled on the first image;
respectively determining target image areas to be processed in the multi-frame images of the first video file according to the plurality of first contour points;
and carrying out image processing on the target image area of the multi-frame image to obtain a second video file.
In some embodiments, the determining, according to the plurality of first contour points, target image areas to be processed in the plurality of frames of images of the first video file respectively includes:
determining pixel feature values around each first contour point in the first image according to the plurality of first contour points;
determining, according to the pixel feature values, a plurality of second contour points matching the pixel feature values in a second image, wherein the second image is an image in the first video file other than the first image;
determining a first contour formed by the plurality of first contour points as the target image area in the first image; and determining a second contour formed by the plurality of second contour points as the target image area in the second image.
In some embodiments, the image processing the target image area of the multi-frame image to obtain a second video file includes:
determining an image processing mode of the target image area in the multi-frame image;
processing the target image area according to the image processing mode;
and composing the processed multiple frames of images into the second video file.
In some embodiments, the determining an image processing manner of the target image region includes:
determining the contour feature of the target image area;
and determining an image processing mode corresponding to the contour feature of the target image area.
In some embodiments, the processing the target image region according to the image processing manner includes:
generating a mask layer matching the image processing mode according to the contour points corresponding to the frame image;
and superimposing the mask layer on the target image area corresponding to the frame image to obtain a processed target image area.
In another aspect, a video processing apparatus is provided, the apparatus comprising:
the display module is used for displaying a first image of a first video file, wherein the first image is any frame image in the first video file;
a first determining module, configured to determine a plurality of first contour points labeled on the first image;
the second determining module is used for respectively determining target image areas to be processed in the multi-frame images of the first video file according to the plurality of first contour points;
and the image processing module is used for carrying out image processing on the target image area of the multi-frame image to obtain a second video file.
In some embodiments, the second determining module comprises:
a first determining unit configured to determine, from the plurality of first contour points, a pixel feature value around each first contour point in the first image;
a second determining unit, configured to determine, according to the pixel feature value, a plurality of second contour points that match the pixel feature value in a second image, where the second image is an image of the first video file other than the first image;
a third determining unit configured to determine a first contour formed by the plurality of first contour points as the target image area in the first image; and determining a second contour formed by the plurality of second contour points as the target image area in the second image.
In some embodiments, the image processing module comprises:
a fourth determining unit, configured to determine, for the target image area in the multi-frame image, an image processing manner of the target image area;
the image processing unit is used for processing the target image area according to the image processing mode;
and the composition unit is used for composing the processed multi-frame images into the second video file.
In some embodiments, the fourth determining unit is configured to determine a contour feature of the target image region; and determining an image processing mode corresponding to the contour feature of the target image area.
In some embodiments, the image processing unit is configured to generate a mask layer matching the image processing mode according to the contour points corresponding to the frame image, and to superimpose the mask layer on the target image area corresponding to the frame image to obtain a processed target image area.
In another aspect, a terminal is provided, where the terminal includes a processor and a memory, the memory stores at least one instruction, and the instruction is loaded and executed by the processor to implement the operations performed in the video processing method according to the first aspect.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, and the instruction is loaded and executed by a processor to implement the operations performed in the video processing method according to the first aspect.
In another aspect, a computer program product is provided, which stores program code that, when executed by a processor, performs the video processing method described in any of the above possible implementations.
The technical scheme provided by the embodiment of the disclosure has the following beneficial effects:
In the embodiments of the present disclosure, a plurality of first contour points are marked in a first image of a first video file, and the terminal determines the target image areas in the multiple frames of the first video file from these marked points. The target image areas in the first video file can therefore be determined, and then image-processed, without running a recognition model over the first video file, so no recognition model needs to be added to the terminal, which lowers the performance requirements on the terminal.
Drawings
To illustrate the technical solutions in the embodiments of the present disclosure more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present disclosure; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a video processing method provided by an embodiment of the present disclosure;
fig. 2 is a flowchart of a video processing method provided by an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a terminal according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art from the disclosed embodiments without creative effort fall within the protection scope of the present disclosure.
The embodiments of the present disclosure are applied to a terminal; optionally, the terminal is a mobile phone, a computer, a tablet, a wearable device, or the like. The terminal either has an image capture function or is a device without one. If the terminal has an image capture function, it obtains the first video file it captures through an internal data interface. If the terminal has no image capture function, the application scenario further includes a video capture device that exchanges data with the terminal over a network connection or a data interface; this is not specifically limited in the embodiments of the present disclosure. Correspondingly, the terminal obtains the first video file captured by the image capture device over the network connection or data interface.
In addition, the embodiments of the present disclosure apply to live-streaming scenarios, video call scenarios, and short-video publishing scenarios; correspondingly, the first video file is a live video file, a video call file, a short video file to be published, or the like. The application scenario is not specifically limited in the embodiments of the present disclosure. For example, in a live-streaming scenario, the live picture is video-processed; in a video call scenario, the call picture is video-processed; in a short-video publishing scenario, the short video to be published is video-processed.
Fig. 1 is a flowchart of a video processing method according to an embodiment of the present disclosure. The execution subject of this embodiment is a terminal. Referring to fig. 1, the method includes the following steps:
101. Display a first image of a first video file, where the first image is any frame in the first video file.
102. Determine a plurality of first contour points marked on the first image.
103. Determine, according to the plurality of first contour points, the target image areas to be processed in the multiple frames of the first video file.
104. Perform image processing on the target image areas of the multiple frames to obtain a second video file.
In some embodiments, the determining the target image areas to be processed in the multiple frames of images of the first video file according to the plurality of first contour points respectively includes:
determining pixel feature values around each first contour point in the first image according to the plurality of first contour points;
determining, according to the pixel feature values, a plurality of second contour points matching the pixel feature values in a second image, wherein the second image is an image in the first video file other than the first image;
determining a first contour formed by the plurality of first contour points as the target image area in the first image; and determining a second contour formed by the plurality of second contour points as the target image area in the second image.
In some embodiments, the image processing the target image region of the multi-frame image to obtain a second video file includes:
determining an image processing mode of the target image area in the multi-frame image;
processing the target image area according to the image processing mode;
and composing the processed multiple frames of images into the second video file.
In some embodiments, the determining an image processing manner of the target image area includes:
determining the contour feature of the target image area;
and determining an image processing mode corresponding to the contour feature of the target image area.
In some embodiments, the processing the target image region according to the image processing manner includes:
generating a mask layer matching the image processing mode according to the contour points corresponding to the frame image;
and superimposing the mask layer on the target image area corresponding to the frame image to obtain a processed target image area.
In the embodiments of the present disclosure, a plurality of first contour points are marked in a first image of a first video file, and the terminal determines the target image areas in the multiple frames of the first video file from these marked points. The target image areas in the first video file can therefore be determined, and then image-processed, without running a recognition model over the first video file, so no recognition model needs to be added to the terminal, which lowers the performance requirements on the terminal.
All the above optional technical solutions can be combined in any manner to form optional embodiments of the present disclosure, which are not described in detail here.
Fig. 2 is a flowchart of a video processing method according to an embodiment of the present disclosure. Referring to fig. 2, the method includes:
201. the terminal displays a first image of a first video file.
The first image is any frame image in the first video file. For example, the first image is a cover image of a first video file; or, the first image is a first frame image in a first video file; or, the first image is an image in the currently received first video file. In the disclosed embodiment, the first image is not particularly limited.
The first video file is a video file captured without enabling video processing, face recognition, or similar functions. Optionally, the first video file is a video file captured during a live stream; or a video file captured during video communication; or a short video file to be published, or the like.
In some possible implementations, the terminal captures the first video file during a live stream or video communication and performs video processing on the captured file. In some embodiments, a video processing button is displayed in the live-streaming interface or the video communication interface; in response to the video processing button being triggered, the terminal takes an image of the currently displayed first video file as the first image and displays the first image in the current display interface.
In other possible implementations, the terminal captures the first video file in advance, determines the first video file to be published from the captured video files, takes the first frame or the cover image of the first video file as the first image, and displays the first image in the current interface.
Optionally, a video editor is installed in the terminal, and accordingly the terminal displays the first image of the first video file in the video editor. This is realized by the following steps (1) to (3):
(1) the terminal starts the video editor.
Optionally, the video editor is a stand-alone application, or the video editor is a functional plug-in or applet in another application, or the like. In the embodiment of the present disclosure, the video editor is not particularly limited. For example, the video editor is a stand-alone video editing application, or the video editor is a beauty function plug-in a live application, etc.
(2) And the terminal imports the collected first video file into the video editor.
Optionally, the terminal automatically imports the acquired first video file into a video editor. For example, the first video file is a first video file acquired in a live broadcast process or a video communication process, a video processing button is displayed in a live broadcast picture or a video communication picture, and in response to the video processing button being triggered, the terminal automatically imports the acquired first video file into a video editor in a live broadcast application program or a video communication application program. Or the terminal stores the collected first video file in a video file library, displays a video file selection interface in response to the triggering of a video editing button in the video editor, determines the selected first video file, and imports the selected first video file into the video editor.
(3) The terminal presents the first image in the video editor.
In this implementation, the terminal presents a first image of a first video file in a video editor, so that the first video file is edited in the video editor.
202. The terminal determines a plurality of first contour points labeled on the first image.
In this step, the terminal displays the first image in the current display interface. In some possible implementations, the first image is an editable image, and the terminal receives the plurality of first contour points that the user marks on the first image. Specifically, the user clicks on the first image, and the terminal records the position at which the click is received and determines it as the position of a first contour point.
In this implementation, the first contour points marked on the first image are determined from the user's click operations, which improves the accuracy of the first contour points. No model is needed to analyze the image, so no recognition model needs to be installed on the terminal, further lowering the requirements on the terminal.
In one possible implementation, when the terminal receives a click operation, it directly determines the position corresponding to the click operation as a first contour point, which improves the efficiency of marking first contour points.
In another possible implementation, when the terminal receives a click operation, it determines the position corresponding to the click and verifies it: if the position lies on a boundary, it is determined to be a first contour point; otherwise, the terminal finds the boundary position closest to the clicked position and determines that boundary position as the first contour point. A boundary position is a position in the first image where the pixel values of neighboring pixels differ sharply.
In this implementation, verifying the position corresponding to the click operation makes the position of the first contour point more accurate.
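A minimal sketch of this snap-to-boundary verification, assuming OpenCV and NumPy; the function name `snap_to_boundary` and the Canny thresholds are illustrative choices, not part of the disclosure:

```python
import cv2
import numpy as np

def snap_to_boundary(image_bgr, click_xy, thresh1=50, thresh2=150):
    """Return click_xy if it lies on a boundary; otherwise the nearest boundary pixel.

    A boundary is approximated by Canny edges, i.e. positions where
    neighboring pixel values differ sharply.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, thresh1, thresh2)      # 255 at edge pixels
    x, y = click_xy
    if edges[y, x]:                                # click already on a boundary
        return click_xy
    ys, xs = np.nonzero(edges)
    if len(xs) == 0:                               # no edges found: keep the click
        return click_xy
    d2 = (xs - x) ** 2 + (ys - y) ** 2             # squared distance to each edge pixel
    i = int(np.argmin(d2))
    return (int(xs[i]), int(ys[i]))                # nearest boundary position
```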
While determining the plurality of first contour points from the click operations, the terminal counts them. In response to the number of first contour points reaching a preset number, the terminal performs step 203. The preset number is set and changed as needed and is not specifically limited in the embodiments of the present disclosure; for example, the number of first contour points is 20, 30, or 50.
In addition, in one possible implementation, in response to the number of received click operations reaching the preset number, the terminal displays first prompt information, which prompts the user that the number of first contour points has reached the preset number and step 203 can be performed. In another possible implementation, an area-generation button is displayed in the display interface; while the number of first contour points has not reached the preset number, the button is in a non-triggerable state; once the number reaches the preset number, the button becomes triggerable, and the terminal performs step 203 in response to the button being triggered.
In this implementation, the terminal monitors the number of first contour points and uses it to decide when to perform step 203, preventing the first contour points from being fewer than the preset number, which would make the first closed contour inaccurate.
203. The terminal determines pixel characteristic values around each first contour point in the first image according to the plurality of first contour points.
Optionally, the pixel feature value is the pixel value of a pixel, a gray-histogram feature value, or the like; the pixel feature value is not specifically limited in the embodiments of the present disclosure.
In this step, the terminal determines the pixel feature values of the pixels around each of the plurality of first contour points. Optionally, the pixels around each first contour point are the four pixels above, below, to the left of, and to the right of the point; or they are those four pixels plus the four diagonal neighbors (upper-left, upper-right, lower-left, lower-right), eight pixels in total. The number and positions of the pixels around a first contour point are not specifically limited in the embodiments of the present disclosure.
In one possible implementation, after determining the pixels around each first contour point, the terminal determines the pixel value of each of those pixels, and these pixel values together form the pixel feature value around that first contour point. In another possible implementation, the terminal converts the first image into a grayscale map, determines the grayscale feature values of the pixels around each first contour point from the grayscale map, and determines the pixel feature value from those grayscale feature values.
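A minimal sketch of collecting these feature values, assuming OpenCV and NumPy; here the feature is the grayscale patch around each point (one of the options above), and `radius` is an illustrative parameter:

```python
import cv2
import numpy as np

def contour_point_features(image_bgr, contour_points, radius=4):
    """Collect the pixel feature value around each first contour point as a
    small grayscale patch centered on the point."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Pad so patches at the image border keep a fixed size.
    padded = cv2.copyMakeBorder(gray, radius, radius, radius, radius,
                                cv2.BORDER_REPLICATE)
    features = []
    for (x, y) in contour_points:
        px, py = x + radius, y + radius            # coordinates in the padded image
        patch = padded[py - radius:py + radius + 1,
                       px - radius:px + radius + 1]
        features.append(patch.astype(np.float32))
    return features
```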
204. The terminal determines a plurality of second contour points matched with the pixel characteristic value in the second image according to the pixel characteristic value.
The second image is an image in the first video file other than the first image. Matching pixel feature values means the feature values are the same or similar; this is not specifically limited in the embodiments of the present disclosure.
In this step, for each frame of second image, the terminal determines the pixels in the second image that match the first contour points in the first image: it determines the pixel feature value around each pixel in the second image and, from those feature values, determines the pixels that match the first contour points.
Optionally, for each frame of second image, the terminal determines the matching pixels in the second image directly from the first contour points; or the terminal determines the pixels matching the first contour points from the matching pixels already found in adjacent second images.
In one possible implementation, the terminal directly determines the pixels matching the plurality of first contour points as the second contour points, which improves the efficiency of determining the second contour points.
In another possible implementation, the terminal verifies the pixels matched to the plurality of first contour points. In response to the image area enclosed by these pixels matching the first image area enclosed by the plurality of first contour points, the pixels are determined to be the second contour points; in response to the two areas not matching, the unmatched pixels are re-matched, and other matching pixels are determined for the corresponding first contour points. Checking the pixels in this way improves the accuracy of the second contour points.
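One way to realize this matching is normalized template matching of the stored patches in a window around each point's previous position; a sketch under that assumption (OpenCV, NumPy; `search` is an illustrative window radius):

```python
import cv2
import numpy as np

def match_contour_points(second_bgr, features, prev_points, search=24):
    """For each stored feature patch, find the point in the second image whose
    surrounding pixels match it best, searching near the previous position."""
    gray = cv2.cvtColor(second_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    h, w = gray.shape
    matched = []
    for patch, (x, y) in zip(features, prev_points):
        r = patch.shape[0] // 2
        # Restrict the search to a window around the previous position.
        x0, x1 = max(x - search - r, 0), min(x + search + r + 1, w)
        y0, y1 = max(y - search - r, 0), min(y + search + r + 1, h)
        window = gray[y0:y1, x0:x1]
        score = cv2.matchTemplate(window, patch, cv2.TM_CCOEFF_NORMED)
        _, _, _, best = cv2.minMaxLoc(score)       # best: top-left (x, y) of match
        matched.append((x0 + best[0] + r, y0 + best[1] + r))
    return matched
```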
205. The terminal determines a first contour formed by the plurality of first contour points as the target image area in the first image.
In this step, the terminal determines the first contour enclosed by the plurality of first contour points and determines the image area within the first contour as the target image area of the first image. This is realized by the following steps (1) and (2):
(1) the terminal determines a first contour composed of the plurality of first contour points in the first image.
The terminal determines adjacent first contour points in the first image and connects them to form the first contour. That is, for each first contour point, the terminal finds the first contour point closest to it and connects the two, eventually obtaining the first contour.
Optionally, the first contour is the contour of a face region, or the contour of any facial feature (such as an eye, eyebrow, nose, or mouth). In addition, the number of first contours is set as needed; for example, in the same first image there is one first contour, namely the contour of the face region, or there are three first contours, such as the contour of the face region and the contours of the regions corresponding to the two eyebrows. The number and type of the first contours are not specifically limited in the embodiments of the present disclosure.
In addition, the first contour is a closed contour.
(2) And the terminal determines the image area in the first contour as a target image area to be processed in the first image.
In this step, the terminal determines the image region of the first image that lies within the first contour and determines that region as the target image area.
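A minimal sketch of turning the contour into a target-area mask, assuming OpenCV and NumPy and assuming the contour points are already ordered along the closed contour (as produced by the nearest-neighbor connection above):

```python
import cv2
import numpy as np

def target_region_mask(image_shape, contour_points):
    """Rasterize the closed contour formed by the contour points; the returned
    mask is 255 inside the contour (the target image area) and 0 elsewhere."""
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    pts = np.array(contour_points, dtype=np.int32).reshape(-1, 1, 2)
    cv2.fillPoly(mask, [pts], 255)                 # fill the enclosed region
    return mask
```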
206. And the terminal determines a second contour formed by the plurality of second contour points as the target image area in the second image.
In this step, the terminal determines the second contour corresponding to each second image from the correspondence between the plurality of second contour points and the second images, and determines the image area within the second contour in the second image as the target image area. This is realized by the following steps (1) to (3):
(1) and the terminal respectively determines a plurality of second contour points corresponding to each second image.
In this step, the terminal obtains the corresponding relationship between the second contour points and the second image, and determines a plurality of second contour points corresponding to the current second image from the corresponding relationship.
(2) The terminal determines a second contour composed of the plurality of second contour points in the second image.
This step is similar to step (1) in step 205, and is not described herein again.
(3) The terminal determines an image area within the second contour as a target image area in the second image.
This step is similar to step (2) in step 205, and is not described herein again.
In addition, the terminal can determine the target image area of the first image before that of the second image, after it, or at the same time. That is, the terminal can perform step 205 before step 206, perform step 206 before step 205, or perform the two simultaneously; the execution order of steps 205 and 206 is not specifically limited in the embodiments of the present disclosure.
207. And the terminal performs image processing on the target image area of the multi-frame image to obtain a second video file.
The video processing includes processing modes such as skin smoothing, whitening, and scaling, and the terminal processes different target image areas with different processing modes. For example, in response to the target image area being a face region, the terminal applies skin smoothing, whitening, and similar processing to the face region; in response to the target image area being an image area corresponding to an eye, the terminal applies enlargement processing to that area. This is realized by the following steps (1) to (3):
(1) and determining the image processing mode of the target image area in the multi-frame image.
In one possible implementation, the terminal displays image processing buttons corresponding to several image processing modes, and in response to any image processing button being triggered, the terminal determines the corresponding image processing mode. In another possible implementation, the terminal determines the processing mode of the target image area from the contour feature of the target image area, realized by the following steps (1-1) and (1-2):
and (1-1) the terminal determines the contour characteristics of the target image area.
The contour feature includes the shape and size of the contour, its position relative to the target object, and the like; the contour feature is not specifically limited in the embodiments of the present disclosure. For example, the terminal determines the relative position of the target image area in the image from the positions of the first or second contour points corresponding to the area; or the terminal determines the size of the target image area in the image from the contour feature.
And (1-2) the terminal determines an image processing mode corresponding to the contour feature of the target image area.
In this step, the terminal stores a correspondence between contour features and image processing modes, and determines the image processing mode of the target image area from this correspondence according to the area's contour feature.
It should be noted that, during video processing, the terminal may determine the image processing mode only once for the first video file and process all of its frames in that mode; or, for each frame in the first video file, the terminal may determine the image processing mode corresponding to that frame and process the frame accordingly. This is not specifically limited in the embodiments of the present disclosure.
In this implementation, the terminal determines the image processing mode corresponding to the target image area from its contour feature, so the user does not need to select an image processing mode and different target image areas can be processed automatically, which improves image processing efficiency. A hypothetical form of the correspondence is sketched below.
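A minimal sketch of such a correspondence; the feature labels and mode names are illustrative assumptions, not terms from the disclosure:

```python
# Hypothetical correspondence between contour features and processing modes.
PROCESSING_MODES_BY_FEATURE = {
    "face": ["smooth_skin", "whiten"],   # face-shaped contour: beautify
    "eye":  ["enlarge"],                 # eye-shaped contour: enlarge
}

def processing_modes(contour_feature):
    """Look up the image processing mode(s) for a target image area."""
    return PROCESSING_MODES_BY_FEATURE.get(contour_feature, [])
```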
(2) And the terminal processes the target image area according to the image processing mode.
In this step, in response to the image containing one target image area, the terminal processes that area in the processing mode corresponding to it; in response to the image containing multiple target image areas, the terminal processes each target image area in the processing mode corresponding to that area.
In some embodiments, the terminal generates masking layers of different types of images according to the image processing mode, and the image processing of the target image area is realized through the masking layers. The process is realized by the following steps (2-1) - (2-2), and comprises the following steps:
and (2-1) generating a masking layer matched with the image processing mode by the terminal according to the contour points corresponding to the frame of image.
In this step, the terminal generates the mask layer matching the contour points according to the image processing mode. For example, if the image processing mode is skin smoothing, the terminal determines a complete skin-smoothing layer, determines the region of that layer matching the contour formed by the contour points, and uses that region as the mask layer. Alternatively, the terminal determines a layer region matching the contour in a blank layer and applies the image processing within that region to obtain the mask layer.
And (2-2) the terminal superposes the masking layer and the target image area corresponding to the frame image to obtain a processed target image area.
In this step, the terminal superimposes the mask layer on the corresponding frame image to generate the image-processed target image area.
In this implementation, the mask layer for the image processing is determined directly from the contour points, and the target image area is processed with the matched mask layer, which speeds up the image processing and improves the processing result. A sketch of this masked superposition follows.
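A minimal sketch of generating a mask layer for a processing mode and superimposing it on the frame, assuming OpenCV and NumPy; `bilateralFilter` as a skin-smoothing stand-in, the brightness gain for whitening, and the feathered mask edge are all illustrative choices:

```python
import cv2
import numpy as np

def apply_mask_layer(frame_bgr, region_mask, mode="smooth_skin"):
    """Build a mask layer matching the processing mode and superimpose it on
    the frame so that only the target image area is processed."""
    if mode == "smooth_skin":
        processed = cv2.bilateralFilter(frame_bgr, 9, 75, 75)  # edge-preserving smoothing
    elif mode == "whiten":
        processed = cv2.convertScaleAbs(frame_bgr, alpha=1.15, beta=10)  # brighten
    else:
        processed = frame_bgr
    # Keep the processed pixels only inside the contour region; soften the mask
    # edge so the superposition leaves no visible seam.
    soft = cv2.GaussianBlur(region_mask, (21, 21), 0).astype(np.float32) / 255.0
    soft = soft[..., None]                         # broadcast over the color channels
    out = (processed.astype(np.float32) * soft
           + frame_bgr.astype(np.float32) * (1.0 - soft))
    return out.astype(np.uint8)
```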
(3) The terminal composes the processed multiple frames of images into the second video file.
It should be noted that, when processing the multiple frames of the first video file, the terminal may first determine the target image areas corresponding to all the frames and then perform video processing on them to obtain the second video file. Optionally, the terminal instead performs image processing on the target image area of each frame as soon as that area is determined; the timing of the image processing is not specifically limited in the embodiments of the present disclosure. A sketch of composing the processed frames into a file follows.
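Composing the processed frames into the second video file can be sketched as follows, assuming OpenCV; the `mp4v` codec and the frame rate are illustrative assumptions:

```python
import cv2

def write_video(frames_bgr, path, fps=30.0):
    """Compose the processed frames into the second video file."""
    h, w = frames_bgr[0].shape[:2]
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")       # assumed codec/container
    writer = cv2.VideoWriter(path, fourcc, fps, (w, h))
    for frame in frames_bgr:
        writer.write(frame)                        # frames must share one size
    writer.release()
```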
It should also be noted that steps 201-207 can be performed by a server, that is, a backend server of a target application installed on the terminal. Correspondingly, the terminal sends the first video file to the server; the server performs video processing on the first video file to obtain the second video file and sends the second video file to the terminal, which receives it. The server's processing of the first video file is similar to the terminal's processing described above and is not repeated here.
The server is a single server, a server cluster composed of multiple servers, or a cloud server; the server is not specifically limited in the embodiments of the present disclosure.
In the embodiments of the present disclosure, a plurality of first contour points are marked in a first image of a first video file, and the terminal determines the target image areas in the multiple frames of the first video file from these marked points. The target image areas in the first video file can therefore be determined, and then image-processed, without running a recognition model over the first video file, so no recognition model needs to be added to the terminal, which lowers the performance requirements on the terminal.
Fig. 3 is a schematic structural diagram of a video processing apparatus provided in an embodiment of the present disclosure, and referring to fig. 3, the apparatus includes:
a display module 301, configured to display a first image of a first video file, where the first image is any frame image in the first video file;
a first determining module 302, configured to determine a plurality of first contour points labeled on the first image;
a second determining module 303, configured to determine target image areas to be processed in the multi-frame images of the first video file according to the plurality of first contour points, respectively;
the image processing module 304 is configured to perform image processing on the target image area of the multiple frames of images to obtain a second video file.
In some embodiments, referring to fig. 4, the second determining module 303 comprises:
a first determining unit 3031, configured to determine, according to the plurality of first contour points, the pixel feature values around each first contour point in the first image;
a second determining unit 3032, configured to determine, according to the pixel feature values, a plurality of second contour points matching the pixel feature values in a second image, where the second image is an image in the first video file other than the first image;
a third determining unit 3033, configured to determine a first contour formed by the plurality of first contour points as the target image area in the first image, and to determine a second contour formed by the plurality of second contour points as the target image area in the second image.
In some embodiments, with continued reference to fig. 4, the image processing module 304 includes:
a fourth determining unit 3041 configured to determine, for the target image area in the multiple frame images, an image processing manner of the target image area;
an image processing unit 3042, configured to process the target image area according to the image processing manner;
and the composition unit is used for composing the processed multi-frame images into the second video file.
In some embodiments, the fourth determining unit 3041 is configured to determine a contour feature of the target image region; and determining an image processing mode corresponding to the contour feature of the target image area.
In some embodiments, the image processing unit 3042 is configured to generate a mask layer matching the image processing mode according to the contour points corresponding to the frame image, and to superimpose the mask layer on the target image area corresponding to the frame image to obtain a processed target image area.
In the embodiments of the present disclosure, a plurality of first contour points are marked in a first image of a first video file, and the terminal determines the target image areas in the multiple frames of the first video file from these marked points. The target image areas in the first video file can therefore be determined, and then image-processed, without running a recognition model over the first video file, so no recognition model needs to be added to the terminal, which lowers the performance requirements on the terminal.
All the above optional technical solutions can be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
It should be noted that the video processing apparatus provided in the above embodiment is illustrated only by the division of the above functional modules when processing video. In practical applications, the above functions can be allocated to different functional modules as needed; that is, the internal structure of the apparatus is divided into different functional modules to complete all or part of the functions described above. In addition, the video processing apparatus and the video processing method provided by the above embodiments belong to the same concept; their specific implementation is described in detail in the method embodiments and is not repeated here.
Fig. 5 is a schematic structural diagram of a terminal according to an embodiment of the present disclosure. Optionally, the terminal 500 is a portable mobile terminal, such as: a smart phone, a tablet computer, a laptop computer, a desktop computer, a head-mounted device, or any other intelligent terminal. In some embodiments, terminal 500 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
In general, the terminal 500 includes: a processor 501 and a memory 502.
Optionally, the processor 501 includes one or more processing cores, such as a 4-core or 8-core processor. Optionally, the processor 501 is implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). Optionally, the processor 501 further includes a main processor and a coprocessor: the main processor, also called a CPU (Central Processing Unit), processes data in the awake state; the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 501 is integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 501 also includes an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
Optionally, the memory 502 includes one or more non-transitory computer-readable storage media. The memory 502 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 502 stores at least one instruction to be executed by the processor 501 to implement the video processing methods provided by the method embodiments of the present disclosure.
In some embodiments, the terminal 500 may further optionally include: a peripheral interface 503 and at least one peripheral. Optionally, the processor 501, the memory 502 and the peripheral interface 503 are connected by a bus or signal lines. Optionally, each peripheral is connected to the peripheral interface 503 via a bus, signal line, or circuit board. Optionally, the peripheral device comprises: at least one of radio frequency circuitry 504, touch screen display 505, camera assembly 506, audio circuitry 507, positioning assembly 508, and power supply 509.
The peripheral interface 503 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 501 and the memory 502. In some embodiments, the processor 501, memory 502, and peripheral interface 503 are integrated on the same chip or circuit board; in some other embodiments, any one or both of the processor 501, the memory 502, and the peripheral interface 503 are implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 504 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 504 communicates with communication networks and other communication devices via electromagnetic signals: it converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 504 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. Optionally, the radio frequency circuit 504 communicates with other terminals via at least one wireless communication protocol, including but not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 504 further includes NFC (Near Field Communication) related circuits; this is not limited in the present disclosure.
The display screen 505 is used to display a UI (User Interface); optionally, the UI includes graphics, text, icons, video, and any combination thereof. When the display screen 505 is a touch display screen, it also has the ability to capture touch signals on or over its surface. Optionally, the touch signal is input to the processor 501 as a control signal for processing; in this case, the display screen 505 is also used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there is one display screen 505, disposed on the front panel of the terminal 500; in other embodiments, there are at least two display screens 505, disposed on different surfaces of the terminal 500 or in a folded design; in still other embodiments, the display screen 505 is a flexible display disposed on a curved or folded surface of the terminal 500. The display screen 505 can even be arranged as an irregular, non-rectangular figure, i.e., a shaped screen. Optionally, the display screen 505 is made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 506 is used to capture images or video. Optionally, the camera assembly 506 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal and the rear camera on its rear surface. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused for a background-blurring function, and the main camera and the wide-angle camera for panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions. In some embodiments, the camera assembly 506 also includes a flash. Optionally, the flash is a single-color-temperature flash or a dual-color-temperature flash; a dual-color-temperature flash combines a warm-light flash and a cold-light flash and is used for light compensation at different color temperatures.
Optionally, audio circuitry 507 includes a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 501 for processing, or inputting the electric signals to the radio frequency circuit 504 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones are respectively disposed at different positions of the terminal 500. Optionally, the microphone is also an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 501 or the radio frequency circuit 504 into sound waves. Alternatively, the speaker is a conventional membrane speaker, or alternatively, a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to human, but also the electric signal can be converted into a sound wave inaudible to human for use in distance measurement or the like. In some embodiments, audio circuitry 507 also includes a headphone jack.
The positioning component 508 is used to determine the current geographic location of the terminal 500 for navigation or LBS (Location Based Service). Optionally, the positioning component 508 is based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 509 is used to power the various components in the terminal 500. The power supply 509 uses alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 509 includes a rechargeable battery, the battery supports wired or wireless charging and may also support fast-charging technology.
In some embodiments, terminal 500 also includes one or more sensors 510. The one or more sensors 510 include, but are not limited to: acceleration sensor 511, gyro sensor 512, pressure sensor 513, fingerprint sensor 514, optical sensor 515, and proximity sensor 516.
Optionally, the acceleration sensor 511 detects the magnitude of acceleration on the three coordinate axes of a coordinate system established with respect to the terminal 500. For example, the acceleration sensor 511 is used to detect the components of gravitational acceleration along the three coordinate axes. Optionally, the processor 501 controls the touch display screen 505 to display the user interface in a landscape or portrait view according to the gravitational acceleration signal collected by the acceleration sensor 511. The acceleration sensor 511 is also used to collect motion data for games or user motion tracking.
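A minimal sketch of how landscape/portrait selection from gravity components might look is given below; sign conventions differ across platforms, and this helper is hypothetical rather than part of the embodiment:

```python
def choose_orientation(ax, ay):
    """Pick a UI orientation from the gravity components along the device's
    x and y axes (m/s^2). Whichever axis carries more gravity wins."""
    if abs(ay) >= abs(ax):
        return "portrait" if ay > 0 else "portrait_upside_down"
    return "landscape_left" if ax > 0 else "landscape_right"
```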
The gyro sensor 512 detects the body orientation and rotation angle of the terminal 500, and can cooperate with the acceleration sensor 511 to capture the user's 3D motion of the terminal 500. Based on the data collected by the gyro sensor 512, the processor 501 can implement functions such as motion sensing (for example, changing the UI in response to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
Optionally, the pressure sensor 513 is disposed on a side bezel of the terminal 500 and/or beneath the touch display screen 505. When the pressure sensor 513 is disposed on the side bezel, it can detect the user's grip signal on the terminal 500, and the processor 501 performs left- or right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 513. When the pressure sensor 513 is disposed beneath the touch display screen 505, the processor 501 controls operability controls on the UI according to the user's pressure operations on the touch display screen 505. The operability controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The fingerprint sensor 514 is used to collect the user's fingerprint, and either the processor 501 identifies the user according to the fingerprint collected by the fingerprint sensor 514, or the fingerprint sensor 514 identifies the user from the collected fingerprint itself. Upon recognizing the user's identity as trusted, the processor 501 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. Optionally, the fingerprint sensor 514 is provided on the front, back, or side of the terminal 500. When a physical key or a manufacturer logo is provided on the terminal 500, the fingerprint sensor 514 may be integrated with the physical key or the manufacturer logo.
The optical sensor 515 is used to collect the ambient light intensity. In one embodiment, the processor 501 controls the display brightness of the touch display screen 505 based on the ambient light intensity collected by the optical sensor 515. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 505 is increased; when the ambient light intensity is low, the display brightness is decreased. In another embodiment, the processor 501 also dynamically adjusts the shooting parameters of the camera assembly 506 based on the ambient light intensity collected by the optical sensor 515.
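One common way to drive such brightness control is a logarithmic mapping from illuminance to a brightness level, since perceived brightness is roughly logarithmic in luminance. The sketch below is a minimal illustration with made-up constants, not the embodiment's actual curve:

```python
import math

def brightness_from_lux(lux, min_level=0.05, max_level=1.0):
    """Map ambient illuminance (lux) to a display brightness level in [0, 1].

    log10(10000)/4 == 1.0, so ~10,000 lux (bright daylight) maps to full
    brightness; the divisor and clamps are illustrative only.
    """
    level = math.log10(max(lux, 1.0)) / 4.0
    return max(min_level, min(max_level, level))
```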
The proximity sensor 516, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 500. The proximity sensor 516 is used to measure the distance between the user and the front surface of the terminal 500. In one embodiment, when the proximity sensor 516 detects that this distance is gradually decreasing, the processor 501 controls the touch display screen 505 to switch from the bright-screen state to the off-screen state; when the proximity sensor 516 detects that the distance is gradually increasing, the processor 501 controls the touch display screen 505 to switch from the off-screen state back to the bright-screen state.
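A practical implementation usually adds hysteresis so the screen does not flicker when the measured distance hovers near a single threshold. The following sketch illustrates that idea with hypothetical thresholds; it is not drawn from the embodiment:

```python
class ProximityScreenController:
    """Toggle the screen with hysteresis: turn off only below near_cm and
    back on only above far_cm, so readings between the two cause no change.
    Distances are in centimetres; the thresholds are illustrative."""

    def __init__(self, near_cm=3.0, far_cm=6.0):
        self.near_cm = near_cm
        self.far_cm = far_cm
        self.screen_on = True

    def update(self, distance_cm):
        if self.screen_on and distance_cm < self.near_cm:
            self.screen_on = False   # user close to the panel: screen off
        elif not self.screen_on and distance_cm > self.far_cm:
            self.screen_on = True    # user moved away: screen back on
        return self.screen_on
```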
Those skilled in the art will appreciate that the structure shown in Fig. 5 does not constitute a limitation of the terminal 500, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
The embodiment of the present disclosure further provides a terminal, where the terminal includes a processor and a memory, where the memory stores at least one instruction, and the instruction is loaded and executed by the processor to implement the operations performed in the video processing method according to the above embodiment.
The embodiment of the present disclosure also provides a computer-readable storage medium, where at least one instruction is stored in the computer-readable storage medium, and the instruction is loaded and executed by a processor to implement the operations performed in the video processing method of the foregoing embodiment.
Those skilled in the art will appreciate that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.
The above description covers optional embodiments of the disclosure and is not intended to limit the disclosure; any modifications, equivalent replacements, improvements, and the like made within the spirit and principles of the disclosure shall fall within the scope of the disclosure.

Claims (10)

1. A method of video processing, the method comprising:
displaying a first image of a first video file, wherein the first image is any frame image in the first video file;
determining a plurality of first contour points labeled on the first image;
respectively determining target image areas to be processed in the multi-frame images of the first video file according to the plurality of first contour points;
and carrying out image processing on the target image area of the multi-frame image to obtain a second video file.
2. The method according to claim 1, wherein the respectively determining target image areas to be processed in the multi-frame images of the first video file according to the plurality of first contour points comprises:
determining pixel feature values around each first contour point in the first image according to the plurality of first contour points;
determining, in a second image according to the pixel feature values, a plurality of second contour points matching the pixel feature values, wherein the second image is an image in the first video file other than the first image;
determining a first contour formed by the plurality of first contour points as the target image area in the first image; and determining a second contour formed by the plurality of second contour points as the target image area in the second image.
3. The method according to claim 1, wherein the image processing the target image area of the multi-frame image to obtain a second video file comprises:
determining an image processing mode of the target image area in the multi-frame image;
processing the target image area according to the image processing mode;
and composing the processed multi-frame images into the second video file.
4. The method according to claim 3, wherein the determining the image processing mode of the target image area comprises:
determining the contour feature of the target image area;
and determining an image processing mode corresponding to the contour feature of the target image area.
5. The method according to claim 3, wherein the processing the target image area according to the image processing mode comprises:
generating a mask layer matching the image processing mode according to the contour points corresponding to each frame image;
and superposing the mask layer on the target image area corresponding to the frame image to obtain a processed target image area.
6. A video processing apparatus, characterized in that the apparatus comprises:
the display module is used for displaying a first image of a first video file, wherein the first image is any frame image in the first video file;
a first determining module, configured to determine a plurality of first contour points labeled on the first image;
the second determining module is used for respectively determining target image areas to be processed in the multi-frame images of the first video file according to the plurality of first contour points;
and the image processing module is used for carrying out image processing on the target image area of the multi-frame image to obtain a second video file.
7. The apparatus of claim 6, wherein the second determining module comprises:
a first determining unit configured to determine, from the plurality of first contour points, a pixel feature value around each first contour point in the first image;
a second determining unit, configured to determine, according to the pixel feature values, a plurality of second contour points matching the pixel feature values in a second image, where the second image is an image in the first video file other than the first image;
a third determining unit configured to determine a first contour formed by the plurality of first contour points as the target image area in the first image; and determining a second contour formed by the plurality of second contour points as the target image area in the second image.
8. The apparatus of claim 6, wherein the image processing module comprises:
a fourth determining unit, configured to determine an image processing mode of the target image area in the multi-frame images;
the image processing unit is used for processing the target image area according to the image processing mode;
and the composition unit is used for composing the processed multi-frame images into the second video file.
9. A terminal, characterized in that the terminal comprises a processor and a memory, the memory storing at least one instruction, and the instruction being loaded and executed by the processor to implement the operations performed in the video processing method according to any one of claims 1 to 5.
10. A computer-readable storage medium having stored therein at least one instruction, the instruction being loaded and executed by a processor to perform the operations performed in the video processing method according to any one of claims 1 to 5.
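The claims deliberately do not fix a particular point-matching algorithm or image processing mode. As a rough, non-authoritative sketch of the kind of processing claims 1 to 5 describe, the following example assumes OpenCV's pyramidal Lucas-Kanade tracker as the pixel-feature matcher and a semi-transparent colored fill as the image processing mode; all function names and parameters are illustrative, not taken from the patent:

```python
import cv2
import numpy as np

def process_video(in_path, out_path, first_contour, alpha=0.5):
    """Track user-labelled contour points through a video and overlay a
    mask layer on the enclosed target image area in every frame.

    first_contour: Nx2 float32 array of contour points labelled on frame 0.
    """
    cap = cv2.VideoCapture(in_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError("cannot read first frame")
    h, w = prev.shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             cap.get(cv2.CAP_PROP_FPS) or 25.0, (w, h))
    pts = first_contour.reshape(-1, 1, 2).astype(np.float32)
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    writer.write(overlay(prev, pts, alpha))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Match each contour point by the pixel features around it:
        # Lucas-Kanade compares local intensity patches between frames.
        pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        writer.write(overlay(frame, pts, alpha))
        prev_gray = gray
    cap.release()
    writer.release()

def overlay(frame, pts, alpha):
    """Fill the tracked contour on a mask layer and blend it onto the frame."""
    mask = np.zeros(frame.shape[:2], np.uint8)
    cv2.fillPoly(mask, [pts.reshape(-1, 2).astype(np.int32)], 255)
    layer = np.zeros_like(frame)
    layer[:] = (0, 0, 255)                        # red mask layer (BGR)
    blended = cv2.addWeighted(frame, 1 - alpha, layer, alpha, 0)
    return np.where(mask[..., None] == 255, blended, frame)
```

A fuller implementation would also drop or re-detect points whose tracking status is 0, and would choose the mask layer according to the contour features of the target image area, in the spirit of claim 4.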
CN202011208409.4A 2020-11-03 2020-11-03 Video processing method, device, terminal and computer readable storage medium Pending CN112399080A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011208409.4A CN112399080A (en) 2020-11-03 2020-11-03 Video processing method, device, terminal and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN112399080A true CN112399080A (en) 2021-02-23

Family

ID=74597356

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011208409.4A Pending CN112399080A (en) 2020-11-03 2020-11-03 Video processing method, device, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112399080A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113556481A (en) * 2021-07-30 2021-10-26 北京达佳互联信息技术有限公司 Video special effect generation method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109754375A (en) * 2018-12-25 2019-05-14 广州华多网络科技有限公司 Image processing method, system, computer equipment, storage medium and terminal
CN111754386A (en) * 2019-03-26 2020-10-09 杭州海康威视数字技术股份有限公司 Image area shielding method, device, equipment and storage medium
CN111753784A (en) * 2020-06-30 2020-10-09 广州酷狗计算机科技有限公司 Video special effect processing method and device, terminal and storage medium

Similar Documents

Publication Publication Date Title
CN110992493B (en) Image processing method, device, electronic equipment and storage medium
CN110502954B (en) Video analysis method and device
CN110929651A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111723803B (en) Image processing method, device, equipment and storage medium
CN112907725B (en) Image generation, training of image processing model and image processing method and device
CN111028144B (en) Video face changing method and device and storage medium
CN111447389B (en) Video generation method, device, terminal and storage medium
CN110839174A (en) Image processing method and device, computer equipment and storage medium
CN110189348B (en) Head portrait processing method and device, computer equipment and storage medium
CN110619614B (en) Image processing method, device, computer equipment and storage medium
CN112581358B (en) Training method of image processing model, image processing method and device
CN111754386A (en) Image area shielding method, device, equipment and storage medium
CN110837300B (en) Virtual interaction method and device, electronic equipment and storage medium
CN110807769B (en) Image display control method and device
CN112135191A (en) Video editing method, device, terminal and storage medium
CN112419143A (en) Image processing method, special effect parameter setting method, device, equipment and medium
CN112396076A (en) License plate image generation method and device and computer storage medium
CN112565806A (en) Virtual gift presenting method, device, computer equipment and medium
CN111105474A (en) Font drawing method and device, computer equipment and computer readable storage medium
CN112967261B (en) Image fusion method, device, equipment and storage medium
CN111369434B (en) Method, device, equipment and storage medium for generating spliced video covers
CN110889391B (en) Method, device, computing device and storage medium for processing face images
CN112399080A (en) Video processing method, device, terminal and computer readable storage medium
CN111064994B (en) Video image processing method and device and storage medium
CN110942426B (en) Image processing method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210223