CN109413478B - Video editing method and device, electronic equipment and storage medium - Google Patents
Video editing method and device, electronic equipment and storage medium
- Publication number
- CN109413478B CN201811125999.7A CN201811125999A
- Authority
- CN
- China
- Prior art keywords
- time point
- video
- current
- editing
- subtitle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/4316—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/435—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
- H04N21/4355—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream involving reformatting operations of additional data, e.g. HTML pages on a television screen
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/485—End-user interface for client configuration
- H04N21/4858—End-user interface for client configuration for modifying screen layout parameters, e.g. fonts, size of the windows
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/488—Data services, e.g. news ticker
- H04N21/4884—Data services, e.g. news ticker for displaying subtitles
Abstract
Embodiments of the present disclosure provide a video editing method and device, an electronic device, and a storage medium. The method includes: acquiring subtitle text to be added to a currently edited video; determining, according to the user's operation instructions on an editing interface of the currently edited video, an appearance time point and a disappearance time point for each subtitle sentence of the subtitle text; and adding each subtitle sentence to the currently edited video according to those time points to generate a subtitled video. With the disclosed embodiments, the user need not enter subtitles one by one and set a corresponding appearance and disappearance time point for each; the entire subtitle text can be obtained at once and added to the video, which simplifies the user's operations and improves the efficiency of adding subtitles to a video.
Description
Technical Field
The present disclosure relates to video processing technologies, and in particular, to a video editing method and apparatus, an electronic device, and a storage medium.
Background
With the development of the mobile Internet, more and more applications are available on mobile terminals, including many video-processing applications through which a user can add subtitles to a video.
In the related art, to add subtitles to a video the user must first enter the first subtitle, drag the time axis to set its appearance and disappearance time points, then enter the second subtitle and drag the time axis again for it, and so on until the subtitles for the whole video are set. Taking a 10-minute video as an example, hundreds of subtitles often have to be added, which can take hours; the operation is cumbersome and time-consuming.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a video editing method, apparatus, electronic device, and storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided a video editing method, including:
acquiring subtitle text to be added to a currently edited video;
determining, according to a user's operation instruction on an editing interface of the currently edited video, an appearance time point and a disappearance time point of each subtitle sentence of the subtitle text in the currently edited video; and
adding each subtitle sentence of the subtitle text to the currently edited video according to the appearance time points and disappearance time points, to generate a subtitled video.
Optionally, the method further includes:
and generating a picture file for each sentence of caption in the caption text.
Optionally, the determining, according to the user's operation instruction on the editing interface of the currently edited video, of the appearance time point and the disappearance time point of each subtitle sentence in the currently edited video includes:
receiving an operation instruction from the user while the currently edited video is being edited; and
determining the appearance time point and the disappearance time point of each picture file in the currently edited video according to the operation instruction;
and the adding of the subtitles to the currently edited video according to the appearance and disappearance time points to generate a subtitled video includes:
adding each picture file to the currently edited video according to the appearance time points and disappearance time points, to generate the subtitled video.
Optionally, the receiving of the user's operation instruction while the currently edited video is being edited includes:
receiving a current operation instruction from the user during editing and, when the current operation instruction is a preset operation instruction, determining the time point at which it was received;
and the determining of the appearance and disappearance time points of each picture file according to the operation instruction includes:
taking the time point at which the current operation instruction was received as both the appearance time point of the current picture file in the currently edited video and the disappearance time point of the previous picture file.
Optionally, the method further includes:
and after the appearance time point of the current picture file is determined, continuously displaying the current picture file in the current editing video until the disappearance time point of the current picture file is determined.
Optionally, the adding of each picture file to the currently edited video according to the appearance and disappearance time points to generate a subtitled video includes:
compositing each picture file with the video frames of the currently edited video that lie between its appearance time point and its disappearance time point, to generate the subtitled video.
Optionally, the operation instruction is a click event instruction.
Optionally, the acquiring of the subtitle text to be added to the currently edited video includes:
acquiring a keyword for the subtitle text, sending the keyword to a server, and receiving from the server the subtitle text corresponding to the keyword as the subtitle text to be added to the currently edited video; or
acquiring subtitle text pasted or typed by the user as the subtitle text to be added to the currently edited video.
Optionally, the method further includes:
and performing sentence division processing on the subtitle text to obtain each sentence of subtitle forming the subtitle text.
According to a second aspect of the embodiments of the present disclosure, there is provided a video editing apparatus including:
a subtitle text acquisition module configured to acquire subtitle text to be added to a currently edited video;
a time point determining module configured to determine, according to a user's operation instruction on an editing interface of the currently edited video, an appearance time point and a disappearance time point of each subtitle sentence of the subtitle text in the currently edited video; and
a subtitle adding module configured to add each subtitle sentence of the subtitle text to the currently edited video according to the appearance time points and disappearance time points, to generate a subtitled video.
Optionally, the apparatus further comprises:
and the picture file generation module is configured to generate a picture file for each caption in the caption text.
Optionally, the time point determining module includes:
an operation instruction receiving unit configured to receive an operation instruction from the user while the currently edited video is being edited; and
a time point determining unit configured to determine the appearance time point and the disappearance time point of each picture file in the currently edited video according to the operation instruction;
and the subtitle adding module includes:
a subtitle adding unit configured to add each picture file to the currently edited video according to the appearance time points and disappearance time points, to generate the subtitled video.
Optionally, the operation instruction receiving unit is specifically configured to:
receive a current operation instruction from the user during editing and, when the current operation instruction is a preset operation instruction, determine the time point at which it was received;
and the time point determining unit is specifically configured to:
take the time point at which the current operation instruction was received as both the appearance time point of the current picture file in the currently edited video and the disappearance time point of the previous picture file.
Optionally, the apparatus further comprises:
and the subtitle display module is configured to continuously display the current picture file in the current editing video after the appearance time point of the current picture file is determined until the disappearance time point of the current picture file is determined.
Optionally, the subtitle adding unit is specifically configured to:
composite each picture file with the video frames of the currently edited video that lie between its appearance time point and its disappearance time point, to generate the subtitled video.
Optionally, the operation instruction is a click event instruction.
Optionally, the subtitle text acquisition module includes:
a first acquisition unit configured to acquire a keyword for the subtitle text, send the keyword to a server, and receive from the server the subtitle text corresponding to the keyword as the subtitle text to be added to the currently edited video; or
a second acquisition unit configured to acquire subtitle text pasted or typed by the user as the subtitle text to be added to the currently edited video.
Optionally, the apparatus further comprises:
and the clause processing module is configured to perform clause processing on the subtitle text to obtain each sentence of subtitle forming the subtitle text.
According to a third aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium having instructions therein, which when executed by a processor of a mobile terminal, enable the mobile terminal to perform a video editing method, the method comprising:
acquiring subtitle text to be added to a currently edited video;
determining, according to a user's operation instruction on an editing interface of the currently edited video, an appearance time point and a disappearance time point of each subtitle sentence of the subtitle text in the currently edited video; and
adding each subtitle sentence of the subtitle text to the currently edited video according to the appearance time points and disappearance time points, to generate a subtitled video.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer program which, when executed, performs a video editing method including:
acquiring subtitle text to be added to a currently edited video;
determining, according to a user's operation instruction on an editing interface of the currently edited video, an appearance time point and a disappearance time point of each subtitle sentence of the subtitle text in the currently edited video; and
adding each subtitle sentence of the subtitle text to the currently edited video according to the appearance time points and disappearance time points, to generate a subtitled video.
According to a fifth aspect of the embodiments of the present disclosure, there is provided an electronic device including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquire subtitle text to be added to a currently edited video;
determine, according to a user's operation instruction on an editing interface of the currently edited video, an appearance time point and a disappearance time point of each subtitle sentence of the subtitle text in the currently edited video; and
add each subtitle sentence of the subtitle text to the currently edited video according to the appearance time points and disappearance time points, to generate a subtitled video.
Further comprising:
generating a picture file for each subtitle sentence of the subtitle text.
The determining, according to the user's operation instruction on the editing interface of the currently edited video, of the appearance time point and the disappearance time point of each subtitle sentence in the currently edited video includes:
receiving an operation instruction from the user while the currently edited video is being edited; and
determining the appearance time point and the disappearance time point of each picture file in the currently edited video according to the operation instruction;
and the adding of the subtitles to the currently edited video according to the appearance and disappearance time points to generate a subtitled video includes:
adding each picture file to the currently edited video according to the appearance time points and disappearance time points, to generate the subtitled video.
The receiving of the user's operation instruction while the currently edited video is being edited includes:
receiving a current operation instruction from the user during editing and, when the current operation instruction is a preset operation instruction, determining the time point at which it was received;
and the determining of the appearance and disappearance time points of each picture file according to the operation instruction includes:
taking the time point at which the current operation instruction was received as both the appearance time point of the current picture file in the currently edited video and the disappearance time point of the previous picture file.
Further comprising:
after the appearance time point of the current picture file is determined, continuously displaying the current picture file in the currently edited video until its disappearance time point is determined.
The adding of each picture file to the currently edited video according to the appearance and disappearance time points to generate a subtitled video includes:
compositing each picture file with the video frames of the currently edited video that lie between its appearance time point and its disappearance time point, to generate the subtitled video.
The operation instruction is a click event instruction.
The acquiring of the subtitle text to be added to the currently edited video includes:
acquiring a keyword for the subtitle text, sending the keyword to a server, and receiving from the server the subtitle text corresponding to the keyword as the subtitle text to be added to the currently edited video; or
acquiring subtitle text pasted or typed by the user as the subtitle text to be added to the currently edited video.
Further comprising:
performing sentence segmentation on the subtitle text to obtain the subtitle sentences that make up the subtitle text.
The technical solutions provided by the embodiments of the present disclosure can have the following beneficial effects: the user need not enter subtitles one by one and set a corresponding appearance and disappearance time point for each; through interaction with the user, the entire subtitle text can be obtained at once and added to the video, which simplifies the user's operations and improves the efficiency of adding subtitles to a video.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow diagram illustrating a method of video editing in accordance with an exemplary embodiment;
FIG. 2 is a flow diagram illustrating another method of video editing in accordance with an illustrative embodiment;
FIG. 3 is a flow diagram illustrating yet another method of video editing in accordance with an illustrative embodiment;
fig. 4 is a block diagram showing a configuration of a video editing apparatus according to an exemplary embodiment;
fig. 5 is a block diagram illustrating a structure for a video editing apparatus according to an exemplary embodiment;
fig. 6 is a block diagram illustrating a structure of an electronic device according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flow diagram illustrating a video editing method according to an example embodiment. The video editing method can be used in electronic equipment such as a terminal. As shown in fig. 1, the method specifically includes the following steps.
In step S11, subtitle text that needs to be added to the currently edited video is acquired.
The currently edited video is the video to which subtitles currently need to be added; it may be a video recorded by the user with the electronic device, or a video the user downloaded from the network (such as a television series, a movie, or an online video). The subtitle text is the text of all the subtitles of the whole video, including dialogue subtitles or lyrics.
When a user adds subtitles to a video through the electronic device, the video must first be opened and its editing interface entered; that video becomes the currently edited video, and the subtitle text to be added to it is acquired on the editing interface. To obtain the subtitle text, the user may input the whole subtitle text directly, or may provide a keyword for the subtitle text and search with it to obtain the whole text.
In step S12, the appearance time point and the disappearance time point of each subtitle sentence of the subtitle text in the currently edited video are determined according to the user's operation instructions on the editing interface of the currently edited video.
Here, the editing interface may be the whole playback area of the currently edited video. An operation instruction is an instruction used to determine, interactively with the user, the appearance and disappearance time points of a picture file; it may be, for example, a click event instruction, a double-click event instruction, or a slide instruction. A click event instruction is preferred, since it lets the appearance and disappearance time points be determined precisely and avoids the subtitle-time-overlap problem caused in the related art by manually dragging the time axis to pick time points.
Generally, the subtitle text of a video may contain one sentence or multiple sentences; when it contains multiple sentences, the sentences are displayed during different time periods.
When the subtitle text is added to the currently edited video, the appearance and disappearance time points of each subtitle sentence in the video must be determined. These can be determined through interaction with the user: the user's operation instructions are captured on the editing interface of the currently edited video, and the time points of each subtitle sentence are derived from those instructions. For example, an operation instruction may be preset, and while the video plays on the editing interface, each occurrence of the preset operation instruction determines a subtitle's appearance or disappearance time point in the video. Alternatively, a shortcut key may be placed on the editing interface, and the time points determined from the user's triggering of that shortcut key during playback of the currently edited video.
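The tap-driven interaction described above can be sketched in a few lines of Python. This is a minimal illustration, not code from the patent; the class and method names (`SubtitleTimeline`, `on_tap`) are invented for the example. Each tap marks the appearance time of the next subtitle and, at the same moment, the disappearance time of the previous one:

```python
class SubtitleTimeline:
    """Assign appearance/disappearance time points from successive taps.

    Each tap (the preset operation instruction) records the appearance
    time of the next subtitle sentence and simultaneously closes out the
    previous sentence at that same playback time.
    """

    def __init__(self, sentences):
        self.sentences = sentences      # subtitle sentences, in order
        self.spans = []                 # (appear, disappear, sentence)
        self._index = 0                 # next sentence to show
        self._current_start = None      # appearance time of current sentence

    def on_tap(self, playback_time):
        """Handle one click event at the given playback time (seconds)."""
        if self._current_start is not None and self._index > 0:
            # The previous subtitle disappears at this tap.
            self.spans.append(
                (self._current_start, playback_time,
                 self.sentences[self._index - 1]))
        if self._index < len(self.sentences):
            # The next subtitle appears at this tap.
            self._current_start = playback_time
            self._index += 1

    def finish(self, video_end):
        """Close the last subtitle at the end of the video; return all spans."""
        if self._index > 0 and len(self.spans) < self._index:
            self.spans.append(
                (self._current_start, video_end,
                 self.sentences[self._index - 1]))
        return self.spans
```

Because one tap yields two time points at once, adjacent subtitles can never overlap in time, which is the advantage the text claims over manual time-axis dragging.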
In step S13, each subtitle sentence of the subtitle text is added to the currently edited video according to its appearance and disappearance time points, generating a subtitled video.
Once the appearance and disappearance time points of every subtitle sentence in the currently edited video have been determined, each sentence of the subtitle text can be added to the video at those time points, turning the currently edited video into a subtitled video.
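The compositing step S13 amounts to, for each video frame, looking up which subtitle (if any) is active at that frame's timestamp and overlaying it. A minimal sketch of that lookup, with invented helper names and frames represented only by their timestamps:

```python
def subtitle_for_frame(spans, t):
    """Return the subtitle active at frame timestamp t, or None.

    spans: list of (appear, disappear, subtitle) tuples; a subtitle is
    composited onto every frame in the half-open interval [appear, disappear).
    """
    for appear, disappear, text in spans:
        if appear <= t < disappear:
            return text
    return None


def composite(frame_times, spans):
    """Pair each frame timestamp with its subtitle (None = no overlay)."""
    return [(t, subtitle_for_frame(spans, t)) for t in frame_times]
```

Using a half-open interval means a frame at exactly a shared boundary time shows only the incoming subtitle, never two at once, consistent with the non-overlapping time points produced in step S12.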
With the video editing method of this exemplary embodiment, the subtitle text to be added is acquired, the appearance and disappearance time points of each subtitle sentence in the currently edited video are determined from the user's operation instructions on the editing interface, and each sentence is added to the video to generate a subtitled video. The user need not input subtitles one by one or set each pair of time points by hand; the whole subtitle text can be acquired at once and added to the video during playback, so the subtitling is finished by the time the video finishes playing, which simplifies the user's operations and improves the efficiency of adding subtitles. For example, for a 10-minute video needing hundreds of subtitles, the related-art approach of entering subtitles sentence by sentence and setting each pair of time points can take hours, whereas the scheme of this exemplary embodiment can complete the whole task in roughly the 10 minutes it takes to play the video, greatly improving subtitling efficiency.
On the basis of the foregoing embodiment, the acquiring of the subtitle text to be added to the currently edited video optionally includes:
acquiring a keyword of the subtitle text, sending the keyword to a server, and receiving the subtitle text which is returned by the server and corresponds to the keyword, wherein the subtitle text is used as the subtitle text which needs to be added to the current edited video; or
And acquiring the subtitle text pasted by the user or the subtitle text input by the user as the subtitle text needing to be added to the current edited video.
For relatively standard videos (such as movies, television series, or song MVs), a subtitle search function can be provided: the keyword of the subtitle text entered by the user is acquired and sent to a server, the server looks up the corresponding subtitle text by the keyword, and the found subtitle text is returned. A subtitle input interface can also be provided, on which the user edits the whole subtitle text directly, avoiding sentence-by-sentence input. The user may likewise edit the subtitle text elsewhere and paste the whole edited text into the interface at once, again sparing the user the inconvenience of entering subtitles one sentence at a time.
On the basis of the above embodiment, the method further optionally includes:
and performing sentence division processing on the subtitle text to obtain each sentence of subtitle forming the subtitle text.
Punctuation marks in the subtitle text can be identified in order to split the text into sentences, obtaining each sentence of subtitle that forms the subtitle text, so that each sentence can subsequently be added to the currently edited video at its corresponding time point.
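As an illustrative sketch (not part of the patented embodiment itself), the sentence-division processing described above can be implemented by splitting on end-of-sentence punctuation; the function name and the exact punctuation set below are assumptions:

```python
import re

def split_subtitles(subtitle_text: str) -> list[str]:
    """Split whole subtitle text into sentences on common Chinese and
    Western end-of-sentence punctuation, dropping empty fragments."""
    parts = re.split(r"[。！？!?.；;\n]+", subtitle_text)
    return [p.strip() for p in parts if p.strip()]
```

A real implementation would tune the punctuation set (for example, treating ellipses or line breaks differently) to match the expected subtitle style.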
Fig. 2 is a flow diagram illustrating another video editing method according to an example embodiment. The present exemplary embodiment provides an alternative to the above-described embodiments. As shown in fig. 2, the method specifically includes the following steps.
In step S21, subtitle text that needs to be added to the currently edited video is acquired.
The specific content is the same as that of step S11 in the previous embodiment, and is not described herein again.
In step S22, a picture file is generated for each subtitle in the subtitle text.
Since the subtitles can be displayed in picture form, the subtitle text needs to be converted into picture files when the subtitles are added. One or more picture files can be generated according to the sentences in the subtitle text; for example, one picture file can be generated for each sentence of subtitle. The generated picture files are arranged in the order of their corresponding sentences in the subtitle text, so that each picture file can be added to the currently edited video in sequence. The picture file may be a transparent picture, so that it does not block the video during playback.
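A hedged sketch of generating a transparent picture file for one subtitle sentence, assuming the Pillow library is available; the canvas size, text position, and use of the default font are illustrative assumptions, not the embodiment's actual parameters:

```python
from PIL import Image, ImageDraw

def subtitle_to_picture(sentence: str, size=(1280, 120)) -> Image.Image:
    """Render one subtitle sentence onto a fully transparent RGBA canvas,
    so the picture does not block the video frame it is overlaid on."""
    img = Image.new("RGBA", size, (0, 0, 0, 0))   # alpha 0 = transparent
    draw = ImageDraw.Draw(img)
    text_w = draw.textlength(sentence)            # width with default font
    draw.text(((size[0] - text_w) / 2, 40), sentence,
              fill=(255, 255, 255, 255))          # opaque white text
    return img
```

One picture per sentence keeps the mapping between sentences and time points one-to-one, which the later steps rely on.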
Illustratively, the form of the text input by the user can be preset, for example one subtitle per line, which can further improve the efficiency of adding subtitles. Alternatively, the input form need not be preset: the user may input a whole passage of subtitle text, and the sentences in it are recognized through the subsequent sentence-division processing, so that the user can input the subtitle text freely, improving the user experience.
In step S23, in the process of editing the current edited video, an operation instruction of the user is received.
When subtitles are added to the currently edited video, the appearance time point and disappearance time point of each subtitle must be determined through interaction with the user while the currently edited video plays. The user therefore operates on the editing interface, and the electronic device receives the user's operation instructions.
In step S24, an appearance time point and a disappearance time point of each picture file in the currently edited video are determined according to the operation instruction.
Each picture file is traversed in order during playback of the currently edited video, so that each time an operation instruction is received, one picture file is added to the currently edited video. When an operation instruction is received, the time of the currently playing video frame is determined to be the appearance time point of the picture file to be added and the disappearance time point of the previous picture file. Therefore, after the currently edited video has been played through once, the appearance time point and disappearance time point of every picture file in it have been determined.
In step S25, each of the picture files is added to the currently edited video according to the appearance time point and the disappearance time point, and a video with subtitles is generated.
After the appearance time point and disappearance time point of a picture file are determined, the picture file is added to the currently edited video; that is, picture files can be added while the time points are still being determined. After all the picture files have been added, the currently edited video is output as a video with subtitles.
Wherein, according to the appearance time point and the disappearance time point, adding each picture file to the current editing video to generate a video with subtitles, optionally including:
and synthesizing the corresponding picture file and the video frame between the appearance time point and the disappearance time point in the current edited video according to the appearance time point and the disappearance time point to generate the video with the subtitles.
After the appearance time point of a picture file is determined, the picture file is synthesized with the video frames being played in the currently edited video, until the disappearance time point of that picture file is determined. During synthesis, the picture file can be placed at a preset position in the video frame (such as the lower part of the screen). In this way each picture file is added to the currently edited video, generating the video with subtitles.
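The synthesis step can be sketched as alpha-compositing the transparent picture onto each frame between the appearance and disappearance time points. This Pillow-based example, including its preset bottom-centre position, is an assumption rather than the embodiment's actual synthesis routine:

```python
from PIL import Image

def composite_subtitle(frame: Image.Image, picture: Image.Image) -> Image.Image:
    """Overlay a transparent subtitle picture at a preset position
    (bottom centre) of one video frame, respecting its alpha channel."""
    out = frame.convert("RGBA")
    overlay = Image.new("RGBA", out.size, (0, 0, 0, 0))
    x = (out.width - picture.width) // 2
    y = out.height - picture.height - 20          # 20 px above bottom edge
    overlay.paste(picture, (x, y))
    return Image.alpha_composite(out, overlay)
```

A full pipeline would apply this function to every decoded frame whose timestamp falls inside the picture file's interval, then re-encode the frames.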
In the video editing method provided by the exemplary embodiment, building on the above embodiment, a picture file is generated for each sentence of subtitle in the subtitle text. While the currently edited video is being edited, the user's operation instructions are received, and the appearance time point and disappearance time point of each picture file in the currently edited video are determined according to those instructions, so that each picture file is added to the currently edited video accordingly. The user does not need to input subtitles sentence by sentence or set the corresponding appearance and disappearance time points; the whole subtitle text can be acquired at once and added to the video during playback. This simplifies the user's operation, allows subtitle addition to finish as soon as the video has played through, and improves the subtitle adding efficiency of the video.
Fig. 3 is a flow diagram illustrating another video editing method according to an example embodiment. The present exemplary embodiment provides an alternative to the above-described embodiments. As shown in fig. 3, the method specifically includes the following steps.
In step S31, subtitle text that needs to be added to the currently edited video is acquired.
The specific content is the same as that of step S21 in the previous embodiment, and is not described herein again.
In step S32, a picture file is generated for each subtitle in the subtitle text.
The specific content is the same as that of step S22 in the previous embodiment, and is not described herein again.
In step S33, during the editing of the currently edited video, a current operation instruction of a user is received, and a time point when the current operation instruction is received is determined when the current operation instruction is a preset operation instruction.
The preset operation instruction is an instruction, set in advance, that is used to determine the appearance and disappearance time points of the picture files through interaction with the user. For example, it can be a click event instruction, a double-click event instruction, or a slide instruction, and is optionally a click event instruction. In this way the appearance and disappearance time points of the picture files can be determined accurately, avoiding the subtitle time overlap that arises in the related art when time points are set by manually dragging a time axis.
When a picture file containing a subtitle is added to the currently edited video, it is added according to an operation instruction of the user: one operation instruction determines both the appearance time point of one picture file in the currently edited video and the disappearance time point of the previous picture file. During editing of the currently edited video, the user's current operation instruction is received and compared with the preset operation instruction. If the current operation instruction is the preset operation instruction, the time point at which it was received is determined, so that the appearance and disappearance time points of the corresponding picture files can be determined from that time point.
In step S34, the time point at which the current operation instruction is received is determined as an appearance time point of a current picture file in the current edited video and a disappearance time point of a previous picture file in the current edited video.
And the current picture file is a picture file to be added into the current editing video according to the time point of receiving the current operation instruction.
The currently edited video is played on the editing interface, and the user's current operation instructions are received during playback. When the user's current operation instruction is the preset operation instruction for the first time, the time point at which it is received, that is, the time point of the frame currently being played, is determined to be the appearance time point of the current picture file (the first picture file) in the currently edited video, and playback continues. When the preset operation instruction is detected for the second time, the time point at which it is received is determined to be the appearance time point of the current picture file (the second picture file) and the disappearance time point of the previous picture file (the first picture file). When the preset operation instruction is detected for the third time, the time point at which it is received is determined to be the appearance time point of the current picture file (the third picture file) and the disappearance time point of the previous picture file (the second picture file). The currently edited video continues to play in this manner until the appearance and disappearance time points of all the picture files have been determined.
In an exemplary process of playing the currently edited video in the editing interface, according to a received click event instruction from the user in the video playing area, the time point at which the click event instruction is received, that is, the time point of the frame currently being played, is determined to be the appearance time point of the current picture file and the disappearance time point of the previous picture file in the currently edited video. In other words, determining both the appearance and disappearance time points of one picture file requires two click event instructions. For example, when the user's click event instruction in the video playing area is received for the first time, the time point of the frame currently being played is determined to be the appearance time point of the first picture file in the video; when it is received for the second time, that time point is determined to be the disappearance time point of the first picture file and the appearance time point of the second picture file; when it is received for the third time, that time point is determined to be the disappearance time point of the second picture file and the appearance time point of the third picture file; and so on, until the appearance and disappearance time points of every picture file in the video have been obtained.
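The click-to-time-point mapping described in steps S33 and S34 can be sketched as follows; the function name and the assumption that the last picture file disappears when the video ends are illustrative additions:

```python
def assign_time_points(click_times: list[float], video_end: float):
    """Map successive click time points to (appearance, disappearance)
    intervals: each click opens the current picture file and closes the
    previous one, so N clicks yield N back-to-back, non-overlapping spans."""
    intervals = []
    for i, t in enumerate(click_times):
        if i > 0:
            start, _ = intervals[-1]
            intervals[-1] = (start, t)        # click N ends picture N-1
        intervals.append((t, video_end))      # click N starts picture N
    return intervals
```

Because every disappearance time point is literally the next appearance time point, the intervals cannot overlap, which is the accuracy benefit the embodiment claims over dragging a time axis.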
In this way, the appearance and disappearance time points of each picture file are determined by receiving the user's click event instructions in the video playing area of the editing interface. Subtitles do not need to be input sentence by sentence, nor time points set sentence by sentence, which greatly improves subtitle adding efficiency; moreover, the determined time points are more accurate, and the subtitle times do not overlap.
In step S35, each of the picture files is added to the currently edited video according to the appearance time point and the disappearance time point, and a video with subtitles is generated.
The specific content is the same as that of step S25 in the previous embodiment, and is not described herein again.
On the basis of the above embodiment, during editing of the currently edited video, the user's current operation instruction is received; when it is the preset operation instruction, the time point at which it was received is determined, and that time point is taken as the appearance time point of the current picture file and the disappearance time point of the previous picture file in the currently edited video. The user does not need to drag a time axis to set the appearance and disappearance time points for each sentence of subtitle separately, so subtitle addition can finish as soon as the video has played through, improving the efficiency of adding subtitles. Because the time points are determined through interaction with the user, they are also more accurate, and the subtitle times do not overlap.
On the basis of the above embodiment, the method may further include:
and after the appearance time point of the current picture file is determined, continuously displaying the current picture file in the current editing video until the disappearance time point of the current picture file is determined.
After the appearance time point of the current picture file is determined, the current picture file is synthesized with the video frames of the currently edited video from the appearance time point onward, and the added subtitle is shown as a preview while the currently edited video plays in the editing interface, until the disappearance time point of the current picture file is determined. Adding subtitles during playback and previewing them makes it convenient for the user to check the subtitle adding effect, improving the user experience.
Fig. 4 is a block diagram illustrating a structure of a video editing apparatus according to an exemplary embodiment.
As shown in fig. 4, the video editing apparatus includes a subtitle text acquisition module 41, a time point determination module 42, and a subtitle addition module 43.
The subtitle text obtaining module 41 is configured to obtain subtitle text that needs to be added to the currently edited video;
the time point determining module 42 is configured to determine an appearance time point and a disappearance time point of each subtitle in the subtitle text in the currently edited video according to an operation instruction of a user on an editing interface of the currently edited video;
the caption adding module 43 is configured to add each caption in the caption text to the currently edited video according to the appearance time point and the disappearance time point, and generate a video with captions.
Optionally, the apparatus further comprises:
and the picture file generation module is configured to generate a picture file for each caption in the caption text.
Optionally, the time point determining module includes:
an operation instruction receiving unit configured to receive an operation instruction of a user in a process of editing the current edited video;
a time point determining unit configured to determine an appearance time point and a disappearance time point of each of the picture files in the current edited video according to the operation instruction;
the subtitle adding module comprises:
and the subtitle adding unit is configured to add each picture file to the current edited video according to the appearance time point and the disappearance time point, and generate the video with subtitles.
Optionally, the operation instruction receiving unit is specifically configured to:
receiving a current operation instruction of a user in the process of editing the current editing video, and determining a time point of receiving the current operation instruction when the current operation instruction is a preset operation instruction;
the time point determining unit is specifically configured to:
and determining the time point of receiving the current operation instruction as the appearance time point of the current picture file in the current editing video and the disappearance time point of the previous picture file in the current editing video.
Optionally, the apparatus further comprises:
and the subtitle display module is configured to continuously display the current picture file in the current editing video after the appearance time point of the current picture file is determined until the disappearance time point of the current picture file is determined.
Optionally, the subtitle adding unit is specifically configured to:
and synthesizing the corresponding picture file and the video frame between the appearance time point and the disappearance time point in the current edited video according to the appearance time point and the disappearance time point to generate the video with the subtitles.
Optionally, the operation instruction is a click event instruction.
Optionally, the subtitle text obtaining module includes:
the first acquisition unit is configured to acquire a keyword of the subtitle text, send the keyword to a server, and receive the subtitle text corresponding to the keyword returned by the server as the subtitle text needing to be added to the currently edited video; or
And a second acquisition unit configured to acquire the subtitle text pasted by the user or the subtitle text input by the user as the subtitle text to be added to the currently edited video.
Optionally, the apparatus further comprises:
and the clause processing module is configured to perform clause processing on the subtitle text to obtain each sentence of subtitle forming the subtitle text.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
In addition, the present application also provides a computer program, which can be executed by an electronic device; the specific flow of the computer program is shown in fig. 1, with the following steps:
acquiring a subtitle text which needs to be added to a current editing video;
determining the appearance time point and the disappearance time point of each caption in the caption text in the current editing video according to the operation instruction of the user on the editing interface of the current editing video;
and adding each sentence of subtitle in the subtitle text to the current editing video according to the appearance time point and the disappearance time point to generate a video with subtitles.
Fig. 5 is a block diagram illustrating a structure for a video editing apparatus according to an exemplary embodiment. For example, the apparatus 500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 5, the apparatus 500 may include one or more of the following components: a processing component 502, a memory 504, a power component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 generally controls overall operation of the device 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 502 may include one or more processors 520 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 502 can include one or more modules that facilitate interaction between the processing component 502 and other components. For example, the processing component 502 can include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operation at the device 500. Examples of such data include instructions for any application or method operating on device 500, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 504 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 506 provides power to the various components of the device 500. The power components 506 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 500.
The multimedia component 508 includes a screen that provides an output interface between the device 500 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 508 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 500 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, audio component 510 includes a Microphone (MIC) configured to receive external audio signals when apparatus 500 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 504 or transmitted via the communication component 516. In some embodiments, audio component 510 further includes a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 514 includes one or more sensors for providing various aspects of status assessment for the device 500. For example, the sensor assembly 514 may detect the open/closed state of the device 500 and the relative positioning of components, such as the display and keypad of the apparatus 500; it may also detect a change in position of the apparatus 500 or one of its components, the presence or absence of user contact with the apparatus 500, the orientation or acceleration/deceleration of the apparatus 500, and a change in its temperature. The sensor assembly 514 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate communication between the apparatus 500 and other devices in a wired or wireless manner. The apparatus 500 may access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 516 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an example embodiment, the apparatus 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the operations shown in fig. 1, 2, or 3.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 504 comprising instructions, executable by the processor 520 of the apparatus 500 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
The present application also provides a computer program comprising the operational steps as shown in fig. 1, fig. 2 or fig. 3.
Fig. 6 is a block diagram illustrating a structure of an electronic device according to an example embodiment.
As shown in fig. 6, the electronic device is provided with at least one processor 601 and further comprises a memory 602, which are connected by a data bus 603.
The memory is used to store computer programs or instructions, and the processor is used to retrieve and execute them, so as to cause the electronic device to perform the operations shown in fig. 1, 2 or 3.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
Claims (18)
1. A video editing method, comprising:
acquiring a subtitle text which needs to be added to a current editing video; the subtitle text is the text of all subtitles of the current editing video;
determining the appearance time point and the disappearance time point of each caption in the caption text in the current editing video according to the operation instruction of the user on the editing interface of the current editing video;
adding each sentence of subtitle in the subtitle text to the current editing video according to the appearing time point and the disappearing time point to generate a video with subtitles;
wherein the method further comprises:
and performing sentence division processing on the subtitle text to obtain each sentence of subtitle forming the subtitle text.
2. The method of claim 1, further comprising:
and generating a picture file for each sentence of caption in the caption text.
3. The method according to claim 2, wherein the determining, according to an operation instruction of a user on an editing interface of the currently edited video, an appearance time point and a disappearance time point of each subtitle in the subtitle text in the currently edited video includes:
receiving an operation instruction of a user in the process of editing the current edited video;
determining the appearance time point and the disappearance time point of each picture file in the current edited video according to the operation instruction;
adding the subtitles in the subtitle text to the current editing video according to the appearance time point and the disappearance time point to generate a video with subtitles, comprising:
and adding each picture file to the current edited video according to the appearance time point and the disappearance time point to generate a video with subtitles.
4. The method according to claim 3, wherein the receiving of the operation instruction of the user in the process of editing the current edited video comprises:
receiving a current operation instruction of a user in the process of editing the current editing video, and determining a time point of receiving the current operation instruction when the current operation instruction is a preset operation instruction;
determining the appearance time point and the disappearance time point of each picture file in the current edited video according to the operation instruction, including:
and determining the time point of receiving the current operation instruction as the appearance time point of the current picture file in the current editing video and the disappearance time point of the previous picture file in the current editing video.
5. The method of claim 4, further comprising:
and after the appearance time point of the current picture file is determined, continuously displaying the current picture file in the current editing video until the disappearance time point of the current picture file is determined.
6. The method according to claim 3, wherein the adding each picture file to a currently edited video according to the appearance time point and the disappearance time point to generate a video with subtitles comprises:
and synthesizing the corresponding picture file and the video frame between the appearance time point and the disappearance time point in the current edited video according to the appearance time point and the disappearance time point to generate the video with the subtitles.
7. The method of claim 1, wherein the operation instruction is a click event instruction.
8. The method of claim 1, wherein obtaining the subtitle text that needs to be added to the currently edited video comprises:
acquiring a keyword of the subtitle text, sending the keyword to a server, and receiving the subtitle text corresponding to the keyword returned by the server as the subtitle text that needs to be added to the currently edited video; or
acquiring subtitle text pasted or input by the user as the subtitle text that needs to be added to the currently edited video.
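Claim 8 describes two acquisition paths: a keyword lookup against a server, or text the user pastes or types. A minimal sketch of that branching follows; the endpoint URL, query parameter, and JSON response shape are pure assumptions for illustration, since the patent does not specify a protocol.

```python
# Hypothetical sketch of the two subtitle-text acquisition paths in claim 8.
# The server endpoint and response format are invented for this example.

import json
import urllib.parse
import urllib.request

def fetch_subtitles_by_keyword(keyword: str,
                               endpoint: str = "https://example.com/subtitles") -> str:
    # Assumed protocol: GET ?q=<keyword>, server replies {"text": "..."}.
    url = f"{endpoint}?q={urllib.parse.quote(keyword)}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["text"]

def get_subtitle_text(keyword=None, pasted=None,
                      fetcher=fetch_subtitles_by_keyword) -> str:
    """Path 1: keyword -> server lookup. Path 2: user-supplied text."""
    if keyword:
        return fetcher(keyword)
    return pasted or ""
```

Passing `fetcher` as a parameter keeps the server dependency pluggable, so the keyword path can be exercised without network access.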
9. A video editing apparatus, comprising:
a subtitle text acquisition module configured to acquire subtitle text to be added to a currently edited video, the subtitle text being the text of all subtitles of the currently edited video;
a time point determining module configured to determine an appearance time point and a disappearance time point of each subtitle in the subtitle text in the currently edited video according to an operation instruction of a user on an editing interface of the currently edited video; and
a subtitle adding module configured to add each sentence of subtitle in the subtitle text to the currently edited video according to the appearance time point and the disappearance time point to generate a video with subtitles;
wherein the apparatus further comprises:
a clause processing module configured to perform clause processing on the subtitle text to obtain each sentence of subtitle constituting the subtitle text.
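The clause processing module splits the full subtitle text into one subtitle per sentence. A plausible sketch of that split is shown below; the punctuation set is an assumption (covering both ASCII and CJK sentence enders, since the source patent is Chinese), and the patent itself does not fix a splitting rule.

```python
# Illustrative sketch of clause processing: split subtitle text into
# per-sentence subtitles on sentence-ending punctuation, keeping the
# punctuation attached to each sentence via a lookbehind split.

import re

def split_sentences(subtitle_text: str) -> list:
    parts = re.split(r'(?<=[.!?。！？])\s*', subtitle_text.strip())
    return [p for p in parts if p]
```

Each returned sentence then becomes one subtitle, to be rendered into a picture file by the picture file generation module of claim 10.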
10. The apparatus of claim 9, further comprising:
a picture file generation module configured to generate a picture file for each subtitle in the subtitle text.
11. The apparatus of claim 10, wherein the time point determining module comprises:
an operation instruction receiving unit configured to receive an operation instruction of the user in the process of editing the currently edited video; and
a time point determining unit configured to determine the appearance time point and the disappearance time point of each picture file in the currently edited video according to the operation instruction;
and wherein the subtitle adding module comprises:
a subtitle adding unit configured to add each picture file to the currently edited video according to the appearance time point and the disappearance time point to generate the video with subtitles.
12. The apparatus according to claim 11, wherein the operation instruction receiving unit is specifically configured to:
receive a current operation instruction of the user in the process of editing the currently edited video, and, when the current operation instruction is a preset operation instruction, determine the time point at which the current operation instruction is received;
and the time point determining unit is specifically configured to:
determine the time point at which the current operation instruction is received as the appearance time point of the current picture file in the currently edited video and as the disappearance time point of the previous picture file in the currently edited video.
13. The apparatus of claim 12, further comprising:
a subtitle display module configured to continuously display the current picture file in the currently edited video after the appearance time point of the current picture file is determined, until the disappearance time point of the current picture file is determined.
14. The apparatus according to claim 11, wherein the subtitle adding unit is specifically configured to:
synthesize each picture file with the video frames between its appearance time point and disappearance time point in the currently edited video to generate the video with subtitles.
15. The apparatus of claim 9, wherein the operation instruction is a click event instruction.
16. The apparatus of claim 9, wherein the subtitle text acquisition module comprises:
a first acquisition unit configured to acquire a keyword of the subtitle text, send the keyword to a server, and receive the subtitle text corresponding to the keyword returned by the server as the subtitle text to be added to the currently edited video; or
a second acquisition unit configured to acquire subtitle text pasted or input by the user as the subtitle text to be added to the currently edited video.
17. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any one of claims 1-8.
18. A non-transitory computer-readable storage medium having stored thereon instructions that, when executed by a processor of a mobile terminal, enable the mobile terminal to perform a video editing method comprising the steps of any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811125999.7A CN109413478B (en) | 2018-09-26 | 2018-09-26 | Video editing method and device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811125999.7A CN109413478B (en) | 2018-09-26 | 2018-09-26 | Video editing method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109413478A CN109413478A (en) | 2019-03-01 |
CN109413478B true CN109413478B (en) | 2020-04-24 |
Family
ID=65466296
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811125999.7A Active CN109413478B (en) | 2018-09-26 | 2018-09-26 | Video editing method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109413478B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109788335B (en) * | 2019-03-06 | 2021-08-17 | 珠海天燕科技有限公司 | Video subtitle generating method and device |
CN110996167A (en) * | 2019-12-20 | 2020-04-10 | 广州酷狗计算机科技有限公司 | Method and device for adding subtitles in video |
CN112653932B (en) * | 2020-12-17 | 2023-09-26 | 北京百度网讯科技有限公司 | Subtitle generating method, device, equipment and storage medium for mobile terminal |
CN113422996B (en) * | 2021-05-10 | 2023-01-20 | 北京达佳互联信息技术有限公司 | Subtitle information editing method, device and storage medium |
CN114501098B (en) * | 2022-01-06 | 2023-09-26 | 北京达佳互联信息技术有限公司 | Subtitle information editing method, device and storage medium |
CN115134659B (en) * | 2022-06-15 | 2024-06-25 | 阿里巴巴云计算(北京)有限公司 | Video editing and configuring method, device, browser, electronic equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103179093A (en) * | 2011-12-22 | 2013-06-26 | 腾讯科技(深圳)有限公司 | Matching system and method for video subtitles |
CN105979169A (en) * | 2015-12-15 | 2016-09-28 | 乐视网信息技术(北京)股份有限公司 | Video subtitle adding method, device and terminal |
CN205726069U (en) * | 2016-06-24 | 2016-11-23 | 谭圆圆 | Unmanned aerial vehicle control terminal and unmanned aerial vehicle |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009004872A (en) * | 2007-06-19 | 2009-01-08 | Buffalo Inc | One-segment broadcast receiver, one-segment broadcast receiving method and medium recording one-segment broadcast receiving program |
CN101917557B (en) * | 2010-08-10 | 2012-06-27 | 浙江大学 | Method for dynamically adding subtitles based on video content |
CN105763949A (en) * | 2014-12-18 | 2016-07-13 | 乐视移动智能信息技术(北京)有限公司 | Audio video file playing method and device |
- 2018-09-26: CN201811125999.7A filed; granted as CN109413478B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN109413478A (en) | 2019-03-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109413478B (en) | Video editing method and device, electronic equipment and storage medium | |
CN107396177B (en) | Video playing method, device and storage medium | |
US20210133459A1 (en) | Video recording method and apparatus, device, and readable storage medium | |
CN107644646B (en) | Voice processing method and device for voice processing | |
CN110602394A (en) | Video shooting method and device and electronic equipment | |
WO2022142871A1 (en) | Video recording method and apparatus | |
CN109951379B (en) | Message processing method and device | |
KR20160132808A (en) | Method and apparatus for identifying audio information | |
CN109063101B (en) | Video cover generation method and device | |
CN110636382A (en) | Method and device for adding visual object in video, electronic equipment and storage medium | |
CN105447109A (en) | Key word searching method and apparatus | |
KR20180037235A (en) | Information processing method and apparatus | |
CN113411516B (en) | Video processing method, device, electronic equipment and storage medium | |
CN108156506A (en) | The progress adjustment method and device of barrage information | |
CN112532931A (en) | Video processing method and device and electronic equipment | |
CN109521938B (en) | Method and device for determining data evaluation information, electronic device and storage medium | |
CN111510556A (en) | Method, device and computer storage medium for processing call information | |
CN109756783B (en) | Poster generation method and device | |
CN113364999B (en) | Video generation method and device, electronic equipment and storage medium | |
CN112764636B (en) | Video processing method, apparatus, electronic device, and computer-readable storage medium | |
CN113905192A (en) | Subtitle editing method and device, electronic equipment and storage medium | |
CN110809184A (en) | Video processing method, device and storage medium | |
CN113744071A (en) | Comment information processing method and device, electronic equipment and storage medium | |
CN113919311A (en) | Data display method and device, electronic equipment and storage medium | |
CN107679123B (en) | Picture naming method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||