
CN113556576B - Video generation method and device


Info

Publication number
CN113556576B
CN113556576B (application CN202110824917.3A)
Authority
CN
China
Prior art keywords
display
video
layer
duration
end time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110824917.3A
Other languages
Chinese (zh)
Other versions
CN113556576A (en)
Inventor
补佳林
唐小辉
叶小瑜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202110824917.3A
Publication of CN113556576A
Application granted
Publication of CN113556576B
Legal status: Active
Anticipated expiration


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23424Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/74Browsing; Visualisation therefor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/812Monomedia components thereof involving advertisement data

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Studio Circuits (AREA)

Abstract

The disclosure provides a video generation method and device. The video generation method comprises the following steps: acquiring configuration information used by a user to synthesize a video based on a video template, wherein the configuration information comprises identification information of the video template and identification information of the display objects the user needs to add to each layer of the video template; adding corresponding display objects to each layer of the video template according to the configuration information to synthesize a video; and, when the display end time of a display object added to a duration-adaptive layer of the video template exceeds the default end time of the video template, delaying the display end time of the duration-adaptive layer according to the display duration required by the display object, wherein a duration-adaptive layer is a layer of the video template whose display duration is preset to be adaptively adjustable.

Description

Video generation method and device
Technical Field
The present disclosure relates generally to the field of video technology, and more particularly, to a video generation method and apparatus.
Background
As short video applications become more popular, short video platforms aggregate large numbers of users. A short video platform can use big data on user behavior to construct user profiles and thereby push short video advertisements to users accurately; advertisers have recognized the resulting promotion effect, and a large short video advertising industry has gradually formed. Short video vendors have accordingly launched online short video production platforms for advertisers.
The technology behind online short video production platforms falls into two categories. The first provides a timeline-based online editing tool that migrates the video editing functions of a mobile application (App) onto a web page; because the modes of interaction on a web page are limited, these editing functions are more simplified than those of the App. The second generates short videos from templates made with the non-linear special-effects software AE (Adobe After Effects): the platform provides promotional short video templates for various industries, and a short video advertisement can be generated simply by replacing the materials.
In the AE-template-based generation mode, a designer uses AE to design and export an AE template. The user only needs to add materials such as videos and pictures, and the server replaces the placeholder materials of the template to generate a promotional short video. Because templates are produced by designers, who are more specialized than ordinary users, the video quality is better than what users could achieve on their own. A user can obtain a synthesized video immediately after preparing the pictures, videos, and texts of a product, so the degree of automation is high.
Disclosure of Invention
Exemplary embodiments of the present disclosure provide a video generation method and apparatus capable of adaptively delaying the display end time of a layer of a video template to which a display object has been added, according to the display duration required by the user-provided display object.
According to a first aspect of the embodiments of the present disclosure, there is provided a video generation method, comprising: acquiring configuration information used by a user to synthesize a video based on a video template, wherein the configuration information comprises identification information of the video template and identification information of the display objects the user needs to add to each layer of the video template; adding corresponding display objects to each layer of the video template according to the configuration information to synthesize a video; and, when the display end time of a display object added to a duration-adaptive layer of the video template exceeds the default end time of the video template, delaying the display end time of the duration-adaptive layer according to the display duration required by the display object, wherein a duration-adaptive layer is a layer of the video template whose display duration is preset to be adaptively adjustable.
Optionally, whether the display end time of a display object added to a duration-adaptive layer of the video template exceeds the default end time of the video template is determined by: determining, after corresponding display objects have been added to each layer of the video template, whether any duration-adaptive layer is displayed on the Nth frame; and, when a duration-adaptive layer displayed on the Nth frame exists, determining that the display end time of the display object added to that layer exceeds the default end time of the video template, wherein the default total frame count of the video template is N frames.
Optionally, the step of delaying the display end time of the duration-adaptive layer according to the display duration required by the display object comprises: updating the display duration of each duration-adaptive layer displayed on the Nth frame to the display duration required by the display object added to it; updating the display end time of each such layer based on its updated display duration; and uniformly delaying the display end time of every duration-adaptive layer displayed on the Nth frame to the latest display end time, wherein the latest display end time is the latest among the updated display end times of these layers.
Optionally, the step of delaying the display end time of the duration-adaptive layer according to the display duration required by the display object comprises: updating the display duration of each duration-adaptive layer displayed on the Nth frame to the display duration required by the display object added to it; updating the display end time of each such layer based on its updated display duration; updating the display end time of each parent layer of the duration-adaptive layers based on the updated display end times of the duration-adaptive layers displayed on the Nth frame, wherein a parent layer is a layer that contains at least one of the duration-adaptive layers; and uniformly delaying the display end times of the duration-adaptive layers and the parent layers to the latest display end time, wherein the latest display end time is the latest among the updated display end times of the duration-adaptive layers and the updated display end times of the parent layers.
Optionally, the step of updating the display end time of each parent layer based on the updated display end times of the duration-adaptive layers displayed on the Nth frame comprises: updating the display end time of each parent layer to the latest display end time among the updated display end times of the child layers it contains.
Optionally, the configuration information further comprises: identification information of the voice-over style (i.e., the style in which speech corresponding to the displayed text is played, sometimes translated as "mouth-cast") the user selects for each text display object to which a voice-over function is to be added. The step of adding corresponding display objects to each layer of the video template according to the configuration information then comprises: for each text display object to which a voice-over function is to be added, adding the text display object to the corresponding layer according to the voice-over style selected for it; and adding the voice data corresponding to the text display object to the video, setting the playback start time of the voice data in the video based on the display start time of the layer to which the text display object is added.
Optionally, the voice-over style comprises at least one of: a subtitle voice-over style, a typewriter voice-over style, an underline voice-over style, a rolling-subtitle voice-over style, and a novel-carousel voice-over style. In the subtitle voice-over style, the text appears in the form of subtitles in synchronization with the speech. In the typewriter voice-over style, the characters appear one by one following the speech playback progress. In the underline voice-over style, the text is underlined following the speech playback progress. In the rolling-subtitle voice-over style, the text automatically scrolls upward in subtitle form following the speech playback progress. In the novel-carousel voice-over style, the text appears in the form of a novel carousel in synchronization with the speech.
Optionally, the animation effects of the novel carousel comprise at least one of: fade-in/fade-out, zoom, a mask revealed from left to right, a mask revealed from small to large, and scrolling from bottom to top.
Optionally, the configuration information further comprises: identification information of the multi-font mixed typesetting style the user selects for a text display object to which the voice-over function is to be added, and/or the key words to be highlighted. For each such text display object, the step of adding it to the corresponding layer according to the selected voice-over style then comprises: adding the text display object to the corresponding layer according to the selected voice-over style, the selected multi-font mixed typesetting style, and/or the key words to be highlighted.
Optionally, the video template is an AE template (a template of the non-linear special-effects software After Effects) or a script template.
Optionally, the configuration information further comprises: identification information of a video ending template selected by the user, and identification information of the display objects to be added to each layer of the video ending template. The video generation method then further comprises: adding corresponding display objects to each layer of the video ending template according to the configuration information to synthesize an ending video clip; and splicing the video with the ending video clip.
According to a second aspect of the embodiments of the present disclosure, there is provided a video generating apparatus, comprising: a configuration information acquisition unit configured to acquire configuration information used by a user to synthesize a video based on a video template, wherein the configuration information comprises identification information of the video template and identification information of the display objects the user needs to add to each layer of the video template; a video synthesis unit configured to add corresponding display objects to each layer of the video template according to the configuration information to synthesize a video; and an end time delay unit configured to, when the display end time of a display object added to a duration-adaptive layer of the video template exceeds the default end time of the video template, delay the display end time of the duration-adaptive layer according to the display duration required by the display object, wherein a duration-adaptive layer is a layer of the video template whose display duration is preset to be adaptively adjustable.
Optionally, the end time delay unit is configured to determine whether the display end time of a display object added to a duration-adaptive layer of the video template exceeds the default end time of the video template by: determining, after corresponding display objects have been added to each layer of the video template, whether any duration-adaptive layer is displayed on the Nth frame; and, when a duration-adaptive layer displayed on the Nth frame exists, determining that the display end time of the display object added to that layer exceeds the default end time of the video template, wherein the default total frame count of the video template is N frames.
Optionally, the end time delay unit comprises: a display duration updating unit configured to update the display duration of each duration-adaptive layer displayed on the Nth frame to the display duration required by the display object added to it; an end time updating unit configured to update the display end time of each such layer based on its updated display duration; and a delay unit configured to uniformly delay the display end time of every duration-adaptive layer displayed on the Nth frame to the latest display end time, wherein the latest display end time is the latest among the updated display end times of these layers.
Optionally, the end time delay unit comprises: a display duration updating unit configured to update the display duration of each duration-adaptive layer displayed on the Nth frame to the display duration required by the display object added to it; an end time updating unit configured to update the display end time of each such layer based on its updated display duration, and to update the display end time of each parent layer of the duration-adaptive layers based on the updated display end times of the duration-adaptive layers displayed on the Nth frame, wherein a parent layer is a layer that contains at least one of the duration-adaptive layers; and a delay unit configured to uniformly delay the display end times of the duration-adaptive layers and the parent layers to the latest display end time, wherein the latest display end time is the latest among the updated display end times of the duration-adaptive layers and the updated display end times of the parent layers.
Optionally, the end time updating unit updates, for each parent layer, the display end time of that parent layer to the latest display end time among the updated display end times of the child layers it contains.
Optionally, the configuration information further comprises: identification information of the voice-over style the user selects for each text display object to which a voice-over function is to be added, wherein the video synthesis unit is configured to, for each such text display object, add the text display object to the corresponding layer according to the voice-over style selected for it; and to add the voice data corresponding to the text display object to the video, setting the playback start time of the voice data in the video based on the display start time of the layer to which the text display object is added.
Optionally, the voice-over style comprises at least one of: a subtitle voice-over style, a typewriter voice-over style, an underline voice-over style, a rolling-subtitle voice-over style, and a novel-carousel voice-over style. In the subtitle voice-over style, the text appears in the form of subtitles in synchronization with the speech. In the typewriter voice-over style, the characters appear one by one following the speech playback progress. In the underline voice-over style, the text is underlined following the speech playback progress. In the rolling-subtitle voice-over style, the text automatically scrolls upward in subtitle form following the speech playback progress. In the novel-carousel voice-over style, the text appears in the form of a novel carousel in synchronization with the speech.
Optionally, the animation effects of the novel carousel comprise at least one of: fade-in/fade-out, zoom, a mask revealed from left to right, a mask revealed from small to large, and scrolling from bottom to top.
Optionally, the configuration information further comprises: identification information of the multi-font mixed typesetting style the user selects for a text display object to which the voice-over function is to be added, and/or the key words to be highlighted; and the video synthesis unit is configured to, for each text display object to which a voice-over function is to be added, add the text display object to the corresponding layer according to the selected voice-over style, the selected multi-font mixed typesetting style, and/or the key words to be highlighted.
Optionally, the video template is an AE template (a template of the non-linear special-effects software After Effects) or a script template.
Optionally, the configuration information further comprises: identification information of a video ending template selected by the user, and identification information of the display objects to be added to each layer of the video ending template; and the video generating apparatus further comprises: an ending synthesis unit configured to add corresponding display objects to each layer of the video ending template according to the configuration information to synthesize an ending video clip; and a splicing unit configured to splice the video with the ending video clip.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising: at least one processor; at least one memory storing computer-executable instructions, wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform the video generation method as described above.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing instructions which, when executed by at least one processor, cause the at least one processor to perform the video generation method described above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising computer instructions which, when executed by at least one processor, implement a video generation method as described above.
The technical solutions provided by the embodiments of the present disclosure bring at least the following beneficial effects:
video generation based on a video template supports adaptation to the duration of the display objects, which enhances flexibility;
a voice-over function is added, with support for multiple voice-over styles, multi-font mixed typesetting, highlighting of key words, and novel-carousel voice-over, so that the generated video has better visual tension.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
Fig. 1 illustrates an application scenario of a video generation method according to an exemplary embodiment of the present disclosure;
Fig. 2 illustrates a flowchart of a video generation method according to an exemplary embodiment of the present disclosure;
Fig. 3 illustrates a flowchart of a method of delaying the display end time of a layer according to an exemplary embodiment of the present disclosure;
Fig. 4 illustrates an example of delaying the display end time of a layer according to an exemplary embodiment of the present disclosure;
Fig. 5 illustrates a flowchart of a method of delaying the display end time of a layer according to another exemplary embodiment of the present disclosure;
Fig. 6 illustrates a block diagram of a video generating apparatus according to an exemplary embodiment of the present disclosure;
Fig. 7 illustrates a block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
It should be noted that, in this disclosure, "at least one of the items" covers three parallel cases: "any one of the items", "any combination of the items", and "all of the items". For example, "including at least one of A and B" covers three parallel cases: (1) including A; (2) including B; (3) including A and B. Likewise, "performing at least one of step one and step two" covers three parallel cases: (1) performing step one; (2) performing step two; (3) performing step one and step two.
Generating video based on video templates suffers from several problems: (1) the flexibility of a video template is poor, the duration of videos generated from the same template is fixed, and the user can only replace materials; (2) video templates lack universality, so a designer must design a corresponding template for the characteristics of each industry; (3) the quality of a template depends on the designer, who may be the weak link in the overall process; (4) a customer may not find a suitable video template; (5) when an advertising audience is repeatedly exposed to videos generated from video templates, videos of similar form inevitably cause visual fatigue and reduce the promotion effect of the advertisement. The video generation method provided by the present disclosure can solve at least one of these problems of the related art. Specifically, video generation based on a video template can adapt to the duration of the display objects, which enhances flexibility; and a voice-over function can be added, with support for multiple voice-over modes, multi-font mixed typesetting, highlighting of key words, and multiple animations for novel-carousel voice-over, so that the generated video has better visual tension. Hereinafter, exemplary embodiments of the video generation method and apparatus are described in detail with reference to Figs. 1 to 7.
Fig. 1 illustrates an application scenario diagram of a video generation method according to an exemplary embodiment of the present disclosure.
Referring to Fig. 1, a user may configure a video based on a video template at a client: for example, select the video template on which the video will be based, decide which display objects to add to the different layers of the selected template, and decide whether to use the voice-over function for a text display object and, if so, which voice-over style to use. After the configuration is completed, the client sends the relevant configuration information to a server, and the server may execute the video generation method according to an exemplary embodiment of the present disclosure to synthesize, based on the configuration information, the video template selected by the user and the uploaded materials into a video that meets the user's requirements. For example, the video may be a promotional short video (e.g., a short video advertisement).
It should be understood that the video generation method according to the exemplary embodiments of the present disclosure may be applied not only to the above scenario but also to other suitable scenarios; for example, it may be performed by a user terminal. The present disclosure is not limited in this respect.
Fig. 2 shows a flowchart of a video generation method according to an exemplary embodiment of the present disclosure.
Referring to Fig. 2, in step S101, configuration information used by a user to synthesize a video based on a video template is acquired.
Here, the configuration information includes: the identification information of the video template (i.e., of the template on which the video is based) and the identification information of the display objects the user needs to add to each layer of the video template.
As an example, the type of display object may include, but is not limited to, at least one of: video, pictures, text.
As an example, the identification information of the video template may be a name or ID of the video template. As an example, the identification information of the display object may be a name of the display object or a read path of the display object.
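For illustration, the configuration information could be organized as a simple key-value structure; a minimal sketch in Python follows, in which every field name (template_id, layers, display_object, and so on) is a hypothetical name chosen for readability, not taken from this patent:

configuration_info = {
    "template_id": "tpl_promo_001",  # identification information of the video template
    "layers": {
        "layer_1": {"display_object": "materials/product_intro.mp4"},  # read path
        "layer_2": {"display_object": "materials/logo.png"},
        "layer_3": {
            "display_object": "New summer collection, 30% off!",
            "voice_over_style": "typewriter",   # optional voice-over style
            "highlight_keywords": ["30% off"],  # optional highlighted key words
        },
    },
}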
As an example, the video template may be an AE template (a template of the non-linear special-effects software After Effects) or a script template.
In step S102, corresponding display objects are added to each layer of the video template according to the configuration information, to synthesize a video.
In step S103, when the display end time of a display object added to a duration-adaptive layer of the video template exceeds the default end time (i.e., the designed end time) of the video template, the display end time of the duration-adaptive layer is delayed according to the display duration required by the display object, so that the content of the display object can be displayed in its entirety in the synthesized video.
Here, a duration-adaptive layer is a layer of the video template whose display duration is preset to be adaptively adjustable. For example, the template designer may preset which layers are duration-adaptive according to the attribute information of each layer; the layer corresponding to the video title, for instance, may be set not to be duration-adaptive.
As an example, when the display object is a video, the display duration required by the display object may be the duration of that video.
Whether the display end time of a display object added to a duration-adaptive layer exceeds the default end time of the video template can be judged in various suitable ways. As an example, after corresponding display objects have been added to each layer of the video template, it may be determined whether any duration-adaptive layer is displayed on the Nth frame; when such a layer exists, it is determined that the display end time of the display object added to it exceeds the default end time of the video template, where the default total frame count (i.e., the designed total frame count) of the video template is N frames.
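A minimal sketch of this check in Python, assuming each layer records its display interval in frames and a flag marking it as duration-adaptive (the Layer structure and all field names are assumptions made for illustration):

from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    start_frame: int         # first frame on which the layer is displayed
    end_frame: int           # last frame on which the layer is displayed
    duration_adaptive: bool  # preset by the template designer
    required_duration: int   # frames needed by the added display object

def layers_shown_on_last_frame(layers, n_frames):
    # A non-empty result means that the display end time of at least one
    # added display object exceeds the template's default end time (frame N).
    return [l for l in layers
            if l.duration_adaptive and l.start_frame <= n_frames <= l.end_frame]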
Next, an exemplary embodiment of a method of delaying the display end time of a layer (i.e., step S103) according to an exemplary embodiment of the present disclosure will be described with reference to fig. 3 to 5.
Fig. 3 illustrates a flowchart of a method of delaying a display end time of a layer according to an exemplary embodiment of the present disclosure.
Referring to Fig. 3, in step S201, the display duration of each duration-adaptive layer displayed on the Nth frame is updated to the display duration required by the display object added to it.
In step S202, for each duration-adaptive layer displayed on the Nth frame, the display end time of the layer is updated based on its updated display duration.
As an example, the updated display end time of a duration-adaptive layer may be determined from its display start time and its updated display duration: adding the updated display duration to the display start time gives the updated display end time.
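In symbols (notation introduced here only for illustration): if a duration-adaptive layer begins display at time t_start and the display object added to it requires display duration d, the updated display end time is t_end' = t_start + d.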
In step S203, the display end time of each duration-adaptive layer displayed on the Nth frame is uniformly delayed to the latest display end time, i.e., the latest among the updated display end times of these layers. In other words, every duration-adaptive layer displayed on the Nth frame is given the same, latest, updated display end time.
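A minimal sketch of steps S201 to S203 in Python, reusing the assumed Layer structure and helper from the sketch above (all names are illustrative assumptions, not taken from this patent):

def delay_adaptive_layers(layers, n_frames):
    # Steps S201-S203: update durations and end times, then unify them.
    shown = layers_shown_on_last_frame(layers, n_frames)
    if not shown:
        return
    # S201 + S202: updated end time = display start time + required duration
    for layer in shown:
        layer.end_frame = layer.start_frame + layer.required_duration
    # S203: uniformly delay each such layer to the latest updated end time
    latest = max(layer.end_frame for layer in shown)
    for layer in shown:
        layer.end_frame = latest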
Fig. 4 illustrates an example of delaying the display end times of layers according to an exemplary embodiment of the present disclosure. As shown in Fig. 4, each horizontal line segment corresponds to one layer: the start point of the segment represents the layer's display start time, the end point represents its display end time, and the reference sign before the segment identifies the layer. Layers (1), (2), (3), and (4) are marked as duration-adaptive layers, and the duration-adaptive layers still displayed on the last frame after corresponding display objects have been added to each layer of the video template are layers (1), (3), and (4). According to an embodiment of the present disclosure, the display durations of layers (1), (3), and (4) are each updated to the display duration required by the display object added to them, and their display end times are updated accordingly. Since layer (3) has the latest updated display end time, it is taken as the reference, and the display end times of layers (1) and (4) are delayed to match it. Layer (2), although marked as duration-adaptive, is not displayed on the last frame, so its display end time is not delayed.
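The scenario of Fig. 4 can be replayed with the sketch above; the frame numbers here are invented for illustration:

layers = [
    Layer("layer_1", start_frame=0,  end_frame=100, duration_adaptive=True, required_duration=150),
    Layer("layer_2", start_frame=0,  end_frame=60,  duration_adaptive=True, required_duration=80),
    Layer("layer_3", start_frame=20, end_frame=100, duration_adaptive=True, required_duration=200),
    Layer("layer_4", start_frame=50, end_frame=100, duration_adaptive=True, required_duration=90),
]
delay_adaptive_layers(layers, n_frames=100)
# layer_3's updated end time (20 + 200 = 220) is the latest, so layers (1),
# (3) and (4) all end at frame 220; layer (2) is untouched because it ends
# at frame 60, before the last frame.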
Fig. 5 illustrates a flowchart of a method of delaying a display end time of a layer according to another exemplary embodiment of the present disclosure.
Referring to Fig. 5, in step S301, the display duration of each duration-adaptive layer displayed on the Nth frame is updated to the display duration required by the display object added to it.
In step S302, for each duration-adaptive layer displayed on the Nth frame, the display end time of the layer is updated based on its updated display duration.
In step S303, the display end time of each parent layer of the duration-adaptive layers is updated based on the updated display end times of the duration-adaptive layers displayed on the Nth frame, where a parent layer is a layer that contains at least one duration-adaptive layer as a child.
As an example, the display end time of each parent layer may be updated to the latest display end time among the updated display end times of the child layers it contains.
In step S304, the display end times of the duration-adaptive layers and the parent layers are uniformly delayed to the latest display end time, i.e., the latest among the updated display end times of the duration-adaptive layers and the updated display end times of the parent layers.
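A minimal sketch of this parent-aware variant (steps S301 to S304) in Python, reusing the earlier helpers and assuming each Layer additionally carries a parent reference and a children list (both assumptions made for illustration):

def delay_with_parents(layers, n_frames):
    shown = layers_shown_on_last_frame(layers, n_frames)
    if not shown:
        return
    # S301 + S302: recompute each adaptive layer's end time on the last frame
    for layer in shown:
        layer.end_frame = layer.start_frame + layer.required_duration
    # S303: a parent's end time becomes the latest end time among its children
    parents = {id(l.parent): l.parent for l in shown if l.parent is not None}
    for parent in parents.values():
        parent.end_frame = max(c.end_frame for c in parent.children)
    # S304: uniformly delay the adaptive layers and their parents to the
    # overall latest updated end time
    affected = shown + list(parents.values())
    latest = max(l.end_frame for l in affected)
    for l in affected:
        l.end_frame = latest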
Returning to Fig. 1, as an example, the configuration information may further include: identification information of the voice-over style the user selects for a text display object to which a voice-over function is to be added.
As an example, the voice-over ("mouth-cast") function may be understood as playing the speech corresponding to a displayed text while the text is displayed.
As an example, in step S102, for each text display object to which a voice-over function is to be added, the text display object may be added to the corresponding layer according to the voice-over style selected for it; the voice data corresponding to the text display object is added to the video, and the playback start time of the voice data in the video is set based on the display start time of the layer to which the text display object is added.
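A minimal sketch of this alignment in Python, assuming a layer exposes its display start time and a video object keeps a list of audio tracks with per-track start offsets (the object model is an assumption made for illustration):

def attach_voice_over(video, text_layer, voice_clip):
    # The playback start time of the voice data is set to the display start
    # time of the layer to which the text display object was added, so the
    # speech begins exactly when its text appears.
    voice_clip.start_time = text_layer.display_start_time
    video.audio_tracks.append(voice_clip)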
As an example, the voice-over style may include, but is not limited to, at least one of: a subtitle voice-over style, a typewriter voice-over style, an underline voice-over style, a rolling-subtitle voice-over style, and a novel-carousel voice-over style.
As an example, the subtitle voice-over style may be: the text appears in the form of subtitles in synchronization with the speech, similar to the subtitles of television series and films. For example, the time interval between sentences may be configurable.
As an example, the typewriter voice-over style may be: the characters appear one by one following the speech playback progress.
As an example, the underline voice-over style may be: the text is underlined following the speech playback progress. For example, displaying a hand or pen icon on the underline may be supported, and the placement, thickness, and color of the underline may be configurable.
As an example, the rolling-subtitle voice-over style may be: the text automatically scrolls upward in subtitle form following the speech playback progress. For example, entering a large block of text may be supported, and the scrolling animation of the subtitles may have a smoothing effect: smooth interpolation may be used for the text scrolling animation to avoid abrupt speed changes when there are few lines of text.
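A minimal sketch of such smoothing in Python: a standard smoothstep ease-in/ease-out applied to the scroll offset, so that the scrolling speed ramps up and down gradually instead of jumping when there are few lines (the function below is generic easing code, not code from this patent):

def smooth_scroll_offset(t, duration, total_scroll_px):
    # Scroll offset at time t, eased so that speed is zero at both ends.
    u = min(max(t / duration, 0.0), 1.0)  # normalized progress in [0, 1]
    eased = u * u * (3.0 - 2.0 * u)       # smoothstep easing curve
    return eased * total_scroll_px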
As an example, the novel-carousel voice-over style may be: the text appears in the form of a novel carousel in synchronization with the speech.
As an example, the animation effects of the novel carousel may include, but are not limited to, at least one of: fade-in/fade-out, zoom, a mask revealed from left to right, a mask revealed from small to large, and scrolling from bottom to top. Accordingly, the configuration information may further include: identification information of the novel-carousel animation effect selected by the user.
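For illustration, the selectable voice-over styles and carousel animation effects could be represented as enumerations; a minimal sketch in Python, with member names that are assumptions mirroring the lists above:

from enum import Enum

class VoiceOverStyle(Enum):
    SUBTITLE = "subtitle"         # text appears as subtitles, in sync with speech
    TYPEWRITER = "typewriter"     # characters appear one by one
    UNDERLINE = "underline"       # text is underlined as speech progresses
    ROLLING_SUBTITLE = "rolling"  # subtitles scroll upward automatically
    NOVEL_CAROUSEL = "carousel"   # text appears as a novel carousel

class CarouselAnimation(Enum):
    FADE_IN_OUT = "fade"
    ZOOM = "zoom"
    MASK_LEFT_TO_RIGHT = "mask_ltr"
    MASK_SMALL_TO_LARGE = "mask_grow"
    SCROLL_BOTTOM_TO_TOP = "scroll_up"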
Further, as an example, the configuration information may further include: identification information of the multi-font mixed typesetting style the user selects for a text display object to which the voice-over function is to be added, and/or the key words to be highlighted.
As an example, for each text display object to which a voice-over function is to be added, the step of adding the text display object to the corresponding layer according to the selected voice-over style may include: adding the text display object to the corresponding layer according to the selected voice-over style, the selected multi-font mixed typesetting style, and/or the key words to be highlighted. Thus, when the synthesized video is played, the text display object is displayed in the voice-over style and multi-font mixed typesetting style the user selected, with the key words in the text highlighted.
Further, as an example, the configuration information may further include: identification information of a video ending template selected by the user, and identification information of the display objects to be added to each layer of the video ending template.
As an example, the video generation method according to an exemplary embodiment of the present disclosure may further include: adding corresponding display objects to each layer of the video ending template according to the configuration information to synthesize an ending video clip; and splicing the video with the ending video clip. Specifically, the ending video clip is spliced after the video.
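A minimal sketch of the splice in Python, treating each synthesized piece as a clip with a duration and shifting the ending clip to begin where the main video stops (the clip model is an assumption made for illustration):

def splice(main_video, ending_clip):
    # The ending clip starts exactly at the main video's end, so the final
    # video's duration is the sum of both pieces.
    ending_clip.start_time = main_video.duration
    main_video.clips.append(ending_clip)
    main_video.duration += ending_clip.duration
    return main_video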
According to the embodiments of the present disclosure, adding an ending clip can relieve the negative impression caused by a video ending abruptly. In addition, marketing information such as a QR code or a logo can be added to the ending clip.
As an example, the video synthesized by the video generation method according to the exemplary embodiments of the present disclosure may be a promotional short video (e.g., a short video advertisement). By expanding the application of video templates in the short video advertising business, the exemplary embodiments of the present disclosure enrich the means for producing short video advertisements, which can help a business party produce better short video advertisements and achieve a better promotion effect.
Fig. 6 shows a block diagram of a video generating apparatus according to an exemplary embodiment of the present disclosure.
As shown in Fig. 6, the video generating apparatus 10 according to an exemplary embodiment of the present disclosure includes: a configuration information acquisition unit 101, a video synthesis unit 102, and an end time delay unit 103.
Specifically, the configuration information acquisition unit 101 is configured to acquire configuration information used by a user to synthesize a video based on a video template, wherein the configuration information includes: the identification information of the video template and the identification information of the display objects the user needs to add to each layer of the video template.
The video synthesis unit 102 is configured to add corresponding display objects to each layer of the video template according to the configuration information, to synthesize a video.
The end time delay unit 103 is configured to, when the display end time of a display object added to a duration-adaptive layer of the video template exceeds the default end time of the video template, delay the display end time of the duration-adaptive layer according to the display duration required by the display object, where a duration-adaptive layer is a layer of the video template whose display duration is preset to be adaptively adjustable.
As an example, the end time delay unit 103 may be configured to determine whether the display end time of a display object added to a duration-adaptive layer of the video template exceeds the default end time of the video template by: determining, after corresponding display objects have been added to each layer of the video template, whether any duration-adaptive layer is displayed on the Nth frame; and, when a duration-adaptive layer displayed on the Nth frame exists, determining that the display end time of the display object added to that layer exceeds the default end time of the video template, where the default total frame count of the video template is N frames.
As an example, the end time delay unit 103 may include: a display duration updating unit (not shown), an end time updating unit (not shown), and a delay unit (not shown).
In one embodiment, the display duration updating unit is configured to update the display duration of each duration-adaptive layer displayed on the Nth frame to the display duration required by the display object added to it; the end time updating unit is configured to update the display end time of each such layer based on its updated display duration; and the delay unit is configured to uniformly delay the display end time of every duration-adaptive layer displayed on the Nth frame to the latest display end time, where the latest display end time is the latest among the updated display end times of these layers.
In another embodiment, the display duration updating unit is configured to update the display duration of each duration-adaptive layer displayed on the Nth frame to the display duration required by the display object added to it; the end time updating unit is configured to update the display end time of each such layer based on its updated display duration, and to update the display end time of each parent layer of the duration-adaptive layers based on the updated display end times of the duration-adaptive layers displayed on the Nth frame, where a parent layer is a layer that contains at least one of the duration-adaptive layers; and the delay unit is configured to uniformly delay the display end times of the duration-adaptive layers and the parent layers to the latest display end time, where the latest display end time is the latest among the updated display end times of the duration-adaptive layers and the updated display end times of the parent layers.
As an example, the end time updating unit may update, for each parent layer, the display end time of that parent layer to the latest display end time among the updated display end times of the child layers it contains.
As an example, the configuration information may further include: identification information of the voice-over style the user selects for each text display object to which a voice-over function is to be added. The video synthesis unit 102 may be configured to, for each such text display object, add the text display object to the corresponding layer according to the voice-over style selected for it; and to add the voice data corresponding to the text display object to the video, setting the playback start time of the voice data in the video based on the display start time of the layer to which the text display object is added.
As an example, the voice-over style may include at least one of: a subtitle voice-over style, a typewriter voice-over style, an underline voice-over style, a rolling-subtitle voice-over style, and a novel-carousel voice-over style. In the subtitle voice-over style, the text appears in the form of subtitles in synchronization with the speech. In the typewriter voice-over style, the characters appear one by one following the speech playback progress. In the underline voice-over style, the text is underlined following the speech playback progress. In the rolling-subtitle voice-over style, the text automatically scrolls upward in subtitle form following the speech playback progress. In the novel-carousel voice-over style, the text appears in the form of a novel carousel in synchronization with the speech.
As an example, the animation effects of the novel carousel may include at least one of: fade-in/fade-out, zoom, a mask revealed from left to right, a mask revealed from small to large, and scrolling from bottom to top.
As an example, the configuration information may further include: identification information of the multi-font mixed typesetting style the user selects for a text display object to which the voice-over function is to be added, and/or the key words to be highlighted; and the video synthesis unit 102 may be configured to, for each text display object to which a voice-over function is to be added, add the text display object to the corresponding layer according to the selected voice-over style, the selected multi-font mixed typesetting style, and/or the key words to be highlighted.
As an example, the video template may be an AE template (a template of the non-linear special-effects software After Effects) or a script template.
As an example, the configuration information may further include: identification information of a video ending template selected by the user, and identification information of the display objects to be added to each layer of the video ending template. The video generating apparatus 10 according to an exemplary embodiment of the present disclosure may further include: an ending synthesis unit (not shown) configured to add corresponding display objects to each layer of the video ending template according to the configuration information to synthesize an ending video clip; and a splicing unit (not shown) configured to splice the video with the ending video clip.
The specific manner in which the respective units perform the operations in the apparatus of the above embodiments has been described in detail in relation to the embodiments of the method, and will not be described in detail here.
Further, it should be understood that the various units in the video generating apparatus 10 according to the exemplary embodiments of the present disclosure may be implemented as hardware components and/or software components. For example, depending on the processing each unit performs, the individual units may be implemented using a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC).
Fig. 7 shows a block diagram of an electronic device according to an exemplary embodiment of the present disclosure. Referring to fig. 7, the electronic device 20 includes: at least one memory 201 and at least one processor 202, said at least one memory 201 having stored therein a set of computer executable instructions that, when executed by the at least one processor 202, perform the video generation method as described in the above exemplary embodiments.
By way of example, the electronic device 20 may be a PC computer, tablet device, personal digital assistant, smart phone, or other device capable of executing the above-described set of instructions. Here, the electronic device 20 is not necessarily a single electronic device, but may be any apparatus or a collection of circuits capable of executing the above-described instructions (or instruction sets) individually or in combination. The electronic device 20 may also be part of an integrated control system or system manager, or may be configured as a portable electronic device that interfaces with either locally or remotely (e.g., via wireless transmission).
In electronic device 20, processor 202 may include a Central Processing Unit (CPU), a Graphics Processor (GPU), a programmable logic device, a special purpose processor system, a microcontroller, or a microprocessor. By way of example, and not limitation, processor 202 may also include an analog processor, a digital processor, a microprocessor, a multi-core processor, a processor array, a network processor, and the like.
The processor 202 may execute instructions or code stored in the memory 201, wherein the memory 201 may also store data. The instructions and data may also be transmitted and received over a network via a network interface device, which may employ any known transmission protocol.
The memory 201 may be integrated with the processor 202, for example, RAM or flash memory disposed within an integrated circuit microprocessor or the like. In addition, the memory 201 may include a stand-alone device, such as an external disk drive, a storage array, or other storage device usable by any database system. The memory 201 and the processor 202 may be operatively coupled or may communicate with each other, such as through an I/O port, network connection, etc., such that the processor 202 is able to read files stored in the memory.
In addition, the electronic device 20 may also include a video display (such as a liquid crystal display) and a user interaction interface (such as a keyboard, mouse, touch input device, etc.). All components of the electronic device 20 may be connected to each other via a bus and/or a network.
According to an exemplary embodiment of the present disclosure, there may also be provided a computer-readable storage medium storing instructions, wherein the instructions, when executed by at least one processor, cause the at least one processor to perform the video generation method described in the above exemplary embodiments. Examples of the computer-readable storage medium here include: read-only memory (ROM), random-access programmable read-only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), dynamic random-access memory (DRAM), static random-access memory (SRAM), flash memory, non-volatile memory, CD-ROM, CD-R, CD+R, CD-RW, CD+RW, DVD-ROM, DVD-R, DVD+R, DVD-RW, DVD+RW, DVD-RAM, BD-ROM, BD-R, BD-R LTH, BD-RE, Blu-ray or optical disc storage, hard disk drives (HDD), solid-state drives (SSD), card memory (such as multimedia cards, Secure Digital (SD) cards, or eXtreme Digital (xD) cards), magnetic tape, floppy disks, magneto-optical data storage devices, optical data storage devices, hard disks, solid-state disks, and any other device configured to store a computer program and any associated data, data files, and data structures in a non-transitory manner and to provide the computer program and any associated data, data files, and data structures to a processor or computer so that the processor or computer can execute the program. The computer program in the computer-readable storage medium described above can run in an environment deployed on computer equipment such as a client, a host, a proxy device, or a server. Further, in one example, the computer program and any associated data, data files, and data structures are distributed across networked computer systems so that the computer program and any associated data, data files, and data structures are stored, accessed, and executed in a distributed fashion by one or more processors or computers.
According to an exemplary embodiment of the present disclosure, a computer program product may also be provided, instructions in the computer program product being executable by at least one processor to perform the video generation method as described in the above exemplary embodiment.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow the general principles of the disclosure and include such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (22)

1. A video generation method, comprising:
acquiring configuration information for synthesizing a video based on a video template, wherein the configuration information comprises: identification information of the video template and identification information of the display objects that a user requires to be added in each layer of the video template;
adding the corresponding display objects in each layer of the video template according to the configuration information to synthesize a video; and
when the display end time of a display object added in a duration-adaptive layer of the video template exceeds the default end time of the video template, delaying the display end time of the duration-adaptive layer according to the display duration required by the display object, so that the entire content of the display object is completely displayed in the synthesized video,
wherein a duration-adaptive layer is a layer of the video template whose display duration is preset to be adaptively adjustable;
wherein the step of delaying the display end time of the duration-adaptive layer according to the display duration required by the display object comprises:
updating the display duration of each duration-adaptive layer displayed on the Nth frame to be: the display duration required by the display object added thereto;
updating the display end time of each duration-adaptive layer displayed on the Nth frame based on its updated display duration; and
uniformly delaying the display end time of each duration-adaptive layer displayed on the Nth frame to be: the latest display end time, wherein the latest display end time is the latest one among the updated display end times of the duration-adaptive layers;
wherein the default total frame number of the video template is N frames.
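By way of illustration only, the delaying step recited in claim 1 above can be sketched in Python as follows. This is a minimal sketch: the Layer data model, the frame-indexed timing, and every field name here are assumptions introduced for the example, not structures defined by the patent.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    # Hypothetical data model: all field names are assumptions.
    start_frame: int            # first frame on which the layer is shown
    duration: int               # display duration, in frames
    adaptive: bool              # True for a duration-adaptive layer
    required_duration: int = 0  # frames the added display object needs

    @property
    def end_frame(self) -> int:
        # A layer covers the half-open frame range [start_frame, end_frame).
        return self.start_frame + self.duration

def shown_on_frame(layer: Layer, f: int) -> bool:
    return layer.start_frame <= f < layer.end_frame

def delay_adaptive_layers(layers: list[Layer], n: int) -> None:
    """Sketch of claim 1's delaying step for a template whose default
    total frame number is n (frames indexed 0 .. n - 1)."""
    last = [l for l in layers if l.adaptive and shown_on_frame(l, n - 1)]
    if not last:
        return  # no display end time exceeds the default end time
    # Step 1: update each layer's duration to what its display object needs.
    for l in last:
        l.duration = l.required_duration
    # Step 2: the updated end frames follow from the updated durations.
    latest_end = max(l.end_frame for l in last)
    # Step 3: uniformly delay every such layer to the latest end time.
    for l in last:
        l.duration = latest_end - l.start_frame
```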
2. The video generation method according to claim 1, wherein whether the display end time of a display object added in a duration-adaptive layer of the video template exceeds the default end time of the video template is determined by:
determining whether any duration-adaptive layer is still displayed on the Nth frame after the corresponding display objects have been added to the layers of the video template; and
when a duration-adaptive layer displayed on the Nth frame exists, determining that the display end time of the display object added in that layer exceeds the default end time of the video template.
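The test in claim 2 reduces to a single predicate. A sketch reusing the hypothetical Layer model from the previous example:

```python
def exceeds_default_end(layers: list[Layer], n: int) -> bool:
    # Claim 2's test: the default end time is exceeded exactly when some
    # duration-adaptive layer is still being shown on the template's
    # default last frame (index n - 1 under the model above).
    return any(l.adaptive and shown_on_frame(l, n - 1) for l in layers)
```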
3. The video generation method according to claim 1, wherein the step of delaying the display end time of the duration-adaptive layer according to the display duration required by the display object comprises:
updating the display duration of each duration-adaptive layer displayed on the Nth frame to be: the display duration required by the display object added thereto;
updating the display end time of each duration-adaptive layer displayed on the Nth frame based on its updated display duration;
updating the display end time of each parent layer of the duration-adaptive layers based on the updated display end times of the duration-adaptive layers displayed on the Nth frame, wherein each parent layer is a layer that contains at least one of the duration-adaptive layers as a sub-layer; and
uniformly delaying the display end times of the duration-adaptive layers and the parent layers to be: the latest display end time, wherein the latest display end time is the latest one among the updated display end times of the duration-adaptive layers and the updated display end times of the parent layers.
4. The video generation method according to claim 3, wherein the step of updating the display end time of each parent layer of the duration-adaptive layers based on the updated display end times of the duration-adaptive layers displayed on the Nth frame comprises:
updating the display end time of each parent layer to be: the latest display end time among the updated display end times of all sub-layers contained in that parent layer.
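Claims 3 and 4 extend the delay to parent layers. The sketch below again uses assumed types: ParentLayer is a hypothetical container that groups at least one duration-adaptive child, and timing remains frame-based.

```python
from dataclasses import dataclass, field

@dataclass
class ParentLayer:
    # Hypothetical container: holds at least one duration-adaptive child.
    start_frame: int
    duration: int
    children: list[Layer] = field(default_factory=list)

    @property
    def end_frame(self) -> int:
        return self.start_frame + self.duration

def delay_with_parents(adaptive: list[Layer], parents: list[ParentLayer]) -> None:
    if not adaptive:
        return
    # Claim 3, step 1: stretch each duration-adaptive layer to the
    # duration its display object requires.
    for l in adaptive:
        l.duration = l.required_duration
    # Claim 4: each parent's end time becomes the latest updated end
    # time among the sub-layers it contains.
    for p in parents:
        p.duration = max(c.end_frame for c in p.children) - p.start_frame
    # Final step: uniformly delay layers and parents to the single
    # latest end time so the composition stays aligned.
    latest = max([l.end_frame for l in adaptive] + [p.end_frame for p in parents])
    for l in adaptive:
        l.duration = latest - l.start_frame
    for p in parents:
        p.duration = latest - p.start_frame
```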
5. The video generation method according to claim 1, wherein the configuration information further comprises: identification information of the mouth-cast style selected by the user for each text display object to which a mouth-cast (voice-over narration) function is to be added,
wherein the step of adding the corresponding display objects in each layer of the video template according to the configuration information comprises:
for each text display object to which the mouth-cast function is to be added, adding the text display object to the corresponding layer according to the mouth-cast style selected for it; and
adding voice data corresponding to the text display object into the video, and setting the play start time of the voice data in the video based on the display start time of the layer to which the text display object is added.
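The synchronization recited in claim 5 amounts to anchoring the speech's play start to the display start of the text's layer. A sketch under assumed APIs: layer.add_text, tts.synthesize, video.add_audio, and video.fps are hypothetical stand-ins introduced for this example, not calls from an actual library.

```python
def add_mouth_cast_object(video, layer, text: str, style: str, tts) -> None:
    # Render the text into its layer in the selected mouth-cast style.
    layer.add_text(text, style=style)      # hypothetical layer API
    # Synthesize speech for the text; tts stands in for any TTS backend.
    speech = tts.synthesize(text)          # hypothetical TTS call
    # Anchor the audio's play start to the layer's display start so the
    # text and the voice stay in sync (the final step of claim 5).
    video.add_audio(speech, start=layer.start_frame / video.fps)
```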
6. The video generation method according to claim 5, wherein the mouth-cast style comprises at least one of: a subtitle mouth-cast style, a typewriter mouth-cast style, an underline mouth-cast style, a rolling-subtitle mouth-cast style, and a novel-carousel mouth-cast style, wherein:
in the subtitle mouth-cast style, the text appears in the form of subtitles in synchronization with the voice;
in the typewriter mouth-cast style, the characters appear one by one following the voice playback progress;
in the underline mouth-cast style, the text is underlined following the voice playback progress;
in the rolling-subtitle mouth-cast style, the text automatically scrolls upward in the form of subtitles following the voice playback progress; and
in the novel-carousel mouth-cast style, the text appears in the form of a novel carousel in synchronization with the voice.
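The five styles enumerated in claim 6 map naturally onto a small enumeration. A sketch only: the identifiers and string values below are illustrative names, not values defined by the patent.

```python
from enum import Enum

class MouthCastStyle(Enum):
    SUBTITLE = "subtitle"          # text shown as subtitles, synced with the voice
    TYPEWRITER = "typewriter"      # characters appear one by one with playback
    UNDERLINE = "underline"        # text is underlined as playback progresses
    ROLLING_SUBTITLE = "rolling"   # subtitles scroll upward with playback
    NOVEL_CAROUSEL = "carousel"    # text presented as a novel carousel
```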
7. The video generation method according to claim 6, wherein the animation effect of the novel carousel comprises at least one of: fade-in/fade-out, zoom, left-to-right mask reveal, small-to-large mask reveal, and bottom-to-top scrolling.
8. The video generation method according to claim 5, wherein the configuration information further comprises: identification information of a multi-font mixed typesetting style selected by the user for a text display object to which the mouth-cast function is to be added, and/or keywords to be highlighted;
wherein the step of adding the text display object to the corresponding layer according to the mouth-cast style selected for it comprises:
adding the text display object to the corresponding layer according to the mouth-cast style selected for it, the selected multi-font mixed typesetting style, and/or the keywords to be highlighted.
9. The video generation method according to claim 1, wherein the video template is a template created with the nonlinear special-effects production software Adobe After Effects (AE), or a script template.
10. The video generation method according to claim 1, wherein the configuration information further comprises: identification information of a video ending template selected by the user and identification information of the display objects to be added in each layer of the video ending template;
wherein the video generation method further comprises:
adding the corresponding display objects in each layer of the video ending template according to the configuration information to synthesize an ending video clip; and
splicing the video with the ending video clip.
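As a sketch of the splicing step in claim 10: one well-known way to concatenate two rendered segments in Python is moviepy's concatenate_videoclips. The import path assumes moviepy's classic (pre-2.0) API, the file paths are placeholders, and both clips are assumed to share the same resolution.

```python
from moviepy.editor import VideoFileClip, concatenate_videoclips

def splice_with_ending(main_path: str, ending_path: str, out_path: str) -> None:
    # Load the synthesized main video and the ending clip rendered from
    # the video ending template, then splice them back to back.
    main_clip = VideoFileClip(main_path)
    ending_clip = VideoFileClip(ending_path)
    final = concatenate_videoclips([main_clip, ending_clip])
    final.write_videofile(out_path)
```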
11. A video generating apparatus, comprising:
a configuration information acquisition unit configured to acquire configuration information for synthesizing a video based on a video template, wherein the configuration information comprises: identification information of the video template and identification information of the display objects that a user requires to be added in each layer of the video template;
a video synthesis unit configured to add the corresponding display objects in each layer of the video template according to the configuration information to synthesize a video; and
an end time delay unit configured to, when the display end time of a display object added in a duration-adaptive layer of the video template exceeds the default end time of the video template, delay the display end time of the duration-adaptive layer according to the display duration required by the display object, so that the entire content of the display object is completely displayed in the synthesized video,
wherein a duration-adaptive layer is a layer of the video template whose display duration is preset to be adaptively adjustable;
wherein the end time delay unit comprises:
a display duration updating unit configured to update the display duration of each duration-adaptive layer displayed on the Nth frame to be: the display duration required by the display object added thereto;
an end time updating unit configured to update the display end time of each duration-adaptive layer displayed on the Nth frame based on its updated display duration; and
a delay unit configured to uniformly delay the display end time of each duration-adaptive layer displayed on the Nth frame to be: the latest display end time, wherein the latest display end time is the latest one among the updated display end times of the duration-adaptive layers;
wherein the default total frame number of the video template is N frames.
12. The video generating apparatus according to claim 11, wherein the end time delay unit is configured to determine whether the display end time of a display object added in a duration-adaptive layer of the video template exceeds the default end time of the video template by:
determining whether any duration-adaptive layer is still displayed on the Nth frame after the corresponding display objects have been added to the layers of the video template; and
when a duration-adaptive layer displayed on the Nth frame exists, determining that the display end time of the display object added in that layer exceeds the default end time of the video template.
13. The video generating apparatus according to claim 11, wherein:
the end time updating unit is configured to: update the display end time of each duration-adaptive layer displayed on the Nth frame based on its updated display duration; and update the display end time of each parent layer of the duration-adaptive layers based on the updated display end times of the duration-adaptive layers displayed on the Nth frame, wherein each parent layer is a layer that contains at least one of the duration-adaptive layers as a sub-layer; and
the delay unit is configured to uniformly delay the display end times of the duration-adaptive layers and the parent layers to be: the latest display end time, wherein the latest display end time is the latest one among the updated display end times of the duration-adaptive layers and the updated display end times of the parent layers.
14. The video generating apparatus according to claim 13, wherein the end time updating unit updates, for each of the parent layers, the display end time of that parent layer to be: the latest display end time among the updated display end times of all sub-layers contained in that parent layer.
15. The video generating apparatus according to claim 11, wherein the configuration information further comprises: identification information of the mouth-cast style selected by the user for each text display object to which the mouth-cast function is to be added,
wherein the video synthesis unit is configured to: for each text display object to which the mouth-cast function is to be added, add the text display object to the corresponding layer according to the mouth-cast style selected for it; add voice data corresponding to the text display object into the video; and set the play start time of the voice data in the video based on the display start time of the layer to which the text display object is added.
16. The video generating apparatus according to claim 15, wherein the mouth-cast style comprises at least one of: a subtitle mouth-cast style, a typewriter mouth-cast style, an underline mouth-cast style, a rolling-subtitle mouth-cast style, and a novel-carousel mouth-cast style, wherein:
in the subtitle mouth-cast style, the text appears in the form of subtitles in synchronization with the voice;
in the typewriter mouth-cast style, the characters appear one by one following the voice playback progress;
in the underline mouth-cast style, the text is underlined following the voice playback progress;
in the rolling-subtitle mouth-cast style, the text automatically scrolls upward in the form of subtitles following the voice playback progress; and
in the novel-carousel mouth-cast style, the text appears in the form of a novel carousel in synchronization with the voice.
17. The video generating apparatus according to claim 16, wherein the animation effect of the novel carousel comprises at least one of: fade-in/fade-out, zoom, left-to-right mask reveal, small-to-large mask reveal, and bottom-to-top scrolling.
18. The video generating apparatus according to claim 15, wherein the configuration information further comprises: identification information of a multi-font mixed typesetting style selected by the user for a text display object to which the mouth-cast function is to be added, and/or keywords to be highlighted;
wherein the video synthesis unit is configured to add each text display object to which the mouth-cast function is to be added to the corresponding layer according to the mouth-cast style selected for it, the selected multi-font mixed typesetting style, and/or the keywords to be highlighted.
19. The video generating apparatus according to claim 11, wherein the video template is a template created with the nonlinear special-effects production software Adobe After Effects (AE), or a script template.
20. The video generating apparatus according to claim 11, wherein the configuration information further comprises: identification information of a video ending template selected by the user and identification information of the display objects to be added in each layer of the video ending template;
wherein the video generating apparatus further comprises:
an ending synthesis unit configured to add the corresponding display objects in each layer of the video ending template according to the configuration information to synthesize an ending video clip; and
a splicing unit configured to splice the video with the ending video clip.
21. An electronic device, comprising:
at least one processor;
at least one memory storing computer-executable instructions,
wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform the video generation method of any one of claims 1 to 10.
22. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by at least one processor, cause the at least one processor to perform the video generation method of any of claims 1 to 10.
CN202110824917.3A 2021-07-21 2021-07-21 Video generation method and device Active CN113556576B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110824917.3A CN113556576B (en) 2021-07-21 2021-07-21 Video generation method and device

Publications (2)

Publication Number Publication Date
CN113556576A (en) 2021-10-26
CN113556576B (en) 2024-03-19

Family

ID=78103841

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110824917.3A Active CN113556576B (en) 2021-07-21 2021-07-21 Video generation method and device

Country Status (1)

Country Link
CN (1) CN113556576B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1862514A (en) * 2005-05-13 2006-11-15 雅马哈株式会社 Content distributing server, content distributing method, and content distributing program
CN110198420A (en) * 2019-04-29 2019-09-03 北京卡路里信息技术有限公司 Video generation method and device based on nonlinear video editor
CN110266971A (en) * 2019-05-31 2019-09-20 上海萌鱼网络科技有限公司 A kind of short video creating method and system
US10515665B1 (en) * 2016-08-31 2019-12-24 Dataclay, LLC System and method for automating the configuration and sequencing of temporal elements within a digital video composition
CN110708596A (en) * 2019-09-29 2020-01-17 北京达佳互联信息技术有限公司 Method and device for generating video, electronic equipment and readable storage medium
CN111669623A (en) * 2020-06-28 2020-09-15 腾讯科技(深圳)有限公司 Video special effect processing method and device and electronic equipment
CN111739128A (en) * 2020-07-29 2020-10-02 广州筷子信息科技有限公司 Target video generation method and system
CN111899155A (en) * 2020-06-29 2020-11-06 腾讯科技(深圳)有限公司 Video processing method, video processing device, computer equipment and storage medium
CN111899322A (en) * 2020-06-29 2020-11-06 腾讯科技(深圳)有限公司 Video processing method, animation rendering SDK, device and computer storage medium
CN112291484A (en) * 2019-07-23 2021-01-29 腾讯科技(深圳)有限公司 Video synthesis method and device, electronic equipment and storage medium
CN112584061A (en) * 2020-12-24 2021-03-30 咪咕文化科技有限公司 Multimedia universal template generation method, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10008238B2 (en) * 2013-05-02 2018-06-26 Waterston Entertainment (Pty) Ltd System and method for incorporating digital footage into a digital cinematographic template

Also Published As

Publication number Publication date
CN113556576A (en) 2021-10-26

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant