
CN109286760B - An entertainment video production method and terminal thereof - Google Patents


Info

Publication number
CN109286760B
Authority
CN
China
Prior art keywords
video
camera
background
person
entertainment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811142023.0A
Other languages
Chinese (zh)
Other versions
CN109286760A (en)
Inventor
刘向前
童小林
康英永
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Lianshang Network Technology Group Co.,Ltd.
Original Assignee
Shanghai Lianshang Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Lianshang Network Technology Co Ltd filed Critical Shanghai Lianshang Network Technology Co Ltd
Priority to CN201811142023.0A priority Critical patent/CN109286760B/en
Publication of CN109286760A publication Critical patent/CN109286760A/en
Application granted granted Critical
Publication of CN109286760B publication Critical patent/CN109286760B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract



The object of the present invention is to provide an entertainment video production method and a terminal thereof. The method includes: capturing a first camera video with a first camera device; extracting a first person video from the first camera video; and compositing the first person video with a background video to form an entertainment video. According to the entertainment video production method and terminal of the present invention, the user takes a selfie with the front-facing first camera device to obtain the first camera video, and the first person video, that is, the user's selfie character image, is extracted from it. This selfie character image is then composited into a background video shot by the rear camera device, so that the user's video character interacts with the other characters in the background video according to preset action effects. Users can thus interact in video with their favorite stars and news figures, meeting the need to shoot creative videos conveniently.


Description

Entertainment video production method and terminal thereof
Technical Field
The invention relates to the field of computers, in particular to an entertainment video production method and a terminal thereof.
Background
A photographing function has become an essential feature of mobile phones. Most phones are equipped with both a front camera and a rear camera; some designs enhance photographing capability with multiple rear cameras, while others offer a high-pixel front camera for selfies. In the prior art, when a user sees a favorite star or news celebrity in a video, the user may want to interact with that person in the video, for example by shaking hands or hugging. Simple video compositing can be performed on a computer with a video editor, but the user cannot actually interact with the video characters, the compositing operation is complex, and the sense of interaction among the composited characters is poor.
Disclosure of Invention
The invention aims to provide an entertainment video production method and a terminal thereof, which solve the problem of producing videos in which the user interacts with video characters.
According to a first aspect of the present invention, there is provided an entertainment video production method comprising:
shooting through a first camera device to obtain a first camera video;
extracting a first person video from the first camera video;
compositing the first person video with a background video to form an entertainment video.
Further, before the first person video is combined with the background video to form the entertainment video, the method of the present invention further includes:
capturing, by a second camera device, a second camera video serving as the background video.
Further, before the first person video is combined with the background video to form the entertainment video, the method of the present invention further includes:
shooting through a second camera device to obtain a second camera video;
sending the second camera video to a server, wherein the server queries and matches, according to the second camera video, a background video corresponding to the second camera video;
and receiving the background video sent by the server.
Further, in the method of the present invention, extracting the first person video from the first camera video includes:
acquiring a person outline corresponding to the background video;
and arranging the extracted first person video in the person outline.
Further, in the method of the present invention, acquiring the person outline corresponding to the background video includes:
determining an interaction type corresponding to the background video through a selection operation;
and acquiring the person outline corresponding to the interaction type.
Further, in the method of the present invention, compositing the first person video with the background video to form the entertainment video further includes:
analyzing and acquiring a second person video in the background video;
adding an effect video or an effect image corresponding to the interaction type to the second person video, and/or adding an effect audio corresponding to the interaction type to the entertainment video.
Further, in the method of the present invention, capturing the first camera video with the first camera device includes:
displaying the first camera video through a display device during shooting;
and displaying the person outline in the first camera video.
Further, in the method of the present invention, compositing the first person video with the background video to form the entertainment video includes:
matching the time axis of the first camera video with the time axis of the background video according to the time axis of the background video;
arranging the first person video at a preset position of the background video;
and adjusting the size of the first person video according to a preset size.
Further, the method of the present invention further comprises:
analyzing and acquiring a second person video in the background video;
analyzing the action posture of the second person video;
setting an interaction type according to the action posture;
setting an effect video, an effect image and/or an effect audio corresponding to the interaction type according to the action posture;
setting a person outline corresponding to the interaction type according to the action posture;
and capturing the first camera video according to the person outline.
According to a second aspect of the present invention, there is provided a terminal comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the operations of the method of the invention.
According to a third aspect of the present invention, there is provided a computer-readable storage medium, wherein the computer-readable storage medium stores instructions for performing the method according to the present invention.
According to the entertainment video production method and the terminal thereof, the user obtains the first camera video through the self-shooting of the front first camera device, then the first person video is extracted, namely the self-shooting video character image of the user is extracted, and finally the self-shooting video character image of the user is synthesized to the background video shot by the rear camera device, so that the user character video image and other character video images in the background video are interacted according to the preset action effect, the user can interact with the favorite star and the favorite news character in the video, and the requirement of the user for conveniently shooting the creative video is met.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments made with reference to the following drawings:
FIG. 1 is a schematic flow chart of a method for producing an entertainment video according to a first embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method for producing an entertainment video according to a second embodiment of the present invention;
FIG. 3 is a flowchart illustrating a method for producing an entertainment video according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an entertainment video production apparatus according to a fourth embodiment of the present invention.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present invention is described in further detail below with reference to the attached drawing figures.
In a typical configuration of the invention, the terminal, the device serving the network and the trusted party each comprise one or more processors (CPU), input/output interfaces, network interfaces and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory media, such as modulated data signals and carrier waves.
Fig. 1 is a schematic flow chart of an entertainment video production method according to a first embodiment of the present invention, and as shown in fig. 1, the entertainment video production method according to the first embodiment of the present invention includes:
and step S101, shooting and acquiring a first camera shooting video through a first camera shooting device.
The first camera device may be a front-facing camera of the terminal device; a selfie video of the user, shot with this front first camera device, serves as the first camera video.
In step S102, a first person video is extracted from the first camera video.
In this step, the selfie character image of the user is extracted from the first camera video shot by the front-facing first camera device; that is, a user selfie character video without the background image is obtained and serves as the first person video.
In step S103, the first person video is composited with the background video to form an entertainment video.
The background video is a video containing other characters, such as a movie, a television series, or a news program. The extracted first person video shot by the user is displayed in front of the background video, and its size and position are adjusted to be consistent with the other character images in the background video, so that the character in the first person video forms an interactive relationship with them; in this way the user composites an entertainment video. In the entertainment video, the selfie character image shot with the front first camera device forms an interactive relationship with the other characters in the background video, achieving the effect of shooting a creative entertainment video.
For example, suppose the background video is a news broadcast in which a host delivers commentary, and the user wants to shoot a video in which the user and the host comment together. The user shoots a first camera video with the front-facing first camera device of the terminal, imitating the posture of a broadcaster while shooting. The character image of the user delivering commentary is then extracted from this selfie video: each frame is analyzed with person recognition, the selfie character image in each frame is extracted, and the per-frame images are recombined along the time axis of the first camera video into the first person video. The background video is then displayed on the terminal with the first person video floating in front of it; the user adjusts the position and size of the first person video so that it appears side by side with the host in the background video. Finally, the first person video is composited with the background video at the adjusted position and size, forming an entertainment video in which the user's selfie image and the background host appear on a common background picture.
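The per-frame compositing described in this embodiment can be sketched in a few lines of array code. The following Python sketch is only an illustration of the idea (the function name, the hard binary mask, and the fixed placement are assumptions; the patent does not specify an implementation):

```python
import numpy as np

def composite_frame(background, person, mask, top_left):
    """Paste the person's pixels (where mask is True) onto the background.

    background: HxWx3 uint8 frame of the background video.
    person:     hxwx3 uint8 frame of the extracted first person video.
    mask:       hxw boolean array marking which pixels belong to the person.
    top_left:   (row, col) placement of the person frame in the background,
                i.e. the user-adjusted position described above.
    """
    out = background.copy()
    r, c = top_left
    h, w = mask.shape
    region = out[r:r + h, c:c + w]
    region[mask] = person[mask]  # person pixels replace the background pixels
    return out
```

Repeating this per frame along the matched time axes yields the composited entertainment video.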
Fig. 2 is a schematic flow chart of an entertainment video production method according to a second embodiment of the present invention, and as shown in fig. 2, the entertainment video production method according to the second embodiment of the present invention includes:
In step S201, a second camera video is captured by a second camera device to serve as the background video.
The user may shoot the background video with a second camera device disposed at the rear of the terminal. For example, while watching a television series, a user who wants to composite an entertainment video with a character in the series captures a clip of the series with the rear second camera device as the background video. In addition, the captured clip can be sent to a video server, which queries and matches the same clip in higher quality and sends it back to the terminal as the background video, so that a clearer background video is obtained.
In step S202, a person outline corresponding to the background video is acquired.
To composite the first person video with the background video convincingly, a person outline is acquired; the captured first person video is confined within this outline, and the confined first person video is then composited with the background video. For example, when compositing with a news broadcast video, the broadcaster in the background video is a seated, upper-body image, while the user's selfie is a full-body standing video. In this case the person outline for the background video is set to an upper-body outline. When the first person video is displayed, only the upper-body portion of the user's full-body selfie image appears inside the outline; the lower body falls outside the outline and is not displayed. The result is an upper-body selfie of the user, which conveniently forms an interactive image with the broadcaster, who remains in a sitting posture.
Specifically, acquiring the person outline corresponding to the background video in step S202 includes:
step S2021, determining an interaction type corresponding to the background video through a selection operation.
Step S2022, acquiring a person outline corresponding to the interaction type.
Several types of interaction with the characters of the background video are preset, such as an intimate type, an ordinary type, and a hostile type. In the intimate interaction type, the user can shoot an interactive video of hugging a background character; in the ordinary type, an interactive video that keeps a distance from the background character; and in the hostile type, an interactive video of striking the background character. Each interaction type has a corresponding person outline, which may be dynamic, that is, an outline that performs a simple action. After acquiring the background video, the user freely selects an interaction type from the options in a pop-up interaction menu to determine the interactive relationship with the background character, and the terminal device acquires the preset person outline corresponding to the selected interaction type.
In step S203, a first camera video is captured by a first camera device. The first camera video is displayed on a display device during shooting, and the person outline is displayed in the first camera video.
After acquiring the person outline corresponding to the interaction type, the user shoots a selfie video according to the outline. For example, if the user selects the intimate interaction type, that is, hugging interaction with a background character, the terminal device acquires the corresponding person outline showing a hugging action. While the user shoots the selfie with the front first camera device, the first camera video collected by the device is displayed in real time on the display of the terminal, with the hugging outline shown in front of it. The user imitates the hugging action, placing his or her character image inside the outline and aligning its edges with the outline, and thereby completes the shooting of the first camera video.
In step S204, a first person video is extracted from the first camera video.
Each frame of the first camera video can be analyzed with portrait recognition to extract the user's selfie character image, and the per-frame selfie images are combined along the time axis of the first camera video into the first person video.
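One simple way to realize the per-frame extraction described above is background differencing against a person-free reference frame. A production system would more likely use a trained portrait-segmentation model, so the thresholded difference below is only an illustrative stand-in:

```python
import numpy as np

def person_mask(frame, reference, threshold=30):
    """Mark pixels that differ noticeably from a person-free reference frame.

    frame, reference: HxWx3 uint8 arrays. Returns an HxW boolean mask that
    approximates the selfie character image in this frame. Stacking the
    masked frames along the video's time axis yields the first person video.
    """
    diff = np.abs(frame.astype(np.int32) - reference.astype(np.int32))
    return diff.sum(axis=2) > threshold
```

The threshold value and the availability of a reference frame are assumptions for the sketch; in practice the front camera moves, so a learned segmenter is the more robust choice.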
Step S205, the extracted first person video is arranged in the person outline.
The extracted first person video is displayed inside the previously acquired person outline, so that the outline confines what is displayed and increases the realism of the interactive entertainment video composited later.
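Confining the extracted person video to the person outline amounts to intersecting two masks. A minimal sketch, assuming both the person and the outline are represented as boolean pixel masks (a representation the patent does not prescribe):

```python
import numpy as np

def confine_to_outline(person_mask, outline_mask):
    """Keep only the person pixels that fall inside the person outline.

    Both arguments are HxW boolean arrays: person_mask marks the pixels of
    the user's extracted selfie image, outline_mask marks the interior of
    the outline (e.g. an upper-body outline for a seated broadcaster).
    Pixels outside the outline, such as the lower body of a full-body
    selfie, are simply not displayed.
    """
    return person_mask & outline_mask
```

Applying this per frame before compositing produces, for instance, the upper-body selfie described in the news-broadcast example.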
In step S206, a second person video in the background video is analyzed and acquired.
Similarly, each frame of the background video is analyzed with portrait recognition to extract the background character images, which are combined along the time axis of the background video into the second person video.
In step S207, an effect video or an effect image corresponding to the interaction type is added to the second person video, and/or an effect audio corresponding to the interaction type is added to the entertainment video.
After the user selects the interaction type, an animation effect and an audio effect corresponding to that type can be acquired and added to the extracted second person video of the background character. For example, if the user selects the hostile interaction type, the corresponding person outline depicts an action of striking the background character, and a striking animation and audio corresponding to this type are added to the second person video. The audio may be the sound of a punch landing, and the animation may add a black-eye effect image to the eye region of the second person video after the strike, forming an interactive entertainment video with a stronger sense of realism.
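Adding an effect image such as the black-eye overlay amounts to alpha-blending a small sprite onto the frame at a detected location. In the sketch below the sprite, its alpha channel, and the anchor coordinates are all illustrative assumptions:

```python
import numpy as np

def overlay_effect(frame, sprite, alpha, top_left):
    """Alpha-blend an effect sprite (e.g. a black-eye image) onto a frame.

    frame:    HxWx3 uint8 video frame containing the second person video.
    sprite:   hxwx3 uint8 effect image.
    alpha:    hxw float array in [0, 1]; 0 keeps the frame, 1 shows the sprite.
    top_left: (row, col) anchor, e.g. the detected eye region.
    """
    out = frame.astype(np.float32)
    r, c = top_left
    h, w = alpha.shape
    a = alpha[..., None]  # broadcast the alpha weight over the RGB channels
    out[r:r + h, c:c + w] = (1 - a) * out[r:r + h, c:c + w] + a * sprite
    return out.astype(np.uint8)
```

Soft alpha edges avoid the hard cut-out look when the effect image is pasted over the background character.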
In step S208, the first person video is composited with the background video to form an entertainment video.
Specifically, compositing the first person video with the background video in step S208 includes:
step S2081, matching the time axis of the first camera video with the time axis of the background video according to the time axis of the background video;
step S2082, arranging the first human video at a preset position of the background video;
step S2083, adjusting the size of the first human video according to a preset size.
When the first person video is composited into the background video, the background video can be displayed on the display of the terminal, the animation effect corresponding to the interaction type is added to the second person video extracted from the background video, and the first person video is displayed in front of the background video, its edges confined by the person outline corresponding to the interaction type. The user can then adjust the size and position of the first person video by sliding, dragging, and similar operations on the display, increasing the realism of the interactive entertainment video. Because the user's adjustment is not professional, the first person video may instead be placed at a fixed preset position in the background video, avoiding the loss of realism that results when the user cannot adjust it well. When compositing, to keep the characters' actions consistent and coordinated, the time axis of the first person video is adjusted according to the time axis of the background video so that the two axes stay aligned, preventing mismatches between the first person video and the background video in the composited interactive video.
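The time-axis matching and size adjustment of steps S2081 to S2083 can be sketched as follows. Frame-rate-based nearest-frame mapping and nearest-neighbour scaling are illustrative choices; the patent only requires that the two time axes be kept consistent and that the first person video be scaled to a preset size:

```python
import numpy as np

def matching_person_frame(bg_index, bg_fps, person_fps, person_frame_count):
    """Map a background frame index to the person-video frame shown with it,
    so the two time axes stay aligned during compositing."""
    t = bg_index / bg_fps                    # timestamp on the background axis
    idx = round(t * person_fps)              # nearest person-video frame
    return min(idx, person_frame_count - 1)  # clamp at the last person frame

def resize_nearest(frame, out_h, out_w):
    """Scale a frame to the preset size with nearest-neighbour sampling."""
    h, w = frame.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return frame[rows][:, cols]
```

For each background frame, the matched person frame is resized and then pasted at the preset position.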
Fig. 3 is a schematic flow chart of an entertainment video production method according to a third embodiment of the present invention, and as shown in fig. 3, the entertainment video production method according to the third embodiment of the present invention includes:
Step S301: a second camera video is captured by a second camera device; the second camera video is sent to a server; the server queries and matches a background video corresponding to the second camera video; and the background video sent by the server is received.
To obtain a clearer background video, after a clip of, for example, a television series is captured with the rear second camera device, the terminal can send the clip to a video server. The server queries its video database according to the clip, matches the same clip in high quality, and sends that clip to the terminal as the background video.
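The server-side query-and-match step could be realized with a perceptual hash: each clip is reduced to a short fingerprint, and the database entry at the smallest Hamming distance is returned. The difference-hash sketch below is an assumption for illustration; the patent does not specify a matching algorithm:

```python
def dhash(gray, hash_w=8):
    """Difference hash of one grayscale frame given as a list of pixel rows:
    each bit records whether a pixel is brighter than its right neighbour."""
    bits = []
    for row in gray:
        step = max(1, len(row) // (hash_w + 1))
        sample = row[::step][:hash_w + 1]
        bits += [int(a > b) for a, b in zip(sample, sample[1:])]
    return bits

def hamming(a, b):
    """Number of positions at which two bit lists differ."""
    return sum(x != y for x, y in zip(a, b))

def best_match(query_hash, database):
    """Return the key of the database clip whose hash is closest to the query."""
    return min(database, key=lambda k: hamming(query_hash, database[k]))
```

A real service would hash several frames per clip and index the fingerprints, but the matching principle is the same.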
Step S302, analyzing and acquiring a second person video in the background video.
Step S303, analyzing the action posture of the second person video.
And step S304, setting an interaction type according to the action posture.
And S305, setting an effect video, an effect image and/or an effect audio corresponding to the interaction type according to the action gesture.
And S306, setting a character outline corresponding to the interaction type according to the action posture.
After the clear background video sent by the server is obtained, the interaction type can be set according to the background video, and an effect video, an effect image, an effect audio, and the like corresponding to that type are added to it. For example, if the clip acquired from the video server is a martial-arts action scene, the background characters in the scene are analyzed and extracted with portrait recognition, and the interaction type is set to an attack type according to the analyzed martial-arts action postures. In the background video, animation effects such as video, audio, and images are added to the second person video of each background character, and a person outline with a striking action is set according to the attack interaction type.
In step S307, the first camera video is captured by a first camera device according to the person outline. The first camera video is displayed on a display device during shooting, and the person outline is displayed in the first camera video.
While the user shoots the first camera video with the front first camera device, the first camera video is displayed in real time on the terminal together with the person outline obtained according to the interaction type, and the user imitates the action of the outline to complete the selfie.
Step S308, extracting a first person video from the first camera video;
step S309, the extracted first person video is arranged in the person outline.
After the selfie is completed, the captured selfie character image is further confined by the person outline, so that a first person video containing only the user's selfie character image and conforming to the interaction type of the background video is extracted.
Step S310, the first person video is composited with the background video to form an entertainment video. An effect video or an effect image corresponding to the interaction type is added to the second person video, and/or an effect audio corresponding to the interaction type is added to the entertainment video. The time axis of the first camera video is matched with the time axis of the background video according to the time axis of the background video; the first person video is arranged at a preset position of the background video; and the size of the first person video is adjusted according to a preset size.
When the first person video is composited into the background video, the background video can be displayed on the display of the terminal, the animation effect corresponding to the interaction type is added to the second person video extracted from the background video, and the first person video is displayed in front of the background video with its edges confined by the person outline corresponding to the interaction type. The user can adjust the size and position of the first person video by sliding, dragging, and similar operations on the display, increasing the realism of the interactive entertainment video. When compositing, to keep the actions of the first person video and the background video consistent and coordinated, the time axis of the first person video is adjusted according to the time axis of the background video so that the two axes stay aligned, preventing mismatches in the composited interactive video.
Fig. 4 is a schematic structural diagram of an entertainment video production apparatus according to a fourth embodiment of the present invention, and as shown in fig. 4, an entertainment video production apparatus according to the fourth embodiment of the present invention includes: a first camera 41, a first extraction module 42, a second camera 43, a second extraction module 44, an interaction type module 45 and a composition module 46.
A first camera 41, configured to capture the first camera video, wherein a person outline corresponding to an interaction type is displayed within the first camera video.
A first extraction module 42, configured to extract a first person video from the first camera video.
A second camera 43, configured to capture a second camera video as the background video.
A second extraction module 44, configured to analyze and acquire a second person video in the background video.
An interaction type module 45, configured to: analyze the action posture of the second person video; set an interaction type according to the action posture; set an effect video, an effect image and/or an effect audio corresponding to the interaction type according to the action posture; set a person outline corresponding to the interaction type according to the action posture; and add an effect video or an effect image corresponding to the interaction type to the second person video, and/or add an effect audio corresponding to the interaction type to the entertainment video.
A composition module 46 for compositing the first person video with a background video to form an entertainment video.
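As a rough illustration of what the second extraction module 44 might do, person pixels can be separated from a known clean background plate by simple frame differencing. A production system would use a proper segmentation model; the differencing approach and the threshold value below are assumptions for illustration, not the patent's prescribed method.

```python
import numpy as np

def extract_person_mask(frame, background_plate, threshold=30):
    # Pixels that differ strongly from the clean background plate are
    # treated as foreground (the person); everything else as background.
    diff = np.abs(frame.astype(np.int16) - background_plate.astype(np.int16))
    return diff.sum(axis=-1) > threshold
```

The resulting boolean mask selects the second person video's pixels in each background frame, to which the interaction type module can then attach its effect video or effect image.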
According to an embodiment of the present invention, there is also provided a terminal including:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the operations of the method as described in embodiments one to three of the present invention.
According to an embodiment of the present invention, there is also provided a computer-readable storage medium, which stores instructions for performing the method according to the first to third embodiments of the present invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
It should be noted that the present invention may be implemented in software and/or in a combination of software and hardware, for example, as an Application Specific Integrated Circuit (ASIC), a general purpose computer or any other similar hardware device. In one embodiment, the software program of the present invention may be executed by a processor to implement the steps or functions described above. Also, the software programs (including associated data structures) of the present invention can be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Further, some of the steps or functions of the present invention may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, part of the present invention may be embodied as a computer program product, such as computer program instructions which, when executed by a computer, can invoke or provide the method and/or technical solution according to the present invention through the operation of the computer. Program instructions which invoke the methods of the present invention may be stored on a fixed or removable recording medium, and/or transmitted via a data stream on a broadcast or other signal-bearing medium, and/or stored within a working memory of a computer device operating in accordance with the program instructions. An embodiment according to the present invention comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform the methods and/or technical solutions according to the embodiments of the invention described above.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (10)

1. An entertainment video production method, comprising:
analyzing a background video to obtain a second person video;
analyzing an action pose of the second person video;
setting an interaction type according to the action pose;
setting an effect video, an effect image and/or an effect audio corresponding to the interaction type according to the action pose;
setting a person outline corresponding to the interaction type according to the action pose;
capturing, according to the person outline, a first camera video through a first camera device;
extracting a first person video from the first camera video; and
compositing the first person video with the background video to form an entertainment video.
2. The method of claim 1, further comprising, prior to compositing the first person video with a background video to form an entertainment video:
and shooting and acquiring a second camera video serving as the background video through a second camera device.
3. The method of claim 1, further comprising, prior to compositing the first person video with a background video to form an entertainment video:
capturing a second camera video through a second camera device;
sending the second camera video to a server, wherein the server queries and matches a background video corresponding to the second camera video; and
receiving the background video sent by the server.
4. The method of claim 1, wherein extracting the first person video from the first camera video comprises:
acquiring a person outline corresponding to the background video; and
arranging the extracted first person video within the person outline.
5. The method of claim 4, wherein acquiring the person outline corresponding to the background video comprises:
determining, through a selection operation, an interaction type corresponding to the background video; and
acquiring the person outline corresponding to the interaction type.
6. The method of claim 5, wherein compositing the first person video with the background video to form the entertainment video further comprises:
analyzing the background video to obtain the second person video; and
adding an effect video or an effect image corresponding to the interaction type to the second person video, and/or adding an effect audio corresponding to the interaction type to the entertainment video.
7. The method of claim 4, wherein capturing the first camera video through the first camera device comprises:
displaying the first camera video through a display device during shooting; and
displaying the person outline in the first camera video.
8. The method of any of claims 1 to 7, wherein compositing the first person video with the background video to form the entertainment video comprises:
matching a time axis of the first person video with a time axis of the background video according to the time axis of the background video;
arranging the first person video at a preset position of the background video; and
adjusting the size of the first person video according to a preset size.
9. A terminal, comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the operations of the method of any of claims 1 to 8.
10. A computer-readable storage medium storing instructions for performing the method of any one of claims 1 to 8.
CN201811142023.0A 2018-09-28 2018-09-28 An entertainment video production method and terminal thereof Active CN109286760B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811142023.0A CN109286760B (en) 2018-09-28 2018-09-28 An entertainment video production method and terminal thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811142023.0A CN109286760B (en) 2018-09-28 2018-09-28 An entertainment video production method and terminal thereof

Publications (2)

Publication Number Publication Date
CN109286760A CN109286760A (en) 2019-01-29
CN109286760B true CN109286760B (en) 2021-07-16

Family

ID=65182015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811142023.0A Active CN109286760B (en) 2018-09-28 2018-09-28 An entertainment video production method and terminal thereof

Country Status (1)

Country Link
CN (1) CN109286760B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111695376A (en) * 2019-03-13 2020-09-22 阿里巴巴集团控股有限公司 Video processing method, video processing device and electronic equipment
CN112037227B (en) * 2020-09-09 2024-02-20 脸萌有限公司 Video shooting method, device, equipment and storage medium
CN113596574A (en) * 2021-07-30 2021-11-02 维沃移动通信有限公司 Video processing method, video processing apparatus, electronic device, and readable storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
CN105847676A (en) * 2016-03-28 2016-08-10 乐视控股(北京)有限公司 Image processing method and apparatus
CN106730815A (en) * 2016-12-09 2017-05-31 福建星网视易信息系统有限公司 The body-sensing interactive approach and system of a kind of easy realization
CN107920213A (en) * 2017-11-20 2018-04-17 深圳市堇茹互动娱乐有限公司 Image synthesizing method, terminal and computer-readable recording medium

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN102055834B (en) * 2009-10-30 2013-12-11 Tcl集团股份有限公司 Double-camera photographing method of mobile terminal
CN103248830A (en) * 2013-04-10 2013-08-14 东南大学 Real-time video combination method for augmented reality scene of mobile intelligent terminal
CN104424624B (en) * 2013-08-28 2018-04-10 中兴通讯股份有限公司 A kind of optimization method and device of image synthesis
CN106204426A (en) * 2016-06-30 2016-12-07 广州华多网络科技有限公司 A kind of method of video image processing and device
CN108566521A (en) * 2018-06-26 2018-09-21 蒋大武 A kind of image synthesizing system for scratching picture based on natural image


Also Published As

Publication number Publication date
CN109286760A (en) 2019-01-29

Similar Documents

Publication Publication Date Title
CN109729420B (en) Picture processing method and device, mobile terminal and computer readable storage medium
CN106375674B (en) Method and apparatus for finding and using video portions associated with adjacent still images
CN105872810B (en) A kind of media content sharing method and device
CN112637670B (en) Video generation method and device
CN108989830A (en) A kind of live broadcasting method, device, electronic equipment and storage medium
CN107770626A (en) Processing method, image synthesizing method, device and the storage medium of video material
WO2023104102A1 (en) Live broadcasting comment presentation method and apparatus, and device, program product and medium
CN109286760B (en) An entertainment video production method and terminal thereof
CN111683266A (en) Method and terminal for configuring subtitles through simultaneous translation of videos
CN110677734B (en) Video synthesis method and device, electronic equipment and storage medium
WO2019114330A1 (en) Video playback method and apparatus, and terminal device
EP4543013A1 (en) Video data processing method and device, equipment, system, and storage medium
CN111757137A (en) Multi-channel close-up playing method and device based on single-shot live video
KR20140089829A (en) Method and apparatus for controlling animated image in an electronic device
WO2022214101A1 (en) Video generation method and apparatus, electronic device, and storage medium
CN111327823A (en) Video generation method and device and corresponding storage medium
CN107977184A (en) A kind of method for playing music and device based on virtual reality technology
US20230316529A1 (en) Image processing method and apparatus, device and storage medium
CN112287771A (en) Method, apparatus, server and medium for detecting video events
CN114363688A (en) Video processing method and device and non-volatile computer readable storage medium
CN113709545A (en) Video processing method and device, computer equipment and storage medium
JP2025067931A (en) Program and imaging device
CN110433491A (en) Movement sync response method, system, device and the storage medium of virtual spectators
CN114554232A (en) Mixed reality live broadcast method and system based on naked eye 3D
CN112887796A (en) Video generation method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 201306 2, building 979, Yun Han Road, mud town, Pudong New Area, Shanghai

Patentee after: Shanghai Lianshang Network Technology Group Co.,Ltd.

Country or region after: China

Address before: 201306 2, building 979, Yun Han Road, mud town, Pudong New Area, Shanghai

Patentee before: SHANGHAI LIANSHANG NETWORK TECHNOLOGY Co.,Ltd.

Country or region before: China